Dolphin Inference Network has opened its node operation for beta testing, inviting participants to process synthetic data generation requests with the Qwen 3.5 35B MoE model. Testers earn $POD rewards while putting idle GPUs to work; full-context processing requires at least 60 GB of VRAM. Qwen 3.5's Mixture-of-Experts design speeds up inference by activating only specialized sub-networks, in line with Dolphin AI Lab's focus on efficient, uncensored large language models. A dataset of 7 million prompts from CodeX-7M has been seeded for this beta phase, and data generation statistics can be monitored on the platform.
Targon: Targon operates a lightning-fast cloud for scalable GPU and CPU rentals tailored for AI training and deployment. Its inventory includes high-end GPUs recommended for running Dolphin Inference Network nodes during beta testing.
Qwen 3.5: Qwen 3.5 is a large language model series from Alibaba Cloud’s Qwen team, incorporating Mixture-of-Experts architecture for efficient multimodal and agentic capabilities. The MoE variant is deployed in Dolphin Inference Network’s beta to process synthetic data generation requests from seeded prompts.
TargonCompute: TargonCompute is a decentralized platform providing secure GPU rentals with confidential computing features like Intel TDX to protect AI workloads. It supplied hardware such as RTX 6000 PRO for Dolphin’s model training and node testing.
Dolphin Inference Network: Dolphin Inference Network is a distributed, verified AI inference platform developed by Dolphin AI Lab that lets contributors run models collaboratively on idle GPUs. It focuses on repurposing hardware for tasks like synthetic data generation during its current beta phase. Testers earn $POD rewards while contributing to a dataset derived from coding prompts.
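The efficiency claim above rests on how Mixture-of-Experts routing works: a gating network scores all experts per input, but only the top-k expert sub-networks actually execute. The sketch below is a minimal, illustrative top-k MoE forward pass in NumPy; the layer sizes, expert count, and function names are assumptions for demonstration, not Qwen 3.5's actual architecture.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route one token through the top-k experts of a toy MoE layer.

    Only k expert sub-networks run per token, which is why an MoE model
    can infer faster than a dense model with the same total parameters.
    (Hypothetical sketch; not Qwen 3.5's real routing code.)
    """
    logits = x @ gate_w                    # router score for each expert
    top = np.argsort(logits)[-k:]          # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()               # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs; the others stay idle.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy demo: 4 linear experts on an 8-dimensional input; only 2 run per token.
rng = np.random.default_rng(0)
d = 8
experts = [(lambda W: (lambda x: x @ W))(rng.normal(size=(d, d)))
           for _ in range(4)]
gate_w = rng.normal(size=(d, 4))
x = rng.normal(size=d)
y = moe_forward(x, gate_w, experts, k=2)
print(y.shape)  # (8,)
```

The key design point is sparsity at inference time: compute scales with k, not with the total number of experts, so capacity can grow without a proportional latency cost.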
```json
{
  "MoE Efficiency": "Qwen 3.5 leverages Mixture-of-Experts design to activate specialized sub-networks for improved inference speed."
}
```
