Venice Uncensored 1.2 is a significant upgrade, adding vision support, a fourfold larger context window, and improved tool-use abilities. Developed by dphnAI in collaboration with AskVenice, it is the most uncensored version of Mistral 3.2 24B, achieving full compliance through supervised fine-tuning and KTO reinforcement learning rather than direct weight editing. The model was trained on Bittensor Subnet 4 (Targon Compute) using confidential decentralized compute.
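The SFT-plus-KTO recipe is worth unpacking: KTO (Kahneman-Tversky Optimization) learns from unpaired binary feedback, where each completion is simply marked desirable or undesirable, so compliance can be trained in without touching weights directly. The sketch below shows what the KTO stage could look like using Hugging Face TRL's KTOTrainer; the base-model id, dataset name, and hyperparameters are illustrative assumptions, not dphnAI's actual configuration.

    # Illustrative sketch only: model id, dataset, and hyperparameters are
    # assumptions, not the actual Venice Uncensored 1.2 training setup.
    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from trl import KTOConfig, KTOTrainer

    base = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"  # assumed base checkpoint
    model = AutoModelForCausalLM.from_pretrained(base)
    tokenizer = AutoTokenizer.from_pretrained(base)

    # KTO consumes unpaired feedback: each row holds a prompt, a completion,
    # and a boolean "label" marking the completion as desirable or not.
    dataset = load_dataset("my-org/compliance-feedback", split="train")  # hypothetical dataset

    args = KTOConfig(
        output_dir="uncensored-kto",
        beta=0.1,               # strength of the KL penalty against the reference model
        desirable_weight=1.0,   # relative weight of desirable examples
        undesirable_weight=1.0, # relative weight of undesirable examples
        per_device_train_batch_size=4,
        learning_rate=5e-7,
    )

    trainer = KTOTrainer(
        model=model,            # the SFT-ed model; a frozen copy serves as the reference
        args=args,
        processing_class=tokenizer,
        train_dataset=dataset,
    )
    trainer.train()

Because KTO needs only thumbs-up/thumbs-down labels rather than preference pairs, feedback data of this kind is cheaper to collect than for DPO-style methods.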

Venice: Venice is a private AI platform dedicated to uncensored AI interactions and model deployment. It collaborated with dphnAI to release Venice Uncensored 1.2, featuring upgrades like vision support.
dphnAI: dphnAI is an AI lab focused on developing uncensored models and distributed inference solutions. It led the development of Venice Uncensored 1.2 in partnership with AskVenice.
AskVenice: AskVenice runs Venice.ai, a platform for private and uncensored AI conversations. It announced the live release of Venice Uncensored 1.2, trained on Targon Compute.
Targon Compute: Targon Compute is a decentralized secure compute platform operating as Bittensor Subnet 4, providing confidential computing with high-performance GPUs and CPUs for AI inference and training. It powered the training of Venice Uncensored 1.2 using its Subnet 4 infrastructure.
Mistral 3.2 24B: Mistral 3.2 24B is a multimodal instruction-following model from Mistral AI supporting vision, extended context, and tool use. It serves as the foundational base model for the uncensored Venice Uncensored 1.2 variant.
Venice Uncensored 1.2: Venice Uncensored 1.2 is a fine-tuned, highly compliant uncensored model derived from Mistral 3.2 24B. Developed jointly by dphnAI and AskVenice, it introduces vision support, a larger context window, and stronger tool-use capabilities; a usage sketch follows below.
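To make the new vision and tool-use capabilities concrete, here is a minimal sketch of calling the model with an image and a declared tool, assuming an OpenAI-compatible chat endpoint; the base URL, model id, and get_weather tool are illustrative assumptions, not documented parameters.

    # Hedged sketch: endpoint URL, model id, and tool schema are assumptions.
    import base64
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.venice.ai/api/v1",  # assumed OpenAI-compatible endpoint
        api_key="YOUR_API_KEY",
    )

    # Vision: attach an image to the user message as a base64 data URL.
    with open("chart.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    # Tool use: declare a function the model is allowed to call.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="venice-uncensored",  # assumed model id
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this chart, then check the weather in Lisbon."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
        tools=tools,
    )
    print(response.choices[0].message)

If the model decides to call the tool, the reply carries a tool_calls entry instead of plain text, and the caller executes the function and sends the result back in a follow-up message.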

Model Capabilities: Upgraded with vision processing, expanded context handling, and enhanced tool integration.
Uncensoring Method: Achieved full compliance through supervised fine-tuning and KTO reinforcement learning without direct weight editing.
Training Infrastructure: Model trained on Bittensor Subnet 4 using confidential decentralized compute.