Bittensor Subnet Completes Largest-Ever Decentralized LLM Pretraining as DeAI Narrative Returns

BlockBeats News, March 16th: according to official sources, the Bittensor subnet Templar (SN3) completed pre-training of Covenant-72B, the largest decentralized LLM pre-training run to date, on March 10th. Community supporters argue this milestone shows that Bittensor is not a "meme coin" but decentralized infrastructure genuinely capable of producing top-tier AI models.


It is reported that Covenant-72B is a 72-billion-parameter language model pre-trained by the Templar team on Bittensor Subnet 3, coordinated entirely over the public internet with no centralized data center. The model scored 67.1 on MMLU (zero-shot), surpassing centralized baselines such as LLaMA-2-70B and LLM360 K2 under the same evaluation conditions. It is the largest fully permissionless collaborative language model to date, with over 70 different nodes contributing compute throughout the run. The team has released all weights and checkpoints under the Apache license.


Possibly driven by this news, Bittensor (TAO) and its subnet tokens saw a broad price surge, with TAO up 54.8% over the past two weeks. The subnet token τemplar surged 194% over the last 7 days and is currently trading at $19.3.
