
Distributed AI Lab Gradient Releases Echo-2 Distributed Reinforcement Learning Framework

BlockBeats News, February 12 — Distributed AI lab Gradient has released Echo-2, a distributed reinforcement learning framework that aims to remove a key efficiency bottleneck in AI training: by decoupling the Learner and the Actor at the architecture level, it reduces the post-training cost of large models.


According to official figures, the framework cuts the post-training cost of a 30B-parameter model from $4,500 to $425. Echo-2 uses compute-storage separation for asynchronous training (async RL), allowing sampling workloads to be offloaded to unstable GPU instances and to heterogeneous GPUs via Parallax. In addition, Gradient plans to launch Logits, an RLaaS (Reinforcement Learning as a Service) platform, which is now open for reservations by students and researchers.
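Echo-2's internals are not public, so as a rough illustration only, the Learner/Actor decoupling and asynchronous training described above can be sketched with a toy actor-learner loop: several actor threads sample trajectories using possibly stale policy snapshots (standing in for cheap, unreliable sampling GPUs), while a single learner consumes them and updates the policy. All names and the toy "gradient" are hypothetical, not Echo-2's API.

```python
import queue
import random
import threading

class Policy:
    """Toy policy: one scalar weight the learner nudges toward a target.
    A version counter tracks how many asynchronous updates were applied."""
    def __init__(self):
        self.weight = 0.0
        self.version = 0
        self._lock = threading.Lock()

    def snapshot(self):
        # Actors read a snapshot that may lag behind the learner (async RL).
        with self._lock:
            return self.weight, self.version

    def update(self, grad, lr=0.05):
        with self._lock:
            self.weight += lr * grad
            self.version += 1

def actor(policy, traj_queue, n_rollouts, seed):
    """Actor role: sample rollouts with whatever policy snapshot is available.
    In a real system this would run on separate (possibly preemptible) GPUs."""
    rng = random.Random(seed)
    for _ in range(n_rollouts):
        w, version = policy.snapshot()
        # Toy environment: reward peaks when the weight is 1.0.
        reward = 1.0 - (w - 1.0) ** 2 + rng.gauss(0.0, 0.01)
        traj_queue.put({"weight_used": w, "reward": reward, "version": version})

def learner(policy, traj_queue, n_updates):
    """Learner role: consume trajectories and update the policy.
    A real learner would compute a policy gradient from the reward;
    here a toy pull toward the optimum stands in for it."""
    for _ in range(n_updates):
        traj = traj_queue.get()
        grad = 1.0 - traj["weight_used"]  # stale snapshots give stale gradients
        policy.update(grad)

def train(n_actors=4, rollouts_per_actor=50):
    policy = Policy()
    # A bounded queue throttles actors when the learner lags,
    # keeping snapshot staleness within a fixed window.
    traj_queue = queue.Queue(maxsize=8)
    actors = [
        threading.Thread(target=actor, args=(policy, traj_queue, rollouts_per_actor, i))
        for i in range(n_actors)
    ]
    for t in actors:
        t.start()
    learner(policy, traj_queue, n_actors * rollouts_per_actor)
    for t in actors:
        t.join()
    return policy

if __name__ == "__main__":
    p = train()
    print(f"final weight ~ {p.weight:.2f} after {p.version} async updates")
```

The bounded queue is the design choice worth noting: decoupling only pays off if the learner tolerates stale trajectories, and a fixed-size buffer is the simplest way to cap how stale they can get.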
