
Chen Tianqiao: Mainstream AI is all "liberal-arts large models." MiroMind aims to build a "science large model." More frightening than a cut-off of water or electricity is a cut-off of wisdom.

According to 1M AI News monitoring, in the same interview Chen Tianqiao systematically laid out his cognitive framework for AI. He divides current large models into two paradigms. The mainstream "Liberal Arts Large Model" is centered on language generation and textual consistency; it excels at simulation and will become infrastructure, like water and electricity, for education, communication, and content production. What MiroMind aims to build is the "Science Large Model," whose value lies in "discovery": tracing chains of cause and effect, and caring about whether a hypothesis can be confirmed or refuted by reality. Its ultimate product is not a paragraph of text but new knowledge.

He set a tough benchmark for the team: maintain close to 99% accuracy across complex reasoning chains of more than 300 steps, with every step verifiable and traceable. He did the math: even at 99% accuracy per step, by the 300th step the cumulative accuracy falls below 5%. Hence, "holding 99% accuracy over every 100 steps is a small victory each time." MiroMind's MiroThinker 1.5 set a record on the BrowseComp benchmark with 30B parameters and topped the global FutureX leaderboard.
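The arithmetic behind that benchmark can be checked directly. A minimal sketch, assuming each step succeeds independently with the same probability (the function name and structure are illustrative, not from the interview):

```python
def chain_accuracy(per_step: float, steps: int) -> float:
    """Probability that an entire reasoning chain is correct,
    assuming each step independently succeeds with probability
    `per_step`. Errors compound multiplicatively."""
    return per_step ** steps

# 99% accuracy per step over a 300-step chain:
print(f"{chain_accuracy(0.99, 300):.3f}")  # ~0.049, i.e. under 5%
```

This is why per-step reliability must be pushed far beyond 99% before long reasoning chains become trustworthy end to end: at 99.9% per step, the same 300-step chain still only completes correctly about 74% of the time.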

Regarding AI ethics, Chen Tianqiao's judgment was unexpected: the core issue is not privacy or fairness but "the right of access to AI." He believes that the stratification of AI capability will fracture human cognition into layers, where people at different cognitive levels may "not even be able to discuss the same topic, because their AIs will construct different realities." When AI is controlled by a few giants, once the API is cut off, "what will be more terrifying than a water or power outage is a wisdom outage." He calls on governments to encourage more open-source models to safeguard the general public's right to choose their AI access.
