Tether Launches Mobile On-Device Medical AI: 1.7B-Parameter Small Model Outperforms Models 16x Its Size, Eliminating Cloud Dependency

According to reporting from Dongcha (动察), Tether, the issuer of USDT, announced today that its AI research team has launched the QVAC MedPsy series of medical language models, designed to run locally on low-power devices such as smartphones and wearables without relying on cloud servers. Through an efficient architecture, the models deliver performance far beyond what their size suggests: the 1.7-billion-parameter version scored an average of 62.62 across seven closed medical benchmarks, beating Google's MedGemma-4B by 11.42 points and surpassing MedGemma-27B, a model with nearly 16 times as many parameters, on realistic clinical benchmarks such as HealthBench Hard. The 4-billion-parameter version scored higher still at 70.54, outperforming larger models across the board while cutting inference token consumption by up to 3.2x. Both models are released in quantized GGUF format (the 1.7B version is roughly 1.2 GB), making them suitable for mobile and edge deployment.

This release challenges the conventional assumption that larger models always perform better, instead pursuing efficiency through staged medical post-training (supervised fine-tuning on clinical reasoning data followed by reinforcement learning) to deliver genuine on-device privacy protection and low-latency inference. Tether CEO Paolo Ardoino said this allows medical AI to process sensitive data directly at hospitals and on device endpoints, with no need to transmit it to the cloud, thereby reducing costs, latency, and privacy risks. He added that it could reshape medical AI infrastructure and promote local deployment worldwide, especially in underdeveloped regions.
