
Alibaba claims HappyHorse-1.0 and will also release another multimodal model

BlockBeats News, April 10th. Alibaba has officially confirmed that the video generation model HappyHorse-1.0 is its own self-developed product. The model was built by the original Future Life Experimentation Lab team of Taobao Group, which, in Alibaba's latest organizational adjustment, was placed under the "AI Innovation Department" of the newly established Alibaba Token Hub (ATH) business group.


In anonymous voting on the third-party evaluation platform Artificial Analysis, HappyHorse-1.0 significantly outperformed ByteDance's Seedance 2.0 and Kuaishou's Kling 3.0 on pure video generation tasks, and performed on par with Seedance 2.0 on combined audio-visual generation.


According to sources close to Alibaba, HappyHorse-1.0 is only one of the team's self-developed multimodal models, and Alibaba will soon launch another, different multimodal model. HappyHorse-1.0 is currently not open source, consistent with Alibaba's recent overall shift toward a closed-source strategy: since the end of March, Alibaba has released several new models in succession without open-sourcing them.


This intensive push into multimodal models was driven by the strong performance of ByteDance's Seedance 2.0 during the 2026 Spring Festival, which came as a surprise inside Alibaba. Moreover, multimodal generation sharply increases token consumption, which in turn affects market share in MaaS (Model as a Service): according to IDC data, as of the first half of 2025, Volcano Engine already held 49.2% of the market, while Alibaba Cloud accounted for only 27%.
