Goldman Sachs Hong Kong Staff Banned from Using Claude: No Business Trips to HK Allowed Either, Anthropic Claims Never Formally Supported Hong Kong

According to Cheems Analytics monitoring, employees at Goldman Sachs' Hong Kong office can no longer access Anthropic's Claude model through an internal AI platform as of a few weeks ago. The restriction is geographically enforced: employees traveling from overseas to Hong Kong are also unable to access it while in Hong Kong. The most affected are software engineers who use Claude for coding and financial modeling.

Goldman Sachs conducted a stringent review of its contract terms with Anthropic and, after negotiations with the company, determined that Hong Kong employees should not use any Anthropic products. A spokesperson for Anthropic told the Financial Times that Claude had never been officially supported in Hong Kong but declined to elaborate. The restriction does not extend to other AI suppliers: Goldman employees can still use ChatGPT and Gemini on the internal platform.

A key concern for U.S. AI companies restricting Hong Kong usage is "distillation": local entities training their own models by making large volumes of queries against foreign models' outputs. While ChatGPT and Claude are already blocked in mainland China, Hong Kong had previously remained largely unrestricted, with usage boundaries left to the discretion of U.S. companies. Hong Kong serves as the investment banking and cross-border trading hub for many global banks' Greater China operations, and if employees there cannot access top-tier models, they risk falling behind other teams. The Financial Times reported that it could not confirm whether other banks or firms have implemented similar restrictions.
