
Anthropic is recruiting a chemical weapons expert to prevent Claude from being misused, but experts question whether the move could actually increase risk

According to 1M AI News monitoring, Anthropic has posted a LinkedIn listing for a "Chemical Weapons and High-Yield Explosives Policy Manager," requiring applicants to have at least 5 years of experience in "chemical weapons and/or explosives defense" and knowledge of Radiological Dispersal Devices (dirty bombs). The company stated that the position is meant to prevent Claude from being used for "catastrophic misuse": it is concerned that its AI tools could be used to access information on manufacturing chemical or radiological weapons, and it wants an expert to assess whether existing security measures are adequate. Anthropic noted that the role is similar in nature to positions the company has established in other sensitive areas.

OpenAI has posted a similar position on its recruitment page, seeking a Biological and Chemical Risk Researcher at a salary of up to $455,000 per year, nearly double that of Anthropic's role. In response, Dr. Stephanie Hare, a technology researcher and co-host of the BBC program "AI Decoded," asked: "Is it really safe to have AI systems deal with sensitive information on chemicals, explosives, and radiological weapons, even if AI has been instructed not to use this information?" She also pointed out that no international treaties or regulations currently govern such work or the use of AI in connection with these weapons, emphasizing that "all of this is happening out of the public eye."
