BlockBeats News, May 4th. A user in the AI community, "X Freeze", posted that mainstream artificial intelligence models, including ChatGPT, Claude, and Gemini, have shown "less agreement with conservative positions" on issues such as gender, immigration, and crime, raising questions about potential systemic bias in their value orientations.
The post argues that as AI capabilities rapidly advance, the "value alignment" process may be shaped by training data and design mechanisms, leading models to converge on similar stances on certain public issues. This has sparked discussion in the community about "training data bias" and "model design orientation."
Mainstream AI developers generally state that their model training aims to improve information accuracy and safety, and to reduce bias through diverse data and evaluation mechanisms, but the debate over AI value neutrality continues.
