BlockBeats News, March 2nd — Anthropic has reportedly launched an "Export ChatGPT Memory Data" prompt tool that helps users migrate their stored memory data to its own Claude model, drawing industry attention.
According to publicly available descriptions, the tool lets users export their memory data from OpenAI's ChatGPT by copying and pasting specific prompts, then import it into Claude. Commentators see the move as directly undermining the user stickiness and switching costs that ChatGPT derives from its memory feature.
Market observers regard the memory mechanism as an important moat for large-model products: the longer a user stays, the better the model understands their preferences, context, and conversation history, and the higher the cost of leaving. If third-party tools make data migration convenient, the user lock-in logic of today's AI products could change.
The report also noted that Anthropic had previously been restricted from use in U.S. Department of Defense-related systems, yet the company's popularity and visibility have surged, with Claude topping some application charts.
For now, the tool's compliance status and the platforms' responses remain unclear. The industry widely believes that competition among large models has extended beyond raw performance to ecosystems and data sovereignty, and that user data portability may become a key variable in the next stage.
