According to monitoring by Dynamic Beating, last Friday a developer with the ID BugstoOai posted a security report in the official OpenAI developer community, claiming to have discovered a logic vulnerability in subscription validation in the iOS version of ChatGPT. According to the report, the OpenAI backend validates the legitimacy of the Apple Pay receipt signature and the validity of the OpenAI auth token in the request, but does not cross-check whether the Apple ID that made the purchase and the OpenAI account receiving the Plus subscription belong to the same person. The report summarized the current authorization logic as "valid receipt + valid token = Plus activation" and likened it to a cashier verifying only the authenticity of a receipt without checking the customer's identification.
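The flaw described in the report can be sketched in a few lines. This is purely illustrative: the function names, field names, and structure below are assumptions for explanation, not OpenAI's actual implementation.

```python
# Illustrative sketch of the reported authorization flaw.
# All helpers and field names are hypothetical, not OpenAI's real code.

def verify_receipt_signature(receipt):
    # Stand-in for Apple receipt signature verification.
    return receipt.get("signature_valid", False)

def verify_auth_token(token):
    # Stand-in for OpenAI auth token validation.
    return token.get("valid", False)

def authorize_upgrade_as_reported(receipt, token):
    """The logic the report summarizes: valid receipt + valid token = Plus."""
    return verify_receipt_signature(receipt) and verify_auth_token(token)

def authorize_upgrade_with_binding(receipt, token):
    """Adds the cross-check the report says is missing: the purchasing
    Apple ID must be linked to the account requesting the upgrade."""
    if not authorize_upgrade_as_reported(receipt, token):
        return False
    return receipt.get("apple_account_hash") == token.get("linked_apple_hash")
```

Under the reported logic, a receipt purchased under one Apple ID paired with any other account's valid token would still pass; the second function rejects that combination.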
The affected components are the ChatGPT iOS application (tested in version v1.2026.xx, per the report author) and the backend endpoint `/backend-api/subscription/upgrade`. The report also offered mitigation suggestions: binding each receipt to the purchaser's identity, enforcing one-time use of receipts, establishing a fingerprint link between the Apple ID and the OpenAI account, and monitoring for the same transaction_id appearing across different accounts.
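Two of those suggestions, one-time receipt use and cross-account transaction_id monitoring, can be combined in a single check. The sketch below is a hedged illustration using an in-memory dictionary in place of a real datastore; all names are invented for this example.

```python
# Illustrative mitigation sketch: single-use receipts plus an alert when
# the same transaction_id is redeemed from different accounts.
# In-memory dict stands in for a persistent store; names are hypothetical.

class ReceiptLedger:
    def __init__(self):
        # transaction_id -> account_id that first redeemed it
        self._redeemed = {}
        self.alerts = []

    def redeem(self, transaction_id, account_id):
        """Return True if this redemption is allowed, False otherwise."""
        first_owner = self._redeemed.get(transaction_id)
        if first_owner is None:
            self._redeemed[transaction_id] = account_id
            return True
        if first_owner != account_id:
            # Same receipt surfacing on a second account: the reuse
            # pattern the report says should be monitored.
            self.alerts.append((transaction_id, account_id))
        return False  # one-time use: any repeat redemption is rejected
```

Enforcing one-time use alone would already break the replay pattern the report describes, since each Apple receipt could activate Plus at most once.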
The English post outlined only high-level steps, stating that detailed reproduction steps were withheld in line with responsible disclosure principles. It linked to a Chinese article as the original source (linux.do/t/topic/1981747). The Chinese version explicitly described the exploitation method: purchasing Plus with a Turkish-region Apple ID (a monthly fee of 499 lira), intercepting the receipt sent from the ChatGPT application to OpenAI with a tool such as mitmproxy, and then replaying that receipt against the subscription endpoint to activate Plus on different accounts. The author claimed that the low-priced ChatGPT Plus subscriptions resold on online marketplaces originate from this method.
As of now, OpenAI has not responded to the report on the forum or through any other channel. Some users in the discussion thread questioned whether the content was AI-generated, and no evidence of successful reproduction has surfaced. The report provided no complete Proof of Concept (PoC), nor has it been independently verified by third-party security researchers. For the moment, it can only be treated as an unverified disclosure.
