According to 1M AI News monitoring, Context Hub, an AI programming documentation service launched two weeks ago by DeepLearning.AI founder and Stanford University adjunct professor Andrew Ng, has been exposed by security researchers for a supply chain attack risk. Context Hub provides API documentation to programming agents through an MCP server, where contributors submit documentation via GitHub PRs, maintainers merge them, and agents read as needed. Creator of the alternative service lap.sh, Mickey Shmueli, released a proof-of-concept attack, highlighting that this pipeline "has no content review at any step."
Shmueli crafted two sets of fake documentation targeting Plaid Link and Stripe Checkout, each embedding a fake PyPI package name, and tested with Anthropic's three levels of models 40 times each:
1. Haiku consistently wrote the malicious package to requirements.txt without displaying any warnings in the output.
2. Sonnet issued warnings in 48% (19/40) of tests but still wrote the malicious dependency in 53% (21/40) of cases.
3. Opus performed the best, issuing warnings in 75% (30/40) of tests and not writing the malicious dependency in the code.
An attacker only needs to submit a PR that gets merged to successfully poison the supply chain, with a low review threshold: out of 97 closed PRs, 58 were merged. Shmueli pointed out that this is essentially a variant of an indirect injection, where AI models struggle to reliably distinguish between data and commands when processing content, and other community documentation services also lack in content review. Andrew Ng did not respond to requests for comment.
