OpenAI's Aggregator Play, Apple Releases AIM, & OpenAI's Defamation Lawsuit
OpenAI's GPT Store could be an aggregation play, Apple releases their image recognition tech on Hugging Face, and OpenAI faces a defamation lawsuit over a ChatGPT hallucination
OpenAI's Aggregator Play with the GPT Store
Ben Thompson argued this week that OpenAI is making an aggregation play with the GPT Store, comparing the 3 million user-created chatbots to the millions of apps on Apple's App Store, or the over 1 billion websites that rely on Google Search.
This aggregation strategy aligns with developer Allen Pike's theory that OpenAI's broader vision is being an "Everything Engine," where ChatGPT would automatically direct queries to the most suitable custom GPT. Selecting specific GPTs for specific queries may only be temporary.
This "Everything Engine" could eventually incorporate advertising models akin to web search engines, which would help OpenAI monetize all of their free ChatGPT users. While it is up in the air whether OpenAI goes that direction, Pike argued that Google likely will.
Apple Releases Image Recognition Tech AIM on Hugging Face
Apple released their image recognition technology Autoregressive Image Models (AIM) on Hugging Face. Detailed in this week's white paper, AIM is a collection of vision models pre-trained using an autoregressive objective, a concept borrowed from the success of large language models.
These models have shown remarkable scaling properties, achieving as high as 84% accuracy on the ImageNet-1k dataset with a 7 billion parameter model trained on 2 billion images. This breakthrough indicates a promising future for large-scale vision models, offering a new frontier in AI-driven image recognition. AIM's performance scales with both the model capacity and the quantity of data, without showing signs of saturation, suggesting a vast potential for further advancements.
OpenAI Faces Defamation Lawsuit Over ChatGPT's Hallucination
OpenAI is set to contest a defamation lawsuit after a Georgia judge rejected their motion to dismiss the case.
Georgia radio host Mark Walters sued OpenAI after ChatGPT generated a false lawsuit allegation against Walters, which he argues could damage his reputation. OpenAI's defense hinged on the argument that ChatGPT's outputs aren't considered "publications" crucial for a libel claim and that Walters, as a public figure, needed to prove 'actual malice.' However, the court's decision to proceed suggests a need for further examination of Walters' claims.
This case, a first of its kind, will potentially set a precedent for the accountability of AI-generated content and its implications for defamation and libel laws.
Tools & Resources
Tensor Trust is a game created by researchers at UC Berkeley to learn more about the vulnerability of AI to a class of attacks called prompt injection.
You play by defending your password (using defense prompts) or by attacking other players by tricking them to reveal their password.