A mysterious new artificial intelligence image generator has taken the top spot in quality rankings, outperforming well-known platforms like Midjourney and DALL-E 3, despite its unknown origins.
The AI model, identified only as “Red Panda,” has achieved the highest position on Artificial Analysis’s text-to-image generation leaderboard, a platform that uses crowdsourced evaluations to rank AI image generators.
How the Rankings Work
Artificial Analysis employs a rating system similar to chess rankings, known as the Elo system, to evaluate AI image generators. Users are presented with two images created from the same prompt and must choose which one they prefer based on quality and accuracy. The platform randomly pairs different AI models against each other until sufficient data is collected to establish rankings.
Red Panda has achieved an impressive 79% win rate in these comparisons, with an Elo score of 1,244. The model typically generates images in approximately seven seconds, demonstrating both quality and efficiency in its output.
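To make the ranking mechanics concrete, here is a minimal sketch of how a pairwise Elo update works in a head-to-head arena like this one. The K-factor of 32 and the example ratings are illustrative assumptions; the exact parameters Artificial Analysis uses are not public.

```python
# Minimal sketch of a pairwise Elo update for an image-comparison arena.
# The K-factor (32) is an assumed, illustrative value.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return the new ratings after one head-to-head comparison."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# Example: a 1,244-rated model beating a 1,100-rated rival gains only a
# few points, because the win was already expected.
print(update_elo(1244, 1100, a_won=True))
```

In other words, a high Elo score reflects not just how often a model wins, but how strong the opponents it beats are.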
Mystery Surrounds Red Panda’s Origins
Despite its strong performance, virtually no information is available about Red Panda’s developers, underlying technology, or potential launch plans. Industry observers speculate that the model might be undergoing public testing before an official release, a common practice in the AI industry.
The name has led to speculation about possible Chinese origins, as red pandas are native to the eastern Himalayas and southwestern China. However, no concrete evidence has emerged to support these theories.
Current Performance and Competition
Red Panda has consistently outranked major competitors in the Artificial Analysis arena, including:
– Black Forest Labs’ FLUX1.1 Pro
– Ideogram
– Midjourney v6
– DALL-E 3
– Stable Diffusion 3 Large
– Amazon Bedrock
Red Panda’s capabilities can be explored through the Artificial Analysis Text-to-Image Arena, where users can participate in the evaluation process. However, Red Panda’s images appear only at random among the other models’ outputs, so users cannot select the model directly.
There are reasons to remain skeptical about the new model, however. Many of the competing image generators it has beaten perform relatively poorly, and users of the ranking platform have not yet had the chance to run many comparisons: only about 15,000 direct head-to-head battles against other generators have been recorded. There are also signs that the score is already slipping, so it remains to be seen whether the new model will keep its crown.
(Image by Mathias Appe)