How to tell if an image is AI-generated? Testing the latest OpenAI Image 2.0 model

Lately, both in professional circles and in online discussions, everyone has been marveling at how fast AI is evolving. Especially since OpenAI released its Image 2.0 class of models, I'll be honest: as an engineer who works with code and algorithms every day, I sometimes have to stare at an image for a long time before I can tell real from fake. Old rules of thumb like "AI can't draw hands" or "AI doesn't understand physical lighting and shadows" are now obsolete.
For e-commerce sellers, social media users, and even ordinary internet users, the sense that "seeing is no longer believing" is unsettling. Since the naked eye is no longer reliable, we might as well fight algorithms with algorithms. That is why I spent time building the RealPix detection platform.
Test: Can it really see through AI?
To see how many people the current generation of AI can fool, and to stress-test the stability of our system, I ran two sets of extreme tests against the latest models over the weekend. Here is the data from our backend.
Scenario 1: Realistic E-commerce Fake (AI Model + Product Image)
Many merchants now use AI to generate model-plus-product images to cut costs. Take a samurai sword poster as an example: the lighting, the metallic texture, even the muscle tension of the model gripping the sword are extremely realistic and commercially deceptive. The backend captures the underlying data of such an image and analyzes the pixel matrix and lighting consistency.
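RealPix's actual pipeline is not public, so purely as illustration, here is a minimal sketch of one classic forensic signal in this family: the high-frequency noise residual. Everything here is my own assumption (the function names, the median-filter denoiser, the 32-pixel patch size), not the platform's code.
```python
import numpy as np
from PIL import Image, ImageFilter

def noise_residual(path: str) -> np.ndarray:
    """Return the high-frequency residual of an image.

    Camera sensors leave characteristic noise; generative models tend to
    produce residuals with different statistics. Illustrative only.
    """
    img = Image.open(path).convert("L")
    denoised = img.filter(ImageFilter.MedianFilter(size=3))
    return np.asarray(img, dtype=np.float32) - np.asarray(denoised, dtype=np.float32)

def patch_variance_map(residual: np.ndarray, patch: int = 32) -> np.ndarray:
    """Per-patch variance of the residual.

    In real photos the noise level varies with scene content; in many
    synthetic images it is suspiciously uniform across patches.
    """
    h, w = residual.shape
    rows, cols = h // patch, w // patch
    out = np.empty((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            block = residual[i * patch:(i + 1) * patch,
                             j * patch:(j + 1) * patch]
            out[i, j] = block.var()
    return out

# Hypothetical usage on a poster image:
# res = noise_residual("sword_poster.jpg")
# vmap = patch_variance_map(res)
# print("patch noise variance min/median/max:",
#       vmap.min(), np.median(vmap), vmap.max())
```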

(Note: You can easily test these kinds of product photos on our homepage.)
Scenario 2: High-Blur "Life Selfie"
Some people assume that only high-definition images can be tested, or that blurring or filtering an image is enough to evade the algorithm. Neither is true. Consider a heavily blurred selfie that looks like it was snapped casually on a phone: the system can still catch the characteristic noise distribution left by the generation process.
I put this "selfie" into the system and ran it through the cross-comparison engine to check its noise and information entropy.
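"Information entropy" here is just the Shannon entropy of a pixel or residual distribution. As a hedged sketch (again, not RealPix's actual code), this is how such a measure could be computed; the bin count and the idea of feeding it the residual from the earlier sketch are my assumptions.
```python
import numpy as np

def shannon_entropy(values: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (in bits) of a value distribution.

    Genuine sensor noise tends to be high-entropy, while decoder and
    upsampler artifacts are often more regular. Illustrative only; any
    real threshold would have to be tuned on labeled data.
    """
    hist, _ = np.histogram(values.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

# Example, reusing the residual from the earlier sketch:
# res = noise_residual("blurry_selfie.jpg")
# print(f"residual entropy: {shannon_entropy(res):.2f} bits")
```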

Detection Result: 90% Suspicious.
The radar chart shows an abnormal-pattern score of nearly 90% and an AI-trace score of 74%. In other words, AI can fake lens blur well enough to fool the human eye, but it cannot hide the "machine-calculated flavor" at the base pixel level.
A Note on the "GPT 1.5" Label
Observant friends might notice that even though I fed in images from the latest models, the top match in the detection results is labeled GPTIMAGE15 or QWEN.
Let me clarify: the neural network we use for low-level feature capture has been updated and catches the artifacts of the new models without issue, but we haven't yet updated the model-name mapping library on the frontend. So when the system catches one of these latest "monsters," it temporarily labels it with an old tag. Don't worry about the names; focus on the comprehensive risk index, whose core accuracy remains very stable. We will fix this display issue in a coming update.
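To make the display quirk concrete, here is a hypothetical sketch of the kind of frontend mapping that goes stale. The dictionary keys and the fallback behavior are mine for illustration, not our actual code.
```python
# Hypothetical stale label mapping -- names are illustrative only.
MODEL_LABELS = {
    "gptimage15": "GPTIMAGE15",
    "qwen": "QWEN",
    # New models' fingerprints are detected by the backend, but until
    # entries are added here the UI has no matching display name.
}

def display_label(model_id: str) -> str:
    # Unknown IDs fall back to a legacy tag instead of showing nothing,
    # which produces exactly the mislabeling described above.
    return MODEL_LABELS.get(model_id, "GPTIMAGE15")
```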
Ready to Use, Completely Free
This started as a small tool I built to solve my own pain points, and it is now open for everyone. Whether you want to check a competitor's poster, suspect an asset has been plagiarized, or are unsure about a profile picture, just drop it into RealPix and test it.