OpenAI’s ChatGPT-5 was released on 7th August, but is it really their ‘most useful model yet’ (or is that another hallucination)?
One of the most obvious improvements is the unification of the previous specialised models you used to have to choose between. Now a single general model handles simple queries fast: side-by-side tests against its predecessor GPT-4 show ChatGPT-5 processing simple information queries 30-50% faster.
OpenAI has decided to prioritise reasoning over stored knowledge with a new ‘Reasoning-First’ architecture. It’s designed to ground its answers in referenceable information from reliable sources, using Retrieval-Augmented Generation (RAG) to validate what the model already knows against what it retrieves from a search. This should result in greater accuracy, and it means SEO has become even more relevant.
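To see what that means in practice, here’s a minimal sketch of the RAG pattern (not OpenAI’s internal implementation): retrieve a few source passages, then ask the model to answer only from them. The search_web retriever is hypothetical, and the gpt-5 model name is an assumption for illustration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def search_web(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever: in practice this would call a search API
    and return the top-k relevant passages as plain text."""
    return [
        "Passage 1: ...",
        "Passage 2: ...",
        "Passage 3: ...",
    ][:k]

def answer_with_rag(question: str) -> str:
    # Retrieve supporting passages, then constrain the model to them.
    sources = search_web(question)
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model name, for illustration only
        messages=[
            {
                "role": "system",
                "content": "Answer using only the numbered sources provided. "
                           "Cite source numbers, and say so if the sources "
                           "don't cover the question.",
            },
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_rag("What changed in GPT-5's routing behaviour?"))
```

The point of the pattern is that the answer is anchored to retrievable, citable text rather than whatever the model happens to remember, which is exactly why well-indexed, authoritative pages matter more than ever.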
ChatGPT-5’s deeper reasoning model is there for harder problems. When it’s faced with a more complex query, a real-time router automatically decides which mode is best placed to answer, based on conversation type, complexity, intent and tool needs.
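OpenAI hasn’t published the router’s internals, but conceptually it’s a classifier sitting in front of a fast model and a reasoning model. A toy sketch, with made-up heuristics and illustrative model names:

```python
from dataclasses import dataclass

@dataclass
class Query:
    text: str
    needs_tools: bool = False   # e.g. code execution or web browsing requested
    turns_so_far: int = 1       # rough proxy for conversation complexity

def route(query: Query) -> str:
    """Toy router: send simple chat to the fast model and long, tool-heavy
    or analytical requests to the reasoning model. The thresholds and
    model names are illustrative, not OpenAI's actual logic."""
    hard_markers = ("prove", "debug", "step by step", "analyse", "analyze")
    looks_hard = (
        query.needs_tools
        or len(query.text.split()) > 80
        or any(m in query.text.lower() for m in hard_markers)
        or query.turns_so_far > 10
    )
    return "gpt-5-thinking" if looks_hard else "gpt-5-main"

print(route(Query("What's the capital of France?")))                 # gpt-5-main
print(route(Query("Debug this stack trace step by step ...",
                  needs_tools=True)))                                # gpt-5-thinking
```

The upshot for users is that you no longer pick the model yourself; the router makes that trade-off between speed and depth on every turn.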
When faced with more complex coding challenges, GPT-5 seems to have an implicit understanding of what makes its output feel polished. It intuits what’s required rather than needing the elaborate prompts its predecessors demanded to achieve the same outcome, and its new ‘aesthetic sensibility’ is pretty impressive. However, when it comes to image generation, A/B comparisons show GPT-5’s design outputs to be a downgrade.
OpenAI confidently boasted that GPT-5 was its ‘most capable writing collaborator yet’. Tests do demonstrate a certain flair that was lacking in its more wooden predecessors, with the model even weaving in relatable analogies to illustrate its points.
The most useful improvement in ChatGPT-5 is supposed to be the reliability of its answers. Everybody’s experienced the ‘hallucination’ errors that crop up in GPT results, and with more powerful hardware running it and better ways to benchmark it, ChatGPT-5 shouldn’t struggle so much. Yet in tests it’s still inventing results, so beware of rogue health advice or recipes!
Ultimately, it’s better in a lot of ways, but not all. Keep up the good work, OpenAI!