
GPT-5 Just Scored Higher Than Humans at Work — Should We Be Worried?

I was reading something last week that made me put my phone down and just sit with the information for a minute. OpenAI's GPT-5 — the latest version of the model behind ChatGPT — had been tested on a benchmark called OSWorld-V. This benchmark simulates real desktop productivity tasks. The kind of work that happens in offices every day. Drafting documents. Managing files. Navigating software. Completing multi-step workflows. GPT-5 scored 75 percent. The human baseline on the same tasks was 72.4 percent. The AI had crossed the line. It was not catching up to humans anymore. It was ahead. And I sat there thinking — okay. This is the moment everyone has been nervously waiting for. So what do we actually do with this information?

What GPT-5 Actually Did — The Real Story Behind the Score

Before we get into whether you should be worried — I want to explain what actually happened, because the headline version of this story is both accurate and missing important context at the same time.

OSWorld-V is a benchmark developed specifically to test AI performance on real computer tasks. Not writing tasks. Not answering questions. Actual desktop work — the kind that requires navigating real software interfaces, completing multi-step processes, and handling the kind of unpredictable situations that come up when you are working in actual applications. It is one of the most realistic AI benchmarks ever created because it is specifically designed to replicate what work actually looks like rather than what researchers wish work looked like.

GPT-5 — or more precisely, the system OpenAI calls GPT-5 — scored 75 percent on this benchmark. Human workers attempting the same tasks scored 72.4 percent on average. That is not a massive gap. But it is a crossing. The AI has gone from below human performance to above it on this specific measure.

Now here is the context that matters. The 72.4 percent human baseline is an average. Some humans score much higher. Some score lower. The benchmark measures a specific range of productivity tasks — not all work, not creative work, not interpersonal work, not work that requires physical presence or deep domain expertise built over years. It is a meaningful measure. It is not a complete picture of human work capability.

What it is — honestly and accurately — is a signal that we have crossed a threshold that previously existed only in predictions. AI doing certain kinds of office work better than the average human is no longer a future possibility. It is a documented current reality. And that deserves a serious response — not panic, but not dismissal either.

✍️
My Personal Experience

I have been using AI tools for my freelance work and my blog for over a year. And I want to be honest — there are specific tasks where AI is already clearly better than me. Finding the right structure for a complex piece of writing. Generating five variations of a headline in thirty seconds. Researching a topic broadly before I go deep. These are things I used to spend significant time on. AI does them faster and often better. I am not saying this to be alarming. I am saying it because I think the GPT-5 benchmark result is confirming something that people who actually use these tools daily have already been quietly noticing for months. The headline is new. The underlying reality has been building for a while.                            




What This Score Actually Means for Regular Workers

Let me be specific about what crossing this benchmark threshold does and does not mean for real people doing real jobs.

What It Does Mean

It means AI can now reliably complete the kind of structured, process-driven desktop work that makes up a significant portion of many office jobs. Document formatting. Data entry. Navigating software to complete multi-step administrative tasks. Following procedural workflows. Managing files and information across applications. These are real tasks that real people spend real hours doing — and AI has demonstrated it can do them at or above average human performance.

For organisations — this creates a genuine economic incentive to automate these specific tasks. Not necessarily to fire the people doing them immediately. But to think carefully about whether new positions doing primarily these tasks need to be filled, and whether existing roles can be restructured to require fewer of them. That calculation is already happening in boardrooms right now — the GPT-5 result just gave it more concrete justification.

It also means that the people whose jobs consist primarily of these kinds of structured procedural tasks are in a more vulnerable position than they were six months ago. Not because they will all lose their jobs immediately. But because the economic argument for their roles has weakened in a measurable way.

What It Does Not Mean

It does not mean AI can do everything a human worker does. The benchmark measured specific task performance. It did not measure judgment in ambiguous situations. It did not measure the ability to navigate complex human dynamics. It did not measure creative problem-solving when the problem itself is not clearly defined. It did not measure accountability — the thing that happens when something goes wrong and someone needs to answer for it.

Human work is not just a collection of discrete tasks. It is embedded in relationships, context, organisational culture, and the kind of implicit understanding that accumulates over years of experience in a specific environment. AI completed tasks on a benchmark. It did not replicate a full human worker's contribution to an organisation.

The gap between "can complete these tasks better than average" and "can replace a human worker" is real and significant. But it is also narrowing. And the honest thing to do is acknowledge that — rather than either catastrophising or dismissing it.

The Jobs and Skills Most Affected — Honest Assessment

I want to be direct here because vague warnings about "AI affecting jobs" without specifics are not actually useful to anyone.

Higher Risk — Structured Process Work

Data entry and data processing. Document creation from templates. Administrative scheduling and coordination. Basic customer service handling standard queries. Entry-level content writing that follows templates. Basic financial bookkeeping and reporting. These are not bad jobs. They are jobs where the primary value is in executing a defined process reliably. And that is exactly the kind of work the GPT-5 benchmark shows AI getting better at.

If your role is primarily composed of tasks like these — the risk is not theoretical anymore. It is real and it is worth taking seriously in terms of what skills you are building alongside your current responsibilities.

Medium Risk — Changing But Not Disappearing

Content creation roles where quality and originality matter. Marketing work that requires understanding specific audiences. Project management that involves complex human coordination. Teaching and training where the relationship and personalisation matter. Design work where brand identity and creative judgment are central.

These roles are changing significantly. AI handles more of the execution. The human contribution shifts toward direction, judgment, quality evaluation, and relationship management. The jobs do not disappear but the skills that make someone valuable in them shift considerably.

Lower Risk — Human Judgment and Presence

Work requiring physical skilled presence — healthcare, skilled trades, emergency response. Work requiring genuine ethical accountability — senior legal work, medical diagnosis, financial advice where liability is real. Work requiring deep sustained human relationships — therapy, counselling, mentoring. Leadership roles where the human dimension of who you are matters as much as what you can do.

These are not immune to AI influence. But they require things that benchmarks do not and cannot measure — and that AI cannot replicate in any meaningful near-term timeframe.

The Mistakes People Make When They Hear News Like This

I have watched how people respond to AI capability announcements for a while now and the same patterns keep appearing. These are worth naming specifically.

Mistake 1 — Treating a benchmark result as a complete picture of reality. GPT-5 scored higher than humans on a specific benchmark measuring specific tasks. This is meaningful. It is not the same as GPT-5 being better than humans at all work or most work. Benchmarks measure what they measure. They do not measure everything that matters. The people who panic at every benchmark result and the people who dismiss every benchmark result are both responding incorrectly. The right response is to understand specifically what was measured and think carefully about whether that measurement is relevant to your situation.

Mistake 2 — Assuming this changes nothing because previous predictions were wrong. There is a legitimate history of AI job displacement predictions that did not materialise on the timelines predicted. This has made some people broadly skeptical of any AI job threat claims. But the OSWorld-V result is different from previous predictions because it is not a prediction — it is a measured result. The AI has already done the thing. Dismissing it because previous warnings were premature misses that this one is documented current reality.

Mistake 3 — Waiting to respond until the impact is immediate and personal. The time to build skills that reduce your vulnerability to AI displacement is before the displacement pressure is on you — not during it. People who are building AI skills, developing judgment-heavy capabilities, and expanding their professional value beyond structured task execution right now have significantly more options than those who wait until their specific role is under direct threat.

Mistake 4 — Thinking "my job is different" without actually analysing how. This is the most common and most dangerous mistake. Everyone believes their specific role has elements that make it harder to automate than the generic description of their job title suggests. Sometimes that is true. Often it is less true than it feels from the inside. Do an honest audit — what percentage of your actual daily work hours are spent on structured, process-driven tasks that follow defined patterns? That percentage is your realistic exposure.

Mistake 5 — Focusing entirely on threat without seeing opportunity. The same AI capability that creates displacement pressure also creates real opportunities for people who position themselves correctly. Building AI skills while the demand for those skills is outpacing supply. Using AI to dramatically increase your own productivity and output. Moving into roles that sit at the interface between AI capability and human judgment. The window for these opportunities is open right now and will not stay open indefinitely.

✍️
My Personal Experience

I made mistake number four about my own work. I told myself that blogging and content creation required human voice and experience and perspective in ways that meant AI could not really threaten it. That was partly true and partly comfortable self-deception. When I actually listed out what I spend time on — a significant portion was structured tasks. Formatting posts. Generating outlines. Researching keywords. Finding examples to illustrate points. Writing first drafts of sections where I already knew what I wanted to say. AI can now do all of this. The parts of my work that AI genuinely cannot replicate are my personal stories, my specific perspective, my relationship with readers, my judgment about what is worth writing about. Those parts are real and valuable. But they are a smaller percentage of my total work hours than I had been telling myself. That honest audit was uncomfortable. It also changed how I approach my time.

What Actually Helps — How to Position Yourself Right Now

This is the section I actually wanted to write — because information without direction is just anxiety-inducing. Here is what I think genuinely helps based on what I have observed and experienced.

  • Do an honest audit of your work. List every significant thing you do in your job or work. Categorise each item honestly — structured and process-driven, or requiring judgment and relationship and context that is specific to you. The proportion matters. If most of your work is in the first category — that is your actual risk exposure and it is worth taking seriously now.
  • Start using AI for the structured parts of your work — deliberately. This sounds counterintuitive. But using AI for the structured tasks yourself — rather than waiting for your employer to automate them away — gives you two things. First, you become more productive. Second, you free up your time for the judgment-heavy, relationship-heavy work that is harder to automate. You are essentially pre-emptively repositioning your own contribution.
  • Build skills in AI evaluation and direction — not just AI use. The people who will be most valuable as AI capability increases are not simply the ones who can use AI tools. They are the ones who can direct AI effectively, evaluate its outputs critically, catch its errors, and apply human judgment to decide when AI output is good enough and when it needs human intervention. This is a skill set that requires genuine engagement with AI tools — not just theoretical knowledge about them.
  • Invest in your domain expertise. Here is something genuinely counterintuitive — AI is making deep domain expertise more valuable, not less. Because AI can produce generic outputs in any field, the person who can evaluate whether those outputs are actually correct, appropriate, and useful for a specific context becomes more important. Your years of experience are not worthless. They are the thing that makes AI outputs in your field usable by people who lack that experience.
  • Pay attention to where human judgment is legally or ethically required. There are growing areas where regulation, liability, and professional ethics require human decision-making. Medical diagnosis. Legal advice. Financial recommendations with real consequences. Safety-critical engineering decisions. These requirements are not going away — if anything they are being reinforced as AI capability increases. Positioning yourself in roles where human accountability is legally mandated provides a form of structural protection that pure performance competition does not.

Frequently Asked Questions

Q1. Does GPT-5 scoring higher than humans mean AI will replace most jobs?

Not most jobs — but specific roles and specific task categories within many jobs are genuinely at risk. The benchmark measured a specific range of structured productivity tasks. Jobs that are primarily composed of these kinds of tasks face real displacement pressure. Jobs that are primarily composed of judgment, relationship, creativity, and physical skilled work are less immediately threatened. The honest answer is somewhere between "nothing will change" and "everything will change" — which is the most useful but least satisfying kind of answer.

Q2. What is GPT-5 and how is it different from GPT-4?

GPT-5 is OpenAI's latest language model — a significant upgrade from GPT-4 in reasoning capability, instruction following, and the ability to take multi-step actions across software environments. The most significant difference for the job displacement conversation is that GPT-5 includes what OpenAI calls "agentic" capability — meaning it can autonomously execute multi-step workflows across different applications rather than just answering questions or generating text. That is what makes the OSWorld-V benchmark result meaningful — it tested this agentic capability on real work tasks.

Q3. Should students change what they study because of AI advancement?

Probably yes — in terms of how they study and what additional skills they build alongside their core subject, if not necessarily what subject they study. Every field needs people who deeply understand the domain and can work effectively with AI tools within it. The students who will be most valuable in five years are not necessarily those who chose the most AI-resistant field — they are those who combined genuine domain knowledge with strong AI literacy. Adding AI skills to whatever you are studying is more valuable than trying to predict which field AI will and will not affect.

Q4. How should freelancers respond to AI getting better at work tasks?

By using AI to do more — not by avoiding it out of fear. Freelancers who use AI tools to dramatically increase their output and quality will outcompete those who do not, regardless of whether AI could theoretically do similar work. Your value as a freelancer is not just in the output — it is in the relationship, the accountability, the contextual understanding of the specific client, and the judgment that comes with your experience. Use AI to produce more of the output. Keep developing the judgment and relationship skills that justify why clients hire you specifically rather than just using AI themselves.

Q5. Is India more or less affected by AI work displacement than other countries?

India has specific exposure because a significant portion of India's service economy — BPO, IT services, data processing, back-office work — is built on exactly the kinds of structured process tasks that AI is getting better at. This does not mean India's economy is about to collapse — it means the specific sectors that have driven much of India's IT and service sector growth face real transition pressure over the next five to ten years. The opportunity for India is in the enormous talent base that could, with the right investment in AI skills, move up the value chain into the kinds of AI-augmented professional roles that are harder to automate.

Q6. What is the single most important thing I can do right now in response to this?

Do the honest audit I described — list your actual daily work tasks and assess honestly which category each falls into. That audit will tell you more about your real risk exposure than any general article about AI and jobs — because it will be specific to you rather than about a generic worker. Once you have done that honestly, the right next steps become much clearer. If most of your work is structured and process-driven — start building judgment and AI-direction skills urgently. If most of your work already requires unique human judgment — focus on deepening that judgment and using AI to amplify your output rather than worrying about displacement.

So Should You Be Worried About GPT-5 Scoring Higher Than Humans?

After thinking through this as carefully as I can — here is my honest answer to whether you should be worried about GPT-5 outperforming humans at work tasks.

Worried — no. That word implies a feeling that is not productive and does not lead to useful action. Worried is what you feel when you cannot do anything about something. This is something you can do something about.

Serious — yes. Treating this as real, significant, and relevant to decisions you make about your skills and your career over the next few years is entirely warranted. The benchmark result is not hype. It is not a prediction. It is a documented current measurement. Dismissing it because previous AI predictions were premature is the wrong response.

The people who will look back on this period most positively are the ones who took the signal seriously without being paralysed by it — who used it as motivation to audit their own skills honestly, build AI literacy deliberately, and invest in the capabilities that are genuinely hard to replicate. That combination — human judgment, domain depth, AI fluency, and genuine accountability — is not something any benchmark currently measures. And it is not something any current AI system has.

That combination is yours to build. The window to build it on your own terms — before the pressure is immediate — is still open. Not indefinitely. But right now.

What does your honest work audit look like — how much of your actual daily work is structured and process-driven versus judgment-heavy and relationship-driven? I am genuinely asking because I think the answers vary enormously by person and by role and I want to understand what people are actually seeing in their own situations. Drop it in the comments. 😊
