

The Human Algorithm, Part 1: The Moment I Realised I Was a Fraud

  • Writer: Scott Bales
  • 2 hours ago
  • 5 min read

I've spent the better part of a decade telling organisations how to think about AI.

I've stood on stages in 50 countries and walked C-suite teams through the implications of machine learning, generative AI, and what it means to lead a workforce increasingly augmented by technology. I've written the books. I've built the frameworks. I've advised the boards.


And for years, I used AI the same way most people use a microwave: to quickly reheat something I'd already made.


I was a fraud. Not intentionally. But a fraud nonetheless.


This is the story of how I found out, and what happened when I finally stopped treating AI as a tool and started treating it as infrastructure.

Competence Masquerading as Mastery

When ChatGPT launched publicly, I was an early adopter. When GPT-4 dropped, I was already building workflows. I could hold my own in any conversation about model architecture, prompt engineering, or the difference between a RAG pipeline and fine-tuning. I dropped terms like "agentic" and "multimodal" before they were standard keynote vocabulary.


I genuinely believed I was ahead of the curve.


What I didn't realise was that I was using a Formula One car to drive to the shops. I had access to something extraordinary, and I was using it for tasks that barely tapped its capabilities. Ask it to draft an email. Summarise a document. Suggest a framework. Repeat.


I was extracting 10% of the value and calling it a transformation.


The moment that changed everything came during a training block I had no business attempting.

The HYROX Problem

I'm 45. I train for HYROX, a global fitness race that combines running with functional strength exercises, and I take it seriously. I was also, at the time, gaining weight I couldn't explain. I was training hard, eating reasonably well (I thought), and watching the scale move in the wrong direction. My performance was plateauing. My recovery was poor. I was frustrated.


So I did what any self-respecting technologist would do: I asked ChatGPT to build me a training and nutrition plan.


I gave it everything I had. My race targets. My training schedule. My dietary habits. My age, weight, and training history. I was thorough. I was precise. I was proud of the prompt I wrote.


ChatGPT gave me a plan. A solid plan, actually. Progressive overload built in. Macros calibrated to my training volume. Recovery windows factored in.


I started the plan. And then, a few exchanges in, while I was describing my ongoing fatigue and the weight gain that wouldn't budge, it asked me a question I wasn't expecting.

"When was the last time you did a full blood test, including hormones and cortisol? There may be an underlying issue I'm not aware of."

I stopped.


Not because it was a complicated question. But because it was the right question, and it was one that I, with all my experience and "knowledge" of my own body, had not thought to ask. I had been applying effort to a problem without understanding the system. ChatGPT had done something I hadn't: it had looked at my situation holistically, identified a gap in its own data, and asked for what it needed.

That's not what I expected from a tool. That's what I'd expect from a good doctor. Or a great coach.

What That Question Actually Revealed

The blood test came back with markers that explained everything: hormone levels, cortisol patterns, and recovery indicators that painted a picture my training log never could.


But the larger revelation wasn't about my health.


It was about how I had been relating to AI.


I had been operating on the assumption that AI was there to answer my questions. What I hadn't considered was that AI could, and should, be asking me better ones. I had been thinking of it as a search engine with better syntax. It was capable of being more of a thinking partner.


The difference between those two postures is everything.


When you treat AI as a search engine, you get faster answers to the questions you already thought to ask. When you treat AI as a thinking partner, when you give it context, invite it to push back, and ask it to identify what's missing, you get something qualitatively different. You get challenged. You get surprised. You get smarter.


I had been doing the former and calling it innovation. The irony of a keynote speaker on AI being one of the last people to actually use it well was not lost on me.


What Happened Next

That question kicked off a year-long experiment.


I went back to every domain of my life where I thought I was "using AI" and asked a harder question: Am I using this as a tool, or have I built it into how I actually think and operate?


The answer, almost universally, was the former.


So I rebuilt. Finance. Health. A vintage Porsche restoration project I'd been circling for years. Even my legal understanding of contracts and tenancy agreements, where a postgraduate qualification in Business Law suddenly became a much more powerful asset when paired with the right AI workflow.


This series, The Human Algorithm, documents that year. What I built. What broke. What surprised me. And what I think it means for every leader who is right now in the position I was in: technically literate, vaguely impressed by AI, but not yet transformed by it.


Because that gap, between understanding AI and actually letting it change how you operate, is the most important gap in business today. It's not a technology problem. It's a leadership problem.


And it starts with being honest enough to ask whether you're really using this thing, or just carrying it around like a status symbol.


Your Turn

Before you read Part 2, here's a prompt worth sitting with.

Open your AI tool of choice. Have this conversation:

"I want to identify three areas of my personal or professional life where I might be leaving significant value on the table by not using AI more intentionally. Ask me five questions to help us figure out what those might be."

Then answer honestly. Don't steer it toward the answer you want. Let it ask you the things you haven't thought to ask yourself.

See what happens.

---

The Human Algorithm is a 10-part series documenting Scott Bales' year-long experiment in building AI into every domain of life — not as a tool, but as infrastructure. New posts every two weeks.

Scott Bales is a keynote speaker and AI strategy advisor based in Singapore. He has delivered keynotes in 50+ countries and advises organisations across Asia-Pacific on responsible AI adoption and leadership.
