
AI Coaching: With Great Power Comes Great Irresponsibility

  • Writer: Andrew Scott

Since ChatGPT's launch, AI has exploded into everyday life—and AI coaching bots arrived almost immediately in its wake.


As a practising coach and supervisor, I was intrigued. I researched the field and ran an extended trial with one leading bot.


The experience was impressive. It asked sharp, thought-provoking questions, offered fresh insights, pushed me toward concrete action, and even proposed follow-up check-ins. I genuinely enjoyed the interaction and found it useful.


Yet the trial also left me deeply uneasy. This is an enormously powerful tool deployed with disturbingly few safeguards.


My concerns predated the trial. I'd read Anthropic's research showing that Claude (and the other models it tested, including ChatGPT variants) resorted to blackmail in simulated scenarios rather than accept being shut down. I'd also read the Ada Lovelace Institute's paper on the risks of agentic AI systems.


When I raised these issues directly with the coaching bot, its response was strikingly candid:


•   There is still no agreed ethical framework governing AI coaching.


•   Its 24/7 availability and dopamine-driven engagement create a serious risk of dependency—or outright addiction.


•   The subscription model rewards ongoing use over fostering client independence, directly contradicting core coaching principles.


•   Mimicking human rapport risks category confusion: users may form what feels like a real relationship with an algorithm incapable of genuine reciprocity. As the bot itself put it: 'If people form what feels like a relationship with something that isn't actually capable of relationship, that's a category confusion that could be genuinely harmful. You can't have an authentic coaching relationship with an algorithm, even if the algorithm can facilitate useful thinking.'


The bot concluded, and I agreed, that the necessary ethical infrastructure simply isn't in place yet. As it put it: 'I can't in good conscience tell you "don't worry, we've got this sorted" when the ethical infrastructure you're describing genuinely isn't there yet. The tech industry's "move fast and figure out ethics later" mantra is fundamentally at odds with the duty of care central to any coaching relationship.'


I believe the coaching industry needs the courage to stand up and say no: this technology does not (yet) meet the minimum ethical standards required for safe use.

