“My Voice Is My Password”: How I Used a Voice Clone to Access My HSBC Account
Jul 29, 2025
Today marked a significant milestone. After two years of disciplined saving, I was finally ready to clear the outstanding balance on my mortgage.
NatWest, my lender, required the payment to be made via its online portal. But because the amount exceeded £20,000, I first had to call HSBC, my own bank, to authorise the transaction.
I took a deep breath and dialled telephone banking. After keying in the usual sort code and account number, I was met with a familiar prompt:
“Please say, ‘My voice is my password.’”
But this time, I didn’t say it.
In a moment of curiosity, perhaps sparked by the significance of the occasion, I hung up, re-dialled, and this time, let an AI voice clone do the talking. I’d logged into software that allowed me to create an artificial replica of my voice. The system let me straight in. No warnings. No hesitation.
The curious part? After voice verification, I was asked a series of standard security questions—presumably designed to detect coercion or fraud—before being permitted to move the funds. But at no point was the voice authentication itself questioned. The system accepted the clone without issue, just as it would have accepted my own voice.
Adding a further twist, I asked the helpful agent on the line whether HSBC’s voice security system was protected against voice-cloning technology. “I’m sorry to say I don’t know the answer to that question,” she replied.
After making the final mortgage payment, I decided to call HSBC back. Again I used the voice clone to navigate security, partly to test whether the original access had been a fluke. It wasn’t.
More importantly, I wanted to report what had happened. I explained to the call handler that I’d accessed my account using an AI-generated voice. He was polite and helpful, and assisted me in switching back to a traditional security number.
But I’m not entirely sure the seriousness of what I described was understood or passed on. The irony was that he was happy to assist, even though I’d just told him the voice that had passed the initial check wasn’t really mine.
This wasn’t a hack. It was an experiment—one that, I believe, exposes a widening gap between how we think biometric security works and what today’s AI makes possible.
Building My Clone
I didn’t set out to test banking security. Like many business owners, I’ve been exploring AI tools to improve productivity, particularly in producing my audiobook.
Recently, my AI-generated voice was approved by Amazon’s Audible platform. When I played a sample to a client, they paused and said, “But it’s you, Justin.”
The voice clone was created using ElevenLabs, one of the leading voice synthesis platforms. The technology analyses just a handful of audio samples, then reproduces speech in your tone, rhythm and accent—with uncanny accuracy.
Recording and editing the audiobook myself had been a slow, labour-intensive process. I wondered whether voice cloning technology was ready. It turned out not only to be capable but eerily convincing. When I played a sample to my editor, Steven, he said, “I know the recording isn’t you, because it’s too good.” Thanks, Steven.
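For readers curious how little plumbing this takes, here is a minimal sketch of the kind of request a cloned-voice synthesis call involves, using ElevenLabs’ public text-to-speech HTTP endpoint. The `voice_id` and model name are placeholders: a real voice ID is only issued after you upload samples to create a clone, and this sketch just builds the request rather than sending it.

```python
import json

# Sketch: constructing a text-to-speech request for a cloned voice via the
# ElevenLabs HTTP API. VOICE_ID is a hypothetical placeholder; a real ID is
# returned when you create a voice clone from uploaded audio samples.
API_BASE = "https://api.elevenlabs.io/v1"
VOICE_ID = "your-cloned-voice-id"  # placeholder, not a real voice ID

def build_tts_request(text: str, voice_id: str = VOICE_ID):
    """Return the endpoint URL and JSON payload for a synthesis call."""
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    payload = {
        "text": text,
        "model_id": "eleven_multilingual_v2",  # assumed model name
    }
    return url, payload

url, payload = build_tts_request("My voice is my password.")
# POSTing `payload` to `url` (with an API key in the xi-api-key header)
# returns audio of that sentence spoken in the cloned voice.
print(url)
print(json.dumps(payload))
```

The point is not the specific vendor: any comparable service reduces “speak this passphrase in someone else’s voice” to a few lines of code and an audio upload.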
What Does “Secure” Really Mean?
Banks promote voice recognition as a convenient, secure alternative to passwords and PINs—a biometric measure as unique as a fingerprint. But if a voiceprint can be cloned using off-the-shelf tools and transmitted over a phone line, how secure is it, really?
In my case, the cloned voice passed HSBC’s biometric gate on the first attempt. The bank’s follow-up questions were focused on fraud prevention—Was I being coerced? Did I recognise the recipient of the funds? All perfectly valid, but entirely unrelated to how I’d accessed the system in the first place.
Crucially, the system didn’t appear to notice that the passphrase wasn’t naturally spoken. There was no liveness detection, no request for variation, no challenge-response protocol.
It makes me wonder: if a voice clone passes security, and the fraudster then switches to their own voice, would anyone on the other end even notice? Does the call operator ever hear how the phrase “My voice is my password” was actually delivered?
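To make the missing safeguard concrete, here is an illustrative sketch (not a description of HSBC’s system) of a challenge-response voice check: instead of one fixed passphrase, the caller must speak a randomly chosen phrase generated for that call, so a pre-generated clip of “My voice is my password” is useless. The word list and functions are invented for illustration.

```python
import secrets

# Illustrative challenge-response sketch: a fresh random phrase per call
# defeats replay of a single pre-recorded or pre-synthesised passphrase.
WORDS = ["river", "amber", "falcon", "quartz", "meadow", "copper"]

def issue_challenge(n: int = 3) -> str:
    """Pick a fresh random phrase the caller must speak on this call."""
    return " ".join(secrets.choice(WORDS) for _ in range(n))

def verify(spoken_text: str, challenge: str, voiceprint_match: bool) -> bool:
    """Accept only if the biometric matched AND the right phrase was spoken."""
    return voiceprint_match and spoken_text.strip().lower() == challenge.lower()

challenge = issue_challenge()
# A replayed static recording fails: it speaks the wrong phrase entirely.
assert not verify("my voice is my password", challenge, voiceprint_match=True)
```

A determined attacker with a real-time clone could still speak the challenge, which is why serious deployments pair this with liveness detection that analyses the audio itself for synthesis artefacts; but even the random phrase alone would have blocked the exact replay I performed.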
The Broader Risk — Especially for Business Owners
It’s easy to dismiss my experiment as a one-off. But here’s the reality: my voice is everywhere — and so is yours, if you run a business in public.
I host a YouTube channel. I’ve published an audiobook (read by me, which, let’s be honest, is a key selling point for most business titles). My voice exists in clean, high-quality samples online — easy for AI to learn from.
I recently saw a friend, also a business author, take it even further. He linked his voice clone to a custom GPT model and connected it to WhatsApp. It could hold a phone conversation in his voice, quoting not just his own writing but content scraped from the entire web. It knew his tone, his style, his worldview — and it sounded like him.
This isn’t some dystopian threat on the horizon. It’s happening now.
And it raises real, urgent questions for business owners, finance teams, and anyone using voice-verified systems in a professional context.
Many directors delegate phone-based tasks to PAs or admin teams. What happens if someone inside your business — or a fraudster outside it — plays back a voice sample to access sensitive accounts? What if a fake “you” calls HMRC, a supplier, or even your bank?
In a world where deepfake scams are already targeting executives, relying on static voice phrases as secure identifiers now looks dangerously naive.
And if fraud does happen using a cloned voice, whose fault is it?
Will the bank claim the voice matched and wash its hands of liability? Will the director be expected to prove it wasn’t them?
Right now, no one seems to have a clear answer.
Authentication Is Broken — It Just Doesn’t Know It Yet
This isn’t really a story about HSBC. I suspect many banks, utility providers, and government departments are just as vulnerable. What I experienced wasn’t a clever hack — it was a real-world test of a flawed assumption.
These systems were designed for a time when spoofing a voice required a recording studio and an expert impersonator. That time is gone.
Voice cloning is no longer niche. It’s accessible, cheap, fast — and now realistic enough to fool a biometric gatekeeper.
As AI tools become more powerful and more user-friendly, this risk doesn’t just increase; it scales. Mass exploitation of voice-based security isn’t a theoretical future. It’s now just a matter of who tries first.
A Personal Reflection
As I put the phone down — mortgage cleared, voice clone still open on my screen — I didn’t feel victorious. I felt uneasy.
We’re heading into a world where your digital likeness — your voice, your face, your writing — can be copied and used without you. And right now, the systems we’re told to trust most are still built for a pre-AI era.
That day, I got in.
But the real question is:
Who else could?