But in the pre-smartphone era, having suitable recordings to draw upon was far less common.
When Ezekiel could locate only one very short, poor-quality clip, Poole said his “heart sank”.
Nearly cried
The clip from a 1990s home video was just eight seconds long, muffled and with background noise from a television.
Poole turned to technology developed by New York-based AI voice experts ElevenLabs that can not only produce a voice from very little audio but also make it sound more like a real human being.
He used one AI tool to isolate a voice sample from the clip and a second tool — trained on real voices to fill the gaps — to produce the final sound.
The result, to Ezekiel’s delight, was very close to her original, complete with her London accent and the slight lisp that she had once hated.
“I sent samples to her and she wrote an email back to me saying she nearly cried when she heard it,” Poole said.
“She said she played it to a friend who knew her from before she lost her voice and it was like having her own voice back,” he added.
According to the UK’s Motor Neurone Disease Association, eight in 10 sufferers experience voice difficulties after diagnosis, but the timing, pitch and tone of current computer-generated voices “may be quite robotic”.
“The real advance with this new AI technology is the voices are really human and expressive, and they just really bring that humanity back into the voice that previously sounded a bit computerised,” Poole said.
Personalising a voice was a way of preserving someone’s “identity”, he added.
“Particularly if you acquire a condition later in life, and you lost your voice, being able to speak using your original voice is really quite important, rather than using some off-the-shelf voice,” he said.