A hot potato: Amazon is developing a capability that will allow its Alexa voice assistant to mimic any human voice after hearing less than a minute of sample audio. Creepiness aside, some are concerned about the potential for abuse.

Rohit Prasad, who leads the Alexa team at Amazon, said the goal of the project is to "make the memories last" after "so many of us have lost someone we love" as a result of the pandemic.

Alexa could be trained to imitate a voice using pre-recorded audio, meaning the person doesn't have to be present - or even alive - to serve as a source. In a video segment shown during a conference this week, a child asked Alexa if grandma could finish reading The Wizard of Oz. Sure enough, Alexa changed voices to mimic the child's grandmother and finished reading the story.

Prasad said during the presentation that Alexa now receives billions of requests per week from hundreds of millions of Alexa-enabled devices across 17 languages in more than 70 countries around the globe.

The potential for abuse seems high. For example, the tool could be used to create convincing deepfakes for misinformation campaigns or political propaganda. Fraudsters could also leverage the capability for financial gain, as in a 2020 case in which scammers used a cloned voice to trick a bank manager into transferring $35 million to fund an acquisition that didn't exist.

What are your thoughts on the matter? Is Amazon taking the concept of voice cloning a bit too far here, or are you intrigued by the idea of having a "conversation" with someone from the grave?

Image credit: Jan Antonin Kolar