Cases of fraud involving AI technology have become increasingly common. Not only in China but also in the United States, reported AI scams have risen by more than 50% compared to the same period last year. These incidents have raised concerns about the misuse of AI and the risks it poses.
Recently, Jennifer, a parent in the United States, received a strange phone call…
“Listen, your daughter is in my hands. If you call the police or tell anyone else, I will drug her and take her to Mexico, and you will never see her again.” A deep male voice threatened over the phone, demanding that Jennifer pay a $1 million ransom.
Jennifer was stunned and said she could not possibly raise $1 million, so the man “lowered” the price to $50,000. A friend of Jennifer’s called the police and tried to convince her it was a scam. But as a mother who loves her daughter dearly, Jennifer simply could not ignore what sounded like her daughter’s real, heartbreaking cries. She began discussing with the caller how to send the money, but fortunately her daughter phoned in time to say she was safe, and no money was lost.
“A mother can recognize her child’s voice even from across a building. When she cried, I felt it was my child,” Jennifer recalled, still shaken by how exactly the voice matched her daughter’s.
Advances in AI technology have made voice synthesis remarkably easy. In just three seconds, AI can generate extremely realistic sounds, such as a person crying. Fraudsters use this technology to carry out extortion, with AI voice tools costing as little as $5 per month. This hyper-realistic synthesis makes it difficult to tell real voices from fake ones, increasing the scammers’ chances of success.
Fraudsters target not only vulnerable elderly people but also business executives, using AI to synthesize a CEO’s voice and defraud companies of large sums. The harm of AI fraud can reach anyone, and everyone needs to be vigilant and take precautions.
In the face of AI scams, we need to stay alert and learn to tell the real from the fake. When you receive a suspicious call or message, confirm the other party’s identity, for example by calling back on a known number or starting a video call. As AI synthesis technology matures, stronger regulatory mechanisms and legal frameworks are needed to prevent its abuse. At the same time, strengthening public education and awareness, so that people can better recognize AI-synthesized voices, is an important part of preventing AI fraud.