AI synthesis technology can closely mimic a person's voice; police urge the elderly to stay vigilant.

AI Used to Mimic Grandson's Voice in Scam, Elderly Couple Loses RM11,700 in Savings

Published at Jul 02, 2025 12:12 pm
Rapid advances in AI technology have brought convenience to everyday life, but the same tools have become a new weapon for scam syndicates. In several recent fraud cases in Huangshi, Hubei, China, criminals used AI voice-cloning technology to impersonate a grandson and defraud at least two elderly victims.

Police investigations reveal that scammers collect voice samples through repeated phone calls, then use them to clone the target's voice, making the fake nearly impossible to distinguish from the real thing.

According to a report by China's CCTV, one victim, Ms. Liu, recalled that on April 28 this year she received a call from someone claiming to be her grandson. The caller said he had gotten into a conflict at a supermarket and injured someone's head, that he had been detained by the police, and that he urgently needed 20,000 RMB (about 11,700 MYR) to settle the matter. Because the voice sounded just like her grandson's, Ms. Liu did not hesitate: she and her husband scrambled to raise the money and, following the "grandson's" instructions, handed the cash to someone claiming to be a classmate's parent at a designated location. She only realized she had been duped when her grandson returned home.

Police further found that the scammers mainly used landlines for their calls, exploiting elderly victims' unfamiliarity with communications technology and their trust in family, combined with the synthesized voice, to mislead them. The scammers also recruited unwitting "cash mules" through high-paying part-time job offers posted online, forming a complete criminal chain.

Beyond voice scams, criminals have also used AI "face-swapping" technology to activate phone cards illegally. In Xiangyang, Hubei, police busted a fraud gang whose members used illegally obtained personal information and photographs to synthesize dynamic facial videos, bypassing telecom operators' facial recognition systems and activating more than 200 SIM cards. The cards then ended up in the hands of scam gangs, with the case involving more than 5 million yuan (about 2.93 million MYR).

Experts point out that current online authentication mechanisms relying solely on “ID card + facial recognition” are insufficient to guard against advanced AI attacks. Service providers are urged to reinforce technical defenses, such as using AI to counter AI forgery and introducing new verification technologies like heat maps and deep learning algorithms, which can raise recognition accuracy to over 95%.

In addition, the "Administrative Measures for Security of Facial Recognition Technology Applications," in effect since June this year, require companies to adhere to the principles of lawfulness, propriety, and necessity when using facial recognition, and to adopt security measures such as separate storage and data encryption to reduce the risk of data leakage.

Legal experts point out that under the “Anti-Telecom and Online Fraud Law of the People’s Republic of China,” telecom operators that fail to properly implement real-name registration and data security protection may face fines of up to 5 million yuan, or even be forced to suspend business operations. In the face of AI scam risks, “countering technology with technology” and strict regulation will be key to curbing fraud.

Author

联合日报 newsroom
