Every day, millions of unwanted and fraudulent robocalls, including call spoofing attempts, are made globally to trick individuals into sharing sensitive personal information. As bad actors launching scams and potential disinformation robocall campaigns use more advanced technologies, including AI, every country, organization and population becomes an accessible target.
Research indicates that losses to fraudulent robocalls are expected to rise globally next year. ĢƵ has been at the forefront of combating robocall scams in the US, and with the global rise in fraud, has expanded its efforts to address this growing threat internationally.
ĢƵ recently published a new white paper, A Review of International Robocall Scams, a critical resource for global telcos, enterprises, policymakers and regulators seeking to better understand the threat posed by international scams – including the rise of call spoofing and AI deepfakes. The white paper also details available and emerging solutions that effectively combat unwanted international robocalls.
The Growing Threat of Call Spoofing
Call spoofing remains a significant global concern, with scammers manipulating caller ID information to make it appear that the call is from a trusted source. Call spoofing not only puts consumers at risk but also disrupts enterprises’ ability to engage with customers and exposes them to potential liability for compromised data or stolen funds.
Laws and regulations on robocalls/robotexts and caller ID spoofing vary around the world, creating inconsistencies that bad actors may exploit. For example, in Australia, call spoofing is permitted unless done for illegal activities, whereas the UK will enforce tighter telecom regulations from January 2025 to address spoofed calls, aligning with US rules that penalize deceptive caller ID practices.
AI Deepfakes: The New Frontier in Bad Actor Tactics
AI has amplified the threat of call spoofing, enabling bad actors to create more convincing and deceptive caller IDs, making it harder for victims to distinguish between legitimate and fraudulent calls.
With generative AI, robocall bad actors can create synthetic video or audio clips that can convincingly mimic real people’s voices and appearances. Bad actors have leveraged this technology to enhance their robocall scams, creating AI-generated voices that can hold realistic conversations with potential victims, commonly seen in ‘imposter grandchild’ scams.
The use of AI deepfakes and call spoofing is not limited to scammers seeking financial gain. Bad actors are tapping AI deepfakes for potential disinformation campaigns to influence elections – an alarming development given that more than 50 elections are being held worldwide in 2024. Bad actors also spoof the numbers of legitimate political organizations to persuade voters to donate money to fraudulent campaigns, in what are known as Political Action Committee scams.
The white paper also examines how AI voice generation has been used within the political sphere to reach a wider audience. The 2024 Indian election saw AI deepfakes and generated voices used to engage voters who spoke a different language than the candidates, with AI technologies translating speeches in real time.
ĢƵ: Leading the Fight Against Scams
As bad actors continue to refine their tactics and exploit new technologies to launch call spoofing and AI-driven campaigns, the need for a comprehensive, global solution has never been greater.
ĢƵ remains committed to leading the fight against international robocall scams, providing robust tools and technologies that help protect individuals and organizations from fraud. Our solutions include AI Labs, which develops capabilities such as voice biometrics and predictive call analytics to help carriers combat AI-driven scams; ĢƵ Call Guardian, which blocks fraudulent calls; ĢƵ Enterprise Authentication and Spoof Protection, which prevents brand spoofing; and ĢƵ Enterprise Branded Calling, which displays verified business information on recipients’ screens.
Download the white paper here to learn more about trending robocall scams from around the world, global regulatory efforts and ĢƵ’ solutions.
John Haraburda is Product Lead for ĢƵ Call Guardian® with specific responsibility for ĢƵ’ Communications Market solutions.
Call Guardian is a registered trademark of Transaction Network Services, Inc.