How AI Is Making It Easier for Scammers to Succeed with Social Engineering Scams
Social engineering scams are fraudulent schemes that use psychological manipulation to trick individuals into giving away sensitive information or money. These scams have been around for decades, but with the rise of artificial intelligence (AI), they are becoming more sophisticated and more likely to succeed.
One of the biggest benefits of AI for scammers is the ability to automate and scale their operations. With AI, scammers can generate and send out thousands of phishing emails or robocalls in a matter of minutes, increasing the likelihood that someone will fall for the scam. AI can also help scammers personalize their approach, drawing on social media profiles and other online sources to make each message more convincing.
Bad actors, including some with no development experience, are using tools such as ChatGPT to build malicious tooling. With scripting and automation, they can mass-produce customized communications at scale, using AI that learns in real time what works and what doesn't.
Until recently, foreign phishing campaign operators would typically hire English-speaking students and youths to write their phishing emails, which slowed the workflow and added cost. Now they can use ChatGPT to produce phishing emails of far higher quality than most of what cybercriminals send today. We should expect steep growth in phishing emails free of the tell-tale grammar and punctuation mistakes. And it's not just individual phishing emails that will become indistinguishable from legitimate ones, but entire websites.
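With grammar no longer a reliable tell, technical signals matter more than prose quality. As one hedged illustration (assuming the third-party dnspython package; example.com is a placeholder), here is a minimal sketch of checking whether a sender's domain publishes a DMARC policy, one of the signals mail systems use to decide whether a message really came from the domain it claims:

```python
import dns.resolver  # third-party package: dnspython

def get_dmarc_policy(domain: str) -> str | None:
    """Return the domain's published DMARC record, or None if absent."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record published for this domain
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=DMARC1"):
            return txt
    return None

# example.com is a placeholder; substitute the sender's actual domain.
print(get_dmarc_policy("example.com"))
```

A missing or lax policy does not prove a message is fraudulent, but it does mean the domain is easier to spoof.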
Another way AI is making it easier for scammers to succeed with social engineering scams is through deepfake technology. This technology can be used to create highly realistic videos or audio recordings that impersonate someone else, such as a government official or a representative of a financial institution. For example, a deepfake video of a CEO asking employees to transfer money to a certain account could be used to steal large sums from a company.
Moreover, AI can analyze vast amounts of data and identify patterns useful for targeting specific individuals or groups. This makes it easier for scammers to find vulnerable people and tailor their scams to maximize the chances of success.
To mitigate the risks posed by AI-assisted social engineering scams, individuals and organizations must be aware of the latest threats and take steps to protect themselves. This includes being vigilant and skeptical of unsolicited emails or calls, verifying the identity of anyone who asks for sensitive information or money, and using multi-factor authentication whenever possible.
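To make the multi-factor step concrete, here is a minimal sketch of app-based one-time codes using the pyotp library (an illustrative sketch, not a complete deployment; the secret is generated on the fly here, whereas in practice it is provisioned once per user and stored server-side):

```python
import pyotp

# Enrollment: generate a per-user shared secret (illustrative only;
# real deployments provision this once, e.g. via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user's authenticator app computes the same six-digit code.
code = totp.now()
print("Current code:", code)

# Server-side check; valid_window=1 tolerates slight clock drift.
print("Verified:", totp.verify(code, valid_window=1))
```

Codes like these can still be phished in real time, which is exactly what the OTP bots described below exploit, so phishing-resistant factors such as hardware security keys offer stronger protection.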
In conclusion, AI is making it easier for scammers to succeed with social engineering scams by automating and scaling their operations, personalizing their scams, and leveraging deepfake technology and data analysis to target specific individuals or groups. To protect against these threats, it is important for individuals and organizations to stay informed and take proactive steps to safeguard their information and assets.
THE RISE OF THE BOTS: How fraud awareness can thwart scam-as-a-service attacks
Fraud is no longer the preserve of sophisticated criminals. It's too easy to do. It's far too lucrative. And most of the time there are zero consequences.
Hundreds of thousands of new fraudsters became active during the pandemic.
Becoming a scammer is increasing in popularity, with scam-as-a-service models making it easy to buy off-the-shelf tools that let people launch attacks without any prior knowledge of coding. The fraud-as-a-service industry is growing rapidly as expert fraudsters and scammers turn their attention to selling their methods, services, and fraud-perpetrating tech to others. Fraudster automation will rapidly accelerate, turning newbies into experts almost instantly. These operators may even begin to incorporate AI to make their tools smarter, more targeted, and more human-like. The automated bots could include account-opening bots, loan-application bots, credential-stuffing bots, and hyper-realistic social engineering text bots and chatbots.
Bots are ushering in a new era of fraud automation:
Bots create a new level of social engineering tooling designed to make fraud easier for the hundreds of thousands of new fraudsters entering the scene. In June 2021, OTP bot services began to appear that completely automate the pilfering of one-time password (OTP) codes from victims with zero human-to-human interaction.
OTP bots introduce automation to what used to be a manually intensive social engineering process.
Instead of contacting victims individually by phone or SMS, OTP bots do the work automatically and at scale. That means more account takeover (ATO) attacks and more victims. The returns for fraudsters using OTP bots scale with the volume of prospective victims targeted: the more victims targeted, the greater the gains.
Consequently, OTP bots are driving substantial losses for financial and other institutions.
Several factors are driving this. First, the bot calls are skillfully crafted, creating a sense of urgency and trust over the phone. The calls rely on fear, convincing victims to act to "avoid" fraud in their accounts. Second, victims are accustomed to providing a code for authentication as it has become common practice for companies to request a verification code when speaking with a call center representative.
Here is how an OTP bot works (a defensive hardening sketch follows the list):
- Fraudsters purchase subscriptions to activate OTP bots.
- The fraudster attempts to log in to the victim’s online bank account and, at the same time, prompts the bot by inputting the victim’s phone number and the name of the financial institution the victim is banking with.
- The bot robocalls the victim and attempts to manipulate them into providing the 2FA code and any other information needed.
- In addition to robocalling, some of these services can also automate attacks via email or SMS that target social media accounts like Facebook, Instagram, and Snapchat; financial services like PayPal and Venmo; or investment apps like Robinhood or Coinbase.
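On the defensive side, institutions can shrink the window an OTP bot has to replay a phished code. The following is a minimal, hypothetical server-side sketch (not any particular vendor's API) that enforces single use, a short expiry, and attempt throttling:

```python
import secrets
import time

OTP_TTL_SECONDS = 120  # assumption: codes expire after two minutes
MAX_ATTEMPTS = 3       # assumption: throttle guessing and bot retries

_issued: dict[str, dict] = {}  # user_id -> issued-code record

def issue_otp(user_id: str) -> str:
    """Issue a six-digit code, delivered out-of-band (SMS, app push, etc.)."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _issued[user_id] = {"code": code, "issued_at": time.time(),
                        "attempts": 0, "used": False}
    return code

def verify_otp(user_id: str, submitted: str) -> bool:
    record = _issued.get(user_id)
    if record is None or record["used"]:
        return False  # never accept a code twice
    if time.time() - record["issued_at"] > OTP_TTL_SECONDS:
        return False  # expired: a phished code loses value quickly
    record["attempts"] += 1
    if record["attempts"] > MAX_ATTEMPTS:
        return False  # too many tries; force a fresh code
    if secrets.compare_digest(record["code"], submitted):
        record["used"] = True
        return True
    return False
```

These controls narrow the attack window but do not close it, since OTP bots relay codes in real time; number matching and phishing-resistant factors such as FIDO2 security keys go further.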
Conclusion:
There are a variety of fraud awareness topics that will help educate your account holders. People need to remain vigilant and understand how the technology works. Criminals will attempt to exploit consumers through many channels, including email, SMS, messaging services, and direct phone calls. People are finally understanding that a cybersecurity incident can happen at any time, to anyone, and that it really is everyone's responsibility to prevent it.
There is going to be a huge wave of consumer fraud as more inexperienced fraudsters use out-of-the-box tools that make attacks easy and low-risk. Many topics can help educate consumers, and you should consistently remind your account holders about all of them. Here are a few examples: Email Safety, Recognizing Phishing Scams, Understanding Data Breaches, Imposter Scams, Securing Home Networks, Mobile Phone Safety, Deepfake Fraud, How to Report Fraud & ID Theft, Identity Safety, Social Media Safety, Account Takeover Prevention, and the countless other methods fraudsters use to trick people into divulging their PII.