NIXSOLUTIONS: Fraudsters Are Increasingly Using AI for Phishing and Stealing Money

Research shows that fraudsters are increasingly using artificial intelligence (AI) technologies to commit crimes and bypass security systems. In particular, ChatGPT and similar chatbots let them generate more believable, fluently written text for phishing emails. Such emails are hard to recognize by the traditional signs of fraud, such as spelling mistakes and clumsy wording.

AI-Powered Scams and Deepfakes

According to The Wall Street Journal, scammers are using AI to imitate the voices of real people and create so-called deepfakes – fake audio and video. This helps them impersonate bank employees or company executives and demand money transfers or the disclosure of confidential data. According to Matt Neill, a former US intelligence agent, it is now much harder to tell who you are actually talking to. Criminals can hold a meaningful dialogue, using the victim’s personal information to build trust.


Major US banks, including JPMorgan Chase, are developing their own AI systems to identify new fraud schemes, but criminals are still ahead: they are attacking more and more people, and individual thefts are getting larger. According to the US Federal Trade Commission, Americans lost a record $10 billion to scammers in 2023, $1 billion more than in 2022. Yet only 5% of victims report their losses, so the actual damage could reach $200 billion.

Banks’ Countermeasures and Recommendations

According to experts, scammers are actively using AI tools to collect personal data from social networks. AI then helps them generate personalized messages, written as if from trusted individuals, to persuade people to hand over money or sensitive information. In a survey of bank specialists, 70% said they believe hackers use AI more effectively than financial institutions do, and a further significant increase in AI-assisted scams is predicted.

Fraudsters are also actively using AI tools to automate the hijacking of online accounts. If they obtain a victim’s email address and password for one service, automated programs quickly check whether the same credentials unlock that person’s other accounts, such as bank accounts or social media profiles – a technique known as credential stuffing. A small defensive sketch follows below.
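Reused passwords are what makes this automation effective, so one practical defense is to check whether a password has already appeared in public breach data. The following is a minimal, illustrative sketch – not code from NIXSOLUTIONS or any bank – that queries the publicly documented Pwned Passwords range API; only the first five characters of the password’s SHA-1 hash are ever sent over the network.

```python
# Illustrative sketch: check whether a password appears in known breach corpora
# using the Pwned Passwords k-anonymity range API (standard library only).
import hashlib
import urllib.request

def password_breach_count(password: str) -> int:
    """Return how many times a password appears in known breaches (0 if none)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    # The response lists hash suffixes and counts, one per line: "SUFFIX:COUNT".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # Reused or breached passwords are exactly what credential-stuffing tools exploit.
    if password_breach_count("correct horse battery staple") > 0:
        print("This password has appeared in a breach; do not reuse it.")
    else:
        print("No known breach exposure for this password.")
```

Many password managers and login systems perform a similar check automatically; the point of the sketch is simply that a single leaked, reused password is enough for credential-stuffing tools to try everywhere.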

To counter this threat, banks are implementing their own artificial intelligence systems that monitor customer behavior, such as how data is entered and from which devices customers log in to the app. If behavior deviates from the norm, the system raises an alarm and may require additional authentication. Banks also analyze typing speed to determine whether a person is acting under duress, and the AI flags suspicious signs such as pasted text or typing that is too uniform, adds NIXSOLUTIONS. A simplified sketch of this kind of check is shown below.
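As an illustration of the idea, the hypothetical sketch below flags text that was pasted or typed with unnaturally even timing. The function name and thresholds are assumptions made for demonstration; real banking systems combine many more signals and are proprietary.

```python
# Hypothetical keystroke-dynamics check: flag pasted input or typing whose
# rhythm is too uniform to be human. Thresholds are illustrative assumptions.
from statistics import mean, pstdev

def looks_suspicious(key_times_ms: list[float], pasted_chars: int = 0) -> bool:
    """Flag input that is pasted or typed with unnaturally uniform timing."""
    if pasted_chars > 0:
        return True  # copied text appearing at once is treated as a risk signal
    if len(key_times_ms) < 5:
        return False  # too little data to judge
    intervals = [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]
    avg = mean(intervals)
    spread = pstdev(intervals)
    # Human typing rhythm varies; near-zero variation suggests automation.
    return avg > 0 and (spread / avg) < 0.05

# Example: perfectly even 100 ms keystrokes are flagged, irregular ones are not.
print(looks_suspicious([0, 100, 200, 300, 400, 500]))   # True
print(looks_suspicious([0, 130, 210, 390, 455, 620]))   # False
```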

Experts recommend enabling two-factor authentication to protect online accounts and remaining vigilant when someone suddenly demands urgent action, such as an immediate money transfer. We’ll keep you updated on the latest developments in AI and fraud prevention to help you stay informed and secure. For readers curious what two-factor authentication does under the hood, a brief sketch follows.
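The sketch below shows time-based one-time passwords (TOTP), the mechanism behind most authenticator-app 2FA. It assumes the third-party pyotp library (pip install pyotp) and is simplified for demonstration only.

```python
# Minimal TOTP illustration: server and authenticator app share one secret and
# derive the same short-lived 6-digit code from it and the current time.
import pyotp

# The service stores one shared secret per user (shown to the user once as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app and the server compute the same code independently.
current_code = totp.now()
print("Code from the app:", current_code)

# On login, the server verifies the code the user typed in.
print("Correct code accepted:", totp.verify(current_code))
print("Wrong code accepted:", totp.verify("000000"))  # almost certainly False
```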