July 03 2023

Phishing For Fraud Just Got Easier


ChatGPT has recently attracted widespread news coverage for its ground-breaking AI chatbot abilities. But no sooner had the ink dried on those reports than its capabilities were advanced yet further, bringing with them several warnings about scammers and more sophisticated phishing.

Developed by OpenAI, ChatGPT is now powered by the more capable GPT-4 model (released on 14 March 2023). As these advances in generative AI technology continue unabated, lawyers are increasingly being warned of the growing risks.

A top risk remains phishing, which could very soon be accompanied by cloned voices for good measure.

What is phishing?

Phishing is a modern fraud technique that relies on sending emails purporting to come from genuine organisations. The criminals' intention is to persuade email recipients to reveal personal information, such as passwords and financial details.

Phishing is also used to install ransomware or other forms of malware on the recipient's device.

Unfortunately, phishing emails are becoming more sophisticated and can, at face value, appear completely genuine.
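One tell-tale sign that security teams often look for is a "lookalike" sender domain that differs from the genuine one by only a character or two. The following is a minimal, hypothetical sketch of that idea (the domain names are invented for illustration, and real email-security tooling is far more involved):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_suspicious(sender: str, trusted_domains: list[str]) -> bool:
    """Flag senders whose domain is *almost* (but not exactly) a trusted one."""
    domain = sender.rsplit("@", 1)[-1].lower()
    for trusted in trusted_domains:
        if domain == trusted:
            return False   # exact match: genuine domain
        if edit_distance(domain, trusted) <= 2:
            return True    # near miss: likely a lookalike
    return False           # unrelated domain: not judged by this check

trusted = ["lawfirm-example.co.uk"]
print(looks_suspicious("partner@lawfirm-example.co.uk", trusted))  # False
print(looks_suspicious("partner@lawf1rm-example.co.uk", trusted))  # True
```

The point of the sketch is simply that a human eye skimming an inbox can miss a one-character substitution that a systematic check catches every time, which is why layered technical controls should sit alongside staff training.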

In legal practice, so-called ‘Friday afternoon fraud’ is a form of phishing particularly associated with real estate lawyers. It is aimed at enticing home-buying clients (who often choose to complete their purchases on a Friday in the UK, hence the moniker) to transfer purchase funds into an account which they are led to believe is their law firm’s account.

GPT-4

GPT-4 is being hailed for its superior reasoning and language comprehension capabilities, as well as its voice technology. But just as it offers greater possibilities for genuine businesses, it also gives criminals the opportunity to become more skilful in creating malicious software, and to supplement phishing emails with mimicked human voices.

At least this is being promptly recognised: reports are already emerging of fraudsters employing generative AI to clone voices. In Canada, for example, Benjamin Perkin has spoken out about how AI-generated voice technology was used to defraud his elderly parents out of thousands of dollars.

Separately, the US Federal Trade Commission has also issued a warning about criminals using voice cloning to perpetrate fraud. And there are warnings from cybersecurity experts that GPT-4 could accelerate cybercrime, for instance by making phishing fraud even more dangerous. So how could this impact law firms?

Risk to law firms

One of the fundamental obligations for law firms is to implement appropriate cybersecurity measures to guard against these risks. As the risks change, so too should the cybersecurity measures in response. A key challenge is the increasing difficulty of recognising a fake email – let alone a fake voice.

AI voice technology is now so sophisticated that criminals can identify and capture the actual voice of (for example) a senior lawyer in your firm; simulate it; and follow up a phishing email with a telephone call to the victim. The victim will have no reason to doubt the voice is genuine and will follow the instructions given (undoubtedly involving the handing over of money and/or sensitive information).

‘Friday afternoon’ frauds and similar phishing scams could be about to take a dark turn. While the risk of these types of fraud cannot be removed, knowledge and education are key. Firms need to be consistent in providing robust training for lawyers and support staff to build a collective awareness of new and emerging risks.

In March this year (prior to GPT-4’s release), Thomson Reuters Institute published the results of a survey of 443 mid-sized and large law firms in the US, Canada and the UK, leading to its report, ChatGPT and Generative AI within Law Firms.

The vast majority of respondents (9 out of 10) indicated an awareness of ChatGPT and similar tools, but only 51% said these should actually be used in legal work. The report also revealed a stark divide between partners (80% of whom recognised the risks) and associates and other junior lawyers, only around half of whom saw the risks involved.

The disparity may reflect a lack of knowledge and awareness of the threat posed by generative AI among those at the coal-face of legal practice: this in itself should be considered a risk factor that could be effectively managed by regular training.

Final word

Lawyers need to know that the risks associated with ChatGPT, GPT-4 and their ilk go far beyond issues of cybercrime and fraud. There are, for instance, implications for data protection and privacy. The warnings issued should not be considered as mere scaremongering.

Taking strategic and robust advice from the commercial lawyers at ParrisWhittaker is one of the best steps you can take before considering using the latest generative AI tools. Contact us on: info@parriswhittaker.com or +1.242.352.6112
