Scammers duped a multinational company out of about $26 million by posing as top executives using deepfake technology, Hong Kong police said on Sunday, in one of the first such cases in the city. Law enforcement agencies are struggling to keep up with generative artificial intelligence, which experts say carries serious potential for misinformation and misuse, such as fabricated videos showing people saying things they never said. An employee of a company in the Chinese financial center received “video conference calls from someone posing as senior managers of the company and requesting to transfer money to designated bank accounts,” police told AFP.
Police received a report of the incident on January 29, by which time some HK$200 million ($26 million) had already been lost through 15 transfers. “Investigations are ongoing and no arrests have been made so far,” police said, without revealing the name of the company. The victim worked in the finance department and the scammers posed as the chief financial officer of the UK-based company, according to Hong Kong media reports.
Acting Superintendent Baron Chan said several people appeared in the video conference, but everyone except the victim was impersonated. “The scammers found publicly available video and audio of the impersonation targets through YouTube, then used deepfake technology to emulate their voices… to entice the victim to follow their instructions,” Chan told reporters. The deepfake videos were pre-recorded and did not involve dialogue or interaction with the victim, he added.
The case highlights how deepfakes can be turned to criminal ends. As generative AI grows more sophisticated, law enforcement agencies are finding it harder to detect and prosecute these scams.
Deepfakes have also become an issue beyond just financial fraud. They have been used for creating non-consensual pornography involving both adults and minors. In response to this growing concern, lawmakers across various states in the US have been considering legislation related to deepfakes.
At least 10 US states have enacted laws criminalizing non-consensual deepfake pornography involving minors, while others are considering similar measures this year. Some states have also given victims the right to sue those who create images using their likeness without consent.
Detecting and preventing deepfakes, however, remains a challenge. Efforts are underway to develop detection algorithms and to embed provenance markers in content that indicate whether it was created using AI, but no reliable solution exists yet.
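To illustrate the idea of provenance markers embedded in content, here is a minimal sketch in Python, using only the standard library. It parses the text chunks of a PNG file, where a generator tool could record a tag identifying itself; real provenance schemes such as C2PA cryptographically sign this metadata, whereas plain tags like the one below are trivially strippable. The function names and the `ExampleAIGenerator` tag are hypothetical, chosen for this example.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def _chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk: 4-byte length, 4-byte type, data, CRC over (type + data).
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def text_chunks(png: bytes) -> dict:
    """Return {keyword: value} for every tEXt chunk in a PNG byte stream."""
    assert png.startswith(PNG_SIG), "not a PNG file"
    out, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt payload is keyword, a NUL separator, then the value.
            key, _, val = data.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # skip length + type + data + CRC
        if ctype == b"IEND":
            break
    return out

# Demo: a minimal in-memory PNG carrying a hypothetical provenance tag.
demo = (PNG_SIG
        + _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
        + _chunk(b"tEXt", b"Software\x00ExampleAIGenerator 1.0")
        + _chunk(b"IEND", b""))

print(text_chunks(demo))  # {'Software': 'ExampleAIGenerator 1.0'}
```

The ease with which such a tag can be added, or removed, is precisely why self-declared metadata alone cannot solve the detection problem the article describes.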
In light of these challenges surrounding deepfake technology, policymakers are faced with tough decisions on how best to regulate its use without infringing on free speech protections or stifling innovation in AI development.
As this landscape evolves, individuals with an online presence should remain vigilant against deepfake scams and report suspected incidents to social media platforms or, where warranted, to law enforcement.
According to Hindustan Times Tech News, it is important for individuals and authorities alike to stay informed about developments in deepfake technology and to work together toward solutions that balance privacy protection with freedom of expression.