Deepfake employment interviews are the next evolution of social engineering. The latest trend offers a glimpse into the future arsenal of criminals who use convincing, faked personae against business users to steal data and commit fraud.
This week, the FBI Internet Crime Complaint Center warned of increased activity from fraudsters trying to game the online interview process for remote-work positions. Criminals are using a combination of deepfake videos and stolen personal data to misrepresent themselves for work-from-home jobs in information technology, computer programming, database maintenance, and other software-related functions.
According to the advisory, law enforcement officials have received a number of complaints from businesses.
According to the advisory, the actions and lip movements of the person interviewed on camera do not fully match the audio of the person speaking; actions such as coughing, sneezing, and other audible cues are not aligned with what is presented visually.
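The lip-sync mismatch the advisory describes hints at a simple automated check: compare mouth movement extracted from the video frames against the loudness envelope of the audio track, and flag calls where the two signals don't move together. Below is a minimal, hypothetical Python sketch of that idea; the mouth-openness input (which in practice would come from a facial-landmark detector) and the threshold are assumptions for illustration, not part of any FBI or vendor tooling.

```python
import numpy as np

def audio_envelope(samples: np.ndarray, sr: int, fps: float, n_frames: int) -> np.ndarray:
    """Downsample the audio's RMS loudness to one value per video frame."""
    hop = int(sr / fps)
    return np.array([
        np.sqrt(np.mean(samples[i * hop:(i + 1) * hop] ** 2))
        for i in range(n_frames)
    ])

def sync_score(mouth_open: np.ndarray, env: np.ndarray) -> float:
    """Pearson correlation between mouth aperture and audio loudness.

    Genuine on-camera speech tends to show positive correlation; a deepfake
    with poor lip sync (as the advisory describes) tends toward zero.
    """
    mouth = (mouth_open - mouth_open.mean()) / (mouth_open.std() + 1e-8)
    loud = (env - env.mean()) / (env.std() + 1e-8)
    return float(np.mean(mouth * loud))

if __name__ == "__main__":
    # Stand-in data: in a real system, mouth_open would be the per-frame
    # distance between upper- and lower-lip landmarks, and samples/sr the
    # call's audio track.
    rng = np.random.default_rng(0)
    n_frames, fps, sr = 300, 30.0, 16000
    samples = rng.normal(size=int(sr * n_frames / fps))
    mouth_open = rng.random(n_frames)
    env = audio_envelope(samples, sr, fps, n_frames)
    score = sync_score(mouth_open, env)
    # The 0.2 cutoff is illustrative only; real detectors calibrate on labeled data.
    print(f"sync score: {score:.2f}", "-> SUSPECT" if score < 0.2 else "-> OK")
```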
Criminals are also using stolen personally identifiable information (PII) in conjunction with the fake videos to better impersonate applicants; in some cases, later background checks dug up discrepancies between the individual who interviewed and the identity presented in the application.
Potential Motives of Deepfake Attacks
The advisory didn’t specify the motives for these attacks, but it did note that the positions applied for by these fraudsters involve some level of corporate access to sensitive data or systems.
According to security experts, one of the most obvious goals of deepfaking one’s way through a remote interview is to get a criminal into a position to steal from an organization.
Some of the reported positions include access to customer data, financial data, corporate IT databases, and/or proprietary information.
Gil Dabah, co-founder and CEO of Piiano, says that a fraudster who lands a remote job takes several giant steps toward stealing the organization’s data crown jewels. Once hired, they become an insider threat, which is much harder to detect.
DJ Sampath, co-founder and CEO of Armorblox, said that short-term impersonation might also be a way for applicants with a tainted personal profile to get past security checks: the deepfake profiles are set up to slip through the company’s recruitment policies.
In addition to gaining access to steal information, foreign actors could be trying to deepfake their way into US firms to fund other hacking enterprises.
The FBI security warning is one of many that federal agencies have issued in recent months. According to Stuart Wells, CTO of Jumio, the US Treasury, State Department, and FBI released an official warning stating that companies need to be cautious of North Korean IT workers pretending to be contractors in order to collect revenue for their country. Organizations that pay these workers could face legal consequences.
What This Means for CISOs
Many of the deepfake warnings over the past few years have focused on political or social issues. This use of synthetic media by criminals points to the rising relevance of deepfake detection in business settings.
Dr. Roy-Chowdhury, an electrical and computer engineering professor at the University of California at Riverside, notes that a deepfake video sustained over a live meeting is relatively easy to detect. Smaller companies, however, may not have the technology to detect deepfake videos, which leaves them vulnerable to being fooled: images can be very convincing, and combined with stolen personal data they can be used to commit fraud in the workplace.
One of the most unnerving parts of this attack is the use of stolen information to help with the impersonation.
As the prevalence of compromised credentials on the Dark Net continues to grow, we should expect these malicious threats to grow in scale. CISOs have to go the extra mile to upgrade their security posture when it comes to background checks; a tighter procedure is needed to mitigate the risks of these processes.
Future Deepfake Concerns
Until now, the most public examples of criminal deepfake use in corporate settings have come in support of business email compromise (BEC) attacks. In one case, an attacker used deepfake software to impersonate the voice of a German company’s CEO and convince another executive at the company to send a wire transfer of $243,000 in support of a made-up business emergency. In another, a criminal used fake audio and email to convince an employee of a company in the U.A.E. to transfer $35 million to an account owned by the attackers, tricking them into believing it supported a company acquisition.
Matthew Canham, CEO of Beyond Layer 7 and a faculty member at George Mason University, said that attackers are going to use deepfake technology as a creative tool in their arsenals to help make their social engineering attempts more effective.
Last year at Black Hat, Canham presented research on combating deepfake technology, which he says is going to take social engineering to another level.
The good news is that researchers like Canham and Roy-Chowdhury are making progress in coming up with detection and countermeasures for deepfakes. In May, Roy-Chowdhury’s team developed a framework for detecting manipulated facial expressions in deepfaked videos.
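Roy-Chowdhury’s framework itself isn’t reproduced here, but detectors in this space commonly follow a recognizable shape: score each frame’s facial features for signs of manipulation, then aggregate scores over time so a single noisy frame doesn’t trigger an alarm. The Python sketch below illustrates only that general pattern; the feature extractor, the classifier, and the smoothing window are placeholders, not the team’s actual method.

```python
import numpy as np

def extract_expression_features(frame: np.ndarray) -> np.ndarray:
    """Hypothetical per-frame feature extractor.

    Real systems use facial landmarks or CNN embeddings; this stub just
    flattens a downsampled crop so the pipeline runs end to end. Frames
    are assumed to share one fixed shape.
    """
    return frame[::16, ::16].astype(np.float32).ravel()

def frame_scores(frames: list[np.ndarray], model) -> np.ndarray:
    """Score each frame: probability that its expression was manipulated.

    `model` is any binary classifier exposing predict_proba() (e.g. a
    scikit-learn estimator trained on real vs. manipulated faces) --
    a stand-in for whatever detector a team actually builds.
    """
    feats = np.stack([extract_expression_features(f) for f in frames])
    return model.predict_proba(feats)[:, 1]

def video_verdict(scores: np.ndarray, threshold: float = 0.5) -> bool:
    """Flag the video only if manipulation scores stay high over a window.

    The 15-frame moving average (about half a second at 30 fps) is an
    illustrative choice that suppresses one-off false alarms.
    """
    smoothed = np.convolve(scores, np.ones(15) / 15, mode="valid")
    return bool(smoothed.max() > threshold)
```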
He thinks new detection methods can be adopted quickly by the cybersecurity community, and operationalized within one to two years through collaboration with professional software developers who can take the research to the product phase.