Publications

JIFA: Deepfakes and Insurance Fraud: Seeing Is Not Believing

Posted by Kendra Smith

BY Steve Adams, Product Marketing Manager, Skopenow | November 23, 2021

Deepfakes are an emerging technology in which the likeness of one person replaces another in existing media. They are synthetic videos, photos, or audio recordings, simulated to closely resemble genuine media but convincingly portraying the subject as another person. Deepfakes, a portmanteau of ‘deep learning’ and ‘fake’, are generated by artificial intelligence using deep learning algorithms to create media depicting events that never occurred as portrayed.

The rise of deepfakes is likely to have severe implications for insurance fraud. Fraudsters can use deepfakes to support a range of insurance scams, including identity theft, imposter scams, payment fraud, extortion, and stock manipulation. The majority of deepfake videos created to date carry telltale signs that the media has been manipulated; however, the technology is advancing at pace, and experts predict that deepfakes will soon be indistinguishable from real images.1

Experts have suggested that few organizations have taken action to prepare for the risk of deepfakes because they are unaware of how quickly the technology is advancing.2 With the use of deepfakes in insurance fraud increasing, insurers need to prepare now, both to detect deepfakes and to handle the media fallout of failing to detect them.

Even with the current state of deepfake technology, where flaws exist in the majority of produced media, consumers are already vulnerable to manipulated media being used to defraud them. Deepfakes have successfully been used to defraud businesses, and the average consumer, with limited digital literacy, may be unable to spot the warning signs. When experts take the time to adjust deepfake videos to remove flaws, detection by consumers becomes unlikely. Criminal groups that can invest time and money in deepfake expertise could therefore produce almost flawless media to support high-value frauds.

Deepfakes enable the creation of media that portrays individuals making statements they never made and taking actions they never took. Pornography is the most frequent reason for deepfake creation,3 but deepfakes can also facilitate crime.

A deepfake usually involves grafting one person’s face onto another person’s in a video, imposing the new face’s appearance and expressions. Recordings of a person can also be altered, including mouth movements and voice, so that they appear to make statements they never made. Altering the statements a person makes can support a range of insurance scams, including doctoring claims video statements to portray false events, faking phone calls with colleagues to support financial theft or system access, and committing identity theft to support telemedicine and synthetic identity fraud.

Image and video deepfakes are created by running thousands of images of the faces of two people through an AI encoder. The AI encoder identifies the similarities between the two faces and compresses them to their shared features. A second algorithm, an AI decoder, is then used to recover the faces from the compressed images. The decoder reconstructs a face in an image or video with the features, movements, and expressions of the new face.
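
The sketch below illustrates the shared-encoder, per-identity-decoder idea described above, written in PyTorch. The layer sizes, 64x64 resolution, and placeholder input are illustrative assumptions, and the training loop is omitted; this is a conceptual sketch, not a working face-swap pipeline.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea described
# above, using PyTorch. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent vector."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific identity's face from the shared latent."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One encoder shared across both identities; one decoder per identity.
# Training reconstructs each person from their own photos; the "swap" happens
# at inference, when a frame of person A is decoded with person B's decoder,
# rendering B's features onto A's pose and expression.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

frame_of_a = torch.rand(1, 3, 64, 64)     # placeholder for a real face crop
swapped = decoder_b(encoder(frame_of_a))  # B's face with A's expression/pose
print(swapped.shape)                      # torch.Size([1, 3, 64, 64])
```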

Amateurs can create deepfakes

The technical process for creating a deepfake may read as expensive or complicated, placing it out of reach of a layperson. Deepfakes, however, are frequently produced by skilled amateurs; within the Reddit community, amateur deepfakes have been shared on the subreddit r/deepfakes. At this time, the average person is unlikely to have the knowledge or technology to create a deepfake good enough to enable insurance fraud. However, as the creation of high-quality deepfakes becomes cheaper and less complex, their use to support insurance fraud is likely to become more widespread.

With the technology readily available online, the primary requirement for creating a deepfake is a sufficient number of images of the intended subject. Deepfakes require thousands of photos of a person for the AI encoder to build a model.

With the rise of social media, many individuals may already be posting enough photos to satisfy this requirement. Another risk is video: footage is generally recorded at 30 frames per second, so a single minute of video of a person contains around 1,800 individual frames that can be used to create a deepfake.
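
A quick illustration of that frame-count arithmetic, using OpenCV; "interview.mp4" is a hypothetical file name used only for this example.

```python
# Count the frames in a clip to see how much training material it contains.
import cv2

cap = cv2.VideoCapture("interview.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)              # commonly ~30 frames per second
frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
cap.release()

if fps:
    print(f"{frames:.0f} frames at {fps:.0f} fps = {frames / fps:.0f} seconds of video")
# At 30 fps, one minute of footage alone yields 30 * 60 = 1,800 face images.
```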

The ongoing development of deepfake technology has produced highly convincing media that makes it difficult to discern real from fake content. Criminals can exploit deepfakes of this quality for social engineering attacks and fraud scams. The annual cost of deepfake-enabled crime was estimated to exceed $250 million in 20204 and is likely to grow as the technology develops.

The FBI’s Internet Crime Complaint Center issued a Private Industry Notice in March 2021. It warns businesses about the dangers of deepfakes that may be used for Business Identity Compromise — where deepfake tools are employed to create “synthetic corporate personas” or imitate existing employees. This could result in “very significant financial and reputational impacts to victim businesses and organizations.”5

Hao Li, who helped bring Paul Walker’s character back to life in Fast and Furious 7 using deepfake technology, explained that the technology is advancing rapidly and is also open source. This makes it easily accessible with continued improvements. Li suggests this technology can create media that is 90% imperceptible to the naked eye, while the remaining 10% can be masked with algorithms or “noise” to hide inconsistencies.6

The U.S. Defense Advanced Research Projects Agency has recognized the potential threat posed by deepfakes and has launched two programs to detect them. The first is Media Forensics, which involves developing algorithms that assess media integrity and provide insights into how fake content was produced. The second is Semantic Forensics, which looks for and catalogs semantic inconsistencies, including irregular facial features, backgrounds, and jewelry.

10 scenarios detail deepfake risks

Remote forms of interaction work in deepfakes’ favor, enabling potential criminals while making it harder to track down perpetrators. In its working paper, “Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios,” the Carnegie Endowment for International Peace think tank put forward 10 scenarios detailing the risks of deepfakes to the financial sector. These scenarios involve broadcast media aimed at mass consumption and “narrowcast” media made for small, targeted audiences and sent directly through private channels.7 Of the 10 scenarios, the six below present the greatest potential financial harm to the insurance market, both through an increase in claims from the damage caused to victims and through erroneous payouts where criminals use deepfakes to make false claims:

  1. Identity theft. Videos used to facilitate access to a victim’s personal information, bank accounts, or corporate information.
  2. Imposter scam. Deepfake videos or “voice skins”/“voice clones” used to impersonate public officials to make false or malicious statements.
  3. Payment fraud. Media used to impersonate managers or family members to request a money transfer to a compromised bank account.
  4. Cyber extortion. Videos depicting activity that never occurred, including crime or sexual activity, created to blackmail victims.
  5. Stock manipulation. Media depicting fabricated events to damage a company’s reputation, or endorse another business or product to influence investor behavior.
  6. Fabricated government activity. Media depicting government officials suggesting imminent government action, such as economic policy changes.

There have already been several public examples of deepfake technology enabling crime within these six core scenarios. Scammers deployed deepfake technology against an energy firm in 2019: a fraudster used a deepfaked voice to convince an executive that he was receiving instructions from his boss at a parent organization, persuading him to part with $243K as a payment to a supplier.8 Similarly, in 2020, an audio deepfake voicemail was left for a tech firm employee using a voice that sounded like the company’s CEO. The message asked for “immediate assistance to finalize an urgent business deal.” The employee flagged his suspicions, and the payment was not made.9

With deepfake technology still in its infancy and the quality of deepfakes still developing, deepfake crimes are rare. There is minimal immediate threat to the stability of the insurance industry; however, the technology will continue to develop at pace, and insurance carriers and fraud investigators should be preparing now for large-scale impact. Even today, bad actors could use deepfakes in scams targeting individuals and companies under the six core scenarios highlighted above.

In the case of stock manipulation, deepfakes could support disinformation campaigns that attack an executive or brand to undermine public confidence and the company’s stock value. Such scams enable fraudsters to short a particular stock for a profit. Stock manipulation that affects the value of a company is also likely to result in insurance claims as the business attempts to recoup the lost value and continue to operate.

Crime insurance may cover thefts

With payment fraud deepfake scams, fraudsters target employees with a call requesting immediate payment for services rendered. Crime insurance policies could potentially cover the theft of funds where deepfakes are used in the impersonation of a company executive. Cyber insurance or crime insurance might provide coverage for damage that occurred as a result of deepfakes. However, this depends on how those policies are triggered. Some companies have expanded their terms to include coverage for financial loss resulting from reputational harm following a cyber incident or privacy breach. Yet most policies require network penetration or a cyberattack to justify a claim payment, which a deepfake scam is unlikely to require. 

Identity theft and imposter scam deepfakes can support social engineering scams, depicting a call from a known and trusted source encouraging activity that would compromise a company’s network. Such scams are usually followed by a data breach or ransomware attack once the fraudsters have accessed the company’s technical infrastructure. Because of the data breach or ransomware attack, cyber insurance policies would likely cover crimes of this nature, leaving insurance companies liable to pay out.

Identity theft scams can also use deepfakes to alter claims video statements, portraying witnesses or victims making false statements that work in the fraudsters’ favor. Much like the famous deepfake video that showed President Obama using an expletive to describe President Trump, deepfaked crash videos can portray the likeness of innocent parties. Deepfaked vehicle crash videos could support a claim that an innocent party was involved or remove images of responsible parties. The BBC’s The Capture showed the possibility of altered video evidence portraying innocent parties conducting criminal activity. Whilst The Capture depicted deepfakes being used to portray violence, the same technology could be used to portray innocent parties committing crimes with implications for insurance fraud investigations, such as embezzlement or vehicular collisions.

Enables ghost fraud for life insurance

Deepfake scams enabled by identity theft can also support ghost fraud for life insurance. Ghost fraud occurs when a fraudster steals the data of a deceased person and impersonates them for financial gain. Using deepfakes, the criminal can create videos and audio that depict the dead victim as if they were still alive, supporting applications for accounts and claims on life insurance policy payouts for years.

Application fraud in policy purchase is another example of potential identity theft through deepfakes. This involves using stolen or fake identities to open new bank accounts. An example of New Account Fraud was reported in 2017, when a criminal opened a fraudulent account with a provider even though the victim already held a legitimate bank account.10 Combining deepfake technology with personally identifiable information purchased via the dark web enables cybercriminals to provide the required evidence to open new fraudulent bank accounts. A bank account created through identity theft can result in significant financial debt in the victim’s name. Victims initially appear responsible for activities conducted by offenders, such as maxing out credit cards or taking out loans. Banks and other financial institutions are unable to get their money back, resulting in insurance claims to cover the loss. 

Identity theft and imposter scam deepfakes could also be used to support telemedicine fraud. Telemedicine is the distribution of health-related services and information, mainly via phone or video. Yanan Sui, an assistant professor at Tsinghua University in Beijing, has used deepfakes to anonymize patients while preserving the facial movements that facial blurring would remove and that can be important to diagnosis.11 Deepfakes could also be used within telemedicine to negative effect: criminals could use deepfakes and the stolen identities of doctors and patients to bill for services never provided.

Synthetic identity fraud is another example of identity theft using deepfakes. It is a sophisticated online fraud in which the details of multiple victims are combined to create a fictional “person” for whom a policy can be created. Falsified deepfake videos and ID documents could portray a deceased person to bolster a synthetic identity and provide the evidence required to set up a new policy. Experian has highlighted synthetic identity fraud as the fastest-growing type of financial cybercrime, and deepfake-enabled identity fraud presents a growing risk to the insurance market.

Deepfake-enabled cyber extortion scams can be used to blackmail employees, damaging the reputation and finances of a business and leading to large-scale insurance claims. Deepfake makers can modify ordinary photos into realistic nude simulations that can be used for blackmail to facilitate large-scale cyber extortion for profit. Even though the images themselves are fake, employees may fear that their release could cause irreversible harm. Blackmailing employees could give fraudsters access to a secure network, causing serious loss of data and infrastructure.

Fraud fighters should prepare now

Deepfakes can clearly provide a variety of opportunities for criminals to commit insurance fraud. Fraud fighters should be ready for the threat of deepfakes and prepared with detection methods. 

Biometrics. Facebook, Google, Amazon Web Services, and Microsoft announced the Deepfake Detection Challenge in 2020.12 Meanwhile, financial institutions and insurance carriers have embraced biometric authentication systems, analyzing voice and video records alongside fingerprints to confirm customers’ identities when they open and access accounts, make transactions, and claim on policies.

Liveness detection. Current online identity verification methods rely on a government-issued photo ID and a corroborating selfie. However, with criminals now using deepfake images to manipulate ID documents and selfies, liveness detection is vital to ensure that the person is physically present. Current liveness checks may ask the user to blink, nod, or say a specific phrase, although these are activities that a deepfake video could generate.

Take selfies. For new accounts, liveness detection alone may prove insufficient to detect deepfakes. One solution is to require returning users to take a fresh selfie to re-establish their authenticity: because a deepfake is unlikely to exactly match the real user, the new selfie can be compared against the person who originally set up the account rather than relying on a potentially spoofed video.
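
A minimal sketch of that re-verification step, comparing a returning user's new selfie against the enrollment selfie with the open-source face_recognition library. The file names and the 0.6 distance threshold (the library's documented default) are illustrative assumptions.

```python
# Compare a returning user's selfie against the selfie captured at enrollment.
import face_recognition

enrolled = face_recognition.load_image_file("enrollment_selfie.jpg")
returning = face_recognition.load_image_file("returning_selfie.jpg")

enrolled_enc = face_recognition.face_encodings(enrolled)
returning_enc = face_recognition.face_encodings(returning)

if enrolled_enc and returning_enc:
    # Euclidean distance between 128-dimensional face embeddings.
    distance = face_recognition.face_distance([enrolled_enc[0]], returning_enc[0])[0]
    print("Likely the same person" if distance < 0.6 else "Flag for manual review")
else:
    print("No face found in one of the images; flag for manual review")
```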

The insurance industry relies heavily on customers submitting media such as photos and videos during claims, opening the door to deepfakes in fraudulent claims. Only 39% of insurers are taking, or planning to take, steps to counter the risk of deepfakes, meaning that for most carriers detection rests solely on the efforts of human investigators.13

Voice software. Audio and video deepfakes may appear legitimate to the human eye and ear, yet at a digital level they may contain indicators that they are fraudulent. Voice biometric software can identify differences that the human ear cannot detect. Several signals can indicate an audio or video deepfake. These signals, outlined below, can be detected manually and can also be identified through AI; a simple example of automating one of them follows the list.

  • Unnatural blinking
  • Lip movements are out of sync with speech
  • Poor audio or video quality
  • Unnatural speech cadence
  • Robotic tone
  • Unnatural movement
  • Unrealistic positioning of facial features, body, or posture
  • Unrealistic hair and teeth
  • Image blurring or misalignment
  • Inconsistent noise or audio
  • Images that look unnatural when slowed down
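
As a minimal sketch of automating one of these signals (unnatural or absent blinking), the example below uses OpenCV's bundled Haar cascades. The video path, detector settings, and the "normal" blink range are illustrative assumptions; production detectors are considerably more sophisticated.

```python
# Estimate blink rate in a claims video: a rate near zero, or far above
# normal, is a cue for closer manual review rather than proof of a deepfake.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def blinks_per_minute(video_path: str) -> float:
    """Approximate blink rate: count open-to-closed transitions, where
    'closed' means a face is visible but no open eyes are detected."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    face_frames, blinks, prev_closed = 0, 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces[:1]:                    # first face only
            face_frames += 1
            eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
            closed = len(eyes) == 0
            if closed and not prev_closed:
                blinks += 1
            prev_closed = closed
    cap.release()
    minutes = face_frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

# Humans typically blink roughly 15-20 times per minute.
rate = blinks_per_minute("claim_statement.mp4")           # hypothetical file
print(f"Approximate blinks per minute: {rate:.1f}")
```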

Conduct unique activities. High-quality deepfakes take time to create, so asking customers to conduct a unique activity at the time of a claim or purchase can help to ensure their legitimacy. Asking customers to read out a newly generated unique alphanumeric sequence during a transaction could support liveness detection and biometric voice checks to evidence that a customer is who they claim to be. 
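
A minimal sketch of generating such a one-time challenge for a claims call or video upload, using only the Python standard library; the code length and alphabet are illustrative assumptions.

```python
# Generate a fresh alphanumeric challenge the customer must read aloud on
# camera, proving the recording was made now rather than fabricated earlier.
import secrets
import string

def new_challenge(length: int = 8) -> str:
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

challenge = new_challenge()
print(f"Please read this code aloud during your video statement: {challenge}")
# The code is stored with the claim; investigators verify the spoken sequence
# matches, which a pre-fabricated deepfake cannot anticipate.
```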

Multi-factor authentication. Another method for restricting the use of deepfakes is multi-factor authentication. Requiring a PIN code alongside video or audio limits the exploitation of deepfakes in insurance fraud by confirming that customers also have access to their own electronic devices. Corporate training can also ensure that employees are aware of the risk of deepfake phishing calls and application videos and that multi-factor authentication methods are used to authenticate customers.
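
One common second factor that can accompany a video or voice channel is a time-based one-time password (RFC 6238). The sketch below uses only the Python standard library; the base32 secret shown is a placeholder, not a real credential.

```python
# Minimal RFC 6238 TOTP: the code changes every 30 seconds and is verified
# server-side, so a caller must hold the enrolled device as well as pass the
# voice or video check.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; prints e.g. "492039"
```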

DeepFake-o-meter. Using open source intelligence (OSINT) tools, deepfaked images, video, and audio can be identified, verified, and debunked. The University at Buffalo’s DeepFake-o-meter is a free tool that analyzes uploaded videos to determine if they are deepfakes, using up to 11 different detection methods. Images generated by artificial intelligence, including generative adversarial network (GAN) images, often fail to accurately or consistently depict the reflections in the eyes of the subjects, possibly because of the many photos combined to generate the deepfake.

The DeepFake-o-meter exploits this shortcoming by spotting tiny deviations in the light reflected in the eyes of deepfakes, with a reported effectiveness of 94% on portrait-like photos. The Deepware Scanner is another deepfake detection tool designed to let users analyze a suspicious video for synthetic manipulation. In addition to video analysis tools, platforms like Skopenow can automatically investigate identified deepfake fraudsters, enabling law enforcement and internal security teams to aggregate and analyze all publicly available information on a subject to create actionable intelligence.
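
A rough sketch of the corneal-reflection consistency idea: in a genuine photo, both eyes reflect the same light sources, so their bright specular highlights should broadly agree. The eye-crop inputs, threshold, and scoring below are illustrative assumptions, not the DeepFake-o-meter's actual method.

```python
# Compare the near-white specular highlights in two eye crops (BGR images).
import cv2
import numpy as np

def highlight_consistency(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """Return intersection-over-union of the thresholded highlight masks;
    values near 1.0 mean the reflections are consistent between the eyes."""
    def highlight_mask(eye: np.ndarray) -> np.ndarray:
        gray = cv2.cvtColor(cv2.resize(eye, (32, 32)), cv2.COLOR_BGR2GRAY)
        return gray > 200                    # keep only near-white highlights
    a, b = highlight_mask(left_eye), highlight_mask(right_eye)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

# Usage: pass eye regions located by any face-landmark detector.
left = cv2.imread("left_eye.png")            # hypothetical crops
right = cv2.imread("right_eye.png")
if left is not None and right is not None:
    score = highlight_consistency(left, right)
    print(f"Highlight consistency: {score:.2f} (low scores warrant review)")
```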

To conclude, deepfakes are an emerging technology that insurers should watch closely and begin preparing for. Deepfakes present several scenarios in which financial harm could impact the insurance market, including identity theft, imposter scams, payment fraud, cyber extortion, stock manipulation, and fabricated government activity.

Deepfake technology is currently imperfect, meaning deepfakes are usually detectable. But the technology is still developing, and the quality of created media will only increase, so fraud fighters should be ready for the growing threat of deepfakes and prepared with effective detection methods.

About the author: Steve Adams is the Product Marketing Manager at Skopenow. Steve is a criminal intelligence and internet investigations specialist who previously worked in UK law enforcement. Steve communicates the features and benefits of Skopenow’s products, and demonstrates techniques for internet investigations (OSINT) through free monthly webinars and written communications.


References
  1. Deepfakes Are Going To Wreak Havoc On Society. We Are Not Prepared, Rob Toews, Forbes, 2020
  2. Fintechs fear deepfake fraud, Alex Scroxton, Computer Weekly, 2020
  3. The State of Deepfakes: Landscape, Threats, and Impact, Henry Ajder, Giorgio Patrini, Francesco Cavalli, and Laurence Cullen, Deeptrace, September 2019
  4. The Threat of Deepfakes, iProov, 2020
  5. Private Industry Notification, Federal Bureau of Investigation Cyber Division, March 2021
  6. Deepfakes: Ghosts in machines and their effect on the financial world, Arachnys, 2021
  7. Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios, Jon Bateman, Carnegie Endowment for International Peace, 2020
  8. Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case, Wall Street Journal, 2019
  9. Listen to This Deepfake Audio Impersonating a CEO in Brazen Fraud Attempt, Vice, 2020
  10. New banking scam sees fraudsters open ‘twin’ account next to your real one, Amelia Murray, The Telegraph, 2017
  11. Telemedicine takes center stage in the era of COVID-19, Allison Marin, Science, November 2020
  12. Deepfake Detection Challenge Results: An open initiative to advance AI, Facebook, 2020
  13. Deepfake: A Real Hazard, Jeff Dunsavage, Insurance Information Institute (III), 2021