Amid unprecedented amounts of e-commerce since 2020, the number of digital payments made every day around the world has exploded, reaching $6.6 trillion in value last year, a 40 percent jump in two years. With all that money flowing across the world's payment rails, cybercriminals have more reason than ever to find ways to get their hands on it.
Ensuring the security of payments today requires advanced game theory skills to outwit highly sophisticated criminal networks that, according to a recent study, are on track to reap as much as $10.5 trillion in cybercrime damages. Payment processors around the world are constantly playing against fraudsters and upping their game to protect customers' money. The target is always moving, and scammers are growing ever more sophisticated. Staying ahead of fraud means companies must keep changing their security models and techniques; there is never an endgame.
The truth remains: there is no surefire way to reduce fraud to zero short of shutting down online business altogether. Nevertheless, the key to reducing fraud lies in maintaining a careful balance: applying intelligent business rules, complementing them with machine learning, defining and refining the data models, and recruiting intellectually curious personnel who consistently question the effectiveness of current security measures.
An era of deepfakes is dawning
As new, powerful computer-based methods evolve and iterate on more advanced tools such as deep learning and neural networks, so does their plethora of applications, both benign and malicious. One practice making its way into mass media headlines is the deepfake, a portmanteau of "deep learning" and "fake". Its implications for potential security breaches and losses in both the banking and payments industries have become a hot topic. Deepfakes, which can be difficult to detect, are now considered the most dangerous AI-enabled crime of the future, according to researchers at University College London.
Deepfakes are artificially manipulated images, videos and audio in which the subject is convincingly replaced by someone else’s likeness, leading to great potential to deceive.
The best of these deepfakes stun viewers with their near-perfect replication of the subject.
Two widely covered examples include a deepfake of Tom Cruise created by Chris Ume (VFX and AI artist) and Miles Fisher (well-known Tom Cruise impersonator), and a deepfake of a young Luke Skywalker created by Shamook (deepfake artist and YouTuber) and Graham Hamilton (actor) for a recent episode of "The Book of Boba Fett".
While these examples mimic the intended subject with alarming accuracy, it’s important to note that with today’s technology, an experienced impersonator, trained in the subject’s inflections and mannerisms, is still needed to create a compelling fake.
Without a similar bone structure and the subject’s signature movements and expressions, even today’s most advanced AI would have a hard time making the deepfake perform credibly.
For example, in the case of Luke Skywalker, the AI used to recreate Luke's voice as it sounded in the 1980s was trained on hours of recordings of original actor Mark Hamill's voice from the era when the films were made, and fans still found the resulting speech to be a "Siri-esque", hollow recreation.
On the other hand, without prior knowledge of these important nuances of the person being replicated, most people would find it difficult to distinguish these deepfakes from a real person.
Fortunately, machine learning and modern AI work on both sides of this game and are powerful tools in the fight against fraud.
Where are the security gaps in payment processing today?
While deepfakes pose a significant threat to authentication technologies, including facial recognition, today there are fewer opportunities for fraudsters to commit scams from a payment processing standpoint. Because payment processors have their own implementations of machine learning, business rules, and models to protect customers from fraud, cybercriminals must work hard to find potential gaps in the defenses of payment rails — and these gaps narrow as each merchant creates more relationship history with customers.
The ability of financial firms and platforms to “know their customers” has become even more important in the wake of the rise of cybercrime. The more a payment processor knows about past transactions and behavior, the easier it is for automated systems to validate that the next transaction fits into an appropriate pattern and is likely to be authentic.
Automatically identifying fraud in these cases draws on many variables, including transaction history, transaction value, location and past chargebacks, and it does not rely on the person's identity in a way that deepfakes can exploit.
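To make that concrete, here is a minimal Python sketch of the rules-plus-scoring idea described above. Every name, threshold and weight in it is an illustrative assumption rather than any processor's actual model; real systems would combine far more signals with a trained machine learning model.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float          # transaction value in USD
    country: str           # country where the transaction originates
    home_country: str      # customer's usual country
    prior_txn_count: int   # length of the relationship history
    past_chargebacks: int  # chargebacks previously filed on this account

def fraud_risk(txn: Transaction) -> float:
    """Combine simple business rules into a 0..1 risk score.

    Thresholds and weights are placeholders for illustration only.
    """
    score = 0.0
    if txn.amount > 1_000:                 # high-value transactions carry more risk
        score += 0.3
    if txn.country != txn.home_country:    # location doesn't fit the usual pattern
        score += 0.2
    if txn.prior_txn_count < 3:            # thin relationship history
        score += 0.3
    score += min(0.1 * txn.past_chargebacks, 0.2)  # cap the chargeback penalty
    return min(score, 1.0)

txn = Transaction(amount=2_500, country="BR", home_country="US",
                  prior_txn_count=1, past_chargebacks=0)
# Only transactions that score above the cutoff ever reach a human reviewer.
print("manual review" if fraud_risk(txn) >= 0.7 else "auto-approve")
```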
The greatest risk of deepfake fraud for payment processors lies in manual review, especially in cases where the transaction value is high.
In manual review, fraudsters take advantage of the opportunity to use social engineering techniques to trick the human reviewers into believing, through digitally manipulated media, that the transactor has the authority to execute the transaction.
And, as reported in The Wall Street Journal, these types of attacks can unfortunately be very effective, with fraudsters even using deepfaked audio to impersonate a CEO and scam a UK-based company out of almost a quarter of a million dollars.
Because the stakes are high, there are several ways to narrow the fraud gaps in general while staying ahead of fraudsters’ attempts at deepfake hacks.
How to avoid losses from deepfakes
Advanced methods exist to expose deepfakes, using a number of varied checks to identify telltale errors.
For example, because the average person doesn't like photos of themselves with their eyes closed, selection bias in the source images used to train the AI that creates a deepfake can cause the fabricated subject to not blink at all, to blink at abnormal rates, or to get the composite facial expression around the blink wrong. This bias can affect other aspects of deepfakes, such as negative expressions, because people tend not to post those kinds of emotions on social media, a common source of AI training material.
Other ways to identify today's deepfakes include spotting lighting problems, weather that doesn't match the supposed location and time code of the media in question, or anomalies in the artifacts created by filming, recording or encoding the video or audio relative to the type of camera, recording equipment or codecs used.
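As a concrete illustration of the blink heuristic, the Python sketch below flags footage whose blink rate falls outside a typical human range. It assumes a per-frame eye-aspect-ratio (EAR) series has already been extracted by a facial-landmark detector; the 0.2 EAR threshold and the 6-30 blinks-per-minute bounds are rough assumptions, not validated constants.

```python
def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a series of per-frame eye-aspect-ratio (EAR) values.

    A blink is a run of at least `min_frames` consecutive frames below
    `threshold`. The 0.2 cutoff is a commonly cited heuristic, not a constant.
    """
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # a blink still in progress at the end of the clip
        blinks += 1
    return blinks

def blink_rate_suspicious(ear_series, fps=30, low=6, high=30):
    """Flag footage whose blinks-per-minute falls outside a typical human range."""
    minutes = len(ear_series) / fps / 60
    rate = count_blinks(ear_series) / minutes if minutes else 0.0
    return rate < low or rate > high

# Example: 60 seconds of video at 30 fps in which the eyes never close.
flat = [0.3] * (30 * 60)
print(blink_rate_suspicious(flat))  # True: the subject never blinks
```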
While these techniques now work, deepfake technology and techniques are quickly approaching a point where they can even fool this kind of validation.
Best processes to fight deepfakes
Until deepfakes can fool other AIs, the best current options for fighting them are:
- Improve training for manual reviewers or use authentication AI to better recognize deepfakes, which is only a short-term technique while the errors are still detectable. For example, look for blinking errors, artifacts, repeated pixels, or issues with the subject making negative expressions.
- Collect as much information about sellers as possible to make better use of KYC (know your customer) processes. For example, take advantage of services that scan the deep web for potential data breaches affecting customers and flag those accounts for potential fraud.
- Prefer multi-factor authentication methods. For example, consider using 3-D Secure (3DS), token-based authentication, and one-time passwords and codes; a minimal one-time-password sketch follows this list.
- Standardize security methods to reduce the frequency of manual reviews.
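As one concrete example of the one-time passwords mentioned above, here is a minimal sketch of time-based one-time password (TOTP) generation following RFC 6238. The base32 secret is a placeholder for illustration; in practice the secret is provisioned per user and verified server-side with a small tolerance window.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder base32 secret for illustration only; never hard-code real secrets.
print(totp("JBSWY3DPEHPK3PXP"))
```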
Three security best practices
In addition to these methods, several security practices should help immediately:
- Hire an intellectually curious staff to lay the groundwork for building a secure system by creating an environment of rigorous testing, re-testing, and constant questioning of the effectiveness of current models.
- Set up a control group to measure the impact of anti-fraud measures, providing "peace of mind" and relative statistical assurance that current practices are effective.
- Implement constant A/B testing with step-by-step rollouts, increasing the use of a model in small increments until it is proven effective; a minimal rollout sketch follows this list. These ongoing tests are crucial to maintaining a strong system and to beating scammers at their own game with computer-based tools.
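Here is a minimal sketch of that step-by-step rollout idea: transactions are hashed deterministically into a "new model" or "control" bucket, and the rollout percentage is raised only after the new model proves itself against the control. The salt name and ramp percentages are illustrative assumptions, not a prescribed schedule.

```python
import hashlib

# Illustrative ramp schedule: share of traffic scored by the new model each week.
RAMP_SCHEDULE = [0.01, 0.05, 0.20, 0.50, 1.00]

def assign_bucket(txn_id: str, rollout_pct: float, salt: str = "fraud-model-v2") -> str:
    """Deterministically route a transaction to the new model or the control group.

    Hashing the ID keeps assignment stable across retries and services.
    """
    h = hashlib.sha256(f"{salt}:{txn_id}".encode()).digest()
    slot = int.from_bytes(h[:8], "big") / 2 ** 64    # uniform value in [0, 1)
    return "new_model" if slot < rollout_pct else "control"

week = 1                                             # second week of the ramp
print(assign_bucket("txn-000123", RAMP_SCHEDULE[week]))
# Compare fraud and chargeback rates between buckets before raising the percentage.
```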
Endgame (for now) vs. deepfakes
The key to reducing deepfake fraud today lies mainly in limiting the circumstances under which manipulated media can play a role in validating a transaction. That is achieved by developing anti-fraud tools that limit manual reviews, and by constantly testing and refining toolsets to stay ahead of well-funded, global cybercrime syndicates.
Rahm Rajaram, VP of operations and data at EBANX, is an experienced financial services professional with extensive expertise in security and analytics, having held leadership roles at companies including American Express, Grab and Klarna.