Increasingly, digital businesses are using real-time behavioral analytics on pre-submit data (user behavior captured before a form is ever submitted) to spot criminal digital footprints and strengthen their fraud prevention strategies.
Fraud rings are groups of criminals working together, usually on a massive scale, to steal from people and organizations. Traditional fraud and anomalous-activity detection approaches often miss their activities, and unfortunately, their financial impact grows larger every year.
RTInsights recently sat down with Don Bush, Senior Vice President of Marketing at NeuroID, to talk about fraud rings, the scope of their work, and how online behavioral data and digital body language can provide the insights needed to determine nefarious intent or legitimacy. Here is a summary of our conversation.
RTInsights: What are fraud rings?
Bush: Fraud rings are coordinated or networked fraudsters, either one large cybercrime syndicate or several individuals or smaller groups working together. The attacks can come through human interaction or through automated software programs called “bots” that try to infiltrate networks, create new accounts, and so on, with the intention of stealing products, services, or money.
RTInsights: Why are they so challenging to protect against?
Bush: The difficulty in recognizing fraud rings is due to many factors, including:
1) The data they use is most likely real information from an actual person or group of people, purchased through illegal channels. This personal data, often called personally identifiable information or PII, is exactly what fraud and identity verification systems use to determine whether a user is who they say they are. When this PII is in the wrong hands, its misuse can be hard to detect and hard to stop.
2) Fraud rings often study, test, and probe their victims’ websites before they launch a larger, coordinated attack. This gives them many clues as to what fraud detection systems are in place and how they might be able to get past them.
3) Velocity is another tactic fraud rings use. For example, when an online company runs a promotional campaign, it expects increased traffic to its site. Fraud rings often use this period to attempt large-scale attacks under the cover of the higher traffic, hoping to blend in and betting that the company either has not scaled up its fraud system or that its manual approval processes will be overwhelmed, allowing more bad actors into the system.
4) Bots are also a major concern when it comes to fraud attacks. They are fast-acting, usually creating a spike in activity that may not be recognized until the damage is done. This automated way of attacking digital brands can be used to steal, to probe fraud detection technology, and to conduct cyber espionage, among other things.
5) The last area is simply poor defenses for detecting and deterring fraud. It sounds far-fetched, but even in today’s online world, many companies do not have technology that matches the criminals they are trying to stop. These cybercriminals are well-funded, sophisticated, and have technology that would rival that of any Fortune 500 company.
RTInsights: How do traditional methods break down against fraud rings?
Bush: Generally speaking, fraud and identity systems are designed to look at the PII that is submitted and determine whether this person is who they say they are. They rely on one or more of the following:
- Something the user knows, like a password, personal information, etc.
- Something the user has, like a random code generator, a phone, etc.
- Something the user is, like a thumb scan, facial recognition, etc.
Unfortunately, all of these methods use data that must be stored, recalled, and compared against the answers the user submits on the form they are filling out. That stored data can therefore be stolen or otherwise compromised and used against its actual owner.
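To make that dependence on stored data concrete, here is a minimal sketch (TypeScript, with hypothetical field names; not any vendor’s actual API) of a knowledge-factor check: the submitted answers are only ever compared against a record already on file, so anyone holding a stolen copy of that record passes just as easily as the real owner.

```typescript
// Hypothetical illustration: a knowledge-factor check is just a comparison
// between what the user submits and what is already stored on file.
interface StoredIdentityRecord {
  ssnLast4: string;
  dateOfBirth: string; // ISO date, e.g. "1980-04-12"
  mailingZip: string;
}

interface SubmittedAnswers {
  ssnLast4: string;
  dateOfBirth: string;
  mailingZip: string;
}

// Returns true when every submitted answer matches the record on file.
// The weakness: stolen PII matches just as well as the real owner's answers.
function knowledgeFactorPasses(
  submitted: SubmittedAnswers,
  stored: StoredIdentityRecord
): boolean {
  return (
    submitted.ssnLast4 === stored.ssnLast4 &&
    submitted.dateOfBirth === stored.dateOfBirth &&
    submitted.mailingZip === stored.mailingZip
  );
}
```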
Also, this PII changes rarely, if at all. When was the last time you changed your Social Security number, address, or email? You really can’t change your thumbprint, can you? This means that once the data has been compromised (stolen) through one of the thousands of data breaches that occur every year, it can be used and resold for years to come.
Another point to make here is that the data must be submitted before the company can run it through its systems to determine whether this is a genuine user or a fraudster/fraud ring. Sometimes that check adds cost to the process; two-factor authentication is one example of an added cost. If the company is an online insurance carrier, running an application for a quote can cost as much as $30 per application. If the carrier hasn’t filtered out the fraud before that point, it is stuck with those costs. Multiply that by hundreds or thousands of fraudulent applications in a single fraud ring attack, and you can quickly see how the costs add up.
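As a rough worked example of that math (the per-quote cost and attack size below are illustrative figures in the spirit of the numbers above, not reported data):

```typescript
// Illustrative cost math: when fraud is only caught after submission, the
// quote cost has already been incurred for every fraudulent application.
const costPerQuote = 30;             // up to $30 per quote, per the example above
const fraudulentApplications = 2000; // assumed size of one fraud ring attack

const wastedSpend = costPerQuote * fraudulentApplications;
console.log(`Wasted quote spend from one attack: $${wastedSpend.toLocaleString()}`); // $60,000
```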
RTInsights: What’s needed to protect against them?
Bush: True protection needs to come from a method that does not rely on this personal data for fraud detection or identity verification. As long as a fraud ring can use a person’s data to get past systems designed to validate that same data, we will have a big problem. This is one of the main reasons we continue to see fraud numbers online increase dramatically every year.
RTInsights: How does NeuroID help?
Bush: NeuroID has developed technology that can show the behavior of a person as they interact with a form or application online. Behavior cannot be mimicked, nor can it be stolen from a database. Behavior is a very specific, unique footprint that each individual leaves when they interact with an online form or application. How a person types, texts, swipes, and flows from field to field is very telling when it comes to that user’s intentions.
Sometimes called “digital body language,” this behavior can be analyzed as the user interacts with the form, even before they hit the submit button. NeuroID’s patented technology detects that a user is NOT who they say they are when their behavior is inconsistent with that of someone familiar with their own PII.
It’s sort of like when you were in school, and your teacher required you to show your work on a test. Anyone can get the right answer, but how you came to that answer speaks much louder than the answer itself.
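As a rough illustration of what these behavioral signals can look like in practice, here is a minimal browser-side sketch (TypeScript, with assumed field handling and event choices; not NeuroID’s actual product code) that records per-field timing, corrections, and paste events before the form is ever submitted.

```typescript
// Hypothetical sketch: capture simple behavioral signals (timing, edits,
// paste events) per form field, before the user hits submit.
interface FieldBehavior {
  focusedAtMs: number; // when the field last received focus
  keystrokes: number;  // total key presses in the field
  corrections: number; // Backspace/Delete presses (hesitation, re-typing)
  pasted: boolean;     // pasting your own name or SSN is unusual behavior
  dwellMs: number;     // total time spent in the field
}

const behavior = new Map<string, FieldBehavior>();

function track(field: HTMLInputElement): void {
  const record: FieldBehavior = {
    focusedAtMs: 0, keystrokes: 0, corrections: 0, pasted: false, dwellMs: 0,
  };
  behavior.set(field.name, record);

  field.addEventListener("focus", () => { record.focusedAtMs = Date.now(); });
  field.addEventListener("keydown", (e) => {
    record.keystrokes += 1;
    if (e.key === "Backspace" || e.key === "Delete") record.corrections += 1;
  });
  field.addEventListener("paste", () => { record.pasted = true; });
  field.addEventListener("blur", () => {
    record.dwellMs += Date.now() - record.focusedAtMs;
  });
}

// Attach to every input on the application form as the page loads.
document.querySelectorAll<HTMLInputElement>("form input").forEach(track);
```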
With NeuroID’s ID Crowd Alert, digital brands can see the behavior of the crowd and are alerted to fraud ring and bot attacks very early in the process: the behavior is visible before users can hit the submit button, and fraud and identity systems can be informed of the pending attack. Working with existing fraud and identity systems, this added visibility at the very top of the online funnel is unprecedented and extremely valuable for identifying and deterring fraud.
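The crowd-level idea can be sketched as a simple aggregation over in-progress sessions (again a hypothetical illustration, not the ID Crowd Alert implementation): if the share of sessions showing unfamiliar behavior in the current window jumps well above the historical baseline, raise an alert before those sessions ever reach the submit button.

```typescript
// Hypothetical sketch of a crowd-level alert: compare the rate of
// "unfamiliar behavior" sessions in the current window against a baseline.
interface SessionSignal {
  sessionId: string;
  unfamiliarBehavior: boolean; // e.g. heavy pasting or re-typing of own PII
}

function crowdAlert(
  windowSessions: SessionSignal[],
  baselineUnfamiliarRate: number, // e.g. 0.02, learned from historical traffic
  spikeMultiplier = 5             // alert when the rate is 5x the baseline
): boolean {
  if (windowSessions.length === 0) return false;
  const flagged = windowSessions.filter((s) => s.unfamiliarBehavior).length;
  const rate = flagged / windowSessions.length;
  return rate >= baselineUnfamiliarRate * spikeMultiplier;
}
```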
Going one level deeper, NeuroID’s ID Orchestrator delivers specific field-by-field data to your fraud or identity system at the session or user level, helping to escalate deterrence for high-risk users and fraudsters or to reduce friction for genuine users, improving their experience and increasing conversion.
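At the session level, that field-by-field signal can feed a simple routing decision, sketched below with an assumed 0-to-1 risk score and arbitrary thresholds (not the ID Orchestrator API): genuine-looking sessions are fast-tracked, while risky ones get stepped-up verification.

```typescript
// Hypothetical sketch: route a session based on its behavioral risk score.
type Route = "fast-track" | "standard-review" | "step-up-verification";

// riskScore is assumed to range from 0 (clearly genuine) to 1 (clearly risky),
// produced upstream from the per-field behavioral signals.
function routeSession(riskScore: number): Route {
  if (riskScore < 0.2) return "fast-track";      // reduce friction for genuine users
  if (riskScore < 0.7) return "standard-review"; // existing workflow
  return "step-up-verification";                 // escalate deterrence for high risk
}
```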
RTInsights: Can you give some examples of success/implementation?
Bush: I have several examples.
One very large online credit card issuer ran a promotional campaign and saw its traffic triple during that period. Hidden among all this additional traffic, NeuroID’s ID Crowd Alert detected a fraud ring and alerted the company. The issuer was able to decline applications that would otherwise have made it through its existing system, and it estimates it saved about $800,000 from that one attempted attack.
A large digital insurance carrier was able to detect a bot that was filling out applications to check pricing for policies. Over a two-week period, more than 3,500 applications submitted by this bot were detected and declined before a quote was run, saving the carrier nearly $90,000 in that span. (This appeared to be what we call digital espionage, done to better understand competitive pricing. Digital espionage is not uncommon, but it most often goes undetected.)
Another online lender was able to act on the behavior reported by NeuroID. Users who showed genuine behavior and were familiar with their own PII were fast-tracked through the application process, resulting in a doubling of conversion and loan closings.
RTInsights: How are behavioral analytics different from biometrics when it comes to fraud ring detection?
Bush: Behavioral analytics looks at how a person interacts with a form or application, for instance. A person’s behavior is unique to that individual and cannot be stolen or mimicked. Behavior does not rely on PII, nor does it look at the data in the fields of a form. This is why behavioral analytics is such a powerful tool in the fight against fraud and in identity verification.
Biometrics, on the other hand, is simply a record of your fingerprint or of a facial or retinal scan. This data must be stored and recalled for comparison each time a user seeks to validate themselves online. Because it is a record, it can be stolen and misused to get past fraud and identity verification systems. And since it is part of who you are, it cannot be changed, so once the data is stolen, it is out on the dark web forever, to be used as often as possible and sold from criminal to cybergang many times over.
Behavior is a frictionless and more accurate way to detect fraud rings because it does not rely on anything other than the current user’s interactions.