Deep Dive: QSRs Leverage AI To Fight ATOs And Credential Stuffing

Restaurants are diving headfirst into digital innovations such as mobile ordering and rewards programs.

These merchants have always been vulnerable to in-person fraud, including coupon or promotion scams, but digital channels create new avenues that bad actors can exploit.

Security standards have largely not kept up with such threats, according to a joint study from Javelin Research and Kount. Their research found that nearly half of surveyed restaurants’ online and mobile ordering solutions require only usernames and passwords to log in — a known security weakness given that customers often use the same passwords for multiple accounts.

The study also noted that only 27 percent of restaurants’ digital investments were focused on fraud mitigation, with establishments instead allocating resources to mobile ordering.

This means fraud runs relatively unchecked among quick-service restaurants (QSRs), where the total losses stemming from a fraudulent order averaging $15 can reach as high as $36.25. Many chains have thus been turning to fraud detection programs driven by artificial intelligence (AI) to make the most of their limited prevention resources, leveraging various techniques to stop bad actors’ advances.

Selecting the Target

Fraudsters target mobile order-ahead apps for many reasons and use several methods to do so. These include credential stuffing, in which bots automatically enter usernames and passwords stolen from other websites until they find matches, and account takeovers (ATOs), which let bad actors harvest personal details such as passwords and credit card data and use that information to commit further cybercrimes.
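Credential stuffing has a tell that even simple defenses can watch for: a single source rapidly trying many different accounts. The Python sketch below illustrates that signal; the window length, threshold, and function names are assumptions chosen for the example, not values any QSR platform is known to use.

```python
# Minimal sketch of one common credential-stuffing signal: many distinct
# usernames attempted from a single source IP in a short window.
# Thresholds here are illustrative assumptions only.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300           # look at the last 5 minutes of failed logins
DISTINCT_USER_THRESHOLD = 20   # assumed cutoff for "too many accounts tried"

failed_attempts = defaultdict(deque)  # ip -> deque of (timestamp, username)

def record_failed_login(ip, username, now=None):
    """Record a failed login and return True if the IP looks like a stuffing bot."""
    now = time.time() if now is None else now
    attempts = failed_attempts[ip]
    attempts.append((now, username))

    # Drop attempts that fall outside the sliding window.
    while attempts and now - attempts[0][0] > WINDOW_SECONDS:
        attempts.popleft()

    distinct_users = {user for _, user in attempts}
    return len(distinct_users) >= DISTINCT_USER_THRESHOLD

# Example: a bot cycling through a leaked credential list trips the check.
for i in range(25):
    suspicious = record_failed_login("203.0.113.7", f"user{i}@example.com", now=1000.0 + i)
print("flag as credential stuffing:", suspicious)
```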

Rewards programs are especially popular among hackers as they can hold large amounts of valuable data, including payment information. Rewards points are also valuable as bad actors can either spend them or sell them on dark web marketplaces.

Coffee giant Dunkin’ fell victim to a credential stuffing attack in October 2018, and the fraudsters behind the scheme were soon selling users’ loyalty credits on dark web marketplaces for a fraction of their value. One Dream Marketplace listing offered $25 in Dunkin’ credits for $10.

AI to the Rescue

AI systems can prevent these attacks and are inexpensive compared to human fraud prevention teams. QSRs and third-party ordering apps process thousands of transactions every day, making it impossible for human analysts to examine each exchange for fraud. AI tools can analyze thousands of transactions in less than a second, but these solutions are not perfect. Human analysts are still needed to give suspicious purchases second glances and clear false positives.

Many AI-based fraud detection solutions also leverage machine learning (ML), which allows the system to learn on its own. ML can be divided into two types: supervised and unsupervised. Supervised ML requires known outcomes, such as when a human analyst labels which transactions are fraudulent and can determine whether the system has performed its job correctly. Unsupervised ML does not require labeled outcomes and relies solely on the AI’s own judgment to find patterns and groupings. The latter is particularly useful when analyzing large quantities of transactions in which fraud may or may not be taking place, but it has a higher chance of producing false positives.
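To make the distinction concrete, the sketch below trains both kinds of model on synthetic order data using scikit-learn: a supervised classifier that learns from analyst-labeled fraud, and an unsupervised outlier detector that works without labels. The features, distributions, and parameters are assumptions chosen for illustration, not a production fraud model.

```python
# Supervised vs. unsupervised fraud detection on synthetic order data
# (order amount, account age in days). Illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
legit = np.column_stack([rng.normal(15, 5, 500), rng.normal(400, 120, 500)])
fraud = np.column_stack([rng.normal(60, 20, 25), rng.normal(5, 3, 25)])
X = np.vstack([legit, fraud])
y = np.array([0] * 500 + [1] * 25)  # labels a human analyst would supply

# Supervised: the model learns from transactions already labeled fraudulent.
clf = LogisticRegression().fit(X, y)

# Unsupervised: no labels; the model flags statistical outliers on its own,
# which can catch novel patterns but also produces more false positives.
iso = IsolationForest(contamination=0.05, random_state=0).fit(X)

new_order = np.array([[75.0, 2.0]])  # large order from a two-day-old account
print("supervised fraud probability:", clf.predict_proba(new_order)[0, 1])
print("unsupervised verdict:", "outlier" if iso.predict(new_order)[0] == -1 else "normal")
```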

The differences in the processes make both types of ML useful in different situations, and pairing ML with AI can mean unparalleled fraud detection capabilities at a fraction of the cost of human analysts. Many QSRs and third-party ordering apps are thus already using these tools to enhance their fraud detection procedures.

AI in Action

One use case for AI-powered security comes from third-party ordering app ChowNow, which uses such systems to analyze transactions conducted on its app. These tools cross-reference new orders with others to determine legitimacy. Some are obviously fake, such as a single customer placing orders in several cities at the same time, but most transactions are assigned trustworthiness scores based on several factors, including how recently the user’s email address was created and whether the credit card associated with the transaction has previously been logged in a fraud database.
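As a rough illustration of how such signals might combine into a trustworthiness score, the sketch below uses the factors mentioned above (simultaneous cities, card reputation, email age). The weights, threshold behavior, and field names are hypothetical; this is not ChowNow’s actual scoring logic.

```python
# Hypothetical trust-scoring sketch; weights and fields are illustrative
# assumptions, not ChowNow's real model.
from dataclasses import dataclass

@dataclass
class Order:
    email_age_days: int       # how recently the account's email address was created
    card_in_fraud_db: bool    # card previously logged in a fraud database
    cities_last_hour: int     # distinct cities this customer ordered from recently

def trust_score(order: Order) -> float:
    if order.cities_last_hour > 1:   # same customer ordering in several cities at once
        return 0.0                   # obviously fake; reject outright
    score = 1.0
    if order.card_in_fraud_db:
        score -= 0.6
    if order.email_age_days < 7:     # throwaway address created for the attack
        score -= 0.3
    return max(score, 0.0)

order = Order(email_age_days=2, card_in_fraud_db=True, cities_last_hour=1)
print("trust score:", trust_score(order))  # low score -> block or send to review
```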

ChowNow processes $40 million in transactions each month, so it uses its human team only to check for false positives. Its AI system automatically blocks users it deems untrustworthy, and users can contest those decisions if they believe the AI has made a mistake.

Mexican QSR Chipotle is another mobile order-ahead player leveraging AI for security. Its system works in tandem with human analysts, but it can often do the job on its own.

“When you’re looking at account takeovers, for example, it’s predominantly automated bot attacks that have an identifiable signature,” Curt Garner, Chipotle’s chief technical officer, explained in an interview with PYMNTS. “As a retailer, you can say there’s no practical purpose why a customer would be trying to log on to your network using a bot. The security platforms that utilize AI and machine learning can also spot attack patterns as they try to morph into different vectors, and very quickly block those transactions as well.”
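For illustration, a bot “signature” of the kind Garner describes often shows up in request timing and client fingerprints. The sketch below flags login traffic that is implausibly fast and regular, or that announces an automated client; the specific signals and thresholds are assumptions for the example, not Chipotle’s detection rules.

```python
# Illustrative bot-signature check based on timing regularity and user agent.
# Signals and thresholds are assumptions only.
from statistics import pstdev

def looks_like_bot(request_intervals_ms, user_agent):
    """Flag traffic that is too fast and too regular to be a person typing."""
    headless = any(token in user_agent.lower()
                   for token in ("headless", "python-requests", "curl"))
    too_fast = bool(request_intervals_ms) and min(request_intervals_ms) < 200
    too_regular = len(request_intervals_ms) > 3 and pstdev(request_intervals_ms) < 10
    return headless or (too_fast and too_regular)

print(looks_like_bot([150, 151, 149, 150], "python-requests/2.31"))      # True
print(looks_like_bot([4200, 9800, 2600], "Mozilla/5.0 (iPhone; ...)"))   # False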

AI systems like Chipotle’s are not always fully confident in their assessments; when they are not, the decision is handed off to human analysts. Orders from new or unrecognized devices, for example, are always subjected to additional scrutiny before they are allowed through.
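That hand-off between machine and human might look something like the routing sketch below, where confident scores are acted on automatically and uncertain or unfamiliar cases are queued for an analyst. The thresholds and function name are illustrative assumptions rather than any QSR’s actual policy.

```python
# Illustrative routing of fraud-model decisions; thresholds are assumptions.
def route_order(fraud_probability, device_seen_before):
    if not device_seen_before:
        return "manual_review"   # new/unrecognized devices get extra scrutiny
    if fraud_probability >= 0.90:
        return "block"           # confidently fraudulent: block automatically
    if fraud_probability <= 0.10:
        return "approve"         # confidently legitimate: let it through
    return "manual_review"       # uncertain cases go to a human analyst

print(route_order(0.04, device_seen_before=True))    # approve
print(route_order(0.55, device_seen_before=True))    # manual_review
print(route_order(0.02, device_seen_before=False))   # manual_review
```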

Pairing AI-powered solutions with human analysts appears to be an optimal approach to preventing mobile order-ahead fraud. These technologies are advancing quickly, however, meaning human analysts will likely play smaller roles as time goes on.