Bad Bots and Online Selling: Q&A with DataDome

In 2021, the average data breach cost retailers, including those doing business online, $3.27 million, an increase of almost 63% from the $2.01 million it would have cost in 2020. With the rising cost of data breaches, more and more e-commerce businesses are looking for ways to protect their online stores. IT Business Edge spoke with Benjamin Fabre, co-founder of DataDome, to better understand how these retailers can protect their sites.

Established in 2015, DataDome is a provider of bot protection: it helps customers block bad bots that attempt to scrape data or steal personally identifiable information (PII). The platform uses artificial intelligence (AI) and machine learning (ML) to block harmful bots in less than two milliseconds.

Jenn Fulmer: The rapid shift to e-commerce last year has left many retailers vulnerable. What should these organizations watch out for on their e-commerce sites?

Benjamin Fabre: We have seen a huge increase in traffic to e-commerce websites since the start of the pandemic, and this has created incredible opportunities for hackers, because most of the quick changes e-commerce websites made were not very secure; it was necessary to move fast.

What we recommend is that our customers be protected against automated threats, especially credential stuffing and account takeover. With the massive password leaks we saw recently from Yahoo, Facebook, and LinkedIn, hackers are using bots to automate millions of login and password attempts on every e-commerce website on the planet. Even a small e-commerce site can be threatened by malicious bot traffic. A few years ago bot protection was a ‘nice to have,’ and now it’s a ‘must have’ for e-commerce websites.

Also Read: How the Switch to Ecommerce Affects Retail Cybersecurity

Bad actors use bad bots to launch DDoS attacks on retail websites. How can retailers tell the difference between good and bad traffic?

To detect bot traffic, you need to collect as much information as possible. At DataDome, we collect huge volumes of signals for over a thousand events per day, including mouse movements, keyboard use, and the various touch events a user performs in a mobile app. That volume of signals is what lets us determine whether the interaction is being carried out by a human or by a bot.

The second part is being able to use this information in real time with a low-latency machine learning algorithm. Today, we are able to make a decision in less than two milliseconds using this huge volume of information, so we can allow or deny every request to the website or mobile app. Next, we need to separate the good bots from the bad bots. Of course, there are some good bots on the internet. Google has crawlers, for example, and Facebook and Twitter pull information from websites to generate nice snippets when you share a link. So there are many legitimate use cases for running bots.
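To make the allow/deny step concrete, here is a deliberately simplified sketch in Python. The signal names, weights, and threshold are invented for illustration; a production system like the one Fabre describes relies on far richer signals and trained ML models rather than hand-written rules:

```python
# Illustrative only -- a toy stand-in, not DataDome's actual model.
def bot_score(signals: dict) -> float:
    """Score a request's behavioral signals; higher means more bot-like.
    Humans produce pointer activity and irregular keystroke timing,
    while headless bots usually do not."""
    score = 0.0
    if signals.get("mouse_events", 0) == 0:
        score += 0.5  # no pointer activity at all is suspicious
    if signals.get("keystroke_jitter_ms", 0.0) < 1.0:
        score += 0.3  # inhumanly regular typing cadence
    if signals.get("is_mobile") and not signals.get("touch_events"):
        score += 0.2  # mobile session with zero touch events
    return score

def decide(signals: dict, threshold: float = 0.5) -> str:
    """Allow or deny a single request based on its behavioral score."""
    return "deny" if bot_score(signals) >= threshold else "allow"
```

Because the scoring is a handful of dictionary lookups, a decision like this fits comfortably inside a two-millisecond budget; the hard part in practice is collecting and modeling the signals, not evaluating the verdict.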

We have implemented a strong authentication mechanism to ensure that a request is actually coming from Google. This is necessary because today we see that 30% of requests containing “Google Bot” in the user agent are not from the real Google; they are bad bots trying to trade on Google’s reputation. The same mechanism verifies the real Facebook and the real Twitter, and anyone claiming to be Google but not coming from a Google IP address or Google reverse DNS, for example, we will block, because we know they are trying to impersonate them.
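The crawler check Fabre describes is commonly implemented as forward-confirmed reverse DNS: resolve the requesting IP to a hostname, verify the hostname belongs to one of Google’s published crawler domains, then resolve that hostname back and confirm it maps to the original IP. A minimal Python sketch (function names are ours, for illustration):

```python
import socket

# Domains Google's crawlers legitimately reverse-resolve to.
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def hostname_is_google(hostname: str) -> bool:
    """Check that a reverse-DNS hostname belongs to a Google crawler domain."""
    return hostname.rstrip(".").endswith(GOOGLE_SUFFIXES)

def is_real_googlebot(ip: str) -> bool:
    """Forward-confirmed reverse DNS check for a claimed Googlebot IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
        if not hostname_is_google(hostname):
            return False
        # Forward lookup must map the hostname back to the original IP,
        # otherwise the PTR record could simply be spoofed.
        forward_ips = socket.gethostbyname_ex(hostname)[2]
        return ip in forward_ips
    except OSError:  # covers socket.herror and socket.gaierror
        return False
```

A request whose User-Agent claims to be Googlebot but fails `is_real_googlebot` can then be treated as an impersonator and blocked, which matches the 30% figure Fabre cites.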

When we detect that a request is from a bad bot, we try to figure out what the bot is trying to do. Is it just a competitor scraping real-time prices from the e-commerce website to adjust their own pricing strategy? Is it a bot attempting a credential stuffing attack or an account takeover that could lead to a data leak? Or is it a DDoS attack trying to take the website down by generating a massive number of requests?

I had heard that when you complete one of these reCAPTCHA boxes, the reason you are sometimes allowed to skip right through, while other times you have to redo the check, is that you moved your mouse in too straight a line. Is that true?

Yes, exactly. The behavioral detection will adjust the complexity of the reCAPTCHA based on its trust in you, so if you have legitimate mouse movement on the page, the reCAPTCHA will most of the time be easier, but as soon as something looks a little weird, it can start to be quite painful. We try to avoid using reCAPTCHA ourselves, as it can kill the experience for end users.

Why are bots such a big threat to online retailers?

There are dozens of different bot threats. OWASP tries to categorize them, and some are related to security itself. There are bots that scan for vulnerabilities and can attempt to breach your database and gain access to sensitive information. There are many different situations where bots are involved in data breaches that could damage your reputation.

Then, on the business side of the website, when your competitor is able to get all of your pricing in real time, you are losing a competitive advantage. Finally, if a DDoS attack takes a website down even for a few minutes, every second is a direct loss of revenue that sometimes you won’t get back at all.

Also Read: 5 Best Practices To Mitigate DDoS Attacks

How does DataDome prevent these bad bots from flooding a website while still allowing good bots, like the ones Google uses, to index a site?

With every page request or mobile app request, DataDome collects many signals. We let humans through, we authenticate the good bots, and we detect the bad bots and prevent them from accessing the website. We have deployed our technology in 25 points of presence to be ultra-fast in detection, and we are able to make a decision in less than two milliseconds.

Whenever someone tries to reach, for example, The New York Times’ login section or an e-commerce site like Footlocker, DataDome’s AI determines whether the request should be allowed or denied before it reaches our client’s app and before it can cause serious damage to their data.

Aside from partnering with DataDome, what would be your top recommendation for online retailers looking to protect their site?

If I had a top three, I would say number one is to use multi-factor authentication whenever possible on the most sensitive sections, like login and checkout pages.

The second recommendation would be to run a bug bounty program, in which ethical hackers try to find vulnerabilities in your website and share them with you; it is always better for a vulnerability to be found by the good guys than the bad guys. You have to be humble, because every website has vulnerabilities, and the best way to find them is to be tested daily by as many brains as possible.

The last element is to integrate a security expert into every team of developers. Some companies have a standard engineering team and then a separate security team. What we have done at DataDome is to embed someone in charge of security in each team, to avoid having two distinct mindsets: one in charge of security and the other trying to go fast and avoid any security constraints.

Read more: The best cybersecurity tools for small businesses
