As I mentioned in yesterday’s blog, AI and ML are some of the terms in Cybersecurity that have been thrown about carelessly in the last couple of years. My fingers point especially at the vendors out there who claim that their products and solutions contain AI and ML, and should be at the forefront for any customer when it comes to beefing up their lines of defense.
While it may be true that some basic algorithms have been incorporated into these products, it is hardly rocket science that the vendor has invented.
They are simply overextending their definitions of AI and ML (I would actually like to use the word “lie,” but that might be going a little too far) in order to woo customers into purchasing whatever they have to offer. And of course, many customers fall for it hook, line, and sinker, and buy it, because they think they are getting the “best in breed” solution.
Heck, even I have used the terms AI and ML many times before, but I try to be careful about the context in which I use them, and I am also conscious of what they can and cannot do. In other words, I try not to overstate anything, to the best of my ability.
In this regard, I have only given very generic examples of where they could be put to their best use. Two areas in Cyber that are well suited for AI and ML are task automation and threat modeling.
But as I think about it further, I don’t think I have ever seen any concrete case studies where AI and ML have really been used, with hard numbers to actually prove the results.
In fact, as much as I read the news headlines every day, I don’t ever
remember a vendor even talking about a case study.
Well, that was until today, when I finally found an article that gives a glimpse of how one company is using AI and ML, and some of the benefits it has derived from them.
Remember, the one thing that Cyberattackers are primarily after is money. They will get it any way they can, whether that is draining your bank account with compromised credentials, launching ID Theft attacks, etc.
Because of this, the major credit card companies now have their guards up to the highest levels possible, not only to protect their customers, but to minimize credit card fraud as much as possible. With the sheer volume of electronic transactions that occur around the world on a daily basis, there is no way that human beings could comb through all of that data to find any evidence of fraud or malicious behavior.
Therefore, a leading credit card company, Visa, has embarked upon a massive program to incorporate AI and ML into its IT and Network infrastructure for these very purposes. They have finally released some of their numbers, which, frankly, will quite astonish you:
*They have invested over $9 billion in AI and ML technologies;
*They have over 60 Petabytes of information and data residing in their databases;
*AI and ML have been deployed in over 60 different technological components of Visa;
*One of their in-house tools, known as “Visa Advanced Authorization” (aka “VAA”), can determine whether a credit card transaction is fraudulent or not in just 300 milliseconds. Because of this speed, over $26 Billion in credit card fraud attempts were blocked in 2022 (a rough sketch of this kind of real-time scoring follows after this list);
*Visa has also developed a new tool called “Visa Behavioral Analytics” to examine the qualitative aspects of credit card fraud. In this regard, over 400 million authentication requests were compared against 12 million unique devices over a two-year time span. Because of this, Visa was able to block over $2.2 Billion in credit card fraud.
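To make that real-time scoring idea a little more concrete, here is a minimal, purely hypothetical Python sketch of how a fraud model trained offline might score a single incoming transaction against a decline threshold within a tight latency budget. To be clear, this is not VAA or anything Visa has published; the features, the model choice, and the 0.9 threshold are all my own illustrative assumptions.

# Hypothetical sketch of real-time transaction risk scoring -- not Visa's VAA.
# A model trained offline on historical transactions scores each new one and
# declines it if the estimated fraud probability crosses a threshold.
import time
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy training data: [amount_usd, hour_of_day, is_foreign, merchant_risk_score]
rng = np.random.default_rng(0)
X_train = rng.random((1000, 4))
y_train = (X_train[:, 0] * X_train[:, 3] > 0.6).astype(int)   # synthetic fraud label

model = GradientBoostingClassifier().fit(X_train, y_train)

def score_transaction(features, threshold=0.9):
    """Return (decision, fraud_probability, latency_ms) for one transaction."""
    start = time.perf_counter()
    prob = model.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]
    latency_ms = (time.perf_counter() - start) * 1000
    decision = "DECLINE" if prob >= threshold else "APPROVE"
    return decision, prob, latency_ms

print(score_transaction([0.95, 0.1, 1.0, 0.8]))

The interesting part in practice is the pairing of the probability with the latency: the scoring call has to fit comfortably inside the authorization window (Visa quotes 300 milliseconds end to end), or it would hold up the purchase itself.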
While these numbers are truly astounding, there is always a flip side as well. For example, technology can also make mistakes, especially when it comes to flagging a transaction as fraudulent when it was actually a legitimate one. These are technically called “False Declines,” and a credit card company could lose business very quickly if this happens too often.
In fact, studies have even shown that after just one False Decline, a customer will leave and get a new credit card, and this happens about 89% of the time. To avoid this, and to keep their existing customer base, Visa has also invested heavily in Deep Learning technology to further understand the purchasing behaviors of their customers. So far, this effort has proven to be successful, with the total number of False Declines dropping by as much as 30%.
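As a rough illustration of the balancing act involved here, the hypothetical Python sketch below sweeps a decline threshold over synthetic model scores. None of these numbers come from Visa; the point is simply that raising the threshold cuts down on False Declines at the cost of letting more fraud slip through, which is exactly the trade-off that better behavioral models are meant to ease.

# Hypothetical sketch of tuning a decline threshold -- illustrative only, not
# Visa's actual Deep Learning system. A higher threshold produces fewer
# False Declines (legitimate purchases blocked) but lets more fraud through.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 10_000)             # 1 = fraud, 0 = legitimate
# Synthetic model scores: fraud tends to score higher, but the two overlap
scores = np.clip(rng.normal(loc=0.3 + 0.4 * y_true, scale=0.15), 0.0, 1.0)

for threshold in (0.4, 0.5, 0.6, 0.7):
    y_pred = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"threshold={threshold:.1f}  fraud blocked={tp:5d}  "
          f"false declines={fp:5d}  fraud missed={fn:5d}")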
But Visa has not forgotten about using the traditional tools of Penetration Testing and Vulnerability Scanning either, and according to them, doing these tests has prevented over $31 Million in fraud attempts from taking place.
My Thoughts On This:
Well, there you have it: a solid case study that points out the good that AI and ML can do. But keep in mind also that the human side of all this is equally important. While it would be nice to have all of this automated, we are not yet at that point. Visa is fully cognizant of this, and because of that, they have launched various “Cyber Fusion Centers,” which are much like SOCs.
In fact, they have even acknowledged that AI and ML work best when used in conjunction with tools and technologies that have been designed to detect fraudulent activity.
Honestly, it is quite refreshing to see them take this stance. Not many companies that I have written about have taken this viewpoint; for most, it is an all-or-nothing proposition.
If you want a deeper dive into using AI and ML to prevent financial crimes, you should download this eBook here:
https://www.pymnts.com/tracker/preventing-financial-crimes-playbook-august-2020/
Banks are also getting into the AI and ML game, and those with over $100 Billion in assets are going to be key players here as well. To get more insight into this, check out this article:
https://www.businessinsider.com/ai-in-banking-report
Finally, this posting and the numbers presented in it come from:
https://www.darkreading.com/edge-articles/a-peek-into-visa-s-ai-tools-against-fraud