Sunday, September 22, 2024

The Top 6 Nefarious Uses Of Generative AI In 2024

In the world of Cybersecurity, one common denominator among most of the vendors is their sheer love of publishing reports on the latest happenings in the Cyber Threat Landscape.  Agencies within the Federal Government publish them as well.  Probably the best known and most reputable of these reports is published by Verizon.

They do this on an annual basis, and the report is entitled the “Data Breach Investigations Report”, also known as the “DBIR” for short.  To access the 2024 report, click on the link below:

http://cyberresources.solutions/blogs/2024-dbir-data-breach-investigations-report.pdf

What I especially like about this report is that they cover a wide range of Cyber issues, such as:

*Patterns In Incident Response

*Systems Intrusion

*Social Engineering

*Web Application Attacks

*DDoS Attacks

*Heisted Digital Assets

*Misuse Of Privileges

It also covers a wide range of industries on which the above-mentioned threat vectors can have a huge impact.  In this report, the following market segments are analyzed:

*Food/Entertainment

*Education

*Finance/Insurance

*Healthcare

*Information Technology

*Manufacturing

*Professional/Scientific Services

*Public Administration

*Retail

And of course, the heavy emphasis in this 2024 edition is on Generative AI, and especially how it is being used for nefarious purposes by the Cyberattacker.  Here is what they covered:

1)     Phishing:

As most of us know, Phishing is not only one of the oldest threat variants around, but, believe it or not, it is still widely used.  Previously, you could tell if you received a Phishing email by examining it for suspicious attachments, typos, misspellings, grammatical mistakes, etc.  But the report found that many hackers are now using ChatGPT not only to create Phishing emails with hardly any errors in them, but also to advise non-English speakers on how to create convincing Phishing emails.  Because of the absence of these telltale signs, it now takes only about 21 seconds for the victim to click on a malicious link, and a mere 28 seconds to give away their confidential information.

(SOURCE:  https://www.darkreading.com/vulnerabilities-threats/genai-cybersecurity-insights-beyond-verizon-dbir)

2)     Malware:

In the past, the Cyberattacker would take their time writing the code for the malware they wanted to deploy onto the victim’s device.  Not anymore.  Through the sinister evil twin of ChatGPT, called “WormGPT”, the Cyberattacker can now create and design a piece of stealthy malware in just a matter of minutes.  It is powered primarily by Large Language Models (also known as “LLMs”).  In this regard, the most commonly crafted malware is the Keylogger.

3)     Websites:

Back in the days of the COVID-19 pandemic, it was commonplace for the Cyberattacker to create phony and fake websites in order to lure the victim into making a payment to a fictitious cause.  Of course, all of this money would then be transferred to an offshore account, such as in China, Russia, or North Korea.  But with Generative AI, the Cyberattacker can not only create a very convincing website, but even deploy malicious artifacts behind it.  Not only that, but these web pages can be created dynamically, on the spot, by using the right kind of Neural Network Algorithm.

4)     Deepfakes: 

These made their first mark in the 2016 Presidential Elections.  Essentially, this is where the Cyberattacker can take an image of a real person and actually make a video from it.  For example, through Generative AI, a Cyberattacker can take an image of a real politician and turn that into a video that can easily be put onto YouTube.  One of the most common tactics here is to ask for donations for a political cause.  Worse yet, Deepfakes are also being created to spoof Two Factor (2FA) and Multifactor (MFA) authentication mechanisms. 

5)     Voice:

Just as the Cyberattacker can take a real image in order to create a fake one, the same can also be said of your voice.  In this instance, through the use of Machine Learning, they can take any legitimate voice recording that is available and recreate it to sound like the voice of the real person.  Typically, it is well-known people who are targeted.  Thus, if you receive a call from a phone number that you do not recognize, just don’t answer it.  If the caller leaves a voice mail, delete that as well.  Also, be careful as to what you post on the social media sites, especially when it comes to videos in which you are talking.

6)     OTPs:

This is an acronym that stands for “One Time Password”.  As its name implies, these are only used once, and are typically used to further verify your login credentials.  For example, if you log into a financial portal, such as your credit card or your bank, the second or third step in the verification process would be the OTP.  This is normally sent as a text message to your smartphone.  It usually expires after just a few minutes, and you have to enter it if you want to gain full access to your account.  But the Cyberattacker is now using Generative AI to create fake ones, which are used in “Smishing”-based attacks.  This is where you get a phony message, but rather than arriving in an email, it comes straight through as a text message.  If you get one of these unexpectedly, just delete it!!!
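To see why a legitimate OTP is hard to guess but easy for an attacker to phish, it helps to look at how one is actually derived.  The sketch below is a generic illustration of the standard time-based OTP scheme (RFC 6238), written with Python's standard library; it is not taken from the report, and the example secret is hypothetical.  The code is only valid within one 30-second window, which is why Smishing attacks race to get you to hand it over immediately.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6, t=None) -> str:
    """Derive a time-based one-time password (RFC 6238 style).

    secret_b32 : the shared secret, Base32-encoded (as in authenticator apps)
    interval   : validity window in seconds (30 is the common default)
    digits     : length of the resulting code
    t          : Unix timestamp to evaluate at (defaults to "now")
    """
    key = base64.b32decode(secret_b32)
    # The moving factor is simply the current 30-second window number.
    counter = int((time.time() if t is None else t) // interval)
    msg = struct.pack(">Q", counter)          # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Hypothetical shared secret (the Base32 encoding of "12345678901234567890"):
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET))   # a fresh 6-digit code for the current 30-second window
```

Because the code depends on both a shared secret and the clock, an attacker cannot precompute it; a Smishing message instead tricks the victim into reading the real code back to the attacker while the window is still open.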

My Thoughts On This:

Interestingly enough, one of the major conclusions of this report is that there is a lot of hype around Generative AI.  In my view, this is certainly true, as many of the Cyber vendors use this keyword in order to make their products and services that much more enticing for you to buy.  These days, it is hard to tell what is real and what is not.

In a recent class I taught on Generative AI, some of the students asked me how they should deal with this particular issue.  I told them that the truth of the matter is that it is hard.  Your only true line of defense is to trust your gut.  If something doesn’t feel right, just delete it, or don’t click on it.  And always confirm the authenticity of the sender!!!

