Saturday, December 23, 2023

How The Good Guys Are Fighting The Dark Side Of AI

 


Apart from the political environment we are facing today, one of the other hot topics has been AI, especially Generative AI.  We hear about it in the news daily, and the stocks of some of the major AI players, Nvidia among them, have skyrocketed.  It feels like we are in a bubble, much like the dot-com era of the late ‘90s.  And as with all bubbles, there will of course eventually be a burst.

But one catalyst that could drive this downtrend is the Cyberattacker.  For all the potential and promise of Generative AI, there are also real risks that it can be used for malicious purposes.  So in this blog, we look at four areas where the Cyberattacker has already done this:

1)     Phishing Emails:

As many of us know, this is probably the oldest threat variant out there.  Over time, we learned how to spot the telltale signs: misspelled words, typos, redirected URLs that did not match, etc.  But with ChatGPT now in existence, a Cyberattacker can create a Phishing email with hardly any of these mistakes.  Another common type of Phishing email is Business Email Compromise (BEC), in which an administrative assistant is sent a fake invoice asking for a large sum of money to be wired to a bank account.  There were warning signs for this as well, but once again, ChatGPT has made it almost impossible to tell what is real and what is not.  In fact, according to a recent report by SlashNext, there has been a 1,265% rise in Phishing emails since ChatGPT came onto the market.  You can download this report at the link below:

http://cyberresources.solutions/blogs/Phishing_2023.pdf
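That said, the oldest telltale sign, the mismatched link, can still be checked automatically.  Here is a minimal sketch in plain Python (my own illustration, using only the standard library; the SlashNext report prescribes no such tool) that flags links whose visible text names a different domain than the one the href actually points to:

```python
# Heuristic sketch: flag <a> tags whose visible text claims one domain
# while the href points somewhere else. AI-written phishing can still
# pass this check -- it only catches the classic mismatched-link cue.
import re
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collects (href, visible text) pairs for every <a> tag."""
    def __init__(self):
        super().__init__()
        self.links = []          # list of (href, text) tuples
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def mismatched_links(html):
    """Return links whose visible text names a different domain than the href."""
    auditor = LinkAuditor()
    auditor.feed(html)
    flagged = []
    for href, text in auditor.links:
        actual = urlparse(href).hostname or ""
        # Does the visible text contain any domain-looking string?
        claimed = re.findall(r"[\w.-]+\.[a-z]{2,}", text.lower())
        if claimed and not any(actual.endswith(c) for c in claimed):
            flagged.append((href, text))
    return flagged

print(mismatched_links(
    '<a href="http://evil.example.net/login">www.yourbank.com</a>'
))
# [('http://evil.example.net/login', 'www.yourbank.com')]
```

A check like this catches only one cue, of course, which is exactly the point of the statistic above: the grammar-based cues are gone, so the structural ones matter more than ever.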

2)     Impersonation Attacks:

This happens when a Cyberattacker uses an AI tool to create a voice that sounds authentic.  With the robocalls of the past, it was usually an obviously digital voice that chimed in when you answered the phone, and you could more or less tell it was fake.  Not so when AI voice-cloning tools are used.  Now it sounds like a real person, and it is close to impossible to tell that it is actually fake.  More detail about this can be found at the links below:

https://www.darkreading.com/cyberattacks-data-breaches/ai-enabled-voice-cloning-deepfaked-kidnapping

https://leaderpost.com/news/local-news/regina-couple-says-possible-ai-voice-scam-nearly-cost-them-9400

3)     Deepfakes:

These have actually been around since before ChatGPT made its mark.  One of the best examples comes from Presidential campaigns.  Back in the 2016 election, fake videos of the candidates were created that looked almost like the real thing.  In them, the "candidates" would ask for campaign donations, but of course any money sent was actually deposited into an offshore bank account.  Deepfakes are really hard to detect, but if you look closely enough, there are some very subtle cues that will give them away.  So my soapbox here is: in next year's election, please be extremely careful if you encounter these kinds of videos.  An example can be seen at the link below:

https://www.cbsnews.com/chicago/news/vallas-campaign-deepfake-video/
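Those subtle cues are much easier to spot frame by frame than at full playback speed.  Here is a minimal sketch using Python with OpenCV (my choice of tooling, not something the article above uses; "suspect.mp4" is a stand-in filename) that dumps periodic still frames from a suspect video so you can inspect faces, lighting, and edges up close:

```python
# Minimal sketch: extract roughly one still frame per second from a clip
# for manual inspection. Assumes OpenCV is installed (pip install opencv-python)
# and that "suspect.mp4" is a placeholder filename.
import cv2

cap = cv2.VideoCapture("suspect.mp4")
fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30   # fall back if FPS is unreported
frame_idx, saved = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:                               # end of video (or read error)
        break
    if frame_idx % fps == 0:                 # about one frame per second
        cv2.imwrite(f"frame_{saved:04d}.png", frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"Saved {saved} stills; look closely at eyes, teeth, ears, and hairlines.")
```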

4)     Chatbots:

These are the digital agents that you see on a lot of websites today, usually in the lower right-hand corner of your screen.  I have seen a ton of them, and believe me, no two are alike.  But once again, given today's AI, a chatbot can literally be created in just a matter of minutes and be used for malicious purposes.  For example, the chatbot could employ Social Engineering tactics to con you into giving out personal information, or worse yet, into submitting your credit card number or banking information.
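To make that last risk concrete, here is a hedged sketch, in plain Python, of the kind of guard a filtering proxy or browser extension could apply before anything you type reaches a chatbot: scan the outgoing message for card-number-shaped strings, validated with the standard Luhn checksum.  This is my own illustration; a rogue chatbot controls its own page, so real protection has to live outside it.

```python
# Illustrative guard: detect plausible payment card numbers in text a user
# is about to send to a chatbot. The Luhn checksum is the standard check
# used by real card numbers, so it filters out most random digit strings.
import re

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def looks_like_card_number(message: str) -> bool:
    """True if the message contains a plausible 13-19 digit card number."""
    for candidate in re.findall(r"(?:\d[ -]?){13,19}", message):
        digits = re.sub(r"\D", "", candidate)
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            return True
    return False

print(looks_like_card_number("sure, it's 4111 1111 1111 1111"))  # True
print(looks_like_card_number("my order number is 123456"))       # False
```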

My Thoughts On This:

Of course, the good guys are starting to get on top of this.  One way this is being done is through the use of "Generative Adversarial Networks", also known as "GANs".  A GAN consists of two subcomponents:

*The Generator:  It creates new data samples.

*The Discriminator:  It tries to tell the generated samples apart from the real data the GAN has been trained on.

Because the two halves train against each other, Threat Researchers can now model what the potential threat variants described above will look like.  A minimal sketch of this training loop follows below.
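To make the mechanics concrete, here is that loop written in PyTorch (my choice of framework; nothing above prescribes one), with a toy 1-D Gaussian standing in for real threat data.  Modeling actual phishing text or cloned voices would need far more machinery, but the generator-versus-discriminator structure is the same:

```python
# Minimal GAN sketch: a generator learns to produce samples the
# discriminator cannot tell apart from "real" data (here, noise around 3.0).
import torch
import torch.nn as nn

LATENT, DATA = 8, 1

generator = nn.Sequential(              # creates new data samples from noise
    nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, DATA))
discriminator = nn.Sequential(          # scores samples: real (1) vs generated (0)
    nn.Linear(DATA, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, DATA) * 0.5 + 3.0      # stand-in "real" data
    fake = generator(torch.randn(64, LATENT))

    # Discriminator step: learn to separate real from generated samples.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: learn to make samples the discriminator calls real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, LATENT)).detach())  # outputs drift toward ~3.0
```

Note the detach() call: the discriminator trains against a frozen copy of the generator's output each step, which is what keeps the two halves improving against each other rather than chasing a moving gradient.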

But despite all of this, your best defense still remains your gut.  If something does not feel right, or if your first impression of a video, chatbot, or even an email raises red flags, then disconnect yourself from that platform immediately.

The longer you are engaged with it, the worse the consequences could be.

And it is not just here in the United States; other countries and governments around the world are fearful of the negative uses of Generative AI as well.  For example, in a recent report published by KPMG, over 90% of Canadian CEOs said they think that using it will make their businesses more vulnerable than ever before.

More details can be found at this link:  https://kpmg.com/ca/en/home/media/press-releases/2023/10/generative-ai-could-help-and-hinder-cybersecurity.html
