Friday, May 19, 2023

3 Grave Weaknesses Of ChatGPT You Need To Know About

 


Well, here we are, fast approaching June.  I can't believe where this year is going.  But as time marches on, so does the world of Cybersecurity.  Probably the biggest thing making news right now is Artificial Intelligence, or AI. 

There have been a ton of stories from people who are for it, and just as many from people who are against it.  Heck, there have even been calls in American society to put the brakes on AI, so that we can all get an understanding of what it is really about.

I even attended a major Cyber event last Tuesday at a rather posh hotel in Schaumburg.  Though of course all of the talk was on Cyber-related topics, one of the main points of discussion was AI.  Some people were fearful of its impact, while others were genuinely interested in it and in how it can be used in Cyber. 

When I was having lunch with some of the other attendees, I told them that I wrote a complete book on AI and ML.  I even mentioned that my dad was a Professor of Neurosciences at Purdue.

I told them that the bottom line is that we will never come close to fully understanding how the human brain works, much less replicating it.  In fact, at best, we will reach maybe 0.5% of any real understanding of it.  I also mentioned that what AI will be best used for is automation, especially of mundane and repetitive tasks.

But as AI continues to dominate, so will ChatGPT.  I have written about it in a couple of recent blogs, and I even wrote an entire whitepaper about it for a client.  While it does have its advantages, it has also spawned fears amongst a lot of people, especially when it comes to Cyber.  What kinds of fears are those?  Well, the biggest one is that it will be used for nefarious purposes.

Here are some of the areas where that is believed most likely to happen:

1)     Phishing:

This is probably the oldest attack vector known to history.  It stems all the way back to the early 90s, and the first widely publicized attacks were launched against AOL and its subscriber base in the later 90s.  Ever since then, it has evolved and grown, becoming stealthier and at times almost impossible to recognize.  But for one reason or another, there are telltale signs that get left behind, such as misspelled words, poor grammar, mismatched URLs, etc.  The fear now is that with ChatGPT, all of these signs of a Phishing email will disappear, because it is so "intelligent".  Well, guess what: it is not.  The damned thing cannot even pull its sources of information and data from the Internet.  It is purely garbage in, garbage out.  So while the signs of a Phishing email will not be so obvious anymore, there will still be something that looks funny.  The trick now is to take your time and find it, especially if your gut is telling you that something is not right (one of the more reliable checks, a mismatch between the link text and the URL it actually points to, is sketched out below).  The bottom line:  Treat every email you receive as a potential Phishing one, and apply the same level of caution to everything that comes into your inbox.
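To make that last point a bit more concrete, here is a minimal sketch in Python (standard library only) of the kind of check an email gateway or a careful user could run: it compares the visible text of each link in an HTML email against the domain the link actually points to.  The sample email and the find_mismatched_links helper are purely hypothetical illustrations, not a substitute for a real anti-Phishing filter.

```python
import re
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkExtractor(HTMLParser):
    """Collects (href, visible text) pairs for every <a> tag in an HTML email body."""

    def __init__(self):
        super().__init__()
        self.links = []              # list of (href, text) tuples
        self._current_href = None
        self._current_text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href", "")
            self._current_text = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            self.links.append((self._current_href, "".join(self._current_text).strip()))
            self._current_href = None


def looks_like_domain(text):
    """Rough check that the visible link text is itself a domain or URL."""
    candidate = text.split("//")[-1].split("/")[0]
    return bool(re.match(r"^[\w.-]+\.[a-z]{2,}$", candidate, re.IGNORECASE))


def find_mismatched_links(html_body):
    """Flag links whose visible text names one domain but whose href points somewhere else."""
    parser = LinkExtractor()
    parser.feed(html_body)
    suspicious = []
    for href, text in parser.links:
        if not looks_like_domain(text):
            continue  # text like "Click here" cannot be compared against a domain
        href_domain = urlparse(href).netloc.lower()
        text_domain = text.split("//")[-1].split("/")[0].lower()
        if href_domain and not href_domain.endswith(text_domain):
            suspicious.append((text, href))
    return suspicious


# Hypothetical example: the visible text says "www.mybank.com", but the link goes elsewhere.
sample = '<p>Verify your account at <a href="http://login.evil-example.ru">www.mybank.com</a></p>'
print(find_mismatched_links(sample))   # [('www.mybank.com', 'http://login.evil-example.ru')]
```

A check like this catches only one of the telltale signs; a real mail filter would layer many more on top of it (sender authentication results, attachment scanning, reputation lookups, and so on).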

2)     Coming out with the opposite:

In technical terms, this is also known as "Reverse Engineering".  In simpler terms, this is where you take a product of some sort and break it down into its raw components to see what the initial ingredients were.  In the world of Cyber, although this has always been a security risk, it was never really too much of a concern, because it took a lot of effort and time to do, and most Cyberattackers simply would not bother.  But with ChatGPT, not only can you create source code, you can even reverse engineer existing code back into its building blocks.  From here, a Cyberattacker who is well trained in software development can ask ChatGPT what the weaknesses are, and even where backdoors in the code exist.  So rather than taking the time to find them on their own, ChatGPT can do it for them in a matter of minutes.  One of the biggest fears here is that the most traditional forms of web application attacks, such as SQL Injection, will happen very quickly and even go unnoticed.  A short example of what makes SQL Injection such an easy target is sketched out below.
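Since SQL Injection gets singled out here, a quick sketch may help show why it is such low-hanging fruit for anyone, human or chatbot, who gets to read the source code.  The snippet below uses Python's built-in sqlite3 module and a made-up users table (both are illustrative assumptions) to contrast the vulnerable string-building pattern a reverse engineer hunts for with the parameterized version that closes the hole.

```python
import sqlite3

# Hypothetical demo database with a single users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret123')")


def login_vulnerable(username, password):
    """The pattern an attacker looks for: user input glued straight into the SQL string."""
    query = (
        "SELECT * FROM users WHERE username = '" + username + "' "
        "AND password = '" + password + "'"
    )
    return conn.execute(query).fetchone() is not None


def login_safe(username, password):
    """Parameterized query: the driver keeps the input as data, never as SQL."""
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None


# The classic injection payload bypasses the vulnerable check but not the safe one.
payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # True  -- logged in without knowing the password
print(login_safe("alice", payload))        # False -- the payload is treated as a literal string
```

The point of the comparison is that the defense is structural, not secrecy: with parameterized queries (or an ORM), even fully reverse-engineered code leaves no string-concatenation seam for the attacker to exploit.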

3)     Smarter malware:

It is important to note that malware is a catch-all term, which encompasses just about every threat variant that is out there.  Long story short, malware can be thought of as a malicious (hence the prefix "mal") piece of code that can cause extensive damage to an IT or Network-based infrastructure.  In the past, and even up until now, the Cyberattacker had to manually plant the malware into a weak spot and then control it remotely so that it could be triggered whenever and wherever they wanted.  But with ChatGPT, that extra legwork is no longer needed.  A Cyberattacker can now have a piece of code written with permutations built into it so that the malware can deploy itself where it will cause the most damage possible.  Thus, the term "Smart Malware".  In fact, one of the first publicly reported ChatGPT-related incidents involved Samsung, and more detail about it can be seen at the link below:

https://www.darkreading.com/vulnerabilities-threats/samsung-engineers-sensitive-data-chatgpt-warnings-ai-use-workplace

My Thoughts On This:

At the end of the day, ChatGPT is here to stay.  Given the attention and notoriety it has right now, it is quite likely that it will only grow.  But like all crazes, this one will eventually run its course, and the hysteria and anxiety it is causing now will die out for sure. 

Now is probably the best time to get your IT Security team to fully explore the weaknesses of ChatGPT, and believe me, they do exist.

Then, from there, you need to train your employees to spot any threat variants that look like they could have evolved from ChatGPT.  To keep your business even safer, you should restrict your employees from using it during work hours, unless it is required for their job functions.
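If you do decide to restrict usage, enforcement usually starts with visibility.  Here is a minimal Python sketch that assumes a hypothetical web-proxy log in a simple timestamp,user,domain CSV format (your own proxy or DNS logs will almost certainly look different) and flags visits to a small list of ChatGPT-related domains during business hours:

```python
import csv
from datetime import datetime

# Domains associated with ChatGPT access; extend for your own environment.
WATCHED_DOMAINS = {"chat.openai.com", "chatgpt.com"}

# Assumed business hours (24-hour clock); adjust to your own policy.
WORK_START, WORK_END = 9, 17


def flag_chatgpt_usage(log_path):
    """Return (timestamp, user, domain) rows that hit a watched domain during work hours."""
    hits = []
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):  # expects columns: timestamp,user,domain
            when = datetime.fromisoformat(row["timestamp"])
            domain = row["domain"].lower()
            if WORK_START <= when.hour < WORK_END and any(
                domain == d or domain.endswith("." + d) for d in WATCHED_DOMAINS
            ):
                hits.append((row["timestamp"], row["user"], domain))
    return hits


if __name__ == "__main__":
    for ts, user, domain in flag_chatgpt_usage("proxy_log.csv"):
        print(f"{ts}  {user}  {domain}")
```

A report like this is only a starting point; the actual blocking would be done at your firewall, proxy, or DNS filter, with exceptions carved out for the job functions that legitimately need the tool.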

And stay tuned to this blog site.  As I continue to learn more about ChatGPT, especially its weaknesses, I will post my findings here as well.

