Saturday, January 7, 2023

Introducing The Latest In Cyber AI: ChatGPT

Here we are in the first full weekend of 2023.  It still sort of feels like the holidays are upon us.  So, you may be wondering at this point what the topic for today will be.  Well, it has to do with a topic that I really love reading about, and even talking about.

But it is sooo misused in the Cyber industry that I get nauseated any time a so-called Cyber expert talks about it.  It is Artificial Intelligence, also known as AI for short.  Long story condensed, it is an area of computer science in which a computer tries to replicate the thinking and reasoning powers of the human brain.

Obviously, we have come nowhere close to explaining how the human brain really works.  At best, we probably understand only .1% of how it truly works.  My father was a neuroscientist at Purdue, and that is his exact quote, if I remember correctly.

But when it comes to AI, one of the key objectives of deploying it in the Cyber world is task automation.  For example, in the world of Pen Testing, there are many mundane tasks which are often quite repetitive.

So the goal here is to get a tool with AI functionality that can do these tasks, so it can free up the Pen Tester’s time to focus on other key areas of the exercise.  There have even been attempts to model the future Cyber threat landscape using AI.  In this regard, it is hoped that we can model what the threat vectors could look like perhaps months from now, or even within a short amount of time, like days.
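To make the task-automation idea concrete, here is a minimal sketch of the kind of repetitive chore a Pen Tester might hand off to a tool: sweeping a host for open TCP ports.  This is a plain scripted example, not an AI tool, and the host and port numbers are hypothetical; only ever scan systems you own or have written permission to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection attempt succeeds
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: sweep a handful of common service ports on the local machine.
print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

An AI-assisted tool would wrap tasks like this one and then prioritize the findings, freeing the tester for the judgment calls that actually need a human.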

But the key thing to remember is that an AI tool has to learn, just like the human brain has to learn from past experiences.  The only way to do this is to feed the AI tool tons of information and data, which are technically known as “datasets”.

The initial learning phase will need quite a bit of this, but as time goes on, the AI tool will start to learn from past trends, provided that you keep giving it these datasets on a 24 X 7 X 365 basis.  But a huge disadvantage here is that you have to make sure that all of your datasets are optimized, or “cleansed”.  This is the only way that you will get unbiased results.
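To show what “cleansing” a dataset can mean in practice, here is a minimal sketch that drops incomplete records and exact duplicates before the data would be fed to a model.  The record fields used here are hypothetical, and real data pipelines do much more than this, but the principle is the same: noisy input skews the results.

```python
def cleanse(records):
    """Drop incomplete and duplicate records from a list of dicts.
    Field names here are hypothetical examples."""
    seen = set()
    cleaned = []
    for rec in records:
        # Skip records with missing values
        if any(v is None or v == "" for v in rec.values()):
            continue
        # Normalize text fields so "Phishing " and "phishing" count as one
        rec = {k: v.lower().strip() if isinstance(v, str) else v
               for k, v in rec.items()}
        key = tuple(sorted(rec.items()))
        if key in seen:  # skip exact duplicates
            continue
        seen.add(key)
        cleaned.append(rec)
    return cleaned

raw = [
    {"event": "Phishing ", "severity": 3},
    {"event": "phishing", "severity": 3},  # duplicate after normalization
    {"event": None, "severity": 1},        # incomplete record
]
print(cleanse(raw))  # only the one clean record remains
```

If the duplicate and incomplete records were left in, the model would over-weight the repeated event and learn from missing data, which is exactly the “Garbage In, Garbage Out” problem.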

If you fail to do this, your results could be greatly skewed in the end.  So as you can imagine, the old saying of “Garbage In, Garbage Out” fits quite nicely with AI.  But Cyber is not the only industry in which AI is being used.  Another big one is the creation of Chatbots.

These are the little dialog boxes that you see in the lower right-hand corner of your screen when you are at a website.  I find them to be rather annoying, so I hardly ever use them, unless I am at a website which I fully trust.

The fundamental idea of a Chatbot is to give you an automated reply when you ask it a question.  But the goal here is not just to give you any kind of canned response; rather, people are trying to design it in such a way that it gives you a realistic, smart, and personalized response, based upon your previous interactions either with it or with customer service.
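The difference between a canned reply and a personalized one can be sketched with a toy rule-based bot.  To be clear, this is nothing like how ChatGPT works internally (that uses a large language model); the rules and replies below are made-up examples, and the only “personalization” is remembering that the visitor has asked before.

```python
class MiniChatbot:
    """A toy rule-based chatbot that remembers prior interactions."""

    def __init__(self):
        self.history = []  # previous user questions

    def reply(self, question):
        q = question.lower()
        self.history.append(q)
        # Canned responses keyed on simple keyword matching
        if "order" in q:
            answer = "You can check your order status on the Orders page."
        elif "refund" in q:
            answer = "Refunds are processed within 5 business days."
        else:
            answer = "Let me connect you with customer service."
        # Personalization: acknowledge repeat visitors instead of a cold reply
        if len(self.history) > 1:
            answer = "Welcome back! " + answer
        return answer

bot = MiniChatbot()
print(bot.reply("Where is my order?"))
print(bot.reply("Can I get a refund?"))  # second question gets personalized
```

A real AI-driven Chatbot replaces the keyword rules with a trained model, but the design goal is the same: the reply should reflect what the system already knows about you.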

One such group that is trying to develop this kind of Chatbot is known as “OpenAI”.  They have developed a new mechanism known as “ChatGPT”, in which the “GPT” stands for “Generative Pre-trained Transformer”.

For right now, it is only available for Beta testing, but even with this crowd, it appears to be quite popular.  The organization is planning to launch a full-blown public version sometime later this year, and it will be known as “ChatGPT-4”.

What separates this Chatbot from the others in the pack is that it can provide very detailed responses, and even admit when it is wrong.  But another very powerful feature (which I think is way cool) is that it can even write source code, and even write content (but to what degree, I do not know).

But with the good comes the bad, namely the disadvantages, especially as they relate to Cyber.  Here are some examples:

*By its very nature, ChatGPT will not create a piece of bad code, or what is known as malware.  It has protocols built into it to prevent this from happening.  But hackers, given their inquisitive minds, have claimed to find a way around these protocols so that they can get ChatGPT to write a piece of malware, which can be deployed anywhere, at any time.  In this instance, the Chatbot is not directly asked to create a piece of malware; rather, it is asked what the steps are to create one.  Answers are provided, even with sample code.

*Business Email Compromise (BEC) attacks are a form of Phishing in which an employee is conned into wiring large sums of money to a phony, offshore account.  Security tools are much better now in terms of quarantining these kinds of emails before they reach the inbox, so the trick is to create a different email every time a new BEC is launched.  But it can take some time for the Cyberattacker to create these kinds of messages so they cannot be tracked.  If ChatGPT were used for this, unique BEC messages could be created literally on the fly.  Also, ChatGPT can be used even for regular Phishing attacks, by getting rid of many of the misspellings, missing words, and typos that are found in today’s Phishing emails.

The illustration below shows how ChatGPT can be used to create a basic BEC email:


(SOURCE:  https://www.darkreading.com/omdia/chatgpt-artificial-intelligence-an-upcoming-cybersecurity-threat-).

My Thoughts On This:

IMHO, we have to take a balanced view of what AI can do for Cyber.  Yes, it can do great things, but on the flip side, it can do many bad things as well, especially if it is used for nefarious purposes.  A good example of this is Deepfakes.  This is when a fake image is created of a real, live person.

This can be used for phony purposes, especially when it comes to election time.  In fact, it has been said that Deepfakes were used in the 2016 Presidential Election campaign in order to raise campaign money.

But given the way the digital world is today, who is to know what is real and what is not?  The lines in deciding this have become so blurred that it can be hard even for a trained Cyber professional to discern the differences.

A disclaimer should also be made at this point about the illustration of the BEC email.  As stated previously, you cannot simply ask ChatGPT directly to create one.  Rather, you have to keep prompting it with many questions so it can create one for you.  By doing it this way, it is also learning, and as a result, it can create a BEC email for you in a shorter period of time, and without having to be asked so many questions.


