Saturday, July 15, 2023

6 Ways In Which Generative AI Can Be Used In The SOC

Just last Tuesday, I completed a rather intense, six-hour-long Generative AI course from Microsoft.  Not only that, but the exam was also very challenging.  I would put it on par with any of the (ISC)2 exams.  But anyway, probably the biggest buzzword right now in the world of tech (not just Cyber) is “Generative AI”.  What is it, you may be asking?

Here is a technical definition of it:

“Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.”

(SOURCE:  https://research.ibm.com/blog/what-is-generative-AI)

So in other words, you feed a Machine Learning (ML) system some data, and it gives you an output.  Honestly, there is really nothing new about that; this kind of technology has been under research and development for years.  But what is different this time is that it will create new content for you, based upon the query that you submit to it.

One of the best examples of this is ChatGPT.  It was created by OpenAI, and uses the GPT-4 model (which is a Large Language Model, or LLM for short).  You simply ask it, for example, to write a poem, and it will create one for you.  Of course, it can create more complex outputs for you as well, depending on what you ask it to do.

So, as you can see, Generative AI has the potential to be applied to a lot of industries.  One such area is (and you guessed it correctly) Cybersecurity.  AI and ML are already being used here to some degree when it comes to conducting routine and mundane tasks (good examples of this are Penetration Testing and Threat Hunting).

But another area in Cyber where Generative AI has very strong potential is the Security Operations Center, also known as the “SOC” for short.  This would be yet another way in which to keep an SOC modern and updated at all times.  Other non-AI and ML recommendations can be seen at this link:

https://www.darkreading.com/vulnerabilities-threats/5-tips-for-modernizing-your-security-operations-center-strategy

So, how can Generative AI be used in the SOC?  Here are some tips, based upon the job titles of the people working there:

1)     The Frontline Folks:

These are the people who are tasked with handling all of the warnings and alerts that come in.  It is their job to filter through them, and to sort out what is real and what is not.  While this might seem time-consuming, in reality it is not nearly as bad now.  This is because with the advent of the SIEM and the AI and ML tools that are incorporated into it, a lot of this is now done automatically, based upon the rules and configurations that have been deployed into the system.  But even despite this, false positives can still slip through, leaving the team to filter them out manually.  Generative AI can take this one step further: the team can submit specific queries in order to surface the alerts and warnings that are truly legitimate.  This will give the team a better understanding of what they could be imminently facing.
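
To make this a bit more concrete, here is a rough sketch (in Python) of what such a query could look like.  It assumes the SOC has access to OpenAI's API through the openai Python package (as it looked in mid-2023); the alert fields and the wording of the question are made-up examples, not any kind of standard.

# A rough sketch of Generative AI-assisted alert triage. Assumes the "openai"
# Python package (pre-1.0 API) and an OPENAI_API_KEY environment variable;
# the alert fields and prompt wording are hypothetical examples.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# A hypothetical alert as it might arrive from the SIEM.
alert = {
    "rule": "Multiple failed logins followed by a success",
    "source_ip": "203.0.113.45",
    "user": "jsmith",
    "count": 27,
    "window_minutes": 10,
}

prompt = (
    "You are assisting a SOC analyst. Given the alert below, say whether it "
    "is more likely a true positive or a false positive, and explain why in "
    "two or three sentences.\n\n"
    f"Alert details: {alert}"
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the triage answer as consistent as possible
)

print(response["choices"][0]["message"]["content"])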

2)     The Threat Researcher:

I have written about this role before, and essentially, these are the people who gather all of the intel that they can, and from there, try to make hypotheses about the threats that are at hand, and even possible ones down the road.  This is by no means an easy job, and it takes a lot of analysis of the data, both quantitative and qualitative in nature.  It can be truly painstaking work, but Generative AI can be used to ask questions about the data that is available, and from there, do a more sophisticated analysis, or make even more realistic projections as to what future threat variants could possibly look like.
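
Just to illustrate this one as well, here is a minimal sketch of how a researcher could hand a few intel notes to the model and ask it for hypotheses.  The notes below are invented for illustration, and the API call follows the same assumed OpenAI setup as the triage sketch above.

# A rough sketch of using Generative AI to help form threat hypotheses.
# The intelligence notes are invented; same assumed "openai" setup as above.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

intel_notes = [
    "Phishing kit X now bundles an infostealer that targets browser session tokens.",
    "Ransomware group Y has shifted from encryption to pure data extortion.",
    "Several vendors report abuse of signed drivers to disable EDR agents.",
]

prompt = (
    "You are supporting a threat researcher. Based on the intelligence notes "
    "below, propose three hypotheses about how these threats could evolve "
    "over the next six months, and note what data would confirm each one.\n\n"
    + "\n".join(f"- {note}" for note in intel_notes)
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

print(response["choices"][0]["message"]["content"])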

3)     Other Use Cases:

There are other areas in which Generative AI can be used in the SOC:

*Risk Assessments:  At the present time, this can be a very laborious and time-consuming process.  But with AI and ML, the process can be greatly sped up, especially when it comes to analyzing which of the digital and physical assets are most at risk (a rough sketch of what I mean appears after this list).

*Threat Content Management:  This is where the AI or ML system can also collect data and information on a real-time basis, and add it automatically to its training sets.  This will reduce the need for human involvement.

*Customer Service:  The best examples of this are chatbots.  These tools are seeing a huge explosion right now, and will continue to do so as long as AI and ML are around.  Rather than having to wait in a queue to talk to somebody in the SOC, it is hoped that the chatbot can alleviate these wait times by providing “smart answers” to what the client is asking about.
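
Going back to the Risk Assessments point above, here is the kind of thing I mean.  This is only a sketch: the asset inventory is invented, and it assumes the same OpenAI Python setup used in the earlier sketches.

# A rough sketch of Generative AI-assisted risk assessment: a small, invented
# asset inventory is handed to the model, which is asked to rank the assets
# by exposure. Same assumed "openai" setup as the earlier sketches.
import json
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

assets = [
    {"name": "payroll-db", "type": "database", "internet_facing": False, "patch_lag_days": 40},
    {"name": "vpn-gateway", "type": "network appliance", "internet_facing": True, "patch_lag_days": 95},
    {"name": "marketing-wordpress", "type": "web server", "internet_facing": True, "patch_lag_days": 10},
]

prompt = (
    "Rank the following assets from most at risk to least at risk, and give "
    "a one-sentence justification for each ranking.\n\n"
    + json.dumps(assets, indent=2)
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

print(response["choices"][0]["message"]["content"])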

My Thoughts On This:

While all of this certainly sounds advantageous, keep in mind that Generative AI is still a piece of technology, and it has its shortcomings just like everything else.  It has to be fed the right data in order to generate the right outputs.  Also, a Generative AI tool is only as good as the questions that are being asked of it.

If you want detailed answers, you have to structure your questions (or queries) in a certain way.  Believe it or not, this is an up-and-coming field, known as “Prompt Engineering”.  But there is also a flip side to this, and that is where the Cyberattacker can use Generative AI for malicious purposes.
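
To show what I mean by structuring your questions, compare a vague prompt with a structured one.  Both are made-up examples, but the second one spells out the role, the scope, and the output format, and that kind of structure is what tends to produce an answer an analyst can actually act on.

# A small illustration of why prompt structure matters; both prompts are
# invented examples.
vague_prompt = "Tell me about this alert."

structured_prompt = (
    "You are a senior SOC analyst. Review the alert below and answer in "
    "three parts: (1) the likely cause, (2) a severity rating on a 1-5 scale "
    "with a one-line justification, and (3) the single next step the on-call "
    "analyst should take.\n\n"
    "Alert: 27 failed logins for user 'jsmith' from 203.0.113.45, followed "
    "by a successful login, all within 10 minutes."
)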

There is already a great deal of fear in this regard, as ChatGPT could potentially be used to create malicious code from which Ransomware attacks can be launched.

In my view, we will never replace the human brain.  AI and ML can only augment processes, not replace them.  We still need to have the human element around.  Using Generative AI will bring us one step closer to one of the ultimate goals in Cyber:  to reduce the time it takes to detect and respond to threats.
