If there is anything making the news headlines on a non-political note, it is Generative AI. While the applications keep on growing, Nvidia keeps on making new GPUs, and the algorithms get better all the time, there is always that thirst to push Generative AI even further, beyond what it can do now. While this is a need for the many industries that currently use it, it is even more pronounced in the world of Cybersecurity.
At the present time, Generative AI is being used for the following purposes:
*Automation of repetitive tasks, such as those found in Penetration Testing and Threat Hunting.
*Filtering out false positives and only presenting the real threats to the IT Security team via the SIEM.
*Wherever possible, using it for staff augmentation purposes, such as using a chatbot as the first point of contact with a prospect or a customer.
*Being used in Forensics Analysis to take a much deeper dive into the latent evidence that is collected.
But as mentioned, those of us in Cyber want Generative AI to do more than this. In fact, there is a technical term that has now been coined for it: "Critical Thinking AI". Meaning, can we make Generative AI think and reason on its own, just like the human brain, without the need to pump gargantuan datasets into it?
The answer to this is a blatant "No". We will never understand the human brain at 100%, the way we can the other major organs of the human body. At most, we will perhaps get to 0.0005% of the way there. But even given this extremely low margin, there is still some hope that we can push what we have now just a little bit further. Here are some examples of what people are thinking:
*Having Generative AI train itself to get rid of "Hallucinations". You are probably wondering what this is, exactly. Well, here is a good definition of it:
"AI hallucinations are inaccurate or misleading results that AI models generate. They can occur when the model generates a response that's statistically similar to factually correct data, but is otherwise false." (SOURCE: Google Search).
A good example of this is the chatbots that are heavily used in the healthcare industry. Suppose you have a virtual appointment, and rather than talking to a real doctor, you are instead talking to a "Digital Person". You tell it the symptoms you are feeling. From here, it will take this information, go to its database, and try to find a name for the ailment you might be facing. For instance, is it the cold, the flu, or even worse, COVID-19? While to some degree this "Digital Person" will be able to provide an answer, your next question will be: "What do I take for it?". Suppose it comes back and says that you need to take Losartan, which is a blood pressure medication. Of course, this is false, because a blood pressure pill has nothing to do with any of these ailments. This is called the "Hallucination Effect". Meaning, the Generative AI system has the datasets that it needs to provide a more or less accurate prescription, but it does not. Instead, it gives a false answer. So, a future goal of "Critical Thinking AI" would be to have the Digital Person quickly realize this mistake on its own and come back with the appropriate medication instead (a rough sketch of what such a self-check could look like appears after this list). The ultimate goal here is to do all of this without any sort of human intervention.
*Phishing still remains the main threat variant of today, coming in all kinds of flavors. Generative AI is now being used to filter for these attacks, and from what I know, it seems to be doing a somewhat good job of it. But consider the case of a Business Email Compromise (BEC) attack. In this scenario, the administrative assistant receives a fake, albeit very convincing, email from the CEO demanding that a large sum of money be transferred to a customer as a payment. Of course, if any money is ever sent, it would be deposited into a phony offshore account in China. If the administrative assistant were to notice the nuances of this particular email, they would then backtrack on their own to determine its legitimacy. But this, of course, can take time. So, the goal of "Critical Thinking AI" in this case would be to have the Generative AI model look into all of this on its own (when queried to do so), determine the origin of the email, and give a finding back to the administrative assistant (the second sketch after this list shows the kinds of checks this could involve).
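To make the first scenario a bit more concrete, here is a minimal Python sketch of what a self-checking loop against hallucinations could look like. Everything in it is an illustrative assumption: ask_model() is a hypothetical stand-in for whatever Generative AI API is actually in use, and the tiny treatment table stands in for a real medical knowledge base.

# A rough sketch only: ask_model() is a hypothetical stand-in for a real
# Generative AI API call, and VALID_TREATMENTS stands in for a proper
# medical knowledge base.

VALID_TREATMENTS = {
    "common cold": {"rest", "fluids", "decongestant"},
    "flu": {"rest", "fluids", "antiviral"},
    "covid-19": {"rest", "fluids", "antiviral"},
}

def ask_model(prompt: str) -> str:
    """Hypothetical model call; here it just returns canned replies for the demo."""
    return "an antiviral" if "treatment" in prompt.lower() else "flu"

def answer_with_self_check(symptoms: str, max_retries: int = 2) -> str:
    diagnosis = ask_model(f"Name the most likely ailment for: {symptoms}").strip().lower()
    allowed = VALID_TREATMENTS.get(diagnosis, set())
    feedback = ""
    for _ in range(max_retries + 1):
        treatment = ask_model(f"Suggest one treatment for {diagnosis}. {feedback}").strip().lower()
        # The "critical thinking" step: check the draft answer before handing it back.
        if any(option in treatment for option in allowed):
            return f"Likely {diagnosis}; suggested treatment: {treatment}"
        feedback = (f"Your previous suggestion '{treatment}' is not a recognized "
                    f"treatment for {diagnosis}. Try again.")
    return "No verified answer; escalate to a human clinician."

print(answer_with_self_check("fever, body aches, dry cough"))

The point is not the toy rules themselves, but the shape of the loop: the model's first draft gets checked against something authoritative before anything reaches the patient.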
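And for the second scenario, here is a simplified, rule-based Python sketch of the kinds of checks a "Critical Thinking" model would ideally run on its own over a suspicious wire-transfer request. The company domain, the keyword list, and the sample message are all made up for illustration; a real deployment would combine these signals with the model's own analysis of the message.

from email import message_from_string
from email.utils import parseaddr

# Illustrative assumptions: the legitimate company domain and a few
# classic BEC pressure phrases.
COMPANY_DOMAIN = "example.com"
URGENCY_KEYWORDS = ("wire transfer", "urgent", "immediately", "confidential")

def flag_possible_bec(raw_message: str) -> list[str]:
    msg = message_from_string(raw_message)
    findings = []

    display_name, from_addr = parseaddr(msg.get("From", ""))
    _, reply_to_addr = parseaddr(msg.get("Reply-To", ""))

    # The sender claims to be an executive, but the address is not on the company domain.
    if not from_addr.endswith("@" + COMPANY_DOMAIN):
        findings.append(f"Sender '{display_name}' <{from_addr}> is outside {COMPANY_DOMAIN}")

    # Replies are silently redirected to a different mailbox.
    if reply_to_addr and reply_to_addr != from_addr:
        findings.append(f"Reply-To ({reply_to_addr}) does not match From ({from_addr})")

    # Classic BEC language: urgency plus a payment request.
    body = msg.get_payload()
    if isinstance(body, str):
        hits = [k for k in URGENCY_KEYWORDS if k in body.lower()]
        if hits:
            findings.append(f"Pressure/payment language detected: {hits}")

    return findings

raw = """From: "The CEO" <ceo@examp1e-corp.biz>
Reply-To: payments@offshore-account.cn
Subject: Urgent wire transfer

Please wire $250,000 immediately. Keep this confidential.
"""
for finding in flag_possible_bec(raw):
    print("FINDING:", finding)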
My Thoughts On This:
So, how can we get to the point of "Critical Thinking AI"? Well, first, it is important to note that the scenarios I depicted above are purely fictional in terms of what we are expecting the Generative AI model to do. We could get close to having it do them, but the reality is that human intervention will always be needed at some point in time.
But to reach that threshold, the one missing thing that we are not providing to the Generative AI model as we pump in large amounts of datasets is "Contextual Data". This can be technically defined as follows:
"Contextual data is the background information that provides a broader understanding of an event, person, or item. This data is used for framing what you know in a larger picture." (SOURCE: https://www.sisense.com/glossary/contextual-data/)
For example, going back to our chatbot example: all that we feed into the "Digital Person" are quantitative and qualitative datasets in order to produce a specific answer. But what is also needed is to train the Generative AI model to understand and infer why it is giving the answer that it is. So in this case, had contextual data been fed into it, it probably would have given the correct medication the first time around. A rough sketch of what feeding in this kind of context could look like follows below.
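Here is a minimal Python sketch of that idea: instead of sending the raw symptoms alone, the prompt is built up with the surrounding contextual data, and the model is explicitly asked to justify its answer against that context. The function name and the field names are purely illustrative assumptions, not any particular product's API.

# A minimal sketch of folding "contextual data" into the prompt, rather than
# sending the raw symptoms alone. build_contextual_prompt() and the field
# names are illustrative assumptions.

def build_contextual_prompt(symptoms: str, context: dict) -> str:
    context_block = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return (
        "You are assisting with a preliminary triage.\n"
        f"Reported symptoms: {symptoms}\n"
        "Relevant background (contextual data):\n"
        f"{context_block}\n"
        "State the most likely ailment, a suggested treatment, and, importantly, "
        "explain WHY that treatment fits the context above."
    )

prompt = build_contextual_prompt(
    symptoms="fever, body aches, dry cough for three days",
    context={
        "age": 42,
        "current medications": "losartan (for blood pressure)",
        "allergies": "penicillin",
        "recent exposure": "household member tested positive for influenza",
    },
)
print(prompt)  # This prompt would then be sent to the Generative AI model.

The design point is simply that the model is never asked for an answer in isolation; it is always asked for an answer plus the reasoning that ties it back to the context it was given.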
If we can ever reach the threshold of "Critical Thinking AI", we might just be able to finally understand how we can use the good side of Generative AI to fight its evil twin. More information about this can be seen at the link below:
https://kpmg.com/nl/en/home/insights/2024/06/rethinking-cybersecurity-ai.html