Back in the day, when I was in graduate school during the Internet Boom, the two big buzzwords were "B2B" and "B2C". These stand for "Business to Business" and "Business to Consumer", respectively. Fast forward from those times to now, and the world of Cybersecurity is filled with all kinds of techno jargon.
With the explosion of Generative AI, the dictionary expands with a newer one: "M2M", or "Machine to Machine". This term has recently given birth to an even newer one: "NHI", or "Non-Human Identities". These are the Chatbots, Virtual Personal Assistants, and even the Digital Personalities that you engage with instead of speaking to a real human being. These kinds of NHIs are now dominating the world. In fact, it has even been cited that they outnumber actual human identities by a factor of 50:1.
When you communicate with an NHI, you are giving away your own personal information and data. In turn, the Generative AI models that power them store this information, not only to recall it the next time you engage with them, but also to train their algorithms so that they stay fully optimized.
Apart from all of this being transmitted back and forth, there is now a cry in the Generative AI world that there needs to be a set of best practices and standards that businesses must adhere to if they make use of customer-facing NHIs. Here are the factors that are driving this movement:
1) Complexity:
Gone are the days when businesses just had an On Premises Infrastructure. As I have written about in previous blogs, there is now a strong movement to the Cloud, using a major platform such as AWS or Microsoft Azure. But there are still some CISOs who relish the old-fashioned ways of doing things, so they opt for a hybrid approach, which is a combination of both On Premises and the Cloud. This kind of blend does not make things any easier; in fact, it makes them much more complex for the IT Security team to manage. If an NHI is created and deployed across both worlds, then everybody needs to stick to the same set of rules, especially when it comes to data/information storage and processing.
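One way to picture this "same rules in both worlds" idea is a simple policy check. This is only a minimal sketch, assuming hypothetical NHI account records and made-up policy fields; it is not any real cloud or On Premises API:

```python
# Hypothetical sketch: verify that NHI accounts deployed On Premises and in
# the Cloud all follow one shared set of data-handling rules. The account
# records and policy fields below are illustrative, not from a real platform.

REQUIRED_POLICY = {
    "encryption_at_rest": True,
    "data_retention_days": 90,   # the same retention rule in both worlds
    "logging_enabled": True,
}

def violations(nhi_accounts):
    """Return (account_name, field) pairs where an NHI breaks the shared policy."""
    issues = []
    for acct in nhi_accounts:
        for policy_field, required in REQUIRED_POLICY.items():
            if acct.get(policy_field) != required:
                issues.append((acct["name"], policy_field))
    return issues

accounts = [
    {"name": "chatbot-onprem", "environment": "on-prem",
     "encryption_at_rest": True, "data_retention_days": 90, "logging_enabled": True},
    {"name": "assistant-cloud", "environment": "cloud",
     "encryption_at_rest": True, "data_retention_days": 365, "logging_enabled": True},
]

# The cloud-side assistant keeps data longer than the shared rule allows.
print(violations(accounts))
```

The point of the sketch is that one policy table governs both environments, so a mismatch surfaces no matter where the NHI lives.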
2) Automation:
One of the biggest benefits that Generative AI brings to the table is that it can be used to automate processes and functions. For example, this can be seen with robotic arms in a car assembly plant, and even in Cybersecurity, where it is being used in Penetration Testing and Threat Hunting for the more mundane and routine tasks. But there is a key problem here: you simply cannot rely upon automation 100% (at least in my view). For example, what if a Data Exfiltration Attack were to occur against an NHI and there were no human intervention? The bottom line is that it would go completely unnoticed until it is way too late. Whether or not to automate completely is currently one of the biggest debates in the world of Generative AI, and it will continue for a long time.
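The human-intervention alternative can be sketched roughly as follows. Everything here (the class names, the finding fields, the analyst label) is hypothetical, purely to show automation flagging an event while a human signs off before any remediation runs:

```python
# Illustrative human-in-the-loop sketch: automation detects a suspicious
# event (e.g. possible Data Exfiltration from an NHI), but instead of acting
# silently it queues the finding for a human analyst to approve.

from dataclasses import dataclass
from typing import List

@dataclass
class Finding:
    nhi: str
    detail: str
    approved_by_human: bool = False

class ReviewQueue:
    def __init__(self):
        self.pending: List[Finding] = []

    def flag(self, nhi, detail):
        """Automation flags the event; nothing is remediated yet."""
        finding = Finding(nhi, detail)
        self.pending.append(finding)
        return finding

    def approve(self, finding, analyst):
        """A human signs off before any remediation is allowed to run."""
        finding.approved_by_human = True
        return f"remediation for {finding.nhi} approved by {analyst}"

queue = ReviewQueue()
f = queue.flag("support-chatbot", "unusual outbound data volume")
print(f.approved_by_human)            # automation alone has not acted
print(queue.approve(f, "analyst-1"))  # the human is the gate
```

The design choice being illustrated is simply that detection is automated but the decision to act stays with a person, which is the compromise position in the debate above.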
3) Easy Prey:
An NHI in any form is very easy prey for the Cyberattacker to go after. They may not attack it directly, but they can very easily go after the connections it is linked to. These are often not very secure, and once a Cyberattacker is able to penetrate just one of these lines of communication, they can wreak all sorts of havoc in a short matter of time.
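To make this concrete, auditing the links an NHI depends on, rather than just the NHI itself, could look something like the following. The connection records and the threshold are made up for illustration:

```python
# Hypothetical sketch: enumerate the connections an NHI is linked to and
# flag the weak ones, since attackers often go after those links rather
# than the NHI itself. The records below are invented for illustration.

connections = [
    {"target": "crm-api",      "encrypted": True,  "credential_age_days": 30},
    {"target": "legacy-db",    "encrypted": False, "credential_age_days": 400},
    {"target": "ticket-queue", "encrypted": True,  "credential_age_days": 200},
]

def weak_links(conns, max_cred_age=90):
    """Flag unencrypted links or stale credentials among an NHI's connections."""
    return [c["target"] for c in conns
            if not c["encrypted"] or c["credential_age_days"] > max_cred_age]

print(weak_links(connections))  # ['legacy-db', 'ticket-queue']
```

Even a crude inventory like this shows how one weak line of communication (here, the unencrypted legacy database link) stands out once the connections are enumerated.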
4) Mergers and Acquisitions:
There are a lot of buyouts taking place among Cyber Vendors. Some of these include the following:
- Authomize being purchased by Delinea.
- Venafi being purchased by CyberArk.
It should be noted that both of the above buyers are Privileged Access Management (PAM) vendors. The point here is that with all these mergers happening, there is a lot of information and data being transferred, especially within the Generative AI models and the NHIs that they power. It is essential here to have a standard checklist that both the buyer and the seller must abide by to make sure that nothing is leaked out, intentionally or not.
My Thoughts on This:
As for me, I am still very old-fashioned. Although I am a technical writer in the world of technology, I absolutely hate technology. I like the old way of doing things, and I am not sure if I am up for all this Generative AI stuff. For example, I would much rather see my doctor in person than chat with a Digital Personality.
But if I must bend on this, we cannot depend upon Generative AI on its own. We need to have human intervention here. As for the best practices and standards, it is about time that Corporate America did something about them. The Federal Government has done something, but it is way too slow to keep up with the rapid changes that are happening in Generative AI.
To use the old proverb, "it's going to take a village" for all of this to happen. It will require all-hands-on-deck cooperation between the private and public sectors, as well as academia. But even if we were to produce such a set of best practices and standards, who is going to enforce it? The FBI? The DHS? These are tough questions that still must be answered, and as a society, we must figure all of this out soon.