Sunday, October 13, 2024

4 Grave Risks Of Using Non-Human Identities & How To Fix Them

 


As the world of Generative AI continues to explode, a new trend is emerging:  The Non-Human Identity.  You may be wondering what it is.  Well, here is a good definition of it:

“Non-human identities (NHIs) are digital entities used to represent machines, applications, and automated processes within an IT infrastructure. Unlike human identities, tied to individual users, NHIs facilitate machine-to-machine interactions and perform repetitive tasks without human intervention.”

(SOURCE:  What is a Non-Human Identity? | Silverfort Glossary)

Remember how I have written about the Digital Person before?  Essentially, this is an avatar, or even a chatbot, that is given human-like qualities in order to interact with you.  Instead of typing in a message, you can talk to it and have a conversation with it. 

One of the best examples of this is its use in customer service.  Instead of waiting on hold for hours on end to speak with an actual human being, you can summon up the Digital Person within a matter of seconds. 

If you are not satisfied with the answers, you can always ask the Digital Person to refer you to an actual representative.  The Digital Person is an example of a Non-Human Identity, also known as “NHI” for short.  While you can call the Digital Person by a name, in the grand scheme of things, it really does not have any form of identification.

NHIs can be a particularly useful tool to have around, especially when it comes to process automation and augmentation, and to monitoring all of the interconnections that exist in the world today.  In fact, it has been estimated that for every 1,000 people, there are some 10,000 of these kinds of connections.  It is almost impossible for any human being to keep close tabs on all of them, and that is why the NHI is so beneficial.

But despite this, there are certain risks that come with using this advancement in Generative AI.  Here is a sampling of some of the major ones:

1)     Expansion of the attack surface:

In the world of Cybersecurity, just about everybody has heard of this term.  For example, if you have too many network security devices, this can expand your attack surface, in direct contradiction of the old proverb that “more is better.”  The same can also be said of the NHI.  While deploying many of them could prove to be beneficial, in the intermediate and long term it also greatly expands the attack surface of all your interconnections.  Since these are mostly powered by Generative AI, there are still vulnerabilities in them that the Cyberattacker can exploit very quickly.

2)     Hard to see:

It is important to note that many of the NHIs that are deployed tend to function and operate in the background.  As a result, they tend to be forgotten about, especially when it comes time to upgrade and/or optimize them.  This is yet another blind spot that the Cyberattacker knows very well and can thus use to quickly launch a malicious payload into them.  The net effect of this is a negative, cascading impact across your entire IT/Network Infrastructure in a matter of mere minutes.

3)     Violation of PAM:

This is an acronym that stands for “Privileged Access Management.”  These are the rights, privileges, and permissions that are assigned to superuser accounts, such as those of a network or database administrator.  They will of course have elevated access to keep the networks and databases running smoothly.  But these same kinds of PAM-based accounts are also assigned to NHIs so that they can carry out automated tasks without human intervention.  Once again, the IT Security team tends to forget about this as well, and the consequence is that the Cyberattacker can break into these accounts very quickly and gain immediate access to anything that they want.  (A simple audit sketch that addresses this risk and the previous one appears right after this list.)

4)     Third parties:

In today’s world, many businesses outsource many of their functions to third-party providers.  And now, instead of having direct contact with them, the entity that hired them uses an NHI for this communication.  While this can save time to focus on more pressing issues, there is also an inherent risk to it.  For example, if the third-party supplier is hit with a security breach, it will also impact the NHI that is connected to it, and in turn, it will have an impact on your business.  This is yet another form of a Supply Chain Attack, but on a different kind of level.
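To make risks #2 and #3 more concrete, here is a minimal sketch of how you could audit your NHIs for exactly these blind spots.  It is purely illustrative:  the inventory, account names, roles, and thresholds are all hypothetical, and in a real deployment the inventory would come from your identity provider or Cloud platform rather than a hardcoded list.

```python
# Minimal, illustrative sketch: flag NHIs (service accounts) that hold
# privileged roles or have gone quiet. All names, roles, and dates below
# are hypothetical sample data, not pulled from any real system.
from datetime import datetime, timedelta

nhi_inventory = [
    {"name": "backup-bot",  "role": "admin",     "last_seen": "2024-04-01"},
    {"name": "chat-agent",  "role": "read-only", "last_seen": "2024-10-10"},
    {"name": "etl-runner",  "role": "db-admin",  "last_seen": "2023-12-15"},
]

PRIVILEGED_ROLES = {"admin", "db-admin"}   # roles that warrant a PAM review
STALE_AFTER = timedelta(days=90)           # no activity in 90 days = blind spot

today = datetime(2024, 10, 13)
for nhi in nhi_inventory:
    last_seen = datetime.strptime(nhi["last_seen"], "%Y-%m-%d")
    if nhi["role"] in PRIVILEGED_ROLES:
        print(f"REVIEW: {nhi['name']} holds privileged role '{nhi['role']}'")
    if today - last_seen > STALE_AFTER:
        print(f"STALE:  {nhi['name']} inactive since {nhi['last_seen']}")
```

Running something like this on a schedule turns the “hard to see” problem into a routine report that the IT Security team can actually act upon.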

My Thoughts on This:

Here are some things I recommend that you can do to keep an NHI from becoming an unintended threat to your business:

Ø  To keep your attack surface as low as possible, deploy NHIs only as you absolutely need them.  It is important to get away from the thinking that deploying a lot of them will make you more productive.  It simply will not.

 

Ø  Having fewer NHIs will also make it easier for you to keep an eye on them.  But in the end, no matter how many of them you have, you should have a stipulation in your security policy that a constant level of visibility must be maintained over them.

 

Ø  Always make sure that the Generative AI models that you use to power your NHIs are updated with the latest security patches.  If you have a Cloud-based deployment, this should be automatically taken care of for you.

 

Ø  Watch the level of rights, permissions, and privileges that you assign to the NHIs.  Just like you would for an actual human employee, assign only what is needed, following the concept of Least Privilege.  (A brief sketch of this appears right after this list.)

 

Ø  You should always thoroughly vet your third-party suppliers, but in case you use an NHI to communicate with them, make sure that they have at least the same level of controls that you have for your own IT/Network Infrastructure.  Also, share any security updates with them, so that they can be on the same page as you as well.
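To show what the Least Privilege bullet above could look like in practice, here is a minimal sketch in which each automated task has a predefined, minimal permission profile, and an NHI can only ever be granted that profile.  The task names and permission strings are hypothetical, purely for illustration.

```python
# Minimal, illustrative sketch of Least Privilege for NHIs: each automated
# task maps to only the permissions it actually needs. Task names and
# permission strings are hypothetical.
TASK_PERMISSIONS = {
    "nightly-backup": {"storage:read", "storage:write-backups"},
    "report-mailer":  {"reports:read", "email:send"},
}

def grant_permissions(nhi_name, task):
    """Grant an NHI only the minimal permission set for its task."""
    perms = TASK_PERMISSIONS.get(task)
    if perms is None:
        # No profile defined: refuse rather than default to broad access.
        raise ValueError(f"No permission profile defined for task '{task}'")
    print(f"{nhi_name}: granted {sorted(perms)} for task '{task}'")
    return perms

# The backup bot gets backup permissions only, never a blanket admin role.
grant_permissions("backup-bot", "nightly-backup")
```

The key design choice here is to fail closed:  if a task has no defined profile, the NHI gets nothing at all, instead of inheriting a superuser account by default.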

 

The fundamental key here is to always be as proactive as possible when using Generative AI.  The downside is that the models are evolving so rapidly that this can be difficult to do.  But it is always important to do the best that you can in this regard.

Sunday, October 6, 2024

The Evolution Of A Federal Generative AI Bill: What Needs To Be Done

 


One thing that I have written about extensively is the data privacy laws that not only the United States but also other nations have enacted.  The intention of these laws is to give consumers the right to know what is happening with their datasets, and also to make sure that the companies that are the stewards of them have deployed more than enough controls to keep those datasets as protected as possible.

While this is of course a huge step forward, there is just one huge problem:  There is no uniformity amongst them.  Take, for example, our own 50 states.  Because of this lack of centralization, each one of them is producing its own version of a data privacy law. 

So, if a business were to conduct financial transactions with customers in all of the states, is it bound by each one?  This is a very murky area in which there are no clear-cut answers, and unfortunately, there will not be any for a long time to come.

Now, as Generative AI comes into the fold of our society, it appears that each state is producing its own laws in an effort to protect consumers and their datasets, in the very same manner as it has approached data privacy laws.

One such example of this is California.  A number of years ago, it passed the CCPA (California Consumer Privacy Act).  Now, it has produced its own Generative AI bill, which was designed to do the following:

*Create a comprehensive regulatory framework to govern the use of Generative AI, in all foreseeable aspects.

*Create a set of standards and best practices to ensure that the datasets the models use are not prone to security breaches.

This became officially known as Senate Bill 1047.  But believe it or not, the governor of California, Gavin Newsom, vetoed this bill.  Why did he do this, you might be asking?  Well, here are his direct words:

“While well-intentioned, SB-1047 does not take into account whether an AI system is deployed in high-risk environments, or involves critical decision-making or the use of sensitive data,” Newsom wrote. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

(SOURCE:  https://www.darkreading.com/application-security/calif-gov-vetoes-ai-safety-bill)

Here are some other reasons why he vetoed this bill:

*The emphasis of it was purely on large-scale Generative AI models.  There also needs to be a focus on more specialized models, which serve different purposes.

*The bill appeared to be too stringent to the governor.  His reasoning was that it could stifle innovation and ideas.  To counter this, he proposed that a much more flexible approach be taken, and that each model be considered on a case-by-case basis.

*The bill did not address the deployment of Generative AI in those environments that are deemed to be of high risk. 

As a result of this, the following pieces of advice were offered for consideration:

*Create a joint task force that includes a representative sample of everyone who will be involved in this process.  This will include people all the way from consumers to the private sector, to academia, and all levels of both the state and federal governments.

*The focus of Generative AI regulation should not be on the size of the models and the resources that they use; rather, there needs to be a huge emphasis on the risks that are borne from using AI to begin with.

*Implement a process whereby any passed legislation on Generative AI can be updated as the technology evolves and advances.  Of course, as we know from the efforts in doing this for Cybersecurity, this is a very tall order to fill.  In other words, the passage of any updates simply will not keep up with the pace of the rapid advances being made in Generative AI.

*It is highly recommended that any new bill that is presented to the governor for signing be modeled after the legislation that the European Union (EU) recently passed.  This is known as the “EU Artificial Intelligence Act,” and it is highly regarded as a comprehensive approach to regulating AI.  More details about this can be seen at the link below:

https://artificialintelligenceact.eu/

My Thoughts On This:

The bill that was vetoed by the governor of California was officially known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.”  Many people supported the passage of this bill (even Elon Musk), but there was also a fair share that opposed it as well.  It has been viewed as a good step forward, but of course, a lot of work still needs to be done on it, as I have alluded to previously.

The bottom line is that creating any kind of regulatory bill on Generative AI is going to be very complicated.  For example, it is not just a few segments of American society that are impacted by Generative AI.  Rather, it is the entire population and almost every business. 

Also, there are too many unknowns and variables that are involved in the actual creation of a Generative AI model, and the list here will just keep on growing.

On a very macro level, my thinking is that we simply need to have a Department of Cybersecurity created, in the very same manner that the Department of Homeland Security was created right after 9/11.  But we should not wait for a disaster to happen in Generative AI for this to come about.  The federal government needs to act now in order to start this effort.

Generative AI would also fit under this newly created department.  This will not only lead to a centralization of the data privacy laws, but it will also lead to the same result for Generative AI.  Apart from this, we need to start simple first. 

Let us draft a bill that details a framework for all aspects of AI, such as Computer Vision, Natural Language Processing, Large Language Models, Neural Networks, Machine Learning, etc.

The bottom line here is that Generative AI is not a field all in its own world.  It includes all of these aspects.  What impacts one area will have a cascading effect on the others as well.  Then, over time, updates should be added to this framework.  Although this will take a very long time to accomplish, I am a huge proponent of it.
