Monday, October 28, 2024

What "End Of Life" Means, And The Cyber Risks Of It

 


If you are an ardent user of Windows, you know that Microsoft typically retires their products after a certain period of time.  The good thing here is that they give you plenty of time notifying their customers, and even after a product has been discontinued, they still offer some level of support for a brief period.

While it is a good and even necessary thing to do this, unbelievably, people still use outdated software packages even after they have been discontinued.

A notable example of this is one of my cousins.  She works for the Federal Government in a high-level role, and despite this, unbelievably, they are still using Windows 7.  Not only is this a bad practice, but it is a very grave Cybersecurity Risk as well. 

If you are using an Operating System (OS) that no longer offers any type or kind of software upgrades or patches, you are leaving many back doors open that the Cyberattacker can very easily penetrate through and wreak all kinds of havoc.

The typical example of this is Data Exfiltration, when the Cyberattacker will steal the datasets in a very covert way.  They will do this very slowly, bit by bit, and when you do notice something is missing, it will very often be too late to do anything about it. 

Likely, it will have been sold on the Dark Web, or the Cyberattacker is getting ready to launch some kind of Ransomware or Extortion like attack. 

So, let us explore some reasons wat businesses still like to keep outdated software, even though they know they need to upgrade at some point in time.  Here are some findings:

1)     Money:

This is the biggest reason.  True, now, things are tight with companies right now, so most of them do not want to expend the extra money to upgrade, and keep things modernized.  But the truth of the matter is that if you use outdated software and hardware well beyond where no support is provided, once again you are taking a huge Cyber Risk.  And, if you are impacted by a security breach because of this, the cost of recovery will far exceed the cost it would have taken your business to get the new software.

2)     Shadow IT:

The formal term for this is “Shadow IT Management”.  When it comes to the workplace, this refers to when an employee is overlooking the shoulder of another employee to see what their login information is (such as the username and password).  But when it comes to the situation that we are talking about in this blog, it simply means that the CISO and their IT Security team are knowingly letting their employees use outdated software and are fully cognizant of that fact.  Astonishingly enough, according to a recent study, there are still some 47% of companies that let this happen.  To see more details about this, click on the link below:

Unmanaged Devices Run Rampant in 47% of Companies | 1Password

My Thoughts on This:

It could be the fact that some vendors clearly do not communicate with their customers about when their products will be discontinued.  But given the world today, that will be a huge risk for them to take, as the effects of reputational and branding damage will be exceedingly high if an outdated product a customer was using was the culprit for a major security breach.

So here are two tips of advice, from my side:

Ø  The CISO and their IT Security team need to keep a constant eye for what products and/or services are coming to an end.  Once they get a whiff of something that they are using is going to be outdated, plans need to be drawn up immediately in how to procure the next release or update.  Also, plenty of time must be allocated to present a new budget to the C-Suite, with explanations why these steps are necessary.

 

Ø  Always maintain a clear line of communication not only with all the stakeholders in your company, but also with the vendors with whom you procure IT related products and/or services from.

 

Microsoft has done a wonderful job with communicating the “End of Life” (this is the technical term when a product and/or service will no longer be available, and when support will no longer be available).  FYI, it will be terminated next year, and for more information on that, click on the link below:

Companies “wary” of Windows 11 migration challenges as Windows 10 EOL draws closer | ITPro

Sunday, October 20, 2024

What Zero Day Attacks Are In Generative AI Models

 


If you are in Cybersecurity, one of the new pieces of techno jargon that you will often hear about is a “Zero Day Attack”.  I have heard about it numerous times, especially when I did the auto upgrades to my Windows machines.  But to be honest, this is the first time I have written about it.  So, if you are like me when I was a few months ago, wondering what it is was all about, here is a technical definition of it:

“A zero-day (or 0-day) vulnerability is a security risk in a piece of software that is not known about, and the vendor is not aware of. A zero- ay exploit is the method an attacker uses to access the vulnerable system. These are severe security threats with high success rates as businesses do not have defenses in place to detect or prevent them.

A zero-day attack is so-called because it occurs before the target is aware that the vulnerability exists. The attacker releases malware before the developer or vendor has had the opportunity to create a patch to fix the vulnerability.”

(SOURCE:  What is a Zero Day Attack? | Fortinet)

Let us break this definition down into its components:

Vulnerability:  A gap, or weakness that exists in a software application.

Exploitation:  The Cyberattacker discovers this weakness and takes advantage of it by deploying it into a malicious payload.

Attack:  This is where the Cyberattacker attempts to do some damage, such as Data Exfiltration.

As it relates to Zero Day, it is a hole that exists that nobody, not even the vendor knows about. The Cyberattacker discovers this just by pure chance, or through some covert intel.  Because it is not known, they can then exploit this weakness without anybody noticing, and from there, launch the attack. 

The key point here is that though this process, a Zero Day Attack can be very devastating, because it takes everybody by surprise.  When the damage is done, it is then too late to fully recover it.  But now, with Generative AI exploding on the scene and its subsets, especially that of Machine Learning, Zero Day Attacks are now becoming much more pronounced.

One of the primary reasons for this is that the models are constantly evolving and becoming more dynamic by nature.  Even if the CISO and the IT Security team were to discover any gaps or weaknesses and remediate them, the chances of new ones coming out the next day are very high.  Add to this the fact that these models also increase the attack surface, which makes it even more complex to get a true gauge of the Cyber Threat Landscape.

Here are some examples of Zero Day attacks as it relates to the models of Generative AI:

1)     Prompt Injection:

This can be technically defined as:

“Prompt injection is the use of specially crafted input to bypass security controls within a Large Language Model (LLM), the type of algorithm that powers most modern generative AI tools and services.”

(SOURCE:  What Is Prompt Injection, and How Can You Stop It? - Aqua)

               To make this definition clearer, let us backtrack a little bit.  Suppose you use ChatGPT for daily               job         tasks, and one day you have been asked to visit a customer on site.  True, you could use         Google Maps for this, but you want noticeably clear and concise directions on how to get there.  You simply enter your query into ChatGPT, and it gives you various routes you can choose    from.  But in order to get the specific answer you are looking for; you must create the query   with specific keywords.  These are also technically called “Prompts”.  In fact, this has given              birth to an entirely  new field called “Prompt Engineering”.  But as it relates to a Zero Day         Attack with a Generative AI model, a Cyberattacker can very easily hijack your ChatGPT session,            and insert their own prompts.  The end result is that you are given a set of directions, which   although will get you to   the client site, will take you in a far more convoluted manner than what you had intended.  The   consequences of this kind of Zero Day Attack is far more dangerous if       you ask ChatGPT to automatically log into your financial portals (such as          your credit card or bank      account), and ask, or “prompt” it to give you advice on how you should manage your money.

2)     Training Data:

As I have analogized before, a Generative AI model is like a car.  Like this needs fuel to drive, the model needs data (and lots and lots of it) to propel the queries or the “prompts” into giving you the right answers (also known as the “Outputs”).  But you simply cannot dump all kinds of data into the model.  First, you need to make sure that whatever you feed into it is relevant.  For example, if you have developed a model to predict prices for certain stocks, you need to pump in those datasets that belong to them.  Not those of other stocks.  Second, you need to make sure that the data you feed into the model are as optimized and cleansed as much as possible.  This simply means that there are no outliers that exist in the dataset.  If you do not do this, your results will be highly skewed, in the negative direction.  In this regard, it is quite possible that the Cyberattacker can find a hole in the model as it is being developed.  From there, they can then exploit by inserting fake datasets (also known as “Synthetic Data”), into it.  Thus,  once the model is formally launched into the production environment, it can wreak havoc to your business like nobody has seen before.

My Thoughts on This:

Apart from the dynamic nature of Generative AI models as mentioned before, it is very often typically the case, that the time to market of them takes more precedence than developing the secure design of them.  Also, the AI scientists who create these models have security far from their mindset, because they are simply not trained in this area. 

Thus, to help mitigate the risks of Zero Day Attacks from happening, there is now a new movement that is happening now in Corporate America.  This is the adoption of what is known as an “MLSecOps” team.  This is where the AI scientists work in tandem with the IT Security Team and Operations Team to ensure that security model design starts from the very beginning and receives top priority after the model has been launched and deployed for public use.

An important concept here is also the “MLBOM”, which is an acronym that stands for the “Machine Learning Bill Of Materials”. This will be examined in closer detail in a future blog.

Sunday, October 13, 2024

4 Grave Risks Of Using Non Human Identities & How To Fix Them

 


As the world of Generative AI continues to explode, there is a new trend that will be emerging:  The Non-Human Identity.  You may be wondering what it is?  Well here is a good definition of it:

“Non-human identities (NHIs) are digital entities used to represent machines, applications, and automated processes within an IT infrastructure. Unlike human identities, tied to individual users, NHIs facilitate machine-to-machine interactions and perform repetitive tasks without human intervention.”

(SOURCE:  What is a Non-Human Identity? | Silverfort Glossary)

Remember, I have written about Digital Person before?  Essentially, this is an avatar, or even a chatbot that is given human-like qualities in order to interact with you.  Instead of typing in a message, you can talk to it and have a conversation with it. 

One of the best examples of this is its use in customer service.  Instead of waiting on hold for hours on end to speak with an actual human being, you can summon up the Digital Person within a matter of seconds. 

If you are not satisfied with the answers, you can always request the Digital Person to be referred to as an actual representative.  This is an example of a Non-Human Identity, or also known as “NHI” for short.  While you can call the Digital Person by a name, in the grand scheme of things, it really does not have any form of identification.

NHIs can be a particularly useful tool to have around, especially when it comes to processing automation and augmentation, when it comes to monitoring all the interconnections that exist today in the world.  In fact, it has been estimated that for every 1,000 people, there are some 10,000 of these kinds of connections.  It is almost impossible for any human being to keep close tabs on all of them, that is why the NHI is so beneficial.

But despite this, there are certain risks that are borne out by using this advancement in Generative AI.  Here is a sampling of some of the major ones:

1)     Expansion of the attack surface:

In the world of Cybersecurity, mostly everybody that is in it has heard of this term.  For example, if you have too many network security devices, this can expand your attack surface.  This goes in direct contradiction of the old proverb that “more is better”.  The same can also be said of the NHI.  While deploying many of them could prove to be beneficial, in the intermediate and long term, it also greatly expands the attack surface of all your interconnections.  Since these are mostly powered by Generative AI, there are still vulnerabilities in them that the Cyberattacker can exploit very quickly.

2)     Hard to see:

It is important to note that many of the NHIs that are deployed tend to function and operate in the background.  As a result of this, they tend to be forgotten about, especially when it comes to time to upgrade and/or optimize them.  This is yet another blind spot that the Cyberattacker knows very well about and can thus use it quickly launch a malicious payload into them.  The net effect of this is a negative, cascading effect across your entire IT/Network Infrastructure in just a matter of sheer minutes.

3)     Violation of PAM:

This is an acronym that stands for “Privileged Access Management”.  These are the rights, privileges, and permissions that are assigned to super user accounts.  An example of this would be a network or database administrator.  They will of course have elevated access to keep the networks and databases running smoothly, respectively.  But these same types of PAMS based accounts are also assigned to the NHI so that they can carry out automated tasks without human intervention.  But once again, the IT Security team forgets about this as well, and the consequence of this is that the Cyberattacked can gain very quick access to these accounts and gain immediate access to anything that they want to.

4)     Third parties:

In today’s world, many businesses outsource many functions to a third-party provider.  And now, instead of having direct contact with them, the entity that hired them now uses the NHI for this communication.  While this can save time to focus on more pressing issues, there is also an inherent risk with this as well.  For example, if the third-party supplier is hit with a security breach, it will also impact the NHI that is connected to it, and in turn, it will have an impact onto your business.  This is yet another form of a Supply Chain Attack, but on a different kind of level.

My Thoughts on This:

Here are some things I recommend that you can do to mitigate the risks of an NHI from being an unintended threat to your business:

Ø  To keep your attack surface as low as possible, deploy NHIs as you absolutely need them.  It is important to get away from thinking that deploying a lot of them will make you more productive.  They simply will not.

 

Ø  If you have smaller NHIs, it will also make it easier for you to keep an eye on them.  But in the end, no matter how many of them you have, you should have a stipulation in your security policy that a constant level of visibility must be maintained on them.

 

Ø  Always make sure that the Generative AI models that you are used to power your NHIs are always updated with the latest security patches.  If you have a Cloud based deployment, this should be automatically taken care of for you.

 

Ø  Watch the level of rights, permissions, and privileges that you assign to the NHIs.  Just like you would for an actual human employee, assign what is only needed, following the concepts of Least Privilege.

 

Ø  You should always by thoroughly vetting your third-party suppliers, but in case you use an NHI to communicate with them, make sure that they have at least the same number of controls that you have for your own IT/Network Infrastructure.  Also, share any security updates with them, so that they can be on the same page as you as well.

 

The fundamental key here is to always be as proactive as possible when using Generative AI.  The downside is that the models are evolving so rapidly, this can be difficult to do.  But it is always important to do the best that you can in this regard.

Sunday, October 6, 2024

The Evolution Of A Federal Generative AI Bill: What Needs To Be Done

 


One thing that I have written about extensively are the data privacy laws that not only the United States has enacted, but also other nations.  While the intention of them is to give consumers the right to know what is happening with their datasets, but to also make sure that the companies that are the stewards have deployed more than enough controls to make sure that the datasets are as protected as possible.

While this is of course a huge step forward, there is just one huge problem:  There is no uniformity amongst them.  Take for example our own 50 states.  Because of this lack of centralization, each one of them is producing their own version of a data privacy law. 

So, if a business were to conduct financial transactions with customers in all of the states, are they bound to each one?  This is a very murky area in which there no clear-cut answers, and unfortunately, will not be so for a long time to come.

Now, as Generative AI is coming into the fold of our society, it appears that each state now is producing their laws in an effort to protect consumers and their datasets, in the very same manner as they have approached data privacy laws.

One such example of this is California.  A number of years ago, they passed the CCPA. Now, they have produced their own Generative AI bill, which was designed to do the following:

*Create a comprehensive regulatory framework to govern the use of Generative AI, in all foreseeable aspects.

*Create a set of standards and best practices to ensure that the datasets the models use are not prone to security breaches.

This became known officially as Senate Bill 1047.  But believe it or not, the governor of California, Gavin Newsom, rejected the passage of this bill.  Why did he do this, you might be asking?  Well, here are his direct words:

““While well-intentioned, SB-1047 does not take into account whether an AI system is deployed in high-risk environments, or involves critical decision-making or the use of sensitive data," Newsom wrote. "Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology."”

(SOURCE:  https://www.darkreading.com/application-security/calif-gov-vetoes-ai-safety-bill)

Here are other reasons why he rejected this bill:

*The emphasis of it was purely on large scale Generative AI models.  There also needs to be a focus on more specialized models, which serve different purposes.

*The bill appeared to be too stringent to the governor.  His reason for this was that it could stifle innovation and ideas.  To counter this, he proposed that a much more flexible approach needs to be taken, and that each model should be taken into account on a case-by-case basis.

*The bill did not address the deployment of Generative AI in those environments that are deemed to be of high risk. 

As a result of this, the following pieces of advice were offered for consideration:

*Create a joint task force that includes a representative sample who will be involved in this process.  This will include people all the way from consumers to the private sector, to academia, and all levels of both the state and federal governments.

*The focus of Generative AI should on the size and the resources that the models use, but rather, there needs to be a huge emphasis on the risks that are borne from using AI to begin with.

*Implement a process where the any passed legislation on Generative AI can be updated as the technology evolved and advances.  Of course, as we know from the efforts in doing this for Cybersecurity, this is very tall order to fill.  In other words, the passage of any updates simply will not keep up with the pace of the rapid advances being made in Generative AI.

*It is highly recommended that any new bill that is presented to the governor for signing be modeled after the bill that the European Union (EU) recently passed.  This is known as the “EU Artificial Intelligence Act”, and is actually highly regarded as a comprehensive approach to regulating Generative AI.  More details about this can be seen at the link below:

https://artificialintelligenceact.eu/

My Thoughts On This:

This is bill that was rejected by the governor of California was officially known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.”  Many people supported the passage of this bill (even Elon Musk), but there was also a fair share that rejected it as well.  It has been viewed as a good step forward, but of course, a lot of work still needs to be done on, as I have eluded to previously.

The bottom line is that creating any kind of regulatory bill on Generative AI is going to be very complicated.  For example, it is not just a few segments of American society that are impacted by Generative AI.  Rather it is the entire population and almost every business. 

Also, there are too many unknowns and variables that are involved in the actual creation of a Generative AI model, and the list here will just keep on growing.

On a very macro level, my thinking is that we simply need to have a Department of Cybersecurity created, in the very same manner that the Department of Homeland Security was right after 9/11.  But, we should not wait for a disaster to happen in Generative AI in order for this to happen.  The federal government needs to act now in order to start this effort.

Under this newly created department,  Generative AI would also fit into here as well.  This will not only lead to a centralization of the data privacy laws, but it will also lead to the same result for Generative AI.  Apart from this, we need to start simple first. 

Let us draft a bill that details a framework for all aspects of AI, such as Computer Vision, Natural Language Processing, Large Language Models, Neural Networks, Machine  Learning, etc.

The bottom line here is that Generative AI is not a field all in its own world.  It includes all of these aspects.  What impacts one area will have a cascading effort on the other as well.  Then over time, updates should be added to this framework, which although will take a very long time to accomplish, I am a huge proponent of it.

How To Launch A Better Penetration Test In 2025: 4 Golden Tips

  In my past 16+ years as a tech writer, one of the themes that I have written a lot about is Penetration Testing.   I have written man blog...