Sunday, January 14, 2024

The Top 2 Grave Weaknesses Of AI & How To Fix Them

 


AI and ML are now fast becoming the big buzzwords in just about every industry, not just Cybersecurity.  Where they are really making a splash is in those industries where automation is needed the most.  A typical example of this is the automotive industry.  Here you can see robotic arms now being used for things like tightening bolts on car parts and even painting them.  This is an area of AI that is known as “Robotic Process Automation”, or “RPA” for short.

But with all of the advantages that they both bring to the table, there is also a downside.  Probably the biggest one is that AI and ML can be used for the opposite purpose, namely for nefarious intents.  Obviously, a Cyberattacker is not going to brute force their way into an AI system; just like with any other digital asset, they are going to go in through its vulnerabilities and exploit things further from there.

You may be asking at this point: what are those vulnerabilities?  Well, here is a sampling for you:

1)     The use of open-source tools:

As I have mentioned before, software developers love to use open-source APIs in their source code development.  While this does significantly cut down on the cost and time it takes to deliver a project to a client, there is one huge drawback:  these open-source components are rarely checked for weaknesses, and they are hardly ever updated by the hosting repositories.  Yet software developers often assume that the APIs are safe to use, and thus never double-check them on their own.  (A quick sketch of how to do that checking follows below.)
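To make that double-checking concrete, here is a minimal sketch (in Python, querying the free OSV.dev vulnerability database; the package name and version shown are just placeholders, not a recommendation) of looking up whether a given open-source dependency has any publicly reported vulnerabilities:

```python
# A minimal sketch, not production code: query the public OSV.dev
# vulnerability database for a single dependency. The package name and
# version below are placeholders -- swap in your own dependencies.
import json
import urllib.request

def check_package(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return any known vulnerability IDs for a given package version."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode("utf-8")
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]

if __name__ == "__main__":
    findings = check_package("requests", "2.19.1")   # placeholder example
    if findings:
        print("Known vulnerabilities:", ", ".join(findings))
    else:
        print("No known vulnerabilities reported for this version.")
```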

2)     The fuel:

Just as a car needs fuel to go, the AI and ML models need data.  They need data not only to learn initially, but also to stay optimized and updated as they move forward in terms of usage.  But herein lies yet another problem:  these datasets are also stored by the AI and ML models to some degree or another, and because of that, they have become a prime target for the Cyberattacker.  A dataset is always a prized token to have, no matter where it initially resides.
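As one small, concrete safeguard for that “fuel”, here is a minimal sketch (Python standard library only; the file name is a placeholder) of fingerprinting a training dataset with a SHA-256 checksum so that any tampering can be caught before the data is fed back into a model:

```python
# A minimal sketch: record and later verify a SHA-256 checksum for a
# training dataset file, so tampering is caught before the data is re-used.
# The file path below is just a placeholder.
import hashlib
from pathlib import Path

def dataset_fingerprint(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a dataset file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    data_file = Path("training_data.csv")          # placeholder path
    expected = dataset_fingerprint(data_file)      # record this somewhere safe
    # Later, before retraining, recompute and compare:
    assert dataset_fingerprint(data_file) == expected, "Dataset has changed!"
```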

While the above is not an all-inclusive list, these are definitely some of the most important weaknesses to pay attention to.  What are some proactive steps that you can take?  Here are a few:

1)     It’s not the just the AI/ML models:

Remember, these models are also interdependent upon other functionalities; it is not just the datasets that are fed into them.  So in this regard, you should scan those components for vulnerabilities as well.  A good way to get started is with a comprehensive Vulnerability Scan or Penetration Test.
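As a hedged illustration of kicking off that first scan, here is a minimal sketch that simply wraps the nmap tool (assuming it is installed, and that you are authorized to scan the target; the hostname is a placeholder).  It is no substitute for a full Penetration Test:

```python
# A minimal sketch of launching a basic vulnerability scan with nmap.
# Assumes nmap is installed and that you are authorized to scan the target.
# The hostname below is a placeholder.
import subprocess

def scan_host(target: str) -> str:
    """Run nmap service/version detection plus its 'vuln' scripts."""
    completed = subprocess.run(
        ["nmap", "-sV", "--script", "vuln", target],
        capture_output=True,
        text=True,
        check=True,
    )
    return completed.stdout

if __name__ == "__main__":
    print(scan_host("inference-api.internal.example.com"))  # placeholder host
```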

2)     Have strong IAM policies:

This is an acronym that stands for “Identity and Access Management”.  That being said, you need to make sure that you assign only the appropriate rights, privileges, and permissions to the people who are authorized to fully access the AI and ML models.  There is yet another area that you need to be concerned about, and it is called “Role Based Access Control”, or “RBAC” for short.  Part of assigning the right permissions is to grant them based upon the roles that the authorized users have.  Obviously, an IT Security team member will have more rights than, say, an administrative assistant.
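To show what RBAC looks like in its simplest possible form, here is a minimal sketch (the role and permission names are purely illustrative) that maps roles to permissions and checks them before any access to the model or its training data is granted:

```python
# A minimal RBAC sketch: map roles to permissions and check them before
# allowing access to the model or its training data. Role and permission
# names are illustrative only.
ROLE_PERMISSIONS = {
    "security_engineer": {"view_model", "retrain_model", "read_training_data"},
    "data_scientist":    {"view_model", "retrain_model"},
    "admin_assistant":   {"view_model"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

if __name__ == "__main__":
    print(is_allowed("security_engineer", "read_training_data"))  # True
    print(is_allowed("admin_assistant", "retrain_model"))         # False
```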

3)     Stronger protection:

This is especially true of the datasets that you are feeding into the AI and ML models.  Make sure that you have the right controls in place, and that you audit them on a regular basis.  This is a must, because you will now come under greater scrutiny from the data privacy laws, such as the GDPR and the CCPA.
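As one example of such a control, here is a minimal sketch of encrypting a training dataset at rest with the Python “cryptography” package (the file name is a placeholder, and the key handling is deliberately simplified; in practice the key would live in a key vault or HSM):

```python
# A minimal sketch of one such control: encrypting a training dataset at
# rest with the 'cryptography' package (pip install cryptography).
# The file name is a placeholder; key management is deliberately simplified.
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, store this in a key vault
cipher = Fernet(key)

plaintext = Path("training_data.csv").read_bytes()         # placeholder file
Path("training_data.csv.enc").write_bytes(cipher.encrypt(plaintext))

# Decrypt only when the model pipeline actually needs the data.
restored = cipher.decrypt(Path("training_data.csv.enc").read_bytes())
assert restored == plaintext
```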

4)     Fortify your team:

In the past, I have written about DevOps and DevSecOps.  They were created to help software development teams scan for the gaps and weaknesses in the source code that they compile.  Likewise, there is now a new concept called “MLSecOps”.  This is where the IT Security team works in close tandem with the teams that are developing the AI and ML models, under the same principle as the former two.

My Thoughts On This:

The time to be proactive about AI and ML security is now!!!  These are areas that are advancing quite rapidly, and you and your teams have to work even faster just to keep up.
