Sunday, January 7, 2024

The New Defense In Adversarial ML: The Homomorphic Encryption Algorithm

About one year ago, I first started hearing about ChatGPT.  I thought to myself, "OK, this is something new, it probably won't last for too long".  But today, the fever around this platform, and around anything involving Large Language Models (LLMs) and the GPT-4 algorithms, looks set to be the craze of this year.  While things like ChatGPT can bring many advantages to the table, the biggest fear now is that the bad guys will use them just as heavily.

This can happen on many fronts; just consider the following:

Ø  The Cyberattacker of today does not really invent new threat variants.  Rather, they take the framework of what has worked before and tweak it ever so slightly so that it can evade detection but still cause maximum damage.  Phishing is the classic case.  Ransomware did not exist before it; it only emerged after the crux of what made up Phishing was tweaked so that it could lock up the files on a device.  Taken to the extreme, Ransomware has now become even worse, being used to launch extortion-like attacks.  So, all the Cyberattacker has to do now is merely feed the anatomy of a previous malicious payload into something like ChatGPT and have it come up with new source code for it.

 

Ø  Creating more nefarious code.  Speaking of this, SQL Injection Attacks have always been a tried-and-true method of breaking into just about any SQL-based database, even the ones most widely used today, such as MySQL.  Normally, it would take some time to modify the baseline code of such an attack, but not anymore.  Simply ask an AI platform driven by the GPT-4 algorithms to create something somewhat "different".  True, you can't ask the AI model to create something malicious directly, but you can get pretty creative about it with some clever "Prompt Engineering".  (To see why the underlying flaw works at all, see the short sketch after this list.)

 

Ø  Generative AI is the new wave beyond so-called traditional AI.  What is different this time is that it can produce outputs in a wide range of formats, all the way from images to videos to even voice-generated calls.  There are fears about this on three fronts:  1) Deepfakes:  This is when a fictitious video of a real-life person is created.  It is popular around election time, and this year could be the worst for it, given the current political climate and how advanced Gen AI has become.  2) Robocalls:  These have always been a pain, but they will only get worse now that it takes a matter of mere minutes to create an audio file and blast it out to hundreds of victims on their smartphones.  3) Social Engineering:  A Cyberattacker can now scope out the Social Media profiles of their victims and feed them into ChatGPT.  From there, it can analyze the profiles and point out weak spots that the Cyberattacker can prey upon in a very slow, but quite dangerous, manner.
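To ground the SQL Injection point above: the flaw exists because untrusted input gets spliced directly into the query text.  Here is a minimal, self-contained Python sketch (the table, the payload, and the queries are hypothetical illustrations, not taken from any real attack):

```python
import sqlite3

# Hypothetical throwaway database for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice-secret')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob-secret')")

payload = "' OR '1'='1"  # the classic injection string

# VULNERABLE: the payload is spliced into the SQL text, so the
# WHERE clause becomes: name = '' OR '1'='1' -- always true.
query = "SELECT * FROM users WHERE name = '" + payload + "'"
print(conn.execute(query).fetchall())   # dumps every row

# SAFER: a parameterized query treats the payload as a plain value.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (payload,))
print(safe.fetchall())                  # returns [] -- no such user
```

The fix has been known for decades (parameterize, never concatenate); the new worry is simply how quickly an AI platform can spin up fresh variations of the vulnerable pattern.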

Because of all of these fears, the Cyber industry (and others as well) is now starting to wake up and think about what proactive steps can be taken so that the Cyber threat landscape does not become the proverbial "new Wild West".  On this front, vendor-neutral groups such as OWASP and MITRE have upgraded their vulnerability databases to include the risks posed by Generative AI models.

Heck, even NIST has come out with its own framework of Generative AI best practices.  The entire document can be downloaded at this link:

http://cyberresources.solutions/blogs/NIST_AI.pdf

Also, a new force has emerged, which has been appropriately dubbed "MLSecOps".  This is actually a new kind of organization, sort of like OWASP, and it has started to formulate a set of guiding principles that your business should consider implementing.  But before that, you need to have your own MLSecOps team first.  This will of course be a combination of your IT Security team, your Operations team, and your AI team (if you have one).

This is very similar to the concepts that drive the DevSecOps and DevOps models, which are also starting to be widely used.  But rather than being open-ended, MLSecOps focuses upon the following:

Ø  Supply Chain Vulnerability (a minimal integrity-check sketch follows this list).

 

Ø  Model creation with respect to the Data Privacy Laws.

 

Ø  Governance, Risk, and Compliance (GRC).

 

Ø  Trusted AI:  Making AI fair, objective, and trustworthy.

 

Ø  Adversarial AI:  Exploring new ways in which AI/ML can be used for nefarious purposes.
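On the Supply Chain point, one concrete (if basic) control is to verify a pinned checksum of any third-party model artifact before your pipeline ever loads it.  A minimal sketch in Python, assuming you have a trusted digest published out-of-band (the file name and digest here are hypothetical placeholders):

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest, published out-of-band by the model's vendor.
EXPECTED_SHA256 = "replace-with-the-vendor's-published-digest"

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large model weights don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

model_path = Path("model_weights.bin")   # hypothetical artifact name
if sha256_of(model_path) != EXPECTED_SHA256:
    raise RuntimeError("Model artifact failed its integrity check; refusing to load.")
```

It is not a complete answer to supply chain risk, but it does stop the simplest attack of all: quietly swapping out the model file on its way to you.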

My Thoughts On This:

There has also been movement in the Cyber industry to encrypt what goes into and what comes out of an AI or ML model.  In this regard, new developments have been made in what is known as "Fully Homomorphic Encryption", also known as "FHE" for short, which lets computations run directly on encrypted data.  While this does hold some great promise, the encrypted output causes some great concern right now:  it can be 20x larger than the plaintext it is supposed to scramble.
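Real FHE libraries are well beyond a blog snippet, but the core idea, doing arithmetic on data you cannot read, can be shown with Paillier, an older, partially homomorphic scheme that supports addition on ciphertexts.  A minimal pure-Python sketch with toy primes (real deployments use 2048-bit-plus moduli, and full FHE schemes such as CKKS or BFV support far richer computation):

```python
import random
from math import gcd

# Toy primes for readability -- real deployments use 2048-bit+ moduli.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(2, n)                 # fresh randomness per encryption
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 99
c_sum = (encrypt(a) * encrypt(b)) % n2   # multiplying ciphertexts adds the plaintexts
assert decrypt(c_sum) == a + b           # 141 recovered without decrypting a or b
```

Even in this toy scheme, note that the ciphertexts live modulo n², so they are already roughly double the size of the plaintexts; lattice-based FHE schemes expand the data far more, which is exactly the size problem flagged above.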

I foresee another rat race on the horizon, and unfortunately, I think the Cyberattacker will stay well ahead of the AI and ML curve.  But at least we have taken some positive first steps in the right direction.

 
