Sunday, September 10, 2023

The Top 3 Rules You Need To Put Into Your Generative AI Policy

Today, one of the biggest trends in cybersecurity is Generative AI.  Most of us have heard this term, but some of us have not.  So for the latter, here is a technical definition of it:

“Generative AI enables users to quickly generate new content based on a variety of inputs. Inputs and outputs to these models can include text, images, sounds, animation, 3D models, or other types of data.”

(SOURCE:  https://www.nvidia.com/en-us/glossary/data-science/generative-ai/)

So unlike the traditional AI models of the past, with these new algorithms, such as GPT-4 and other Large Language Models (LLMs), you can submit a question or query to ChatGPT, and it will create a brand-new piece of content for you, based upon the data that has been fed into it.  For example, if you are an author and you ask it to write new science fiction, it will output exactly that.

So as you can see, even at a very general level, Generative AI can be used for good, and even for bad.  But the applications are still so new that it is hard to predict what the future will hold in this regard.

And it is not just individuals: many businesses are now adopting it for their own uses as well.  Thus, there is now fear that employees could potentially misuse it, especially if they are given free access to it by their employers.

So what can be done about this?  Well, here are three tips you can use:

1)     Create the policies:

Most companies now have policies in place to protect their digital and physical assets.  How granular these become depends primarily upon the CISO, or even the vCISO, who is in charge.  But whatever the situation, now is the time to update these policies with what can be called "Acceptable AI Usage".  This is something you will probably need a good lawyer for, as there is not much legal precedent out there yet for this kind of thing.  Basically, you will have full control over company-issued devices, but not personal devices.  This becomes even trickier with a remote workforce.  In this regard, some of the things you need to consider putting into your policies include:

*How Generative AI can be used for work purposes in terms of productivity;

*How it will be monitored on company-issued devices, especially during off hours and break times.

For some more insight into this, click on the link below:

https://www.darkreading.com/analytics/following-pushback-zoom-says-it-won-t-use-customer-data-to-train-ai-models

2)     Watch how it is being used:

You and your IT Security team need to keep close tabs on what kind of information and data is being shared with ChatGPT.  Once again, when it comes to the personal devices of your employees, you have no control over this.  The best you can do is provide proper security awareness training for them on a regular basis.  This will be needed as tools like ChatGPT grow in popularity and usage.  But for company-issued devices, you can keep a very careful eye on how it is being used.  However, you will need to warn employees ahead of time that they will be monitored in this regard.  If you also make use of social media for your marketing purposes, that is yet another area you should include in your new security policies.

To see a good discussion on this, click on the link below:

https://www.darkreading.com/vulnerabilities-threats/generative-ai-projects-cybersecurity-risks-enterprises
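To make the monitoring idea above a bit more concrete, here is a minimal sketch of what keeping tabs on company-issued devices could look like: scanning web-proxy log entries and flagging any that reach a known generative-AI service.  The domain list, the log tuple format, and the function name are illustrative assumptions, not any standard tool.

```python
# Hypothetical sketch: flag proxy-log entries whose destination is a known
# generative-AI endpoint, so the IT Security team can review that traffic.
# The domain list and (timestamp, user, host) log format are assumptions.

GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com"}

def flag_genai_requests(log_entries):
    """Return the entries whose destination host is a generative-AI service."""
    flagged = []
    for timestamp, user, host in log_entries:
        if host in GENAI_DOMAINS:
            flagged.append({"time": timestamp, "user": user, "host": host})
    return flagged

# Illustrative sample log, as a stand-in for real proxy data.
sample_log = [
    ("2023-09-10T09:15:00", "alice", "chat.openai.com"),
    ("2023-09-10T09:16:30", "bob", "intranet.example.com"),
    ("2023-09-10T09:18:05", "carol", "api.openai.com"),
]

for hit in flag_genai_requests(sample_log):
    print(f"{hit['time']} {hit['user']} -> {hit['host']}")
```

In practice, the same filtering idea would sit on top of whatever proxy or DNS logging your environment already produces, paired with the advance warning to employees described above.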

3)     Have accountability:

At this point in time, it is difficult to hold employees accountable for actions or work-related activities that have taken place with Generative AI tools.  Typically, even if you ask the tool who worked with it and when, it will not give an answer.  So somehow, you and your IT Security team will have to come up with some sort of audit trail that records the access times, as well as the IP addresses of the devices that have accessed these tools.  Another area you need to be concerned with is the quality of the data being fed into the Generative AI tools.  Remember, it is still essentially "garbage in, garbage out".  So not only do you have to make sure on a real-time basis that the training data is optimized, but you also need to constantly remind employees to check their work if they use AI, in order to make sure that the output is accurate.  Unfortunately, at the present time, these kinds of checks and balances are not built into Generative AI.

A good review on this can be seen at the link below:

https://www.darkreading.com/application-security/chatgpt-other-generative-ai-apps-prone-to-compromise-manipulation
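The audit trail suggested above can be sketched very simply: every time a generative-AI tool is accessed from a company device, append one record with the user, device IP address, tool name, and timestamp.  The field names, the JSON-lines format, and the helper function here are all assumptions for illustration, not an existing product feature.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch of the audit trail described above: one append-only
# JSON record per access, capturing user, device IP, tool, and access time.

def record_access(audit_log, user, ip_address, tool):
    """Append one audit entry (as a JSON line) and return the entry dict."""
    entry = {
        "user": user,
        "ip": ip_address,
        "tool": tool,
        "accessed_at": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(json.dumps(entry))  # one JSON object per line
    return entry

audit_log = []
record_access(audit_log, "alice", "10.0.0.12", "ChatGPT")
record_access(audit_log, "bob", "10.0.0.45", "ChatGPT")
```

An append-only, one-record-per-line format like this is easy to ship into whatever log management or SIEM platform you already run, which is why it is used for the sketch.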

My Thoughts On This:

Right now, there is great fear and angst that AI will take over the world and replace human beings.  This is nothing but a huge myth.  We are far from understanding what the human brain is all about, and we never will.  All that AI will do is help augment existing processes, not replace them.
