Sunday, January 21, 2024

6 AI Privacy Actions You Can Implement Today

 


As the world delves deeper into AI, we are seeing its good sides.  But unfortunately, we are also starting to see its bad sides.  For example, it can be used by a Cyberattacker to create malicious code, create Deepfakes, or even launch a Ransomware attack.  But apart from this, there is also a growing area of concern amongst the American public:  protecting our privacy, and the use of the data that is fed into an AI model for its training purposes.

While the Federal Government is starting to take some action in this regard, AI is moving far too fast for any of the legislation to catch up with it.  Therefore, it is up to the private sector to help instill some sense of confidence in Americans that our private information and data is being protected, and that the right controls are being implemented to help mitigate any chances of it being leaked.

So you may be asking:  what can be done?  Here are some areas that can be tackled by Corporate America:

1)     Use it on a case-by-case basis:

Rather than creating an ad hoc AI model, companies should take the extra step to ensure that if a customer or a prospect wants to use their model, it can be customized to how that person wants to use it.  That way, customers will feel they have more control over what they input.
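To make this a little more concrete, here is a minimal sketch in Python of what per-customer controls could look like.  The field names, the data categories, and the build_request helper are all hypothetical, and no particular AI vendor's API is assumed; the point is simply that nothing goes to the model unless the customer's own settings allow it.

from dataclasses import dataclass

@dataclass
class CustomerAISettings:
    """Per-customer controls over how the AI model may use their inputs (illustrative only)."""
    customer_id: str
    allow_training_on_inputs: bool = False    # opt-in, never the default
    retain_conversation_history: bool = False
    allowed_data_categories: tuple = ("product_questions",)  # what the customer agreed to share

def build_request(settings: CustomerAISettings, prompt: str, category: str) -> dict:
    """Only forward the prompt to the model if it falls inside what the customer agreed to."""
    if category not in settings.allowed_data_categories:
        raise PermissionError(
            f"Customer {settings.customer_id} has not opted in to sharing '{category}' data."
        )
    return {
        "prompt": prompt,
        "store_for_training": settings.allow_training_on_inputs,
        "store_history": settings.retain_conversation_history,
    }

# Example: a customer who only agreed to product questions, with no reuse for training
settings = CustomerAISettings(customer_id="cust-001")
print(build_request(settings, "What sizes does this jacket come in?", "product_questions"))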

2)     Take it to the Edge:

Edge Computing means that the data processing happens closer to the physical location of the end user’s device.  AI models should be deployed the same way.  In other words, rather than processing all of the information and data in a central Cloud location, do it on a virtual server that is closer to the origin points of where most of your customers and prospects are.
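As a rough illustration of the idea, the sketch below routes an inference request to an edge endpoint in the user's region and only falls back to the central cloud when no nearby node exists.  The endpoint URLs and region names are placeholders, not real services.

# Hypothetical edge endpoints; the URLs and regions are placeholders, not real services.
EDGE_ENDPOINTS = {
    "us-east": "https://us-east.edge.example.com/infer",
    "us-west": "https://us-west.edge.example.com/infer",
    "eu-west": "https://eu-west.edge.example.com/infer",
}
CENTRAL_ENDPOINT = "https://central.cloud.example.com/infer"

def pick_endpoint(user_region: str) -> str:
    """Send the request to the edge node nearest the user; fall back to the central cloud."""
    return EDGE_ENDPOINTS.get(user_region, CENTRAL_ENDPOINT)

# Example: a customer connecting from the EU is served from the EU edge node,
# so their raw data does not have to travel to the central cloud for inference.
print(pick_endpoint("eu-west"))   # https://eu-west.edge.example.com/infer
print(pick_endpoint("ap-south"))  # no nearby node, falls back to the central endpoint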

3)     Allow for tracking to happen:

Even to this day, AI models are viewed as a “black box” phenomenon.  This simply means that they are thought of as “garbage in and garbage out,” with no visibility into what is happening on the inside.  Of course, the AI vendors don’t want to give this out, because it would give away their bread and butter, namely the algorithms.  But you don’t necessarily have to give this away.  You only need to be transparent enough with your end users to show them what the AI model has done with their information and data.  In other words, provide a tracking history, or an activity page, showing what has been done with those datasets.  Honestly, most of the American public will not want to know all of the technical details of the model, just how their data is being used and why.
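A minimal sketch of such an activity log is shown below, assuming each use of a customer's data is recorded as one append-only entry.  The field names and helper functions are illustrative only; a real system would also need access controls and retention rules.

import json
from datetime import datetime, timezone

# A minimal, append-only activity log: one record per use of a customer's data.
ACTIVITY_LOG = []

def record_data_use(customer_id: str, dataset: str, purpose: str) -> dict:
    """Append a human-readable record of what data was used, for what, and when."""
    entry = {
        "customer_id": customer_id,
        "dataset": dataset,
        "purpose": purpose,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    ACTIVITY_LOG.append(entry)
    return entry

def activity_report(customer_id: str) -> str:
    """Return the history a customer would see on their 'how my data was used' page."""
    rows = [e for e in ACTIVITY_LOG if e["customer_id"] == customer_id]
    return json.dumps(rows, indent=2)

record_data_use("cust-001", "purchase_history_2023", "product recommendations")
record_data_use("cust-001", "support_chat_transcripts", "chatbot quality tuning")
print(activity_report("cust-001"))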

4)     Keep everybody informed:

Just as websites today make you formally accept their cookies, and before you submit any information/data on a contact form you agree to have your data stored in accordance with the data privacy laws, the same should hold for AI models.  If you are a company that makes use of AI models in the delivery of products and services, you need to notify your prospects and customers that they will be at least partially acquiring your goods through the AI, and that information/data about them may be collected in order to ensure prompt delivery.
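In practice, this can be as simple as a consent check that sits in front of every AI call, in the same way a cookie banner sits in front of tracking.  The sketch below assumes a hypothetical consent store keyed by customer ID; the actual model call is left as a placeholder, since no specific vendor API is assumed here.

# Hypothetical consent store keyed by customer ID; in practice this would live in a database
# alongside the same consent records you keep for cookies and contact forms.
AI_CONSENT = {"cust-001": True, "cust-002": False}

def send_to_ai_model(customer_id: str, payload: str) -> str:
    """Refuse to pass a customer's data to the AI model unless they have explicitly agreed."""
    if not AI_CONSENT.get(customer_id, False):
        raise PermissionError(
            f"Customer {customer_id} has not consented to AI processing; "
            "handle this request through a non-AI channel instead."
        )
    # Placeholder for the actual model call.
    return f"AI response for consented customer {customer_id}: processed '{payload}'"

print(send_to_ai_model("cust-001", "order status question"))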

5)     Don’t give in too easily:

This one is for the prospect or customer:  never, ever give your financial information to a chatbot, an email, or even a text message.  If you have to submit something, make sure that you are talking with a real human being, and that the company has a strong reputation.

6)     Always use situational awareness:

As a company, you will always want to blend AI into your website in order to make it look sharper and draw prospects in.  But if you really want to be fair about this as a business owner, you should let it be known that AI is being used to drive your website.  At first, you may see a drop in the total number of prospects visiting or downloading content, but think of the long term.  You will be viewed as a business that is forthcoming and honest, and in the end, this is what your current and future customers will value the most.

My Thoughts On This:

The above are just some of the steps that you, as a business owner, can take.  But keep in mind that if you are planning to use AI in a big way, then it is your responsibility to keep up with how you can best protect that data.  Don’t simply wait for a set of guidelines or a framework to come out from the Federal Government.  At best, they will be initially tentative and broad.

Remember that in the end, you are the steward of the data that you store, process, and archive.  And just as the data privacy laws give customers the option to have their data removed from any system, you need to offer the same when it comes to your AI models.
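As a closing illustration, here is a minimal sketch of honoring such a removal request against data that is queued for training.  It assumes every training example is tagged with the owning customer's ID; truly removing a customer's influence from an already-trained model (machine unlearning) is a harder problem and is only flagged here, not solved.

# Assumes training examples are stored with the owning customer's ID.
TRAINING_DATA = [
    {"customer_id": "cust-001", "text": "example interaction A"},
    {"customer_id": "cust-002", "text": "example interaction B"},
]
PENDING_RETRAIN = set()

def handle_deletion_request(customer_id: str) -> int:
    """Remove the customer's records from the training set and mark the model for retraining."""
    global TRAINING_DATA
    before = len(TRAINING_DATA)
    TRAINING_DATA = [row for row in TRAINING_DATA if row["customer_id"] != customer_id]
    removed = before - len(TRAINING_DATA)
    if removed:
        PENDING_RETRAIN.add("customer-facing-model")  # retrain or apply unlearning before the next release
    return removed

print(handle_deletion_request("cust-001"))  # 1 record removed
print(TRAINING_DATA)                        # only cust-002's data remains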
