As the world, and especially the United States, dives deeper into AI, particularly Generative AI, many people in the business and academic communities are still left scratching their heads as to how to move forward. Many questions have yet to be answered, from both the technical and social implications standpoints.
IMHO, Generative AI is in a huge bubble right now. We are seeing this in the hyper-inflated stock values of the companies involved with it, such as Nvidia, Vertiv, and all of the other companies that take part in making GPUs and in building the data centers that support the applications created with Generative AI.
But this bubble will burst, just like the “.com boom” we saw in the late 1990s. What is different here is that AI in general has been around for a long time (since the 1950s, in fact), and because of that, it will still be around for decades to come. VC funding will come and go, but research into Generative AI will remain strong, with new algorithms being developed on an almost daily basis.
So, in order to be prepared for all of this, businesses need to centralize their efforts in a top-down approach, not only to make sure that what they are investing in will produce some sort of positive ROI, but also so that the concerns of employees, customers, prospects, and other key stakeholders can be addressed quickly and effectively. You are probably asking at this point: how can all of this be started?
It can be done through what is known as the “AI Steering Committee”. In a way, this will be similar to the other committees that exist in a business, but its exclusive focus will be Generative AI, and nothing more.
Some of the key members that should be an integral part of this include the
following:
- The CISO and a member of the IT Security team with a managerial title.

- A legal representative, such as an attorney, but it is imperative that they are well versed in AI and the Data Privacy Laws.

- If the business has one, the Chief Compliance Officer (they make sure that all of the Data Privacy Laws are being adhered to).

- Key representatives of those who will be involved in the Generative AI process. Examples include AI scientists, AI engineers/architects, etc.

- Any other key stakeholders, especially those from Third Party Suppliers.

- A consultant who can provide advice and direction on the “social impacts” of Generative AI, especially as they relate to customers and employees.
So, once this committee is formed, the next step is to create some action items so that things can move forward. Here are some suggestions on how to do this:
1) Start with a Risk Assessment:
Just like how you would conduct a Cyber Risk Assessment, the same holds true for Generative AI. But here, the committee first needs to figure out if and how Generative AI has already been deployed, and if so, what impacts it has had from both the technical and marketing standpoints. If some projects have already been implemented, then you and your committee need to figure out whether they have posed any kind of risk. By this I mean: are there any gaps or vulnerabilities that have been identified in the Generative AI app? If so, what steps, if any, have been taken to remediate them? Out of anything else, this is what will matter the most. If there are any holes, they could make the app prone to data leakages, or worse yet, even Data Exfiltration Attacks. Also, since the data that is housed in a Generative AI Model now comes under the scrutiny of the Data Privacy Laws (such as the GDPR, CCPA, HIPAA, etc.), the committee also needs to make sure that the right Controls are in place. The entire process of adding new Controls or upgrading existing ones needs to be thoroughly documented (a simple risk register, like the sketch below, can help here). For more information on this, click on the link below:
https://www.darkreading.com/cyber-risk/building-ai-that-respects-our-privacy
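To make the documentation piece a little more concrete, here is a minimal sketch of what a risk register entry for a Generative AI app could look like. This is purely illustrative; the field names, categories, and the example entry are my own assumptions, not a prescribed standard or framework:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Illustrative only: these fields are assumptions, not taken from any
# specific risk-assessment framework your committee may already use.
@dataclass
class GenAIRiskItem:
    app_name: str                 # which Generative AI app the finding applies to
    finding: str                  # the gap or vulnerability that was identified
    privacy_laws: List[str]       # e.g., ["GDPR", "CCPA", "HIPAA"]
    impact: str                   # "Low", "Medium", or "High"
    remediation: str              # the Control added or upgraded in response
    remediated_on: Optional[date] = None        # None while the gap is still open
    notes: List[str] = field(default_factory=list)  # running documentation trail

    def is_open(self) -> bool:
        return self.remediated_on is None


# Hypothetical example entry for the committee's register
item = GenAIRiskItem(
    app_name="customer-support-chatbot",
    finding="Prompt history stored without encryption at rest",
    privacy_laws=["GDPR", "CCPA"],
    impact="High",
    remediation="Enable encryption at rest and restrict access to the prompt store",
)

print(f"{item.app_name}: open risk? {item.is_open()}")
```

Even something this small gives the committee a consistent, auditable record of which gaps were found, which Data Privacy Laws they touch, and whether the remediation has actually been closed out.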
2) Use a Phased-In Approach:
Like with anything else that is new, you do not want to deploy 100% of it all at once. You need to implement it in various steps, or phases, so that you will get buy-in from your employees and, most importantly, your customers. This will give people who are resistant to change time to adapt, at a pace that works for them. As it relates to Generative AI, the first step here would be to thoroughly test a new app in a Sandbox Environment. If everything checks out, then start to do pilot studies with employees and customers over a period of time to see how responsive they are to it. If all turns out to be positive, even in the smallest of degrees, then deploy the Generative AI app into the production environment, a bit at a time (the rollout sketch below shows one simple way to stage this). This process is of course very general, but you get the idea. A lot here will depend upon how the existing processes are currently set up in your business.
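One common way to do the “a bit at a time” part is a simple percentage-based rollout gate. The sketch below is only an illustration of that idea; the phase names, percentages, and hashing-based assignment are my own assumptions, not a required design:

```python
import hashlib

# Assumed rollout phases and exposure percentages; adjust to your own plan.
PHASES = {
    "sandbox": 0,     # internal testing only, no real users
    "pilot": 5,       # small group of employees and customers
    "partial": 25,    # wider production exposure
    "full": 100,      # everyone
}

def in_rollout(user_id: str, phase: str) -> bool:
    """Deterministically decide whether a user sees the Generative AI app yet."""
    percent = PHASES[phase]
    # Hash the user ID so the same user always gets the same answer
    # as the rollout percentage grows from phase to phase.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

# Example: during the pilot phase, roughly 5% of users are routed to the new app.
for uid in ["alice", "bob", "carol"]:
    print(uid, "sees the app:", in_rollout(uid, "pilot"))
```

The design choice worth noting is the deterministic hash: a given employee or customer stays in (or out of) the pilot for its whole duration, which makes their feedback much easier to interpret.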
3) Be Positive:
As fears and concerns still surround Generative AI in general, it will be imperative for the AI Steering Committee to maintain a positive attitude, while still being cautious. In this regard, it is critical that a 24 X 7 X 365 hotline be available so that all key stakeholders can relay their concerns on a real-time basis. But the key here is that those concerns must be addressed quickly; if not, seeds of doubt will start to get planted about Generative AI and how your company plans to use it. It is key that the AI Steering Committee be as transparent as possible, and if you don’t know the answer to a question, simply say: “I don’t know, let me get back to you once I get more information”. But don’t ignore this person; always keep them updated as much as possible (a simple concern tracker, like the sketch below, can help make sure nothing falls through the cracks).
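If the hotline intake is logged anywhere, even a very small tracker can flag concerns that have gone unanswered for too long. This is just a sketch of that idea; the field names and the 48-hour response threshold are assumptions on my part, not a rule:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

# Assumed service-level target: respond to every concern within 48 hours.
RESPONSE_DEADLINE = timedelta(hours=48)

@dataclass
class Concern:
    stakeholder: str                    # who raised the concern
    summary: str                        # what the concern is about
    received_at: datetime
    responded_at: Optional[datetime] = None
    updates: List[str] = field(default_factory=list)   # follow-up notes sent back

    def is_overdue(self, now: datetime) -> bool:
        # Overdue if nobody has responded and the deadline has passed.
        return self.responded_at is None and now - self.received_at > RESPONSE_DEADLINE

# Example: surface anything the committee has not yet answered.
concerns = [
    Concern("customer", "Will my support chats be used to train the model?",
            datetime(2024, 5, 1, 9, 0)),
]
now = datetime(2024, 5, 3, 12, 0)
for c in concerns:
    if c.is_overdue(now):
        print(f"OVERDUE: {c.stakeholder} - {c.summary}")
```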
My Thoughts On This:
How this proposed AI Steering Committee moves forward into the future will depend a lot on how seriously its members take their roles. Today, Generative AI is still like a big jigsaw puzzle, and in order for it to be solved, centralization is key, starting with the AI Steering Committee.