Well, it has been a while since I have written anything about Generative AI. It is still making the news headlines, and most of the publicly traded companies in the space are seeing their Earnings Per Share (EPS) reach even newer highs, as is the case with Nvidia, even after its recent stock split.
But despite all of this, and rightfully so, there is still a growing angst among the general public here in the United States as to how the tools that have Generative AI baked into them can be misused.
One example is how video conferencing platforms, such as Zoom, Webex, and Teams, record conversations in a meeting. When you have a meeting with your coworkers or manager, you often have the option to record it, to be used as a future reference if the need arises.
Here are some of the scenarios that pose the greatest risks:
1) Flaws in the transcription:
As I have written about before, Generative AI (and, for that matter, every branch of AI) is primarily "Garbage In, Garbage Out": the output that you get in the end is only as good as the datasets that are fed into the model. Even if you take the time to make sure that all of the datasets you feed into it are as cleansed and optimized as possible, mistakes can still happen, whether intentional or not. For example, if you have a meeting and choose to have it recorded, there could be flaws in the actual language of the transcript that convey a very negative connotation. Thus, before the transcript is ever released to the team, it is imperative that you double-check this language first to make sure that all is good.
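As a toy illustration of such a pre-release check, here is a minimal Python sketch (the word list and the function name are my own invention, not from any real transcription platform) that flags transcript lines containing terms worth a human second look:

```python
# Words that warrant a human review before the transcript is shared.
# This list is purely illustrative; a real review policy would be far broader.
REVIEW_TERMS = {"incompetent", "useless", "lazy", "failure"}

def flag_lines(transcript: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs containing any review term."""
    flagged = []
    for number, line in enumerate(transcript.splitlines(), start=1):
        # Strip surrounding punctuation so "useless." still matches "useless".
        words = {w.strip(".,!?").lower() for w in line.split()}
        if words & REVIEW_TERMS:
            flagged.append((number, line))
    return flagged

sample = "The rollout went well.\nFrankly, the vendor was useless."
print(flag_lines(sample))  # → [(2, 'Frankly, the vendor was useless.')]
```

A simple keyword scan like this is only a first pass, of course; a human reader still needs to judge tone and context before the transcript goes out.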
2) The right to use it or not:
Very often, it is the originator of the meeting who has the option to launch a recording session or not. Unfortunately, the other members who have been invited do not have that option. Thus, if an employee does not like the idea of being recorded, they may still feel forced into it, especially if the meeting originator is their boss and wants to use the recording. Although the recording mechanisms very often do notify employees ahead of time that the conversation in the meeting will be recorded, a quick fix is to have the meeting originator actually reach out to each team member to make sure it is OK that they are being recorded. If the majority say no, then it will be time to do things the old-fashioned way, by having a professional minute taker present to take notes.
3) Data exfiltration:
In today's world, many online meetings take place in which private and confidential information is shared among the members. The thinking here is that since everybody knows each other, all is good. Unfortunately, this is far from the truth. For instance, there is the grave possibility that the transcript could become the target of a Data Exfiltration attack. When we hear about such attacks, we often think of databases being hacked into, and we forget about the other places where data might be saved, especially recordings of video conference meetings. The Cyberattacker is fully aware of this, and thus makes it a target. While there is no sure fix for this, the best thing you can do is make use of the tools that your Cloud Provider gives you to monitor your AI apps. A great example of this is Purview from Microsoft, which is available in any Azure or M365 subscription.
4) Third party usage:
Many of the vendors that create AI-based products and services very often, and covertly, use the data that you submit in order to further refine the AI algorithms used in their models. This is also true of the recordings of video conference meetings, and the transcripts that come from them. A perfect example of this is the recent Zoom debacle, which led to an 86 million dollar lawsuit. More details on this can be found at this link:
While you cannot have direct control over what is collected initially, make sure that you read all of the licensing and end user agreements carefully. And if, after you start using the AI recording tool, you feel that your data is being misused in this fashion, you do have rights under the data privacy laws, such as those of the GDPR and the CCPA. But it is always wise to consult with an attorney first to see the specific rights you are afforded under them, and how you can move forward.
5) Covert participants:
Back in the days of the COVID-19 pandemic, "Zoombombing" was one of the greatest Cyber threats posed to the video conferencing platforms. While this may have dissipated to a certain degree, the threat is still there. But this time, given how stealthy the Cyberattacker has become, they do not even have to make an appearance. They can still listen in covertly, and record that way as well, without you even knowing it. Probably one of the best ways to mitigate this risk is to make sure that your video conference meeting is encrypted to the maximum extent possible, and that you require a login password that is long and complex (a good tool to use here is a Password Manager).
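As a small illustration of the "long and complex" requirement, here is a minimal Python sketch (the function name and symbol set are my own choices, not tied to any particular platform) that generates a random meeting passcode using the standard library's cryptographically secure `secrets` module:

```python
import secrets
import string

def generate_meeting_passcode(length: int = 16) -> str:
    """Generate a random passcode from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    # secrets.choice draws from a cryptographically secure RNG,
    # unlike random.choice, whose output is predictable.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_meeting_passcode())
```

Sixteen characters drawn from a set of seventy gives far more entropy than the short numeric PINs some platforms default to, and a Password Manager can store and share such codes without anyone having to memorize them.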
My Thoughts On This:
Everything I have described in this blog is known technically as "AI Eavesdropping". It is also important to keep in mind that this risk is not just born out of the video conferencing platforms; it can happen on any device that has Generative AI built into it. A good example is the various fitness trackers, such as the Fitbit, that you can wear as a watch.
As Generative AI continues to evolve at a very fast pace, you, the CISO, should also take responsibility for creating a separate security policy that is targeted just towards Generative AI. Some of the things that should be addressed are how your company uses the data that is collected from Generative AI, how that data is stored, and the rights that your employees have if they feel those rights have been violated.