Just last week, I authored an entire article for a client about Deepfakes. For those of you who do not know what they are, a Deepfake is essentially a replica made of a real individual, and it is used primarily for sinister purposes.
But the catch here is that Generative AI is what is used to create them, and they very often come in the form of videos, most often posted on YouTube. One of the best examples of Deepfakes can be found in election cycles, where they are created to produce an impostor video of a real politician. But it gets even more dangerous than that.
For example, the video will very often have a link attached to it that takes you to a phony website, asking you to donate money to the campaign. But in the end, the money that you donate does not go to that cause; rather, it is probably sent to an offshore bank account controlled by a nation-state Threat Actor, such as Russia, China, or North Korea. Just to show the extent of the damage that Deepfakes have created, consider these statistics:
*Deepfakes
are growing at a rate of 900% on an annual basis.
*One victim of a Deepfake Attack actually ended up paying out over $25 Million after a fake video of their CFO was posted on social media platforms.
So what exactly can be done to curtail the rising danger of Deepfakes? Well, the thinking is that the Federal Government (and for that matter, governments around the world) needs to start implementing serious pieces of legislation that provide steep financial penalties and prison time. But unfortunately, these actions have not taken place yet, due to three primary reasons:
*The legislation that is passed simply cannot keep up with the daily advances that are being made in Generative AI.
*Even if a perpetrator is located, it can take law enforcement a very long time to bring them to justice, given the huge caseloads that they already have on the books related to security breaches.
*Trying to combat Deepfakes on a global basis takes intelligence and information sharing amongst the nations around the world, some of which are not ready for this task or are simply unwilling to participate.
So, the thinking now is that the business community should take the fight directly to the Cyberattacker. But what tools can be used? Believe it or not, Generative AI can also be used here, but for good. Here are some thoughts that have been floating around the Cyber world:
*It can be used to carefully analyze any kind of inconsistencies between what is real and what is fake. For example, in a video there will always be subtle changes in lighting, or unnatural facial movements. These are very difficult for the human eye to spot, but a Gen AI tool programmed with the right algorithms can seek them out, and fairly quickly (see the first sketch after this list).
*While Deepfakes are great at replicating images, they are not so good yet at recreating the voice of the victim. In fact, the voice will often sound almost "robotic." If Voice Recognition can be used in conjunction with Gen AI here, this will probably yield the first definitive proof that a Deepfake has been used for malicious purposes. This kind of evidence should also hold up in a Court of Law in case the perpetrator is ever brought to justice (see the second sketch after this list).
*If the company also makes use of other Biometric Modalities, such as Facial Recognition, these can be used with a great level of certainty to determine whether an image or a video is an actual Deepfake or not.
*Another option that a company can use is what is known as "Content Watermarking." These are hidden identifiers that can be placed in an actual image, and at the present time, a fake replication will not be able to reproduce them. Thus, this is an easier way to tell if an image or video is real or not (see the third sketch after this list).
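To make the first idea a bit more concrete, here is a minimal Python sketch of what frame-level inconsistency checking could look like. It simply flags abrupt lighting jumps on the detected face between consecutive frames; the file name and the threshold are my own illustrative assumptions, and a real detector would rely on a trained model rather than a simple heuristic like this.

import cv2

def flag_lighting_jumps(video_path, jump_threshold=15.0):
    # Flag frames where the brightness of the detected face region changes
    # abruptly between consecutive frames (a crude inconsistency signal).
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    prev_brightness = None
    suspicious, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            brightness = float(gray[y:y + h, x:x + w].mean())
            if prev_brightness is not None and abs(brightness - prev_brightness) > jump_threshold:
                suspicious.append(frame_idx)
            prev_brightness = brightness
        frame_idx += 1
    cap.release()
    return suspicious

# Hypothetical usage:
# print(flag_lighting_jumps("campaign_clip.mp4"))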
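For the voice angle, here is another rough Python sketch of one "robotic voice" heuristic: synthetic speech often shows unusually flat pitch. The file name and the 10 Hz cutoff are assumptions on my part, and a real forensic tool would use trained models, not a single statistic.

import numpy as np
import librosa

def pitch_variation(audio_path):
    # Standard deviation of the estimated pitch (Hz) over the voiced frames;
    # a very flat pitch contour is one crude sign of a synthetic voice.
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
    voiced_f0 = f0[voiced_flag]
    return float(np.nanstd(voiced_f0)) if voiced_f0.size else 0.0

# Hypothetical usage; the 10 Hz cutoff is only an illustrative assumption:
# if pitch_variation("suspect_clip.wav") < 10.0:
#     print("Unusually flat pitch - worth a closer look")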
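And for Content Watermarking, here is a toy Python illustration of the basic idea: hide a short identifier in the least significant bits of an image and check for it later. The tag value is made up, and real watermarking schemes are far more robust (they have to survive compression and cropping); this only shows the concept.

import numpy as np
from PIL import Image

TAG = "ORIG-2024"  # made-up identifier, for illustration only

def embed_tag(in_path, out_path, tag=TAG):
    # Write the tag's bits into the least significant bits of the first pixels.
    img = np.array(Image.open(in_path).convert("RGB"))
    bits = np.array([int(b) for byte in tag.encode() for b in format(byte, "08b")],
                    dtype=np.uint8)
    flat = img.reshape(-1)  # assumes the image has at least bits.size values
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    Image.fromarray(img).save(out_path, format="PNG")  # PNG keeps the LSBs intact

def has_tag(path, tag=TAG):
    # Read the same bits back out and compare them to the expected tag.
    flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
    n = len(tag.encode()) * 8
    bits = flat[:n] & 1
    recovered = bytes(int("".join(str(b) for b in bits[i:i + 8]), 2)
                      for i in range(0, n, 8))
    return recovered.decode(errors="ignore") == tag

# Hypothetical usage:
# embed_tag("original.png", "watermarked.png")
# print(has_tag("watermarked.png"))  # True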
My
Thoughts On This:
Even implementing the above-mentioned solutions is going to cost a company money. Given the mass layoffs in the tech sector as of late, and how Cybersecurity is still a "back seat" issue with many C-Suites, these solutions will not be deployed in the near term. And IMHO, it's a disaster that is waiting to happen, as once again, Gen AI is advancing at a clip that really nobody can keep up with yet.
In fact, according to a recent report from Cisco, only 3% of companies in Corporate America have even increased their funding to fight off the nefarious purposes that Gen AI can bring to the table. More details about this study can be found at the link below:
To view the actual letter that was signed by some of the major Gen AI pioneers calling for further Federal Government intervention, click on the link below:
https://openletter.net/l/disrupting-deepfakes