Although I have my political views and beliefs, I try to
remain as agnostic as possible in my tech writing work. Sometimes it's not easy, but I try my
hardest to do so. That is, until
now. I am going to take a bold political
stance and finally say that I think the Biden Administration, when compared to
previous ones, has done a lot more to help strengthen our Cyber defense
posture.
True, we may not all agree with all of the fine points in
the bills and legislation that have been passed, but the sincere effort is
there. And that is what I applaud.
Now, as the dawn of AI comes upon us (primarily driven by
ChatGPT), the Biden Administration has stepped into the fray again to try to
quell all of the fear, angst, and unknowns that have been brought on by this
new trend. Here are some examples of what
has been, or what will be, accomplished:
*The Blueprint for an AI Bill of Rights. The exact text can be seen at this link:
http://cyberresources.solutions/AI_Ebook/AI_Bill_Of_Rights.pdf
*The National Science Foundation is also launching a new AI initiative,
called the “Strengthening and Democratizing the U.S. Artificial Intelligence
Innovation Ecosystem”. The exact text
can also be seen at the link below:
http://cyberresources.solutions/AI_Ebook/AI_NSF.pdf
*The National Institute of Standards and Technology is also
coming out with a brand-new AI Risk Management Framework, and the content of this can be seen
at the link below:
https://www.nist.gov/itl/ai-risk-management-framework
So, IMHO, these are great steps forward that the Biden
Administration is taking. But they are
also taking one more unique approach. They are actually going to sponsor
an event at an upcoming Cyber conference called “DEF CON”.
The main objective of this is to publicly evaluate the
newest AI technologies that have recently come out and to release any
resulting disclosures.
In other words, this is a vetting event for the public in which
they can get the truth from the vendors about the AI products that they are
peddling. Some of the companies that
will be taking part in this vetting process include the following:
*Anthropic
*Google
*Hugging Face
*Microsoft
*Nvidia
*OpenAI
*Stability AI
Another main objective of this public vetting process is to
address the concerns the public has about the social implications of AI, such
as racial profiling, discrimination, etc.
The idea behind all of this public exposure is that the Biden
Administration feels it is very important for these AI vendors to directly
address and correct the fears we American citizens have about using AI in
everyday life.
As has been noted, it is time to take the black box off of
AI and demonstrate what it can do, and, most importantly, what it
cannot do.
But most notably, ChatGPT and its maker, OpenAI, are going
to come under the microscope as well.
The main issues to be dealt with here are not just the social implications,
but the Cyber ones as well. While
ChatGPT is great for doing certain things, its biggest drawback is that it can
be used for the most extreme, nefarious purposes possible.
The biggest point of angst is that even a kid with no Cyber
experience or knowledge can use ChatGPT to launch a massive Cyberattack,
the likes of which nobody has seen before, especially on our Critical Infrastructure. For instance, do you think the SolarWinds
security breach was damaging enough?
Well, ChatGPT could possibly be used to launch even grander attacks
than that.
But one of the biggest fears is that ChatGPT will be used to
spread a horrible amount of misinformation to the public at large, especially
on Social Media. Even more so, as the
next Presidential Election comes, ChatGPT could even be used to create
Deepfakes that are so compelling and real that even experts will not be able to
tell at first glance what is real and what is not.
Even more troublesome is that these Deepfakes can also be
used in large Phishing attacks in order to lure in large-scale donors.
My Thoughts On This:
Truth be told, AI is nothing new. It has been around since at least the mid-1950s,
but it did not make its claim to fame until now, thanks to the propulsion of
ChatGPT. The bottom line is that AI and
ML are going to be with us for a very long period of time.
It has its pluses and minuses. But it is very important to keep in mind that
not only will we never fully understand the human brain, we will never even
be able to replicate all of its reasoning powers.
At best, we may only understand a mere 0.5% of it. This is where AI tools such as ChatGPT will have
their limitations. I wrote a rather exhaustive
whitepaper for a client on this very topic, and it has some serious
restrictions. In my view, AI and ML will
best be used only for automation processes, where mundane and ordinary tasks
are done on a daily basis.
An area of this which has evolved is known as “Robotic
Process Automation”, also known as “RPA” for short. RPA bots handle repetitive, rule-based software
tasks, much like the robot-like arms that you see in car manufacturing plants
handle repetitive physical ones.
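On the software side, an RPA-style task can be as small as a script that cleans up the files an employee would otherwise rename by hand every day. As a rough sketch (the filenames and the `INV-` naming convention here are purely hypothetical):

```python
import re


def standardize_invoice_name(filename: str) -> str:
    """Rename a messy invoice file to a uniform INV-<number>.pdf form.

    A hypothetical example of the kind of mundane, rule-based daily
    task that RPA software bots are built to automate.
    """
    match = re.search(r"(\d+)", filename)
    if match is None:
        raise ValueError(f"no invoice number found in {filename!r}")
    # Zero-pad the extracted number so the files sort predictably.
    return f"INV-{int(match.group(1)):06d}.pdf"


# A bot would apply this rule to an inbox folder on a schedule:
print(standardize_invoice_name("invoice_423 final.pdf"))  # INV-000423.pdf
```

The point is not the renaming itself, but that the task is fully rule-based: no judgment calls, which is exactly where automation beats a human and where, in my view, AI and ML belong.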
Will there be job loss here? Yes,
there will be.
But it will be nowhere near the extent that people are fearful
of today. We will always need human intervention
when it comes to AI. Keep this also in
mind: Any AI tool needs to have a large
amount of data fed into it, so it can learn.
How is this possible?
With humans, of course. Also, the algorithms
that make up an AI system also have to be optimized on a 24 X 7 X 365
basis. And of course, humans will still
be needed here as well. What we are going
through is just a hysteria and a bubble brought on by ChatGPT.
Eventually, and probably soon enough, it will burst like the
dot-com bubble of the late ‘90s.