In yesterday’s blog, I wrote about ChatGPT, and the fear that
it has brought upon society. While it certainly has its pluses, it has its share of minuses as well. The trick is in learning more about it and
getting yourself ready for whatever may come of it.
True, this is far easier said than done, but being proactive
on the security side of things will keep you that much further ahead of the game.
So with this in mind, I bring to you some of the other Cyber
threats that AI in general, not just ChatGPT per se, can potentially bring
not only to your business, but to you personally as well. So, here we go:
1) Poisoning the model:
As I also mentioned in yesterday’s
blog, AI does not, and never will, truly mimic humans. Essentially, all AI is garbage in, garbage
out. Meaning, whatever you feed into it
determines the output you get. It is as
straightforward as that. But the trick
here is that you have to cleanse and optimize the datasets on a regular basis in
order to make sure that you get what you need.
If you don’t do this, then whatever is outputted to you will be of no
use. But this is also an area the Cyberattacker
can exploit. If there is any
kind of weakness or backdoor in your AI system, the Cyberattacker can literally
tap into your datasets and alter them in a way that allows malicious payloads
to be deployed into the model. This is
technically referred to as “poisoning”. For
more information about this, click on the link below:
https://spectrum.ieee.org/ai-cybersecurity-data-poisoning
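To make the idea concrete, here is a toy sketch (in Python, with made-up numbers and a deliberately simple one-dimensional “spam filter”) of how mislabeled records slipped into a training set can drag a model’s decision threshold in the attacker’s favor:

```python
import statistics

# Toy spam filter: a message is spam if its score exceeds the midpoint
# between the two class means (a 1-D nearest-centroid model).
# All data here is synthetic and purely illustrative.

def train(samples):
    """samples: list of (score, label) pairs; returns the decision threshold."""
    spam = [s for s, lbl in samples if lbl == "spam"]
    ham = [s for s, lbl in samples if lbl == "ham"]
    return (statistics.mean(spam) + statistics.mean(ham)) / 2

def is_spam(score, threshold):
    return score > threshold

clean = [(0.9, "spam"), (0.8, "spam"), (0.1, "ham"), (0.2, "ham")]
threshold = train(clean)               # 0.5

# Poisoning: the attacker slips mislabeled records into the training data,
# calling high-scoring spam "ham" to drag the threshold upward.
poisoned = clean + [(0.95, "ham"), (0.85, "ham")]
bad_threshold = train(poisoned)        # ~0.69

print(is_spam(0.6, threshold))         # True  -- caught with clean training data
print(is_spam(0.6, bad_threshold))     # False -- slips through after poisoning
```

The point is that the attacker never touches the model’s code, only its training data, which is exactly why dataset hygiene matters so much.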
2) Data privacy:
I think it was about a week ago or
so that I wrote a blog specifically on this topic. True, there are laws out there now, like the GDPR
and the CCPA, that are designed to protect our PII datasets in general,
but what about those pieces of data that are used in AI
systems? Unfortunately, there is no law
around this yet. There has been talk about
it from within the Biden Administration, but you know how it goes in
politics: it will take forever to get anything passed, especially given the bickering
happening right now in Congress between the Republicans and the Democrats. Worse yet, a hijacked AI system can even be
used to infer other datasets of yours that may be sitting in a company’s database. For more detail on this, click on the link
below:
https://www.usenix.org/system/files/sec21-carlini-extracting.pdf
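As a rough illustration of how that leakage works (hypothetical data, and a fake “model” that is nothing more than a lookup), here is the core idea behind a membership-inference probe: an overfit system betrays, through its confidence alone, whether a record was in its training set:

```python
# Toy membership-inference sketch. The training records and confidence
# numbers are invented; a real attack would query an actual model.

training_set = {"alice@example.com", "bob@example.com"}

def model_confidence(record):
    # An overfit model is far more confident on records it memorized
    # than on records it has never seen.
    return 0.99 if record in training_set else 0.55

def likely_in_training_data(record, cutoff=0.9):
    """Attacker's probe: unusually high confidence suggests the record
    was part of the training data."""
    return model_confidence(record) > cutoff

print(likely_in_training_data("alice@example.com"))  # True  -- PII leaked
print(likely_in_training_data("carol@example.com"))  # False
```

No database is breached here; the model itself becomes the side channel, which is what makes this class of attack so hard to regulate.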
3) DDoS-like attacks:
This is probably one of the most old-fashioned
attacks that could ever exist, along with Phishing. This is essentially where a Cyberattacker launches
malformed data packets towards a server and brings it to a screeching halt
through sheer bombardment. The
server never really shuts down per se (though it could), but it makes any
service so slow that it will take minutes to access anything
versus the seconds that it normally would.
The same thing can happen to AI systems as well. The Cyberattacker can launch similar
malicious payloads towards the model and make the system consume so much hardware
power that it literally grinds to a halt. This is called a “Sponge Attack”, and more information
about it can be seen here:
https://ieeexplore.ieee.org/document/9581273
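A back-of-the-envelope sketch of why this works (the numbers are illustrative, and the quadratic cost model is loosely based on how self-attention in large language models scales with input length):

```python
# Why padding every request to the maximum length acts like a "sponge":
# one self-attention layer compares every token with every other token,
# so the work grows roughly with the square of the input length.

def attention_ops(n_tokens):
    # Simplified cost model: n^2 pairwise token comparisons.
    return n_tokens * n_tokens

normal_request = attention_ops(20)    # a typical short query
sponge_request = attention_ops(2048)  # attacker pads to the context limit

# Each sponge query costs ~10,000x the compute of a normal one.
print(sponge_request // normal_request)  # 10485
```

Unlike a classic DDoS, the attacker does not need a botnet’s worth of traffic; a modest stream of worst-case inputs is enough to exhaust the hardware.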
4) Phishing attacks:
This was elaborated on in much more
detail in yesterday’s blog. Essentially,
there are always tell-tale signs of a Phishing-based email. But with ChatGPT and other AI tools, anyone
with nefarious goals in mind can use these tools to craft a Phishing
email that is not only hard to detect, but can even evade any sort
of firewall or antimalware system. In
fact, there have already been reports of an escalation in the use of ChatGPT for these
very purposes. More information about this can be seen at the link below:
https://www.darkreading.com/vulnerabilities-threats/bolstered-chatgpt-tools-phishing-surged-ahead
It has even been listed among the Top 5
most dangerous attacks:
https://www.darkreading.com/attacks-breaches/sans-lists-top-5-most-dangerous-cyberattacks-in-2023
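For the curious, here is a minimal sketch of the kind of “tell-tale sign” checks defenders have traditionally relied on, the very checks that AI-written Phishing emails are now crafted to slip past. The keywords and domains are purely illustrative, not a production filter:

```python
# Two classic tell-tale signs of a Phishing email, as simple rules:
# urgent language in the body, and links whose domain does not match
# the sender's domain. Illustrative only.

URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify now"}

def phishing_signals(sender_domain, link_domains, body):
    """Return a list of red flags found in the message."""
    signals = []
    body_lower = body.lower()
    if any(word in body_lower for word in URGENCY_WORDS):
        signals.append("urgent language")
    for domain in link_domains:
        if domain != sender_domain:
            signals.append(f"link to {domain} does not match sender {sender_domain}")
    return signals

flags = phishing_signals(
    "bank.example.com",
    ["bank-login.example.net"],
    "Your account will be suspended. Verify now.",
)
print(flags)
```

The worry with AI-generated Phishing is precisely that it produces fluent, calm, well-targeted text, so crude keyword rules like these catch less and less of it.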
5) Deepfakes:
This is when AI is used to replicate
a real-life person. Although this is
scary enough, it can even be used to create a video of them, duplicating their
voice as well. Probably the best example of this
is during any election cycle. A
Cyberattacker can create a real-looking video of a candidate asking for donations
to the cause. But in reality, any money
that is collected will simply go to an offshore account somewhere, never
to be retrieved.
Or worse yet, this could be bait to lure victims to a phony website
where their login details can easily be heisted.
More information
about Deepfakes can be seen at the link below:
https://www.darkreading.com/threat-intelligence/threat-landscape-deepfake-cyberattacks-are-here
6) Malware getting worse:
At some point in time, malware was
an evil that could be managed, but now it seems like it is only getting
worse. It has come to the point now
where it can evade all forms of detection, even the most sophisticated of
firewalls. But as AI and ChatGPT further
evolve, creating even stealthier forms of malware that can go
undetected almost indefinitely will become the norm.
The greatest fear now is that this newly bred malware will be
used to infiltrate Critical Infrastructure.
My Thoughts On This:
In this blog, I have described those AI threats that are
most relevant to individuals and businesses alike. There are other threats out there that can
stem from this, and they are as follows:
*Evasion Attacks
*Prompt Injections
*Model Theft
*Weaponized Models
I will cover these kinds of attack vectors in a future blog,
so stay tuned, and be proactive!!!