Sunday, January 12, 2025

Risks And Opportunities For Generative AI In 2025

As we go deeper into January, many people have already started to predict what the hot markets in Cybersecurity will be.  Without a doubt, one of the gold mines will be that of Generative AI.  Although ChatGPT (created by OpenAI) may not carry all the glamour it once did, it is still being used quite heavily by both businesses and individuals alike.  But it did do one thing:  It opened the eyes of the world to what Generative AI is all about, and to its opportunities, but also to its huge risk potential as well.

One of the biggest concerns here is that of Deepfakes.  This is where a Cyberattacker takes an image or a video of a real person and replicates it into a fake one, using Gen AI based models.  These fakes are then often used to launch both Phishing and Social Engineering Attacks.

One of the prime-time venues for this is any kind of election season here in the United States.  In these cases, the Cyberattacker will create a fake video of a leading political candidate and post it somewhere like YouTube.  The video will convincingly ask voters to donate money to the campaign, but anything that is sent over will go to a phony, offshore bank account.

Other threats can come about as well, but for now, here are some of the main concerns going into this year:

1)     LLMs:

This acronym stands for “Large Language Models”.  They are a part of Generative AI, and can be technically defined as follows:

Large language models (LLMs) are a category of foundation models trained on immense amounts of data making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks.

(SOURCE:  What Are Large Language Models (LLMs)? | IBM)

Although the models that drive them can be quite complex, the bottom line is that their goal is to take the words we speak, understand the context in which they are spoken, and provide an appropriate output.  A great example of this is the Digital Personalities that you may engage with when, for example, you have a virtual doctor’s appointment.  It is an LLM that drives this kind of application, and it learns from the conversation so that it can talk back to you like a real-life human would.  But the downside is that many of these models are proprietary in nature, which makes them a very tempting target for the Cyberattacker to break into and wreak all kinds of havoc on.  A quick sketch of what interacting with an LLM looks like in code follows below.
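
To make this a bit more concrete, here is a minimal sketch of running a small, open LLM locally with the Hugging Face transformers library.  The "gpt2" model is used here only because it is small and freely available; a real virtual assistant would use a far larger model, and the prompt is purely hypothetical.

    # A minimal sketch: generate a natural language reply from a prompt.
    # Assumes: pip install transformers torch
    from transformers import pipeline

    # Load a small text-generation model (illustrative choice only).
    generator = pipeline("text-generation", model="gpt2")

    # The model reads the conversational context and continues it.
    result = generator(
        "Patient: I have had a headache for two days. Assistant:",
        max_new_tokens=40,
    )
    print(result[0]["generated_text"])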

2)     The Cloud:

Right now, the two main juggernauts are AWS and Microsoft Azure.  As companies start to realize the benefits of moving their entire IT and Network Infrastructures to the Cloud, there is one problem:  Both of these vendors also offer very enticing tools to create and deploy Generative AI models.  Although they have taken steps to help safeguard security, especially from the standpoint of Data Exfiltration Attacks, the other main problem is that Cloud Tenants have not set up the appropriate rights, permissions, and privileges for their authorized users.  Very often, they give out too much, which can lead to unintentional misconfigurations in the development of the Gen AI models.  As a result, this can lead to unknown backdoors being opened, or worse yet, to an Insider Attack happening.  Therefore, careful attention needs to be paid to creating both the Identity and Access Management (IAM) and Privileged Access Management (PAM) security policies, as the sketch below illustrates.
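
As a hedged illustration of what “least privilege” can look like in practice, the following sketch uses the AWS boto3 library to create a narrowly scoped IAM policy.  The policy name, bucket name, and chosen actions are assumptions made for this example, not a definitive template:

    import json
    import boto3

    # Grant read-only access to a single (hypothetical) bucket holding
    # Gen AI training data, instead of broad S3 permissions.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::example-genai-training-data",
                    "arn:aws:s3:::example-genai-training-data/*",
                ],
            }
        ],
    }

    # Register the policy so it can be attached to only the users or
    # roles that genuinely need it.
    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="GenAITrainingDataReadOnly",
        PolicyDocument=json.dumps(policy_document),
    )

The design point is simply that access is granted per resource and per action, rather than tenant-wide.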

3)     An Aid:

Over the last year or so, one of the biggest issues in Web application development has been the lack of attention paid by software development teams to the security of their source code.  One of the driving factors behind this is that they very often make use of open-source APIs.  While this does have its advantages (such as not having to create source code from scratch), the main weakness is that the libraries that host them for download are not updated on a real-time basis.  Rather, this is left up to the software developers to do, and they very often do not.  In an effort to secure the source code before final delivery of the project is made to the client, businesses are now opting to use what is known as “DevSecOps”.  Long story short, this is where the software development team, the IT Security team, and the Operations team all come together to serve as a counterbalance to one another, ensuring that the source code has been checked, and even double checked, for any weaknesses.  Depending upon the size and scope of the project, this can be quite a tall order.  But the good news here is that Generative AI can be used as an aid to help automate some of this checking process, as sketched below.  It is important to note, though, that it should not be relied upon 100%, as human intervention is still needed in this regard.
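
Here is one hedged sketch of what that automation could look like: an LLM acting as a first-pass reviewer that flags likely weaknesses before a human on the IT Security team takes over.  The OpenAI Python client, the model name, and the prompt wording are all assumptions for illustration:

    # A first-pass, advisory code review inside a DevSecOps pipeline.
    # Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    # A deliberately vulnerable snippet (SQL injection) to be reviewed.
    snippet = '''
    query = "SELECT * FROM users WHERE name = '" + user_input + "'"
    cursor.execute(query)
    '''

    review = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable review model
        messages=[
            {"role": "system",
             "content": "You are a security reviewer. List likely "
                        "vulnerabilities in this code and suggest fixes."},
            {"role": "user", "content": snippet},
        ],
    )

    # Advisory output only: a human still makes the final call.
    print(review.choices[0].message.content)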

My Thoughts on This:

Well, there you have it: some of the risks and opportunities that Generative AI brings to the table this year.  But there is yet another area which has not received a lot of publicity yet.  And that is the Data Privacy Laws, such as the GDPR, CCPA, HIPAA, etc.  Keep in mind that Generative AI models (including LLMs) need a lot of data to learn and stay optimized.

Because of this, the regulators behind these Laws have placed huge scrutiny on how businesses are safeguarding the data that is being used.  If the right controls are not put into place, the chances of a Data Leakage are much greater, and this could force the company to face a stringent audit and even huge financial penalties.  For instance, under the tenets and provisions of the GDPR, fines can reach up to 4% of a company’s total annual worldwide turnover, or €20 million, whichever is higher.  A quick back-of-the-envelope calculation follows below.
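
To put a rough number on that exposure, here is a small sketch using a hypothetical company with €750 million in annual worldwide turnover:

    # GDPR's upper fine tier: the higher of EUR 20 million or 4% of
    # annual worldwide turnover. The turnover figure is hypothetical.
    annual_turnover_eur = 750_000_000

    max_fine_eur = max(20_000_000, 0.04 * annual_turnover_eur)
    print(f"Maximum exposure: EUR {max_fine_eur:,.0f}")
    # Prints: Maximum exposure: EUR 30,000,000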

This is really something to think about!!!