Friday, July 7, 2023

2 Key Reasons Why Generative AI Will Fail

As I have written before, most recently in last week's blog, AI and ML are taking the world by storm.  But there is nothing new about the underlying science: researchers have been studying it since the 1950s.

 

What is different this time is how much more attention people are paying to it.  A lot of that attention has been triggered by ChatGPT, yet according to some of the latest news headlines, interest in the tool already seems to be waning.

But to me, this is not surprising.  As I have said, and will continue to say, much of it is hysteria, much like the dot-com era of the late 1990s.  Back then, any business with a ".com" in its name had VC money pouring into it, and of course, by 2000 it had all died down.  The same thing will happen with ChatGPT.  What is different this time is that the attention AI and ML have brought to the forefront will never go away.

Instead, people are going to try to build better mousetraps (as the saying goes) when it comes to AI, and explore different use cases for it.  One area that will get more attention is securing the source code used to create the web apps of today.  This is an area that has long been forgotten, and it is only now coming into the limelight.

For the longest time, software developers were never held accountable for double-checking the security of what they created.  That is no longer the case.  Just as CISOs and employees are now being held responsible for abiding by their employers' security policies, so too are software development teams.

But to give them some benefit of the doubt, they also need help fixing the vulnerabilities in the source code they create.  This is where AI and ML can possibly come into play (a minimal sketch of what such an automated check might look like follows the statistics below).  Consider some of these statistics:

*At least 65% of software development teams are planning to use AI and ML to help double-check their code within the next three years.

(SOURCE:  https://about.gitlab.com/press/releases/2023-04-20-gitlab-seventh-devsecops-report-security-without-sacrifices.html)

*Over 66% of businesses reported well over 100,000 weaknesses in the source code they create, and 50% of those cases remain open more than three months later.

(SOURCES:  https://www.rezilion.com/wp-content/uploads/2022/09/Ponemon-Rezilion-Report-Final.pdf

https://www.veracode.com/press-release/quarter-technology-applications-contain-high-severity-security-flaws-which-pose)
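
To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of how an automated source code check could be wired into a build pipeline.  The pattern matcher is a toy stand-in for a real ML model or AI code-review service, and the RISKY_PATTERNS list and file layout are assumptions made purely for illustration:

```python
# Hypothetical sketch: a build-gate style check that flags risky patterns
# in source files. In practice, the pattern matcher below would be replaced
# by a trained ML model or an AI code-review service.
import re
import sys
from pathlib import Path

# Toy stand-in for an ML classifier: a few well-known risky constructs.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval() on dynamic input",
    r"shell\s*=\s*True": "subprocess call with shell=True",
    r"password\s*=\s*['\"]": "possible hardcoded credential",
}

def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for one source file."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern, reason in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"{path}:{lineno}: {reason}")
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    all_findings = [f for file in root.rglob("*.py") for f in scan_file(file)]
    for finding in all_findings:
        print(finding)
    # Fail the run if anything was flagged, mirroring how an AI-assisted
    # gate might block a merge until the findings are triaged.
    sys.exit(1 if all_findings else 0)
```

The interesting part is the exit code: the build is blocked until a human triages whatever the tool flagged, which keeps a person in the loop rather than trusting the tool outright.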

Now, while there are good possibilities for AI and ML to come to the rescue, they do have some limitations, which are explored below:

1)     It’s only as good as what it is fed:

There is a popular myth that AI and ML can think and reason on their own, much like the human brain.  This is far from the truth.  While the human brain can initiate a thought process on its own, AI and ML simply cannot.  They first need fuel to ignite that process, and it comes in the form of datasets, fed to them in large amounts on a 24/7/365 basis.  Those datasets also have to be cleansed and optimized in order to help prevent erroneous results from being produced.  This is where human intervention is still needed and will continue to be required.  Only then can AI and ML tools learn something and produce a reasonable output.

Here lies another disadvantage.  At the present time, AI and ML can only solve relatively easy problems in source code.  They are nowhere near the point where they can fix complex issues; for that, they will need to be fed far more sophisticated datasets.  Keep in mind that although a human eye can spot gaps, weaknesses, and vulnerabilities without any formal training, an AI or ML system has to be explicitly taught to do so.
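
To show what that cleansing step might involve, here is a minimal, hypothetical sketch of preparing a labeled dataset of code snippets before it is fed to a vulnerability-detection model.  The "code" and "label" field names and the cleaning rules are my own assumptions for illustration, not any particular tool's format:

```python
# Hypothetical sketch: cleansing a labeled dataset of code snippets before
# training. Duplicates, empty snippets, and mislabeled rows are dropped so
# they cannot skew or mislead the model.
import hashlib

def cleanse(samples: list[dict]) -> list[dict]:
    """Drop duplicates, empty snippets, and rows with unknown labels."""
    seen = set()
    cleaned = []
    for sample in samples:
        code = sample.get("code", "").strip()
        label = sample.get("label")
        if not code:                             # empty snippet: nothing to learn from
            continue
        if label not in ("vulnerable", "safe"):  # unknown label: would mislead training
            continue
        digest = hashlib.sha256(code.encode()).hexdigest()
        if digest in seen:                       # exact duplicate: would skew the model
            continue
        seen.add(digest)
        cleaned.append({"code": code, "label": label})
    return cleaned

raw = [
    {"code": "eval(user_input)", "label": "vulnerable"},
    {"code": "eval(user_input)", "label": "vulnerable"},  # duplicate
    {"code": "", "label": "safe"},                        # empty
    {"code": "print('hello')", "label": "safe"},
]
print(cleanse(raw))  # only the two unique, well-labeled samples survive
```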

2)     There is no verification involved:

When source code is checked with human intervention, it is usually checked in an iterative fashion to make sure that any known issues have been remediated and the code is as airtight as possible.  AI and ML simply cannot do this.  Yes, given the right inputs they can calculate a reasonably decent output, but they have not yet evolved to the point where they can verify and confirm their results on their own.  Human nature, however, dictates that we believe whatever an AI or ML system produces, supposedly because it is more sophisticated than the human brain.  But it is not, and it never will be!  We need to get away from this kind of thinking and always remember that any output from an AI or ML system has to be fully verified as well.  Also remember that AI and ML are afforded an inherent level of trust, yet the world is now moving to what is known as the "Zero Trust Framework", where nobody can be trusted at all.  How AI and ML will work in that kind of environment remains to be seen, and it will no doubt be a huge issue further down the road.
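
One way to build that verification in is to treat any AI-suggested fix as untrusted until it passes an independent check.  The sketch below is hypothetical: get_ai_fix() is a stub standing in for whatever model or service actually produces a patch, and a real pipeline would also rerun the test suite and a static analyzer:

```python
# Hypothetical sketch: never accept an AI-suggested fix blindly; verify it
# against an independent check first.
import ast

def get_ai_fix(snippet: str) -> str:
    """Toy stand-in for an AI model that returns a 'fixed' snippet."""
    return snippet.replace("eval(", "ast.literal_eval(")

def calls_eval(snippet: str) -> bool:
    """Walk the syntax tree and report any direct call to the builtin eval()."""
    tree = ast.parse(snippet)
    return any(
        isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
        for node in ast.walk(tree)
    )

original = "value = eval(user_input)"
fixed = get_ai_fix(original)

# Verification step: the patch must still parse AND must actually remove
# the risky call. If either check fails, it goes to a human instead.
try:
    ast.parse(fixed)
    accepted = not calls_eval(fixed)
except SyntaxError:
    accepted = False

print("fix accepted" if accepted else "fix rejected, route to human review")
```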

My Thoughts On This:

The area of AI and ML that this blog has reviewed falls under the realm of "Generative AI".  Loosely put, the hope is that these tools will one day be able to initiate their own thinking processes, as the human brain can.  But this will never happen, and even if it does, it will be to only a minuscule degree at best.  It will also take many, many years to reach even that level.

Remember that in Cybersecurity, it takes the best of both worlds, technology and humans, to make anything work.  You can't go too far to the extreme in either direction; you need a balance of both, with all of the pieces working together in a harmonious fashion.

BTW, if you are interested in learning more about the latest ChatGPT algorithms, you can download a whitepaper from OpenAI at this link:

http://cyberresources.solutions/blogs/gpt_4.pdf

 
