Sunday, August 25, 2024

The Birth Of Critical Thinking AI: Reality Or Myth?

If there is anything making the news headlines on a non-political note, it is Generative AI.  The applications keep growing, Nvidia keeps releasing new GPUs, and the algorithms keep getting better, yet there is always that thirst to push Generative AI even further beyond what it can do now.  While this need exists in many of the industries that currently use it, it is even more pronounced in the world of Cybersecurity.

At the present time, Generative AI is being used for the following purposes:

*Automation of repetitive tasks such as those found in Penetration Testing and Threat Hunting.

*Filtering out false positives and presenting only the real threats to the IT Security team via the SIEM.

*Wherever possible, using it for staff augmentation purposes, such as using a chatbot as the first point of contact with a prospect or a customer.

*Being used in Forensics Analysis to take a much deeper dive into the latent evidence that is collected.

But as mentioned, those of us in Cyber want Generative AI to do more than this.  In fact, a technical term has now been coined for it:  “Critical Thinking AI”.  Meaning, how far can we make Generative AI think and reason on its own, just like the human brain, without having to pump gargantuan datasets into it?

The answer to this is a blunt “No”.  We will never understand the human brain 100%, the way we can the other major organs of the human body.  At most, we will get to 0.0005%.  But even given this extremely low margin, there is still some hope that we can push what we have now just a little bit further.  Here are some examples of what people are thinking:

*Having Generative AI train itself to get rid of “Hallucinations”.  You are probably wondering what exactly this is.  Well, here is a good definition of it:

“AI hallucinations are inaccurate or misleading results that AI models generate. They can occur when the model generates a response that's statistically similar to factually correct data, but is otherwise false.” 

(SOURCE:  Google Search).

A good example of this is the chatbots that are heavily used in the healthcare industry.  Suppose you have a virtual appointment, and rather than talking to a real doctor, you are instead talking to a “Digital Person”.  You tell it the symptoms you are feeling.  From here, it will take this information, go to its database, and try to find a name for the ailment you might be facing.  For instance, is it a cold, the flu, or even worse, COVID-19?  While to some degree this “Digital Person” will be able to provide an answer, your next question will be:  “What do I take for it?”.  Suppose it comes back and says that you need to take Losartan, which is a blood pressure medication.  Of course, this is false, because that is not what this diagnosis calls for.  This is called the “Hallucination Effect”.  Meaning, the Generative AI system has the datasets that it needs to provide a more or less accurate prescription, but it does not.  Instead, it gives a false answer.  So, a future goal of “Critical Thinking AI” would be to have the Digital Person quickly realize this mistake on its own and give a correct answer, namely the appropriate medication for the actual diagnosis.  The ultimate goal here is to do all of this without any sort of human intervention.

*Phishing still remains the main threat variant of today, coming in all kinds of flavors.  Generative AI is now being used to filter for these attacks, and from what I know, it seems to be doing a reasonably good job at it.  But consider the case of a Business Email Compromise (BEC) attack.  Here, the administrative assistant would receive a fake, albeit very convincing, email from the CEO demanding that a large sum of money be transferred to a customer as a payment.  But of course, if any money is ever sent, it would be deposited into a phony offshore account in China.  If the administrative assistant were to notice the nuances of this particular email, he or she could backtrack on their own to determine its legitimacy.  But this of course can take time.  So, the goal of “Critical Thinking AI” in this case would be to have the Generative AI model look into all of this on its own (when queried to do so), determine the origin of the email, and report a finding back to the administrative assistant.
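
To make that last point a little more concrete, here is a minimal sketch of the kind of checks such a model could automate on a suspicious email: comparing the From domain against the Reply-To domain and flagging the urgency and payment language that is typical of a BEC attempt.  This is my own illustration, not a production detection tool; the keyword list, scoring rules, and sample addresses are all hypothetical.

```python
# A minimal, hypothetical sketch of automated BEC triage checks.
# It uses only the Python standard library; the keyword list and
# the rules below are illustrative, not a vetted detection model.
from email import message_from_string
from email.utils import parseaddr

URGENCY_KEYWORDS = {"wire transfer", "urgent", "immediately", "confidential", "payment"}

def triage_bec(raw_email: str) -> dict:
    msg = message_from_string(raw_email)
    findings = []

    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.split("@")[-1].lower()
    reply_domain = reply_addr.split("@")[-1].lower() if reply_addr else from_domain

    # 1. A Reply-To domain that differs from the From domain is a classic BEC sign.
    if reply_domain != from_domain:
        findings.append(f"Reply-To domain '{reply_domain}' differs from From domain '{from_domain}'")

    # 2. Urgent payment language in the body.
    payload = msg.get_payload()
    body = payload if isinstance(payload, str) else ""
    hits = [kw for kw in URGENCY_KEYWORDS if kw in body.lower()]
    if hits:
        findings.append(f"Urgency/payment keywords present: {hits}")

    return {"suspicious": bool(findings), "findings": findings}

sample = (
    "From: CEO <ceo@company.com>\r\n"
    "Reply-To: ceo.office@freemail-example.com\r\n"
    "Subject: Urgent wire transfer\r\n\r\n"
    "Please process this payment immediately and keep it confidential."
)
print(triage_bec(sample))
```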

My Thoughts On This:

So, how can we get to the point of “Critical Thinking AI”?  Well, first, it is important to note that the scenarios I depicted above are purely fictional in terms of what we are expecting the Generative AI model to do.  We could get close to having it do them, but the reality is that human intervention will always be needed at some point in time.

But to reach that threshold, the one missing thing that we are not providing to the Generative AI model as we pump large amounts of datasets into it is “Contextual Data”.  This can be technically defined as follows:

“Contextual data is the background information that provides a broader understanding of an event, person, or item. This data is used for framing what you know in a larger picture.”

(SOURCE:  https://www.sisense.com/glossary/contextual-data/)

For example, going back to our chatbot example, all that we feed into the “Digital Person” are quantitative and qualitative datasets in order to produce a specific answer.  But what is also needed is to train the Generative AI model to understand and infer why it is giving the answer that it is.  So in this case, had contextual data been fed into it, it probably would have given the appropriate medication the first time around.
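
As a rough illustration of what feeding contextual data into the model might look like, here is a small sketch that packages a patient's context (symptoms, history, allergies) together with the question before anything is sent to the model, rather than sending the bare question on its own.  The record fields and the prompt template are purely hypothetical; the point is the shape of the input, not any specific vendor's interface.

```python
# A hypothetical sketch of "contextual data" being packaged with a query
# before it is sent to a Generative AI model. The patient record and the
# prompt template are illustrative placeholders only.
from dataclasses import dataclass, field

@dataclass
class PatientContext:
    symptoms: list = field(default_factory=list)
    history: list = field(default_factory=list)
    allergies: list = field(default_factory=list)

def build_grounded_prompt(question: str, ctx: PatientContext) -> str:
    # The contextual block frames the question so the model can reason
    # about *why* a given answer fits this patient, not just *what* to say.
    return (
        "CONTEXT\n"
        f"- Reported symptoms: {', '.join(ctx.symptoms) or 'none reported'}\n"
        f"- Relevant history: {', '.join(ctx.history) or 'none reported'}\n"
        f"- Known allergies: {', '.join(ctx.allergies) or 'none reported'}\n\n"
        "TASK\n"
        f"{question}\n"
        "Explain the reasoning that connects the recommendation to the context above."
    )

ctx = PatientContext(
    symptoms=["fever", "sore throat", "congestion"],
    history=["no chronic conditions"],
    allergies=["penicillin"],
)
print(build_grounded_prompt("What over-the-counter relief is appropriate?", ctx))
```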

If we can ever reach the threshold of “Critical Thinking AI”, we might just be able to finally understand how we can use the good of Generative AI to fight its evil twin.  More information about this can be seen at the link below:

https://kpmg.com/nl/en/home/insights/2024/06/rethinking-cybersecurity-ai.html

Sunday, August 18, 2024

How To Bridge The Gap Of Ineffective Cyber Communications: 3 Proven Tactics

Next month, I will be teaching my first class as an Adjunct Instructor at Harper College, located in Palatine, IL.  This class will be all about the fundamentals of Phishing, and how Generative AI is being used to create emails that are so convincing it is getting close to impossible to tell what is real and what is fake. 

Harper College has actually announced a number of new Cybersecurity initiatives for its students, and my business partner and I attended a number of the meetings leading up to their launch.

One of the key questions that was asked is:  “What skills should be emphasized in these new initiatives?”  Of course, most of the attendees in the meetings thought that learning technical skills was the most important.  This includes learning how to code and write scripts (using Perl, Python, PHP, etc.), learning all about the mechanics of AI, and so forth.

But I was one of the few people who actually said that while this is all important, teaching students how to communicate effectively in a team is, to me, what matters most.  My premise for this argument was (and still is) that you can have a college graduate with all of the certs and tech knowledge, but what good is all of that if it cannot be communicated and applied in a team environment?

I further noted that although having a set of baseline skills is very important, the additional skills that an employer requires can be learned on the job.  Take my own case.  Although I have been doing IT Security and Cyber tech writing for 15 years now, I knew nothing about how to write a Request For Information (RFI) or a Request For Proposal (RFP). 

But since I started my full-time job almost one year ago, my managers and coworkers have taught me the basics of how to compose these kinds of documents.

But it is not just university or junior college graduates in Cybersecurity who have issues with effective communications.  Many seasoned professionals have a hard time with it as well.  For example, in a recent survey conducted by Tines, entitled “The Voice Of The SOC”, as many as 18% of the respondents admitted that they have poor communication skills, and that trying to share their ideas with their coworkers was a huge “chore to do”. 

One of the primary reasons cited for this is that they do not want to “waste time” having to distill all of the technical data they collect down to a level that key stakeholders can understand.  In my opinion, this is a truly pathetic excuse to make. 

For example, how do Pre-Sales Engineers convey the technical details so that prospects and existing customers can understand the solution they are proposing?  The entire report can be downloaded at the link below:

http://cyberresources.solutions/blogs/Tines_Report.pdf

So, what can be done to alleviate this serious issue?  Well, when it comes to the existing workforce, a number of solutions have been proposed, some of which are:

1)     Deploy Automation:

The thinking here is that if the more mundane tasks are automated, that will leave time for the worker to actually focus on communicating in a way that makes sense to their audience.  A prime example of this is Penetration Testing.  There are many tasks involved here, and ultimately, a final report has to be prepared for the client at a level that they can understand.  By automating more of these routine tasks, the Penetration Tester will have the time to compile a document that is easy for the client to go through and review (a minimal sketch of this idea appears at the end of this point).

It is also believed that if more business processes were automated, the siloes that exist between departments would be broken down as well.  This is especially important for the IT Department.
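
To make the report-automation idea above a bit more concrete, here is a minimal sketch that takes structured Penetration Testing findings and turns them into a plain-language summary a client could actually read.  The findings, severity buckets, and wording are all made up purely for the example.

```python
# A hypothetical sketch of turning raw pen-test findings into a
# client-readable summary. The finding data and phrasing are illustrative only.
from collections import defaultdict

findings = [
    {"title": "Outdated OpenSSH on web server", "severity": "High",
     "impact": "Remote attackers could exploit known vulnerabilities.",
     "fix": "Upgrade OpenSSH to the latest supported release."},
    {"title": "Missing MFA on VPN portal", "severity": "High",
     "impact": "Stolen passwords alone are enough to gain internal access.",
     "fix": "Require multi-factor authentication for all VPN logins."},
    {"title": "Verbose server banners", "severity": "Low",
     "impact": "Attackers can fingerprint software versions more easily.",
     "fix": "Suppress version details in HTTP response headers."},
]

def build_client_report(items):
    # Group findings by severity so the most important issues come first.
    by_severity = defaultdict(list)
    for item in items:
        by_severity[item["severity"]].append(item)

    lines = ["PENETRATION TEST SUMMARY", ""]
    for severity in ("Critical", "High", "Medium", "Low"):
        for item in by_severity.get(severity, []):
            lines.append(f"[{severity}] {item['title']}")
            lines.append(f"  Why it matters: {item['impact']}")
            lines.append(f"  What to do: {item['fix']}")
            lines.append("")
    return "\n".join(lines)

print(build_client_report(findings))
```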

2)     Prompt Engineering:

Whether you like it or not, ChatGPT is going to be around for a long time to come.  Many individuals and organizations use it now to get answers to questions or to get new ideas on something.  But remember that with AI, the key principle is “Garbage In, Garbage Out”.  This simply means that the answers you get from ChatGPT are only as good as the data that is fed into it.  Keep in mind that this platform does not simply give you a list of links to go through to find the answer to your questions.  Rather, it tries to give you a very specific answer to your questions.  Therefore, you need to feed ChatGPT an exact query, using the right keywords.  This is technically known as “Prompt Engineering”, and learning how to do this is another great way for the Cyber professional to hone their communication skills.  In fact, according to one researcher at MIT, learning Prompt Engineering is the top AI skill that you can have.  More details on this can be seen at the link below:

https://www.cnbc.com/2023/09/22/tech-expert-top-ai-skill-to-know-learn-the-basics-in-two-hours.html
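
As a simple illustration of the difference Prompt Engineering makes, here is a sketch that contrasts a vague prompt with a structured one spelling out the role, the context, the task, and the output format.  The template and the scenario details are just one possible approach of my own, not an official recipe from any vendor.

```python
# A hypothetical example of a vague prompt versus an engineered one.
# The role/context/task/format template is one common structure, not a standard.
vague_prompt = "Tell me about phishing."

def engineer_prompt(role: str, context: str, task: str, output_format: str) -> str:
    # Spell out who the model should act as, what it knows, what it must do,
    # and how the answer should look -- the essence of prompt engineering.
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Output format: {output_format}"
    )

engineered_prompt = engineer_prompt(
    role="a SOC analyst briefing non-technical executives",
    context="Our company of 500 employees was targeted by a credential-phishing campaign last week.",
    task="Summarize the top three risks and the immediate actions we should take.",
    output_format="Three short bullet points, no jargon, under 120 words total.",
)

print("VAGUE:\n", vague_prompt, "\n")
print("ENGINEERED:\n", engineered_prompt)
```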

3)     Implement The Tabletop:

This is a kind of scenario in which you gather up some employees and give them a fictitious security breach that has happened.  From there, you instruct them to analyze the situation and communicate effectively what they think happened.  This serves two great purposes:

*Not only will it help to enhance communication skills, but it will also help to bring down the siloes as just described before, as employees from different departments will be involved in this particular exercise. 

*If your company were to actually be hit by a security breach, one of the first things you need to do is effectively communicate what has happened to key stakeholders in a way that they can understand it.  Doing Tabletop exercises will be of great importance here as well.  After all, it will be your company’s brand reputation that is at stake.

My Thoughts On This:

Having a great set of communication skills is also crucial when it comes time for Incident Response and Disaster Recovery.  You don’t want members of these teams running around trying to figure out who said what.  Rather, you want them to jump straight to the cause and put out the fires as quickly as possible.

Technology can help do this, but only up to a certain point.  The other key component is efficient, human-based communication skills.

Sunday, August 11, 2024

Quantum Artificial Intelligence: The Good, The Bad, & The Ugly

I remember back in my college days at Purdue, I was absolutely terrified of computers.  I never wanted to take a class that had them as part of the curriculum.  But being an Ag Econ major, I had no choice but to face my fears of computers, because while I loved the subject matter, technology was a big part of the major.  My first major test of computers occurred when I took a class called “CS 110”.  This was simply an introductory class in computers, but there was a lot of work involved.

Probably the most terrifying part of the class for me was sitting through the actual lab final, where we only had something like two hours to complete it.  But somehow, I managed to get a “B” in the class, and boy, I was happy with that.

My next experiences with computers took place in my graduate school days at both SIUC and BGSU.  At the former, I had to learn (the very hard way) how to do mainframe SAS programming.  At the latter, I actually ended up becoming an MIS major, and even worked full-time as a computer consultant for the university.  Back then, I was dealing with the Mac Classics.

But fast forward to today, and look where we are at.  We can spin up a Virtual Machine in just a matter of minutes, at just a fraction of the cost it would have been back in the late 90s and early 2000s.  Probably the best example I give of this is setting up an Oracle Enterprise Database. 

Back in the day, it would have cost at least $30,000 to set up an On Prem Server (most of this was in the licensing costs).  Now, you can create the exact same thing in Microsoft Azure for as little as $80.00 per month.

So, what does the future now hold?  It is an area known as “Quantum Computing”.  Wondering what it is?  Well, here is a technical definition of it:

“Quantum computing is an emergent field of cutting-edge computer science harnessing the unique qualities of quantum mechanics to solve problems beyond the ability of even the most powerful classical computers.”

(SOURCE:  https://www.ibm.com/topics/quantum-computing)

Given its sheer power and speed, Quantum Computing has a number of key use cases, which are as follows:

*Analyzing financial portfolios of all kinds of clients, no matter how small or large they may be.

*Improving and optimizing the lifespan of Electric Vehicle (EV) batteries.

*Further enhancing the drug research and discovery process (for example, Generative AI has already propelled this, but Quantum Computing is expected to take it further by leaps and bounds).

*Creating new GPU and NPU chips for Generative AI based applications.

Capgemini just published a report on the state of Quantum Computing, and you can access it at this link:

http://cyberresources.solutions/blogs/Quantum_Computing.pdf

But another area where Quantum Computing will make a huge mark is in Generative AI.  Technically, this is a field of Artificial Intelligence known as “Quantum AI”, also commonly referred to as “QAI”.  Right now, we marvel at how quickly the GPT-4 algorithms can deliver an output in just a matter of a few minutes through ChatGPT.  But the algorithms that will be derived from QAI will deliver that very same output in just a matter of seconds.  The reason for this is that Quantum Computing relies heavily on massive parallel processing. 

This simply means that many processes are run at the same time in order to compute very complex calculations in just a fraction of the time it would take the computers of today.  The other bit of good news here is that since the QAI algorithms will be far more efficient, they will demand less energy from the data centers in which they are hosted.  Right now, cooling, and the fresh water used for it, is a huge issue in this kind of environment.
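
For a rough feel of where that parallelism comes from, here is a toy sketch using the open-source Qiskit library (my own choice of tool, assuming it is installed via pip install qiskit): putting n qubits into superposition produces a state that spans 2^n basis states at once, which is the property quantum algorithms exploit.  This is only an illustration of the state space, not a working QAI algorithm.

```python
# A toy illustration of quantum parallelism with Qiskit (pip install qiskit).
# n qubits in superposition span 2**n basis states simultaneously; a classical
# register of n bits holds exactly one of those values at a time.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 3
qc = QuantumCircuit(n)
for qubit in range(n):
    qc.h(qubit)            # Hadamard gate puts each qubit into superposition

state = Statevector.from_instruction(qc)
probs = state.probabilities_dict()

print(f"{n} qubits -> {len(probs)} basis states held at once:")
for bitstring, p in sorted(probs.items()):
    print(f"  |{bitstring}>  probability {p:.3f}")
```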

But, now come the downsides of QAI.  Probably the biggest one here is that of the Cyberattacker manipulating the algorithms in order to break the strong levels of Encryption that exist today. 

To show just how bleak this picture is, it is even expected that within the next five years or so, the Cyberattacker will have the ability to break all of these Encryption Protocols.  More details on this can be seen at the link below:

https://www.csoonline.com/article/651125/emerging-cyber-threats-in-2023-from-ai-to-quantum-to-data-poisoning.html
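
To see why Encryption is the worry, consider that RSA's security rests on how hard it is to factor the public modulus.  The toy sketch below uses a textbook-sized key (the classic p = 61, q = 53 example) so that brute-force factoring recovers the private key in an instant; Shor's algorithm on a large enough quantum computer is expected to do the equivalent to real 2048-bit keys, which is the scenario described above.  The numbers are chosen purely for illustration.

```python
# Toy RSA example: with a tiny modulus, factoring n recovers the private key
# instantly. Shor's algorithm would do the equivalent for real key sizes.
p, q = 61, 53                        # secret primes (textbook example)
n = p * q                            # public modulus: 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent: 2753

message = 65
ciphertext = pow(message, e, n)

# An attacker only sees (n, e). Trial division "breaks" this toy key:
factor = next(f for f in range(2, n) if n % f == 0)
p_found, q_found = factor, n // factor
d_found = pow(e, -1, (p_found - 1) * (q_found - 1))

recovered = pow(ciphertext, d_found, n)
print(f"n = {n}, recovered factors: {p_found} x {q_found}")
print(f"ciphertext {ciphertext} decrypts back to {recovered} (original was {message})")
```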

People have in fact given serious thought to this, especially in our own Federal Government.  For instance, back in 2022, Congress passed what is known as the “Quantum Computing Cybersecurity Preparedness Act.” 

This requires federal agencies to inventory their cryptographic systems and prepare to migrate them to post-quantum cryptography, so that they are ready well before quantum-enabled attacks actually occur.  More details about this can be seen at the link below:

https://www.congress.gov/bill/117th-congress/house-bill/7535

Also, the National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA), and National Institute of Standards and Technology (NIST) have created a document known as “Quantum Readiness: Migration to Post-Quantum Cryptography.”  This was a joint effort to address the Cybersecurity risks that are associated with QAI.  It can be downloaded from this link:

http://cyberresources.solutions/blogs/CSI-QUANTUM-READINESS.pdf

My Thoughts On This

Whether we like it or not, QAI will make its grand entrance into our society in the way that ChatGPT did a few years ago.  But this time, there is a lot more at stake, given just how powerful Quantum Computing is.  It will impact every walk of life all over the world. 

But we have one key advantage here:  At the present time, we are more or less learning about the risks that Generative AI poses (and those that are evolving, but have not been discovered yet), and we can apply those lessons learned to QAI. 

But the key thing here is that we all must be proactive about this.  QAI will be here a lot sooner than we realize.  For more details on what this means to you, Capgemini has also published a supplemental report, and it can be downloaded here:

http://cyberresources.solutions/blogs/Quantum_Computing_Supplement.pdf

Also, CISOs and their IT Security teams must address at what levels they use Encryption.  They will of course need to be trained in QAI, and many Encryption Infrastructures (most notably the Public Key Infrastructure) will have to be redesigned, and even redeployed, in order to keep up with the Cybersecurity risks that QAI will bring to the table. 
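
One practical first step toward that redesign is simply taking inventory of the key algorithms and sizes your public-facing services use today, so you know what will eventually need a post-quantum replacement.  Here is a rough sketch of that step, assuming the third-party cryptography package is installed; the host name is just an example and the check is deliberately minimal.

```python
# A rough sketch of a crypto inventory step: fetch a server's TLS certificate
# and report the public-key algorithm and size that would need a post-quantum
# replacement. Requires the 'cryptography' package (pip install cryptography).
import ssl
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

def inspect_certificate(host: str, port: int = 443) -> dict:
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    key = cert.public_key()

    if isinstance(key, rsa.RSAPublicKey):
        algorithm = f"RSA-{key.key_size}"       # vulnerable to Shor's algorithm at scale
    elif isinstance(key, ec.EllipticCurvePublicKey):
        algorithm = f"ECC ({key.curve.name})"   # also quantum-vulnerable
    else:
        algorithm = type(key).__name__

    return {"subject": cert.subject.rfc4514_string(), "public_key": algorithm}

# Example host; replace with the services in your own inventory.
print(inspect_certificate("www.example.com"))
```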

In the end, this will be known as “Crypto Resiliency”, just like what Cyber Resiliency means to us today.

Sunday, August 4, 2024

Should Ransomware Payments Be Made 100% Illegal? The Debate Rages On

Over the last week or so, I have recorded a number of podcasts, with guests speaking about different areas of their expertise.  But one common question that kept coming up (which I have to admit I prodded for) was about Ransomware.  For instance, I asked the guests what they thought about it, and the big question:  should the ransom actually be paid?  Astonishingly enough, the answers were mixed.

In my opinion, a ransom should never be paid.  First, it shows the Cyberattacker that you will bend, even though you may not want to.  As a result, the chances are much greater that they will come around the next time, and even demand a higher ransom payment. 

Second, if you are good at maintaining backups on a prescribed schedule, then restoring mission critical operations should not be a problem.  But this is largely dependent upon what kind of environment your IT/Network Infrastructure is hosted in.

If you are in the Cloud, for example using Microsoft Azure, then this will not be a problem.  You could be up and running within just a few hours.  But if some part of it is On Prem, the restoration process could take much longer, especially if you have to resort to tape backups. 

Third, if you do make a ransom payment, many insurance companies are now refusing to pay out on claims filed in these cases.  I think this all started with the French insurance company AXA, when they suddenly announced that they would stop covering ransom payments. 

As a result, other carriers followed suit.  Fourth, it can even be considered illegal to make a ransom payment, especially if it is made to a nation state actor such as Russia, China, Iran, or North Korea.

So now, this begs a new question:  Should all Ransomware payments be made illegal?  In other words, even if you paid just a few thousand dollars to a Cyberattacker, should you still face the legal consequences for it?  Here are some considerations:

1)     Ransomware is getting uglier:

Gone are the days when a Cyberattacker would simply deploy a malicious payload, lock up your computer, and encrypt your files.  It has become far worse now, with extortion-style attacks taking place that could even threaten the lives of the victims involved.  If this were to happen, the first instinct is to pay up.  But if it were made completely illegal, would you still do it?

2)     Not all businesses are equal:

This is where you would compare an SMB to a Fortune 500 company.  If ransom payments were made illegal, the latter would have a far better chance of surviving an attack than the former.  An SMB could be totally wiped out in a matter of hours.  Also, keep in mind that many Cyberattackers are now targeting SMBs precisely because of how vulnerable they are.

3)     Payments can still be made:

Even if payments were made completely illegal, businesses would still try to find a way to pay the Cyberattacker covertly.  But given all of the audit trails that financial institutions now have to implement, the payor would eventually be caught.  Bringing him or her to justice would take an enormous amount of time and expense, not only to collect the forensic evidence, but also from the standpoint of litigation.  Again, is all of this worth it if the ransom payment was only a few thousand dollars?  Probably not.

4)     More participation from law enforcement:

While Federal Government agencies, such as the FBI, are doing a great job of tracking down the Cyberattackers who launch Ransomware attacks, their resources are obviously limited.  Because of this, their priority is to first go after those attacks that have caused a large amount of damage, or where there is an extortion plot going on.  They simply don’t have the resources to chase down the people who make small ransom payments.

5)     The Cyberattacker will find another way:

If ransom payments are made 100% illegal, no matter what the circumstance is, the Cyberattacker will find another way to be compensated.  But this time, the consequences of this could be far deadlier and even more extreme.

My Thoughts On This:

So given the considerations I just listed (and there are probably many more of them), is it worth it to make ransom payments totally illegal?  While it may have short-term advantages, it will not serve us well in the long run.  In the end, businesses should have the option of whether or not to pay up, even though I still think they should not. 

There are calls now for the Federal Government to enact more best practices and standards for businesses to follow, but in the end, it will be up to the business owner to implement them.  The only time they would be obligated to do so is if it became actual law.  But by the time that actually happens, the newly enacted legislation would be far too outdated for the latest Cyber threat variants.

So, you may be asking what can be done.  Simple:  keep a proactive mindset within you and your IT Security team.  Always create backups!  The cost of taking steps to mitigate the risk of your business being hit by a Ransomware attack pales in comparison to what the actual damage would be in the end.
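
And since backups are the single best defense mentioned above, here is a bare-bones sketch of the idea: archive a directory, record a SHA-256 hash of the archive, and verify that hash before you ever rely on the copy.  The paths are placeholders of my own; a real program would also copy the archive off-site and test full restores on a schedule.

```python
# A bare-bones backup sketch: create a dated archive of a folder, store its
# SHA-256 hash, and verify the hash before trusting a restore. Paths are
# placeholders; real backups should also be stored off-site and restore-tested.
import hashlib
import tarfile
from datetime import date
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def create_backup(source_dir: str, backup_dir: str) -> Path:
    archive = Path(backup_dir) / f"backup-{date.today().isoformat()}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=Path(source_dir).name)
    # Write the hash next to the archive so corruption or tampering is detectable.
    archive.with_suffix(".sha256").write_text(sha256_of(archive))
    return archive

def verify_backup(archive: Path) -> bool:
    expected = archive.with_suffix(".sha256").read_text().strip()
    return sha256_of(archive) == expected

if __name__ == "__main__":
    archive = create_backup("/data/critical", "/backups")   # placeholder paths
    print("Backup verified:", verify_backup(archive))
```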
