Sunday, August 25, 2024

The Birth Of Critical Thinking AI: Reality Or Myth?

 


If there is anything making the news headlines on a non-political note, it is Generative AI. While the applications keep growing, Nvidia keeps making new GPUs, and the algorithms keep getting better, there is always that thirst to push Generative AI even further, beyond what it can do now. While this is true for the many industries that currently use it, it is even more pronounced in the world of Cybersecurity.

At the present time, Generative AI is being used for the following purposes:

*Automation of repetitive tasks such as those found in Penetration Testing and Threat Hunting.

*Filtering out false positives and presenting only the real threats to the IT Security team via the SIEM.

*Wherever possible, using it for staff augmentation purposes, such as using a chatbot as the first point of contact with a prospect or a customer.

*Being used in Forensics Analysis to take a much deeper dive into the latent evidence that is collected.

But as mentioned, those of us in Cyber want Generative AI to do more than this. In fact, there is a technical term that has now been coined for it: "Critical Thinking AI". Meaning, how far can we make Generative AI think and reason on its own, just like the human brain, without the need to pump gargantuan datasets into it?

The answer to this is a blatant "No". We will never understand the human brain 100%, the way we can the other major organs of the human body. At most, we will get to 0.0005%. But even given this extremely low margin, there is still some hope that we can push what we have now just a little bit further. Here are some examples of what people are thinking:

*Having Generative AI train itself to get rid of "Hallucinations". You are probably wondering what these are, exactly. Well, here is a good definition:

“AI hallucinations are inaccurate or misleading results that AI models generate. They can occur when the model generates a response that's statistically similar to factually correct data, but is otherwise false.” 

(SOURCE: Google Search).

A good example of this is the chatbots that are heavily used in the healthcare industry. Suppose you have a virtual appointment, and rather than talking to a real doctor, you are instead talking to a "Digital Person". You tell it the symptoms you are feeling. From here, it will take this information, go to its database, and try to find a name for the ailment you might be facing. For instance, is it a cold, the flu, or even worse, COVID-19? While to some degree this "Digital Person" will be able to provide an answer, your next question will be: "What do I take for it?". Suppose it comes back and says that you need to take Losartan, which is a blood pressure medication. Of course, this is wrong, because for that diagnosis, a blood pressure pill is not what is needed. This is called the "Hallucination Effect". Meaning, the Generative AI system has the datasets that it needs to provide a more or less accurate recommendation, but it does not. Instead, it gives a false answer. So, a future goal of "Critical Thinking AI" would be to have the Digital Person quickly realize this mistake on its own, and give a correct answer by saying that you need an antibiotic. The ultimate goal here is to do all of this without any sort of human intervention.

*Phishing still remains the main threat variant of today, coming in all kinds of flavors. Generative AI is now being used to filter for them, and from what I know, it seems to be doing a reasonably good job at it. But take the case of a Business Email Compromise (BEC) attack. Here, the administrative assistant receives a fake, albeit very convincing, email from the CEO demanding that a large sum of money be transferred to a customer as a payment. Of course, if any money is ever sent, it would be deposited into a phony offshore account in China. If the administrative assistant were to notice the nuances of this particular email, he or she could backtrack on their own to determine its legitimacy. But that of course can take time. So, the goal of "Critical Thinking AI" in this case would be to have the Generative AI model look into all of this on its own (when queried to do so), determine the origin of the email, and give a finding back to the administrative assistant.
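To make the BEC idea concrete, here is a minimal sketch of the kind of origin check an automated layer could run before the email ever reaches the administrative assistant. The header values, domains, and keyword list are purely hypothetical, and a production filter would weigh many more signals:

```python
from email import message_from_string
from email.utils import parseaddr

# High-pressure phrases commonly seen in BEC lures (illustrative only)
SUSPICIOUS_KEYWORDS = {"wire transfer", "urgent payment", "confidential"}

def domain_of(address: str) -> str:
    """Extract the domain portion of an email address."""
    return parseaddr(address)[1].rsplit("@", 1)[-1].lower()

def score_bec_risk(raw_email: str, corporate_domain: str) -> list[str]:
    """Return a list of red flags found in the raw email headers and body."""
    msg = message_from_string(raw_email)
    flags = []

    from_domain = domain_of(msg.get("From", ""))
    reply_domain = domain_of(msg.get("Reply-To", msg.get("From", "")))

    # Display name may claim the CEO, but the sending domain is not the company's
    if from_domain != corporate_domain:
        flags.append(f"Sender domain '{from_domain}' is not '{corporate_domain}'")

    # Replies silently routed to a different (possibly attacker-controlled) domain
    if reply_domain != from_domain:
        flags.append(f"Reply-To domain '{reply_domain}' differs from sender domain")

    body = msg.get_payload()
    if isinstance(body, str):
        lowered = body.lower()
        flags += [f"High-pressure phrase found: '{kw}'"
                  for kw in SUSPICIOUS_KEYWORDS if kw in lowered]
    return flags

if __name__ == "__main__":
    sample = (
        "From: CEO <ceo@examp1e-corp.biz>\n"
        "Reply-To: payments@offshore-example.cn\n"
        "Subject: Urgent\n\n"
        "Please process this urgent payment wire transfer today. Keep it confidential."
    )
    for flag in score_bec_risk(sample, "example-corp.com"):
        print("FLAG:", flag)
```

A "Critical Thinking AI" filter would go well beyond this, but even a simple origin check like the above saves the administrative assistant from having to backtrack manually.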

My Thoughts On This:

So, how can we get to the point of "Critical Thinking AI"? Well, first, it is important to note that the scenarios I depicted above are purely fictional in terms of what we can realistically expect a Generative AI model to do. We could get close to having it do them, but the reality is that human intervention will always be needed at some point in time.

But to reach that threshold, the one thing that we are not providing to the Generative AI model as we pump large amounts of datasets into it is "Contextual Data". This can be technically defined as follows:

“Contextual data is the background information that provides a broader understanding of an event, person, or item. This data is used for framing what you know in a larger picture.”

(SOURCE:  https://www.sisense.com/glossary/contextual-data/)

For example, going back to our chatbot example, all that we feed into the "Digital Person" are quantitative and qualitative datasets in order to produce a specific answer. But what is also needed is to train the Generative AI model to understand and infer why it is giving the answer that it is. In this case, had contextual data been fed into it, it probably would have given the correct answer of the antibiotic the first time around.
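To illustrate the idea (and only the idea), here is a heavily simplified sketch of what such a contextual sanity check might look like. The diagnosis table, drug classes, and the model_suggest() stub are all hypothetical placeholders, not a real clinical system:

```python
# Hypothetical reference table: which drug classes fit which diagnosis.
# In a real system, this "contextual data" would come from vetted clinical guidelines.
APPROPRIATE_CLASSES = {
    "bacterial infection": {"antibiotic"},
    "influenza": {"antiviral", "fever reducer"},
    "hypertension": {"arb", "diuretic", "ace inhibitor"},
}

DRUG_CLASSES = {
    "amoxicillin": "antibiotic",
    "oseltamivir": "antiviral",
    "losartan": "arb",
}

def model_suggest(diagnosis: str) -> str:
    """Stand-in for the generative model's raw (possibly hallucinated) suggestion."""
    return "losartan"          # the hallucinated answer from the example above

def checked_answer(diagnosis: str) -> str:
    """Accept the model's suggestion only if it is consistent with the context."""
    suggestion = model_suggest(diagnosis)
    allowed = APPROPRIATE_CLASSES.get(diagnosis, set())
    if DRUG_CLASSES.get(suggestion) in allowed:
        return suggestion
    # Self-correction step: fall back to something that fits the context
    for drug, drug_class in DRUG_CLASSES.items():
        if drug_class in allowed:
            return drug
    return "escalate to a human clinician"

print(checked_answer("bacterial infection"))   # amoxicillin, not losartan
```

The point is not the lookup table itself, but the extra step: the model's output is checked against the context before it is ever shown to the patient.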

If we can ever reach the threshold of "Critical Thinking AI", we might just be able to finally understand how we can use the good of Generative AI to fight its evil twin. More information about this can be seen at the link below:

https://kpmg.com/nl/en/home/insights/2024/06/rethinking-cybersecurity-ai.html

Sunday, August 18, 2024

How To Bridge The Gap Of Ineffective Cyber Communications: 3 Proven Tactics

 


Next month, I will be teaching my first class as an Adjunct Instructor at Harper College, located in Palatine, IL. This class will be all about the fundamentals of Phishing, and how Generative AI is being used to create emails that are so convincing that it is getting close to impossible to tell what is real and what is fake.

Harper College has actually announced a bunch of new Cybersecurity initiatives for its students, and my business partner and I attended a number of meetings leading up to their launch.

One of the key questions that was asked is: "What skills should be emphasized in these new initiatives?" Of course, most of the attendees in the meetings thought that learning technical skills was the most important. This includes learning how to code and write scripts (using Perl, Python, PHP, etc.), learning all about the mechanics of AI, and so forth.

But I was one of the few people who actually said that while this is all important, teaching students how to communicate effectively in a team is, to me, what matters most. My premise for this argument was (and still is) that you can have a college graduate with all of the certs and tech knowledge, but what good is all of that if it cannot be communicated and applied in a team environment?

I further noted that although having a set of baseline skills is very important, the additional skills that an employer requires can be learned on the job. Take my own case. Although I have been doing IT Security and Cyber tech writing for 15 years now, I knew nothing about how to write a Request For Information (RFI) or a Request For Proposal (RFP).

But since I started my full-time job almost one year ago, my managers and coworkers have taught me the basic skills of how to compose these kinds of documents.

But it is not just university or junior college graduates in Cybersecurity who have issues with effective communication. Many seasoned professionals have a hard time with it as well. For example, in a recent survey conducted by Tines, entitled "The Voice Of The SOC", as many as 18% of the respondents admitted that they have poor communication skills, and that trying to share their ideas with their coworkers was a huge "chore to do".

One of the primary reasons cited for this is that they do not want to spend the time distilling all of the technical data they collect down to a level that key stakeholders can understand. In my opinion, this is a truly pathetic excuse to make.

For example, how do Pre-Sales Engineers convey the technical details so that prospects and existing customers can understand the solution that they are proposing? The entire report can be downloaded at the link below:

http://cyberresources.solutions/blogs/Tines_Report.pdf

So, what can be done to alleviate this serious issue? Well, when it comes to the existing workforce, a number of solutions have been proposed, some of which are:

1)     Deploy Automation:

The thinking here is that if the more mundane tasks are automated, that will leave time for the worker to actually focus on communicating something that makes sense to everybody. A prime example of this is Penetration Testing. There are many tasks involved here, and ultimately, a final report has to be prepared for the client at a level that they can understand. Automating more of these routine tasks will free the Penetration Tester to compile a document that is easy for the client to go through and review (a rough sketch of this idea appears after the next paragraph).

It is also believed that if more business processes were to be automated, the siloes which exist between the departments would be broken down as well. This is especially important for the IT Department.
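As promised, here is a rough illustration of the kind of routine report compilation that could be automated. The finding fields, severities, and Markdown layout are my own assumptions for the sketch, not the output format of any particular tool:

```python
# Hypothetical findings, as they might come out of automated scanning tools
findings = [
    {"title": "Outdated OpenSSL on web server", "severity": "High",
     "detail": "Version is past end of life.", "fix": "Upgrade to a supported release."},
    {"title": "Directory listing enabled", "severity": "Medium",
     "detail": "/backups/ is browsable.", "fix": "Disable autoindex on the web server."},
    {"title": "Verbose banner on SSH", "severity": "Low",
     "detail": "Banner reveals the exact version.", "fix": "Suppress version details."},
]

SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def build_report(findings: list[dict]) -> str:
    """Assemble a client-friendly summary report, worst findings first."""
    lines = ["# Penetration Test Summary", ""]
    for f in sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]]):
        lines += [
            f"## {f['title']} ({f['severity']})",
            f"**What we found:** {f['detail']}",
            f"**What to do about it:** {f['fix']}",
            "",
        ]
    return "\n".join(lines)

print(build_report(findings))
```

The tester still writes the narrative and the business context; the automation simply removes the copy-and-paste drudgery so that more time goes into the communication itself.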

2)     Prompt Engineering:

Whether you like it or not, ChatGPT is going to be around for a long time to come. Many individuals and organizations use it now to get answers to questions or to get new ideas on something. But remember that with AI, the key is that it is all "Garbage In and Garbage Out". This simply means that the answers you are going to get from ChatGPT are only as good as the data that is fed into it. But keep in mind that with this platform, it does not simply give you a list of links to go through to find the answer to your questions. Rather, it tries to give you a very specific answer to your questions. Therefore, you need to feed ChatGPT an exact query, using the right keywords. This is technically known as "Prompt Engineering", and learning how to do this is another great way for the Cyber professional to hone their communication skills (a short sketch of what an engineered prompt looks like appears after this list). In fact, according to one researcher at MIT, learning Prompt Engineering is the top AI skill that you can have. More details on this can be seen at the link below:

https://www.cnbc.com/2023/09/22/tech-expert-top-ai-skill-to-know-learn-the-basics-in-two-hours.html

3)     Implement The Tabletop:

This is a kind of scenario in which you gather up some employees and give them a fictitious security breach that has happened. From there, you instruct them to analyze the situation and communicate effectively what they think happened. This serves two great purposes:

*Not only will it help to enhance communication skills, but it will also help to bring down the siloes as just described before, as employees from different departments will be involved in this particular exercise. 

*If your company were to actually be hit by a security breach, one of the first things you need to do is be able to effectively communicate what has happened to key stakeholders in a way that they can understand. Doing Tabletop exercises will be of great importance here as well. After all, it will be your company's brand reputation that is at stake.
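Circling back to the Prompt Engineering point above, here is a minimal sketch of the difference between a vague query and an engineered one. The prompt template is only an illustration of the principle and is not tied to any particular vendor's API:

```python
def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt: who the model should act as, what to do,
    what background it has to work with, and how the answer should be shaped."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Respond using this format: {output_format}\n"
        "If any information is missing, say so instead of guessing."
    )

# A vague prompt invites a vague (or hallucinated) answer
vague = "Tell me about the phishing email."

# An engineered prompt constrains the model toward a useful, checkable answer
engineered = build_prompt(
    role="a SOC analyst explaining findings to a non-technical executive",
    task="Summarize why the attached email is likely a phishing attempt",
    context="Sender domain mismatches our company domain; the message demands "
            "an urgent wire transfer; the Reply-To points to an unknown domain.",
    output_format="three short bullet points followed by one recommended action",
)

print(engineered)
```

Notice that writing the engineered prompt forces the same discipline as writing a good status update: state the audience, the facts, and the expected output, which is exactly the communication skill this post is about.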

My Thoughts On This:

Having a great set of communication skills is also very crucial when it comes time for Incident Response and Disaster Recovery. You don't want members of these teams running around trying to figure out who said what. Rather, you want them to jump right to the cause and put out the fires as quickly as possible.

Technology can help do this, but only up to a certain point. The other key component is efficient, human-based communication skills.

Sunday, August 11, 2024

Quantum Artificial Intelligence: The Good, The Bad, & The Ugly

 


I remember back in my college days at Purdue, I was absolutely terrified of computers. I never wanted to take a class that had them as part of the curriculum. But being an Ag Econ major, I had no choice but to face my fears of computers, because while I loved the subject matter, technology was a big part of the major. My first major test of computers occurred when I took a class called "CS 110". This was simply an introductory class in computers, but there was a lot of work involved.

Probably the most terrifying part of the class for me was sitting through the actual lab final, where we only had something like two hours to complete it.  But somehow, I managed to get a “B” in the class, and boy, I was happy with that.

My next experiences with computers took place in my graduate school days at both SIUC and BGSU. For the former, I had to learn (the very hard way) how to do mainframe SAS programming. With the latter, I actually ended up becoming an MIS major, and even worked full time as a computer consultant for the university. Back then, I was dealing with the Mac Classics.

But fast forward to today, and look where we are at.  We can spin up a Virtual Machine in just a matter of minutes, at just a fraction of the cost it would have been back in the late 90s and early 2000s.  Probably the best example I give of this is setting up an Oracle Enterprise Database. 

Back in the day, it would have cost at least $30,000 to set up an On Prem Server (most of this was in the licensing costs).  Now, you can create the exact same thing in Microsoft Azure for as little as $80.00 per month.

So, what does the future now hold?  It is an area known as “Quantum Computing”.  Wondering what it is?  Well, here is a technical definition of it:

“Quantum computing is an emergent field of cutting-edge computer science harnessing the unique qualities of quantum mechanics to solve problems beyond the ability of even the most powerful classical computers.”

(SOURCE:  https://www.ibm.com/topics/quantum-computing)

Given its sheer power and speed, Quantum Computing has a number of key use cases, which are as follows:

*Analyzing financial portfolios of all kinds of clients, no matter how small or large they may be.

*Improving and optimizing the lifespan of Electric Vehicle (EV) batteries.

*Further enhancing the drug research and discovery process (for example, Generative AI has already propelled this, but Quantum Computing is expected to take it further by leaps and bounds).

*Creating new GPU and NPU chips for Generative AI based applications.

Capgemini just published a report on the state of Quantum Computing, and you can access it at this link:

http://cyberresources.solutions/blogs/Quantum_Computing.pdf

But another area where Quantum Computing will make a huge mark is in Generative AI. Technically, this is a field of Artificial Intelligence known as "Quantum AI", also commonly referred to as "QAI". Right now, we marvel at how quickly the GPT-4 algorithms can deliver an output in just a matter of minutes using ChatGPT. But the algorithms that will be derived from QAI will deliver that very same output in just a matter of seconds. The reason for this is that Quantum Computing relies heavily on massive parallel processing.

This simply means that many processes are run at the same time in order to carry out very complex calculations in just a fraction of the time it would take the computers of today. The other bit of good news here is that since the QAI algorithms will be far more efficient, they will demand less energy from the data centers in which they are hosted. Right now, cooling, and the fresh water used for it, is a huge issue in this kind of environment.
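To give a rough feel for where that parallelism comes from, here is a toy state-vector sketch in plain Python/NumPy. It is not a real quantum computer, just an illustration that n qubits placed into superposition span 2^n amplitudes at once:

```python
import numpy as np

# Single-qubit Hadamard gate: puts |0> into an equal superposition of |0> and |1>
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

def uniform_superposition(n_qubits: int) -> np.ndarray:
    """Apply a Hadamard to every qubit of |00...0> and return the state vector."""
    gate = H
    for _ in range(n_qubits - 1):
        gate = np.kron(gate, H)            # tensor product across qubits
    zero_state = np.zeros(2 ** n_qubits)
    zero_state[0] = 1.0                    # the |00...0> starting state
    return gate @ zero_state

state = uniform_superposition(10)
print(len(state))          # 1024 amplitudes held simultaneously by just 10 qubits
print(state[:4])           # each amplitude is 1/sqrt(1024)
```

A classical machine has to simulate all 2^n amplitudes explicitly (which is exactly what this snippet does, and why it blows up quickly), whereas the quantum hardware holds them natively, which is the source of the speed-up being promised for QAI workloads.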

But, now come the downsides of QAI.  Probably the biggest one here is that of the Cyberattacker manipulating the algorithms in order to break the strong levels of Encryption that exist today. 

To paint how bleak this picture is, it is even expected that within the next five years or so, the Cyberattacker will have the ability to break all of these Encryption Protocols. More details on this can be seen at the link below:

https://www.csoonline.com/article/651125/emerging-cyber-threats-in-2023-from-ai-to-quantum-to-data-poisoning.html
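To see why quantum factoring is such a threat to today's Encryption, here is a toy illustration with deliberately tiny numbers. Real RSA keys use primes hundreds of digits long, and it is that factoring step which a mature quantum computer is expected to make fast:

```python
# Toy RSA: the public key is (n, e); the private key falls out of n's prime factors.
p, q = 61, 53                      # secret primes (tiny, for illustration only)
n = p * q                          # 3233 -- the public modulus
e = 17                             # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                # private exponent (modular inverse, Python 3.8+)

message = 1234
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key

def factor(n: int) -> tuple[int, int]:
    """Brute-force factoring: trivial for 3233, infeasible classically for real keys.
    A large-scale quantum computer would make this step fast even for real key sizes."""
    for candidate in range(2, int(n ** 0.5) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    raise ValueError("no factors found")

# The "attacker" recovers the private key from nothing but the public values
p2, q2 = factor(n)
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(ciphertext, d2, n))      # 1234 -- the plaintext is recovered
```

The security of the scheme rests entirely on that factor() step staying impractical, which is exactly the assumption that quantum computing is expected to erode.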

Serious thought has in fact been given to this, especially by our own Federal Government. For instance, back in 2022, Congress passed what is known as the "Quantum Computing Cybersecurity Preparedness Act."

This ensures that all of the related agencies have developed and are testing their Incident Response/Disaster Recovery/Business Continuity plans should a breach actually occur.  More details about this can be seen at the link below:

https://www.congress.gov/bill/117th-congress/house-bill/7535

Also, the National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA), and National Institute of Standards and Technology (NIST) have created a document known as "Quantum Readiness: Migration to Post-Quantum Cryptography." This was a joint effort to address the Cybersecurity risks that are associated with QAI. It can be downloaded from this link:

http://cyberresources.solutions/blogs/CSI-QUANTUM-READINESS.pdf

My Thoughts On This

Whether we like it or not, QAI will make its grand entrance into our society in the way that ChatGPT did a few years ago. But this time, there is a lot more at stake, given just how powerful Quantum Computing is. It will impact every walk of life all over the world.

But we have one key advantage here:  At the present time, we are more or less learning about the risks that Generative AI poses (and those that are evolving, but have not been discovered yet), and we can apply those lessons learned to QAI. 

But the key thing here is that we all must be proactive about this. QAI will be here a lot quicker than we realize. For more details on what this means to you, Capgemini has also published a supplemental report, and it can be downloaded here:

http://cyberresources.solutions/blogs/Quantum_Computing_Supplement.pdf

Also, CISOs and their IT Security teams must assess at what levels they use Encryption. They will of course need to be trained in QAI, and many of the Encryption Infrastructures (most notably the Public Key Infrastructure) will have to be redesigned and even redeployed in order to keep up with the Cybersecurity risks that QAI will bring to the table.
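As a small first step toward that kind of readiness, a team could at least inventory which TLS versions and cipher suites its public endpoints negotiate today, so that weak or soon-to-be-vulnerable configurations are known before they have to be replaced. Here is a rough sketch using only Python's standard library; the host list is a placeholder for your own inventory:

```python
import socket
import ssl

# Hypothetical list of endpoints to inventory; replace with your own hosts
HOSTS = ["example.com", "www.example.org"]

def tls_profile(host: str, port: int = 443, timeout: float = 5.0) -> dict:
    """Connect to a host and record the TLS version and cipher suite it negotiates."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cipher_name, protocol, bits = tls.cipher()
            return {
                "host": host,
                "protocol": tls.version(),   # e.g. 'TLSv1.2' or 'TLSv1.3'
                "cipher": cipher_name,
                "key_bits": bits,
            }

if __name__ == "__main__":
    for host in HOSTS:
        try:
            print(tls_profile(host))
        except (OSError, ssl.SSLError) as err:
            print(f"{host}: could not profile ({err})")
```

An inventory like this does not make anything quantum-safe by itself, but it tells you where the migration work will land once the post-quantum algorithms are ready to deploy.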

In the end, this will be known as “Crypto Resiliency”, just like what Cyber Resiliency means to us today.

Sunday, August 4, 2024

Should Ransomware Payments Be Made 100% Illegal? The Debate Rages On

 


Over the last week or so, I have recorded a number of podcasts, with guests speaking about differing areas of their expertise. But one common question that kept coming up (which I have to admit I prodded for) was about Ransomware. For instance, I asked them what they thought about it, and the big question: should the ransom actually be paid? Astonishingly enough, the answers were mixed on this.

In my opinion, a ransom should never be paid. First, it shows the Cyberattacker that you will bend, even though you may not want to. As a result, the chances are much greater that they will come around for you the next time, and even demand a higher ransom payment.

Second, if you are good at maintaining backups on a prescribed schedule, then restoring mission critical operations should not be a problem.  But this is largely dependent upon what kind of environment your IT/Network Infrastructure is hosted in.

If you are in the Cloud, such as Microsoft Azure, then this will not be a problem. You could be up and running within just a few hours. But if some part of it is On Prem, the restoration process could take much longer, especially if you have to resort to tape backups.

Third, if you do make a ransom payment, many insurance companies are now refusing to make payments on filed claims in these cases.  I think this all started with a French insurance company called Axa, when they all of a sudden said that they would stop making payments. 

As a result, other carriers followed suit. Fourth, it can even be considered illegal if you make a ransom payment, especially if it is made to a nation-state actor, such as Russia, China, Iran, or North Korea.

So now, this begs a new question:  Should all Ransomware payments be made illegal?  In other words, even if you paid just a few thousand dollars to a Cyberattacker, should you still face the legal consequences for it?  Here are some considerations:

1)     Ransomware is getting uglier:

Gone are the days when a Cyberattacker would simply deploy a piece of malicious payload, lock up your computer, and encrypt your files. It has become far worse now, with extortion-like attacks taking place, which could even threaten the lives of the victims that are involved. If this were to happen, the first instinct is to pay up. But if it were made completely illegal, would you still do it???

2)     Not all businesses are equal:

This is where you would compare an SMB to a Fortune 500 company. With the latter, if ransom payments were made illegal, these entities would have a far better chance of surviving than the former. An SMB could be totally wiped out in a matter of hours. Also, keep in mind that many Cyberattackers are now targeting SMBs given just how vulnerable they are.

3)     Payments can still be made:

Even if they were made completely illegal, businesses would still try to find a way to make a payment to the Cyberattacker, covertly. But given all of the audit trails that financial institutions now have to implement, the payor would eventually be caught. And bringing him or her to justice would take an enormous amount of time and expense, not only to collect the forensic evidence, but from the standpoint of litigation as well. Again, is all of this worth it if the ransom payment was only a few thousand dollars? Probably not.

4)     More participation from law enforcement:

While the Federal Government agencies, such as the FBI, are doing a great job of tracking down those Cyberattackers that have launched Ransomware attacks, their resources are obviously limited. Because of this, their priority is to first go after those attacks that have caused a large amount of damage, or where there is an extortion plot going on. They obviously don't have the resources to chase after those people who make small ransom payments.

5)     The Cyberattacker will find another way:

If ransom payments are made 100% illegal, no matter what the circumstance is, the Cyberattacker will find another way to be compensated.  But this time, the consequences of this could be far deadlier and even more extreme.

My Thoughts On This:

So given the considerations I just listed (and there are probably many more of them), is it worth it to make ransom payments totally illegal? While it may have short-term advantages, the long run would not be well served. In the end, businesses should have the option of whether to pay up or not, even though I still think they should not.

There are calls now for the Federal Government to enact more best practices and standards for businesses to follow, but in the end, it will be up to the business owner to implement them. They would only be obligated to do so if it becomes actual law. But by the time that actually happens, the newly enacted legislation will be far too outdated for the latest Cyber threat variants.

So, you may be asking what can be done? Simple: keep a proactive mindset within you and your IT Security team. Always create backups!!! The costs of taking the steps to mitigate the risk of your business being hit by a Ransomware attack pale in comparison to what the actual damage will be in the end.

Sunday, July 28, 2024

How The CrowdStrike Attack Will Translate Into Water Supply Attacks

 


The CrowdStrike Supply Chain Attack from last Friday is still being felt, especially the ripple effects on the major airlines. But, more than that, people are still wondering how a software update could cause so much turmoil around the world.

Although it will take some time to unravel all of this, the bottom line is the sheer level of interconnectivity among devices, both physical and digital, that exists today. Just one little flaw or vulnerability can be exploited very quickly by a Cyberattacker, and cause even more devastation.

Even though CrowdStrike claims that this was an error in the actual patch, my view is that it was a Cyberattack. But only time will tell. This attack underscores yet another area in our American society that still eludes the security pundits today: how to protect our Critical Infrastructure from a large-scale Cyberattack. Unfortunately, the answer is that there is no clear-cut solution.

The primary reason for this is that many of the systems that support our Critical Infrastructure were designed back in the 1960s and 1970s. Many of the vendors who created them are either no longer in existence, or have merged with another company.

Therefore, finding the parts to replace these legacy components is close to impossible. If anything, new ones will have to be created, which could take months or even years.

The other issue here is that when these components were built, Cybersecurity was not even a concept that was thought about.  All of the attention was paid to physical access security.  So, even trying to add new software packages to the ones that are already in place is by no means an easy task either. 

For instance, the main risk is that of interoperability between the two. If they don't work together, then the chances are much greater that something even worse could go wrong.

In the last couple of years, we have seen attacks on our Critical Infrastructure actually happen. Probably one of the best examples was the attack on the Colonial Pipeline. Deliveries were delayed for over a week, and the futures markets that trade in this were also rattled. In the end, the CEO paid a ransom of well over $4 million.

Now, one of the greatest fears is that something like this could happen to our precious water supply.  Can you imagine not having a fresh water supply for over a week?  If this were to happen, we would all perish.  While the fix to this is very difficult to figure out at the moment, over time, something will evolve. 

But it will most likely take a lot of time. This does not mean, however, that you, the CISO, have an excuse for not taking proactive steps to mitigate this risk, if you are tasked with overseeing the IT side of a water supply system.

So, what can you do, you might be wondering?  Here are some steps that you can take:

1)     Figure out where all of the data lies:

Yes, even a company that deals with the water supply has large amounts of data that it collects and stores. But many times, when a CISO is asked if they know where their company's data is stored, they very often go "Huh?" There is no excuse for this, IMHO. Take the time to figure out where the datasets reside, and how they are stored. Create data maps so that you will also have a visual to refer to.

2)     Conduct Risk Assessments:

When this term is used, the image of doing this on digital assets often comes to mind. But this kind of methodology can also be used for the Critical Infrastructure, even for water supply systems. In this regard, take close stock of what is protecting your databases. This is one of the first areas that a Cyberattacker will go after, so you will need to make sure that you have at least some controls in place. While putting in new ones may not be an option right now, you could certainly explore the possibility of optimizing the ones you already have.

3)     Look at network traffic:

Even with the legacy technologies that are in place, there is still network traffic flowing. Take the time to analyze it, and make sure that all of the traffic within it is always encrypted (a bare-bones sketch of this kind of check appears after this list). Perhaps even consider upgrading your firewalls, routers, hubs, network intrusion devices, etc. The issue of interoperability with the legacy systems should not be a problem here, as you are just trying to fortify the lines of defense for the flows of network traffic.

4)     Update the documentation:

More than likely, the documentation that comes with a piece of Critical Infrastructure will be outdated. Therefore, take the time to try to update it. This will be very crucial if indeed you are impacted by a security breach. Of course, this also underscores the importance of Incident Response/Disaster Recovery/Business Continuity planning as well.
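Following up on the network traffic point above, here is a bare-bones sketch of a check that can be run even against legacy environments: flag flows that use well-known cleartext protocols. The flow records and port list are illustrative assumptions, not tied to any specific vendor's log format:

```python
# Ports whose traffic is typically unencrypted and therefore worth flagging
CLEARTEXT_PORTS = {
    21: "FTP",
    23: "Telnet",
    80: "HTTP",
    502: "Modbus (common in legacy control systems)",
}

# Hypothetical flow records, e.g. exported from a firewall or NetFlow collector
flows = [
    {"src": "10.1.4.7",  "dst": "10.1.9.2", "dst_port": 502, "bytes": 18234},
    {"src": "10.1.4.7",  "dst": "10.1.9.3", "dst_port": 443, "bytes": 90211},
    {"src": "10.2.0.15", "dst": "10.1.9.2", "dst_port": 23,  "bytes": 1289},
]

def flag_cleartext(flows: list[dict]) -> list[str]:
    """Return human-readable warnings for flows on known cleartext ports."""
    warnings = []
    for flow in flows:
        protocol = CLEARTEXT_PORTS.get(flow["dst_port"])
        if protocol:
            warnings.append(
                f"{flow['src']} -> {flow['dst']}:{flow['dst_port']} "
                f"({protocol}) is likely unencrypted"
            )
    return warnings

for warning in flag_cleartext(flows):
    print(warning)
```

Even a simple sweep like this gives the CISO a starting list of where encryption (or at least network segmentation) needs to be added, without touching the legacy systems themselves.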

My Thoughts On This:

Unfortunately, we will be seeing many more Supply Chain Attacks just like the CrowdStrike one. But rather than digital assets being impacted, it will be our Critical Infrastructure. Remember the days of 9/11?

Well, instead of planes crashing into buildings, we could very likely see major attacks hitting our Critical Infrastructure in the large cities here in the United States, in a simultaneous fashion. This is something I don't even want to think about, but the harsh reality is that it could very well happen.

And the worst part yet is, how long will it take to recover?  Weeks? Months?  Something to think about, especially for you, the CISO.

Sunday, July 21, 2024

Why The CISO, And Not The Employee, Is The Weakest Link

 


On Friday, in the early morning hours, the world woke up to what will quite possibly be the world's largest Cybersecurity breach ever. While many Cyber pundits are merely calling it a "large-scale outage", in my humble view, it was a security breach. Why do I say this? It is eerily close to the SolarWinds attack. Just one vulnerability was exploited, and from there, it had a cascading effect on over 1,000 victims, ranging from the smallest of the SMBs to the Fortune 500 to even the Federal Government.

So of course, a lot of finger pointing has been going around, and unfortunately, it was Microsoft that took the brunt of the blame for it. However, this is far from the truth. Microsoft is a client of CrowdStrike, and depends heavily on CrowdStrike's services working correctly for its gargantuan Azure Cloud Platform. But in the end, somebody will have to take the fall for it, and only a thorough investigation will reveal who.

What happened Friday is also directly related to another hot-button topic in Cybersecurity today: the notion that employees are the weakest link in the security chain. I will share my views about this at the end of the blog. But it is true that ever since the COVID-19 pandemic, the need for security awareness training has never been greater.

Many people have written blogs, articles, whitepapers, and even books as to what makes a great security awareness training program.  But it all comes down to three things:

*The training has to be made interesting so that the audience will remember what they have learned.

*It has to be specific to the department, job title, or what roles the employee does on a daily basis.

*There has to be follow-up to make sure that employees are applying what they have been taught.

For this blog, I will focus on the last one. I know of companies that, after having given a training program on Phishing, will actually launch a mock Phishing exercise to see how many employees fall prey to it. For those that do, a warning or a slap on the wrist is usually given, and then it is forgotten about. But this is where the failure often starts. For these employees, a further, personalized approach needs to be taken.

Here are three tips to get started with this:

1)     See what the employee is doing wrong:

Don't simply bring him or her into your office; that will only be intimidating for them. Rather, take a very friendly, casual approach, such as taking a coffee break together, or even taking the employee out to lunch. Tell them what you have been noticing in their Cyber Hygiene, and try to figure out why they are doing what they do. For example, why are they using the same password over and over again? Why are they not double-checking the emails they get in their inbox? Or, why are they consistently using apps for their work when they have not been authorized to do so? And so forth. This should give you a much greater insight into their ways of doing things.

2)     Create a “Credit Score”:

Once you have figured out what the employee is doing wrong, or why they are not following the security policies that you have set forth, try to create something like a "Credit Score" for them. However, do not share this with them; it will make your employees feel as if Big Brother is watching them. Just use this numerical value as a metric, or even as a Key Performance Indicator (KPI), to see how well they are improving over time (which is hopefully the case). A simple sketch of how such a score could be computed appears after this list.

3)     Give one on one help:

I remember when I was back in high school, I was struggling through Algebra II, and after my parents gave up on helping me, they resorted to finding me a tutor who could give me that one-on-one time. This tutor helped me in the specific areas that I was weak in, and over time, my grades improved. This is the same approach that you have to take with an employee who is exhibiting a low level of Cyber Hygiene. But, in my view, hire a person who is specially trained in this. Don't just farm out somebody from your IT Security team, as they have more than enough to deal with on a daily basis. Try to find a contractor that specializes in offering Cyber education, as they will be the most accustomed to offering tutoring sessions.

4)     Reward the employee:

As the tutoring goes on, and if you see an improvement in their respective "Credit Score", reward your employee. This can take place with just a simple pat on the back, sending out positive messages with the right emojis, giving them a gift card, or even taking them out to lunch again. The bottom line is that once the employee feels appreciated for the efforts and remediations that they are undertaking, they will continue with this trend for a long time to come, until you don't have to coach them anymore.
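Going back to the "Credit Score" idea above, here is one very simple way such a metric could be computed. The event types, weights, and starting score are entirely made up for illustration; the point is only that the score is tracked quietly over time as a KPI:

```python
# Hypothetical weights for hygiene lapses observed over a review period
PENALTIES = {
    "clicked_phishing_simulation": 15,
    "reused_password": 10,
    "used_unapproved_app": 8,
    "missed_training_session": 5,
}

def hygiene_score(events: dict[str, int], starting_score: int = 100) -> int:
    """Subtract weighted penalties from a starting score; floor the result at zero."""
    total_penalty = sum(PENALTIES.get(event, 0) * count
                        for event, count in events.items())
    return max(0, starting_score - total_penalty)

# One employee's (hypothetical) events, quarter over quarter
last_quarter = {"clicked_phishing_simulation": 3, "reused_password": 2,
                "used_unapproved_app": 1}
this_quarter = {"clicked_phishing_simulation": 1, "reused_password": 2}

print("Last quarter:", hygiene_score(last_quarter))   # 27
print("This quarter:", hygiene_score(this_quarter))   # 65 -- trending up
```

Whatever the exact weights, the trend is what matters: a rising score tells you the coaching is working, and a flat or falling one tells you where to spend the tutoring time.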

My Thoughts On This:

You might be thinking at this point: "I don't have the time and resources to do this for each and every employee". Of course, you don't. These strategies are designed to help those employees who display the lowest, or weakest, behavior when it comes to their Cyber Hygiene. There will be some who get it right the first time after the training, and those who lie somewhere in the middle. But the idea is that once employees see others maintaining strong levels of Cyber Hygiene, they will feel compelled to do the same.

In the end, it comes down to what is known as "Behavioral Analysis". In other words, trying to figure out why people act and do things the way they do. This is becoming a hot sector in Cybersecurity now, and rightfully so, with all that is going on, especially with Generative AI being so dominant.

So now, back to that one point: I do not think at all that employees are the weakest link in the security chain. Rather, I find the CISO and the other members of the C-Suite to be the weakest link. They do not practice what they preach, and if they did, we would see a much different picture in terms of employee Cyber Hygiene today.

In the end, it takes both people and technology to have a great line of Cyber defense for your business.

Sunday, July 14, 2024

How To Avoid Becoming A Victim Of AI Eavesdropping: 5 Point Checklist

 


Well, it has been a while since I have written anything about Generative AI. It is still continuing to make the news headlines, and most of the publicly traded companies are seeing their Earnings Per Share (EPS) going to even newer highs, as is the case with Nvidia, even after their recent stock split.

But despite all of this, and rightfully so, there is still a growing angst amongst the general public here in the United States as to how the tools that have Generative AI baked into them can be misused.

For example, one of them is how video conferencing platforms, such as Zoom, Webex, Teams, etc., record conversations in a meeting. When you have a meeting with your coworkers or manager, you often have the option to record it, to be used as a future reference if the need arises.

Here are some of the scenarios which pose some of the greatest risks:

1)     Flaws in the transcription:

As I have written about before, Generative AI (and for that matter, every branch of AI) is primarily "Garbage In and Garbage Out". Meaning, the output that you get in the end is only as good as the datasets that are fed into the model. Even if you take the time to make sure that all of the datasets you feed into it are as cleansed and optimized as possible, mistakes can happen, whether intentional or not. For example, if you have a meeting and choose to have it recorded, there could be flaws in the actual language of the transcript that convey a very negative connotation. Thus, before the transcript is ever released to the team, it is imperative that you double-check this language first to make sure that all is good.

2)     The right to use it or not:

Very often, it is the originator of the meeting who has the option to launch a recording session or not. Unfortunately, the other members who have been invited do not have that option. Thus, if an employee does not like the idea of being recorded, they may still feel forced to go along, especially if the meeting originator is their boss and wants to use it. Although the recording mechanisms very often do notify the employees ahead of time that the conversation in the meeting will be recorded, a quick fix to this is to have the meeting originator actually reach out to each team member to make sure it's OK that they are being recorded. If the majority say no, then it will be time to do things the old-fashioned way, by having a professional minute taker present to take notes.

3)     Data exfiltration:

In today's world, many online meetings occur in which private and confidential information is shared amongst the members. The thinking here is that since everybody knows each other, all is good. But unfortunately, this is far from the truth. For instance, there is the grave possibility that the transcript could be the target of a Data Exfiltration attack. When we hear about this, we often think of databases being hacked into. Because of this, we often forget about the other places where data might be saved, especially the transcripts from video conference meetings. The Cyberattacker is fully aware of this, and thus makes them a target. While there is no sure fix for this, the best thing you can do is to make use of the tools that your Cloud Provider gives you to monitor your AI apps. A great example of this is Purview from Microsoft, which is available in any Azure or M365 subscription.

4)     Third party usage:

Many of the vendors that create AI-based products and services very often, and covertly, use the data that you submit in order to further refine the AI algorithms that are being used in their models. This is also true of the recordings of video conference meetings, and the transcripts that come out of them. A perfect example of this is the recent Zoom debacle, where exactly this occurred. It led to an $86 million lawsuit. More details on this can be found at this link:

https://www.darkreading.com/cybersecurity-analytics/following-pushback-zoom-says-it-won-t-use-customer-data-to-train-ai-models

While you cannot directly control what is collected initially, make sure that you read all of the licensing and end user agreements carefully. And if, after you start using the AI recording tool, you feel that the data is being misused in this fashion, you do have rights under the data privacy laws, such as the GDPR and the CCPA. But it is always wise to consult with an attorney first to see the specific rights you are afforded under them, and how you can move forward.

5)     Covert participants:

Back in the days of the COVID-19 pandemic, "Zoombombing" was one of the greatest Cyber threats posed to the video conferencing platforms. While this may have dissipated to a certain degree, the threat is still there. But this time, given how stealthy the Cyberattacker has become, they don't even have to make an appearance. They can still listen covertly, and record that way as well, without you even knowing it. Probably one of the best ways to mitigate this risk is to make sure that your video conference meeting is encrypted to the maximum extent possible, and that you require a login password that is long and complex (a good tool to use here is a Password Manager).
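On that last point about long, complex meeting passwords, here is a quick sketch using Python's standard secrets module to generate one. The length and character set are just reasonable defaults for illustration, not a formal policy:

```python
import secrets
import string

def meeting_password(length: int = 20) -> str:
    """Generate a long, random meeting password from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(meeting_password())   # different every run, and not guessable
```

Storing the result in a Password Manager, rather than reusing the same passcode for every meeting, is what makes the complexity actually pay off.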

My Thoughts On This:

Everything that I have described in this blog is technically known as "AI Eavesdropping". It is also important to keep in mind that this risk is not just born out of the video conferencing platforms; it can happen on any device that has Generative AI built into it. A good example of this is the various fitness trackers ("Fitbits") that you can wear as a watch.

As Generative AI continues to evolve at a very fast pace, you, the CISO, should also take responsibility for creating a separate security policy that is targeted just towards Generative AI. Some of the things that should be addressed are how your company uses the data that is collected from Generative AI, how it is stored, and the rights that your employees have if they feel those rights have been violated.
