Sunday, May 18, 2025

Detail Is Important, But Holism Matters Even More To Incident Response

 


Some time ago, I wrote a blog about metrics and KPIs, and how nobody really likes to be judged by them, no matter what the industry is.  Well, the same can be said about Cybersecurity as well.  Probably the two most important ones are:

Ø  The Mean Time to Detect (MTTD):  This reflects how long it takes an IT Security team to detect a threat variant.

 

Ø  The Mean Time to Respond (MTTR):  This reflects how long it takes for the IT Security team to contain a security breach, if one is occurring (a quick calculation sketch follows this list).
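
To make these two metrics concrete, here is a minimal Python sketch (the incident records and timestamps are purely hypothetical) of how MTTD and MTTR could be calculated from the timestamps your SIEM or ticketing system already logs:

    from datetime import datetime
    from statistics import mean

    # Hypothetical incident records; in practice, these timestamps would
    # come from your SIEM or ticketing system.
    incidents = [
        {"occurred": datetime(2025, 1, 3, 8, 0),
         "detected": datetime(2025, 1, 3, 14, 30),
         "contained": datetime(2025, 1, 4, 9, 0)},
        {"occurred": datetime(2025, 2, 10, 22, 15),
         "detected": datetime(2025, 2, 11, 1, 45),
         "contained": datetime(2025, 2, 11, 6, 0)},
    ]

    # MTTD: average gap between when a threat appeared and when it was detected.
    mttd_hours = mean(
        (i["detected"] - i["occurred"]).total_seconds() / 3600 for i in incidents
    )

    # MTTR: average gap between detection and containment of the breach.
    mttr_hours = mean(
        (i["contained"] - i["detected"]).total_seconds() / 3600 for i in incidents
    )

    print(f"MTTD: {mttd_hours:.1f} hours, MTTR: {mttr_hours:.1f} hours")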

But one thing I failed to mention in that blog post is that metrics are also key in the following documents:

Ø  Incident Response:  This is the plan that details how an IT Security team should respond to an incident.

 

Ø  Disaster Recovery:  This is the plan that details how not just the IT Security team, but the entire company, should proceed to restore mission-critical processes and functions.

 

Ø  Business Continuity:  This is the plan that provides guidance as to how the company should return to a state of normalcy, at least at the same level as, or better than, where it was before.

 

For the purposes of this blog, we will just focus on Incident Response.  In today’s times, and especially with the advent of Generative AI, simply creating a document and putting it back on the shelf will no longer suffice.  Rather, a much more comprehensive approach needs to be taken, and this is technically referred to as the “Cybersecurity Incident Response Program”, also known as the “CSIRP” for short.  It is a policy that maps out the following:

Ø  Responsibilities of all the team members.

 

Ø  The expected outcomes.

 

Ø  How to confirm that all the Incident Response objectives have been met and, better yet, even exceeded expectations.

One of the key benefits of taking this holistic type of approach is that all employees will be able to understand the ramifications and gravity of just how seriously Incident Response should be taken.  This is particularly true for the C-Suite, whose main vision of the company is unfortunately driven by pure numbers alone.

By having this kind of grasp of it, it is hoped that they will also see just how seriously Cybersecurity should be taken, and that they will get away from the thinking that “if it hasn’t happened to us, then it probably never will”.  In this regard, it is also important for the CISO to create this kind of policy keeping the various Cyber priorities in mind.  Meaning, a one-size-fits-all document will no longer work.  Rather, documentation needs to be created for each kind of threat that can exist.  For example, there should be one dealing with Ransomware, one for countering a Phishing attack, etc.  True, this is a tall order, but here are four ways in which this can be broken down:

1)     Take the whole view:

Do not restrict yourself and your IT Security team to just the well-known and established metrics and KPIs.  Rather, try to buck this trend by first taking a critical look at all the data that you have collected about any security breaches that may have hit your business.  From there, see if there are any hidden trends that you can create a new metric out of, and try to apply that for the future.  Some key areas that should be examined include:

Ø  Efficiencies

 

Ø  Any gaps, weaknesses, or vulnerabilities that went undetected which resulted in that particular security breach occurring.

 

Ø  The resources you need.  Trying to put this in either quantitative or qualitative terms will go a long way when approaching the other members of the C-Suite when it comes time to ask for funding for your Cyber-based initiatives.

 

2)     Usefulness:

After you have defined your new metrics and KPIs for the CSIRP, it is important at some later point in time for both you and your IT Security team to take stock of them, evaluate each one, and determine how they can be made better going into the future.  A good one to look at here is vulnerability detection.  Are you not only fast enough to find vulnerabilities, but also to remediate them?  If the number is worse than you want it to be, then you know that metric needs to be refined.  But keep in mind that refining does not simply mean changing the metric around.  Rather, all the variables that go into it need to be very carefully looked at, which is a direct function of what your IT Security team needs to be doing.
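
To sketch out what this kind of evaluation might look like, here is a small, purely illustrative Python example that breaks the vulnerability metric down into its underlying variables (detection time and remediation time) instead of treating it as one blended number; the records and targets are hypothetical:

    from statistics import mean

    # Hypothetical vulnerability records: days from disclosure to detection,
    # and days from detection to remediation.
    vulns = [
        {"id": "CVE-A", "days_to_detect": 2, "days_to_remediate": 14},
        {"id": "CVE-B", "days_to_detect": 9, "days_to_remediate": 30},
        {"id": "CVE-C", "days_to_detect": 1, "days_to_remediate": 7},
    ]

    avg_detect = mean(v["days_to_detect"] for v in vulns)
    avg_fix = mean(v["days_to_remediate"] for v in vulns)

    # Hypothetical targets; the point is to refine the underlying variables,
    # not just to move the blended number around.
    TARGET_DETECT, TARGET_FIX = 3, 15

    print(f"Avg detection: {avg_detect:.1f} days (target {TARGET_DETECT})")
    print(f"Avg remediation: {avg_fix:.1f} days (target {TARGET_FIX})")
    if avg_fix > TARGET_FIX:
        print("Remediation lags detection -> review patching resources/process.")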

3)     Proactiveness:

It is important to keep in mind that you should not let your newly created metrics and KPIs for the CSIRP go stale.  Rather, you also need to be proactive about them, and determine which ones should be retired and whether any new ones must be created.  Remember, the Cyber Threat Landscape is always changing, and the metrics and KPIs that you initially produced need to reflect that.  In other words, it is a process of evolution, and it should never be viewed as merely a static one.

4)     Communications:

You and your IT Security team need to get away from living in a world of silos.  Whatever you do in the CSIRP will impact everybody else in your company, and the CSIRP, and the benefits that it brings to the table, need to be clearly and effectively communicated, in a transparent way.

My Thoughts on This:

One of the other primary benefits of creating and implementing a CSIRP is that it will help you immensely to come into compliance with the many data privacy laws that abound today, such as the GDPR and the CCPA.  But even more importantly, it will help to mitigate the chances of being audited by regulators and facing severe financial penalties.

Monday, May 12, 2025

USA Vs China: Who Will Win The Gen AI Battle?

 


With all the political turmoil that is happening today, news headlines about Generative AI do not seem to be coming out as quickly as they once did, say, through the end of last year.  The biggest fear is China, not just from the standpoint of tariffs, but also in terms of competition.

In fact, if you recall, they came out with something remarkably similar to ChatGPT.  It was developed by a company called DeepSeek, and the cost of running the algorithms and the hardware needed (such as the GPUs) is much lower.

Also, Nvidia took a decent hit with a financial charge of over $5 billion, due to the restrictions that have been put into place on sending GPUs to China.  But despite all this turmoil, there is yet another headwind that those who both produce and make use of Generative AI must contend with:  Data Privacy, and the Compliance that comes along with it.

As I have written before, the fuel that runs Generative AI models is the datasets that are fed into them.  Not only do the models need them to train, they also need them to create the output you are seeking when you ask a specific query.  Generative AI Compliance comes from three different angles:

Ø  Making sure that the right controls have been implemented on the training datasets (a small, purely illustrative sketch of this follows the list).

Ø  The same as the above, but for the output that has been generated.

Ø  Also, the same as the above, but making sure that any data which is submitted by the end user is also as secure as possible.
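
As one illustration of that first control, here is a minimal Python sketch that scrubs obvious personal identifiers (emails and phone numbers) out of a record before it enters a training dataset.  The patterns here are deliberately rough and illustrative; a production control would go much further:

    import re

    # Very rough, illustrative patterns; real PII detection needs much more.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

    def scrub(record: str) -> str:
        """Redact obvious PII before the record enters a training dataset."""
        record = EMAIL.sub("[EMAIL REDACTED]", record)
        record = PHONE.sub("[PHONE REDACTED]", record)
        return record

    sample = "Contact John at john.doe@example.com or 555-123-4567."
    print(scrub(sample))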

To this end, the trends for this year are expected to be as follows:

1)     Efforts From The EU:

They have produced a new piece of legislation called the “NIS2”.  It is an acronym that stands for the “Network and Information Security” Directive, version 2.  Just like the GDPR, it applies to any entity that conducts business in the EU, even if it is not physically located there.  The tenets and provisions are broadly similar to those of the GDPR, but NIS2 also takes a strong stance on Generative AI.  And the financial penalties for non-compliance are very harsh:  they can be up to 2% of the revenue that has been generated on a global basis.

2)     The DORA:

This is an acronym that stands for the “Digital Operational Resilience Act”.  It was also created and enacted by the EU.  But apart from Generative AI compliance, it has two key specific focuses:

Ø  Proving that you can not only create backups, but that you can also restore the mission-critical data from them if you are ever impacted by a disaster, natural or man-made (a simple restore-verification sketch follows this list).

Ø  That the backups which have been created are segregated both physically and logically.  The goal here is to make sure that businesses are storing their backups in various locations, such as On Premises or in the Cloud.
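
To show what “proving you can restore” might look like in practice, here is a small Python sketch that restores a backup to a scratch location and verifies its integrity with a checksum.  The file paths are hypothetical, and this is only an illustration of the idea, not a full DORA-grade test:

    import hashlib
    import shutil
    from pathlib import Path

    def sha256(path: Path) -> str:
        """Hash a file so the restored copy can be compared to the original."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical locations for the original data and its backup.
    original = Path("data/customers.db")
    backup = Path("backups/customers.db.bak")
    restored = Path("restore_test/customers.db")

    restored.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(backup, restored)  # simulate a restore from the backup

    # A backup only counts if the restored copy matches the source.
    assert sha256(restored) == sha256(original), "Restore verification failed!"
    print("Backup restore verified successfully.")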

 

3)     More From China:

Take, for example, that you have a hosting account with a domain registrar that is located here in the United States (such as GoDaddy, Namecheap, etc.).  You decide to host your application in a datacenter that is in the US.  Although this may be technically true, once you launch your web application, the datasets that it uses could be stored at a datacenter in an entirely different country that you may not even be aware of.  So, the hot topic of debate here is:  who takes custody of it?  Well, the Chinese Government is making this even clearer now, especially when it comes to Generative AI.  To this extent, they have passed three distinct laws:

Ø  The Personal Information Protection Law (also known as the “PIPL”).

Ø  The Data Security Law (also known as the “DSL”).

Ø  The Cybersecurity Law (also known as the “CSL”).

The result is that China is now relaxing its restrictions on storing “foreign datasets” in the datacenters that are located there, and is now highly encouraging businesses from all over the world to even put their backups there as well.

4)     The Rise of E2EE:

This is yet another acronym, which stands for “End to End Encryption”.  Encryption has always been a favored tool in the arsenal to protect anything data related.  After all, it scrambles data so that if anything were to be intercepted by a third party, there is nothing they can do with it unless they have the appropriate key to decode it.  But with E2EE, the IT Security team will have no choice about what can be encrypted; by default, everything will be.  While this is heavily targeted towards the Generative AI algorithms and the datasets they use, E2EE can be a bad thing as well.  For instance, even a Network or Database Administrator with the right permissions can be denied access.
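
To make the encryption piece concrete, here is a minimal sketch of symmetric encryption and decryption using the Python cryptography package’s Fernet recipe.  This illustrates encryption at work, not a full E2EE protocol, which would also handle key exchange between the two endpoints:

    # pip install cryptography
    from cryptography.fernet import Fernet

    # In true E2EE, only the two endpoints would ever hold this key.
    key = Fernet.generate_key()
    f = Fernet(key)

    token = f.encrypt(b"Prompt and dataset contents to protect")
    print(token)  # ciphertext: useless to anyone who intercepts it

    # Without the key, even an administrator with network access sees only
    # the scrambled token above.
    print(f.decrypt(token))  # b'Prompt and dataset contents to protect'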

My Thoughts on This:

On a theoretical level, all of this sounds great: taking more steps so that the datasets that Generative AI uses are now even more protected.  But in the real-world sense, just how enforceable is all of this?  Normally, in a world where there is not much chaos or confusion, this could all very well be done.  But once again, given the political climate that we now have in the United States, who knows how this will all come together.

Then there is the issue of China.  They are the second largest economy in the world, and in fact, their manufacturing and supply chain logistics far surpass those of the United States.  For example, we are still trying to finish construction on the next Ford-class aircraft carrier, the “USS John F Kennedy”.

During this time, the Chinese are already working on, I believe, their third carrier, which would be quite comparable.

So, there are still many complexities and uncertainties which lie ahead because of these tariffs.  But one thing is for sure in this regard:  Given their sheer dominance, I bet they will far outpace the United States when it comes to Generative AI development and production.  Not only can they do it faster and cheaper, but the quality may prove to be far superior in the end.

The Top 4 Risks Of Outsourcing Gen AI To China

 


While there seems to be no end to the tariff war with China, many top CEOs are warning the Administration that there could very well be empty shelves in the major grocery stores, and other related goods stores, here in the United States.  In fact, even the major shipping containers that are coming from China are now starting to slow down.

Because of this, many countries could very well be turning to China now to be their major trading partner, replacing the US entirely.  One such area in which this is happening is the Generative AI Industry.  We have already seen this with Nvidia, where severe restrictions are now being placed upon them regarding the kinds of chips that they can export there.

But one area in which people could very well turn to China is actually developing the models that drive Generative AI.  After all, why pay more here in the US when you can have the same thing done there faster and cheaper (but of course, the quality of the development will still be an issue)?

But there are inherent risks depending upon another country to do this.  Here are some of them:

1)     Bias:

The technical definition of Generative AI bias is:

“Artificial intelligence bias, or AI bias, refers to systematic discrimination embedded within AI systems that can reinforce existing biases, and amplify discrimination, prejudice, and stereotyping.”

(SOURCE :  https://www.sap.com/resources/what-is-ai-bias)

To put it another way, this is when the output that has been yielded by the model produces some kind of content that is deemed to be biased, or even racist in some way.  Although this is a direct product of the datasets that have been fed into the model, a Gen AI programmer could still tweak the algorithms so that they produce this same kind of content, even though the data might have been checked beforehand.

2)     Optimization:

In the world of Generative AI, this is also known as “fine tuning”.  This is where you are trying to keep all the models in top condition so that they produce the best possible outputs.  Obviously, if you have created the model, you will know immediately how to do this.  But what if you had outsourced the model creation to another company in China?  Obviously, they are not going to reveal the secret sauce of their recipe, so fine tuning here could be a major problem, because you will not know the inner workings of the model.

3)     A Deepfake?

A Deepfake, as its name implies, is a “fake” version of a real person.  This is quite widely used during political election seasons, where a Cyberattacker could post a fake video of a politician asking for donations to their respective campaign.  So, in this regard, how do you know that a Generative AI model that has been developed for you is the real thing?  What if you are just getting a “Deepfake” of it?  This is an especially worrisome situation, since your customers will also be inputting data and information into the submittal forms of your web application.  This in turn will also be fed into your Gen AI model, so that you can analyze any trends to help you determine the viability of new products and services.
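
One practical (if partial) safeguard here is to verify the integrity of any model artifact that is handed to you.  Here is a hedged Python sketch that checks a delivered model file against a checksum the vendor supplied over a separate channel; the file path and hash value are hypothetical:

    import hashlib
    from pathlib import Path

    # Hypothetical checksum the vendor communicated over a separate channel.
    EXPECTED = "9f2c1e..."  # placeholder value

    delivered = Path("models/genai_model.bin")  # hypothetical artifact path
    actual = hashlib.sha256(delivered.read_bytes()).hexdigest()

    if actual != EXPECTED:
        raise SystemExit("Delivered model does not match the vendor's checksum!")
    print("Model artifact integrity verified.")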

4)     The Creation:

Whenever you hire an outside source to develop your Gen AI models, you will also want to meet the team that will be doing it, whether virtually or face to face.  Be very leery of hiring a company from overseas that does not introduce their team to you.  After all, it could be a Cyberattacker creating it, who could put all kinds of covert backdoors into the code so that they can gain direct access to your IT and Network Infrastructure.

My Thoughts on This:

The risks that I have described here can not only happen in China; they could very well happen here in the US as well.  The key difference is that here, we have contracts in place that can be enforced in a court of law, though it may take some time.

If you choose to outsource this to a company, say once again in China, and they violate the terms of the contract that they have signed with you, it will be very difficult at best, if not impossible, to gain any kind of legal recourse.

So, while faster and cheaper might be the way to go, think twice about that.  Quality will always beat those two in the end, no matter what the need or the application is.

Sunday, May 11, 2025

How To Plan Your Infrastructure as Code Environment: A 4-Point Checklist

 


There is one thing I don’t think I have ever written about before in a blog:  That is, “Infrastructure as Code”, also known as “IaC” for short.  It is a term that is commonly thrown around in the world of Cloud lingo, but many people do not really know what it is about.  In fact, as much as I have written about the Cloud, I never really paid too much attention to it.

So, before we go any further, here is a technical definition of it:

“Infrastructure as Code (IaC) is the managing and provisioning of infrastructure through code instead of through manual processes.”

(SOURCE :  https://www.redhat.com/en/topics/automation/what-is-infrastructure-as-code-iac)

Although the intricacies behind it can be quite complex, long story short, it just gives you another way to manage your Cloud-based deployment (whether it is in AWS or Microsoft Azure) on an automated basis, using various programming languages, such as Python.
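
To give a flavor of what “infrastructure as code” actually looks like, here is a minimal sketch assuming the AWS CDK v2 Python bindings (aws-cdk-lib); the stack and bucket names are hypothetical.  It declares an encrypted, non-public S3 bucket in code rather than by clicking through a console:

    # pip install aws-cdk-lib constructs
    from aws_cdk import App, Stack
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class StorageStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # The bucket's security posture is declared in code, so it is
            # reviewable, versionable, and repeatable.
            s3.Bucket(
                self, "AppDataBucket",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
                block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            )

    app = App()
    StorageStack(app, "storage-stack")
    app.synth()  # emits a CloudFormation template from this Python code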

But despite the tools that the Cloud providers give to manage your IaC environment, many businesses in Corporate America still fail to configure it properly.

Consider some of these statistics:

*According to the “2024 Cloud Security Report” from Check Point Software, 82% of businesses failed to properly configure their IaC environment.

*Back in 2022, because of a misconfiguration, ICICI Bank leaked well over 3.6 million files.

To see the detailed report that was published by Check Point Software, click on the link below:

http://cyberresources.solutions/blogs/2024-Cloud-Security-Report-CheckPoint.pdf

So, if you decide to make use of an IaC environment to help run your Cloud deployment, here are a few key tips to keep in mind:

1)     Plan:

The first thing that you need to do is map out exactly what you want the IaC environment to do.  But even more importantly, you need to give sincere consideration to the security issues that are involved, such as:

Ø  Defining the functional and control requirements.

 

Ø  The kind of Cloud environment that you want to have (for example, Private Cloud, Public Cloud, Hybrid Cloud, etc.), and the security that goes along with all of that.

 

Ø  Use tools like Terraform or CloudFormation to build out your IaC environment.

 

Ø  Create a backup strategy for your images that you create for your Cloud apps (such as Virtual Machines, Virtual Desktops, etc.).

 

2)     Software Development:

The primary goal of the IaC environment is to use code libraries to build it out.  Since security is often forgotten about when it comes to source code creation, keep the following in mind:

Ø  Create the appropriate rights, permissions, and privileges for everybody that is on the DevSecOps team.

 

Ø  Keep version control of all software builds that go into the IaC environment.

 

Ø  Always test any Open-Source APIs that you may use in a sandboxed environment first.

 

Ø  Monitor your Privileged Access Management (PAM) environment very closely.

 

3)     Testing:

Before you put your IaC framework into the production environment, you should first evaluate it:

Ø  Assess each component of it in a sandboxed environment (as just discussed).

 

Ø  Make sure that the source code has been completely vetted for any gaps or vulnerabilities.  This can very often be done through Vulnerability Scanning or Penetration Testing.

 

In fact, according to the “2024 State of Cloud Security Report” by Orca, over 74% of businesses failed to detect issues with their Cloud deployments (as they relate to the IaC environment that was created).  To get more details on this, click on the link below:

 

http://cyberresources.solutions/blogs/2024-State-of-Cloud-Security-Report%20(1).pdf

 

4)     Deploy and Monitor:

After you have implemented your IaC framework into your Cloud-based deployment, you now need to make sure that it is running smoothly.  Here are some points in this regard:

Ø  Make use of a SIEM to notify you on a real-time basis of any abnormal network activity or behavior.

 

Ø  Try to make use of Generative AI to filter out the False Positives.  This will help alleviate “Alert Fatigue” on your IT Security team (a toy sketch of this idea follows this list).

 

Ø  Have a Change Management process in place for any updates or reconfigurations that you need to do for your IaC environment, and make sure that everything is well documented.
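
As a toy illustration of the False Positive filtering idea mentioned above, here is a Python sketch that scores incoming SIEM alerts and suppresses low-confidence ones.  The keyword heuristic here is only a stand-in for where a real Generative AI classifier would sit, and the threshold is hypothetical:

    # Toy stand-in for an AI triage model: score alerts, suppress the noise.
    SUSPICIOUS_KEYWORDS = {"powershell", "exfiltration", "privilege", "lateral"}

    def triage_score(alert: dict) -> float:
        """Crude heuristic score; a Gen AI classifier would replace this."""
        text = alert["message"].lower()
        hits = sum(1 for kw in SUSPICIOUS_KEYWORDS if kw in text)
        return hits / len(SUSPICIOUS_KEYWORDS)

    alerts = [
        {"id": 1, "message": "PowerShell spawned with encoded command"},
        {"id": 2, "message": "User login from usual workstation"},
    ]

    THRESHOLD = 0.2  # hypothetical cut-off, tuned to your environment
    for alert in alerts:
        if triage_score(alert) >= THRESHOLD:
            print(f"Escalate alert {alert['id']} to the IT Security team")
        else:
            print(f"Suppress alert {alert['id']} (likely a False Positive)")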

 

My Thoughts on This:

As Cloud technologies evolve over time, there is no way that you can keep track of everything on a manual basis.  You will need automation, and lots of it.  This is where IaC will come in very useful.  But remember, it is also prone to Cyber Threat Variants; given how powerful IaC is, it will soon become a prized target for the Cyberattacker.

Therefore, keep checking the controls that you have in place for them, and change/upgrade them as necessary.  Also, we will see IaC being used heavily in Edge Computing.  This is where all the data processing occurs in a location that is close to your device.  This helps to avoid any downtime or network latency when you need your datasets the most.

 



Sunday, April 20, 2025

We Are In A Defining Moment At The Intersection Of OT & Critical Infrastructure

 


I have an upcoming book that will be published later this year.  It is all about Supply Chain Attacks, and in fact, one whole chapter is devoted to how the CrowdStrike and SolarWinds breaches happened.  But it is not just digital assets that are at risk; physical ones are prone as well.

In this regard, it is our nation’s Critical Infrastructure that is at grave risk.  Examples of this would include our water supply, gas and oil pipelines, the national power grid, our food supply system – all that we need to live comfortably every day. 

But the problem that drives the issue of instability in Critical Infrastructure is that the technology that runs it is far too outdated.  This is referred to as “Operational Technology” (OT), and it can be technically defined as follows:

“[It is] technology that interfaces with the physical world and includes Industrial Control Systems (ICS), Supervisory Control and Data Acquisition (SCADA) and Distributed Control Systems (DCS).”

(SOURCE:  https://www.ncsc.gov.uk/collection/operational-technology)

These components were built in the late 1960s and early 1970s; the parts for them are no longer available, and the vendors have simply disappeared.  Serious thought has been given to simply gutting out the old components and putting new ones in, but this is almost impossible.  There are many other subcomponents that rely upon them and would not work well with the newer stock.

Thought has even been given to just adding new Cybersecurity technologies on top of the existing OT stack, so that it would not have to be ripped out.  But yet once again, interoperability is the issue.  The old simply will not play nicely with the new.  Because of this, our Critical Infrastructure is at grave risk.  Consider some of these stats:

*Ransomware attacks on the OT that drives Critical Infrastructure have risen by 87% on a Year Over Year (YOY) basis.

*Through a study that they conducted, Palo Alto Networks discovered that at least 70% of businesses (which do not necessarily include the Critical Infrastructure) have suffered some sort of OT related security breach.

(SOURCE :  https://www.darkreading.com/ics-ot-security/boards-fix-ot-security-regulators)

It is also important to note that the Cyberattacker could quite easily attack the weak points in Critical Infrastructure directly, because there are so many of them.  But rather than doing that, and in an effort to cause a cascading effect of damage, they typically pierce through a backdoor in the IT and Network Infrastructure.

That way, they can stay in for long periods of time and wreak havoc on, say, the national gas pipeline system, as in the case of Colonial Pipeline.

But it’s not just here in the United States; these kinds of attacks are happening all over the world, with most of the headlines coming out of Ukraine.  In these cases, the Critical Infrastructure is not being hit directly per se, but rather through the OT or other IT/Network systems that drive it.  One of the best-known cases occurred in Lviv.

Back in 2024, a Russian hacking group deployed a malicious payload in the OT that drove the heating utility company there.  As a result of this, over six hundred buildings lost much needed heat for well over 48 hours. 

In fact, the very same thing even happened here in the United States, though it was not made public.  A Chinese hacking group (known as “Volt Typhoon”) deployed a piece of malware into the OT systems of the national power grid.

This went undetected for an alarming one-year period!!!  Luckily, nothing came of it, but the Cyberattackers had every opportunity to move in a lateral fashion to attack our water supply as well.

My Thoughts on This:

Unfortunately, at the present time, there is not much we can do, at least in my opinion, to really beef up the lines of defense at our Critical Infrastructure.  To do this, we would have to implement new controls into the components of the OT itself, which are the ICS, SCADA, and DCS (as presented in the definition).

But once again, you simply cannot expect the new to have a nice tango dance with the old – not going to happen.

The other option would be to hold the Board of Directors, and their corresponding C-Suite, accountable for taking more action.  But while they may acknowledge the fact that it is an issue, the chances of them taking any action on it are almost nil.

Heck, if they cannot address Cyber issues that directly impact them, what makes one think that they will act on Critical Infrastructure?

True, the Federal Government could step in, but given the political chaos that is happening today, this is too far-fetched a reality.  Even if any bills were passed into law, they would be far too outdated to keep up with the pace of technology.

But there is one option that could prove viable.  That is the Zero Trust Framework.  With this, the IT and Network Infrastructure of a Critical Infrastructure operator would be divided up into different segments, or “zones”.  Each one of these would have its own layer of protection, making use of Multifactor Authentication.

That way, no huge amount of modern technology would have to be implemented; the only items that would really be needed are the authentication mechanisms to confirm the identity of the end user.

The main premise behind this is that even if the Cyberattacker can break through one “zone”, the chances of them breaking through all of them become statistically close to zero (a rough sketch of the idea follows below).  But, as a country, we absolutely must come together as one to figure out how best to upgrade the OT systems and the Critical Infrastructure.  It’s not just one business that will be impacted; it is the lives of all Americans that could be gravely impacted in one fell swoop.
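
To make the zone idea a bit more concrete, here is a rough Python sketch (the zone names, users, and tokens are hypothetical) in which each zone independently re-verifies the end user before granting access, so that trust in one zone is never inherited by another:

    # Each zone re-authenticates independently: trust is never inherited
    # from another zone. Zone names and tokens here are hypothetical.
    ISSUED_TOKENS = {("alice", "billing-systems"): "482913"}  # per-zone MFA tokens

    def mfa_verified(user: str, zone: str, token: str) -> bool:
        """Stand-in for a real MFA challenge against your identity provider."""
        return ISSUED_TOKENS.get((user, zone)) == token

    def access_zone(user: str, zone: str, token: str) -> None:
        if mfa_verified(user, zone, token):
            print(f"{user}: access granted to zone '{zone}' only")
        else:
            print(f"{user}: access denied at zone '{zone}'")

    access_zone("alice", "billing-systems", "482913")   # granted
    access_zone("alice", "scada-controls", "482913")    # denied: wrong zone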

 

 

Friday, April 18, 2025

The New Cyber Metrics We Need Today: 5 Golden Ones

 


I usually do not write blogs during the week, but today is an exception.  It’s a holiday where I work, so in that regard, Happy Easter!!!  One thing we as humans hate is to be judged by others, whether it is in our personal or professional lives.  We always want to feel good around the people we are with, but unfortunately, being judged is a part of life.

Such is the case in Cybersecurity.  This field has a lot of metrics associated with it, and in fact, I wrote and published a book about them just last year.  You can see it in more detail at this link:

https://www.routledge.com/Generative-AI-Phishing-And-Cybersecurity-Metrics/Das/p/book/9781032820965

In it, I cover the major Key Performance Indicators (KPIs) and other metrics that the CISO and their IT Security team need to be aware of.  There are two of them, which are of prime importance:

1)     The Mean Time to Detect:

This is also referred to as the “MTTD”.  This reflects how long it takes an IT Security team to detect that a threat or security breach is actually happening.  Believe it or not, the average time for detection is a staggering 7 months.  Nobody really has a firm answer as to why it takes so long; either the IT Security team is too overwhelmed putting out other fires, or the Cyberattacker has become that stealthy and covert.

2)     The Mean Time to Respond:

This is also commonly known as the “MTTR”.  This metric reflects how long it takes an IT Security team to contain an actual breach.  There are no hard numbers on this one (as is the case with the MTTD), but the total time for containment will vary depending upon the severity of the threat variant itself.  In this instance, documents such as the Incident Response, Disaster Recovery, and Business Continuity Plans come into prime importance.

But many Cyber pundits are now claiming that these established metrics are too outdated and stale.  Meaning, they do not consider other variables that can impact detection and containment, such as Generative AI.  As I have also written about previously, it can be used for both good and bad.  So, you may be asking at this point:  “So what is next to come?”  Here are some thoughts that have echoed as a result:

1)     Priority:

Many people have pointed out that, for example, the MTTR and the MTTD cannot be blanket metrics that are used for every kind and type of security breach that happens.  Rather, these metrics must be adjusted to consider the following:

Ø  Exploitability

Ø  Impact

Ø  The sources that were used to detect/contain the threat.

 

In other words, the degree of potential severity (or actual severity, if the security breach has occurred) needs to be the key factor taken into consideration when calculating these two metrics (a rough sketch of such a weighting follows below).
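
As a hedged sketch of what a severity-adjusted version of these metrics might look like, here is a short Python example; the severity weights and incident data are purely hypothetical and would need to be tuned to your own risk model:

    from statistics import mean

    # Hypothetical severity weights: critical breaches should dominate the metric.
    WEIGHTS = {"low": 0.5, "medium": 1.0, "high": 2.0, "critical": 4.0}

    incidents = [
        {"severity": "critical", "hours_to_respond": 6},
        {"severity": "low", "hours_to_respond": 40},
        {"severity": "high", "hours_to_respond": 12},
    ]

    # Weighted MTTR: slow responses to severe incidents are penalized more
    # than slow responses to minor ones.
    weighted_mttr = (
        sum(WEIGHTS[i["severity"]] * i["hours_to_respond"] for i in incidents)
        / sum(WEIGHTS[i["severity"]] for i in incidents)
    )

    plain_mttr = mean(i["hours_to_respond"] for i in incidents)
    print(f"Plain MTTR: {plain_mttr:.1f} h, weighted MTTR: {weighted_mttr:.1f} h")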

2)     Monitoring:

A metric needs to be formulated which shows, once a security breach has been detected, how long it is taking the IT Security team to contain it.  True, this sounds just like the MTTR, but the MTTR is just a static number.  It only reflects the time after the entire breach has been put out.  This new metric would show how long containment is taking on a real-time basis.

3)     Practice:

To the best of my knowledge, the metrics that exist in the Cyber world today are used primarily for real-world situations.  How about creating a metric, or a group of metrics, that gauges the effectiveness of both the CISO and the IT Security team when conducting mock Cyberattacks?  Everybody seems to keep talking about doing them, but not about measuring the results at the end.  In my opinion, there should be a strong emphasis on this, as having this measure in mind will only push the IT Security team to sharpen their skills and response times for when an actual breach happens.

4)     Culture:

 

The sad matter of fact is that we live in a reactive society.  We only act when something bad happens.  Therefore, there have been calls to create a new metric or group of metrics that reflect the overall proactiveness of the IT Security team on a real-time basis, and how that has led them to be successful (or not) in the detection and containment of a security breach.  But it is particularly important to keep in mind that this would be a qualitative metric to calculate, as more subjective variables must be included here as well.

 

5)     After:

Yes, the MTTR shows how long it takes for the IT Security team to contain the threat variant.  But what about afterwards?  For example, how long does it take to restore mission-critical business operations?  How long does it take for the business to get back to the levels it was at before the security breach hit?  Some potential metrics here could revolve around both Disaster Recovery and Business Continuity.

My Thoughts on This:

Me personally, I do not like metrics, but in this case, I fully support them as they relate to Cybersecurity.  This is the only way that we will truly know if the CISO and the IT Security team are truly doing their jobs to the best levels that they can.  In the end, having good metrics will not only bring a strong reputational image in the eyes of the public, it can also make or break whether money and budget are approved by the C-Suite for any kind of Cybersecurity efforts to be undertaken in the future.
