Well, guess what, people?
It has been a year since the CrowdStrike fiasco, and from what we know, it was one of the biggest Cybersecurity disasters to happen in a long time, costing at least $5.4 Billion in overall damage. Although it was caused by a defective software update rather than a malicious Cyberattack, it is often discussed alongside the classic “Supply Chain Attack”, in which a single point of entry is used to deliver the payload so that it can inflict damage upon thousands of victims within minutes. The impacts can occur simultaneously, or they can be delivered in a staggered fashion.
That is exactly what happened in the SolarWinds fiasco: through its “Orion” platform, the Cyberattacking group was able to insert a malicious payload through just one point of vulnerability, and from there, once again, thousands of victims were impacted. So what lessons have been learned from all of this? Here is the proverbial laundry list:
1) Determining what happened:

In the aftermath, CrowdStrike launched what is called a “Root Cause Analysis”, or “RCA” for short. This is simply Cybersecurity jargon for getting to the bottom of what exactly happened. Here is what they found:
• There were a number of software validation errors, which were the main culprit of the disaster.

• Not enough testing was done on these software patches/upgrades, which is what allowed the validation errors to slip through.

• The deployment model was flawed, and it too had not been evaluated; this is what impacted millions of people around the world, especially the airlines and the airports. (A hedged sketch of a safer, staged rollout appears right after this list.)
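To make the last two findings concrete, here is a minimal Python sketch of the kinds of guardrails that were apparently missing: validate a content update against what the endpoints actually expect, and then roll it out in small “rings” instead of pushing it to the entire installed base at once. This is not CrowdStrike’s actual pipeline; the names (`ContentUpdate`, `deploy_to_host`, `ring_is_healthy`) and the ring percentages are purely illustrative assumptions.

```python
"""Hypothetical guardrails, NOT CrowdStrike's real pipeline: validate a
content update against what endpoints expect, then deploy it in small
rings with a health check between each ring."""

from dataclasses import dataclass


@dataclass
class ContentUpdate:
    name: str
    fields: list[str]           # the input fields this update supplies
    expected_field_count: int   # what the interpreter on the endpoint expects


def validate(update: ContentUpdate) -> None:
    """Fail fast if the update does not match what the endpoints expect."""
    if len(update.fields) != update.expected_field_count:
        raise ValueError(
            f"{update.name}: supplies {len(update.fields)} fields, "
            f"endpoints expect {update.expected_field_count}"
        )


def deploy_to_host(update: ContentUpdate, host: str) -> None:
    print(f"deploying {update.name} to {host}")


def ring_is_healthy(ring: list[str]) -> bool:
    # Placeholder: in practice this would watch crash/boot-loop telemetry.
    return True


def rollback(update: ContentUpdate, hosts: list[str]) -> None:
    for host in hosts:
        print(f"rolling back {update.name} on {host}")


def staged_rollout(update: ContentUpdate, hosts: list[str],
                   ring_sizes=(0.01, 0.10, 1.00)) -> None:
    """Deploy to 1%, then 10%, then 100% of hosts, checking health in between."""
    validate(update)
    deployed = 0
    for fraction in ring_sizes:
        target = int(len(hosts) * fraction)
        ring = hosts[deployed:target]
        for host in ring:
            deploy_to_host(update, host)
        if not ring_is_healthy(ring):
            rollback(update, hosts[:target])
            raise RuntimeError(
                f"{update.name}: ring at {fraction:.0%} failed its health check")
        deployed = target
```

The point of the sketch is simply that a bad update caught at the 1% ring stays a small incident instead of a global outage.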
2) More best practices:

As a result of this, many government and private sector entities have either created or doubled down on Frameworks meant to mitigate any chance of a CrowdStrike-style incident ever happening again. Some of the best examples of these are:
• ISO 27001: More information can be found at this link: https://en.wikipedia.org/wiki/ISO/IEC_27001
It spells out the requirements for an Information Security Management System (“ISMS”), covering how the IT and Network Infrastructure of any type or kind of entity should be properly managed and protected.
• ISA/IEC 62443: More information can be found at this link: https://en.wikipedia.org/wiki/IEC_62443
It is aimed at securing Industrial Automation and Control Systems, and among other things it spells out the need for a secure development lifecycle, including security testing of software before it is released into the production environment.
• CISA: In August of 2024, it released what is known as the “Software Acquisition Guide”. More information about it can be seen at this link: https://www.cisa.gov/resources-tools/resources/software-acquisition-guide-government-enterprise-consumers-software-assurance-cyber-supply-chain.
CISA also heavily recommends that businesses participate in and agree to its “Secure by Design Pledge”. More details on it can be seen at this link: https://www.cisa.gov/securebydesign/pledge
3) A sense of sharing:

No single person is entirely responsible for making sure that the Software Development Lifecycle (also known as the “SDLC”) has been carried out properly. It is the shared responsibility of both the primary entity and any outside third parties that it may have decided to use to create a particular kind of software package. In other words, if a catastrophe does happen, everybody needs to assume their share of the responsibility and ownership for what exactly happened.
My Thoughts on This:
Here are some other thoughts that I have about this as well:
• Testing for security holes and vulnerabilities simply cannot wait until the very end of the SDLC. It must be addressed from the very beginning, even as the project requirements and milestones are being drawn up. (See the first sketch after this list for what this “shift left” can look like in practice.)
• All entities, no matter what industry they belong to, must adopt a Change Management Policy. This is where a version history of all the changes made during the SDLC is noted, documented, and archived for later use. It is equally important to have a Change Management Committee, where a selected individual from each department participates in the entire process, especially when a change impacts the entire company. (The second sketch after this list shows one way such a change record could be captured.)
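On the first point, “shifting left” can be as simple as making security checks the very first stage of the build pipeline rather than a gate right before release. Below is a minimal Python sketch that does exactly that: it runs a static analyzer and a dependency audit and fails the build on the spot. The choice of tools (bandit and pip-audit) is just one possible pairing and assumes they are installed; substitute whatever fits your own stack.

```python
"""Minimal "shift left" sketch: run security checks as the first CI step,
not as a gate right before release. Assumes bandit and pip-audit are
installed; swap in the scanners your own stack uses."""

import subprocess
import sys

# Each entry is a command that must succeed for the build to continue.
CHECKS = [
    ["bandit", "-r", "src"],  # static analysis of our own source code
    ["pip-audit"],            # known vulnerabilities in installed dependencies
]


def main() -> int:
    for cmd in CHECKS:
        print(f"running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"security check failed: {' '.join(cmd)}", file=sys.stderr)
            return 1  # fail the build immediately, not at the end of the SDLC
    return 0


if __name__ == "__main__":
    sys.exit(main())
```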
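On the second point, the change history does not have to live in a heavyweight tool to be useful. The sketch below shows one way a single change record could be captured and archived to an append-only log that the Change Management Committee can review later. The field names are assumptions made for illustration, not an industry-standard schema.

```python
"""Illustrative change record for a Change Management Policy.
The schema is an assumption, not an industry standard."""

import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class ChangeRecord:
    change_id: str
    description: str
    requested_by: str
    approved_by: str                 # e.g. the Change Management Committee reviewer
    affected_systems: list[str]
    rollback_plan: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())


def archive(record: ChangeRecord, path: str = "change_log.jsonl") -> None:
    """Append the record to an append-only JSON Lines log for later audits."""
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")


# Example usage (all values are made up):
archive(ChangeRecord(
    change_id="CHG-2025-0042",
    description="Update endpoint agent content validator",
    requested_by="dev-team",
    approved_by="cmc-reviewer",
    affected_systems=["endpoint-agent-build"],
    rollback_plan="Redeploy the previous content package",
))
```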