Sunday, September 15, 2024

Understanding What An EDR Really Is Without The Techno Jargon

 


The Cybersecurity world, as I had mentioned in one of my previous blogs, is no doubt full of techno jargon.  While using these fancy terms might be great for marketing efforts in order to attract new customers, the bottom line is that at some point in time, you are going to have to break this down for people to understand.  This is especially critical when you onboard a new customer. 

They are not going to care about the techno jargon that you dazzled them with before; now they want to make sure that the product or solution is going to work, and yield a positive Return On Investment (ROI) down the road.

Such is the case with this new piece of techno jargon.  It is called “Endpoint Protection”.  Although the deployments that are involved with this can be fairly complex, depending upon your requirements, simply put, all that it means is beefing up the lines of defenses that you have for all of your devices, whether they are physical or even in the cloud.

Probably the most typical example of this is the wireless devices that you have given to your employees in order for them to conduct their daily job tasks.  Obviously, given their sheer importance, you will want to ensure that they are as Cyber secure as possible. 

So how can one go about doing this, in clear and simple terms?  Well, here are some tips:

1)     Deployment:

It is always preferable to use the same Cyber vendor for Endpoint Protection solutions, unless you have a compelling reason to use different vendors.  But whatever route you decide to go with, always try to stick to the same deployment methodology.  True, each product/solution will be different, but develop a set of best standards and practices that are uniform.  That way, it will be easier to troubleshoot issues and do upgrades in a consistent manner over time.

2)     Configuration:

Whenever you do software patches and firmware upgrades, keep a detailed history of what has actually been installed.  Likewise, if you make any changes to the Endpoint Protection solution itself, that has to be documented as well.  Remember, depending upon how large your organization is, you will need to inform all of your employees well ahead of time of the changes that will occur.  But first, it is highly advisable to have a meeting with representatives from the other departments to see what the impact will be, and how it can be minimized.  This is technically known as “Configuration Management”.
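A change history does not need fancy tooling to get started.  As a minimal sketch (the file name and record fields here are just illustrative assumptions, not any standard), each patch or configuration change can be appended as one JSON record:

```python
import datetime
import json
import pathlib

LOG = pathlib.Path("endpoint_change_log.jsonl")  # hypothetical log location

def record_change(device, component, old_version, new_version, approved_by):
    """Append one change record per line (JSON Lines), building an
    auditable history of patches and configuration changes."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "device": device,
        "component": component,
        "old_version": old_version,
        "new_version": new_version,
        "approved_by": approved_by,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

A real Configuration Management process would layer approvals and rollback plans on top of this, but even a simple append-only log answers the key forensic questions: what changed, when, and who signed off.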

3)     Logging:

If, in the unfortunate chance, your business has been hit with a security breach, you will want at some point to conduct a detailed forensics investigation to determine how exactly it happened.  You will need all of the evidence that you can get, and one of the best forms of this is the log files that are output by the Endpoint Protection solution.  Thus, make sure that data is being collected in real time, and that your solution is optimized at all times.  Further, by using Generative AI, keep track of any unusual or abnormal behavior in the network traffic to and from all of your Endpoint Devices.
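Commercial tools use far more sophisticated models, but the core idea of flagging abnormal endpoint traffic can be sketched with a simple statistical baseline (the three-sigma threshold here is an illustrative assumption, not a product setting):

```python
import statistics

def flag_anomalies(byte_counts, threshold=3.0):
    """Return the indices of traffic samples whose z-score against the
    series' own mean and standard deviation exceeds the threshold."""
    mean = statistics.fmean(byte_counts)
    stdev = statistics.pstdev(byte_counts)
    if stdev == 0:  # perfectly flat traffic: nothing stands out
        return []
    return [i for i, b in enumerate(byte_counts)
            if abs(b - mean) / stdev > threshold]

# A sudden burst of outbound bytes stands out against steady traffic:
samples = [100] * 20 + [10_000]
print(flag_anomalies(samples))  # the spike at index 20 is flagged
```

A real Endpoint Protection solution would baseline per device and per protocol, but the principle is the same: learn what normal looks like, then alert on the outliers.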

4)     XDR:

Not to throw more techno jargon out there, but this is an acronym that stands for “Extended Detection and Response”.  This is actually a much more sophisticated version of the traditional Endpoint Protection solution, in that it can do the following:

*It can take a very proactive approach by continually changing the attack surface that may exist on all of your Endpoint Devices.  This is an attempt to confuse the Cyberattacker in case they are targeting a specific device of a particular employee.  The main benefit of this is that it will make any vulnerabilities harder to detect and subsequently exploit. 

*It can further beef up the defenses for both the CPU and the memory.  These are critical areas of your Endpoint Devices in which the Cyberattacker can literally hide out unnoticed, and even deploy malicious payloads, making detection almost impossible.

*Its database will always be updated in real time with the latest threat profiles, so that it can offer maximum protection to your devices.  Also, since Generative AI is now being used in Endpoint Protection solutions, it can even learn on its own and make reasonable extrapolations as to what future threat vectors could possibly look like.  This is a far cry from the traditional Antivirus and Antimalware software packages of today, whose databases are only updated at intervals, with the timing largely dependent on the vendor.

My Thoughts On This:

Although procuring and deploying an Endpoint Protection solution may appear to be an expensive proposition, the truth is that it really is not.  A lot will depend, though, upon how many devices you want to protect. 

Of course, it is always wise to make sure that all of them are Cyber fortified.  In fact, if you make use of a cloud deployment, such as that of Microsoft Azure, the Endpoint Protection solution will already be there.

All you have to do is just deploy it, and make sure that it is properly configured for your environment.  But my suggestion here would be to engage with a Cloud Services Provider (CSP) that can actually do and manage all of this for you.

Some of the other key benefits of making use of an Endpoint Protection solution for your business include the following:

*It is lightweight, in terms of its file size and the processes that run within it.  This means that there will be no disruption to your existing processes.  It will also not result in “bloatware”.

*Apart from keeping log files, the Endpoint Protection solution also acts like a “Black Box”, very similar to the ones you hear about being used in commercial aircraft.  Meaning, it can also record all of the activity that occurs on each and every device on which you have the solution deployed. 

This will also prove to be a great boon if you ever need to conduct a Digital Forensics Investigation.

Sunday, September 8, 2024

The Advent Of "Trusted Source" In Cybersecurity

 


One of the biggest buzzwords that has been around (and still continues to be) is that of “Trust”.  This is a word we hear often, both in our professional and personal lives.  But no matter what venue you hear it in, have you ever stopped to think what trust really means?  Well, as it relates to Cyber, here is a definition of it:

“At the heart of trust in information security is authentication, the process of verifying the identity of a user, device, or system. Authentication methods can include something a user knows, something a user has, or something a user is.”

(SOURCE:  https://asmed.com/understanding-trust-in-information-security-a-comprehensive-guide/#:~:text=At%20the%20heart%20of%20trust,or%20something%20a%20user%20is.)

So really, it is all about making sure that the individual who wants to get access to your shared resources is actually who they are claiming to be.  There are many ways to do this, ranging from the ever so famous password to challenge/response questions, to the RSA token, to the One Time Password (OTP), and even down to Biometrics. 

Given the advent of Generative AI, and how it can be used to create fakes that are extremely hard to discern from the real thing, businesses are opting to use multiple layers of identification.

This is known as “Multifactor Authentication”, or “MFA” for short.  Essentially, you are using at least two or more layers of authentication.  But in order to make this robust, all of the authentication mechanisms must be of a different nature.  For example, using a password along with an RSA token, and then something like Fingerprint Recognition, in quick succession.
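The “something you have” layer is often a one-time code from an authenticator app.  As a rough illustration, and not any vendor’s actual implementation, here is the standard TOTP algorithm (RFC 6238) in plain Python:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Generate a time-based one-time password (RFC 6238, SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server and the device share the Base32 secret once at enrollment; after that, both derive the same six-digit code independently every 30 seconds, so the code itself never has to travel ahead of time.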

But now, there is a new term that is being bandied about in the world of Cyber, and this is called the “Trust Anchor”.  What is it, you may be asking.  Here is also a definition of it:

“Trust anchors serve as authoritative data sources that provide verifiable and accurate identity information.”

(SOURCE:  https://www.darkreading.com/cybersecurity-operations/trust-anchors-in-modern-it-security)

So the key here is a source, deemed to be reputable, that you can use to confirm the identity of an individual.  These entities can be both human and non-human.  For instance, it can be a passport, a state ID card, or even an outside third party that you deem to be honest.  These can include the credit reporting agencies, and even background check companies.

Using a “Trusted Source” does have some key advantages and disadvantages.  Here is a sampling of them:

The Advantages:

Ø  It can statistically reduce the chances of fraudulent activity happening down the road.  This is especially useful for cross-referencing any information and data that you have on a particular individual.

 

Ø  It can help to make sure that whatever information you use in your company actually comes from a reputable source.  The prime example of this is once again Generative AI.  As I have written about in the past, a good model needs tons of data in order to keep it robust.  It’s like all of the fluids that go into your car, from the gas to the oil to the brake fluid.  All of this needs to be filled up by a “Trusted Source”, such as a mechanic that you know can do the job well.  For the Generative AI model, you also need to make sure that the datasets you collect to feed it come from a very reputable source.  If not, not only will your results (the outputs) be highly skewed, but if you are using this model to drive parts of your business, it can even create horribly wrong outputs that will only tarnish your brand reputation.

The Disadvantages:

Ø  The privacy that is involved.  Even if you collect datasets from a “Trusted Source” that you find to be highly reliable, you will ultimately be responsible for the safekeeping of them.  Meaning, you need to make sure that you have the right controls in place in order to mitigate the risks of any kind of Data Exfiltration Attack from happening. 

 

Ø  Although it may sound like an oxymoron, you actually have to trust the “Trusted Sources” themselves.  For instance, if you are using a state ID to confirm the identity of an individual, you have to make sure that it is genuinely authentic, not a fake.  Also, if you decide to use a third party to provide you with “Trusted Data”, you need to make sure that you trust them first.  This can of course take time to develop, but as a rule of thumb, the best place to get started is to have an exhaustive vetting process in place before you select one.

My Thoughts On This:

Another strategic benefit of using a “Trusted Source” is that it can also help create a baseline from which to follow.  For example, you may procure your network security tools from a vendor that you inherently trust. 

As a result, you will also trust the log files that they output.  And from here, you can then create a baseline to determine what is actually deemed to be normal network activity.  Of course, anything outside of this should be deemed as abnormal patterns of activity. 

In a way, the above example is like building a “Chain Of Trust”.  The term “Trust” will always be around in Cybersecurity, but the important thing to remember is that you do not get caught up in all of the techno jargon that is out there. 

As long as you have faith in whatever “Trusted Source(s)” you make use of, that is all you have to be worried about.

Monday, September 2, 2024

3 Golden Use Cases For Confidential Computing

 


Happy Labor Day everybody!!!  As we now head into the 4th quarter of this year, Cybersecurity is going to be gaining more attention.  The primary fuel for this will be the Presidential Election that is coming up in just a matter of two months.  There is widespread fear about voter fraud and the proper identification of voters, and the biggest concern now is what impact Generative AI will have.  It has evolved very quickly since the last election, and some of the biggest fears are as follows:

*Widespread use of Deepfakes

*A huge uptick in Phishing based emails

*Spoofed and phony websites, asking for campaign donations

Apart from the other ways I have written about before to mitigate these risks, I came across a new concept today that I had never heard of before.  It is called “Confidential Computing”.  A technical definition of it is as follows:

“Confidential computing technology isolates sensitive data in a protected CPU enclave during processing. The contents of the enclave, which include the data being processed and the techniques that are used to process it, are accessible only to authorized programming codes. They are invisible and unknowable to anything or anyone else, including the cloud provider.”

(SOURCE:  https://www.ibm.com/topics/confidential-computing).

Put another way, it is using the specialized parts of the Central Processing Unit (CPU) in order to protect your most sensitive datasets.  But the trick here is that it is only those that are currently being processed that are shielded from prying eyes, such as the Cyberattacker.  More details on it can also be found at this link:

https://www.darkreading.com/cyber-risk/how-confidential-computing-can-change-cybersecurity

So, why should you consider making use of this technique for your business?  Here are three compelling reasons:

1)     Compliance:

The fuel that feeds Generative AI is datasets.  A model needs a lot of them not only to start learning, but on an ongoing basis in order to create the most robust set of outputs possible.  Because of this, data theft and data leakage have become much more prevalent, and the Cyberattacker is taking full advantage of it.  As a result, the major data privacy laws, such as the GDPR, CCPA, HIPAA, etc., have now included the use of datasets in Generative AI models in their tenets and provisions of compliance.  This is still a rather murky area, but by using Confidential Computing you will have some reasonable assurance of coming to some degree of compliance with these laws.  This is especially advantageous to those businesses that conduct a lot of e-commerce-based transactions, or process a lot of financial information and data.

2)     Cloud:

Whether you make use of AWS or Microsoft Azure, data leakages are a common threat, and ultimately, you will be held responsible for anything that occurs.  Not the Cloud Provider, as many people believe!!!  While these two give you out-of-the-box tools to protect your datasets, you are responsible for their proper configuration.  But whatever you make use of, ensure that even in this kind of environment you have deployed Confidential Computing.  To do this, make sure that you have implemented what is known as a “Trusted Execution Environment”.  This is the secure area of your CPU, whether it is physical or virtual.  It makes use of both public and private keys, and mechanisms are established within it to mitigate the risks of a malicious party intercepting them.
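Real Trusted Execution Environments (Intel SGX, AMD SEV, and their cloud equivalents) enforce this in hardware and back it with signed attestation quotes.  Purely as a conceptual sketch, and not any vendor’s API, the attestation step boils down to comparing a cryptographic measurement of what is running inside the enclave against a known-good value:

```python
import hashlib
import hmac

def measure(enclave_code: bytes, config: bytes) -> str:
    """A simplified 'measurement': a hash over the enclave's code and
    configuration, standing in for what the hardware computes at launch."""
    return hashlib.sha256(enclave_code + b"|" + config).hexdigest()

def verify_attestation(reported: str, expected: str) -> bool:
    """Accept the enclave only if its reported measurement matches the
    value the relying party expects (constant-time comparison)."""
    return hmac.compare_digest(reported, expected)

expected = measure(b"model-server-v1", b"prod-config")
assert verify_attestation(measure(b"model-server-v1", b"prod-config"), expected)
assert not verify_attestation(measure(b"tampered-code", b"prod-config"), expected)
```

In a real deployment the measurement is also signed by the CPU vendor’s key, so you verify that signature chain before trusting the value; this sketch skips that step.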

3)     AI:

As was mentioned earlier in this blog, Generative AI models need tons of datasets to train on so that they can learn effectively.  But once again, you are responsible for the safekeeping of them!!!  Yes, another way to make this happen, to some extent, is once again to use Confidential Computing.  This also helps to provide assurances that the datasets you feed into the model are authentic, and not fake.  This is something that you must address now if you make use of AI, or any subset of it, in your business.  The urgency here is underscored by a recent survey conducted by Code42, in which 89% of the respondents believed that using new AI methodologies is actually making their datasets much more vulnerable.

My Thoughts On This:

As you can glean from this blog, the protection of your datasets should be one of the top priorities for the CISO and their IT Security team.  It’s not just the compliance that you have to look out for; it’s also the reputational damage that your company will suffer if you are hit with a Data Exfiltration attack.  After all, it can take months to get a new customer, but only mere minutes to lose them. 

By making use of Confidential Computing, you can provide one very strong layer of assurance to your customers and prospects that you are taking a very proactive approach to safeguarding the data that they entrust you with.

Finally, in this blog, we mentioned data that is being processed (also called “Data In Use”).  There are two other types of datasets that need to have careful attention paid to them as well:

Ø  Data At Rest:  These are the datasets that are simply residing in a database, and not being used for any special purpose.  They are just “archived”.

 

Ø  Data In Motion:  These are the datasets that are being transmitted from one system to another, such as going from a server in one location to another in a different location.

Sunday, September 1, 2024

4 Grave Threats To The SS7 Wireless Protocol

 


Well, I started my first doctoral level class at DSU last week, the course I am taking is in Wireless Security.  So, guess what today’s blog is all about?  The threats to it!!!  So let’s get started.  Many of us use our smartphone for both our personal and professional lives.  If we lose it, a total feeling of paralysis comes over us. 

Even though Wireless Communications seems simple to use, the technology that drives it is actually complex.  One such protocol that you may not have heard of is known as the “Signaling System 7”, also known as the “SS7” for short. 

A technical definition of it is as follows:

“It is the system that controls how telephone calls are routed and billed, and it enables advanced calling features and Short Message Service (SMS). It may also be called Signaling System No. 7, Signaling System 7 or -- in the United States -- Common Channel Signaling System 7, or CCSS7.”

(SOURCE:  https://www.techtarget.com/searchnetworking/definition/Signaling-System-7)

Despite its level of importance in Wireless Communications, it still uses the old fashioned “Trust Based Architecture”, in which all users are presumed to be authentic and legitimate.  Meaning, there are no mechanisms that are implemented into it to actually confirm the identity of the user before they are given access to use the available resources.  Thus, it has become a prime target for the Cyberattacker. 

Here is a sampling of the attacks that the SS7 is vulnerable to:

1)     Phishing:

As I have mentioned before, this is probably the oldest threat variant in the books.  But it is still being used today, and has become even deadlier than ever.  In this instance, the Cyberattacker can easily intercept the lines of communication, and from there, insert a Phishing message.  This very often comes in the form of a text message, and this kind of hack is known as “Smishing”.  But unlike Phishing emails, it is hard to determine if a text message is real or not, because there are no other telltale clues except for any spelling or grammatical mistakes.

2)     Credentials:

 

If you make use of Two Factor Authentication (2FA) on your smartphone, there is a chance that whatever information or data you provide to confirm your identity can also be stolen.  This is because the SS7 itself does not support 2FA (as far as I know), and it leaves a backdoor open through which the Cyberattacker can steal those credentials.  These in turn can be used to spoof your identity.  This kind of vulnerability also increases the attack surface amongst the major telecom carriers (such as Sprint, T-Mobile, Verizon, AT&T, etc.).

 

3)     Denial Of Service:

The acronym for this is “DoS”.  This is where the Cyberattacker overloads a server with malformed data packets in order to greatly slow down its processing power.  If multiple servers are targeted, and multiple devices are used to launch the malicious data packets, then this becomes known as a “Distributed Denial Of Service” attack, or “DDoS”.  The primary targets for these kinds of attacks are typically servers that host web applications.  But over time, as technology has evolved, this risk has been mitigated, especially with the deployment of the “Next Generation Firewall”.  This is not so with Wireless Communications.  Because of its aging security mechanisms, the SS7 now makes it easier than ever for the Cyberattacker to launch massive DoS or DDoS attacks onto the Wireless Grid, and from there, render hundreds and even thousands of devices unable to communicate with one another.
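One of the rate-limiting techniques a Next Generation Firewall relies on can be sketched as a token bucket: each source gets a refillable allowance of packets, and traffic beyond it is dropped.  This is a generic illustration of the idea, not an SS7-specific mechanism:

```python
import time

class TokenBucket:
    """Allow at most `capacity` burst packets, refilled at `rate` per second."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to the time elapsed, up to capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: drop the packet
```

A flood from one source quickly exhausts its bucket and gets dropped, while legitimate low-rate traffic passes through untouched.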

4)     Expense:

Because attacks on the SS7 very often don’t get noticed immediately, all businesses, no matter how large or small they might be, end up having to pay higher costs because of the damage that has been incurred as a result of any security breach.  This doesn’t get realized until the bill is received, and the expenses are much higher than expected.  Worse yet, if the Cyberattacker covertly adds on more services to the smartphone plan, this will drive up costs even more. 

My Thoughts On This:

So you might be asking now how you can mitigate the risks of the security vulnerabilities that are posed by the SS7?  Here are some tips:

1)     Watch the bills:

Don’t just wait for the electronic or paper statement to be delivered.  Instead, ask your Wireless Provider to provide you with charges as they happen, in real time.  That way, if anything looks unusual, you will be able to nip it in the bud.  Also, you should be able to set certain threshold levels, so that if a certain expense limit is reached, that service will automatically be turned off until you investigate further.  On a side note, this kind of feature is also available if you use cloud-based services, such as Microsoft Azure.  You can establish certain billing thresholds, and if any go over the limit, your Virtual Machine (VM) will pause until you reactivate it again.
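The threshold idea above is easy to prototype against whatever usage feed your provider exposes.  A minimal sketch (the service names and limits are made up purely for illustration):

```python
def services_to_suspend(usage, limits):
    """Given running charges per service and per-service spending limits,
    return the services whose spend has reached or crossed its limit."""
    return sorted(service for service, amount in usage.items()
                  if amount >= limits.get(service, float("inf")))

# Example: SMS spend has hit its cap, data has not
usage = {"sms": 12.00, "data": 40.00, "roaming": 0.00}
limits = {"sms": 10.00, "data": 50.00}
print(services_to_suspend(usage, limits))  # ['sms']
```

Services with no configured limit default to unlimited here; a stricter policy could instead flag any service that has no limit set at all.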

2)     Watch the Bot:

Just like Generative AI, Bots can be both useful and a menace.  In the case of the latter, the Cyberattacker typically uses them in order to further ramp up the scale of their hacks.  Ask your Wireless Provider about any tools that you can use to keep the Bots at bay.  While the defenses may not be stellar, you will at least keep your bill at an expected level.

3)     Use Geofencing:

To me this was a new term, so I looked it up.  Here is a definition of it:

               “A geofence is a virtual fence or perimeter around a physical location. Like a real fence, a geofence creates a separation between that location and the area around it. Unlike a real fence, it can also detect movement inside the virtual boundary. It can be any size or shape, even a straight line between two points.”

               (SOURCE:  https://www.verizonconnect.com/glossary/what-is-a-geofence/)

In other words, you can create “virtual fences” across the physical areas in which your employees use their smartphones.  The key advantage to this is that you will be able to quickly notice (via alerts) any devices that leave or enter this perimeter.  Of course, you will want to create a filter so that an alert is also triggered if an unknown device penetrates the “virtual fence”.
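The perimeter check itself is simple great-circle math.  As an illustrative sketch using only the standard library (the coordinates below are hypothetical), a circular geofence boils down to a haversine distance comparison:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def inside_geofence(device, center, radius_km):
    """True if the device's (lat, lon) lies within the circular fence."""
    return haversine_km(*device, *center) <= radius_km

office = (41.8781, -87.6298)  # hypothetical perimeter center (Chicago)
assert inside_geofence((41.8790, -87.6300), office, radius_km=5.0)
assert not inside_geofence((48.8566, 2.3522), office, radius_km=5.0)  # Paris
```

An alerting pipeline would then fire whenever `inside_geofence` flips for a tracked device, or whenever an unknown device first appears inside the fence.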
