Sunday, October 13, 2024

4 Grave Risks Of Using Non Human Identities & How To Fix Them

 


As the world of Generative AI continues to explode, a new trend is emerging:  The Non-Human Identity.  You may be wondering what it is.  Well, here is a good definition of it:

“Non-human identities (NHIs) are digital entities used to represent machines, applications, and automated processes within an IT infrastructure. Unlike human identities, tied to individual users, NHIs facilitate machine-to-machine interactions and perform repetitive tasks without human intervention.”

(SOURCE:  What is a Non-Human Identity? | Silverfort Glossary)

Remember how I have written about the Digital Person before?  Essentially, this is an avatar, or even a chatbot, that is given human-like qualities in order to interact with you.  Instead of typing in a message, you can talk to it and have a conversation with it. 

One of the best examples of this is its use in customer service.  Instead of waiting on hold for hours on end to speak with an actual human being, you can summon up the Digital Person within a matter of seconds. 

If you are not satisfied with the answers, you can always ask the Digital Person to refer you to an actual representative.  This is an example of a Non-Human Identity, also known as “NHI” for short.  While you can call the Digital Person by a name, in the grand scheme of things it really does not have any form of identification.

NHIs can be particularly useful tools to have around, especially for process automation and augmentation, and for monitoring all the interconnections that exist in the world today.  In fact, it has been estimated that for every 1,000 people, there are some 10,000 of these kinds of connections.  It is almost impossible for any human being to keep close tabs on all of them, which is why the NHI is so beneficial.

But despite this, there are certain risks that come with using this advancement in Generative AI.  Here is a sampling of some of the major ones:

1)     Expansion of the attack surface:

In the world of Cybersecurity, almost everybody in it has heard of this term.  For example, if you have too many network security devices, this can expand your attack surface.  This goes in direct contradiction of the old proverb that “more is better”.  The same can also be said of the NHI.  While deploying many of them could prove to be beneficial, in the intermediate and long term it also greatly expands the attack surface of all your interconnections.  Since these are mostly powered by Generative AI, there are still vulnerabilities in them that the Cyberattacker can exploit very quickly.

2)     Hard to see:

It is important to note that many of the NHIs that are deployed tend to function and operate in the background.  As a result, they tend to be forgotten about, especially when it comes time to upgrade and/or optimize them.  This is yet another blind spot that the Cyberattacker knows very well and can thus use to quickly launch a malicious payload into them.  The net effect is a negative, cascading impact across your entire IT/Network Infrastructure in a matter of minutes.

3)     Violation of PAM:

This is an acronym that stands for “Privileged Access Management”.  These are the rights, privileges, and permissions that are assigned to superuser accounts.  An example of this would be a network or database administrator.  They will of course have elevated access to keep the networks and databases running smoothly.  But these same types of PAM-based accounts are also assigned to NHIs so that they can carry out automated tasks without human intervention.  Once again, the IT Security team often forgets about this as well, and the consequence is that the Cyberattacker can gain very quick access to these accounts and, from there, immediate access to anything that they want.
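To make Least Privilege for a PAM-assigned NHI concrete, here is a minimal sketch in Python (the class, account name, and permission strings are purely illustrative assumptions, not any vendor's API): the machine identity is denied everything by default and is granted only the narrow permissions its automated task requires.

```python
from dataclasses import dataclass, field

@dataclass
class NonHumanIdentity:
    """A machine identity with an explicit, minimal set of permissions."""
    name: str
    permissions: set = field(default_factory=set)

    def can(self, action: str) -> bool:
        # Deny by default: only actions that were explicitly granted pass.
        return action in self.permissions

# A backup bot needs to read the database and write to storage -- nothing more.
backup_bot = NonHumanIdentity("backup-bot", {"db:read", "storage:write"})
print(backup_bot.can("db:read"))   # True
print(backup_bot.can("db:drop"))   # False
```

The point is simply that an NHI should never inherit a full administrator profile by default; each permission has to be granted deliberately.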

4)     Third parties:

In today’s world, businesses outsource many functions to third-party providers.  And now, instead of having direct contact with them, the entity that hired them uses an NHI for this communication.  While this can save time to focus on more pressing issues, there is an inherent risk here as well.  For example, if the third-party supplier is hit with a security breach, it will also impact the NHI that is connected to it, and in turn, it will have an impact on your business.  This is yet another form of a Supply Chain Attack, but on a different kind of level.

My Thoughts on This:

Here are some things I recommend that you can do to keep an NHI from becoming an unintended threat to your business:

Ø  To keep your attack surface as low as possible, deploy NHIs only as you absolutely need them.  It is important to get away from thinking that deploying a lot of them will make you more productive.  They simply will not.

 

Ø  If you have fewer NHIs, it will also be easier for you to keep an eye on them.  But in the end, no matter how many of them you have, you should have a stipulation in your security policy that a constant level of visibility must be maintained on them.

 

Ø  Always make sure that the Generative AI models that you use to power your NHIs are updated with the latest security patches.  If you have a Cloud-based deployment, this should be automatically taken care of for you.

 

Ø  Watch the level of rights, permissions, and privileges that you assign to the NHIs.  Just like you would for an actual human employee, assign only what is needed, following the concept of Least Privilege.

 

Ø  You should always thoroughly vet your third-party suppliers, but in case you use an NHI to communicate with them, make sure that they have at least the same level of controls that you have for your own IT/Network Infrastructure.  Also, share any security updates with them, so that they can stay on the same page as you.
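Tying the visibility and inventory points above together, here is a bare-bones sketch of what an NHI register might look like (the class, field, and owner names are hypothetical, for illustration only): every NHI gets a named human owner and a review date, so none of them can quietly fade into the background.

```python
import datetime

class NHIRegistry:
    """A minimal inventory so no machine identity runs unseen in the background."""

    def __init__(self):
        self._entries = {}

    def register(self, name: str, owner: str) -> None:
        # Every NHI gets a named human owner and a review timestamp.
        self._entries[name] = {
            "owner": owner,
            "last_reviewed": datetime.date.today(),
        }

    def stale(self, max_age_days: int = 90) -> list:
        # NHIs that have not been reviewed recently get flagged for attention.
        today = datetime.date.today()
        return [name for name, entry in self._entries.items()
                if (today - entry["last_reviewed"]).days > max_age_days]

registry = NHIRegistry()
registry.register("invoice-bot", "finance-ops")
```

A real shop would back this with an identity governance platform, but even a spreadsheet-level register enforces the "constant visibility" stipulation above.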

 

The fundamental key here is to always be as proactive as possible when using Generative AI.  The downside is that the models are evolving so rapidly that this can be difficult to do.  But it is always important to do the best that you can in this regard.

Sunday, October 6, 2024

The Evolution Of A Federal Generative AI Bill: What Needs To Be Done

 


One thing that I have written about extensively is the data privacy laws that not only the United States but also other nations have enacted.  Their intention is to give consumers the right to know what is happening with their datasets, and to make sure that the companies that are their stewards have deployed more than enough controls to keep those datasets as protected as possible.

While this is of course a huge step forward, there is just one huge problem:  There is no uniformity amongst them.  Take for example our own 50 states.  Because of this lack of centralization, each one of them is producing its own version of a data privacy law. 

So, if a business were to conduct financial transactions with customers in all of the states, is it bound to each one?  This is a very murky area in which there are no clear-cut answers, and unfortunately, there will not be for a long time to come.

Now, as Generative AI comes into the fold of our society, it appears that each state is producing its own laws in an effort to protect consumers and their datasets, in the very same manner as they have approached data privacy laws.

One such example of this is California.  A number of years ago, they passed the CCPA. Now, they have produced their own Generative AI bill, which was designed to do the following:

*Create a comprehensive regulatory framework to govern the use of Generative AI, in all foreseeable aspects.

*Create a set of standards and best practices to ensure that the datasets the models use are not prone to security breaches.

This became known officially as Senate Bill 1047.  But believe it or not, the governor of California, Gavin Newsom, vetoed this bill.  Why did he do this, you might be asking?  Well, here are his direct words:

“While well-intentioned, SB-1047 does not take into account whether an AI system is deployed in high-risk environments, or involves critical decision-making or the use of sensitive data,” Newsom wrote. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

(SOURCE:  https://www.darkreading.com/application-security/calif-gov-vetoes-ai-safety-bill)

Here are other reasons why he rejected this bill:

*Its emphasis was purely on large-scale Generative AI models.  There also needs to be a focus on more specialized models, which serve different purposes.

*The bill appeared to be too stringent to the governor.  His reason for this was that it could stifle innovation and ideas.  To counter this, he proposed that a much more flexible approach needs to be taken, and that each model should be taken into account on a case-by-case basis.

*The bill did not address the deployment of Generative AI in those environments that are deemed to be of high risk. 

As a result of this, the following pieces of advice were offered for consideration:

*Create a joint task force that includes a representative sample of those who will be involved in this process.  This will include people all the way from consumers to the private sector, to academia, and all levels of both the state and federal governments.

*The focus of Generative AI regulation should not be on the size and the resources that the models use; rather, there needs to be a huge emphasis on the risks that are borne from using AI to begin with.

*Implement a process whereby any passed legislation on Generative AI can be updated as the technology evolves and advances.  Of course, as we know from the efforts in doing this for Cybersecurity, this is a very tall order to fill.  In other words, the passage of any updates simply will not keep up with the pace of the rapid advances being made in Generative AI.

*It is highly recommended that any new bill that is presented to the governor for signing be modeled after the bill that the European Union (EU) recently passed.  This is known as the “EU Artificial Intelligence Act”, and is actually highly regarded as a comprehensive approach to regulating Generative AI.  More details about this can be seen at the link below:

https://artificialintelligenceact.eu/

My Thoughts On This:

The bill that was vetoed by the governor of California was officially known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.”  Many people supported the passage of this bill (even Elon Musk), but there was also a fair share that rejected it as well.  It has been viewed as a good step forward, but of course, a lot of work still needs to be done on it, as I have alluded to previously.

The bottom line is that creating any kind of regulatory bill on Generative AI is going to be very complicated.  For example, it is not just a few segments of American society that are impacted by Generative AI.  Rather it is the entire population and almost every business. 

Also, there are too many unknowns and variables that are involved in the actual creation of a Generative AI model, and the list here will just keep on growing.

On a very macro level, my thinking is that we simply need to have a Department of Cybersecurity created, in the very same manner that the Department of Homeland Security was created right after 9/11.  But we should not wait for a disaster to happen in Generative AI for this to occur.  The federal government needs to act now to start this effort.

Under this newly created department, Generative AI would also fit in as well.  This will not only lead to a centralization of the data privacy laws, but it will also lead to the same result for Generative AI.  Apart from this, we need to start simple first. 

Let us draft a bill that details a framework for all aspects of AI, such as Computer Vision, Natural Language Processing, Large Language Models, Neural Networks, Machine Learning, etc.

The bottom line here is that Generative AI is not a field all in its own world.  It includes all of these aspects.  What impacts one area will have a cascading effect on the others as well.  Then over time, updates should be added to this framework, which, although it will take a very long time to accomplish, I am a huge proponent of.

Monday, September 30, 2024

Boredom In Cybersecurity?!?!? Yes, It's Real

 


As we know today, CISOs all across America (and for that matter, the entire world) and their respective IT Security teams are always fighting an ongoing battle trying to keep up with the latest threat variants.  Given all of this, a tremendous amount of fatigue builds up over time. 

One of the best examples of this is that of “Alert Fatigue”.  This is where the IT Security team gets so flooded with alerts and warnings that they tend to overlook the real ones.

But can you believe that despite all of this, there is yet another phenomenon, called “Boredom”?  Well, it is a reality.  You may be asking right now: what causes this, if they are so busy trying to put out fires?  Here are some of the causes:

1)     Technical Debt:

This happens when the IT Security team gets so overloaded that they push aside the smaller, easier tasks that need to get done, and over time, it becomes a monumental headache for them to handle.  A good example of this is the deployment of software patches and upgrades.  Despite its level of importance, this is an often overlooked task.  But when it comes time to deploy them, there is a lot of work to be done, which can take days to accomplish, with a lot of downtime involved.

2)     No Innovation:

If the CISO does not give his or her IT Security team the opportunity to find a new way to solve a problem, or, to use the proverbial saying, to “think outside of the box”, boredom will set in.  In fact, it will lead to complete burnout from having to follow the same procedures over and over again.  Also, there is a good likelihood that your employees could easily quit if they feel that their ideas are not being heard.

3)     No Education:

There are some employees in the workplace who are merely happy with punching the clock, but then there are those who want to learn and grow.  In fact, you, the CISO, should take a proactive role in encouraging the latter.  Probably one of the best ways is to encourage the members of your IT Security team to pursue the Cyber certifications that are relevant to their job titles.  Of course, to dangle a carrot in front of them, you should also offer to pay for the training and the exams, within reason of course.

So, now how do you, the CISO, actually alleviate this problem?  Here are some tips:

1)     Give Space:

In the Cyber world, there is no such thing as a free moment.  But, in order to alleviate boredom, try to encourage the members of your IT Security team to try out their new ideas as they get time.  Of course, this should be done in a test environment, not the production one.  Perhaps even consider holding contests and awarding a cash prize to the most innovative solution.  You should try to do this at least once a quarter.

2)     Use Automation:

Many companies are now adopting the usage of Generative AI in order to help automate some of their more repetitive processes.  This is especially true in the Cyber world when it comes to Penetration Testing and Threat Hunting.  While one of the benefits of this is that more attention can be paid to your customers, another is that it will give the members of your IT Security team extra time to further experiment with their ideas and possible solutions.

3)     Give Ownership:

In this instance, rather than giving all of the duties to the IT Security team as a whole, break them up among each and every member.  In other words, you are giving each individual a sense of “ownership”.  For example, assign the tasks of investigating and deploying software patches and upgrades to a couple of them.  Try to set forth KPIs on this, and reward them if those are met or exceeded.  This is yet another great way to build up the level of motivation amongst them.

4)     Provide Training:

You, the CISO, have the ultimate responsibility to keep your IT Security team trained in the latest happenings of the Cyber Threat Landscape.  This is best done by having training sessions at least once a month, if not more.  Try to keep these training sessions interesting and competitive by using the concepts of Gamification.

My Thoughts On This:

If you don’t keep your IT Security team engaged, one of the worst consequences is that they simply won’t care about doing their jobs at all.  This cannot happen in the Cyber world, where there is so much at stake.  Remember that, in the end, it all comes down to showing appreciation.  Give your members a pat on the back, and try to reward them as much as possible, even by simply taking them out to lunch or dinner.

And remember, as it was mentioned before, offering avenues for further education is probably one of the greatest benefits that you can offer.  Humans always have a sense of wanting to learn more, so take advantage of that for the sheer benefit and protection of your company!!!

Sunday, September 22, 2024

The Top 6 Nefarious Uses Of Generative AI In 2024

 


In the world of Cybersecurity, another common denominator among most of the vendors is a sheer love of publishing reports on the latest happenings on the Cyber Threat Landscape.  These are also published by agencies from within the Federal Government.  Probably one of the best known and most reputable reports is actually published by Verizon.

They do this on an annual basis, and it is entitled the “Data Breach Investigations Report”, also known as the “DBIR” for short.  To access the 2024 report, click on the link below:

http://cyberresources.solutions/blogs/2024-dbir-data-breach-investigations-report.pdf

What I especially like about this report is that they cover a wide range of Cyber issues, such as:

*Patterns In Incident Response

*Systems Intrusion

*Social Engineering

*Web Application Attacks

*DDoS Attacks

*Heisted Digital Assets

*Misuse Of Privileges

It also covers a wide range of industries upon which the above-mentioned threat vectors can have a huge impact on.  In this report, the following market segments are analyzed:

*Food/Entertainment

*Education

*Finance/Insurance

*Healthcare

*Information Technology

*Manufacturing

*Professional/Scientific Services

*Public Administration

*Retail

And of course, the heavy emphasis in this 2024 report is on Generative AI, and especially how it is being used for nefarious purposes by the Cyberattacker.  Here is what they covered:

1)     Phishing:

As most of us know, Phishing is not only one of the oldest threat variants around, but believe it or not, it is still widely used.  Previously, you could tell if you received a Phishing email by examining it for any suspicious attachments, typos, misspellings, grammatical mistakes, etc.  But the report found that many hackers are now actually using ChatGPT not only to create Phishing emails with hardly any errors in them, but also to provide advice to non-English speakers on how to create convincing Phishing emails.  Because of the absence of the telltale signs, it now only takes about 21 seconds for the victim to click on a malicious link, and a mere 28 seconds to give away their confidential information.

(SOURCE:  https://www.darkreading.com/vulnerabilities-threats/genai-cybersecurity-insights-beyond-verizon-dbir)

2)     Malware:

In the past, the Cyberattacker would take their time to write the code for the malware that they wanted to deploy onto the victim’s device.  Not anymore.  Through the sinister evil twin of ChatGPT, which is called “WormGPT”, the Cyberattacker can now create and design a piece of stealthy malware in just a matter of a few minutes.  It is powered primarily by Large Language Models (also known as “LLMs”).  In this regard, the most commonly crafted malware is the Keylogger.

3)     Websites:

Back in the days of the COVID-19 pandemic, it was commonplace for the Cyberattacker to create phony and fake websites in order to lure the victim into making a payment to a fictitious cause.  Of course, all of this money would then be transferred to an offshore account, such as in China, Russia, or North Korea.  But with Generative AI, the Cyberattacker can not only create a very convincing website, but even deploy malicious artifacts behind it.  Not only this, but these web pages can be dynamically created on the spot by using the right kind of Neural Network algorithm.

4)     Deepfakes: 

These made their first mark in the 2016 Presidential Elections.  Essentially, this is where the Cyberattacker can take an image of a real person and actually make a video from it.  For example, through Generative AI, a Cyberattacker can take an image of a real politician and turn that into a video that can be easily put onto YouTube.  One of the most common tactics here is to ask for donations for a political cause.  Worse yet, Deepfakes are also being created to spoof Two-Factor (2FA) and Multifactor (MFA) authentication mechanisms. 

5)     Voice:

Just as the Cyberattacker can take a real image in order to create a fake one, the same can also be said of your voice.  In this instance, through the use of Machine Learning, they can take any legitimate voice recording that is available and recreate it to make it sound like the voice of the real person.  Typically, it is well-known people who are targeted.  Thus, if you receive a call from a phone number that you do not recognize, just don’t answer it.  If the caller leaves a voice mail, delete that as well.  Also, be careful as to what you post on social media sites, especially when it comes to videos, and especially if you are talking in them.

6)     OTPs:

This is an acronym that stands for “One Time Password”.  As the name implies, these are only used once, and they are typically used to further verify your login credentials.  For example, if you log into a financial portal, such as your credit card or your bank, the second or third step in the verification process would be that of the OTP.  This is normally sent as a text message to your smartphone.  It usually expires after just a few minutes, and you have to enter it if you want to gain full access to your account.  But the Cyberattacker is now using Generative AI to create fake ones, which are used in “Smishing”-based attacks.  This is where you get a phony message, but rather than getting it in an email, it comes straight through as a text message.  If you get one of these unexpectedly, just delete it!!!
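To illustrate how a legitimate time-based OTP works under the hood, here is a simplified sketch in the spirit of RFC 6238, using only the Python standard library (a real deployment adds secure secret storage, rate limiting, and a delivery channel; the shared secret below is just a placeholder):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, step: int = 30, digits: int = 6, now=None) -> str:
    """Derive a time-based one-time password; the code rolls over every `step` seconds."""
    counter = int((time.time() if now is None else now) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, as in the HOTP/TOTP specs
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Client and server share a secret, so both derive the same short-lived code;
# once the time step rolls over, a fresh code is derived from a new counter.
secret = b"placeholder-shared-secret"
assert totp(secret, now=1_000_000) == totp(secret, now=1_000_000)
```

This also shows why a Smishing text carrying a "code" is meaningless on its own: the real code is worthless to anyone who does not hold the shared secret, and it dies within one time step anyway.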

My Thoughts On This:

Interestingly enough, one of the major conclusions of this report is that there is a lot of hype around Generative AI.  In my view, this is certainly true, as many of the Cyber vendors use this keyword in order to make their products and services that much more enticing for you to buy.  These days, it is hard to tell what is real and what is not.

In a recent class I taught on Generative AI, some of the students asked me how they should deal with this particular issue.  I told them that the truth of the matter is that it is hard.  Your only true line of defense is to trust your gut.  If something doesn’t feel right, just delete it, or don’t click on it.  And always confirm the authenticity of the sender!!!

Sunday, September 15, 2024

Understanding What An EDR Really Is Without The Techno Jargon

 


The Cybersecurity world, as I had mentioned in one of my previous blogs, is no doubt full of techno jargon.  While using these fancy terms might be great for marketing efforts in order to attract new customers, the bottom line is that at some point in time, you are going to have to break this down for people to understand.  This is especially critical when you onboard a new customer. 

They are not going to care about the techno jargon that you dazzled them with before; now they want to make sure that the product or solution is going to work and yield a positive Return On Investment (ROI) down the road.

Such is the case with this piece of techno jargon.  It is called “Endpoint Protection”.  Although the deployments involved can be fairly complex, depending upon your requirements, simply put, all that it means is beefing up the lines of defense that you have for all of your devices, whether they are physical or in the cloud.

Probably the most typical example of this is the wireless devices that you have given to your employees in order for them to conduct their daily job tasks.  Obviously, given their sheer importance, you will want to ensure that they are as Cyber secure as possible. 

So how can one go about doing this, in clear and simple terms?  Well, here are some tips:

1)     Deployment:

It is always preferable to use the same Cyber vendor for Endpoint Protection solutions, unless you have a compelling reason to use different vendors.  But whatever route you do decide to go, always try to stick to the same deployment methodology.  True, each product/solution will be different, but develop a set of standards and best practices that are uniform.  That way, it will be easier to troubleshoot issues and do upgrades in a consistent manner over time.

2)     Configuration:

As just mentioned, whenever you do software patches and firmware upgrades, keep a detailed history of what has actually been installed.  Or if you make any changes to the Endpoint Protection solution itself, that has to be documented as well.  Remember, depending upon how large your organization is, you will need to inform all of your employees well ahead of time of the changes that will occur.  But first, it is highly advisable to have a meeting with the representatives from the other departments to see what the impact will be, and how it can be minimized.  This is technically known as “Configuration Management”.
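As a sketch of the kind of change history described above, even a simple append-only record goes a long way (the field names and component names here are hypothetical; a real shop would use a ticketing or CMDB system):

```python
import datetime

def record_change(history: list, component: str, change: str, approved_by: str) -> list:
    """Append one auditable entry to the configuration change history."""
    history.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "component": component,
        "change": change,
        "approved_by": approved_by,
    })
    return history

changes = []
record_change(changes, "endpoint-agent", "firmware upgrade to 4.2.1", "jsmith")
record_change(changes, "endpoint-agent", "enabled full-disk scan schedule", "jsmith")
```

The key design point is that every entry records who approved the change and when, which is exactly what you need when you later have to explain a configuration to another department or to an auditor.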

3)     Logging:

If, in the unfortunate chance, your business has been hit with a security breach, you will want at some point to conduct a detailed forensics investigation to determine how exactly it happened.  You will need all of the evidence that you can get, and one of the best forms of this is the log files that are outputted from the Endpoint Solution.  Thus, make sure that data is being collected on a real-time basis, and that your solution is optimized at all times.  Further, by using Generative AI, keep track of any unusual or abnormal behavior that occurs in the network traffic to and from all of your Endpoint Devices.
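As a very rough sketch of the "unusual or abnormal behavior" check described above, here is a simple statistical baseline in Python (the traffic numbers and the 3-sigma threshold are illustrative assumptions; production EDR and AI tooling is far more sophisticated):

```python
from statistics import mean, stdev

def is_anomalous(byte_counts: list, threshold: float = 3.0) -> bool:
    """Flag the latest traffic sample if it sits far outside the historical baseline."""
    baseline, latest = byte_counts[:-1], byte_counts[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    z_score = (latest - mu) / sigma if sigma else 0.0
    return z_score > threshold

# Five normal samples of outbound bytes from an endpoint, then one suspicious spike.
normal = [1200, 1350, 1100, 1280, 1240]
print(is_anomalous(normal + [1300]))    # False: within the usual range
print(is_anomalous(normal + [50_000]))  # True: a sudden, exfiltration-like burst
```

Even this crude z-score check captures the idea: the log data is only useful for detection if something is continuously comparing new samples against a baseline.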

4)     XDR:

Not to throw more techno jargon out there, but this is an acronym that stands for “Extended Detection and Response”.  This is actually a much more sophisticated version of the traditional Endpoint Solution, in that it can do the following:

*It can actually be a very proactive approach by always changing the attack surface that may exist on all of your Endpoint Devices.  This is an attempt to confuse the Cyberattacker in case they are targeting a specific device of a particular employee.  The main benefit of this is that it will make any vulnerabilities harder to detect and subsequently exploit. 

*It can further beef up the defenses for both the CPU and the memory.  This is a critical area in your Endpoint Devices that the Cyberattacker can literally hide out in going unnoticed, and even deploy malicious payloads onto them, making detection almost impossible.

*Its database will always be updated on a real-time basis with the latest threat profiles, so that it can offer maximum protection to your devices.  Also, since Generative AI is now being used in Endpoint Protection solutions, it can even learn on its own and make reasonable extrapolations as to what future threat vectors could possibly look like.  This is a far cry from the traditional Antivirus and Antimalware software packages of today.  For example, their databases are only updated at intervals, and the timing of that is largely dependent on the vendor.

My Thoughts On This:

Although procuring and deploying an Endpoint Protection solution may appear to be an expensive proposition, the truth is that it is really not.  A lot will depend, though, upon how many devices you want to protect. 

Of course, it is always wise to make sure that all of them are Cyber fortified.  In fact, if you make use of a cloud deployment, such as that of Microsoft Azure, the Endpoint Protection solution will already be there.

All you have to do is just deploy it, and make sure that it is properly configured for your environment.  But my suggestion here would be to engage with a Cloud Services Provider (CSP) that can actually do and manage all of this for you.

Some of the other key benefits of making use of an Endpoint Protection solution for your business include the following:

*It is lightweight, in terms of its file size and the processes that run within it.  This means that there will be no disruption to your existing processes.  It will also not result in “bloatware”.

*Apart from keeping log files, the Endpoint Protection solution also acts like a “Black Box”, very similar to the ones you hear about being used in commercial aircraft.  Meaning it can also record all of the activity that occurs for each and every device for which you have the solution deployed upon. 

This will also prove to be a great boon if you ever need to conduct a Digital Forensics Investigation.

Sunday, September 8, 2024

The Advent Of "Trusted Source" In Cybersecurity

 


One of the biggest buzzwords that has been (and still continues to be?) used is “Trust”.  This is a word we hear often, both in our professional and personal lives.  But no matter what venue you hear it in, have you ever stopped to think what trust really means?  Well, as it relates to Cyber, here is a definition of it:

“At the heart of trust in information security is authentication, the process of verifying the identity of a user, device, or system. Authentication methods can include something a user knows, something a user has, or something a user is.”

(SOURCE:  https://asmed.com/understanding-trust-in-information-security-a-comprehensive-guide/#:~:text=At%20the%20heart%20of%20trust,or%20something%20a%20user%20is.)

So really, it is all about making sure that the individual who wants to get access to your shared resources is actually who they are claiming to be.  There are many ways to do this, ranging from the ever so famous password to challenge/response questions, to the RSA token, to the One Time Password (OTP), and even down to Biometrics. 

Given the advent of Generative AI and how it can be used to create fakes that are extremely hard to discern from the real thing, businesses are opting to use multiple layers of identification.

This is known as “Multifactor Authentication”, or “MFA” for short.  Essentially, you are using at least three layers of authentication.  But, in order to make this robust, all of the authentication mechanisms must be of a different nature.  For example, using a password along with an RSA token, and then something like Fingerprint Recognition, in quick succession.
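The "different nature" requirement above can be sketched in a few lines of Python (the factor names and the mapping are illustrative assumptions): what matters is not how many credentials are presented, but how many distinct factor types they cover.

```python
# Each authentication mechanism maps to one of the three classic factor types.
FACTOR_TYPES = {
    "password": "knowledge",      # something you know
    "rsa_token": "possession",    # something you have
    "fingerprint": "inherence",   # something you are
}

def mfa_satisfied(presented: list, required_types: int = 3) -> bool:
    """True only if the presented factors span enough *different* factor types."""
    distinct = {FACTOR_TYPES[f] for f in presented if f in FACTOR_TYPES}
    return len(distinct) >= required_types

print(mfa_satisfied(["password", "rsa_token", "fingerprint"]))  # True
print(mfa_satisfied(["password", "password", "password"]))      # False
```

Three passwords in a row are still just one factor type; robust MFA forces the attacker to compromise fundamentally different things at once.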

But now, there is a new term that is being bandied about in the world of Cyber, and it is called the “Trust Anchor”.  What is it, you may be asking?  Here is a definition of it:

“Trust anchors serve as authoritative data sources that provide verifiable and accurate identity information.”

(SOURCE:  https://www.darkreading.com/cybersecurity-operations/trust-anchors-in-modern-it-security)

So the key here is a source deemed to be reputable that you can use to confirm the identity of an individual.  These entities can be both human and non-human.  For instance, it can be a passport, a state ID card, or even an outside third party that you deem to be honest.  These can include the credit reporting agencies, and even background check companies.
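To put this into perspective, here is a very simple Python sketch of what checking a claimed identity against a Trust Anchor could look like.  Note that the registry, the document numbers, and the field names here are all hypothetical; a real Trust Anchor would be an actual authoritative system, such as a government records database or a credit bureau’s API.

```python
# Hypothetical "Trust Anchor": an authoritative registry keyed by document number
STATE_ID_REGISTRY = {
    "D123-4567": {"name": "Jane Doe", "dob": "1985-04-12", "revoked": False},
}

def verify_against_anchor(doc_number: str, claimed_name: str, claimed_dob: str) -> bool:
    # Look the document up at the authoritative source, never trust the claim alone
    record = STATE_ID_REGISTRY.get(doc_number)
    if record is None or record["revoked"]:
        return False  # unknown or revoked documents are never trusted
    # Cross-reference every claimed attribute against the anchor's record
    return record["name"] == claimed_name and record["dob"] == claimed_dob

print(verify_against_anchor("D123-4567", "Jane Doe", "1985-04-12"))    # True
print(verify_against_anchor("D123-4567", "John Smith", "1990-01-01"))  # False
```

The point of the sketch is the direction of trust: the decision rests on the anchor’s record, not on whatever the individual presents.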

Using a “Trusted Source” does have some key advantages and disadvantages.  Here is a sampling of them:

The Advantages:

Ø  It can statistically reduce the chances of fraudulent activity happening down the road.  This is especially useful when cross-referencing any information and data that you have on a particular individual.

 

Ø  It can help to make sure that whatever information you use in your company actually comes from a reputable source.  The prime example of this is once again Generative AI.  As I have written about in the past, a good model needs tons of data in order to keep it robust.  It’s like all of the fluids that go into your car, from the gas to the oil to the brake fluid.  All of this needs to be filled up by a “Trusted Source”, such as a mechanic that you know can do the job well.  For the Generative AI model, you also need to make sure that the datasets you collect to feed it come from a very reputable source.  If not, the results (the outputs) will be highly skewed, and if you are using the model to drive parts of your business, those wrong outputs can tarnish your brand reputation.

The Disadvantages:

Ø  The privacy that is involved.  Even if you collect datasets from a “Trusted Source” that you find to be highly reliable, you will ultimately be responsible for the safekeeping of them.  Meaning, you need to make sure that you have the right controls in place in order to mitigate the risk of any kind of Data Exfiltration Attack.

 

Ø  Although it may sound like an oxymoron, you actually have to trust the “Trusted Sources” themselves.  For instance, if you are using a state ID to confirm the identity of an individual, you have to make sure that it is genuinely authentic, and not a fake one.  Also, if you decide to use a third party to provide you with “Trusted Data”, you need to make sure that you trust them first.  This can of course take time to develop, but as a rule of thumb, the best place to start is to have an exhaustive vetting process in place before you select one.

My Thoughts On This:

Another strategic benefit of using a “Trusted Source” is that it can also help create a baseline from which to follow.  For example, you may procure your network security tools from a vendor that you inherently trust. 

As a result, you will also trust the log files that they output.  And from here, you can then create a baseline to determine what is actually deemed to be normal network activity.  Of course, anything outside of this should be deemed as abnormal patterns of activity. 
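As a rough illustration, here is what establishing such a baseline from trusted log data could look like in Python.  The hourly request counts and the three-standard-deviation threshold are made-up values just for this sketch; in practice, the baseline would be built from your vendor’s actual log files over a much longer window.

```python
import statistics

# Hypothetical hourly request counts taken from a vendor's trusted log files
baseline_counts = [120, 132, 117, 125, 140, 128, 122, 135]

mean = statistics.mean(baseline_counts)
stdev = statistics.stdev(baseline_counts)

def is_abnormal(observed: float, threshold: float = 3.0) -> bool:
    # Flag anything more than `threshold` standard deviations from the baseline mean
    return abs(observed - mean) > threshold * stdev

print(is_abnormal(130))  # False: within the normal range of activity
print(is_abnormal(900))  # True: far outside the baseline, worth investigating
```

Notice that the whole scheme only works because the inputs are trusted: if the log files themselves could be tampered with, the baseline (and everything flagged against it) would be meaningless.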

In a way, the above example is like building a “Chain Of Trust”.  The term “Trust” will always be around in Cybersecurity, but the important thing is that you do not get caught up in all of the technojargon that is out there. 

As long as you have faith in whatever “Trusted Source(s)” you make use of, that is all you have to be worried about.

Monday, September 2, 2024

3 Golden Use Cases For Confidential Computing

 


Happy Labor Day everybody!!!  As we now head into almost the 4th quarter of this year, Cybersecurity is going to be gaining more attention.  The primary fuel for this will be the Presidential Election that is coming up in just a matter of two months.  There is widespread fear about voter fraud and the proper identification of voters, and the biggest concern now is the impact that Generative AI will have.  It has evolved very quickly since the last election, and some of the biggest fears are as follows:

*Widespread use of Deepfakes

*A huge uptick in Phishing based emails

*Spoofed and phony websites, asking for campaign donations

Apart from the other ways I have written about before for mitigating these risks, I came across a new concept today that I have never heard of before.  It is called “Confidential Computing”.  A technical definition of it is as follows:

“Confidential computing technology isolates sensitive data in a protected CPU enclave during processing. The contents of the enclave, which include the data being processed and the techniques that are used to process it, are accessible only to authorized programming codes. They are invisible and unknowable to anything or anyone else, including the cloud provider.”

(SOURCE:  https://www.ibm.com/topics/confidential-computing).

Put another way, it is using the specialized parts of the Central Processing Unit (CPU) in order to protect your most sensitive datasets.  But the trick here is that it is only those that are currently being processed that are shielded from prying eyes, such as the Cyberattacker.  More details on it can also be found at this link:

https://www.darkreading.com/cyber-risk/how-confidential-computing-can-change-cybersecurity

So, why should you consider making use of this technique for your business?  Here are three compelling reasons:

1)     Compliance:

The fuel that feeds Generative AI is datasets.  A model needs a lot of them not only to start learning, but all of the time, in order to create the most robust set of outputs possible.  Because of this, data theft and data leakages have become much more prevalent, and the Cyberattacker is taking full advantage of this.  As a result, the major data privacy laws, such as the GDPR, CCPA, HIPAA, etc., have now included the use of datasets in Generative AI models in their tenets and provisions of compliance.  This is still a rather murky area, but by using Confidential Computing you will have some reasonable assurances that you will come to some degree of compliance with these laws.  This is especially advantageous to those businesses who conduct a lot of e-commerce-based transactions, or process a lot of financial information and data.

2)     Cloud:

Whether you make use of AWS or Microsoft Azure, data leakages are a common threat, and ultimately, you will be held responsible for anything that occurs.  Not the Cloud Provider, as many people believe!!!  While these two give you out-of-the-box tools to protect your datasets, you are responsible for their proper configuration.  But whatever you make use of, ensure that even in this kind of environment you have deployed Confidential Computing.  To do this, make sure that you have implemented what is known as the “Trusted Execution Environment”.  This is the secure area of your CPU, whether it is physical or virtual based.  It makes use of both public and private keys, and mechanisms are established from within it to mitigate the risk of a malicious party intercepting them.
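To give a flavor of how a Trusted Execution Environment establishes trust, here is a toy Python sketch of the attestation idea: the enclave produces a signed “measurement” of the code it is running, and the relying party verifies it before handing over any sensitive data.  The shared key and the code snippets here are purely hypothetical stand-ins; a real TEE, such as Intel SGX, does all of this in hardware, with keys that never leave the chip.

```python
import hashlib
import hmac

# Hypothetical attestation key; in a real TEE this lives in hardware
ATTESTATION_KEY = b"hypothetical-shared-key"

def measure(code: bytes) -> bytes:
    # A "measurement" is a hash of the exact code loaded into the enclave
    return hashlib.sha256(code).digest()

def enclave_quote(code: bytes) -> bytes:
    # The enclave signs its measurement so the host machine cannot forge it
    return hmac.new(ATTESTATION_KEY, measure(code), hashlib.sha256).digest()

def verifier_accepts(quote: bytes, expected_code: bytes) -> bool:
    # The relying party only trusts an enclave running the code it expects
    expected = hmac.new(ATTESTATION_KEY, measure(expected_code), hashlib.sha256).digest()
    return hmac.compare_digest(quote, expected)

trusted_code = b"def process(data): return sum(data)"
tampered_code = b"def process(data): exfiltrate(data)"

print(verifier_accepts(enclave_quote(trusted_code), trusted_code))   # True
print(verifier_accepts(enclave_quote(tampered_code), trusted_code))  # False
```

The takeaway is that trust in the enclave is earned by proof (the signed measurement), not simply assumed because the workload runs in your own cloud account.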

3)     AI:

As was mentioned earlier in this blog, Generative AI models need tons of datasets to train on, so they can learn effectively.  But once again, you are responsible for the safekeeping of them!!!  Yes, another way to make this happen, at least to some extent, is to once again use Confidential Computing.  This also helps to provide assurances that the datasets you feed into the model are authentic, and not fake.  This is something that you must address now, if you make use of AI or any subset of it in your business.  The downside is that in a recent survey conducted by Code42, 89% of the respondents believed that using new AI methodologies is actually making their datasets much more vulnerable.
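One simple way to get some of those authenticity assurances is to check every dataset against a manifest of digests published by the trusted data provider before it is ever fed to the model.  Here is a small Python sketch of that idea; the file name, the manifest, and the sample contents are all hypothetical.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical manifest published by the trusted data provider:
# file name -> expected SHA-256 digest of the file's contents
trusted_manifest = {
    "training_set.csv": sha256_of(b"id,label\n1,cat\n2,dog\n"),
}

def dataset_is_authentic(name: str, contents: bytes) -> bool:
    # Only feed the model datasets whose digest matches the provider's manifest
    expected = trusted_manifest.get(name)
    return expected is not None and expected == sha256_of(contents)

print(dataset_is_authentic("training_set.csv", b"id,label\n1,cat\n2,dog\n"))       # True
print(dataset_is_authentic("training_set.csv", b"id,label\n1,cat\n2,POISONED\n"))  # False
```

Even a check this simple catches any tampering or data poisoning that happens between the provider and your training pipeline, as long as the manifest itself comes over a trusted channel.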

My Thoughts On This:

As you can glean from this blog, the protection of your datasets should be one of the top priorities for the CISO and their IT Security team.  It’s not just the compliance that you have to look out for, it’s also the reputational damage that your company will suffer if you are hit with a Data Exfiltration Attack.  After all, it can take months to get a new customer, but only sheer minutes to lose them. 

By making use of Confidential Computing, you can provide one very strong layer of assurance to your customers and prospects that you are taking a very proactive approach to safeguarding the data that they so entrust you with.

Finally, in this blog, we mentioned data that is being processed.  There are two other types of datasets that need careful attention paid to them as well, and they are:

Ø  Data At Rest:  These are the datasets that are simply residing in a database, and not being used for any special purpose.  They are just “archived”.

 

Ø  Data In Motion:  These are the datasets that are being transmitted from one system to another, such as going from a server in one location to another in a different location.
