Sunday, November 3, 2024

The Next Great Cyber Threat In 2025: Interconnectivity

 


It is hard to believe that that there are only now two months left in this year.   But now as we approach December, this is the time now that many Cybersecurity pundits start to predict what they think the big threat variants will be for 2025. 

I usually hold off on making my predictions, until closer to the New Year.  But in this blog, I will give you a blatant hint as to what I think of the big issues will be for next year: the level of interconnectivity that exists in the world today.

One of the side effects of this are what is known as the “Supply Chain Attacks”.  I have written about this before, but to refresh your memory, it can be technically defined as follows:

A supply chain attack uses third-party tools or services — collectively referred to as a ‘supply chain’ — to infiltrate a target’s system or network. These attacks are sometimes called “value-chain attacks” or “third-party attacks.”

(SOURCE:  What is a supply chain attack? | Cloudflare)

And as the definition points out, it is typically a mechanism that is used by a third-party supplier that is in turn used by the Cyberattacker in which to infect thousands of endpoints.  The best examples of these are the Solar Winds and CrowdStrike hacks. 

They have many customers obviously, and of course they cannot update each of one their systems individually, it would simply take way too long. 

So instead, both companies have created specialized platforms in which updates can be sent to all the customers in just one shot.  Solar Winds calls theirs “Orion”, and CrowdStrike calls their “Falcon”. 

While is an efficient process, the problem here is that if there is just one weakness in them, the Cyberattacker can easily insert a malicious payload through that point of entry, and from there it will be deployed all over the world in just a matter of minutes.

Yes, this is a very scary situation.  But it is also important to put things in some perspective.  Of course, both companies should have kept checking their respective platforms. The truth of the matter is, both situations simply illustrate just how fragile the infrastructure of the world has become. 

And this is all due to the elevated level of connectivity that everything has with each other.  But as we advance further in technology, especially with that of Generative AI, this level of connectivity is only going to expand, and in manner of speaking, get worse.

The bottom line is that this is simply increasing the attack surface.  This can be easily compared with the defense perimeter a company has.  For instance, if they have too many network security devices from many different vendors, then of course their level of attack surface will be that much more proliferated.

So now you may very well be asking at this point, how can you avoid this situation happening to your business?  Well, the bottom line is that we are all at risk from being impacted by a security breach.  The key takes away here is how to mitigate or reduce that level of risk.  Here are some tips for you:

1)     Conduct a Risk Assessment:

Let us use the example I just set up.  If you know that you have too many network security tools, take inventory of what exactly you all have.  From there, create a visualization of where they are all located at.  If they are scattered all over the place, then try to consolidate them down, and place them strategically, as where they are needed.  For instance, instead of using ten firewalls, try to condense that down to five or fewer.  Another key point to remember here is to try to procure any future security tools that you may acquire through just one or two vendors at most. 

2)     Test the patches:

If your business relies upon someone like Solar Winds or CrowdStrike, do not have them deployed automatically into your production environment!!!  Instead, get the patches, and test them in a sandbox like environment first, to make sure that they will work with the systems that you already have in place.  Also, this will give you some extra time in case the vendors notice that there is even flaw with the updates that they have sent over to you.  This will help you avoid what is known as a “Zero Day Attack.”.

3)     Deploy the Zero Trust Framework:

This is a methodology where you segment your entire IT/Network Infrastructure into different “zones”, with each one of them making use of Multifactor Authentication (MFA).  The basic idea of this is that if the Cyberattacker breaks through one line of defense, the odds of them going deeper becomes statistically zero.

4)     Have the IR Plan:

This is an acronym that stands for “Incident Response”.  Having this kind of plan in place, and regularly practicing it is of utmost importance.  This kind of document will allow you and your IT Security team to respond to and contain a security breach quickly.

5)     Use EDR solutions:

This is also an acronym that stands for “Endpoint Detection and Response”.  These are solutions that are typically deployed on the devices that your employees use to conduct their daily job tasks, whether they are remote or hybrid.  They can be used to monitor and contain any threat variants that are incoming into these devices.

So, there you have it, my first prediction of what the Cyber Threat Landscape could look like in 2025.  Stay tuned for more of them.

Monday, October 28, 2024

What "End Of Life" Means, And The Cyber Risks Of It

 


If you are an ardent user of Windows, you know that Microsoft typically retires their products after a certain period of time.  The good thing here is that they give you plenty of time notifying their customers, and even after a product has been discontinued, they still offer some level of support for a brief period.

While it is a good and even necessary thing to do this, unbelievably, people still use outdated software packages even after they have been discontinued.

A notable example of this is one of my cousins.  She works for the Federal Government in a high-level role, and despite this, unbelievably, they are still using Windows 7.  Not only is this a bad practice, but it is a very grave Cybersecurity Risk as well. 

If you are using an Operating System (OS) that no longer offers any type or kind of software upgrades or patches, you are leaving many back doors open that the Cyberattacker can very easily penetrate through and wreak all kinds of havoc.

The typical example of this is Data Exfiltration, when the Cyberattacker will steal the datasets in a very covert way.  They will do this very slowly, bit by bit, and when you do notice something is missing, it will very often be too late to do anything about it. 

Likely, it will have been sold on the Dark Web, or the Cyberattacker is getting ready to launch some kind of Ransomware or Extortion like attack. 

So, let us explore some reasons wat businesses still like to keep outdated software, even though they know they need to upgrade at some point in time.  Here are some findings:

1)     Money:

This is the biggest reason.  True, now, things are tight with companies right now, so most of them do not want to expend the extra money to upgrade, and keep things modernized.  But the truth of the matter is that if you use outdated software and hardware well beyond where no support is provided, once again you are taking a huge Cyber Risk.  And, if you are impacted by a security breach because of this, the cost of recovery will far exceed the cost it would have taken your business to get the new software.

2)     Shadow IT:

The formal term for this is “Shadow IT Management”.  When it comes to the workplace, this refers to when an employee is overlooking the shoulder of another employee to see what their login information is (such as the username and password).  But when it comes to the situation that we are talking about in this blog, it simply means that the CISO and their IT Security team are knowingly letting their employees use outdated software and are fully cognizant of that fact.  Astonishingly enough, according to a recent study, there are still some 47% of companies that let this happen.  To see more details about this, click on the link below:

Unmanaged Devices Run Rampant in 47% of Companies | 1Password

My Thoughts on This:

It could be the fact that some vendors clearly do not communicate with their customers about when their products will be discontinued.  But given the world today, that will be a huge risk for them to take, as the effects of reputational and branding damage will be exceedingly high if an outdated product a customer was using was the culprit for a major security breach.

So here are two tips of advice, from my side:

Ø  The CISO and their IT Security team need to keep a constant eye for what products and/or services are coming to an end.  Once they get a whiff of something that they are using is going to be outdated, plans need to be drawn up immediately in how to procure the next release or update.  Also, plenty of time must be allocated to present a new budget to the C-Suite, with explanations why these steps are necessary.

 

Ø  Always maintain a clear line of communication not only with all the stakeholders in your company, but also with the vendors with whom you procure IT related products and/or services from.

 

Microsoft has done a wonderful job with communicating the “End of Life” (this is the technical term when a product and/or service will no longer be available, and when support will no longer be available).  FYI, it will be terminated next year, and for more information on that, click on the link below:

Companies “wary” of Windows 11 migration challenges as Windows 10 EOL draws closer | ITPro

Sunday, October 20, 2024

What Zero Day Attacks Are In Generative AI Models

 


If you are in Cybersecurity, one of the new pieces of techno jargon that you will often hear about is a “Zero Day Attack”.  I have heard about it numerous times, especially when I did the auto upgrades to my Windows machines.  But to be honest, this is the first time I have written about it.  So, if you are like me when I was a few months ago, wondering what it is was all about, here is a technical definition of it:

“A zero-day (or 0-day) vulnerability is a security risk in a piece of software that is not known about, and the vendor is not aware of. A zero- ay exploit is the method an attacker uses to access the vulnerable system. These are severe security threats with high success rates as businesses do not have defenses in place to detect or prevent them.

A zero-day attack is so-called because it occurs before the target is aware that the vulnerability exists. The attacker releases malware before the developer or vendor has had the opportunity to create a patch to fix the vulnerability.”

(SOURCE:  What is a Zero Day Attack? | Fortinet)

Let us break this definition down into its components:

Vulnerability:  A gap, or weakness that exists in a software application.

Exploitation:  The Cyberattacker discovers this weakness and takes advantage of it by deploying it into a malicious payload.

Attack:  This is where the Cyberattacker attempts to do some damage, such as Data Exfiltration.

As it relates to Zero Day, it is a hole that exists that nobody, not even the vendor knows about. The Cyberattacker discovers this just by pure chance, or through some covert intel.  Because it is not known, they can then exploit this weakness without anybody noticing, and from there, launch the attack. 

The key point here is that though this process, a Zero Day Attack can be very devastating, because it takes everybody by surprise.  When the damage is done, it is then too late to fully recover it.  But now, with Generative AI exploding on the scene and its subsets, especially that of Machine Learning, Zero Day Attacks are now becoming much more pronounced.

One of the primary reasons for this is that the models are constantly evolving and becoming more dynamic by nature.  Even if the CISO and the IT Security team were to discover any gaps or weaknesses and remediate them, the chances of new ones coming out the next day are very high.  Add to this the fact that these models also increase the attack surface, which makes it even more complex to get a true gauge of the Cyber Threat Landscape.

Here are some examples of Zero Day attacks as it relates to the models of Generative AI:

1)     Prompt Injection:

This can be technically defined as:

“Prompt injection is the use of specially crafted input to bypass security controls within a Large Language Model (LLM), the type of algorithm that powers most modern generative AI tools and services.”

(SOURCE:  What Is Prompt Injection, and How Can You Stop It? - Aqua)

               To make this definition clearer, let us backtrack a little bit.  Suppose you use ChatGPT for daily               job         tasks, and one day you have been asked to visit a customer on site.  True, you could use         Google Maps for this, but you want noticeably clear and concise directions on how to get there.  You simply enter your query into ChatGPT, and it gives you various routes you can choose    from.  But in order to get the specific answer you are looking for; you must create the query   with specific keywords.  These are also technically called “Prompts”.  In fact, this has given              birth to an entirely  new field called “Prompt Engineering”.  But as it relates to a Zero Day         Attack with a Generative AI model, a Cyberattacker can very easily hijack your ChatGPT session,            and insert their own prompts.  The end result is that you are given a set of directions, which   although will get you to   the client site, will take you in a far more convoluted manner than what you had intended.  The   consequences of this kind of Zero Day Attack is far more dangerous if       you ask ChatGPT to automatically log into your financial portals (such as          your credit card or bank      account), and ask, or “prompt” it to give you advice on how you should manage your money.

2)     Training Data:

As I have analogized before, a Generative AI model is like a car.  Like this needs fuel to drive, the model needs data (and lots and lots of it) to propel the queries or the “prompts” into giving you the right answers (also known as the “Outputs”).  But you simply cannot dump all kinds of data into the model.  First, you need to make sure that whatever you feed into it is relevant.  For example, if you have developed a model to predict prices for certain stocks, you need to pump in those datasets that belong to them.  Not those of other stocks.  Second, you need to make sure that the data you feed into the model are as optimized and cleansed as much as possible.  This simply means that there are no outliers that exist in the dataset.  If you do not do this, your results will be highly skewed, in the negative direction.  In this regard, it is quite possible that the Cyberattacker can find a hole in the model as it is being developed.  From there, they can then exploit by inserting fake datasets (also known as “Synthetic Data”), into it.  Thus,  once the model is formally launched into the production environment, it can wreak havoc to your business like nobody has seen before.

My Thoughts on This:

Apart from the dynamic nature of Generative AI models as mentioned before, it is very often typically the case, that the time to market of them takes more precedence than developing the secure design of them.  Also, the AI scientists who create these models have security far from their mindset, because they are simply not trained in this area. 

Thus, to help mitigate the risks of Zero Day Attacks from happening, there is now a new movement that is happening now in Corporate America.  This is the adoption of what is known as an “MLSecOps” team.  This is where the AI scientists work in tandem with the IT Security Team and Operations Team to ensure that security model design starts from the very beginning and receives top priority after the model has been launched and deployed for public use.

An important concept here is also the “MLBOM”, which is an acronym that stands for the “Machine Learning Bill Of Materials”. This will be examined in closer detail in a future blog.

Sunday, October 13, 2024

4 Grave Risks Of Using Non Human Identities & How To Fix Them

 


As the world of Generative AI continues to explode, there is a new trend that will be emerging:  The Non-Human Identity.  You may be wondering what it is?  Well here is a good definition of it:

“Non-human identities (NHIs) are digital entities used to represent machines, applications, and automated processes within an IT infrastructure. Unlike human identities, tied to individual users, NHIs facilitate machine-to-machine interactions and perform repetitive tasks without human intervention.”

(SOURCE:  What is a Non-Human Identity? | Silverfort Glossary)

Remember, I have written about Digital Person before?  Essentially, this is an avatar, or even a chatbot that is given human-like qualities in order to interact with you.  Instead of typing in a message, you can talk to it and have a conversation with it. 

One of the best examples of this is its use in customer service.  Instead of waiting on hold for hours on end to speak with an actual human being, you can summon up the Digital Person within a matter of seconds. 

If you are not satisfied with the answers, you can always request the Digital Person to be referred to as an actual representative.  This is an example of a Non-Human Identity, or also known as “NHI” for short.  While you can call the Digital Person by a name, in the grand scheme of things, it really does not have any form of identification.

NHIs can be a particularly useful tool to have around, especially when it comes to processing automation and augmentation, when it comes to monitoring all the interconnections that exist today in the world.  In fact, it has been estimated that for every 1,000 people, there are some 10,000 of these kinds of connections.  It is almost impossible for any human being to keep close tabs on all of them, that is why the NHI is so beneficial.

But despite this, there are certain risks that are borne out by using this advancement in Generative AI.  Here is a sampling of some of the major ones:

1)     Expansion of the attack surface:

In the world of Cybersecurity, mostly everybody that is in it has heard of this term.  For example, if you have too many network security devices, this can expand your attack surface.  This goes in direct contradiction of the old proverb that “more is better”.  The same can also be said of the NHI.  While deploying many of them could prove to be beneficial, in the intermediate and long term, it also greatly expands the attack surface of all your interconnections.  Since these are mostly powered by Generative AI, there are still vulnerabilities in them that the Cyberattacker can exploit very quickly.

2)     Hard to see:

It is important to note that many of the NHIs that are deployed tend to function and operate in the background.  As a result of this, they tend to be forgotten about, especially when it comes to time to upgrade and/or optimize them.  This is yet another blind spot that the Cyberattacker knows very well about and can thus use it quickly launch a malicious payload into them.  The net effect of this is a negative, cascading effect across your entire IT/Network Infrastructure in just a matter of sheer minutes.

3)     Violation of PAM:

This is an acronym that stands for “Privileged Access Management”.  These are the rights, privileges, and permissions that are assigned to super user accounts.  An example of this would be a network or database administrator.  They will of course have elevated access to keep the networks and databases running smoothly, respectively.  But these same types of PAMS based accounts are also assigned to the NHI so that they can carry out automated tasks without human intervention.  But once again, the IT Security team forgets about this as well, and the consequence of this is that the Cyberattacked can gain very quick access to these accounts and gain immediate access to anything that they want to.

4)     Third parties:

In today’s world, many businesses outsource many functions to a third-party provider.  And now, instead of having direct contact with them, the entity that hired them now uses the NHI for this communication.  While this can save time to focus on more pressing issues, there is also an inherent risk with this as well.  For example, if the third-party supplier is hit with a security breach, it will also impact the NHI that is connected to it, and in turn, it will have an impact onto your business.  This is yet another form of a Supply Chain Attack, but on a different kind of level.

My Thoughts on This:

Here are some things I recommend that you can do to mitigate the risks of an NHI from being an unintended threat to your business:

Ø  To keep your attack surface as low as possible, deploy NHIs as you absolutely need them.  It is important to get away from thinking that deploying a lot of them will make you more productive.  They simply will not.

 

Ø  If you have smaller NHIs, it will also make it easier for you to keep an eye on them.  But in the end, no matter how many of them you have, you should have a stipulation in your security policy that a constant level of visibility must be maintained on them.

 

Ø  Always make sure that the Generative AI models that you are used to power your NHIs are always updated with the latest security patches.  If you have a Cloud based deployment, this should be automatically taken care of for you.

 

Ø  Watch the level of rights, permissions, and privileges that you assign to the NHIs.  Just like you would for an actual human employee, assign what is only needed, following the concepts of Least Privilege.

 

Ø  You should always by thoroughly vetting your third-party suppliers, but in case you use an NHI to communicate with them, make sure that they have at least the same number of controls that you have for your own IT/Network Infrastructure.  Also, share any security updates with them, so that they can be on the same page as you as well.

 

The fundamental key here is to always be as proactive as possible when using Generative AI.  The downside is that the models are evolving so rapidly, this can be difficult to do.  But it is always important to do the best that you can in this regard.

Sunday, October 6, 2024

The Evolution Of A Federal Generative AI Bill: What Needs To Be Done

 


One thing that I have written about extensively are the data privacy laws that not only the United States has enacted, but also other nations.  While the intention of them is to give consumers the right to know what is happening with their datasets, but to also make sure that the companies that are the stewards have deployed more than enough controls to make sure that the datasets are as protected as possible.

While this is of course a huge step forward, there is just one huge problem:  There is no uniformity amongst them.  Take for example our own 50 states.  Because of this lack of centralization, each one of them is producing their own version of a data privacy law. 

So, if a business were to conduct financial transactions with customers in all of the states, are they bound to each one?  This is a very murky area in which there no clear-cut answers, and unfortunately, will not be so for a long time to come.

Now, as Generative AI is coming into the fold of our society, it appears that each state now is producing their laws in an effort to protect consumers and their datasets, in the very same manner as they have approached data privacy laws.

One such example of this is California.  A number of years ago, they passed the CCPA. Now, they have produced their own Generative AI bill, which was designed to do the following:

*Create a comprehensive regulatory framework to govern the use of Generative AI, in all foreseeable aspects.

*Create a set of standards and best practices to ensure that the datasets the models use are not prone to security breaches.

This became known officially as Senate Bill 1047.  But believe it or not, the governor of California, Gavin Newsom, rejected the passage of this bill.  Why did he do this, you might be asking?  Well, here are his direct words:

““While well-intentioned, SB-1047 does not take into account whether an AI system is deployed in high-risk environments, or involves critical decision-making or the use of sensitive data," Newsom wrote. "Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology."”

(SOURCE:  https://www.darkreading.com/application-security/calif-gov-vetoes-ai-safety-bill)

Here are other reasons why he rejected this bill:

*The emphasis of it was purely on large scale Generative AI models.  There also needs to be a focus on more specialized models, which serve different purposes.

*The bill appeared to be too stringent to the governor.  His reason for this was that it could stifle innovation and ideas.  To counter this, he proposed that a much more flexible approach needs to be taken, and that each model should be taken into account on a case-by-case basis.

*The bill did not address the deployment of Generative AI in those environments that are deemed to be of high risk. 

As a result of this, the following pieces of advice were offered for consideration:

*Create a joint task force that includes a representative sample who will be involved in this process.  This will include people all the way from consumers to the private sector, to academia, and all levels of both the state and federal governments.

*The focus of Generative AI should on the size and the resources that the models use, but rather, there needs to be a huge emphasis on the risks that are borne from using AI to begin with.

*Implement a process where the any passed legislation on Generative AI can be updated as the technology evolved and advances.  Of course, as we know from the efforts in doing this for Cybersecurity, this is very tall order to fill.  In other words, the passage of any updates simply will not keep up with the pace of the rapid advances being made in Generative AI.

*It is highly recommended that any new bill that is presented to the governor for signing be modeled after the bill that the European Union (EU) recently passed.  This is known as the “EU Artificial Intelligence Act”, and is actually highly regarded as a comprehensive approach to regulating Generative AI.  More details about this can be seen at the link below:

https://artificialintelligenceact.eu/

My Thoughts On This:

This is bill that was rejected by the governor of California was officially known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.”  Many people supported the passage of this bill (even Elon Musk), but there was also a fair share that rejected it as well.  It has been viewed as a good step forward, but of course, a lot of work still needs to be done on, as I have eluded to previously.

The bottom line is that creating any kind of regulatory bill on Generative AI is going to be very complicated.  For example, it is not just a few segments of American society that are impacted by Generative AI.  Rather it is the entire population and almost every business. 

Also, there are too many unknowns and variables that are involved in the actual creation of a Generative AI model, and the list here will just keep on growing.

On a very macro level, my thinking is that we simply need to have a Department of Cybersecurity created, in the very same manner that the Department of Homeland Security was right after 9/11.  But, we should not wait for a disaster to happen in Generative AI in order for this to happen.  The federal government needs to act now in order to start this effort.

Under this newly created department,  Generative AI would also fit into here as well.  This will not only lead to a centralization of the data privacy laws, but it will also lead to the same result for Generative AI.  Apart from this, we need to start simple first. 

Let us draft a bill that details a framework for all aspects of AI, such as Computer Vision, Natural Language Processing, Large Language Models, Neural Networks, Machine  Learning, etc.

The bottom line here is that Generative AI is not a field all in its own world.  It includes all of these aspects.  What impacts one area will have a cascading effort on the other as well.  Then over time, updates should be added to this framework, which although will take a very long time to accomplish, I am a huge proponent of it.

Monday, September 30, 2024

Boredom In Cybersecurity?!?!? Yes, It's Real

 


As we know today, CISOs all across America (and for that matter, the entire world) and their respective IT Security teams are always fighting ongoing battle trying to keep up with the latest threat variants.  Given all of this, there is a tremendous amount of fatigue that takes place over time. 

One of the best examples of this is that of “Alert Fatigue”.  This is the where the IT Security team gets so flooded with alerts and warnings that they tend to overlook the real ones.

But can you believe that despite all of this, there is yet another phenomenon that is called “Boredom”?  Well, it is a reality.  You may be asking right now, what causes this, if they are so busy trying to put out fires?  Here are some of them:

1)     Technical Debt:

This happens when the IT Security team simply gets so overloaded with stuff that they simply push aside the smaller, easier tasks that need to get done, and over time, it becomes a monumental headache for them to handle.  A good example of this is the deployment of software patches and upgrades. Despite its level of importance, this is an often an overlooked task.  But when it comes time to deploy them, there is a lot of work to be done which can take days to accomplish with a lot of downtime involved.

2)     No Innovation:

If the CISO does not let his or her IT Security team the opportunity to find a new way to solve a problem, or to use the proverbial saying, “thinking outside of the box”, boredom will set in.  In fact, it will lead to complete burnout by having to follow the same procedures over and over again.  Also, there is a good likelihood that your employees could just easily quit if they feel that their ideas are not being heard.

3)     No Education:

There are some employees in the workplace that are merely happy with just punching the clock, but then there are those who want to learn and grow.  In fact, you, the CISO should take a proactive role in encouraging the latter.  Probably one of the bests ways is to encourage the members of your IT Security team to pursue the relevant Cyber certifications that are relevant to their job titles. Of course, to dangle a carrot in front of them, you should also offer to pay for the training and the exams, within reason of course.

So, now how do you, the CISO, actually alleviate this problem?  Here are some tips:

1)     Give Space:

In the Cyber world, there is no such thing as a free moment.  But, in order to alleviate boredom, try to encourage the members of your IT Security team try  out their new ideas as they get time.  Of course, this should be done in a test environment, not the production one.  Perhaps even consider holding contests and awarding a cash prize to the most innovative solution.  You should try to do this at least once a quarter.

2)     Use Automation:

Many companies are now adopting the usage of Generative AI in order to help automate some their more redundant processes.  This is especially true also in the Cyber world, when it comes to Penetration Testing and Threat Hunting.  While one of the benefits of this is that more attention can be paid to your customers, one of the others is also that it will give the members of your IT Security team that extra time to further experiment with their ideas and possible solutions.

3)     Give Ownership:

In this instance, rather than giving all of the duties to your IT Security team, break them up for each and every member.  In other words, you are giving each individual a sense of “ownership”.  For example, assign the tasks of investigating and deploying software patches and upgrades to a couple of them.  Try to set forth KPIs on this, and reward them if they are met or exceeded.  This is yet another great way to build up the level of motivation amongst them.

4)     Provide Training:

You, the CISO have the ultimate responsibility to keep your IT Security trained in the latest happenings of the Cyber Threat Landscape.  This is best done by having training sessions at least once a month, if not more.  Try to keep these training sessions interesting and competitive, by using the concepts of Gamification.

My Thoughts On This:

If you don’t keep your IT Security team engaged, and not bored, one of the worst consequences of this that simply won’t care about doing their jobs at all.  This cannot happen in the Cyber world, where there is so much at stake.  Remember, that in the end, it all takes a show of appreciation.  Give your members a pat on the back, and try to reward them as much as possible, even by simply taking them out to lunch or dinner.

And remember, as it was mentioned before, offering avenues for further education is probably one of the greatest benefits that you can offer.  Humans always have a sense of wanting to learn more, so take advantage of that for the sheer benefit and protection of your company!!!

Sunday, September 22, 2024

The Top 6 Nefarious Uses Of Generative AI In 2024

 


In the world of Cybersecurity, another common denominator between most of the vendors is the sheer love to publish reports as to what is the latest that is happening on the Cyber Threat Landscape.  These are also published by agencies from within the  Federal Government as well.  Probably one of the best known and most reputable reports is actually published by Verizon.

They do this on an annual basis, and they are entitled the “Data Breach Investigations Report”, also known as “DBIR” for short.  To access the 2024 report, click on the link below:

http://cyberresources.solutions/blogs/2024-dbir-data-breach-investigations-report.pdf

What I especially like about this report is that they cover a wide range of Cyber issues, such as:

*Patterns In Incident Response

*Systems Intrusion

*Social Engineering

*Web Application Attacks

*DDoS Attacks

*Heisted Digital Assets

*Misuse Of Privileges

It also covers a wide range of industries upon which the above-mentioned threat vectors can have a huge impact on.  In this report, the following market segments are analyzed:

*Food/Entertainment

*Education

*Finance/Insurance

*Healthcare

*Information Technology

*Manufacturing

*Professional/Scientific Services

*Public Administration

*Retail

And of course, the heavy emphasis on this 2024 is on Generative AI, and especially how it is being used for nefarious purposes by the Cyberattacker.  Here is what they covered:

1)     Phishing:

As most of us know, Phishing is not only of the oldest threat variants around, but believe it or not, it is still widely used.  Previously, you could tell if you received a Phishing email by examining for any attachments, typos, misspellings, grammatical mistakes, etc.  But, the report found that many hackers are now actually using ChatGPT to not only create Phishing emails with hardly any errors in them, but to also provide advice to non-English speakers as to how they can create convincing Phishing emails.  Because of the absence of the telltale signs, it now only takes about 21 seconds for the victim to click on a malicious, and a mere 28 seconds to give away their confidential information.

(SOURCE:  https://www.darkreading.com/vulnerabilities-threats/genai-cybersecurity-insights-beyond-verizon-dbir)

2)     Malware:

In the past, the Cyberattacker would take their time to write the code for a malware that they wanted to deploy onto the victim’s device.  Not anymore. Through the sinister evil twin of ChatGPT, which is called “WormGPT”, the Cyberattacker can now create and design a piece of stealthy malware in just a matter of a few minutes. It is primarily by Large Language Models (also known as “LLMs”). In this regard, the most commonly crafted malware is that of the Keylogger.

3)     Websites:

Back in the days of the COVID-19 pandemic, it was a common place for the Cyberattacker to create phony and fake websites in order to lure the victim to make a payment to a fictitious cause.  Of course, all of this money would then be transferred to an offshore account, such as in China, Russia, or North Korea.  But with Generative AI, the Cyberattacker can not only create a very convincing website, but even deploy malicious artifacts behind them.  Not only this, but these web pages can be dynamically created on the spot by using the right kind of Neural Network Algorithm.

4)     Deepfakes: 

These made their first mark in the 2016 Presidential Elections.  Essentially, this is where the Cyberattacker can  take an image of a real person,  and actually make a video from it.  For example, through Generative AI, a Cyberattacker can take an image of a real politician, and make that into a video that can be easily put onto YouTube.  One of the most common tactics here is to ask for donations for a political cause.  Worst yet, Deepfakes are also being created to spoof Two Factor (2FA) and Multifactor (MFA) authentication mechanisms. 

5)     Voice:

Just as much as the Cyberattacker can take a real image in order to create a fake one, the same can also be said of your voice.  In this instance, through the use of Machine Learning, they can take any legitimate voice recording that is available, and recreate that to make it sound like the voice of the real person.  Typically, it is well known people that are targeted.  Thus, if you receive a call from a phone number that you do not recognize, just don’t answer it.  If the caller leaves a voice mail, delete that as well.  Also, be careful as to what you post on the social media sites, especially when it comes to videos, especially if you are talking on them.

6)     OTPs:

This is an acronym that stands for “One Time Password”.  As its name implies, these are only used once, and are typically used to further verify your login credentials.  For example, if you log into a financial portal, such as your credit card or your bank, the second or third step in the verification process would be that of the OTP.  This is normally sent as a text message to your smartphone.  It usually expires after just a few minutes, and you have to enter it in, if you want to gain full access to your account.  But, the Cyberattacker is now using Generative AI to create fake ones, used in “Smishing” based attacks.  This is where you get a phony message, but rather than getting it on an email, it comes straight through as a text message.  If you get one of these unexpectedly, just delete it!!!

My Thoughts On This:

Interestingly enough, one of the major conclusions of this report is that there is a lot of hype around Generative AI.  In my view, this is certainly true, as many of the Cyber vendors use this keyword in order to make their products and more services that much more enticing for you to buy.  These days, it is hard to tell what is real and what is not.

In a recent class I just taught on Generative AI, some of the students asked me how they should deal with this particular issue.  I told them that the truth of the matter is that it is hard.  Your only true lines of defenses are to trust your gut.  If something doesn’t feel right, just delete it, or don’t click on it.  And always, confirm  the authenticity of the sender!!!

The Next Great Cyber Threat In 2025: Interconnectivity

  It is hard to believe that that there are only now two months left in this year.    But now as we approach December, this is the time now ...