Sunday, March 16, 2025

The Non Political View Of Saving The US Healthcare Industry

 


As we all know, this great country of ours has been shaken from the bottom all the way to the top by our current presidential administration.  Yes, our Federal Government has been bloated for decades, but the approach that is being taken is a bit extreme. 

Cuts are being made all over with little forethought, and worse yet, the people who depend heavily upon Medicaid could see their benefits not only reduced but even cut altogether. 

Even CISA, the main Cybersecurity Agency within the Federal Government, is starting to see cuts and is even starting to lay off hundreds of its own employees.  So, when you put these two together, you see one horrible trend:  the healthcare industry here in the United States is now going to be even more vulnerable at the hands of the Cyberattacker.

Consider some of these stats:

*Health Tech Magazine predicted that 2025 will be the worst year ever for security breaches.

*According to the 2024 Ponemon Healthcare Cybersecurity Report, 92% of the organizations that are in or even affiliated with the healthcare industry were hit by a Threat Vector.

*In the report from IBM called the "Cost of a Data Breach Report 2024", it was estimated that each security breach cost a healthcare entity at least $4.88 million.

Of course, the healthcare industry has always been vulnerable to Cyberattacks, but this has now become even more pronounced as Generative AI and Machine Learning (ML) start to take permanent root in both automation and customer service.  You could very well be wondering at this point:  what are the most persistent and deadliest Threat Vectors posed to the healthcare industry?  Here is a sample of them:

1)     Phishing:

Yes, this is deemed to be the oldest of all the Threat Vectors out there.  But despite its age, the Cyberattacker of today is still able to take the signature profiles of old campaigns and create newer ones from them.  In other words, this is building a better mousetrap.  Look at these alarming stats:

Ø  According to the 2022 IBM X-Force Threat Intelligence Index, Phishing will "be a common tactic for hackers to use against the health sector." (SOURCE:  Biggest Cyber Threats to the Healthcare Industry Today)

 

Ø  According to the NIH National Library of Medicine, in a one-month period, the average healthcare organization received 858,200 Phishing emails.  Of these, 139,400 were marketing related, and 18,871 contained a malicious payload, such as an .XLS file containing a macro.

 

Ø  On average, at least 2.6 million PII datasets are stolen from patients in a single security breach.  These include their confidential information, appointments with doctors, medical records, etc.

 

2)     Ransomware:

This is the kind of Threat Variant where the Cyberattacker locks up parts of the IT/Network Infrastructure of a healthcare organization and demands a ransom (usually paid in Bitcoin) for the victim to get their files unlocked.  Such was the case with Change Healthcare.  Over one hundred million patients had their PII datasets locked up in a Ransomware Attack, and in return, a $33 million ransom was paid to the Cyberattacking group. 

Then, just last month, the various blood banks located throughout the state of New York, four hundred of them in total, were hit by a Ransomware Attack.

A recent study also found that the malicious payloads in Ransomware Attacks can be delivered in any one of three ways, or even with all of them:

Ø  Phishing based Emails

Ø  Malvertising

Ø  Malicious attachments that were downloaded

For those of you who may not know, Malvertising can be technically defined as follows:

“Malvertising or malicious advertising is a technique that cybercriminals use to inject malware into users' computers when they visit malicious websites or click on an ad online.”

(SOURCE:  What is Malvertising and how to prevent it? | Fortinet)

Finally, the average ransom payment made by the healthcare industry was almost $2.56 million.

My Thoughts on This:

After reading all of this, anybody would be wondering:  what can I do to protect myself?  Well, the answer comes from two fronts.  The first one is the healthcare industry itself.  Here are some things that it needs to do:

Ø  Deploy Generative AI powered EDR (Endpoint Detection and Response) and XDR (Extended Detection and Response) solutions to all the endpoints that are issued to healthcare workers.  Note that endpoint is a general term that refers to tablets, laptops, smartphones, etc.

 

Ø  Follow a regular schedule of deploying software patches and updates.  This also includes firmware.

 

Ø  Make use of Multifactor Authentication (MFA).  This is where two or more differing authentication mechanisms are used to confirm the identity of the person in question.

 

Ø  If passwords are still a key credential, then make use of a Password Manager.  These software applications can create long and complex passwords on an automated basis (see the sketch after this list).

 

Ø  Make sure that you have a strong Security Policy that is being enforced.  But even more importantly, make sure that you have Incident Response/Disaster Recovery/Business Continuity Plans in place, and that they are rehearsed on a regular basis.
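
On that last point about Password Managers, here is a minimal sketch of how long and complex passwords can be generated on an automated basis, using nothing more than Python's built-in secrets module.  This is only an illustration of the underlying idea, not a replacement for a full-blown Password Manager:

import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a long, complex password with a cryptographically
    secure random number generator (the secrets module)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Require at least one lowercase, uppercase, digit, and symbol.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

print(generate_password())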

As for you, the patient:  I am assuming you always keep checking both your bank and credit card accounts at least twice a day to make sure that no fraudulent activity has occurred.  Many healthcare organizations now even offer your own personal patient portal, where you can access pretty much the same kind of information and data that your doctor can.  Keep checking this on a regular basis as well to make sure that there is no fraudulent activity here either.

Finally, to view the report from the:

Ø  Ponemon Institute, click here:  http://cyberresources.solutions/Blogs/Ponemon_HC_Report.pdf

 

Ø  CISA, click here:  http://cyberresources.solutions/Blogs/CISA_HC_Report.pdf

 

Ø  IBM, click here:  http://cyberresources.solutions/Blogs/IBM_Report.pdf

 

Ø  NIH, click here:  http://cyberresources.solutions/Blogs/NLM_HC_Report.pdf

Sunday, March 9, 2025

The Cyber Recession That Is About To Happen In 2025

 


In the past few weeks, I have written a lot about Generative AI, so today, I am going to break from it and talk about something else that is equally, if not more, important in Cybersecurity.  To start off with, we all know that the United States economy is starting to slow down. 

A lot of this can be attributed to the massive number of layoffs that have occurred within the Federal Government, and to the uncertainty surrounding the tariffs, which have wreaked havoc on our own financial markets. 

To make matters even worse, the overall job growth is also starting to slow down, something that we have not seen in quite some time.

But despite all of this, there is still a silver lining:  the demand for and creation of jobs in Cybersecurity still remains strong.  However, there are more jobs available than there are people to fill them.  Consider some of these key statistics:

*According to the ISC2 in their report entitled the "2024 Cybersecurity Workforce Study", there will be a need for 3.4 million Cyber professionals to keep up with the demand. 

*According to Cyber Seek, there were 457,433 cybersecurity job openings from August 2023 to September 2024, but barely any of them were filled.

Yes, this gap is very alarming.  Here are some reasons cited for this trend:

*The Cyber Threat Landscape is constantly changing, in fact even by the minute.  Thus, trying to find the right workers with the exact skillset that is needed is very difficult to do.  In fact, according to a recent report from IBM, over 60% of businesses have failed to find the candidate that they were looking for, simply because the applicants did not have the skills needed.

*A lot of the focus on Cyber jobs has been on offensive roles, such as being a Penetration Tester.  But the way that technology is evolving today, many companies are now resorting to automated Penetration Testing, versus doing it the traditional way.  So the demand now is for those candidates that have defensive oriented skill sets, such as being a part of the IT Security team.  But many of the people that have had these roles tend to burn out very quickly, because they are completely inundated with tasks, or they are simply suffering from what is known as "Alert Fatigue".

*The dawn of the data privacy laws has now created a new demand for Cyber professionals that also have a legal background.  Unfortunately, there are very few people who have this precise skillset.  But there is a new trend that is also emerging, and that is the need for what is known as a "Chief Data Privacy Officer".  Personally, I do not know of anybody who has filled this kind of role, but they seem to be out there.

Compounding the last one even more is that many companies hiring for that skillset also require an in-depth knowledge of the GDPR, the CCPA, the NIST frameworks, and even the ISO standards.  Anybody who can do all of this will truly be a specialist to the core.

But it is not the hiring managers that are solely to blame in this regard.  Even the recruiters have played their fair share in misleading candidates into applying for jobs from which they never hear back.  These are technically referred to as "Ghost Jobs", as they are used only to create a pool of candidates for the recruiting agencies. 

Another complaint that candidates have about the recruiters is that the job postings that they apply to have extremely broad requirements.  But when they get the interview, they are completely shocked when the hiring manager lays out extremely specific requirements for the job. 

My Thoughts on This:

So now, you may very well be asking yourselves:  How can this situation be turned around?  It comes down to both the job candidate and the hiring manager.  Let’s start first with the former.  Assuming that this person will be getting some kind of degree, they should be encouraged to network with their instructors to find an internship of some sort. 

This is what I did when I was in college.  I met with a professor, and he connected me with The Andersons, a large grain company based in the Midwest.

Further, the students should also be asking their instructors about what kinds of specific courses they should be taking.  For example, if they want to become a Malware Analyst, then they will have to take more quantitatively oriented courses to build an analytical mindset. 

Also, the instructors need to take a more active role in encouraging their students to take entry level certs, such as the Certified in Cybersecurity from ISC2 or the Security+ from CompTIA.

Now, on to the side of the employer.  In order to end this cat and mouse game of finding the right candidate (which they most likely will never find), they need to take the risk and try to hire somebody that has just entry level skills and train them up for the job. 

True, this could cost a little bit of money in the beginning, but these kinds of candidates will have a tendency to stay longer with the company, versus hiring somebody with the right skill set (and of course at a much higher salary), who probably will not stay around for very long, because they know that they are in demand.

In the end, there will always be a need for Cyber workers, as threat variants will never cease to exist, and the Cyberattackers will only keep getting stealthier and more deadly in their attacks.  If this jobs gap remains the way it is, there will be many more victims of security breaches in the end. 

Therefore, all three parties must make this happen:

Ø  The student who wants a job in Cyber.

Ø  The recruiter who is trying to place candidates in Cyber.

Ø  The hiring manager who is trying to fill a Cyber position.

Let us make this happen!!!

Sunday, March 2, 2025

3 Top Trends To Emerge From Generative AI Poisoning Attacks

 


It seems like all the news headlines today in Cyber are about Generative AI and its many different subsets, such as Large Language Models (also known as "LLMs").  I have covered this topic very extensively in the four books that I have written about it, as well as in the white papers, articles, and blogs that I have written for other people. 

But there is one area which, unbelievably, I have not yet touched upon, and that is the area of what is known as "AI Data Poisoning". 

You may be wondering what it is, so here is a technical definition of it:

“Data poisoning is a type of cyberattack where threat actors manipulate or corrupt the training data used to develop artificial intelligence (AI) and machine learning (ML) models.”

(SOURCE:  What Is Data Poisoning? | IBM)

Remember, as I have written about in the past, what drives a Generative AI model is the data that is fed into it.  It can be easily compared to a car, which needs gasoline to make it run and go places.  Likewise, it is the data that fuels the model and gives it the momentum that it needs to produce an answer, or an output, to the query that has been submitted to it.

But keep in mind that not just any output will do.  It must meet what the end user is looking for.  In order to make sure that this happens, whoever is in charge of the model must make sure that the datasets that are fed into it are cleansed and robust, as well as free from any statistical outliers. 

Using our car example again, you need to give it the right kind of fuel so that the engine will not get damaged (for instance, you do not pump diesel fuel into a Honda).  The same is true of the Generative AI model.  It needs the right data to make its algorithms (which are its engine) run equally smoothly.
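
To make the idea of cleansing a dataset a bit more concrete, here is a minimal sketch of removing statistical outliers with a modified z-score filter, written in Python.  The threshold value is an assumption for illustration; a real data pipeline would be far more involved than this:

import statistics

def remove_outliers(values: list[float], threshold: float = 3.5) -> list[float]:
    """Drop any data point whose modified z-score (based on the median
    absolute deviation, which is robust to extreme values) exceeds
    the threshold."""
    median = statistics.median(values)
    mad = statistics.median(abs(v - median) for v in values)
    if mad == 0:
        return values
    return [v for v in values if 0.6745 * abs(v - median) / mad <= threshold]

# The 10000.0 reading is a statistical outlier and gets dropped.
readings = [10.1, 9.8, 10.3, 9.9, 10000.0, 10.0]
print(remove_outliers(readings))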

But Generative AI is a field that is changing on an almost daily basis.  Thus, trying to deploy the latest Cybersecurity controls can be an almost impossible task to accomplish.  The Cyberattacker is fully aware of this and knows the vulnerabilities that are present.  Thus, they launch what are known as Poisoning Attacks to insert fake data into the model. 

But it does not stop here.  They can also quite easily insert a malicious payload to serve two key purposes:

Ø  Launch another Supply Chain Attack (just as we saw with SolarWinds and CrowdStrike) that could have huge, cascading effects.

Ø  Launch a Data Exfiltration Attack to not only steal the legitimate datasets that are being used in the model itself, but also those datasets which reside in the IT and Network Infrastructure of a business entity.

So given all of this, there are now three trends that are expected to happen, at some point in time down the road, which are as follows:

1)     Back To SolarWinds:

Yes, I know I just mentioned this, but the kind of attack that can happen here to a Generative AI Model will be magnified by at least ten times because of a Poisoning Attack.  To put it into perspective, when the SolarWinds hack took place, there were about 1,000 victims.  Now, there could be at least 10,000 victims or even more, all over the world.  In this regard, the main point of insertion for a malicious payload would be the LLM, if there is one present.

2)     The Role of the CDO:

This is an acronym that stands for "Chief Data Officer".  This job title can be compared to that of the CISO, but their focus is on the datasets that their company has and is currently using.  Up until now, their main tasks were simply to write the Security Policies that would help fortify the lines of defense around a Generative AI model.  But with the advent of Data Poisoning, their role will now shift to hiring and managing a team of employees whose sole mission is the cleansing and optimization of the datasets before they are fed into the model.  Another key role for them is to make sure that whatever datasets they are using come into compliance with the data privacy laws, such as those of the GDPR and the CCPA.

3)     It is Going to Happen:

Just as Phishing has stuck around, so will Poisoning Attacks.  They will start to evolve this year and pick up steam later on.  As companies keep using Generative AI, this will be a highly favored threat variant for the Cyberattacker.  In fact, according to a recent market survey that was conducted by McKinsey, over 65% of businesses today use Generative AI on a daily basis.  To see the full report, access the link below:

http://cyberresources.solutions/Blogs/Gen_AI_Report.pdf

My Thoughts on This:

I am far from being an actual Generative AI practitioner, but I would like to offer my opinion as to how you can mitigate the threat of a Poisoning Attack impacting your business:

Ø  Generative AI models are not just one thing.  The model, or models, are connected to many other resources in the external world.  There are a lot of interconnectivities here, so I would recommend keeping a map or visual to keep track of all of this, and updating it on a real-time basis as more connections are made into it (see the sketch after this list).  This will also give a clear idea as to exactly where you need to deploy your Cybersecurity controls in the Generative AI Ecosystem.

 

Ø  If you can, hire a CDO as quickly as you can.  You do not have to hire them as a full-time employee; you can also hire them on a contract basis, to keep them affordable.  But you will need them ASAP if you are going to make use of Generative AI based models.
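
As for the map of interconnectivities in the first point, here is a minimal sketch of how it could be kept as a simple adjacency map in Python, with every connection being a place where a Cybersecurity control may need to be deployed.  All of the component names here are hypothetical, purely for illustration:

# A hypothetical map of a Generative AI Ecosystem: each component
# and the external resources that it connects to.
ecosystem = {
    "llm_model": ["vector_database", "training_pipeline", "chat_api"],
    "training_pipeline": ["raw_dataset_store", "data_cleansing_job"],
    "chat_api": ["customer_portal", "crm_system"],
}

def list_connections(graph: dict[str, list[str]]) -> None:
    """Print every connection, i.e., every candidate location
    for a Cybersecurity control and real-time monitoring."""
    for component, links in graph.items():
        for link in links:
            print(f"{component} -> {link}")

list_connections(ecosystem)

# As new connections are made, update the map in real time:
ecosystem["llm_model"].append("third_party_plugin")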

Poisoning Attacks are going to be around for a long time.  So, now is the time to get prepared!!!

Sunday, February 23, 2025

The Importance Of Separating The Logical And Emotional Aspects If You Are A Victim

 


Human beings have two basic instincts among all others:  being a creature of habit, and wanting to forgive people who have wronged you in some way, shape, or form.  I for one know I am a creature of habit.  The best example of this happened just a few days ago. 

I recently traded in my 22-year-old Honda Civic and am now leasing a Kia.  This is the first time that I have had a car with all the electronic gizmos in it.  I have always been an analog dashboard kind of person with my past cars, so there are times I have wished to have that back.

But I know I made the right decision and must get used to all these new fancy things.  In terms of forgiveness, well, I am also a pretty forgiving guy.  The best example of this involves one of my best friends of over 40 years.  We have our major spats, the most recent one just a few days ago about the current political climate.  But of course, being close friends for such a long time, we forgave each other almost immediately.

These two examples also fit perfectly well in the world of Cybersecurity.  For example, suppose you have been a long-time customer of a major vendor.  All of a sudden, you have been informed that they have been impacted by a security breach.  Some of the first questions that you will ask are:

1)     How did it happen?

2)     How soon did you find out it happened?

3)     What steps have you taken to rectify the situation?

4)     MOST IMPORTANT:  How am I impacted?  Is my data safe?

5)     What kind of recourse are you going to offer me?

But no matter how much you try to find fault with and blame the vendor for what happened, the tendency to want to stick around with them still persists.  After all, it is going to take time to find a new vendor, and time to get acclimated to the way they serve customers. 

And what if they are more expensive?  So now the feeling of being a “creature of habit” sets in, and in the end, you decide you want to still stick with the same vendor.  This is technically known as “Digital Forgiveness”.

But now there is a new psychological play here as well.  It is the phenomenon called "Risk Normalization".  To put it simply, you rationalize your decision to continue with the same vendor by telling yourself:  "Well, anybody can become a victim, I guess it was my turn now".

Because of all the loyalty you have shown to the vendor, the tendency will now be for them, indirectly, to take advantage of you.  For example, their attitude could very well now be:  "Well, if a security breach happens again, they will still probably stick around.  No need to beef up my lines of defense any further". 

But taking this kind of approach can have detrimental effects, which include the following:

1)     Trust:

Although you may have forgiven the vendor, the incident will still remain hidden somewhere in your memory.  So, if the vendor takes a complacent attitude with you, your level of trust in them can erode over time.  That will not be your loss but theirs, because customers can easily be lost, but it can take an exceptionally long time to gain a new one.

2)     Anxiety:

After a company has been hit by a security breach, the moral and ethical thing for them to do is to offer you some kind of recourse, which most often comes in the form of free credit reports and real-time monitoring.  But they are not legally required to do this.  So, if nothing is offered to you, it is quite likely that a prominent feeling of anxiety will kick in.  For example, one of your most immediate fears will be:  "Will I become a victim of ID Theft?" 

3)     Goodwill:

If the vendor again becomes a victim of a security breach, your goodwill towards them will completely vanish, and at this time you will say:  “This is the straw that broke the camel’s back, I am finding a new vendor”. 

My Thoughts on This:

Although this is much easier said than done, if your vendor has been hit by a Cyberattacker, and you have become a victim, it is imperative to separate yourself from the emotional side, and take these solid steps:

Ø  After you have been notified, immediately demand to know what happened to your data, and what corrective measures have been or are currently being taken to protect your datasets.

 

Ø  Immediately enable either 2FA or MFA on all your financial accounts, such as your banking and credit card portals.  Keep checking them at least twice a day to make sure that there is no fraudulent activity.

 

Ø  Immediately contact the three credit bureaus (Equifax, TransUnion, and Experian) and put a freeze on your credit files.

 

Ø  Demand recourse, more than what the vendor has to offer.  If you can afford the legal expenses, even consider filing a lawsuit.

 

Ø  Remember in the end that you are the customer.  In our capitalistic society, the "Customer Is King".  So, wield these powers that you have, and try to find a different vendor.  If you take this route, make sure you ask what steps would be taken to protect your data if you were to go with them.

Finally, you, the customer, also need to play a part in protecting your data.  For example, with the recent passage of the many data privacy laws, especially those of the GDPR and the CCPA, you now have the legal right to know explicitly how your data is being stored, processed, and archived.  And you can always ask to have your datasets deleted if at any time you are not feeling comfortable with the way they are being managed.

Sunday, February 16, 2025

A Fine Line Must Be Drawn In Generative AI Usage: The Banking Example

 


One common question that I get asked from time to time is:  what do Cyberattackers like to prey on?  In other words, who do they like to target?  To be honest, just about anything and anybody can be a prime target.  But it all comes down to one key motivating factor:  MONEY, AND LOTS OF IT. 

Wherever a backdoor is open and $$$ is easy to smell, the Cyberattacker will find its prey.  It can happen in a lot of diverse ways, such as Social Engineering, Phishing (Business Email Compromise is a big one here), finding vulnerabilities in a web application, etc.

But one thing I can answer for sure is that an industry which is heavily targeted is the banking one.  After all, once the Cyberattacker has access to the login info of the victim, all heck can break loose.  For example, they can initiate a fake transaction, open a fake debit card, or just do things the old-fashioned way:  steal whatever money is in the victim's account.

In response to this, most of the financial institutions based here in the United States have done an excellent job of implementing safeguards to protect their customers.  I can even vouch for this myself.  One time, I got a letter from my bank stating that my debit card had been hacked into. 

I never even used it, but the moment they got a whiff of a potential fraudulent transaction, they cancelled it immediately.  Then another time, when I logged into my checking account from my iPhone (which I hardly ever do), the bank blocked my access, because a different IP address was detected.

But another area in banking which needs more attention is that of the mobile apps that banks create and deploy for their customers.  Consider these stats:

*Fraudulent activity will exceed $40 billion by 2027, which is a staggering 32% increase.

*Banking as a Service will also witness a 20% increase in attacks.

(SOURCE:  How Banks Can Adapt to the Rising Threat of Financial Crime)

In fact, the mobile app can be viewed as a Banking as a Service tool.  After all, you can download it from an app store, such as Google's or Apple's.  In these cases, one of the easiest ways for the Cyberattacker to get in is to try to find a backdoor in the source code, especially in the APIs. 

As I have written before, many software developers use APIs that are open source, primarily because they are free to download and use, with no licensing fees involved.  Also, there are plenty of forums online where help and resources are available.

But the software developers who make use of these kinds of APIs do not check them to make sure that they have been updated.  Because of this, many backdoors can be left open for easy penetration by the Cyberattacker.  From here, they can manipulate the mobile app or even heist the source code to create a fake one, thus tricking and luring in their victims.
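
To make this point about unchecked open-source components more concrete, here is a minimal sketch of how a development team could flag installed Python packages that have fallen below a pinned minimum version.  The package names and minimum versions are hypothetical; a real pipeline would lean on a dedicated software composition analysis tool:

from importlib.metadata import PackageNotFoundError, version

# Hypothetical minimum safe versions, e.g., pulled from security advisories.
MINIMUM_SAFE_VERSIONS = {
    "requests": (2, 31, 0),
    "cryptography": (42, 0, 0),
}

def parse(v: str) -> tuple[int, ...]:
    """Turn a simple version string like '2.31.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

for package, minimum in MINIMUM_SAFE_VERSIONS.items():
    try:
        installed = parse(version(package))
    except PackageNotFoundError:
        print(f"{package}: not installed")
        continue
    status = "OK" if installed >= minimum else "WARNING: below the safe minimum"
    print(f"{package} {installed}: {status}")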

So how can a bank avoid this situation?  In a theoretical sense, the easy answer is that they should use their own IT Department to create the app.  But this can be a costly proposition, so many banks choose to outsource the development of it, in the name of saving money. 

While this can be a good thing, it also poses grave risks.  For example, what if they hire a web development team, such as one in India, that is not properly vetted?

In this regard, the banks must take the vetting process very seriously.  They need to make sure that whoever they hire meets strict security requirements that are at least on par with, or even greater than, what the bank has in place. 

Further, the right controls must be put in place, in case any customer information and/or data is given for testing purposes.  In fact, the bank should take the initiative and responsibility to create a set of best practices and standards for their vetting process.

Another avenue that banks are looking at to further protect their Banking as a Service offerings is the use of Generative AI.  One of the best ways that this has been used is to quickly detect any form of abnormal behavior that falls outside the baseline profile of the customer.  

Once this has been captured, the Generative AI model will trigger the account to be blocked almost immediately.  Generative AI is also great when it comes to halting a wire transfer that looks fishy, such as in the case of a Business Email Compromise Attack.
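
Here is a minimal sketch of the kind of baseline check just described:  flag any transaction that deviates too far from the customer's established spending profile.  A real fraud model would weigh far more signals (location, device, merchant, time of day), and the threshold here is purely an assumption for illustration:

import statistics

def is_abnormal(history: list[float], new_amount: float, k: float = 3.0) -> bool:
    """Flag a transaction that lies more than k standard deviations
    above the customer's baseline spending profile."""
    baseline_mean = statistics.mean(history)
    baseline_stdev = statistics.stdev(history)
    return new_amount > baseline_mean + k * baseline_stdev

# A customer who normally spends $20-$60 per transaction:
history = [25.0, 40.0, 32.5, 55.0, 21.0, 48.0, 38.0]
print(is_abnormal(history, 45.0))   # False: within the baseline
print(is_abnormal(history, 950.0))  # True: block the account and review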

But with the good comes the bad.  For instance, a Cyberattacker can easily heist one of these models and modify it in a way that it will not detect fraudulent activity for a certain period of time.  Or worse yet, they can not only create a fake website, but they can also use Generative AI to create a Deepfake, which is a replication of a real person. 

They can use this to create a Digital Personality that the customer can interact with, but Social Engineering can be embedded here, so that a trusting dialog can be developed.  Once this has come to fruition, the Digital Personality can then be manipulated to prey upon the vulnerable state of mind of the customer and con them into giving out their personal information and data.

My Thoughts on This:

IMHO, for banks, no matter what their size or geographic location is, there must be a fine line drawn as to how much Generative AI should be used.  Perhaps creating a set of best standards and practices, spelling out where it can and cannot be used, would be great here.

In the end, it is extremely easy to get swept away by the glamor that Generative AI brings to the table, but it is especially important to keep in mind, as in the case of the banks, that the human side is needed as well.

Back to my example again of my account being blocked.  Suppose the only way that it could be unblocked was by having a conversation with a Digital Person.  But for some reason, no matter how much I tried to convince it that it was really me trying to log in, it still would not unblock the account. 

But luckily after waiting for a few minutes, I was able to reach a real, live customer assistant to whom I explained the situation.  The next second, it was unblocked.

The equation for having a great level of security is to have a balance between technology and the human element.

Sunday, February 9, 2025

How A Generative AI UEBA Solution Can Power Your Defenses

 


I have written a lot about Generative AI in the past:  in books, at my full-time job, in my freelancing gig, and on this blog site.  As mentioned, it brings both its good and bad sides with it.  But to stay positive today (despite what else is going on in the news), a great big advantage of Generative AI is its ability to harness tons of data and provide the appropriate response to a query.  So, in this regard, it can be a great boon for Cybersecurity as well.

One instance of this is filtering through all the noise that is outputted in the log files from the network security devices that you may have implemented.  For example, this can include firewalls, network intrusion devices, routers, hubs, etc. 

They all present information that is especially useful to an IT Security team.  But the problem with this is that there are tons of it to comb through.  It can take an IT Security team days or even months to go through all of this. 

But by using Generative AI and properly training it, the model can sift through all of this very quickly, in fact in just a matter of a few minutes.  From here, it can then present the information that is relevant to the IT Security team.  One notable example of this is the filtering of what are known as "False Positives".  These are the alerts and warnings that come through that are deemed to be illegitimate, or of exceptionally negligible risk.

A good model can detect all of this, and either completely discard them or archive them for later study.  From here, only the real alerts and warnings are then presented to the IT Security team, which can then be triaged and responded to appropriately. 

This almost eliminates the problem of what is known as "Alert Fatigue".  This is where there are so many alerts to go through that one can get burned out.  This can cause severe repercussions as well, as a burned-out employee could decide to totally give up, and not even respond to anything.
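
Here is a minimal sketch of the triage idea just described:  alerts deemed to be False Positives get archived for later study, while only the real alerts are surfaced to the IT Security team.  In practice the verdict would come from a trained model rather than the simple rule used here, and all of the field names are assumptions for illustration:

# Hypothetical alert records, e.g., parsed from network device log files.
alerts = [
    {"source": "firewall", "severity": 2, "matches_known_benign": True},
    {"source": "ids", "severity": 9, "matches_known_benign": False},
    {"source": "router", "severity": 1, "matches_known_benign": True},
    {"source": "firewall", "severity": 7, "matches_known_benign": False},
]

def is_false_positive(alert: dict) -> bool:
    """A stand-in for the model's verdict: low severity plus a match
    against a known-benign pattern is treated as a False Positive."""
    return alert["severity"] <= 3 and alert["matches_known_benign"]

archive = [a for a in alerts if is_false_positive(a)]
triage_queue = [a for a in alerts if not is_false_positive(a)]

print(f"Archived {len(archive)} False Positives for later study")
for alert in triage_queue:
    print(f"Escalate to the IT Security team: {alert}")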

Another key area where Generative AI can be used quite well is in a field called "User and Entity Behavioral Analytics", also known as "UEBA" for short.  It can be technically defined as follows:

“User and entity behavior analytics (UEBA) is a cybersecurity solution that uses algorithms and machine learning to detect anomalies in the behavior of not only the users in a corporate network but also the routers, servers, and endpoints in that network.”

(SOURCE:  What is User Entity and Behavior Analytics (UEBA)? | Fortinet)

Deploying and using this kind of solution can be quite complex, depending upon how large your IT and Network Infrastructure is, and how many employees you have.  But simply put, UEBA is the science of tracking down any abnormal or unusual patterns in the usual flow of network traffic. 

A scenario where it is used most is in trying to determine any anomalies that fall outside of the baseline profile that you have established.  A great use case is when you have set up a limit of only three login attempts before an account is locked out.

An outlier here is if somebody keeps trying repeatedly to log in.  Usually, this is a warning sign that a Cyberattacker is on the prowl by launching a Dictionary Attack, or it can be a frustrated employee that is legitimately trying to log into their device.  Whatever the situation might be, this must be investigated.  But once again, having to go through all the data can be a nightmare.  By incorporating Generative AI into your solution, any suspicious behavior that merits further attention by the IT Security team will be presented very quickly by the model.
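
Here is a minimal sketch of the login example above:  count failed login attempts per user inside a sliding time window and flag anybody who goes past the three-attempt limit, whether that turns out to be a Dictionary Attack or just a frustrated employee.  The log format is a hypothetical one:

from collections import defaultdict
from datetime import datetime, timedelta

MAX_ATTEMPTS = 3
WINDOW = timedelta(minutes=5)

# Hypothetical failed-login events: (username, timestamp).
events = [
    ("jsmith", datetime(2025, 2, 9, 9, 0, 5)),
    ("jsmith", datetime(2025, 2, 9, 9, 0, 40)),
    ("jsmith", datetime(2025, 2, 9, 9, 1, 10)),
    ("jsmith", datetime(2025, 2, 9, 9, 1, 30)),
    ("adoe", datetime(2025, 2, 9, 9, 2, 0)),
]

def flag_suspicious(events: list[tuple[str, datetime]]) -> set[str]:
    """Return the users with more than MAX_ATTEMPTS failed logins
    inside any sliding window of length WINDOW."""
    per_user = defaultdict(list)
    for user, stamp in sorted(events, key=lambda e: e[1]):
        per_user[user].append(stamp)
    flagged = set()
    for user, stamps in per_user.items():
        start = 0
        for end in range(len(stamps)):
            while stamps[end] - stamps[start] > WINDOW:
                start += 1
            if end - start + 1 > MAX_ATTEMPTS:
                flagged.add(user)
    return flagged

print(flag_suspicious(events))  # {'jsmith'}: investigate further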

In fact, many of the Cybersecurity vendors are already baking this into their solutions, so there is no extra work that is required on your end.  All you need to do is merely feed it the data so the model can learn and create a baseline profile.  Once it has done this, you then need to set up the criteria for what is deemed to be abnormal behavior.

UEBA solutions are used quite heavily in Security Operation Centers (also known as "SOCs"), because of all the monitoring that must be done 24 x 7 x 365.  But some of the areas where it has hardly ever been deployed are the following:

*The Healthcare Industry

*Government Agencies

*The Educational Sector

The first and last ones are hit the hardest by the Cyberattacker, because legacy systems are still being used, and there is a lack of funding, especially in the schools.  But the good thing here is that most of the UEBA solutions are now offered as a SaaS based product, which makes them affordable for just about any kind of entity. 

It is highly likely that UEBA will develop over time, especially as Generative AI quickly advances further.  Thus, if you decide to deploy it, you will have to make sure that you deploy all the software patches and upgrades to it.

My Thoughts on This:

Making use of a UEBA solution is of course a no brainer.  It is one of the best ways that you can defend your business from an Insider Threat.  It also comes in very handy when trying to secure those login credentials that are deemed to be "super user" (this falls under the realm of Privileged Access Management).  Consider these stats:

*In 2024, the cost of a Data Exfiltration Attack rose from $4.4 million to well over $4.8 million, which is a 10% increase.

*Over 70% of SOCs feel that they will miss a real threat with all the False Positives that they are constantly being bombarded with.

(SOURCE:  Behavioral Analytics in Cybersecurity: Who Benefits Most?)

But also, there are two key things to keep in mind:

*The baseline profile that you get is only going to be as accurate as the data you feed into the model for it to learn.

*While using an automated tool by Generative AI is advantageous, do not become overly dependent upon it.  Remember, great Cybersecurity takes an equal combination of both technology and the human element.

But best of all, a good UEBA solution will reduce “Alert Fatigue”, and help ensure a sense of proactiveness amongst your IT Security team.

Sunday, February 2, 2025

Will Generative AI Replace Human Penetration Testers? Find Out Here

 


Very often, I get asked the question:  "What is a Penetration Test?"  To make a long story short, I usually tell people that it is one of the best ways to see where the vulnerabilities of a business lie, and how to fix them up quickly. 

Of course, those that are in Cyber know that there is a lot more involved than that.  I have been writing about Penetration Testing for years, and have even published a book on it and a huge eBook that is available to buy (I think it's only $9.95 for the Kindle version). 

I also have very good friends and even business partners who are very good Penetration Testers as well.  I have learned a lot from them, especially how they have crafted their art.  But over the last couple of years, the conversations have shifted to what it means to bring Generative AI into this realm of Cyber.  Before the advent of this, Penetration Testing had been done manually.

Meaning, one would hire a company that specializes in doing this, and there would be actual human beings involved, all the way from conducting the offensive exercises to writing the final report for the client.  But now, Generative AI is taking root here, and people have started to question just how dependable it is.

Well, this is a difficult question to answer right off the cuff, as it will depend primarily upon how people view Generative AI in general.  But to help you form a point of view, let us look at both the advantages and disadvantages of it.

The Advantages:

1)     Automation:

Conducting an actual offensive exercise takes a lot of focus, attention, and brain power.  Some of the tasks that are involved here can be quite repetitive, thus detracting from the human concentration that is needed.  But here, Generative AI can be used to automate some of these tasks (see the sketch after this list), thus leaving the Penetration Tester(s) to focus on the big picture, which is finding the gaps and recommending to the client the best way to fix them.

2)     Scenarios:

Before the offensive exercises are executed, the Penetration Tester(s) must map out the targets that they want to break into, from both an ethical and legal perspective.  Of course, nothing can be done without the explicit permission of the client, and it must be written out in detail in the contract.  The primary objective of the Penetration Tester(s) is to take on the mindset of an actual Cyberattacker.  While the ones that I personally know do a great job of doing this, sometimes extra help can be of great use.  In this regard, this is where Generative AI can play a huge role.  For example, it can model other kinds of testing scenarios that the Penetration Tester(s) may not have even thought of before. 

3)     Cost:

At one point in time, I was an actual reseller for a company that made a Penetration Testing package that was completely automated.  When I met with the sales rep that oversaw the Chicago market, I asked him what the price was for it.  He said it was $50,000.00 to buy a license for one year.  When I heard that, my mouth dropped, and I was thinking, WTF????  Who can afford that?  But then he explained to me in more detail that for just one flat fee, a company that buys this license can run an unlimited number of tests.  This stands in stark contrast to the Penetration Test that is done manually, which can cost as much as $30,000.00 - $40,000.00 for just one test.  Now, imagine if you had to do this once a quarter?  The costs can really add up here.  So yes, $50K is a lot to put up front, but this automated tool that is powered by Generative AI can pay for itself in the end, depending upon how many times you make use of it.

4)     Speed:

A Penetration Test that is powered by Generative AI can run a comprehensive offensive exercise in just a matter of a few hours, versus one that is done manually, which can take weeks, or even months, depending upon the scope of the actual test.  This is especially true for large scale environments.
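
To make the automation point from item 1 concrete, here is a minimal sketch of one of those repetitive tasks:  a basic TCP port scan of a single host, written in Python.  It is only an illustration of the grunt work that gets automated; never run it against a target without the explicit, written permission of the client:

import socket

def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the ports on which a TCP connection succeeded
    (a repetitive reconnaissance task that is easily automated)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Only ever scan hosts that you own or have written permission to test.
print(scan_ports("127.0.0.1", range(1, 1025)))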

The Disadvantages:

1)     Mistakes:

Yes, human Penetration Testers can make mistakes, but those tools that are powered by Generative AI can make more of them, and worst of all, you may not even know about it.  For example, a fully automated Penetration Testing tool may hit a target which has not received client approval, and as a result, this could be prime grounds for a major lawsuit.  Or worse yet, there could be a misconfiguration in the tool itself, which could lead to a huge data leakage fiasco.

2)     Data:

Using a tool that is powered by Generative AI sounds sexy and all, but there is a dark side to it.  You must train it, and to do so, you need a large number of datasets in order to keep the models optimized at all times.  Even more unglamorous, you must make sure that they are cleansed so that they do not give the wrong output.  For instance, suppose that a fully automated tool hits a target, and returns an output stating that no vulnerabilities were found, when in fact there really were some.  This can be blamed on the lack of cleansed datasets, which caused the output to be skewed.

3)     Black Box:

Generative AI, and for that matter all aspects of AI in general, such as Neural Networks, Machine Learning, and Computer Vision, are all deemed to be "Garbage In, Garbage Out".  Meaning, whatever you feed into the models will determine the output that you get.  In turn, this creates the phenomenon known as the "Black Box".  Meaning, you can see what goes in and what comes out, but you do not know what happens in between.  Many of the AI vendors hold this close to their chest, as these are primarily the algorithms that drive their products.  But while it is great that a client will get the outputs, they also want to know how the automated tool produced all of that.  What would you tell them in that case?  If I were paying a large amount of money for an automated Penetration Test, I would for sure want to know that.

4)     Cloud:

For a company that migrates their entire IT and Network Infrastructure into the Cloud, it can be a nebulous process.  Even after the migration has been completed, it can still be complicated, depending upon how much and what has been moved over.  As a result, an automated tool will not work well in this kind of environment, because each Cloud deployment will vary quite a bit from the others.  Therefore, if a client wants a Penetration Test done in such an environment, they are far better off hiring human Penetration Testers.  This is especially true for web-based applications.

My Thoughts on This:

So, the next big question is:  "Will Generative AI replace human Penetration Testers?"  My answer to this is a blatant no.  Human intervention is still required, especially when it comes to evaluating the results of the offensive exercises and conveying them in a written format to the client.  Heck, even people who use ChatGPT should always check the outputs to make sure they sound realistic before sharing them with anybody else.

If you are in the market for having an actual Penetration Test done at your business, my first piece of advice is to talk to an actual human being first to see what you need to get done.  Don't simply spend the $50K to buy an automated tool.  As a client, you also need to understand what is being done to your environment, and how the vulnerabilities will be found.  These questions can best be answered only by a real, live human.
