Sunday, February 23, 2025

The Importance Of Separating The Logical And Emotional Aspects If You Are A Victim



Human beings have two basic instincts among all others: being creatures of habit, and wanting to forgive people who have wronged us in some way, shape, or form.  I know for one that I am a creature of habit.  The best example of this happened just a few days ago. 

I recently traded in my 22-year-old Honda Civic and am now leasing a Kia.  This is the first time that I have had a car with all the electronic gizmos in it.  I have always been an analog-dashboard kind of person with my past cars, so there are times I have wished to have that back.

But I know I made the right decision and must get used to all these new fancy things.  In terms of forgiveness, well, I am also a pretty forgiving guy.  The best example of this involves one of my best friends of over 40 years.  We have our major spats, the most recent one just a few days ago about the current political climate.  But of course, being close friends for such a long time, we forgave each other almost immediately.

These two instincts also apply perfectly well in the world of Cybersecurity.  For example, suppose you have been a long-time customer of a major vendor.  All of a sudden, you are informed that they have been impacted by a security breach.  Some of the first questions that you will ask are:

1)     How did it happen?

2)     How soon did you find out it happened?

3)     What steps have you taken to rectify the situation?

4)     MOST IMPORTANT:  How am I impacted?  Is my data safe?

5)     What kind of recourse are you going to offer me?

But no matter how much you try to find fault with and blame the vendor for what happened, the tendency to want to stick around with them still persists.  After all, it is going to take time to find a new vendor, and time to get acclimated to the way they serve customers. 

And what if they are more expensive?  So now the feeling of being a “creature of habit” sets in, and in the end, you decide to stick with the same vendor.  This is technically known as “Digital Forgiveness”.

But there is a new psychological force at play here as well.  It is the phenomenon called “Risk Normalization”.  To put it simply, you rationalize your decision to continue with the same vendor by telling yourself:  “Well, anybody can become a victim.  I guess it was just my turn.”

Because of all the loyalty you have shown to the vendor, the tendency will now be for them, indirectly, to take advantage of you.  For example, their attitude could very well become:  “Well, if a security breach happens again, they will still probably stick around.  No need to beef up our lines of defense even further.” 

But taking this kind of approach can have detrimental effects, which include the following:

1)     Trust:

Although you may have forgiven the vendor, the incident will still linger in the back of your memory.  So, if the vendor takes a complacent attitude toward you, your level of trust in them can erode over time.  The loss will not be yours but theirs, because a customer can easily be lost, but it can take an exceptionally long time to win a new one.

2)     Anxiety:

After a company has been hit by a security breach, the moral and ethical thing for it to do is to offer you some kind of recourse, which most often comes in the form of free credit reports and real-time monitoring.  But companies are not legally required to do this.  So, if nothing is offered to you, it is quite likely that a prominent feeling of anxiety will kick in.  For example, one of your most immediate fears will be:  “Will I become a victim of ID Theft?” 

3)     Goodwill:

If the vendor becomes a victim of a security breach yet again, your goodwill towards them will completely vanish, and at that point you will say:  “This is the straw that broke the camel’s back.  I am finding a new vendor.” 

My Thoughts on This:

Although this is much easier said than done, if your vendor has been hit by a Cyberattacker and you have become a victim, it is imperative to separate yourself from the emotional side and take these solid steps:

Ø  After you have been notified, immediately demand to know what happened to your data, and what corrective measures have been or are currently being taken to protect your datasets.

Ø  Immediately enable either 2FA or MFA on all your financial accounts, such as your banking and credit card portals.  Keep checking them at least twice a day to make sure that there is no fraudulent activity.

Ø  Immediately contact the three credit bureaus (Equifax, TransUnion, and Experian) and put a freeze on your credit file.

Ø  Demand recourse beyond what the vendor initially offers.  If you can afford the legal expenses, even consider filing a lawsuit.

Ø  Remember, in the end, that you are the customer.  In our capitalistic society, the “Customer Is King”.  So, wield the powers that you have, and try to find a different vendor.  If you take this route, make sure you ask what steps they are taking to protect your data if you were to go with them.

Finally, you, the customer, also need to play a part in protecting your data.  For example, with the recent passage of the many data privacy laws, especially the GDPR and the CCPA, you now have the legal right to know explicitly how your data is being stored, processed, and archived.  And, you can always ask to have your datasets deleted if at any time you are not comfortable with the way they are being managed.

Sunday, February 16, 2025

A Fine Line Must Be Drawn In Generative AI Usage: The Banking Example



One common question that I get asked from time to time is:  what do Cyberattackers like to prey on?  In other words, who do they like to target?  To be honest, just about anything and anybody can be a prime target.  But it all comes down to one key motivating factor:  MONEY, AND LOTS OF IT. 

Wherever a backdoor is open and the $$$ is easy to smell, the Cyberattacker will find its prey.  It can happen in many diverse ways, such as Social Engineering, Phishing (Business Email Compromise is a big one here), finding vulnerabilities in a web application, etc.

But one thing I can answer for sure is that one industry which is heavily targeted is banking.  After all, once the Cyberattacker has access to the login info of the victim, all heck can break loose.  For example, they can initiate a fake transaction, open a fake debit card, or just do things the old-fashioned way:  steal whatever money is in the victim’s account.

In response to this, most of the financial institutions based here in the United States have done an excellent job implementing safeguards to protect their customers.  I can even vouch for this myself.  One time, I got a letter from my bank stating that my debit card had been hacked into. 

I had never even used it, but the moment they got wind of a potential fraudulent transaction, they cancelled it immediately.  Then another time, when I logged into my checking account from my iPhone (which I hardly ever do), the bank blocked my access shortly after, because a different IP address had been detected.

But another area in banking which needs more attention is that of the mobile apps that banks create and deploy for their customers.  Consider these stats:

*Fraudulent activity will exceed $40 billion by 2027, which is a staggering 32% increase.

*Banking as a Service will also witness a 20% increase in attacks.

(SOURCE:  How Banks Can Adapt to the Rising Threat of Financial Crime)

In fact, the mobile app can be viewed as a Banking as a Service tool.  After all, you can download it from an app store, such as Google’s or Apple’s.  In these cases, one of the easiest ways for the Cyberattacker to get in is to try to find a backdoor in the source code, especially in the API. 

As I have written before, many software developers use APIs that are open source, primarily because they are free to download and use, with no licensing fees involved.  Also, there are plenty of forums online in which help and resources are available.

But the software developers who make use of these kinds of APIs often do not check them to make sure that they have been updated.  Because of this, many backdoors can be left open for easy penetration by the Cyberattacker.  From here, they can manipulate the mobile app or even heist the source code to create a fake one, thus tricking and luring in their victims.
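To make this concrete, here is a minimal sketch of how a development team could automate that missing check.  It assumes a Python-based backend and simply asks pip which of the installed open-source packages have fallen behind their latest releases, failing the build if any have:

import json
import subprocess
import sys

def find_outdated_packages() -> list:
    # Ask pip which installed packages have newer releases available.
    result = subprocess.run(
        [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    outdated = find_outdated_packages()
    for pkg in outdated:
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
    # Fail the CI build if any dependency is stale, forcing a review
    # before the banking app ships with a known-outdated component.
    sys.exit(1 if outdated else 0)

Being outdated does not automatically mean being vulnerable, but a gate like this at least forces somebody to look before the app ships.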

So how can a bank avoid this situation?  In a theoretical sense, the easy answer is that it should use its own IT Department to create the app.  But this can be a costly proposition, so many banks choose to outsource the development of it, in the name of saving money. 

While this can be a good thing, it also poses grave risks as well.  For example, what if they have hired a web development team, such as one in India, that has not been properly vetted?

In this regard, the banks must take the vetting process very seriously.  They need to make sure that whoever they hire meets strict security requirements that are at least on par with, or even greater than, what the bank has in place. 

Further, the right controls must be put in place, in case any customer information and/or data is given for testing purposes.  In fact, the bank should take the initiative and responsibility to create a set of best practices and standards for their vetting process.

Another avenue that banks are looking at to further protect their as-a-Service offerings is the use of Generative AI.  One of the best ways that this has been used is to quickly detect any form of abnormal behavior that falls outside the baseline profile of the customer.  

Once this has been captured, the Generative AI model will trigger the account to be blocked almost immediately.  Generative AI is also great when it comes to halting a wire transfer that looks fishy, such as in the case of a Business Email Compromise Attack.
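To illustrate the baseline idea, here is a minimal sketch in Python.  It uses a classical anomaly-detection model (an Isolation Forest) rather than a full Generative AI pipeline, and the transaction features and numbers are entirely hypothetical:

import numpy as np
from sklearn.ensemble import IsolationForest

# Historical transactions for one customer, as hypothetical features:
# [amount_usd, hour_of_day, location_code]
baseline = np.array([
    [42.10, 9, 1], [15.75, 12, 1], [88.00, 18, 1],
    [23.40, 10, 1], [61.25, 19, 2], [34.90, 13, 1],
])

# Fit the customer's baseline profile; "contamination" is the assumed
# fraction of outliers in the training data.
model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

def should_block(txn) -> bool:
    # predict() returns -1 for anomalies and 1 for normal points.
    return model.predict([txn])[0] == -1

# A large wire at 3 AM from an unfamiliar location trips the alarm.
if should_block([9500.00, 3, 7]):
    print("Transaction held: block the account and alert a human analyst.")

In a real deployment, the features, thresholds, and retraining cadence would all need careful tuning against actual customer data.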

But with the good comes the bad.  For instance, a Cyberattacker can easily heist one of these models and modify it in a way that it will not detect fraudulent activity for a certain period of time.  Or worse yet, they can not only create a fake website, but they can also use Generative AI to create a Deepfake, which is a replication of a real person. 

They can use this to create a Digital Personality that the customer can interact with, but Social Engineering can be embedded here, so that a trusting dialog develops.  Once this has come to fruition, the Digital Personality can then be manipulated to prey upon the vulnerable state of mind of the customer and con them into giving out their personal information and data.

My Thoughts on This:

IMHO, for banks, no matter what their size or geographic location is, there must be a fine line drawn as to how much Generative AI should be used.  Perhaps creating a set of best standards and practices would be great here, spelling out where it can and cannot be used.

In the end, it is extremely easy to get swept away by the glamor that Generative AI brings to the table, but it is especially important to keep in mind, as in the case of the banks, that the human side is needed as well.

Back to my example again of my account being blocked.  Suppose the only way that it could be unblocked was by having a conversation with a Digital Person.  But for some reason, no matter how much I tried to convince it that it was really me trying to log in, it still would not unblock the account. 

But luckily, after waiting a few minutes, I was able to reach a real, live customer assistant, to whom I explained the situation.  The next second, my account was unblocked.

The equation for having a great level of security is to have a balance between technology and the human element.

Sunday, February 9, 2025

How A Generative AI UEBA Solution Can Power Your Defenses



I have written a lot about Generative AI in the past, in books, at my full-time job, in my freelancing gig, and on this blog site.  As mentioned, it brings both its good and bad sides with it.  But to stay positive today (despite what else is going on in the news), a great big advantage of Generative AI is its ability to churn through tons of data and provide the appropriate response to a query.  So, in this regard, it can be a great boon for Cybersecurity as well.

One instance of this is filtering through all the noise that is outputted in the log files from the network security devices that you may have implemented.  For example, these can include firewalls, network intrusion devices, routers, hubs, etc. 

They all present information that is especially useful to an IT Security team.  But the problem with this is that there are tons of it to comb through.  It can take an IT Security team days and even months to go through all of it. 

But by using Generative AI and properly training it, the model can sift through all of this very quickly, in fact in just a matter of a few minutes.  From here, it can then present the information that is relevant to the IT Security team.  One notable example of this is the filtering of what are known as “False Positives”.  These are the alerts and warnings that come through that are deemed to be illegitimate, or of exceptionally negligible risk.

A good model can detect all of this, and either completely discard them or archive them for later study.  From here, only the real alerts and warnings are presented to the IT Security team, which can then triage and respond to them appropriately. 

This almost eliminates the problem of what is known as “Alert Fatigue”.  This is where there are so many alerts to go through that one can get burned out.  This can have severe repercussions as well, as a burned-out employee could decide to totally give up and not even respond to anything.
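As a rough illustration of the idea, here is a minimal sketch of machine-learning-based alert triage in Python.  It uses a simple text classifier rather than a full Generative AI model, and the alerts and labels are hypothetical stand-ins for a real, much larger, hand-labeled history:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Past alerts, hand-labeled by the IT Security team (1 = real threat).
history = [
    "antivirus signature update completed",
    "outbound transfer of 4GB to unknown external host",
    "scheduled backup job finished successfully",
    "privilege escalation attempt on domain controller",
]
labels = [0, 1, 0, 1]

triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(history, labels)

new_alerts = [
    "nightly backup job finished",
    "repeated privilege escalation attempts detected",
]
for alert in new_alerts:
    # Probability that this alert represents a real threat.
    score = triage.predict_proba([alert])[0][1]
    if score >= 0.5:
        print(f"ESCALATE: {alert} (score {score:.2f})")
    else:
        print(f"Archive as likely False Positive: {alert} (score {score:.2f})")

With only a handful of training examples the scores here mean little; the point is the workflow:  learn from the analysts’ past verdicts, then surface only what looks real.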

Another key area where Generative AI can be used quite well is in a field called “User and Entity Behavioral Analytics”, also known as “UEBA” for short.  It can be technically defined as follows:

“User and entity behavior analytics (UEBA) is a cybersecurity solution that uses algorithms and machine learning to detect anomalies in the behavior of not only the users in a corporate network but also the routers, servers, and endpoints in that network.”

(SOURCE:  What is User Entity and Behavior Analytics (UEBA)? | Fortinet)

Deploying and using this kind of solution can be quite complex, depending upon how large your IT and Network Infrastructure is, and how many employees you have.  But simply put, UEBA is the science of tracking down any abnormal or unusual patterns in the usual flow of network traffic. 

A scenario where it is used most is in trying to determine any anomalies that fall outside of the baseline profile that you have established.  A great use case is when you have set a limit of only three login attempts before an account is locked out.

An outlier here is somebody who keeps trying repeatedly to log in.  Usually, this is a warning sign that a Cyberattacker is on the prowl by launching a Dictionary Attack, or it can be a frustrated employee who is legitimately trying to log into their device.  Whatever the situation might be, this must be investigated.  But once again, having to go through all the data can be a nightmare.  By incorporating Generative AI into your solution, any suspicious behavior that merits further attention by the IT Security team will be presented very quickly by the model.
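At its core, the login example boils down to a simple baseline check.  Here is a minimal sketch in Python, assuming a stream of (username, success) login events and the three-attempt limit mentioned above:

from collections import defaultdict

MAX_FAILED_ATTEMPTS = 3  # the baseline limit from the example above

failed_counts = defaultdict(int)

def record_login(user: str, success: bool) -> None:
    # Track consecutive failures; flag the account once the baseline
    # is breached, since that falls outside normal user behavior.
    if success:
        failed_counts[user] = 0  # reset on a good login
        return
    failed_counts[user] += 1
    if failed_counts[user] > MAX_FAILED_ATTEMPTS:
        # Could be a Dictionary Attack or a frustrated employee --
        # either way, surface it for the IT Security team to triage.
        print(f"ALERT: {user} exceeded {MAX_FAILED_ATTEMPTS} failed logins")

# Simulated stream: five straight failures should trip the alert.
for _ in range(5):
    record_login("jdoe", False)

A real UEBA product builds far richer baselines than a counter, of course, covering servers, routers, and endpoints as well as users, but the principle of flagging deviations from an established profile is the same.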

In fact, many of the Cybersecurity vendors are already baking this into their solutions, so there is no extra work required on your end.  All you need to do is feed in the data so the model can learn and create a baseline profile.  Once it has done this, you then need to set up the criteria for what is deemed to be abnormal behavior.

UEBA solutions are used quite heavily in Security Operations Centers (also known as “SOCs”), because of all the monitoring that must be done 24 X 7 X 365.  But some of the areas where they have hardly ever been deployed are the following:

*The Healthcare Industry

*Government Agencies

*The Educational Sector

The first and last ones are hit the hardest by the Cyberattacker, because legacy systems are still being used and funding is lacking, especially in the schools.  But the good thing here is that most of the UEBA solutions are now offered as SaaS-based products, which makes them affordable for just about any kind of entity. 

It is highly likely that UEBA will develop over time, especially as Generative AI quickly advances further.  Thus, if you decide to deploy it, you will have to make sure that you apply all the software patches and upgrades to it.

My Thoughts on This:

Making use of a UEBA solution is of course a no-brainer.  It is one of the best ways that you can defend your business from an Insider Threat.  It also comes in very handy when trying to secure those login credentials that are deemed to be “super user” (this falls under the realm of Privileged Access Management).  Consider these stats:

*In 2024, the average cost of a Data Exfiltration Attack rose from $4.4 million to well over $4.8 million, which is roughly a 10% increase.

*Over 70% of SOCs feel that they will miss a real threat among all the False Positives with which they are constantly being bombarded.

(SOURCE:  Behavioral Analytics in Cybersecurity: Who Benefits Most?)

But also, there are two key things to keep in mind:

*The baseline profile that you get is only going to be as accurate as the data you feed into the model for it to learn.

*While using an automated tool powered by Generative AI is advantageous, do not become overly dependent upon it.  Remember, great Cybersecurity takes an equal combination of both technology and the human element.

But best of all, a good UEBA solution will reduce “Alert Fatigue”, and help ensure a sense of proactiveness amongst your IT Security team.

Sunday, February 2, 2025

Will Generative AI Replace Human Penetration Testers? Find Out Here



Very often, I get asked the question:  “What is a Penetration Test?”  To make a long story short, I usually tell people that it is one of the best ways to see where the vulnerabilities of a business lie, and how to fix them quickly. 

Of course, those who are in Cyber know that there is a lot more involved than that.  I have been writing about Penetration Testing for years, and have even published a book on it and a huge eBook that is available to buy (I think it’s only $9.95 for the Kindle version). 

I also have very good friends and even business partners who are very good Penetration Testers.  I have learned a lot from them, especially how they have crafted their art.  But over the last couple of years, the conversations have shifted to what it means to bring Generative AI into this realm of Cyber.  Before the advent of this, Penetration Testing was done manually.

Meaning, one would hire a company that specializes in doing this, and there would be actual human beings involved, all the way from conducting the offensive exercises to writing the final report for the client.  But now, Generative AI is taking root here, and people have started to question just how dependable it is.

Well, this is a difficult question to answer right off the cuff, as it will depend primarily upon how people view Generative AI in general.  But to help you form your own point of view, let us look at both the advantages and disadvantages of it.

The Advantages:

1)     Automation:

Conducting an actual offensive exercise takes a lot of focus, attention, and brain power.  Some of the tasks that are involved here can be quite repetitive, thus detracting from the human concentration that is needed.  But here, Generative AI can be used to automate some of these tasks (see the sketch after this list), thus leaving the Penetration Tester(s) to focus on the big picture, which is finding the gaps and recommending to the client the best way to fix them.

2)     Scenarios:

Before the offensive exercises are executed, the Penetration Tester(s) must map out the targets that they want to break into, from both an ethical and legal perspective.  Of course, nothing can be done without the explicit permission of the client, and it must be written out in detail in the contract.  The primary objective of the Penetration Tester(s) is to take on the mindset of an actual Cyberattacker.  While the ones that I personally know do a great job of this, sometimes extra help can be of great use.  In this regard, this is where Generative AI can play a huge role.  For example, it can model other kinds of testing scenarios that the Penetration Tester(s) may not have even thought of before. 

3)     Cost:

At one point in time, I was an actual reseller for a company that made a Penetration Testing package that was completely automated.  When I met with the sales rep who oversaw the Chicago market, I asked him what the price was for it.  He said it was $50,000.00 to buy a license for one year.  When I heard that, my mouth dropped, and I was thinking, WTF????  Who can afford that?  But then he explained to me in more detail that for just one flat fee, a company that buys this license can run an unlimited number of tests.  This stands in stark contrast to a Penetration Test that is done manually, which can cost as much as $30,000.00 - $40,000.00 for just one test.  Now, imagine you had to do this once a quarter:  four manual tests at, say, $35,000.00 each come to $140,000.00 a year, nearly three times the flat license fee.  So yes, $50K is a lot to put up front, but this automated tool that is powered by Generative AI can pay for itself in the end, depending upon how many times you make use of it.

4)     Speed:

A Penetration Test that is powered by Generative AI can run a comprehensive offensive exercise in just a matter of a few hours, versus one that is done manually, which can take weeks, or even months, depending upon the scope of the actual test.  This is especially true for large scale environments.
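To give a feel for what automating a repetitive task means in practice (as promised in the Automation point above), here is a minimal sketch of one such task, a TCP port sweep, in Python.  The target address is hypothetical, and a script like this should only ever be pointed at hosts the client has authorized in writing:

import socket

TARGET = "10.0.0.5"  # hypothetical, client-approved lab host
COMMON_PORTS = [21, 22, 25, 80, 110, 143, 443, 3306, 3389, 8080]

def scan(host, ports):
    # Return the subset of ports that accept a TCP connection.
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(f"Open ports on {TARGET}: {scan(TARGET, COMMON_PORTS)}")

Grinding through thousands of ports and hosts like this is exactly the kind of repetitive work worth handing off to automation, while the human tester interprets what the open ports actually mean.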

The Disadvantages:

1)     Mistakes:

Yes, human Penetration Testers can make mistakes, but tools that are powered by Generative AI can make more of them, and worst of all, you may not even know about it.  For example, a fully automated Penetration Testing tool may hit a target which has not received client approval, which could be a prime opening for a major lawsuit.  Or worse yet, there could be a misconfiguration in the tool itself, which could lead to a huge data leakage fiasco.

2)     Data:

Using a tool that is powered by Generative AI sounds sexy and all, but there is a dark side to it.  You must train it, and to do so, you need a large number of datasets in order to keep the models optimized at all times.  Even more unglamorous, you must make sure that they are cleansed so that they do not give the wrong output.  For instance, suppose that a fully automated tool hits a target and returns an output stating that no vulnerabilities were found, when in fact there really were some.  This can be blamed on the lack of cleansed datasets, which caused the output to be skewed.

3)     Black Box:

Generative AI, and for that matter all aspects of AI in general, such as Neural Networks, Machine Learning, and Computer Vision, are all subject to what is known as “Garbage In, Garbage Out”.  Meaning, the quality of whatever you feed into the models determines the quality of the output that you get.  In turn, this creates the phenomenon known as the “Black Box”.  Meaning, you can see what goes in and what comes out, but you do not know what happens in between.  Many of the AI vendors hold this close to their chests, as these are primarily the algorithms that drive their products.  But, while it is great that a client will get the outputs, they also want to know how the automated tool produced all of that.  What would you tell them in that case?  If I were paying a large amount of money for an automated Penetration Test, I would for sure want to know that.

4)     Cloud:

For a company that migrates its entire IT and Network Infrastructure into the Cloud, the process can be nebulous.  Even after the migration has been completed, things can still be complicated, depending upon how much and what has been moved over.  As a result, an automated tool will not work well in this kind of environment, because each Cloud deployment varies quite a bit from the next.  Therefore, if a client wants a Penetration Test done in such an environment, they are far better off hiring human Penetration Testers.  This is especially true for web-based applications.

My Thoughts on This:

So, the next big question is:  “Will Generative AI replace human Penetration Testers?”  My answer to this is a blatant no.  Human intervention is still required, especially when it comes to evaluating the results of the offensive exercises and conveying them in a written format to the client.  Heck, even the people who use ChatGPT should always check the outputs to make sure they sound realistic before sharing them with anybody else.

If you are in the market to have an actual Penetration Test done at your business, my first piece of advice is to talk to an actual human being first to see what you need to get done.  Don’t simply spend the $50K to buy an automated tool.  As a client, you also need to understand what is being done to your environment and how the vulnerabilities will be found.  These questions can best be answered only by a real, live human.
