Sunday, April 14, 2024

The Impacts Of Liquid Cooling On AI Datacenters

 


When we think of AI, hear about it, or even use it, we often think of ChatGPT.  While in a way this is correct, Generative AI (from which ChatGPT is derived) is just a subset of AI.  There are other areas as well, such as Machine Learning, Computer Vision, Neural Networks, Large Language Models, and Natural Language Processing.

But there is yet another area of AI which receives almost no public attention whatsoever:  the companies that own the datacenters which house the servers that host the AI applications.  A point of clarification is needed here.  Although many AI applications are now SaaS based, and in fact you can even create and host your own AI app on Microsoft Azure, you still need a physical server to host all of this software.

Because of the huge growth in AI, there has in turn been an increased demand for datacenters.  In fact, if you watch a business channel like CNBC, you will even see them talk about the stocks of some of the companies tied to this boom.  Some names that come to mind here include Vertiv, Advanced Micro Devices, Nvidia, Iron Mountain, etc.

The demand for datacenters is going to be red hot in the coming years.  In fact, it is predicted that the entire AI market will be worth well over $1.3 trillion in revenue alone by 2030.  This represents a staggering compound annual growth rate of over 37% from today’s numbers.

(SOURCE:  https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-market-74851580.html)
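
The arithmetic behind a projection like that can be sketched in a few lines of Python.  The 2023 base figure of roughly $150 billion is an assumption for illustration only (it is not stated in this post); the point is simply to show how a ~37% compound annual growth rate snowballs into a roughly ninefold increase over seven years:

```python
# Sketch: compound annual growth, assuming a ~$150B base in 2023.
# Both the base figure and the 7-year horizon are illustrative assumptions.
def project_market(base_billions: float, cagr: float, years: int) -> float:
    """Project a market size forward at a constant annual growth rate."""
    return base_billions * (1 + cagr) ** years

# ~$150B growing at 37% per year for 7 years (2023 -> 2030)
projection = project_market(150.0, 0.37, 7)
print(f"Projected market: ${projection:,.0f} billion")
```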

Given all of the servers and networking technologies that a datacenter has to contain, the temperature inside can get very hot.  As a result, these physical infrastructures need to be cooled on a 24 X 7 X 365 basis.  But despite the profits being made, the costs of cooling take a big chunk out of them:  cooling can account for almost 40% of a datacenter’s electricity bill.

(SOURCE:  https://www.computerweekly.com/news/366568452/Datacentre-operators-face-capacity-planning-challenges-as-AI-usage-soars)

Because of these staggering costs, many datacenters are now opting for another form of cooling beyond the traditional ones.  This approach makes use of water, and is referred to as “Liquid Cooling”.  At least here in the United States, datacenters rely upon a freshwater supply for cooling, which is the same source that provides us with our drinking water.  Although we tend to think that water is a plentiful resource we will never run out of, consider these statistics:

*The typical datacenter uses between 1 and 5 million gallons of water per day.

(SOURCE:  https://www.washingtonpost.com/climate-environment/2023/04/25/data-centers-drought-water-use/)

*Almost a third of the world’s servers are located here in the United States.

(SOURCE:  https://www.usitc.gov/publications/332/executive_briefings/ebot_data_centers_around_the_world.pdf)
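
To put the first of those statistics into perspective, a quick back-of-the-envelope calculation (a sketch, using only the daily figures cited above) annualizes a single facility’s water draw:

```python
# Sketch: annualizing the 1-5 million gallons/day figure cited above.
GALLONS_PER_DAY_LOW = 1_000_000
GALLONS_PER_DAY_HIGH = 5_000_000

def annual_gallons(daily: int, days: int = 365) -> int:
    """Scale a daily consumption figure up to a full year."""
    return daily * days

low = annual_gallons(GALLONS_PER_DAY_LOW)
high = annual_gallons(GALLONS_PER_DAY_HIGH)
print(f"A single datacenter: {low:,} to {high:,} gallons per year")
```

That is 365 million to over 1.8 billion gallons per year for just one facility.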

But now we are facing an imminent water crisis, brought on by two fronts:

*Water shortages that are now happening because of global warming and an increased demand for drinking water by a growing population.

*The increased number of Cyberattacks against our Critical Infrastructure, namely our water supply lines.  More details on these kinds of attacks can be seen at the link below:

https://www.cnn.com/2023/12/01/politics/us-water-utilities-hack/index.html

So now the trick is for datacenters to start relying upon other means of procuring their water resources.  Consideration has been given to the following options:

*Using reclaimed sewer water, and even water from the oceans.

*The deployment of more advanced freshwater tracking technologies to get an accurate view of just how much fresh water is actually being consumed.  More information about this can be found at the link below:

https://datacenters.lbl.gov/water-efficiency#:~:text=Key%20best%20practices%20for%20water,Evaluate%20chillers%20for%20replacement

*Procuring grants and other sources of funding from the Federal Government to explore alternate means of using less fresh water while still maintaining the levels of cooling that a datacenter needs.  In fact, the Department of Energy just announced a grant of $40 million in this regard.  Details on this can be seen at the link below:

https://www.energy.gov/articles/doe-announces-40-million-more-efficient-cooling-data-centers

*Building out datacenters in areas of the United States where the temperature is cooler and there is an abundant supply of other forms of water, such as near the Atlantic and Pacific Oceans, the Gulf of Mexico, or the Great Lakes region.

But from the standpoint of Cybersecurity, more effort and initiative has to be taken to shore up the defenses of our water supply lines.  This is not just a local or state issue; rather, it is something that must be addressed and fully funded by the Federal Government.  But it is important to keep in mind that much of our Critical Infrastructure is made up of technology and equipment that was built back in the 1970s.

In fact, many of the vendors that made these parts are probably no longer in existence.  So, it is not just a matter of ripping out the old equipment and putting in new to help solve the Cyber problem.  At the present time, this simply will not work.  The only option we have is to add more layers of security, but this has to be done very carefully, in order to ensure that whatever is deployed will be interoperable and compatible with the legacy equipment.

My Thoughts On This:

So now the big question is:  “What if my datacenter runs out of a fresh water supply, or is hit with a Cyberattack?”  The fundamental answer comes down to proper planning.  You need an Incident Response Plan, a Disaster Recovery Plan, and a Business Continuity Plan to address this.  Two areas of focus should be:

*Securing a secondary source of freshwater for your datacenter in case of any interruptions.

*Beefing up your lines of defense in case you are indeed hit with a Cyberattack in which your cooling systems are the primary target.

So as you can see, for all of this to work, it is going to take a huge partnership between the private and public sectors, and even academia.  It can happen, but only over time, and time is a luxury we do not have right now.

Finally, for more details on how our precious water supply systems can be further protected, click on the link below:

https://www.scmagazine.com/perspective/heres-how-we-can-make-water-utilities-more-secure

Sunday, April 7, 2024

The Key Fundamental Cyber Question That Needs To Be Asked And Answered

 


Today’s blog is a little bit different than the others, and yes, that means no AI!!!!  This is an issue that I have addressed many times before, even in one of the books that I wrote on Risk and Cybersecurity Insurance.  The topic is whether a CISO really understands what they are getting when they purchase a holistic, end-to-end Cyber based solution.

What got me onto this topic was an article I read this morning about a Cyber Executive who interviewed many people in the industry to see what kinds of trends exist in their buying patterns.  Here is what he found:

*Not planning the solution in its entirety.  In other words, not asking questions and evaluating the product and/or service to make sure that it addresses all of their needs.  CISOs very often look to cure the symptoms, and not the actual cause.  Once they have found something that can do this, they immediately jump at it without thinking clearly about whether it is what they really need.

*CISOs are often taken in by all of the bells and whistles that come with an all-inclusive security package.  For example, if a dashboard looks sleek, that alone can decide whether they buy it or not.  Or now, the big thing is Generative AI.  If the package comes with it, buy it!

*CISOs very often don’t take a close look at the triaging process and the legitimate warnings and alerts that come through.  Very often, they leave this to their IT Security teams to filter through.  But IMHO, this is the wrong approach to take.  It is this very process that paints the entire picture of what exactly is going on in the IT and Network Infrastructure.  It’s like taking aspirin to stop chest discomfort without seeing the doctor to determine the underlying cause and whether further action is needed.

*Another key area of weakness is that CISOs do not adopt and enforce a software patching process.  And if they do have a process in place, they often rely on automation, which may or may not work.

So, what does the author recommend for how a CISO should make their purchasing decisions?  He starts by saying that an organization first needs to have a comprehensive Security Program in place, which should address these fundamental questions:

*Examining all current processes for your lines of defenses, and asking this question:  “Why are we using it?  Give me the reasons.”

*Your current strategies for fending off an imminent threat, and for dealing with the threats that are already lurking in your IT and Network Infrastructure once you finally discover them.

*How quick is the response time?  This is where the key metrics of the “Mean Time To Detect” and the “Mean Time To Respond” become especially critical.

*What are the current methods for Incident Response, Disaster Recovery, and Business Continuity?  Are there even plans in place, and if so, how often have they been rehearsed?

*Who is part of the Incident Response team, and do they know what they need to do if they are called upon during the time of a security breach?

To help the CISO address all of these issues, and even more, he recommends following the Security Framework as outlined by NIST.  The article can be viewed at the link below:

https://www.darkreading.com/cybersecurity-operations/biggest-mistake-security-teams-make-when-buying-tools

He gives his own model for Cybersecurity, which is as follows:

“Program = Tool + People + Processes + Goals”

(SOURCE:  https://www.darkreading.com/cybersecurity-operations/biggest-mistake-security-teams-make-when-buying-tools)

In my writings, I have proposed something similar, but with fewer variables.  It is as follows:

Great Cyber = People + Technology

In other words, to have truly effective lines of defense for your business, you cannot rely too much upon one side or the other.  You need both, as the model proposed by the author also suggests.

Towards the end of the article, the author also points out two key areas the CISO also needs to address in crafting their plans.  They are as follows:

1)     Involve everybody:

In Corporate America today, people still think that all issues related to technology fall onto the shoulders of the IT Department.  While the proverbial buck does stop there, it is important to remember that each and every employee has to toe the line for the collective good!!!  In other words, “Cyber Hygiene” is not just left to the IT Department.  Everybody has a role in this:  making sure, for example, that they recognize the signs of a Phishing Email and discard it, or creating long and complex passwords with the help of a Password Manager.  It takes all of the employees to fill the cracks!!!
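
As a quick illustration of that last point, here is a minimal sketch of how a long, complex password can be generated with Python’s cryptographically secure random number generator, much as a Password Manager would do (the 20-character default length is an arbitrary choice for illustration):

```python
import secrets
import string

# Sketch: generating the kind of long, complex password a Password
# Manager would produce, using a cryptographically secure RNG.
def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```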

2)     Conduct Risk Assessments:

This is one area which I have belabored heavily.  In order to lay out your Security Framework, you first need to identify all of the vulnerabilities that are present.  Simply put, this means inventorying all of your digital and physical assets, and ranking them on a numerical scale in terms of their degree of vulnerability.  Of course, those with the highest rankings should receive immediate attention, by either putting in new controls or upgrading the existing ones.  Also, by conducting this kind of Assessment, you will know where all of your security tools are deployed, and from there, you can decide if you really need them or not.  This is called decreasing your Attack Surface, and it will enforce the efficient use of the tools.  Remember, by having too many of them, you widen the gap for the Cyberattacker to penetrate.
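
The ranking exercise described above can be sketched in a few lines of Python; the asset names and 1-10 vulnerability scores below are hypothetical placeholders:

```python
# Sketch: ranking an asset inventory by vulnerability score, highest
# first, so the riskiest assets get attention first.
assets = {
    "customer database": 9,
    "public web server": 7,
    "internal wiki": 3,
    "test environment": 5,
}

# Sort by score, descending: the top of the list gets remediated first.
ranked = sorted(assets.items(), key=lambda item: item[1], reverse=True)
for name, score in ranked:
    print(f"{score}/10  {name}")
```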

My Thoughts On This:

To be honest, I agree with the author on these points.  As a CISO, if you are considering procuring a new solution, ask this basic question:  “Am I really addressing the underlying issue, or just the symptom?”  By thinking along these lines, you and your IT Security team will go a lot further in staying ahead of the Cyberattacker.

Sunday, March 31, 2024

Why Hackers Are Now Breaking Their Own "Ethics"

 


It was just yesterday that I was writing a tentative outline for a possible Continuing Education (CE) course at a nearby Junior College.  The proposed topic is Penetration Testing, and I even wrote a blurb in the outline about how Pen Testers are actually “Ethical Hackers”.  If you are new to Cybersecurity, you may be wondering, “OK, what exactly is hacking that is Ethical?”  Well, it does exist, and here is a technical definition of it:

“Ethical hacking is the use of hacking techniques by friendly parties in an attempt to uncover, understand and fix security vulnerabilities in a network or computer system.”

(SOURCE:  https://www.ibm.com/topics/ethical-hacking)

So as you can see from the above definition, the operative word is “friendly”.  In the world of Penetration Testing, the people who do the actual hacking belong to what is known as the “Red Team”.  But they can only carry out their planned hacks with explicit, written consent from the client that they are doing it for.  But this now brings up another key point.

Even among the traditional “bad guy” hackers, a code of “Ethics” evolved as well.  Hacking has been around since the 1960s, and over that time, a certain code of conduct was created.  Examples of this include the following:

*Any entity involved in healthcare and the delivery of life-saving services was completely off limits.  This means primarily hospitals and ERs.

*Critical Infrastructure could not be touched.  If it were, it would be considered an act of war by the impacted country, with the repercussions unthinkable (perhaps even a nuclear war).  But it is important to keep in mind here that Cyberattackers are pushing the envelope as far as they can, the prime example being the Colonial Pipeline attack.  Although the actual pipeline was not affected, it did affect the financial markets and the supply chain in a cascading effect.

More details on this can be seen at the link below:

https://www.cisa.gov/news-events/news/attack-colonial-pipeline-what-weve-learned-what-weve-done-over-past-two-years

*Individuals and businesses that were going to become a victim could only be hit once, and no more.

*The COVID-19 pandemic also ushered in a new era of “bad guy” hacker Ethics, especially in the way of not targeting testing places and those entities providing the much-needed vaccinations.

But after the pandemic faded away (it is still technically here, though), the rules of “Ethical Hacking” among Cyberattackers have changed greatly.  This has been driven largely by the covertness, stealth, and sophistication of Ransomware attacks.  For example, we are not just seeing computers being locked up and files encrypted; we are now seeing it in its worst form ever.  This includes the selling of PII datasets on the Dark Web and the conducting of Extortion style Attacks.

A lot of this disappearance of good-faith gestures in “bad guy” hacking has been catalyzed by two main factors:

1)     The increased interconnectivity with just about everything (primarily brought on by the IoT).

2)     The advent of Generative AI.

In my own view, it is the latter which is the dominant force here.  For example, a Cyberattacker can easily create the source code for a malicious payload that can be deployed to launch a Supply Chain Attack (like the SolarWinds hack), or even use it to create a Phishing Email where it is almost impossible to tell the difference between a real one and a fake one.

Another unfortunate catalyst driving this new trend is the fact that many hackers are now much younger.  In fact, with so much that is available online and on the Dark Web, even a novice still in junior high school can rent a service called “Ransomware as a Service” and have a third party launch a devastating attack for literally pennies on the dollar.

Also, in hacking circles, it has even become a badge of honor to attack high value targets, such as companies in the Fortune 500.  In fact, the Cyberattackers in this regard have become so brazen that they will even leverage the media to their own benefit in order to fully advertise what they have done.  In a horrible sense, this is how a Cyberattacker adds to their “resume”.  More details on this can be seen at the link below:

https://www.darkreading.com/threat-intelligence/ransomware-gangs-pr-charm-offensive-pressure-victims

But this has also led to the takedown of some of the more traditional Ransomware groups, such as “BlackCat”.  Heck, Cyberattackers are even snitching on their own brand so that they can remain at the top of the hacking list, with the “best reputation” possible.  More information about this can be found at the link below:

https://www.darkreading.com/cybersecurity-operations/feds-snarl-alphv-blackcat-ransomware-operation

My Thoughts On This:

Back in the day, hacking simply meant that somebody would break into a computer system just to see what it contained; it was just a “curiosity” based attack.  But as described in this blog, this is no longer the case.  Going into the future, simply expect the worst.  Things will not get any better.  The more you try to fortify your systems, the more the Cyberattacker is going to pound on your door.

For more information on what is expected in the way of hacks for 2024, click on the link below:

http://cyberresources.solutions/blogs/2024_Hacks.pdf

Sunday, March 24, 2024

How To Improve Your Code Signing Process: 6 Golden Tips

 


With the advent of AI, one of the biggest issues facing businesses and individuals alike is making sure that whatever you receive is actually legitimate.  This could be in the form of an Email, a phone call, or even a piece of physical mail.

With all of the advances that are taking place, especially in Generative AI, illegitimate content that looks real but is phony has become very hard to discern.  For example, as the next Presidential Election cycle comes up in just a matter of months, the issue of Deepfakes will come to the forefront.

This is where the Cyberattacker will try to create a video impersonating one of the candidates, which will look like the real thing.  From here, the video will direct you to a fake site, which will ask you to enter your username and password, which the Cyberattacker will then collect.

More than likely, you will also be prompted to make a donation of sorts, but that money, once submitted, will be sent over to a phony bank account, never to be recovered.

Another example of this is the source code that software developers use to create web based applications.  Very often in this regard, open-source APIs are used to help create and deliver the project to the client more quickly.

But the problem here is that the libraries and repositories that host them don’t keep them updated, or replace them when newer versions of the same APIs come out.

To help alleviate this problem, a procedure called “Code Signing” is used.  This can be technically defined as follows:

“Code signing is the process of applying a digital signature to a software binary or file. This digital signature validates the identity of the software author or publisher and verifies that the file has not been altered or tampered with since it was signed. Code signing is an indicator to the software recipient that the code can be trusted, and it plays a pivotal role in combating malicious attempts to compromise systems or data.”

(SOURCE:  https://www.digicert.com/faq/code-signing-trust/what-is-code-signing)
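
To make the sign-then-verify flow concrete, here is a deliberately simplified Python sketch.  Real code signing uses asymmetric keys and X.509 certificates issued by a Certificate Authority; this example substitutes a shared-secret HMAC purely so it can run on the standard library alone, and the key and binary are made-up placeholders:

```python
import hashlib
import hmac

# Simplified stand-in for code signing: sign a binary, then verify
# that it has not been altered since signing. (Real code signing uses
# asymmetric X.509 certificates, not a shared HMAC key.)
def sign(binary: bytes, key: bytes) -> str:
    return hmac.new(key, binary, hashlib.sha256).hexdigest()

def verify(binary: bytes, key: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(sign(binary, key), signature)

key = b"demo-signing-key"                       # hypothetical key material
release = b"\x7fELF...application binary..."    # hypothetical binary

sig = sign(release, key)
print(verify(release, key, sig))                # untampered: True
print(verify(release + b"malware", key, sig))   # tampered: False
```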

Put in simpler terms, this is a more sophisticated way of confirming that the source code (or a piece of it) that you receive is legitimate and actually comes from the real source.  But this too has been fraught with difficulties, as the recent SolarWinds hack showed.  As an extra effort here, the Certificate Authority/Browser Forum, also known as the “CA/B Forum”, launched a new set of guidelines for maintaining code-signing certificates.  More information about this can be seen at the link below:

http://cyberresources.solutions/blogs/Code_Signing.pdf

But apart from those guidelines, there are a certain number of steps that you and your IT Security team should take as an extra, proactive measure.  Here are some suggested guidelines:

1)     Secure the Keys:

Although this option is available for both On Premises and Cloud based deployments, it is far more efficient if you have the latter.  For example, Microsoft Azure has a dedicated service (Azure Key Vault) that you can deploy for this very purpose.

2)     Access Control:

The use of Role Based Access Control, also known as “RBAC”, is very important here.  For example, you would give a Network Administrator control to maintain the servers at your business, but you would not give the same rights to your administrative assistant.  Therefore, the Code Signing Process should be limited to only those individuals that are intimately involved in the creation of the source code.  Further, all of the rights, permissions, and privileges that you give them must be monitored closely.  All of this falls into an area known as Privileged Access Management (PAM), which will be examined in a future blog.
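
A minimal sketch of such an RBAC check might look like the following; the role names and permission sets are hypothetical:

```python
# Sketch of a Role Based Access Control check for code signing.
# Only roles explicitly granted "sign_code" may sign releases.
ROLE_PERMISSIONS = {
    "network_admin": {"maintain_servers"},
    "release_engineer": {"sign_code", "rotate_keys"},
    "admin_assistant": set(),
}

def can(role: str, permission: str) -> bool:
    """Return True only if the role was explicitly granted the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can("release_engineer", "sign_code"))   # True
print(can("admin_assistant", "sign_code"))    # False
```

Note that an unknown role defaults to no permissions at all, which is the deny-by-default posture you want.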

3)     Implement a Rotation Schedule:

As the name suggests, you should never use the same Code Signing Key over and over again.  Best practices here would mandate that for each source code release you make, a new set of Keys be created.  If even one Key is compromised, all releases signed with that Key will also be affected.
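
The rotation practice above can be sketched as follows.  The keys here are random placeholder bytes; a real deployment would instead request a fresh certificate from its Certificate Authority for each release:

```python
import secrets

# Sketch: minting a brand-new key for every release, so a single
# compromised key only exposes the one release signed with it.
release_keys: dict[str, bytes] = {}

def key_for_release(version: str) -> bytes:
    """Generate and record a fresh 256-bit key for each release."""
    release_keys[version] = secrets.token_bytes(32)
    return release_keys[version]

k1 = key_for_release("v1.0.0")
k2 = key_for_release("v1.0.1")
print(k1 != k2)   # each release gets its own key
```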

4)     Have Time Stamps:

For each source code release, the Code Signing Key must have a time stamp on it, and the sender should notify you when it has been sent.  Anything with a long time interval between signing and arrival should be a big red flag, as this could indicate the source code has been altered maliciously.
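
A sketch of that freshness check, assuming a hypothetical three-day threshold (the threshold is an assumption, not a standard):

```python
from datetime import datetime, timedelta, timezone

# Sketch: flag a signed release whose timestamp is suspiciously old
# by the time it arrives. The three-day threshold is an assumption.
MAX_AGE = timedelta(days=3)

def is_suspicious(signed_at: datetime, received_at: datetime) -> bool:
    return (received_at - signed_at) > MAX_AGE

now = datetime.now(timezone.utc)
print(is_suspicious(now - timedelta(hours=2), now))   # False: fresh
print(is_suspicious(now - timedelta(days=10), now))   # True: red flag
```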

5)     Check the Integrity:

Always check the integrity of the Code Signatures.  If more than one signature is required, confirm the validity of each one before the software is released.

6)     Keep it simple:

Whatever methodology you make use of for your Code Signing Procedures, make sure you keep it simple, so that not only will it be quick to deploy but whatever security policies you use to keep it safe will also be easily enforced.  Just as important, make sure that you keep this process centralized, so that whoever from the IT Security team will be managing this will have clear transparency into it. 

My Thoughts On This:

Other key factors that need to be taken into consideration include the following:

*Mapping Policies

*The type of Certificate Authority that you want to make use of

*A defined approval process

*Setting up expiry dates for the Code Signing Keys

*The kinds of Cryptographic Algorithms you want to use

Also keep in mind that the policies and procedures that you have in place will need to be evaluated and updated on a regular basis, as the Cyber Threat Landscape keeps changing and becoming more complex.

Sunday, March 17, 2024

How To Conduct A Quick Cyber Assessment - 4 Golden Tips

 


Whenever a business is hit by a Cyberattack, the first priority of course is to restore mission critical applications as quickly as possible.  From there, it is all about dealing with the fallout, especially when it comes to facing customers and possibly law enforcement.  And then comes the detailed forensics investigation to examine what led to the breach, and how it can be avoided in the future.

But there is still a question that will linger in your mind:  “Who was the Cyberattacker that launched this variant against my business?”  While it is easier now to determine this when compared to the past, it is still a 50-50 proposition.  And in the end, you may not even think it’s worth it, because after all, if the Cyberattacker is in a foreign country, how will you bring them to justice?

In fact, this entire process that I have just portrayed is technically called “Attribution”.  More information about this can be seen at the link below:

https://www.darkreading.com/cyberattacks-data-breaches/how-to-identify-cyber-adversary-standards-of-proof

But maybe it’s time to take a step back now and assess the chances of you becoming a victim.  I don’t mean that you have to follow a NIST or CISA based framework to the letter, but rather take a simple, real-world approach to it.  Here are some tips to do this:

1)     Examine your own business:

Take very close stock of your business model.  Take a close look at all of the digital and physical assets that you have, and then ask yourself this very basic question:  “What would a potential Cyberattacker go after?”  Of course, this will primarily depend upon what you have, but also keep in mind that a Cyberattacker will try to get access to something low on the totem pole in order to get to something much higher and more valuable, like the database that holds the passwords of your employees and customers.  In other words, follow this quote:

               “Know your enemy and you will win a hundred battles; know yourself and you will win a thousand."

               (SOURCE:  https://www.darkreading.com/cyberattacks-data-breaches/how-to-identify-cyber-adversary-what-to-look-for)

2)     Your security tools:

This is a subject that I have written about before, on many occasions.  But in this instance, have your IT Security team take a quick look at what you really have, and then ask yourself these questions:

*How many brands come from just one vendor?

*How many come from multiple vendors?

*Is it possible to cut down on what I have and strategize?

*How much time does it take to make sure that each device is always optimized?

If your business still has the traditional Perimeter Defense model, then these security devices will be amongst the first targets for the Cyberattacker to go after.  After all, once they break through this, they can get access to just about anything.  But if you have the Zero Trust Security model, then this is entirely different.  The bottom line here is that you want to consolidate all of your security tools and deploy them in the most strategic areas.  And if possible, try to stick to just one or two vendors for all of this.  It will make it a lot easier for the IT Security team to manage, and they will not have to parse through so many varying log output files.  The moral of the story here:  with too many tools, your attack surface is greatly increased!!!
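
The vendor questions above lend themselves to a quick inventory script.  This sketch (with a hypothetical tool list) tallies how many tools come from each vendor, which is the first step toward spotting consolidation opportunities:

```python
from collections import Counter

# Sketch: tally security tools by vendor to spot consolidation
# opportunities. The inventory below is hypothetical.
inventory = {
    "firewall": "VendorA",
    "SIEM": "VendorB",
    "EDR": "VendorC",
    "email gateway": "VendorA",
    "VPN": "VendorD",
}

per_vendor = Counter(inventory.values())
print(f"{len(per_vendor)} vendors for {len(inventory)} tools")
for vendor, count in per_vendor.most_common():
    print(f"  {vendor}: {count} tool(s)")
```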

3)     Timing:

Examine how long it takes you to actually detect and respond to a security breach.  Believe it or not, it takes a business an average of about seven months to do this.  The metrics that reflect this are known as the “Mean Time To Detect” and the “Mean Time To Respond”, also known as “MTTD” and “MTTR”, respectively.  You will of course want to respond to a breach and contain it as soon as possible.  But try also to set specific goals for yourself.  For example, it should take no longer than three hours to detect and contain a breach, should it ever happen.
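
The two metrics can be computed directly from your incident records.  This sketch uses made-up timestamps for two hypothetical incidents:

```python
from datetime import datetime, timedelta

# Sketch: computing Mean Time To Detect (MTTD) and Mean Time To
# Respond (MTTR) from incident records. The incident data is made up.
incidents = [
    # (breach occurred, detected, contained)
    (datetime(2024, 1, 1), datetime(2024, 1, 3), datetime(2024, 1, 4)),
    (datetime(2024, 2, 1), datetime(2024, 2, 7), datetime(2024, 2, 9)),
]

def mean_delta(pairs):
    """Average the time elapsed across a list of (earlier, later) pairs."""
    total = sum((later - earlier for earlier, later in pairs), timedelta())
    return total / len(pairs)

mttd = mean_delta([(occurred, detected) for occurred, detected, _ in incidents])
mttr = mean_delta([(detected, contained) for _, detected, contained in incidents])
print(f"MTTD: {mttd}, MTTR: {mttr}")
```

Tracking these two numbers over time tells you whether your detection and containment are actually improving.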

4)     Examination:

Finally, take a look at your own IT and Network Infrastructure.  For instance, are you still 100% On Prem, or in the Cloud, using something like AWS or Microsoft Azure?  Or are you using a Hybrid based approach?  If you are still On Prem, you are putting your business at grave risk, IMHO.  You are far better off going to a total, 100% Cloud based infrastructure.  At least with this, you will have, for the most part, all of the tools readily available to protect your infrastructure.

My Thoughts On This:

Finally, make some time to study the various methods that Cyberattackers have used in the past to launch their malicious payloads.  Most of this should be available online, especially from either NIST or CISA.  Remember to take the above-mentioned steps with a holistic approach, and above all, be honest with yourself when you do this kind of informal assessment.

Saturday, March 16, 2024

ChatGPT Versus Gemini: Which One Comes Out On Top???

 


I have been writing about AI a lot lately, especially when it comes to Generative AI.  In fact, I am authoring an entire book on the subject.  But somehow or other, we tend to think that ChatGPT is the only Gen AI tool out there for the consumer.  However, this is far from the truth.  There are many others, and one that is starting to gain some attention is from Google, called “Gemini”.  It was formerly known as “Bard”.

Interestingly enough, I came across an article this morning which compares the two in a side-by-side matchup.  So here we go:

1)      The creation of diagrams:

If you are a technically oriented person like I am, then creating diagrams in your content (whatever it may be) is important to illustrate key points.  I am not a very good graphics designer, so I rely heavily upon the tools that I have available to me in Word.  But which of these AI tools is better?  The author felt that Gemini did a far better job of composing a technical diagram, whereas ChatGPT totally fizzled out.  In fact, it suffers from a phenomenon called “Hallucination”.  What is that, you may be asking?  Well, here is a definition of it:

“AI hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.”

(SOURCE:  https://cloud.google.com/discover/what-are-ai-hallucinations#:~:text=AI%20hallucinations%20are%20incorrect%20or,used%20to%20train%20the%20model.)

In other words, you don’t get the output that you were hoping for, despite the fact that the Gen AI model has probably been trained over 100X.

2)     Explaining Diagrams:

If an end user sees a diagram and doesn’t understand it, their first inclination will be to ask for help interpreting it.  Of course, this is where Gen AI can come into play.  The author of the article found that while both ChatGPT and Gemini suffice for this need, the latter is a bit less wordy than the former.

3)     Examining Log Files:

One of the lifebloods for the IT Security team in figuring out if anything is going wrong is examining the log files that have been output by the network security devices.  It can take a long time to do this manually, so Gen AI can help here.  Which tool is better?  When presented with an actual log file, both analyzed it the same way.  But Gemini was a bit more concise when compared to ChatGPT.

4)     Creating documentation:

Having Incident Response/Disaster Recovery/Business Continuity plans is a must for any business.  When the two tools were asked to create a sample plan, the author found that Gemini did a better job of understanding what the exact requirements were.

5)     The Creation of Source Code:

Creating scripts is one way for the IT Security team to help automate routine and mundane tasks, such as those found in Penetration Testing and Threat Hunting.  When these tools were asked to produce a few lines of script code, both came out as equal winners.

6)     Data Analysis:

When both tools were asked to analyze a set of metrics, Gemini was a clear loser.  It only suggested ways to analyze the metrics, whereas ChatGPT actually did the task to varying degrees, by making use of its “Data Analyst” plugin, which allows Excel files to be ingested into the Gen AI model.
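For readers who want to do this kind of analysis themselves, here is a minimal pandas sketch.  The column names and numbers are invented sample data standing in for a real spreadsheet (which you would normally load with `pd.read_excel`):

```python
import pandas as pd

# In practice you would ingest a spreadsheet, e.g.:
#   df = pd.read_excel("security_metrics.xlsx")
# Here, an inline sample with made-up numbers stands in for the file:
df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar"],
    "phishing_reports": [42, 57, 39],
    "blocked_ips": [1100, 980, 1250],
})

# Summary statistics (count, mean, std, min, quartiles, max) per metric:
summary = df[["phishing_reports", "blocked_ips"]].describe()
print(summary.loc["mean"])
```

This is roughly what the “Data Analyst” plugin does behind the scenes when handed an Excel file: load the table, then compute and narrate summary statistics.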

7)     Creating phony Phishing attacks:

One of the best ways you can find out if your employees are paying attention to your Security Awareness training is by launching a mock Phishing Email attack and seeing who falls prey to it.  When Gemini and ChatGPT were asked to create a mock Phishing Email, the author found that Gemini created a more succinct and deceptive message, versus ChatGPT, which created a wordy one.
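If you want to assemble such a test message programmatically rather than asking a chatbot, Python’s standard `email` module is enough.  The sketch below only builds the message (it never sends anything), and the addresses, look-alike domain, and wording are all invented for illustration:

```python
from email.message import EmailMessage

def build_mock_phish(to_addr: str) -> EmailMessage:
    """Assemble (but do not send) a mock phishing test message."""
    msg = EmailMessage()
    msg["From"] = "it-support@examp1e.com"   # deliberately look-alike domain
    msg["To"] = to_addr
    msg["Subject"] = "Action required: password expires today"
    msg.set_content(
        "Your password expires in 2 hours.\n"
        "Click here to keep your account active: https://examp1e.com/reset\n"
    )
    return msg

msg = build_mock_phish("employee@example.com")
```

The usual tells are all here on purpose: urgency in the subject line, a digit swapped into the domain, and a link that does not match the real company site.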

8)     Data Privacy Laws:

Let’s face it, the wording in the GDPR, CCPA, HIPAA, and any other sort of compliance framework can be a nightmare to understand.  So what do you do in this particular situation?  Well, ask Gemini or ChatGPT for help.  The author did this exact thing, and found that Gemini was better than ChatGPT in this regard.

My Thoughts On This:

So, you now may be asking, which one should I use?  From what I read in the article and wrote about in this blog, it seems that Gemini has the upper hand here.  But try both for yourself, and see which one provides the best output for your needs.

For more help on which one to use, click on the link below:

https://www.darkreading.com/cybersecurity-operations/why-chose-google-bard-help-write-security-policies

Sunday, March 10, 2024

How Biometrics Is A Double Edged Sword In Cyber

 


I have been in IT Security for a long time, probably at least 20+ years.  I first got started in the Biometrics field.  Of course, I had no idea what this was all about, so I had to teach myself a lot about what the technology is.  From there, I started my first security gig, in which I was reselling Hand Geometry Scanners and Fingerprint Scanners from two leading vendors. 

I had to define my market, and I decided to focus on Physical Access Entry applications.  This simply meant that I would present these two devices as a means to replace the traditional lock and key.  I thought that this would be a more or less easy sales cycle, because who wouldn’t want an automated way to open up their doors?

Well, I was proven wrong, and in a big way.  Even despite the craze that Biometrics got after the 9/11 attacks (especially that of Facial Recognition), people either simply did not understand it, or simply did not care about it.  So as a result, I found myself educating people about it more than selling it.  Of course, I did not make much money in those years that I had the business, but this new path put me on a different trajectory.

I ended up closing down this first business, and opened up a new one 15 years ago.  This focused on content generation about Biometrics, such as authoring articles and doing podcasts with top Biometrics vendors.  I even wrote and published three books on this subject through a leading publisher, CRC Press.  But now fast forward some 20 years later to the present time.

Where is Biometrics today?  Honestly, I have been out of the field for too long to see where the trends are.  One fact that I do know is that it has received strong adoption as an authentication mechanism for MFA solutions.  In this regard, both Fingerprint Recognition and Iris Recognition have received a lot of attention.

But even despite the good that Biometrics can bring to an organization, it is one of those technologies that still receives more negative attention.  And now, it may be at its worst.  Just in November of 2023, the Department of Defense (DoD) released a detailed report about the specific weaknesses of Biometrics.  This report can be downloaded at this link:

http://cyberresources.solutions/blogs/DOD_Biometrics.pdf

But just to summarize, here are some of the major weaknesses that were reviewed in the report:

*Data Theft:  This was compared to stealing a password: once it has been stolen, it can yield access to just about anything.

*Spoofing/Impersonation:  There were instances where something like a Fingerprint Recognition Template was hijacked and spoofed in order to gain access to a highly secure area.

*Data Privacy:  Just like AI, Biometrics are often viewed as a “black box solution”.  Meaning, you give it the input, and from there, you get the output, with no knowledge as to how the insides of the system work.  This has led to huge concerns with respect to data protection and privacy.

*Integration:  To the best of my knowledge, it appears that Biometrics is not being heavily used as a standalone solution.  Rather, it is being used as an add-on, such as in MFA, as reviewed earlier.  So, there are integration challenges here as well, especially if Biometrics is going to be used to further secure our Critical Infrastructure.

My Thoughts On This:

In fact, Biometric data is now viewed as “Personally Identifiable Information” (or “PII” for short).  Because of this, it is now subject to the data privacy laws such as the GDPR, CCPA, HIPAA, etc.  More details on this can be seen at the link below:

https://www.darkreading.com/cyber-risk/thought-gdpr-compliance-was-hard-buckle-up

Now again, I am not sure about all of the technological advancements that have occurred recently with regard to Biometrics, but one thing I can tell you for sure is that when an image of a fingerprint is captured, it is usually converted over into a mathematical file.  In this case, it would be a binary one, which is represented as a series of 1’s and 0’s, like this:  1100010101000100111.  So if you think about it, if a Cyberattacker were to steal this, what could they do with it?
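To make that concrete, here is a small Python sketch using the toy bit string above.  The hashing step is my own illustration of how a vendor might protect a stored template, not a description of any specific product or of what the DoD report prescribes:

```python
import hashlib

# A toy stand-in for an extracted fingerprint template (from the text above):
template_bits = "1100010101000100111"

# Pack the bit string into bytes, then hash it before storage, so the raw
# template never sits in the database in a directly reusable form.
as_int = int(template_bits, 2)
as_bytes = as_int.to_bytes((as_int.bit_length() + 7) // 8, "big")
stored = hashlib.sha256(as_bytes).hexdigest()
```

Stolen on its own, the bit string is just a number; without the matching algorithm and enrollment context, it is far less directly useful than, say, a stolen credit card number.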

IMHO, not much really.  It’s not like stealing a credit card number.  And for that matter, there should really be no issue at all with the data privacy laws, unless the end user has some sort of extremely unique identifier that is associated with their particular Template.  I can agree with that last point, but in terms of spoofing, I still find it a little hard to believe.

One of the only ways I can see this happening is if an image of a fingerprint has been left on the sensor and has not been completely wiped away.  If you choose to use Biometrics as a means of defense, my best advice would be to deploy it with the same amount of caution as you would with other Cyber related devices, and use the same approaches to make sure that it will work well in your particular environment.
