Tuesday, May 30, 2023

Discover New Ways To Secure Your Source Code

 



For the longest time, software developers and their teams have evaded the microscopic lens of Cybersecurity.  But given how digital and interconnected the world has become today, they are now coming under the limelight.  The source code that is used to build a web based application often goes unchecked for gaps in security, or for backdoors that have been left behind.

Now, that trend is changing.  Businesses are paying much closer attention to the code that is being compiled.  After all, if one of their clients gets hit by a security breach in that particular application, the company that designed it will be held liable.  Another grave issue at risk here is the use of APIs, especially those that are open sourced.

Software developers love using them, as it saves time in the code writing process.  But very often, these APIs remain outdated, and are not checked for security gaps.  Given all of the ways in which source code can be exposed, what are some of the best practices that a business can adopt to stay ahead of the curve?

In this podcast, we have the honor and privilege of interviewing Joe Saunders, the Founder and CEO of RunSafe Security.  He will go over in detail some of the latest tips that any business can deploy to protect the apps that they create.

You can download the podcast at this link:

https://www.podbean.com/site/EpisodeDownload/PB141EA04RRGHQ

Saturday, May 27, 2023

The Top 3 Cyber Risks To Your Wearable Device

 


I am not one to talk about my personal life on my blogs, but today, I want to share something with you.  At the beginning of this year, I was diagnosed with acute congestive heart failure.  It got so bad at one point that I was in the hospital for a week as the doctors and nurses tried to drain the fluid buildup in my lungs. 

I am doing OK now, but I have been told that 65% of my heart is dead, and only 35% of it is actually working.

This simply means I do not know how much longer I have.  But I am trying to live each day to the fullest that I can.  The reason why I bring this up is that if things don’t improve much, my cardiologist is seriously thinking of implanting a pacemaker in me. 

I am not afraid of that per se, but it is considered now to be an IoT device….and of course, that brings up some huge Cyber risks.

But it is not just that:  just about all sorts of implanted medical devices, and even wearables that keep track of your health and daily movements, are now posing a huge Cyber risk for the people that use them.  Consider some of these stats:

*On a global basis, well over one million people have been at grave risk of having their devices tampered with, even endangering their lives.

*This market is going to be a prime target for Cyberattackers – as it will have a market value of over $265 billion by 2026.

(SOURCE:  https://www.marketsandmarkets.com/Market-Reports/wearable-electronics-market-983.html#:~:text=Updated%20on%20%3A%20March%2029%2C%202023,a%20highest%20CAGR%20of%2039.12%25.)

But keep in mind that there are two distinctions to be made here – the implanted medical devices, and the normal technological wearables that people use.  In terms of the former, the Federal Government is already taking aggressive steps to protect people. 

For example, the FDA has been making announcements that it plans to implement much more stringent guidelines for the implementation of security in these devices, which to me is great news.

Then there is the latter – those devices that let you know how many steps you are taking, your heart rate, oxygen level, etc.  Here are the Cyber risks that they face:

1)     It is made of the latest and greatest:

Those wearables that I have just mentioned come with the best technology available.  Meaning, they can track more information about you than you realize.  While you might think they are just collecting basic info about you, more than likely they are getting a lot more than that.  Unfortunately, the vendors that make these devices don’t reveal exactly what is being collected.  Your best bet to protect yourself in this situation is to find the privacy settings, and restrict what is being collected from there.  Also keep in mind that wearables are very small devices, and they can be lost or stolen even more easily than your smartphone.  In this regard, the vendors have been proactive.  They have implemented the use of MFA, in which you have to present two or more authentication details about yourself before you can use the device.  If your device offers the use of Biometrics (such as Facial Recognition or Fingerprint Recognition), use it.  That is much more secure than using a password or PIN.
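MFA in general means presenting two or more independent proofs of identity before access is granted.  Here is a minimal sketch of that logic; the factor names are made up for illustration and are not tied to any real device's API:

```python
# Toy sketch of multi-factor authentication (MFA) logic: access is
# granted only when at least two independent factors verify. The
# factor names are illustrative, not tied to any real device API.

REQUIRED_FACTORS = 2

def authenticate(factors: dict) -> bool:
    """Return True only if enough independent factors check out."""
    passed = sum(1 for ok in factors.values() if ok)
    return passed >= REQUIRED_FACTORS

# A PIN alone is not enough; a PIN plus a biometric is.
print(authenticate({"pin": True}))                       # False
print(authenticate({"pin": True, "fingerprint": True}))  # True
```

The point of requiring independent factors is that stealing the device (or just the PIN) is no longer enough on its own.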

2)     Make sure encryption is being used:

Encryption is simply a fancy term for the scrambling of data so that it remains in a garbled, meaningless state until it is descrambled.  But in order to do this, the person needs to have the correct key to unlock it.  This is a strong safety mechanism which ensures that if your health data does fall into the wrong hands, the chances are greatly minimized that anything bad will happen to it (like being sold on the Dark Web, being given out to the public, etc.).  So the bottom line here is that the next time you go shopping for a wearable device, make sure that it not only has MFA, but that the data it will store about you will also remain encrypted.
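The scramble/descramble idea can be sketched with a deliberately toy cipher.  This is for intuition only: real devices should rely on a vetted cipher such as AES, never the XOR trick used here:

```python
# Toy illustration of symmetric encryption: the same secret key both
# scrambles and descrambles the data. Real devices should use a vetted
# cipher such as AES, never this XOR toy, which is for intuition only.
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key.

    Applying the function twice with the same key restores the input.
    """
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

secret = b"resting heart rate: 62 bpm"
key = b"not-a-real-key"

scrambled = xor_crypt(secret, key)
assert scrambled != secret                   # unreadable without the key
assert xor_crypt(scrambled, key) == secret   # round trip restores the data
```

Without the key, the scrambled bytes are meaningless; with it, the original reading comes right back.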

3)     Use the remote wipe:

Along with the above security features, also make sure that the wearable device you are thinking of buying comes with what is known as a “Remote Wipe” feature.  This is where you can delete your private data in case you lose your device.  For example, if you are out jogging somewhere and your wrist device falls off, you can issue a command through your smartphone or another wireless device that will automatically delete the data that resides on it.  But keep in mind that this is only a partial solution.  Stored information is never truly deleted, and if the Cyberattacker is smart enough, they will find a way to access it.

My Thoughts On This:

As mentioned earlier in this blog, wearable devices are part of the IoT ecosystem.  Meaning, as they get more advanced in terms of technology, they will become that much more interconnected to other things, in both the physical and virtual worlds.  In fact, it is even predicted that there will be well over 29 billion of them by the year 2030. (SOURCE:  https://www.statista.com/statistics/1183457/iot-connected-devices-worldwide/)

So this means that the effort you have to take to protect yourself is only increasing significantly.  Obviously, you can start with the steps outlined here, or better yet, the best protection is just don’t even use them.  When you go outside for that walk or jog, just simply enjoy the nature and beauty of the outdoors.

That is what I do, and in the medical condition I am in, it works wonders.

 

Friday, May 26, 2023

How To Take Your Risk Assessment One Step Further - Procuring The Right Controls

 


One of the things that I have written about before on many occasions is the need for a business to conduct what is known as a Risk Assessment.  In very simple terms, it is where the CISO and their IT Security team come together and literally inventory all of the physical and digital assets that their company possesses. 

From here, all of them are then ranked according to their degree of vulnerability using some categorical ranking, for instance, where 1 would be least vulnerable and 10 would be most vulnerable.

Once this has been done, it then serves as a stepping stone to decide what kinds of protective controls need to be procured and deployed.  That is where most Risk Assessments stop.  But I have advocated taking it one step further.

For example, use the results to also determine where these controls can be most strategically placed.  In other words, try to make do with the existing controls you have, but place them in a more efficient manner so that they can offer maximum results.
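The inventory-and-rank step described above can be sketched in a few lines.  The asset names and vulnerability scores below are invented purely for illustration:

```python
# Minimal sketch of the Risk Assessment ranking described above:
# inventory the assets, score each from 1 (least vulnerable) to
# 10 (most vulnerable), then sort so that the most exposed assets
# get controls first. All names and scores here are made up.

assets = [
    {"name": "customer database", "vulnerability": 9},
    {"name": "office printers",   "vulnerability": 3},
    {"name": "payroll server",    "vulnerability": 7},
    {"name": "public website",    "vulnerability": 5},
]

# Highest risk first: this ordering drives where controls are placed.
prioritized = sorted(assets, key=lambda a: a["vulnerability"], reverse=True)

for asset in prioritized:
    print(f'{asset["vulnerability"]:>2}  {asset["name"]}')
```

The sorted list is the "one step further": it tells you not just what to buy, but where existing controls will do the most good.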

In other words, one should never buy new Cyber tools just for the sake of beefing up your lines of defenses.  Put another way, get away from the proverbial way of thinking that there is safety in numbers.  There really is not any. 

If you buy tools just for the sake of deploying them, you are not only going to overburden your IT Security team with false positives, but you will also be greatly increasing the attack surface just that much more.

But, on the flip side, there will be instances when reshuffling your existing controls and updating them will not be enough.  You simply need to get newer ones.  It’s like an old car:  the money that is spent on fixing it can be used to get a newer one, which will probably last longer.  But once again, just don’t go out on a buying spree.

You still need to take time to figure out what it is you really need.

So, in an effort to get you started in this kind of mindset, here are some tips that you should follow:

1)     Is the technology proactive enough?

All Cyber vendors that make their own products and/or solutions like to state that they are extremely proactive.  But what does that mean exactly?  This term can have different connotations, but in very general terms, it means a tool will provide alerts and warnings as events happen, not after a lag.  Or better yet, a proactive tool can detect even the smallest hint of malicious or suspicious behavior and try to project what it will mean, using the help of ML or AI.  But be careful here as well, as many Cyber vendors like to tout that their products and/or services have AI built into them, and customers get suckered into it.
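One hedged way to picture "proactive" alerting is a tool that compares each new measurement against its own recent baseline and flags a spike the moment it happens.  The window size, threshold factor, and login-failure scenario below are arbitrary assumptions for illustration:

```python
# Sketch of proactive alerting: flag a value the moment it spikes far
# above its rolling baseline, instead of reporting it in a lagged
# daily summary. Window and threshold are arbitrary example values.
from collections import deque

class BaselineAlert:
    def __init__(self, window: int = 5, factor: float = 3.0):
        self.recent = deque(maxlen=window)  # rolling baseline window
        self.factor = factor                # how far above baseline is suspicious

    def observe(self, value: float) -> bool:
        """Return True (raise an alert) if value spikes above baseline."""
        if len(self.recent) == self.recent.maxlen:
            baseline = sum(self.recent) / len(self.recent)
            if value > baseline * self.factor:
                return True
        self.recent.append(value)
        return False

monitor = BaselineAlert()
# Failed-login counts per minute: steady traffic, then a sudden burst.
for count in [4, 5, 3, 4, 5]:
    assert monitor.observe(count) is False
assert monitor.observe(40) is True   # well above 3x the ~4.2 baseline
```

The alert fires on the very observation that breaks the pattern, which is the behavior the marketing term is gesturing at.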

2)     Can it gather intelligence?

In the world of Cyber, collecting intelligence and interpreting it is one of the key facets in trying to stay one step ahead of the Cyberattacker.  But usually this is provided once again by either an AI or ML tool, and this in turn needs a huge amount of data to be fed into it so that it can learn, and try to project the future of the Cyber threat landscape.  Trying to get a human to do all of these tasks will take weeks if not months, and no company has that kind of time to waste. So, make sure that whatever tool you plan to get will provide some sort of reasonable intelligence for your IT Security team to use.

3)     Can it work by itself?

This has always been a point of contention in the Cyber world.  Can you really have a tool that is truly, 100% autonomous, without needing human intervention?  IMHO, no, it is not possible.  Probably the best example of this is the Pen Testing community.  A lot of the vendors here like to claim that their tools are completely automated, and do not need human intervention.  But in my view, they are taking this a little bit to the extreme.  Every tool needs some kind of human input, but the trick here is to find the tool which can work at least 60%-70% by itself.  Having automation like this in Cyber is very important, but don’t ever get hung up when a vendor claims that their tool is 100% free from humans.  It is not, and never will be.

4)     Can the tool match your future needs?

The technical term for this is “scalability”.  In other words, can this new tool match your security requirements if they ever change over time (and they most probably will)?  You want a tool that can do this, as you don’t want to either discard it (if your requirements lessen) or have to buy a new one (if they increase).  In this regard, you should probably look at the security tools that are available from the major Cloud providers, such as AWS or Microsoft Azure.  Not only are their tools easy to deploy in just a matter of minutes, but they are also “scalable” within a matter of seconds, which leaves you, the CISO, with nothing to worry about.

5)     Can it co-mingle?

Unless you are planning a full-blown migration to the Cloud and still have On Prem infrastructure, you are not simply going to rip out your old systems so that your new tools will work in your business.  But by the same token, you simply don’t want to add in a new security tool and hope that it works with everything else.  Thus, you have to make sure that whatever new tools you purchase will co-mingle nicely with the existing infrastructure that you have.  This is the main problem that Critical Infrastructure has today.  A lot of the technologies that fuel these systems were built in the late 1960s to the early 1970s, and back then, nobody even thought of Cybersecurity.  Today, it has become a grave vulnerability for the United States.  Finding the tools of today to beef up the security of legacy Critical Infrastructure is now an almost impossible task.  But here, the Cloud can be your best friend.  If you are 100% in the Cloud, all of the tools are brand new and updated, so you will not have to worry about any co-mingling issues.

My Thoughts On This:

Any Cyber vendor worth their salt will allow you to try their product and/or service for a free trial period.  Always take advantage of this, so you can make sure that whatever you are thinking of procuring will actually work in your environment, and not only meet, but even surpass your needs.

Saturday, May 20, 2023

6 Ways In Which AI Can Threaten You

 


In yesterday’s blog, I wrote about ChatGPT, and the fear that it has brought upon society.  While it certainly has its plusses, it has its many minuses as well.  The trick is in learning more about it, and getting yourself ready for whatever may come of it. 

True, this is far easier said than done, but being proactive from the security side of things will keep you that much further ahead of the game. 

So with this in mind, I bring to you some of the other Cyber threats that AI in general, not just ChatGPT, can potentially bring not only to your business, but also to you personally.  So, here we go:

 

1)     Poisoning the model:

As I also mentioned in yesterday’s blog, AI does not, and never will, truly mimic humans.  Essentially, all AI is garbage in, garbage out.  Meaning, whatever you feed into it determines the output.  It is as straightforward as that.  But the trick here is that you have to cleanse and optimize the datasets on a daily basis in order to make sure that you get what you need.  If you don’t do this, then whatever is outputted to you will be of no use.  But this is an area the Cyberattacker can exploit as well.  If there is any kind of weakness or backdoor in your AI system, the Cyberattacker can literally tap into your datasets, and alter them in a way that will allow for malicious payloads to be deployed into them.  This is technically referred to as “poisoning”.  For more information about this, click on the link below:

https://spectrum.ieee.org/ai-cybersecurity-data-poisoning
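To make the idea concrete, here is a toy sketch of poisoning: a deliberately simple keyword classifier whose verdict flips when an attacker relabels a few training rows.  Nothing here models a real ML pipeline; it only illustrates the principle:

```python
# Toy demonstration of training-data poisoning: a tiny classifier
# labels a message by the majority label of training messages that
# share a word with it. Flipping a few training labels flips the
# verdict. Purely illustrative; real attacks target ML pipelines.

def classify(message: str, training: list[tuple[str, str]]) -> str:
    """Majority vote over training rows sharing a word with message."""
    words = set(message.lower().split())
    votes = [label for text, label in training
             if words & set(text.lower().split())]
    return max(set(votes), key=votes.count)

clean = [
    ("win free prize now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting moved to noon", "ham"),
]
assert classify("free prize inside", clean) == "spam"

# The attacker quietly relabels the spam rows in the training set...
poisoned = [(text, "ham") for text, _ in clean]
# ...and the same message now sails through as legitimate.
assert classify("free prize inside", poisoned) == "ham"
```

The model itself never changed; only the data did, which is exactly why poisoned datasets are so hard to spot after the fact.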

2)     Data privacy:

I think it was about a week ago or so that I wrote a blog specifically on this topic.  True, there are laws out there now, like the GDPR and the CCPA, that are designed to protect our PII datasets in general, but what about those pieces of data that are used in AI systems?  Unfortunately, there is no law around this yet.  There has been talk about it from within the Biden Administration, but as you know how it goes in politics, it will take forever to get anything passed, especially given the bickering that is happening right now in Congress between the Republicans and the Democrats.  Worse yet, a hijacked AI system can even be used to guess other datasets of yours that may be in the database of the company.  For more details on this, click on the link below:

https://www.usenix.org/system/files/sec21-carlini-extracting.pdf

3)     DDoS like attacks:

This is probably one of the most old-fashioned attacks that could ever exist, along with Phishing.  It is essentially where a Cyberattacker launches malformed data packets towards a server, and brings it to a screeching halt with a total bombardment of them.  The server never really shuts down per se (though it could), but service becomes so slow that it will take minutes to access anything versus the normal seconds that it would take.  The same thing can happen to AI systems as well.  The Cyberattacker can launch similar malicious payloads towards them, and make the system consume so much hardware power that it too will literally shut down.  This is called a “Sponge Attack”, and more information about it can be seen here:

https://ieeexplore.ieee.org/document/9581273

4)     Phishing attacks:

This was elaborated on in much more detail in yesterday’s blog.  Essentially, there are always telltale signs of a Phishing based email.  But with ChatGPT and other AI tools, anyone with nefarious goals in mind can use these tools to craft a Phishing email which is not only hard to detect, but can even evade firewalls and antimalware systems.  In fact, there have already been reports of an escalation in the use of ChatGPT for these very purposes.  More information about this can be seen at the link below:

https://www.darkreading.com/vulnerabilities-threats/bolstered-chatgpt-tools-phishing-surged-ahead

It is even listed as a Top 5 attack:

https://www.darkreading.com/attacks-breaches/sans-lists-top-5-most-dangerous-cyberattacks-in-2023

5)     Deepfakes:

This is when AI is used to replicate a real-life person.  Although this is scary enough, it can even be used to create a video of them, even duplicating their voice.  Probably the best example of this is during any election cycle.  A Cyberattacker can create a real-looking video of a candidate asking for donations to their cause.  But in reality, any money that is collected will simply go to an offshore account somewhere, never to be retrieved.  Or worse yet, this could be bait to lure victims to a phony website where their login details can be easily heisted.

More information about Deepfakes can be seen at the link below:

https://www.darkreading.com/threat-intelligence/threat-landscape-deepfake-cyberattacks-are-here

6)     Malware getting worse:

At one point in time, malware was an evil that could be managed, but now it seems like it is only getting worse.  It has come to the point where it can evade all forms of detection, even the most sophisticated of firewalls.  But as AI and ChatGPT further evolve, creating even stealthier forms of malware which can pretty much go undetected forever will become the norm.  The greatest fear now is that this kind of newly bred malware will be used to infiltrate Critical Infrastructure.  More details about this can be seen here:

https://www.darkreading.com/attacks-breaches/attackers-are-already-exploiting-chatgpt-to-write-malicious-code

My Thoughts On This:

In this blog, I have described those AI threats that are most relevant to individuals and businesses alike.  There are other threats out there that can stem from this, and they are as follows:

*Evasion Attacks

*Prompt Injections

*Model Theft

*Weaponized Models

I will cover these kinds of attack vectors in a future blog, so stay tuned, and be proactive!!!

Friday, May 19, 2023

3 Grave Weaknesses Of ChatGPT You Need To Know About

 


Well here we are, almost approaching June.  Can’t believe where this year is going.  But as time goes on, so does the world of Cybersecurity.  Probably the biggest thing making news right now is Artificial Intelligence, or AI. 

There have been a ton of stories of people for it, and also people against it.  Heck, there have even been cries in American society that perhaps it is time to put the brakes on AI, and let us try to get an understanding of what it is really about.

I even attended a major Cyber event last Tuesday at a rather posh hotel in Schaumburg.  Though of course all of the talk was on Cyber related stuff, one of the main points of discussion was AI.  People were fearful of its impact, while some were really interested in it, and how it can be used in Cyber. 

When I was having lunch with some of the other attendees, I told them that I wrote a complete book on AI and ML.  I even mentioned that my dad was a Professor of Neurosciences at Purdue.

I told them that the bottom line is that we will never come close to fully understanding how the human brain works, much less try to replicate it.  In fact, at best, we will only come to 0.5% of any kind of understanding of it at all.  I even mentioned that AI will be best used for automation, especially when it comes to mundane and ordinary tasks.

But as AI continues to dominate, so will ChatGPT.  I think I have written about this in a couple of recent blogs.  I even wrote an entire whitepaper about it for a client.  While it does have its advantages, it has also spawned fears amongst a lot of people, especially when it comes to Cyber.  What kinds of fears are those?  Well, the biggest one is that it will be used for nefarious purposes.

Here are some of the areas in which it is believed that it will happen:

1)     Phishing:

This is probably the oldest attack vector ever known to history.  It stems all the way back to the early 90s, and the first public breach was done to AOL and its subscriber base in the late 90s.  Ever since then, it has evolved and grown, and has even become stealthier and almost hard to recognize at times.  But for some reason or another, there are telltale signs that are left behind, such as misspelled words, poor grammar, different URLs being used, etc.  The fear now is that with ChatGPT, all of these signs of a Phishing email will disappear, because it is so “intelligent”.  Well, guess what, it is not.  The damned thing cannot even extract sources of information and data from the Internet.  It is purely and simply garbage in and garbage out.  While the signs of a Phishing email will not be so obvious now, there will still be something that looks funny.  The trick is to take your time and find them, if your gut is telling you that something is not right.  The bottom line:  Try to treat every email received as a Phishing one, and apply the same level of caution to everything that comes into your inbox.
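As a sketch of how one such telltale sign could be checked automatically, the snippet below flags links whose visible text names one domain while the underlying URL points at another.  The parsing is deliberately naive, and the domains are made up for illustration:

```python
# Hedged sketch of one telltale-sign check: flag a link when the
# visible text and the actual URL point at different domains, a
# classic Phishing giveaway. The HTML parsing here is deliberately
# simplistic and would need hardening for real email.
import re
from urllib.parse import urlparse

def mismatched_links(html: str) -> list[tuple[str, str]]:
    """Return (shown_text, real_host) pairs whose domains disagree."""
    suspicious = []
    for href, text in re.findall(r'<a href="([^"]+)">([^<]+)</a>', html):
        real_host = urlparse(href).hostname or ""
        # If the link text itself looks like a domain, compare the two.
        if "." in text and text.lower().strip() not in real_host.lower():
            suspicious.append((text, real_host))
    return suspicious

email = '<p>Verify here: <a href="http://evil.example.net/login">mybank.com</a></p>'
print(mismatched_links(email))   # [('mybank.com', 'evil.example.net')]
```

A check like this only catches one sign out of many, which is why the "treat every email as Phishing" habit still matters.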

2)     Coming out with the opposite:

In technical terms, this is also known as “Reverse Engineering”.  In simpler terms, this is where you can take a product of some sort and break it down into its raw components to see what the initial ingredients were.  In the world of Cyber, although this was a security risk, it was never really too much of a concern, because it took a lot of effort and time to do it, and of course, the Cyberattacker would not be interested in doing this kind of thing.  But with ChatGPT, not only can you create source code, but you can even reverse engineer existing code into its former building blocks.  From here, a Cyberattacker that is extremely well trained in software development can even ask ChatGPT what the weaknesses are, and where the backdoors in the code exist.  So rather than taking the time to find them on their own, ChatGPT can do it for them, in just a matter of minutes.  One of the biggest fears of this is that the most traditional forms of web application attacks, such as SQL Injection, will happen very quickly, and even go unnoticed.

3)     Smarter malware:

It is important to note that malware is a catch-all term, which encompasses just about every threat variant that is out there.  Long story short, malware can be considered a malicious (hence the prefix “mal”) piece of code that can cause extensive damage to an IT or Network based infrastructure.  In the past, and even up until now, the Cyberattacker had to manually deploy the malware into a weak spot, and from there remotely control it so it could be deployed whenever and wherever.  But with ChatGPT, this extra reach is no longer needed.  A Cyberattacker can now write a piece of code with permutations built into it, so that the malware can deploy itself where it will cause the most damage possible.  Thus, the term “Smart Malware”.  In fact, the first known instances hit Samsung, and more detail about this can be seen at the link below:

https://www.darkreading.com/vulnerabilities-threats/samsung-engineers-sensitive-data-chatgpt-warnings-ai-use-workplace

My Thoughts On This:

At the end of the day, ChatGPT will be around.  Given the attention and notoriety that it has now, it is quite likely that it will only grow.  But like all good things, it too will have its doomsday.  The hysteria, craze, and anxiety that it is causing now will die out for sure. 

Now is probably the best time to get your IT Security team to fully explore the weaknesses of ChatGPT, and believe me, they do exist.

Then from there, you need to train your employees how to spot any threat variants that look like they could have evolved from ChatGPT.  To keep your business even safer, you should even restrict your employees from using it during work hours, unless it is required for their job functions.

And stay tuned on this blog site.  As I continue to learn more about ChatGPT, especially its weaknesses, I will post them here as well.

Saturday, May 13, 2023

How Do You Cyber Defend Your Business With Rising Inflation? Hire The Ethical Hacker

 


Well, there is no doubt that we could be heading for a recession; when it will happen, nobody really knows yet.  But it is true that Americans are feeling the pinch of higher prices, whether it comes to their credit cards, mortgages, or even the price at the pump (here in Chicago, it is notoriously high). 

Now there is another huge storm brewing in the economic headwinds here, and that is the fear that the US could default on its own debt unless a deal is reached in Congress.

Everybody is hopeful about this, as nobody, not even the politicians, wants this to happen.  But in downturns like this, all businesses feel the impact.  And it is true for Cyber as well.  For most SMBs, it has really never been a top priority, and it falls even further to the bottom of the rung now. 

Although Cyber should still be a priority for every individual and every organization, this common way of thinking still persists:  “Why should I invest in a security program if I have never been hit yet?”

Even many Cyber vendors are also feeling the pinch of the slowdown, and because of that, many of them are now offering price points that are attractive to SMB owners.  Their hope is that as they potentially lose bigger clients, they can make up the revenue gap with the SMB market. 

Although VC spending still continues to go into Cyber startups, the momentum which it had is also slowing down.

So, this all feeds a vicious cycle:  With no new innovations coming out, and nobody really spending any more money on Cyber, the hackers now have the upper hand.  But there is a way around this.  Although it may sound corny and even ridiculous, the answer lies in possibly hiring an Ethical Hacker to help you shore up your defenses.

What is an Ethical Hacker, you may be asking?  Well, this is an individual (or perhaps even a company) who was once on the dark side of Cyber but has now turned to the good.  Their main objective now is not to harm people, but to help them.  These kinds of individuals are great to hire when it comes to Penetration Testing and Threat Hunting. 

The ultimate goal when conducting these exercises is to take the mindset of an actual, real-life Cyberattacker, and try to break down the walls of defense of the business that needs these kinds of services. 

So, rather than trying to train people to do this, why not hire an Ethical Hacker?  After all, for lack of a better term, they literally know all of the ins and outs of how to hack, because they have done it before.

Also, Ethical Hackers can be used for Bug Bounty programs.  This is where a tech company (such as Oracle or Google) comes out with a new product or service, but they want to make sure that all of the bugs have been worked out. 

So to do this, they typically announce a program that lets people, especially the Ethical Hackers, try to break those systems.

In turn, they also have to come up with a viable solution, and prepare a rather exhaustive report as to what they found and how they would remediate the weaknesses or gaps.  They then submit this report, and if the tech company that announced the Bug Bounty program likes what they see, the Ethical Hacker is then awarded a very nice cash prize, somewhere in the 5 digits.  This could also be a new revenue stream for a Cyber startup.

Also, given the current threat landscape, hiring an Ethical Hacker or even a team of them makes sense.  Consider these stats:

*In 2022, the total number of Cyberattacks against industrial firms increased by a staggering 87%;

*The total number of Cyberattacks against government institutions went up by an alarming 95%;

*The average cost of just one Cyberattack reached a jaw dropping $4.35 million.

The sources for these stats came from here:

https://www.bloomberg.com/news/articles/2023-02-14/ransomware-attacks-on-industrial-firms-increased-by-87-in-2022?leadSource=uverify%20wall

https://cloudsek.com/whitepapers-reports/unprecedented-increase-in-cyber-attacks-targeting-government-entities-in-2022

https://blog.checkpoint.com/2023/01/05/38-increase-in-2022-global-cyberattacks/

Also, by taking on an Ethical Hacker, you, the SMB owner, are helping to narrow the ever-widening Cyber worker shortage.  Remember, you do not have to hire one of these guys on a full-time basis unless you feel inclined to do so.  You can always hire them as needed on a contract basis, which will save you quite a bit of money.

My Thoughts On This:

In today’s times, the overall IT Security team is just burned out from trying to keep up with what they have.  A great alternative to automation is to hire an Ethical Hacker.  Why train someone to think like a Cyberattacker?  Just hire somebody who has already been one!!! Yes, I know, there could be a lot of fear for an SMB owner in taking on a person like this.

In other words, hiring Ethical Hackers can be a great staff augmentation solution as well.

But keep in mind that you bear the same kinds of risks when you hire any other kind of employee.  Finally, as it has been so nicely put in this quote:

“Economic turbulence means less investment in cybersecurity and a surge in cybercrime. Put simply, it's a recipe for disaster.”

(SOURCE:  https://www.darkreading.com/attacks-breaches/why-economic-downturns-put-innovation-at-risk-and-threaten-cyber-safety-)

So to use the old proverb:  “Why not fight fire with fire?”

Should The Microsoft/OpenAI Partnership Be Regulated? Find Out Here

 


Just in the last week, I attended a number of networking sessions in the burbs here in Chicago, and with many of the people I have met, the bulk of the discussion has been around AI.  Some people asked me what my thoughts on AI were, and I simply said that it is a piece of technology like everything else. 

It has its good and bad points, but I also mentioned to them that we are in a bubble right now.  Just like the .com craze, this AI bubble will burst too, probably sooner rather than later.

As I wrote in a recent blog, AI has been around since the 1950s.  How come nobody paid attention to it back then, and all of a sudden, it is causing so much hysteria?  Well, it all comes down to one entity called Open AI. 

They are the creators of ChatGPT, and it is something that has taken the world by storm.  As far as I know (and I am no expert), this is probably the most sophisticated tool out there that has become available to the mass markets.

Pretty much everybody can use it, and as far as I know, it is still free to use.  There is also a paid subscription, called the “Enterprise” version, which I believe is only $20.00 per month.  But also keep in mind that ChatGPT has its severe limitations as well, and I elaborated on this in detail in a recent whitepaper that I wrote for a client.

Back to the discussions I had, some people even mentioned how their kids were using it for their papers and projects.  But as I chimed in, as much as cheating could be happening here, teachers and professors are also quickly adopting tools to detect whether ChatGPT has been used or not.

I even talked to a couple of attorneys who were fearful of AI.  I told them that human intervention is always needed, so the fear of people losing their jobs, etc. is still a pipe dream, in my opinion.

But AI is never going to go away.  It will be around with us for a long time to come, with more advances in the future.  Will it be like Star Trek technology?  Possibly decades from now, but not in our lifetimes.  So, this brings up another question: 

As much as data privacy has come under the scrutiny of the GDPR and the CCPA, will AI follow suit?  In other words, should there be separate pieces of legislation governing the safe and fair use of AI?

Well, it all goes back to Microsoft.  This company has always been on the cutting edge of technology, especially when it comes to AI.  So when it saw the potential of OpenAI, it immediately sought to form a partnership with them, which actually transpired back in 2019.  From what I understand, Microsoft injected about $1 billion into OpenAI.

But truth be told, it wasn’t a straight cash donation.  Rather, Microsoft offered OpenAI a package of services and credits (which was actually worth the $1 billion) so that the ChatGPT platform could be hosted on Azure.

So because of this, Microsoft and OpenAI have now fostered a very deep relationship, which essentially gives the software giant the upper hand in the AI market.  This raises the question of why the deal did not come under the scrutiny of the Federal Government.

In other words, why do so many other M&A deals come under this microscope, but not this one?  There is no clear-cut answer to this.  My only assumption is that since AI is so new to everybody, there was no legal precedent in hand to regulate this kind of business transaction.

Many people feel that this deal should have been regulated, but it never was.  Now, people are even questioning whether this partnership will stifle innovation, because it is all hosted on Azure.  This is actually a complex question to answer.

In one sense, yes, ChatGPT should be available on open-source platforms as well.  But this could expand the attack surface by probably at least 10X more than what the current fears are.

Also keep in mind that Azure is not just a Microsoft-centric platform.  It has embraced the open-source model as well, in order to keep up with AWS.  But don’t worry, Microsoft is keenly aware of this.  In the end, it will do everything it can to be at the forefront of AI.

Meaning, if people want to see ChatGPT on open-sourced platforms, Microsoft will make sure that this happens.

This has been technically referred to as the “Walled Garden”.  It simply means that each of the major tech vendors (like Google, AWS, Oracle, Meta, etc.) will create their own ChatGPT-like tools.  While this will perhaps give the consumer greater product choice when it comes to AI, critics have claimed that this too will stifle innovation and growth.

My Thoughts On This:

To be honest, I am rather excited to see where the growth and technological advances in AI will take us.  But I approach it with a strong sense of caution.  I am not worried about it replacing jobs; that will never happen.  But I am simply afraid that the hysteria brought on by the media will only fuel the mayhem which is happening right now.

There needs to be some control over that.

But back to the fundamental question:  Do we need AI laws like we do for data privacy?  I think at some point we will, but perhaps not right now.  I think what is needed most is to keep a proactive eye on the AI landscape that is developing.  In this regard, I think the Biden Administration has done a very good job with this, at least so far.

There needs to be some sort of framework that is evolving which can easily be translated into law when the need arises.  But the big caveat here is that the law needs to keep up with the pace of technology.  And AI will be advancing very quickly.

To get more viewpoints on ChatGPT and AI in general, follow these links:

https://www.darkreading.com/remote-workforce/pentesters-need-to-hack-ai-question-its-existence

https://www.darkreading.com/vulnerabilities-threats/gpt-4-provides-improved-answers-while-posing-new-questions

 

 

Saturday, May 6, 2023

How The Biden Administration Is Handling The Social Implications Of ChatGPT

 


Although I have my political views and beliefs, I try to remain as agnostic as possible in my tech writing work.  Sometimes it’s not easy to be, but I try my hardest to do so.  That is, until now.  I am going to take a bold political stance and finally say that I think the Biden Administration, when compared to previous ones, has done a lot more to help strengthen our Cyber defense posture.

True, we may not all agree with all of the fine points in the bills and legislation that have been passed, but the sincere effort is there.  And that is what I applaud.

Now, as the dawn of AI comes upon us (primarily driven by ChatGPT), the Biden Administration has stepped into the fray again to try to quell all of the fear, angst, and unknowns that have been brought upon us by this new trend.  Here are some examples of what has been, or what will be, accomplished:

*The Blueprint For An AI Bill Of Rights.  The exact text can be seen here at this link:

http://cyberresources.solutions/AI_Ebook/AI_Bill_Of_Rights.pdf

*The National Science Foundation is also launching a new AI initiative, called the “Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem”.  The exact text can also be seen at the link below:

http://cyberresources.solutions/AI_Ebook/AI_NSF.pdf

*The National Institute of Standards and Technology is also coming out with a brand-new AI framework, and the content of this can be seen at the link below:

https://www.nist.gov/itl/ai-risk-management-framework

So, IMHO, these are great steps forward that the Biden Administration is taking.  But they are also taking one more unique approach as well.  They are actually going to sponsor a session at an upcoming Cyber conference called “DEF CON”.

The main objective of this is to publicly evaluate the newest AI technologies that have recently come out, and to release disclosures of any issues that are found.

In other words, this is a vetting event for the public in which they can get the truth from the vendors about the AI products that they are peddling.  Some of the companies that will be taking part in this vetting process include the following:

*Anthropic

*Google

*Hugging Face

*Microsoft

*Nvidia

*OpenAI

*Stability AI

Another main objective of this public vetting process is to address the concerns the public has about the social implications of AI, such as racial profiling, discrimination, etc.  The idea behind the public exposure of all of this is that the Biden Administration feels it is very important for these AI vendors to directly address and correct the fears we American citizens have about using AI in everyday life.

As it was noted, it is time to take off the black box from AI, and demonstrate what it can do, and most importantly what it cannot do.

But most importantly, ChatGPT and its maker, OpenAI, are going to come under the microscope as well.  The main issues to be dealt with here are not just the social implications, but also the Cyber ones as well.  While ChatGPT is great for doing certain things, its biggest drawback is that it can be used for the most extreme, nefarious purposes possible.

The biggest point of angst is that even a kid with no Cyber experience or knowledge can use ChatGPT to launch a massive Cyberattack the likes of which nobody has seen before, especially on our Critical Infrastructure.  For instance, do you think the SolarWinds security breach was damaging enough?  Well, ChatGPT could possibly be used to launch even grander attacks than that.

But one of the biggest fears is that ChatGPT will be used to spread a horrible amount of misinformation to the public at large, especially on Social Media.  Even more so, as the next Presidential Election comes, ChatGPT could even be used to create Deepfakes that are so compelling and real that even experts will not be able to tell at first glance what is real and what is not.

Even more troublesome is that these Deepfakes can also be used in large Phishing attacks in order to lure in large-scale donors.

My Thoughts On This:

Truth be told, AI is nothing new.  It has been around since at least the mid-1950s, but it did not make its claim to fame until now, thanks to the propulsion of ChatGPT.  The bottom line is that AI and ML are going to be around with us for a very long period of time.

They have their pluses and minuses as well.  But it is very important to keep in mind that not only will we never fully understand the human brain, but we will never even be able to replicate all of its reasoning powers.

At best, we may only understand a mere 0.5% of it.  This is where AI tools such as ChatGPT will have their limitations.  I wrote a rather exhaustive whitepaper for a client on this very topic, and there are some serious restrictions that it has.  In my view, AI and ML will best be used only for automation processes, where mundane and ordinary tasks are done on a daily basis.

An area of this which has evolved is known as “Robotic Process Automation”, also known as “RPA” for short.  Despite the name, RPA typically refers to software bots that take over repetitive digital tasks, rather than the physical robot arms that you see in car manufacturing plants.  Will there be job loss here?  Yes, there will be.

But it will be nowhere near the extent that people are fearful of today.  We will always need human intervention when it comes to AI.  Keep this in mind also:  Any AI tool needs to have a large amount of data fed into it so that it can learn.

How is this possible?  With humans, of course.  Also, the algorithms that make up an AI system have to be optimized on a 24 X 7 X 365 basis.  And of course, humans will still be needed here as well.  What we are going through is just a hysteria and a bubble brought on by ChatGPT.

Eventually, and probably soon enough, it will burst like the .com bubble of the late ‘90s.

2 Key Areas Where The GDPR & The CCPA Fail

 


As American citizens, one of the things that we cherish most is our Constitutional right to privacy.  Unless we are required to by law, we have the right not to reveal any information to anybody, and this is best exemplified under our right to remain silent, especially when it comes to being charged with a crime. 

But back then, nobody even thought of Cybersecurity, much less computers.  But fast forward at lightning speed to now, and we are in the digital world.

Every little thing that we say or do can come under scrutiny, because of all of the technology and interconnectivity that we rely upon.  Not only that, but even our own Personally Identifiable Information (PII) datasets can be used for unknown marketing purposes by other companies, and heck, the Cyberattacker loves to target this whenever they are out to get something.

Because of this, many so-called data privacy laws have been passed in an effort to protect the average, everyday citizen.  Some of the more famous examples of these are the CCPA and the GDPR.  These mandates require extensive auditing and impose heavy financial penalties if a company has been found negligent in safeguarding the data.

While these laws are good to have, they come at a steep cost to businesses.  As a Cyber consultant, one of the biggest complaints I keep getting is that it costs too much money to come into compliance with these laws, especially in the way of testing and deploying new controls.

I can see this viewpoint as well; the money that was used to come into compliance could be used for other purposes, such as business growth.

So the main point of contention is:  are these data privacy laws too excessive?  Here are some reasons why they are being viewed this way:

1)     The laws are too broad:

In other words, this simply means that they are open to too-wide swings in interpretation.  Because of this, many businesses feel that they become prey to the whims of regulators and auditors.  Defining whether a company has done enough to protect the PII datasets becomes quite murky.  For example, suppose a company is hit with a security breach, and a big chunk of its datasets gets hijacked, then what?  The CISO can always state that they took every effort to protect the data, but an auditor from the GDPR can always claim that they did not, without having to show much proof for it.

This is where the huge issue of subjectivity comes into play.  Who is right and who is not?  Is there some middle ground here?  Remember, we are all prone to Cyberattacks, no matter how much protection we deploy to mitigate that risk from actually happening.  This is an area that is not clearly spelled out in these data privacy laws.

Another area of huge dispute in this regard is the use of “Cookies” on your web browser.  These are tiny pieces of data left by a website (especially an e-commerce based one) that track your movements on the web.  The premise here is that by knowing where you have been, the online store merchant can be in a better position to offer you products and services that better fit your needs.  Because these are also considered pieces of PII, they too have become subject to both the GDPR and the CCPA.

That is why, on just about every website that you visit, you will now see notices that cookies are being used on your web browser.  Of course, you can accept or deny the usage of cookies, or just simply move on.  While this is good for the customer or the prospect, it is very bad news for the digital marketing efforts of the various businesses that depend upon this to market their products and services.

As a result, a new tool called the “Unified ID 2.0” has come out.  With this, tracking mechanisms are used that don’t require the explicit permission of the customer or prospect.  More information about this can be seen at the link below:

https://www.thetradedesk.com/us/about-us/industry-initiatives/unified-id-solution-2-0

In fact, all of the major web browsers of today (Edge, Chrome, Safari, Firefox, etc.) now have special features in them that allow the tracking of visitation habits without giving away the explicit identity of the end user in question.  Is this bad or good?  Again, this is open to a wide range of interpretations.  This technique is also technically known as a “Fingerprint Alteration Technique”, and the huge downside is that because it uses stateless bits of data, a website now cannot tell who is a legitimate end user and who is not.
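To make the cookie mechanics concrete, here is a minimal sketch in Python (using only the standard http.cookies module; the cookie name “visitor_id” is just my own illustration, not taken from any real tracker) of how a persistent tracking cookie is issued and then read back on the next visit, which is exactly the piece of PII that all of those consent notices are asking you about:

```python
from http.cookies import SimpleCookie
import uuid

def set_tracking_cookie() -> str:
    """Issue a random, persistent identifier the way a tracking cookie does."""
    cookie = SimpleCookie()
    cookie["visitor_id"] = uuid.uuid4().hex          # hypothetical cookie name
    cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # lives for a year
    cookie["visitor_id"]["httponly"] = True
    return cookie.output(header="Set-Cookie:")

def read_tracking_cookie(raw_cookie_header: str) -> str:
    """On the next visit, the site reads the same ID back, linking the visits."""
    cookie = SimpleCookie()
    cookie.load(raw_cookie_header)
    return cookie["visitor_id"].value
```

Stateless fingerprinting, by contrast, derives an identifier from the browser’s own characteristics instead of storing one, which is why the end user is never explicitly asked for permission.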

2)     The bad and the good:

The last statement that we made now segues into this major part.  If companies start to use this newer, stateless means of tracking people, how do you know who is a good person and who is a Cyberattacker?  What are the differentiators here?  If these newer techniques were to be enhanced even more, they could then give away the identity of the customer or prospect, thus defeating their entire purpose altogether.  So this brings up another point:  The data privacy laws that were created yesterday are much too slow to adapt to the advances of technology today.  But of course, this is true of any technology law that is created and enforced.

My Thoughts On This:

Unfortunately, in the end, it is the responsibility of each and every company to make sure that they are compliant with the GDPR and the CCPA, no matter how much it may cost.  But there is a silver lining here if you dig deeper.  These laws are now making companies aware of their stewardship of the information and the data that they collect.

This will hopefully now make them more proactive in maintaining a strong Cyber stance.

But here is also a tiny bit of advice from me:  Companies should always be proactive, no matter if they are bound by the CCPA/GDPR or not.  Taking a little bit of action every day on a continual basis will make your lines of defense that much stronger, and in the end it will cost less when you do have to become compliant with the myriad of data privacy laws.

 

Friday, May 5, 2023

Learn More About The Zero Trust Framework In This Podcast

 


To some degree or another, most of us have heard of the Zero Trust Framework.  While the concept is not new, its development as well as its deployment is something that is starting to take hold today.  Essentially, with this methodology, nobody is trusted – not even your long-term employees.  Everybody, both internal and external to your business environment, must be verified each and every time they need to gain access to shared resources.
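The “verify every time” principle can be sketched in a few lines of Python.  This is my own illustration, not taken from any vendor’s product (the names SECRET, issue_token, and verify_request are all hypothetical): every request carries a signed, short-lived token, and the server re-checks both the signature and the freshness on each and every access.

```python
import hashlib
import hmac
import time
from typing import Optional

SECRET = b"demo-secret-key"   # hypothetical; in practice kept in a secrets vault
TOKEN_LIFETIME = 300          # seconds - trust expires fast and must be re-earned

def issue_token(user: str, now: Optional[float] = None) -> str:
    """Mint a short-lived, signed token after the user has been verified."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SECRET, f"{user}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{user}:{ts}:{sig}"

def verify_request(token: str, now: Optional[float] = None) -> bool:
    """Re-check identity and freshness on every single request.

    Nothing is trusted just because it was verified a while ago."""
    try:
        user, ts, sig = token.split(":")
    except ValueError:
        return False  # malformed token
    expected = hmac.new(SECRET, f"{user}:{ts}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    current = now if now is not None else time.time()
    return current - int(ts) <= TOKEN_LIFETIME  # stale trust is rejected
```

Once the token’s lifetime runs out, the user has to be verified again from scratch – which is exactly the point of the Zero Trust methodology.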

In fact, the largest of the Cloud Providers, most notably AWS and Microsoft Azure, are now starting to offer tools that can help an organization adopt the Zero Trust Framework in a quick and easy manner.  There are also other Cyber vendors offering solutions so that an organization can deploy this methodology without too many administrative headaches.

One such company is called Bastion Zero.  Their solution offers numerous benefits, which include the following:

Ø  Centralized Management Policy

Ø  Centralized Logging

Ø  It can be deployed in seconds

In this podcast, we have the honor and privilege of interviewing Dr. Sharon Goldberg, the Founder of Bastion Zero.  In this segment, she will explain in more detail what Zero Trust is all about, and what their solution specifically offers.

You can download the podcast at this link:

https://www.podbean.com/site/EpisodeDownload/PB13F1438QXWJG
