Saturday, November 25, 2023

How To Combat The 3 Risks Generative AI Brings To Source Code

 


Source code security is a topic that I am passionate about.  In fact, I’ve got a whole whitepaper sort of on that topic (it deals more specifically with how to keep a good software update/patch schedule).  As I have written countless times before, this topic is going to be one of the top Cyber issues in the coming year, and quite possibly for a long time to come. 

With pretty much everything going digital and being interconnected, web apps and mobile apps are the way things are headed.  But yet another problem compounding this grave issue is, once again, Generative AI.  Love it or hate it, this too is going to be around for a long time to come. 

Many businesses are now starting to ship source code using Generative AI, in an effort to automate the process.  It used to be done manually, but that is no longer the case.  This is a very vulnerable situation for those organizations that rely heavily upon outsourcing when it comes to building web or mobile apps.

For example, suppose Company ZYX has its base of operations here in the United States.  Some of the source code is developed here, but a major part of it is developed overseas, such as in India.  At one point, human intervention was probably used to ship the code over to the headquarters in the US.  But now, Generative AI is being used for this, so probably nobody is really checking twice to see if there have been any issues or vulnerabilities.

So now, this is the big question:  How can US companies (or for that matter, any business really located throughout the world) protect themselves from receiving malicious code when automated means are being used for distribution?  Here are some tips that any CISO and their IT Security team can use:

1)     Mandate the use of code signing certificates:

You may be wondering what exactly this is.  Well, here is a technical definition of it:

“Code Signing Certificates are used by software developers to digitally sign applications, drivers, executables and software programs as a way for end-users to verify that the code they receive has not been altered or compromised by a third party.”

(SOURCE:  https://www.digicert.com/signing/code-signing-certificates#:~:text=Code%20Signing%20Certificates%20are%20used,compromised%20by%20a%20third%20party.)

In a way, this can be compared to the chain of custody forms that are used in digital forensics investigations.  In order for the evidence to be admissible in a court of law, there must be a record of all of the authorized individuals who had access to it.  This is to ensure that nothing has been altered or changed in the process, and that everything is still intact.  The same is true for source code.  In order to make sure that it has maintained its integrity and that it has not fallen into the wrong hands, software developers use these kinds of signing certificates.  However, for the longest time, this was an optional feature for businesses to implement.  But now people, especially those in the Cyber industry, are stating that it should be mandatory in order to keep the attack surface to a minimum.  (A minimal verification sketch follows the questions below.)

               Also keep in mind that the following questions have to be answered:

               *Who in your organization is signing code?

*Where are private code-signing keys stored?

*What software is being signed?
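Here is that minimal verification sketch, in Python, using the cryptography library.  It checks a detached signature over a code artifact against a publisher’s public key; the file names are just placeholders, and real code signing tooling also validates the full certificate chain and timestamps on top of this.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def verify_artifact(artifact_path: str, signature_path: str, public_key_pem: str) -> bool:
    """Return True if the detached signature over the artifact verifies."""
    with open(public_key_pem, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(artifact_path, "rb") as f:
        artifact = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        # RSA with PKCS#1 v1.5 padding and SHA-256 is a common signing scheme;
        # production code signing also validates the certificate chain and timestamp.
        public_key.verify(signature, artifact, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False


# Hypothetical file names, for illustration only.
if verify_artifact("installer.exe", "installer.exe.sig", "publisher_pub.pem"):
    print("Signature checks out -- the code was not altered in transit.")
else:
    print("Do NOT deploy: the artifact does not match the publisher's signature.")
```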

2)     Maintain visibility:

Even today, many businesses still operate on the presumption that outside suppliers can be trusted as long as they have a good reputation.  However, do not take anything for granted.  In this regard, you have to think like the Zero Trust Framework:  go on the premise that absolutely nobody can be trusted, and that everybody must pass through at least three layers of authentication in order to have their identity fully confirmed.  So with this in mind, you and your IT Security team have to keep a close eye on the rights, permissions, and privileges that are being assigned to everybody, especially the software development teams that create your ever-important source code.  In the end, always implement the concept of Least Privilege, which simply states that nobody should receive any more permissions than are absolutely necessary.
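To illustrate the Least Privilege idea, here is a minimal default-deny sketch in Python.  The role names and permission strings are purely hypothetical; the point is simply that anything not explicitly granted gets refused.

```python
# Hypothetical role-to-permission map; the role names and permission strings are
# illustrative only, not tied to any real IAM product.
ROLE_PERMISSIONS = {
    "developer": {"repo:read", "repo:write"},
    "build-agent": {"repo:read", "artifact:publish"},
    "security-analyst": {"repo:read", "audit-log:read"},
}


def is_allowed(role: str, permission: str) -> bool:
    """Default-deny check: anything not explicitly granted is refused."""
    return permission in ROLE_PERMISSIONS.get(role, set())


print(is_allowed("developer", "repo:write"))     # True: explicitly granted
print(is_allowed("build-agent", "repo:admin"))   # False: never granted, so denied
```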

3)     Assign a responsible party:

Back in the day, when source code was being developed, it would have been the responsibility of the IT Security team to ensure its safekeeping.  But with Generative AI, remote work, and the use of both DevOps and DevSecOps teams, there are now hundreds, and possibly even thousands, of people who could come into contact with the source code in one way or another.  Therefore, you need to find somebody or some entity that you can trust to actually oversee the ownership of the source code.  Once that has been determined, your IT Security team should work closely with them to make sure that all of the necessary protocols and controls are indeed put into place.  Try to find no more than two or three people, or just one entity, to take responsibility for the ownership.  The more involved this process gets, the worse off it will be in the long run.
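One lightweight way to make that ownership explicit is to keep a small mapping of source code areas to their responsible parties, and flag anything that falls outside of it.  Here is a hypothetical sketch in Python; the paths and email addresses are made up for illustration.

```python
# Hypothetical ownership map: keep it to one entity, or two to three named people.
CODE_OWNERS = {
    "payments/": "appsec-team@example.com",
    "mobile/": "jane.doe@example.com",
}


def owner_for(path: str):
    """Return the responsible party for a file, or None if nobody owns it."""
    for prefix, owner in CODE_OWNERS.items():
        if path.startswith(prefix):
            return owner
    return None


changed_files = ["payments/checkout.py", "docs/readme.md"]
for path in changed_files:
    if owner_for(path) is None:
        print(f"WARNING: {path} has no assigned owner -- assign one before merging.")
```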

My Thoughts On This:

According to a recent study conducted by Deloitte, over 50% of businesses will be using Generative AI for automation purposes.  And as you can imagine, the shipping of source code will be a big part of this.  While I am not against the use of AI completely, there needs to be a sense of checks and balances here.  Cybersecurity needs both aspects, the technology and the humans, in order to really mitigate the risks of security breaches.

More information about this study can be found at the link below:

https://www.forbes.com/sites/serenitygibbons/2023/02/02/2023-business-predictions-as-ai-and-automation-rise-in-popularity/?sh=2aeab99e744b

 

Friday, November 24, 2023

Emerging Cyber Opps In The Middle East: Capitalize On Them


 

When we talk about Cybersecurity, what are the geographic regions that you think of?  On the good side, there are the United States and the European Union.  Of course, on the Dark Side are the nation state actors such as Russia, China, and North Korea.

But as I have been writing throughout the year, it seems that the Middle East is now becoming a ripe opportunity for Cyber related projects, especially when it comes to Cloud deployments, such as those on AWS and Azure.

So, why is the Middle East becoming so popular now?  Well, this region has traditionally been known to be ultra conservative.  Keep in mind that the countries I am primarily talking about are Saudi Arabia, the United Arab Emirates, Qatar, Oman, Bahrain, etc. 

The governments of these nations hold the PII datasets of their citizens very closely to their chest.  This was the way for the longest time, until the COVID-19 Pandemic hit.  With everybody on lockdown, the only way for people to conduct business and daily job tasks was through video conferencing and the other usual means of electronic communications.

Of course, the governments of these countries just mentioned did not want to miss out on any opportunities during this time period, so they eased up quite a bit on the restrictions.  Now they are starting to realize that using the Cloud for large scale computing is the wave of the future.  The drivers for this are:

*AI and ML, and everything that comes with it

*The Internet of Things (IoT)

*The 5G Wireless Network

*Cybersecurity in general

Interestingly enough, the market in the Middle East for this kind of technology is expected to reach about $10 billion by 2027, which represents a growth rate of at least 20%, as illustrated in the report linked below:


(SOURCE:  https://www.blueweaveconsulting.com/report/middle-east-public-cloud-market-report)

Also, given the huge explosion of AI and ML this year, data centers are now becoming a big thing in this part of the world, with the United Arab Emirates (UAE) signing deals with Azure, AWS, GCP, and even Alibaba, which is based in China. 

Also, Saudi Arabia just launched its first data center, giving yet another dedicated home to the GCP (Google Cloud Platform).  Soon, they will be expanding to have data centers that host Oracle and SAP.  And Qatar is also expanding its data center footprint as well, with on-premises hosting for both GCP and Azure. 

But despite the huge growth and interest, there is still strong concern among more conservative Middle Easterners about data sharing, and whether it will be kept private.  Another big concern is the data leakages that still do occur, whether they are intentional or not.  But it is highly expected that the sheer benefits of Cloud deployments will far outweigh these concerns in the end.

My Thoughts On This:

It is obvious that this explosion in the Middle East is going to continue for a long time to come.  While other markets may soon reach their maturation point, these countries still have a long way to go until they ever reach that point.  Also, the trend of other IT vendors and telecoms forging new kinds of partnerships with the major Cloud providers continues to grow as well.

Some of these include:

*Core42, based out of Abu Dhabi

(https://www.thenationalnews.com/business/technology/2023/10/20/abu-dhabis-core42-and-amazon-web-services-team-up-to-boost-uaes-digital-transformation/)

*Snowflake, with a new deployment region in the UAE

(https://www.snowflake.com/blog/cloud-deployment-region-uae-launch/)

*Moro Hub, based in Dubai

(https://www.thalesgroup.com/en/worldwide/aerospace/press_release/moro-hub-and-thales-join-forces-spearhead-innovation-physical-and)

Finally, more details about other Cloud based opportunities can be seen in the link below:

https://www.darkreading.com/dr-global/the-gulfs-dizzying-tech-ambitions-present-risk-opportunity

 


Saturday, November 18, 2023

How To Increase Your Security Posture Without Breaking Your Budget

 


All of the financial pundits seem to be predicting that the United States will be headed for a recession either in this quarter or starting next year.  This is hard to say for sure, as most of the economic data I have seen points towards a still strong economy.  Job growth seems to be good, and the overall inflation measures seem to be trending downwards.

The hope of the financial markets is that the Fed will now stop hiking, and let things get back to some sense of normalcy.  But even despite these numbers, people are still reacting with their emotions when it comes to their IT and Cyber budgets, something that I pointed out in yesterday’s blog.  Many CISOs are still not sure about their budgets; while some are still increasing, the bulk of the others remain stagnant. 

So the trick now is to make do with what you have in hand.  In other words, the CISO has to find ways to reduce their overall Cyber Risk posture without cutting too much into their existing budget.  So, rather than wasting time trying to find new hardware to deploy, look at software solutions instead; the ones that I have seen so far are very reasonably priced, especially if you use a Cloud based platform like Microsoft Azure.

So what can a CISO do here?  Here are some key tricks:

1)     Consolidate, consolidate, and consolidate:

What I mean by this is to conduct another comprehensive and detailed Risk Assessment Analysis to see where all of your digital and physical assets lie.  This even includes your network security tools.  From this, try to see where there is overlap and redundancy.  Then, from there, consolidate as much as you can.  For example, if you have 10 firewalls at your place of business, try to strategically deploy them where they are needed the most, and perhaps just use 3 firewalls.  This is also good practice, as it will greatly reduce the size of your attack surface as well.
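To give a flavor of how this kind of review might look in practice, here is a small Python sketch that groups a hypothetical tool inventory by category and flags the overlap.  The inventory itself would come out of your own Risk Assessment; the names here are placeholders.

```python
from collections import defaultdict

# Hypothetical inventory pulled from a Risk Assessment; names and categories are placeholders.
inventory = [
    {"name": "Firewall-A", "category": "firewall"},
    {"name": "Firewall-B", "category": "firewall"},
    {"name": "EDR-1", "category": "endpoint"},
    {"name": "Firewall-C", "category": "firewall"},
]

by_category = defaultdict(list)
for asset in inventory:
    by_category[asset["category"]].append(asset["name"])

# Any category with more than one tool is a candidate for consolidation.
for category, tools in by_category.items():
    if len(tools) > 1:
        print(f"{category}: {len(tools)} overlapping tools -> review for consolidation: {tools}")
```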

2)     Be dynamic and fluid:

Some time ago, as the CISO, you and your IT Security team probably tried to predict what the possible threat variants could look like down the road.  While this is a good practice, you just can’t stick to that particular roadmap.  You have to address what is happening here and now, and still keep predicting what is going to happen in the future.  Then, from there, you shift your strategies and lines of defense accordingly.  By doing this, you are also less likely to have to take more out of your existing budget.  But keep in mind that just because you are strategizing, this does not mean you need to get new tools and technologies.  Try to make do with what you already have unless you absolutely have to procure new gadgets.

3)     Make use of the Cloud:

This is probably the best way to save on expenses.  If you migrate to something like Microsoft Azure, you will only be paying a fraction of what you are right now with an On-Premises solution.  With the Cloud, all pricing and costs are known, and you only pay a monthly fee for the resources that you consume.  Everything else is covered.  And if you find that you are spending more than you want to, you can scale down in just a matter of seconds.
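As a rough illustration of the consumption-based math, here is a back-of-the-envelope sketch.  Every figure in it is a made-up placeholder, not a real Azure price, so treat it purely as a way of framing the comparison.

```python
# Every figure below is a made-up placeholder, not a real Azure price.
onprem_monthly = 12_000.00       # hardware amortization, power, and maintenance
cloud_hourly_rate = 1.50         # pay-as-you-go rate for the resources actually consumed
hours_used = 400                 # workload scaled down outside of business hours

cloud_monthly = cloud_hourly_rate * hours_used
print(f"On-premises:               ${onprem_monthly:,.2f}/month")
print(f"Cloud (consumption-based): ${cloud_monthly:,.2f}/month")
print(f"Estimated savings:         ${onprem_monthly - cloud_monthly:,.2f}/month")
```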

My Thoughts On This:

As a CISO, these are things that you need to pay attention to now.  Apart from a possible recession, there are other headwinds you need to be aware of as well, which are as follows:

*Security spending will grow by well over 11% (SOURCE:  https://www.gartner.com/en/documents/4016190)

*The average security breach now costs at least $4.45 million, and will only escalate (SOURCE:  http://cyberresources.solutions/blogs/Data_Cost.pdf)

*Ransomware breaches will cost well over $900 million (SOURCE:  https://www.wired.co.uk/article/ransomware-attacks-rise-2023)

*The data privacy and compliance laws are now being even more strongly enforced, with the prime example of that being the SEC.

Because of this, publicly traded companies now have to report to shareholders on the steps that they are taking to improve their Cyber Posture.  More information on that can be seen at the link here:

https://www.darkreading.com/risk/hot-seat-ciso-accountability-in-new-era-of-sec-regulation

Remember too that you need to get Cyber Insurance as well.  Not only is this getting more difficult to do, but it is also becoming quite expensive.  If you follow the steps in this blog, you will have extra room in your budget to get that much-needed policy.

 

Friday, November 17, 2023

The 3 Golden Intersection Points of Behavior & Cyber

 


Most of the people I talk to about Cybersecurity ask me where I got my formal education in it.  I tell them that everything I know about Cyber is self-taught and self-learned.  My degrees were actually in Ag Econ, from Purdue and SIUC, respectively. 

This was more or less applied economics, so I learned some things about consumer behavior, and how consumers make their purchasing decisions, at least on a theoretical level.

So far, my formal education has not intersected with the work I do in Cybersecurity, even as a technical writer.  But that was until today, when I came across a very unique article that talks about a field called “Behavioral Economics”, and how it closely parallels Cyber in three unique areas. 

But first, you might be asking what is “Behavioral Economics”?  Well, it can be defined as follows:

“Behavioral economics combines elements of economics and psychology to understand how and why people behave the way they do in the real world. It differs from neoclassical economics, which assumes that most people have well-defined preferences and make well-informed, self-interested decisions based on those preferences.”

(SOURCE:  https://news.uchicago.edu/explainer/what-is-behavioral-economics)

Simply put, the economic theory that I was taught assumes that humans make rational buying decisions, based upon the priorities in their needs and upon how much they can spend.  But Behavioral Economics takes an opposite stance, and hypothesizes that humans buy and act on impulse, with no regard to their budget.

So now, here is how it comes into play with Cybersecurity, as just mentioned:

1)     The Mental Accounting:

The economic argument here is that people will value and spend money depending on the particular situation that they are in.  Take for example the CISO.  He/she could be sitting down with their IT Security team today, trying to forecast the budget for 2024.  They could have run various Risk Assessment models to substantiate the money they want.  Now, they present this to the C-Suite.  For some reason or another, the budget gets turned down on this basic premise:  “Why spend for something that has not happened yet?  The Risk based scenarios that you have presented to us are occurrences that might happen in the future.  So why give extra money when nothing has happened in the present?” 

2)     The Error In Thinking -  The Sunk Cost Fallacy:

It is of course good practice to try to forecast what future Cyber threat variants could possibly look like, and for the CISO to plan their defenses accordingly.  But in this regard, it is all too human to stick to what has been forecasted rather than proactively engaging with what is happening today.  Because of this, the reality of getting a negative Return On Investment (ROI) becomes even harsher when current projects are put in place to fend off the predicted threats, rather than the real ones that are actually happening today.  Therefore, CISOs need to be much more dynamic in this aspect, and react to the sheer frequency with which Cyberattacks are happening today.  In fact, research has shown that a new threat vector is now launched almost every 40 seconds.

3)     The Issue of Availability Heuristics:

This area of Behavioral Economics makes the theoretical assumption that people will react to a given situation, rather than going back to the facts and the numbers of the reality at hand.  Here is an example of how it can relate to Cyber:  Assume that John Doe receives an email, and assumes that it is safe because the sender is a known contact, and the overall look of the email is the same as usual.  As a result, no thought is ever given to whether this new email could actually be a Phishing based one.  The chances are good that it could be, because all the Cyberattacker has to do is hijack the contact book and make the email look like the real thing.  This is an area of Social Engineering that is being exploited to the maximum today by different hacking groups.  In other words, people are so busy these days that they don’t take the time to smell the proverbial roses and evaluate a particular action that they are about to take.
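For illustration, here is a tiny Python sketch of the kind of fact-check that counters the availability heuristic: instead of trusting a familiar display name, it checks the actual sending address against a list of known contacts.  The contact list and addresses are hypothetical, and a real mail gateway does far more than this.

```python
from email.utils import parseaddr

# Hypothetical allowlist of known contacts; the point is to check the facts
# instead of reacting to a familiar-looking display name.
KNOWN_CONTACTS = {"jane.doe@trustedpartner.com"}


def looks_suspicious(from_header: str) -> bool:
    """Flag a message when the actual sending address is not a known contact."""
    _display_name, address = parseaddr(from_header)
    return address.lower() not in KNOWN_CONTACTS


print(looks_suspicious('"Jane Doe" <jane.doe@trustedpartner.com>'))   # False
print(looks_suspicious('"Jane Doe" <jane.doe@trusted-partner.co>'))   # True: lookalike domain
```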

My Thoughts On This:

As much as people claim that they do research, rely on AI and ML, and try to think rationally before they make a decision, take this only at face value, at least when it comes to Cyber.  The bottom line is that emotions and past experiences do rule here when reacting to something, such as a security breach. 

More research needs to be done in this area, and the findings could perhaps even be used in Security Awareness training programs for employees, stressing the importance of thinking as logically as possible.

Saturday, November 11, 2023

The Security Risks Posed By APIs & How To Mitigate Them

 


Introduction

In the world of software development today, Application Programming Interfaces (APIs) are one of the key building blocks used in creating applications.  But first, what exactly is an API?  It can be thought of as an intermediary between two very different software packages, bringing them together in order to create a seamless environment for the end user. 

In other words, it is the bridge that takes your request from a page on a website that you are visiting back to the web server.

In response, the web server then analyzes the query, calls up the relevant information from the database, and then transmits the results back to you in a manner that is consumable.  A good example of this is requesting a free whitepaper from a Cybersecurity site. 

You fill in your relevant contact information, and this gets transmitted to the web server.  It then searches its own database and relevant directory structures for the whitepaper, and then sends it back to your Email inbox.

Think of the API as the tunnel between that contact form and the whitepaper you have selected.  If you decide later on that you want another piece of content resource, that same API can be used over again to call that particular piece up. 

The primary advantage of the API is that it can be used over and over again, for differing requests.  If it hadn’t been for the API, unique source code would have to be implemented each and every time for every different kind of request.
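Here is a minimal sketch of that reusability in Python, using the requests library.  The endpoint and resource names are hypothetical; the point is that the same call path serves very different requests without any new code.

```python
import requests

API_BASE = "https://api.example.com"   # hypothetical endpoint, for illustration only


def fetch(resource: str, params: dict) -> dict:
    """The same API path is reused for different requests; only the inputs change."""
    response = requests.get(f"{API_BASE}/{resource}", params=params, timeout=10)
    response.raise_for_status()
    return response.json()


# First request: look up a whitepaper.
whitepaper = fetch("whitepapers", {"title": "cloud-security-basics"})
# A later request: a different piece of content, same API, no new code written.
webinar = fetch("webinars", {"topic": "zero-trust"})
```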

The Security Risks That Are Posed To APIs

The website of the Open Web Application Security Project, also known as OWASP, is one of the best resources to find out what the latest risks are.  This organization updates their list every few years, and in fact, the latest version came out this year.  According to them, here are the top API Security Risks:

1)     Broken Access Control

2)     Cryptographic Failures

3)     Injection Style Attacks

4)     Insecure Source Code Design

5)     Security Misconfiguration

6)     Vulnerable/Outdated Components

7)     IAM Failures

8)     Data Integrity Failures

9)     Security Logging/Monitoring Failures

10)  Server-Side Request Forgeries

 

 

 

How To Mitigate API Security Vulnerabilities

What can a business do to mitigate some of these security gaps?  Here are some top tips:

*  Authentication should come first, then authorization:

There is often confusion as to what these two are.  Authentication is confirming the actual identity of the end user, and authorization is the set of permissions, rights, and privileges that are granted to them.  In most organizations, the latter is usually done first, and the former second.  But in terms of API security, this way of thinking drastically needs to change.  Before an end user can be granted access to any shared resource, their identity must first be confirmed, ideally through Multifactor Authentication (aka MFA, where more than one layer of authentication is used).  Once this process has been accomplished, the end user should then be given the appropriate privilege level to access what he or she needs to.  This could fall into the realm of policy-based access control (PBAC), or of role-based access control (RBAC).  To ensure an even greater level of security for the APIs, access tokens should also be used.
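To make the ordering concrete, here is a minimal Python sketch of an endpoint that authenticates first and only then authorizes.  The token store and role grants are stand-ins for whatever identity provider and PBAC/RBAC scheme you actually use.

```python
# The token store and role grants below are stand-ins for a real identity provider
# and a PBAC/RBAC policy engine.
ROLE_GRANTS = {"analyst": {"reports:read"}, "admin": {"reports:read", "reports:write"}}


def verify_token(token: str):
    """Pretend identity check: return the authenticated user's role, or None."""
    fake_sessions = {"token-123": "analyst"}
    return fake_sessions.get(token)


def handle_request(token: str, permission: str) -> str:
    role = verify_token(token)                            # Step 1: authenticate (who are you?)
    if role is None:
        return "401 Unauthorized"
    if permission not in ROLE_GRANTS.get(role, set()):    # Step 2: authorize (what may you do?)
        return "403 Forbidden"
    return "200 OK"


print(handle_request("token-123", "reports:read"))    # 200 OK
print(handle_request("token-123", "reports:write"))   # 403 Forbidden: authenticated, not authorized
```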

*  Make regular use of Cryptography:

In this regard, the tools of Encryption must be used to protect the APIs that are being used in the web application, especially at the point of communications where the end user is requesting certain pieces of information and the web server has to respond to them.  The primary objective of this is to ensure the highest level of security for any authentication details that are being sent back and forth.  Examples of this include implementing SSL certificates, Transport Layer Security (aka TLS) protocols, and API gateways (which will allow you to streamline and manage all of the network traffic coming to and leaving the APIs).
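As a small illustration, here is what calling an API over TLS looks like with Python’s requests library, which verifies server certificates by default.  The URL and token are placeholders; the only real takeaway is to never disable that verification in production.

```python
import requests

# requests verifies server TLS certificates by default; the URL and token are placeholders.
response = requests.get(
    "https://api.example.com/v1/orders",
    headers={"Authorization": "Bearer <access-token>"},   # credentials travel only over TLS
    verify=True,      # spelled out for emphasis -- never set verify=False in production
    timeout=10,
)
print(response.status_code)
```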

*  Deploy Throttling Quotas:

As described in the last section, one of the main security weaknesses of APIs is that many of them do not have any sort of restrictions placed on them in terms of the number of requests that they can process.  In order to mitigate this risk, it is highly recommended that certain rules be deployed onto the APIs to gradually reduce the number of requests that they can process once a certain limit has been reached (a minimal rate-limiting sketch follows the list below).  For example, this can include the following:

               *A maximum number of requests that can be handled during certain time periods;

               *Imposing limitations on the level of bandwidth that is being consumed;

               *Deploying other types of quotas that are time or resource based in nature.
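Here is the minimal rate-limiting sketch mentioned above: a simple token bucket in Python that caps how many requests a client can make per time window.  The capacity and period are arbitrary example values; an API gateway would normally enforce this for you.

```python
import time


class TokenBucket:
    """Simple throttling quota: allow at most `capacity` requests per `period` seconds."""

    def __init__(self, capacity: int, period: float):
        self.capacity = capacity
        self.period = period
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * (self.capacity / self.period))
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # over quota: reject or delay the request


bucket = TokenBucket(capacity=100, period=60.0)   # example quota: 100 requests per minute
print(bucket.allow())   # True until the quota is exhausted
```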

*  Make use of validation techniques:

Also as mentioned, one of the other grave weaknesses of APIs is that malicious code can be injected into them and manipulated by the Cyberattacker for their gain.  One way to overcome this vulnerability is to implement a set of validation rules that acts as a cross check for any new input (such as input strings or objects) that is submitted.  In this way, if anything appears to be anomalous or out of the ordinary, those inputs can be discarded quickly before an injection attack occurs against the APIs that are being used.
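For example, here is a tiny allowlist-style validation sketch in Python for one hypothetical API field.  Anything that does not match the expected shape is rejected before it ever reaches the back end.

```python
import re

# Allowlist-style validation for one hypothetical API field; anything that does not
# match the expected shape is rejected before it ever reaches the database.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")


def validate_username(value: str) -> str:
    if not USERNAME_PATTERN.fullmatch(value):
        raise ValueError("rejected: input does not match the expected format")
    return value


validate_username("jane_doe")                    # accepted
# validate_username("jane'; DROP TABLE users")   # raises ValueError: classic injection string
```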

*  Deploy RESTful APIs:

This is a set of rules and standards that have been created in order to make APIs easy to understand and scan on a regular basis, so that any security vulnerabilities in them can be detected quickly and efficiently.  This is also known as the “RESTful Web Service”, and it serves as a simpler alternative to the older Simple Object Access Protocol (aka SOAP).  To this end, this kind of API forces the software developer to formulate secure source code from the very beginning of the Software Development Lifecycle (aka SDLC), rather than waiting until the very end, when time to delivery is of the essence.

*  Implement Auditing and Logging Tools:

Given the ever-changing dynamics of the security threat landscape, it is now more important than ever to keep examining your log files on a regular basis, especially as they relate to the APIs that are currently in place.  Much of this can now be automated through the use of Artificial Intelligence (AI) technology, which can detect any kind of erratic activity in just a matter of a few seconds.
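Here is a minimal sketch of what that kind of API audit logging could look like in Python, with a crude counter standing in for the anomaly detection an AI-driven tool would provide.  The field names and thresholds are illustrative only.

```python
import logging
from collections import Counter

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
audit = logging.getLogger("api.audit")
auth_failures = Counter()   # per-client tally of failed authentications


def log_api_call(client_id: str, endpoint: str, status: int, latency_ms: float) -> None:
    audit.info("client=%s endpoint=%s status=%d latency_ms=%.1f",
               client_id, endpoint, status, latency_ms)
    if status == 401:
        auth_failures[client_id] += 1
        if auth_failures[client_id] >= 5:   # arbitrary example threshold
            audit.warning("client=%s has %d failed logins -- possible erratic activity",
                          client_id, auth_failures[client_id])


log_api_call("client-42", "/v1/orders", 200, 35.2)
```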

Conclusions

Overall, this article has examined what an API is, and the major security issues that are associated with them, through the list provided by OWASP.  Solutions were also given in helping to mitigate these risks.  The use of APIs will only continue to grow into the future, as web applications become more complex and handle an even larger influx of information and data. 

Therefore, it is always important to keep ahead of the curve, by conducting regular security audits on the APIs, through the use of Penetration Testing and Threat Hunting exercises.

Sources

1)     https://owasp.org/Top10/

Friday, November 10, 2023

Learn About The Latest Risk To The Android Device - The IMUTA Malware

 


Let’s face it, our smartphones are an extension of our lives, both on a personal and professional basis.  If anything happens to them, we are totally paralyzed, in a way like losing our Internet connection.  But given how much we actually use them, they have been for the longest time a prime target for the Cyberattacker. 

There have been countless articles written about how to keep your device as safe as possible, so today’s post is going to be yet another one of them.

But we take a different approach on this one, in that we focus upon Android, and all of the smartphones that use it.  At the present time, this is the most popular mobile OS around.  In fact, it has been estimated that there are over 3 billion users of it on a global scale. 

Most of these users have relied heavily upon the antimalware, antivirus, firewall, etc. that is installed onto their devices as the main source of protection.  But truth be told, the malware of today has become so sophisticated and covert that much of it bypasses these traditional safeguards and very often is not detected in time.

The main culprit behind this trend is, believe it or not, Generative AI.  Through the use of the various platforms that have become available (especially ChatGPT), a Cyberattacker can literally create a piece of malicious payload that can be easily spread in Phishing emails, and can even use it to trick the legitimate chatbots that many people make use of. 

The Cyberattacker can now quite easily “jailbreak” into the core of the chatbot, and manipulate it in a way to launch Social Engineering attacks.  For example, if an end user went to a website seeking advice on something from a chatbot, it could really be just a fake, and instead, it could pose questions back, or even engage in a particular conversation, that will urge the end user to give up their private and confidential information, ranging anywhere from financial information to their Social Security numbers.

As a result of all of this, there has been a 61% rise in Phishing based attacks just from Generative AI and from outsmarting the chatbots alone.  Apart from this, another grave threat that is posed to the Android user is what is known as “Incremental Malicious Update Attacks”, also known as “IMUTA” for short. 

Essentially, this is where a Cyberattacker will deploy a malicious payload in an incremental fashion, finding its home in a mobile app that is not completely secured.  In fact, the major source where these malicious payloads are deployed is the Google Play Store.

The trick here is that every time the end user updates their Android device, this malicious payload will become more dominant in the OS, until it is too late to do anything about it.  According to a recent article published in the Journal of Ambient Intelligence and Humanized Computing, researchers have demonstrated how IMUTA can be used to breach user privacy:  a voice search application (which is actually a mobile app) downloaded from the Play Store can add malicious features through incremental updates.  Worse yet, the malware can scan and collect private user data from the device, such as contacts, messages, and photos, and transmit it to a remote server covertly.

More information about this threat vector can be found at the following links:

https://scholars.org/contribution/imuta-malware-breaches-google-play-security

https://link.springer.com/article/10.1007/s12652-023-04535-7

So given just how scary the above scenario can be, what is an Android user supposed to do?  Well, first keep in mind that all people who use a smartphone are prone to being hacked.  The key here is in mitigating that risk.  So here are some quick steps that you can take:

1)     Be careful of what you download:

For example, do you really need this app, or is it just a “want to have”?  My recommendation would be to download only those apps that you really need.  Before you do so, always check the website of the vendor, and try to find any reviews by doing online searches.  If there is anything negative about it, don’t download it!!!

2)     Keep your smartphone updated:

Simply put, if there is an update, install it.  If possible, enable your smartphone to update automatically, but during hours when you are not using it.

3)     Monitor your device:

Be aware of any subtle signs that your device is giving you.  For example, watch for any slowdowns, unusual system crashes, or any pop-ups that appear.  These are usually the first warning signs of the IMUTA threat vector.

My Thoughts On This:

To be honest, if I was shopping for a smartphone or looking for a replacement, I would go with an iPhone.  I have been a user of one since 2014 or so, and the security features they have are great.  Also, the app store from Apple is pretty secure as well. 

For instance, before any mobile app can be uploaded for general consumption by the public, Apple requires that the software developer(s) follow stringent security testing measures, including thoroughly testing the mobile app in a sandbox environment, then releasing it to a limited production environment.

Just some food for thought.

Saturday, November 4, 2023

The Stark Revelations Of App Sec Today - Must Read

 


In yesterday’s blog, I wrote about Patch Tuesday, and the need to develop secure source code when creating applications.  This is an effort to mitigate the total number of patches that you will need in the end.  But there is another topic that is closely related to this, and it has to do with what is known as “Application Security”.  A technical definition of it is as follows:

“AppSec is the process of finding, fixing, and preventing security vulnerabilities at the application level, as part of the software development processes. This includes adding application measures throughout the development life cycle, from application planning to production use. In the past, security happened after applications were designed and developed. Today, security is “shifting left”.”

(SOURCE:  https://www.checkpoint.com/cyber-hub/cloud-security/what-is-application-security-appsec/).

So while software security deals with the code itself, Application Security (also called “App Sec”) deals with the vulnerabilities, gaps, and weaknesses after the application has been launched, and is continuing to be used throughout its lifetime. 

But AppSec is more all-encompassing, as it also includes the hardware, and the databases that are used to store the information and data submitted by customers and prospects.

A recent report published by the Purple Book Community reveals just how little AppSec is taken seriously by Corporate America, and the findings are alarming.  It is entitled “State of Application Security”, and it can be downloaded at this link:

http://cyberresources.solutions/blogs/App_Sec.pdf

The survey pool consisted of the following titles:

*CISOs

*Security Engineers

*Software Developers

*Application Security Engineers

*Other C Suite Executives

Here are some of the highlights of what was discovered:

*48% of the respondents claim that their IT Security can support over 50+ software developers.

*42% can only support one to five software developers.

*24% of the respondents claim that they have no security support for software developers.

*On average, there are 100+ software developers for just one IT Security team member.

*It was discovered that vulnerabilities happen multiple times during the launch of a product and during its entire lifecycle, which nobody really seems to pay too much attention to.

*Only 21% of the respondents say that they can remediate a vulnerability at the AppSec level within a timespan of just one day.  (More details can be seen at this link:  https://www.darkreading.com/edge/remediation-ballet-is-a-pas-de-deux-of-patch-and-performance)

The bottom line is that these numbers indicate that security is not even a priority in the development or the production release of the application.  Also, the very slow remediation times are compounding the problem of a lack of response to AppSec.  But here is something else interesting which the study revealed:

*100% of the respondents claim that they are deploying, or at least planning to deploy, all of their software applications and entire infrastructures into a Cloud platform, such as AWS or Microsoft Azure. 

Although the Cloud providers do a reasonably good job in providing the needed tools to secure these apps, the tenants need to do their part in protecting them as well.  For instance, businesses need to make sure that the settings they use are not the default ones, but rather are the ones that specifically meet their security requirements. 

Doing this will also greatly mitigate the risk of data leakages from happening.
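As a simple illustration of checking for insecure defaults, here is a hypothetical Python sketch that compares an exported storage configuration against a required baseline.  The keys and values are made up and do not correspond to a real AWS or Azure schema.

```python
# Hypothetical export of a cloud storage configuration; the keys and values are
# illustrative and do not follow a real AWS or Azure schema.
storage_config = {
    "public_access": True,        # insecure default left in place
    "encryption_at_rest": False,
    "logging_enabled": True,
}

REQUIRED_BASELINE = {
    "public_access": False,
    "encryption_at_rest": True,
    "logging_enabled": True,
}

for setting, required in REQUIRED_BASELINE.items():
    actual = storage_config.get(setting)
    if actual != required:
        print(f"FINDING: {setting} is {actual}, expected {required}")
```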

Thus, there is a greater call now among CISOs to quickly adopt the principles of DevSecOps.  In a general sense, this is where the IT Security and Operations teams come together and work in unison with the software development teams to act as a double check of not only the source code, but the completed web application as well before it is rolled out into the production environment.

More information about Cloud Security can be seen at the link below:

https://www.darkreading.com/google-cloud-security/considerations-for-reducing-risk-when-migrating-to-the-cloud

Another key issue that is compounding the lack of AppSec resources is the extremely limited availability of needed funding.  In the survey, this was discovered:

*22% of the respondents have no budget or funding at all.

*About 35% of the respondents claim that they have some sort of budget, but there will be no increase to it in 2024.

Because of this lack of money, only 38% of the respondents have even a barely defined AppSec program for their business.

My Thoughts On This:

In response to these dismal numbers, the Purple Book Community has just launched what is known as the “Scalable Software Security Maturity Model”, also known as the “S3M2” for short.  More information about this can be seen at the link below:

https://www.thepurplebook.club/s3m2

 

 

Friday, November 3, 2023

Is Patch Tuesday Still Worth It???

 


As the months and years go by, all of the major Cyber vendors have chosen to pick a particular day of the month in order to formally release all of their software patches and upgrades to the public.  Although I don’t keep specific track of them, I know for a fact that Adobe, Oracle, and I think even VMware have a specific day and time of the month.  But the behemoth of them all, Microsoft, has been the leader in this realm.

On the second Tuesday of every month, Microsoft announces all of their upgrades for all of the software applications that need them.  So as you can imagine, this can be quite an exhaustive list to go through when deciding which ones you need to download and apply. 

Personally, I really don’t pay too much attention to it because I let my laptop do all of the updating as it needs to.

Once I hear the fan going off for a long period of time, that’s when I know for sure it is happening.  But here are some interesting tidbits about “Patch Tuesday”, as it has become known:

*It first started in October of 2003, and has continued since then, for 20 years.

*Despite its prestige, Microsoft has been ranked as being amongst the worst technology vendors for having vulnerabilities in the software apps that it creates and deploys.  Heck, it has even been known to have gaps in the patches themselves.  For example, over this 20-year reign, there have been 10,900 flaws in Microsoft products.  Of these, 1,200 were ranked as “Critical” in terms of severity, and 5,300 were ranked as “Important”.

*There have been over 630 exploits for just one “Critical” or “Important” rated vulnerability. 

*Because of the sheer growth of Microsoft in the last 20 years, driven primarily by M365 and Azure, the company has been deemed also to have amongst the largest attack surfaces ever imaginable. 

But now, the critical question that is being asked is why should companies have to wait until Patch Tuesday to find out about vulnerabilities?  Shouldn’t they know about them sooner so they can fend off any potential threats?  Here are some thoughts on this:

1)     The Zero Day Exploits:

This is fancy techno jargon that simply means a Cyberattacker has discovered a weakness or a gap before Microsoft is aware of it.  This means that they are ready to pounce, and will try to use just one entry point to cause as much destruction as possible.  Whether it is simply deploying a malicious payload, or heisting PII datasets to be sold on the Dark Web, the damage has already been done.  In fact, it has been estimated that it only takes 79 minutes for a Cyberattacker to find an unknown vulnerability.  Really, in the end, this is not too difficult to accomplish.  With all of the free hacking tools that are available on the Dark Web, this comes as no surprise to me.  But even when this happens, there is a huge lag time involved.  Once the exploit has been discovered, Microsoft then needs to create the patch for it.  Then it has to be tested.  And once businesses download it, they too need to further test it in a sandboxed environment to make sure that it will “play nice” with the other components of their IT and Network Infrastructure.  The bottom line is that there still exists a huge time gap between when an exploit is first known and when the patch comes out to the public for downloading.

(SOURCE:  https://www.crowdstrike.com/resources/reports/threat-hunting-report/)

2)     Put security first:

This now all comes down to a topic which I have belabored before:  address security in the extremely early stages of development, as the source code for the application or project is being written.  For example, compile the source code in modules or “chunks”, and after each iteration, thoroughly test it for any weaknesses or vulnerabilities.  If any are found, then remediate them then and there so they do not have a cascading effect on the subsequent modules (a minimal per-module scan sketch follows the link below).  In this respect, open-source APIs also need to be vetted completely before they are put out into the production environment.  In the end there will always be some sort of vulnerability, but taking a proactive approach early on in the game will greatly mitigate these risks.  This is technically known as “Secure By Design”, and it has been highly recommended by CISA.  More information about this can be seen at the link below:

https://www.darkreading.com/vulnerabilities-threats/5-steps-to-becoming-secure-by-design-in-the-face-of-evolving-cyber-threats
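As a rough illustration of that module-by-module approach, here is a hypothetical Python sketch of a per-module gate that runs a static analysis scan after each iteration and stops when findings appear.  Bandit is used here only as one example of a Python-focused scanner; substitute whatever tool your pipeline actually relies on.

```python
import subprocess
import sys

# Hypothetical module directories; Bandit stands in for whichever scanner you actually use.
MODULES = ["payments", "auth", "reporting"]

for module in MODULES:
    result = subprocess.run(["bandit", "-r", module], capture_output=True, text=True)
    if result.returncode != 0:   # Bandit exits non-zero when it reports findings
        print(f"Security findings in {module}; remediate before moving to the next module:")
        print(result.stdout)
        sys.exit(1)

print("All modules passed the scan for this iteration.")
```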

My Thoughts On This:

So now, the other big question that remains:  is Patch Tuesday still worth it?  People will have differing views about this, but IMHO, I think it is still worth it.  The primary advantage of it is that at least the IT Security team knows when new software patches and upgrades will be coming out, so they can plan accordingly. 

But again, this can also be stressful, because if Microsoft releases over 100+ patches, those will have to be reviewed in extensive detail to see what is needed.

But then of course, there is the school of thought of putting them out on an ad hoc basis.  Meaning, rather than waiting for the second Tuesday of the month to announce them, Microsoft should simply release patches as they are ready.  But then the complaints will be that there is not enough time to prepare, etc. 

In the end, it takes the best of both worlds, and truthfully speaking, it will be very hard to achieve.  Given just how gargantuan Microsoft is, it is quite conceivable that they could come out with new patches almost every other day.  So what is a business to do? 

The answer is simple:  Be proactive.  Keep testing your systems and digital assets on a regular basis, and remediate anything that is found.  That way, you will be one step ahead of Patch Tuesday.

Also, more information about Patch Tuesday can be found at the link below:

https://www.darkreading.com/edge-articles/how-patch-tuesday-keeps-the-beat-after-20-years

 

 
