How to calculate impact metrics in the CVSS standard

I am trying to calculate impact metrics (confidentiality, availability and integrity) in the CVSS standard.
Can anyone suggest how this is done?

Since you are following a standard in order to communicate findings in a standardized, commonly agreed way, you should use the definitions provided in that standard.
Refer to this document: https://www.first.org/cvss/specification-document

First of all, there is a distinction between CVSS v2 and v3, so you would need to specify which one you use (v2 rates each impact as None/Partial/Complete, while v3 uses None/Low/High).
In any case, in order to assess the proper impact, you need to understand the vulnerability and the standard. The official standard has many different examples.
A couple of examples. Assume a vulnerability that allows a remote attacker to crash the Apache web server by sending a malformed request. If we have to assign a CVSS score for the web server itself, Confidentiality is not impacted as nothing is leaked, and Integrity is not impacted as no data is manipulated. Availability, however, is completely impacted, as the server no longer responds.
Now assume we have to assign a CVSS score for the same issue, but this time for the operating system (i.e. the server on which the HTTP daemon runs). Confidentiality and Integrity are still not impacted, but the availability of the server is impacted only PARTIALLY: while the web server might not respond anymore, other services (e.g. SSH or a mail server) still might.
This should make it clear that CVSS cannot be calculated automatically; it has to be assessed for each vulnerability in relation to the affected software component. There are also a number of other pitfalls defined by the standard, so I strongly recommend reading it!
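To make the arithmetic concrete, here is a minimal sketch of the CVSS v3.1 base-score formula in Python, restricted to the scope-unchanged case and using the metric weights from the specification. The vector corresponds to the remote Apache crash example above (AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H). This illustrates the published formula; it is not a replacement for a full calculator.

    # CVSS v3.1 metric weights from the specification (scope unchanged).
    AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
    AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
    PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
    UI = {"N": 0.85, "R": 0.62}                          # User Interaction
    CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact weights

    def roundup(x: float) -> float:
        """Round up to one decimal place, as defined in appendix A of the spec."""
        i = round(x * 100000)
        return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

    def base_score(av, ac, pr, ui, c, i, a):
        """Base score for scope-unchanged vectors (the simpler of the two branches)."""
        iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
        impact = 6.42 * iss
        exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
        return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

    # Remote, unauthenticated crash of the web server: C:N, I:N, A:H -> 7.5 (High)
    print(base_score("N", "L", "N", "N", "N", "N", "H"))

Note how the whole score hinges on the C/I/A assessment discussed above: change A:H to A:L (the partial, operating-system perspective in v3 terms) and the score drops to 5.3.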

Related

Is a WAF necessary on Kubernetes?

When reading blog posts about WAFs and Kubernetes, it seems 90+ % of the posts are written by WAF providers, while the remaining posts seem to be sceptical. So I would like to hear what your experiences with WAFs are: do they make sense, and if so, can you recommend any good open-source WAFs? We are currently not allowed to use American cloud providers, as we work with "person data", and the Schrems II judgement has indicated that unencrypted "person data" is not allowed on their platforms (even if on EU servers).
To my understanding, a WAF helps with the following:
IP-whitelists/blacklists
Rate Limits
Scanning of HTTPS requests for SQLi and XSS
Cookie Poisoning and session-jacking
DDoS (requires a huge WAF cluster)
But I would also think that these problems can be handled elsewhere:
IP-whitelists/blacklists can be handled by the Loadbalancer or NetworkPolicies
Rate Limits can be configured in the Ingress
Handling of SQLi and XSS is done by input sanitization in the application
Server-side sessions bound to IPs can prevent poisoning and jacking
DDoS attacks are hard to absorb, so I have no native solution here (but they are low risk?)
Sure, I can see the advantage of centralizing security at the access gate to the network, but from what I have read, WAFs are hard to maintain, they have tons of false positives, and most companies mainly use them to be compliant with ISO standards, largely in "monitoring mode". Shouldn't it be secure enough to use SecurityPolicies, NetworkPolicies, Ingress rules and load balancer rules rather than a WAF?
A WAF is not strictly necessary on Kubernetes — or on any other deployment platform. Honestly, even after consulting for dozens of companies, I've seldom encountered any site that used a WAF at all.
You're right that you could duplicate the functions of a WAF using other technology. But you're basically reinventing the wheel by doing so, and the programmers you assign to the job are not as expert in those security tasks as the developers of the WAF are. At best, they would be doing it as one of many tasks on their plate, so they couldn't devote themselves full-time to implementing and testing the WAF.
There is also a valid argument that defense in depth in computing is a good thing. Even if you have other security measures in place, they might fail. It's worth creating redundant layers of security defense, to account for that possibility.
There's a tradeoff between implementing security (or any other feature) yourself versus paying someone else for their expert work. This is true for many areas of software development, not only a WAF.
For example, it has become popular to use a web application framework. Is it possible to develop your own framework? Of course it is, and sometimes it's necessary if you want the code to have very specific behavior. But most of the time you can use some third-party framework off the shelf. It saves you a lot of time, and you get the instant benefit from years of development and testing done by someone else.
A good WAF does a lot more than that, and it is independent of the deployment model (Kubernetes or otherwise).
A WAF can:
Detect and prevent application-level exploits far beyond SQLi and XSS. Sure, you can make a secure application... but can you actually make a secure application? (A team of sometimes-changing developers usually cannot.)
Detect and prevent exploitation of vulnerabilities in underlying layers, like nginx or the OS, or maybe even Kubernetes itself.
Provide hotfixes ("virtual patches") for known vulnerabilities until they are actually fixed in the code or patched in the underlying component, for example by blocking certain values for parameters you know are vulnerable (a sketch follows below).
So in short, yes, a WAF does make sense with k8s too; it is not dependent on the deployment model. A WAF is just a layer-7 firewall that understands HTTP, and it can inspect traffic to find flaws and prevent exploits.
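To make the virtual-patching point concrete, here is a deliberately naive sketch of such a hotfix as a Python WSGI middleware. The endpoint parameter name is made up for illustration; a real WAF such as ModSecurity would express this as a rule and would also normalize encodings and inspect request bodies.

    import re
    from urllib.parse import parse_qs

    # Hypothetical scenario: the application's "export_path" query parameter
    # is known to be vulnerable to path traversal until the next release ships.
    BAD_PARAM = "export_path"
    BAD_PATTERN = re.compile(r"\.\.[/\\]")

    def virtual_patch(app):
        """Wrap a WSGI app, rejecting known-bad values for one parameter."""
        def guarded(environ, start_response):
            params = parse_qs(environ.get("QUERY_STRING", ""))
            for value in params.get(BAD_PARAM, []):
                if BAD_PATTERN.search(value):
                    start_response("403 Forbidden", [("Content-Type", "text/plain")])
                    return [b"Request blocked by virtual patch\n"]
            return app(environ, start_response)
        return guarded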
Update:
For example, a recent vulnerability was Log4Shell, in log4j. A single request could make the server execute arbitrary code, due to a framework-level (third-party) vulnerability. A good, regularly updated WAF would probably have prevented it even before you read about the problem.
Spring4Shell was a somewhat similar vulnerability in Spring that can also be prevented by WAFs, as could Heartbleed, a vulnerability in OpenSSL.
There was a PHP vulnerability quite a while ago that involved a magic number sent as any parameter.
Command injection vulnerabilities in any application or component follow specific patterns, and so on.
A WAF also has more generic patterns for common application vulnerabilities, including (but not limited to) SQL injection and XSS. Sure, your application could be secure and free of these today. But over time it will almost certainly become vulnerable: even the best team cannot produce bug-free code, and that applies to security bugs too.
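As a toy illustration of the signature-style detection described here (real WAF rule sets are far larger and normalize encodings, nesting and obfuscation, which this sketch deliberately ignores):

    import re

    # Simplified signatures of the kind a WAF ships: a Log4Shell-style JNDI
    # lookup, a classic SQL injection tautology, and a naive XSS probe.
    SIGNATURES = [
        re.compile(r"\$\{jndi:", re.IGNORECASE),
        re.compile(r"'\s*or\s+'1'\s*=\s*'1", re.IGNORECASE),
        re.compile(r"<script\b", re.IGNORECASE),
    ]

    def looks_malicious(raw_request: str) -> bool:
        """Return True if any signature matches the raw request text."""
        return any(sig.search(raw_request) for sig in SIGNATURES)

    print(looks_malicious("GET /?q=${jndi:ldap://evil.example/x} HTTP/1.1"))  # True
    print(looks_malicious("GET /?q=harmless+search+terms HTTP/1.1"))          # False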
As a web application is usually only accessible through HTTP, ALL of that traffic is available for inspection by a component that understands HTTP. All application-layer attacks (and that's a lot) come through HTTP, and a WAF is, at least in theory, capable of preventing them. It surely will not recognize everything (it's not magic), and again, you could implement it all yourself, but that would be very difficult and time-consuming. Just as you would not implement your own API gateway or network firewall, you would want to use a WAF to provide a layer of protection for your application and its underlying components.
On the other hand, it's true that a WAF takes some time to configure for your specific scenario and application. At first it will probably produce false positives. You can then decide how to manage those: disable entire rules, exclude certain pages or parameters from checks, and so on. It does involve some work, maybe a lot for a very complex application. But once configured, it provides an additional layer of protection against threats you may not even face currently, but will in the future.
WAF suggestions:
If you are running managed Kubernetes (AWS EKS, Azure AKS and the like), your cloud provider's WAF is probably the best choice due to easy setup and good integration (though I understand that might not be an option for you). If you are running your own, I don't know of a good one apart from ModSecurity. Naxsi comes to mind, but while I have no experience with it, its functionality seems very limited compared to the other options and to what's described above.
A WAF and/or API gateway, whatever you call it, plays a very vital role in a web application, one that many developers initially fail to understand.
First and foremost, note that it's another "out of process" component of your application that fronts your entire attack surface.
The least it can do is play the role of a "circuit breaker": if your main Kubernetes-based deployment goes down, for whatever reason, this layer can serve maintenance pages to your users.
Beyond that, it can provide response caching, aggregation of responses from different microservices, buffering, prevention of injection-type attacks, centralized request logging, request analysis, TLS termination, authentication decoupling, TLS translation, HTTP translation, OWASP protection, and the list goes on.
There is a reason why web applications like Google Search and all the other big ones rely on a WAF/API gateway!
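As a sketch of the circuit-breaker role mentioned above, here is a toy gateway using Python's standard library that forwards GET requests to the application and serves a maintenance page when the application is down. The upstream address is an assumption, and a real gateway would also pool connections, stream bodies and forward headers.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.error import URLError
    from urllib.request import urlopen

    UPSTREAM = "http://127.0.0.1:8080"  # assumed address of the real application

    class Gateway(BaseHTTPRequestHandler):
        def do_GET(self):
            try:
                # Happy path: proxy the request to the application.
                with urlopen(UPSTREAM + self.path, timeout=2) as resp:
                    status, body = resp.status, resp.read()
            except (URLError, OSError):
                # Application is down: serve a maintenance page, not a raw error.
                status, body = 503, b"<h1>Down for maintenance, back soon.</h1>"
            self.send_response(status)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), Gateway).serve_forever()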

Is it necessary to set up a firewall for my webserver if it's just serving pages?

I have an Ubuntu droplet running a webserver. It's serving dynamic pages.
The backend is written in python with fastapi + uvicorn.
Generally speaking, security can often affect performance. As this paper points out
Network security and network performance are inversely related.
It goes on to say that a firewall does indeed have a negative impact on performance:
As seen from the result of the simulation, network performance is adversely affected when firewall is implemented.
I am concerned about speed. I want it to be lightning fast. That's why I've chosen ASGI.
Does it make sense to set up a firewall in this scenario? There is no input of user data (forms and the like) anywhere on the website.
No one can tell based on the information provided whether you should use a firewall or not. There are too many moving parts.
Have you done any risk assessment for your application? What kind of data is your application processing, and how sensitive is it? What are your requirements regarding Confidentiality, Integrity and Availability? Have you done any hardening already? Has someone evaluated the security of your application? Do you require authentication and access control? Which firewall would you like to have (normal, next gen, application firewall) and what for?
When you look at the definition of application security (an application should offer Confidentiality, Integrity and Availability of the data at rest, in transit and in processing), you notice that Confidentiality and Integrity pull against the Availability principle. The very core of application security contains a contradiction: by improving security (putting in a firewall) you might worsen it (legitimate users may not be able to access the site, and everything gets slower).
Having said that, please either do a threat and risk analysis of your application, or have a look at the OWASP Application Security Verification Standard and go over the Level 1 requirements.

Client Server Security Architecture

I would like to get my head around how best to set up a client-server architecture where security is of utmost importance.
So far I have the following, which I hope someone can tell me is good enough, or whether there are other things I need to think about, or whether I have the wrong end of the stick and need to rethink things.
Use SSL certificate on the server to ensure the traffic is secure.
Have a firewall set up between the server and client.
Have a separate sql db server.
Have a separate db for my security model data.
Store my passwords in the database using a secure hashing function such as PBKDF2.
Passwords generated using a salt which is stored in a different db to the passwords.
Use cloud based infrastructure such as AWS to ensure that the system is easily scalable.
I would really like to know whether there are any other steps or layers I need to make this secure. Is storing everything in the cloud wise, or should I have some physical servers as well?
I have tried searching for some diagrams which could help me understand but I cannot find any which seem to be appropriate.
Thanks in advance
Hardening your architecture can be a challenging task, and sharding your services across multiple servers while over-engineering your architecture for a semblance of security could prove to be your largest security weakness.
However, a number of questions arise when you come to design your IT infrastructure which can't be answered in a single SO answer (I will try to find some good white papers and append them).
There are a few things I would advise; they are somewhat opinionated, backed up by my own thinking.
Your Questions
I would really like to know whether there are any other steps or layers I need to make this secure. Is storing everything in the cloud wise, or should I have some physical servers as well?
Settle for the cloud. You no longer need to store things on physical servers, unless you have core business functions already running on local physical machines.
Running physical servers increases your system administration burden with things such as HDD encryption and physical security requirements, which can be misconfigured or completely ignored.
Use SSL certificate on the server to ensure the traffic is secure.
This is normally a no-brainer and I would go with a straight "yes"; however, you must take the context into consideration. If you are running something such as a blog or documentation site that never transfers sensitive information over HTTP, then why use HTTPS? HTTPS has its own overhead; it's minimal, but it's still there. That said, if in doubt, enable HTTPS.
Have a firewall set up between the server and client.
That is suggested; you may also want to opt for a service such as the Cloudflare WAF, though I haven't personally used it.
Have a separate sql db server.
Yes, although not necessarily for security purposes. Database servers and web application servers have different hardware requirements, and optimizing both simultaneously on the same box is not very feasible. Additionally, having them on separate boxes improves your scalability quite a bit, which will be beneficial in the long run.
From a security perspective, it's mostly another illusion of "if I have two boxes and the attacker compromises one [the web application server], he won't have access to the database server".
At first sight this might seem to be the case, but it rarely is. Compromising the web application server is still almost a guaranteed game over. I will not go into much detail on this (unless you specifically ask me to); however, it's still a good idea to keep both services separate from each other on their own boxes.
Have a separate db for my security model data.
I'm not sure I understood this: what security model are you referring to, exactly? Care to share a diagram or two (maybe an ERD) so we can get a better understanding?
Store my passwords in the database using a secure hashing function such as PBKDF2.
An obvious yes. What I am about to say, however, is controversial and may be flagged by some people (it's a bit of a hot debate): I recommend using bcrypt instead of PBKDF2, because bcrypt is slower to compute (and therefore slower to crack).
See - https://security.stackexchange.com/questions/4781/do-any-security-experts-recommend-bcrypt-for-password-storage
Passwords generated using a salt which is stored in a different db to the passwords.
If you use bcrypt, I don't see why this would be required (I may be wrong). I go into more detail on username and password hashing in the following StackOverflow answer, which I recommend you read - Back end password encryption vs hashing
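A minimal sketch with the third-party Python bcrypt package illustrates the point: the salt is generated per password and embedded in the stored hash itself, so there is nothing to keep in a separate database.

    import bcrypt  # third-party package: pip install bcrypt

    password = b"correct horse battery staple"

    # gensalt() produces a fresh random salt; the cost factor (default 12)
    # controls how slow hashing is. Salt and cost end up inside the hash value.
    hashed = bcrypt.hashpw(password, bcrypt.gensalt())
    print(hashed)  # e.g. b"$2b$12$..." -- cost, salt and digest in one value

    # Verification re-derives the hash using the salt stored inside `hashed`.
    print(bcrypt.checkpw(password, hashed))        # True
    print(bcrypt.checkpw(b"wrong guess", hashed))  # False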
Use cloud based infrastructure such as AWS to ensure that the system is easily scalable.
This purely depends on your goals, budget and requirements. I would personally go for AWS, however you should read some more on alternative platforms such as Google Cloud Platform before making your decision.
Last Remarks
All of the things you mentioned are important, and it's good that you are even considering them (most people just ignore such questions or go with the most popular answer); however, there are a few additional things I want to point out:
Internal Services - Make sure that no unrequired services and processes are running on the server, especially in production. Such services are often left running old versions of their software (since you won't be administering them) that could be used as an entry point to compromise your server.
Code Securely - This may seem like another no-brainer, yet it is still overlooked or not done properly. Investigate what frameworks you are using, how they handle security, and whether they are actually secure. As a developer (and not a pen-tester) you should at least use an automated web application scanner (such as Acunetix) to run security tests after each build you push, to make sure you haven't introduced any obvious, critical vulnerabilities.
Limit Exposure - Goes somewhat hand-in-hand with my first point. Make sure that services are only exposed to other services that depend on them, and nothing else. As a rule of thumb, keep everything entirely closed and open up gradually only when strictly required.
My last few points may come off as broad. The intention is to keep a certain philosophy in mind when developing your software and infrastructure, rather than a permanent rule to tick on a checkbox.
There are probably a few things I have missed out. I will update the answer accordingly over time if need be. :-)

What security risks are posed by using a local server to provide a browser-based gui for a program?

I am building a relatively simple program to gather and sort data input by the user. I would like to use a local server running through a web browser for two reasons:
HTML forms are a simple and effective means for gathering the input I'll need.
I want to be able to run the program off-line and without having to manage the security risks involved with accessing a remote server.
Edit: To clarify, I mean that the application should be accessible only from the local network and not from the Internet.
As I've been seeking out information on the issue, I've encountered one or two remarks suggesting that local servers have their own security risks, but I'm not clear on the nature or severity of those risks.
(In case it is relevant, I will be using SWI-Prolog for handling the data manipulation. I also plan on using the SWI-Prolog HTTP package for the server, but I am willing to reconsider this choice if it turns out to be a bad idea.)
I have two questions:
What security risks does one need to be aware of when using a local server for this purpose? (Note: In my case, the program will likely deal with some very sensitive information, so I don't have room for any laxity on this issue).
How does one go about mitigating these risks? (Or, where I should look to learn how to address this issue?)
I'm very grateful for any and all help!
There are security risks with any solution. You can use tools proven over many years and still be hacked one day (this is from my own experience), or you can pay a lot for a security solution and never be hacked. So you always need to weigh effort against impact.
Basically, you need to protect four "doors" in your case:
1. Authorization (password interception or, for example, improper usage of cookies)
2. The HTTP protocol
3. Application input
4. Other ways to access your database (not via HTTP: for example, an SSH port with a weak password, or someone physically taking your computer or hard disk; in some cases you need to properly encrypt the volume)
Doors 1 and 4 are not specific to Prolog, but door 4 is the only one that has some specifics in the case of a local server.
Protecting the HTTP protocol level means not allowing requests that can take control of your SWI-Prolog server. For this purpose I recommend installing a reverse proxy like nginx in front of it, which can prevent attacks at this level, including some types of DoS. The browser will then contact nginx, and nginx will forward the request to your server only if it is a correct HTTP request. You can use any other server instead of nginx if it has similar features.
Install a proper SSL key and enable SSL (HTTPS) in your reverse proxy server, not in your SWI-Prolog server. HTTPS will encrypt all information in transit; the proxy then communicates with SWI-Prolog over plain HTTP.
Think about authorization. Some authorization methods can be broken very easily, so you need to study this topic; there is a lot of information available. I think it is the most important part.
The application input problem: the famous example is SQL injection. Study the examples. All good web frameworks have "entry" procedures to sanitize all possible injections; take existing code and rewrite it in Prolog.
Also, test all input fields with very long strings, different character sets, etc.
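For the injection part of the input door, the standard fix is parameterized queries rather than string building. A minimal sketch using Python's built-in sqlite3, since the principle is the same whatever language the server is written in:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    name = "alice' OR '1'='1"  # hostile input from a form field

    # UNSAFE: building the query by string concatenation would let the
    # input above rewrite the WHERE clause and dump every row.
    # SAFE: the ? placeholder passes the input as data, never as SQL.
    rows = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()
    print(rows)  # [] -- the injection attempt matches nothing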
As you can see, security is not so easy, but you can choose an appropriate level of effort by weighing it against the impact of being hacked.
Also, think about the likely attacker. If somebody is specifically interested in getting your information, all the methods mentioned are good. But that is a rare case: most often, hackers just scan the internet and try known exploits against every server they find. In that case your best friends are honeypots and Prolog itself, because the probability of an attacker being interested in SWI-Prolog internals is extremely low (an attacker needs to study the server code well to find a door).
So I think you will find adequate methods to protect all sensitive data.
But please, never use passwords that are combinations of dictionary words, and never use the same password for more than one purpose; that is the most important rule of security. For the same reason, you shouldn't give your users access to all information; protection should be part of the application-level design.
The measures specific to a local server are a good firewall, a proper network setup, and encryption of the hard drive partition in case your local server can be physically stolen.
But if you mean the application should be accessible only from your local network and not from the Internet, you need much less effort: mainly, check your router/firewall setup and the fourth "door" in my list.
If you have a very limited number of known users, you can simply ask them to use a VPN, and not protect your server as heavily as in the "global" access case.
I'd point out that my post was about a security issue with using port forwarding in Apache to access a Prolog server.
And I do know of a successful Prolog injection DoS attack on a website based on the SWI-Prolog HTTP framework. I don't believe the website's author wants the details made public, but the possibility is certainly real.
Obviously this attack vector is only possible if the site evaluates Turing complete code (or code which it can't prove will terminate).
A simple security precaution is to check the Request object and reject requests from anything but localhost.
I'd point out that the pldoc server, by default, only responds on localhost.
- Anne Ogborn
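The localhost check Anne describes translates directly to most HTTP stacks. A minimal sketch in Python with FastAPI (shown in Python only because Prolog specifics vary; in SWI-Prolog the equivalent is to inspect the peer field of the request object):

    from fastapi import FastAPI, Request            # pip install fastapi uvicorn
    from fastapi.responses import JSONResponse

    app = FastAPI()

    @app.middleware("http")
    async def localhost_only(request: Request, call_next):
        # Reject anything not originating from the loopback interface.
        client = request.client
        if client is None or client.host not in ("127.0.0.1", "::1"):
            return JSONResponse({"detail": "local requests only"}, status_code=403)
        return await call_next(request)

    @app.get("/")
    async def index():
        return {"status": "ok"}

Binding the server itself to the loopback interface (e.g. running uvicorn with --host 127.0.0.1) is an even simpler way to get the same effect.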
I think the SWI-Prolog http package is an excellent choice. Jan Wielemaker put much effort into making it secure and scalable.
I don't think you need to worry about SQL injection; indeed, it would be strange to rely on SQL when you have the power of Prolog at your fingertips...
Of course, you need to properly manage the HTTP access in your server...
Just this morning there was an interesting post on the SWI-Prolog mailing list about this very topic: Anne Ogborn shares her experience...

Software and Security - do you follow specific guidelines?

As part of a PCI-DSS audit, we are looking into improving our coding standards in the area of security, with a view to ensuring that all developers understand the importance of this area.
How do you approach this topic within your organisation?
As an aside we are writing public-facing web apps in .NET 3.5 that accept payment by credit/debit card.
There are so many different ways to break security, and you can expect no shortage of attackers. You have to stop them all, even attacks that haven't been invented yet. It's hard. Some ideas:
Developers need to understand well-known secure software development guidelines. Howard & LeBlanc, "Writing Secure Code", is a good start.
But being good rule-followers is only half the point. It's just as important to be able to think like an attacker. In any situation (not only software-related), think about what the vulnerabilities are. You need to understand some of those weird ways that people can attack systems - monitoring power consumption, speed of calculation, random number weaknesses, protocol weaknesses, human system weaknesses, etc. Giving developers freedom and creative opportunities to explore these is important.
Use checklist approaches such as OWASP (http://www.owasp.org/index.php/Main_Page).
Use independent evaluation (eg. http://www.commoncriteriaportal.org/thecc.html). Even if such evaluation is too expensive, design & document as though you were going to use it.
Make sure your security argument is expressed clearly. The common criteria Security Target is a good format. For serious systems, a formal description can also be useful. Be clear about any assumptions or secrets you rely on. Monitor security trends, and frequently re-examine threats and countermeasures to make sure that they're up to date.
Examine the incentives around your software development people and processes. Make sure that the rewards are in the right place. Don't make it tempting for developers to hide problems.
Consider asking your QSA or ASV to provide some training to your developers.
Security basically falls into one or more of three domains:
1) Inside users
2) Network infrastructure
3) Client side scripting
That list is written in order of severity, which is the opposite of the order of violation probability. Here are the proper management solutions, from a very broad perspective:
The only way to prevent violations by inside users is to educate the users, enforce awareness of company policies, limit user freedoms, and monitor user activities. This is extremely important, as this is where the most severe security violations always occur, whether malicious or unintentional.
Network infrastructure is the traditional domain of information security; two years ago, security experts would not consider looking anywhere else for security management. Some basic strategies are to use NAT for all internal IP addresses, enable port security on your network switches, physically separate services onto separate hardware, and carefully protect access to those services even after everything is buried behind the firewall. Protect your database from code injection. Use IPsec to reach all automation services behind the firewall, and limit points of access to known points behind an IDS or IPS. Basically: limit access to everything, encrypt that access, and inherently treat every access request as potentially malicious.
Over 95% of reported security vulnerabilities are related to client-side scripting from the web, and about 70% of those target memory corruption, such as buffer overflows. Disable ActiveX, and require administrator privileges to activate it. Patch all software that executes any sort of client-side scripting in a test lab no later than 48 hours after patches are released by the vendor; if the tests show no interference with the company's authorized software configuration, deploy the patches immediately. The only solution for memory corruption vulnerabilities is to patch your software. This software may include Java client software, Flash, Acrobat, all web browsers, all email clients, and so forth.
As far as ensuring your developers are compliant with PCI accreditation, make sure they and their management are educated to understand the importance of security. Most web servers, even large corporate client-facing web servers, are never patched; those that are patched may take months to be patched after they are discovered to be vulnerable. That is a technology problem, but more importantly it is a gross management failure. Web developers must be made to understand that client-side scripting is inherently open to exploitation, even JavaScript. This problem is easy to see with the advance of AJAX, since information can be dynamically injected to an anonymous third party in violation of the same-origin policy, completely bypassing the encryption provided by SSL. The bottom line is that Web 2.0 technologies are inherently insecure, and those fundamental problems cannot be solved without defeating the benefits of the technology.
When all else fails, hire some CISSP-certified security managers who have the management experience and the balls to speak directly to your company executives. If your leadership is not willing to take security seriously, your company will never meet PCI compliance.
