Is a WAF necessary on Kubernetes?

When reading blog posts about WAFs and Kubernetes, it seems 90+% of the posts are written by WAF providers, while the remaining posts seem to be sceptical. So I would like to hear what your experiences are with WAFs: do they make sense, and if so, can you recommend any good open-source WAFs? We are currently not allowed to use American cloud providers, as we work with "personal data", and the Schrems II judgement has indicated that unencrypted "personal data" is not allowed on their platforms (even if on EU servers).
To my understanding, a WAF helps with the following:
IP whitelists/blacklists
Rate limits
Scanning of HTTP(S) requests for SQLi and XSS
Cookie poisoning and session hijacking
DDoS (requires a huge WAF cluster)
But I would also think that these problems can be handled elsewhere:
IP whitelists/blacklists can be handled by the load balancer or NetworkPolicies
Rate limits can be configured in the Ingress (see the sketch after this list)
Handling of SQLi and XSS is done by input sanitization in the application
Server-side sessions bound to IPs can prevent poisoning and hijacking
DDoS is hard to absorb, so I have no native solution here (but is it low risk?)
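To make the first two points concrete, here is a minimal sketch (TypeScript, assuming a hypothetical Express app) of what IP allow-listing and rate limiting look like when implemented outside a WAF. On Kubernetes you would more likely express the same logic declaratively via ingress-nginx annotations or load-balancer rules, but the mechanics are this simple:

```typescript
// Sketch only: naive in-process IP allow-list + fixed-window rate limit.
// Hypothetical app; production setups use vetted libraries or do this
// at the ingress/load-balancer layer instead.
import express, { Request, Response, NextFunction } from "express";

const ALLOWED_CIDRS = ["10.0.0.0/8", "192.168.0.0/16"]; // hypothetical allow-list
const WINDOW_MS = 1_000; // rate-limit window
const MAX_REQS = 20;     // max requests per window per IP

const hits = new Map<string, { count: number; windowStart: number }>();

// Minimal IPv4 CIDR membership test (no IPv6 handling).
function inCidr(ip: string, cidr: string): boolean {
  const [net, bitsStr] = cidr.split("/");
  const bits = parseInt(bitsStr, 10);
  const toInt = (a: string) =>
    a.split(".").reduce((acc, o) => ((acc << 8) + parseInt(o, 10)) >>> 0, 0);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return ((toInt(ip) & mask) >>> 0) === ((toInt(net) & mask) >>> 0);
}

function allowList(req: Request, res: Response, next: NextFunction) {
  const ip = (req.socket.remoteAddress ?? "").replace(/^::ffff:/, "");
  if (!ALLOWED_CIDRS.some((c) => inCidr(ip, c))) {
    res.sendStatus(403); // not on the allow-list
    return;
  }
  next();
}

function rateLimit(req: Request, res: Response, next: NextFunction) {
  const ip = (req.socket.remoteAddress ?? "").replace(/^::ffff:/, "");
  const now = Date.now();
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now }); // start a new window
    next();
    return;
  }
  entry.count += 1;
  if (entry.count > MAX_REQS) {
    res.sendStatus(429); // too many requests in this window
    return;
  }
  next();
}

const app = express();
app.use(allowList, rateLimit);
app.get("/", (_req, res) => res.send("ok"));
app.listen(8080);
```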
Sure, I can see the advantage of centralizing security at the access gate to the network, but from what I have read WAFs are hard to maintain, they have tons of false positives, and most companies mainly use them to be compliant with ISO standards, and mainly in "monitoring mode". Shouldn't it be secure enough to use SecurityPolicies, NetworkPolicies, Ingress rules and load-balancer rules rather than a WAF?

A WAF is not strictly necessary on Kubernetes — or on any other deployment platform. Honestly, even after consulting for dozens of companies, I've seldom encountered any site that used a WAF at all.
You're right that you could duplicate the functions of a WAF using other technology. But you're basically reinventing the wheel by doing so, and the programmers you assign to do it are not as expert in those security tasks as the developers of the WAF are. At best, they are probably doing it as one of many tasks they are working on, so they can't devote full-time attention to implementing and testing it.
There is also a valid argument that defense in depth in computing is a good thing. Even if you have other security measures in place, they might fail. It's worth creating redundant layers of security defense, to account for that possibility.
There's a tradeoff between implementing security (or any other feature) yourself versus paying someone else for their expert work. This is true for many areas of software development, not only a WAF.
For example, it has become popular to use a web application framework. Is it possible to develop your own framework? Of course it is, and sometimes it's necessary if you want the code to have very specific behavior. But most of the time you can use some third-party framework off the shelf. It saves you a lot of time, and you get the instant benefit from years of development and testing done by someone else.

A good WAF does a lot more than that, and it is independent of the deployment model (Kubernetes or otherwise).
A WAF can:
Detect and prevent application-level exploits far beyond SQLi and XSS. Sure, you can make a secure application... but can you actually make a secure application? (A team of sometimes-changing developers usually cannot.)
Detect and prevent exploitation of vulnerabilities in underlying layers, like nginx or the OS, or maybe even Kubernetes itself.
Provide hotfixing ("virtual patching") of known vulnerabilities until they are actually fixed in the code or patched in the underlying component (for example, blocking certain values for certain parameters you know are vulnerable, and so on).
So in short, yes, a WAF does make sense with k8s too; in fact it is not dependent on the deployment model at all. A WAF is just a layer-7 firewall that understands HTTP and can look into traffic to find flaws and prevent exploits.
Update:
For example, a recent vulnerability was Log4Shell, in log4j. With a single request it was possible to execute arbitrary code on servers due to a framework-level (third-party) vulnerability. A good, regularly updated WAF would probably have blocked that even before you read about the problem.
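To make the virtual-patching idea concrete, here is a toy sketch (TypeScript/Express, hypothetical app; a real WAF such as ModSecurity expresses this as a rule, not application code) that drops requests containing the ${jndi: lookup pattern that Log4Shell exploits used:

```typescript
// Toy "virtual patch" (sketch only, hypothetical Express app): reject
// requests whose URL, headers or body contain the ${jndi: lookup
// pattern used by Log4Shell exploits. A real WAF (e.g. ModSecurity
// with the OWASP Core Rule Set) ships and updates such rules for you.
import express, { Request, Response, NextFunction } from "express";

// Naive pattern; real rules also catch the many known obfuscations.
const JNDI = /\$\{\s*jndi\s*:/i;

function virtualPatch(req: Request, res: Response, next: NextFunction) {
  const haystacks = [
    req.originalUrl,
    JSON.stringify(req.headers),
    typeof req.body === "string" ? req.body : JSON.stringify(req.body ?? ""),
  ];
  if (haystacks.some((h) => JNDI.test(h))) {
    res.status(403).send("Blocked by virtual patch");
    return;
  }
  next();
}

const app = express();
app.use(express.text({ type: "*/*" })); // capture raw body for inspection
app.use(virtualPatch);
app.post("/search", (req, res) => res.json({ query: req.body }));
app.listen(8080);
```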
Spring4Shell was a somewhat similar vulnerability in Spring that can also be prevented by WAFs. So could Heartbleed, a vulnerability in OpenSSL.
There was a PHP vulnerability quite a while ago that involved a magic number, sent as any parameter.
Command injection vulnerabilities in any application or component follow specific patterns, and so on.
A WAF also has more generic patterns for common application vulnerabilities, including (but not limited to) SQL injection and XSS. Sure, your application could be secure and not have these. But especially over time, it will for sure be vulnerable; even the best team cannot produce bug-free code, and that applies to security bugs too.
As a web application is usually only accessible through HTTP, ALL of that is available for inspection by a component that understands HTTP. All application-layer attacks (and that's a lot) will come through HTTP, and a WAF, at least in theory, is capable of preventing them. Surely it will not always recognize everything, it's not magic, and again, you could implement all of it yourself. But it would be very difficult and time-consuming. Just as you would not implement your own API gateway or network firewall, you would want to use a WAF to provide a layer of protection to your application and its underlying components.
On the other hand, it's true that it takes some time to configure for your specific scenario and application. At first, it will probably produce false positives. Then you can decide how to manage those: you can disable entire rules, or exclude certain pages or parameters from checks, and so on. It does involve some work, maybe a lot for a very complex application. But once it's configured, it provides an additional layer of protection against threats you may not even have currently, but will in the future.
WAF suggestions:
If you are running managed Kubernetes (AWS EKS, Azure AKS and the like), then your cloud provider's WAF is probably the best choice due to easy setup and good integration (though I understand that might not be an option for you). If you are running your own, I don't know of a good one apart from ModSecurity, which (together with the OWASP Core Rule Set) can run embedded in ingress-nginx. Naxsi would also come to mind, and while I don't have experience with it, its functionality seems very limited compared to the other options and what's described above.

A WAF and/or API gateway, as you may call it, plays a vital role in a web application, one that many developers fail to understand initially.
First and foremost, note that it is another "out of process" component of your application that takes on your entire attack surface.
The least it can provide is to play the role of a "circuit breaker": for example, if your main Kubernetes-based deployment is down, for whatever reason, this layer can serve maintenance pages to your users (see the sketch below).
Further to that, it can provide response caching, aggregation of responses from different microservices, buffering, prevention of injection-type attacks, centralized request logging, request analysis, TLS termination, authentication decoupling, TLS translation, HTTP translation, OWASP protection, and the list goes on. See this brief video for one reference implementation: link
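As a sketch of the circuit-breaker point above (core Node only; the upstream address is a hypothetical placeholder), a reverse proxy can serve a maintenance page whenever the deployment behind it is unreachable:

```typescript
// Sketch of the "circuit breaker" role: a minimal reverse proxy (core
// Node, no framework) that serves a maintenance page when the upstream
// deployment is unreachable. Upstream host/port are hypothetical.
import http from "node:http";

const UPSTREAM = { host: "127.0.0.1", port: 3000 };

http
  .createServer((clientReq, clientRes) => {
    const proxied = http.request(
      {
        ...UPSTREAM,
        path: clientReq.url,
        method: clientReq.method,
        headers: clientReq.headers,
      },
      (upstreamRes) => {
        clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
        upstreamRes.pipe(clientRes); // stream the backend response through
      }
    );
    proxied.on("error", () => {
      // Backend down: degrade gracefully instead of a connection error.
      clientRes.writeHead(503, { "content-type": "text/html" });
      clientRes.end("<h1>Down for maintenance, back soon.</h1>");
    });
    clientReq.pipe(proxied); // forward the request body
  })
  .listen(8080);
```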
There is a reason why web applications like Google Search, and all the other big ones, sit behind a WAF/API gateway!

Related

Is it necessary to set up a firewall for my webserver if it's just serving pages?

I have an Ubuntu droplet running a webserver. It's serving dynamic pages.
The backend is written in python with fastapi + uvicorn.
Generally speaking, security can often affect performance. As this paper points out
Network security and network performance are inversely related.
It goes on to say that a firewall does indeed have a negative impact on performance.
As seen from the result of the simulation, network performance is adversely affected when firewall is implemented.
I am concerned about speed. I want it to be lightning fast. That's why I've chosen ASGI.
Does it make sense to set up a firewall in this scenario? There is no input of user data (forms and the like) anywhere on the website.
No one can tell based on the information provided whether you should use a firewall or not. There are too many moving parts.
Have you done any risk assessment for your application? What kind of data is your application processing, and how sensitive is it? What are your requirements regarding Confidentiality, Integrity and Availability? Have you done any hardening already? Has someone evaluated the security of your application? Do you require authentication and access control? Which firewall would you like to have (normal, next gen, application firewall) and what for?
When you look at the definition of application security (an application should offer Confidentiality, Integrity and Availability of the data at rest, in transit and in processing) you notice that Confidentiality and Integrity go against the Availability principle. The very core of application security contains a contradiction. By improving security (putting in a firewall) you might worsen the security (legitimate users may not be able to access the site, everything gets slower).
Having said that - please do either a threat and risk analysis of your application, or have a look at the OWASP Application Security Verification Standard and go over the Level 1 requirements.

node.js api gateway implementation and passport authentication

I am working on implementing a microservices-based application using node.js. While searching for examples of how to implement an API gateway, I came across the following article that seems to provide one: https://memz.co/api-gateway-microservices-docker-node-js/. Examples of implementing the API gateway pattern in node.js seem a little hard to come by so far, and this article seemed like a really good one.
There are a few items that are still unclear and that I am still having issues finding documentation on.
1) Security is a major item for the app I am developing, and I am having trouble seeing where the authentication should take place (i.e. using passport, should I add the authentication items in the API gateway and pass the JWT along with the request to the corresponding microservice, as the user's logged-in information is needed for certain activities?). The only issue here seems to be that all of the microservices would need passport in order to verify the JWT and get the user's profile information. Would the microservices be, technically, inaccessible to the outside world except through the API gateway, as this seems to be the aim?
2) How does this scenario change if I need to scale to multiple servers with docker images on each one? How would this affect load balancing, as it seems like something would have to sit at a higher level to deal with load balancing?
I can tell that much depends on your application requirements. Really.
I'm now past five years of experience with production microservices, using several languages, on systems going from medium to very large scale.
None of them shared the same requirements, and without a deep understanding of what you need and what your business (product) requirements are, it is hard to know the right answer. Anyway, I'll try to share some experience to help you get it right.
Ideally you want the security to be encapsulated in an external service, so that you can update and apply new policies faster. You'll also be able to revoke all existing tokens should you find a breach in your system, or if someone on your team inadvertently pushes a secret key (or cert) to an external service.
You could handle authentication in each single service or in an edge network tool (such as the API gateway). Be careful choosing how to handle it, because each approach has its own trade-offs:
Choosing the API gateway, your services remain lighter and do not need to know anything about the authentication steps, but surely at some point you'll need to know who the authenticated user is, and you'll need some plain reference to them (a JSON record, or a link or ID to a "user profile" service). How you do it is up to your requirements, and we could go even deeper into the pros and cons of each possible choice applicable to your case.
Choosing to handle it at the service level requires you (and your teams) to understand the security process better (you can hide it behind a good library), and they'll need support from your security team (which may also be just you; you know, the more services implementing security, the more things you'll have to think about to avoid adding unnecessary features). The big problem here is that you'll often end up stopping your tasks to think about what would help you out on this particular service, and you'll be tempted to extend your authentication service (and God, unless you really know what you're doing, don't add a single call not needed for authentication purposes).
One thing is easy to determine: you surely need to think about tokens (JWT, JWE or, again, whatever your requirements impose).
JWT has good benefits, but its payload is only encoded, not encrypted, so anyone holding the token can read it: never put sensitive data in there, or things you wouldn't publicly share about your user (e.g. an ID is probably fine, while security questions or 2FA answers are not). JWE is an encrypted form of the spec. A common opaque token (with no embedded meaning) requires a backend call to get the data, but it works much like cookie sessions and the data never leaves your servers.
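A minimal sketch of the gateway option (TypeScript with the jsonwebtoken package; the secret and header name here are my assumptions, not a standard): the gateway verifies the JWT once and forwards only a plain user reference downstream, so the microservices don't need passport at all.

```typescript
// Sketch: authentication handled once at the gateway (hypothetical
// secret and header names). Downstream services receive a plain
// x-user-id header instead of having to verify tokens themselves.
import express, { Request, Response, NextFunction } from "express";
import jwt from "jsonwebtoken"; // npm install jsonwebtoken

const SECRET = process.env.JWT_SECRET ?? "dev-only-secret"; // assumed shared HMAC key

function authenticate(req: Request, res: Response, next: NextFunction) {
  const token = (req.headers.authorization ?? "").replace(/^Bearer /, "");
  try {
    const claims = jwt.verify(token, SECRET) as { sub?: string };
    if (!claims.sub) throw new Error("missing subject");
    // A JWT payload is readable by anyone holding the token, so `sub`
    // should be a harmless identifier, never sensitive data.
    req.headers["x-user-id"] = claims.sub;
    next();
  } catch {
    res.sendStatus(401);
  }
}

const app = express();
app.use(authenticate);
// ...proxy the now-authenticated requests to the microservices here...
app.listen(8080);
```

This only stays safe if the microservices are reachable exclusively through the gateway, which is exactly the point made in the P.S. at the end of this answer.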
You need to define the boundaries of your services yourself, and do yourself a favor: make each service's boundaries clean, defined and standard.
Try to define common policies and standardize interactions. I know it may be easier to add a queue here, a REST endpoint there, an RPC over there, but you'll soon end up with a bunch of IPC mechanisms you can no longer handle, and it will quickly demand your attention.
Also, if your business solution is pretty heavy to build, I don't think it's a good idea to write the API gateway, security and so on yourself. I'd go with open-source, community-supported (or even company-backed, if you have some budget) and production-tested solutions.
By definition microservice architectures are very dynamic; you'll fight to keep them immutable between deployment versions, but unless you're a big firm you cannot afford to keep thousands of servers live. This means you'll discover bugs that only present themselves under certain circumstances you cannot spot in other environments (it often happens that you cannot reproduce them).
By choosing to develop the whole stack yourself, you agree to deal with maintenance and bug discovery across your whole stack. So when you try to load a page that has 25 services interacting, you know it may be failing because of a bug in: your API gateway, your security implementation, your token parser, your user account service, your business services A to N, your database service (if any), your database load balancer (if any), or your database instance.
I know it's tempting to do everything yourself, but try to keep it flat and do only what you need to do. By following this path you'll think about your product, which I think is the most important thing to do right now.
To complete my answer, about the scaling issues:
it doesn't matter. Whatever choice you pick, it will scale seamlessly:
The API gateway should be able to work on a pool of backends (so from that server you should be able to redirect to N backend machines you can bring up when you need to; you can even have some API to support automatic registration of new instances, or even more simply put in the IP of an Elastic Load Balancer, HAProxy or an equivalent, and as you add backends to them it will just work - you have moved the multiple-IPs issue from the API gateway one layer down).
If you handle authentication at the service level (and you have an API gateway), see #1
If you handle authentication at the service level (without an API gateway), then you need to look at some other level in your stack: load balancing (layer 3 or layer 7), or the DNS level, where you can use several DNS features to answer with different IPs, including advanced features like Anycast if you need latency distribution.
I know this answer introduces a lot of other questions, but I really tried to answer yours. The fact is that you need to understand and evaluate a lot of things when planning a microservice architecture, and I'd not write a single SLOC without a well-written plan printed on every wall of my office.
You'll often need to step back from a single service to review the global vision and check everything is going fine.
I don't want to scare you; I'm rather trying to make you think, so that you succeed.
I just want you to make sure you have correctly evaluated all of the possibilities before deciding to do everything from scratch.
P.S. Should you choose to go with an API gateway, be sure to limit services to only accept requests coming through it. On the same machine, just start listening on localhost; on multiple machines you'll need some networking rules, depending on your operating system (a minimal localhost example follows).
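For the single-machine case, that really is a one-liner (Express example; the port number is arbitrary):

```typescript
// Bind to 127.0.0.1 instead of 0.0.0.0: only processes on the same
// machine (i.e. the gateway) can reach this service.
import express from "express";

const app = express();
app.get("/internal", (_req, res) => res.json({ ok: true }));

app.listen(3001, "127.0.0.1", () => {
  console.log("listening on 127.0.0.1:3001 (not externally reachable)");
});
```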
Good Luck!

Client Server Security Architecture

I would like to get my head around how best to set up a client-server architecture where security is of utmost importance.
So far I have the following, and I hope someone can tell me whether it's good enough, or whether there are other things I need to think about. Or whether I have the wrong end of the stick and need to rethink things.
Use SSL certificate on the server to ensure the traffic is secure.
Have a firewall set up between the server and client.
Have a separate sql db server.
Have a separate db for my security model data.
Store my passwords in the database using a secure hashing function such as PBKDF2.
Passwords generated using a salt which is stored in a different db to the passwords.
Use cloud based infrastructure such as AWS to ensure that the system is easily scalable.
I would really like to know is there any other steps or layers I need to make this secure. Is storing everything in the cloud wise, or should I have some physical servers as well?
I have tried searching for some diagrams which could help me understand but I cannot find any which seem to be appropriate.
Thanks in advance
Hardening your architecture can be a challenging task, and sharding your services across multiple servers and over-engineering your architecture for a semblance of security could prove to be your largest security weakness.
However, a number of questions arise when you come to design your IT infrastructure which can't be answered in a single SO answer (I will try to find some good white papers and append them).
There are a few things I would advise, which are somewhat opinionated and backed up with my own thinking around them.
Your Questions
I would really like to know is there any other steps or layers I need to make this secure. Is storing everything in the cloud wise, or should I have some physical servers as well?
Settle for the cloud. You do not need to store things on physical servers anymore unless you have current business processes running core business functions that are already working on local physical machines.
Running physical servers increases your system administration requirements for things such as HDD encryption and physical security requirements which can be misconfigured or completely ignored.
Use SSL certificate on the server to ensure the traffic is secure.
This is normally a no-brainer and I would go with a straight "Yes"; however, you must take the context into consideration. If you are running something such as a blog or documentation-related website that does not transfer any sensitive information at any point in time over HTTP, then why use HTTPS? HTTPS has its own overhead; it's minimal, but it's still there. That said, if in doubt, enable HTTPS.
Have a firewall set up between the server and client.
That is suggested; you may also want to opt for a service such as CloudFlare WAF, though I haven't personally used it.
Have a separate sql db server.
Yes, however not necessarily for security purposes. Database servers and web application servers have different hardware requirements, and optimizing both simultaneously on one box is not very feasible. Additionally, having them on separate boxes increases your scalability quite a bit, which will be beneficial in the long run.
From a security perspective, it's mostly another illusion of "If I have two boxes and the attacker compromises one [the web application server], he won't have access to the database server".
At first sight this might seem to be the case, but it is rarely so. Compromising the web application server is still almost a guaranteed game over. I will not go into much detail on this (unless you specifically ask me to), however it's still a good idea to keep both services separate from each other on their own boxes.
Have a separate db for my security model data.
I'm not sure I understood this, what security model are you referring to exactly? Care to share a diagram or two (maybe an ERD) so we can get a better understanding.
Store my passwords in the database using a secure hashing function such as PBKDF2.
Obvious yes; what I am about to say however is controversial and may be flagged by some people (it's a bit of a hot debate): I recommend using BCrypt instead of PBKDF2, due to BCrypt being slower to compute (and therefore slower to crack).
See - https://security.stackexchange.com/questions/4781/do-any-security-experts-recommend-bcrypt-for-password-storage
Passwords generated using a salt which is stored in a different db to the passwords.
If you use BCrypt I do not see why this is required: BCrypt generates a random salt per password and stores it as part of the hash output itself. I go into more detail on the whole username and password hashing topic in the following StackOverflow answer, which I would recommend you read - Back end password encryption vs hashing
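A minimal sketch with the bcrypt npm package, showing why the separate salt store adds little: the randomly generated salt is embedded in the very hash string you store.

```typescript
// Sketch with the bcrypt npm package: the salt is generated per
// password and stored inside the hash string itself, which is why a
// separate salt database buys you little.
import bcrypt from "bcrypt"; // npm install bcrypt

async function register(plain: string): Promise<string> {
  // Cost factor 12: each increment doubles the work an attacker must do.
  return bcrypt.hash(plain, 12); // returns e.g. "$2b$12$<salt><hash>"
}

async function login(plain: string, stored: string): Promise<boolean> {
  // compare() re-derives the hash using the salt embedded in `stored`.
  return bcrypt.compare(plain, stored);
}

// Usage:
register("hunter2").then(async (hash) => {
  console.log(await login("hunter2", hash)); // true
  console.log(await login("wrong", hash));   // false
});
```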
Use cloud based infrastructure such as AWS to ensure that the system is easily scalable.
This purely depends on your goals, budget and requirements. I would personally go for AWS, however you should read some more on alternative platforms such as Google Cloud Platform before making your decision.
Last Remarks
All of the things you mentioned are important, and it's good that you are even considering them (most people just ignore such questions or go with the most popular answer); however, there are a few additional things I want to point out:
Internal Services - Make sure that no unrequired services and processes are running on the server, especially in production. These services will normally be running old versions of their software (since you won't be administering them) that could be used as an entry point for your server to be compromised.
Code Securely - This may seem like another no-brainer yet it is still overlooked or not done properly. Investigate what frameworks you are using, how they handle security and whether they are actually secure. As a developer (and not a pen-tester) you should at least use an automated web application scanner (such as Acunetix) to run security tests after each build that is pushed to make sure you haven't introduced any obvious, critical vulnerabilities.
Limit Exposure - Goes somewhat hand-in-hand with my first point. Make sure that services are only exposed to other services that depend on them and nothing else. As a rule of thumb, keep everything entirely closed and open up gradually when strictly required.
My last few points may come off as broad. The intention is to keep a certain philosophy when developing your software and infrastructure rather than a permanent rule to tick on a check-box.
There are probably a few things I have missed out. I will update the answer accordingly over time if need be. :-)

When writing an HTTP proxy, what security problems do I need to think about?

My company has written an HTTP proxy that takes the original website page and translates it. Think something along the lines of the web translation services provided by Google, Bing, etc.
I am in the middle of security testing the service and associated website. Of course there are going to be a million attacks or misuses of the site that I haven't yet thought of. Additionally, I don't want our site to become a vector that allows anonymous attacks against third-party sites. Since this site will be subject to many eyes from the day it is opened, ensuring the security of both our service and the sites visited through our service is concerning me.
Can anyone point me to any online or published information for security testing. e.g. good lists of attacks to be worried about, security best practices for creating web sites/proxies/etc. I have a good general understanding of security issues (XSS, CSRF, SQL injection, etc). I'm more looking for resources to help me with the specifics of creating tests for security testing.
Any pointers?
Seen:
https://www.owasp.org/index.php/Top_10
https://stackoverflow.com/questions/1267284/common-website-attack-methods-detection-and-recovery
Most obvious problems for a translation service:
Ensure that the proxy cannot access the internal network. Obvious when you think about it, but mostly forgotten in the first release: i.e. a user should not be able to request translation for http://127.0.0.1 etc. As you can imagine, this can cause some serious problems. A clever attack would be http://127.0.0.1/trace.axd, which will expose more than necessary because the application thinks the request is coming from localhost. If you also have any kind of IP-based restrictions between that system and any other systems, you will want to be careful about those as well (a sketch of such a check follows).
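A sketch of such a guard in TypeScript (the helper name is mine, not a standard API): resolve the requested host first and refuse to proxy anything that lands in a loopback, private or link-local range. A production version should also pin the resolved IP for the actual fetch, to avoid DNS-rebinding races.

```typescript
// Sketch of an SSRF guard for a translating proxy (assumed helper
// name): resolve the target host, then reject loopback, private and
// link-local IPv4 ranges before forwarding the request.
import { lookup } from "node:dns/promises";

const PRIVATE_V4: ReadonlyArray<readonly [number, number]> = [
  [0x7f000000, 0xff000000], // 127.0.0.0/8    loopback
  [0x0a000000, 0xff000000], // 10.0.0.0/8     private
  [0xac100000, 0xfff00000], // 172.16.0.0/12  private
  [0xc0a80000, 0xffff0000], // 192.168.0.0/16 private
  [0xa9fe0000, 0xffff0000], // 169.254.0.0/16 link-local (cloud metadata!)
];

function isPrivateV4(ip: string): boolean {
  const n = ip.split(".").reduce((acc, o) => ((acc << 8) + parseInt(o, 10)) >>> 0, 0);
  return PRIVATE_V4.some(
    ([net, mask]) => ((n & mask) >>> 0) === ((net & mask) >>> 0)
  );
}

export async function assertSafeTarget(url: string): Promise<void> {
  const { hostname } = new URL(url);
  const { address, family } = await lookup(hostname);
  if (family === 6) throw new Error("IPv6 targets need their own checks");
  if (isPrivateV4(address)) throw new Error(`refusing internal target ${address}`);
}

// assertSafeTarget("http://127.0.0.1/trace.axd") -> rejects
```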
XSS is the obvious problem; ensure that the translation is delivered to the user from a separate domain (like Google Translate does). This is crucial: don't even think that you can filter XSS attacks successfully.
Other than that, for all the other common web security issues there are lots of things to do. OWASP is the best resource to start with; for automated testing there are free tools such as Netsparker and Skipfish.

Software and Security - do you follow specific guidelines?

As part of a PCI-DSS audit we are looking into improving our coding standards in the area of security, with a view to ensuring that all developers understand the importance of this area.
How do you approach this topic within your organisation?
As an aside we are writing public-facing web apps in .NET 3.5 that accept payment by credit/debit card.
There are so many different ways to break security. You can expect infinite attackers. You have to stop them all - even attacks that haven't been invented yet. It's hard. Some ideas:
Developers need to understand well-known secure software development guidelines. Howard & LeBlanc, "Writing Secure Code", is a good start.
But being good rule-followers is only half the point. It's just as important to be able to think like an attacker. In any situation (not only software-related), think about what the vulnerabilities are. You need to understand some of those weird ways that people can attack systems - monitoring power consumption, speed of calculation, random number weaknesses, protocol weaknesses, human system weaknesses, etc. Giving developers freedom and creative opportunities to explore these is important.
Use checklist approaches such as OWASP (http://www.owasp.org/index.php/Main_Page).
Use independent evaluation (eg. http://www.commoncriteriaportal.org/thecc.html). Even if such evaluation is too expensive, design & document as though you were going to use it.
Make sure your security argument is expressed clearly. The common criteria Security Target is a good format. For serious systems, a formal description can also be useful. Be clear about any assumptions or secrets you rely on. Monitor security trends, and frequently re-examine threats and countermeasures to make sure that they're up to date.
Examine the incentives around your software development people and processes. Make sure that the rewards are in the right place. Don't make it tempting for developers to hide problems.
Consider asking your QSA or ASV to provide some training to your developers.
Security basically falls into one or more of three domains:
1) Inside users
2) Network infrastructure
3) Client side scripting
That list is written in order of severity, which is the opposite of the order of violation probability. Here are the proper management solutions from a very broad perspective:
The only solution to prevent violations from the inside user is to educate the user, enforce awareness of company policies, limit user freedoms, and monitor user activities. This is extremely important as this is where the most severe security violations always occur whether malicious or unintentional.
Network infrastructure is the traditional domain of information security. Two years ago security experts would not have considered looking anywhere else for security management. Some basic strategies are to use NAT for all internal IP addresses, enable port security in your network switches, physically separate services onto separate hardware, and carefully protect access to those services even after everything is buried behind the firewall. Protect your database from code injection. Use IPsec to reach all automation services behind the firewall, and limit points of access to known points behind an IDS or IPS. Basically: limit access to everything, encrypt that access, and inherently treat every access request as potentially malicious.
Over 95% of reported security vulnerabilities are related to client-side scripting from the web, and about 70% of those target memory corruption, such as buffer overflows. Disable ActiveX and require administrator privileges to activate ActiveX. Patch all software that executes any sort of client-side scripting in a test lab no later than 48 hours after the patches are released by the vendor. If the tests do not show interference with the company's authorized software configuration, then deploy the patches immediately. The only solution for memory corruption vulnerabilities is to patch your software. This software may include: Java client software, Flash, Acrobat, all web browsers, all email clients, and so forth.
As far as ensuring your developers are compliant with PCI accreditation, make sure they and their management are educated to understand the importance of security. Most web servers, even large corporate client-facing web servers, are never patched. Those that are patched may take months to be patched after they are discovered to be vulnerable. That is a technology problem, but even more importantly it is a gross management failure. Web developers must be made to understand that client-side scripting is inherently open to exploitation, even JavaScript. This problem is easily realized with the advance of AJAX, since information can be dynamically injected to an anonymous third party in violation of the same-origin policy, completely bypassing the encryption provided by SSL. The bottom line is that Web 2.0 technologies are inherently insecure, and those fundamental problems cannot be solved without defeating the benefits of the technology.
When all else fails hire some CISSP certified security managers who have the management experience to have the balls to speak directly to your company executives. If your leadership is not willing to take security seriously then your company will never meet PCI compliance.
