I have a Node.js application that runs on different servers/containers. Clients are authenticated with JWT tokens. I want to protect this API from DDoS attacks. Per-user usage limits can be stored in the token. I've been thinking about some approaches:
Increment counters in an in-memory database like Redis or Memcached and check them on every request. Could this cause bottlenecks?
Do something in an nginx server (is that possible?)
Use some cloud-based solution (AWS WAF? CloudFront? API Gateway? Do they do what I want?)
How to solve this?
There are dedicated npm packages like
express-rate-limit https://www.npmjs.com/package/express-rate-limit
ratelimiter https://www.npmjs.com/package/ratelimiter
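Both packages implement the counter approach from option 1. The core fixed-window logic can be sketched in plain Node without any dependencies — here a `Map` stands in for the shared Redis store, and the limit and window values are illustrative, not from the question:

```javascript
// Fixed-window rate limiter sketch. In production the counters would
// live in Redis (INCR + EXPIRE) so that all instances share them;
// here a plain Map stands in for that shared store.
const windows = new Map();

// Allow up to `limit` requests per `windowMs` per user.
function allowRequest(userId, limit = 100, windowMs = 60_000, now = Date.now()) {
  const windowStart = Math.floor(now / windowMs);
  const key = `${userId}:${windowStart}`;
  const count = (windows.get(key) || 0) + 1;
  windows.set(key, count);
  return count <= limit;
}
```

A real deployment would replace the `Map` with a single Redis round trip (`INCR key`, plus `EXPIRE key` on the first hit), which keeps the per-request overhead down to one network call — usually fast enough that Redis is not the bottleneck.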
I don't know about AWS, but in Azure you can use Azure API Management to secure your API: https://learn.microsoft.com/en-us/azure/api-management/api-management-key-concepts
Current implementation:
Single-instance WebApp with custom authentication (own DB) and custom server-based token management (matching session-token-user in an in-memory table).
Desired implementation:
Multiple WebApp instances behind Azure Application Gateway as a load balancer and URL router. Still with custom authentication. Token handling: ????? (preferably JWT)
As this will be a multi-tenant service, we don't want to use AD.
Questions:
What would be the best way to implement this scenario? Where can we keep track of users vs. tokens, given that many servers now need to verify the token? An in-memory table is no longer suitable, unless it can live inside the Gateway instance.
Does this have to be done programmatically (like now), or is there a configurable mechanism in Application Gateway or some other Azure service?
Application Gateway does not support authentication with AD, and it definitely does not support custom authentication, so authentication and authorization have to be done at the backend servers. The solution would require a distributed cache, accessible to all backend servers, in which the tokens are kept. You could use Azure Redis Cache for this.
I have some doubts about the most appropriate way to allow access to my company's backend services from public clouds like AWS or Azure, and vice versa. In our case, we need an AWS app to invoke some HTTP REST services exposed in our backend.
I came up with at least two options:
The first one is to set up an AWS Virtual Private Cloud between the app and our backend and route all traffic through it.
The second option is to expose the HTTP service through a reverse proxy and set up IP filtering in the proxy to allow only incoming connections from AWS. We don't want the HTTP service to be publicly accessible from the Internet, and I think this is satisfied with either option. We will also likely need to integrate more services (TCP/UDP) between AWS and our backend, like FTP transfers, monitoring, etc.
My main goal is to establish a standard way to accomplish this integration, so we don't need different configurations depending on the kind of service or application.
I think this is a very common need in hybrid cloud scenarios, so I would like to follow the best practices.
I would very much appreciate any kind of advice.
Your option #2 seems good. Since you have an AWS VPC, you can get an IP for your reverse proxy to whitelist.
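The filtering itself usually lives in the proxy's configuration (e.g. nginx's `allow`/`deny` directives), but the underlying check is just a masked address comparison. A sketch of matching an IPv4 address against a CIDR range — the addresses below are made-up examples:

```javascript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
const ipToInt = (ip) =>
  ip.split('.').reduce((acc, oct) => (acc << 8) | Number(oct), 0) >>> 0;

// True if `ip` falls inside the CIDR range, e.g. '10.0.0.0/16'.
function ipInCidr(ip, cidr) {
  const [net, prefixStr] = cidr.split('/');
  const prefix = Number(prefixStr);
  const mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(net) & mask) >>> 0);
}
```

In nginx the equivalent is declarative: `allow 10.0.0.0/16; deny all;` in the relevant `server` or `location` block, with the range set to whatever egress address your AWS traffic uses.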
There is another approach: expose your backends as APIs secured with OAuth tokens. You need some sort of API management solution for this. Then your Node.js app can invoke those APIs with the token.
WSO2 API Cloud allows you to create these APIs in the cloud and run the API gateway in your own datacenter. The Node.js API calls then hit the on-premises gateway, which validates the token and lets the request through to the backend. You will not need to expose the backend service to the internet. See this blog post:
https://wso2.com/blogs/cloud/going-hybrid-on-premises-api-gateways/
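From the Node.js side, calling such a token-secured API is just a matter of attaching the token as a Bearer header. A minimal sketch — the helper names are illustrative, and `fetch` is built into Node 18+:

```javascript
// Merge an Authorization header into fetch options. The gateway
// validates the Bearer token before forwarding to the backend.
function withBearer(token, init = {}) {
  return {
    ...init,
    headers: { ...(init.headers || {}), Authorization: `Bearer ${token}` },
  };
}

// Call a protected API endpoint (URL and token are placeholders).
async function callApi(url, token) {
  const res = await fetch(url, withBearer(token));
  if (!res.ok) throw new Error(`API call failed: ${res.status}`);
  return res.json();
}
```

The token itself would come from the API management platform's token endpoint (typically the OAuth client-credentials grant for service-to-service calls).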
I am quite new to backend work, so I am teaching myself Postgres and Express. I have built an API that uses JWT authentication and allows calls only from one host, but I am wondering if there is anything more I need to do to protect database access.
I have deployed my REST API on AWS Elastic Beanstalk. I plan on moving everything to Lambda + API Gateway, but even then, besides API security, are there any general guidelines on how to protect database access? I have looked online, but most tutorials do not even cover authentication. Thanks.
As long as the Security Group for your RDS server only allows incoming network traffic from your Elastic Beanstalk servers (or your Lambda functions), you can be sure that nothing else is able to access your database.
I have hosted a Node.js app on an Azure VM connected to a cloud service. Now I am trying to figure out how many sessions have hit that endpoint, and usage metrics around it: server response time, latency, memory usage, disk usage, unique users, etc. Is there a way to get this?
If you're using VMs, you mostly need to implement logging yourself.
Azure, through the Portal, lets you view VM-level metrics like CPU and memory usage. However, to get application-level metrics, such as response times and request counts, you need to design your own solution.
If your Node.js application communicates with the Internet through a reverse proxy (e.g. Nginx, IIS, etc.), then you could extract those metrics from your web server's logs.
Otherwise, you'll have to implement logging inside your JS code. The right approach depends on the framework you're using (if any): Express, Koa, hapi, etc.
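As a sketch of the hand-rolled option for a plain Node HTTP server (framework middleware would look very similar), you can hook the response's `finish` event to record per-request timing — the log format here is made up:

```javascript
// Attach to each (req, res) pair before handling the request; logs
// method, URL, status and elapsed time once the response is sent.
function logRequest(req, res, log = console.log) {
  const start = process.hrtime.bigint();
  res.once('finish', () => {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    log(`${req.method} ${req.url} ${res.statusCode} ${ms.toFixed(1)}ms`);
  });
}
```

In an `http.createServer` handler you would call `logRequest(req, res)` first, then process the request as usual. Metrics like unique users would additionally require logging an identifier (e.g. a session cookie or the authenticated user) and aggregating the log lines afterwards.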
On the other hand, were you using PaaS (Azure Web Apps), you'd get most of these metrics automatically.
Is there a way to access someone else's Memcache in Google App Engine?
Any docs about Memcache security?
No. All GAE services, including memcache, are per-application. This is enforced at the API level; there is no (known) way to access services on behalf of another app.