I have a Flask web application that is currently deployed on AWS Elastic Beanstalk with a configured Classic Load Balancer.
My issue is that my sessions do not seem to be persistent: I originally implemented session-based auth, but once the frontend was deployed and hitting my API, sessions would not persist and users could never stay logged in.
I had intended to switch to token based auth, so that is what I did, and I avoided the session issue.
Fast forward, and I have now implemented OAuth1 using Flask-OAuthlib, but unfortunately this library uses sessions to maintain the OAuth1 provider token secret.
I attempted to enable Duration-Based Session Stickiness via the AWS console for my Classic Load Balancer, but that seemingly did not resolve the issue.
The specific line of code that is causing me trouble is here.
Might there be a way to make this OAuth1 code stateless and not require the session?
Might I be configuring something wrong for my sessions, or missing a simple fix?
Any help would be very much appreciated.
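One thing worth checking: Flask's default session is a client-side cookie signed with `SECRET_KEY`, so it only survives load balancing if every instance uses the same key; a key generated randomly at process startup silently breaks sessions behind a load balancer. A simplified stand-in for the signing mechanism (Flask actually uses `itsdangerous`; this stdlib sketch just illustrates why mismatched keys reject the cookie):

```python
import base64
import hashlib
import hmac
import json

def sign(payload: dict, secret: bytes) -> str:
    """Serialize and sign a session payload the way a signed cookie works."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify(cookie: str, secret: bytes):
    """Return the payload if the signature checks out, else None."""
    body, sig = cookie.rsplit(".", 1)
    expected = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: this instance has a different key
    return json.loads(base64.urlsafe_b64decode(body))

cookie = sign({"user_id": 42}, b"shared-secret")
print(verify(cookie, b"shared-secret"))   # {'user_id': 42}
print(verify(cookie, b"different-key"))   # None
```

If each Beanstalk instance sets its own random `SECRET_KEY`, the cookie written by one instance is rejected by the next, which looks exactly like "sessions never persist".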
Related
I have a Node.js app running on an AWS ECS cluster behind a load balancer. It faces the internet via an Amazon-provided public DNS.
Inside the app there's a user login feature based on passport / express.
When I launch it as just 1 task, the user can log in fine.
When I launch it as more than 1 task, the login itself succeeds (I can see it in the logs), but the very next request doesn't pass the auth check and the user is logged out.
What could be an issue here?
It seems that sticky sessions are not enabled; as a result, requests are distributed randomly across the two tasks, and the session does not exist on the task that handles the next request.
Sticky sessions
Sticky sessions are a mechanism to route requests to the same target in a target group. This is useful for servers that maintain state information in order to provide a continuous experience to clients. To use sticky sessions, the clients must support cookies.
You can enable sticky sessions on the load balancer under the LB attributes.
See the AWS documentation on load-balancer target groups and sticky sessions.
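Stickiness can also be enabled programmatically. A hedged sketch with boto3 for an Application Load Balancer target group (the ARN is a placeholder you'd replace with your own):

```python
def stickiness_attributes(duration_seconds: int = 86400) -> list:
    """Attributes for duration-based (lb_cookie) stickiness on a target group."""
    return [
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": str(duration_seconds)},
    ]

def enable_stickiness(target_group_arn: str) -> None:
    # boto3 is imported lazily so the attribute builder above can be
    # exercised without AWS credentials or the SDK installed.
    import boto3
    elbv2 = boto3.client("elbv2")
    elbv2.modify_target_group_attributes(
        TargetGroupArn=target_group_arn,
        Attributes=stickiness_attributes(),
    )
```

With duration-based stickiness the load balancer sets its own cookie (`AWSALB`), so the client must accept cookies for this to work.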
I have a Node.js application that runs on different servers/containers. Clients are authenticated with JWT tokens. I want to protect this API from DDoS attacks. User usage-limit settings can be stored in the token. I've been thinking about some approaches:
Increment counters in a memory database like Redis or memcached and check them on every request. Could this cause bottlenecks?
Do something in an nginx server (is it possible?)
Use some cloud-based solution (AWS WAF? CloudFront? API Gateway? Do they do what I want?)
How to solve this?
There are dedicated npm packages like
express-rate-limit https://www.npmjs.com/package/express-rate-limit
ratelimiter https://www.npmjs.com/package/ratelimiter
I don't know about AWS, but in Azure you can use Azure API Management to secure your API: https://learn.microsoft.com/en-us/azure/api-management/api-management-key-concepts
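The Redis-counter idea from the question can be sketched as a fixed-window rate limiter. `MemoryStore` below is an illustrative stand-in for a Redis client (which exposes the same `incr`/`expire` calls), and the key names are made up for the example:

```python
import time

class FixedWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per key.

    `store` can be any object with redis-style incr(key) and
    expire(key, ttl) methods, e.g. a redis.Redis client.
    """

    def __init__(self, store, limit: int, window: int):
        self.store = store
        self.limit = limit
        self.window = window

    def allow(self, key: str) -> bool:
        bucket = int(time.time() // self.window)
        counter = f"rate:{key}:{bucket}"
        count = self.store.incr(counter)
        if count == 1:
            # First hit in this window: let the counter expire on its own.
            self.store.expire(counter, self.window)
        return count <= self.limit

class MemoryStore:
    """In-memory stand-in for Redis, for illustration only: it is not
    shared across processes, and expire is a no-op."""

    def __init__(self):
        self.counts = {}

    def incr(self, key):
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key]

    def expire(self, key, ttl):
        pass

limiter = FixedWindowLimiter(MemoryStore(), limit=2, window=3600)
print([limiter.allow("user-1") for _ in range(3)])  # [True, True, False]
```

Because Redis `INCR` is atomic, this works across many app servers sharing one Redis; a single round trip per request is rarely a bottleneck, but it is a shared dependency you have to size and monitor.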
I am quite new to backend type work, so I am teaching myself postgres and express. I have built an API that uses JWT authentication and allows calls only from one host, but I am wondering if there is anything more I need to do in order to protect db access.
I have deployed my REST API on AWS Elastic Beanstalk. I plan on moving everything to lambda + api gateway, but even then besides API security, is there any general guideline as to how to protect db access? I have looked online, but most tutorials do not even cover authentication and such. Thanks
As long as the Security Group for your RDS server only allows incoming network traffic from your Elastic Beanstalk servers (or your Lambda functions) then you can be sure that nothing else is able to access your database.
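As a sketch of that setup with boto3 (the group IDs are placeholders; the rule allows Postgres traffic only from instances in the Beanstalk security group, rather than from an IP range):

```python
def postgres_ingress_from(source_security_group_id: str) -> list:
    """IpPermissions allowing port 5432 only from a source security group."""
    return [{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": source_security_group_id}],
    }]

def lock_down_rds(rds_sg_id: str, beanstalk_sg_id: str) -> None:
    # Lazy import so the rule builder can be tested without the AWS SDK.
    import boto3
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId=rds_sg_id,
        IpPermissions=postgres_ingress_from(beanstalk_sg_id),
    )
```

Referencing a security group instead of CIDR blocks means the rule keeps working as Beanstalk scales instances up and down.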
I am looking into using a service fabric cluster for a service with a public API. When creating a service fabric cluster I have the ability to choose either secured mode and use a certificate, or use unsecured mode.
In unsecured mode, anyone can call the API which is what I want, however it also means that anyone can go to the management page at *northeurope.cloudapp.azure.com:19080 and do anything which is obviously not ok.
I tried using the secure mode with a certificate, and this prevents anyone without the certificate from using the management page, but also seems to prevent anyone calling the API.
Am I missing something simple? How do I keep the management side of the cluster secured, while making the API public so that anyone can call it?
Edit: After looking more carefully it seems to me that the intended behaviour is that as I've configured a custom endpoint when setting up the cluster that I should be able to call the service. So I believe it may just be an error in my code.
Securing a cluster has nothing to do with your application endpoints. There is a separation of concerns between securing the system (management endpoints, node authentication) and securing your applications (SSL, user auth, etc.). There is some other problem here, most likely you have configured the Azure Load Balancer to allow traffic into your cluster on the ports that your services are listening on. See here for more info on that: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/#service-fabric-in-azure
I've recently migrated an application from Heroku to Amazon EC2 on the recommendation of a security consultant. However, he didn't know Heroku in depth, and the doubt remained.
Can access to a Heroku PostgreSQL DB be restricted for it to be accessed only by the application?
Would you recommend Heroku for security critical applications?
This is a deceptively complex question because the idea of "restricted so that it can be accessed only by the application" is ill-defined. If your ultimate goal is simply to keep your data as secure as possible, then Heroku vs. AWS vs. physical servers under lock and key involves some cost-benefit analysis that goes beyond just how your database can be accessed.
Heroku limits database access via authentication. You share a secret (username/password) between the database and the application. Anyone who has that secret can access the database. To facilitate keeping the secret secret, all database access is or should be over SSL.
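On Heroku that shared secret lives in the `DATABASE_URL` config var. A hedged sketch of turning it into connection parameters while forcing SSL (the URL below is fabricated for illustration, and the kwargs are the shape `psycopg2.connect` accepts):

```python
from urllib.parse import urlparse

def pg_connect_kwargs(database_url: str) -> dict:
    """Split a postgres:// URL into connection kwargs, always requiring SSL."""
    parts = urlparse(database_url)
    return {
        "host": parts.hostname,
        "port": parts.port or 5432,
        "dbname": parts.path.lstrip("/"),
        "user": parts.username,
        "password": parts.password,
        # Refuse unencrypted connections even if the server would allow them.
        "sslmode": "require",
    }

kwargs = pg_connect_kwargs("postgres://alice:s3cret@db.example.com:5432/appdb")
print(kwargs["dbname"], kwargs["sslmode"])  # appdb require
```

Forcing `sslmode` on the client side means a misconfigured environment fails loudly instead of silently sending credentials in the clear.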
There are other ways to restrict access in addition to authentication, but many of them are incompatible with a cloud-based approach. Also, many of them require you to take much more control over the environment of your servers, and the more responsibility you have on that front, the bigger the chance that issues totally separate from who can hit the postgres port on your database will sink you.
The advantage in using AWS directly instead of through a paas provider like Heroku is that you can configure everything yourself. The disadvantage is that you have to configure everything yourself. I would recommend you use AWS over a managed service only if you have a team of qualified and attentive sysadmins to configure, monitor and update your environment. Read Heroku's security policy page. Are you doing at least that much to protect your servers in your own configuration on AWS? If not, then you might have bigger problems than how many layers of redundancy are in place around your database.