REST API tricky interview questions - need answers - security

I was recently interviewed by an MNC technical panel and they asked me various questions about REST APIs. I was able to answer all but the two questions below; I gave answers, but I'm not sure they are correct. Can somebody answer my queries with real-time examples?
1) How can I secure my REST API when somebody sends a request from Postman? The user provides all the correct information in the header, like session ID, token, etc.
My answer was: the token sent in the request header should be associated with a successfully authenticated user's info; only then is the user granted access, whether the request comes from Postman or from the application calling the API. (The panel said no to my answer.)
2) How can I handle concurrency in a REST API? That is, if multiple users are trying to access the API at the same time (e.g. multiple POST requests coming in to update data in a table), how will you make sure one request is served at a time so the values are updated as requested by the different users?
My answer was: in Entity Framework we have a class called DbUpdateConcurrencyException; it takes care of handling concurrency and ensures one request is served at a time.
I am not sure about either of my answers above, and I did not find any specific answer by Googling either.
Your expert help is appreciated.
Thanks

1) It is not possible. Requests from Postman or any other client or proxy (Burp, ZAP, etc.) are indistinguishable from browser requests if the user has the appropriate credentials (which they can obtain, for example, by observing and copying normal requests). You can only authenticate the client user, never the client application.
2) It would be really bad if a web application could only serve one client at a time - think of high-traffic sites like Facebook. :) In many (maybe most?) stacks, each request gets its own thread (or similar) to run in, and that thread finishes when the request-response cycle ends. These threads are not supposed to communicate with each other directly while running.

Data consistency is a requirement of the persistence technology: if you are using a database, it must guarantee that concurrent queries behave as if they ran one after the other (isolation). Note that if an application runs multiple queries as one logical operation, transactions or locks need to be used at the database level to maintain consistency. But this is not at all about client requests; it's about how you use your persistence technology to achieve consistent data. With a traditional RDBMS it's mostly easy; with other persistence technologies (for example plaintext files used as storage) it's much harder, because file operations typically don't support a facility similar to transactions (they do support locks, which you have to manage manually).
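To make the transactions-or-locks point concrete, here is a minimal sketch of optimistic concurrency (the same idea that Entity Framework surfaces as DbUpdateConcurrencyException) using node-postgres; the accounts table, version column, and function name are assumptions for illustration:

const { Pool } = require('pg');
const pool = new Pool();

// Optimistic concurrency: the UPDATE only matches the row if nobody
// else has bumped the version since this request read it.
async function updateBalance(accountId, newBalance, expectedVersion) {
  const result = await pool.query(
    'UPDATE accounts SET balance = $1, version = version + 1 WHERE id = $2 AND version = $3',
    [newBalance, accountId, expectedVersion]
  );
  if (result.rowCount === 0) {
    // Another request won the race; the caller should reload and retry,
    // or report a conflict (HTTP 409) to the client.
    throw new Error('Concurrent update detected');
  }
}

If two requests both read version 7 and try to write, whichever UPDATE runs second matches zero rows and fails loudly instead of silently overwriting the first.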

Related

How to verify the requester of a Node API

I have a Cloudflare Worker that presents a registration form and accepts input from the user, which is posted back to the Worker, which then sends it on to a Node HTTP API elsewhere (on DigitalOcean, if that matters) that inserts the data into MongoDB (though it could be any database). I control the code in both the CF Worker and the API.
I am looking for the best way to secure this. I am currently figuring to include a pre-shared secret key in the API request headers, and I have locked down what this particular API can do with database access control. Is there an additional way for me to confirm that only the CF Worker can call the API?
If this is obvious to some, I apologize. I have always been of the mind that unless you are REALLY good at security, it is best to consult those who are.
You can research the OAuth 2.0 standard; it is the authorization standard for third-party clients. Here is a link: https://oauth.net/2/
That solution is the most professional. There are other, less secure ways to do it that are easier to implement: username and password, an x-api-key header, etc.
It sounds to me like you could also block all IPs and allow only requests coming from that specific source (the CF Worker).
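For the pre-shared key the question already proposes, the server-side check can be a small Express middleware. This is a sketch, assuming the Worker sends the key in an x-api-key header and the API keeps it in an API_KEY environment variable:

const crypto = require('crypto');

function requireApiKey(req, res, next) {
  const given = req.get('x-api-key') || '';
  const expected = process.env.API_KEY || '';
  // Hash both values so the buffers have equal length, then compare in
  // constant time to avoid leaking information through response timing.
  const a = crypto.createHash('sha256').update(given).digest();
  const b = crypto.createHash('sha256').update(expected).digest();
  if (!crypto.timingSafeEqual(a, b)) {
    return res.status(401).json({ error: 'unauthorized' });
  }
  next();
}

Combined with IP restrictions, this raises the bar considerably, though OAuth 2.0 remains the more standard approach.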

Is there ever a need to have a GET API, as POST is better in every way?

So we were starting a new project from scratch, and one of the developers suggested we shouldn't have any GET API requests, since POST APIs are better in every way (at least when using a mobile client).
On looking further into this, it does seem POST can do everything GET can do, and do it better:
- slightly more secure, as parameters are not in the URL
- a larger payload limit than a GET request
So is there even a single reason to have a GET API? (This will only be used from a mobile client, so browser-specific caching doesn't affect us.)
Is there ever a need to have a GET API, as POST is better in every way?
In general, yes. In your specific circumstances -- maybe no.
GET and POST are method tokens.
The request method token is the primary source of request semantics
They are a form of metadata included in the HTTP request so that general-purpose components can be aware of the request semantics and contribute constructively.
POST is, in a sense, the wildcard method - it can mean anything. But one consequence of this is that, because the method has unconstrained semantics, general-purpose components can't do anything useful other than pass the request along.
GET, however, has safe semantics (which include idempotent semantics). Because the request is idempotent, general-purpose components know they can resend a GET request when the server returns no response (i.e. when messages are lost on an unreliable transport), and they know that representations of the resource can be pre-fetched, reducing perceived latency.
You dismissed caching as a concern earlier, but you may want to rethink that - the cache constraint is an important element that helped the web take over the world.
Reducing everything to POST reduces HTTP from an application for transferring documents over a network to dumb transport.
Using HTTP for transport isn't necessarily wrong: Simple Object Access Protocol (SOAP) works that way, as does gRPC. You still get authorization and conditional requests, features of HTTP that you might otherwise need to roll yourself.
You aren't doing REST at that point, but that's OK; not everybody has to.
That doesn’t mean that I think everyone should design their own systems according to the REST architectural style. REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them. (Fielding, 2008)
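To see what those general-purpose components actually buy you, here is a sketch of a GET endpoint that cooperates with caches; the route and payload are made up for illustration:

const express = require('express');
const app = express();

app.get('/api/articles/:id', (req, res) => {
  const article = { id: req.params.id, title: 'Hello' }; // stand-in for a real lookup
  // Because the method is GET, any cache between client and server may
  // honor this header and serve repeat requests without touching the origin.
  res.set('Cache-Control', 'public, max-age=60');
  res.json(article); // Express also adds an ETag, enabling 304 revalidation
});

Tunnel the same lookup through POST and every intermediary has to forward it to the origin, because POST gives them no safe-to-cache guarantee.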

Is there a way to protect an API route from being hit by an authenticated user outside of an app making requests?

I'm making an Angular app that has users log in, make progress, and then be awarded levels/experience points. I'm using a Node.js/Express API, and I want to be able to make a call from my app to award them exp. I'm using a JWT, signed by the server with a private key, to auth requests, but I realized that a user could just pull their token and give themselves experience. My question is: is there any way to protect my route from that, or is that a fundamental flaw in the design?
I don't believe this is something you can solve with JWT specifically. As commenters have already said, a JWT just conveys the access rights for the given token. As you say yourself, it would be simple enough to read the traffic and send your own requests to jack up your exp.
While your basic authentication/authorisation mechanism can't solve this, you can handle it in some other fashion - for example, within the request payload itself.
You could encrypt and/or sign your payloads. Given that the app would need to know or receive the key(s) to use, it's possible that with enough investigation this, too, is eventually found and duplicated. But it's another step someone would have to go through and replicate.
You could employ additional checks and measures - make your requests for [exp increase] a two-step process: the server responds to the initial request with some minor task to be solved, which is then attached to the follow-up request. Assuming the task is done properly, you can be reasonably sure the request came from your app, since your app knows how to solve the problems issued (or from someone with a serious lack of hobbies outside of deconstructing your entire application).
You could limit the amount of exp that is reasonably achievable by your users. If you know that people should, at most, be able to gain xyz exp per minute/hour/day, then by monitoring exp growth you can flag and/or block additional gains past that point.
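As a sketch of that last suggestion, the server-side cap can be very small; the hourly limit, in-memory store, and function name here are assumptions:

const MAX_EXP_PER_HOUR = 500; // assumed business limit
const HOUR_MS = 60 * 60 * 1000;
const gains = new Map(); // userId -> { windowStart, total }

function awardExp(userId, amount) {
  const now = Date.now();
  let entry = gains.get(userId);
  if (!entry || now - entry.windowStart > HOUR_MS) {
    entry = { windowStart: now, total: 0 }; // start a fresh one-hour window
    gains.set(userId, entry);
  }
  const granted = Math.min(amount, MAX_EXP_PER_HOUR - entry.total);
  entry.total += granted;
  return granted; // 0 means the cap was hit; worth flagging for review
}

Even if a user replays requests with a stolen token, the damage is bounded by the cap rather than by their patience.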

Securing RESTful API: Is it possible to disallow XHR requests from the JS Console?

My application (mostly client-side code written in Backbone) interfaces with a Node.js server. The sole purpose of my server is to provide API endpoints for my Backbone application.
GET requests are pretty safe; attackers can't do much there. But I do have a few POST and PUT requests. One of the PUT requests is responsible for updating the vote count for a particular user, e.g.:
app.put('/api/vote', function (req, res) {
  // form data sent by the client in the PUT body
  var winningPerson = req.body.winner; // userID
  var losingPerson = req.body.loser; // userID
});
I have noticed that some people have been spamming PUT requests for one particular user via the JS console or some kind of REST console, bypassing the intent of the application as enforced by the user interface. If you used this application as intended, it would never allow you to vote for the same person multiple times in a row, let alone for any arbitrary user in the database (assuming you know their user ID).
But yes, yes, I know: "Don't trust the client." So how can I fix the above problem? Would some kind of IP-address check help to prevent voting multiple times within a span of 3-5 minutes? What can I do to disallow access to my API from the console, so that users cannot arbitrarily vote for anyone they wish, but can only vote by clicking an image with the mouse - or, at the very least, can only vote from the console for those two people, not any arbitrary person?
The answer lies within your server. It shouldn't allow a user to vote more than once within the specified timespan. This is a business rule, and you can enforce it only on the server, because only the server is under your control.
Any enforcement in the UI is good and helpful, but it is not bullet-proof. You definitely have to check on the server to be sure. There is much more to the server's business logic than
The sole purpose of my server is to provide API endpoints for my Backbone application.
Don't try to control something that is out of your control - the client side of your application. Some people vote multiple times because you (your API) ALLOW them to do so. As soon as your server replies "Try in 5 minutes, dude," they'll stop doing this - or at least there will be no harm when they do.
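In Express terms, the "Try in 5 minutes, dude" reply is only a few lines. This sketch assumes an auth middleware has already set req.user, and it uses an in-memory map for brevity (a database or Redis key would survive restarts):

const FIVE_MINUTES = 5 * 60 * 1000;
const lastVote = new Map(); // voterId -> timestamp of last accepted vote

// app is the Express app from the question's snippet
app.put('/api/vote', function (req, res) {
  const voterId = req.user.id; // assumption: set by auth middleware
  const last = lastVote.get(voterId) || 0;
  if (Date.now() - last < FIVE_MINUTES) {
    return res.status(429).json({ error: 'Try in 5 minutes, dude.' });
  }
  lastVote.set(voterId, Date.now());
  // ...validate req.body.winner / req.body.loser and record the vote...
  res.sendStatus(204);
});

Note the window is keyed to the authenticated user, not the IP, so it can't be dodged by switching networks and won't punish people sharing a NAT.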

How to defend against excessive login requests?

Our team has built a web application using Ruby on Rails. It currently doesn't restrict users from making excessive login requests. We want to ignore a user's login requests for a while after she has made several failed attempts, mainly to defend against automated robots.
Here are my questions:
How do I write a program or script that can make excessive requests to our website? I need it to help me test our web application.
How do I restrict a user who has made several unsuccessful login attempts within a period? Does Ruby on Rails have built-in solutions for identifying a requester and tracking whether she has made any recent requests? If not, is there a general way (not specific to Ruby on Rails) to identify a requester and keep track of her activities? Can I identify a user by IP address, cookies, or other information I can gather from her machine? We also hope to distinguish normal users (who make infrequent requests) from automated robots (which make requests frequently).
Thanks!
One trick I've seen is to include form fields on the login form that CSS hacks make invisible to the user.
Automated systems/bots will still see these fields and may attempt to fill them with data. If you see any data in such a field, you immediately know it's not a legitimate user and can ignore the request.
This is not a complete security solution, but it is one trick you can add to the arsenal.
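The check itself is tiny. This Express-style sketch assumes a hidden honeypot field named "website" (the thread's app is Rails, where the equivalent check would sit in the controller):

const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false })); // parse the login form body

app.post('/login', (req, res, next) => {
  // "website" is the honeypot: hidden from humans via CSS, so any
  // value here almost certainly came from a bot filling every field.
  if (req.body.website) {
    return res.status(400).send('Invalid request');
  }
  next(); // proceed to the real login handler
});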
Regarding #1, there are many automation tools out there that can simulate high-volume posting to a given URL. Depending on your platform, something as simple as wget might suffice, or something as complex (relatively speaking) as a script that asks a UserAgent to post a given request multiple times in succession (again, depending on the platform and your language of choice, this can be simple).
Regarding #2, consider first the lesser issue of someone just firing multiple attempts manually. Such instances usually share a session (that being the actual webserver session); you should be able to track failed logins by session ID and force an early failure if the number of failed attempts crosses some threshold. I don't know of any plugins or gems that do this specifically, but even if there is none, it should be simple enough to build a solution.
If session ID doesn't work, then a combination of IP and UserAgent is also a fairly safe means of identification, although individuals behind a proxy may find themselves blocked unfairly by such a practice (whether that matters depends largely on your business needs).
If the attacker is malicious, you may need to use firewall rules to block their access, as they are likely to: a) use a proxy (so IP rotation occurs), b) not use cookies while probing, and c) not play nice with UserAgent strings.
Rails provides means for testing your applications, as described in A Guide to Testing Rails Applications. A simple solution is to write a test containing a loop that sends 10 (or whatever value you define as excessive) login requests. The framework provides means for sending HTTP requests or faking them.
Not many people will abuse your login system, so just remembering the IP addresses of failed logins (for an hour, or whatever period you think is sufficient) would suffice and isn't too much data to store. Unless some hacker has access to a great many IP addresses... but in that situation you'd need more serious security measures, I guess.
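The thread is Rails, where an off-the-shelf middleware like Rack::Attack covers this, but the remember-failed-IPs idea fits in a few lines on any stack. Here it is as an Express-style sketch; the threshold, window, and in-memory store are assumptions:

const MAX_FAILURES = 5;
const WINDOW_MS = 60 * 60 * 1000; // forget failures after an hour
const failures = new Map(); // ip -> timestamps of recent failed logins

function loginThrottle(req, res, next) {
  const now = Date.now();
  const recent = (failures.get(req.ip) || []).filter(t => now - t < WINDOW_MS);
  failures.set(req.ip, recent);
  if (recent.length >= MAX_FAILURES) {
    return res.status(429).send('Too many failed logins; try again later.');
  }
  next();
}

// Call this from the login handler whenever a password check fails:
function recordFailure(ip) {
  const list = failures.get(ip) || [];
  list.push(Date.now());
  failures.set(ip, list);
}

As the answers above note, IP-based tracking can punish users behind shared proxies and won't stop an attacker who rotates addresses, so treat it as one layer, not the whole defense.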
