I have a use case where:
I make a POST request to a REST API from the client server with a payload of 20-30 MB. The request header contains Content-Encoding set to gzip, so it gets compressed at the network level.
In addition to this, do I need to enable gzip compression at the API level?
Now, assuming that the above POST request succeeded in writing that large payload to the database: when I try to retrieve the data back to the client, which is again a POST request (that is how it has been designed; data is retrieved with a POST request), do I need to set the Accept-Encoding: gzip header on the POST request that retrieves the data?
The REST API is a Node/Express application and the database is Cassandra.
I think you are misunderstanding how this works.
the request header contains Content-Encoding set to gzip, so it gets compressed at the network level
This means your client is compressing the payload, not the network. The Content-Encoding HTTP header tells the other side what format the body is being sent in, not what format you would like the body to be sent in. Typically an HTTP client, server, or library handles all of this for you.
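To make that concrete, here is a minimal sketch of what "the client is compressing the payload" looks like in Node.js; the endpoint URL and payload are made-up placeholders, not anything from your actual setup:

    // Gzip the request body in client code and label it with Content-Encoding
    // so the server knows how the body is encoded. Nothing on "the network"
    // does this for you.
    const http = require('http');
    const zlib = require('zlib');

    const payload = Buffer.from(JSON.stringify({ big: 'document...' }));
    const body = zlib.gzipSync(payload); // compression happens here, in the client

    const req = http.request('http://localhost:3000/data', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Content-Encoding': 'gzip',   // "this body IS gzipped"
        'Content-Length': body.length,
      },
    }, res => res.pipe(process.stdout));

    req.end(body);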
In addition to this, do I need to enable gzip compression at the API level?
No. If the client is compressing it already, in the client code (perhaps using a standard HTTP library), there is nothing to be gained from compressing it again in your code.
Now, assuming that the above POST request succeeded in writing that large payload to the database: when I try to retrieve the data back to the client, which is again a POST request (that is how it has been designed; data is retrieved with a POST request), do I need to set the Accept-Encoding: gzip header on the POST request that retrieves the data?
If your client can handle gzip, then it should automatically set Accept-Encoding: gzip. Then, if your server can deliver it gzipped, it will. The Accept-Encoding header is an optional bit of information the client can send to let the server know of the client's capabilities. It is a hint ("Hey, I can accept gzipped responses, so use that if you want") and not an order that must be obeyed (i.e. it is not "You must send me this in gzip format").
How the file is actually stored on the server is entirely up to your app, but typically it would be stored uncompressed; that way it can be delivered to clients that don't handle gzip, those that do, or those that handle other compression formats like Brotli. The web server then typically compresses on the fly based on the Accept-Encoding header in the request.
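As a sketch of that "compress on the fly" part, assuming an Express app and the commonly used compression middleware (an assumption on my part, not something your question states you have):

    // npm install express compression
    const express = require('express');
    const compression = require('compression');

    const app = express();
    app.use(compression()); // gzips responses when the request says Accept-Encoding: gzip

    app.post('/retrieve', (req, res) => {
      // Your code sends the data uncompressed; the middleware negotiates
      // the encoding per request based on the Accept-Encoding header.
      res.json({ rows: [ /* ...data loaded from Cassandra... */ ] });
    });

    app.listen(3000);

With this in place the client does not have to do anything special beyond sending Accept-Encoding: gzip, which most HTTP libraries do by default.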
I am using Angular 5 with a Hapi Node.js backend. When I send the "cache-control: private, max-age=3600" header in the HTTP response, the response is cached correctly. The problem is that when I make the exact same request in a different tab, with a connection to a different database, the data cached in browser tab 1 is shared with browser tab 2 when it makes the same request. Is there a way for the cache to only be used per application instance using the cache-control header?
Browser Tab 1: same web app, same domain -> Database 1
Browser Tab 2: same web app, same domain -> Database 2
The user agent needs to somehow differentiate these cache entries. Probably your best option is to adjust the cache entry key (add a subdomain, path, or query parameter that identifies the database to the URI).
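A sketch of that first option, with a made-up db query parameter; the point is only that the database identifier becomes part of the URI and therefore part of the cache key:

    // Tab 1 uses databaseId = 'database-1', tab 2 uses 'database-2',
    // so the browser stores the two responses as separate cache entries.
    const databaseId = 'database-2'; // placeholder identifier

    fetch(`/api/report?db=${encodeURIComponent(databaseId)}`)
      .then(res => res.json())
      .then(data => console.log(data));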
You can also use a custom HTTP header (such as X-Database) paired with the Vary HTTP header, but in this case the user agent may store only a single response at a time, because it still uses the URI as the cache key and the Vary header for response validation only. Relevant excerpt from The State of Browser Caching, Revisited article by Mark Nottingham:
The one hiccup I saw was that all tested browser caches would only store one variant at a time; i.e., if your responses contain Vary: Foo and you get two requests, the first with Foo: 1 and the second with Foo: 2, the second response will evict the first from cache.
Whether that’s a problem depends on how you use Vary; if you want to reuse cached responses with old values, it might reduce efficiency. However, that doesn’t seem like a common use case;
For more information, check RFC 7234 Hypertext Transfer Protocol (HTTP/1.1): Caching and the Understanding The Vary Header article by Andrew Betts.
There are various ways web applications can be attacked using vectors in the HTTP request itself. Attacks like HTTP response splitting work by modifying the request headers themselves to exploit vulnerable applications. Apart from input validation and sanitization on the server side, the question came to my mind whether one can make the request headers immutable.
Is it possible to make them immutable?
Request headers are sent from the client to the server.
The browser itself constructs the HTTP request to send. A user with control over the client can of course change the HTTP request, including the headers, to anything they want.
Therefore, making them immutable is impossible. Remember, as a general rule, anything on the client-side is up for grabs.
You can prevent headers from being altered during transit, that is, while the HTTP request is on the wire from the client to the server. For this, a technology called TLS is used (it used to be called SSL, and much of the time it is still called that). This encrypts and authenticates the connection, so the request, headers included, cannot be modified on the way.
You can see whether TLS/SSL is being used because the browser address bar will display https:// at the very beginning of the URL.
While going through material on web application vulnerabilities, I came across some questions. Here are the details.
Tools like Burp and AppScan report vulnerabilities based on specific response headers from the application server. I understand that certain headers are complementary to one another, for example Content-Type and X-Content-Type-Options. If the latter is set to nosniff, the browser will not sniff the response body at all and will honor the value set in the Content-Type header. Similarly, the HTTP request can indicate the type of response it expects using the Accept family of headers. In such cases, do I need to consider the application vulnerable if X-Content-Type-Options is not set to nosniff while the request for that response has an Accept header restricting the media type to text/plain?
The other extension of this query: suppose the response has the following header fields set: Content-Type: text/plain and X-Content-Type-Options: nosniff. In my opinion, this is not a vulnerability, since the second header will prevent the browser from sniffing the response body. Is my understanding correct from a security perspective?
From an attacker's perspective, your entire HTTP request to the server is nothing more than a block of strings. The attacker is not likely to use a standard browser to perform attacks; it would be something more like a script that pumps out HTTP requests, or a proxy tool that intercepts the request and allows any form of modification. The nosniff flag won't matter to these types of request modification. Never trust the contents of HTTP requests, since they can be dictated by the user in their entirety.
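That said, if the scanner finding is simply that the response header is missing, adding it is cheap. Here is a sketch for an Express app (Express and the route are assumptions here; the helmet middleware is another common way to do the same thing):

    const express = require('express');
    const app = express();

    // Send X-Content-Type-Options on every response so browsers honor the
    // declared Content-Type instead of sniffing the body.
    app.use((req, res, next) => {
      res.set('X-Content-Type-Options', 'nosniff');
      next();
    });

    app.get('/notes', (req, res) => {
      res.type('text/plain').send('just plain text');
    });

    app.listen(3000);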
I'm a bit naive about how to send cookie data between servers. I am aware that the Set-Cookie header is used in the HTTP response.
Right now, I am sending a header between the servers for authorization purposes, so that one server is authorized with the other. But I am wondering if there is some advantage to using cookies, and whether cookies act differently from headers in this case. From what I have read, cookies and headers are one and the same for most purposes?
Using two Node.js servers, one being the web server, the other being the API server, is there any reason why sending a cookie vs a regular non-cookie header is better?
The "cookie" represents shared state between the client and the server. As was mentioned, the way to set cookie values is to use the Set-Cookie header, and the way to communicate values that have already been set is to use the Cookie header.
Cookies are typically associated with web browsers, as a tool to track and identify returning users. I've never seen cookies used for server-to-server communication.
The Authorization header is good for passing encoded or encrypted strings.
So for example you might see:
Authorization: "Basic dXNlcm5hbWU6cGFzc3dvcmQ="
The value in this case is the base64-encoded string "username:password".
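For example, in Node.js that value can be produced (and decoded) like this; the credentials are obviously placeholders:

    // Build a Basic auth header value: base64("username:password").
    const value = Buffer.from('username:password').toString('base64');
    console.log(`Authorization: Basic ${value}`); // Basic dXNlcm5hbWU6cGFzc3dvcmQ=

    // The receiving server simply reverses it:
    const decoded = Buffer.from(value, 'base64').toString('utf8'); // "username:password"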
I wouldn't worry too much about which header you use. You can make up your own X-My-Awesome-Auth-Header; it's conventional to prefix a custom header with an "X-".
An important thing to consider is what the header value contains. If you are communicating over plain HTTP, make sure you encrypt it.
Also consider using open standards for passing signed or encrypted data, such as JWT.
Edit: To answer your question, is there any reason why sending a cookie is better? For server-to-server communication it is actually much worse to use cookies, because the servers then have to maintain state with each other: when A talks to B, A has to remember what B said when they talk again. You typically want server-to-server communication to be stateless, meaning you can throw away data pertaining to authorization and permissions after each transaction; each transaction goes through the full authorization and permission resolution process. It is much easier to code, and there is no penalty in terms of security as long as you are protected via encryption.
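As a sketch of that stateless approach, assuming the jsonwebtoken npm package and a shared secret (both assumptions, not part of your setup): the web server signs a short-lived token, the API server verifies it, and neither side keeps any session state between calls.

    // npm install jsonwebtoken
    const jwt = require('jsonwebtoken');

    const secret = 'shared-secret-kept-out-of-source-control'; // placeholder

    // Server A (the web server) creates a short-lived token for each call:
    const token = jwt.sign({ service: 'web-server', scope: 'read' }, secret, { expiresIn: '5m' });
    // ...and sends it, e.g. in an Authorization: Bearer <token> header.

    // Server B (the API server) verifies it without remembering anything:
    try {
      const claims = jwt.verify(token, secret);
      console.log('caller is', claims.service);
    } catch (err) {
      console.error('rejected:', err.message);
    }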
Yes, "cookies" is just jargon for the Cookie: HTTP header and corresponding Set-Cookie: header. So they are ultimately the same basic thing. Many APIs use the slightly more semantic Authorization: header, so that would be a good place to start.
I am new to the network security area, and I am now designing a REST web API.
The question is: can HTTP responses and requests be eavesdropped on?
If it is impossible, then I don't need to encrypt the response JSON or the request parameters.
It is easy to eavesdrop on an HTTP request, and even to tamper with and modify it before it reaches the server.
HTTP sends and receives data in clear text; use HTTPS (TLS/SSL) if you want it to be encrypted.
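A minimal sketch of the HTTPS side in Node.js, assuming you already have a key and certificate (the file names are placeholders):

    // Serve the API over TLS so requests and responses are encrypted in transit.
    const https = require('https');
    const fs = require('fs');

    const options = {
      key: fs.readFileSync('key.pem'),   // placeholder paths
      cert: fs.readFileSync('cert.pem'),
    };

    https.createServer(options, (req, res) => {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ ok: true }));
    }).listen(8443);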