I have long been aware of this compression, but was curious whether anyone other than Google implements it. Then I found the following link: https://engineering.linkedin.com/shared-dictionary-compression-http-linkedin
Wow, LinkedIn implemented it too, so it looks like it is worth the effort for high-volume traffic. So I used Fiddler to investigate the headers, which are well defined for this compression (dictionary negotiation, etc.). Side note: with the latest Chrome and the latest Fiddler, Chrome reports sdch in "Accept-Encoding" alongside the rest - gzip, deflate.
Guess what? I don't see it working, neither for Google (search queries) nor for LinkedIn. Nada! No dictionary negotiation, no dictionary download, no server reporting that it has a dictionary for the browser to download. So what happened? Is it dead? Abandoned by Google and LinkedIn? Did it prove to be inefficient?
Short answer - it is dead for now
I need to protect my videos from being downloaded with "Internet Download Manager" (IDM/IDMan).
I have used:
1. an RTMP stream
2. signed URLs
3. an expiration date on the signed URL (60 seconds)
4. the player (JWPlayer) set to *autostart*
AND I need the signed URL to become invalid once it has been used.
With this solution, IDM would get a URL that has already been used and would therefore be blocked.
Is there any way to configure CloudFront to accept a signed URL only once?
Or is there any solution that can protect videos from being downloaded and re-used on other web sites?
Please can you help?
Thanks in advance
CloudFront does not support serving a URL only once, and it likely never will. The reason is that the only way to do this would be for all of its edge servers to share that information - they currently do not share state, which makes scaling much easier and performance much better.
Unfortunately, if you're looking for fine-grained control over how your videos are played, you're going to need more fine-grained code, which you can't run in CloudFront - you'll need to host the content directly on your own server.
Idea 1: Limit by count
You can implement the idea that you have: once the URL has been served once, you no longer serve that file again (sketched below).
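A minimal sketch of that idea, assuming the files are served from your own Node.js/Express server rather than CloudFront (the route names, file locations, and in-memory stores are illustrative; with more than one app server you would need a shared store such as Redis so every server sees which tokens are spent):

// Issue single-use, short-lived URLs and refuse to serve a token twice.
const express = require('express');
const crypto = require('crypto');
const app = express();

const issued = new Map(); // token -> { file, expiresAt } (illustrative in-memory store)

app.get('/issue/:file', (req, res) => {
  const token = crypto.randomBytes(16).toString('hex');
  issued.set(token, { file: req.params.file, expiresAt: Date.now() + 60 * 1000 });
  res.json({ url: '/video/' + token });
});

app.get('/video/:token', (req, res) => {
  const entry = issued.get(req.params.token);
  if (!entry || entry.expiresAt < Date.now()) {
    return res.status(403).send('Link expired or already used');
  }
  issued.delete(req.params.token); // one shot: a second request with this token is rejected
  res.sendFile(entry.file, { root: __dirname + '/videos' });
});

app.listen(3000);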
Idea 2: Limit by referrer
You can look at the Referer header, and if the request comes from your website, allow the content to be downloaded; otherwise, reject it. Note: this can be spoofed, since a user can set the Referer header manually.
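A sketch of that check as Express middleware (the allowed origin and route are illustrative, and as noted the header can be spoofed or simply left out):

// Allow the download only when the Referer header points at our own site.
function checkReferrer(req, res, next) {
  const referrer = req.get('Referer') || '';
  if (referrer.indexOf('https://www.example.com/') === 0) {
    return next(); // looks like the request came from one of our pages
  }
  res.status(403).send('Hotlinking not allowed');
}

// e.g. app.get('/video/:token', checkReferrer, serveVideo);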
Preventing a video from being downloaded and later uploaded elsewhere is technically impossible. Once you let someone display the video, there is no way to stop them from recording those bits and replaying them later. There are mitigations, such as disabling right-click or using an obscure proprietary format, but I'm not familiar with DRM techniques.
I'd like to try pack200 compression for a Java applet. I understand that the browser must support this for it to work, and according to the documentation it does if it sends "Accept-Encoding: pack200-gzip" to the server. However, my browsers (I tried a couple) won't send that, only "Accept-Encoding: gzip, deflate". Since I assumed the JRE is what enables the browser to use this encoding, I've tried installing several Java runtimes from 1.6.0.34 to the latest 1.7, but with no success. What am I missing here? Is there something I've misunderstood?
Googling this does not give much help, unfortunately - I've tried!
Edit: OK, I found out what I misunderstood. I was using an HTTP analyzer to see what the browser was sending to the server, but of course it isn't the browser sending this particular request - it's the JVM. Looking at the requests on the server, I see the correct Accept-Encoding being sent.
You can add Java Web Start (JWS) support for your applet.
Both a JNLP application and a JNLP applet can be wrapped with pack200 and unwrapped on the client machine; see the jnlp descriptor documentation for more details.
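The negotiation itself is plain server-side content negotiation: if the incoming Accept-Encoding contains pack200-gzip, serve the .pack.gz file with a matching Content-Encoding header, otherwise serve the plain jar. A rough sketch, shown with Node.js only for brevity (file names are illustrative; any servlet or web server rule can do the same):

const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  const accept = req.headers['accept-encoding'] || '';
  if (req.url === '/applet.jar' && accept.indexOf('pack200-gzip') !== -1) {
    // The JVM advertised pack200 support, so send the pre-packed archive.
    res.writeHead(200, {
      'Content-Type': 'application/x-java-archive',
      'Content-Encoding': 'pack200-gzip'
    });
    fs.createReadStream('applet.jar.pack.gz').pipe(res);
  } else if (req.url === '/applet.jar') {
    res.writeHead(200, { 'Content-Type': 'application/x-java-archive' });
    fs.createReadStream('applet.jar').pipe(res);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);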
We're working on a website. Our client wants to check the website daily, but they're facing a problem: whenever we make a change to the website, they have to clear their browser cache.
So I added the following header to my server configuration
Cache-Control: no-cache
As far as I can see, Firefox is receiving this header and I'm pretty sure it is obeying it.
My question is: is this "Cache-Control: no-cache" guaranteed to work, and does it work across all browsers (including the various IEs)?
I find it handy to use a "useless" version number in the requests. For example, instead of requesting script.js, request script.js?v=1.0.
If you are generating your pages dynamically (PHP, etc.), you can keep the version number in a variable and only have to update it in one place whenever you deploy. If you want the content never to be cached, just use the output of time() as your version number.
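A small sketch of the version-string idea (shown here in JavaScript; the same one-liner works in PHP or any other templating layer, and APP_VERSION is an illustrative name):

// Keep the version in one place and append it to every asset URL.
const APP_VERSION = '1.0';

function scriptTag(file) {
  return '<script src="/' + file + '?v=' + APP_VERSION + '"></script>';
}

// While the client is reviewing daily changes, a timestamp defeats caching entirely:
function uncachedScriptTag(file) {
  return '<script src="/' + file + '?v=' + Date.now() + '"></script>';
}

// scriptTag('script.js') -> <script src="/script.js?v=1.0"></script>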
EDIT: Have you tried asking your client to change their browser caching settings? That way you could bypass the problem entirely.
A couple of days back, I came across news about how hackers stole 200,000+ Citi accounts just by changing numbers in the URL. It seems the developers compromised security for the sake of being RESTful and also didn't bother to use a session id in place of the user id. I'm also working on a product where security is the main concern, so I'm wondering: should we avoid REST and use POST requests everywhere in such a case, or am I missing something important about REST?
Don't blame the model for a poor implementation, instead learn from the mistakes of others.
That's my (brief) opinion, but I'm sure better answers will be added :)
(P.S. - using POST doesn't increase security in any way)
The kind of security issue mentioned in the question has largely nothing to do with REST, SOAP, or the web. It has to do with how one designs the application.
Here is another common example. Say, there is a screen in an e-commerce application to show the details of an order. For a logged in user, the URL may look like this:
http://example.com/site/orders?orderId=1234
Assuming that order ids are globally unique (across all users of the system), one could easily replace that orderId with some other valid order id not belonging to the user and see its details. The simple way to protect against this is to make sure that the underlying query (SQL, etc.) also includes the user's id as a conjunction (an AND in the WHERE clause for SQL).
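A minimal sketch of that conjunction using a parameterized query (the db client, table, and column names are illustrative; the important part is that the user id comes from the authenticated session, never from the URL):

function getOrder(db, orderId, sessionUserId, callback) {
  db.query(
    'SELECT * FROM orders WHERE order_id = ? AND user_id = ?',
    [orderId, sessionUserId], // tampering with orderId now simply returns no rows
    callback
  );
}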
In this specific case, a good application design would have ensured that the account id coming in the URL is verified with the associated authenticated session.
The same data gets transmitted across the wire whether you use GET or POST. Here is a sample GET request that is the result of submitting a form [took out User-Agent value because it was long]:
GET /Testing/?foo=bar&submit=submit HTTP/1.1
Host: localhost
User-Agent: ...
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Referer: http://localhost/Testing/demoform.html
Now here's the same request as a POST:
POST /Testing/ HTTP/1.1
Host: localhost
User-Agent: ...
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Referer: http://localhost/Testing/demoform.html
Content-Type: application/x-www-form-urlencoded
Content-Length: 21
foo=bar&submit=submit
Note that this is what the server sees when you submit the request or what a man-in-the-middle attacker might see while intercepting the request.
In the GET we see that foo = bar and submit = submit on the first line of the request. In the POST we have to look at the last line to see that...hey! foo = bar and submit = submit. Same thing.
At the browser level, the difference manifests itself in the address bar: the first request shows the ?foo=bar&submit=submit string and the second does not. A malicious person who wants to intercept this data doesn't care what appears in the browser's address bar. The main danger is that anyone can copy a URL out of the address bar and pass it around, thereby leaking the parameters; in fact, it is very common for people to do that.
The only way to keep our malicious person from seeing either of these types of requests is to encrypt everything before it is sent to the server. The server provides a public key, which is used via the HTTPS protocol (SSL/TLS): the browser uses the key to encrypt the request, and the server decrypts it with its private key. There is still an issue on the client side of whether the key it received actually belongs to the people running the server; this has to be verified via some out-of-band trust system, such as third-party verification or fingerprint comparison.
All this is completely orthogonal to REST. Regardless of how you do it, if you are communicating information across the wire with HTTP you are going to have this issue and you're going to need to encrypt the requests/responses to prevent malicious people from seeing them.
POST requests are no safer than RESTful requests, which are no safer than GET requests.
There is a range of measures to increase the security of your application; they can't all be listed here. Wikipedia has a good list of common attacks and methods to prevent them.
Here's an example: GET requests shouldn't be used for critical actions such as withdrawing money from a bank account, because if you're logged into your bank account, someone can embed a rogue image with its source set to http://yourbank.com/actions/withdraw/300USD, and when the URL is loaded it will withdraw money from your account. This is easily countered by using a POST request.
And then we have some further security considerations to deal with for this POST request, because it, too, can be forged.
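One common extra measure is a per-session CSRF token that the form must echo back; a forged cross-site POST cannot read the token, so it cannot supply it. A minimal sketch (names are illustrative; libraries such as csurf package this up for Express):

const crypto = require('crypto');

function issueCsrfToken(session) {
  session.csrfToken = crypto.randomBytes(16).toString('hex');
  return session.csrfToken; // embed this in a hidden form field
}

function verifyCsrfToken(session, submittedToken) {
  return Boolean(submittedToken) && submittedToken === session.csrfToken;
}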
Using POST instead of GET as a security measure is simply "security through obscurity". In reality it is no safer, as anyone with a small amount of knowledge of HTTP can forge a POST request.
Using session ids instead of user ids is also just another way of obscuring the security hole; it doesn't really fix the problem, as session ids can be hijacked too.
The fact that in this particular case the security hole was made extremely easy to exploit by changing the URL does not make REST the cause of the issue.
As others have mentioned, whenever you need to secure REST services, HTTPS is the place to start looking.
I would consider REST security and general web application security to be very similar concerns.
The problem stated in the question is a textbook error - something an experienced web developer would not make. So if you understand web app security, you'll understand REST security as well.
So add an experienced web developer to your team if you don't have one - they'll help you with REST.
If security is the main concern, exclusively use https:// and POST, never http:// and GET.
This will avoid the attack you describe as well as many others such as session hijacking, and simple eavesdropping on the line.
(... and abstain from making the mistake of authenticating over https:// and then switching to http://, which was the de facto standard until a few months ago, when someone published a tool that did the obvious)
I'm in the process of hacking together a web app that uses extensive screen scraping in node.js. I feel like I'm fighting against the current at every turn. There must be an easier way to do this. Most notably, two things are irritating:
Cookie propagation. I can pull the 'set-cookie' array out of the response headers, but performing string operations to parse the cookies out of the array feels extremely hackish.
Redirect following. I want each request to follow through redirects when a 302 status code is returned.
I came across two things that looked useful but that I couldn't use in the end:
http://zombie.labnotes.org/, but it doesn't have HTTPS support, so I can't use it.
http://www.phantomjs.org/, but I couldn't use it because it doesn't (appear to) integrate with node.js. It's also pretty heavyweight for what I'm doing.
Are there any JavaScript screenscraping-esque libraries which propagate cookies, follow redirects, and support HTTPS? Any pointers on how to make this easier?
I actually have a scraper library now: https://github.com/mikeal/spider. It's quite nice - you can use jQuery and routes.
Feedback is welcome :)
You may want to check out https://github.com/mikeal/request from mikeal. I just spoke to him in the chatroom and he says it does not handle cookies at the moment, but you can write a submodule to handle them for you in the meantime.
As for redirects, it handles them beautifully :)
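A rough sketch of that approach with mikeal/request: redirects are followed for you, and cookies can be carried over by hand until the module grows support for them (the URL and the very naive cookie handling are illustrative):

const request = require('request');
const cookies = [];

request(
  { uri: 'https://example.com/login', headers: { Cookie: cookies.join('; ') } },
  (err, response, body) => {
    if (err) throw err;
    // Remember whatever the server set so the next request can send it back.
    (response.headers['set-cookie'] || []).forEach((c) => {
      cookies.push(c.split(';')[0]); // keep just "name=value"
    });
    // Any 302s have already been followed; body is the final page.
  }
);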
It turns out someone made a phantomjs module for node.js:
https://github.com/sgentle/phantomjs-node
While PhantomJS is fairly heavy, it also supports SSL, cookies, and everything else a typical browser supports (since it is a WebKit browser, after all).
Give it a shot, it may be exactly what you are looking for.