I'd like to try pack200 compression for a Java applet. I understand that the browser must support this for it to work, and according to the documentation it does if it sends "Accept-Encoding: pack200-gzip" to the server. However, my browsers (I tried a couple) won't send that, only "Accept-Encoding: gzip, deflate". Since I assumed the JRE is what enables the browser to use this new encoding, I've tried installing several JREs from 1.6.0.34 up to the latest 1.7, but with no success. What am I missing here? Is there something I've misunderstood?
Googling this does not give much help, unfortunately; I've tried!
Edit: OK, I found out what I misunderstood. I was using an HTTP analyzer to see what the browser was sending to the server, but of course it's not the browser sending this particular request, it's the JVM. Looking at the requests on the server, I see the correct Accept-Encoding being sent.
You can add Java Web Start (JWS) support to your applet.
Both JNLP applications and JNLP applets can be wrapped with pack200 and unwrapped on the client machine; see the JNLP descriptor documentation for more details.
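As a sketch, pack200 is typically switched on from the descriptor with the jnlp.packEnabled property (assuming a JNLP client recent enough to honor it; the file names and codebase below are placeholders):

```xml
<jnlp spec="1.0+" codebase="http://example.com/applet" href="applet.jnlp">
  <resources>
    <!-- Ask the JNLP client to request applet.jar.pack.gz instead of applet.jar -->
    <property name="jnlp.packEnabled" value="true"/>
    <jar href="applet.jar" main="true"/>
  </resources>
</jnlp>
```

The server then only needs to have the .pack.gz files sitting next to the plain JARs.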
I had long been aware of this compression, but was curious whether anyone other than Google had implemented it. Then I found the following link: https://engineering.linkedin.com/shared-dictionary-compression-http-linkedin
Wow, LinkedIn implemented it too, so it looks like it's worth the effort for high-volume network traffic. So I went to Fiddler to investigate the headers, which are well defined for this compression (dictionary negotiation, etc.). Side note: with the latest Chrome and the latest Fiddler, Chrome reports "sdch" in Accept-Encoding alongside the rest (gzip, deflate).
Guess what? I don't see it working, neither for Google (search queries) nor for LinkedIn. Nada! No dictionary negotiation, no download of a dictionary, no server reporting that it has a dictionary for the browser to download. So what happened? Is it dead? Abandoned by Google and LinkedIn? Did it prove to be inefficient?
Short answer: it is dead for now.
I started Fiddler, and when I tried to access google.com I got the error below.
It was able to detect that the request was coming from an untrusted tool, or something like that. Can anyone please explain how they are doing it, or give any hint about it, so that we could apply it to our own web sites?
Once I closed Fiddler, it started working fine again.
It's all explained in the "what does it mean" section: Fiddler has sent your browser its own SSL certificate to be able to intercept the request (it more or less decrypts it using its own certificate, then re-encrypts it using Google's).
Chrome comes preloaded with public keys that it expects to see in the certificate chain for web sites, including of course google.* ones, so it can detect that Fiddler's certificate is not one coming from Google.
See http://blog.stalkr.net/2011/08/hsts-preloading-public-key-pinning-and.html
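For the curious, a pin in this scheme is just a digest of the site's public key. A rough sketch of the computation (the input bytes below are placeholders; a real pin is taken over the DER-encoded SubjectPublicKeyInfo extracted from the server's certificate, e.g. with OpenSSL):

```python
import base64
import hashlib

def spki_pin(spki_der):
    """Base64-encoded SHA-256 digest of a DER-encoded SubjectPublicKeyInfo,
    the form of pin used by HPKP and by Chrome's preloaded pin list."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Placeholder input -- substitute the real SPKI bytes before
# computing a pin you intend to rely on.
print(spki_pin(b"\x30\x82placeholder"))
```

Because Fiddler's interception certificate carries Fiddler's own key, its pin can never match the preloaded one, which is exactly what Chrome flags.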
We're working on a website. Our client wants to check the website daily, but they're facing a problem: whenever we make a change to the website, they have to clear their browser cache.
So I added the following header to my server configuration
Cache-Control: no-cache
As far as I can see, Firefox is receiving this header, and I'm pretty sure it is obeying it.
My question is: is this "Cache-Control: no-cache" guaranteed, and does it work across all browsers (including the IEs)?
I find it's handy to use a "useless" version number in the requests. For example, instead of requesting script.js, request script.js?v=1.0
If you are generating your pages dynamically (PHP, etc) you can just keep the version number in a variable and only have to update it in one place whenever you update. If you want the content never to be cached, just use the output of time() as your version number.
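That scheme is easy to automate. A minimal sketch (the function name is mine; using time() as the version defeats caching entirely, as suggested above):

```python
import time

def versioned_url(path, version=None):
    """Append a cache-busting query parameter. Browsers treat each
    distinct URL as a separate cache entry, so bumping the version
    forces a fresh download without any server-side header changes."""
    v = version if version is not None else str(int(time.time()))
    return "%s?v=%s" % (path, v)

print(versioned_url("script.js", "1.0"))  # script.js?v=1.0
```

Keep the version string in one place so a single bump invalidates every reference at once.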
EDIT: Have you tried asking your client to change their browser caching settings? That way you could bypass the problem entirely.
When I connect to my site with Mathematica (Import["mysite", "Data"]) and look at my Apache log, I see:
99.XXX.XXX.XXX - - [22/May/2011:19:36:28 +0200] "GET / HTTP/1.1" 200 6268 "-" "Mathematica/8.0.1.0.0 PM/1.3.1"
Could I set it to be something like this (as when I connect with a real browser):
99.XXX.XXX.XXX - - [22/May/2011:19:46:17 +0200] "GET /favicon.ico HTTP/1.1" 404 183 "-" "Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.68 Safari/534.24"
As far as I know you can't change the user agent string in Mathematica. I once used a proxy server (CNTLM) to get Mathematica to talk with a firewall which used NTLM authentication (which Mathematica doesn't support). CNTLM also allows you to set the user agent string.
You can find it at http://cntlm.sourceforge.net/. Basically, you set up this proxy server to run on your own machine and set its port number and IP address in Mathematica's network settings. The proxy adds the user agent header and handles the NTLM authentication. I'm not sure how it works if you don't have an NTLM firewall. There are other free proxies around that might work for you.
EDIT: The Squid HTTP proxy seems to do what you want. It has the request_header_replace configuration directive, which allows you to change the contents of request headers.
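For reference, a minimal squid.conf sketch (assuming Squid 3.5+, where the directives are named as below; per the Squid documentation, request_header_replace applies only to headers that have been denied with request_header_access):

```
# Strip the original User-Agent, then substitute the browser string
request_header_access User-Agent deny all
request_header_replace User-Agent Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.68 Safari/534.24
```

Point Mathematica's proxy settings at the Squid instance and every outgoing request gets the replacement header.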
Here is a way to use the Apache HTTP client through JLink:
Needs["JLink`"]
ClearAll@urlString
urlString[userAgent_String, url_String] :=
  JavaBlock@Module[{http, get}
  , http = JavaNew["org.apache.commons.httpclient.HttpClient"]
  ; http@getParams[]@setParameter["http.useragent", MakeJavaObject@userAgent]
  ; get = JavaNew["org.apache.commons.httpclient.methods.GetMethod", url]
  ; http@executeMethod[get]
  ; get@getResponseBodyAsString[]
  ]
You can use this function as follows:
$userAgent =
"Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.68 Safari/534.24";
urlString[$userAgent, "http://www.htttools.com:8080/"]
You can feed the result to ImportString if desired:
ImportString[urlString[$userAgent, "mysite"], "Data"]
A streaming approach would be possible using more elaborate code, but the string-based approach taken above is probably good enough unless the target web resource is very large.
I tried this code in Mathematica 7 and 8, and I expect that it works in v6 as well. Beware that there is no guarantee that Mathematica will always include the Apache HTTP client in future releases.
How It Works
Despite being expressed in Mathematica, the solution is essentially implemented in Java. Mathematica ships with a Java runtime environment built-in and the bridge between Mathematica and Java is a component called JLink.
As is typical of such cross-technology solutions, there is a fair amount of complexity even when there is not much code. It is beyond the scope of this answer to discuss how the code works in detail, but a few items will be emphasized as suggestions for further reading.
The code uses the Apache HTTP client. This Java library was chosen because it ships as an unadvertised part of the standard Mathematica distribution -- and it also happens to be the one that Import appears to use internally.
The whole body of urlString is wrapped in JavaBlock. This ensures that any Java objects that are created over the course of operation are properly released by co-ordinating the activities of the Java and Mathematica memory managers.
JavaNew is used to create the relevant Apache HTTP client objects, HttpClient and GetMethod. Java expressions like http.getParams() are expressed in JLink as http@getParams[]. The Java classes and methods are documented in the Apache HTTP client documentation.
The use of MakeJavaObject is somewhat unusual. It is required in this case as a Mathematica string is being passed as an argument where a Java Object is expected. If a Java String was expected, JLink would automatically create one. But JLink is unable to make this inference when Object is expected, so MakeJavaObject is used to give JLink a hint.
What about URLTools?
Incidentally, the first thing I tried to answer this question was to use Utilities`URLTools`FetchURL. It looked very promising since it takes an option called "RequestHeaderFields". Alas, this did not work because the present implementation of that function uses that option only for HTTP POST verbs -- not GET. Perhaps some future version of Mathematica will support the option for GET.
I'm extremely lazy and curl is more flexible in less code than J/Link, without the object management issues. This is an example of posting data (userPass) to a url and retrieving the result in JSON format.
Import["!curl -A Mozilla/4.0 --data " <> userPass <> " " <> url, "JSON"]
I isolate this kind of thing in an impure function (unless it is pure) so I know it's tainted, but any web access is that way.
Because I use a pipe, Mathematica cannot deduce the file type. ref/Import mentions that « Import["!prog","format"] imports data from a pipe. » and « The format of a file is by default deduced from the file extension in its name, or by FileFormat from its contents. » As a result, it is necessary to specify "CSV", "JSON", etc. as the format parameter. You'll see some strange results otherwise.
curl is a command line tool for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, TELNET and TFTP. curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, kerberos...), file transfer resume, proxy tunneling and a busload of other useful tricks.
From the curl and libcurl welcome page.
Mathematica can route all of its internet connectivity through a user-specified proxy server. If, as Sjoerd suggested, setting one up is too much work, you might want to consider writing the call in C/C++ and then calling that from Mathematica. I don't doubt there are plenty of C libraries that do what you want in a few lines of code.
For calling C code within Mathematica, see the C Language Interface documentation
Mathematica 9 has the new URLFetch function. It has the option UserAgent.
You can also use J/Link to make your web requests or call curl or wget on the command line.
Is a POST secure enough to send login credentials over?
Or is an SSL connection a must?
SSL is a must.
The POST method is no more secure than GET, as it is also sent unencrypted over the network.
SSL will cover the whole HTTP communication and encrypt the HTTP data being transmitted between the client and the server.
<shameless plug>I have a blog post that details what an HTTP request looks like and how a GET request compares to a POST request. For brevity's sake, GET:
GET /?page=123 HTTP/1.1 CRLF
Host: jasonmbaker.wordpress.com CRLF
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1 CRLF
Connection: close CRLF
and POST:
POST / HTTP/1.1 CRLF
Host: jasonmbaker.wordpress.com CRLF
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1 CRLF
Connection: close CRLF
CRLF
page=123
(CRLF stands for carriage return + line feed, i.e. a newline)
As you can see, the only differences from the standpoint of how the request is formed* are that a POST request uses the word POST and that the form data is sent in the body of the request rather than in the URI. Thus, using HTTP POST by itself is security by obscurity. If you want to protect data, you should use SSL.
* Note that there are other differences.
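To make that concrete, here is a sketch that assembles both raw requests from the example above; without SSL, these exact bytes are what a sniffer sees, and the parameter is equally readable in each:

```python
host = "jasonmbaker.wordpress.com"

# GET: the parameter travels in the request line
get_request = ("GET /?page=123 HTTP/1.1\r\n"
               "Host: " + host + "\r\n"
               "Connection: close\r\n"
               "\r\n")

# POST: the same parameter travels in the body -- still plain text
body = "page=123"
post_request = ("POST / HTTP/1.1\r\n"
                "Host: " + host + "\r\n"
                "Content-Length: " + str(len(body)) + "\r\n"
                "Connection: close\r\n"
                "\r\n" + body)

print("page=123" in get_request, "page=123" in post_request)  # True True
```

The only thing POST changes is where in the request the value sits, not whether it is visible.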
That depends on your circumstances: how much would the interception of the credentials cost somebody?
If it's just a login to a software Q&A site, then SSL might not be necessary; if it's an online banking site or you store credit card data, then it is.
This is a business decision, not a technical one.
HTTP POST is not encrypted; it can be intercepted by a network sniffer, by a proxy, or leaked in the logs of a server with a customised logging level. Yes, POST is better than GET because POST data is not usually logged by a proxy or server, but it is not secure.
To secure a password or other confidential data you must use SSL or encrypt the data before you POST. Another option would be to use Digest Authentication with the browser (see RFC 2617). Remember that (home-grown) encryption is not enough to prevent replay attacks; you must concatenate a nonce and other data (e.g. the realm) before encrypting (see RFC 2617 for how it is done in Digest Auth).
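The Digest computation itself is small. This sketch follows RFC 2617 with qop="auth", using the worked example values from the RFC; note that the password itself never crosses the wire, only a hash bound to the server-supplied nonce:

```python
import hashlib

def _md5(s):
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def digest_response(user, realm, password, method, uri,
                    nonce, nc, cnonce, qop="auth"):
    """RFC 2617 Digest Authentication response for qop="auth"."""
    ha1 = _md5("%s:%s:%s" % (user, realm, password))
    ha2 = _md5("%s:%s" % (method, uri))
    return _md5("%s:%s:%s:%s:%s:%s" % (ha1, nonce, nc, cnonce, qop, ha2))

# Worked example from RFC 2617, section 3.5:
print(digest_response("Mufasa", "testrealm@host.com", "Circle Of Life",
                      "GET", "/dir/index.html",
                      "dcd98b7102dd2f0e8b11d0f600bfb0c093",
                      "00000001", "0a4f113b"))
# -> 6629fae49393a05397450978507c4ef1
```

Even so, Digest only protects the password; the rest of the request and response still travel in the clear, which is why SSL remains the usual answer.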
SSL is a must :)
HTTP POST is transmitted in plain text. As an example, download and use Fiddler to watch HTTP traffic. You can easily see the entire POST in there (or via a network traffic monitor like Wireshark).
It is not secure. A POST can be sniffed just as easily as a GET.
No... POST is not secure enough at all. SSL is a MUST.
POST only keeps the parameters out of the query string. Those parameters can still be picked up by anybody looking at the traffic between the browser and the endpoint.
No, use SSL.
With POST the values are still submitted as plain text unless SSL is used.
The most secure way is to not send credentials at all.
If you use Digest Authentication, then SSL is NOT a must.
(NB: I am not implying that Digest Authentication over HTTP is always more secure than using POST over HTTPS).
POST is plaintext.
A secure connection is a must.
That's why it's called a secure connection.
A POST request alone is not secure because all the data is "traveling" in plain text.
You need SSL, to make it secure.
The only difference between HTTP GET and HTTP POST is the manner in which the data is encoded. In both cases it is sent as plain-text.
In order to provide any sort of security for login credentials, HTTPS is a must.
You do not need an expensive certificate to provide HTTPS either. There are many providers that will issue very basic certificates for about $20USD. The more expensive ones include identity verification which is more of a concern for e-commerce sites.
Please see this great article:
Protect Against Malicious POST Requests
https://perishablepress.com/protect-post-requests/
POST data is sent in plain text if you are using an unencrypted HTTP connection.
Whether this is secure enough depends on your usage (hint: it's not).
If the server, the client machine, and ALL MACHINES BETWEEN THEM are part of a controlled, fully trusted network, this may be OK.
Outside of these very limited circumstances (and sometimes even within them) plain text authentication is asking for trouble.