Defining same GET twice - browser

Is there a defined protocol as to what browsers should do in this case:
http://example.com/?id=1&id=2
Can I safely assume all browsers will use the same parameter, or is this not standardized? From testing, it appears the last-defined parameter (id=2) wins.

The browser sends the query string verbatim; what happens to duplicate parameters is decided server-side and is not standardized. Many frameworks take the last value (so in your case id=2), but others, such as Java servlets' getParameter(), return the first, and most expose all values as a list if you ask for them.
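Since the ambiguity lives entirely on the server side, it's easy to see that both values survive transit. A quick sketch using Node's standard URLSearchParams (any server-side parser sees the same raw string):

```javascript
// The browser sends the query string verbatim; what the application
// sees depends on how its framework parses duplicate keys.
const params = new URLSearchParams("id=1&id=2");

console.log(params.get("id"));    // "1"  - first value only
console.log(params.getAll("id")); // [ "1", "2" ] - both values preserved
```

Frameworks that "take the last parameter" are simply choosing one of these values for you; the data itself is never lost on the wire.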

ServiceStack and concurrency

We're evaluating ServiceStack and have found that all example hosts only allow a single request to be processed at a time. If you add a Debug.WriteLine and Thread.Sleep to any entry point, this is easy to see.
I'm assuming we're either missing some setting or are missing a pretty big point with how ServiceStack should be used.
Thanks,
Ross
This actually was a mistake in how we were testing ServiceStack. We were using the same browser in separate tabs/windows, and the browser itself serializes concurrent requests to the same URL. Once we used two different browsers (e.g. IE and Chrome), we were able to witness ServiceStack handling two requests at the same time.

Strategy for spreading image downloads across domains?

I am working on a PHP wrapper for the Google Image Charts API. The service supports serving images from multiple domains, such as:
http://chart.googleapis.com
http://0.chart.googleapis.com
http://1.chart.googleapis.com
...
The numeric range is 0-9, so 11 domains are available in total (the bare hostname plus ten numbered subdomains).
I want to automatically track the count of images generated and rotate domains for best performance in the browser. However, Google itself only vaguely recommends:
...you should only need this if you're loading perhaps five or more charts on a page.
What should my strategy be? Should I just change the domain every N images, and what would a good N value be in the context of modern browsers?
Is there a point where it would make sense to reuse a domain rather than introduce a new one (to save a DNS lookup)?
I don't have a specific number of images in mind - since this is open-source, publicly available code, I would like to implement a generic solution rather than optimize for my specific needs.
Considerations:
Is one host faster than another?
Does a browser limit connection per host?
How long does it take for the browser to resolve a DNS name?
As you want to make this a component, I'd suggest designing it to support multiple strategies for choosing the host name. That will not only let you offer different strategies but also let you test them against each other.
You might also want to add support in the future for the JavaScript libraries that can render the data on the page, so staying modular pays off anyway.
Variants:
1. Pick one domain name and stick with it, hardcoded: http://chart.googleapis.com
2. Pick one domain name out of many and stick with it: e.g. http://#.chart.googleapis.com
3. Like 2, but start to rotate the name after some number of images.
4. Like 3, but add a chunk of JavaScript at the end of the page that resolves the DNS of the hostnames not used so far in the background, so they are cached for the next request.
Then you can make your library configurable, so the values aren't hard-coded in the code; you only ship a default configuration.
Then you can expose the strategy as configuration, so whoever integrates the component can decide which one to use.
Then you can let the component load its configuration from outside: if you create a WordPress plugin, for example, the plugin can store the configuration and offer the user an admin interface to change the settings.
As the configuration already includes which strategy to follow, you have handed responsibility entirely to the consumer of the component, and you can more easily support different usage scenarios for different websites or applications.
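The pluggable-strategy idea above can be sketched in a few lines. This is only an illustrative shape (the function and strategy names are made up, not part of any existing library), showing variants 1 and 3 side by side:

```javascript
// Sketch of pluggable host-selection strategies; names are illustrative.
const BASE = "chart.googleapis.com";

const strategies = {
  // Variant 1: always the same host - best for caching, no parallelism gain.
  fixed: () => `http://${BASE}`,
  // Variant 3: round-robin across the numbered subdomains 0-9.
  roundRobin: (() => {
    let i = 0;
    return () => `http://${i++ % 10}.${BASE}`;
  })(),
};

// The consumer picks the strategy via configuration.
function makeChartUrl(strategy, query) {
  return `${strategies[strategy]()}/chart?${query}`;
}

console.log(makeChartUrl("fixed", "cht=p3"));
console.log(makeChartUrl("roundRobin", "cht=p3")); // http://0.chart.googleapis.com/...
```

Because the strategy is just a function, testing strategies against each other reduces to swapping a configuration value.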
I don't exactly understand the request to rotate domains. I guess it makes sense in that a browser may only allow X open connections to a given domain at once, so if you have 10 images served from chart.googleapis.com, you may need to wait for the first to finish downloading before the browser even begins to receive the fifth, and so on.
The problem with rotating domains randomly is that then you defeat browser caching entirely. If an image is served from 1.chart.googleapis.com on one page load and then from 7.chart.googleapis.com on the next page load, the cached chart is invalidated and the user needs to wait for it to be requested, generated, and downloaded all over again.
The best solution I can think of is determining the domain algorithmically from the request itself. If the URL is built in a function, you can md5 the arguments, convert the digest to an integer, and then serve the image from {$result % 10}.chart.googleapis.com.
Probably a little overkill, but you at least can guarantee that a given image will always be served from the same server.

Getting MSISDN from mobile browser headers

What is the best way of going about this? I need to get MSISDN data from users accessing a mobisite to enhance the user experience.
I understand that not all gateways populate the headers fully, but I would like to have MSISDN capture as option one before falling back on a cookie-based model.
I know this is an old post, but I'd like to give my contribution.
I work for a mobile carrier, and we have a feature for configuring header enrichment. We create filters to match certain traffic passing through the GGSN (the GPRS gateway node); it then inspects the packets at layer 7 (when the application layer is HTTP - not protected with SSL) and writes the MSISDN, IMSI and other parameters into the headers.
So it is a carrier-dependent feature.
While some operators do this, the representation and mechanism depends entirely on the operator. There is no standard way to do this.
If you are willing to pay for it, try http://Bango.com. They provide an API, but you may need to redirect the user to their service.
As others have said, there is no standard way between mobile operators for passing the MSISDN in the HTTP headers.
Different operators vary on the header value used, some operators do not pass the MSISDN unless they "authorize" your website and others have more complicated means of passing the MSISDN (e.g. redirects to their network to pick up the header).
Developing a site for one specific operator is easy enough, developing for multiple is next to impossible if you need to rely on the header.

Safe implementation of script tag hack to do XSS?

Like a lot of developers, I want to make JavaScript served up by Server "A" talk to a web service on Server "B" but am stymied by the current incarnation of same origin policy. The most secure means of overcoming this (that I can find) is a server script that sits on Server "A" and acts as a proxy between it and "B". But if I want to deploy this JavaScript in a variety of customer environments (RoR, PHP, Python, .NET, etc. etc.) and can't write proxy scripts for all of them, what do I do?
Use JSONP, some people say. Well, Doug Crockford pointed out on his website and in interviews that the script tag hack (used by JSONP) is an unsafe way to get around the same origin policy. There's no way for the script being served by "A" to verify that "B" is who they say they are and that the data it returns isn't malicious or will capture sensitive user data on that page (e.g. credit card numbers) and transmit it to dastardly people. That seems like a reasonable concern, but what if I just use the script tag hack by itself and communicate strictly in JSON? Is that safe? If not, why not? Would it be any more safe with HTTPS? Example scenarios would be appreciated.
Addendum: Support for IE6 is required. Third-party browser extensions are not an option. Let's stick with addressing the merits and risks of the script tag hack, please.
Currently browser vendors are split on how cross-domain JavaScript should work. A secure and easy-to-use option is Flash's crossdomain.xml file. Cross-domain proxies have been written for most languages, and they are open source.
A more nefarious solution would be to use XSS the way the Samy worm spread. XSS can be used to "read" a remote domain using XMLHttpRequest. XSS isn't even required if the other domain has added a <script src="https://YOUR_DOMAIN"></script>: a script tag like this lets you evaluate your own JavaScript in the context of another domain, which is identical to XSS.
It is also important to note that even with the restrictions of the same origin policy, you can get the browser to transmit requests to any domain - you just can't read the response. This is the basis of CSRF. You can write invisible image tags to the page dynamically to make the browser fire off an unlimited number of GET requests; this use of image tags is also how an attacker exfiltrates document.cookie when exploiting XSS on another domain. CSRF POST exploits work by building a form and then calling .submit() on the form object.
To understand the Same Origin Policy, CSRF and XSS better, you should read the Google Browser Security Handbook.
Take a look at easyXDM, it's a clean javascript library that allows you to communicate across the domain boundary without any server side interaction. It even supports RPC out of the box.
It supports all 'modern' browsers, as well as IE6, with transit times < 15ms.
A common usecase is to use it to expose an ajax endpoint, allowing you to do cross-domain ajax with little effort (check out the small sample on the front page).
What if I just use the script tag hack by itself and communicate strictly in JSON? Is that safe? If not, why not?
Lets say you have two servers - frontend.com and backend.com. frontend.com includes a <script> tag like this - <script src="http://backend.com/code.js"></script>.
When the browser evaluates code.js, it is considered part of frontend.com and NOT part of backend.com. So if code.js contained XHR code to communicate with backend.com, it would fail.
Would it be any more safe with HTTPS? Example scenarios would be appreciated.
If you just converted your <script src="http://backend.com/code.js"></script> to https, it would NOT be any more secure. If the rest of your page is http, an attacker could easily man-in-the-middle the page and change that https back to http - or worse, include his own JavaScript file.
If you convert the entire page and all its components to https, it would be more secure. But if you are paranoid enough to do that, you should also be paranoid enough NOT to depend on an external server for your data. If an attacker compromises backend.com, he has effectively got enough leverage over frontend.com, frontend2.com and all of your websites.
In short, https is helpful, but it won't help you one bit if your backend server gets compromised.
So, what are my options?
Add a proxy server on each of your client applications. You don't need to write any code, your webserver can automatically do that for you. If you are using Apache, look up mod_rewrite
If your users are using the latest browsers, you could consider using Cross Origin Resource Sharing.
As The Rook pointed out, you could also use Flash + Crossdomain. Or you could use Silverlight and its equivalent of Crossdomain. Both technologies allow you to communicate with javascript - so you just need to write a utility function and then normal js code would work. I believe YUI already provides a flash wrapper for this - check YUI3 IO
What do you recommend?
My recommendation is to create a proxy server, and use https throughout your website.
Apologies to all who attempted to answer my question. It proceeded under a false assumption about how the script tag hack works. The assumption was that one could simply append a script tag to the DOM and that the contents of that appended script tag would not be restricted by the same origin policy.
If I'd bothered to test my assumption before posting the question, I would've known that it's the src attribute of the appended tag that's unrestricted. JSONP takes this a step further by establishing a protocol that wraps traditional JSON web-service responses in a callback function.
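To make the mechanism concrete, here is a minimal sketch of both halves of JSONP (Node.js; the function and callback names are illustrative, not from any particular library). The server wraps the JSON payload in a caller-supplied function call, and the client injects a script tag whose src points at that endpoint:

```javascript
// Server side: wrap the JSON payload in the caller-supplied callback name.
// The response body is executable JavaScript, not plain JSON.
function jsonpBody(callbackName, payload) {
  return `${callbackName}(${JSON.stringify(payload)});`;
}

// Client side (browser): the injected tag's src is NOT restricted by the
// same origin policy, and the response is executed as ordinary script:
//   function handleData(data) { /* use data */ }
//   const s = document.createElement("script");
//   s.src = "http://backend.example/api?callback=handleData";
//   document.head.appendChild(s);

console.log(jsonpBody("handleData", { id: 1 }));
// handleData({"id":1});
```

This also shows exactly why JSONP is unscreenable: whatever the server returns is executed wholesale, so the client is trusting the remote host completely.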
Regardless of how the script tag hack is used, however, there is no way to screen the response for malicious code since browsers execute whatever JavaScript is returned. And neither IE, Firefox nor Webkit browsers check SSL certificates in this scenario. Doug Crockford is, so far as I can tell, correct. There is no safe way to do cross domain scripting as of JavaScript 1.8.5.

"Same origin policy" and scripts loaded from google - a vulnerable solution?

I read the question here in SO "jQuery Linking vs. Download" and I somehow don't get it.
What happens if you host a page on http://yourserver.com, but load jQuery library from http://ajax.googleapis.com and then use the functions defined in jQuery script?
Does "same origin policy" not count in this case? I mean, can you make AJAX calls back to http://yourserver.com?
Is the JavaScript being executed considered as coming from yourserver.com?
My point here is, you do not know what the user has downloaded from some third party server (sorry, Google), and still the code executing on his computer is as good as the one he would download from your server?
EDIT: Does it mean that if I use a web statistics counter from a 3rd party I don't know very well, they might "inject" some code and call into my web services as if their code was part of mine?
The owner of site http://yourserver.com/ should trust the content it references from other servers (in this case, Google's). The same origin policy doesn't apply to "script" tags.
Of course, the scripts of the foreign servers (once loaded) have access to the whole DOM: so, if the foreign content is compromised, there can be security exposures.
As with many things in the web world, it comes down to trust and continuous management.
Edit:
Does it mean that if I use a web statistics counter from a 3rd party I don't know very well, they might "inject" some code and call into my web services as if their code was part of mine?
Yes.
Answering the Edit comment: Yes. Unless the counter was wrapped in an iframe tag, it is as if it was a part of your web site and can call into your web services, access your cookies, etc.
Yes, the policy doesn't apply to <script> tags.
If someone was able to hack Google's script store, it would affect every page, on every domain, that uses google.com as its script host.