Is it possible to achieve the scenario described below with the IIS/ARR combination, or with some additional development?
Have a front-end web server (a kind of reverse proxy) in the DMZ which authenticates (and if possible authorizes) users, and then forwards the requests to the corresponding application servers within the internal network.
Any suggestions / thoughts would be really helpful.
The short answer is "no", as it isn't a feature out of the box.
The longer answer is "yes" because you can write a native module for IIS that does whatever you want before ARR gets its hands on the request (modify headers, change the target host, modify the request body, etc).
I know this works as I used this method to implement some common middleware (authentication, cors, common error pages, cookie rewriting) across a set of services using a variety of technologies.
This is the starting point for the documentation:
"Walkthrough: Creating a Request-Level HTTP Module By Using Native Code"
https://msdn.microsoft.com/en-us/library/ms689320(v=vs.90).aspx
You could achieve this by adding a query string value equivalent to "?authorized=false/true". Use ARR to evaluate the query string value: if authorized=false, route to the authorization server farm, then have the authorization process resend the same URL with the query string authorized=true, which will route to the "live" server farm.
Of course you will probably want to use something other than a plain-text "authorized" value in your URL!
You could possibly do the same thing with a header; using appcmd you can manipulate headers in ARR.
The schema for ARR is in the C:\Windows\system32\inetsrv\config\schema\arr_schema.xml file.
If you examine this schema you will see where the Header elements are.
HTH
I am working on an enterprise web app. Our app will be deployed to production over HTTPS only. We are writing REST APIs that will be consumed by our HTML5 client only (not by others).
My team has decided to use POST for all REST verbs, because GET, PUT, and DELETE expose the id in the URL.
My team thinks that having any kind of data exposed in the URL is not a good idea, even if the id is not directly the database id.
Everybody seems to agree on using the POST HTTP method in place of GET, PUT, and DELETE, passing an http_method parameter and deciding from that parameter which operation to perform.
This is the only difference that we are planning to make for our REST APIs. What do you think?
By avoiding the use of GET you lose the ability to use caching. Hiding the id of the resource in the body will most likely cause you to violate the "identification of resources" constraint, which will make linking/hypermedia far more difficult.
I would assert that it will be almost impossible to get the benefits of a RESTful system if you limit yourself to POST. Having said that, considering you are only targeting HTML5 clients you probably don't need most of the REST benefits. Doing JSON-RPC is probably more than adequate for your needs.
However, I still think throwing away GET for a false sense of security is a bad idea. The body of your POSTs is just as easy for some intermediary to access as an id in the URI.
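To make the contrast concrete, here is a rough JAX-RS sketch of the two styles; the Order resource, the RpcEnvelope wrapper and the helper methods are made up purely for illustration, not taken from your API:

    import javax.ws.rs.*;
    import javax.ws.rs.core.Response;

    // Resource-oriented style: the URI identifies the order, GET responses are cacheable,
    // and the URI can be linked to from other representations.
    @Path("/orders")
    public class OrderResource {

        @GET
        @Path("/{id}")
        @Produces("application/json")
        public Response get(@PathParam("id") String id) {
            return Response.ok(loadOrder(id)).build();
        }

        // Tunnelled style: every call is a POST, the real operation and the id travel
        // in the body, so caches and intermediaries see one opaque, uncacheable endpoint.
        @POST
        @Consumes("application/json")
        public Response dispatch(RpcEnvelope body) {
            switch (body.getHttpMethod()) {
                case "GET":    return Response.ok(loadOrder(body.getId())).build();
                case "DELETE": deleteOrder(body.getId()); return Response.noContent().build();
                default:       return Response.status(400).build();
            }
        }

        public static class RpcEnvelope {            // hypothetical request wrapper
            private String httpMethod;
            private String id;
            public String getHttpMethod() { return httpMethod; }
            public String getId()         { return id; }
        }

        private Object loadOrder(String id) { return null; }   // stand-in for a real lookup
        private void deleteOrder(String id) { }                // stand-in for a real delete
    }

Either way, once TLS is terminated the POST body is exactly as visible to that party as the URI would have been, so nothing is actually hidden.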
You should use HTTP as it was intended so you can get the full benefits of the stack. The different verbs have different semantics for a reason.
You can feel like POST is more secure than GET all you want, but it's a false premise.
Who are you securing this data from? Man in the middle? That's what HTTPS solves.
Are you securing it from your end users? The ones who can download your entire application and study the source code at their leisure whenever they want? Or view the source of a page with a right click and see your payloads laid out in all their glory? Or simply turn the browser debugger on and watch all of your traffic? You're trying to secure it from those people?
POST doesn't hide anything from anybody.
You should probably have a look at this post as it pretty much explains what https does for you. So the question would be: who do you want to protect the URL from? It might be in the browser history or visible to someone standing behind the user, but not to anyone listening on the line.
I have a URL to a search page (e.g. http://x.y.z/search?initialQuery=test). It isn't a web service endpoint, it's just a basic URL (which goes through a Spring controller). There is no security around accessing the page; you can enter the link in a browser and it will render results.
What I want is to find a way to prevent other sites from submitting requests to this url, unless they are specifically allowed.
I built a filter which intercepts all requests to this page and performs some validation. If validation fails, the request is redirected to another page.
The problem is what validation to perform... I tried using the Referer field to see if the request was coming from an "allowed" site, but I know the Referer field isn't always populated and can easily be faked.
Is there a way to achieve this?
We also have IHS, so if there is something that can be done there instead, that would be great.
I'd suggest implementing some kind of system to allow users to log in if you really want to protect a page from being accessed.
You could try to detect the IP address of the incoming request, but I'd imagine that this can be spoofed quite easily.
Realistically, pages that are public are open to any kind of interrogation to the limits that you set. Perhaps limiting the data that the page returns is a more practical option?
This is the reason that websites like Facebook and Twitter implement OAuth to prevent resources from being accessed by unauthorised users.
How about only running the search if the referring page has passed along a POST variable called "token" (or something similar) which has been set to a value that you give each app that's going to hit the search page? If you get a request for that page with a query string but no POST value for "token", then you know it's an unauthorized request and can handle it accordingly.
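A minimal sketch of that idea as a servlet filter; the parameter name, the shared token value and the redirect target are placeholders you would replace:

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.*;

    // Lets a request through to the search page only if it is a POST carrying the
    // expected "token" parameter; everything else is redirected.
    public class SearchTokenFilter implements Filter {

        private static final String EXPECTED_TOKEN = "replace-with-shared-secret";

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            // For a POST, getParameter also reads the form body, so a plain GET with only
            // a query string will never satisfy this check.
            String token = request.getParameter("token");
            if ("POST".equalsIgnoreCase(request.getMethod()) && EXPECTED_TOKEN.equals(token)) {
                chain.doFilter(req, res);
            } else {
                response.sendRedirect("/unauthorized");   // placeholder redirect target
            }
        }

        public void init(FilterConfig config) { }
        public void destroy() { }
    }

You would map the filter to the /search URL pattern in web.xml (or with @WebFilter) and hand the token value only to the apps that are allowed to call the page.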
If you know the IPs of sites which can contact your service, you can put Apache as a proxy and use access control to permit/deny access to specific directories/urls.
I assume that you want to avoid having your site "scraped" by bots, but do want to allow humans to access your search page.
This is a fairly common requirement (google "anti scraping"). In ascending order of robustness (but descending order of user-friendliness):
block requests based on the HTTP headers (IP address, user agent, referrer).
implement some kind of CAPTCHA system
require users to log in before accessing the search URL
You may be able to buy some off-the-shelf wizardry that (claims to) do it all for you, but if your data is valuable enough, those who want it will hire mechanical turks to get it...
Use certificate-based security.
You can create a self-signed certificate using openssl or the Java keytool,
and you will have to send a copy of the certificate to your client.
If the client does not have this certificate, it will not be able to call your service.
You also have to enable the certificate in your web container. I don't know about other containers, but
in Apache Tomcat you can do it in the Connector tag of your server.xml.
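Roughly, the client side of such a call in Java could look like the sketch below; the file name, password and URL are placeholders, and the matching Connector attributes (clientAuth, keystoreFile, truststoreFile, etc.) are described in the Tomcat SSL documentation:

    import java.io.FileInputStream;
    import java.net.URL;
    import java.security.KeyStore;
    import javax.net.ssl.HttpsURLConnection;
    import javax.net.ssl.KeyManagerFactory;
    import javax.net.ssl.SSLContext;

    // Presents a client certificate from a local keystore when calling the service.
    // If the container requires client certificates, a caller without one is rejected
    // during the TLS handshake.
    public class ClientCertCall {
        public static void main(String[] args) throws Exception {
            KeyStore keyStore = KeyStore.getInstance("PKCS12");
            try (FileInputStream in = new FileInputStream("client-cert.p12")) {
                keyStore.load(in, "changeit".toCharArray());
            }

            KeyManagerFactory kmf =
                    KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            kmf.init(keyStore, "changeit".toCharArray());

            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(kmf.getKeyManagers(), null, null);   // null = default trust managers

            HttpsURLConnection conn = (HttpsURLConnection)
                    new URL("https://example.com/service").openConnection();
            conn.setSSLSocketFactory(ctx.getSocketFactory());
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }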
Lately I have been diving deep into web application security. While browsing I found the WebScarab tool from OWASP, which can inject possible attacks into your web application and show where it is vulnerable.
I am using that tool to intercept requests that go through my web application, which is based on the JSF 1.2 framework. While using it I observed that whatever values are entered in the form fields are shown as-is in the HTTP request in this tool. You can modify these values, the tool will automatically create a new request, and strikingly, the modified values will be inserted into the DB.
Isn't this a potential attack? I mean, anyone can intercept an HTTP request, modify the parameters with the help of a tool, and inject some malicious content.
My questions are:
Is it possible for anyone to intercept HTTP requests originating from any webpage, say stackoverflow.com?
If yes, how can you prevent these modifications by an unknown user who can modify the parameters and rebuild the encoded URL?
If no, please explain why. I am absolutely stumped.
WebScarab is a proxy:
WebScarab operates as an intercepting proxy, allowing the operator to review and modify requests created by the browser before they are sent to the server, and to review and modify responses returned from the server before they are received by the browser.
But this requires the client (e.g. your web browser) to actually use the proxy:
In order to start using WebScarab as a proxy, you need to configure your browser to use WebScarab as a proxy. This is configured in IE using the Tools menu. Select Tools -> Internet Options -> Connections -> LAN Settings to get the proxy configuration dialog.
So only the communication of clients that use the WebScarab proxy can be intercepted.
Using WebScarab or another intercepting tool, a person can change the transaction data while a request is in transit from the client to the server.
Basically, this can be mitigated by applying the same validations on both the client and the server side of the application.
For example, if the application has change-password functionality and someone uses an interceptor to modify the password in the request, then while saving, the server side should validate whether the user entered the correct current password or not.
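A rough sketch of that server-side check; the repository and hasher types below are hypothetical stand-ins, not JSF or container APIs:

    // Hypothetical change-password service: the current password sent by the client is
    // re-checked on the server against the stored hash, so a request modified in an
    // intercepting proxy cannot change the password without knowing the current one.
    public class ChangePasswordService {

        interface UserRepository {
            User findById(String id);
            void updatePasswordHash(String id, String hash);
        }

        interface PasswordHasher {
            boolean matches(String rawPassword, String storedHash);
            String hash(String rawPassword);
        }

        public static class User {
            private String passwordHash;
            public String getPasswordHash() { return passwordHash; }
        }

        private final UserRepository users;
        private final PasswordHasher hasher;

        public ChangePasswordService(UserRepository users, PasswordHasher hasher) {
            this.users = users;
            this.hasher = hasher;
        }

        public void changePassword(String userId, String currentPassword, String newPassword) {
            User user = users.findById(userId);
            // Server-side validation: never rely on the client-side check alone.
            if (user == null || !hasher.matches(currentPassword, user.getPasswordHash())) {
                throw new SecurityException("Current password is incorrect");
            }
            users.updatePasswordHash(userId, hasher.hash(newPassword));
        }
    }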
What is the best way of going about this? I need to get MSISDN data from users accessing a mobisite to enhance the user experience.
I understand that not all gateways will populate the headers, but I would like to have MSISDN capture as option one before falling back on a cookie-based model.
I know this is an old post, but I'd like to give my contribution.
I work for a mobile carrier, and here we have a feature where you can set some parameters for header enrichment. We create filters to match certain traffic passing through the GGSN (Gateway GPRS Support Node); it then opens the packets at layer 7 (when the application layer is HTTP, not protected with SSL) and writes the MSISDN, IMSI and other parameters into them.
So it is a carrier-depending feature.
While some operators do this, the representation and mechanism depends entirely on the operator. There is no standard way to do this.
If you are willing to pay for it, try http://Bango.com. They provide an API, but you may need to redirect the user to their service.
As others have said, there is no standard way between mobile operators for passing the MSISDN in the HTTP headers.
Operators vary on the header value used; some operators do not pass the MSISDN unless they have "authorized" your website, and others have more complicated means of passing the MSISDN (e.g. redirects to their network to pick up the header).
Developing a site for one specific operator is easy enough; developing for multiple operators is next to impossible if you need to rely on the header.
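As an illustration only — the header names below are examples that some operators have used, not a standard list, and the cookie name is a placeholder — a servlet might probe a few candidates and fall back to the cookie model:

    import java.io.IOException;
    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Tries a few operator-specific header names for the MSISDN; if none is present,
    // falls back to a cookie previously set by the application.
    public class MsisdnLookupServlet extends HttpServlet {

        // Example header names only: each operator decides its own, if it sends one at all.
        private static final String[] CANDIDATE_HEADERS =
                { "X-MSISDN", "X-Up-Calling-Line-Id", "X-Nokia-MSISDN" };

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String msisdn = null;
            for (String name : CANDIDATE_HEADERS) {
                String value = req.getHeader(name);
                if (value != null && !value.isEmpty()) {
                    msisdn = value;
                    break;
                }
            }
            if (msisdn == null && req.getCookies() != null) {
                for (Cookie cookie : req.getCookies()) {
                    if ("msisdn".equals(cookie.getName())) {   // placeholder cookie name
                        msisdn = cookie.getValue();
                    }
                }
            }
            resp.getWriter().println(msisdn != null ? "MSISDN: " + msisdn : "MSISDN not available");
        }
    }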
I am using CouchDB for my Data Layer in a Rails 3 application using CouchRest::Model hosted on Heroku.
I am requesting a List of Documents and returning them as JSON to my Browser and using jQuery Templates to represent that data.
Is there a way I could build the request on the server side, and return the request that would need to be called from the browser WITHOUT opening a huge security hole i.e. giving the browser access to the whole database?
Ideally it would be one-off token access to a specific query, where the token would be generated on the server side, and CouchDB would take the token, make sure it matches what the query should be, and give access to the results.
One way that comes to mind would be to generate a token Document and use a show function (http://guide.couchdb.org/draft/show.html) to return the results for that token Document's view results. Though I am not sure if that is possible.
Though another is to put a token on the Document itself and use a list function (http://guide.couchdb.org/draft/transforming.html)
Save that, any other ideas?
Thanks in Advance
Is there a way I could build the request on the server side, and return the request that would need to be called from the browser WITHOUT opening a huge security hole i.e. giving the browser access to the whole database?
Yes. One method is to create a Rack app and mount it inside your Rails app. You can have it receive requests from users' browsers at "/couch" and forward them to your "real" CouchDB URL, returning Couch's JSON response as-is or modifying it however you need.
You may also be able to use Couch's rewrite and virtual host features to control what Couch URLs the general public is able to reach. This probably will necessitate the use of list or show functions. http://blog.couchone.com/post/1602827844/of-rewrites-and-virtual-hosting-an-introduction
Ideally it would be a one off token access to a specific query, Where the token would be generated on the server side, and CouchDB would take the token, and make sure it matches what the query should be, and give access to the results.
You might use cookies for this since list and show functions can set and get cookie values on requests.
But you could also include a hash value as part of each request. Heroku's add-on API has a good example of how this works. https://addons.heroku.com/provider/resources/technical/build/sso
Notice that the API calls are invalid outside of a certain window of time, which may be exactly what you need.
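Your stack is Rails, but the timestamped-token idea is language-independent; here is a rough Java sketch of the same shape as the Heroku scheme (the secret, the window length and the token layout are placeholders):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // Issues and checks tokens of the form sha256(id + ":" + secret + ":" + timestamp).
    // The server hands the browser {id, timestamp, token}; the proxy in front of CouchDB
    // recomputes the hash and rejects anything outside the allowed time window.
    public class QueryToken {

        private static final String SECRET = "replace-with-server-side-secret";
        private static final long WINDOW_SECONDS = 300;   // e.g. five minutes

        public static String tokenFor(String id, long timestamp) throws Exception {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest(
                    (id + ":" + SECRET + ":" + timestamp).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : hash) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static boolean isValid(String id, long timestamp, String token) throws Exception {
            long now = System.currentTimeMillis() / 1000;
            if (Math.abs(now - timestamp) > WINDOW_SECONDS) {
                return false;                              // token is outside the time window
            }
            return tokenFor(id, timestamp).equals(token);
        }
    }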
I'm not sure I precisely understand your needs, but I hope I have been able to give you some helpful ideas.