HTTP Request object and processing local requests - IIS

I have a web application that uses the CDO Message object to email reports.
For example:
Dim objCDO
Set objCDO = Server.CreateObject("CDO.Message")
objCDO.CreateMHTMLBody "http://server/report.asp"
The problem is that when it makes its request to IIS, it does not inherit the ASP session identity of the logged-in user, i.e. the session variables used to authenticate requests before permitting access to the content are blank. This makes authentication tough, forcing me to add a querystring variable (because, as you can see, you can't add a form variable to this request) like:
objCDO.CreateMHTMLBody "http://server/report.asp?userid=xx&password=xx"
Surely there must be a way for the authorisation code in the report to check whether the request came from the local machine (i.e. from the CDO object in the mailer script), thus negating the need for authentication?

Just don't do it! For these reasons:
Making a second request back into the server will cause the current thread to block; if you have too many of these, you will deadlock the application.
CreateMHTMLBody internally uses the WinINET HTTP stack to make the request. This stack is intended for use in client interactive scenarios; in the server scenario it isn't thread-safe.
You lose all session context, so it can (as you are discovering) make some things more difficult or less secure. In addition, it can create a load of unwanted sessions.
Whilst it's true that CreateMHTMLBody can be very convenient, it can also create bloated emails. In the server situation you really need to craft the email with code rather than use this tempting method.
Edit
It seems Jimbo has more general scenarios in mind than just CreateMHTMLBody. The general scenario is that a component (over which the consumer has no control) is used in an ASP page (we will designate this the "Client Page"), and it makes a subsequent request (potentially via WinINET) to another ASP page (we will designate this the "Service Page"). The assumption is that the only thing the "Client Page" can control about the usage of the component is the URL supplied to it.
Here are some approaches to avoid or mitigate the issues outlined above.
Handling locking problems: Placing the "Client Page" and the "Service Page" in different ASP applications avoids the locking issues. My suggestion would be to place the "Client Page" in a different application from the rest of the site, with that new application in a sub-folder of the main application.
Dealing with WinINET issues: Place the new application in its own application pool. If using WinINET in an unsafe manner does cause a problem, it won't affect the main application process. Indeed, placing it in its own process may help avoid such problems in the first place. (No guarantees here, but it's the closest you can get to avoiding the WinINET issues completely.)
Controlling security: Configure the "Service Page" to only accept requests from the "Client Page". There are probably a number of ways to do this, but the simplest is to enable IP-based security: requests to the "Service Page" should only be coming from a specific IP address, or at least a limited set of addresses.
Handling authentication: During the main application logon, create a volatile cookie containing some unique value. Since the browser perceives the "Client Page" as living in a sub-folder of the main application, it will receive this cookie. The "Client Page" can use this cookie to confirm the authenticity of the request and/or pass it in the URL when using the component (see the sketch after this list).
Suppressing prolific session creation: Have the "Service Page" call Session.Abandon before it completes its operation.
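As a rough sketch of the cookie handshake described under "Handling authentication" (Python-style pseudocode; the original pages are classic ASP, and the names token_store, REQUEST_TOKEN, and render_report are all illustrative):

import secrets

token_store = {}  # user id -> current request token, kept server-side

def on_logon(user_id, response):
    # Main application logon: issue a volatile cookie with a unique value
    token = secrets.token_urlsafe(32)
    token_store[user_id] = token
    response.set_cookie("REQUEST_TOKEN", token)  # no expiry date = volatile

def service_page(request, user_id):
    # "Service Page": honour the request only if the token matches
    presented = request.cookies.get("REQUEST_TOKEN")
    if presented is None or presented != token_store.get(user_id):
        return 403  # not relayed by an authenticated "Client Page"
    return render_report()  # hypothetical report renderer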

Related

Is authentication with a client-side rendered app and sessions possible?

No matter how I reason about it, it seems as if there is no secure way of implementing a client-side rendered single-page application that uses session information for authentication via cookies, without a severe compromise in security. I was mainly looking at building a React app, but it seems as if I will need to build it with SSR for a relatively secure version of authentication.
The use case that I'm especially thinking of is where the user logs in or registers and then gets a cookie with the session id. From there, in a server-side implementation, I can simply set up conditional rendering depending on whether the server-stored session has an associated user id or not, and then pull the user information from there and display it.
However, I can't think of a client-side rendered solution where the user can rely on the session id in the cookie alone that isn't easily spoofable. Some of the insecure implementations would include using browser storage (local/session). Thanks.
I think the major issue here is that you are mixing the two parts of a web page (at least according to what HTML set out to achieve) and treating them both as sensitive information.
You have two major parts in a web page - the first being the display format and the second being the data. The presumption in client-side rendering / single-page applications is that the format itself is not sensitive, and only the data needs to be protected.
If that's the case, you should treat your client-side redirect-to-login behavior as a quality-of-life feature. The data endpoints on your server would still be protected - meaning that, in theory, an unauthenticated user could muck about with the static HTML he is being served and extract page layouts and templates - but those would be meaningless without the data to fill them, which is the protected part.
In practice, your end product would be a single-page application that makes requests to various API endpoints to fetch data and fill in the requested page templates. You wouldn't even need to go as far as storing complex session state on the client - a simple flag telling the client whether it is authenticated would suffice (this is in addition to whatever you would normally use for server-side authentication, such as cookies or tokens).
Now let's say I'm a malicious user who is up to no good. I could "spoof" - or really just open the browser dev tools and set - the isAuthenticated flag to true, letting me skip past the login screen. Now what would I do? I could theoretically navigate to my-service/super-secret without being redirected back to the login page on the client side - and then, as soon as the relevant page tried to load its data from the server with the nonexistent credentials, it would fail: best case displaying an error message, worst case with some internal exception and a view showing a broken template.
So just to emphasize in short:
A. If what you want to protect is your TEMPLATE, then there is no way to achieve this client-side.
B. If what you want to protect is your DATA, then you should treat gating/preventing users from navigating to protected pages as a quality-of-life feature and not a security feature, since the real protection will be implemented on the server when serving the data for that specific page (see the sketch below).
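To make point B concrete, here is a minimal sketch in Python/Flask (the framework, route names, and session check are illustrative, not anything from the question): the template is served to anyone, while the data endpoint re-checks authentication on every call, so flipping a client-side isAuthenticated flag gains an attacker nothing.

from flask import Flask, jsonify, session

app = Flask(__name__)
app.secret_key = "change-me"  # enables signed session cookies

@app.route("/super-secret")
def secret_template():
    # The template/layout: treated as non-sensitive, served to anyone
    return app.send_static_file("super-secret.html")

@app.route("/api/super-secret-data")
def secret_data():
    # The data: the actually protected part, gated on every request
    if "user_id" not in session:
        return jsonify(error="unauthorized"), 401
    return jsonify(load_data_for(session["user_id"]))  # hypothetical loader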

How to deny outside POST requests?

I would like, in Liferay, to allow only logged-in users to make POST requests, and at the same time to deny POST requests from other sources, such as Postman.
With the caveat that I am not familiar with Liferay itself, I can tell you that in a general Web application what you are asking is impossible.
Let's consider the problem in its simplest form:
A Web application makes POST requests to a server
The server should allow requests only from a logged-in user using the Web application
The server is stateless - that is, each request must be considered atomically. There is no persistent connection and no state is preserved at the server.
So - let's consider what happens when the browser makes a POST:
An HTTP connection is opened to the server
The HTTP headers are sent, including any site cookies that have previously been set by the server, and special headers like the User Agent and referrer
The form data is posted to the server
The server processes the request and returns a response
How does the server know that the user is logged in? In most cases, this is done by checking a cookie that is sent with the request and verifying that it is correct - cryptographically signed, for instance.
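As an illustration of the signed-cookie idea, here is a minimal HMAC sketch (real frameworks bundle this for you; key management and expiry are omitted):

import hmac, hashlib

SECRET = b"server-side secret"  # known only to the server

def sign(session_id):
    mac = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id + "." + mac  # value stored in the cookie

def verify(cookie_value):
    # Returns the session id if the signature checks out, else None
    session_id, _, mac = cookie_value.rpartition(".")
    expected = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id if hmac.compare_digest(mac, expected) else None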
Now let's consider a Postman request. Exactly what is the difference between a request submitted through Postman and one submitted through the browser? None. There is no difference. It is trivially simple to examine and retrieve the cookies sent on a legitimate request from the browser, and include those headers in a faked Postman request.
Let's consider what you might do to prevent this.
1. Set and verify extra cookies - won't work because we can still retrieve those cookies just like we did with the login session
2. Encrypt the connection so the cookies can't be captured over the wire - won't work because I can capture the cookies from the browser
3. Check the User Agent to ensure that it is sent by a browser - won't work because I can spoof the headers to any value I want
4. Check the Referrer to ensure the request came from a valid page on my site (this is part of a Cross-Site Request Forgery mitigation) - won't work because I can always spoof the Referrer to any value I want
5. Add logic (JavaScript) into the page to compute some validity token - won't work because I can still read the JavaScript (it's client-side) and fake my own token
By the very nature of the Web system, this problem is insoluble. Because you (the server/application writer) do not have complete control over both sides of the communication, it is always possible to spoof requests from the client. The best you can do is prevent arbitrary requests from arbitrary users who do not have valid credentials. However, any request that includes the correct security tokens must be considered valid, whether it is generated from a browser/web page or crafted by hand or through some other application. At best, you will needlessly complicate your application for no significant improvement in security. You can prevent CSRF attacks and some other injection-type attacks, but because you as the client can always read whatever is sent from the server and can always craft your own requests, you can always provide a valid request.
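To make the point concrete, here is how little it takes to replay a "legitimate" request outside the browser - a sketch using Python's requests library, with values copied from the browser's developer tools (the URL, cookie, and header values are all illustrative):

import requests

# Copied verbatim from a logged-in browser session via dev tools
cookies = {"JSESSIONID": "value-copied-from-the-browser"}
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",  # spoofed
    "Referer": "https://example.com/some-form",                      # spoofed
}

# From the server's point of view this is indistinguishable from the browser
resp = requests.post("https://example.com/submit",
                     data={"field": "value"}, cookies=cookies, headers=headers)
print(resp.status_code)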
Clarification
Can you please explain exactly what you are trying to accomplish? Are you trying to disable guest access completely, even through "valid" referrers (a user actually submitting a form), or are you trying to prevent POST requests coming from other referrers?
If you are just worried about referrer forgeries you can set the following property in your portal-ext.properties file.
auth.token.check.enabled = true
If you want to remove all permissions for the guest role, you can simply go into the portal's Control Panel, go into Configuration and then into the permissions table. Uncheck the entire row associated with Guest.
That should do it. If you can't find those permissions post your exact Liferay version.

URL Rewriting vulnerability

We modified our session handling from cookie-based to URL rewriting. By doing this, the session id gets transmitted as part of the URL.
Now there is a vulnerability: whoever obtains this URL will be able to log in to the system.
To resolve this issue we have done the following:
[1] An HTTP session listener has been created to maintain a list of HTTP sessions.
The listener reacts to the events fired when sessions are created or destroyed.
[2] A session filter has been created to verify the HTTP session and check its integrity against HTTP request attributes.
The session is invalidated if the request attributes (identifying the client origin) do not match the original attributes stored with the session (to block session hijack attempts).
However, I think that this approach has a gap when the user connects through a proxy, etc.
Is there any other effective solution for this?
Also, we cannot use third-party libraries to resolve this because of the nature of the product.
So you need to be doubly careful with session IDs like this: users share URLs! The definitive advice on the subject comes from OWASP:
https://www.owasp.org/index.php/Session_Management_Cheat_Sheet
But I think you should consider the following additional controls:
Rotating the session key on each request (see the sketch after this list). This is only practical with simple web applications, though: it'll undoubtedly cause problems with AJAX, and might be difficult to manage if the user is likely to open a second tab on the application.
Shorter timeouts.
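A sketch of per-request rotation against a server-side token store (an in-memory dict here purely for illustration; a real application would use its session store):

import secrets

sessions = {}  # session key -> user data

def rotate(presented_key):
    # Validate the presented key, then retire it and issue a fresh one
    user = sessions.pop(presented_key, None)
    if user is None:
        return None  # invalid, expired, or already-rotated key
    new_key = secrets.token_urlsafe(32)
    sessions[new_key] = user
    return new_key  # embed this in every rewritten URL in the response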
I am presuming that among the 'HTTP Request Attributes' you mention, you are already picking up the User-Agent and source IP address, and invalidating the session if these are inconsistent.
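For reference, that consistency check usually amounts to something like this sketch (the request/session accessors are illustrative; as you note, source IP checks break down behind proxies and NAT):

def check_fingerprint(session, request):
    # Invalidate the session if client-identifying attributes have changed
    current = (request.remote_addr, request.headers.get("User-Agent"))
    stored = session.get("fingerprint")
    if stored is None:
        session["fingerprint"] = current  # first request: record it
        return True
    if stored != current:
        session.invalidate()  # hypothetical invalidation hook
        return False
    return True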
If you are using SSL, it might be possible to build an elegant solution where the session ID is tied to the SSL connection (Apache, for example, exposes this in the SSL_SESSION_ID environment variable). But this information might not be available to your application.

Admin access through a GET parameter

I'm working on a really simple web site. I usually build a full-blown admin area to edit the site, but this time I thought about editing in place (contenteditable="true").
To simplify login for the user, I'd like to just give him a password that he can type in the address bar to log in, instead of the usual login form. So he would visit domain.com/page?p=the_password, and then I would store his data in a session, give him a cookie with a session id (the usual stuff), and redirect him to domain.com/page.
How safe / unsafe is this? I'm doing this in PHP, but I guess it applies to any server-side language.
Your login idea is unsafe: request URLs end up in web server logs and other places besides, which means passwords will end up in web server logs.
Your "contentedittable" idea is probably unsafe, but in a more subtle way. It's also (again, probably) non-compliant with the HTTP specification.
GET requests should always be idempotent. This is because user agents (browsers, caches, etc.) are allowed to reissue the same GET request any number of times without user consent. One reason a browser might do that is that the user pressed the back button and the previous page is no longer in the cache. If the request is not idempotent, then issuing it a second time may have an unexpected and unwanted side effect.
It sounds like your "editing in place" feature might not always be idempotent. Many kinds of simple edits are in fact idempotent, so I could be wrong, but as soon as you have, for example, the ability to add a new item to a list via this kind of interface, it's not.
Requests with side effects belong in methods like POST, PUT, and DELETE rather than GET (and of those, only POST is defined as non-idempotent, so it is the natural fit here).
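Applied to the question, that means the in-place editor should save through a POST rather than a GET with content in the URL - a sketch in Python/Flask terms (the question uses PHP; the route and helper names are illustrative):

from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "change-me"

@app.route("/page/save", methods=["POST"])  # side effects only via POST
def save_edit():
    if not session.get("is_admin"):
        abort(403)                       # still authenticate the editor
    content = request.form["content"]    # sent in the body, not the URL
    store_page_content(content)          # hypothetical persistence helper
    return "", 204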
To add to @Celada's answer: the URL will also be stored in the browser history and in network caches/proxies, so the password can leak that way too. Also, it would be trivial to log a random Internet user in as someone else (a login Cross-Site Request Forgery attack), for example by having a web site with an img element pointing to domain.com/page?p=the_password.
You don't write about this, but once the user is logged in, your scheme needs to protect against Cross-Site Request Forgery (so that a random page cannot perform admin actions on behalf of the logged-in user).

Protocol, paradigm or software for authenticating web requests across one's own domains

tl;dr
I am considering a web service design model which consists of several services/subdomains, each of which may be implemented on a different platform and hosted on a different server.
The main issue is authentication. If a request for jane's resources comes in, can a split system authenticate that request as hers?
All services access the same DB layer, of course. So I have in mind a single point of truth each service can use to authenticate each request.
For example, jane accesses www.site.com, which renders stuff in her browser. The browser may then send client-side requests to different subdomains of site.com, such as:
from internalapi.site.com fetch /user/users_secret_messages.json
from imagestore.site.com fetch /images/list_of_images
The authentication issue is: another user (or an outsider) can craft a request that can fool a subdomain into giving them information they should not access.
So I have in mind a single point of truth: a central resource accessible by each service that can be used to authenticate each request.
In this pseudocode, AuthService.verify_authentication() consults that central resource:
# server-side code:
def get_user_profile():
    auth_token = request.cookies['auth_token']
    user = AuthService.verify_authentication(auth_token)
    if user is None:
        response.write("you are unauthorized / not logged in")
    else:
        response.write(json.dumps(fetch_profile(user)))
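For concreteness, verify_authentication itself could be a simple lookup against the shared DB - a sketch (the table and column names are illustrative):

import sqlite3, time

def verify_authentication(auth_token, db_path="auth.db"):
    # Return the user id for a live token, or None
    con = sqlite3.connect(db_path)  # stands in for the shared DB layer
    try:
        row = con.execute(
            "SELECT user_id, expires_at FROM auth_tokens WHERE token = ?",
            (auth_token,)).fetchone()
    finally:
        con.close()
    if row is None or row[1] < time.time():
        return None  # unknown or expired token
    return row[0]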
Question: What existing protocols, software or even good design practices exist to enable flawless authentication across multiple subdomains?
I've seen how OAuth takes the headache out of managing third-party access, and I wonder if something similar exists for this kind of authentication. I also got the idea from Kerberos and TACACS.
This idea was the result of teamthink, as a way to simplify architecture (rather than handle heavy loads).
I built a system that did this a little while ago. We were building shop.megacorp.com, and had to share a login with www.megacorp.com, profile.megacorp.com, customerservice.megacorp.com, and so on.
The way it worked was in two parts.
Firstly, all sign-on was handled through a set of pages on accounts.megacorp.com. The signup link from our pages went there, with a return URL as a parameter (e.g. https://accounts.megacorp.com/login?return=http://shop.megacorp.com/cart). The login process there would redirect back to the return URL after completion. The login page also set an authentication cookie, scoped to the whole of the megacorp.com domain.
Secondly, authentication was handled on the various sites by grabbing the cookie from the request, then forwarding it via an internal web service call to accounts.megacorp.com. We could have done this as a straightforward SOAP or REST query, with the cookie as a parameter, but what we actually did was send an HTTP request with the cookie added to the headers (sort of as if the user had sent the request directly). That URL would then come back with a 200 if the cookie was valid, serving up some information about the user, or a 401 or similar if it wasn't. We could then deal with the user accordingly.
Needless to say, we didn't want to make a request to accounts.megacorp.com for every user request, so after a successful authentication, we would mark the user's session as authenticated. We'd store the cookie value and a timestamp, and if subsequent requests had the same cookie value, and were within some timeout of the timestamp, we'd treat them as authenticated without passing them on.
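A sketch of that validate-and-cache flow using Python's requests library (the URL, cookie name, and timeout are illustrative, not the actual megacorp service):

import time
import requests

AUTH_URL = "https://accounts.megacorp.com/whoami"  # hypothetical endpoint
CACHE_TTL = 300   # seconds before we re-validate against accounts
_cache = {}       # cookie value -> (user_info, validated_at)

def authenticate(cookie_value):
    cached = _cache.get(cookie_value)
    if cached and time.time() - cached[1] < CACHE_TTL:
        return cached[0]  # validated recently: skip the remote call
    # Forward the cookie as if the user had hit accounts directly
    resp = requests.get(AUTH_URL, cookies={"MEGACORP_AUTH": cookie_value})
    if resp.status_code != 200:
        return None       # 401 etc.: cookie not valid
    user = resp.json()    # some information about the user
    _cache[cookie_value] = (user, time.time())
    return user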
Note that because we pass the cookie as a cookie in the authentication request, the code that validates it on accounts.megacorp.com is exactly the same as for handling a direct request from a user, so it was trivial to implement correctly. So, in response to your desire for "existing protocols [or] software", I'd say that the protocol is HTTP, and the software is whatever you can use to validate cookies (a standard part of any web container's user handling). The authentication service is as simple as a web page which prints the user's name and details, and which is marked as requiring a logged-in user.
As for "good design practices", well, it worked, and it decoupled the login and authentication processes from our site pretty effectively. It did introduce a runtime dependency on a service on accounts.megacorp.com, which turned out to be somewhat unreliable. That's hard to avoid.
And actually, now that I think back, the request to accounts.megacorp.com was a SOAP request, and we got a SOAP response back with the user details, but the authentication was handled with a cookie, as I described. It would have been simpler and better to make it a REST request, where our system just did a GET on a standard URL and got some XML or JSON describing the user in return.
Having said all that, if you share a database between the applications, you could just have a table, in which you record (username, cookie, timestamp) tuples, and do lookups directly in that, rather than making a request to a service.
The only other approach I can think of is to use public-key cryptography. The application handling login could use a private key to sign the user's identity, and use that signature as the cookie. The other applications would hold the corresponding public key and use it to verify the cookie. The keys could be per-user, or there could be just one. Beyond the initial key distribution, this would involve no communication between the applications and no shared database.
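A sketch of that public-key variant using Ed25519 signatures from the cryptography package (the cookie format and the single shared key pair are illustrative):

import base64
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Login application: holds the private key
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # distributed to the other applications

def make_cookie(username):
    sig = private_key.sign(username.encode())
    return username + "." + base64.urlsafe_b64encode(sig).decode()

# Any other application: verifies using only the public key
def verify_cookie(cookie):
    username, _, sig_b64 = cookie.rpartition(".")
    try:
        public_key.verify(base64.urlsafe_b64decode(sig_b64), username.encode())
        return username
    except (InvalidSignature, ValueError):
        return None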