When trying to create an SSL connection with LWP::UserAgent, what do I use for realm? - security

I've started a project to scrape my work's employee website to pull the user's (in this case, my) schedule and munge the data onto a Google calendar. I've decided to go with Perl with LWP.
The problem is this: when trying to set up SSL negotiation, I don't know what to put for the 'realm'.
For example: (http://www.sciencemedianetwork.org/wiki/Form_submission_with_LWP,_https,_and_authentication)
# ...
my $ua = LWP::UserAgent->new;
$ua->protocols_allowed( [ 'http', 'https' ] );
$ua->credentials( 'some.server:443', 'realm', 'username', 'password' );
# ...
I've looked at everything my browser can tell me and at a wireshark packet capture trying to find anything but to no avail. I assume that second argument to credentials() isn't optional.
Where do I find the 'realm' I'm supposed to use?

The credentials are for the HTTP authentication protocol (RFC 2617) (Wikipedia).
The server can challenge the client to authenticate itself. This challenge contains a string called “realm”, which tells the client what authentication is required for. This allows the same server under the same domain to request authentication for different things, e.g. in a content management system there might be a “user password” and an “administrator password”, which would be two different realms.
In a browser, this realm would be displayed alongside the username and password box which allows the user to type in the correct password.
To discover the realm, navigate to a page which requires authentication and look for the WWW-Authenticate header.
Note that HTTP authentication has become quite uncommon, with session cookies being used more often. To deal with such an authentication scheme, make sure that your LWP::UserAgent has an attached cookie storage, and then navigate through the login form before visiting your actual target page. Using WWW::Mechanize tends to make this a lot easier.
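To make the discovery step concrete, here is a minimal Python sketch that pulls the realm out of a WWW-Authenticate challenge header (the header value below is a hypothetical example, not taken from any real server):

```python
import re

def extract_realm(www_authenticate):
    """Pull the realm out of a WWW-Authenticate challenge header.

    Example header value: 'Basic realm="Employee Portal"'
    """
    match = re.search(r'realm="([^"]*)"', www_authenticate)
    return match.group(1) if match else None

# A hypothetical challenge header returned alongside a 401 response:
print(extract_realm('Basic realm="Employee Portal"'))  # Employee Portal
```

Whatever string this yields is what goes into the second argument of `credentials()`.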

Related

Are security concerns sending a password using a GET request over https valid?

We have a web page which uses the SAPUI5 framework to build a single-page application (SPA). The communication between the browser and the server uses HTTPS. The interaction to log into the page is the following:
The user opens the website by entering https://myserver.com in the browser
A login dialogue with two form fields for username and password is shown.
After entering username and password and pressing the login button,
an AJAX request is sent using GET to the URL: https://myusername:myPassword@myserver.com/foo/bar/metadata
According to my understanding, using GET to send sensitive data is never a good idea. But this answer to "HTTPS: is the URL string secure?" says the following:
HTTPS establishes an underlying SSL connection before any HTTP data is
transferred. This ensures that all URL data (with the exception of
hostname, which is used to establish the connection) is carried solely
within this encrypted connection and is protected from
man-in-the-middle attacks in the same way that any HTTPS data is.
And in another answer in the same thread:
These fields [for example form field, query strings] are stripped off
of the URL when creating the routing information in the https packaging
process by the browser and are included in the encrypted data block.
The page data (form, text, and query string) are passed in the
encrypted block after the encryption methods are determined and the
handshake completes.
But it seems that there still might be security concerns when using GET:
the URL is stored in the logs on the server
leakage through browser history (also mentioned in the same thread)
Is this the case for URLs like these?
https://myusername:myPassword@myserver.com/foo/bar/metadata
// or
https://myserver.com/?user=myUsername&pass=MyPasswort
Additional questions on this topic:
Is passing GET variables over SSL secure?
Is sending a password in JSON over HTTPS considered secure?
How to securely send passwords via GET/POST?
On security.stackexchange there is additional information:
Can URLs be sniffed when using SSL?
SSL with GET and POST
But in my opinion a few aspects are still not answered.
Question
In my opinion the points mentioned are valid objections against using GET. Is this the case: is using GET for sending passwords a bad idea?
Are these the attack options, are there more?
browser history
server logs (assuming that the url is stored in the logs unencrypted or encrypted)
referer information (if this is really the case)
Which attack options exist when sending sensitive data (a password) over HTTPS using GET?
Thanks
Sending any kind of sensitive data over GET is dangerous, even with HTTPS. This data might end up in log files at the server and will be included in the Referer header in links to or includes from other sites. It will also be saved in the browser history, so an attacker might try to guess and verify the original contents of the link with an attack against the history.
Apart from that, you had better ask that kind of question at security.stackexchange.com.
These two approaches are fundamentally different:
https://myusername:myPassword@myserver.com/foo/bar/metadata
https://myserver.com/?user=myUsername&pass=MyPasswort
myusername:myPassword@ is the "User Information" (this form is actually deprecated in the latest URI RFC), whereas ?user=myUsername&pass=MyPasswort is part of the query.
If you look at this example from RFC 3986:
foo://example.com:8042/over/there?name=ferret#nose
\_/   \______________/\_________/ \_________/ \__/
 |           |            |            |        |
scheme   authority      path         query  fragment
 |   _____________________|__
/ \ /                        \
urn:example:animal:ferret:nose
myusername:myPassword@ is part of the authority. In practice, HTTP (Basic) authentication headers will generally be used to convey this information. On the server side, headers are generally not logged (and if they were, whether the client entered them into their location bar or via an input dialog would make no difference). In general (although it's implementation dependent), browsers don't store it in the location bar, or at least they remove the password. It appears that Firefox keeps the userinfo in the browser history, while Chrome doesn't (and IE doesn't really support it without a workaround).
In contrast, ?user=myUsername&pass=MyPasswort is the query, a much more integral part of the URI, and it is sent as the HTTP Request-URI. This will be in the browser's history and the server's logs, and it will also be passed in the Referer header.
To put it simply, myusername:myPassword@ is clearly designed to convey information that is potentially sensitive, and browsers are generally designed to handle this appropriately, whereas browsers can't guess which parts of which queries are sensitive and which are not: expect information leakage there.
The referrer information will also generally not leak to third parties, since the Referer header coming from an HTTPS page is normally only sent with other requests over HTTPS to the same host. (Of course, if you have used https://myserver.com/?user=myUsername&pass=MyPasswort, this will be in the logs of that same host, but it's not much worse since it stays in the same server's logs.)
This is specified in the HTTP specification (Section 15.1.3):
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.
Although it is just a "SHOULD NOT", Internet Explorer, Chrome and Firefox seem to implement it this way. Whether this applies to HTTPS requests from one host to another depends on the browser and its version.
It is now possible to override this behaviour, as described in this question and this draft specification, using a <meta> header, but you wouldn't do that on a sensitive page that uses ?user=myUsername&pass=MyPasswort anyway.
Note that the rest of HTTP specification (Section 15.1.3) is also relevant:
Authors of services which use the HTTP protocol SHOULD NOT use GET based forms for the submission of sensitive data, because this will cause this data to be encoded in the Request-URI. Many existing servers, proxies, and user agents will log the request URI in some place where it might be visible to third parties. Servers can use POST-based form submission instead.
Using ?user=myUsername&pass=MyPasswort is exactly like using a GET based form and, while the Referer issue can be contained, the problems regarding logs and history remain.
Let's assume that a user clicked a button and the client browser generated the following request:
https://www.site.com/?username=alice&password=b0b123!
HTTPS
First things first: HTTPS is not related to this topic, because using POST or GET does not matter from the attacker's perspective. When traffic is plain HTTP, attackers can easily grab sensitive data from the query string or directly from the POST request body. Therefore it does not make any difference.
Server Logs
We know that Apache, Nginx, and other servers log every single HTTP request to a log file, which means the query string ( ?username=alice&password=b0b123! ) is going to be written into the log files. This can be dangerous because your system administrator can access this data too and grab all user credentials. Another case is when your application server is compromised. I assume you are storing passwords hashed; if you use a strong hashing algorithm like SHA-256, your clients' passwords will be more secure against attackers. But attackers can read the log files directly and get the passwords as plain text with very basic shell scripts.
Referer Information
Assume the client opened the link above. When the client browser gets the HTML content and parses it, it will see an image tag. The image can be hosted outside of your domain ( postimage or similar services, or a domain directly under the attacker's control ). The browser makes an HTTP request in order to get the image, but the current URL is https://www.site.com/?username=alice&password=b0b123!, which is going to be the referer information!
That means alice and her password will be passed to another domain and will be accessible directly from its web logs. This is a really important security issue.
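To make the leak concrete, here is a small sketch of the request the browser would send to the image host (evil-images.example is a hypothetical attacker-controlled domain; modern browsers may trim the referrer, but the classic behavior described above sends the full URL):

```python
# The page at this URL embeds an image hosted on another domain:
page_url = "https://www.site.com/?username=alice&password=b0b123!"

# When the browser fetches the image, the third-party host receives
# something like this, credentials included in the Referer line:
image_request = (
    "GET /pixel.png HTTP/1.1\r\n"
    "Host: evil-images.example\r\n"
    f"Referer: {page_url}\r\n"
    "\r\n"
)
print(image_request)
```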
This topic reminds me of Session Fixation vulnerabilities. Please read the following OWASP article for an almost identical security flaw with sessions: https://www.owasp.org/index.php/Session_fixation . It's worth reading.
The community has provided a broad view on the considerations; the above stands with respect to the question. However, GET requests may, in general, need authentication. As observed above, sending the username/password as part of the URL is never correct; however, that is typically not the way authentication information is handled. When a request for a resource is sent to the server, the server generally responds with a 401 and a WWW-Authenticate header, against which the client sends an Authorization header with the authentication information (in the Basic scheme). Now, this second request from the client can be a POST or a GET request; nothing prevents that. So, generally, it is not the request type but the mode of communicating the information that is in question.
Refer to http://en.wikipedia.org/wiki/Basic_access_authentication
Consider this:
https://www.example.com/login
JavaScript within the login page:
$.getJSON("/login?user=joeblow&pass=securepassword123");
What would the referer be now?
If you're concerned about security, an extra layer could be:
var a = Base64.encode(user + ':' + pass);
$.getJSON("/login?a=" + a);
Although not encrypted, at least the data is obscured from plain sight.
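As a quick check that this is obfuscation rather than encryption, the value decodes back with a single call (a Python sketch using the credentials from the example above):

```python
import base64

# Encode the same way the snippet above does, then trivially reverse it:
a = base64.b64encode(b"joeblow:securepassword123").decode()
print(base64.b64decode(a).decode())  # joeblow:securepassword123
```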

ssh based authentication in expressjs

I am currently using expressjs with node.js as my rest server for my website. Currently users can login on to my website and start some actions through ui. They want to automate this stuff and I am looking for ways to achieve that. Some of the ways I can think is:
Create a new request which can take login creds as request parameters and execute the desired actions. My users would have to save their password as plain text for automation, which doesn't seem OK to me.
Log in using SSH, similar to how Bitbucket/GitHub take our public SSH key and let us push code without typing the password every time. How do I implement this kind of setup? My users want to execute actions every time they deploy to the test machine, so they will put my script in the server restart script.
If I have to add a new SSH-based authentication, are there any npm modules which can help me with the implementation?
I am using the mean.io boilerplate code and login is currently based on their default login protocol, wherein I save the hashed password and compare it during login.
I think dealing with public-private key pairs is probably more trouble than it is worth. Perhaps you can go with a third option:
Allow users to generate API keys from your web interface. The keys will be "long" randomly generated strings (GitHub uses a 40 character long hexadecimal string for its keys). They can be used for making API requests in place of a password in a username-password pair. For additional security, allow users to limit a key's usage to a certain IP (range).
Also, make sure your application is being served over HTTPS if it is not already.
Example flow:
User tim generates a random API key on your site (aisjd8auasdjsd80j43j).
tim wants to make a request to your API. In the request, tim sets an authorization header:
GET /api/v1/list-all HTTP/1.1
Host: example.com
X-API-Auth: tim:aisjd8auasdjsd80j43j
...
Your API verifies the X-API-Auth header, checking if tim owns the given API key.
Your API returns the requested information on success.
Also, it may be worth using HTTP basic authentication instead of the custom X-API-Auth header I used in the example above. I believe it would be slightly easier to make HTTP basic authentication requests in command line tools like curl, rather than setting a custom header.
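The server-side check of the X-API-Auth header can be sketched like this (a hedged Python sketch; API_KEYS is a stand-in for a real database of per-user keys, and the key value is the illustrative one from the flow above):

```python
# Hypothetical store of issued API keys (in practice, a database table
# populated when the user generates a key in the web interface).
API_KEYS = {"tim": "aisjd8auasdjsd80j43j"}

def verify_api_auth(header_value):
    """Return the username if the 'user:key' header matches a stored key."""
    user, _, key = header_value.partition(":")
    return user if API_KEYS.get(user) == key else None

print(verify_api_auth("tim:aisjd8auasdjsd80j43j"))  # tim
print(verify_api_auth("tim:wrongkey"))              # None
```

The same lookup works unchanged if the credentials arrive via HTTP basic authentication instead; only the header parsing differs.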

lwIP on Stellaris 32-bit microcontroller - Secure Login

I wish to use my Stellaris LM3S8962 microcontroller as a bridge between internet and a bunch of sensors. I will be using Zigbee nodes for communication from the sensors to the microcontroller. I have been able to use the lwIP TCP/IP stack (for LM3S8962) to access HTML pages stored in the controller's flash.
Now, I want to add a secure login system for the same. What I basically want is that - when I enter the IP of the controller in the browser, it should prompt me for a username and a password. I want to make this system as secure as possible using the lwIP TCP/IP stack.
FYI, the stack does not support PHP or any other scripts. CGI feature (in C) is supported but I don't know how to implement the security part. Please guide.
There are basically two ways you could implement user authentication over HTTP on your platform:
"classic" Basic HTTP authentication (see RFC 2617 for the exact specification),
a login form, creating a session ID, and returning that to the browser, to be stored either in a cookie, or in the URL.
Basic HTTP authentication works by you inserting a check into your web page serving routine, to see if there is an Authorization HTTP header. If there is, you should decode it (see the RFC), and check the specified username/password. If the credentials are ok, then proceed with the web page serving. If the credentials are incorrect, or in case there is no Authorization header, you should return a 401 error code. The browser will then prompt the user for the credentials. This solution is rather simple, and you'll get the browser login dialog, not a nice HTML page. Also, the microcontroller will have to check the credentials for every page request.
Authentication via a login form works by your server maintaining a list of properly authenticated user sessions (probably in memory), and checking each request against this list. Upon logging in, a unique, random ID should be generated for that session, and the session data should be entered into your list. The new session ID should be returned to the browser to be included in upcoming HTTP requests. You may choose the session ID to be put into a browser cookie, or you can embed it into the URL as a URL parameter (?id=xxxxx). The URL parameter is embedded by sending a 302 redirection response to the browser with the new URL. If you want the cookie solution, you can send the response to the login request with a Set-Cookie HTTP response header. In the login form solution, the actual credentials (username/password) are only checked once, at login. After that, only the session ID is checked against the list of active sessions. The session ID has to be extracted from every request (from the URL, or from the Cookie HTTP header). Also, you'll have to implement some kind of session timeout, both as a security measure and to bound the size of the list of active sessions.
Now comes the harder part: in both solutions, the username/password travels in some requests (in every request for Basic HTTP authentication, and in the login POST for the login form solution), and can be extracted from the network traffic. Also, the information necessary to hijack the session is present in every request in both solutions. If this is a problem (the system works on a freely accessible LAN segment, or over network links beyond your control), you'll probably need SSL to hide this sensitive data. How to get a reasonable SSL implementation for your platform, that's another question.
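The session-list bookkeeping described above can be sketched as follows. This is a stdlib-only Python sketch to show the logic (on the LM3S8962 it would of course be C over lwIP, with a fixed-size session array instead of a dict); all names and the timeout value are illustrative:

```python
import secrets
import time

SESSION_TIMEOUT = 600  # seconds of inactivity before a session expires
sessions = {}          # session ID -> (username, last-seen timestamp)

def create_session(username):
    """Called after the login form credentials have been verified."""
    sid = secrets.token_hex(16)  # unique, unguessable session ID
    sessions[sid] = (username, time.monotonic())
    return sid

def check_session(sid):
    """Return the username for a live session, refreshing its timeout."""
    entry = sessions.get(sid)
    if entry is None:
        return None
    username, last_seen = entry
    if time.monotonic() - last_seen > SESSION_TIMEOUT:
        del sessions[sid]  # expired: drop it from the active list
        return None
    sessions[sid] = (username, time.monotonic())  # refresh the timeout
    return username
```

The ID returned by create_session is what goes into the Set-Cookie header (or the ?id= URL parameter), and check_session is what every page-serving routine calls first.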

Protocol, paradigm or software for authenticating web requests across one's own domains

tl;dr
I am considering a web service design model which consists of several services/subdomains, each of which may be implemented on different platforms and hosted on different servers.
The main issue is authentication. If a request for jane's resources comes in, can a split system authenticate that request as hers?
All services access the same DB layer, of course. So I have in mind a single point of truth each service can use to authenticate each request.
For example, jane accesses www.site.com, which renders stuff in her browser. The browser may send a client-side request to different domains of site.com, with requests like:
from internalapi.site.com fetch /user/users_secret_messages.json
from imagestore.site.com fetch /images/list_of_images
The authentication issue is: another user (or an outsider) can craft a request that can fool a subdomain into giving them information they should not access.
So I have in mind a single point of truth: a central resource accessible by each service that can be used to authenticate each request.
In this pseudocode, AuthService.verify_authentication() consults the central resource:
# server-side code:
def get_user_profile():
    auth_token = request.cookie['auth_token']
    user = AuthService.verify_authentication(auth_token)
    if user is None:
        response.write("you are unauthorized / not logged in")
    else:
        response.write(json.dumps(fetch_profile(user)))
Question: What existing protocols, software or even good design practices exist to enable flawless authentication across multiple subdomains?
I've seen how OAuth takes the headache out of managing 3rd-party access and wonder if something similar exists for such authentication. I also got ideas from Kerberos and TACACS.
This idea was the result of teamthink, as a way to simplify architecture (rather than handle heavy loads).
I built a system that did this a little while ago. We were building shop.megacorp.com, and had to share a login with www.megacorp.com, profile.megacorp.com, customerservice.megacorp.com, and so on.
The way it worked was in two parts.
Firstly, all signon was handled through a set of pages on accounts.megacorp.com. The signup link from our pages went there, with a return URL as a parameter (so https://accounts.megacorp.com/login?return=http://shop.megacorp.com/cart). The login process there would redirect back to the return URL after completion. The login page also set an authentication cookie, scoped to the whole of the megacorp.com domain.
Secondly, authentication was handled on the various sites by grabbing the cookie from the request, then forwarding it via an internal web service to accounts.megacorp.com. We could have done this as a straightforward SOAP or REST query, with the cookie as a parameter, but actually, what we did was send an HTTP request with the cookie added to the headers (sort of as if the user had sent the request directly). That URL would then come back as a 200 if the cookie was valid, serving up some information about the user, or a 401 or something if it wasn't. We could then deal with the user accordingly.
Needless to say, we didn't want to make a request to accounts.megacorp.com for every user request, so after a successful authentication, we would mark the user's session as authenticated. We'd store the cookie value and a timestamp, and if subsequent requests had the same cookie value, and were within some timeout of the timestamp, we'd treat them as authenticated without passing them on.
Note that because we pass the cookie as a cookie in the authentication request, the code to validate it on accounts.megacorp.com is exactly the same as for handling a direct request from a user, so it was trivial to implement correctly. So, in response to your desire for "existing protocols [or] software", I'd say that the protocol is HTTP, and the software is whatever you can use to validate cookies (a standard part of any web container's user handling). The authentication service is as simple as a web page which prints the user's name and details, and which is marked as requiring a logged-in user.
As for "good design practices", well, it worked, and it decoupled the login and authentication processes from our site pretty effectively. It did introduce a runtime dependency on a service on accounts.megacorp.com, which turned out to be somewhat unreliable. That's hard to avoid.
And actually, now I think back, the request to accounts.megacorp.com was actually a SOAP request, and we got a SOAP response back with the user details, but the authentication was handled with a cookie, as I described. It would have been simpler and better to make it a REST request, where our system just did a GET on a standard URL and got some XML or JSON describing the user in return.
Having said all that, if you share a database between the applications, you could just have a table, in which you record (username, cookie, timestamp) tuples, and do lookups directly in that, rather than making a request to a service.
The only other approach I can think of is to use public-key cryptography. The application handling login could use a private key to make a signature, and use that as the cookie. The other applications could have the corresponding public key, and use that to verify it. The keys could be per-user, or there could just be one. That would not involve any communication between applications, or a shared database, following the initial key distribution.
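A stdlib-only Python sketch of that signed-cookie idea, using a single shared HMAC key instead of a public/private key pair (the public-key variant works the same way, with signing and verification split across the two keys; all names and the key value here are illustrative):

```python
import hashlib
import hmac

# Assumption: this key is distributed to all applications at setup time.
SECRET_KEY = b"key-shared-by-all-megacorp-apps"

def make_cookie(username):
    """Sign the username; the login application issues this as the cookie."""
    sig = hmac.new(SECRET_KEY, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}|{sig}"

def verify_cookie(cookie):
    """Any application holding the key can verify the cookie offline."""
    username, _, sig = cookie.partition("|")
    expected = hmac.new(SECRET_KEY, username.encode(), hashlib.sha256).hexdigest()
    return username if hmac.compare_digest(sig, expected) else None
```

Verification needs no round trip to the accounts service and no shared database; the trade-off is that revocation then requires extra machinery, such as key rotation or a timestamp inside the signed payload.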

Username and password in https url

Consider the URL:
https://foo:password@example.com
Does the username/password portion in the above example qualify as a "URL parameter", as defined in this question?
When you put the username and password in front of the host, this data is not sent that way to the server. It is instead transformed into a request header, depending on the authentication scheme used. Most of the time this is going to be Basic Auth, which I describe below. A similar (but significantly less often used) authentication scheme is Digest Auth, which nowadays provides comparable security features.
With Basic Auth, the HTTP request from the question will look something like this:
GET / HTTP/1.1
Host: example.com
Authorization: Basic Zm9vOnBhc3N3b3Jk
The hash-like string you see there is created by the browser like this: base64_encode(username + ":" + password).
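That encoding step is easy to reproduce (a Python sketch using the credentials from the example URL above):

```python
import base64

# Reproduce the browser's Basic Auth encoding for foo:password
credentials = "foo:password"
token = base64.b64encode(credentials.encode()).decode()
print("Authorization: Basic " + token)  # Authorization: Basic Zm9vOnBhc3N3b3Jk
```

Note that Base64 is an encoding, not encryption; without HTTPS anyone on the path can decode it.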
To outsiders of the HTTPS transfer, this information is hidden (as everything else on the HTTP level). You should take care of logging on the client and all intermediate servers though. The username will normally be shown in server logs, but the password won't. This is not guaranteed though. When you call that URL on the client with e.g. curl, the username and password will be clearly visible on the process list and might turn up in the bash history file.
When you send passwords in a GET request, e.g. http://example.com/login.php?username=me&password=secure, the username and password will always turn up in the server logs of your webserver, application server, caches, ... unless you specifically configure your servers not to log them. This only applies to servers able to read the unencrypted HTTP data, like your application server or any middleboxes such as load balancers, CDNs, proxies, etc.
Basic auth is standardized and implemented by browsers by showing the little username/password popup you might have seen already. When you put the username/password into an HTML form sent via GET or POST, you have to implement all the login/logout logic yourself (which might be an advantage, as it allows you more control over the login/logout flow, for the added "cost" of having to implement this securely again). But you should never transfer usernames and passwords by GET parameters. If you have to, use POST instead. This prevents the logging of this data by default.
When implementing an authentication mechanism with a user/password entry form and a subsequent cookie-based session as it is commonly used today, you have to make sure that the password is either transported with POST requests or one of the standardized authentication schemes above only.
Concluding, I would say that transferring data that way over HTTPS is likely safe, as long as you take care that the password does not turn up in unexpected places. But that advice applies to every transfer of any password in any way.
