HTTP Modules use Cookie/Credentials - security

I am using Metasploit auxiliary/scanner/http modules like dir_listing, http_login, files_dir, and so on. For some modules a cookie is not required; everything can be tested on the root page.
But for other modules, like the blind_sql_query scanner, you cannot test everything within the root page scope if the website requires a login, or if a certain page requires a cookie or an HTTP referer.
The crawler module has USER and PASSWORD options, but with the login page as the starting point of the crawl and the credentials set correctly, it doesn't seem to work well; it never asks for the field names if it's a POST login, etc.
Does anyone know how to do this? How can I audit with Metasploit as if I were a logged-in user, the same way that in other applications you can either set a cookie or log in through a form?

Because every login mechanism can be implemented a bit differently, you might need a bit more manual interaction. I think that this MSF plugin might not be the right tool for that.
I would recommend using an intercepting proxy with an integrated crawler for this task. That way you can log in to the app, obtain the required session token, and crawl the site. One of the best is Burp Suite (http://portswigger.net/); the free version can handle this task. OWASP Zed Attack Proxy is another option.
If you still need to use MSF, you can chain the module through one of these more capable proxies using the PROXIES variable.
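For example, a rough console sketch (hostname and port are placeholders; the proxy, e.g. Burp or ZAP, still has to be configured to attach the session cookie to the requests it forwards):

use auxiliary/scanner/http/blind_sql_query
set RHOSTS www.example.com
set PROXIES HTTP:127.0.0.1:8080
run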

Related

What is the best way to authenticate users with auth0 (oauth2) in a chrome extension that runs content scripts across multiple origins?

I've seen a few posts on this but I want to highlight some specific questions I have yet to see properly answered.
My current chrome extension contains the following:
background service worker
html pages to handle login / logout (although doing this in a popup would be great)
content scripts that run a SPA on certain domains
What I would like is for a user to be able to authenticate with auth0, and then any content script running on any domain can then use an access token to hit my API.
The current challenges I've been seeing that I'm not sure how to tackle:
Ideally each running content script has its own token. This involves using the auth0 session to silently get an access token. However, since auth0 checks the origin when hitting /authorize, it would mean registering every domain as an "allowed origin", which is not possible for me due to volume. Currently, if I try just setting the redirectURI to my Chrome extension URL, it ends up timing out. I have seen some users report this approach working, so I'm not sure if I'm doing something wrong or not, but this approach feels unsafe in retrospect.
If I instead funnel my requests through the background script, so all running content scripts effectively use a single access token, how do I refresh that access token? The documentation recommends making a call to /oauth/token, which involves the client secret. My guess is this is not something I should be putting into my JavaScript, since all of that is visible to anyone who inspects the bundle. Is that a valid concern? If so, what other approach do I have?
If I do use a manually stored refresh_token, what is the best way to keep that available? The chrome storage API says not to use it for sensitive information. Is it preferred then to keep it in local storage?
If the best option is to have the background script make all the requests on behalf of the content scripts, what is the safest way for the content scripts to make a request through the background script (see the sketch below)? I would rely on chrome.runtime.sendMessage, but it seems like the API supports arbitrarily sending messages to any extension, which means other code that isn't part of the extension could also funnel requests through the background script.
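For reference, the kind of relay I have in mind looks roughly like this (TypeScript sketch, Manifest V3; the message shape, API URL, and token handling are made up by me):

// background.ts (MV3 service worker) - rough sketch, names are illustrative
let cachedAccessToken = ""; // would be populated after the auth0 login flow

chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  // chrome.runtime.onMessage only fires for senders that belong to this
  // extension (content scripts, popup, options page). Web pages and other
  // extensions would have to use onMessageExternal, which is not registered.
  if (sender.id !== chrome.runtime.id) {
    return; // belt-and-braces: ignore anything not from this extension
  }
  if (message.type === "apiRequest") {
    fetch("https://api.example.com" + message.path, {
      headers: { Authorization: "Bearer " + cachedAccessToken },
    })
      .then((res) => res.json())
      .then((data) => sendResponse({ ok: true, data }))
      .catch((err) => sendResponse({ ok: false, error: String(err) }));
    return true; // keep the channel open for the async sendResponse
  }
});

// content-script.ts - asking the background script to call the API
chrome.runtime.sendMessage({ type: "apiRequest", path: "/me" }, (reply) => {
  console.log(reply);
});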
More generally, I would love to hear some guidance on a safe architecture to authenticate users for a multi-domain extension.
I am also not averse to using another service, but from what I've seen so far, auth0 offers relatively good UX/DX.

Access without Logging in

I'm using GWT and GAE to make a web app.
I looked at a bunch of tutorials about implementing a login system, but most of them make logging in mandatory for accessing the web app. How would I go about making it so that anyone can access the app, but if they want to use account-specific functionality they have the option of signing up for an account?
There are two parts to it.
First, in your client code you check if a user is logged in. If so, you allow access to the "closed" parts of the app. If not, you show a link/button to log in and hide the tabs/views that are only accessible to authorized users.
Second, in your server code you specify which requests do not require authentication and which do require it. This is necessary if a user somehow figures out how to send a request without using your client code.
For example, in my code some requests have checkSession() called at the very beginning. If no authentication object is found for this user in session, this method throws LoginException to the client. If the authentication object is present, the request continues to execute normally and returns requested data to the client.
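A rough sketch of that pattern in a GWT RPC servlet (LoginException here stands for your own serializable exception declared on the RPC interface; the "auth" attribute and method names are illustrative):

import javax.servlet.http.HttpSession;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;

public class ScheduleServiceImpl extends RemoteServiceServlet {

    // Called at the start of every request that needs an authenticated user.
    private void checkSession() throws LoginException {
        HttpSession session = getThreadLocalRequest().getSession(false);
        if (session == null || session.getAttribute("auth") == null) {
            throw new LoginException("Not logged in"); // propagated to the client
        }
    }

    public String getPublicData() {
        return "visible to everyone";                  // no session check
    }

    public String getAccountData() throws LoginException {
        checkSession();                                // account-specific: login required
        return "visible to logged-in users only";
    }
}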
Further to Andrei's answer, if you want a framework to manage the sessions for you, you can use GWT-Platform, which has an excellent Gatekeeper feature.
I use it for mine and I have a LoggedInGatekeeper class. Simply add @UseGatekeeper(LoggedInGatekeeper.class) to each presenter's proxy and it checks whether the user is logged in. If you want anyone to be able to access a page, simply annotate its proxy with @NoGatekeeper. Easy!
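A minimal sketch of what such a gatekeeper and its use on a proxy might look like (CurrentUser is an illustrative client-side session holder, not part of GWTP):

import javax.inject.Inject;
import com.gwtplatform.mvp.client.proxy.Gatekeeper;

public class LoggedInGatekeeper implements Gatekeeper {

    private final CurrentUser currentUser; // illustrative: tracks login state on the client

    @Inject
    LoggedInGatekeeper(CurrentUser currentUser) {
        this.currentUser = currentUser;
    }

    @Override
    public boolean canReveal() {
        // Presenters guarded by this gatekeeper are only revealed when logged in.
        return currentUser.isLoggedIn();
    }
}

// On each presenter's proxy (sketch):
// @ProxyStandard
// @NameToken("schedule")
// @UseGatekeeper(LoggedInGatekeeper.class)
// interface MyProxy extends ProxyPlace<SchedulePresenter> { }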
It takes a bit of setting up but it's a great MVP framework. There are maven archetypes and samples etc.
Hope this helps.

When trying to create an SSL connection with LWP::UserAgent, what do I use for realm?

I've started a project to scrape my work's employee website for the user's schedule (in this case, mine) and munge the data onto a Google calendar. I've decided to go with Perl and LWP.
The problem is this: when trying to set up SSL negotiation, I don't know what to put for the 'realm'.
For example: (http://www.sciencemedianetwork.org/wiki/Form_submission_with_LWP,_https,_and_authentication)
# ...
my $ua = new LWP::UserAgent;
$ua->protocols_allowed( [ 'http','https'] );
$ua->credentials('some.server:443','realm','username','password');
# ...
I've looked at everything my browser can tell me and at a Wireshark packet capture trying to find anything, but to no avail. I assume that the second argument to credentials() isn't optional.
Where do I find the 'realm' I'm supposed to use?
The credentials are for the HTTP authentication protocol (RFC 2617) (Wikipedia).
The server can challenge the client to authenticate itself. This challenge contains a string called the “realm”, which tells the client what the authentication is for. This allows the same server under the same domain to request authentication for different things, e.g. in a content management system there might be a “user password” and an “administrator password”, which would be two different realms.
In a browser, this realm would be displayed alongside the username and password box which allows the user to type in the correct password.
To discover the realm, navigate to a page which requires authentication and look for the WWW-Authenticate header.
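For example, a protected URL might answer with something like this (the realm string here is made up):

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Employee Portal"

The text inside realm="..." ("Employee Portal" here) is what you would pass as the second argument to credentials().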
Note that HTTP authentication has become quite uncommon, with session cookies being used more often. To deal with such an authentication scheme, make sure that your LWP::UserAgent has an attached cookie storage, and then navigate through the login form before visiting your actual target page. Using WWW::Mechanize tends to make this a lot easier.
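A rough sketch of the cookie/form approach with WWW::Mechanize (the URLs and form field names are placeholders for whatever the real login form uses):

use strict;
use warnings;
use WWW::Mechanize;

# WWW::Mechanize is an LWP::UserAgent subclass with an in-memory cookie jar
# enabled by default, so the session cookie set at login is reused later.
my $mech = WWW::Mechanize->new();

$mech->get('https://some.server/login');
$mech->submit_form(
    with_fields => {
        username => 'your-username',   # field names depend on the actual form
        password => 'your-password',
    },
);

# Now authenticated via the session cookie; fetch the page you actually want.
$mech->get('https://some.server/schedule');
print $mech->content;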

Using wget in conjunction with an OpenID Login

I have a (legit) edge case where I would like to download a web page using any command line tool, but the web page needs an OpenID login to show the content I need.
wget supports basic auth and FTP, but I can't think of a sane way to use it in conjunction with an OpenID login. The only way I can think of is to:
Perform an OpenID login using wget
Somehow store the resulting session cookie
Try to fetch the page using another wget call, and --load-cookies the cookies from the last call
This seems complex to build, though, as IIRC the OpenID login process is not entirely as straightforward as your plain old website login. Does anyone either
know a less complex way (performing the OpenID login manually somewhere else would be completely acceptable)
know a ready-made implementation of what I describe above? I'm keen on avoiding having to build this from scratch if at all possible.
Other inspirations are welcome as well.
I can work either on Linux or on Windows. Linux would be preferred from an infrastructure point of view but either platform is fine.
performing the OpenID login manually somewhere else
Well, the best I can think of is to use any browser to log in to whatever service you want. The service will then preserve your "state" somehow in a cookie in your browser.
Get that cookie, e.g. store it in cookie.txt, and pass it in the header:
wget --header="Cookie: $(cat cookie.txt)" http://...
As long as the session is valid, you can use the wget script. This should work for 99% of all cases, though probably not for online banking (if it does... switch banks immediately :-P)
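If you would rather let wget read the cookies from a file (the --load-cookies route from the question), export the cookies from the logged-in browser session in Netscape cookies.txt format and point wget at it; the URL here is a placeholder:

wget --load-cookies cookies.txt https://example.com/members-only/page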

URL Based Authentication Link

What are some good suggestions or resources to look at to help me secure a single click URL based authentication?
Essentially, the situation is a third-party system which accepts an HTTPS request, through the browser, in which you supply authentication information (username, password, authkey, etc.). The service then, upon authenticating the provided credentials, will allow or deny login access. The point is that if someone clicks on the link, they're automatically granted access to this third-party system.
Currently there isn't a whole lot of security surrounding the whole process (which isn't a big deal because the product isn't in production yet), and the third party is willing to make some modifications to secure this up a bit.
I've already determined I need to hash the information, and probably submit it via a POST to keep it out of the browser history. But I'd like a little input on how you all would handle something like this.
[Edit: requests are and will continue to be sent via HTTPS. I've also changed the HTTP mentioned previously to HTTPS.]
Don't think about "secure this up a bit". It's either secure from the ground up, or it's got holes that will cost you dearly.
Look at HTTP Digest Authentication. It's simple, reliable and works well under most circumstances.
Look at the OWASP.org top-10 vulnerabilities. Be sure you understand and address each one.
You should probably use HTTPS to avoid the credentials being eavesdropped upon while in transit to the third party web server.
Protect against a stale link being reused to gain access to the application: make the link depend on the current time.
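One common way to do that is to embed an expiry timestamp in the link and sign it, so a stale or tampered link is rejected. A rough sketch with HMAC-SHA256 (all names, parameters, and the URL are illustrative; the third party would recompute the signature and reject the request if it differs or if "expires" is in the past):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SignedLink {
    // Sign "user|expires" with a secret shared between you and the third party.
    public static String sign(String secret, String user, long expiresEpochSeconds) throws Exception {
        String payload = user + "|" + expiresEpochSeconds;
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }

    public static void main(String[] args) throws Exception {
        long expires = System.currentTimeMillis() / 1000 + 300;   // link valid for 5 minutes
        String sig = sign("shared-secret", "jdoe", expires);
        System.out.println("https://thirdparty.example/login?user=jdoe&expires="
                + expires + "&sig=" + sig);
    }
}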
