Using wget in conjunction with an OpenID Login - linux

I have a (legit) edge case where I would like to download a web page using any command line tool, but the web page needs an OpenID login to show the content I need.
wget supports basic auth and FTP, but I can't think of a sane way to use it in conjunction with an OpenID login. The only way I can think of is to:
Perform an OpenID login using wget
Somehow store the resulting session cookie
Try to fetch the page using another wget call, and --load-cookies the cookies from the last call
This seems complex to build, though, as IIRC the OpenID login process is not entirely as straightforward as a plain old website login. Does anyone either
know a less complex way (performing the OpenID login manually somewhere else would be completely acceptable)
know a ready-made implementation of what I describe above? I'm keen on avoiding having to build this from scratch if at all possible.
Other inspirations are welcome as well.
I can work on either Linux or Windows. Linux would be preferred from an infrastructure point of view, but either platform is fine.
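To make the three-step idea above concrete, a minimal sketch using wget's cookie options might look like this (the login URL and form field are hypothetical placeholders, and a real OpenID flow adds several redirects between the site and the provider):
# steps 1 and 2: perform the login and store the resulting session cookie
wget --save-cookies cookies.txt --keep-session-cookies \
     --post-data 'openid_identifier=https://example.com/myid' \
     https://example.com/login
# step 3: fetch the protected page with the stored cookie
wget --load-cookies cookies.txt https://example.com/protected-page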

performing the OpenID login manually somewhere else
Well, the best I can think of is to use any browser to log in to whatever service you want. The service will then preserve your "state" in a cookie in your browser.
Get that cookie, store it in e.g. cookie.txt, and pass it in the header:
wget --header="Cookie: $(cat cookie.txt)" http://...
As long as the session is valid, you can use the wget script. This should work for 99% of all cases, though probably not for online banking (if it does... switch banks immediately :-P)
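Alternatively, if your browser (or a browser extension) can export its cookies as a Netscape-format cookies.txt file, wget can load the whole file rather than a pasted header; the filename is just a placeholder:
wget --load-cookies cookies.txt http://...
This keeps working as long as the exported session cookie remains valid.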

Related

What is the best way to authenticate users with auth0 (oauth2) in a chrome extension that runs content scripts across multiple origins?

I've seen a few posts on this but I want to highlight some specific questions I have yet to see properly answered.
My current chrome extension contains the following:
background service worker
html pages to handle login / logout (although doing this in a popup would be great)
content scripts that run a SPA on certain domains
What I would like is for a user to be able to authenticate with auth0, and then any content script running on any domain can then use an access token to hit my API.
The current challenges I've been seeing that I'm not sure how to tackle:
Ideally each running content script has its own token. This involves using the auth0 session to silently get an access token. However, since auth0 checks the origin when hitting /authorize it would mean registering every domain as an "allowed origin" which is not possible for me due to volume. Currently if I try just setting the redirectURI to my chrome extension URL, it ends up timing out. I have seen some users report this approach working, so I'm not sure if I'm doing something wrong or not, but this approach feels unsafe in retrospect.
If I instead funnel my requests through the background script, so all running content scripts effectively use a single access token, how do I refresh that access token? The documentation recommends making a call to /oauth/token, which involves the client secret (the call shape is sketched after this list). My guess is this is not something I should be putting into my JavaScript, as all of that is visible to anyone who inspects the bundle. Is that a valid concern? If so, what other approach do I have?
If I do use a manually stored refresh_token, what is the best way to keep that available? The chrome storage API says not to use it for sensitive information. Is it preferred then to keep it in local storage?
If the best option is to have the background script make all the requests on behalf of the content scripts, what is the safest way for the content scripts to make a request through the background script? I would rely on chrome.runtime.sendMessage but it seems like the API supports arbitrarily sending messages to any extension, which means other code that isn't part of the extension could also funnel requests through the background script.
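For reference, the refresh call the documentation describes is, as far as I can tell, an HTTP POST roughly like this (the tenant domain and all values are placeholders), which is why the client secret would end up shipping with the extension:
curl --request POST \
  --url 'https://YOUR_TENANT.auth0.com/oauth/token' \
  --header 'content-type: application/x-www-form-urlencoded' \
  --data 'grant_type=refresh_token' \
  --data 'client_id=YOUR_CLIENT_ID' \
  --data 'client_secret=YOUR_CLIENT_SECRET' \
  --data 'refresh_token=YOUR_REFRESH_TOKEN'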
More generally, I would love to hear some guidance on a safe architecture to authenticate users for a multi-domain extension.
I am also not averse to using another service, but from what I've seen so far, auth0 offers relatively good UX/DX.

HTTP Modules use Cookie/Credentials

I am using Metasploit auxiliary/scanner/http modules like dir_listing, http_login, files_dir... For some modules a cookie is not required and everything can be tested on the root page.
But for some modules, like the blind_sql_query scanner, you cannot test everything within the root page scope if the website requires a login, or if a certain page requires a cookie or an HTTP referer.
The crawler module has USER and PASSWORD options, but with the login page as the starting point of crawling and the credentials set correctly, it doesn't seem to work well: it doesn't ask for the name of the field if it's a POST login, etc.
Does someone know how to do this? How can you audit with Metasploit as if you were a logged-in user, the same way that in other applications you can set either a cookie or log in through a form?
Because every login mechanism can be implemented a bit differently, you might need a bit more manual interaction. I think that this MSF plugin might not be the right tool for that.
I would recommend using an interception proxy with an integrated crawler for this task. That way, you can log in to the app, get the required token of authority, and crawl the site. One of the best is Burp Suite (http://portswigger.net/); you can do this task with the free version. OWASP Zed Attack Proxy is another option.
If you still need to use MSF, you can chain the plugin through one of these more capable proxies using the PROXIES MSF variable.
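For example, pointing one of the scanner modules at Burp's default listener might look like this in msfconsole (the proxy host and port are assumptions about your local setup):
use auxiliary/scanner/http/dir_listing
set RHOSTS www.example.com
set PROXIES HTTP:127.0.0.1:8080
run
The idea is to log in through the proxy first, so you can use its session-handling features to attach the resulting cookie to the module's requests.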

How do I block curl downloading of a specific page?

I read that spammers may be downloading a specific registration page on my site using curl. Is there any way to block that specific page from being CURLed, either through htaccess or other means?
I don't think it is possible to block curl, as curl has the ability to send user agents, cookies, etc. As far as I understand, it can completely emulate a normal user.
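For illustration, a single curl invocation can already pose as a browser by supplying a user agent, a referer, and a cookie (all values here are made up):
curl -A 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)' \
     -e 'https://example.com/' \
     -b 'session=abc123' \
     https://example.com/register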
If you are worried about protecting a form, you can generate a random token which is submitted automatically when the form is submitted. That way, anyone who tries to make a script to automate registration will have to worry about scraping it first.
There is one weakness in curl that you can exploit: it cannot run JavaScript like a browser. You can take advantage of this fact. On first landing on the registration page, have your server-side code check for a cookie. If it isn't there, send some JavaScript to the browser that sets the cookie and does a redirect/reload. After the reload, the server side checks for the cookie again: a browser will have it, while with curl the cookie generation and reload/redirect won't have happened in the first place.
I hope I made some sense. Bottom line: use JavaScript to differentiate between curl and a browser.
As Oren says, spammers can forge user agents, so you can't just block the curl user-agent string. The typical solution here is some kind of CAPTCHA. These are often jumbled images (though non-visual forms exist) that sites (including Stack Overflow) have you transcribe to prove you're human.

URL Based Authentication Link

What are some good suggestions or resources to look at to help me secure a single click URL based authentication?
Essentially, the situation is a third-party system which accepts an HTTPS request, through the browser, where you supply authentication information (username, password, auth key, etc.). The service then, upon authenticating the provided credentials, will allow or deny login access. The point being, that if someone clicks on the link, they're automatically granted access to this third-party system.
Currently, there isn't a whole lot of security surrounding the whole process, (which isn't a big deal because the product isn't in production yet) and the third party is willing to make some modifications to secure this up a bit.
I've already determined I need to hash the information, and probably even submit it via a POST to prevent the information from showing up in the browser history. But I'd like a little input on how you all would handle something like this.
[Edit: Requests are and will continue being sent via HTTPS. I also modified the HTTP previously used to be HTTPS]
Don't think about "secure this up a bit". It's either secure from the ground up, or it's got holes that will cost you dearly.
Look at HTTP Digest Authentication. It's simple, reliable and works well under most circumstances.
Look at the OWASP.org top-10 vulnerabilities. Be sure you understand and address each one.
You should probably use HTTPS to avoid the credentials being eavesdropped upon while in transit to the third party web server.
Protect yourself from a stale link being used to gain access to the application: make the link depend on the current time value.
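As a sketch of that idea, the link could carry an expiry timestamp plus an HMAC over it, computed with a secret shared with the third party (the URL, parameter names, and secret are all hypothetical):
# build a link that expires in 15 minutes
SECRET='shared-secret'                 # known only to the two systems
EXPIRES=$(( $(date +%s) + 900 ))
SIG=$(echo -n "user42:${EXPIRES}" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')
echo "https://thirdparty.example/login?user=user42&expires=${EXPIRES}&sig=${SIG}"
The third party recomputes the HMAC over the same fields and denies access if the signature doesn't match or the timestamp has passed.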

How do you set up an OpenID provider (server) in Ubuntu?

I want to log onto Stack Overflow using OpenID, but I thought I'd set up my own OpenID provider, just because it's harder :) How do you do this in Ubuntu?
Edit: Replacing 'server' with the correct term OpenID provider (Identity provider would also be correct according to wikipedia).
You might also look into setting up your own site as a delegate for another OpenID provider. That way, you can use your own custom URL, but not worry about security and maintenance as mentioned already. However, it's not very difficult, so it may not meet your criteria :)
As an example, you would add this snippet of HTML to the page at your desired OpenID URL if you are using ClaimID as the OpenID provider:
<link rel="openid.server" href="http://openid.claimid.com/server" />
<link rel="openid.delegate" href="http://openid.claimid.com/USERNAME" />
So when OpenID clients access your URL, they "redirect" themselves to the actual provider.
I've actually done this (set up my own server using phpMyID). It's very easy and works quite well. One thing that annoys me to no end is the use of HTML redirects instead of HTTP. I changed that manually, based on some information gotten in the phpMyID forum.
However, I have switched to myOpenId in the meantime. Rolling your own provider is fun and games, but it just isn't secure! There are two issues:
Firstly, you have to act on faith. phpMyID is great but it's developed in someone's spare time. There could be many undetected security holes in it, and there have been some in the past. While this of course applies to all security-related software, I believe the problem is potentially more severe with software developed in spare time, especially since the code is far from perfect in my humble opinion.
Secondly, OpenID is highly susceptible to screen scraping and mock interfaces. It's just too easy for an attacker to emulate the phpMyID interface to obtain your credentials for another site. myOpenId offers two very important solutions to the problem.
The first is its use of a cookie-stored picture that is embedded in the login page. If anyone screen-scrapes the myOpenId login page, this picture will be missing and the fake can easily be identified.
Secondly, myOpenId supports sign-in using strongly signed certificates that can be installed in the web browser.
I still have phpMyID set up as an alternative provider using Yadis but I wouldn't use it as a login on sites that I don't trust.
In any case, read Sam Ruby's tutorial!
I personally used phpMyID just for Stack Overflow. It's a simple two-file PHP script to put somewhere on a subdomain. Of course, it's not as easy as installing a .deb, but since OpenID relies completely on HTTP, I'm not sure it's advisable to install a self-contained server...
Take a look over at the Run your own identity server page. Community-ID looks to be the most promising so far.
I totally understand where you're coming from with this question. I already had an OpenID at www.myopenid.com, but it feels a bit weird relying on a third party for such an important login (a.k.a. my permanent "home" on the internet).
Luckily, it is easy to move to using your own server as an OpenID provider - in fact, it can be done with just two files with phpMyID.
Download "phpMyID-0.9.zip" from http://siege.org/projects/phpMyID/
Move it to your server and unzip it to view the README file which explains everything.
The zip has two files: MyID.config.php, MyID.php. I created a directory called <mydocumentroot>/OpenID and renamed MyID.config.php to index.php. This means my OpenID URL will be very cool: http://<mywebsite>/OpenID
Decide on a username and password and then create a hash of them using: echo -n '<myUserName>:phpMyID:<myPassword>' | openssl md5
Open index.php in a text editor and add the username and password hash in the placeholder. Save it.
Test by browsing to http://<mywebsite>/OpenID/
Test ID is working using: http://www.openidenabled.com/resources/openid-test/checkup/
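Condensed into shell commands, the steps above look roughly like this (paths and credentials are placeholders, and the zip is assumed to unpack into a phpMyID-0.9/ directory):
unzip phpMyID-0.9.zip
mkdir <mydocumentroot>/OpenID
cp phpMyID-0.9/MyID.config.php <mydocumentroot>/OpenID/index.php
cp phpMyID-0.9/MyID.php <mydocumentroot>/OpenID/
echo -n '<myUserName>:phpMyID:<myPassword>' | openssl md5   # paste the resulting hash into index.php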
Reference info: http://www.wynia.org/wordpress/2007/01/15/setting-up-an-openid-with-php/ , http://siege.org/projects/phpMyID/ , https://blog.stackoverflow.com/2009/01/using-your-own-url-as-your-openid/
The above answers all seem to contain dead links.
This seems to be a possible solution that is still working:
https://simpleid.org/
