Calling a Kohana action from cron and preventing CSRF

I need to call a Kohana action through cron. I can use this code to limit access to the server's own IP:
$allowedIps = array('127.0.0.1', '::1');
if (in_array($_SERVER['REMOTE_ADDR'], $allowedIps)) {
    // run the cron task
}
Do I need CSRF prevention, like tokens? The server is a Parallels VPS. I wouldn't think there would be any users on the network browsing other pages who could be exposed to CSRF.
The only way I can think of to prevent this, if needed, is to create a non-accessible PHP script outside of Kohana that cron calls, have it generate a token and save it to a flat file, and then pass that token to Kohana via an external include as described here:
http://forum.kohanaframework.org/discussion/1255/load-kohana-from-external-scriptapp/p1

If the script is going to be called via the local machine (which it is, according to your code sample), then you could simplify that by making sure the code is called via the CLI:
if (Kohana::$is_cli)
{
    // Run function
}
As for CSRF tokens, you don't need them for this. CSRF works by tricking a user into clicking a link that initiates an action on their behalf. Since the cron controller/action can't be accessed via a browser (or at least shouldn't be), you don't need to worry about it.
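As a rough illustration, a cron-only action in a Kohana 3.x controller might look something like the sketch below (the controller and action names are invented for this example, not taken from the question):

class Controller_Cron extends Controller
{
    public function action_nightly()
    {
        // Refuse anything that is not a command-line invocation,
        // e.g. a crontab entry running: php index.php --uri=cron/nightly
        if ( ! Kohana::$is_cli)
        {
            throw new HTTP_Exception_404('Not found');
        }

        // Run the actual cron task here.
    }
}

With this in place, the crontab entry calls the framework's index.php with the --uri parameter rather than going through the web server at all.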

I'm pretty sure you want to use this module for any CLI-related tasks. It will probably be included as an official Kohana module as of version 3.3, since it's very popular and well supported.

Related

What is the best way to authenticate users with auth0 (oauth2) in a chrome extension that runs content scripts across multiple origins?

I've seen a few posts on this but I want to highlight some specific questions I have yet to see properly answered.
My current chrome extension contains the following:
background service worker
html pages to handle login / logout (although doing this in a popup would be great)
content scripts that run a SPA on certain domains
What I would like is for a user to be able to authenticate with auth0, and then any content script running on any domain can then use an access token to hit my API.
The current challenges I've been seeing that I'm not sure how to tackle:
Ideally each running content script has its own token. This involves using the auth0 session to silently get an access token. However, since auth0 checks the origin when hitting /authorize it would mean registering every domain as an "allowed origin" which is not possible for me due to volume. Currently if I try just setting the redirectURI to my chrome extension URL, it ends up timing out. I have seen some users report this approach working, so I'm not sure if I'm doing something wrong or not, but this approach feels unsafe in retrospect.
If I instead funnel my requests through the background script, so all running content scripts effectively use a single access token, how do I refresh that access token? The documentation recommends making a call to /oauth/token, which involves the client secret. My guess is this is not something I should be putting into my JavaScript, as all of that is visible to anyone who inspects the bundle. Is that a valid concern? If so, what other approach do I have?
If I do use a manually stored refresh_token, what is the best way to keep that available? The chrome storage API says not to use it for sensitive information. Is it preferred then to keep it in local storage?
If the best option is to have the background script make all the requests on behalf of the content scripts, what is the safest way for the content scripts to make a request through the background script? I would rely on chrome.runtime.sendMessage but it seems like the API supports arbitrarily sending messages to any extension, which means other code that isn't part of the extension could also funnel requests through the background script.
More generally, I would love to hear some guidance on a safe architecture to authenticate users for a multi-domain extension.
I am also not averse to using another service, but from what I've seen so far, auth0 offers relatively good UX/DX.

Using cookies/credentials with Metasploit HTTP modules

I am using Metasploit auxiliary/scanner/http modules like dir_listing, http_login, files_dir, and so on. For some modules a cookie is not required; everything can be tested on the root page.
But for other scanner modules, like blind_sql_query, you cannot test everything within the root page scope if the website requires a login, or if a certain page requires a cookie or an HTTP referer.
The crawler module has USER and PASSWORD options, but with the login page as the starting point of the crawl and the credentials set correctly, it doesn't seem to work well; it doesn't ask for the name of the field if it's a POST login, etc.
Does anyone know how to do this? How can I audit with Metasploit as if I were a logged-in user, the same way other applications let you set either a cookie or log in through a form?
Because every login mechanism can be implemented a bit differently, you might need a bit more manual interaction. I think that this MSF plugin might not be the right tool for that.
I would recommend using an interception proxy with an integrated crawler for this task. That way, you can log in to the app, get the required session/authorization token, and crawl the site. One of the best is Burp Suite (http://portswigger.net/); you can do this with the Free version. Another option is OWASP Zed Attack Proxy.
If you still need to use MSF, you can chain the module through one of these more capable proxies using the PROXIES MSF variable.
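For example, a session chaining the blind_sql_query scanner (mentioned in the question) through a local Burp listener might look roughly like this; the proxy address below is just Burp's usual default, adjust as needed:

msf > use auxiliary/scanner/http/blind_sql_query
msf auxiliary(blind_sql_query) > set RHOSTS www.example.com
msf auxiliary(blind_sql_query) > set PROXIES HTTP:127.0.0.1:8080
msf auxiliary(blind_sql_query) > run

The module's requests then pass through the proxy, where you can inspect or adjust them, for example to add the session cookie you obtained by logging in through the browser.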

Circumventing browser security for a demo

I have a demo to build in which a secure session is first created with a domain (let's call it paranoids.com), and then a bunch of locally loaded HTML+JavaScript (my demo) wants to use that secure session. We are using google-chrome, started with --disable-web-security and --allow-file-access-from-files, on a Linux/openSUSE platform.
Why do I need to do that? I need to scrape some pages from that domain and re-render them with an alternative renderer. We have absolutely no say with the other domain's owner.
What's the best possible approach for this, without asking my poor breadgiver to go through technical hoops? Can my script access the JSESSIONID of the domain paranoids.com when run with some command line arguments? Or, is it just not possible, and must the user copy/paste the cookie manually?
Thanks for any ideas that help realize that goal.

sfGuard token login for wkhtmltopdf

wkhtmltopdf lets you take a screenshot of a browser view using a WebKit browser.
I have a Symfony 1.4 application that requires login, and I would like to use wkhtmltopdf to create a "print this page" function.
How can I facilitate this securely? I'm thinking of creating a one-off token on each screen for the print button that allows wkhtmltopdf to log in without using the user's password.
Any suggestions for how to structure this?
We've come to the conclusion to use the built-in "keep me logged in" functionality for this problem.
Would you consider a different printing framework?
What about a jQuery plugin (e.g. https://github.com/ianoxley/jqueryprintpage#readme)?
That way you won't have to allow access to the restricted area from outside the session.
If you still want to use wkhtmltopdf, you can easily create an action that receives a URL and a user_id and creates a unique token. I would save this token in your DB or in a key-value cache (depending on your system architecture). I wouldn't create the unique token in advance; I think it's better to create it on demand (when your user requests a print).
You have a couple of options for enabling printing in secured actions:
1) Create a custom security filter. In the filter, in addition to authenticated requests, allow requests that contain a "token" parameter with the right combination of URL and user.
2) Change the action to unsecured. If you don't want to change the security filter, you would have to change each action to "unsecured" and create a function that verifies that the request is either authenticated or carries a proper token parameter.
It would be smart to remove each token after it has been used once, to make tokens even harder to guess.
In addition, you might want to create a periodic worker that clears old tokens that were never used.
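As a minimal sketch, the token check in option 2) might look something like this in a Symfony 1.4 action, assuming a hypothetical PrintToken Doctrine model with token, url and user_id columns (the class, action, and model names are made up for illustration):

class printActions extends sfActions
{
  public function executePrint(sfWebRequest $request)
  {
    if (!$this->getUser()->isAuthenticated())
    {
      // No normal session: only allow access with a valid one-off token.
      $token  = $request->getParameter('token');
      $record = Doctrine_Core::getTable('PrintToken')->findOneBy('token', $token);

      $this->forward404Unless($record && $record->getUrl() === $request->getUri());

      // Burn the token so it cannot be reused.
      $record->delete();
    }

    // ... render the printable view as usual ...
  }
}

The wkhtmltopdf call would then request the page with ?token=... appended, while regular users keep going through the normal sfGuard session.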
Even though you already decided on an approach, I would still like to add one more alternative that might help others viewing this issue.
Another route might be to grab the current source of the page being viewed and post it to your printer backend using something like
$.post("/printer", document.documentElement.outerHTML);
This way you can also preprocess the HTML in an easy way. Your backend could first store the HTML and then parse it, for example to convert images or remove parts of the page that will not be used when printing.
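For completeness, a rough sketch of what such a /printer backend could look like as a Symfony 1.4 action that hands the posted HTML to wkhtmltopdf (the action name and the temp-file handling are my own assumptions, not the poster's code):

public function executePrinter(sfWebRequest $request)
{
  // Raw HTML posted by $.post("/printer", ...).
  $html = $request->getContent();

  // Preprocess here if needed: rewrite image URLs, strip navigation, etc.

  // Write the HTML to a temp file and hand it to wkhtmltopdf.
  $htmlFile = tempnam(sys_get_temp_dir(), 'print_') . '.html';
  $pdfFile  = tempnam(sys_get_temp_dir(), 'print_') . '.pdf';
  file_put_contents($htmlFile, $html);
  exec(sprintf('wkhtmltopdf %s %s', escapeshellarg($htmlFile), escapeshellarg($pdfFile)));

  // Send the generated PDF back to the browser.
  $this->getResponse()->setContentType('application/pdf');

  return $this->renderText(file_get_contents($pdfFile));
}

Depending on how the front end posts the HTML, you may need to read it from a named POST parameter instead of the raw request body.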

Using wget in conjunction with an OpenID Login

I have a (legit) edge case where I would like to download a web page using any command line tool, but the web page needs an OpenID login to show the content I need.
wget supports basic auth and FTP, but I can't think of a sane way to use it in conjunction with an OpenID login. The only way I can think of is:
Perform an OpenID login using wget
Somehow store the resulting session cookie
Try to fetch the page using another wget call, and --load-cookies the cookies from the last call
This seems complex to build, though, as IIRC the OpenID login process is not quite as straightforward as your plain old web site login. Does anyone either
know a less complex way (performing the OpenID login manually somewhere else would be completely acceptable)
know a ready-made implementation of what I describe above? I'm keen on avoiding having to build this from scratch if at all possible.
Other inspirations are welcome as well.
I can work either on Linux or on Windows. Linux would be preferred from an infrastructure point of view but either platform is fine.
performing the OpenID login manually somewhere else
Well, the best I can think of is to use any browser to log in to whatever service you want. The service will then preserve your "state" in a cookie in your browser.
Grab that cookie, store it in e.g. cookie.txt, and pass it in the header:
wget --header="Cookie: $(cat cookie.txt)" http://...
As long as the session is valid, you can use the wget script. This should work for 99% of all cases, though probably not for online banking (if it does... switch banks immediately :-P).
