symfony2 get firewall name on login page - security

I want to use a single login page for several firewalls, so I need to know which firewall I'm logging in to.
In my controller I would use
$this->container->get('security.context')->getToken()->getProviderKey()
but as an anonymous user I don't have access to the getProviderKey() method.
I could also parse
_security.xxx.target_path
to get the xxx firewall, but I'm looking for a more general solution, if one exists at all.
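That parsing approach might look roughly like the sketch below (this assumes the Request is at hand and that the session keys still follow the _security.<firewall>.target_path pattern; it's only a sketch of the workaround, not a recommended API):
$firewallName = null;
foreach ($request->getSession()->all() as $key => $value) {
    if (preg_match('/^_security\.(.+)\.target_path$/', $key, $matches)) {
        $firewallName = $matches[1];
        break;
    }
}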
Any idea?

As of Symfony 3.2, you can now get the current firewall configuration using the following:
public function indexAction(Request $request)
{
    $firewall = $this->container
        ->get('security.firewall.map')
        ->getFirewallConfig($request)
        ->getName();
}
Ref: http://symfony.com/blog/new-in-symfony-3-2-firewall-config-class-and-profiler

For Symfony 3.4 I wrote this to avoid referencing the non-public "security.firewall.map" service:
$firewallName = null;
if (($firewallContext = trim($request->attributes->get("_firewall_context", null))) && (false !== ($firewallContextNameSplit = strrpos($firewallContext, ".")))) {
$firewallName = substr($firewallContext, $firewallContextNameSplit + 1);
}
(Referencing "security.firewall.map" on 3.4 will throw an exception.)
Edit: This will not work in a custom exception controller function.

I was doing a little research on this myself recently so that I could send this information in an XACML request as part of the environment.
As far as I can tell from GitHub issues like this one:
https://github.com/symfony/symfony/issues/14435
There is currently no way to reliably get this information out of Symfony except the dirty compiler-pass hack suggested on the linked issue. From the conversation there it does appear they are working on making this available; however, the issue is still open, so we will have to be patient and wait for it to be provided.

@Adambean's answer is pretty elegant, but I'd write it as a one-liner:
$firewallName = array_slice(explode('.', trim($request->attributes->get('_firewall_context'))), -1)[0];
The difference is that $firewallName will always be a string (which may be empty).
Also, please note that this answer (like @Adambean's) doesn't work for a firewall with a dot in its name.
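If you do need to handle a firewall name that contains dots, one hedged variant is to strip the conventional prefix instead of splitting on dots. This assumes the _firewall_context attribute follows the usual security.firewall.map.context.<name> naming, which is an internal convention rather than a public API guarantee:
$firewallName = null;
$firewallContext = (string) $request->attributes->get('_firewall_context', '');
$prefix = 'security.firewall.map.context.';
if (0 === strpos($firewallContext, $prefix)) {
    $firewallName = substr($firewallContext, strlen($prefix));
}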

Related

Using ZMQ_XPUB_MANUAL with zeromq.js

I am trying to implement a pub/sub broker with ZeroMQ where it is possible to restrict clients from subscribing to prefixes they are not allowed to subscribe to. I found a tutorial that tries to achieve a similar thing using the ZMQ_XPUB_MANUAL option. With zeromq.js it is possible to set this option:
import * as zmq from "zeromq";
// ...
const socket = new zmq.XPublisher({ manual: true });
After setting this option I am able to receive the subscription messages by calling .receive() on this socket:
const [msg] = await socket.receive();
But I have no idea how to accept this subscription. Usually this is done by calling setSockOpt with ZMQ_SUBSCRIBE, but I don't know how to do this with zeromq.js.
Is there a way to call setSockOpt with zeromq.js or is there another way to accept a subscription?
Edit
I tried user3666197's suggestion to call setSockOpt directly, but I was not sure how to do that. Instead, I took another look at the sources and found this: https://github.com/zeromq/zeromq.js/blob/master/src/native.ts#L617
It seems that setSockOpt is exposed to the TypeScript side through protected methods of the Socket class. To try this out, I created my own class that inherits from XPublisher and exposes an acceptSubscription method:
class CustomPublisher extends zmq.XPublisher {
  constructor(options?: zmq.SocketOptions<zmq.XPublisher>) {
    super(options);
  }

  public acceptSubscription(subscription: string | null): void {
    // ZMQ_SUBSCRIBE has a value of 6
    // reference:
    // https://github.com/zeromq/libzmq/blob/master/include/zmq.h#L310
    this.setStringOption(6, subscription);
  }
}
This works like a charm! But do not forget to strip the first byte of the subscription messages (it indicates subscribe or unsubscribe); otherwise your client won't receive any messages, since the prefix won't match.
Q : "Is there a way to call setSockOpt() with zeromq.js or is there another way to accept a subscription?"
So, let me first note that Somdoron has, without doubt and for ages, been a master of the ZeroMQ tooling.
Next, the issue itself. The GitHub sources I have been able to review so far seem to let the ZMQ_XPUB socket archetype accept the native API's ZMQ_XPUB_MANUAL setting (re-dressed as the manual property, an idiomatic shift), yet they expose no method (that I can see) that lets the user follow the native API's explicit protocol:
ZMQ_XPUB_MANUAL: change the subscription handling to manual... with manual mode subscription requests are not added to the subscription list. To add subscription the user need to call setsockopt() with ZMQ_SUBSCRIBE on XPUB socket. (from the ZeroMQ native API v4.3.2 documentation)
Trying to blind-call the Socket-inherited setSockOpt() method may prove me wrong, yet if it succeeds, it may be a way to inject the { ZMQ_SUBSCRIBE | ZMQ_UNSUBSCRIBE } subscription-management steps into an XPUB instance that has been switched into ZMQ_XPUB_MANUAL mode.
Please test it, and if it fails to work via this superclass-inherited method, the shortest remedy would be to raise that collision / conceptual shortcoming directly with the zeromq.js maintainers (it might be a W.I.P. item deeper in their v6+ refactoring backlog, so fingers crossed either way).

Trouble upgrading to new ember-simple-auth

G'day all,
I've been having trouble upgrading to a more recent version of the ember-simple-auth module.
In particular I seem to have two challenges:
1) The application no longer transitions to the desired route after authenticating. The configuration looks like this:
ENV['ember-simple-auth'] = {
  crossOriginWhiteList: ['http://10.10.1.7:3000'],
  routeAfterAuthentication: 'profile',
  //store: 'simple-auth-session-store:local-storage',
  //authorizer: 'simple-auth-authorizer:token',
};
but it never gets to "profile".
2) I can't get the authenticated session to stick after a reload. I had been trying to use the local-storage store, which I believed would do the trick, but it isn't. Has something changed in the implementation?
The documentation seems to indicate that the configuration strings are right, but the transition and session store don't seem to be working.
Has anyone had a similar problem?
Thanks,
Andrew
You could try adding "routeIfAlreadyAuthenticated" to ENV['ember-simple-auth'], or you could transition manually in the index route's "afterModel" hook if the session is already authenticated.
Have you configured a session store? https://github.com/simplabs/ember-simple-auth#session-stores - the way it's configured changed in 1.0; now you add the desired session store in app/session-stores/application.js. Maybe this solves #1 too.
OK. As the comments call out, there were two problems here:
1) I had written a custom authorizer for the old version of simple-auth which didn't work with the new version, and
2) I had a typo in the adapter code, where DataAdapterMixin was DAtaAdapterMixin.
Removing (1) and fixing (2) fixed the problem.

How to make sure a user can only see and access their own data in Yii

Is there a best way in Yii to make sure a user can only see and access their own data?
I figure an admin should be able to see anything, but I'll cross that bridge later.
Thanks
Look into scopes. Default scopes will be your friend:
http://www.yiiframework.com/doc/guide/1.1/en/database.ar#named-scopes
Because the defaultScope array is built inside a function, you can also do conditional default scopes:
public function defaultScope()
{
    $t = $this->getTableAlias(false, false);
    if (Yii::app()->user->notAdmin()) {
        return array(
            'condition' => "$t.<column_name> = :<columnName>",
            'params' => array(':<columnName>' => Yii::app()->user->notAdmin),
        );
    }
    else {
        return array();
    }
}
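For illustration, with a hypothetical Post model that defines the defaultScope() above, every query is filtered automatically without any extra condition (Post is an illustrative model name, not from the original answer):
$posts = Post::model()->findAll();     // only the current user's rows
$post = Post::model()->findByPk($id);  // null if the row belongs to someone else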
Edit: Note that this can get you in trouble down the road if you aren't careful. See this issue on the Yii site for more info.
There is no way Yii will do this for you; you'll have to do it on your own, but it's fairly straightforward.
You can consider scopes, or look into relations and base them on the current user. For example, to get all posts by a user, you can do:
$posts = Post::model()->findAll(); //WRONG
$posts = Yii::app()->user->posts(); //RIGHT (Should define the relation in the User model)
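For reference, the relation the answer mentions might look something like this in the User model; the author_id column name here is an assumption for illustration:
class User extends CActiveRecord
{
    public static function model($className = __CLASS__)
    {
        return parent::model($className);
    }

    public function relations()
    {
        return array(
            'posts' => array(self::HAS_MANY, 'Post', 'author_id'),
        );
    }
}
// Then the current user's posts can be loaded through the relation:
$posts = User::model()->findByPk(Yii::app()->user->id)->posts;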
Check out a solution which I wrote:
http://www.yiiframework.com/forum/index.php/topic/42735-restrict-users-to-only-editingdeleting-their-own-entries/page_gopid_237608#entry237608

Drupal - Security check all site paths by role

I'm writing this in the forlorn hope that someone has already done something similar. I would have posted on drupal.org - but that site is about as user-friendly as a kick in the tomatoes.
I don't know about you, but when I develop I leave all my Drupal paths with open access, and then think about locking them down with access permissions at the end.
What would be really useful is a module which parses all the paths available (basically by deconstructing the contents of the menu_router table) and then tries them (curl?) in turn whilst logged in as a given user with a given set of roles.
The output would be a simple html page saying which paths are accessible and which are not.
I'm almost resigned to doing this myself, but if anyone knows of anything vaguely similar I'd be more than grateful to hear about it.
Cheers
UPDATE
Following a great idea from Yorirou, I knocked together a simple module to provide the output I was looking for.
You can get the code here: http://github.com/hymanroth/Path-Lockdown
My first attempt would be a function like this:
function check_paths($uid) {
  global $user;
  $origuser = $user;
  $user = user_load($uid);
  $paths = array();
  foreach (array_keys(module_invoke_all('menu')) as $path) {
    $result = menu_execute_active_handler($path);
    if ($result != MENU_ACCESS_DENIED && $result != MENU_NOT_FOUND) {
      $paths[$path] = TRUE;
    }
    else {
      $paths[$path] = FALSE;
    }
  }
  $user = $origuser;
  return $paths;
}
This is good for a first attempt, but it can't handle wildcard paths (% in the menu path). Loading all possible values can be an option, but it doesn't work in all cases: if you have %node, for example, you can use node_load(), but if you have just a bare %, you have no idea what to load. Also, it is common practice to omit the last argument when it is a variable, in order to handle the case where no argument is given (e.g. display all elements).
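One hedged way around the wildcard limitation is to substitute sample values for the wildcard placeholders before running the handler, in the same spirit as the function above. The replacement IDs below are placeholders; you would swap in IDs that actually exist on your site:
function check_path_with_wildcards($path) {
  // Illustrative sample values for common wildcards.
  $replacements = array(
    '%node' => 1,
    '%user' => 1,
    '%' => 1,
  );
  $parts = explode('/', $path);
  foreach ($parts as $i => $part) {
    if (isset($replacements[$part])) {
      $parts[$i] = $replacements[$part];
    }
  }
  $result = menu_execute_active_handler(implode('/', $parts));
  return $result != MENU_ACCESS_DENIED && $result != MENU_NOT_FOUND;
}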
Also, it might be a good idea to integrate this solution with Drupal's testing system.
I did a bit of research and wasn't able to find anything. Though I'm inclined to think there is a way to check path access through the Drupal API as opposed to cURL - but please keep me updated on your progress / let me know if you would like help developing. This would be a great addition to the Drupal module ecosystem.

Best way to handle security and avoid XSS with user entered URLs

We have a high security application and we want to allow users to enter URLs that other users will see.
This introduces a high risk of XSS hacks - a user could potentially enter javascript that another user ends up executing. Since we hold sensitive data it's essential that this never happens.
What are the best practices in dealing with this? Is any security whitelist or escape pattern alone good enough?
Any advice on dealing with redirections (for instance, a "this link goes outside our site" message on a warning page before following the link)?
Is there an argument for not supporting user entered links at all?
Clarification:
Basically our users want to input:
stackoverflow.com
And have it output to another user as a clickable link:
stackoverflow.com
What I really worry about is them using this in an XSS hack, i.e. they input something like:
javascript:alert('hacked!');
So other users get a link that still reads stackoverflow.com but runs that script when clicked.
My example is just to explain the risk - I'm well aware that javascript and URLs are different things, but by letting them input the latter they may be able to execute the former.
You'd be amazed how many sites you can break with this trick - HTML is even worse. If they know how to deal with links, do they also know to sanitise <iframe>, <img> and clever CSS references?
I'm working in a high security environment - a single XSS hack could result in very high losses for us. I'm happy that I could produce a Regex (or use one of the excellent suggestions so far) that could exclude everything that I could think of, but would that be enough?
If you think URLs can't contain code, think again!
https://owasp.org/www-community/xss-filter-evasion-cheatsheet
Read that, and weep.
Here's how we do it on Stack Overflow:
/// <summary>
/// returns "safe" URL, stripping anything outside normal charsets for URL
/// </summary>
public static string SanitizeUrl(string url)
{
    return Regex.Replace(url, @"[^-A-Za-z0-9+&@#/%?=~_|!:,.;\(\)]", "");
}
The process of rendering a link "safe" should go through three or four steps:
Unescape/re-encode the string you've been given (RSnake has documented a number of tricks at http://ha.ckers.org/xss.html that use escaping and UTF encodings).
Clean the link up: regexes are a good start. Make sure to truncate the string or throw it away if it contains a " (or whatever you use to close attributes in your output). If the links are only references to other information, you can also force the protocol at the end of this process: if the portion before the first colon is not 'http' or 'https', append 'http://' to the start. This lets you create usable links from incomplete input (as a user would type into a browser) and gives you a last shot at tripping up whatever mischief someone has tried to sneak in.
Check that the result is a well formed URL (protocol://host.domain[:port][/path][/[file]][?queryField=queryValue][#anchor]).
Possibly check the result against a site blacklist or try to fetch it through some sort of malware checker.
If security is a priority I would hope that the users would forgive a bit of paranoia in this process, even if it does end up throwing away some safe links.
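As a concrete illustration of step 2 above, a minimal PHP sketch (the function name and exact rules are mine, not the answer's) might look like this:
function clean_link($url) {
    // Discard rather than repair anything that could close an attribute.
    if (strpos($url, '"') !== false || strpos($url, "'") !== false) {
        return null;
    }
    // Force the protocol: anything that doesn't already start with
    // http:// or https:// gets http:// prepended.
    if (!preg_match('#^https?://#i', $url)) {
        $url = 'http://' . $url;
    }
    return $url;
}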
Use a library, such as OWASP-ESAPI API:
PHP - http://code.google.com/p/owasp-esapi-php/
Java - http://code.google.com/p/owasp-esapi-java/
.NET - http://code.google.com/p/owasp-esapi-dotnet/
Python - http://code.google.com/p/owasp-esapi-python/
Read the following:
https://www.golemtechnologies.com/articles/prevent-xss#how-to-prevent-cross-site-scripting
https://www.owasp.org/
http://www.secbytes.com/blog/?p=253
For example:
$url = "http://stackoverflow.com"; // e.g., $_GET["user-homepage"];
$esapi = new ESAPI( "/etc/php5/esapi/ESAPI.xml" ); // Modified copy of ESAPI.xml
$sanitizer = ESAPI::getSanitizer();
$sanitized_url = $sanitizer->getSanitizedURL( "user-homepage", $url );
Another example is to use a built-in function. PHP's filter_var function is an example:
$url = "http://stackoverflow.com"; // e.g., $_GET["user-homepage"];
$sanitized_url = filter_var($url, FILTER_SANITIZE_URL);
Note that filter_var with FILTER_SANITIZE_URL only strips characters that are illegal in URLs; it still allows javascript: calls and does not restrict the scheme to http or https. Using the OWASP ESAPI Sanitizer is probably the best option.
Still another example is the code from WordPress:
http://core.trac.wordpress.org/browser/tags/3.5.1/wp-includes/formatting.php#L2561
Additionally, since there is no way of knowing where the URL links (i.e., it might be a valid URL, but the contents of the URL could be mischievous), Google has a safe browsing API you can call:
https://developers.google.com/safe-browsing/lookup_guide
Rolling your own regex for sanitization is problematic for several reasons:
Unless you are Jon Skeet, the code will have errors.
Existing APIs have many hours of review and testing behind them.
Existing URL-validation APIs consider internationalization.
Existing APIs will be kept up-to-date with emerging standards.
Other issues to consider:
What schemes do you permit (are file:/// and telnet:// acceptable)?
What restrictions do you want to place on the content of the URL (are malware URLs acceptable)?
Just HTMLEncode the links when you output them. Make sure you don't allow javascript: links. (It's best to have a whitelist of protocols that are accepted, e.g., http, https, and mailto.)
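A minimal PHP sketch of that idea (the helper name and whitelist contents are illustrative):
function render_link($url) {
    $allowed = array('http', 'https', 'mailto');
    $scheme = strtolower((string) parse_url($url, PHP_URL_SCHEME));
    // HTML-encode before the value goes anywhere near the markup.
    $encoded = htmlspecialchars($url, ENT_QUOTES, 'UTF-8');
    if (!in_array($scheme, $allowed, true)) {
        return $encoded; // render as plain text, not as a link
    }
    return '<a href="' . $encoded . '">' . $encoded . '</a>';
}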
You don't specify the language of your application, so I will presume ASP.NET, for which you can use the Microsoft Anti-Cross Site Scripting Library.
It is very easy to use; all you need is an include and that's it :)
While you're on the topic, why not give Design Guidelines for Secure Web Applications a read?
If you're using another language, a library like the one for ASP.NET is likely to be available for it as well (PHP, Python, RoR, etc.).
For Pythonistas, try Scrapy's w3lib.
OWASP ESAPI pre-dates Python 2.7 and is archived on the now-defunct Google Code.
How about not displaying them as a link? Just use the text.
Combined with a warning to proceed at their own risk, that may be enough.
Addition - see also Should I sanitize HTML markup for a hosted CMS? for a discussion on sanitizing user input.
There is a library for JavaScript that solves this problem:
https://github.com/braintree/sanitize-url
Try it =)
In my project, written in JavaScript, I use this regex as a whitelist:
url.match(/^((https?|ftp):\/\/|\.{0,2}\/)/)
The only limitation is that you need to put ./ in front of files in the same directory, but I think I can live with that.
Using regular expressions to prevent XSS vulnerabilities is complicated and thus hard to maintain over time, and it can leave some vulnerabilities behind. URL validation with regular expressions is helpful in some scenarios, but it is better not to mix it with vulnerability checks.
The solution is probably to use a combination of an encoder like AntiXssEncoder.UrlEncode for encoding the query portion of the URL and UriBuilder for the rest:
public sealed class AntiXssUrlEncoder
{
    public string EncodeUri(Uri uri, bool isEncoded = false)
    {
        // Encode the query portion of the URL to prevent XSS attacks if it is not
        // already encoded. Otherwise let UriBuilder take care of it.
        var encodedQuery = isEncoded ? uri.Query.TrimStart('?') : AntiXssEncoder.UrlEncode(uri.Query.TrimStart('?'));
        var encodedUri = new UriBuilder
        {
            Scheme = uri.Scheme,
            Host = uri.Host,
            Path = uri.AbsolutePath,
            Query = encodedQuery.Trim(),
            Fragment = uri.Fragment
        };
        if (uri.Port != 80 && uri.Port != 443)
        {
            encodedUri.Port = uri.Port;
        }
        return encodedUri.ToString();
    }

    public static string Encode(string uri)
    {
        var baseUri = new Uri(uri);
        var antiXssUrlEncoder = new AntiXssUrlEncoder();
        return antiXssUrlEncoder.EncodeUri(baseUri);
    }
}
You may need to include whitelisting to exclude some characters from encoding; that can be helpful for particular sites.
HTML-encoding the page that renders the URL is another thing you may need to consider.
BTW, please note that encoding the URL may break web parameter tampering, so the encoded link may appear not to work as expected.
Also, you need to be careful about double encoding.
P.S. AntiXssEncoder.UrlEncode would have been better named AntiXssEncoder.EncodeForUrl, to be more descriptive: basically, it encodes a string for use in a URL; it does not encode a given URL and return a usable URL.
You could hex-encode the entire URL and send it to your server. That way the client would not understand the content at first glance. After reading the content, you could decode it back to the URL and send it to the browser.
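For what it's worth, the round trip could be as simple as the sketch below; note that this is obfuscation rather than sanitisation, so the decoded value still needs the checks discussed in the other answers:
$encoded = bin2hex($url);     // what the client submits
$decoded = hex2bin($encoded); // decode server-side, then validate as usual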
Allowing a URL and allowing JavaScript are 2 different things.
