DCOM Hardening - Authentication level for calls - COM+

Microsoft is raising the minimum security level for DCOM. The COM+ application setting "Authentication level for calls" will have to be at least "Packet integrity", and this will be mandatory by March 2023.
According to Microsoft, the correct way to handle this is to make the change in your application programmatically rather than by changing the setting of the COM+ application under Component Services > COM+ Applications. Does anybody have experience with this and how it is done?
My team and I are struggling a bit with this. We use Delphi (RAD Studio), but I would guess that is not important; the principles are likely the same regardless.
We have looked at the possibility of changing this setting programmatically, but it does not seem possible without a total rewrite of Delphi core functions. When the COM+ object is created by a Delphi core function, there is a property for the authentication level that is not set; it is left "blank". We interpret this to mean that the authentication level configured for the application under COM+ Applications is applied.
There is also the function CoInitializeSecurity, which actually seems a more plausible alternative, but it is still unclear how to use it. Anyone?

We just resolved this problem in our Delphi application with the CoInitializeSecurity function, and it works.
We declared two types:
type
  // Ordinal values match the RPC_C_AUTHN_LEVEL_* constants (Default = 0 .. PacketPrivacy = 6)
  TAuthenticationLevel = (
    alMclDefault,
    alMclNone,
    alMclConnect,
    alMclCall,
    alMclPacket,
    alMclPacketIntegrity,
    alMclPacketPrivacy
  );
  // Ordinal values match the RPC_C_IMP_LEVEL_* constants (Default = 0 .. Delegate = 4)
  TImpersonationLevel = (
    ilMclNone, // dummy, stands in for RPC_C_IMP_LEVEL_DEFAULT
    ilMclAnonymous,
    ilMclIdentify,
    ilMclImpersonate,
    ilMclDelegate
  );
and at the end of mainform.pas, in the initialization section, the call:
initialization
  // Must run once per process, before the first COM object is created or marshaled.
  // Ord(alMclPacketIntegrity) = 5 = RPC_C_AUTHN_LEVEL_PKT_INTEGRITY,
  // Ord(ilMclIdentify) = 2 = RPC_C_IMP_LEVEL_IDENTIFY.
  OleCheck(CoInitializeSecurity(nil, -1, nil, nil,
    Ord(alMclPacketIntegrity), Ord(ilMclIdentify), nil, 0, nil));
end.

Trying to figure out the same for an old VB6 app using ActiveX EXEs via DCOM, and I'm not convinced it's feasible.
VB6 hides the ability to change the security settings other than with the DCOM configuration tool, and attempting to match the settings there doesn't allow the client to connect at all.
The client does call CoInitializeSecurity, using None and Anonymous as the defaults, but changing to PKT_INTEGRITY in that call, along with changing the global DCOM permissions to Packet integrity, doesn't allow the connection to complete.


Catel - Use ModelBase as "Data Transfer Object" in Service-Based Application

I am currently creating some examples with Catel.
The scenario I have in mind:
Database server
Web server with a WCF Data Service
WPF (or Silverlight) client
My "problem":
I do not want to repeat the validation code in the web server and the client, but the problem is that ModelBase does not work as a Data Service data transfer object.
(The additional properties create all kinds of problems.)
So - how would you usually tackle that problem?
There are some ideas that come to mind:
do not validate on the server again (authenticated users are trustworthy?!)
do not use the WCF Data Service at all, but create custom WCF services (which could use the Entity Framework DbContext under the hood)
Neither "solution" sounds very good...
Regards
Johannes Colmsee
First note: always validate on the server, never trust the client, never.
The solution is to create a shared project for your shared code. There you can reuse the validation on both the server and the client (with the same code base), as sketched below.
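To illustrate, here is a minimal sketch of that layout. The project name and the rule class are my own (hypothetical) choices; the rules could just as well be wired into Catel's validation overrides or expressed as DataAnnotations.

using System.Collections.Generic;

namespace MyApp.Validation // hypothetical shared project referenced by both server and client
{
    public static class CustomerRules
    {
        // Pure validation logic with no dependency on ModelBase or the DTOs,
        // so the same code runs against the entity on the server and the
        // view model / Catel model on the client.
        public static IEnumerable<string> Validate(string name, string email)
        {
            if (string.IsNullOrWhiteSpace(name))
                yield return "Name is required.";

            if (string.IsNullOrWhiteSpace(email) || !email.Contains("@"))
                yield return "Email does not look like a valid address.";
        }
    }
}

The server would call CustomerRules.Validate before persisting, and the Catel model on the client could call the same method from its validation hook, so the rules are written exactly once.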

TACACS+ for Linux authentication/authorization using pam_tacplus

I am using TACACS+ to authenticate Linux users with the pam_tacplus.so PAM module, and it works without issues.
I have modified the pam_tacplus module to meet some of my custom requirements.
I know that, by default, TACACS+ has no means of supporting Linux groups or access-level control over Linux bash commands. However, I was wondering whether some information could be passed from the TACACS+ server side to the pam_tacplus.so module that could be used to allow/deny access, or to modify the user's group on the fly [from the PAM module itself].
Example: if I could pass the priv-lvl number from the server to the client, it could be used for some decision making in the PAM module.
PS: I would prefer a method that involves no modification on the server side [code]; all modification should be done on the Linux side, i.e. in the pam_tacplus module.
Thanks for any help.
Eventually I got it working.
Issue 1:
The issue I faced was that there is very little documentation available on configuring a TACACS+ server for a non-Cisco device.
Issue 2:
The tac_plus version that I am using
tac_plus -v
tac_plus version F4.0.4.28
does not seem to support
service = shell protocol = ssh
option in the tac_plus.conf file.
So eventually I used
service = system {
    default attribute = permit
    priv-lvl = 15
}
On the client side (pam_tacplus.so),
I sent the AVP service=system during the authorization phase (pam_acct_mgmt), which forced the server to return the priv-lvl defined in the configuration file, which I then used to decide the privilege level of the user.
NOTE: some documentation mentions that service=system is not used anymore, so this option may not work with Cisco devices.
HTH
Depending on how you intend to implement this, PAM may be insufficient to meet your needs. The privilege level from TACACS+ isn't part of the 'authentication' step, but rather the 'authorization' step. If you're using pam_tacplus, then that authorization takes place as part of the 'account' (aka pam_acct_mgmt) step in PAM. Unfortunately, however, *nix systems don't give you a lot of ability to do fine grained control here -- you might be able to reject access based on invalid 'service', 'protocol', or even particulars such as 'host', or 'tty', but probably not much beyond that. (priv_lvl is part of the request, not response, and pam_tacplus always sends '0'.)
If you want to vary privileges on a *nix system, you probably want to work within that environment's capabilities. My suggestion would be to use groups as a means of producing a sort of 'role-based' access control. If you want these to exist on the TACACS+ server, then you'll want to introduce custom AVPs that are meaningful, and then associate those with the user.
You'll likely need an NSS (name service switch) module to accomplish this -- by the time you get to PAM, OpenSSH, for example, will have already determined that your user is "bogus" and sent along a similarly bogus password to the server. With an NSS module you can populate 'passwd' records for your users based on AVPs from the TACACS+ server. More details on NSS can be found in glibc's documentation for "Name Service Switch".

Enforcing SaaS subscription requirements for client-based apps

I want to create a SaaS extension for Chrome.
How do I ensure that users cannot use my extension's functionality when their subscription is no longer current?
My basic idea is that whenever they want to use my Chrome extension's functionality, the extension makes an AJAX request to my server to check whether today's date is before the subscription's end date in my DB.
The extension is obviously client-based, so even if I have code on the client side that's only executed if my AJAX request returns that they have a current subscription, couldn't an enterprising individual just look at my code and run it via the console in a way that bypasses my AJAX check?
Is there a way to enforce the subscription?
Edit:
This is mostly a conceptual question, but I'll try to be clearer.
All the JavaScript code needed for my app to function is on the user's local machine, in the source files (it doesn't require access to my database to work).
So you could think of my code on their local machine as looking like this:
if (usersSubscriptionIsCurrent) {
runFeature()
}
And usersSubscriptionIsCurrent is true if the Ajax request to my server returns that their subscription is current.
Someone could still run my feature just by looking at the source code, and then typing runFeature() into their console.
I want to prevent that.
My extension relies on sending data from the extension to a related Chrome app, so I just had the idea that I could also send the data to my server, which could then forward the data to the user's Chrome app if they have a current subscription. But yikes.
The more I think about it, the less I think it's possible for me to prevent, but I figured I'd ask in case anyone has a clever idea.
I think you are slightly confused about what counts as SaaS. Wikipedia:
Software as a service is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted. SaaS is typically accessed by users using a thin client via a web browser.
Emphasis mine.
If your app / extension contains all the logic required, it does not qualify as SaaS. Furthermore, as it is always possible to copy and dissect your app, taking out all the license checks, you can't protect it against a determined attacker.
There are ways to protect your code to some degree, via obfuscation, offloading logic to (P)NaCl modules or native host modules, or, as Alex Belozerov suggested, loading the code at runtime. Again, all of that can be broken by a determined attacker.
But if you truly have SaaS in mind (and not just subscription-based licensing), your client app should be a thin client: that is, your app logic should be processed on a server, with the code safely away from clients. That is the only "sure" way to protect it. It does incur processing costs for you, but that's what the subscription is supposed to cover in the first place.
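As a rough illustration of that thin-client idea (the service, type names and subscription store below are assumptions of mine, not from the question), the subscription check and the valuable feature both live on the server, and the extension only ever receives the computed result:

using System;

// Hypothetical server-side handler; the extension would call it over HTTPS.
public interface ISubscriptionStore { bool IsCurrent(string userId); }

public class FeatureService
{
    private readonly ISubscriptionStore _subscriptions;

    public FeatureService(ISubscriptionStore subscriptions)
    {
        _subscriptions = subscriptions;
    }

    public string RunFeature(string userId, string input)
    {
        // Checked on the server, where the client cannot tamper with it.
        if (!_subscriptions.IsCurrent(userId))
            throw new UnauthorizedAccessException("Subscription expired.");

        // The feature logic itself stays server-side; only its output is returned.
        return input.ToUpperInvariant();
    }
}

The console trick from the question no longer helps here: there is nothing useful to call locally, and the server refuses to do the work once the subscription has lapsed.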
You can fetch part of the code needed from the server side. So if the user's subscription is over, he won't be able to run your feature because part of the code is missing. The concept of my idea:
var subscriptionStatusResponse = makeAjaxCall();
if (subscriptionStatusResponse.usersSubscriptionIsCurrent) {
    runFeature_localCode();                  // only part of the functionality
    subscriptionStatusResponse.remoteCode(); // second part, supplied by the server
}
Maybe the best solution is to check whether their subscription is current as soon as the extension starts, and then use the chrome.management API to uninstall or disable it if their subscription is over.
I'd love to hear better ideas though.

Is calling Process.Start a security risk from an Azure worker role?

We have a worker role in Azure that uses Process.Start to kick off a background process (which hosts a native application we need to run).
FxCop gives me a whole load of CA2122 errors due to a link demand. When I tried to add this attribute:
[PermissionSet(SecurityAction.LinkDemand, Name = "FullTrust")]
I then started to get CA2135, the solution to which seems to be to add the SecurityCritical attribute instead.
But then I get the CA2122 again.
Are either of these things an issue? Under what circumstances could they be and how can I be sure that I'm not introducing a security problem?
SecurityCritical should perform a role equivalent to a LinkDemand for full trust:
The SecurityCriticalAttribute is equivalent to a link demand for full trust. A type or member marked with the SecurityCriticalAttribute can be called only by fully trusted code; it does not have to demand specific permissions. It cannot be called by partially trusted code.
Ergo, I'd suggest adding SecurityCritical (to fulfil the needs for CA2135) and suppress the CA2122, which is presumably just Microsoft forgetting to account for their newer solution in their code analysis.
The objective of CA2122 is to ensure that the method...
no longer provides unsecured access to the link demand-protected member.
This isn't the case once SecurityCritical is added (which ensures the member can be called only by fully trusted code), so the second CA2122 is a false positive.
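Assuming the classic FxCop-style in-source suppression attribute (and that the rule name I recall for CA2122 matches what your report shows), the combination could look something like this:

using System.Diagnostics.CodeAnalysis;
using System.Security;

public static class ProcessLauncher
{
    // SecurityCritical restricts callers to fully trusted code, which is what
    // the original LinkDemand was meant to guarantee.
    [SecurityCritical]
    [SuppressMessage("Microsoft.Security",
        "CA2122:DoNotIndirectlyExposeMethodsWithLinkDemands",
        Justification = "Callers are limited to full trust via SecurityCritical.")]
    public static void StartBackgroundProcess(string path)
    {
        System.Diagnostics.Process.Start(path);
    }
}

Remember that SuppressMessage attributes are only emitted into the assembly when the CODE_ANALYSIS symbol is defined for the build configuration.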

Transparent authorization reliability

I need a mechanism for custom authorization in business logic classes. It has to be a permissions-based system, but I cannot decide how to apply the authorization rules to methods.
My first thought was to apply custom attributes to the methods:
[NeedPermission("Users", PermissionLevel.Read)]
public IList<User> GetAllUsers()
{
    // some code goes here
}
My business logic class has an interface, so I can use, for example, a Unity interception behavior and check at runtime whether the current user has the required permissions, and throw an exception if he does not.
But now I'm concerned about the reliability of this approach.
Usually the reference to the business logic class is injected by the Unity container, so there is no problem, because the container is configured to apply the interface interception mechanism.
But what if some developer instantiates my business logic class directly? Then no interception will be applied, and he will be able to call any method even if the current user does not have permission for some actions or is not even authenticated.
Also, somebody could change the Unity container configuration and turn off the interception extension completely. Again, my authorization system would not work.
I saw that ASP.NET MVC uses a similar mechanism for authorization. The authorization rule is applied only when the request comes in the standard way (IController.Execute). I think this is not a problem in that case, because the end user of a controller (the web user) has no way to access the controller class directly.
In my case the end user of the business logic is a programmer who develops the front end, and he can intentionally or unintentionally screw things up: create an instance of the business logic class directly and call any of its methods.
What can you suggest me? How do you deal with this kind of problems?
Thank you.
The .NET Framework supports a mechanism for declarative permission verifications that does not depend on Unity interception or other "external" AOP. In order to take advantage of this, your attribute must inherit from System.Security.Permissions.CodeAccessSecurityAttribute. The System.Security.Permissions.PrincipalPermissionAttribute that is included in the BCL is an example of using this mechanism to evaluate user permissions. If it does not suit your needs, there's nothing stopping you from creating your own attribute that does.
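For example, the built-in PrincipalPermissionAttribute can already express the rule from the question declaratively, provided Thread.CurrentPrincipal is populated and your permissions are mapped to role names (the "Users.Read" role below is an invented mapping for the ("Users", Read) permission):

using System;
using System.Security.Permissions;
using System.Security.Principal;
using System.Threading;

public class UserService
{
    // Enforced by the runtime on every call, no matter how the instance
    // was created - no container or interception involved.
    [PrincipalPermission(SecurityAction.Demand, Role = "Users.Read")]
    public string[] GetAllUsers()
    {
        return new[] { "alice", "bob" };
    }
}

public static class Demo
{
    public static void Main()
    {
        // The demand checks Thread.CurrentPrincipal, so it has to be set up
        // (here with a throwaway principal; in ASP.NET it is set for you).
        Thread.CurrentPrincipal = new GenericPrincipal(
            new GenericIdentity("jane"), new[] { "Users.Read" });

        var users = new UserService().GetAllUsers(); // succeeds
        Console.WriteLine(string.Join(", ", users));
        // Without the "Users.Read" role the call throws a SecurityException.
    }
}

A custom attribute deriving from CodeAccessSecurityAttribute works the same way: it returns your own IPermission from CreatePermission, and the runtime demands it wherever the attribute is applied.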
If your constructors are internal and your objects are instantiated from a factory, a developer won't be able to bypass your security by mistake.
If someone really, really wants to create your objects without the security, he could still do it using reflection, but that would be a pretty intentional thing to do.
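A small sketch of that factory arrangement (the type names are invented here): the constructor is internal, so code outside the business logic assembly has to go through the factory, which in the real application would resolve through the container so the interception-based authorization is always applied.

using System.Collections.Generic;

public interface IUserService
{
    IList<string> GetAllUsers();
}

public class UserService : IUserService
{
    // internal: code in other assemblies cannot write "new UserService()".
    internal UserService() { }

    public IList<string> GetAllUsers()
    {
        return new List<string> { "alice", "bob" };
    }
}

public static class BusinessLogicFactory
{
    public static IUserService CreateUserService()
    {
        // Shown as a direct construction to keep the sketch self-contained;
        // the real factory would call container.Resolve<IUserService>() so the
        // interface interception (and the permission checks) is applied.
        return new UserService();
    }
}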
