I have an architectural question.
Let's say I have a route '/tickets'. I can easily authenticate users that are accessing this route using Passport. I can further protect this route via acl.
Now let's say my internal app or a process wants to access this same route. I'm thinking I might only have one option: I have to create a separate user/password with the right role and have my internal app or process make an HTTP call to this route using these separate credentials.
So, is this the right way to access internal APIs?
Any other suggestions that might be useful?
Thanks
There are lots of different options for routes that are only accessible internally:
1. Create a different server on a different port, but in the same process, that implements the private routes, and block access to that port at your firewall. This is my favorite option because it's really simple to get right. You create a second http server on a non-standard port (like 8000 or something like that) and you make sure your firewall only allows public access to port 80 or 443. You then put all your internal routes on the port 8000 server. But, since it's in the same process as your public server, you can still access all the same data. With one simple firewall rule (that is probably already implemented by default), you block access to all private routes.
2. Authenticate all routes, use different credentials for internal routes. Here, you have a different set of credentials that is kept internally and is used only for your internal routes. Since nobody in the outside world has those credentials, they shouldn't be able to use them.
3. Block public access to internal routes at the networking level. Here you would use either a proxy or a firewall to block access to specific internal routes at the networking level. So, if your incoming firewall saw a request for /admin/whatever, it would just block access to anything starting with /admin. Internal routes could be used freely, either with or without credentials, but could only be used from within the private network of your server. They couldn't be accessed from the outside.
4. Allow access only to known public routes at the networking level. This is just the reverse of option 3. Here you have to specifically tell your firewall/proxy which routes the public is allowed to access.
5. Allow access to private routes only from certain IP addresses. Here, you could deny access to any private route if the source IP address was not a local IP address on your local network. This is just a different way of implementing options 3 or 4. It can be made fairly restrictive to only allow access from one or a very specific number of internal IP addresses if you want. A rough sketch of this idea applied in the app layer follows the list.
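For illustration only, here is a rough Express-style sketch of option 5 enforced inside the application instead of at the firewall (that part is my assumption; the option above describes a networking-level check). The address ranges and route names are placeholders:

const express = require('express');
const app = express();

// Only let requests from loopback or the private network reach private routes.
// Adjust the prefixes to match your own network (these are assumptions).
function internalOnly(req, res, next) {
  const ip = req.ip; // with a proxy in front, configure 'trust proxy' first
  const isLocal =
    ip === '127.0.0.1' || ip === '::1' ||
    ip.startsWith('10.') || ip.startsWith('192.168.') ||
    ip.startsWith('::ffff:10.') || ip.startsWith('::ffff:192.168.');
  if (!isLocal) return res.sendStatus(403);
  next();
}

// Private route guarded by the IP check; public route left open here.
app.get('/admin/whatever', internalOnly, (req, res) => res.json({ ok: true }));
app.get('/tickets', (req, res) => res.json({ tickets: [] }));

app.listen(80);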
For additional security, you can also combine various options, which is sometimes useful for preventing internal attacks on your own infrastructure, either from a malicious employee or from some other piece of compromised infrastructure on your own private network.
For example, you could combine options 1, 2 and 5. You'd create a separate server port that was not accessible from the public internet and you'd authenticate every request to it with internal-only credentials and you'd only allow access to it from specific internal IP addresses. I'm not saying you have to combine all those, but I'm giving you the idea that these are not all mutually exclusive. My favorite would be to combine 1 and 2.
FYI, if you want to have private access to the same functionality such as /tickets, but with different access, you can use the second server that is only accessible internally (as described in option 1 above) and just have a /tickets route on it that has different access control. The two separate servers can share the same /tickets implementation (just put the implementation in a function that both can share); they would simply have different route definitions on the two servers that define the authentication required. You could even have the private server set a flag on the request object that indicates to the rest of your code which entry point the route was called from (public or private) so it could branch based on that information. A minimal sketch of this setup follows.
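Here is a minimal sketch of options 1 and 2 combined with the shared /tickets handler described above. The ports, the internal credential check, and the flag name are assumptions for illustration, not a prescribed implementation:

const express = require('express');

// Shared implementation used by both servers.
function listTickets(req, res) {
  // req.isInternal tells the handler which entry point was used.
  const scope = req.isInternal ? 'all tickets' : "only the caller's tickets";
  res.json({ scope, tickets: [] });
}

// Public server: exposed through the firewall; normal auth (e.g. Passport)
// would be applied here.
const publicApp = express();
publicApp.get('/tickets', /* passport.authenticate('local'), */ listTickets);
publicApp.listen(80);

// Internal server: blocked at the firewall, and additionally checked against
// an internal-only credential (option 2).
const internalApp = express();
internalApp.use((req, res, next) => {
  req.isInternal = true; // flag for shared handlers
  if (req.get('X-Internal-Key') !== process.env.INTERNAL_API_KEY) {
    return res.sendStatus(401);
  }
  next();
});
internalApp.get('/tickets', listTickets);
internalApp.listen(8000, '127.0.0.1'); // or bind to a private interface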
Related
I want to securely store my users' private keys on a separate server (let's call it B), which is used to sign and decrypt information. B stores the keys in a database (Postgres). Server A (public) sends information to B. Ideally, B gets the private key, signs the token with the information and sends it back to A, instead of sending the private key to A, which could be a security issue (if server A is compromised).
My options are:
WebSockets
HTTPS requests (https://nodejs.org/api/https.html#https_https_request_options_callback)
Questions:
Are there any other options for securely communicating between the two servers?
If server B is listening on port 7000, how can I make sure only server A can access it?
How would an HSM help in my case, and how does it communicate with other servers (WebSocket or HTTPS request)?
I could just take the easy route, connect to server B's database on port 7000 and run queries from A, but as I said it's not as secure. I heard that an HSM handles/decrypts information and sends it back, so I thought I could do something similar with normal servers.
Thanks, any help would be appreciated.
UPDATE
@zaph has answered questions 2 and 3.
Question: Does server A need to make an HTTPS request to the private IP address of server B, for example https://203.0.113.25? Then server B would use an API router to handle the request. However, an IP address isn't a DNS name, so that won't work with certificates. So how do the servers communicate and send/receive data?
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario3.html
For others: Use security groups and configure them so only a specific instance can access it. Then make a normal request, e.g. domain.com:PORT, where PORT is the port the target instance is listening on. A minimal sketch follows the quoted documentation below.
When you specify a security group as the source or destination for a rule, the rule affects all instances associated with the security group. Incoming traffic is allowed based on the private IP addresses of the instances that are associated with the source security group (and not the public IP or Elastic IP addresses).
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
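To make that concrete, here is a rough sketch of what B's signing endpoint and A's call could look like, assuming B runs an Express app on port 7000, stores keys in a Postgres table, and is reachable from A through a private hostname such as b.internal; the security group is what restricts access to A, and plain HTTP inside the private network is an assumption, not a recommendation from the answer above:

// --- Server B (port 7000): signs data with a user's stored private key ---
const express = require('express');
const crypto = require('crypto');
const { Pool } = require('pg');

const pool = new Pool(); // connection settings come from PG* environment variables
const appB = express();
appB.use(express.json());

appB.post('/sign', async (req, res) => {
  const { userId, payload } = req.body;
  const { rows } = await pool.query(
    'SELECT private_key FROM keys WHERE user_id = $1', [userId]);
  if (rows.length === 0) return res.sendStatus(404);

  // The private key never leaves B; only the signature is returned.
  const signature = crypto
    .sign('sha256', Buffer.from(payload), rows[0].private_key)
    .toString('base64');
  res.json({ signature });
});

appB.listen(7000);

// --- Server A: asks B to sign, never sees the private key ---
const http = require('http');

function requestSignature(userId, payload) {
  const body = JSON.stringify({ userId, payload });
  return new Promise((resolve, reject) => {
    const req = http.request(
      { hostname: 'b.internal', port: 7000, path: '/sign', method: 'POST',
        headers: { 'Content-Type': 'application/json',
                   'Content-Length': Buffer.byteLength(body) } },
      (res) => {
        let data = '';
        res.on('data', (chunk) => (data += chunk));
        res.on('end', () => resolve(JSON.parse(data).signature));
      });
    req.on('error', reject);
    req.end(body);
  });
}

// Usage on A, e.g.: const signature = await requestSignature('user-1', tokenData);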
I'm new to Web API, HTTP and security in general, but I just want to know if this is possible: for a controller to relax security requirements when the HTTP request originated from within the local area network.
My particular application has very low security requirements for clients inside the firewall. For instance, I want internal client apps to be able to make requests of controller actions marked with [AllowAnonymous] so that they don't need to deal with OAuth, etc (which just seems like complete overkill for my internal use scenario).
However, if the same controller actions are available to the public Internet, of course strict security requirements should apply.
Can security be handled differently based on origin? Or is the standard practice to expose both a public-facing and an Internal API?
When you use the [AllowAnonymous] attribute on your controller or action, you tell ASP.NET that it should not check the user's identity at all. That is not what you want for users coming from the internet.
You could remove the [Authorize] attribute from your controller and manually check inside the action if the user is authenticated using:
if (User.Identity.IsAuthenticated || IsLocalUser())
{
    // action implementation
}
You can implement this check in a custom authorization attribute.
This still leaves you with the task of determining whether the user is local or coming from the internet. You could, of course, check the client IP address to determine this.
Another option would be to enable both Windows authentication and bearer scheme authentication if your local users are part of an Active Directory domain.
Users from your intranet could use Windows authentication to talk to the service, while internet users need to bring a JWT token. This would only work if the client application for users coming from the internet is different than for local users.
DISCLAIMER: I've never tried this last option.
Identifying a request as one from "inside the firewall" isn't always as simple as just investigating the IP address. Although this may work for you now, it may make it difficult to move or modify the environment without affecting application logic.
I would recommend developing a simple middle-layer application whose only job is to call your main application with enough authorization data to handle security in the same context as your regular app, while the middle layer itself would not require authorization. You will then just have to make sure that this app is not accessible to users outside of the firewall.
I have a middleware server (A), under an Azure Web App role.
I'm using some SOAP service from a private server (B), a third party, that filters incoming IPs, so if my request IP is not registered in their firewall, I won't be able to access any information.
The middleware is not exclusive to (B); a lot of other clients (C) request information from (A), including mobile devices (D).
I want to make sure that any request from (A) to (B) is accepted even if my current request IP changes (and it will, since the middleware is in the cloud and some changes are performed periodically).
I had in mind a CSR certificate so that server (B) knows it's my middleware (A), without caring about my request IP.
Is that idea a good choice? Am I missing something? Are there any better solutions or recommendations? I want to be able to connect (A) and (B) without affecting (C) and (D) much.
Note: If the original idea works, where should I start implementing it, given the Azure Web App constraints and the private server's?
You seem to be asking how to authenticate to this private server (B). There are many ways to handle this. There is one big question: do you have control over this private server?
If so, you have a slew of options: basic auth, cert auth, flavors of OAuth, etc. Pick one. Make sure the transport is secured via SSL/TLS, though.
You make it sound like you don't have access, at which point you are at the mercy of this third party. IP restrictions can be put in place to try to limit exposure when an authentication mechanism can't be added (or sometimes even both, for those uber least-privilege types). If they have IP filters in place, it is likely those will be difficult to remove, especially if they have other consumers of this SOAP service.
Also, a CSR is only the start of creating a certificate. You would create one if you are going to use the client-certificate auth option, or if you need to purchase a cert to secure the transport for your middleware services. However, a CSR by itself has nothing to do with authentication directly.
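If you do go the client-certificate route, here is a minimal sketch (in Node.js terms; equivalent TLS options exist on other platforms) of what the call from the middleware (A) to (B) could look like. The hostname, certificate files, endpoint path and SOAP body are placeholders, and whether (B) actually accepts client certificates is an assumption:

const https = require('https');
const fs = require('fs');

const options = {
  hostname: 'b.example.com',            // placeholder for server B
  port: 443,
  path: '/soap/endpoint',               // placeholder SOAP endpoint
  method: 'POST',
  headers: { 'Content-Type': 'text/xml; charset=utf-8' },
  // Client certificate presented to B during the TLS handshake, so B can
  // identify the middleware regardless of its source IP:
  key: fs.readFileSync('middleware-a.key'),
  cert: fs.readFileSync('middleware-a.crt'),
  // If B uses a private CA, trust it explicitly:
  ca: fs.readFileSync('b-private-ca.crt'),
};

const req = https.request(options, (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log(res.statusCode, body));
});

req.on('error', (err) => console.error('Request to B failed:', err));
req.end('<soap:Envelope>...</soap:Envelope>'); // placeholder SOAP body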
I have a public facing web service that has a token based security system. Log in is accomplished by providing a username & password and a unique token is returned that is used going forward whenever the service is called.
My question is this: Is there a secure way to differentiate between a call coming from outside our internal network and a call coming from within? I would like to provide elevated privileges to clients that are calling the service from within our internal network. Specifically we have a website running on the same network as our webservices and I would like to give the website elevated privileges when calling our service.
Is there a secure way to do this when the web service is public facing? What I don't want to happen is that someone from outside our internal network to somehow get access to elevated privileges.
The services were implemented using Java and the CXF framework.
Definitely possible, here's how I would suggest doing it.
Have a reverse proxy that sits between your application and the external clients. This reverse proxy would authenticate the token and then set the required privileges in a request header.
Elevating privileges for internal clients can be done using one of the following approaches:
1. Set an authentication header in the requests on the reverse proxy. If this header is set to true, it signals that the call is from an external client. The app can decide if it needs to authorize based on this header. Internal clients can call the service without having to go through any authentication/authorization. Note that this would completely eliminate any auth for internal clients.
2. Have rules on the reverse proxy that set additional headers containing elevated privileges based on the caller's IP. Internal clients' IPs can be kept in a list to which this applies.
3. Have two endpoints, for internal and external clients, with reverse proxies in front of both. The internal one would set elevated privileges in the request headers. A sketch of the application-side header check follows this list.
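The service in the question is Java/CXF, but the header-based idea is framework-agnostic; purely for illustration, here is what the application-side check could look like in Express terms, with a hypothetical X-Privileges header that the reverse proxy sets for internal callers and must strip from all external traffic:

const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Trust the header only because the reverse proxy controls it; if external
  // requests could carry X-Privileges, anyone could claim elevated access.
  req.privileges = req.get('X-Privileges') === 'elevated' ? 'elevated' : 'standard';
  next();
});

// Placeholder route standing in for the real service operation.
app.get('/some-resource', (req, res) => {
  if (req.privileges === 'elevated') {
    return res.json({ data: [], internalFields: true });
  }
  res.json({ data: [] });
});

app.listen(8080);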
You have options; I can think of at least two approaches immediately.
1) Also require an API key to access your web services, and special-case the access provided to the website based on its key.
2) Elevate privileges based on the IP address of the requestor (website, or internal network).
I want users, when they are in the workplace (e.g. on the LAN), to authenticate themselves with their regular username and password. Auto-login is disabled.
However, logging in from outside the LAN should trigger 2-level authentication (like SMS, mail or similar). How can we get information about the user's network when they try to log in to the application from outside the LAN?
NB: it does not matter if you have an AD user and password. If you are on the outside, you have to trigger the 2-level auth.
NB2: we do not want any client-side scripts running, so this must be something that comes with the initial request.
Technology: IIS 7, ISA 2006, .NET 4, MS SQL Server 2008.
Question also asked here: https://serverfault.com/questions/354183/what-2-level-authentication-mechanism-is-available-that-can-differentiate-if-the
Information on why the ISA server removes the information I need: http://www.redline-software.com/eng/support/articles/isaserver/security/x-forwarded-isa-track.php
If it's reasonable, don't expose your web server to anything outside of your LAN -- require VPN access.
If that isn't reasonable, you should be able to use the REMOTE_ADDR variable to determine the source of the request. Whitelist your LAN as single-factor and require everything else to be multi-factor. Depending on the scenario, the server variables will be similar to either
Context.Request.ServerVariables["REMOTE_ADDR"]
or
Request.UserHostAddress()
If you have a proxy in the way, make the proxy tag the originating IP source in the headers (e.g. X-Forwarded-For) and read the request headers to determine the external IP.