I want to securely store my users' private keys on a separate server (let's call it B), which is used to sign and decrypt information. B stores the keys in a database (Postgres). Server A (public) sends information to B. Ideally, B fetches the private key, signs a token containing the information, and sends the result back to A, instead of sending the private key to A, which would be a security issue if server A were compromised.
My options are:
web sockets
HTTPS request (https://nodejs.org/api/https.html#https_https_request_options_callback)
Questions:
Are there any other options for securely communicating between two servers?
If server B listens on port "7000", how can I make sure only server A can access it?
How would an HSM help in my case, and how does it communicate with other servers (WebSocket or HTTPS request)?
I could just take the easy route, expose server B's database on port "7000", and run queries from A, but as I said, that's not as secure. I've heard that an HSM handles/decrypts information and sends the result back, so I thought I could do something similar with normal servers.
Thanks, any help would be appreciated.
UPDATE
#zaph has answered questions 2 and 3.
Question: Does server A need to make an HTTPS request to the private IP address of server B, for example https://203.0.113.25, with server B using an API router to handle the request? An IP address isn't a DNS name, though, so that won't work with certificates. So how do the servers communicate and send/receive data?
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario3.html
For others: use security groups and configure them so that only a specific instance can access the port. Then make a normal request, e.g. domain.com:PORT, where PORT is the port the instance is listening on (see the sketch after the quoted docs below).
When you specify a security group as the source or destination for a rule, the rule affects all instances associated with the security group. Incoming traffic is allowed based on the private IP addresses of the instances that are associated with the source security group (and not the public IP or Elastic IP addresses).
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
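To make the update concrete, here is a minimal sketch of server B exposing a signing endpoint inside the VPC, so that only signatures, never private keys, travel back to A. The route name, port, and key-lookup helper are my assumptions, not part of the original setup:

```js
// Hypothetical sketch: server B exposes a /sign endpoint that only server A
// can reach (enforced by the security group on port 7000).
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

// Placeholder: in practice this would query B's Postgres database for the
// PEM-encoded private key belonging to userId.
async function loadPrivateKeyPem(userId) {
  throw new Error('look up the key for ' + userId + ' here');
}

app.post('/sign', async (req, res) => {
  try {
    const { userId, payload } = req.body;
    const privateKeyPem = await loadPrivateKeyPem(userId);
    const signature = crypto
      .sign('sha256', Buffer.from(JSON.stringify(payload)), privateKeyPem)
      .toString('base64');
    res.json({ signature }); // only the signature goes back to A
  } catch (err) {
    res.status(500).json({ error: 'signing failed' });
  }
});

// Reachable only from A because the security group restricts port 7000.
app.listen(7000);
```

Server A would then call this endpoint with an ordinary request against B's internal hostname, with the security group ensuring nobody else can reach port 7000.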
Related
Please forgive the wishy-washy nature of this question; I'm unsure how better to phrase it.
I have a Node.js server which will be accessed (HTTP + WebSockets) through a variety of third-party domains, by the third parties adding an A record in their DNS pointing at my IP. I can find the originating third-party domain by looking at the request headers. Node then acts as a proxy, modifying the request headers/adding metadata before forwarding the request on to another URL at the third party.
Could anyone please explain how SSL/TLS operates when the third party's certificate is a wildcard cert for the originating domain? How is the chain of encryption carried through to Node: do I need to host a copy of the third party's certificate on the Node server? (Obviously I'd rather not.) Can I use a third party's existing SSL setup to any advantage?
Many thanks in advance!
DNS and HTTPS are fairly unrelated here. The client only uses DNS to find the web server's IP address. After that, the HTTP protocol carries the host name being requested in the Host header, as you have determined.
Your server will need an HTTPS certificate for each host name that it will handle requests for, otherwise browsers will not be able to make a trusted connection to it. The certificate says "this server is authorized to handle requests for this host name".
In practice, though, DNS and HTTPS are related, because if you control DNS you can issue a certificate. Let's Encrypt has made this very easy to set up.
I would not recommend sharing certificates with third parties, as that can be a bit of a pain, and it is harder to keep private keys secure if you are emailing them back and forth or something. Just issue your own certs for the third-party domains you need to serve.
My personal favorite solution for a case like yours is running a Caddy server instance in front of the app to manage HTTPS certificates automatically and proxy requests to the Node backend. It can even issue certs on demand as it receives requests.
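If you do want to terminate TLS in Node itself, a rough sketch of "one certificate per host name" looks like the following. The certificate paths and host names are made up; the point is the SNI callback, which picks a certificate based on the name the client asked for during the TLS handshake:

```js
// Sketch: serve a different certificate per Host name via SNI, so each
// third-party domain pointed at this server gets its own cert.
const https = require('https');
const tls = require('tls');
const fs = require('fs');

// Load the key/cert pair for a given domain (paths are placeholders).
function contextFor(domain) {
  return tls.createSecureContext({
    key: fs.readFileSync(`/etc/certs/${domain}/privkey.pem`),
    cert: fs.readFileSync(`/etc/certs/${domain}/fullchain.pem`),
  });
}

const server = https.createServer(
  {
    // Fallback cert for the server's own name.
    key: fs.readFileSync('/etc/certs/default/privkey.pem'),
    cert: fs.readFileSync('/etc/certs/default/fullchain.pem'),
    // Called with the server name requested by the client.
    SNICallback: (servername, cb) => {
      try {
        cb(null, contextFor(servername));
      } catch (err) {
        cb(err);
      }
    },
  },
  (req, res) => {
    res.end(`hello from ${req.headers.host}`);
  }
);

server.listen(443);
```

A fronting proxy like Caddy does essentially this for you, plus the certificate issuance and renewal.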
I need to post three strings in a form from site A to another site B, which are hosted on different servers.
Both sites use HTTPS connection.
My question is:
Are the three strings encrypted (using site B's HTTPS connection) during transmission across the network? I have a feeling the three strings are not encrypted, but I don't know the main reasons why.
When using HTTPS, all form data, indeed all data, passed between client and server is encrypted during transmission.
HTTPS is a secure channel between the client (browser) and the server that terminates HTTPS (usually the web server, but it can also be a load balancer, for instance). Anything sent between the client and the server over HTTPS is encrypted, its integrity is protected, and the server is authenticated (but the client is not). This means a man-in-the-middle attacker cannot read the traffic, cannot modify it (by doing things like reordering packets), and cannot impersonate the server (however, the lack of client authentication means an attacker can impersonate the client unless authentication is implemented in the application).
All of this implies that any traffic downloaded from site A over HTTPS is secure between site A and the client, and any traffic sent from the client to site B is again secure between the client and site B. However, in both cases the client terminates the HTTPS connection, meaning the client can read or tamper with the data, i.e. you cannot guarantee on server B that a potentially malicious user with access to the client has not changed the data downloaded from server A before passing it to server B.
Regardless of this, if you only take the connection from the client to server B, that is of course encrypted and secure.
It's worth noting that, due to the way the network stack (TCP/IP) works, some information is leaked. For example, a man-in-the-middle attacker will learn the endpoint IP addresses, and in most cases the approximate amount of data transferred. However, they will see nothing from the HTTP protocol itself (request or response headers, bodies, etc.).
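As a small illustration of the point that TLS encrypts everything above the transport layer, here is what a plain Node.js client posting three form fields over HTTPS could look like (the URL and field names are invented). The path, headers, and body are all encrypted on the wire; only the host/IP and rough traffic volume remain visible to an observer:

```js
// Sketch: POST three form fields to site B over HTTPS.
const https = require('https');

const body = new URLSearchParams({
  field1: 'first string',
  field2: 'second string',
  field3: 'third string',
}).toString();

const req = https.request(
  {
    hostname: 'site-b.example.com', // placeholder host
    path: '/receive',
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      'Content-Length': Buffer.byteLength(body),
    },
  },
  (res) => {
    res.on('data', (chunk) => process.stdout.write(chunk));
  }
);

req.on('error', console.error);
req.write(body);
req.end();
```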
I have an architectural question.
Let's say I have a route '/tickets'. I can easily authenticate users accessing this route using Passport. I can further protect this route via ACLs.
Now let's say my internal app or a process wants to access this same route. I'm thinking I might only have one option: create a separate user/password with the right role and have my internal app or process make an HTTP call to this route using those separate credentials.
So, is this the right way to access internal APIs?
Any other suggestions that might be useful?
Thanks
There are lots of different options for routes that are only accessible internally:
1. Create a different server on a different port, but in the same process, that implements the private routes, and block access to that port at your firewall. This is my favorite option because it's really simple to do right. You create a second HTTP server on a non-standard port (like 8000 or something like that) and make sure your firewall only allows public access to port 80 or 443. You then put all your internal routes on the port-8000 server. But, since it's in the same process as your public server, you can still access all the same data. With one simple firewall rule (which is probably already in place by default), you block access to all private routes.
2. Authenticate all routes and use different credentials for internal routes. Here, you have a separate set of credentials that is kept internal and is used only for your internal routes. Since nobody in the outside world has those credentials, they shouldn't be able to use them.
3. Block public access to internal routes at the networking level. Here you would use either a proxy or a firewall to block access to specific internal routes at the networking level. So, if your incoming firewall saw a request for /admin/whatever, it would just block anything starting with /admin. Internal routes could then be used freely, with or without credentials, but only from within your server's private network. They couldn't be accessed from the outside.
4. Allow access only to known public routes at the networking level. This is just the reverse of option 3. Here you specifically tell your firewall/proxy which routes the public is allowed to access.
5. Allow access to private routes only from certain IP addresses. Here, you deny access to any private route if the source IP address is not a local IP address on your own network. This is just a different way of implementing 3 or 4. It can be made fairly restrictive, allowing access only from one or a small, specific set of internal IP addresses if you want.
For additional security, you can also combine the options, which is sometimes useful for preventing internal attacks on your own infrastructure, whether from a malicious employee or from some other compromised piece of infrastructure on your private network.
For example, you could combine options 1, 2, and 5: you'd create a separate server port that is not accessible from the public internet, authenticate every request to it with internal-only credentials, and only allow access to it from specific internal IP addresses. I'm not saying you have to combine all of those, just pointing out that these options are not mutually exclusive. My favorite would be to combine 1 and 2.
FYI, if you want private access to the same functionality, such as /tickets, but with different access control, you can use the second server that is only accessible internally (as described in option 1 above) and just define a /tickets route on it with different access control. The two servers can share the same /tickets implementation (just put the implementation in a function that both can call); they would simply have different route definitions specifying the authentication required. You could even have the private server set a flag on the request object that tells the rest of your code which entry point the route was called from (public or private), so it can branch on that information; see the sketch below.
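Here is a rough sketch of option 1 combined with the shared /tickets idea. The ports, middleware, and flag name are illustrative only:

```js
// Two Express servers in one process: a public one and a private one that the
// firewall (and a localhost bind) keeps off the public internet.
const express = require('express');

// Shared implementation used by both entry points.
function listTickets(req, res) {
  // req.isInternal is set only by the private server below.
  res.json({ tickets: [], internal: Boolean(req.isInternal) });
}

// Public server: exposed through the firewall (8080 stands in for 80/443 here).
const publicApp = express();
publicApp.get('/tickets', /* passport/acl middleware here, */ listTickets);
publicApp.listen(8080);

// Private server: same process, non-standard port blocked at the firewall,
// and bound to an internal interface for good measure.
const privateApp = express();
privateApp.use((req, res, next) => {
  req.isInternal = true;
  next();
});
privateApp.get('/tickets', listTickets);
privateApp.listen(8000, '127.0.0.1');
```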
My web app uses a third-party tool for storing sensitive data, which can send events via a callback URL (i.e. when something changes it will make a request to the given URL). To prevent malicious requests, the third-party tool suggests checking the IP address of the request to ensure that it came from their server, but this seems like it would be vulnerable to spoofing.
Questions:
Is it safe to validate origin of requests in this way?
Would client certificates be a more reasonable approach for them?
If the third-party app is across the internet, then your check is protected against IP spoofing, as the TCP three-way handshake cannot complete if the IP address is spoofed (discounting large-scale attacks such as IP hijacking).
If the third-party app is on another server within your local network, then another user on that network could simply set their local IP to that of the app in order to spoof it.
To summarise:
Could a web app which authenticates a client only by IP address be exploited?
No - if the app is internet based, the risk of IP spoofing is very low.
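For reference, the check itself is a few lines of Express. This is a hedged sketch: the allowed address is made up, and if your app sits behind a proxy or load balancer you must rely on a trusted X-Forwarded-For value (via app.set('trust proxy', ...)) rather than the raw socket address:

```js
// Sketch: accept the callback only from the third-party tool's published IP.
const express = require('express');

const ALLOWED_CALLBACK_IPS = new Set(['198.51.100.7']); // placeholder address

const app = express();

app.post('/callback', (req, res) => {
  // Strip the IPv4-mapped IPv6 prefix Node may report (e.g. ::ffff:198.51.100.7).
  const source = (req.socket.remoteAddress || '').replace(/^::ffff:/, '');
  if (!ALLOWED_CALLBACK_IPS.has(source)) {
    return res.status(403).end();
  }
  // ...handle the event...
  res.status(204).end();
});

app.listen(3000);
```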
I am developing a Node.js app based on the Express framework. On the backend, I need to have servers talk to each other (i.e. server 1 makes a request to server 2).
Is it OK to forego a DNS A-Record and just use the IP address of the server?
In that case, how do I authenticate the server and "client" (also a server)? I was thinking of requiring the server and "client" to each pass a secure cookie with their requests and responses. The secure cookie would then be verified before any other action was taken.
Using an IP might be more secure than DNS (e.g. no DNS spoofing), but it still allows ARP spoofing, where some other computer claims to have that IP. And if the two computers are not on the same network, there are also ways to hijack requests at routers, etc.
The secure cookie is nothing but a shared secret. And contrary to public-key-based authentication (e.g. using certificates), shared secrets have the disadvantage that you need to distribute them in a secure way so that nobody else gets access to them.
I don't think your idea is easier to handle than SSL with certificates, so I don't see the advantage of inventing your own secure protocol. History tells us that such homegrown protocols mostly provide worse security than established solutions.
If you don't care about security (these hosts are on your network, in which you have trust), don't bother with the homebaked cookies.
If you do care about security get (or generate your own) certificate and use SSL.
I was thinking of requiring the server and "client" to each pass a secure cookie with their request and responses. The secure cookie would then be verified before any other action was taken.
This is not secure at all! Anybody situated on an appropriate network between the client and server can see that "secure cookie", as well as any subsequent communications. This would allow them to reuse that cookie themselves to impersonate either the client or server, and would expose any sensitive information sent in the exchange.
Use SSL. It has already solved all of these problems, and more.
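To show what "use SSL with certificates" can look like for server-to-server traffic, here is a sketch of mutual TLS between the two servers using certs issued by your own internal CA. All file names and the hostname are placeholders; the key points are requestCert/rejectUnauthorized on the server and the client presenting its own key/cert:

```js
// Sketch: mutual TLS between two Node servers with an internal CA.
const https = require('https');
const fs = require('fs');

// Server 2: only accepts clients presenting a cert signed by our CA.
const server = https.createServer(
  {
    key: fs.readFileSync('server2-key.pem'),
    cert: fs.readFileSync('server2-cert.pem'),
    ca: fs.readFileSync('internal-ca.pem'),
    requestCert: true,
    rejectUnauthorized: true, // drop connections without a valid client cert
  },
  (req, res) => {
    // The verified client certificate identifies the caller.
    res.end('hello, ' + req.socket.getPeerCertificate().subject.CN);
  }
);
server.listen(8443);

// Server 1 acting as the client, presenting its own cert and trusting only
// the internal CA (not the public CA bundle).
const req = https.request(
  {
    host: 'localhost', // placeholder; must match the server cert's name
    port: 8443,
    key: fs.readFileSync('server1-key.pem'),
    cert: fs.readFileSync('server1-cert.pem'),
    ca: fs.readFileSync('internal-ca.pem'),
  },
  (res) => res.pipe(process.stdout)
);
req.on('error', console.error);
req.end();
```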