I have a public computer that is used in an ATM sort of fashion. When a certain action occurs (person inserts money), the program I've written on the computer sends a request to a trusted server which does a very critical task (transfers money).
I'm wondering: since I have to communicate with a server to start the critical task, the credentials for that communication are stored on this public computer. How do I prevent hackers from obtaining this information and running the critical task with their own parameters?
HSMs (hardware security modules) are designed to store keys safely:
A hardware security module (HSM) is a physical computing device that safeguards and manages digital keys for strong authentication and provides cryptoprocessing. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server.
HSMs may possess controls that provide tamper evidence such as logging and alerting and tamper resistance such as deleting keys upon tamper detection. Each module contains one or more secure cryptoprocessor chips to prevent tampering and bus probing.
Impossible in general
If your user has physical access to this PC, they can easily trigger a fake "money inserted" event. Your model is doomed.
Minimize attack surface
This PC ought to have a unique token (a permanent cookie is enough), and the server should refuse any request without a valid token. The server maintains a database of device types, and this ATM-PC is only allowed certain operations (deposit money up to NNN units). Ideally it is also rate-limited (at most one request per 3 seconds).
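A minimal sketch of that server-side check, assuming a Node/Express server; the route, deviceRegistry and the limits are illustrative names, not part of any existing system:

    // Sketch: per-device token check, operation allow-list and rate limit.
    import express from "express";

    const app = express();
    app.use(express.json());

    // Registered devices: token -> allowed operations and limits (illustrative).
    const deviceRegistry = new Map([
      ["atm-pc-17-token", { ops: ["deposit"], maxAmount: 500, lastCall: 0 }],
    ]);

    app.post("/deposit", (req, res) => {
      const device = deviceRegistry.get(req.header("X-Device-Token") ?? "");
      if (!device || !device.ops.includes("deposit")) {
        return res.status(403).send("unknown device or operation not allowed");
      }
      const now = Date.now();
      if (now - device.lastCall < 3000) {       // at most once per 3 seconds
        return res.status(429).send("rate limit exceeded");
      }
      if (req.body.amount > device.maxAmount) { // deposit up to NNN units
        return res.status(400).send("amount above device limit");
      }
      device.lastCall = now;
      // ... forward to the trusted server / perform the critical task ...
      res.send("ok");
    });

    app.listen(8443); // behind TLS in any real deployment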
Related
The freeNAS 11.3 console allows changing the root password far too easily:
When starting my freeNAS test server (freeNAS 11.3 Rel-p5 r325575), after booting for a while I get an options menu with 11 entries:
----
1) Configure Network Interfaces
...
7) Reset Root Password
...
11) Shut Down
----
With option 7) I am immediately allowed to choose a new password, without being asked for the old one ...
In my opinion that can be a problem in situations where the freeNAS server is running in a somewhat "unsecure" environment.
Why is there not (at least) an option that requires the old password before a new one can be set?
From the Immutable Laws of IT Security:
Law #3: If a bad guy has unrestricted physical access to your computer, it’s not your computer anymore
What you have here is direct access to the machine. No remote interface of any good NAS system will allow a remote user to reset a password without authentication. However, it is assumed that if you have physical access to the machine (a direct tty or session on the device, a keyboard attached to the physical device), you could do all sorts of things to steal the data. Hence there is no tight security around the reset-password option.
Especially with a NAS, one could simply grab the data drives and hook them into a different system to read them. To prevent this, encrypt your data, either file by file and/or the whole disk at once with something like BitLocker. You then have to type in your password on each startup, but the data will be encrypted at rest, meaning it will take an attacker considerably longer to get to it.
Assume the Joker is a maximally sophisticated, well-equipped and malicious user of Batman's start-up batmanrules.com, hosted on, say, AWS infrastructure. The business logic of batmanrules.com requires that unregistered users be able to send HTTP requests to the REST API layer of batmanrules.com, which leads to the invocation (in one way or another) of queries against an AWS-based DB. Batman doesn't want to be constrained by DB type (it can be either SQL or NoSQL).
The Joker wants to ruin Batman financially by sending as many HTTP requests as he can in order to run up Batman's AWS bill. The Joker uses all the latest tricks in the book, employing DDoS-like methods to send HTTP requests from different IP addresses that target all sorts of mechanisms within batmanrules.com's business logic.
Main Question: how does Batman prevent financial ruin while keeping his service running smoothly for his normal users?
Assuming a lot of traffic, how can you weed out the 'malicious' queries from the non-malicious, especially when users aren't registered? I know you can rate-limit by IP address, but can't the Joker (who is maximally sophisticated and well-equipped) find clever ways to issue requests from ever-changing IP addresses, and/or tweak the requests so that no two are exactly the same?
Note: my question focuses not on denial of service -- let's assume it's OK if the site goes down for a while -- but rather on Batman's financial loss. Batman has done a great job of making the architecture scale up and down with varying load; his only concern is that high loads (induced by the Joker's shenanigans) entail high cost.
My instinct tells me that there is no silver bullet here, and that Batman would have to build safeguards into his business logic (e.g. shut down if traffic spikes beyond certain parameters) and/or require reCAPTCHA tokens on all non-trivial requests submitted to the REST API.
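For what it's worth, the reCAPTCHA part would be verified server-side against Google's documented siteverify endpoint; a minimal sketch, where the secret and the surrounding wiring are placeholders:

    // Sketch: verify a reCAPTCHA token before running any costly query.
    // RECAPTCHA_SECRET is a placeholder; keep the real secret server-side.
    async function verifyRecaptcha(token: string): Promise<boolean> {
      const resp = await fetch("https://www.google.com/recaptcha/api/siteverify", {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: `secret=${process.env.RECAPTCHA_SECRET}&response=${encodeURIComponent(token)}`,
      });
      const data = await resp.json();
      return data.success === true; // for reCAPTCHA v3, also check data.score
    }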
You can use AWS WAF and configure rules to block malicious users.
For example, a straightforward rule would be rate-based blocking: if you determine that it is highly unlikely for a legitimate client to exceed X requests within a given window from the same IP address, you block addresses that do.
For advanced use cases you can implement custom rules by analyzing the request logs with Lambda and applying the block in WAF.
In addition, as you correctly identified, it is not possible to prevent every malicious request. The goal should be ongoing inspection and prevention, with the right architecture in place to block requests as the need arises.
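As an illustration, a rate-based rule like the one above can be created with the AWS SDK; a sketch, where the ACL name, limit and scope are placeholder values:

    // Sketch: a WAFv2 web ACL with one rate-based rule that blocks any IP
    // sending more than 1000 requests per 5-minute window.
    import { WAFV2Client, CreateWebACLCommand } from "@aws-sdk/client-wafv2";

    const client = new WAFV2Client({ region: "us-east-1" });

    await client.send(new CreateWebACLCommand({
      Name: "batman-acl",
      Scope: "REGIONAL", // or "CLOUDFRONT" for a CloudFront distribution
      DefaultAction: { Allow: {} },
      VisibilityConfig: {
        SampledRequestsEnabled: true,
        CloudWatchMetricsEnabled: true,
        MetricName: "batman-acl",
      },
      Rules: [{
        Name: "rate-limit-per-ip",
        Priority: 0,
        Action: { Block: {} },
        Statement: {
          RateBasedStatement: { Limit: 1000, AggregateKeyType: "IP" },
        },
        VisibilityConfig: {
          SampledRequestsEnabled: true,
          CloudWatchMetricsEnabled: true,
          MetricName: "rate-limit-per-ip",
        },
      }],
    }));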
I wrote a multi-process realtime WebSocket server which uses the session id to load-balance traffic to the relevant worker based on the port number that it is listening on. The session id contains the hostname, source port number, worker port number and the actual hash id which the worker uses to uniquely identify the client. A typical session id would look like this:
localhost_9100_8000_0_AoT_eIwV0w4HQz_nAAAV
I would like to know the security implications for having the worker port number (in this case 9100) as part of the session id like that.
I am a bit worried about denial-of-service (DoS) threats. In theory, this could allow a malicious user to generate a large number of HTTP requests targeted at a specific port number (for example by using a fake session ID which contains that port number). But is this a serious threat (assuming you have decent firewalls)? How do big companies like Google handle sticky sessions from a security perspective?
Are there any other threats which I should consider?
The reason why I designed the server like this is to account for the initial HTTP handshake, and also for when the client does not support WebSocket (in which case HTTP long-polling is used, and hence subsequent HTTP requests from a client need to go to the same worker in the backend).
So there are several sub-questions in your question. I'll try to split them up and answer them accordingly:
Is DoS-Attack on a specific worker a serious threat?
It depends. If you will have 100 users, probably not. But you can be sure that there are people out there who will have a look at your application, try to figure out its weaknesses and exploit them.
Now, is a DoS attack on single workers a serious possibility if you can just attack the whole server? I would actually say yes, because it is a more precise attack: you need fewer resources to kill the workers when you do it one by one. However, if you allow connections from the outside only on port 80 for HTTP and block everything else, this problem is solved.
How do big companies like Google handle dealing with sticky sessions?
Simple answer: who says they do? There are multiple other ways to solve the problem of sessions in a distributed system:
don't store any session state on the server; just have a key in the cookie with which you can identify the user again, similar to an automatic login (see the sketch after this list)
store the session state in a database or object storage (this adds a lot of overhead)
store session information in the proxy (or broker, HTTP endpoint, ...) and send it together with the request to the next worker
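A sketch of the first option, a stateless session token that any worker holding the key can verify without a shared session store (the key handling and payload here are illustrative):

    // Sketch: stateless session token = payload + HMAC signature.
    import { createHmac, timingSafeEqual } from "crypto";

    const KEY = process.env.SESSION_KEY ?? "dev-only-key"; // placeholder key

    function sign(userId: string): string {
      const mac = createHmac("sha256", KEY).update(userId).digest("hex");
      return `${userId}.${mac}`;
    }

    function verify(token: string): string | null {
      const [userId, mac] = token.split(".");
      if (!userId || !mac) return null;
      const expected = createHmac("sha256", KEY).update(userId).digest("hex");
      const a = Buffer.from(mac);
      const b = Buffer.from(expected);
      return a.length === b.length && timingSafeEqual(a, b) ? userId : null;
    }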
Are there any other threats which I should consider?
There are always unforeseen threats, and that's the reason why you should never publish more information than necessary. For that matter, most big companies don't even publish the correct name and version of their web server (for Google it is gws, for instance).
That being said, I see why you might want to keep your implementation, but maybe you can modify it slightly: store in your load balancer a dictionary keyed by a hashed value of hostname, source port number and worker port number, and use as the session id a pair of two hashes. Then the load balancer knows, by looking into the dictionary, which worker the request needs to be sent to. This info should be saved together with a timestamp of when it was last used, and every minute you can delete unused entries.
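A sketch of that modification (everything here is illustrative, including the hash sizes and the one-minute sweep):

    // Sketch: session id = routing hash + client hash; the load balancer keeps
    // a private table mapping routing hash -> worker, so no port is exposed.
    import { createHash, randomBytes } from "crypto";

    interface Route { workerPort: number; lastUsed: number; }
    const routes = new Map<string, Route>();

    function makeSessionId(host: string, srcPort: number, workerPort: number): string {
      const routingHash = createHash("sha256")
        .update(`${host}:${srcPort}:${workerPort}:${randomBytes(8).toString("hex")}`)
        .digest("hex").slice(0, 16);
      routes.set(routingHash, { workerPort, lastUsed: Date.now() });
      const clientHash = randomBytes(16).toString("hex"); // worker's client id
      return `${routingHash}.${clientHash}`;              // no port visible
    }

    function routeRequest(sessionId: string): number | null {
      const route = routes.get(sessionId.split(".")[0]);
      if (!route) return null;
      route.lastUsed = Date.now();
      return route.workerPort;
    }

    // Every minute, drop entries that have not been used recently.
    setInterval(() => {
      const cutoff = Date.now() - 60_000;
      for (const [key, route] of routes) {
        if (route.lastUsed < cutoff) routes.delete(key);
      }
    }, 60_000);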
I am writing a build script which has some password protected files (keys). I need a way to prompt the user once for the password and then use this key across multiple scripts. These scripts do not live inside the same shell, and may spawn other windows via dbus. I can then send them commands, one of which must have access to the password.
I have this working already, but at a few points the passphrase is either used directly on a command line (passed via dbus) or is put into a file (whose name is then passed to the other script). Both of these are less secure than I want.* The command line ends up in a history which may be stored in a file, as well as appearing in the process list, and the second option stores the password in a file which can be read by somebody else.
Is there some standard way to create a temporary communications channel between two processes which could communicate the password and not be intercepted by another user on the system (including root)?
*Note: This is primarily an exercise to be fully secure. For my current project the temporary in-file storage of the password is okay.
Setting "root being all-powerful" aside, I would imagine that a Private DBus Connection would do the trick although the documentation I could find seems a little light on what exactly makes a private connection private.
However, the DBus Specification, more specifically, the Message Bus Specification subsection on eavesdropping says in part:
Receiving a unicast message whose DESTINATION indicates a different recipient is called eavesdropping. On a message bus which acts as a security boundary (like the standard system bus), the security policy should usually prevent eavesdropping, since unicast messages are normally kept private and may contain security-sensitive information.
So you may not even need to use private connections, which incur more overhead. But on a risk/reward basis, with security being paramount, the private connection may be the more secure alternative for you. Hope that helps.
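As an aside, if you want a channel without D-Bus at all, one common pattern is a Unix domain socket inside a directory only your user can enter; a sketch (paths and names are illustrative, and note that root can still read it, per the caveat above):

    // Sketch: hand a secret to a sibling process over a Unix domain socket
    // created in a mode-0700 temp directory, so other users cannot connect.
    import net from "net";
    import fs from "fs";
    import os from "os";
    import path from "path";

    const dir = fs.mkdtempSync(path.join(os.tmpdir(), "secret-")); // 0700 by default
    const sock = path.join(dir, "chan.sock");

    const server = net.createServer((conn) => {
      conn.end("the-passphrase\n"); // send the secret to the single client
      server.close();               // one-shot channel
    });
    server.listen(sock, () => console.log(`connect to ${sock}`));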
What mechanisms already exist for designing a P2P architecture in which the different nodes do work separately, in order to split a task (say, distributed rendering of a 3D image), but, unlike torrents, they don't get to see or hijack the contents of the packets being transmitted? Only the original task requester is entitled to view the results of the complete task.
Are there any working implementations of that principle already?
EDIT: Maybe I formulated the question wrongly. The idea is that even when they are able to work on the contents of the separate packets being sent, the separate nodes never get the chance to assemble the whole picture. Only the one requesting the task is supposed to do this.
If you have direct P2P connections (no "promiscuous" or "multicasting" sort of mode), the receiving peers should only "see" the data sent to them, nothing else.
If you have relay servers on the way and you are worried that they can sniff the data, I believe encryption is the way to go.
What we do is that peer A transmits data to peer B in an S/MIME envelope: the content is signed with the private key of peer A and encrypted with the public key of peer B.
Only peer B can decrypt the data and is guaranteed that peer A actually sent the data.
This whole process is costly, CPU- and byte-wise, and may not be appropriate for your application. It also requires some form of key-management infrastructure: peers need to subscribe to a community which issues certificates, for instance.
But the basic idea is there: asymmetric encryption with a key pair, or shared-secret encryption.
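To make the envelope idea concrete, here is a minimal sign-then-encrypt sketch. It is not actual S/MIME (which wraps everything in CMS structures and uses hybrid encryption for payloads larger than one RSA block), and the keys are generated in place only for the demo:

    // Sketch: peer A signs a small chunk with its private key, then encrypts
    // it for peer B's public key; B decrypts and verifies A's signature.
    import {
      sign, verify, publicEncrypt, privateDecrypt, generateKeyPairSync,
    } from "crypto";

    const peerA = generateKeyPairSync("rsa", { modulusLength: 2048 });
    const peerB = generateKeyPairSync("rsa", { modulusLength: 2048 });

    const chunk = Buffer.from("render tile #42");              // small payload
    const signature = sign("sha256", chunk, peerA.privateKey); // A signs
    const ciphertext = publicEncrypt(peerB.publicKey, chunk);  // only B can read

    // On peer B:
    const plain = privateDecrypt(peerB.privateKey, ciphertext);
    const authentic = verify("sha256", plain, peerA.publicKey, signature);
    console.log(plain.toString(), authentic); // "render tile #42" true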