SO_PEERCRED vs SCM_CREDENTIALS - why are there both of them? - linux

SO_PEERCRED is a simple way to get the pid/uid/gid of the peer of a connected AF_UNIX stream socket; SCM_CREDENTIALS is more or less the same, but more complex (it involves various ancillary messages). Links to an example showing both ways.
Why are there two ways to get more or less the same information?
Why is the more convenient SO_PEERCRED not listed in the unix(7) manpage?
Which is used more in real-life applications?
What should I use?

If I understand correctly, there is a subtle difference between the two. SO_PEERCRED retrieves the credentials of the peer process, without requiring any interaction from the peer process. In contrast, SCM_CREDENTIALS is a mechanism to send / receive credentials of the peer process, which are then checked by the kernel. This subtle difference may matter when a process is running as UID 0. SCM_CREDENTIALS allows a process running as UID 0 to declare itself less privileged (e.g., UID 50), whereas this would not be possible with SO_PEERCRED.
See above. I guess using SCM_CREDENTIALS is encouraged and SO_PEERCRED is only supported for compatibility.
The dbus daemon seems to use SO_PEERCRED and getpeereid(). I think it is best to copy their code in order to portably get the credentials.
http://cgit.freedesktop.org/dbus/dbus/tree/dbus/dbus-sysdeps-unix.c?id=edaa6fe253782dda959d78396b43e9fd71ea77e3
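For reference, the SO_PEERCRED route on Linux is just a getsockopt() call on the connected socket. A minimal sketch (connfd is assumed to be an accepted AF_UNIX stream socket; error handling omitted):

#define _GNU_SOURCE              /* for struct ucred */
#include <sys/socket.h>
#include <stdio.h>

void print_peer_creds(int connfd)
{
    struct ucred cred;
    socklen_t len = sizeof(cred);

    if (getsockopt(connfd, SOL_SOCKET, SO_PEERCRED, &cred, &len) == 0)
        printf("peer pid=%ld uid=%ld gid=%ld\n",
               (long)cred.pid, (long)cred.uid, (long)cred.gid);
}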

SO_PEERCRED returns the socket peer's credentials. SCM_CREDENTIALS allows you to pass any credentials you have privilege over. This is particularly valuable because the kernel will translate the ids, so a task in one pid namespace can send a pid to a process in another namespace and be assured that the received pid will refer to the same process it intended.
If you want the peer's credentials then use SO_PEERCRED. SCM_CREDENTIALS carries a credential which the caller specified (and had to have privilege over), not necessarily the peer's.
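To illustrate the difference, here is roughly what the sending side of SCM_CREDENTIALS looks like: the credentials travel as an ancillary message on sendmsg(), and the receiver only sees them if it has enabled SO_PASSCRED. A rough sketch, not production code (error handling omitted; a privileged sender could fill in ids other than its own):

#define _GNU_SOURCE              /* for struct ucred */
#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>
#include <unistd.h>

void send_own_creds(int sockfd)
{
    struct ucred cred = { .pid = getpid(), .uid = getuid(), .gid = getgid() };
    char dummy = 'x';
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union { char buf[CMSG_SPACE(sizeof cred)]; struct cmsghdr align; } u;
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = u.buf, .msg_controllen = sizeof u.buf };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_CREDENTIALS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof cred);
    memcpy(CMSG_DATA(cmsg), &cred, sizeof cred);

    sendmsg(sockfd, &msg, 0);
}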

Related

zmq_connect() a socket while waiting for a zmq_send() or zmq_recv()

I'm working on an application where I want to use ZeroMQ to connect nodes of different types which may be added and removed while the system is running. This means that I want to call zmq_connect() or zmq_disconnect() at any time as nodes come and go.
Some connections use sockets of type ZMQ_REQ, which block when no peers are available. Thus, it may happen that one node is blocked in a zmq_recv(), without any node available for processing the request. If a new node then becomes available, I would like to connect the socket using zmq_connect(). The only way I can see to do that is to call zmq_connect() from a different thread. But the documentation states pretty clearly that zmq_socket instances cannot be used from multiple threads simultaneously.
How can I solve this problem: sending messages on a ZMQ_REQ socket without any connections (or with connections which cannot be established), and then later adding connections and having the waiting requests processed?
You should not call zmq_recv() when no messages are ready; that way you avoid blocking your thread. Instead, check that there is indeed a message to receive. The easiest way to achieve this is using a poller. Since you haven't stated which library or language you're using I can't give you the exact example, but I guess the C example from the ZeroMQ Guide's examples could be of use.
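For what it's worth, the polling pattern in plain C looks roughly like this (a sketch; requester is assumed to be an already-created ZMQ_REQ socket, and the timeout is in milliseconds with libzmq 3.x and later):

zmq_pollitem_t items[] = { { requester, 0, ZMQ_POLLIN, 0 } };
char buf[256];

while (1) {
    zmq_poll(items, 1, 1000);                    /* wait up to one second */
    if (items[0].revents & ZMQ_POLLIN) {
        int n = zmq_recv(requester, buf, sizeof buf, 0);  /* will not block now */
        if (n >= 0) {
            /* handle the n-byte reply ... */
        }
    }
    /* nothing arrived: this same thread is free to call zmq_connect()/zmq_disconnect() here */
}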
Building ZeroMQ-based applications is, in my experience, most effective when you build single-threaded nodes that react to messages and, if necessary, run methods based on time intervals.
For building a system like you talk about I suggest you look at the Service Discovery chapter of the awesome ZeroMQ Guide.

Communicate password securely to another program (separate shell/dbus)

I am writing a build script which has some password protected files (keys). I need a way to prompt the user once for the password and then use this key across multiple scripts. These scripts do not live inside the same shell, and may spawn other windows via dbus. I can then send them commands, one of which must have access to the password.
I have this working already, but at a few points the passphrase is either used directly on a command line (passed via dbus), or is put into a file (whose name is then passed to the other script). Both of these are less secure than I want*. The command line ends up in a history which may be stored in a file, as well as appearing in the process list, and the second option stores the passphrase in a file which can be read by somebody else.
Is there some standard way to create a temporary communications channel between two processes which could communicate the password and not be intercepted by another user on the system (including root)?
*Note: This is primarily an exercise to be fully secure. For my current project the temporary in-file storage of the password is okay.
Setting "root being all-powerful" aside, I would imagine that a Private DBus Connection would do the trick although the documentation I could find seems a little light on what exactly makes a private connection private.
However, the DBus Specification, more specifically, the Message Bus Specification subsection on eavesdropping says in part:
Receiving a unicast message whose DESTINATION indicates a different recipient is called eavesdropping. On a message bus which acts as a security boundary (like the standard system bus), the security policy should usually prevent eavesdropping, since unicast messages are normally kept private and may contain security-sensitive information.
So you may not even need to use private connections, which incur more overhead. But on a risk/reward basis, with security being paramount, that may be the more secure alternative for you. Hope that helps.
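For completeness, if you do decide on a private connection, libdbus exposes it through dbus_bus_get_private() (or dbus_connection_open_private() for a peer-to-peer address). A minimal sketch, with error handling and the actual message exchange left out:

#include <dbus/dbus.h>

DBusError err;
dbus_error_init(&err);

/* a bus connection that is not shared with anything else in the process */
DBusConnection *conn = dbus_bus_get_private(DBUS_BUS_SESSION, &err);

/* ... exchange the secret over this connection ... */

dbus_connection_close(conn);     /* private connections must be closed explicitly */
dbus_connection_unref(conn);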

Long running process remote termination?

I am developing an application that allows users to run AI algorithms on the server remotely. Some of these algorithms take a VERY long time. It is set up such that AJAX calls supply the algorithm parameters and launch a C++ algorithm on the server. The results and status of the computation are tracked via AJAX calls polling status files. This solution seems to work well for multiple users concurrently using the service, but I am now looking for a way to cancel the computation from the user's browser. I have a stop button that stops the AJAX updating service and ceases any communication between the browser and the running process on the server. The problem is that the process still runs, and I would like to free up the server resources when the user cancels the operation. Below are some more details.
The web services that the AJAX calls hit run under the user 'tomcat' and can be listed by ps -U tomcat. The algorithm executions are all child processes of 'java' and can be listed by ps --ppid ###.
The browser keeps a record of the time that the current computation began (user system time, not server system time).
Multiple computations may be going on at once from users connected from different locations, resulting in many processes under the same name and parent process.
The RESTful service executes terminal commands via Java's Runtime.exec().
I am not so knowledgeable about shell scripting, so any help would be greatly appreciated. Can anyone think of a way to either use the Java Process object or a shell script/awk to locate a process via timestamp (maybe the closest timestamp to the user's system time?) or some other way?
Thanks in advance.
--edit
Is there even a way in Java to get a handle for a given process if you have the pid...? Doesn't seem like it.
--edit
I cannot change the source code of the long running process on the server. :(
Your AJAX call should manipulate some sort of resource (most conveniently a text file) that acts as a semaphore to the process; in every iteration of its polling loop the process checks whether that semaphore file has been set to the stop status. If the AJAX call changes the semaphore file to stop, then the process stops because your application checks it and responds accordingly. This in turn means that the functionality needs to be programmed into the AI application itself rather than figuring out what the PID is and then killing it at the OS level. That, of course, assumes you have access to the source code of the app.
Of course, the semaphore does not have to be a file but can be a value in the DB etc., whichever suits your taste and configuration.
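If you do have the algorithm's source, the check itself is tiny. A sketch in C (the flag path is made up for illustration; a real job would derive it from its own id):

#include <unistd.h>

static int stop_requested(const char *flag_path)
{
    return access(flag_path, F_OK) == 0;   /* file exists => the user pressed stop */
}

/* called from inside the algorithm's main loop: */
void algorithm_step(void)
{
    if (stop_requested("/tmp/myjob-1234.stop")) {
        /* flush partial results and exit cleanly */
    }
    /* ... do one unit of work ... */
}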
I have finally found a secure solution. From the RESTful Java service, using Process p = Runtime.getRuntime().exec(...) gives you a handle on the running process. The only way, however, to get the pid is through a technique called reflection:
// works only where the concrete Process class (e.g. java.lang.UNIXProcess) has a private "pid" field
Field f = p.getClass().getDeclaredField("pid");   // java.lang.reflect.Field
f.setAccessible(true);
String pid = Integer.toString(f.getInt(p));
How unbelievably awkward...
Anyway, since passing p from the server to the client is impossible, and since allowing a remote call to kill an arbitrary server process by a pid passed as a parameter is insecure, the only logical strategy I could come up with was to write the obtained pid to a process-unique file named after the initial client timestamp, and to delete this file when the RESTful service function returns. This unique file can be used as a termination handle via yet another RESTful service, which reads the file and terminates the process whose pid equals the contents of the file.
You could keep the Process instance returned by Runtime.exec and invoke Process.destroy to kill the subprocess. Not knowing much about your webservice application, I would assume you can keep the process instances in a global session map that maps users to process lists. Make sure access to this map is thread-safe. Also, it only works if you have a single webservice process, so that such a global session map can be shared across different requests.
Alternatively take a look at Get subprocess id in Java.

Using SSL with Netty at the beginning of a connection, then disabling it

I'm writing a server application and its client counterpart that both use Netty for the network layer. I find myself facing typical safety concerns about sending a password from a client to the server so I decided SSL was the safest way of doing this.
I know of the securechat example and will use this to modify my pipelines accordingly. However, I would also like to disable SSL after the password has been transmitted and acknowledged, to save a few precious CPU cycles on the server side, which may be busy with many other clients. The ChannelPipeline documentation states that:
"Once attached, the coupling between the channel and the pipeline is permanent; the channel cannot attach another pipeline to it nor detach the current pipeline from it."
The idea is then to not change the pipeline on-the-fly, which is prohibited, but to somehow tell the SslHandler in the pipeline that it should stop encrypting messages at some point. I was thinking of creating a class inheriting from SslHandler, overriding its handleDownstream function to call context.sendDownstream(evt) after some point in the communication.
Question 1: Is this a bad idea, that is, disabling SSL at some point?
To allow a block in the pipeline (say a Decoder) telling another block (say SslHandler) that it should change its behaviour from now on, I thought I could create, say, an AtomicBoolean in my ChannelPipelineFactory's getPipeline() and pass it to the constructor of both the Decoder and the SslHandler.
Question 2: Is this a bad idea, that is, sharing state between pipeline blocks? I'm worried I might screw up the multithreading of Netty here: do the blocks of a pipeline work on a single message, one at a time? I.e., does the first block wait for the completion of the last block before pulling the next message?
EDIT:
Oh my bad, this is from the ChannelPipeline page I had been visiting many times and quoting in this very question:
"A ChannelHandler can be added or removed at any time because a ChannelPipeline is thread safe. For example, you can insert a SslHandler when sensitive information is about to be exchanged, and remove it after the exchange."
So this answers question 2 about modifying the pipeline's content on-the-fly, and not the pipeline reference itself.
I'm not sure about the efficacy of turning off SSL once established, but I think you have misinterpreted the mutability of the pipeline. Once a given channel is associated with a pipeline, that association is immutable. However, the handlers in the pipeline can be safely modified. That is to say, you can add and remove handlers as your protocol requires. Accordingly, you should be able to remove the SSL handler once it has served its purpose.
You can remove the SslHandler from the pipeline with ChannelPipeline.remove(..); then it should turn your connection into plaintext. Please file a bug if it does not work - we actually have not tried that scenario in production :-)
I'm not sure about Netty, but in principle, you could indeed carry on with plain traffic on the same TCP connection. There are a few downsides:
Only the authentication would be secured. A MITM could perform actions other than those intended by the user. (This is similar to using HTTP Digest to some extent: the credentials are protected, but the request/response entities aren't.)
From an implementation point of view, this is tricky to get right. The TLS specification says:
If the application protocol using TLS provides that any data may be carried over the underlying transport after the TLS connection is closed, the TLS implementation must receive the responding close_notify alert before indicating to the application layer that the TLS connection has ended.
This implies that you're going to have to synchronise your stream somehow, waiting for the close_notify response before carrying on with your plain traffic.
The SSLEngine programming model is rather complex, and you may find that the Netty API doesn't necessarily handle this situation.
While it may make sense to want to save a few CPU cycles, most of the SSL/TLS overhead is in the handshake, which you'll be doing anyway. The symmetric cryptographic operations used for the actual encryption of the data are much less expensive. (You should try to measure this overhead to see if it really is a problem.)

PUB/SUB with short-lived publisher and long-lived subscribers

Context: OS: Linux (Ubuntu), language: C (actually Lua, but this should not matter).
I would prefer a ZeroMQ-based solution, but will accept anything sane enough.
Note: For technical reasons I can not use POSIX signals here.
I have several identical long-lived processes on a single machine ("workers").
From time to time I need to deliver a control message to each of these processes via a command-line tool. Example:
$ command-and-control worker-type run-collect-garbage
Each of the workers on this machine should receive a run-collect-garbage message. Note: it would be perfect if the solution somehow worked for all workers on all machines in the cluster, but I can write that part myself.
This is easily done if I store some information about running workers. For example, keep their PIDs in a known location and open a control Unix domain socket on a known path with the PID somewhere in it, or open a TCP socket and store the host and port somewhere.
But this would require careful management of the stored information — e.g. what if a worker process suddenly dies? (Nothing unmanageable, but, still, extra fuss.) Also, the information needs to be stored somewhere, thus adding an extra bit of complexity.
Is there a good way to do this in PUB/SUB style? That is, workers are subscribers, command-and-control tool is a publisher, and all they know is a single "channel url", so to say, on which to come for messages.
Additional requirements:
Messages to the control channel must wake up workers from their poll (select, whatever) loop.
Message delivery must be guaranteed, and it must reach each and every worker that is listening.
Workers should have a way to monitor for messages without blocking — ideally via the poll/select/whatever loop mentioned above.
Ideally, the worker process should be the "server" in a sense — it should not have to bother with keeping connections to the "channel server" (if any) persistent, etc. — or this should be done transparently by the framework.
Usually such a pattern requires a proxy for the publisher, i.e. you send to the proxy, which immediately accepts delivery and then reliably forwards to the end subscriber workers. The ZeroMQ Guide covers a few different methods of implementing this.
http://zguide.zeromq.org/page:all
Given your requirements, Steve's suggestion does seem the simplest: run a daemon which listens on two known sockets - the workers connect to one, the command tool pushes to the other, and the daemon redistributes to the connected workers.
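A bare-bones version of that daemon with libzmq (3.2 or later) is essentially one zmq_proxy() call between two bound sockets; the endpoints below are made up for illustration:

#include <zmq.h>

int main(void)
{
    void *ctx      = zmq_ctx_new();
    void *frontend = zmq_socket(ctx, ZMQ_XSUB);  /* the command tool connects and publishes here */
    void *backend  = zmq_socket(ctx, ZMQ_XPUB);  /* the workers connect and subscribe here */

    zmq_bind(frontend, "tcp://*:5559");
    zmq_bind(backend,  "tcp://*:5560");

    zmq_proxy(frontend, backend, NULL);          /* blocks, forwarding messages and subscriptions */

    zmq_ctx_destroy(ctx);
    return 0;
}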
You could do something complicated that would probably work, by effectively nominating one of the workers. For example, on startup workers attempt to bind() a PUB ipc:// socket somewhere accessible, like /tmp. The one that wins binds a second IPC endpoint as a PULL socket and acts as a forwarder device on top of its normal duties; the others connect() to the original IPC endpoint. The command-line tool connect()s to the second IPC endpoint and pushes its message. The risk there is that the winner dies, leaving a locked file. You could identify this in the command-line tool, rebind, then sleep (to allow the connections to be established). Still, that's all a little bit complex; I think I'd go with a proxy!
I think what you're describing would fit well with a gearmand/supervisord implementation.
Gearman is a great task queue manager and supervisord would allow you to make sure that the process(es) are all running. It's TCP based too so you could have clients/workers on different machines.
http://gearman.org/
http://supervisord.org/
I recently set something up with multiple gearmand nodes, linked to multiple workers, so that there's no single point of failure.
edit: Sorry - my bad, I just re-read and saw that this might not be ideal.
Redis has some nice and simple looking pub/sub functionality that I've not used yet but sounds promising.
Use multicast PUB/SUB. You'll have to make sure the pgm option is compiled into your ZeroMQ distribution (man 7 zmq_pgm).
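With PGM compiled in, the only change on the subscriber side is the endpoint string; the interface and multicast group below are illustrative, and ctx is assumed to be an existing ZeroMQ context:

void *sub = zmq_socket(ctx, ZMQ_SUB);
zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);           /* receive every message */
zmq_connect(sub, "epgm://eth0;239.192.1.1:5555");    /* interface;multicast-group:port */

The command tool's PUB socket connects to the same epgm:// endpoint, so neither side needs to know about the other.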
