I am developing initscripts for some of our software, and am having difficulty deciding how to structure one for a particular piece.
We have homegrown software responsible for passing data around our network, built on a standard pubsub model. There is a publisher process (two, actually, for two different use cases), a broker process, and a subscriber process. Any combination of these processes, and even multiple instances of the same process, can run simultaneously on a given box. I'm having trouble deciding how best to make this configurable. Since it can vary from box to box, the configuration will likely go into /etc/sysconfig/pubsub, which will be read in by the initscript.
The only things I need to make configurable are (1) the process name, which is one of log_publish, dir_publish, broker, and subscribe, and (2) the configuration file that corresponds to that particular process.
I want to avoid telling people to modify the initscript on each box in order to change the list of running processes, so a per-box configuration file is the best way I can come up with to accomplish that.
I assume this also means I will need some kind of unique identifier per process on the box, since I intend to use the touch /var/lock/subsys/* method that most RedHat initscripts already use to prevent a process from running twice. The identifier can't be random, or it would never prevent duplicate processes started with the same configuration file (and, again, I need to be able to run multiple processes with different configuration files).
I have no idea how best to represent this in configuration.
I've implemented this similarly to how VNC does it when run as an initscript.
If you look at your distro's configuration file for the vnc initscript (e.g., RedHat/CentOS: /etc/sysconfig/vncservers), you see this:
# The VNCSERVERS variable is a list of display:user pairs.
#
# Uncomment the line below to start a VNC server on display :1
# as my 'myusername' (adjust this to your own). You will also
# need to set a VNC password; run 'man vncpasswd' to see how
# to do that.
#
# DO NOT RUN THIS SERVICE if your local area network is
# untrusted! For a secure way of using VNC, see
# <URL:http://www.uk.research.att.com/vnc/sshvnc.html>.
# VNCSERVERS="1:myusername"
# VNCSERVERARGS[1]="-geometry 800x600"
Pretty straightforward: you define a screen number, and matching parameters if necessary.
So now, I have, for example:
PUBSUBPROCS="1:publish 2:broker 3:subscribe"
PUBSUBARGS[1]="/config/publish.cfg"
PUBSUBARGS[2]="/config/broker.cfg"
PUBSUBARGS[3]="/config/subscribe.cfg"
And almost all of the logic for parsing this was also ripped from the vncserver initscript, which I won't post here for length reasons.
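For illustration, here is a minimal sketch of what that vncserver-style parsing might look like; the binary paths, the --config flag, and the lock-file naming below are my assumptions, not the actual initscript:

#!/bin/bash
# Hedged sketch of vncserver-style parsing; binary locations and the
# --config flag are placeholders.
. /etc/init.d/functions     # RedHat helpers: daemon, killproc, etc.
. /etc/sysconfig/pubsub

start() {
    for pair in $PUBSUBPROCS; do
        id="${pair%%:*}"      # "1" from "1:publish"
        proc="${pair##*:}"    # "publish" from "1:publish"
        daemon "/usr/bin/$proc" --config "${PUBSUBARGS[$id]}" \
            && touch "/var/lock/subsys/pubsub.$id"   # unique lock per instance
    done
}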
I'd say have multiple initscripts, one per process type, and then let the configuration for each determine how many instances of that process to spawn.
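For example, a per-type sysconfig file could list one config file per instance to spawn (the file name, variable, and flag below are hypothetical):

# /etc/sysconfig/pubsub-broker (hypothetical)
BROKER_CONFIGS="/config/broker-a.cfg /config/broker-b.cfg"

# and in the broker initscript:
for cfg in $BROKER_CONFIGS; do
    daemon /usr/bin/broker --config "$cfg"
done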
We have a startup script for an application (owned and developed by a different team, but deployments are managed by us) which prompts Y/N to confirm starting after deployment. The number of times it prompts varies, depending on changes in the release.
So the number of prompts can range from 1 to N (possibly 100 or more).
We have automated the deployment and startup using Jenkins shell-script jobs, but the number of startup prompts is hardcoded to 20, which may sometimes be too few.
Could anyone please advise how the number of prompts can be handled dynamically? We need to answer Y whenever the output contains the pattern "Do you really want to start".
I've checked a few options like expect and read, but haven't been able to come up with a solution.
Thanks in advance!
In general, the best way to handle this is by (a) using a standard process management system, such as your distro's preferred init system; or, if that's not possible, (b) adjusting the script to run noninteractively (e.g., with a --yes or --noninteractive option).
Barring that, assuming your script reads from standard input and not the TTY, you can use the standard program yes and pipe it into the command you're running, like so:
$ yes | ./deploy
yes prints y (or its argument) over and over until it's killed, usually by SIGPIPE.
If your process is reading from /dev/tty instead of standard input, and you really can't convince the other team to come to their senses and add an appropriate option, you'll need to use expect for this.
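For example, a small expect wrapper might look like this; it's a sketch, and the script name is a placeholder (the prompt text comes from the question). exp_continue loops, so it answers however many prompts appear:

#!/bin/bash
# Answer Y to every "Do you really want to start" prompt; expect
# drives the TTY directly, so /dev/tty reads are handled too.
expect <<'EOF'
set timeout -1
spawn ./startup.sh
expect {
    "Do you really want to start" { send "Y\r"; exp_continue }
    eof
}
EOF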
Hi,
Say I have a file called something.txt. I would like to find the most recent program to modify it, specifically the full path to that program (e.g. /usr/bin/nano). I only need to worry about files modified while my program is running, so I can set up an event listener at startup and find out afterwards which program modified the file.
Thanks!
auditd on Linux can log file modifications, including which program performed them.
See: xmodulo.com/how-to-monitor-file-access-on-linux.html
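As a quick sketch of the auditd approach (the watch key and file path are placeholders):

# Watch writes to the file, tagged with a key we can search for:
auditctl -w /home/user/something.txt -p w -k track-something
# Later, after a modification, look up which executable wrote to it:
ausearch -k track-something -i | grep 'exe='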
Something like this generally isn't possible for arbitrary processes. If these aren't arbitrary processes, you could use some sort of network bus (e.g. redis) to publish "write" messages. Otherwise your only other option would be to implement your own filesystem using FUSE. Even with FUSE, though, you may not always have access to the pid, depending on who/what is writing to the file and the security setup of your OS.
Is there a way to make a bash script process messages that have been sent to it using the "write" command? So for example, if a user wants to activate a feature in my script, could I make it so that they can send the script a command using the write command?
One possible method I thought of was to configure logging for a screen session and have the bash script parse the log text, but I'm not sure if there would be a simpler or more efficient way to tackle this.
EDIT: As an alternative solution, I was thinking I could use a named pipe. I'm worried that it would break if the /tmp partition fills up completely (not sure if this would impact write as well?). I'm going to be running this script on a shared box, and every once in a while someone fills up the /tmp partition completely and leaves it that way until people start complaining.
Hmm, you are really trying to make a poor Unix command do something it was never specified for. From the man page (emphasis mine):
The write utility allows you to communicate with other users, by copying
lines from your terminal to theirs
That means that write is intended to copy lines directly between terminals. As soon as you say "I will dump terminal output with screen, and then parse the dump file", you lose the simplicity of write (and you also need disk space, with the problem of removing old lines from a sequential file).
Worse, as your script lives on its own, it could (should?) be a daemon, attached to no terminal.
So if I have correctly understood your question, your requirements are:
a script that does some tasks and should be able to respond to asynchronous requests. Common mechanisms are named pipes or network or Unix-domain sockets; less common are files in a dedicated folder with an optional signal to trigger immediate processing. Appending lines to a sequential file, while possible, is uncommon because of access-synchronization problems.
a simple and user-friendly way for users to pass requests. OK, write is nice for that part, but much too hard to interface, IMHO.
If you do not want to waste time on that part, use standard tools; I would recommend the mail system. It is trivial to alias a mail address to a program that will be called with the mail message as its input. But I am not sure it is worth it, because the user could directly call the program with the request as input or as a command-line parameter.
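For reference, such an alias is a one-liner in /etc/aliases (sendmail/postfix style; the address and handler path are placeholders):

# /etc/aliases
myscript: "|/usr/local/bin/myscript-handler"
# then rebuild the alias database:
newaliases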
So the client part could simply be a program that (sketched below):
creates a temporary file in a dedicated folder (mkstemp is your friend in C or C++, or mktemp in shell, but beware of race conditions)
writes the request to that file
optionally sends a signal to a pid, provided the script writes its own PID to a dedicated file on startup
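A minimal shell sketch of that client; the spool directory, PID file, and signal choice are assumptions:

#!/bin/bash
SPOOL=/var/spool/myscript          # dedicated request folder
PIDFILE=/var/run/myscript.pid      # written by the daemon on startup

req=$(mktemp "$SPOOL/req.XXXXXX")  # unique file, avoids races
printf '%s\n' "$*" > "$req"        # write the request
kill -USR1 "$(cat "$PIDFILE")" 2>/dev/null   # optional: request immediate processing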
I want to automatically start the proftpd service when the runlevel changes from 2 to 5. When it changes back to 2 it should be stopped again.
Any ideas?
If you use sysvinit, the procedure is easy. Just have a K??yourServiceName script in /etc/rc2.d and an S??yourServiceName in /etc/rc5.d. They will be called with the runlevel in the $RUNLEVEL environment variable and with a stop or start (respectively) parameter. The ?? stands for two digits that determine the order of execution (the priority).
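For example, assuming the service script is /etc/init.d/proftpd and a priority of 20 (both assumptions; adjust to your system):

ln -s /etc/init.d/proftpd /etc/rc5.d/S20proftpd   # started when entering runlevel 5
ln -s /etc/init.d/proftpd /etc/rc2.d/K20proftpd   # stopped when entering runlevel 2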
This has been replaced in newer scripts (mainly in Debian, but I think others follow this approach too) by header fields in the scripts themselves indicating the dependencies between them; execution is done in parallel for scripts that don't depend on each other, and serially for scripts that do. You can read about this approach in the scripts themselves. The scripts are normally installed in /etc/init.d, and symbolic links are made from there to the proper runlevel directory, with the proper two-digit prefix, by the utilities controlling this.
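A minimal sketch of such a dependency header (the LSB style), here tuned to the proftpd case from the question; the dependency lines are assumptions:

### BEGIN INIT INFO
# Provides:          proftpd
# Required-Start:    $network $remote_fs
# Required-Stop:     $network $remote_fs
# Default-Start:     5
# Default-Stop:      2
# Short-Description: FTP server
### END INIT INFO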
Finally, if you use systemd (which has completely replaced the sysv init process), there's another method to deal with it. You'll have to look at the documentation for systemd(8), as I'm not familiar with it; I only know it's a D-Bus service provider and processes communicate with it via this technology.
The first two methods are somewhat interoperable: if you fix the priority of execution and don't fill in the dependencies, the sysv init process will respect it.
Edit
This approach assumes you run proftpd as an independent service (not as a dependent of xinetd(8) or inetd(8)) and that it has scripts to launch and stop it on a runlevel change.
If you need to run it under xinetd(8), I don't know offhand whether xinetd has parameters that allow serving based on the runlevel. If it does, you are lucky. If it doesn't, you will have to switch your approach.
I'd like to create an auto-testing/grading script for students on a Linux system such that:
Any student user can initiate the script at any time.
A separate script (with root privileges) copies student code to a non-student-accessible file space and runs non-student-accessible unit tests against it, etc.
The user receives limited feedback in the form of a text file generated by the grading script.
In short, I'm looking to create something similar to programming-contest submission systems, but allowing richer feedback without revealing all of the teacher's unit tests.
I would imagine that a spooling behavior between one initiating script and one root-permission cron script might be in order. Are there any models/examples of how one might best structure communication between a user-initiated script and a separate root-initiated script for such purposes?
There are many options.
The things I would mention first:
Don't use su; use sudo. There are several reasons for this, the main one being that su requires the password of the user you want to become, while sudo does not;
Scripts can't be setuid; you must use binaries, or a normal script started using sudo (of course, students must have a sudoers entry that allows them to run the script);
Cron is not as fast as you may theoretically need; it runs tasks at most once per minute. Please consider inotify instead (see the sketch after this list);
To communicate between components of your system you need something that reacts in real time; there are many open-source components/libraries/frameworks that could help, but I would recommend taking a look at ZeroMQ and Redis;
Results of the scripts' executions/tests can be written either to the filesystem (I think that would be better) or to a DBMS.
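A sketch of the inotify idea mentioned above, using inotifywait from the inotify-tools package; the spool directory and grading step are placeholders:

#!/bin/bash
SPOOL=/var/spool/grader/incoming   # where student submissions land

# -m: stay in monitor mode; close_write fires once a file is fully written
inotifywait -m -e close_write --format '%w%f' "$SPOOL" |
while read -r submission; do
    echo "grading $submission"     # run the hidden unit tests here
done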
If you want to stick to shell scripting, the method I'd suggest for communicating between processes is to have the root script continually check a named pipe for input (i.e. keep reopening it after each EOF) and send each input through whatever tests must be done. Have part of the input be a 'return address': where to send the result.
This should allow the tests to be performed in a privileged space without exposing any control over that space to the students. The students don't need sudo, and you don't need to pull in libraries. Just have the students pipe their code into a non-privileged script that adds the return address and whatever other markup you may need, and then hands it to the named pipe.
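A rough sketch of that root-side loop; the FIFO path, permissions, and request format (a "return_dir code_file" line) are assumptions:

#!/bin/bash
FIFO=/var/grader/submit.fifo
[ -p "$FIFO" ] || mkfifo -m 622 "$FIFO"   # students may write, only root reads

while true; do
    # reopen the pipe after each EOF, as described above
    while read -r return_addr code_file; do
        # ... run the private unit tests against "$code_file" here ...
        printf 'graded %s\n' "$(basename "$code_file")" > "$return_addr"  # limited feedback
    done < "$FIFO"
done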