Run init.d script conditionally based on hostname - linux

What would be the best way to conditionally run an init.d script on Linux based on hostname? I'm working with New Relic, and some of the servers simply don't need it installed, but they're all otherwise basic copies of one another. This is Ubuntu.
I've tried (and failed) to put in a host conditional, but for the life of me I can't get it working. I threw exits at the top of the file as well as in the start function, but it seems to fire up every time. Without knowing completely how those scripts are fired, I'm a little confused about how to alter it so it doesn't fire unless the server name is something like production, etc.
Any guidance would be super helpful.

Put this at the top of the script you would like to disable:
if [ "$(hostname)" != "goodhost" ]
then
    exit 0
fi
replacing "goodhost" with the actual name of the host where the script is supposed to run.
Does that solve the problem?
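If several hostnames should keep running the agent, a slight variation is to whitelist them with a case statement at the top of the New Relic init script (a sketch only; the hostnames below are placeholders for your own fleet):
# allow-list of hosts that should run this service; placeholder names, adjust as needed
case "$(hostname)" in
    production*|web01|web02)
        ;;      # allowed host: fall through and run the rest of the script
    *)
        exit 0  # any other host: do nothing and report success to the init system
        ;;
esac
Exiting with status 0 keeps tools that check init script results from treating the skipped hosts as failures.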

Maintain a session across multiple instances of app when called from same shell

I'm trying to have data (generated by an application only after its launch) persist across multiple invocations of the application, but only when they're started from the same shell session.
One possible way to do that would be to pass the data back from the application to the calling shell, but since environment variable changes are only passed from parent to child, I don't know how to implement that.
Practical example:
There is a job command that creates a subdirectory named after the current datetime and does its work inside it. Sometimes job needs to be killed and restarted, so it needs the directory where it finished, as in job --resume 21Jan_1849/data. I would like to save 21Jan_1849/data so I don't have to check and type it each time I need to resume a job. If I created something like .last_job and then wanted to restart a job in another session, it could resume the wrong (most recent) job, so plain files are not a solution (AFAIK).
How can this be done?
Since you're only trying to target Linux, there are a fair number of tricks available here. Consider this one:
#!/usr/bin/env bash
current_boot_id=$(</proc/sys/kernel/random/boot_id)
# honor myprog_shell_pid if set and valid, fall back to PPID otherwise
if [[ $myprog_shell_pid ]] && [[ -e /proc/$myprog_shell_pid/stat ]]; then
    parent_pid=$myprog_shell_pid
else
    parent_pid=$PPID
fi
# field 22 of /proc/<pid>/stat is the process start time (clock ticks since boot)
parent_start_time=$(awk '{print $22}' "/proc/$parent_pid/stat")
mkdir -p "$HOME/.cache/myscript-sessions"
data=$HOME/.cache/myscript-sessions/${current_boot_id}:${parent_pid}:${parent_start_time}
Now, we have a data file name that changes:
When we're rebooted (because current_boot_id is updated)
If we're run from a different shell (because our PPID changes).
If we're run from a different shell with the same PID (because the start time for the parent PID will be different).
...and you can easily delete files with the wrong boot id (because the system rebooted), or with names that refer to PID/start-time combinations that don't exist.
One caveat is that by default, this is sensitive to being called by subshells (output=$(./yourprog) will have a different PPID than ./yourprog will), but if the parent shell runs export myprog_shell_pid=$$, that issue goes away.
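For completeness, a minimal sketch of how that per-session file might be used; the wrapper logic below and the literal directory name are my own illustration, not part of the answer:
# once per interactive shell session (e.g. from ~/.bashrc):
export myprog_shell_pid=$$

# inside the wrapper, after $data has been computed as above,
# record the job directory for this session only (captured however your job reports it):
echo "21Jan_1849/data" > "$data"

# on a later invocation from the same shell session, resume the remembered job:
[[ -s $data ]] && job --resume "$(<"$data")"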
You're crossing over into territory where you need a simple job-management engine instead of just the shell. Using make and writing Makefiles is probably the simplest way to set this up. You can write a rule that says how to turn a stage 1 file into a stage 2 file based on file extension, and then make will know how far things got and how to resume the next time you run it.

How does the DIG utility work in FreeBSD and BIND?

I want to know how the DIG (Domain Information Groper) command really works when it comes to code and implementation. I mean, when we enter a dig command, which part of the code in FreeBSD or BIND is hit first?
Currently, when I run the dig command, I see control going to a file called client.c. Inside this file, the following function is called:
static void
client_request(isc_task_t *task, isc_event_t *event);
But how control reaches this place is still a big mystery to me, even after digging a lot into the 'named' part of the BIND code.
Further, I see this function being called from two places within this file. I put logging at those places to find out whether control reaches them through those paths, but unfortunately it doesn't. It seems the client_request() function is somehow being called from somewhere outside that I am not able to figure out.
Is there anybody here who can help me resolve this mystery?
Thanks.
Not only for BIND but for any other command: within FreeBSD you could use ktrace. It is very verbose, but it can help you get a quick overview of how a program is behaving.
For example, recent FreeBSD releases ship the drill command in the base system instead of dig, so if you would like to know what is happening behind the scenes when you run the command, you could give this a try:
# ktrace drill freebsd.org
Then to disable tracing:
# ktrace -C
Once tracing is enabled on a process, trace data will be logged until either the process exits or the trace point is cleared. A traced process can generate enormous amounts of log data quickly; it is strongly suggested that users memorize how to disable tracing before attempting to trace a process.
After running ktrace drill freebsd.org, a file named ktrace.out should be created, which you can read with kdump, for example:
# kdump -f ktrace.out | less
That will hopefully "reveal the mystery". In your case, just replace drill with dig and use something like:
# ktrace dig freebsd.org
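The raw trace is long; a quick way to narrow it down (just a sketch using standard tools) is to keep only the system-call (CALL) and path-lookup (NAMI) records, or just the network-related calls:
# kdump -f ktrace.out | grep -E 'CALL|NAMI' | less
# kdump -f ktrace.out | grep -E 'socket|connect|send' | less
That is usually enough to watch dig read its configuration files and open a socket to the resolver.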
Thanks to the FreeBSD Ports system you can compile your own BIND with debugging enabled. To do so, run
cd /usr/ports/dns/bind913/ && make install clean WITH_DEBUG=1
Then you can run it inside a debugger (lldb /usr/local/bin/dig), break on the line you are interested in, and look at the backtrace to figure out how control reached there.
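A rough sketch of such a session; the breakpoint on main is only a placeholder (substitute the file and line you actually care about, e.g. breakpoint set --file client.c --line <N>), and if the function you are chasing lives in named rather than in dig, point the debugger at named instead:
lldb /usr/local/bin/dig
(lldb) breakpoint set --name main
(lldb) run freebsd.org
(lldb) bt
The bt command prints the call stack at the breakpoint, which is exactly the "how did control get here" information you are after.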

abas-ERP: Execute a FO-Service-Program from a cronjob

I need to run a service program, written in FO for abas-ERP, continuously.
I have heard about some already existing scripts for calling service programs from the shell. If that is possible, I could simply use a cronjob to start such a script.
But I don't know exactly where to find a template for these shell scripts, which conditions have to be met, and whether there are any restrictions.
For example: is it possible to call several FO programs successively (this might be important with regard to blocking licences)?
You can use edpinfosys.sh and execute the infosystem TEXTZEIGEN via cronjob.
You could also use batchlg.sh:
batchlg.sh 'FOP-Name' [ -PASSARGS ] [Parameter ...]
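As a rough illustration only (the schedule, installation path, and FOP name below are placeholders; the real values depend on your abas installation), a crontab entry for the ERP user could look like this:
0 2 * * *  cd /abas/erp && ./batchlg.sh 'MY-SERVICE-FOP'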

Ping remote arbitrary command execution on linux

I'm trying to execute code on a remote (virtual) machine, which runs a webserver with a single POST form intended to do a simple ping. On the other side is the following script (part of it):
exec("/bin/ping -c 4 ".$_POST["addr"]);
"addr" is where the data entered in the POST form goes. So basically it calls /bin/ping and appends whatever data I enter. The question is how can I leverage this to get a shell? I think that since the ping command runs with root privileges it should be fairly easy but I'm still new to this game and couldn't find any useful info on how to do this. Help will be very much appreciated :)

Setting up cron in windows without getting a popup?

I know that the Task Scheduler can be used to create a cron-like job, but in my case that job involves accessing a URL. The problem is, if I use wget or a batch file, a window keeps popping up. Any suggestions on how to get past this?
Create a batch file that does what you want. Let's say it's called doit.bat. Create a file doit.vbs in the same directory. It should have the following contents:
CreateObject("Wscript.Shell").Run """doit.bat""", 0, False   ' 0 = hidden window, False = don't wait for the batch file to finish
Set the scheduler to run doit.vbs.
Yes, indeed. I'd like to cross-link you to a site where this has been discussed, but I'll just paste the relevant part here:
C:\> at [\\machine] HH:MM[A|P] [/every:day] "command"
Furthermore, schtasks might be of help. You might also want to use curl within a script; it has a specific "silent" option.
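For instance, a sketch of tying the pieces together (the task name, schedule, and path are placeholders, and it assumes curl is installed): register the hidden-window doit.vbs wrapper from the answer above as a scheduled task,
C:\> schtasks /Create /TN "FetchUrl" /SC HOURLY /TR "wscript.exe C:\jobs\doit.vbs"
where doit.bat contains a single line such as curl -s http://example.com/cron.php so nothing is printed.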
