I am new to Linux, and I want to write to a .txt file all of the running processes on my PC that have the word "con" in them.
The script I wrote:
#!/bin/bash
ps -A | grep "con" > con_proc.txt
Why is this not working?
#!/bin/bash
ps -eaf | grep -i "con" > con_proc.txt
Compared to your version, ps -eaf lists every process with its full command line (ps -A shows only the program name), and grep -i makes the match case-insensitive. If you want to place this inside a script, the contents of the script would be the lines above, saved for example as script.sh.
To invoke the script you will need to do the following:
chmod +x script.sh
./script.sh
The first command gives the script execute permissions and the second command invokes the script.
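Once it has run, you can check the result in the output file named in the script above:
cat con_proc.txt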
Linux has pgrep to do this.
$ pgrep -a con
...
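If you still want the matches written to a .txt file, as in the question, pgrep's output can be redirected the same way; a minimal sketch using the same file name:
pgrep -a con > con_proc.txt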
I'm working on a program that can query running X apps, save the commands of all running apps, and then reopen them later.
I have run into an issue. wmctrl can query the PID of OnlyOffice; say, for example, the PID is 123. When I then run ps -ef -q 123, I see that the CMD is ./DesktopEditors, which should be an invalid command, because ./one_command only works from inside the particular folder that contains the file one_command.
I can get a complete command by running ps -ef -q $(pgrep -P 123).
Is there a direct way to get the complete command of OnlyOffice just via wmctrl and ps?
If there is a better way to get all commands of X apps, please let me know. Thanks.
I suggest using the ps -h -e -o pid,args command piped into a grep.
This should provide the full command path with its arguments and options.
For example, to find all running java programs with their arguments (the output might be extensive):
ps -eo pid,args | grep java
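Note that the grep process itself usually shows up in that list too, because its own arguments contain the search word; one common refinement (my addition, not part of the original answer) is to filter it out:
ps -eo pid,args | grep java | grep -v grep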
In your case I suggest a small awk script that takes the PID given as the 3rd input field of the current line:
wmctrl -l -p|awk '{system("ps -h --pid "$3" -o args")}'
Sample output
nautilus-desktop --force
/usr/libexec/gnome-terminal-server
/usr/libexec/gnome-terminal-server
Update
Transforming the current directory ./ to a full path.
Assuming ./ represents the current working directory.
Add the following pipe.
wmctrl -l -p|awk '{system("ps -h --pid "$3" -o args")}'|sed "s|^\./|$PWD/|"
Find the script or program DesktopEditors on your computer using find / -name "DesktopEditors".
But I believe this is useless if you are trying to reverse engineer a web-based application that requires some kind of a browser emulator.
I know what they do. I was just wondering what kind of commands they are, and how you can make one using shell scripting.
For example, command like:
ignoreError ls /Home/
ignoreError mkdir /Home/
ignoreError cat
ignoreError randomcommand
Hope you get the idea
The way to do it in a shell script is with the "$@" construct.
"$@" expands to a quoted list of all of the arguments you passed to your shell script. $1 would be the command you want your shell script to run, and $2, $3, etc. are the arguments to that command.
The only example I have is from cygwin. Cygwin does not have sudo, but I have this script that emulates it:
#!/usr/bin/bash
cygstart --action=runas "$@"
So when I run a command like
$ sudo ls -l
my sudo script does whatever it needs to do (cygstart --action=runas) and calls the ls command with the -l argument.
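Applied to your ignoreError example, a minimal sketch (my own, not from the original answer) would be a script named ignoreError that runs whatever command it is given and discards a failing exit status:
#!/bin/bash
# ignoreError: run the given command with its arguments, but always exit successfully
"$@" || true
With that in place, ignoreError mkdir /Home/ runs mkdir /Home/ and still returns success even if mkdir reports an error.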
Try this script:
#!/bin/sh
"$#"
Call it, for example, run, make it runnable with chmod u+x run, and try it:
$ run ls -l #or ./run ls -l
...
output of ls
...
The idea is that the script takes the parameters specified on the command line and uses them as a (sub)command... Modify the script this way:
#!/bin/sh
echo "Trying to run $*"
"$#"
and you will see.
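For instance, with the modified script (still invoked as run, as above), the command is echoed before it is executed:
$ ./run ls -l
Trying to run ls -l
...
output of ls
...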
First of all, I apologize for my English.
I have a running process on server, and when I execute:
ps -aux | grep script.sh
I get such a result:
root 28104 0.0 0.0 106096 1220 pts/7 S+ 08:27 0:00 /bin/bash ./script.sh
But this script is running from e.g. /home/user/my/program/script.sh
So, how can I get the full path from where the script was run? I have many scripts whose names are exactly the same, but they run from different locations, and I need to know where a given script was started from.
Thanks for any reply!
Try the following script:
for each in `pidof script.sh`
do
readlink /proc/$each/cwd
done
This will find the PIDs of all running script.sh scripts and read the corresponding cwd (current working directory) links from /proc.
use pwdx
usage: pwdx pid ...
(show process working directory)
for example,
pwdx 20102
where 20102 is the pid
This will show the working directory of that process.
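If you do not yet know the PID, one way (my own addition, assuming pgrep is available) is to feed pwdx the PIDs found by matching the full command line:
pwdx $(pgrep -f script.sh)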
#!/bin/bash
#declare the associative array with PID as key and process directory as value
declare -A dirr
#This will get the pid of the script
pid_proc=($(ps -eaf | grep "$1.sh" | grep -v "grep" | awk '{print $2}'))
for PID in ${pid_proc[@]}
do
#using Debasish method
dirr[$PID]=$(pwdx $PID)
# Below are different ways to get the CWD of a running process
# using user1984289 method
#dirr[$PID]=$(readlink /proc/"$PID"/cwd)
#dirr[$PID]=$(cd /proc/$PID/cwd; /bin/pwd)
done
# iterate using the keys of the associative and get the working directory
for PID in "${!dirr[#]}"
do
echo "The script '$1.sh' with PID:'$PID' is in the directory '${dirr[$PID]}'"
done
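If this is saved as, say, find_cwd.sh (a file name I am assuming), you would call it with the base name of the script you are looking for:
./find_cwd.sh script
This would report the working directory of every running script.sh.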
Use pgrep to get the PIDs of your instances, and then read the link of the associated cwd directory. Basically, the same approach as @user1984289, but using pgrep instead of pidof, which does not match bash script names on my system (even with the -x option):
for pid in $(pgrep -f foo.sh); do readlink /proc/$pid/cwd; done
Just change foo.sh to the name of your script.
If I have a text file with a separate command on each line, how would I make the terminal run each line as a command? I just don't want to have to copy and paste one line at a time. It doesn't HAVE to be a text file... It can be any kind of file that will work.
example.txt:
sudo command 1
sudo command 2
sudo command 3
You can make a shell script with those commands, then chmod +x <scriptname.sh>, and then just run it with
./scriptname.sh
It's very simple to write a bash script.
Mockup sh file:
#!/bin/sh
sudo command1
sudo command2
.
.
.
sudo commandn
You can also just run it with a shell, for example:
bash example.txt
sh example.txt
Execute
. example.txt
That does exactly what you ask for, without setting an executable flag on the file or running an extra bash instance.
For a detailed explanation see e.g. https://unix.stackexchange.com/questions/43882/what-is-the-difference-between-sourcing-or-source-and-executing-a-file-i
You can use something like this:
for i in `cat foo.txt`
do
sudo $i
done
Though if the commands have arguments (i.e. there is whitespace in the lines) you may have to monkey around with that a bit to protect the whitespace so that the whole string is seen by sudo as one command with its arguments. But it gives you an idea of how to start.
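One way to keep each line intact (a sketch of my own, not from the original answer) is to read the file line by line instead of word by word:
while IFS= read -r line
do
# the unquoted $line is word-split here, so the command and its arguments are passed to sudo together
sudo $line
done < foo.txt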
cat /path/* | bash
OR
cat commands.txt | bash
If I run
grep -i "echo" *
I get the results I want, but if I try the following simple bash script
#search.sh
grep -i "$1" *
echo "####--DONE--####"
and I run it with sh -x search.sh "echo", I get the following error output:
' grep -i echo '*
: No such file or directory
' echo '####--DONE--####
####--DONE--####
How come? I'm on CentOS
Add the shebang line at the top of your script:
#!/bin/bash
and after making it executable, run the script using
./search.sh "echo"
The "sh -x" should print the files that '*' matches. It looks like it's not matching any files. Are you maybe running it in a directory with no readable files?