Search output of p4 command - linux

I am trying to perform an action depending on the output of Perforce commands, but piping the command to grep/ack doesn't appear to pick up the output.
e.g.
p4 sync -n $HOME/... | grep -c up
/homedirectory/... - file(s) up-to-date.
0
p4 sync -n $HOME/... | grep -c nope
/homedirectory/... - file(s) up-to-date.
0
Further example of what I'm trying to do:
if ( `p4 sync -n $HOME/... | grep -c "no such file"` == 0 ) then
if command
else
do else command
endif
Is there any way to read the output of a Perforce command without having to write it to a file and then read the file back? Ideally the command would be a single line.

grep isn't working because the "empty" messages like no such file and up-to-date go to stderr. As @heemayl suggested, one way to fix that is to redirect stderr to stdout.
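For example, a minimal sketch of that redirect, reusing the command from the question (2>&1 merges stderr into stdout so grep can see the message):
p4 sync -n $HOME/... 2>&1 | grep -c "up-to-date"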
You can also fix this in a shell-independent way by using the -s or -e flags to p4:
C:\Perforce\test>p4 -s sync
error: File(s) up-to-date.
exit: 0
C:\Perforce\test>p4 -e sync
error: File(s) up-to-date.
code0 554768772 (sub 388 sys 6 gen 17 args 1 sev 2 uniq 6532)
... code0 554768772
... fmt0 [%argc% - file(s)|File(s)] up-to-date.
... argc
exit: 0
Both of these flags redirect all output to stdout and also prepend every message with debugging information about the message itself. If you're trying to grep for a particular message, for example, you can use the -e flag and grep for its unique code rather than the string.
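For example, a sketch in sh syntax (the question used csh), testing for the up-to-date code shown above; the same idea applies to the code of whichever message you care about, such as no such file(s):
if [ "$(p4 -e sync -n $HOME/... | grep -c 'code0 554768772')" -eq 0 ]; then
    echo "not up to date"
else
    echo "already up to date"
fi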
Using the -F flag lets you reformat the output to include particular elements from the message dict that you see with -e, so if you want just the code:
C:\Perforce\test>p4 -F %code0% sync
554768772
If you're trying to capture elements of the actual output, like file names, -F is even more useful:
C:\Perforce\test>p4 -F %localPath% sync -n ...#1
c:\Perforce\test\0.f1
c:\Perforce\test\1.15
c:\Perforce\test\1.18
c:\Perforce\test\2.f1
c:\Perforce\test\2.f2

Related

Pass each file obtained from a command to another command as a parameter

I am using the following line to take a pdf and split it:
pdfseparate -f 14 -l 23 ALF.SS.0.pdf "${FILE}"-%d.pdf
Now I want for each file produced, to run several commands like this:
pdfcrop --margins '-30 0 -385 0' outputOfpdfSeparate outputOfpdfSeparate-1stCol.pdf
I am trying to figure out the best way to do this:
With a single loop: for each file created by pdfseparate, if I managed to "know" what the name of the file is, I could pass it to pdfcrop and be done. But since it uses %d, I do not know how to handle this "new name" in which each file gets a new number. I know how to do this in Java, but here I do not see it so clearly.
Using pipes. I think I have the same issue, since if I do
pdfseparate [options] | pdfcrop inputfile outputfile,
I do not know how to "use" the name of inputfile. I am sure it is easy but I don't see it.
Using xargs. I am studying this command since it is new for me.
Using exec. I am under the impression this is not necessary but maybe I am wrong since it's been a long while since I last used exec.
Thanks in advance.
You can use xargs. It is the best way in terms of speed.
I usually use it for converting a lot of .mp4 files to .mp3.
Doing this conversion one by one is not only tedious but also takes a long time, so you can use the automatic parallelism provided by the -P 0 option of xargs.
For example, if I had 10 .mp4 files I would do this:
ls *.mp4 | xargs -I xxx -P 0 ffmpeg -i xxx xxx.mp3
After running this line, 10 ffmpeg commands run simultaneously.
The other way to do this is to store the list of .mp4 files in a text file like this:
ls *.mp4 > list-mp4
then:
xargs -I xxx -P 0 ffmpeg -i xxx xxx.mp3 < list-mp4
Or maybe you have access to GNU parallel, in which case you can do:
parallel ffmpeg -i {} {}.mp3 ::: *.mp4
Now for your case: if you want to use these (xargs or parallel) or your own command, note that your first command has to send its output to stdout, because the second command reads its stdin from the stdout of the first command, and bash wires this up for you via the pipe.
Thus you can only use a pipe (|) with your pdfseparate if it sends its output to stdout. If it does not, the right-hand side of the pipe (the second command) has nothing to work with; conversely, the second command must be able to read its input from stdin.
For example
ls *.txt | echo {}
Here echo does not read the incoming stdout from the ls command; it just prints {}.
Eventually, your pdfseparate should send its output to stdout. xargs then stores each incoming line in the placeholder you name with -I and passes it to your second command.
Therefore:
pdfseparate options... | xargs -I ABC -P 0 your-second-command+its-options ABC
NOTE-1: xargs stores the given stdout line by line in ABC, and passes each line to your second command as its input.
NOTE-2: you do not have to use -P 0 at all. It is just for speeding up execution. You can omit it, but then your second command runs synchronously, once per incoming line.
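Putting it together for the question's files, a sketch (pdfseparate itself prints nothing to stdout, so the file list is generated with ls; the "${FILE}"-*.pdf pattern and the ABC-1stCol.pdf output name are assumptions taken from the question):
pdfseparate -f 14 -l 23 ALF.SS.0.pdf "${FILE}"-%d.pdf
ls "${FILE}"-*.pdf | xargs -I ABC -P 0 pdfcrop --margins '-30 0 -385 0' ABC ABC-1stCol.pdf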
pdfseparate does not output the names of the files it creates, so you have to use the "ls" command to get the file list you want to operate on.
# separate the pdfs
pdfseparate -f 14 -l 23 ALF.SS.0.pdf "${FILE}"-%d.pdf
# operate on the just created files, assumes that a "FILE" variable is set, which might not be the case
for i in $(ls "${FILE}-*.pdf"); do pdfcrop --margins '-30 0 -385 0' $i; done;
# assuming that FILE variable in your case would match ALF.SS.0-[0-9]*.pdf, you'd use this:
for i in $(ls ALF.SS.0-[0-9]*.pdf); do pdfcrop --margins '-30 0 -385 0' $i; done;

tail-like continuous ls (file list)

I am monitoring the new files created in a folder in linux. Every now and then I issue an "ls -ltr" in it. But I wish there was a program/script that would automatically print it, and only the latest entries. I did a short while loop to list it, but it would repeat the entries that were not new and it would keep my screen rolling up when there were no new files. I've learned about "watch", which does show what I want and refreshes every N seconds, but I don't want a ncurses interface, I'm looking for something like tail:
continuous
shows only the new stuff
prints in my terminal, so I can run it in the background and do other things and see the output every now and then getting mixed with whatever I'm doing :D
Summarizing: get the input, compare to a previous input, output only what is new.
Something that does that doesn't sound like such an odd tool; I can see it being used in other situations too, so I would expect it to already exist, but I couldn't find anything. Suggestions?
You can use the very handy command watch
watch -n 10 "ls -ltr"
And you will get an ls every 10 seconds.
And if you add a tail -10 you will only get the 10 newest.
watch -n 10 "ls -ltr|tail -10"
If you have access to inotifywait (available from the inotify-tools package if you are on Debian/Ubuntu) you could write a script like this:
#!/bin/bash
WATCH=/tmp
inotifywait -q -m -e create --format %f "$WATCH" | while read -r event
do
    ls -ltr "$WATCH/$event"
done
This is a one-liner that won't give you the same information that ls does, but it will print out the filename:
inotifywait -q -m -e create --format %w%f /some/directory
This works in Cygwin and Linux. Some of the previous solutions, which write to a file, will cause the disk to thrash.
This script does not have that problem:
SIG=1
SIG0=SIG
while [ $SIG != 0 ] ; do
    # wait until the checksum of the directory listing changes
    while [ "$SIG" = "$SIG0" ] ; do
        SIG=`ls -1 | md5sum | cut -c1-32`
        sleep 10
    done
    SIG0=$SIG
    # print the newest entry
    ls -lrt | tail -n 1
done

Linux command most recent non soft link file

Linux command: I am using the following command, which returns the latest file name in the directory.
ls -Art | tail -n 1
When I run this command it returns the latest changed file, which is actually a soft link. I want to ignore soft links in my result and get the names of the other files. How can I do that? Any quick help appreciated.
Maybe I can specify a regex to match the latest file. The file name is
rum-12.53.2.war
# Latest file in directory without softlink
ls -ArtL | tail -n 1
# Latest file without extension
ls -ArtL | sed 's/\(.*\)\..*/\1/' | tail -n 1
The -L option for ls dereferences the link, i.e. you'll see the information of the link's target instead of the link itself. Is this what you want? Or would you like to ignore links completely?
If you want to ignore links completely you can use this solution, although I am sure there exists an easier one:
a=$( ls -Artl | grep -v "^l" | tail -1 )
aa=()
for i in $(echo $a | tr " " "\n")
do
    aa+=($i)
done
aa_length=${#aa[@]}
echo ${aa[aa_length-1]}
First you store the output of your ls in a variable called a. By grepping for "^l" you select only the symbolic links, and with the -v option you invert this selection. So you basically have what you want; the only downside is that you need the -l option for ls, as otherwise there is no "^l" to grep against. In the second part you split the variable a on " " and fill an array called aa (sorry for the bad naming). Then you need only the last item in aa, which should be the filename.
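A shorter sketch of the same idea, not from the original answer and assuming the file names contain no spaces, is to let awk print the last field instead of building an array:
ls -Artl | grep -v "^l" | tail -1 | awk '{print $NF}'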

How to capture the output of a top command in a file in linux?

I want to write the output of a specific 'top' command to a file. I did some googling and found out that it can be done by using the following command.
top -n 10 -b > top-output.txt
where -n specifies the number of iterations and -b is for batch mode. This works very well if I let top run for the 10 iterations. But if I break the command with Ctrl-C, the output file seems to be empty.
I won't know the number of iterations beforehand, so I need to break it manually. How can I capture the output of top in a file without specifying iterations?
The command which I am trying to use precisely is
top -b | grep init > top-output.txt
and break it whenever I want. But it doesn't work.
EDIT: To give more context to the question, I have Java code which invokes a tool with an input file. The tool takes a file as input and runs for some time, then takes the next file, and so on. I have a set of 100,000 files which need to be fed to the tool. So now I am trying to monitor that specific tool (it runs as a process in Linux). I cannot capture the whole of top's data, as the file would be too huge and full of unwanted data. How do I capture the system stats of just that process and write them to a file using top?
For me, top -b > test.txt stores all output from top fine, even if I break it with Ctrl-C. I suggest you dump first and then grep the resulting file.
How about using a while loop and -n 1:
while sleep 3; do
    top -b -n1 | grep init > top-output.txt
done
It looks like the output is not written to the file until all iterations are finished. You could solve this by wrapping it in an external loop like this:
touch top-output.txt
while true; do
    top -b -n 1 | grep init >> top-output.txt
done
Here is the one-liner I like to use on my Mac:
top -o -pid -l 1 | grep "some regexp"
Cheers.
As pointed out by @Thor in a comment, you just need to ensure that grep buffers per line rather than arbitrarily, using the --line-buffered option:
top -bn 10 | grep 'init' --line-buffered | tee top-output.txt
Without grepping, redirecting the output of top to a file works just fine, interrupt included.
Solved this issue: this works even if you press Ctrl+C. I was facing the same issue when I wanted to log CPU%.
Execute this shell script:
#!/bin/sh
while true; do
    echo "$(top -b -n 1 | grep init)" | tee -a top-output.log
    sleep 1
done
You can grep for anything you want to extract out of the top command; use this script to store it in a file.
-b : Batch mode operation. Starts top in Batch mode, which could be useful for sending output from top to other programs or to a file. In this mode, top will not accept input and runs until the iterations limit you've set with the -n command-line option or until killed.
-n number : Specifies the maximum number of iterations, or frames, top should produce before ending. Here I've used -n 1.
Do man top for more details.
tee -a makes the output visible on the console and also stores it in the file; the -a option appends to the file rather than overwriting it.
Here I have used an interval of 1 second. You can use any other interval.
Source for the explanations of -b and -n: the manpages (man top).
Ctrl+C is not an ideal solution because control stays in the CLI. You can use the command below, which dumps the top output to a file:
top -n 1 -b > top-output.txt
I had the exact same problem...
here was my line:
top -b -u myUser | grep -v Prog.sh | grep Prog > myFile.txt
It would create myFile.txt, but the file would be empty when I Ctrl+C'd it. So after I kicked off my top command, I started a SECOND top process. When I found the first top's PID (it took some trial and error) and killed it through the second top, the first top wrote to the file as expected.
Hope that helps!
If you wish to run the top command in the background (so as not to worry about logout/sleep, etc.), you can make use of nohup, a batch job, cron, or screen.
Using nohup (stands for: No Hang Up):
Suppose you save the top command in a file called top-exec.sh with the following content:
top -p <PID> -b > /tmp/top.log
You can replace the top command for whatever process you are interested in.
Then you can execute top-exec.sh using nohup as follows:
$> nohup top-exec.sh &
This will redirect all the output of the top command to a file named "top.log".
Set the -n argument to 1; it tells top how many frames it should produce before it exits.
top -b -n 1 > ~/mytopview.txt
or even
myvar=`top -b -n 1`
echo $myvar
From the top command, we can see all the processes with their PID (Process ID).
To print top output for only one process, use the following command:
$ top -p PID
To save top command of any process to a file, use the following command:
top -p $PROCESS_ID -b > top.log
where > redirects standard output to a file.

How to redirect output to a file and stdout

In bash, calling foo would display any output from that command on stdout.
Calling foo > output would redirect any output from that command to the file specified (in this case 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written as stdout. Both are then also directed to the given output file by the tee command.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr stream into the stdout stream.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
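If unbuffer is not available, a similar effect can often be achieved with stdbuf from GNU coreutils (a sketch, not part of the original answer; -oL and -eL request line buffering for stdout and stderr, which only helps programs that rely on default C stdio buffering):
$ stdbuf -oL -eL program [arguments...] 2>&1 | tee outfile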
Another way that works for me is,
<command> |& tee <outputFile>
as shown in the GNU Bash manual
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, refer to the redirection section of the Bash manual.
You can primarily use Zoredache's solution, but if you don't want to overwrite the output file, use tee with the -a option as follows:
ls -lR / | tee -a output.file
Something to add...
The package unbuffer has support issues with some packages under Fedora and Red Hat releases.
Setting aside those troubles, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF & matthew, your inputs saved me a lot of time.
Using tail -f on the output file should also work.
In my case I had a Java process with output logs. The simplest solution to display the output logs and also redirect them into a file (named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running with its output logs displayed and also written to the file named logfile.
You can do that for your entire script by using something like this at the beginning of your script:
#!/usr/bin/env bash
test x$1 = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; $0 $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile and lets everything go to stdout at the same time.
It uses some stupid tricks:
use exec without a command to set up redirections,
use tee to duplicate outputs,
restart the script with the wanted redirections,
use a special first parameter (a simple NUL character specified by the $'string' special bash notation) to indicate that the script has been restarted (no equivalent parameter should be used by your original script),
try to preserve the original exit status when restarting the script, using the pipefail option.
Ugly but useful for me in certain situations.
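A simpler alternative sketch, not from the original answer, is to let exec send the whole script's output through tee via process substitution (bash-only; mylogfile is the same assumed log name as above):
#!/usr/bin/env bash
# duplicate stdout into mylogfile, then fold stderr into stdout
exec > >(tee mylogfile) 2>&1
echo "everything below is written to both the terminal and mylogfile"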
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo will happen as you and the file write will happen as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirect would happen as you.
Hint: tee also supports append with the -a flag; if you need to replace a line in a file as another user, you could execute sed as the desired user.
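For example, a sketch reusing the hypothetical names from above (some_user and /some/path/some_file):
echo "some output" | sudo -u some_user tee -a /some/path/some_file
sudo -u some_user sed -i 's/old text/new text/' /some/path/some_file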
<command> |& tee filename    # creates a file "filename" with the command output as its content; if the file already exists, its content is replaced with the new output
<command> | tee >> filename  # appends the output to the file, but does not print it on standard output (the screen)
I want to print something using "echo" on the screen and append that echoed data to a file
echo "hi there, Have to print this on screen and append to a file"
tee is perfect for this, but this will also do the job
ls -lr / > output | cat output
