Saving git push output to a file - Linux

I'm new to Git.
My problem is that, from a shell script (running on Windows), I need to save the output of git push to a file.
So far I've got something like this:
echo -e "\n6) ${GREEN}Starting Push.${NC}"
git push -v >> logs/logPush.log
if grep -q -w -i "Rejected" logs/logPush.log
then
    echo "${RED}A conflict has been detected. Exiting.${NC}"
    read
    exit
else
    :
fi
But it always generates a blank file. The pull works just fine, though...
Does anyone know how to make the output file receive all the information that appears on the terminal:
Counting objects: 5, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 289 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
To ssh:repository
42be914..ead1f82 master -> master
updating local tracking ref 'refs/remotes/origin/master'

Redirect stderr to the file as well:
git push -v >> logs/logPush.log 2>&1
It looks like git push has the --porcelain option for this purpose:
--porcelain
Produce machine-readable output. The output status line for each ref will be tab-separated and sent to stdout instead of stderr.
The full symbolic names of the refs will be given.
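For instance, here is a minimal sketch of how the script from the question might use it (how it is wired in is an assumption, not from the original answer; the '!' prefix is the flag documented for rejected refs in --porcelain output):
git push --porcelain >> logs/logPush.log 2>&1
# in --porcelain output, status lines for rejected refs start with '!'
if grep -q '^!' logs/logPush.log
then
    echo "${RED}A push was rejected. Exiting.${NC}"
    read
    exit 1
fi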

A UNIX shell provides two output streams by default -- stdout and stderr.
This is often useful because when you redirect output to something else, you still want errors to go to the screen.
$ cat nosuchfile | grep something
cat: nosuchfile: No such file or directory
This is what I wanted. I didn't want cat: nosuchfile: No such file or directory to be fed into grep.
As you know, you can redirect stdout using > and |.
You can redirect stderr using 2>:
$ cat nosuchfile > outfile 2>errormessage
A common idiom is:
$ somecommand > output 2>&1
Here &1 refers to the file descriptor used by stdout. So you're telling the shell to send stderr to the same place as stdout.
You can use 2>&1 to send stderr to your output file. Or you can use what you've learned here to make sense of the git documentation re --porcelain, or design some other solution, for example sending stderr to a second file where appropriate.
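As one hedged sketch of that last idea, applied to the script from the question (the second file name is just illustrative; note that git push writes most of its human-readable output to stderr, so the error file will receive the bulk of it):
git push -v >> logs/logPush.log 2>> logs/logPushErrors.log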

I guess that the exit status of git push will indicate whether it was successful or not. You should use it rather than parsing the log:
if ! git push -v >> logs/logPush.log 2>&1
then
    echo "${RED}Failed to push. Exiting.${NC}"
    read
    exit
fi
I've used 2>&1 to redirect stderr to stdout, so the log would contain both outputs - this is optional.
If the command fails, it's not necessarily indicative of a conflict, so I modified the error message to something more generic.
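If you still want to distinguish a rejected (non-fast-forward) push from other failures, one possible sketch combines the exit status with the grep from the question (still assuming 2>&1 so the rejection message lands in the log):
if ! git push -v >> logs/logPush.log 2>&1
then
    if grep -q -w -i "rejected" logs/logPush.log
    then
        echo "${RED}Push rejected; pull and merge first. Exiting.${NC}"
    else
        echo "${RED}Failed to push. Exiting.${NC}"
    fi
    read
    exit 1
fi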

Related

Executing a p4 aliased command results in wrong ordering of output lines

I have the following content inside my p4aliases.txt.
diff-cl $(target-cl) = diff -dl //...#$(EQ)$(target-cl)
Basically it diffs the files in your current workspace against the target changelist's shelved files.
That works fine; I can execute it. But when I compare the result of the aliased command above against the direct raw (non-aliased) command
p4 diff -dl //...#=<target-cl>
the output lines from the aliased command are in the wrong order, e.g. the changes for a certain file show up before the line naming that file; the line order is messed up. This is not the case when you execute the non-aliased command.
Example
Expected result
==== //depot/common.h#none - x:\mydir\project\src\common.h ====
==== //depot/file.cpp#none - x:\mydir\project\src\file.cpp ====
3a4
> added line 1
==== //depot/file.h#none - x:\mydir\project\src\file.h ====
Actual result
3a4
> added line 1
==== //depot/common.h#none - x:\mydir\project\src\common.h ====
==== //depot/file.cpp#none - x:\mydir\project\src\file.cpp ====
==== //depot/file.h#none - x:\mydir\project\src\file.h ====
I have p4 version as of Rev. P4/NTX64/2021.1/2126753 (2021/05/12).
Perforce server version (got from p4 info) is Server version: P4D/LINUX26X86_64/2017.1/1574018 (2017/10/02).
How can I solve this issue?
Could this be caused by too large a version gap between client and server?
Update
I have tested p4 client versions from 2016 through 2020 by downloading old binaries from ftp.perforce.com (in the perforce directory). No luck; the output is scrambled in the same way, so it's not a problem of version mismatch.
This looks like a bug in the p4 client. When the client does a diff, it's written by the ClientUser::Diff() method, which defaults to writing to stdout (i.e. it does not route the output through ClientUser::OutputText()):
https://workshop.perforce.com/projects/perforce_software-p4/files/2018-2/client/clientuser.cc#436
https://workshop.perforce.com/projects/perforce_software-p4/files/2018-2/client/clientuser.cc#573
Output from commands run as part of an alias go through the ClientUserStrBuf subclass, which buffers all of its output. The file headers, for example, are buffered by ClientUserStrBuf::OutputInfo():
https://workshop.perforce.com/projects/perforce_software-p4/files/2018-2/client/clientaliases.cc#1647
There isn't a ClientUserStrBuf::Diff() implementation, though, so that diff output goes straight to stdout while the headers are buffered and printed at the end (presumably after some post-processing) -- hence the diff output showing up first in the console.
The fix I'd make would be to have the base ClientUser::Diff() implementation route the output through OutputText() when no output file is provided, which seems like the least-surprise behavior; that'd fix the aliases behavior and might even make life a little easier for other client developers who would otherwise hit the same issue. If you have a support contract with Perforce you can file this as a bug report, or since the client is open source you can take a crack at fixing and building it yourself. I don't think there's a workaround that doesn't involve modifying the client source code.
Samwise has the correct approach to truly fix the problem at hand, although it might take some effort to understand the code and implement the fix itself.
At any rate, if we took that approach we wouldn't be able to benefit from bug fixes and future updates, as we would be stuck with the 2018-2 version of p4, the latest for which the source can be grabbed.
I would recommend using WSL and interacting with p4.exe (yes, a Windows-based binary) for Windows-based projects, and the Linux p4 binary for Linux-based ones. If you don't use WSL, the .bash_aliases-style solution below seamlessly fixes the aliased diff operation.
Put the following code into your ~/.bash_aliases
# p4 - fix for the aliased diff operation
# platform independent: it chooses the correct binary to execute
p4() {
    cmd="p4.exe"    # default is the Windows binary
    # if the last argument is "-lx", use the Linux binary instead
    if [[ "${@: -1}" == "-lx" ]]; then
        cmd="/usr/local/bin/p4"
    fi
    if [[ $1 == "diff-cl" ]]; then
        if [ -z "$2" ]; then
            echo "usage: p4 diff-cl <CL>"
            return 1
        fi
        $cmd diff -dl //...#="$2" | diffp4 | less -r
    elif [[ $1 == "diff-cl-fonly" ]]; then
        if [ -z "$2" ]; then
            echo "usage: p4 diff-cl-fonly <CL>"
            return 1
        fi
        $cmd diff -Od -dl -ds //...#="$2" | diffp4 | grep ==== | less -r
    else
        $cmd "$@"
    fi
}
Then source ~/.bash_aliases.
What it does is let you keep using p4 with all of its original commands and arguments, with the exception of diff-cl (the same name as the alias I've put into p4aliases.txt for Windows or ~/.p4aliases for Linux). You can safely remove the diff-cl entry from p4's alias file, or just leave it there. The function in ~/.bash_aliases intercepts the call whenever that argument matches and runs the raw command, so we don't have to type the long command ourselves.
We can remove this section from ~/.bash_aliases later, once upstream p4 has been fixed.
In the else branch, we just relay all the arguments, so any other command behaves as it normally would.
Extra: diff-cl-fonly lists only the files (depot path and local workspace path) which have changes.
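For example (the changelist number is just a placeholder):
p4 diff-cl 12345        # diff against shelved CL 12345 using the Windows p4.exe
p4 diff-cl 12345 -lx    # same, but using the Linux p4 binary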

How to redirect both OUT and ERR to one file and only ERR to another

Hi experts, I want a command's stdout and stderr appended to one file, like this: command > logOutErr.txt 2>&1
but I also want stderr appended to another file, as in: command 2> logErrOnly.txt
# This is a non working redirection
exec 1>> logOutErr.txt 2>> logOutErr.txt 2>> logErrOnly.txt
# This should be in Out log only
echo ten/two: $((10/2))
# This should be in both Out and Out+Err log files
echo ten/zero: $((10/0))
I understand that the last 2>> redirect overrides the preceding one... so what then? tee? But how?
I have to do this once at the beginning of the script, without modifying the rest of the script (because it is dynamically generated and any modification is too complicated)
Please don't answer only with links to the theory, I have already spent two days reading everything with no good results, I would like a working example
Thanks
With the understanding that you lose ordering guarantees when doing this:
#!/usr/bin/env bash
exec >>logOutErr.txt 2> >(tee -a logErrOnly.txt)
# This should be in OutErr
echo "ten/two: $((10/2))"
# This should be in Err and OutErr
echo "ten/zero: $((10/0))"
This works because redirections are processed left-to-right: When tee is started, its stdout is already pointed to logOutErr.txt, so it appends to that location after first writing to logErrOnly.txt.
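A quick, hedged way to check where each stream ends up, assuming the snippet above was saved and run as a script:
# the stdout-only line should be counted only in the combined log
grep -c 'ten/two' logOutErr.txt logErrOnly.txt
# the division-by-zero error from bash should be counted in both files
grep -c 'division by 0' logOutErr.txt logErrOnly.txt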

How to use set -x without showing stdout?

Within CI, I am running a bash script that calls many bash scripts.
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
This does not disable the stdout produced by the script.
The GitLab CI runners stop logging after 100MB of log; it says Job's log exceeded limit of 10240000 bytes.
I know the log can only grow.
How can I optimize the output log size?
I don't need all of the stdout; I could keep only stderr, but then it would be a long-running script with no progress information.
Is there a way to display the commands being run, like when doing set -x?
Edit
Reading the answers, I was not able to solve my issue. I need to add that I am using Node.js to run the bash script that runs the long bash script.
This is how I call my node script within .gitlab-ci.yml:
script:
  - node my_script.js
Within my_script.js, I have:
const path = require('path');
const { spawn } = require('child_process');

exports.handler = () => {
  const ls = spawn('bash', [path.join(__dirname, 'release.sh')], { stdio: 'inherit' });
  ls.on('close', (code) => {
    if (code !== 0) {
      console.log(`ps process exited with code ${code}`);
      process.exitCode = code;
    }
  });
};
Within my_script.sh, I have:
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
You can selectively redirect file handles with exec.
exec >stdout 2>stderr
This however loses the connection to the terminal, so there is no simple way to output anything to the terminal after this point.
You can instead duplicate a file handle with m>&n, where n is the number of the file descriptor to duplicate and m is the number of the new one (choose a big number like 99 so you don't accidentally clobber an existing handle).
exec 98<&1 # stdout
exec 99<&2 # stderr
exec >/dev/null 2>&1
:
To re-enable output,
exec 1<&98 2<&99
If you redirected to a temporary file instead of /dev/null you could obviously now show the tail of those files to the caller.
tail -n 100 "$TMPDIR"/stdout "$TMPDIR"/stderr
(On a shared server, probably use mktemp to create a unique temporary directory at the beginning of your script; static hard-coded file names make it impossible to run two builds at the same time.)
As you usually can't predict where the next error will happen, probably put all of this in a wrapper script which performs the redirection, runs the build, and finally displays the tail end of the temporary log files. Some build servers probably want to see some signs of life in the log file every few minutes, so perhaps tail a few lines every once in a while in a loop, too.
On the other hand, if there is just a single build command, the whole build job's stdout and stderr can simply be redirected to a log file, and you don't need to exec things back and forth. If you need to enable output selectively for portions of the script, use exec as above; but for wholesale redirection, just redirect the one command.
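A minimal sketch of that simpler case (the build command name is a placeholder; $TMPDIR is the temporary directory mentioned above):
./build.sh > "$TMPDIR"/stdout 2> "$TMPDIR"/stderr
status=$?
tail -n 100 "$TMPDIR"/stdout "$TMPDIR"/stderr
exit "$status"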
In summary, maybe your build script would look something like this.
#!/bin/sh
t=$(mktemp -t -d cibuild.XXXXXXXX) || exit
trap 'kill $buildpid; wait $buildpid; tail -n 500 "$t"/*; rm -rf "$t"' 0 1 2 3 5 15
# Your original commands here
${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}">"$t"/stdout 2>"$t"/stderr &
buildpid=$!
while kill -0 $buildpid; do
    sleep 180
    date
    tail -n 1 "$t"/*
done
wait
A flaw with this approach is that you lose timing information. A proper solution would let you see when each line was produced, and display standard output and standard error intermixed in the order the messages were printed, perhaps with visible time stamps, and even with coloring hints (red time stamps for stderr?).
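As a rough, hedged illustration of the time-stamp idea (it assumes GNU awk for strftime and accepts that the two streams get merged; build.sh and the log path are placeholders):
./build.sh 2>&1 | awk '{ print strftime("%H:%M:%S"), $0 }' > "$t"/log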
Option 1
If your script will output the error message to stderr, you can ignore all output to stdout by using command > /dev/null, where /dev/null is a black hole that will take away any output to it.
Option 2
If there's any pattern on your error message, you can use grep to filter out those error messages.
Edit 1:
To show the commands being run, you can pass the -x option to bash; your command then becomes
bash -x ${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
bash will print each command it executes to stderr.
Edit 2:
If you want to reduce the size of the output file, you can pipe the output through gzip: ${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" | gzip > logfile.
To read the content of the logfile, you can use zcat logfile.

Rollover shell script

Assume a shell script (commands.sh) with a few commands.
I need to write a script which sends the output of the commands executed by commands.sh to a file f1.csv.
If the file size exceeds 1MB, the output should then go to file f2.csv.
If that file's size exceeds 1MB as well, the output should go to file f3.csv.
If f3.csv exceeds 1MB, the old f1 should be deleted, a new f1 created, and the output written to f1 again. This process should go on.
I can write the crontab file; it's just the shell script that is a bit tricky.
I have been experimenting:
#!/usr/bin/env bash
PREFIX="f"
# Maximum size after which you want a new file, in bytes
MAX_SIZE=1048576
LAST_FILE=$(ls "$PREFIX"*.csv 2>/dev/null | tail -1)
# Check if a file exists and, if it does not, create the first one.
if [[ -z "$LAST_FILE" ]]
then
    LAST_FILE=$PREFIX"1.csv"
    touch "$LAST_FILE"
fi
# Strip the prefix and extension to get the current file number
LAST_FILE_NO=$(echo "$LAST_FILE" | sed s/$PREFIX// | sed s/.csv//)
LAST_FILE_SIZE=$(stat -c %s "$LAST_FILE")
if [ "$LAST_FILE_SIZE" -lt "$MAX_SIZE" ]
then
    /bin/sh ./sam.sh >> "$LAST_FILE"
else
    UPCOMING_FILE_NO=$((LAST_FILE_NO+1))
    /bin/sh ./sam.sh >> "$PREFIX$UPCOMING_FILE_NO.csv"
fi
Help is appreciated, guys.
EDIT: I've got the secondary shell script to work too...
Now if anyone could help me with resetting after 3 files are done and starting again from f1.
Thanks.
It sounds like you'd be better off using logrotate, depending on how your script is running. If you are running 'commands.sh' on a cron, you can have logrotate rotate out the logs. There is a good guide on logrotate here:
http://linuxers.org/howto/howto-use-logrotate-manage-log-files
If your commands.sh isn't going to be on a cron, meaning it's not a regular time interval that triggers it, you could manually set up a log rotation at the beginning of your script. I once had to do something similar. I found this guide really useful:
http://wazem.blogspot.com/2013/11/simple-bash-log-rotate-function.html
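As a hedged sketch of the cron-driven variant (the paths, the 1M/3-generation settings and the config itself are assumptions, not taken from either guide), you could keep a private logrotate config next to the script and invoke logrotate each time commands.sh runs:
#!/usr/bin/env bash
# append all output to one file and let logrotate keep at most 3 generations of ~1MB
cat > rotate.conf <<'EOF'
/path/to/output/f1.csv {
    size 1M
    rotate 3
    missingok
    notifempty
}
EOF
./commands.sh >> /path/to/output/f1.csv
logrotate -s rotate.state rotate.conf   # rotated copies are named f1.csv.1, f1.csv.2, ...
Note that logrotate's naming (f1.csv.1, f1.csv.2, ...) differs from the f1/f2/f3 scheme in the question, but the size-based rollover and the bounded number of files are the same idea.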

cat multiple files over one ssh connection and get return value for each

As said in the title, I'm trying to cat multiple files (the content needs to be appended to existing files on the host) over one ssh connection and get a return value for each, i.e. whether the cat for that particular file was successful or not.
Up to now, I did this for each file individually, by repeating the following command for each one and checking its return value.
cat specific_file | ssh user@host -i /root/.ssh/id_rsa "cat >> result/specific_file"
I then checked the return value for each transfer (automatically) and could thereby determine the status of each file. My question is: is it possible to do this over one single ssh connection, but obtain a return value for every single file?
Thanks in advance!
EDIT:
(b) As you can see in the following, I generate a command for one specific file and then check the return code. The method containing this part of the program is then called for the different types of files.
If the file was successfully logged, it can be removed from the client, and further logging to that type of file is (still) allowed. If the file exceeds its limits, this bit is unset in another part of the program.
sprintf(command,"/bin/cat /root/%s%s | /usr/bin/ssh log-bot@192.168.5.1 -i"
" /root/.ssh/id_rsa \"/bin/cat >> result/%s%s\"", apmac,
ending_str[source], apmac, ending_str[dest]);
rc = system(command);
if(rc != 0)
{
sprintf(buffer, "[%i] - LOG ERROR from %s to %s CODE %i ", (int)time(0),
ending_str[source], ending_str[dest], rc);
sprintf(filename,"%s%s%s", LOCAL, apmac, ending_str[source]);
}
else
{
sprintf(filename,"%s%s%s", LOCAL, apmac, ending_str[source]);
remove(filename);
sprintf(buffer, "[%i] - LOG process from %s to %s ok", (int)time(0),
ending_str[source], ending_str[dest]);
switch(source)
{
case PROBE_FILE:
LOG_MASK_SET(globals->log_mask, LOG_MASK_PROB);
break;
case LIST_FILE:
LOG_MASK_SET(globals->log_mask, LOG_MASK_LIST);
break;
case SCAN_FILE:
LOG_MASK_SET(globals->log_mask, LOG_MASK_SCAN);
break;
default:
/* Other cases not YET considered */
break;
}
}
SECOND EDIT:
I'm running this code on a router as part of a program. Please note that I'm not allowed to add new libraries or non-basic functions to the system. In addition, the ssh client on the system itself doesn't support "-M" mode.
Edit in response to the added info (and code):
To the code: I'd strongly consider writing a script/program on the receiving end to talk to the sending process over the ssh pipe. That way you have full flexibility.
The simplest thing that could work, would still appear to be sending an archive over to the receiving host. On the receiving end, filter the archive with a script that
untars each file into a temporary location
tries the appending operation cat >> specific_file
prints a 'result record' to stdout as feedback to the sender
So you'd do:
tar c file1 file2 file3 |
    ssh log-bot@remote /home/log-bot/handle_logappends.sh |
    while read resultcode filename
    do
        echo "$filename resulted in code $resultcode"
    done
To handle the feedback in C/C++ you'd look at popen, that will allow you to read the streaming feedback as if from a file, simple!
An example of such a handle_logappends.sh script on the receiving end:
#!/bin/bash
set -e # bail on error
TEMPDIR="/tmp/.receiving_$RANDOM"
mkdir "$TEMPDIR"
trap "rm -rf '$TEMPDIR/'" INT ERR EXIT
tar x -v -C "$TEMPDIR/" | while read filename
do
    echo "unpacked file $filename" > /dev/stderr
    ## implement your file append logic here :)
    ## e.g. (?):
    cat "$TEMPDIR/$filename" >> "result/$filename" && rc=0 || rc=$?  # capture the status without tripping set -e
    ## HERE COMES THE FEEDBACK PART: '<code> <filename>'
    echo "$rc" "$filename"
done
The really neat part of this is that, since everything is in streaming mode, the feedback for the first file(s) may arrive while the sending tar is still sending later files to the receiving host. No unnecessary delays!
I included a tiny bit of sane error handling/cleanup, but I would suggest:
perhaps receiving the whole archive first, then iterating through the files?
doing the appends in atomic fashion, i.e. on a copy, then move the copy into place only if the whole append operation succeeded; this prevents partially appended logs (see the sketch below)
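A minimal sketch of that atomic-append idea, reusing $TEMPDIR and $filename from the script above (how exactly it is wired in is an assumption, not part of the original answer):
tmp=$(mktemp "result/$filename.XXXXXX") &&
    cp "result/$filename" "$tmp" &&
    cat "$TEMPDIR/$filename" >> "$tmp" &&
    mv "$tmp" "result/$filename"
Because the mv happens within the same directory (and thus the same filesystem), readers only ever see either the old file or the fully appended one.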
Hope that helps!
Older answer:
You'd usually employ devious little tricks (not) like:
tar c file1 file2 file3 | ssh user@host -i /root/.ssh/id_rsa "tar x -C result/ -"
Add a verbose flag to see progress details
tar c file1 file2 file3 | ssh user@host -i /root/.ssh/id_rsa "tar xvC result/ -"
If you want, you can substitute cpio for tar. Add options to get more functionality (-p for preserve permissions, e.g.)
To do various separate steps over a single logical connection, you can use a ssh Master connection:
ssh user@host -i /root/.ssh/id_rsa -MNf # login, master, background without a command
for specific_file in file1 file2 file3
do
    cat "$specific_file" |
        ssh user@host -Mi /root/.ssh/id_rsa "cat >> 'result/$specific_file'"
    # check/use error code
done
How about building on libssh2 instead of scripting ssh, and using the sftp subsystem instead of building your own file-transfer system in shell?
There's an example of performing one file append in libssh2/examples/sftp_append.c, just repeat it for the multiple files you want.
If you look at the problem from a different tactical view, you could cat all the files over via a single master file. That master file is a shell script with here-documents embedding each file's contents. Then execute the master shell script and ls the files, all in one ssh session. It's not pretty or elegant, but it will work.
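A hedged sketch of how such a master script could be generated and streamed in one ssh session (file names, the remote result/ directory and the key path follow the question; it assumes every file ends with a newline and never contains its own EOF delimiter line):
{
    for f in file1 file2 file3; do
        printf 'cat >> "result/%s" <<'\''EOF_%s'\''\n' "$f" "$f"
        cat "$f"
        printf 'EOF_%s\necho "%s exit=$?"\n' "$f" "$f"
    done
} | ssh user@host -i /root/.ssh/id_rsa bash -s
The sender then reads one "name exit=code" line per file back from the connection's stdout.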
