How to run diff on PHP files inside a Dockerfile? - linux

I am trying to automate getting the differences between PHP files in a container, but I receive exit code 1 and the build process is killed.
RUN diff ../../../composer/vendor/leroy-merlin-br/mongolid/src/Mongolid/ActiveRecord.php ../../../diff/ActiveRecord.php > ../../../diff/diff-ActiveRecord.diff
How can I fix this?

diff returns 1 when there are differences between the files, which makes the RUN directive fail. You can explicitly return 0 on that line if you don't want to break Docker's build:
RUN diff ../../../composer/vendor/leroy-merlin-br/mongolid/src/Mongolid/ActiveRecord.php ../../../diff/ActiveRecord.php > ../../../diff/diff-ActiveRecord.diff; exit 0
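If you prefer, appending || true instead of the trailing exit 0 has the same effect and reads a little more explicitly (a minor variation, nothing Docker-specific). Note that, like exit 0, it also hides diff's exit code 2 ("trouble", e.g. a missing input file):
RUN diff ../../../composer/vendor/leroy-merlin-br/mongolid/src/Mongolid/ActiveRecord.php ../../../diff/ActiveRecord.php > ../../../diff/diff-ActiveRecord.diff || true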

Related

How does WinSCP determine errors? [duplicate]

I'm new to WinSCP. I'm trying to set an errorlevel for the batch file I'm running. If the errorlevel is 0, print SUCCESS and move the files that transferred with errorlevel 0 to a folder called success.
If the errorlevel is not equal to 0, move the file that produced the error to a different folder called errors. Any suggestions?
Here is my .bat:
WinSCP.exe /console /script="c:\users\PDP\script.txt" /log="c:\users\PDP\lastrun.txt"
if %ERRORLEVEL% neq 0 goto error
echo Success
sendmail.exe -t < success_mail.txt
rem the files in the batch start with OPTTXM*
move OPTTXM* c:\users\PDP\sent
exit /b 0
:error
echo Error!
sendmail.exe -t < error_mail.txt
rem how can I get inside each file and check its status?
move ?????????????????????????????????
exit /b 1
Thanks in advance
You can make WinSCP create XML log with files that were and were not transferred. But it would be difficult (if possible at all) to parse/process those XML files in a batch file.
You are better off using a more powerful scripting language, like PowerShell.
See for example Move files after upload using PowerShell/WinSCP Script.
(there are many other similar questions here)
There's also WinSCP article on this topic:
Moving local files to different location after successful upload

How to use set -x without showing stdout?

Within CI, I am running a bash script that calls many bash scripts.
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
This does not disable the stdout produced by the script.
The GitLab CI runners stop logging once the log grows too large; the job reports Job's log exceeded limit of 10240000 bytes.
I know the log can only keep growing.
How can I reduce the size of the output log?
I don't need all of the stdout; I could keep only stderr, but then it would be a long-running script that prints no information.
Is there a way to display the commands that are running, like set -x does?
Edit
Reading the answers, I was not able to solve my issue. I should add that I am using Node.js to run the bash script that runs the long bash script.
This is how I call my node script within .gitlab-ci.yml:
script:
  - node my_script.js
Within my_script.js, I have:
const { spawn } = require('child_process');
const path = require('path');

exports.handler = () => {
  const ls = spawn('bash', [path.join(__dirname, 'release.sh')], { stdio: 'inherit' });
  ls.on('close', (code) => {
    if (code !== 0) {
      console.log(`child process exited with code ${code}`);
      process.exitCode = code;
    }
  });
};
Within release.sh (the script spawned above), I have:
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
You can selectively redirect file handles with exec.
exec >stdout 2>stderr
This however loses the connection to the terminal, so there is no simple way to output anything to the terminal after this point.
You can instead duplicate a file handle with m>&n, where n is the number of the file descriptor you want to duplicate and m is the number of the new copy (choose big numbers like 98 and 99 so you don't accidentally clobber an existing handle).
exec 98>&1 # duplicate of stdout
exec 99>&2 # duplicate of stderr
exec >/dev/null 2>&1
:
To re-enable output,
exec 1>&98 2>&99
If you redirected to a temporary file instead of /dev/null you could obviously now show the tail of those files to the caller.
tail -n 100 "$TMPDIR"/stdout "$TMPDIR"/stderr
(On a shared server, probably use mktemp to create a unique temporary directory at the beginning of your script; static hard-coded file names make it impossible to run two builds at the same time.)
As you usually can't predict where the next error will happen, probably put all of this in a wrapper script which performs the redirection, runs the build, and finally displays the tail end of the temporary log files. Some build servers probably want to see some signs of life in the log file every few minutes, so perhaps tail a few lines every once in a while in a loop, too.
On the other hand, if there is just a single build command, the whole build job's stdout and stderr can simply be redirected to a log file, and you don't need to exec things back and forth. If you need to enable output selectively for portions of the script, use exec as above; but for wholesale redirection, just redirect the one command.
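In its simplest form that is just a single redirection (build.sh here is only a stand-in name for whatever command your job runs):
./build.sh > build.log 2>&1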
In summary, maybe your build script would look something like this.
#!/bin/sh
t=$(mktemp -t -d cibuild.XXXXXXXX) || exit
trap 'kill $buildpid; wait $buildpid; tail -n 500 "$t"/*; rm -rf "$t"' 0 1 2 3 5 15
# Your original commands here
${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}">"$t"/stdout 2>"$t"/stderr &
buildpid=$!
while kill -0 $buildpid; do
    sleep 180
    date
    tail -n 1 "$t"/*
done
wait
A flaw with this approach is that you lose timing information. A proper solution would let you see when each line was produced, and display standard output and standard error intermixed in the order the messages were printed, perhaps with visible time stamps, and even with coloring hints (red time stamps for stderr?).
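For a rough idea of how the timing could be kept, here is a sketch (assuming GNU awk with strftime is available on the runner; "$t" is the temporary directory from the script above): pipe the combined output through a small timestamping filter instead of splitting it into two files.
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" 2>&1 \
  | awk '{ print strftime("[%H:%M:%S]"), $0; fflush() }' > "$t"/build.log
This interleaves stdout and stderr in the order they arrive, with a timestamp on every line, although it no longer tells you which stream a given line came from.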
Option 1
If your script outputs its error messages to stderr, you can discard all output to stdout by using command > /dev/null, where /dev/null is a black hole that swallows anything written to it.
Option 2
If there is any pattern in your error messages, you can use grep to keep only the lines that match that pattern.
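For example (a sketch only, assuming your error lines contain ERROR or WARN; grep exits 1 when nothing matches, so the trailing || true keeps a clean run from failing the job, at the cost of the pipeline's exit status reflecting grep rather than your script):
./internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" 2>&1 | grep -E 'ERROR|WARN' || true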
Edit 1:
To show the commands as they run, you can supply the -x option to bash; your command then becomes
bash -x ${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" > /dev/null
bash will print each command it executes to stderr, so the trace is not affected by the > /dev/null redirection of stdout.
Edit 2:
If you want to reduce the size of the output file, you can pipe it through gzip, using ${initial_process_wd}/internals/declination/create "${RELEASE_VERSION}" "${CI_COMMIT_REF_NAME}" | gzip > logfile.
To read the content of the logfile, you can use zcat logfile.

UNIX diff command usage on puppet exec

I have two Release.ini files:
one I am downloading from S3 and storing in the /CodeRootFolder/tmp location,
and the other lives in the /DocRootDir location.
I want to verify whether the downloaded file (i.e. /CodeRootFolder/tmp/Release.ini) has changed, and if it has, execute the rsync command via puppet exec as below.
The problem is that when there is a change, the command does not execute; it seems to be because diff returns 1.
exec {'Actual code deployment with rsync':
  command => "rsync ${myclass::CodeRootFolder}/tmp/* ${myclass::DocRootDir}/",
  #cwd    => "${myclass::CodeRootFolder}",
  onlyif  => "diff --changed-group-format='%<' --unchanged-group-format='' ${myclass::CodeRootFolder}/tmp/Release.ini ${myclass::DocRootDir}/Release.ini",
  path    => ['/opt/rh/php55/root/usr/bin','/opt/rh/php55/root/usr/sbin','/usr/local/sbin','/usr/local/bin','/sbin/','/bin/','/usr/sbin/','/usr/bin/'],
}
Is there a good solution to my problem?
Thanks in advance
The problem is that when there is a change, the command does not execute; it seems to be because diff returns 1.
Yes, that's correct.
Invoking diff
An exit status of 0 means no differences were found, 1 means some
differences were found, and 2 means trouble.
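You can confirm this quickly from a shell (file1 and file2 being any two files you want to compare):
diff file1 file2 > /dev/null; echo $?   # 0 = identical, 1 = different, 2 = trouble (e.g. a missing file)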
If you want the command executed when there is a change, you have to use unless instead of onlyif.
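A minimal sketch of the same resource with that single change (everything else kept exactly as in your snippet):
exec {'Actual code deployment with rsync':
  command => "rsync ${myclass::CodeRootFolder}/tmp/* ${myclass::DocRootDir}/",
  unless  => "diff --changed-group-format='%<' --unchanged-group-format='' ${myclass::CodeRootFolder}/tmp/Release.ini ${myclass::DocRootDir}/Release.ini",
  path    => ['/opt/rh/php55/root/usr/bin','/opt/rh/php55/root/usr/sbin','/usr/local/sbin','/usr/local/bin','/sbin/','/bin/','/usr/sbin/','/usr/bin/'],
}
With unless, Puppet runs the rsync only when the diff command exits non-zero, i.e. exactly when the two Release.ini files differ.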

How do I rerun a bash script skipping over lines which have previously run successfully?

I have a bash script which acts as a wrapper for an analysis pipeline. If the script errors out, I want to be able to resume it from the point at which the error occurred simply by re-running the original command.
I have set two different traps: one removes the last file being generated on a non-zero exit from my script; the other removes all the temporary files on exit signal 0 and essentially cleans up the file system at the end of the run. I turned on noclobber in the bash environment, which allows my script to skip over lines where files have already been written, but it only does this if I do not set the non-zero exit trap. As soon as I set that trap, the script exits at the first line where noclobber identifies a file it will not overwrite.
Is there a way for me to skip over lines of code that have successfully run previously, rather than having to re-run my code from the start? I know I could use conditional statements for each line, but I thought there might be a neater way of doing this.
set -o noclobber

# Function to clean up temporary folders when the script exits at the end
rmfile() { rm -r "$1"; }

# Function to remove the file currently being generated
# Executed if the script errors out
rmlast() {
    if [ ! -z "$CURRENTFILE" ]
    then
        rm -r "$1"
        exit 1
    fi
}

# Trap to remove the currently generated file
trap 'rmlast "$CURRENTFILE"' ERR SIGINT

# Make a temporary directory if it has not been created in a previous run
TEMPDIR=$(find . -name "tmp*")
if [ -z "$TEMPDIR" ]
then
    TEMPDIR=$(mktemp -d /test/tmpXXX)
fi

# Run each analysis step, tracking the file currently being written
CURRENTFILE="${TEMPDIR}/Variants.vcf"
complexanalysis_tool input_file > "$CURRENTFILE"

CURRENTFILE="${TEMPDIR}/Filtered.vcf"
complexanalysis_tool2 input_file2 > "$CURRENTFILE"

CURRENTFILE="${TEMPDIR}/Filtered_2.vcf"
complexanalysis_tool3 input_file3 > "$CURRENTFILE"

# Move files to the final destination folder
mv -nv "$TEMPDIR"/*.vcf /test/newdest/

# Trap to remove temporary folders when the script finishes running
trap 'rmfile "$TEMPDIR"' 0
Update:
I have been offered answers suggesting the use of the make utility. I want to make use of its built-in ability to check whether a dependency has been fulfilled.
In my hands the makefile suggested by VK Kashyap does not seem to skip execution of previously accomplished tasks. For example, I ran the script above and interrupted it with Ctrl-C while it was generating filtered.vcf. When I rerun the script, it starts from the beginning again, i.e. at variants.vcf. Am I missing something to get the makefile to recognise the targets as already fulfilled?
Answer to update:
OK, this was a rookie mistake, but since I am not familiar with writing makefiles I will post this explanation of my error. The reason my makefile was not resuming from the exit point was that I had given the targets different names from the output files being generated. So, as VK Kashyap quite correctly answered, if you name the targets, e.g.
variants.vcf
filtered.vcf
filtered2.vcf
the same as the output files being generated, then the script will skip the previously accomplished tasks.
The make utility might be an answer for the thing you want to achieve.
It has built-in dependency checking (the stuff you are trying to achieve with the tmp files).
# run all target when all of the files are available
all: variants.vcf filtered.vcf filtered2.vcf
	mv -nv $(TEMPDIR)/*.vcf /test/newdest/

variants.vcf:
	complexanalysis_tool input_file > variants.vcf

filtered.vcf:
	complexanalysis_tool2 input_file2 > filtered.vcf

filtered2.vcf:
	complexanalysis_tool3 input_file3 > filtered2.vcf
You can use a bash script to invoke this makefile, for example:
#!/bin/bash
export TEMPDIR=xyz
make -C $TEMPDIR all
The make utility will itself check which tasks are already accomplished and skip them; it will continue from where the error occurred and finish the remaining tasks.
You can find more details on the internet about the exact makefile syntax.
There is no built-in way to do that.
However, you could brew something like that by keeping track of the last successful line and building your own goto statement, as described here and in Is there a "goto" statement in bash? (just replace the 'labels' with actual line numbers).
However, the question is whether this is really a smart idea.
A better way is to run only the commands that are actually needed, rather than every command that has not yet been executed.
This can be done either with explicit conditionals in your bash script:
produce_if_missing() {
    # check whether the file given as the first argument exists;
    # if not, run the remaining arguments and redirect their output into it
    local curfile=$1
    shift
    if [ ! -e "${curfile}" ]; then
        "$@" > "${curfile}"
    fi
}
produce_if_missing Variants.vcf complexanalysis_tool input_file
produce_if_missing Filtered.vcf complexanalysis_tool2 input_file2
Or using tools that are made for such things (see VK Kashyap's answer using make, though I prefer using the automatic variables in the make rules to minimize typos; $^ expands to the prerequisites and $@ to the target):
Variants.vcf: input_file
	complexanalysis_tool $^ > $@

Filtered.vcf: input_file2
	complexanalysis_tool2 $^ > $@

Exit codes of smbclient

I have a problem with the command-line tool smbclient from Samba on ARM.
I wrote a script to download files from a Windows Share.
Here is the smb part of this script:
smbclient //CNAME/SNAME -I0.0.0.0 -N -c "case_sensitive; cd folder; prompt; mget file"
echo $?
My problem is the exit codes.
If the file is downloaded completely, the exit code is 0. (OK)
If the file cannot be downloaded, the exit code is 1. (OK)
If the test machine loses the connection to the share while downloading a file, the exit code is 0 (NOT GOOD), but the error ("Lost connection...etc.") is written to the console. (OK)
I tried it with two different versions.
samba-3.0.32
samba-3.6.19
Both the same.
Does someone know a good workaround (or smbclient argument) to let my script know that the download failed?
PS: I checked the smbclient sources. It looks like they forgot to set the exit code: for every other kind of error they set the error message and exit (e.g. exit(1)), but for timeouts they only set the error message.
Thank you in advance!
What would be best is to use the -E argument to smbclient, which makes it write messages to stderr, and redirect that stream with 2>errorlog on the command line. You can then check this file to see whether any errors occurred.
Warning: the first line is always the Domain=......... banner, so you may need to strip that line out.
Something like this:
smbclient Hostname -A authfile -E 1>log 2>errorlog <<-EOF
get foo
EOF
In errorlog you should find something like the output below; your log file will be empty:
Domain=[Hostname] OS=[Windows Server 2008 R2 Standard 7601 Service Pack 1] Server=[Windows Server 2008 R2 Standard 6.1]
NT_STATUS_OBJECT_NAME_NOT_FOUND opening remote file \foo
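Building on that, a rough sketch of how the calling script could turn the error log into an exit status (file names follow the example above; treating any non-banner stderr output as fatal is an assumption and may be stricter than you want):
# after running the smbclient command above:
if sed '/^Domain=/d' errorlog | grep -q .; then
    echo "download failed, smbclient reported:" >&2
    cat errorlog >&2
    exit 1
fi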
