Applying the patch command using files in different directories in Linux

I'm trying to apply a patch using two files in different directories, and the output file should go to a different directory too. The first file is /var/local/documents/document.xml, the patch file is located at /var/local/patches/patch.diff, and I want the output file to be created as /var/local/final/final.xml.
When the files are located in the same directory, this command works:
patch document.xml -i patch.diff -o final.xml
But when they are in separate directories and I try to use the following command:
patch /var/local/documents/document.xml -i /var/local/patches/patch.diff -o /var/local/final/final.xml
I get the following error:
(Stripping trailing CRs from patch.)
patching file {file}
Hunk #1 FAILED at 20.
1 out of 1 hunk FAILED -- saving rejects to file {file}
I've read somewhere that I should use -d and -p to work correctly with directories, but I have no clue how to do it...

Yes, it's the -p switch (in your case it should strip 2 entries from the patch path):
cd /var/local/documents
patch -p 2 -o ../final/final.xml document.xml < ../patches/patch.diff
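If you'd rather not cd first, the -d switch the question mentions makes patch change into a directory before applying anything. A sketch of the equivalent command, using an absolute path for the diff so the shell redirection still resolves:
patch -d /var/local/documents -p2 -o ../final/final.xml document.xml < /var/local/patches/patch.diff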

Try this:
$ cp /var/local/documents/document.xml /var/local/final/document.xml
$ (cd /var/local/final && patch document.xml) < /var/local/patches/patch.diff
$ mv /var/local/final/document.xml /var/local/final/final.xml

Related

Need to convert a directory with subdirectories of images to webp

I'm trying to make my website faster and discovered that converting jpg and png to webp could make a difference, so I want to copy all the images in my theme directory, including the ones in subdirectories, to a different folder while keeping the original directory structure.
After searching on Google I found a script on this website that works, and I recreated the directory structure in a new destination folder using mkdir. But my problem is that I have to manually edit the script and run it in each folder. Because I'm not the best at working with bash, I have no idea how to edit the converting script so that it looks in all folders and copies the results to the new one.
So my question is: is there a way to keep the folder structure of the original, but with a different base folder?
for file in *.jpg
do
    cwebp -q 100 "$file" -o "/var/www/themes/assets/images/webp/${file%.jpg}.webp"
done
Try this (Shellcheck-clean) code:
#! /bin/bash

shopt -s nullglob # Globs that match nothing expand to nothing
shopt -s globstar # ** matches multiple directory levels

root_webp_dir=/var/www/themes/assets/images/webp

for jpg_path in **/*.jpg ; do
    jpg_file=${jpg_path##*/}                                        # Filename without leading directories
    [[ $jpg_path == */* ]] && jpg_dir=${jpg_path%/*} || jpg_dir=.   # Containing directory, or . at the top level
    webp_dir=${root_webp_dir}/${jpg_dir}                            # Mirror the directory under the webp root
    webp_path=${webp_dir}/${jpg_file%.jpg}.webp
    [[ -d $webp_dir ]] || mkdir -p -- "$webp_dir"
    cwebp -q 100 "$jpg_path" -o "$webp_path"
done
Note that the version of cwebp that I used for testing (stupidly) doesn't support the -- convention for terminating command line options. Otherwise the command would have been (and should be) cwebp -q 100 -o "$webp_path" -- "$jpg_path". The command in the code above could go wrong if any of the JPEG files has a path that begins with a -. One way to work around the problem and make the code completely safe would be to use ./**/*.jpg instead of **/*.jpg as the glob pattern to find JPEG files.
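For reference, only the glob in the loop header needs to change for that safer variant; the rest of the script works as-is:
for jpg_path in ./**/*.jpg ; do   # every match now starts with ./, so no path can begin with -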

ZIP command in a bash script causes "zip warning: name not matched"

In a script, this:
(cd "$amdir/archive" && zip -rm "$amdir/archive/a.zip" "$amdir/archive/*")
causes zip warning: name not matched.
When I echo it and copy-paste the result to a command line, it works fine.
Any idea why it doesn't work in a bash script on Linux?
You are already in $amdir/archive after your cd.
So your zip tries to find another $amdir/archive directory inside the one you are already in.
I can reproduce the error message when I try to do a zip -rm on a nonexistent directory.
In addition, you should consider the remark from Inian: by quoting the * you escape it, so no bash pattern matching happens.
So the second part should simply read:
zip -rm a.zip *
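Applied to the subshell line from the original script, the fixed command becomes:
(cd "$amdir/archive" && zip -rm a.zip *)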

How can I download all the files from a remote directory to my local directory?

I want to download all the files in a specific directory of my site.
Let's say I have 3 files in my remote SFTP directory
www.site.com/files/phone/2017-09-19-20-39-15
a.txt
b.txt
c.txt
My goal is to create a local folder on my desktop with ONLY those downloaded files, no parent files or parent directories. I am trying to get a clean report.
I've tried
wget -m --no-parent -l1 -nH -P ~/Desktop/phone/ www.site.com/files/phone/2017-09-19-20-39-15 --reject=index.html* -e robots=off
I got the files buried under the mirrored remote directory structure.
I want to get just the three .txt files directly inside ~/Desktop/phone/.
How do I tweak my wget command to get something like that?
Should I use something other than wget?
Ihue,
Taking a shell-programmatic perspective, I would recommend you try the following command line script; note I also added the citation so you can see the original thread.
wget -r -P ~/Desktop/phone/ -A txt www.site.com/files/phone/2017-09-19-20-39-15 --reject=index.html* -e robots=off
-r enables recursive retrieval. See Recursive Download for more information.
-P sets the directory prefix where all files and directories are saved to.
-A sets a whitelist for retrieving only certain file types. Strings and patterns are accepted, and both can be used in a comma separated list. See Types of Files for more information.
Ref: #don-joey
https://askubuntu.com/questions/373047/i-used-wget-to-download-html-files-where-are-the-images-in-the-file-stored
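Since the directory is described as a remote SFTP directory, an alternative to wget is OpenSSH's sftp with recursive get. A sketch, assuming you have SSH access to the host (the user name and remote path are placeholders):
# -r downloads the remote directory recursively into ~/Desktop/phone/
sftp -r user@site.com:/files/phone/2017-09-19-20-39-15 ~/Desktop/phone/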

Redirect 2to3 output to new file

When I run 2to3.py -w my_script.py it converts my_script.py to Python 3 and saves the original version as my_script.py.bak.
I want the old file to remain as is, and the converted file to go into a new file, like my_script.converted.py. Is there a 2to3.py argument that allows this?
Turns out there are several options for this:
Copy the file to the new location first, then run 2to3 -w -n on the copy, which modifies the file in place (-w) without making a backup (-n)
2to3 -n -w -o desired/path/to/new/file specifies an output directory (-o) and disables backups (-n, which -o requires; -w is still needed to actually write the result)
2to3 -n -W --add-suffix=3 will put the converted file in the same location but with a suffix added (-W --add-suffix=), without making a backup (-n)
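Putting the second option to work on the file from the question (a sketch; the converted/ directory name is just an example):
mkdir -p converted                                  # create the output directory first, in case 2to3 does not
2to3 -n -w -o converted my_script.py                # -n: no backup (required by -o), -w: write the result
mv converted/my_script.py my_script.converted.py    # rename to the desired name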

Create Google Chrome extension package without specified files

Right now, to create an extension with the Google Chrome Extensions page, we select the directory that contains the extension and it generates a .crx file.
The problem is that it contains all the files from this directory, for example all docs, asset drafts, etc.
Is it possible to create some blacklist to ignore specified files like *.psd, *.pdf, docs/* ...?
The Chromium team decided not to implement a manifest (or similar mechanism) for including only the desired files in a .CRX.
The recommended workflow is to have a build step that outputs only the needed files in a dist directory, and to create the CRX from that directory. This is common practice for JavaScript libraries.
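A minimal sketch of such a build step, assuming rsync is available; the dist/ name and the excluded patterns just reuse the unwanted file types from the question:
# Copy everything except the unwanted files into dist/, then pack dist/ as the extension
rm -rf dist && mkdir dist
rsync -a --exclude 'dist' --exclude '*.psd' --exclude '*.pdf' --exclude 'docs/' ./ dist/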
My solution
I have created a .crxignore custom ignore file, like this:
.*
Makefile
*.md
bin
It is made for the zip command and works differently from .gitignore: you can't add comments, for example. See the documentation at https://linux.die.net/man/1/zip and look for the --exclude option.
Now you can create a zip without the ignored files:
$ zip -qr -9 -X .out/out.zip . -x@.crxignore
#                              ^^^^^^^^^^^^^ using the ignore file
After that, I can convert the zip to a crx file with this Go tool: https://github.com/mmadfox/go-crx3 You have to build it with the go build -o out/crx3 crx3/main.go command (you may get missing module errors, but the error output will tell you the commands you have to run). You can then move out/crx3 wherever you want, e.g. [project]/bin/crx3.
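For clarity, those build steps as commands (a sketch; the clone location and the project path are placeholders):
git clone https://github.com/mmadfox/go-crx3
cd go-crx3
go build -o out/crx3 crx3/main.go       # builds the crx3 binary into out/
mv out/crx3 /path/to/project/bin/crx3   # move it wherever you want, e.g. your project's bin/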
I haven't tried it, but maybe the chrome command can also convert a zip to a crx.
You have to generate a private key:
$ bin/crx3 keygen .out/key.pem
Final Makefile
build:
	rm -f .out/out.zip
	zip -qr -9 -X .out/out.zip . -x@.crxignore
	bin/crx3 pack .out/out.zip -p .out/key.pem -o .out/out.crx
Build process:
$ make build
# --> build the .out/out.crx file
Try the command line program zip, which can be found in Cygwin if you're on Windows, is likely already present on OS X, and is easy to install if you're using Linux.
zip package.zip -r * -x package.sh -x "*.git*" -x "*.*~" -x "*.pdf" -x "docs/*" -x "*.psd"
