It seems that if I run gzip somefile.js, I get somefile.js.gz as the output.
However, I need to preserve the original file extension, i.e. for somefile.js, I need it to remain somefile.js after gzipping.
How can I do that?
Edit: To be clear, I need to do this for dozens of files, so I can't just mv each one. I simply want to gzip all the static css / js files and then upload them to my CDN, so I can serve them as regular js / css files.
If you really want to do so, you could simply use a for construct, which exists in almost every shell (even in cmd.exe!). In Bourne or POSIX sh flavour, it looks like this:
for file in *.js *.css ; do gzip "$file" ; mv "$file.gz" "$file"; done
In Windows cmd.exe it would read as follows (provided you've got a gzip command in your path; note that cmd's loop variable must be a single letter, and inside a batch file the percent signs must be doubled, i.e. %%f):
for %f in (*.js *.css) do gzip "%f" && move "%f.gz" "%f"
But BEWARE: as others have warned you, you will end up with binary gzipped files named foo.js or fee.css. Serving them to standard browsers will not work unless your server or CDN also sends a Content-Encoding: gzip header with them!
Be sure to make a backup copy before trying this; the operation can easily be reversed, but you could at least lose time...
EDIT: added quotes to the shell command, as suggested by gniourf_gniourf
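For what it's worth, if your CDN origin happens to be S3, you can make those renamed files work in browsers by setting the headers explicitly at upload time. A sketch (the bucket and paths are hypothetical, and it assumes the aws CLI is configured):
# Upload a pre-gzipped file; the Content-Encoding header makes browsers decompress it transparently
aws s3 cp somefile.js s3://my-bucket/js/somefile.js --content-encoding gzip --content-type application/javascript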
I find myself in a situation similar to this question:
Linux: Overwrite all files in folder with specified data?
The answers there work nicely; however, they are for typed-out text. Allow me to provide context.
I have a Linux machine with the following file structure (files and folders irrelevant to the question removed):
root/
    empty.svg
    svg/
        257238.svg
        297522.svg
        a7yf872.svg
        236y27fh.svg
        38277.svg
        ... (~200 other .svg files with arbitrary names)
        2903852.svg
The framework I am working with requires those .svg files to exist under those specific filenames, but it obviously does not care about the SVG images they contain. I do not plan on using these files, and they take up a hefty amount of disk space, so I wish to convert them all into empty SVGs, i.e. copies of the empty.svg file in my root directory, which is a 12x12 transparent SVG (124 bytes). This way the framework shouldn't error out like it did when I tried simply overwriting the raw data of those SVGs with plain text using the answer of the question linked at the top. I've tried many methods, trying to be creative with my basic Linux command-line knowledge, but with no success. How do I accomplish this?
TL;DR: How to recursively overwrite all files in a folder with the raw data of another file from Linux CLI?
Similar to the linked question, you can use the tee command, but instead of echo, use cat (the command that reads a file's contents) to supply the data; the trailing > /dev/null just keeps tee from echoing the SVG to your terminal:
cat empty.svg | tee svg/257238.svg svg/297522.svg <etc> > /dev/null
But if there are a lot of files in the svg directory, it is more practical to use a loop to automate the previous command:
for f in svg/*; do
if [[ "$f" == *.svg ]]; then
cat empty.svg > "$f"
fi
done
Here the > redirection overwrites each matching file with the contents of empty.svg read by cat.
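If the .svg files may also sit in subdirectories, a find-based variant (a minimal sketch; it assumes empty.svg is in the directory you run it from) handles the recursion for you:
# Overwrite every .svg under svg/ with a copy of empty.svg
find svg -type f -name '*.svg' -exec cp -- empty.svg {} \;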
I used the following:
gzip -9 -c -r <some_directory> > directory.gz
How do I decompress this directory?
I have tried
gunzip directory.gz
but I am just left with a single file and not a directory structure.
As others have already mentioned, gzip is a file compression tool and not an archival tool. It cannot work with directories. When you run it with -r, it will find all files in a directory hierarchy and compress them, i.e. replacing path/to/file with path/to/file.gz. When you pass -c the gzip output is written to stdout instead of creating files. You have effectively created one big file which contains several gzip-compressed files.
Now, you could look for the gzip file header/magic number, which is the byte sequence 1f 8b, and then reconstruct your files manually.
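For instance, with GNU grep you can list candidate offsets of gzip members inside the blob (only a sketch: the two-byte pattern can also occur by chance inside compressed data, so expect false positives):
# -a: treat binary as text, -b: print byte offsets, -o: print only the matches
grep -abo $'\x1f\x8b' directory.gz | cut -d: -f1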
The sensible thing to do now is to create backups (if you haven't already). Backups always help (especially with problems such as yours). Create a backup of your directory.gz file now. Then read on.
Fortunately, there's an easier way than manually reconstructing all files: using binwalk, a forensics utility which can be used to extract files from within other files. I tried it with a test file, which was created the same way as yours. Running binwalk -e file.gz will create a folder with all extracted files. It even manages to reconstruct the original file names. The hierarchy of the directories is probably lost. But at least you have your file contents and their names back. Good luck!
Remember: backups are essential.
(For completeness' sake: What you probably intended to run: tar czf directory.tar.gz directory and then tar xf directory.tar.gz)
gzip will compress one or more files, but it is not meant to function as an archive utility. The posted command line yields N compressed file images concatenated to stdout, redirected into the named output file; unfortunately the directory structure and file metadata are not recorded along the way. A pair of commands like this is what you want:
(create)
tar -czvf dir.tar.gz <some-dir>
(extract)
tar -xzvf dir.tar.gz
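To sanity-check the result without extracting anything, you can also list the archive's contents:
(verify)
tar -tzvf dir.tar.gz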
I'm trying to zip each of the .py files within a directory individually. But after zipping, the archives I get are named .py.zip instead of just .zip.
Here's the one-liner that I'm trying to execute:
cd scripts/python/
for i in *; do zip $i.zip $i; done
This is what you are looking for:
for i in *.py; do
zip "${i%.*}".zip "$i";
done
Explanation
${i%.*}: This uses Bash's built-in parameter expansion. The % operator removes the shortest match of the pattern that follows it (here .*, a dot followed by anything) from the end of the variable's value, leaving the filename without its extension. See https://www.gnu.org/software/bash/manual/bash.html#Shell-Parameter-Expansion for more information.
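A quick illustration of what the expansion produces, with a hypothetical filename:
i="somescript.py"
echo "${i%.*}"       # prints: somescript
echo "${i%.*}.zip"   # prints: somescript.zip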
I have a lot of zip files that I need to repack/recompress in order to work around a bug in MediaWiki 1.18.
I can do it with
#!/bin/bash
for f in *.zip; do
    cd tmp || exit 1
    rm -rf -- *
    unzip ../"$f"
    zip -r ../"$f" *
    cd ..
done
but is there a way to do this, e.g. with pipes (the way gzip -d -c old.gz | gzip >new.gz works for gzip files) or perhaps with a zip option?
There is a utility called AdvanceCOMP that does exactly what you're looking for. It recompresses ZIP and GZ files (and some others) without intermediary extraction to disk. (I do believe that the mechanism used is to decompress the data and recompress it, but that does not require writing files to disk or regenerating metadata.)
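A sketch of the usage (assuming the advzip binary from the AdvanceCOMP package is installed; check your version's man page for the exact flags):
# -z / --recompress re-creates each archive in place,
# -4 / --shrink-insane selects the strongest compression
advzip -z -4 *.zip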
You can't. If you send some bits to zip, it doesn't have a way to know where one file ends and the next one begins.
Actually, you could write your own program to do the job, but from your description that seems like overkill. Also, you are not saying exactly which bug you are fixing, so other workarounds cannot be suggested.
A bit late, but it may be helpful for those who come later:
zipsplit -n 2147483648 will repack a zip of up to 2 GiB without extraction. But as this command is meant for splitting zip files, there is no option to overwrite the original or to specify an output file, only an output directory.
I'm trying to use something along the lines of
unexpand -t 4 *.php
but am unsure how to write this command to do what I want.
Weirdly,
unexpand -t 4 file.php > file.php
gives me an empty file. (i.e. overwriting file.php with nothing)
I can specify multiple files okay, but don't know how to then overwrite each file.
I could use my IDE, but there are ~67000 instances of tabs to be replaced over 200 files, and this will take a while.
I expect that the answers to my question(s) will be standard unix fare, but I'm still learning...
You can very seldom use output redirection to replace the input: the shell truncates the output file before the command ever reads it, which is why you ended up with an empty file. In-place replacement only works with commands that support it internally (since they then do the basic steps themselves). From the shell level, it's far better to work in two steps:
1. Do the operation on foo, creating foo.tmp
2. Move (rename) foo.tmp to foo, overwriting the original
This will be fast. It requires a bit more disk space, but if you do both steps before continuing to the next file, you only ever need as much extra space as the largest single file takes, so this should not be a problem.
Sketch script:
for a in *.php
do
    unexpand -t 4 "$a" > "$a-notab"
    mv "$a-notab" "$a"
done
You could do better (error-checking, and so on), but that is the basic outline.
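For instance, a slightly hardened variant of the same loop that stops at the first failure instead of silently moving on (same logic, just with an error check added):
for a in *.php
do
    # keep the original and abort if unexpand fails
    unexpand -t 4 "$a" > "$a-notab" || exit 1
    mv "$a-notab" "$a"
done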
Here's the command I used:
find . -type f -iname "*.js" -print0 | while IFS= read -r -d '' p
do
    unexpand -t 4 "$p" > "$p-tab"
    mv "$p-tab" "$p"
done
This version changes all files within the directory hierarchy rooted at the current working directory.
In my case, I only wanted to make this change to .js files; you can omit the iname clause from find if you wish, or use different args to cast your net differently.
This version quotes the filename expansions throughout and has find separate the paths with NUL bytes, so directory and file names containing spaces (or other odd characters) are handled safely.
To get it all on one line, add a semicolon before do and after each of the two commands in the loop body.
This is potentially dangerous, so make a backup or use git before running the command. If you're using git, you can verify that only whitespace was changed with git diff -w.