Imgopt can't optimize file with whitespace in the name - linux

Hi,
I'm trying to optimize the uploaded pictures on my webserver with imgopt. The problem is that when it finds a file with whitespace in the name, it throws an error like this:
$ imgopt file - name.jpg
stat: cannot stat 'file': No such file or directory
stat: cannot stat 'name.jpg': No such file or directory
Can anyone help me please?
Thanks, Dave.

You can still use 'imgopt' but you'll have to rename your files with a command like this:
find your_folder -depth -name "* *" -execdir rename 's/ /_/g' "{}" \;
See the source for more details about renaming directories, etc...
Second answer: you can also pass file names with spaces with a command like this (source):
for fname in "$@"; do
process-one-file-at-a-time "$fname"
done
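If you wrap that loop around imgopt itself, a minimal sketch (the script name optimize-all.sh is just an illustration) looks like this:
#!/bin/sh
# Hypothetical wrapper, optimize-all.sh: each argument is handed to imgopt
# as a single quoted word, so spaces in file names survive.
for fname in "$@"; do
    imgopt "$fname"
done
You would then run it as ./optimize-all.sh "file - name.jpg" or ./optimize-all.sh *.jpg; the shell's globbing keeps each name intact as one argument.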

You're actually passing three separate arguments to the imgopt program:
file
-
name.jpg
The program probably treats each one of them as a separate file name (with - being stdin, following the standard Unix convention) and tries to open them. This, of course, fails. What you want are quotes:
imgopt "file - name.jpg"
This way, only one argument is given to the program and it contains a valid filename, with all the whitespace-y goodness.
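If the goal is to optimize every uploaded picture in one pass regardless of whitespace, a hedged alternative (the /var/www/uploads path and the .jpg pattern are only examples) is to let find hand each name to imgopt as a single argument:
find /var/www/uploads -type f -name '*.jpg' -exec imgopt {} \;
Because -exec passes each file name directly, without going through the shell's word splitting, names like file - name.jpg arrive intact.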

Related

How to replace an unknown string in multiple files under Linux?

I want to change multiple different strings across all files in a folder to one new string.
When the strings in the text files (all within the same directory) look like this:
file1.json: "url/1.png"
file2.json: "url/2.png"
file3.json: "url/3.png"
etc.
I would need to point them all to a single PNG, i.e., "url/static.png", so all three files have the same URL inside pointing to the same PNG.
How can I do that?
You can use the find and sed commands for this. Make sure you are in the folder containing the files you want to change.
find . -name '*.*' -print|xargs sed -i "s/\"url\/1.png\"/\"url\/static.png\"/g"
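Note that a plain find | xargs pipeline splits on whitespace, so file names with spaces would be mangled, and the pattern above only rewrites 1.png. A sketch that covers every numbered PNG and survives awkward names (assuming GNU find and sed, for -print0, -0 and -i) is:
find . -name '*.json' -print0 | xargs -0 sed -i 's|"url/[0-9][0-9]*\.png"|"url/static.png"|g'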
Suggesting bash script:
#!/bin/bash
# for each file with extension .json in current directory
for currFile in *.json; do
    # extract the file's ordinal from the current filename
    filesOrdinal=$(echo "$currFile" | grep -o "[[:digit:]]\+")
    # use the ordinal to build the old URL and replace it in the current file
    sed -i -r 's|url/'"$filesOrdinal"'\.png|url/static.png|' "$currFile"
done
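As a worked example, when currFile is file2.json, filesOrdinal becomes 2 and the loop effectively runs:
sed -i -r 's|url/2\.png|url/static.png|' file2.json
which rewrites "url/2.png" inside that file to "url/static.png".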

How to print only folder names where same name of file exists

I am trying to write a script that prints the folders in which a given file name exists.
For example, I have a file TEST123 in many subfolders under /opt. I have a list of file names, and I need to print only the folder names that contain each file name.
Desired output:
TEST123
/opt/test2
/opt/test3
I am not sure how to use the grep command:
I have a list, elements.txt, with the full paths of the files, and I cut just the filenames out into onlyfile.txt.
This is how elements.txt looks, with many different file names, some of them duplicated and some not:
/opt/test2/TEST123
/opt/test3/TEST123
/opt/test2/TEST577
/opt/test6/SUNNY
/opt/test8/SUNNY
This is onlyfile.txt:
TEST123
TEST577
SUNNY
and many more files
And I need to loop over the filenames, like this:
for item in `cat onlyfile.txt`
do
grep "$item" elements.txt
done
It is giving me the full paths, but I need only the folder names.
Any help would be appreciated!
To get the directory of every file named TEST123, use the find command:
find /opt -name TEST123 -printf "%h\n"
The output will be:
/opt/test2
/opt/test3
The %h specifier will output the directory of the file. See the find manpage for more information.
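If you need this for every name in onlyfile.txt rather than just TEST123, a rough sketch (printing each file name before its folders, to match the desired output above) is:
while IFS= read -r name; do
    echo "$name"
    find /opt -name "$name" -printf "%h\n"
done < onlyfile.txt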
In this case I think you need more than one loop.
for item in `cat onlyfile.txt`; do
for path in `cat elements.txt | grep "$item"`; do
dirname "$path"
done
done
The first loop iterates over the filenames.
The second loop searches for each filename in the paths and strips the file name off, leaving the directory.
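Since elements.txt already holds the full paths, a shorter sketch (assuming the paths contain no whitespace and no name is a substring of another) is to match the whole list in one pass and strip the file names afterwards:
grep -F -f onlyfile.txt elements.txt | xargs -n1 dirname
Here grep -F -f treats each line of onlyfile.txt as a fixed string to look for in elements.txt, and dirname drops the trailing file name from every matching path.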

Shell recognizes files in ~ but not in ~/Documents

I'm taking a Unix class, and here's a part of my assignment:
For each file and subdirectory in the user’s ~/Documents directory, determine if the item is a file or directory, and display a message to that effect, using the file name in the statement.
So, what I have written is this:
docs=`ls ~/Documents`
for file in $docs ; do
if [ -f $file ] ; then
echo $file "is a file."
elif [ -d $file ] ; then
echo $file "is a directory."
else
echo $file "is not a file or directory."
fi
done
My Documents directory includes these files and directories:
DocList.txt (file)
Letter (file)
mypasswdfile (file)
samples (directory)
things (directory)
touchfile (file)
So I figured that the output should be this:
DocList.txt is a file.
Letter is a file.
mypasswdfile is a file.
samples is a directory.
things is a directory.
touchfile is a file.
However, this is the output:
DocList.txt is not a file or directory.
Letter is not a file or directory
mypasswdfile is not a file or directory
samples is not a file or directory
things is not a file or directory
touchfile is not a file or directory
I feel like I should mention that if I set the $docs variable to `ls ~` it will successfully display the contents of my home directory and whether the items are files or directories. This does not work with other paths I have tried.
Your problem is that ls only outputs the file names without path.
So your $file gets the values
DocList.txt
Letter
mypasswdfile
samples
things
touchfile
from loop run to loop run.
If your current directory is NOT ~/Documents, testing these file names is wrong, as this would search in the current directory and not in the intended one.
A much better way to accomplish your task is
for file in ~/Documents/* ; do
...
done
which will set $file to each of the full path names needed to find your file.
After doing so, it should work, but it is still error prone: as soon as your path or one of your file names contains a space or other blank character, the script will break.
Putting " around variables which can potentially contain something with a space etc. is quite essential. There is almost no reason ever to use a variable without its surrounding ".
What is the difference here?
With [ -f $file ], and file='something with spaces', [ is called with the arguments -f, something, with, spaces and ]. This surely leads to wrong behaviour.
OTOH, with [ -f "$file" ], and file='something with spaces', [ is called with the arguments -f, something with spaces and ].
So quoting is very essential in shell programming.
Of course, the same holds for [ -d "$file" ].
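Putting the glob and the quoting together, a sketch of the whole assignment might look like this (the message wording simply mirrors the expected output above):
#!/bin/bash
for file in ~/Documents/*; do
    name=$(basename "$file")    # report only the name, as in the expected output
    if [ -f "$file" ]; then
        echo "$name is a file."
    elif [ -d "$file" ]; then
        echo "$name is a directory."
    else
        echo "$name is not a file or directory."
    fi
done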
The problem is your ls command - you're treating the output of ls as absolute e.g. /home/alex/Documents/DocList.txt, but when you do ls ~/Documents it prints out DocList.txt (a relative file path / name).
To get the expected absolute behaviour you can use the find command instead:
docs=`find ~/Documents`
As mentioned in the comments and in another answer, to also be able to handle whitespace in filenames you need to do something like:
docs=( ~/Documents/* )
for f in "${docs[@]}"; do
...

How to make this (l)unix script dynamically accept directory name in for-loop?

I am teaching myself more (l)unix skills and wanted to see if I could begin to write a program that will eventually read all .gz files and expand them. However, I want it to be super dynamic.
#!/bin/bash
dir=~/derp/herp/path/goes/here
for file in $(find $dir -name '*gz')
do
echo $file
done
So when I execute this file, I simply run
bash derp.sh.
I don't like this. I feel the script is too brittle.
How can I rework my for loop so that I can say
bash derp.sh ~/derp/herp/path/goes/here (1)
I tried re-coding it as follows:
for file in $*
However, I don't want to have to type in bash
derp.sh ~/derp/herp/path/goes/here/*.gz.
How could I rewrite this so I could simply type what is in (1)? I feel I must be missing something simple?
Note
I tried
for file in $*/*.gz and that obviously did not work. I appreciate your assistance; my sources have been a Wrox Unix text, Carpentry v5, and man pages. Unfortunately, I haven't found anything that will do what I want.
Thanks,
GeekyOmega
for dir in "$#"
do
for file in "$dir"/*.gz
do
echo "$file"
done
done
Notes:
In the outer loop, dir is assigned successively to each argument given on the command line. The special form "$#" is used so that the directory names that contain spaces will be processed correctly.
The inner loop runs over each .gz file in the given directory. By placing $dir in double-quotes, the loop will work correctly even if the directory name contains spaces. This form will also work correctly if the gz file names have spaces.
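With that script saved as derp.sh, the invocation in (1) works as-is, and several directories can be passed at once, even ones containing spaces (the extra names below are only examples):
bash derp.sh ~/derp/herp/path/goes/here "My Documents" Music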
#!/bin/bash
for file in $(find "$@" -name '*.gz')
do
echo $file
done
You'll probably prefer "$@" instead of $*; if the directory names you pass have spaces in them, like a directory named My Documents and a directory named Music, $* would effectively expand into:
find My Documents Music -name '*.gz'
where "$#" would expand into:
find "My Documents" "Music" -name '*.gz'
Requisite note: Using for file in $(find ...) is generally regarded as a bad practice, because it does tend to break if you have spaces or newlines in your directory structure. Using nested for loops (as in John's answer) is often a better idea, or using find -print0 and read as in this answer.
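For reference, a rough sketch of that find -print0 / read combination (bash-specific, and -print0 is a GNU/BSD find feature rather than POSIX):
#!/bin/bash
find "$@" -name '*.gz' -print0 |
while IFS= read -r -d '' file; do
    echo "$file"
done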

How to open all files in a directory in Bourne shell script?

How can I use the relative path or absolute path as a single command line argument in a shell script?
For example, suppose my shell script is on my Desktop and I want to loop through all the text files in a folder that is somewhere in the file system.
I tried sh myshscript.sh /home/user/Desktop, but this doesn't seem to work. And how would I cope with directory names and file names that contain whitespace?
myshscript.sh contains:
for i in `ls`
do
cat $i
done
Superficially, you might write:
cd "${1:-.}" || exit 1
for file in *
do
cat "$file"
done
except you don't really need the for loop in this case:
cd "${1:-.}" || exit 1
cat *
would do the job. And you could avoid the cd operation with:
cat "${1:-.}"/*
which lists (cats) all the files in the given directory, even if the directory or the file names contains spaces, newlines or other difficult to manage characters. You can use any appropriate glob pattern in place of * — if you want files ending .txt, then use *.txt as the pattern, for example.
This breaks down if you might have so many files that the argument list is too long. In that case, you probably need to use find:
find "${1:-.}" -type f -maxdepth 1 -exec cat {} +
(Note that -maxdepth is a GNU find extension.)
Avoid using ls to generate lists of file names, especially if the script has to be robust in the face of spaces, newlines etc in the names.
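Either way, the directory is passed as a single quoted argument, so whitespace in the path is not a problem (the path below is only an example):
sh myshscript.sh "/home/user/My Documents"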
Use a glob instead of ls, and quote the loop variable:
for i in "$1"/*.txt
do
cat "$i"
done
PS: ShellCheck automatically points this out.
