I want to convert files in a specific order. For conversion I am using this command:
convert *.jpg output.pdf
The order of the image files in the output PDF should match the order produced by:
ls -v
How can I combine these 2 commands?
Probably you mean this:
convert $(ls -v *.jpg) output.pdf
Using $() you can place the output of one command as part of an outer command.
PICS=`ls -v *.jpg`
convert $PICS output.pdf
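Both forms rely on the shell splitting the ls output on whitespace, so they break if a filename contains spaces. A space-safe sketch, assuming bash 4+ (for mapfile) and GNU sort (whose -V gives the same natural ordering as ls -v):
# read the naturally sorted names into an array; safe for names with spaces
mapfile -t pics < <(printf '%s\n' *.jpg | sort -V)
convert "${pics[@]}" output.pdf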
I wanted to grep the word "force" but most of the output listed is from the command -force.
When I did grep -v "-force" filename, it said grep: orce: No such file or directory, most probably because grep parsed -force as the -f option with orce as its argument.
I just want to find a force signal from files using grep. How?
use grep -v -- "-force" - the double - signals that there are no more options being expected.
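A quick illustration, using the same filename as above:
grep -- "-force" filename      # finds lines containing "-force"
grep -v -- "-force" filename   # filters those lines out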
If you want to grep a specific word from a file, you can also pipe the file through cat:
# cat filename.txt | grep force
this line maybe simpler:
grep '[^-]force' tmp
It says: match "force", but only if it is not prefixed with -, which the bracket expression [^-] excludes.
Use [-] to remove the special significance. Check this out:
> cat rand_file.txt
1. list items of random text
2. -force
3. look similar as the first batch
4. force
5. some random text
> grep -v "-force" rand_file.txt
grep: orce: No such file or directory
> grep -v "[-]force" rand_file.txt | grep force
4. force
>
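One caveat with that regex: [^-] still requires some character before "force", so a line that begins with the word is missed. An extended-regex variant that also matches at the start of a line:
grep -E '(^|[^-])force' rand_file.txt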
I'm trying to find a way to list all jpg images in all subdirectories and in csv format without the dir name present.
ls -R -1 -m . | grep '.jpg'
The ls command outputs CSV fine, but the grep breaks the CSV format, making each file appear on a new line instead of comma-separated.
I know I can use 'find' to list images, but it seems to output the files in a different order than 'ls', and I don't see an output-to-CSV option for 'find'.
I need the images in each subdirectory on one comma-separated line.
I believe this does what you want. Each line of output is a comma-separated list of the jpgs in a single directory.
ls -d */ | xargs -i{} sh -c 'cd {};ls -m *jpg'
If you want to know which line belongs to which directory, you can run it in two steps like this:
ls -d */ > dirs.txt
cat dirs.txt | xargs -i{} sh -c 'cd {};ls -m *jpg'
and then each line in dirs.txt corresponds to the same line of output.
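A one-loop variant along the same lines that labels each CSV line with its directory (a sketch; it assumes every subdirectory contains at least one .jpg):
# prints "dir/: a.jpg, b.jpg" for each subdirectory
for d in */; do printf '%s: ' "$d"; (cd "$d" && ls -m *.jpg); done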
Without trivially saving it to a file first, can I take output arriving on stdin and USE THAT as the patterns to search against a listing in a file?
Try this example:
I have a long list.txt of filenames, and I want to know which of them I currently have in my directory. It's the backwards equivalent of:
ls | grep -F -f list.txt
My attempt goes as follows:
grep -F -f $( ls -1 ) list.txt
But that doesn't work: -f consumes only the first name produced by the substitution as its pattern file, and the remaining names become extra files to search.
Is this even possible?
It looks like you want to pass - or /dev/stdin as the filename argument to -f in your original form, so the patterns are read from standard input.
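That is, with GNU grep (or any system providing /dev/stdin):
ls | grep -F -f - list.txt            # "-" means: read the patterns from stdin
ls | grep -F -f /dev/stdin list.txt   # equivalent spelling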
I am trying to recursively download several files using wget -m, and I intend to grep all of the downloaded files to find specific text. Currently, I can wait for wget to fully complete, and then run grep. However, the wget process is time consuming as there are many files and instead I would like to show progress by grep-ing each file as it downloads and printing to stdout, all before the next file downloads.
Example:
download file1
grep file1 >> output.txt
download file2
grep file2 >> output.txt
...
Thanks for any advice on how this could be achieved.
As c4f4t0r pointed out
wget -m -O - <websites> | grep --color 'pattern'
Using grep's --color option to highlight the matches can be helpful, especially when dealing with bulky output in the terminal.
EDIT:
Below is a command line you can use. It creates a file called file and saves wget's output messages to it; it then tails that message file, using awk to find any line containing "saved" and extract the filename, and runs grep with your pattern against that filename.
wget -m websites &> file & tail -f -n1 file|awk -F "\'|\`" '/saved/{system( ("grep --colour pattern ") $2)}'
Based on Xorg's solution I was able to achieve my desired effect with some minor adjustments:
wget -m -O file.txt http://google.com 2> /dev/null & sleep 1 && tail -f -n1 file.txt | grep pattern
This will print all lines that contain pattern to stdout, and wget itself will produce no output visible in the terminal. The sleep is included because otherwise file.txt would not yet exist when the tail command executes.
As a note, this command will miss any results that wget downloads within the first second.
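If your tail is GNU tail, -F (short for --follow=name --retry) keeps retrying until the file appears, which removes the need for the sleep and the missed first second; a sketch of that variant:
wget -m -O file.txt http://google.com 2> /dev/null &
tail -F -n +1 file.txt 2> /dev/null | grep pattern   # -n +1 starts from the first line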
I have a few files in a directory with names similar to
_system1.log
_system2.log
_system3.log
other.log
but they are not created in that order.
Is there a simple, non-hardcoded, way to cat the files starting with the underscore in date order?
Quick 'n' dirty:
cat `ls -t _system*.log`
Safer:
ls -1t _system*.log | xargs -d'\n' cat
Use ls:
ls -1t _system*.log | xargs cat
ls -1 _system*.log | xargs cat
You can concatenate the files, store the result in a single file, and restrict which files are included, all in one command. I find this very useful. The following command concatenates the files sorted by modification time (newest first; ls -t does not track creation time) that have the common string 'xyz' in their file name, and stores all of them in outputfile.
cat $(ls -t | grep xyz) > outputfile
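As with the earlier answers, the command substitution splits on whitespace; if names may contain spaces, a sketch using GNU xargs instead:
ls -t | grep xyz | xargs -d '\n' cat > outputfile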