Generating random filenames in ffmpeg output

Hello guys, I am using ffmpeg and want each run to produce a different output filename. By some googling, I found a command to generate random strings in the shell. I used that in my own command like this:
ffmpeg -f concat -i gif-list.txt -c copy cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1.mp4
But this is not working. How can I achieve this?

You obviously want to evaluate the expression that produces the destination filename; in shell this is done with command substitution:
ffmpeg -f concat -i gif-list.txt -c copy \
$( cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1 ).mp4
Alternatively you can use backquotes (`) in place of $( and ), but IMHO the parentheses are easier to recognize.
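
If you want to reuse the generated name (for logging, say), a minimal sketch that stores it in a variable first; the variable name is my own choice, not from the question:
# generate the 32-character random name once, then reference it
name=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)
ffmpeg -f concat -i gif-list.txt -c copy "$name.mp4"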

Related

Using STDIN from pipe in sed command to replace value in a file

I've got a pipeline of commands that produces a variable output string such as 123456. I want to pipe that to a sed command, replacing a known string in a csv file that looks like this:
Fred,Wilma,Betty,Barney
However, the command below does not work and I haven't found any other references to using pipe values as the variable for a replace.
How does this code change if the values in the csv are in a random order and I always want to change the second value?
Example code:
find / -iname awk 2>/dev/null | sha256sum | cut -c1-10 > test.txt |
sed -i -e '/Wilma/ r test.txt' -e 's/Wilma//' input.csv
Contents of input.csv should become: Fred,0d522cd316,Betty,Barney
Okay, in
find / -iname awk 2>/dev/null | sha256sum | cut -c1-10 > test.txt | sed -i -e '/Wilma/ r test.txt' -e 's/Wilma//' input.csv
you have a bug. That "> test.txt" after cut redirects the output into the file, so nothing is left for the pipe that follows; sed then reads an empty stdin and things go weird. You don't want a pipe there, or you don't want to redirect to a file.
The way to take piped stdin and use it as a parameter in a command is through xargs.
find / -iname awk 2>/dev/null | sha256sum | cut -c1-10 | xargs --replace=INSERTED -- sed -i -e 's/Wilma/INSERTED/' input.csv
(...though that find|sha256sum is suspect too, in that the order of files is random(ish) and it matters for a reliable sum. You probably mean to "|sort" after find.)
(Some would sed -i -e "s/Wilma/$(find|sort|shasum|cut)" f, but I ain't among them. Animals.)
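Putting the sort fix together with the xargs approach above, a sketch (assuming GNU xargs for --replace):
# sort makes the checksum reproducible across runs
find / -iname awk 2>/dev/null | sort | sha256sum | cut -c1-10 |
    xargs --replace=INSERTED -- sed -i -e 's/Wilma/INSERTED/' input.csv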
For replacing a fixed string like "Wilma", try:
sed -i 's/Wilma/'"$(find / -iname awk 2>/dev/null |
sha256sum | cut -c1-10)"'/' input.csv
To replace the 2nd field no matter what's in it, try:
sed -i 's/[^,]*/'"$(find / -iname awk 2>/dev/null |
sha256sum | cut -c1-10)"'/2' input.csv
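As a quick check with the sample row from the question, using the hash 0d522cd316 from the expected output in place of the live find pipeline (note the numbered-occurrence /2 trick relies on GNU sed):
$ printf 'Fred,Wilma,Betty,Barney\n' > input.csv
$ sed -i 's/[^,]*/0d522cd316/2' input.csv
$ cat input.csv
Fred,0d522cd316,Betty,Barney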

using xargs multiple times in one command in linux command line

I am trying to write a one liner and cannot figure out what I am doing wrong. I am trying to use the following command:
cat testadds | cut -f 1 -d "," | xargs -ifoo /bin/bash -c "cat testadds | cut -f 2 -d \",\" | xargs --replace=addr /bin/bash -c \"cat testadds | cut -f 3 -d \",\" | xargs --replace=num /bin/bash -c \"cat testmdl | sed 's/DUMMY/foo/g' | sed 's/IP1/addr/g' | sed 's/IP2/num/g'\"\""
I get nothing for an output. My testadds file is set up as follows:
dev,IP1,IP2
When I do this with only 2 xargs it works fine, but when I add the 3rd and last xargs it produces no output. I am wondering if there is a limit to how many times you can use xargs when cat-ing a file.
I guess the expected input is from a file that has multiple devices. The input would be testdevice,1.1.1.1,2.2.2.2. The expected output would be:
-deviceSystemSoftware 'device:testdevice' '6500 7-SLOT OPTICAL SW:1021'
-deviceCname 'device:testdevice' 'PRIORITY SLA - identifier - testdevice'
-deviceDateAdded 'device:testdevice' '2017-02-24'
-deviceNotes 'device:testdevice' 'BTWB100269 - testdevice'
-hier 'nib:opr|0 group:Openreach group:TSO'
-hier 'nib:opr|0 group:Openreach group:TSO group:Ciena'
-hierUnique 'nib:opr|0 group:Openreach group:TSO group:Ciena device:testdevice'
-createEntity 'service:snmp-trap-handling{device:testdevice}CA|0[+opr-ciena-6500-alarms|+Nocol]'
-createEntity 'service:configuration-tracking{device:testdevice}opr|0[ciena6500]'
-createEntity 'interface:testdevice|COLAN-1-X'
-entityDescription 'interface:testdevice|COLAN-1-X' 'COLAN-1-X'
-createEntity 'address:testdevice|COLAN-1-X|1.1.1.1'
-devicePrimaryInterface 'device:testdevice' 'interface:testdevice|COLAN-1-X'
-deleteEntity 'address:testdevice|mgmt|1.1.1.1'
-deleteEntity 'service:ippingmon{interface:testdevice|mgmt}opr|0[]'
-deleteEntity 'interface:testdevice|mgmt'
-createEntity 'interface:testdevice|SHELFIP'
-entityDescription 'interface:testdevice|SHELFIP' 'SHELFIP'
-createEntity 'address:testdevice|SHELFIP|2.2.2.2'
Hopefully this helps. What I am trying to accomplish is to modify the files to display them as the expected output, so I can add them to my monitoring system. Sorry, this is the first time I have ever done this, so I apologize for any lack of information.
You just need a single while loop, which even on one line is shorter than your attempt (and far less expensive, since there are no external programs started; everything is done by built-in commands):
# while IFS=, read -r dev ip1 ip2; do printf "-createEntity 'address:%s|%s|%s'\n" "$dev" COLAN-1-X "$ip1" "$dev" SHELFIP "$ip2"; done < input.txt
while IFS=, read -r dev ip1 ip2; do
printf "-createEntity 'address:%s|%s|%s'\n" \
"$dev" COLAN-1-X "$ip1" \
"$dev" SHELFIP "$ip2"
done < input.txt
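For the sample line testdevice,1.1.1.1,2.2.2.2 this prints two of the expected lines (printf reuses its format string for the second triple of arguments):
-createEntity 'address:testdevice|COLAN-1-X|1.1.1.1'
-createEntity 'address:testdevice|SHELFIP|2.2.2.2'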

Why don't bc and xargs work together in one line?

I need help using xargs(1) and bc(1) on the same line. I can do it in multiple lines, but I really want to find a solution in one line.
Here is the problem: The following line will print the size of a file.txt
ls -l file.txt | cut -d" " -f5
And, the following line will print 1450 (which is obviously 1500 - 50)
echo '1500-50' | bc
Trying to add those two together, I do this:
ls -l file.txt | cut -d" " -f5 | xargs -0 -I {} echo '{}-50' | bc
The problem is, it's not working! :)
I know that xargs is probably not the right command to use, but it's the only command I can find that lets me decide where to put the argument I get from the pipe.
This is not the first time I'm having issues with this kind of problem. Any help would be much appreciated.
Thanks
If you do
ls -l file.txt | cut -d" " -f5 | xargs -0 -I {} echo '{}-50'
you will see this output:
23
-50
This means that bc does not see a complete expression.
Just use -n 1 instead of -0:
ls -l file.txt | cut -d" " -f5 | xargs -n 1 -I {} echo '{}-50'
and you get
23-50
which bc will process happily:
ls -l file.txt | cut -d" " -f5 | xargs -n 1 -I {} echo '{}-50' | bc
-27
So your basic problem is that -0 expects not newline-delimited lines but \0-terminated strings. Hence the newline(s) left over from the previous commands in the pipe garble the expression bc sees.
This might work for you:
ls -l file.txt | cut -d" " -f5 | sed 's/.*/&-50/' | bc
In fact you could remove the cut:
ls -l file.txt | sed -r 's/^(\S+\s+){4}(\S+).*/\2-50/' | bc
Or use awk:
ls -l file.txt | awk '{print $5-50}'
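That awk one-liner does the cut and the subtraction in one step; for the same 23-byte file.txt as in the output above, it prints:
$ ls -l file.txt | awk '{print $5-50}'
-27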
Parsing the output of the ls command is not the best idea (really).
You can use many other solutions, like:
find . -name file.txt -printf "%s\n"
or
stat -c %s file.txt
or
wc -c <file.txt
and you can use bash arithmetic to avoid unnecessary, slow process forks, like:
find . -type f -print0 | while IFS= read -r -d '' name
do
    size=$(wc -c < "$name")
    s50=$(( size - 50 ))
    echo "the file=$name= size:$size minus 50 is: $s50"
done
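Assuming the same 23-byte file.txt sits in the current directory, one line of that loop's output would look like:
the file=./file.txt= size:23 minus 50 is: -27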
Here is another solution, which only use one external command: stat:
file_size=$(stat -c "%s" file.txt) # Get the file size
let file_size=file_size-50 # Subtract 50
If you really want to combine them into one line:
let file_size=$(stat -c "%s" file.txt)-50
The stat command gets you the file size in bytes. The syntax above is for Linux (I tested against Ubuntu). On the Mac the syntax is a little different:
let file_size=$(stat -f "%z" file.txt)-50
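A minimal sketch that avoids let entirely, combining the wc -c variant from above with POSIX $(( )) arithmetic expansion; it should behave the same on Linux and macOS:
size=$(wc -c < file.txt)    # byte count of the file
echo "$(( size - 50 ))"     # subtract 50 without forking bc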

Bash script to return domains instead of URLs

I have this bash script that I wrote to analyse the HTML of any given web page. What it's actually supposed to do is return the domains on that page. Currently it's returning the URLs on that web page.
#!/bin/sh
echo "Enter a url eg www.bbc.com:"
read url
content=$(wget "$url" -q -O -)
echo "Enter file name to store URL output"
read file
echo $content > $file
echo "Enter file name to store filtered links:"
read links
found=$(cat $file | grep -o -E 'href="([^"#]+)"' | cut -d '"' -f2 | sort | uniq | awk '/http/' > $links)
output=$(egrep -o '^http://[^/]+/' $links | sort | uniq -c > out)
cat out
How can I get it to return the domains instead of the URLs? From my programming knowledge I know it's supposed to parse from the right, but I am a newbie at bash scripting. Can someone please help me? This is as far as I have gone.
I know there's a better way to do this in awk but you can do this with sed, by appending this after your awk '/http/':
| sed -e 's;https\?://;;' | sed -e 's;/.*$;;'
Then you want to move your sort and uniq to the end of that.
So that the whole line will look like:
found=$(cat $file | grep -o -E 'href="([^"#]+)"' | cut -d '"' -f2 | awk '/http/' | sed -e 's;https\?://;;' | sed -e 's;/.*$;;' | sort | uniq -c > out)
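As a quick check of those two sed expressions on a single URL (www.bbc.com is just the example host from the prompt above; the \? syntax assumes GNU sed):
$ echo 'https://www.bbc.com/news' | sed -e 's;https\?://;;' | sed -e 's;/.*$;;'
www.bbc.com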
You can get rid of this line:
output=$(egrep -o '^http://[^/]+/' $links | sort | uniq -c > out)
EDIT 2:
Please note that you might want to adapt the search patterns in the sed expressions to your needs. This solution considers only the http(s):// protocol and www. servers...
EDIT:
If you want count and domains:
lynx -dump -listonly http://zelleke.com | \
sed -n '4,$ s#^.*https\?://\([^/]*\).*$#\1#p' | \
sort | \
uniq -c | \
sed 's/www.//'
gives
2 wordpress.org
10 zelleke.com
Original Answer:
You might want to use lynx for extracting links from URL
lynx -dump -listonly http://zelleke.com
gives
# blank line at the top of the output
References
1. http://www.zelleke.com/feed/
2. http://www.zelleke.com/comments/feed/
3. http://www.zelleke.com/
4. http://www.zelleke.com/#content
5. http://www.zelleke.com/#secondary
6. http://www.zelleke.com/
7. http://www.zelleke.com/wp-login.php
8. http://www.zelleke.com/feed/
9. http://www.zelleke.com/comments/feed/
10. http://wordpress.org/
11. http://www.zelleke.com/
12. http://wordpress.org/
Based on this output you can achieve the desired result with:
lynx -dump -listonly http://zelleke.com | \
sed -n '4,$ s#^.*http://\([^/]*\).*$#\1#p' | \
sort -u | \
sed 's/www.//'
gives
wordpress.org
zelleke.com
You can remove the path from the url with sed:
sed 's#http://##; s#/.*##'
I also want to point out that these two lines are wrong:
found=$(cat $file | grep -o -E 'href="([^"#]+)"' | cut -d '"' -f2 | sort | uniq | awk '/http/' > $links)
output=$(egrep -o '^http://[^/]+/' $links | sort | uniq -c > out)
You must use either redirection ( > out ) or command substitution $(), but not the two at the same time, because the variables will be empty in this case.
This part
content=$(wget "$url" -q -O -)
echo $content > $file
would also be better written this way:
wget "$url" -q -O - > $file
You may be interested in this:
https://www.rfc-editor.org/rfc/rfc3986#appendix-B
It explains the way to parse a URI using a regex.
So you can parse a URI from the left this way, and extract the "authority" part, which contains the domain and subdomain names.
sed -r 's_^([^:/?#]+:)?(//([^/?#]*))?.*_\3_g';
grep -Eo '[^\.]+\.[^\.]+$' # piped after the first line, this gives what you need
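A quick check of those two steps against one of the URLs from the lynx listing above:
$ echo 'http://www.zelleke.com/feed/' | sed -r 's_^([^:/?#]+:)?(//([^/?#]*))?.*_\3_g' | grep -Eo '[^\.]+\.[^\.]+$'
zelleke.com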
This is interesting too:
http://www.scribd.com/doc/78502575/124/Extracting-the-Host-from-a-URL
Assuming that a url always begins this way
https?://(www\.)?
is really hazardous.

xargs with multiple arguments

I have a source input, input.txt
a.txt
b.txt
c.txt
I want to feed these input into a program as the following:
my-program --file=a.txt --file=b.txt --file=c.txt
So I tried to use xargs, but with no luck.
cat input.txt | xargs -i echo "my-program --file"{}
It gives
my-program --file=a.txt
my-program --file=b.txt
my-program --file=c.txt
But I want
my-program --file=a.txt --file=b.txt --file=c.txt
Any idea?
Don't listen to all of them. :) Just look at this example:
echo argument1 argument2 argument3 | xargs -l bash -c 'echo this is first:$0 second:$1 third:$2'
Output will be:
this is first:argument1 second:argument2 third:argument3
None of the solutions given so far deals correctly with file names containing spaces. Some even fail if the file names contain ' or ". If your input files are generated by users, you should be prepared for surprising file names.
GNU Parallel deals nicely with these file names and gives you (at least) 3 different solutions. If your program takes 3 and only 3 arguments then this will work:
(echo a1.txt; echo b1.txt; echo c1.txt;
echo a2.txt; echo b2.txt; echo c2.txt;) |
parallel -N 3 my-program --file={1} --file={2} --file={3}
Or:
(echo a1.txt; echo b1.txt; echo c1.txt;
echo a2.txt; echo b2.txt; echo c2.txt;) |
parallel -X -N 3 my-program --file={}
If, however, your program takes as many arguments as will fit on the command line:
(echo a1.txt; echo b1.txt; echo c1.txt;
echo d1.txt; echo e1.txt; echo f1.txt;) |
parallel -X my-program --file={}
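A dry run of the last variant, with echo standing in for my-program so you can see the command line GNU parallel builds:
(echo a.txt; echo b.txt; echo c.txt) | parallel -X echo my-program --file={}
which prints:
my-program --file=a.txt --file=b.txt --file=c.txt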
Watch the intro video to learn more: http://www.youtube.com/watch?v=OpaiGYxkSuQ
How about:
echo $'a.txt\nb.txt\nc.txt' | xargs -n 3 sh -c '
echo my-program --file="$1" --file="$2" --file="$3"
' argv0
(The trailing argv0 becomes $0 inside the sh -c script, so the three file names land cleanly in $1 through $3.)
It's simpler if you use two xargs invocations: the 1st to transform each line into --file=..., the 2nd to actually do the xargs thing:
$ cat input.txt | xargs -I# echo --file=# | xargs echo my-program
my-program --file=a.txt --file=b.txt --file=c.txt
You can use sed to prefix --file= to each line and then call xargs:
sed -e 's/^/--file=/' input.txt | xargs my-program
Here is a solution using sed for three arguments, but it is limited in that it applies the same transform to each argument:
cat input.txt | sed 's/^/--file=/g' | xargs -n3 my-program
Here's a method that will work for two args, but allows more flexibility:
cat input.txt | xargs -n 2 | xargs -I{} sh -c 'V="{}"; my-program -file=${V% *} -file=${V#* }'
I stumbled on a similar problem and found a solution which I think is nicer and cleaner than those presented so far.
The xargs syntax I ended up with is (for your example):
xargs -I X echo --file=X
with a full command line being:
my-program $(cat input.txt | xargs -I X echo --file=X)
which will work as if
my-program --file=a.txt --file=b.txt --file=c.txt
was done (providing input.txt contains data from your example).
Actually, in my case I needed to first find the files and also needed them sorted so my command line looks like this:
my-program $(find base/path -name "some*pattern" -print0 | sort -z | xargs -0 -I X echo --files=X)
A few details that might not be clear (they were not for me):
some*pattern must be quoted since otherwise the shell would expand it before passing it to find.
-print0, then -z, and finally -0 use null-separation to ensure proper handling of files with spaces or other weird names.
Note however that I haven't tested it deeply yet, though it seems to be working.
xargs doesn't work that way. Try:
myprogram $(sed -e 's/^/--file=/' input.txt)
It's because echo prints a newline. Try something like
echo my-program `xargs --arg-file input.txt -i echo -n " --file "{}`
I was looking for a solution for this exact problem and came to the conclusion of coding a script in the middle.
To hand each whole input line to the script as one argument, use newline as the xargs delimiter (-d '\n'):
example:
user#mybox:~$ echo "file1.txt file2.txt" | xargs -d '\n' -n1 ScriptInTheMiddle.sh
inside the ScriptInTheMiddle.sh:
#!/bin/bash
var1=$(echo "$1" | cut -d ' ' -f1)
var2=$(echo "$1" | cut -d ' ' -f2)
myprogram --file1="$var1" --file2="$var2"
For this solution to work you need a space between those arguments file1.txt and file2.txt, or whatever delimiter you choose. One more thing: inside the script, make sure you check -f1 and -f2, as they mean "take the first word" and "take the second word" depending on the position of the first delimiter found (delimiters could be ' ', ';', '.', whatever you wish, between single quotes).
Add as many parameters as you wish.
Problem solved using xargs, cut , and some bash scripting.
Cheers!
Actually, it's relatively easy:
... | sed 's/^/--prefix=/g' | xargs echo | xargs -I PARAMS your_cmd PARAMS
The sed 's/^/--prefix=/g' is optional, in case you need to prefix each param with some --prefix=.
The xargs echo turns the list of param lines (one param in each line) into a list of params on a single line, and the xargs -I PARAMS your_cmd PARAMS allows you to run a command, placing the params wherever you want.
So cat input.txt | sed 's/^/--file=/g' | xargs echo | xargs -I PARAMS my-program PARAMS does what you need (assuming all lines within input.txt are simple and qualify as a single param value each).
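A dry run with the sample input, substituting echo for my-program so the constructed command line is printed instead of executed:
$ printf 'a.txt\nb.txt\nc.txt\n' | sed 's/^/--file=/g' | xargs echo | xargs -I PARAMS echo my-program PARAMS
my-program --file=a.txt --file=b.txt --file=c.txt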
There is another nice way of doing this, if you do not know the number of files upfront:
my-program $(find . -name '*.txt' -printf "--file=%p ")
Nobody has mentioned echoing out from a loop yet, so I'll put that in for completeness' sake (it would be my second approach, the sed one being the first):
for line in $(< input.txt) ; do echo --file=$line ; done | xargs echo my-program
Old but this is a better answer:
cat input.txt | gsed "s/\(.*\)/\-\-file=\1/g" | tr '\n' ' ' | xargs my_program
# i like clean one liners
gsed is just GNU sed, used to ensure the syntax matches across versions (brew install gsed), or just sed if you're on GNU Linux already...
test it:
cat input.txt | gsed "s/\(.*\)/\-\-file=\1/g" | tr '\n' ' ' | xargs echo my_program
