How can I create a file containing 10 random numbers in Linux shell? - linux

I want to create a Linux command which creates a file containing 10 random numbers.
This is a solution that generates 10 random numbers:
RANDOM=$$
for i in `seq 10`
do
echo $RANDOM
done
It works for generating random numbers, but how can I combine this with the 'touch' command? Should I create a loop?

Using touch? Like this?
touch file.txt && RANDOM=$$
for i in `seq 10`
do
echo $RANDOM
done >> file.txt
Not sure why you need touch, though; this will also work:
for i in `seq 10`; do echo $RANDOM; done > file.txt

Use >> to append to a file; $1 is the first argument of your .sh script:
FILE=$1
RANDOM=$$
for i in `seq 10`
do
echo $RANDOM >> "$FILE"
echo "" >> "$FILE" # the first echo already ends each number with a newline; this adds a blank line between numbers (echo "\n" would print a literal \n in bash)
done
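For example, assuming the script above is saved as rand.sh (the name is hypothetical), running bash rand.sh numbers.txt appends ten random numbers to numbers.txt.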

You could use head(1) and od(1), or GNU gawk with random(4).
For example, perhaps
head -c 20 /dev/random | od -s > /tmp/tenrandomnumbers.txt
(20 bytes read as signed 16-bit integers by od -s yields exactly ten numbers, plus od's offset column.)
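For the gawk route, a minimal sketch (assuming GNU awk, whose rand() returns a value in [0,1); the scaling mimics $RANDOM's 0-32767 range):
gawk 'BEGIN { srand(); for (i = 1; i <= 10; i++) print int(32768 * rand()) }' > /tmp/tenrandomnumbers.txt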

Related

Shell - iterate over the content of a file but do something only for the first x lines

So guys,
I need your help identifying the fastest and most fault-tolerant solution to my problem.
I have a shell script which executes some functions, based on a txt file in which I have a list of files.
The list can contain from 1 file to X files.
What I would like to do is iterate over the content of the file and execute my functions for only 4 items from the file at a time.
Once the functions have been executed for these 4 files, move on to the next 4 .... and keep on doing so until all the files from the list have been "processed".
My code so far is as follows.
#!/bin/bash
number_of_files_in_folder=$(cat list.txt | wc -l)
max_number_of_files_to_process=4
Translated_files=/home/german_translated_files/
while IFS= read -r files
do
    while [[ $number_of_files_in_folder -gt 0 ]]; do
        i=1
        while [[ $i -le $max_number_of_files_to_process ]]; do
            my_first_function "$files" &    # I execute my translation function for each file, as it can only perform 1 file per execution
            find /home/german_translator/ -name '*.logs' -exec mv {} $Translated_files \;    # As there will be several files generated, I have them copied to another folder
            sed -i "/$files/d" list.txt    # We remove the processed file from within our list.txt file.
            my_second_function    # Without parameters as it will process all the files copied at step 2.
        done
        # here, I want to have all the files processed and don't stop after the first iteration
    done
done < list.txt
Unfortunately, as I am not very good at shell scripting, I do not know how to structure it so that it won't waste resources and, most importantly, so that it "processes" everything from that file.
Do you have any advice on how to achieve this?
only 4 items out of the file. Once the functions have been executed for these 4 files, go over to the next 4
Seems to be quite easy with xargs.
your_function() {
echo "Do something with $1 $2 $3 $4"
}
export -f your_function
xargs -d '\n' -n 4 bash -c 'your_function "$@"' _ < list.txt
-d '\n' - treat each input line as a single argument
-n 4 - take four arguments per invocation
bash ... - run this command with the 4 arguments
_ - the syntax is bash -c <script> $0 $1 $2 etc..., see man bash.
"$@" - forward the arguments
export -f your_function - export your function to the environment so the child bash can pick it up.
I execute my translation function for each file
So you execute your translation function for each file, not for each group of 4 files. If the "translation function" really is per-file, with no state shared between files, consider instead running 4 processes in parallel over the same code with xargs -P 4, as sketched below.
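A minimal sketch of that approach (assuming my_first_function takes a single file name, as in the question):
export -f my_first_function
xargs -d '\n' -n 1 -P 4 bash -c 'my_first_function "$1"' _ < list.txt
Here -n 1 hands each invocation one file name and -P 4 keeps up to four of them running at once.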
If you have GNU Parallel it looks something like this:
doit() {
my_first_function "$1"
my_first_function "$2"
my_first_function "$3"
my_first_function "$4"
my_second_function "$1" "$2" "$3" "$4"
}
export -f doit
cat list.txt | parallel -n4 doit
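Note that if list.txt is not a multiple of 4 lines long, the final batch leaves some of $2..$4 empty, so my_first_function will be called with empty arguments there unless doit guards against that, e.g. with [ -n "$2" ] && my_first_function "$2".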

For loop with multiple variables (statements)

I'm looking to automate dns-add by creating two for loop variables. I'm not sure how this is possible. I know my code below is wrong. I'm having difficulties understanding how to create two variables in a one-liner.
for i in `cat list.csv`;
for g in `cat list2.csv`; do
echo $i;
echo $g;
dns-add-record --zone=impl.wd2.wd --record=$i --type=CNAME --record-value=$g
done;
done
The only thing I thought might work was this, but I doubt it'll work. Does anyone have any hints?
for i in `cat list.csv` && \
for g in `cat list2.csv ; do
echo $i && $g;
dns-add-record --zone=impl.wd2.wd --record=$i --type=CNAME --record-value=$g
done;
done
A for loop is the wrong construct for iterating over any file (see Bash FAQ 001), let alone two files. Use a while loop with the read command instead.
while read -u 3 i; read -u 4 g; do
echo "$i"
echo "$g"
dns-add-record --zone=impl.wd2.wd --record="$i" --type=CNAME --record-value="$g"
done 3< list.csv 4< list2.csv
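The redirections done 3< list.csv 4< list2.csv open the two files on file descriptors 3 and 4, so each read -u pulls the next line from its own file; the loop stops when list2.csv runs out of lines, since the status of the last read controls the while.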
I think you are missing one do:
for i in `cat list.csv`; do
for g in `cat list2.csv`; do
echo $i;
echo $g;
dns-add-record --zone=impl.wd2.wd --record=$i --type=CNAME --record-value=$g
done; done

Redirecting output of bash for loop

I have a simple BASH command that looks like
for i in `seq 2`; do echo $i; done; > out.dat
When this runs, the output of seq 2 goes to the terminal and nothing is written to the data file (out.dat).
I am expecting standard out to be redirected to out.dat, as it is when simply running seq 2 > out.dat.
Remove your semicolon: in done; > out.dat the semicolon ends the loop command, so > out.dat is left as a separate, empty command that merely truncates the file.
for i in `seq 2`; do echo "$i"; done > out.dat
SUGGESTIONS
Also, as suggested by Fredrik Pihl, try not to use external binaries when they are not needed, or at least when it is practical not to:
for i in {1..2}; do echo "$i"; done > out.dat
for (( i = 1; i <= 2; ++i )); do echo "$i"; done > out.dat
for i in 1 2; do echo "$i"; done > out.dat
Also, be careful of unquoted words that may cause pathname expansion.
for a in $(echo '*'); do echo "$a"; done
would show your files instead of just a literal *.
$() is also recommended as a clearer syntax for command substitution in Bash and POSIX shells than backticks (`), and it supports nesting.
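For instance, nesting requires no escaping with $(), but does with backticks:
echo "$(basename "$(pwd)")"    # prints the name of the current directory
echo `basename \`pwd\``        # the same with backticks needs escaped inner backticks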
The cleaner solutions for reading command output into variables are
while read var; do
...
done < <(do something)
And
read ... < <(do something) ## Could be done in a loop or with readarray.
for a in "${array[@]}"; do
:
done
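A concrete sketch of the readarray variant (bash 4+), collecting the output of seq into an array first:
readarray -t nums < <(seq 2)    # one array element per output line
for a in "${nums[@]}"; do
    echo "$a"
done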
Using printf can also be an easier alternative with respect to the intended function:
printf '%s\n' {1..2} > out.dat
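printf reuses its format string for every remaining argument, so the brace expansion supplies all the values and no loop is needed at all; printf '%s\n' {1..1000} > out.dat scales the same way.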
Another possibility, for the sake of completeness: you can move the output inside the loop, using >> to append to the file (creating it if it does not exist).
for i in `seq 2`; do echo $i >> out.dat; done;
Which one is better depends on the use case. Writing the file in one go is certainly better than appending to it a thousand times. Also, if the loop contains multiple echo statements, all of which shall go to the file, doing done > out.dat is probably more readable and easier to maintain. The advantage of appending inside the loop, of course, is that it gives more flexibility.
Try:
(for i in `seq 2`; do echo $i; done;) > out.dat

how to run more than one command on one terminal?

I have to run a "for" loop on the Linux terminal itself. How can I do that?
e.g. for i in cat ~/log; do grep -l "UnoRuby" $i >> ~/logName; done
Just what you typed should be fine, except that cat needs to be a command substitution: for i in $(cat ~/log); do grep -l "UnoRuby" $i >> ~/logName; done
You should prefer < instead of cat, and a friendlier format for the question:
for i in $(< ~/log)
do
grep -l "UnoRuby" $i >> ~/logName
done
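If ~/log can contain file names with spaces, a word-splitting-safe variant reads it line by line instead:
while IFS= read -r i
do
    grep -l "UnoRuby" "$i" >> ~/logName
done < ~/log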

Linux: write something to multiple files

I have a file "atest.txt" that has some text.
I want to print this text to the files "asdasd.txt asgfaya.txt asdjfusfdgh.txt asyeiuyhavujh.txt".
These files do not exist on my server.
I'm running Debian. What can I do?
Use the tee(1) command, which duplicates its standard input to standard output and any files specified on the command line. E.g.
printf "Hello\nthis is a test\nthank you\n"
| tee test1.txt test2.txt $OTHER_FILES >/dev/null
Using your example:
cat atest.txt |
tee asdasd.txt asgfaya.txt asdjfusfdgh.txt asyeiuyhavujh.txt >/dev/null
From your bash prompt:
for f in test1.txt test2.txt test3.txt; do echo -e "hello\nworld" >> $f; done
If the text lives in atest.txt then do:
for f in test1.txt test2.txt test3.txt; do cat atest.txt >> $f; done
Isn't it simply:
cp atest.txt asdasd.txt
cp atest.txt asgfaya.txt
cp atest.txt asdjfusfdgh.txt
cp atest.txt asyeiuyhavujh.txt
?
In bash you can write
#!/bin/bash
$TEXT="hello\nthis is a test\nthank you"
for i in `seq 1 $1`; do echo -e $TEXT >text$i.txt; done
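Assuming the script above is saved as maketexts.sh (the name is hypothetical), running bash maketexts.sh 4 creates text1.txt through text4.txt, each containing the three lines of $TEXT.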
EDIT (in response to the question change)
If you can't programmatically determine the names of the target files, you can use this script:
#!/bin/bash
ORIGIN=$1;
shift
for i in `seq $#`; do cp "$ORIGIN" "$1"; shift; done
you can use it this way:
script_name origin_file dest_file1 second_dest_file 'third file' ...
If you are wondering why there are double quotes in the cp command, it is to cope with file names containing spaces.
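For example, with an unquoted destination, a file name like my file.txt would be split into two words:
cp "$ORIGIN" my file.txt      # wrong: cp sees 'my' and 'file.txt' as two separate arguments
cp "$ORIGIN" "my file.txt"    # right: one destination argument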
If anyone would like to write the same thing to all files in a directory:
printf 'your_text' | tee *
