How do I get a cygwin script to output to a file?

I need to get the results of this to output to a file, but have no idea what command to use - any ideas please?
input="/cygdrive/c/dev/test/need_file_size.csv"
trunkRoot=/cygdrive/c/dev/trunk/test-trunk
outputFile=/cygdrive/c/temp/findSizes.log
while read row; do
    class=$(echo $row | cut -f 2 -d ",")
    find $trunkRoot/ -name "${class}.java" | xargs ls -long
done < "$input"

input="/cygdrive/c/dev/test/need_file_size.csv"
trunkRoot=/cygdrive/c/dev/trunk/test-trunk
outputFile=/cygdrive/c/temp/findSizes.log
while read row; do
    class=$(echo $row | cut -f 2 -d ",")
    find $trunkRoot/ -name "${class}.java" | xargs ls -long >> filename
done < "$input"
Notice the append to filename inside the loop. Alternatively, leave the script as it was and redirect its output when you run it:
./myscript >> filename
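Since the question already defines outputFile, another option is to redirect the whole loop once instead of appending on every iteration. A minimal sketch along those lines (not tested against your data; find -exec is used here in place of xargs so paths with spaces are handled):
input="/cygdrive/c/dev/test/need_file_size.csv"
trunkRoot=/cygdrive/c/dev/trunk/test-trunk
outputFile=/cygdrive/c/temp/findSizes.log
# one redirection for the whole loop instead of an append per iteration
while read -r row; do
    class=$(echo "$row" | cut -f 2 -d ",")
    find "$trunkRoot/" -name "${class}.java" -exec ls -l {} +
done < "$input" > "$outputFile"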

Related

storing the output of ls command in a shell variable

I am using the below command to find the most recent file with "candump" in its name:
ls *.log | grep "candump" | tail -n 1
The output is "candump-2018-04-19_131908.log"
I want to store the output filename in a variable in my shell script. I am using the below commands:
logfile = `ls *.log | grep "candump" | tail -n 1`
and
logfile = $(ls *.log | grep "candump" | tail -n 1)
However, both times I am getting the same error, "logfile: command not found". Am I doing something wrong? Any help is appreciated.
You have to put the variable name and its value together, with no space before or after the =.
Try:
logfile=$(ls *.log | grep "candump" | tail -n 1)
This is working for me.
#!/bin/bash
my_command='ls | grep server.js | wc -l';
my_data=$(eval "$my_command");
echo "value in echo is:" $my_data;
if [ "$my_data" == "1" ]; then
echo "value is equal to 1";
else
echo "value is not equal to 1";
fi
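As a side note, the same filename can be picked up without parsing ls at all. A minimal sketch, assuming the logs sit in the current directory and that the newest candump file sorts last by name (which the timestamped names here do):
shopt -s nullglob
files=(candump*.log)                    # the glob already expands in sorted order
if ((${#files[@]})); then
    logfile=${files[${#files[@]}-1]}    # last element = newest by name
    echo "latest log: $logfile"
fi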

Looping a linux command with input and multiple pipes

This command works, but I want to run it on every document (input.txt) in every subdirectory.
tr -d '\n' < input.txt | awk '{gsub(/\. /,".\n");print}' | grep "\[" >> SingleOutput.txt
The code takes the input file and divides it into sentences, one per line. It then finds all the sentences that contain a "[" and appends them to a single file.
I tried several looping techniques with find and for loops, but couldn't get them to work for this case. I tried
for dir in ./*; do
(cd "$dir" && tr -d '\n' < $dir | awk '{gsub(/\. /,".\n");print}' | grep "\[" >> /home/dan/SingleOutput.txt);
done;
and also
find ./ -execdir tr -d '\n' < . | awk '{gsub(/\. /,".\n");print}' | grep "\[" >> /home/dan/SingleOutput.txt;
but they didn't execute, just giving me > prompts. Any ideas?
Try this:
cd $dir
find ./ | grep "input.txt$" | while read file; do tr -d '\n' < "$file" | awk '{gsub(/\. /,".\n");print}' | grep "\[" >> SingleOutput.txt; done
This will find all files called input.txt under $dir, then run the pipeline you say is already working on each of them, sending the output to $dir/SingleOutput.txt.
Why not just something like this?
tr -d '\n' < */input.txt | awk '{gsub(/\. /,".\n");print}' | grep "\[" >> SingleOutput.txt
Or are you interested in keeping the output for each input.txt separate?
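If any of the subdirectory names contain spaces, a slightly more defensive variant of the find-based answer is to feed the loop NUL-terminated paths. A sketch, reusing the same tr/awk/grep pipeline from the question:
find . -type f -name 'input.txt' -print0 |
while IFS= read -r -d '' file; do
    tr -d '\n' < "$file" | awk '{gsub(/\. /,".\n");print}' | grep '\[' >> /home/dan/SingleOutput.txt
done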

bash command/script to delete older files version

I have a directory with a lot of PDF files.
These files are generated by another script that renames each new version with a progressive number, for example:
newyork_v1.pdf
newyork_v2.pdf
newyork_v3.pdf
miami_v1.pdf
miami_v2.pdf
rome_v1.pdf
The version number is per file: some files are at version 1, some at version 2, and so on, as in the example.
Some files stay at version 1 for their whole life; others may grow to a 10th version.
After copying this directory to a temp directory, I'd like to delete the old versions of every file. In the example, only these must remain:
newyork_v3.pdf
miami_v2.pdf
rome_v1.pdf
I tried to sort and delete with the ls and sort commands, but I don't get the desired result. I tried:
ls | sort -k2 -t_ -n -r | tail -n +2 | xargs rm
With this command, only rome_v1.pdf remains.
A command or a script is equally fine; can anyone help me?
for file in $(ls *.pdf | awk -F'_' '{print $1}' | sort -u)
do
    count=$(ls ${file}* | wc -l)
    if [ ${count} -gt 1 ]; then
        ls -rv ${file}* | tail -$(($count-1)) | xargs rm
    fi
done
If you can use GNU ls, you can try the following:
for p in $(ls -v *.pdf | cut -d_ -f1 | sort | uniq); do
    ls -v $p* | head -n -1 | xargs -I{} rm {} 2>/dev/null
done
The -v flag of GNU ls sorts the files 'naturally', i.e. in your case:
miami_v1.pdf
miami_v2.pdf
newyork_v1.pdf
newyork_v2.pdf
newyork_v3.pdf
newyork_v10.pdf #Added to show ls -v in action
rome_v1.pdf
We then loop through each unique prefix and delete everything other than the last file that matches the prefix.
Result:
miami_v2.pdf
newyork_v10.pdf
rome_v1.pdf
Update:
Changed xargs to handle whitespace and special characters.
This Perl script can be used to filter out the old file names:
#!/usr/bin/perl
use warnings;
use strict;

my %files;
my @old_files;

while (<DATA>) {
    chomp;
    my ($name, $version, undef) = split /_v|\./, $_;
    if (!$files{$name}->{version}) {
        $files{$name}->{version} = $version;
        $files{$name}->{name} = $_;
        next;
    }
    if ($files{$name}->{version} < $version) {
        push @old_files, $files{$name}->{name};
        $files{$name}->{version} = $version;
        $files{$name}->{name} = $_;
    }
}

foreach my $file (@old_files) {
    print "$file\n";
}
__DATA__
newyork_v1.pdf
newyork_v2.pdf
newyork_v3.pdf
miami_v1.pdf
miami_v2.pdf
rome_v1.pdf
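For comparison, the same idea as a shell sketch, assuming GNU coreutils (sort -V, head -n -1, xargs -r) and filenames without whitespace, as in the example:
for prefix in $(ls *_v*.pdf | sed 's/_v[0-9]*\.pdf$//' | sort -u); do
    # list this prefix's versions in natural order and drop everything but the newest
    ls "${prefix}"_v*.pdf | sort -V | head -n -1 | xargs -r rm --
done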

File name printed twice when using wc

For printing the number of lines in all ".txt" files of the current folder, I am using the following script:
for f in *.txt;
do l="$(wc -l "$f")";
echo "$f" has "$l" lines;
done
But in output I am getting:
lol.txt has 2 lol.txt lines
Why is lol.txt printed twice (especially after the 2)? I guess some sort of stream flush is required, but I don't know how to achieve that in this case. So what changes should I make in the script to get the output as:
lol.txt has 2 lines
You can remove the filename with 'cut':
for f in *.txt;
do l="$(wc -l "$f" | cut -f1 -d' ')";
echo "$f" has "$l" lines;
done
The filename is printed twice, because wc -l "$f" also prints the filename after the number of lines. Try changing it to cat "$f" | wc -l.
wc prints the filename, so you could just write the script as:
ls *.txt | while read f; do wc -l "$f"; done
or, if you really want the verbose output, try
ls *.txt | while read f; do wc -l "$f" | awk '{print $2, "has", $1, "lines"}'; done
There is a trick here. Get wc to read stdin and it won't print a file name:
for f in *.txt; do
l=$(wc -l < "$f")
echo "$f" has "$l" lines
done
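To see the difference directly, using the lol.txt example from the question:
wc -l lol.txt      # prints: 2 lol.txt
wc -l < lol.txt    # prints: 2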

Combining greps to make script to count files in folder

I need some help combining elements of scripts to produce a readable output.
Basically, I need to get the username from the folder structure listed below and count the number of lines in that user's folder for files of type *.ano.
This is shown in the extract below; note that the position of the username in the path is not always the same counting from the front.
/home/user/Drive-backup/2010 Backup/2010 Account/Jan/usernameneedtogrep/user.dir/4.txt
/home/user/Drive-backup/2011 Backup/2010 Account/Jan/usernameneedtogrep/user.dir/3.ano
/home/user/Drive-backup/2010 Backup/2010 Account/Jan/usernameneedtogrep/user.dir/4.ano
awk -F/ '{print $(NF-2)}'
This will give me the username I need, but I also need to know how many non-blank lines there are in that user's folder for file type *.ano. I have the grep below that works, but I don't know how to put it all together so that it outputs a file that makes sense.
grep -cv '^[[:space:]]*$' *.ano | awk -F: '{ s+=$2 } END { print s }'
Example output needed
UserA 500
UserB 2
UserC 20
find /home -name '*.ano' | awk -F/ '{print $(NF-2)}' | sort | uniq -c
That ought to give you the number of "*.ano" files per user, assuming your awk is correct. I often use sort | uniq -c to count the number of instances of a string, in this case the username, as opposed to wc -l, which only counts input lines.
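For illustration, with the example paths above (two of which end in .ano) the pipeline would print something like:
      2 usernameneedtogrep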
Enjoy.
Have a look at wc (word count).
To count the number of *.ano files in a directory you can use
find "$dir" -iname '*.ano' | wc -l
If you want to do that for all directories in some directory, you can just use a for loop:
for dir in * ; do
echo "user $dir"
find "$dir" -iname '*.ano' | wc -l
done
Execute the bash script below from the folder
/home/user/Drive-backup/2010 Backup/2010 Account/Jan
and it will report the number of non-blank lines per user.
#!/bin/bash
# save where we start
base=$(pwd)
# get all top-level dirs, skip '.'
D=$(find . \( -type d ! -name . -prune \))
for d in $D; do
    cd "$base"
    cd "$d"
    # search for all files named *.ano and count their non-blank lines
    sum=$(find . -type f -name '*.ano' -exec grep -cv '^[[:space:]]*$' {} \; | awk '{sum+=$0}END{print sum}')
    echo $d $sum
done
This might be what you want (untested); it requires bash version 4 for associative arrays:
declare -A count
cd /home/user/Drive-backup
for userdir in */*/*/*; do
    username=${userdir##*/}
    lines=$(grep -cv '^[[:space:]]*$' "$userdir"/user.dir/*.ano | awk -F: '{sum += $NF} END {print sum}')
    (( count[$username] += lines ))
done
for user in "${!count[@]}"; do
    echo $user ${count[$user]}
done
Here's yet another way of doing it (on Mac OS X 10.6):
find -x "$PWD" -type f -iname "*.ano" -exec bash -c '
ar=( "${@%/*}" ) # perform a "dirname" command on every array item
printf "%s\000" "${ar[@]%/*}" # do a second "dirname" and add a null byte to every array item
' arg0 '{}' + | sort -uz |
while IFS="" read -r -d '' userDir; do
# to-do: customize output to get example output needed
echo "$userDir"
basename "$userDir"
find -x "${userDir}" -type f -iname "*.ano" -print0 |
xargs -0 -n 500 grep -hcv '^[[:space:]]*$' | awk '{ s+=$0 } END { print s }'
#xargs -0 -n 500 grep -cv '^[[:space:]]*$' | awk -F: '{ s+=$NF } END { print s }'
printf '%s\n' '----------'
done
