Need some help here. I need to create an executable file for every user that exists on the system (Linux), and the format for the file is the following:
fis_nr_username
where nr stands for the 1st file, 2nd file, etc.
EXAMPLE OF SITUATION
Users on machine :
stud01
stud02
stud03
I need an executable file for each of them, named like this:
file_1_stud01
file_2_stud02
file_3_stud03
You could loop through the user list, then loop through the file numbers (here 0 to 10). Use printf with %03d to pad with zeros.
#!/usr/bin/env bash
username="stud01 stud02 stud03"
for name in $username; do
    for ((i=0; i<11; i++)); do
        printf "file_%03d_%s\n" "$i" "$name"
    done
done
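The loop above only prints the names; the question asks for the files to actually exist and be executable. A minimal sketch of that extra step, which also pulls real accounts from the system instead of hard-coding them (the UID range 1000-1999 for regular users is an assumption; adjust it for your machine):
#!/usr/bin/env bash
# Sketch: create one executable file per regular user account.
i=1
while IFS=: read -r name _ uid _; do
    if (( uid >= 1000 && uid < 2000 )); then     # assumed range for human users
        f=$(printf "fis_%d_%s" "$i" "$name")     # fis_1_stud01, fis_2_stud02, ...
        touch "$f"                               # create the file
        chmod +x "$f"                            # make it executable
        ((i++))
    fi
done < <(getent passwd)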
You could also make the original loop into a function and put it in your .bashrc:
newfiles() {
    for name in "$@"; do
        for ((i=0; i<3; i++)); do
            printf "fis_%03d_%s\n" "$i" "$name"
        done
    done
}
Call the function from the terminal with: newfiles firstuser seconduser. Output:
fis_000_firstuser
fis_001_firstuser
fis_002_firstuser
fis_000_seconduser
fis_001_seconduser
fis_002_seconduser
I'm trying to develop a Perl script that looks through all of the users' directories for a particular file name, without the user having to specify the entire pathname to the file.
For example, let's say the file of interest is data.list, located in /home/path/directory/project/userabc/data.list. Normally the user would have to specify the full pathname in order to access it, like so:
cd /home/path/directory/project/userabc/data.list
Instead, I want the user to just enter script.pl ABC on the command line; the Perl script should then automatically locate data.list and retrieve the information in it, which in my case means counting the number of lines and uploading the count using curl. The rest is done; I just need the part where it automatically locates the file.
Even though this is very feasible in Perl, it looks more appropriate as a Bash task:
#!/bin/bash
filename=$(find ~ -name "$1")
wc -l "$filename"
curl .......
The main issue would of course be if you have multiple files with the same name, say /home/user/dir1/data.list and /home/user/dir2/data.list. You will need a way to handle that, and how you handle it depends on your specific situation.
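One sketch of a way to handle that, assuming you would rather refuse to guess and list the candidates when the name is ambiguous (mapfile -d '' needs bash 4.4+):
#!/bin/bash
# Collect every match under $HOME; -print0 plus mapfile -t -d '' keeps
# paths containing spaces intact.
mapfile -t -d '' matches < <(find ~ -name "$1" -print0)
if (( ${#matches[@]} == 0 )); then
    echo "no file named $1 found" >&2
    exit 1
elif (( ${#matches[@]} > 1 )); then
    echo "ambiguous name $1, candidates:" >&2
    printf '%s\n' "${matches[@]}" >&2
    exit 1
fi
wc -l "${matches[0]}"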
In Perl that would be much more complicated:
#! /usr/bin/perl -w
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
    if 0; #$running_under_some_shell
use strict;
# Import the module File::Find, which will do all the real work
use File::Find ();
# Set the variable $File::Find::dont_use_nlink if you're using AFS,
# since AFS cheats.
# for the convenience of &wanted calls, including -eval statements:
# Here, we "import" specific variables from the File::Find module
# The purpose is to be able to just type '$name' instead of the
# complete '$File::Find::name'.
use vars qw/*name *dir *prune/;
*name = *File::Find::name;
*dir = *File::Find::dir;
*prune = *File::Find::prune;
# We declare the sub here; the content of the sub will be created later.
sub wanted;
# This is a simple way to get the first argument. There is no
# checking on validity.
our $filename=$ARGV[0];
# Traverse desired filesystem. /home is the top directory where we
# start our search. The sub wanted will be executed for every file
# we find.
File::Find::find({wanted => \&wanted}, '/home');
exit;
sub wanted {
    # Check if the file is our desired filename
    if ( /^$filename\z/ ) {
        # Open the file, read it and count its lines
        my $lines = 0;
        open(my $F, '<', $name) or die "Cannot open $name";
        while (<$F>) { $lines++; }
        print("$name: $lines\n");
        # Your curl command here
    }
}
You will need to look at the argument parsing, for which I simply used $ARGV[0], and I don't know what your curl command looks like.
A simpler (though not recommended) way would be to abuse Perl as a sort of shell:
#!/usr/bin/perl
#
my $fn=`find /home -name '$ARGV[0]'`;
chomp $fn;
my $wc=`wc -l '$fn'`;
print "$wc\n";
system ("your curl command");
The following code snippet demonstrates one of many ways to achieve the desired result.
The code takes one parameter, a word to look for inside files named data.list in all subdirectories, and prints the list of matching files to the terminal.
The code uses the subroutine lookup($dir,$filename,$search), which calls itself recursively whenever it comes across a subdirectory.
The search starts from the current working directory (the question did not specify a starting directory).
use strict;
use warnings;
use feature 'say';

my $search = shift || die "Specify what to look for";
my $fname  = 'data.list';
my $found  = lookup('.', $fname, $search);

if( $found && @$found ) {
    say for @$found;
} else {
    say 'Not found';
}

exit 0;

sub lookup {
    my $dir    = shift;
    my $fname  = shift;
    my $search = shift;
    my $files;
    my @items = glob("$dir/*");
    for my $item (@items) {
        if( -f $item && $item =~ /\b$fname\b/ ) {
            my $found;
            open my $fh, '<', $item or die $!;
            while( my $line = <$fh> ) {
                $found = 1 if $line =~ /\b$search\b/;
                if( $found ) {
                    push @{$files}, $item;
                    last;
                }
            }
            close $fh;
        }
        if( -d $item ) {
            my $ret = lookup($item, $fname, $search);
            push @{$files}, $_ for @$ret;
        }
    }
    return $files;
}
Run it as: script.pl search_word
Sample output:
./capacitor/data.list
./examples/data.list
./examples/test/data.list
Reference:
glob,
Perl file test operators
I have a parameters file with data like that given below:
#host1 credentials
Host1=192.168.1.1
password=host1Password
#host2 credentials
Host2=192.168.1.2
password=host2password
I want to parse this information in the text file using a shell script and assign the values to variables:
$host1 = 192.168.1.1
$password1 = host1password
$host2 = 192.168.1.2
$password2 = host2password
I am a newbie to shell scripting; please help me out with achieving this.
I would use a shell that implements arrays (bash or ksh), and do this:
hosts=()
passwords=()
# read the file, populate the arrays
while IFS="=" read -r key value; do
    case $key in
        password) passwords+=( "$value" ) ;;
        Host*)    hosts+=( "$value" ) ;;
    esac
done < params
# print the contents of the arrays
for ((i=0; i < ${#hosts[@]}; i++)); do
    printf "%d\t%s\t%s\n" "$i" "${hosts[i]}" "${passwords[i]}"
done
Output:
0 192.168.1.1 host1Password
1 192.168.1.2 host2password
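If you really need the numbered scalar variables from the question ($host1, $password1, ...) rather than arrays, you could derive them from the arrays afterwards. A sketch using printf -v, which assigns to a dynamically named variable:
for ((i=0; i < ${#hosts[@]}; i++)); do
    printf -v "host$((i+1))"     '%s' "${hosts[i]}"
    printf -v "password$((i+1))" '%s' "${passwords[i]}"
done
echo "$host1 $password1"    # -> 192.168.1.1 host1Password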
I'm writing my first program in Bash.
The program needs to rename the files in a directory.
The first argument is the base name and the second argument is a file extension.
If I call the function with:
rename Test jpg
then the resulting files should have names like:
Test001.jpg, Test002.jpg, Test003.jpg,...
What I tried:
function rename {
    index=0
    for i in $1"/"*".$2"; do
        newName=$(printf $1/"$1%04d."$2 ${index})
        mv $i $newName
        let index=index+1
    done
}
And when I call the function with:
bash rename.sh pwd jpg
nothing happens. Please help me :)
What I would do :
rn(){
    for i in "$1"*."$2"; do
        ((index++))
        newName=$(printf "$1%04d.$2" "$index")
        mv "$i" "$newName"
    done
}
cd WHERE/YOU/WANT
rn "$@"
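Assuming the function is saved in rename.sh and the pictures live in the current directory, the call would look like:
bash rename.sh Test jpg
Note that the glob "$1"*."$2" only matches files whose names already start with the prefix; if you want to renumber every .jpg regardless of its current name, change the pattern to *."$2".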
I have a bunch of files in a directory with no pattern in their names at all. All I know is that they are all JPG files. How do I rename them so that they have some sort of sequence in their names?
I know that in Windows all you do is select all the files and rename them all to the same name, and Windows automatically adds sequence numbers to compensate for the duplicate file names.
I want to be able to do that in Linux Fedora, but it seems you can only do that in the terminal. Please help, I am lost.
What is the command for doing this?
The best way to do this is to run a loop in the terminal that goes from picture to picture and renames each one with a number that grows by one on every iteration.
You can do this with:
n=1
for i in *.jpg; do
    p=$(printf "%04d.jpg" "${n}")
    mv "${i}" "${p}"
    let n=n+1
done
Just enter it into the terminal line by line.
If you want to put a custom name in front of the numbers, you can put it before the percent sign in the third line.
If you want to change the number of digits in the names' number, just replace the '4' in the third line (don't change the '0', though).
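For example, to get holiday0001.jpg, holiday0002.jpg, and so on (holiday is just a placeholder prefix), the third line would become:
p=$(printf "holiday%04d.jpg" "${n}")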
I will assume that:
There are no spaces or other weird control characters in the file names
All of the files in a given directory are jpeg files
With that in mind, to rename all of the files to 1.jpg, 2.jpg, and so on:
N=1
for a in ./* ; do
    mv "$a" "${N}.jpg"
    N=$(( N + 1 ))
done
If there are spaces in the file names, this pipeline should be able to rename them:
find . -type f | awk 'BEGIN{N=1}
    {print "mv \"" $0 "\" " N ".jpg"
    N++}' | sh
The point being, Linux/UNIX has a lot of tools that can automate a task like this, but they do have a bit of a learning curve.
Create a script containing:
#!/bin/sh
filePrefix="$1"
sequence=1
for file in $(ls -tr *.jpg) ; do
    renamedFile="$filePrefix$sequence.jpg"
    echo $renamedFile
    currentFile="$(echo $file)"
    echo "renaming \"$currentFile\" to $renamedFile"
    mv "$currentFile" "$renamedFile"
    sequence=$(($sequence+1))
done
exit 0
If you named the script, say, RenameSequentially, then you could issue the command:
./RenameSequentially Images-
This would rename all *.jpg files in the directory to Images-1.jpg, Images-2.jpg, etc., in order of oldest to newest... tested in the OS X command shell.
I wrote a Perl script a long time ago that does pretty much what you want:
#
# reseq.pl renames files to a new named sequence of filenames
#
# Usage: reseq.pl newname [-n seq] [-p pad] fileglob
#
use strict;
my $newname = $ARGV[0];
my $seqstr = "01";
my $seq = 1;
my $pad = 2;
shift @ARGV;
if ($ARGV[0] eq "-n") {
    $seqstr = $ARGV[1];
    $seq = int $seqstr;
    shift @ARGV;
    shift @ARGV;
}
if ($ARGV[0] eq "-p") {
    $pad = $ARGV[1];
    shift @ARGV;
    shift @ARGV;
}
my $filename;
my $suffix;
for (@ARGV) {
    $filename = sprintf("${newname}_%0${pad}d", $seq);
    if (($suffix) = m/.*\.(.*)/) {
        $filename = "$filename.$suffix";
    }
    print "$_ -> $filename\n";
    rename ($_, $filename);
    $seq++;
}
You specify a common prefix for the files, a beginning sequence number and a padding factor.
For example:
# reseq.pl abc -n 1 -p 2 *.jpg
Will rename all matching files to abc_01.jpg, abc_02.jpg, abc_03.jpg...
I am trying to merge two very different scripts together for consolidation and ease-of-use purposes. I have an idea of how I want these scripts to look and operate, but I could use some help getting started. Here is the flow and look of the script:
The input file would be a standard text file with this syntax:
#Vegetables
Broccoli|Green|14
Carrot|Orange|9
Tomato|Red|7
#Fruits
Apple|Red|15
Banana|Yellow|5
Grape|Purple|10
The script would take this file as input. It would ignore the commented portions, but use them to dictate the output. So, based on the fact that an entry is a Vegetable, it would perform a specific function with the values listed between the delimiters (|). Then it would go to the Fruits and do something different with those values. Perhaps I would add Vegetable/Fruit as one of the values, and depending on that value it would perform the corresponding function in the loop that reads the file. Thank you for your help in getting this started.
UPDATE:
So I am trying to implement the IFS setup and have thought of a more logical arrangement. The input file will have the "categories" as part of each record. So the setup will be like this:
Vegetable|Carrot|Yellow
Fruit|Apple|Red
Vegetable|Tomato|Red
From there, the script will read in the lines and perform the function. So, basically, this type of setup in shell:
while IFS='|' read -r category item color
do
    if [[ $category == "Vegetable" ]] ; then
        echo "The $item is $color"
    elif [[ $category == "Fruit" ]] ; then
        echo "The $item is $color"
    else
        echo "Bad input"
    fi
done < "$input_file"
Something along those lines...I am just having trouble putting it all together.
Use read to input the lines. Do a case statement on their prefix:
{
    while read DATA; do
        case "$DATA" in
        \#*) ... switch function ... ;;
        *) eval "$FUNCTION" ;;
        esac
    done
} <inputfile
Depending on your problem, you might want to experiment with setting $IFS before reading, and with reading multiple variables in one go.
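For instance, with the pipe-delimited layout from the update in the question, that could look like this sketch (the category names and the input file name are assumptions):
while IFS='|' read -r category item color; do
    case $category in
        Vegetable) echo "The $item is $color" ;;    # vegetable-specific handling
        Fruit)     echo "The $item is $color" ;;    # fruit-specific handling
        \#*|'')    ;;                               # skip comments and blank lines
        *)         echo "Bad input: $category" ;;
    esac
done <inputfile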
You can redefine the processing function each time you meet a # directive:
#! /bin/bash
while read line ; do
    if [[ $line == '#Vegetables' ]] ; then
        process () {
            echo Vegetables: "$@"
        }
    elif [[ $line == '#Fruits' ]] ; then
        process () {
            echo Fruits: "$@"
        }
    else
        process $line
    fi
done < "$1"
Note that the script does not skip empty lines.
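If blank lines could reach process, a one-line guard at the top of the loop body would skip them:
[[ -z $line ]] && continue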