The following commands work in my terminal but not in my shell script. I later found out that my terminal shell is /bin/tcsh. Can somebody tell me what changes I need to make for /bin/sh? Here are the commands I need to change:
cp source_dir/*/dir1/*.xml destination_dir/
Error in sh-> cp: cannot stat `source_dir/*/dir1/*.xml': No such file or directory
sed -i "s+${initial_name}+${final_name}+" $file_name
This one does not complain, but it does not work either.
I am adding an example for testing. The code is supposed to rename the xml files and also change the names inside the xml files. For example:
The file name crr.ya.na.aa.xml should be changed to aa.xml
The same name inside crr.ya.na.aa.xml should also be changed from crr.ya.na.aa to aa
Here is the code:
#!/bin/sh
# Create dir structure for testing
rm -rf audience
mkdir audience
mkdir audience/dir1 audience/dir2 audience/dir3
mkdir audience/dir1/ipxact audience/dir2/ipxact audience/dir3/ipxact
touch audience/dir1/ipxact/crr.ya.na.aa.xml
echo "<spirit:name>crr.ya.na.aa</spirit:name>" > audience/dir1/ipxact/crr.ya.na.aa.xml
touch audience/dir2/ipxact/crr.ya.na.bb.xml
echo "<spirit:name>crr.ya.na.bb</spirit:name>" > audience/dir2/ipxact/crr.ya.na.bb.xml
touch audience/dir3/ipxact/crr.ya.na.cc.xml
echo "<spirit:name>crr.ya.na.cc</spirit:name>" > audience/dir3/ipxact/crr.ya.na.cc.xml
# Create a dir for ipxact_drop files if it does not exist
mkdir -p ipxact_drop
rm -rf ipxact_drop/*
cp audience/*/ipxact/*.xml ipxact_drop/
ls ipxact_drop/ > ipxact_drop_files.log
awk '{ split($0,a,"."); print a[length(a)-1] "." a[length(a)] }' ipxact_drop_files.log > file_names.log
awk '{ split($0,a,"."); print "mv ipxact_drop/" $0 " ipxact_drop/" a[length(a)-1] "." a[length(a)] }' ipxact_drop_files.log > command.log
chmod +x command.log
./command.log
while read line
do
echo ipxact_drop/$line
initial_name=`grep -m 1 crr ipxact_drop/$line | sed -e 's/<spirit:name>//' | sed -e 's/<\/spirit:name>//' `
final_name="${line%.*}"
echo $initial_name
echo $final_name
sed -i "s+${initial_name}+${final_name}+" ipxact_drop/$line
done < file_names.log
echo " ***** SCRIPT RUN FINISHED *****"
Only the sed command at the end is not working
I was reading some other posts and understood that xml files can cause problems in scripts. Here is what has worked for me up to now.
To fix the cp error: replace #!/bin/sh -f with #!/bin/sh (the -f option disables filename globbing in sh).
To fix the sed error for the test input: replace sed -i ...... with sed -i.back .... (some sed implementations require a backup suffix after -i).
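A quick way to see both effects from a terminal; the cp paths are the ones from the question, while old_name, new_name and some_file.xml are just placeholders:
sh -f -c 'cp source_dir/*/dir1/*.xml destination_dir/'   # -f turns globbing off, so cp gets the literal '*' and fails with "cannot stat"
sh -c 'cp source_dir/*/dir1/*.xml destination_dir/'      # globbing on: the wildcards expand (assuming matching files exist)
sed -i.back "s+old_name+new_name+" some_file.xml         # portable -i form: edits in place and keeps some_file.xml.back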
My goal is to search a file-hierarchy for certain text patterns, excluding certain file-name patterns, and recursively copy just the matching files to a local directory named confs. The following script does the job:
#!/bin/bash
export FEXCLUDE="{*edit,*debug,*orig,*BAK,*bak,*fcs,*NOPE,*tomcat,*full.xml,*-ha.xml}";
export SRCDIR=/opt/jboss-as-7.1.1.Final/standalone;
confshow() {
for ii in `grep -rlZ \
--exclude={*edit,*debug,*orig,*BAK,*bak,*fcs,*NOPE,*tomcat,*full.xml,*-ha.xml} \
--exclude-dir={log,tmp,i2b2.war,*.log,*_history,*.old} "<datasource\|username\|password\|user-name" \
$SRCDIR/* | xargs -0 ls {}` ;
do cp --parents $ii confs;
done;
}
However, the exclusion patterns are likely to need frequent updates and may need to be shared with other functions, so I prefer to have them all in variables declared at the beginning of the script. When I do the following, files that should be excluded get copied to the confs directory:
#!/bin/bash
export FEXCLUDE="{*edit,*debug,*orig,*BAK,*bak,*fcs,*NOPE,*tomcat,*full.xml,*-ha.xml}";
export SRCDIR=/opt/jboss-as-7.1.1.Final/standalone;
confshow() {
for ii in `grep -rlZ \
--exclude=$FEXCLUDE \
--exclude-dir={log,tmp,i2b2.war,*.log,*_history,*.old} "<datasource\|username\|password\|user-name" \
$SRCDIR/* | xargs -0 ls {}` ;
do cp --parents $ii confs;
done;
}
Any idea how to obtain the desired behavior? Or how to see what grep sees when it gets passed the $FEXCLUDE argument (echo doesn't show anything wrong)?
Thanks.
Brace expansion is nice for interactive use, but if you are writing a script, just use your editor to quickly copy the necessary --exclude options and store them in an array. Parameter expansions do not undergo brace expansion, as you may have noticed.
#!/bin/bash
# You didn't need to export these anyway, since only your script uses them
FEXCLUDE=( --exclude '*edit'
--exclude '*debug'
# etc
)
DEXCLUDE=( --exclude-dir log
--exclude-dir tmp
# etc
)
SRCDIR=/opt/jboss-as-7.1.1.Final/standalone
confshow() {
while IFS= read -r -d '' ii; do
cp --parents "$ii" confs
done < <( grep -rlZ "${FEXCLUDE[@]}" "${DEXCLUDE[@]}" "<datasource\|username\|password\|user-name" $SRCDIR/* )
}
Also, using ls defeats the purpose of using null-delimited output from grep in the first place.
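To see what grep actually receives in the failing version, remember that brace expansion happens before parameter expansion, so the braces stored in $FEXCLUDE are never expanded. A quick illustration (patterns shortened here):
FEXCLUDE="{*edit,*debug}"
echo --exclude=$FEXCLUDE        # prints --exclude={*edit,*debug}  -- one literal argument to grep
echo --exclude={*edit,*debug}   # prints --exclude=*edit --exclude=*debug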
I know this will raise frowns, but it can also be solved with eval, and it might not carry the usual risks since we are only using the variables in the --exclude= arguments.
#!/bin/bash
fexclude='{*edit,*debug,*orig,*BAK,*bak,*fcs,*NOPE,*tomcat,*full.xml,*-ha.xml}'
dexclude='{log,tmp,i2b2.war,*.log,*_history,*.old}'
srcdir=/opt/jboss-as-7.1.1.Final/standalone
confshow() {
eval grep -rlZ \
--exclude="$fexclude" \
--exclude-dir="$dexclude" \
"<datasource\|username\|password\|user-name" \
$srcdir/* | xargs -0 -I {} cp --parents '{}' confs
done
}
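The reason this works is that eval re-parses the command line, so the braces that survive parameter expansion on the first pass do get brace-expanded on the second pass. A minimal illustration:
fexclude='{*edit,*debug}'
echo --exclude="$fexclude"         # prints --exclude={*edit,*debug}  -- what plain grep would see
eval echo --exclude="$fexclude"    # prints --exclude=*edit --exclude=*debug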
I am using Debian squeeze and want to create an offline repository or a cd/dvd for the Debian non-free branch. I looked around the internet, and all I found out is that there are neither iso images nor jigdo files for creating such an image, so I had the idea to fetch the packages from one of the Debian package servers using:
wget -r --no-parent -nH -A*all.deb,*any.deb,*i386.deb \
ftp://debian.oregonstate.edu/debian/pool/non-free/
I know that I must use file: in my /etc/apt/sources.list to indicate local repositories, but how do I actually create one so that apt or aptitude understands it?
(Answered in a question edit. Converted to a community wiki answer. See What is the appropriate action when the answer to a question is added to the question itself?)
The OP wrote:
Update: With a few ugly tricks I was able to extract the needed data from the pool and dists folders.
I used the unzipped Packages.gz to do this:
grep '^Package\:.*' Packages|awk '{print $2}' >> Names.lst
grep '^Version\:.*' Packages|awk '{print $2}' >> Versions.lst
grep '^Architecture\:.*' Packages|awk '{print $2}' >> Arch.lst
With vim I find and remove the ':' in Versions.lst and generate a shorter Content.lst that is easier to parse with bash tools:
paste Names.lst Versions.lst Arch.lst >> Content.lst
Now I do this:
cat Content.lst | while read line; \
do echo "$(echo $line|awk '{print $1}')\
_$(echo $line|awk '{print $2}')_$(echo $line|awk '{print $3}')";\
done >> Content.lst.tmp && mv Content.lst.tmp Content.lst
which generates the file names I need in the debian directory. When I finish my downloads with wget, I find and rsync the needed files. mv does not work here because I need the structure as it is referred to in Packages.gz:
cat Content.lst |while read line; \
do find debian/ -type f -name ${line}.deb -exec \
rsync -rtpog -cisR {} debian2/ \; ;done
rm -r debian && mv debian2 debian
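For what it is worth, the three grep passes, the paste, and the per-line loop above could probably be collapsed into a single awk pass over Packages. This is only a sketch; it assumes every stanza carries Package, Version and Architecture fields and drops an epoch prefix (the part before ':') just like the manual edit did:
awk '/^Package:/      {p=$2}
     /^Version:/      {v=$2; sub(/^[0-9]+:/,"",v)}
     /^Architecture:/ {a=$2}
     /^$/             {if (p) print p"_"v"_"a; p=v=a=""}
     END              {if (p) print p"_"v"_"a}' Packages > Content.lst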
To retrieve the complete dists tree structure I used wget again:
wget -c -r --no-parent -nH -A*.bz2,*.gz,Release \
ftp://debian.oregonstate.edu/debian/dists/squeeze/non-free/binary-i386/
I think the only thing I have to do now is to create the Contents.gz file.
The Contents.gz file can easily be created using the apt-ftparchive program:
apt-ftparchive contents . > Contents-i386 && gzip -f Contents-i386
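To actually point apt at the result once the pool/ and dists/ trees sit under one directory (here the hypothetical /srv/debian-local; adjust to wherever the mirror was downloaded), the sources.list entry and, if the index ever needs regenerating, the apt-ftparchive calls would look roughly like this:
# /etc/apt/sources.list entry for the local mirror (file: method):
deb file:/srv/debian-local squeeze non-free
# Regenerate the package index and Release file if needed (unsigned, so apt will warn):
cd /srv/debian-local
apt-ftparchive packages pool/non-free | gzip -9 > dists/squeeze/non-free/binary-i386/Packages.gz
apt-ftparchive release dists/squeeze > Release.tmp && mv Release.tmp dists/squeeze/Release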
#!/bin/sh
LOCAL=/var/local
TMP=/var/tmp
URL=http://um10.eset.com/eset_upd
USER=""
PASSWD=""
WGET="wget --user=$USER --password=$PASSWD -t 15 -T 15 -N -nH -nd -q"
UPDATEFILE="update.ver"
cd $LOCAL
CMD="$WGET $URL/$UPDATEFILE"
eval "$CMD" || exit 1;
if [ -n "`file $UPDATEFILE|grep -i rar`" ]; then
(
cd $TMP
rm -f $TMP/$UPDATEFILE
unrar x $LOCAL/$UPDATEFILE ./
)
UPDATEFILE=$TMP/$UPDATEFILE
URL=`echo $URL|sed -e s:/eset_upd::`
fi
TMPFILE=$TMP/nod32tmpfile
grep file=/ $UPDATEFILE|tr -d \\r > $TMPFILE
FILELIST=`cut -c 6- $TMPFILE`
rm -f $TMPFILE
echo "Downloading updates..."
for FILE in $FILELIST; do
CMD="$WGET \"$URL$FILE\""
eval "$CMD"
done
cp $UPDATEFILE $LOCAL/update.ver
perl -i -pe 's/\/download\/\S+\/(\S+\.nup)/\1/g' $LOCAL/update.ver
echo "Done."
So I have this script to download definitions for my antivirus. The only problem is that it downloads all the files every time I run the script. Is it possible to implement some sort of file checking? Let's say, for example,
"if that file is present and has the same filesize, skip it"
The -nc argument to wget will not re-fetch files that already exist. It is, however, not compatible with the -N switch. So you'll have to change your WGET line to:
WGET="wget --user=$USER --password=$PASSWD -t 15 -T 15 -nH -nd -q -nc"
I am on a Solaris 8 box that does not support the -i option for sed, so I am using the following from a Google search on the topic:
# find . -name cancel_submit.cgi | while read file; do
> sed 's/ned.dindo.com\/confluence\/display\/CESDT\/CETS+DocTools>DOC Team/wwwin-dev.dindo.com\/Eng\/CntlSvcs\/InfoFrwk\/GblEngWWW\/Public\/index.html>EDCS Team/g' ${file} > ${file}.new
> mv ${file}.new ${file}
> done
This works except it messes up file permissions and group:owner.
How can I retain the original information?
You may use 'cat'; redirecting into the existing file truncates it in place, so its permissions and ownership are preserved.
cat ${file}.new > ${file} && rm ${file}.new
cp -p preserves the stuff you want. Personally I would do this (to imitate sed -i.bak):
...
cp -p ${file} ${file}.bak
sed 's/..../g' ${file}.bak > ${file}
...
You could add rm ${file}.bak to the end if desired, in which case you wouldn't technically need the -p in the cp line above. But with the above you can do mv ${file}.bak ${file} to recover if the replacement goes awry.
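Put together with the original find loop, the cp -p approach would look something like this (same sed expression as in the question):
find . -name cancel_submit.cgi | while read file; do
    cp -p ${file} ${file}.bak
    sed 's/ned.dindo.com\/confluence\/display\/CESDT\/CETS+DocTools>DOC Team/wwwin-dev.dindo.com\/Eng\/CntlSvcs\/InfoFrwk\/GblEngWWW\/Public\/index.html>EDCS Team/g' ${file}.bak > ${file}
    rm ${file}.bak    # or keep the .bak until the result has been verified
done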