I wrote a script that uses a program to create a CSV from an iCOBOL flat file. Unfortunately, the program appends to the end of the file instead of overwriting it, so I added a line to the script to remove the file first. Everything works well when I run the script manually, but when cron runs it, it seems to just delete the file and then stop. Did I miss something? Please see my code below.
#!/bin/bash
cd /home/directory || exit 1    # don't remove anything if the cd fails
rm -f file.csv                  # icreorg appends, so delete the old CSV first
icreorg -O l flatfile file.csv
icreorg is the program that generates the CSV.
Thank you!
I am currently working on a project to automate a manual task in my office. We have a process where we have to re-trigger some of our IDs when they fall into repair. As part of the process, we have to extract those IDs from an Oracle DB table, put them in a file on our Linux server, and run a command like this:
Example file:
$ cat /task/abc_YYYYMMDD_1.txt
23456
45678
...and so on
cat abc_YYYYMMDD_1.txt | scripttoprocess -args
I am using an existing Java-based program called 'scripttoprocess'. I can't see what's inside it, as it seems to be encrypted. I simply go to the location where my files are present and then use it like this:
cd /export/incoming/task
for i in abc_YYYYMMDD*.txt; do
    cat "$i" | scripttoprocess -args
    if [ $? -eq 0 ]; then
        mv "$i" /export/incoming/HIST/
    fi
done
scripttoprocess is an existing script; I am just calling it from my own script. My script runs continuously in a loop in the background. It simply searches for an abc_YYYYMMDD_1.txt file in the /task directory, and if it detects such a file, it starts processing it. But I have noticed that my script starts processing the file well before it is fully written, and sometimes moves the file to HIST without fully processing it.
How can I handle this situation? I want to be fully sure that the file is completely written before I start processing it. Secondly, is there any way to take control of the files, like preparing a control file that contains a list of the files present in the /task directory, so that I can cat the control file and pick up the file names from it? Your guidance will be much appreciated.
I used
iwatch -e close_write -c "/usr/bin/pdflatex -interaction batchmode %f" document.tex
to run a command (LaTeX-to-PDF conversion) when a file (document.tex) is closed after being written to, which you could do as well.
However, there is a caveat: this was only meant to catch manual edits to the file, and failure was not critical. Therefore, it ignores the case where the file is opened and written again immediately after closing. Ask yourself whether that is good enough for you.
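If iwatch isn't available, the same close_write idea can be expressed with inotifywait from inotify-tools. A minimal sketch, assuming the /export/incoming/task layout from the question and that scripttoprocess reads the file on stdin:
inotifywait -m -e close_write --format '%w%f' /export/incoming/task |
while IFS= read -r f; do
    # only react to the task files named in the question
    case $f in
        */abc_*.txt) scripttoprocess -args < "$f" && mv "$f" /export/incoming/HIST/ ;;
    esac
done
The same caveat applies: a file that is closed and then reopened for more writing will be picked up too early.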
I agree with @TenG: normally you shouldn't move a file until it is fully written. If you know for sure that the file is finished (like a file from yesterday), then you can move it safely; otherwise you can process it, but not move it. You can, for example, process part of it and remember the number of processed rows so that you don't restart from scratch next time.
If you really, really want to work with files that are "in progress", sometimes tail -F works for this case, but then your bash script is an ongoing process as well, not a job, and you have to manage it.
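A sketch of that ongoing-process approach, assuming scripttoprocess can consume a stream on stdin (the path is taken from the question):
# follow the file from the first line as it grows; this pipeline never exits on its own
tail -n +1 -F /export/incoming/task/abc_YYYYMMDD_1.txt | scripttoprocess -args
You would have to stop it yourself once the file is known to be complete.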
You can also check whether a file is currently open (and thus unfinished) using lsof (see https://superuser.com/questions/97844/how-can-i-determine-what-process-has-a-file-open-in-linux).
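A sketch of that check, using the paths from the question. lsof exits with status 0 when some process still has the file open; note there is still a small window between the check and the processing:
for f in /export/incoming/task/abc_*.txt; do
    if lsof "$f" > /dev/null 2>&1; then
        continue            # still being written; try again on the next pass
    fi
    scripttoprocess -args < "$f" && mv "$f" /export/incoming/HIST/
done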
Change the process that extracts the IDs from the Oracle DB table.
You can use the mv as commented by @TenG, or put something special at the end of the file that shows the work is done:
#!/bin/bash
source file_that_runs_sqlcommands_with_credentials
output=$(your_sql_function "select * from repairjobs")
# Something more for removing them from the table and check the number of deleted records
printf "%s\nFinished\n" "${output}" >> /task/abc_YYYYMMDD_1.txt
or
#!/bin/bash
source file_that_runs_sqlcommands_with_credentials
output=$(your_sql_function "select * from repairjobs union all select 'EOF' from dual")  # union all, since a plain union may sort the 'EOF' row away from the end
# Something more for removing them from the table and check the number of deleted records
printf "%s\n" "${output}" >> /task/abc_YYYYMMDD_1.txt
I have two files for deployment:
1) deploymentpackage.zip -> contains the database package along with a few shell scripts.
2) deployment.sh -> the primary shell script, which first unzips deploymentpackage.zip and then executes the shell scripts inside it.
This works as expected.
But I need to make the zip file executable so that I don't have to deliver both deploymentpackage.zip and deployment.sh to the client.
So, is it possible to make deploymentpackage.zip executable, so that the separate deployment.sh script isn't needed?
Expectation: running deploymentpackage.zip should unzip the file itself and run the scripts inside it.
If it's ok to assume that the user who will run the script has the unzip utility, then you can create a script like this:
#!/usr/bin/env bash
# commands that you need to do ...
# ...
tmp=$(mktemp)                            # unzip needs a seekable file, not a pipe
tail -n +$((LINENO + 3)) "$0" > "$tmp"   # copy everything after 'exit' into it
unzip "$tmp" && rm -f "$tmp"
exit
Make sure the script has a newline \n character at the end of the exit line. It's important that the last line of the script is the exit command, and that the tail offset (LINENO + 3 here) still points just past exit, so don't add or remove lines between the tail command and exit without adjusting it. The archive is copied to a temporary file first because unzip cannot read from a pipe: the zip directory sits at the end of the archive, so unzip needs a seekable file.
Then you can append the zipped content to this file, for example with:
cat file.zip >> installer.sh
Users will be able to run installer.sh, which will unzip the zipped content at the end of the file.
Write a readme file, and ask your users to chmod the script, then to execute it.
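For example:
chmod +x installer.sh
./installer.sh          # unpacks the archive appended after the 'exit' line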
For security reasons, I hope there is no way to auto-execute such things...
Edit: received a vote down because the OP did not like it, thanks a lot :)
On Linux I'm using "tee" to capture the output of the "source" command and write it to a log file, but it fails. The command I'm using is like this:
source ./my_run.sh 2>&1 | tee -i my_run_log
The intention of my_run.sh is to "make" some compile jobs, as well as some routine jobs like cd, rm, and svn update. The content of my_run.sh is as follows:
make clean
cd ..
rm ./xxx
svn up -r 166
cd ./aaa/
sed -i -e ......
make compile
make run
However, when I run it, "tee" just does NOT work and does NOT give me the log file at all. In order to verify that the environment is good, I did a simpler test with:
ll 2>&1 | tee -i log
and in this simpler scenario "tee" works perfectly fine and produces the "log" file as I expected.
Can anyone help me find out where my problem is?
BTW, I'm working on Red Hat Linux (release 5.9), using the bash shell.
Thanks in advance!
SOME MORE COMMENTS:
I did some more tests and found that as long as the my_run.sh script has "make xxx" steps in it, "tee" fails. It seems tee does NOT like make. Any solutions?
Problem solved; many thanks go to @thatotherguy for leading me to the solution. The log output was actually being deleted by the make clean step. After fixing the clean target in the makefile, everything is good.
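In other words, tee was writing the log all along; the script itself removed it. Writing the log to a path outside anything make clean touches avoids the problem entirely, e.g. (the path here is just an example):
source ./my_run.sh 2>&1 | tee -i /tmp/my_run_log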
I am working on a script using #!/bin/csh -f
This script is designed to do a bunch of things, but one of them is to move file_1 to file_old. The problem is that whenever you have already run the script and a file_old already exists, it says "sorry, can't help ya" and exits. Is there something I can add to the script to rename the old file with a timestamp?
If you use
set timestamp = `date +%s`
you can append the $timestamp variable to your filename however you want; that will give you a unique name.
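Putting it together in csh (the file names are taken from the question):
set timestamp = `date +%s`
mv file_1 file_old.$timestamp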
mv -f file_new file_old
Use option -f so mv replaces an existing file_old without prompting.
I'm facing a problem in a bash shell script when I try to read some lines from a file and execute them one by one. The problem occurs when the line has an argument with spaces. Code:
while read i
do
$i
done < /usr/bin/tasks
tasks file:
mkdir Hello\ World
mkdir "Test Directory"
Both of the above instructions work perfectly when executed directly from the terminal, creating only two directories, "Hello World" and "Test Directory". But the same doesn't happen when the instructions are read and executed from the script: four directories are created instead.
Having said that, I would like to keep my code as simple as possible and, if possible, I'd prefer not to use the cat command. Thanks in advance for any help.
As simple as possible? You are re-implementing the . (or source, as bash allows you to spell it) command:
. /usr/bin/tasks
or
source /usr/bin/tasks
To execute one line at a time, use eval.
while IFS= read -r i; do
    eval "$i"
done < /usr/bin/tasks
eval makes the shell re-parse the line, so the quoting and escaping in it take effect; a bare $i only undergoes word splitting, with no quote removal, which is why four directories were created. This assumes that each line of the file contains one or more complete commands that can be executed individually.