I have a script start.sh which runs another script, run.sh, and run.sh starts my executable.
I wanted to record what run.sh does, so in start.sh I used tee to log its output into a file called loglink:
exec run.sh | tee -a loglink
loglink is a soft linked file.
I have three log files, log1.txt, log2.txt, and log3.txt, and I need each file to have a maximum size of 1024 bytes. So in my code I check every 5 seconds whether log1.txt has reached the maximum size; if it has, I change the softlink loglink to point to log2.txt, and likewise from log2.txt to log3.txt and back again, in a circular way.
As per my understanding, when I change the softlink from log1.txt to log2.txt, tee should start printing to log2.txt. But strangely, tee keeps saving output to log1.txt, not log2.txt.
And to add to that:
I can see the softlink has changed in ls -l.
If I try something like ls -l | tee loglink from a fresh shell, it does write to log2.txt.
Why is the tee in start.sh not recognising this link change? Am I missing some logic here?
In short, a filename or symbolic link is just a proxy that a program uses to tell the kernel to set up a reading or writing path to the real file representation in the kernel.
tee uses file descriptors to represent files, as its source code (from FreeBSD) shows:
for (exitval = 0; *argv; ++argv)
	if ((fd = open(*argv, append ? O_WRONLY|O_CREAT|O_APPEND :
	    O_WRONLY|O_CREAT|O_TRUNC, DEFFILEMODE)) < 0) {
		warn("%s", *argv);
		exitval = 1;
	} else
		add(fd, *argv);
Once a file is opened, the symbolic link is followed and the target log file is opened; from then on the writing path to that file is established, and the symbolic link or filename is not needed anymore.
A program which opens a file keeps that file handle. If you change the link from outside, the program is not affected and keeps writing to (or reading from) the original file.
Only if your program closes the file and reopens it, it will be using the new link.
You may, for example, open a file in vlc and play it, then, while playing, move it to a different directory. No problem. Then delete it. You now can't open it with a new program, but the old one is using it until the file is closed by that program.
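You can see this for yourself with a minimal sketch (file names match the question; the six-line loop is arbitrary):
ln -sfn log1.txt loglink
for i in 1 2 3 4 5 6; do echo "line $i"; sleep 1; done | tee -a loglink &
sleep 3
ln -sfn log2.txt loglink    # repoint the symlink mid-stream
wait
cat log1.txt                # all six lines are here
ls log2.txt                 # never created: tee still holds its fd to log1.txt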
It's normal behaviour, as rightly explained in the other answers.
As a solution, you can periodically close and reopen the output file in your run.sh, or use reredirect, a very nice utility for changing another process's output at runtime:
reredirect -m newfile.log `pidof run.sh`
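If you go the reopen route instead, something like this in run.sh would pick up the new symlink target on every write (a sketch; myprogram stands in for your executable):
myprogram | while IFS= read -r line; do
    printf '%s\n' "$line" >> loglink   # >> reopens loglink each time, following the current symlink target
done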
In a particular directory, I made a file named "fileName" and added contents to it. When I typed cat fileName, its contents were printed on the terminal. Now I used the following command:
cat fileName>fileName
No error was shown. Now when I try to see the contents of the file using
cat fileName
nothing is shown in the terminal, and the file is empty (I checked it). What is the reason for this?
The > redirection to the same file will create/truncate the file before the cat command is invoked, since the shell processes the redirection first. You could avoid this by using an intermediate file and then copying from the intermediate to the actual file, or you could use tee like:
cat fileName | tee fileName
To clarify SMA's answer: the file is truncated because redirection is handled by the shell, which opens the file for writing before invoking the command. When you run cat file > file, the shell truncates and opens the file for writing, sets stdout to the file, and then executes ["cat", "file"]. So you will have to use some other command for the task, like tee.
The answers given here are wrong. You will have a problem with truncation regardless of whether you use a redirect or a pipeline, although it may APPEAR to work sometimes, depending on the size of the file and the length of your pipeline. It is a race condition: the reader may have a chance to read some or all of the file before the writer starts, but the whole point of a pipeline is to run all of its commands at the same time, so they start together, and the first thing the tee executable does is open the output file (truncating it in the process).
The only way you will not have a problem in this scenario is if the last element of the pipeline loads the entirety of the output into memory and only writes it to the file on shutdown. That is unlikely to happen and defeats the point of having a pipeline.
The proper solution for making this reliable is to write to a temp file and then rename the temp file back to the original filename:
TMP="$(mktemp fileName.XXXXXXXX)"
cat fileName | grep something | tee "${TMP}"
mv "${TMP}" fileName
I wanted to write a script that triggers some code when a file gets changed (meaning the content changes or the file gets overwritten by a file with the same name) in a specific directory (or in a subdirectory of it). When I run my code and change a file, it seems to run twice every time, since I get the echo output twice. Is there something I am missing?
while true; do
    change=$(inotifywait -e close_write /home/bla)
    change=${change#/home/bla/ * }
    echo "$change"
done
Also it doesn't do anything when I change something in a subdirectory of the specified directory.
The output looks like this after I change a file in the specified directory:
Setting up watches.
Watches established.
filename
Setting up watches.
Watches established.
filename
Setting up watches.
Watches established.
I can't reproduce the script printing a message twice. Are you sure you aren't running it twice (in the background)? Or are you using an editor to change the file? Some editors place a backup file beside the edited file while it is open; that would explain why you see two messages.
For recursive directory watching you need to pass the option -r to inotifywait. However, you should not run that on a very large filesystem tree, since the number of inotify watches is limited. You can obtain the current limit on your system through
cat /proc/sys/fs/inotify/max_user_watches
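Putting both points together, here is a sketch that watches recursively and avoids tearing down and re-establishing the watch on every event (the path is taken from your question; --format trims the output so the parameter expansion isn't needed):
inotifywait -m -r -e close_write --format '%w%f' /home/bla |
while IFS= read -r file; do
    echo "$file"
done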
Is there a way to "spoof" the file extension of a file in bash for consumption by another program? I can think of doing some shell scripting and making lots of soft-links, but that isn't very scalable.
Let's imagine I have a program I'm trying to use that requires input files to be of a specific file extension, and it has no method of turning off this check.
You could make a fifo with the requisite extension and cat any other file type into it. So, if your crazy program needs to see files that end in .funky, you can do this:
mkfifo file.funky
cat someotherfile > file.funky &
someprogram file.funky
Create a symbolic link for each file you want to have a particular extension, then pass the name of the symlink to the command.
For example suppose you have files with names of the form *.foo and you need to refer to them with extensions of .bar:
for file in *.foo ; do
    ln -s "$file" "_$$_$file.bar"
done
I precede each symlink name with _$$_ to avoid the possibility of colliding with an existing file name (you don't want to do ln -s file.foo file.bar if file.bar already exists).
With a little more programming, your script can keep track of which symlinks it created and, if you like, clean them up after executing the command.
This assumes, as you stated in the question, that the command can't be forced to accept a different extension.
You could, without too much difficulty, create a wrapper script that replaces the command in question, creating the symlinks, invoking the command, and cleaning up after itself automatically.
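A minimal sketch of such a wrapper, assuming the picky command is called realprog and wants a .bar extension (both names are placeholders):
#!/bin/bash
# Hypothetical wrapper: symlink each input with the required extension,
# run the real command on the links, and clean the links up on exit.
links=()
trap 'rm -f "${links[@]}"' EXIT
for file in "$@"; do
    link="_$$_$(basename "$file").bar"
    ln -s "$(realpath "$file")" "$link"
    links+=("$link")
done
realprog "${links[@]}"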
I'm running a program on a Linux machine. This program is writing output to the file 'output.txt' within the subfolder 'SUB' of the parent folder 'PARENT':
PARENT
|________SUB
         |________output.txt
I accidentally renamed PARENT while the output was being written. Namely, I ran the following command:
mv PARENT PARENT_NEW
So far my program hasn't crashed or anything. Does anyone know the repercussions of what I just did?
On Linux, as inherited from Unix, once a file on the local disk is open, the process has a handle to it. You may rename the parent directory, you may even delete the file. These operations don't trouble the process writing to the file as long as it does not close and reopen it.
The file is kept open by the program via a file descriptor, a small non-negative integer that the kernel uses to refer to the open file. Your action should have no effect.
Per UNIX semantics, the file will be present in the new location. Here is a simple experiment:
$ mkdir /tmp/test
$ cat > /tmp/test/abc.txt
hello
world
and again!
So while cat is still waiting for input, open a new terminal and rename the folder:
$ mv /tmp/test/ /tmp/test2
Now back to earlier terminal: ( press Ctrl+D to complete the input to cat )
$ ls /tmp/test/
ls: cannot access /tmp/test/: No such file or directory
$ ls /tmp/test2/
abc.txt
$ cat /tmp/test2/abc.txt
hello
world
and again!
So basically, unless the file or directory is deleted completely, it will be present in the new location after the write is complete.
However, if process B deletes a file f while some other process A is still writing to it, the file f remains available to process A, because A holds a reference to its inode. For the rest of the processes, including B, it will no longer be accessible by name. Any other process can still access file f only if it can obtain a reference to the inode via A's file descriptors under /proc/<PID-of-A>/fd.
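For example, if A's PID were 1234 (hypothetical, as is the fd number below), you could inspect and even copy the deleted file back via procfs:
ls -l /proc/1234/fd              # deleted targets show as '/path/f (deleted)'
cp /proc/1234/fd/3 f.restored    # pick the right fd number from the listing above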
Well, I'm a Linux newbie, and I'm having an issue with a simple bash script.
I've got a program that adds to a log file while it's running. Over time that log file gets huge. I'd like to create a startup script which will rename and move the log file before each run, effectively creating separate log files for each run of the program. Here's what I've got so far:
DATE=$(date +"%Y%m%d%H%M")
mv server.log logs/$DATE.log
echo program
When run, I see this:
: command not found
program
When I cd to the logs directory and run dir, I see this:
201111211437\r.log\r
What's going on? I'm assuming there's some syntax issue I'm missing, but I can't seem to figure it out.
UPDATE: Thanks to shellter's comment below, I've found the problem to be due to the fact that I'm editing the .sh file in Notepad++ in windows, and then sending via ftp to the server, where I run the file via ssh. After running dos2unix on the file, it works.
New question: How can I save the file correctly in the first place, to avoid having to perform this fix every time I resend the file?
mv server.log logs/$(date -d "today" +"%Y%m%d%H%M").log
The few lines you posted from your script look okay to me. It's probably something a bit deeper.
You need to find which line is giving you this error. Add set -xv to the top of your script. This will print out the line number and the command that's being executed to STDERR. This will help you identify where in your script you're getting this particular error.
BTW, do you have a shebang at the top of your script? When I see something like this, I normally expect it to be an issue with the shebang. For example, if you had #!/bin/bash at the top, but your bash interpreter is located at /usr/bin/bash, you'll see this error.
EDIT
New question: How can I save the file correctly in the first place, to avoid having to perform this fix every time I resend the file?
Two ways:
Select the Edit->EOL Conversion->Unix Format menu item when you edit a file. Once it has the correct line endings, Notepad++ will keep them.
To make sure all new files have the correct line endings, go to the Settings->Preferences menu item, and pull up the Preferences dialog box. Select the New Document/Default Directory tab. Under New Document and Format, select the Unix radio button. Click the Close button.
A single-line method within bash works like this:
[some output] > $(date "+%Y.%m.%d-%H.%M.%S").ver
This will create a file with a timestamped name and a .ver extension.
A working example that snapshots a file listing to a date-stamped file name shows it in action:
find . -type f -exec ls -la {} \; | cut -d ' ' -f 6- >$(date "+%Y.%m.%d-%H.%M.%S").ver
Of course
cat somefile.log > $(date "+%Y.%m.%d-%H.%M.%S").ver
or even simpler
ls > $(date "+%Y.%m.%d-%H.%M.%S").ver
I use this command for a simple rotation of a file:
mv output.log `date +%F`-output.log
In the local folder I now have 2019-09-25-output.log.
Well, it's not a direct answer to your question, but there's a tool in GNU/Linux whose job is to rotate log files on a regular basis, keeping old ones compressed up to a certain limit: logrotate.
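A minimal sketch of a logrotate rule, assuming the log lives at /home/me/server.log (the path and limits are placeholders):
/home/me/server.log {
    size 1M
    rotate 5
    compress
    missingok
    copytruncate
}
copytruncate matters here because the program keeps its log file open while running: logrotate copies the file and truncates it in place, so the writer's file descriptor stays valid.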
You can write your scripts in Notepad, but just make sure you convert them using this:
$ sed -i 's/\r$//' yourscripthere
I use it all the time when I'm working in Cygwin, and it works. Hope this helps.
First, thanks for the answers above! They lead to my solution.
I added this alias to my .bashrc file:
alias now='date +%Y-%m-%d-%H.%M.%S'
Now when I want to put a time stamp on a file such as a build log I can do this:
mvn clean install | tee build-$(now).log
and I get a file name like:
build-2021-02-04-03.12.12.log