I am using grep to match a string in a file, but at the same time another application might be writing to the same file.
In that case, what will grep do?
Will it allow the other application to keep writing to the file, or will it block access to it?
Also, if it does allow access, will my grep results be based on the file's contents from before the write or after?
Basically, I want grep not to lock access to the file, but if it does, is there an alternative to prevent it from doing so?
My sample command:
egrep -r -i "regex" /directory/*
grep does not lock the file, so it is safe to use while the file is being actively written by another application. Your results will reflect whatever content is in the file at the moment grep reads it, so a write that happens mid-scan may or may not show up.
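A quick way to see this for yourself (the file name and writer loop are illustrative):

while true; do echo "line $RANDOM" >> /tmp/busy.log; sleep 1; done &
egrep -i "line" /tmp/busy.log    # completes normally while the writer keeps appending
kill %1                          # stop the background writer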
I am trying to tail all the log files present under a directory and its sub-directories recursively, using the command below:
shopt -s globstar
tail -f -n +2 /app/mylogs/**/* | awk '/^==> / {a=substr($0, 5, length-8); next} {print a":"$0}'
and the output is below:
/app/mylogs/myapplog10062020.log:Hi this is first line
/app/mylogs/myapplog10062020.log:Hi this is second line
which is fine, but the problem is that when I add a new log file under /app/mylogs/ after I start the above tail command, tail will not take that new file into consideration.
Is there a way to get this done?
When you start the tail process, you pass it a (fixed) list of the files which tail is supposed to follow, as you can see from the tail man page. This is different from, say, find, where you can pass a file name pattern among the options. After the process has been started, tail has no way of knowing that you suddenly want it to follow another file too.
If you want a feature like this, you would have to program your own version of tail, which is given, for instance, a directory to scan, and which either periodically checks the directory content for changes or uses a facility such as inotify to be notified of them.
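A rough sketch of that idea, assuming inotify-tools is installed (the path and the line-labelling are illustrative):

inotifywait -mqr -e create --format '%w%f' /app/mylogs |
while read -r newfile; do
    # follow each newly created file, prefixing its name like the awk filter above
    tail -f -n +2 "$newfile" | sed "s|^|$newfile:|" &
done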
I have a service "A" which generates some compressed files comprising of the data it receives in requests. In parallel there is another service "B" which consumes these compressed files.
The trick is that "B" shouldn't consume any of the files unless they are written completely. Service "B" deduces this by looking for a ".ready" file that service "A" creates once compression is done, named exactly like the generated file plus the ".ready" extension. Service "B" uses Apache Camel to do this filtering.
Now I am writing a shell script which needs the same compressed files, so the same filtering has to be implemented in shell. I need help writing this script. I am aware of the find command, but I am a novice shell user with very limited knowledge.
Example:
Compressed file: sumit_20171118_1.gz
Corresponding ready file: sumit_20171118_1.gz.ready
Another compressed file: sumit_20171118_2.gz
No ready file is present for this one.
Of the files listed above, only the first should be picked up, as it has a corresponding ready file.
The most obvious way would be to use a busy loop that polls the directory; a plain-shell sketch of that appears at the end of this answer. But if you are on GNU/Linux you can do better (from: https://www.gnu.org/software/parallel/man.html#EXAMPLE:-GNU-Parallel-as-dir-processor):
inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |
parallel -uj1 echo Do stuff to file {}
This way you do not even have to wait for the .ready file: the command will only be run when writing to the file is finished and the file has been closed.
If, however, the .ready file is only written much later, then you can watch for that one instead:
inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |
grep --line-buffered '\.ready$' |
parallel -uj1 echo Do stuff to file {.}
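If inotify is not available, the busy loop can be as simple as this sketch (the directory, action, and poll interval are placeholders; note it does not track files it has already handled):

while true; do
    for ready in /path/to/dir/*.ready; do
        [ -e "$ready" ] || continue            # no matches: the glob stays literal
        gz=${ready%.ready}                     # strip the .ready marker suffix
        [ -f "$gz" ] && echo "Do stuff to file $gz"
    done
    sleep 5                                    # poll interval
done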
I have been using this in my scripts, and my coworkers disagree with me.
My script takes a file name as a parameter and creates the file. Then I use the following to see if it was actually created:
ls -p | grep [filename]
Then I check whether the file I am trying to create appears in grep's output.
but they are suggesting I use
test -f [filename]
instead.
What is the proper way to check for if a file exists in Linux?
test -f [filename] is the way to go. Running both ls and grep just for this operation is overkill, and since grep matches substrings, it can report success for a different file whose name merely contains the one you are looking for.
The second method you mention is the one advised in the Linux Foundation's introductory course. It tests both that the name exists and that it is a regular file; grep simply tells you whether it found the string or not.
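In a script the check usually looks like this minimal sketch ([ is another name for test; the variable name is illustrative):

if [ -f "$filename" ]; then
    echo "$filename was created"
else
    echo "failed to create $filename" >&2
fi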
I have some large files that I need to concatenate into one giant file to put through a software package that does not accept stdin. I would rather not duplicate the content of existing files on the hard drive unless necessary, and am looking for a shortcut that basically acts like cat files*.txt when it is opened.
You can use process substitution to make the output of a command appear to be a file.
some_command <(cat files*.txt)
But if the application reads from standard input, you can just pipe into it:
cat files*.txt | some_command
Another solution I just discovered uses named pipes:
mkfifo files.star.txt                  # create a named pipe
chmod 666 files.star.txt
cat files*.txt > files.star.txt &      # the writer blocks until a reader opens the pipe
some_command files.star.txt
rm files.star.txt                      # remove the fifo once done
I have created this code, which allows the user to change the port in a specific file:
#Change Port
IRSSIPORT1=$(head -n 1 /etc/ports.txt)    # take the first port from the list
sudo perl -pi -e "s/^$IRSSIPORT1.*\n$//g" /etc/ports.txt                      # delete that port from the list
sudo perl -pi -e "s/web_port = 8081/web_port = $IRSSIPORT1/g" .sickbread/config.ini
echo "sickbread Port: $IRSSIPORT1" | sudo tee -a $HOME/private/SBinfo.txt
What this code does: it takes a number from a file, puts it in the config file where it needs to be changed, and deletes that number from the file it took it from. But this requires both read and write access.
I tried everything I know to get it to work without sudo, but I failed.
Any suggestions?
I get this error:
Can't remove /etc/ports.txt: Permission denied, skipping file.
You can't do an in-place edit on mode-666 files inside /etc, because the -i switch creates a new file and deletes the old one inside the directory.
Since regular users don't have sufficient permissions to add or delete files in /etc (nor would it be a good idea to grant them), you have to read the whole file content at once, change it, and write it back to the same file. Using a temporary file is also a workable solution.
While it may seem that the question is more about system administration than about programming, it is actually somewhat about perl, so it may be a good fit here.
Doing chmod 666 /etc/ports.txt grants all users read-write access to this particular file (you don't need 777, as it is not an executable or a script). So any user will be able to open the file for writing and put any contents into it.
But when you do perl -pi -e ... /etc/ports.txt, you don't just write into that file. Instead, perl deletes it and then recreates it, as this strace output shows:
# strace perl -pi -e 's/a/b/' /etc/ports.txt 2>&1 | grep /etc/ports.txt
...
open("/etc/ports.txt", O_RDONLY) = 3
unlink("/etc/ports.txt") = 0
open("/etc/ports.txt", O_WRONLY|O_CREAT|O_EXCL, 0600) = 4
To delete the file, perl needs write access not to the file itself, but to the directory /etc, which of course you cannot give to every user.
So don't try to use in-place edit here, as it always involves removing or renaming files. Instead, read the contents of the file, make the required changes, and write the result back to the same file.
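A minimal sketch of that approach (the port pattern is illustrative): opening the file with mode "+<" updates the existing file in place, so the 666 permissions on the file itself are enough and no write access to /etc is needed.

perl -e '
    open my $fh, "+<", "/etc/ports.txt" or die "open: $!";
    my $content = do { local $/; <$fh> };   # slurp the whole file
    $content =~ s/^1234.*\n//m;             # drop the consumed port line (pattern is illustrative)
    seek $fh, 0, 0;                         # rewind and overwrite in place
    print $fh $content;
    truncate $fh, tell $fh;                 # trim any leftover bytes
    close $fh or die "close: $!";
'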