Hi, I need a few pointers.
I have a system that processes a file and then writes that file out to a bunch of storage. I would like to write a simple bash script that I can run as a post-processing step to zip or tar the file once the output is done.
Could someone tell me how I could do this once the file is done?
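For illustration, a minimal sketch of such a post-processing script (untested; the paths and archive directory are placeholders, and it assumes the finished file is passed as the first argument):
#!/bin/bash
# post-process.sh -- called with the finished file as its first argument
set -euo pipefail

file="$1"                      # the file the main process just finished writing
archive_dir="/path/to/archive" # hypothetical destination directory

# tar + gzip the single file, keeping only its basename inside the archive
tar -czf "$archive_dir/$(basename "$file").tar.gz" -C "$(dirname "$file")" "$(basename "$file")"

# or, to use zip instead (-j junks the directory part of the path):
# zip -j "$archive_dir/$(basename "$file").zip" "$file"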
In Python I have the following executed as a bash command: unzip '{dir}ATTOM_RECORDER/*.zip' -d {dir}ATTOM_RECORDER/. The Python call works perfectly; my question is about the unzip command itself.
For some reason, when unzip is called to expand the relevant zip files in the specified folder, not all of the files WITHIN the zip are extracted. There are usually an rpt file and a txt file; however, sometimes the txt file does not come out, and I do not get any error message.
How can I ensure the txt file is guaranteed to be extracted before moving on?
Thanks
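One way this could be approached (a rough sketch, assuming the standard Info-ZIP unzip; the directory path stands in for {dir}ATTOM_RECORDER) is to check unzip's exit status and then verify that a .txt file actually exists before moving on:
#!/bin/bash
# extract every zip in the directory and fail loudly if no .txt comes out
dir="/path/to/ATTOM_RECORDER"   # placeholder for the real directory

for z in "$dir"/*.zip; do
    # -o: overwrite without prompting; unzip exits non-zero on any error
    if ! unzip -o "$z" -d "$dir"; then
        echo "unzip failed for $z" >&2
        exit 1
    fi
done

# verify at least one .txt file was produced before continuing
if ! ls "$dir"/*.txt >/dev/null 2>&1; then
    echo "expected .txt file was not extracted" >&2
    exit 1
fi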
While you want to unzip your specific zip file, there are many options for extracting files from zip archives. The easiest starting point is the -l option of the unzip command, which lists the contents of a zip file without extracting it.
Syntax: unzip -l [file_name.zip]
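For example (test.zip and report.txt are placeholder names), you could list the archive first and then extract only the member you need:
unzip -l test.zip              # list the contents without extracting anything
unzip test.zip report.txt      # extract only report.txt from the archive
unzip -o test.zip -d /tmp/out  # extract everything, overwriting, into /tmp/out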
I have a zip file named test_kit.zip which contains a shell script deploy.sh inside it. The shell script has read, write, and execute permissions.
I want to unzip this test_kit.zip file
Using jar xf or 7zip
jar xf test_kit.zip
7za x test_kit.zip
These commands unzip the zip file correctly, but the deploy.sh shell script loses its execute permission. Is there any way to unzip using 'jar xf' or '7za x' without losing execute permission for the files and folders inside the zip file?
Using unzip
unzip test_kit.zip
This command gives me the following error:
warning [test_kit.zip]: 1318507533 extra bytes at beginning or within zipfile
(attempting to process anyway)
error [test_kit.zip]: start of central directory not found;
zipfile corrupt.
(please check that you have transferred or created the zipfile in the
appropriate BINARY mode and that you have compiled UnZip properly)
I want to understand what is going wrong.
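That warning usually means the archive has extra data prepended to it or was corrupted in transfer (for example by an ASCII-mode transfer, as the message itself suggests). One thing worth trying, as a sketch assuming Info-ZIP's zip is installed, is to let zip attempt a repair and then extract the fixed copy with unzip, which also restores the Unix permission bits (such as execute on deploy.sh) when they were stored in the archive:
# try to fix the archive structure; writes the repaired copy to a new file
zip -FF test_kit.zip --out test_kit_fixed.zip

# extract the repaired archive; Info-ZIP unzip restores Unix permission
# bits (including execute) if they were stored when the zip was created
unzip test_kit_fixed.zip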
zip has -@, which takes file names from stdin. I just want to add one file whose content is taken from stdin; the filename stored in the zip file should be specified on the command line. I don't think this is possible with the zip command line. Could anybody confirm whether this is the case? Thanks.
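As a possible workaround (a sketch; archive.zip and report.txt are placeholder names), stdin can be spooled to a temporary file carrying the desired entry name and that file zipped; if your zip build supports -FI/--fifo (Info-ZIP zip 3.0 on Unix), a named pipe avoids the temporary copy:
# workaround 1: spool stdin to a temporary file with the desired entry name
tmpdir=$(mktemp -d)
cat > "$tmpdir/report.txt"
zip -j archive.zip "$tmpdir/report.txt"   # -j stores just the file name, not the path
rm -r "$tmpdir"

# workaround 2: a named pipe, if zip supports -FI/--fifo
# mkfifo report.txt
# cat > report.txt &
# zip --fifo archive.zip report.txt
# rm report.txt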
I have a service "A" which generates compressed files containing the data it receives in requests. In parallel, another service "B" consumes these compressed files.
The trick is that "B" shouldn't consume any of the files unless they are written completely. Service "B" deduces this by looking for a ".ready" file that service "A" creates once compression is done; the marker has exactly the same name as the generated file, with the ".ready" extension appended. Service "B" uses Apache Camel to do this filtering.
Now I am writing a shell script which needs the same compressed files, so the same filtering has to be implemented in shell. I need help writing this script. I am aware of the find command, but I am a novice shell user and have very limited knowledge.
Example:
Compressed file: sumit_20171118_1.gz
Corresponding ready file: sumit_20171118_1.gz.ready
Another compressed file: sumit_20171118_2.gz
No ready file is present for this one.
Of the files listed above, only the first should be picked up, as it has a corresponding ready file.
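For clarity, the filtering described above could be sketched in plain shell along these lines (an untested sketch; /path/to/dir is a placeholder for the actual directory):
#!/bin/bash
# pick up only the .gz files that have a matching .ready marker next to them
dir="/path/to/dir"

find "$dir" -maxdepth 1 -name '*.gz' | while IFS= read -r f
do
    if [ -e "$f.ready" ]; then
        echo "processing $f"    # replace echo with the real processing step
    fi
done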
The most obvious way would be to use a busy loop. But if you are on GNU/Linux you can do better than that (from: https://www.gnu.org/software/parallel/man.html#EXAMPLE:-GNU-Parallel-as-dir-processor)
inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |
parallel -uj1 echo Do stuff to file {}
This way you do not even have to wait for the .ready file: The command will only be run when writing to the file is finished and the file is closed.
If, however, the .ready file is only written much later then you can search for that one:
inotifywait -qmre MOVED_TO -e CLOSE_WRITE --format %w%f my_dir |
grep --line-buffered '\.ready$' |
parallel -uj1 echo Do stuff to file {.}
I have a linux server that receives data files via sftp. These files contain data that is immediately imported into an application for use. The directory which the files are sent to is constantly read by another process looking for the new files to process.
The problem I am having is that the files are getting read before they are completely transferred. Is there a way to hide the files before they have transferred?
One thought I had is to leverage the .filepart concept that many sftp clients use, where a file is uploaded under a temporary name and renamed once the transfer is complete. I don't have control of the clients though, so is there a way to do this on the server side?
Or is there another way to do this by permissions or such?
We have solved a similar problem by creating a staging directory on the same filesystem as the directory the clients read from, and using inotifywait.
You sftp to the staging directory and have inotifywait watch that staging directory.
Once inotifywait reports a close_write event (the uploader finished writing and closed the file) for a received file, you simply "mv" the file into the directory the client reads from.
#!/bin/bash
# watch the staging directory and move each file to the real directory
# once the uploader has finished writing it and closed it (close_write)
inotifywait -m -e close_write --format "%f" /path/to/tmp | while IFS= read -r newfile
do
    mv /path/to/tmp/"$newfile" ~/real
done