Upload output of a program directly to a remote file by ftp - linux

I have a program that generates a lot of data, specifically encrypted tarballs, and I want to upload the result to a remote FTP server.
The files are quite big (about 60 GB), so I don't want to waste disk space on a temporary directory, nor the time it takes to write one.
Is it possible? I checked the ncftpput utility, but it has no option to read from standard input.

curl can upload while reading from stdin:
-T, --upload-file
[...]
Use the file name "-" (a single dash) to use stdin instead of a given
file. Alternately, the file name "." (a single period) may be
specified instead of "-" to use stdin in non-blocking mode to allow
reading server output while stdin is being uploaded.
[...]
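For the original use case, the whole pipeline can feed curl directly, without a temporary file. A minimal sketch; the tar/gpg stage, credentials, host and path are placeholders, not taken from the question:
tar cf - /data/to/backup \
  | gpg --encrypt --recipient backup@example.com \
  | curl -T - ftp://user:password@ftp.example.com/upload/backup.tar.gpg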

I guess you could do that with any upload program using a named pipe, but I foresee problems if some part of the upload goes wrong and you have to restart it: the data is gone and you cannot resume the upload, even if you only lost one byte. The same applies to the read-from-stdin strategy.
My strategy would be the following:
1. Create a named pipe using mkfifo.
2. Start the encryption process writing to that named pipe in the background. The pipe buffer will soon fill up and the encryption process will block trying to write to the pipe; it will unblock once we start reading from the pipe later.
3. Read a fixed amount of data from the named pipe (say 1 GB) and put it in a file. The dd utility can be used for that (see the sketch after this list).
4. Upload that file through FTP the standard way. You can then deal with retries and network errors. Once the upload is complete, delete the file.
5. Go back to step 3 until you get an EOF from the pipe, which means the encryption process has finished writing.
6. On the server, append the files in order to an empty file, deleting each one once it has been appended. Something like touch next_file; for f in ordered_list_of_files; do cat "$f" >> next_file; rm "$f"; done should do it.
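A minimal client-side sketch of steps 1–5, assuming GNU dd (for iflag=fullblock) and ncftpput as the uploader; the pipe path, chunk size, credentials and the encrypt_tarball command are placeholders:
mkfifo /tmp/encpipe
encrypt_tarball > /tmp/encpipe &    # hypothetical producer; its open() blocks until a reader opens the pipe
exec 3< /tmp/encpipe                # keep one read end open for the whole run

n=0
while :
do
    chunk=$(printf 'chunk.%05d' "$n")
    # GNU dd: iflag=fullblock keeps reading until a full 1 GiB is collected or EOF is hit
    dd of="$chunk" bs=1M count=1024 iflag=fullblock <&3
    if [ ! -s "$chunk" ]; then      # empty chunk: the producer closed its end, we are done
        rm -f "$chunk"
        break
    fi
    ncftpput -u user -p pass ftp.example.com /upload "$chunk"   # retry handling would go here
    rm "$chunk"
    n=$((n + 1))
done
exec 3<&-
rm /tmp/encpipe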
You can of course prepare the next file while you upload the previous one, to use concurrency to the maximum. The bottleneck will be either your encryption algorithm (CPU), your network bandwidth, or your disk bandwidth.
This method will cost you 2 GB of disk space on the client side (more or less, depending on the size of the chunks), and 1 GB of disk space on the server side. But you can be sure that you will not have to start over if your upload hangs near the end.
If you want to be doubly sure about the result of the transfer, you could compute a hash of each file while writing it to disk on the client side, and only delete the client file once you have verified the hash on the server side. The hash can be computed on the client side at the same time you write the file to disk, using dd ... | tee local_file | sha1sum. On the server side, you would have to compute the hash before doing the cat, and skip the cat if the hash does not match, so I cannot see how to do it without reading the file twice (once for the hash, and once for the cat).
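For the hashing variant, the write to disk and the hash can happen in the same pass, e.g. (a sketch reusing the placeholder names from the loop above):
dd bs=1M count=1024 iflag=fullblock <&3 | tee "$chunk" | sha1sum > "${chunk}.sha1"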

You can write to a remote file using ssh:
program | ssh -l userid host 'cd /some/remote/directory && cat - > filename'

This is a sample of uploading to an FTP site with curl:
wget -O- http://www.example.com/test.zip | curl -T - ftp://user:password@ftp.example.com:2021/upload/test.zip

Related

Is it dangerous to pipe data from S3 to a long-running process

I have a 30GB file and I want to feed it into a program that accepts data via stdin and that will take 24hr to process the data.
Can I just do aws s3 cp s3://bigfile.txt - | long_process.sh?
I'm attracted to this b/c I don't have to store bigfile.txt on disk and I can start working on the data stream immediately, but I worry that if the s3 command has a problem during the 24 hours that it will crash and I will lose all the progress.
EDIT:
Alternatively, I am looking for a way to stage the data to a file and read from the staged data. For example: aws s3 cp s3://bigfile.txt /local/bigfile.txt & long_process.sh < /local/bigfile.txt. The trouble is that long_process.sh complains of a truncated file; it does not seem to wait for further input like a pipe would. I have thought about tee and mkfifo, but nothing seems to quite fit my needs.
Yes, it's dangerous: if the copy or the processing fails at any point during those 24 hours, you lose everything and have to start over.
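If safety matters more than starting immediately, the staged variant can simply be run sequentially, so processing only starts once the copy has fully succeeded (a sketch using the asker's own paths):
aws s3 cp s3://bigfile.txt /local/bigfile.txt \
  && long_process.sh < /local/bigfile.txt    # && ensures the 24-hour job only starts after a successful copy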

How to simulate an ftp transfer of a big file with curl?

I want to test the FTP connection to an FTP server. To do that, I want to transfer (upload) a big file (around 10 GB) to the server in order to check the connection.
But to do that I would have to generate a 10 GB file, which is not good, especially if my device does not have enough memory/disk. So I want to simulate a transfer of a big file (of 10 GB, for example) without actually generating one.
Is it possible to do it with curl command?
'truncate' (the command line tool) is your friend. It can create huge files that are "sparse", meaning that the long sequence of zeroes is just a hole that won't actually occupy space on the drive.
Therefore, you can create a 10 GB file (containing all zeroes) on a much smaller file system and hard drive like this (works on Linux):
$ truncate --size=10G bigfile
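To actually exercise the upload path, the sparse file can then be pushed with curl's -T option (a sketch; credentials, host and path are placeholders):
truncate --size=10G bigfile
curl -T bigfile ftp://user:password@ftp.example.com/upload/bigfile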
With FTP, assuming that the client and server have a small round-trip delay, sending 10 GB will take the same amount of time as sending 100 files of 100 MB over the same open connection. Hopefully your device can have that much space allocated.
The key is to avoid re-opening the connection. Do something like the following to keep the connection open:
X=ftp://server/path/to/100M
X100=$(for i in {1..100} ; do echo " " $X ; done )
curl $X100

When does the writer of a named pipe do its work?

I'm trying to understand how a named pipe behaves in terms of performance. Say I have a large file I am decompressing that I want to write to a named pipe (/tmp/data):
gzip --stdout -d data.gz > /tmp/data
and then some time later I run a program that reads from the pipe:
wc -l /tmp/data
When does gzip actually decompress the data, when I run the first command, or when I run the second and the reader attaches to the pipe? If the former, is the data stored on disk or in memory?
Pipes (named or otherwise) have only a very small buffer if any -- so if nothing is reading, then nothing (or very little) can be written.
In your example, gzip will do very little until wc is run, because before that point its efforts to write output will block. Out-of-the-box there is no nontrivial buffer either on-disk or in-memory, though tools exist which will implement such a buffer for you, should you want one -- see pv with its -B argument, or the no-longer-maintained (and, sadly, removed from Debian by folks who didn't understand its function) bfr.
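As an illustration, pv can be inserted between the writer and the pipe to provide such a buffer (a sketch; the 512m buffer size is arbitrary):
mkfifo /tmp/data
gzip --stdout -d data.gz | pv -q -B 512m > /tmp/data &   # pv buffers up to 512 MB so gzip can run ahead of the reader
wc -l /tmp/data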

Comparing a big file on two servers

I have two servers and I want to move a backup tar.bz file (50 GB) from one to the other.
I used axel to download the file from the source server. But now when I try to extract it, I get an "unexpected EOF" error. The two files are the same size, so it seems there is a problem with the content.
I want to know if there is a program/app/script that can compare these two files and correct only the damaged parts, or do I need to split the file by hand and compare each part's hash?
The problem is that the source server has limited bandwidth and a low transfer speed, so I can't transfer the whole thing again from scratch.
You can use a checksum utility, such as md5 or sha, to see if the files are the same on either end. e.g.
$ md5 somefile
MD5 (somefile) = d41d8cd98f00b204e9800998ecf8427e
By running such a command on both ends and comparing the results, you can get some certainty as to whether the files are the same.
As for only downloading the erroneous portion of a file, this would require checksums on both sides for "pieces" of the data, such as with the bittorrent protocol.
Ok, I found "rdiff" the best way to solve this problem. Just doing:
On Destination Server:
rdiff signature destFile.tar.bz destFile.sig
Then transfer destFile.sig to the source server and run rdiff again, this time on the source server:
rdiff delta destFile.sig srcFile.tar.bz delta.rdiff
Then transfer delta.rdiff to the destination server and run rdiff once more, on the destination server:
rdiff patch destFile.tar.bz delta.rdiff fixedFile.tar.bz
This process really doesn't need a separate program. You can do it with a couple of simple commands: split the file into pieces and checksum each piece. If any of the md5sums don't match, copy over the mismatched piece(s) and concatenate the pieces back together. To make comparing the md5sums easier, just diff the outputs from the two servers (or md5sum the outputs to see whether there is any difference at all, without having to copy the output across).
split -b 1000000000 -d bigfile bigfile.
for i in bigfile.*
do
    md5sum "$i"
done
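After re-copying any pieces whose md5sums differ, the pieces can be joined back together and checked once more, e.g. (a sketch; it assumes fewer than 100 numeric-suffixed pieces, so glob order matches piece order):
cat bigfile.* > bigfile_rebuilt
md5sum bigfile_rebuilt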

CIFS/SMB Write Optimization

I am looking at making a write optimization for CIFS/SMB such that the writing of duplicate blocks is suppressed. For example, I read a file from the remote share and modify a portion near the end of the file. When I save the file, I only want to send write requests back to the remote side for the portions of the file that have actually changed. So basically, suppress all writes up until the point at which a non-duplicate write is encountered. At that point the suppression will be disabled and the writes will be allowed as usual. The problem is I can't find any documentation, in MS-SMB/MS-SMB2/MS-CIFS or otherwise, that indicates whether or not this is a valid thing to do. Does anyone know if this would be valid?
Dig deep into the sources of the Linux kernel; there is documentation on CIFS, both in source and text. E.g. http://www.mjmwired.net/kernel/Documentation/filesystems/cifs.txt
If you want to study the behaviour of e.g. the CIFS protocol, you may be able to test it with the unix command dd. Mount any remote file system via CIFS, e.g. into /media/remote, and change into this folder:
cd /media/remote
Now create a file with some random stuff (e.g. from the kernel's random pool):
dd if=/dev/urandom of=test.bin bs=4M count=5
In this example, you should see some 20 MB of traffic. Then create another, smaller file somewhere on your machine, say in your home folder:
dd if=/dev/urandom of=~/test_chunk.bin bs=4M count=1
The interesting thing is what happens if you attempt to write the chunk into a specific position of the remote test file:
dd if=~/test_chunk.bin of=test.bin bs=4M count=1 seek=3 conv=notrunc
Actually, this should only change block #4 out of 5 in the target file.
I guess you can adjust the block size ... I did this with 4 MB blocks. But it should help to understand what happens on the network.
The CIFS protocol does allow applications to write back specific portions of the file. This is controlled by the parameters DataOffset and DataLength in the SMB WriteAndX packet.
Documentation for the same can be found here:
http://msdn.microsoft.com/en-us/library/ee441954.aspx
The client can use these fields to write a specific length of data to specific offsets within the file.
Similar support exists in more recent versions of the protocol as well ...
The SMB protocol does have such a write optimization. It works with the CIFS append operation, where the protocol reads the file's EOF and starts writing the new data with the offset set to the EOF value and the length set to the number of appended bytes.
