For a project of mine I am using a very limited Linux busybox machine.
I am trying to upload files to that machine (connected to me via Ethernet) using telnet.
So far I've had several ideas for implementing it:
Writing the files in chunks (using echo -e on chunks of 128 bytes) to the disk. This failed because the echo command doesn't have a -e option.
Redirecting a socket into a file using something like /dev/tcp/192.168.1.2/12345 > /tmp/file. This failed because the /dev/ folder doesn't contain the tcp device.
Using utilities such as nc / ncat / nfqueue. This also failed because none of them exist and I can't install anything on that machine (no apt-get / yum etc.).
Using echo (without the -e option) to write a base64-encoded file to the disk and then decode it. This failed because I couldn't find any utility to decode base64 strings.
Do you have any creative idea to upload files?
Thank you for the fast replies.
I found on the machine a tftp client.
Therefore I could execute:
/usr/bin/tftp -l /tmp/tst -r testfile.txt -g <server ip>
after starting a tftp server at <server ip>.
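For completeness, a sketch of the host side (assuming a Debian/Ubuntu host running tftpd-hpa, which serves /srv/tftp by default):

sudo apt-get install tftpd-hpa          # simple TFTP server
sudo cp testfile.txt /srv/tftp/         # make the file available for download
# then, on the busybox machine:
/usr/bin/tftp -l /tmp/tst -r testfile.txt -g <server ip>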
See if your busybox build includes rx.
This will give you XMODEM receive functionality on your target.
I asked our software group to add it to our image; it was added with very little effort.
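As a rough sketch of how that is used (the file name is a placeholder): on the target you run

rx /tmp/received.bin

and then trigger an XMODEM send of the file from the host side, for example with a terminal emulator that has XMODEM built in, or with sx from the lrzsz package.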
I am connected to a server through PuTTY, and I want to download (to my PC) certain files on a regular basis using a shell script. Specifically, these are the files returned by
ls -t ~/backup | head -n2
What is the best strategy for this? I was trying command-line FTP, but I am prompted to log in to something. I'm already logged into the server that has the files I need to download, so I am missing something.
The SSH protocol can be a good way to do this, with the scp command. You can take a look at this thread.
To automate the process and script a solution, you will need to use password-less ssh with ssh keys.
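A one-time key setup might look like this (assuming OpenSSH on both machines):

ssh-keygen -t ed25519        # generate a key pair; accept the default path and an empty passphrase
ssh-copy-id username@host    # install the public key on the server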
With keys in place, the first step is to get the list of files to copy:
fils=$(ssh username@host 'ls -t ~/backup | head -n2')    # the two newest files in the remote ~/backup
Then, once we have the file names in the variable fils, we can loop over the entries and run a secure copy command for each:
while read fyle
do
    # copy each listed file from the remote ~/backup into the current local directory
    scp username@host:~/backup/"$fyle" "$fyle"
done <<< "$fils"
I'm trying to get an Ubuntu server to periodically (preferably whenever it gets updated, if possible) copy a file from a remote FTP server into a directory on the Ubuntu server. I should note I'm not very advanced with this kind of stuff.
I am of course following a tutorial; however, it doesn't cover grabbing the file from an FTP server.
What would be simplest for me is to be able to run:
tail -F ftp://ftp.addr.ess/files/file-i-want.txt | grep --line-buffered ": <" | while read x ; do echo -ne "$x" | curl -X POST -d @- http://url/hook ; done
What I'm following has that FTP address as a local address. This is a problem, because that command returns this:
tail: cannot open 'ftp://ftp.addr.ess/files/file-i-want.txt' for reading: No such file or directory
I've tried to run:
rsync username@ftp.addr.ess:XX/files/file-i-want.txt /home/ubuntu/destination
however this returns:
ssh: connect to host ftp.addr.ess port XX: Connection refused.
So really if I can get rsync to run FTP instead of SSH, I figure I'd be golden. I researched it though and I can't figure out how to do this (keep in mind I'm no programmer). I originally thought the error was because I wasn't giving it a password, because I didn't know how. It might be that also, though.
This however brings me to my next issue. If it's possible to make rsync do FTP instead of SSH, how would I make it periodically do that?
What is being updated? The remote file (my guess) or something on your server? If it's the remote file, you're out of luck unless there is a mechanism/process on the remote server that can send you a notification (an email, for example).
I've not used ftp for ages, but have a look at this as a starting point.
A periodic task can be quite easily configured with a cron job.
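As a rough sketch (assuming curl is installed and the FTP server accepts a plain username/password login; all names below are placeholders), a crontab entry like this would re-download the file every five minutes:

*/5 * * * * curl -s -u username:password "ftp://ftp.addr.ess/files/file-i-want.txt" -o /home/ubuntu/destination/file-i-want.txt

Add it by editing your crontab with crontab -e.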
I need to build a simple web-based print server that will print a file to any given printer's IP address.
Using lp or lpr, how can I print a file directly to a network printer by IP address? NOTE: The printer will NOT be set up in CUPS locally, as the server needs to be able to print to any IP address thrown at it.
What I have tried:
lp -d 10.11.234.75 /path/to/file
lpr -P 10.11.234.75 /path/to/file
Both give this: 'The printer or class does not exist.'
Try this:
cat your_file.prn | netcat -w 1 printer_ip 9100
If using bash then:
cat /path/to/file > /dev/tcp/10.11.234.75/9100
What you want to do is probably not feasible. If the printers at the ends of these IP addresses are just random printers, then the server you're building would need to know which driver to use to be able to print to them. If you haven't installed them in any way beforehand then it's not going to work.
If you only want to talk to other Internet Printing Protocol (IPP) servers then it is possible, although not necessarily elegant. I don't know of any Linux implementation of an IPP client other than CUPS, and CUPS requires you to install printers in advance. This can be done very easily, though (as explained here). The code to add a normal printer is the same as for an IPP server (but you need to know which driver to use). Alternatively, you might be able to find another IPP implementation (or write one; it should be fairly simple just to send a document) which doesn't require installing printers.
Here's the code to add an IPP printer to CUPS:
lpadmin -E -p <printer-name> -v http://<ip_address>:631/<dir>/<printer> -L <location> -E
<printer-name> and <location> can be whatever you like, and you need the full network path to the printer.
To add a normal printer:
lpadmin -E -p <printer-name> -v <device-uri> -m <model> -L <location> -E
This is the same, except that you need to give a <model>, which is the driver for the printer. Scrap the first -E if you don't want encryption.
If you want to delete the printer afterwards, use this:
lpadmin -x <printer-name>
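Putting those together, a throwaway add/print/remove cycle might look like this (a sketch only; tmp_printer is a made-up name, and -m everywhere assumes the printer supports driverless IPP Everywhere):

lpadmin -p tmp_printer -E -v socket://10.11.234.75:9100 -m everywhere   # register the printer temporarily
lp -d tmp_printer /path/to/file                                         # print through it
lpadmin -x tmp_printer                                                  # remove it again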
I found an old program called tcpsend.c to send a file to a printer at an IP address. Build with gcc -o tcpsend tcpsend.c
$ ./tcpsend
use: tcpsend [-t timeout] host port [files]
-t timeout - try connecting for timeout seconds
tcpsend.c source code
I had success using lp with a hostname and port.
echo foobar | lp -h 10.10.13.37:9100 -
Without specifying a port, I would get:
lp: Error - No default destination
If printing a PDF, you can first convert it to PostScript using pdf2ps
pdf2ps file.pdf - | lp -h 10.10.13.37:9100 -
The argument - is used as an alias for standard input or output, letting us pipe the output of pdf2ps straight into the standard input of lp.
I am working in a large lab with Linux machines and we are using them for CGI work. Basically, I want to be able to execute commands on the lab machine I'm logged into while I am at home (using Windows here). So far I've been able to get the terminal output written in real time to a txt file saved in Dropbox, so I can check the progress of my processes while I am at home. So I am thinking about reversing the process. Is it possible to save the commands in a txt or sh file on Dropbox and have a process on my lab machine that constantly watches this file and executes the commands written there?
Install inotify-tools
Ubuntu:
sudo apt-get install inotify-tools
Code
inotifywait -m ~/Dropbox -e create -e moved_to |
while read path action file; do
echo "The file '$file' appeared in directory '$path' via '$action'"
# do something with the file
done
This works for me
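A minimal variation that actually executes a dropped script might look like this (a sketch only; ~/Dropbox/commands and the output file are made-up paths, and note that this will run whatever lands in that folder):

inotifywait -m ~/Dropbox/commands -e create -e moved_to --format '%w%f' |
while read -r script; do
    case "$script" in
        *.sh) sh "$script" >> ~/Dropbox/output.txt 2>&1 ;;   # run the script and sync its output back
    esac
done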
Is there an alternative to scp for transferring a large file from one machine to another by opening parallel connections, and that is also able to pause and resume the transfer?
Please don't transfer this to serverfault.com. I am not a system administrator. I am a developer trying to transfer past database dumps between backup hosts and servers.
Thank you
You could try using split(1) to break the file apart and then scp the pieces in parallel. The file could then be combined into a single file on the destination machine with 'cat'.
# on the local host
split -b 1M large.file large.file.      # split into 1 MiB chunks: large.file.aa, large.file.ab, ...
for f in large.file.*; do scp "$f" remote_host: & done
wait                                    # wait for all the background scp processes to finish
# on the remote host
cat large.file.* > large.file
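It's also worth checking that the reassembled file matches the original, for example by running the following on both machines and comparing the output (assuming sha256sum is available):

sha256sum large.file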
Take a look at rsync to see if it will meet your needs.
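For the pause/resume part in particular, something along these lines (a sketch; it is not parallel) keeps partially transferred data so a re-run picks up where it left off:

rsync --partial --progress large.file user@remote_host:/path/to/dest/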
The correct placement of questions is not based on your role, but on the type of question. Since this one is not strictly programming related it is likely that it will be migrated.
Similar to Mike K's answer, check out https://code.google.com/p/scp-tsunami/ - it handles splitting the file, starting several scp processes to copy the parts, and then joining them again... it can also copy to multiple hosts...
./scpTsunami.py -v -s -t 9 -b 10m -u dan bigfile.tar.gz /tmp -l remote.host
That splits the file into 10MB chunks and copies them using 9 scp processes...
The program you are after is lftp. It supports sftp and parallel transfers using its pget command. It is available under Ubuntu (sudo apt-get install lftp) and you can read a review of it here:
http://www.cyberciti.biz/tips/linux-unix-download-accelerator.html
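A typical invocation might look like this (host and path are placeholders):

lftp -e 'pget -n 8 -c /remote/path/dump.sql.gz; quit' sftp://user@backup-host

Here -n 8 opens eight parallel connections and -c continues a previously interrupted download.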