ftp mget does not show successful completion - Linux

I am using ftp and successfully connecting to the host.
Then I do
mget test.tar.gz
Even though it successfully downloads test.tar.gz, I have to check the size of the file manually with the ls -l command in another terminal. Is there any way that mget or some other command can indicate to me that the transfer of the file is complete?

It depends a bit on the implementation of the FTP server, but in most cases you should receive something like
226 Transfer complete.
226 is the reply code for Closing data connection. Requested file action successful (some servers use 250, Requested file action okay, completed), and the accompanying text differs somewhat between servers.
In order to make sure you see all the return codes, run your ftp with -v, which enables verbose mode. This forces ftp (and many other Linux FTP clients I know of) to show all responses from the remote server, as well as report data transfer statistics.
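For illustration, the tail end of a verbose mget exchange typically looks like this (the host name, size, and timing below are made up, and the exact reply text varies by server):
ftp -v ftp.example.com
ftp> mget test.tar.gz
mget test.tar.gz? y
200 PORT command successful.
150 Opening BINARY mode data connection for test.tar.gz (1048576 bytes).
226 Transfer complete.
1048576 bytes received in 2.10 secs (487.6 kB/s)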
A list of the most common server return codes can be found here.

Related

SNMP proxy in Python

I need a kind of SNMP v2c proxy (in Python) which:
reacts to an snmpset command
reads the value from the command and writes it to a YAML file
runs a custom action (preferably in a different thread, somehow replying success to the snmpset command), which can be:
running another snmpset against a different machine, or
ssh-ing to user@host and running some command, or
running some local tool
and:
reacts to an snmpget command
checks the value for the requested OID in the YAML file
returns this value
I'm aware of pysnmp, but the documentation just confuses me. I can imagine I need some kind of command responder (I need SNMP v2c) and some object to store configuration/values from the YAML file. But I'm completely lost.
I think you should be able to implement all of that with pysnmp. Technically, that would not be an SNMP proxy but an SNMP command responder.
You can probably take this script as a prototype and implement your (single) cbFun which (like the one in the prototype) receives the PDU and branches on its type (the GET or SET SNMP command in your case). You can then implement the value read from the .yaml file in the GetRequestPDU branch, and the .yaml file write, along with sending the SNMP SET command elsewhere, in the SetRequestPDU branch.
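To make that concrete, here is a minimal sketch of such a responder built on pysnmp's low-level API. It is an illustration, not a drop-in solution: it assumes pysnmp 4.x and PyYAML are installed, values.yaml is a hypothetical file mapping OID strings to values, run_custom_action is a placeholder for your snmpset/ssh/local-tool logic, and error handling (e.g. signalling unknown OIDs) is omitted:
# Minimal v2c command responder sketch; see assumptions above.
import threading

import yaml
from pyasn1.codec.ber import decoder, encoder
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.carrier.asyncore.dispatch import AsyncoreDispatcher
from pysnmp.proto import api

YAML_FILE = 'values.yaml'  # hypothetical OID -> value store


def load_values():
    with open(YAML_FILE) as f:
        return yaml.safe_load(f) or {}


def save_values(values):
    with open(YAML_FILE, 'w') as f:
        yaml.safe_dump(values, f)


def run_custom_action(oid, value):
    # Placeholder: another snmpset, an ssh command, or a local tool.
    print('would now act on', oid, '=', value)


def cbFun(dispatcher, domain, address, wholeMsg):
    while wholeMsg:
        msgVer = int(api.decodeMessageVersion(wholeMsg))
        if msgVer not in api.protoModules:
            return  # unsupported SNMP version; drop the message
        pMod = api.protoModules[msgVer]
        reqMsg, wholeMsg = decoder.decode(wholeMsg, asn1Spec=pMod.Message())
        rspMsg = pMod.apiMessage.getResponse(reqMsg)
        rspPDU = pMod.apiMessage.getPDU(rspMsg)
        reqPDU = pMod.apiMessage.getPDU(reqMsg)
        values = load_values()
        varBinds = []
        if reqPDU.isSameTypeWith(pMod.GetRequestPDU()):
            # GET branch: answer from the YAML store (everything is returned
            # as an OctetString here; real code would pick proper types).
            for oid, val in pMod.apiPDU.getVarBinds(reqPDU):
                varBinds.append(
                    (oid, pMod.OctetString(str(values.get(str(oid), '')))))
        elif reqPDU.isSameTypeWith(pMod.SetRequestPDU()):
            # SET branch: persist the value, reply success right away,
            # and run the slow custom action in a background thread.
            for oid, val in pMod.apiPDU.getVarBinds(reqPDU):
                values[str(oid)] = str(val)
                threading.Thread(target=run_custom_action,
                                 args=(str(oid), str(val)),
                                 daemon=True).start()
                varBinds.append((oid, val))
            save_values(values)
        else:
            pMod.apiPDU.setErrorStatus(rspPDU, 5)  # 5 == genErr
        pMod.apiPDU.setVarBinds(rspPDU, varBinds)
        dispatcher.sendMessage(encoder.encode(rspMsg), domain, address)
    return wholeMsg


dispatcher = AsyncoreDispatcher()
dispatcher.registerRecvCbFun(cbFun)
# port 1161 so the sketch runs unprivileged; the standard agent port is 161
dispatcher.registerTransport(
    udp.domainName, udp.UdpTransport().openServerMode(('127.0.0.1', 1161)))
dispatcher.jobStarted(1)  # keep the dispatcher running indefinitely
try:
    dispatcher.runDispatcher()
finally:
    dispatcher.closeDispatcher()
You could then exercise it with the stock Net-SNMP tools, e.g. an snmpset -v2c -c public 127.0.0.1:1161 against some OID followed by a matching snmpget.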
The pysnmp API we are talking about here is the low-level one. With it you can't ask pysnmp to route messages between callback functions -- it always calls the same callback for all message types.
However, you could also base your tool on the higher-level SNMP API which was introduced along with the SNMPv3 model. With it you can register your own SNMP applications (effectively, callbacks) based on the PDU type they support. But given that you only need SNMPv2c support, I am not sure the higher-level API would pay off in the end.
Keep in mind that SNMP is generally time-sensitive. If running a local command or SSH-ing elsewhere is going to take more than a couple of seconds, a standard SNMP manager might start retrying and may eventually time out. If you look at how Net-SNMP's snmpd works, it runs external commands and caches the result for tens of seconds. That lets the otherwise timing-out SNMP manager eventually get a slightly outdated response.
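A small time-stamped cache in front of the slow action is enough to mimic that behavior. A sketch (the 30-second TTL and the uptime command are arbitrary stand-ins):
import subprocess
import threading
import time

_cache = {}  # oid (string) -> (timestamp, value)
CACHE_TTL = 30  # seconds; Net-SNMP's snmpd caches external commands similarly


def run_slow_command(oid):
    # Stand-in for whatever external command backs this OID.
    return subprocess.run(['uptime'], capture_output=True,
                          text=True).stdout.strip()


def cached_value(oid):
    hit = _cache.get(oid)
    if hit and time.time() - hit[0] < CACHE_TTL:
        return hit[1]  # fresh enough: answer immediately

    def refresh():
        _cache[oid] = (time.time(), run_slow_command(oid))

    # refresh off the SNMP thread so the responder never blocks
    threading.Thread(target=refresh, daemon=True).start()
    return hit[1] if hit else ''  # serve stale (or empty) rather than time out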
Alternatively, you may consider writing a custom variation plugin for SNMP simulator, which can largely do what you have described.

How to stream a file using curl from one server to another (limited server resources)

My API server has very limited disk space (500 MB) and memory (1 GB). One of the API calls it receives asks it to fetch a file: the consumer calls the API and passes the URL to be downloaded.
The "goal" of my server is to upload this file to Amazon S3. Unfortunately, I can't ask the consumer to upload the file directly to S3 (part of the requirements).
The problem is, sometimes those are huge files (10 GB), and saving them to disk and then uploading them to S3 is not an option (500 MB disk space limit).
My question is, how can I "pipe" the file from the input URL to S3 using the curl Linux program?
Note: I was able to pipe it in different ways, but either it tries to download the whole file first and fails, or I hit a memory error and curl quits. My guess is that the download is much faster than the upload, so the pipe buffer/memory grows and explodes (1 GB of memory on the server) when I get 10 GB files.
Is there a way to achieve what I'm trying to do using curl and piping?
Thank you,
- Jack
Another SO user asked a similar question about curl posts from stdin. See use pipe for curl data.
Once you are able to feed your upload from the first curl process's standard output, if you are running out of memory because you are downloading faster than you can upload, have a look at the mbuffer utility. I haven't used it myself, but it seems to be designed for exactly this sort of problem.
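Put together, the whole transfer could look something like this (a sketch only: $SOURCE_URL and $S3_PRESIGNED_URL are placeholders, the 200M buffer is an arbitrary size, and note that a single S3 PUT normally requires a known Content-Length, so an upload of unknown size from a pipe may be rejected unless you forward the size from the source's response headers or switch to a multipart upload):
curl -sf "$SOURCE_URL" | mbuffer -m 200M | curl -sf -T - "$S3_PRESIGNED_URL"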
Lastly, if all else fails, I guess you could use curl's --limit-rate option to lock the transfer rates of the upload and download to some identical and sustainable value. This potentially underutilizes bandwidth and won't scale well with multiple parallel download/upload streams, but for a one-off batch process it might be good enough.

Save mail message as a file on Linux using sendmail

I have an application running on several RHEL 5.8 systems which monitors and alerts (via email). I need to create a durable log of these alerts locally on each node.
I think the easiest way to do this would be to add a local email user to the alerts and then use mailbox settings or a script (if needed) to save each message on a local filesystem.
I would settle for the message body dumped to a text file (one file per email).
It would be better if it could extract the time, host, subject, and body as separate fields for consumption by an open-source log reader.
My systems are using sendmail 8.1, and I would prefer to stick with it, although I also have postfix 2.3.3 available.
As you reported, your sendmail uses procmail as the local mailer, so create a special OS user account (e.g. log_user) and use ~log_user/.procmailrc to instruct procmail to deliver messages to a maildir folder.
~log_user/.procmailrc
# deliver ALL messages to ~/maillog/ maildir.
# see "man procmailex" for email sorting examples
:0
maillog/
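For the separate time/host/subject fields, procmail's companion tool formail can extract selected headers from a copy of each message before the maildir delivery. A sketch of an extra recipe to put above the catch-all one (alerts.log is a made-up name):
# log selected headers for each message, then continue to the maildir recipe
:0c
| formail -x Date: -x From: -x Subject: >> $HOME/alerts.log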

Apache James - reduce message time in spool

I'm using a local Apache James 2.3.2 install for development and automated testing. It's configured to forward all incoming messages to a single address and to not relay emails outside:
<mailet match="All" class="Forward">
<forwardto>test@localhost</forwardto>
</mailet>
Everything works correctly: emails are accepted, placed in the spool directory, then finally moved to the inbox/test directory, from which they are then picked up by my automated tests for verification.
The only problem is, it can take anywhere between 10 and 60 seconds for those emails to be moved from the spool directory to the inbox/test directory, meaning the tests need to wait that long before retrieving them and doing their checks.
Is this something that can be configured otherwise? Or should I simply move to a different email server for testing purposes?
Thanks!
Not a direct answer to this question, but I've ended up switching to JES (http://www.ericdaugherty.com/java/mailserver/). You can configure how many SMTP and POP3 threads do the work, as well as the frequency at which these threads pick up messages from the spool and try to deliver them:
# The server stores incoming SMTP messages on disk before attempting to deliver them. This
# setting determines how often (in seconds) the server checks the disk for new messages to deliver. The
# smaller the number, the faster messages will be processed. However, a smaller number will cause
# the server to use more of your system's resources.
smtpdelivery.interval=5
This meets my needs.

Open MPI, contact information is unknown

I am working on Mac OS X and using bash as my shell. I have been working for the past few hours trying to get the simplest of code to run using Open MPI on multiple computers. After fiddling with configuring Open MPI, I believe I am on the verge of getting the thing to work. However, I have hit a dead end.
The code runs fine without asking other computers over the network to run it (meaning, I can run it using Open MPI on my own desktop), but when I put in a hostfile and ask a host to run the code, I get an error. I think I am connecting to the hosts fine otherwise; I can ssh to them and do whatever I want. It's just when I run the code.
To produce the following error I run: mpirun -n 4 -hostfile /path/hostfile.txt ./mpi_hello_world. It then asks for the password on the host I am accessing; I enter it and then receive the following:
[MyComputer] [[62774,0],0] ORTE_ERROR_LOG: A message is attempting to be sent to
a process whose contact information is unknown in file /opt/local/var/macports/
build/_opt_mports_dports_science_openmpi/openmpi/work/openmpi-1.7.1/orte/mca/rml/
oob/rml_oob_send.c at line 362
[MyComputer] [[62774,0],0] attempted to send to [[62774,0],1]: tag 15
[MyComputer] [[62774,0],0] ORTE_ERROR_LOG: A message is attempting to be sent to
a process whose contact information is unknown in file /opt/local/var/macports/
build/_opt_mports_dports_science_openmpi/openmpi/work/openmpi-1.7.1/orte/mca/
grpcomm/base/grpcomm_base_xcast.c at line 166
Would anyone be able to give me an idea of what is going wrong here? Thanks for any insight you can offer.
