I need some kind of SNMP v2c proxy (in Python) which:
reacts to an snmpset command:
reads the value from the command and writes it to a YAML file
runs a custom action (preferably in a different thread, while still replying success to the snmpset command), such as:
running another snmpset against a different machine, or
ssh-ing to user@host and running some command, or
running some local tool
and:
reacts to an snmpget command:
checks the YAML file for the value of the requested OID
returns this value
I'm aware of pysnmp, but the documentation just confuses me. I can imagine I need some kind of command responder (I need SNMP v2c) and some object to store the configuration/values from the YAML file. But I'm completely lost.
I think you should be able to implement all of that with pysnmp. Technically, that would not be an SNMP proxy but an SNMP command responder.
You can probably take this script as a prototype and implement your (single) cbFun which, like the one in the prototype, receives the PDU and branches on its type (GET or SET SNMP command in your case). Then you can implement reading values from the .yaml file in the GetRequestPDU branch, and writing to the .yaml file, along with sending the SNMP SET command elsewhere, in the SetRequestPDU branch.
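A rough sketch of what that single cbFun can look like, based on the pysnmp 4.x low-level API and PyYAML. The store.yaml path, run_custom_action() and the string-only value handling are placeholder assumptions, and a real responder would also need GETNEXT handling and proper v2c error/exception values for missing OIDs.

import threading
import yaml
from pysnmp.carrier.asyncore.dispatch import AsyncoreDispatcher
from pysnmp.carrier.asyncore.dgram import udp
from pyasn1.codec.ber import encoder, decoder
from pysnmp.proto import api

STORE = 'store.yaml'                 # placeholder: a flat {oid-string: value} mapping

def load_store():
    try:
        with open(STORE) as f:
            return yaml.safe_load(f) or {}
    except IOError:
        return {}

def save_store(store):
    with open(STORE, 'w') as f:
        yaml.safe_dump(store, f)

def run_custom_action(oid, value):
    pass                             # placeholder: another snmpset, ssh, a local tool...

def cbFun(transportDispatcher, transportDomain, transportAddress, wholeMsg):
    while wholeMsg:
        msgVer = api.decodeMessageVersion(wholeMsg)
        pMod = api.protoModules[int(msgVer)]      # v1 or v2c protocol module
        reqMsg, wholeMsg = decoder.decode(wholeMsg, asn1Spec=pMod.Message())
        rspMsg = pMod.apiMessage.getResponse(reqMsg)
        rspPDU = pMod.apiMessage.getPDU(rspMsg)
        reqPDU = pMod.apiMessage.getPDU(reqMsg)
        store = load_store()
        varBinds = []
        if reqPDU.isSameTypeWith(pMod.GetRequestPDU()):
            for oid, val in pMod.apiPDU.getVarBinds(reqPDU):
                # strings only in this sketch; real code should preserve types
                varBinds.append((oid, pMod.OctetString(store.get(str(oid), ''))))
        elif reqPDU.isSameTypeWith(pMod.SetRequestPDU()):
            for oid, val in pMod.apiPDU.getVarBinds(reqPDU):
                store[str(oid)] = str(val)
                varBinds.append((oid, val))       # echo the value back as success
                # the response goes out right away; the slow part runs detached
                threading.Thread(target=run_custom_action,
                                 args=(str(oid), str(val))).start()
            save_store(store)
        pMod.apiPDU.setVarBinds(rspPDU, varBinds)
        transportDispatcher.sendMessage(encoder.encode(rspMsg),
                                        transportDomain, transportAddress)
    return wholeMsg

transportDispatcher = AsyncoreDispatcher()
transportDispatcher.registerRecvCbFun(cbFun)
transportDispatcher.registerTransport(
    udp.domainName,
    udp.UdpSocketTransport().openServerMode(('0.0.0.0', 161)))  # port 161 needs privileges
transportDispatcher.jobStarted(1)
transportDispatcher.runDispatcher()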
The pysnmp API we are talking about here is the low-level one. With it you can't ask pysnmp to route messages between callback functions; it always calls the same callback for all message types.
However, you can also base your tool on the higher-level SNMP API which was introduced along with the SNMPv3 model. With it you can register your own SNMP applications (effectively, callbacks) based on the PDU types they support. But given that you only need SNMPv2c support, I am not sure the higher-level API would pay off in the end.
Keep in mind that SNMP is generally time-sensitive. If running a local command or SSH-ing elsewhere is going to take more than a couple of seconds, a standard SNMP manager might start retrying and may eventually time out. If you look at how Net-SNMP's snmpd works, it runs external commands and caches the result for tens of seconds. That lets the otherwise timing-out SNMP manager eventually get a slightly outdated response.
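That cache-and-refresh behaviour is easy to imitate; a minimal sketch, assuming a hypothetical slow command (uptime here) and a 30-second refresh period:

import subprocess
import threading
import time

last_result = 'no data yet'          # what GET requests are answered from

def refresher():
    global last_result
    while True:
        # stand-in for the slow local command or SSH call
        last_result = subprocess.check_output(['uptime']).decode().strip()
        time.sleep(30)               # refresh period, like Net-SNMP's caching

threading.Thread(target=refresher, daemon=True).start()

The SNMP handler then always answers from last_result immediately, so the manager never has to wait for the slow command to finish.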
Alternatively, you may consider writing a custom variation plugin for SNMP Simulator, which can largely do what you have described.
Every time we create a new server, I have a bash script that asks the end user a set of questions to help Chef configure the custom server. Their answers to those questions need to be injected into Chef so that I can use the responses within my Chef script (to set the server "hostname" = "server1.stack.com", for instance). There is a JSON attributes option when running chef-client that I've read about which may be helpful, but I'm not sure how it would work in our environment.
Note: We run chef-client on all of our systems every 15 minutes via cronjob to get updates.
Pseudocode:
echo -n "What is the server name?"
read hostname
chef-client -j {'hostname' => ENV['$hostname']}
Two issues: first, -j takes a filename, not raw JSON, and second, using -j will entirely override the node data coming from the server, which also includes the run list and environment. If this is being done at system provisioning time you can definitely do stuff like this; see my AMI bootstrap script for an example. If this is done after initial provisioning, you are probably best off writing those responses to a file and then reading that in from your Chef recipe code.
Passing raw json into chef-client is possible, but requires a little creativity. You simply do something like this:
echo '{"hostname": "$hostname"}' | chef-client -j /dev/stdin
The values in your JSON will be deep merged with the "normal" attributes stored on the Chef server. You can also include a run_list in your JSON, which will replace (not be merged with) the run_list on the Chef server.
You can see the run_list replacing the server run list here:
https://github.com/opscode/chef/blob/cbb9ae97e2d3d90b28764fbb23cb8eab4dda4ec8/lib/chef/node.rb#L327-L338
And you can see the deep merge of attributes here:
https://github.com/opscode/chef/blob/cbb9ae97e2d3d90b28764fbb23cb8eab4dda4ec8/lib/chef/node.rb#L305-L311
Also, any attributes you declare in your json will override the attributes already stored on the chef-server.
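To illustrate what "deep merged" means here, a toy Python sketch (not Chef's actual implementation, which is behind the links above): nested structures are combined key by key, and on a conflict the value from your JSON wins.

def deep_merge(server_attrs, json_attrs):
    merged = dict(server_attrs)
    for key, value in json_attrs.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)   # recurse into nested hashes
        else:
            merged[key] = value                            # JSON value wins
    return merged

server = {'apache': {'port': 80, 'workers': 4}}
passed = {'apache': {'port': 8080}, 'hostname': 'server1.stack.com'}
print(deep_merge(server, passed))
# {'apache': {'port': 8080, 'workers': 4}, 'hostname': 'server1.stack.com'}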
I'm using the Jenkins Dynamic Parameter Plugin and the Jenkins SSH Credentials Plugin, and I would like to use them together so that I can have Groovy code that auto-populates a choice parameter with the SSH systems I have configured (or, more likely, a subset of them), letting me pick which system I want to deploy to. I've found a URL that provides the list of SSH hosts, and I have this basic code already written:
def myURL = "http://myJenkins/job/myJob/descriptorByName/org.jvnet.hudson.plugins.SSHBuilder/fillSiteNameItems"
def allText = new URL(myURL).getText()
I've verified the URL does return JSON with the list of connections when I hit it from anywhere outside of Jenkins (REST client, wget, and even groovysh), but when I try to call it inside the dynamic parameter groovy code, I keep getting:
java.net.ConnectException: Connection refused
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:369)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:230)
...
So I'm wondering if this is some sort of threading issue (perhaps the thread running this code is the same thread that would serve the HTTP request), but my knowledge of Jenkins at a programming level is somewhat limited. If anyone can point me to a way to get what I'm after (maybe even a simpler one), I'd really appreciate it.
I am using Fedora 18 with the avahi command-line tools (version 0.6.31).
I use avahi-resolve-host-name to discover the IP address of units on my subnet, for testing purposes during development. I monitor the request and response with Wireshark. After one successful request and response, no further requests show up on Wireshark, but the tool still returns an IP address. Is it possible the computer/avahi daemon/something else is 'caching' the result?
The Question: I wish to send out the request packet with EVERY CALL of avahi-resolve-host-name. Is this possible?
The Reason: I'm getting 'false positives' so to speak. I try to resolve 'test1.local', and I am getting a resulting IP, but the unit is no longer located at this IP. I want the request sent every time so I can avoid seeing units at incorrect IP addresses.
I see that I'm a bit late to answer your question but I'm going to leave a generic answer in case someone else stumbles upon this.
My answer is based on avahi-0.6.32_rc.
Is it possible the computer/avahi daemon/something else is 'caching' the result?
Yes, avahi-daemon is caching lookup results. While this doesn't seem to be explicitly listed among its features, the avahi-daemon(8) manpage hints at it:
The daemon [...] provides two IPC APIs for local programs to make use of the mDNS record cache the avahi-daemon maintains.
I wish to send out the request packet with EVERY CALL of avahi-resolve-host-name. Is this possible?
Yes, it is. The relevant option is cache-entries-max (from avahi-daemon.conf(5)):
cache-entries-max= Takes an unsigned integer specifying how many resource records are cached per interface. Bigger values allow mDNS to work correctly in large LANs but also increase memory consumption.
To achieve the desired effect, you can simply set:
cache-entries-max=0
This will disable caching entirely and force avahi-daemon to reissue the mDNS packets on every request, therefore making it possible for you to monitor them.
However, I should note here that this will also render avahi pretty much useless for normal use. While avahi-daemon will be issuing lookup packets, it will be unable to store the results and every call of avahi-resolve-host-name (as well as other command-line tools, nss-mdns, D-Bus API…) will fail.
I just stumbled upon this problem myself and found a solution that doesn't require changing the config. It seems that simply killing the daemon (avahi-daemon --kill) flushes the cache. I'm on Ubuntu 18.04, where the daemon is restarted automatically. If on some other distro it isn't running after being killed, it can be restarted with avahi-daemon --daemonize.
Note that root is needed to kill the avahi daemon, so this might not be the best option in some cases.
I am developing an application that allows users to run AI algorithms on the server remotely. Some of these algorithms take a VERY long time. It is set up such that AJAX calls supply the algorithm parameters and launch a C++ algorithm on the server, and the results and status of the computation are tracked via AJAX calls polling status files.
This solution seems to work well for multiple users concurrently using the service, but I am now looking for a way to cancel the computation from the user's browser. I have a stop button that stops the AJAX updating service and ceases any communication between the browser and the running process on the server. The problem is that the process still runs, and I would like to free up the server resources when the user cancels the operation. Below are some more details.
The web services that the AJAX calls hit run under the user 'tomcat' and can be listed with ps -U tomcat. The algorithm executions are all child processes of 'java' and can be listed with ps --ppid ###.
The browser keeps a record of the time that the current computation began (user system time, not server system time).
Multiple computations may be going on at once from users connected from different locations, resulting in many processes under the same name and parent process.
The RESTful service executes terminal commands via Java's Runtime.exec().
I am not so knowledgeable about shell scripting, so any help would be greatly appreciated. Can anyone think of a way, using either the Java Process object or a shell script/awk, to locate a process via timestamp (maybe the closest timestamp to the user's system time?), or some other way?
Thanks in advance.
--edit
Is there even a way in Java to get a handle for a given process if you have the pid...? Doesn't seem like it.
--edit
I cannot change the source code of the long running process on the server. :(
Your AJAX call should manipulate some sort of resource (most conveniently a text file) that acts as a semaphore for the process, which on every polling iteration checks whether that semaphore file has been set to the stop status. If the AJAX call changes the semaphore file to stop, then the process stops, because your application checks it and responds accordingly. This in turn means that the functionality needs to be programmed into your Java AI application, rather than figuring out what the PID is and then killing it at the OS level. That, of course, assumes you have access to the source code of the app.
Of course, the semaphore does not have to be a file but can be a value in the DB etc., whichever suits your taste and configuration.
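A minimal sketch of that idea in Python (the long-running algorithm in the question is C++, but the pattern is identical; the stop-file path and the work function are made up for the example):

import os
import time

STOP_FILE = '/tmp/job-42.stop'       # the AJAX "stop" endpoint creates this file

def do_one_unit_of_work(step):
    time.sleep(0.1)                  # stand-in for one iteration of the algorithm

for step in range(1000000):
    if os.path.exists(STOP_FILE):    # check the semaphore on every iteration
        os.remove(STOP_FILE)         # clean up and stop gracefully
        break
    do_one_unit_of_work(step)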
I have finally found a secure solution. From the RESTful Java service, using Process p = Runtime.getRuntime().exec(...) gives you a handle on the running process. The only way, however, to get the pid is through a technique called reflection:
import java.lang.reflect.Field;

Field f = p.getClass().getDeclaredField("pid"); // private pid field of UNIXProcess on Linux (pre-Java 9)
f.setAccessible(true);
String pid = Integer.toString(f.getInt(p));
How unbelievably awkward...
Anyways, since passing p from the server to the client is impossible, and letting a remote call kill an arbitrary server process by a pid passed as a parameter is insecure, the only logical strategy I could come up with was to write the obtained pid to a process-unique file named after the initial client timestamp, and to delete this file when the RESTful service function returns. This unique file can then be used as a termination handle via yet another RESTful service, which reads the file and terminates the process whose pid equals the contents of the file.
You could keep the Process instance returned by Runtime.exec and invoke Process.destroy to kill the subprocess. Not knowing much about your web service application, I would assume you can keep the process instances in a global session map that maps users to process lists. Make sure access to this map is thread-safe. Also, it only works if you have a single web service process, which allows such a global session map to be shared across different requests.
Alternatively take a look at Get subprocess id in Java.
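For illustration, the same keep-a-handle pattern sketched in Python rather than Java (Popen.terminate() standing in for Process.destroy(); all names here are made up):

import subprocess
import threading

processes = {}                       # maps a job id to its Popen handle
processes_lock = threading.Lock()    # the map is shared across request threads

def start_job(job_id, cmd):
    with processes_lock:
        processes[job_id] = subprocess.Popen(cmd)

def cancel_job(job_id):
    with processes_lock:
        p = processes.pop(job_id, None)
    if p is not None:
        p.terminate()                # the equivalent of Process.destroy()
        p.wait()                     # reap the child so it doesn't linger as a zombie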
Context: OS: Linux (Ubuntu), language: C (actually Lua, but this should not matter).
I would prefer a ZeroMQ-based solution, but will accept anything sane enough.
Note: For technical reasons I can not use POSIX signals here.
I have several identical long-living processes on a single machine ("workers").
From time to time I need to deliver a control message to each of the processes via a command-line tool. Example:
$ command-and-control worker-type run-collect-garbage
Each of the workers on this machine should receive a run-collect-garbage message. Note: it would be perfect if the solution somehow worked for all workers on all machines in the cluster, but I can write that part myself.
This is easily done if I store some information about the running workers. For example, keep their PIDs in a known location and have each open a control Unix domain socket on a known path with the PID somewhere in it, or open a TCP socket and store the host and port somewhere.
But this would require careful management of the stored information (e.g. what if a worker process suddenly dies?). Nothing unmanageable, but still extra fuss. Also, the information needs to be stored somewhere, adding an extra bit of complexity.
Is there a good way to do this in PUB/SUB style? That is, workers are subscribers, the command-and-control tool is the publisher, and all they know is a single "channel URL", so to speak, on which to come for messages.
Additional requirements:
Messages to the control channel must wake up workers from their poll (select, whatever) loop.
Message delivery must be guaranteed, and it must reach each and every worker that is listening.
A worker should have a way to monitor for messages without blocking, ideally in the poll/select/whatever loop mentioned above.
Ideally, the worker process should be a "server" in a sense: it should not have to bother with keeping connections to the "channel server" (if any) persistent etc., or this should be done transparently by the framework.
Usually such a pattern requires a proxy for the publisher, i.e. you send to the proxy, which immediately accepts delivery and then reliably forwards to the end subscriber workers. The ZeroMQ guide covers a few different methods of implementing this.
http://zguide.zeromq.org/page:all
Given your requirements, Steve's suggestion does seem the simplest: run a daemon which listens on two known sockets; the workers connect to one, the command tool pushes to the other, and the daemon redistributes each message to the connected workers.
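A rough pyzmq sketch of that layout; the ipc:// endpoints are made up, and note that plain PUB/SUB on its own does not guarantee delivery to workers that are down or slow, so the guaranteed-delivery requirement still needs something on top (acknowledgements, a durable queue, or similar).

import zmq

CTL_IN = 'ipc:///tmp/ctl-in'         # command tools push here
CTL_OUT = 'ipc:///tmp/ctl-out'       # workers subscribe here

def forwarder():                     # the daemon with the two known sockets
    ctx = zmq.Context()
    pull = ctx.socket(zmq.PULL)
    pull.bind(CTL_IN)
    pub = ctx.socket(zmq.PUB)
    pub.bind(CTL_OUT)
    while True:
        pub.send(pull.recv())        # fan each control message out to all workers

def worker():
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect(CTL_OUT)
    sub.setsockopt(zmq.SUBSCRIBE, b'')   # receive every control message
    poller = zmq.Poller()
    poller.register(sub, zmq.POLLIN)     # register the worker's other sockets here too
    while True:
        for sock, _ in poller.poll():    # wakes up only when something arrives
            if sock is sub:
                print('control message:', sub.recv())

def command_tool(message):           # e.g. command_tool(b'run-collect-garbage')
    ctx = zmq.Context()
    push = ctx.socket(zmq.PUSH)
    push.connect(CTL_IN)
    push.send(message)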
You could do something complicated that would probably work, by effectively nominating one of the workers. For example, on startup workers attempt to bind() a PUB ipc:// socket somewhere accessible, like /tmp. The one that wins bind()s a second IPC as a PULL socket and acts as a forwarder device on top of its normal duties; the others connect() to the original IPC. The command-line tool connect()s to the second IPC and pushes its message. The risk there is that the winner dies, leaving a locked file. You could identify this in the command-line tool, rebind, then sleep (to allow the connections to be established). Still, that's all a little bit complex; I think I'd go with a proxy!
I think what you're describing would fit well with a gearmand/supervisord implementation.
Gearman is a great task queue manager, and supervisord would allow you to make sure that the process(es) are all running. It's TCP-based too, so you could have clients/workers on different machines.
http://gearman.org/
http://supervisord.org/
I recently set something up with multiple gearmand nodes linked to multiple workers, so that there's no single point of failure.
edit: Sorry - my bad, I just re-read and saw that this might not be ideal.
Redis has some nice and simple looking pub/sub functionality that I've not used yet but sounds promising.
Use a multicast PUB/SUB. You'll have to make sure the pgm option is compiled into your ZeroMQ distribution (man 7 zmq_pgm).
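For completeness, a hedged sketch of that multicast variant using pyzmq; it assumes a libzmq built with PGM support, and the interface name and multicast group/port are examples only.

import zmq

EPGM = 'epgm://eth0;239.192.0.1:7500'    # interface;multicast-group:port

ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)            # command-and-control tool
pub.connect(EPGM)                    # with PGM, both peers connect to the group

sub = ctx.socket(zmq.SUB)            # each worker
sub.connect(EPGM)
sub.setsockopt(zmq.SUBSCRIBE, b'')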