Let subprocess.check_output timeout when there is a user prompt - python-3.x

I have a Python script that uses scp to transfer some files to other remote hosts.
import subprocess

try:
    out = subprocess.check_output(
        ["scp", "filename.txt", "user1@host1:/home/user1/"],
        stderr=subprocess.STDOUT, timeout=15)
    print(out)
except subprocess.TimeoutExpired as e:
    print(e.output)
    # Handle host key verification error
Now sometimes the remote server is not yet in the known_hosts file, in which case the script waits forever for user input at the prompt Are you sure you want to continue connecting (yes/no/[fingerprint])?
What I want is to be able to catch this scenario so I can handle host key verification.
I tried adding a timeout and catching the exception, but instead of timing out, the script still sits at the user prompt.
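One way around this (a sketch, not from the original question) is to stop scp from prompting at all: ssh's BatchMode=yes option disables interactive prompts, so an unknown host key makes the command fail with "Host key verification failed." instead of hanging, and that failure can be caught and handled. The file and destination below are the placeholders from the question.

```python
import subprocess

def batchmode_cmd(src, dest):
    # BatchMode=yes tells ssh/scp never to prompt interactively,
    # so an unknown host key fails fast instead of hanging.
    return ["scp", "-o", "BatchMode=yes", src, dest]

def copy_file(src, dest, timeout=15):
    # dest like "user1@host1:/home/user1/" -- placeholder from the question
    try:
        return subprocess.check_output(batchmode_cmd(src, dest),
                                       stderr=subprocess.STDOUT,
                                       timeout=timeout)
    except subprocess.CalledProcessError as e:
        if b"Host key verification failed" in e.output:
            # here the host key problem can be handled explicitly,
            # e.g. by running ssh-keyscan or alerting an operator
            raise RuntimeError(f"unknown host key for {dest}") from e
        raise
```

With this, the timeout only has to cover the actual transfer, not an interactive prompt that never returns.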

Related

Pyzmq swallows error when connecting to blocked port (firewall)

I'm trying to connect to a server using python's pyzmq package.
In my case I'm expecting an error, because the server the client connects to blocks the designated port with a firewall.
However, my code runs through until I terminate the context and then blocks infinitely.
I tried several things to catch the error condition beforehand, but none of them succeeded.
My base example looks like this:
import zmq

endpoint = "tcp://{IP}:{PORT}"
zmq_ctx = zmq.Context()
sock = zmq_ctx.socket(zmq.PAIR)
sock.connect(endpoint)  # <--- I would expect an exception thrown here, however this is not the case
sock.disconnect(endpoint)
sock.close()
zmq_ctx.term()  # <--- This blocks infinitely
I extended the sample with sock.poll(1000, zmq.POLLOUT | zmq.POLLIN), hoping the poll would fail if the connection could not be established due to the firewall.
Then, I tried to solve the issue by setting some sock options, before the sock = zmq_ctx.socket(zmq.PAIR):
zmq_ctx.setsockopt(zmq.IMMEDIATE, 1) # Hoping, this would lead to an error on the first `send`
zmq_ctx.setsockopt(zmq.HEARTBEAT_IVL, 100) # Hoping, the heartbeat would not be successful if the connection could not be established
zmq_ctx.setsockopt(zmq.HEARTBEAT_TTL, 500)
zmq_ctx.setsockopt(zmq.LINGER, 500) # Hoping, the `zmq_ctx.term()` would throw an exception when the linger period is over
I also temporarily added a sock.send_string("bla"), but it just enqueued the message without returning an error and did not provide any new insight.
The only thing I can imagine to solve the problem would be using the telnet package and attempting a connection to the endpoint.
However, adding a dependency just for the purpose of testing a connection is not really satisfactory.
Do you have any idea how to determine a blocked port from the client side while still using pyzmq? I'm not happy that the code always runs into the blocking zmq_ctx.term() in that case.
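ZeroMQ's connect() is asynchronous by design: it only queues the connection attempt, which is why no exception surfaces there. A stdlib-only alternative to the telnet idea (a sketch; host and port are placeholders) is to probe the TCP port directly before handing the endpoint to pyzmq:

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Attempt a plain TCP handshake as a cheap pre-check.
    A firewall that drops packets shows up as a timeout; a
    closed port as a refused connection. Both return False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Separately, the reason zmq_ctx.term() blocks here is the queued, undelivered message combined with the default infinite LINGER: calling sock.setsockopt(zmq.LINGER, 0) before close() makes term() return even when messages could not be sent.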

How to type the credentials manually in Parallel-SSH or Paramiko

I am trying to create a script that will run commands over my 1000 Cisco devices.
The device model is: Cisco Sx220 Series Switch Software, Version 1.1.4.1
The issue is that there is some kind of strange behavior for some of those Cisco devices.
When I log in with regular SSH (PuTTY) using the correct credentials, I first get 'Authentication Failure', and after about a second I get the password prompt again; typing the same credentials a second time gives me a successful login.
The problem is that when I connect using my script (which uses ParallelSSHClient), the connection drops after the authentication failure message, and I am not able to enter the credentials again, since the exception is raised and the program terminates.
I am looking for a way to enter the credentials manually: connect to the machine, get the Authentication Failure message and ignore it, recognize that a User or Password prompt is on screen, and then send the credentials.
I have looked for this kind of procedure everywhere, but without any luck.
Does ParallelSSHClient have this feature?
If Paramiko has it, I am willing to move to Paramiko.
Thanks :)
try:
    client = ParallelSSHClient(hosts=ip_list, user=user, password=password)
    command_output = client.run_command(command)
except Exception as err:
    print("There was an issue with connecting to the machine")
Here is the actual error that I am getting:
pssh.exceptions.AuthenticationException: ('Authentication error while connecting to %s:%s - %s', '172.31.255.10', 22, AuthenticationException('Password authentication failed',))
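Since the device simply rejects the first attempt and then accepts the same credentials, one pragmatic sketch (not from the original post; the paramiko calls in the comments are the standard ones, but treat the whole approach as an assumption about this hardware's behavior) is to retry authentication once before giving up:

```python
def retry_auth(connect, retries=2, exc=Exception):
    """Call `connect` up to `retries` times, swallowing the
    expected failure on earlier attempts; re-raise the last
    error only if every attempt fails."""
    last = None
    for _ in range(retries):
        try:
            return connect()
        except exc as e:
            last = e
    raise last

# Hypothetical paramiko usage -- one client per device:
#   client = paramiko.SSHClient()
#   client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
#   retry_auth(lambda: client.connect(host, username=user, password=password,
#                                     look_for_keys=False, allow_agent=False),
#              exc=paramiko.AuthenticationException)
```

parallel-ssh also exposes a num_retries argument on ParallelSSHClient, though it is worth verifying whether it retries after an authentication failure as opposed to a connection failure; a per-host wrapper like the one above keeps the retry decision in your own hands.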

subprocess.Popen ssh tunnel launches 2nd process which requires answering a prompt with a pin

I am creating an ssh tunnel using subprocess.Popen. However, to successfully create this tunnel I am using a YubiKey, which requires a PIN to release the keys for authentication (wired in via my ssh config). The code below is as far as I can get.
def launch_tunnel(self):
    try:
        enterpin = getpass.getpass()
        bytepin = str.encode(enterpin)
        launchtunnel = subprocess.Popen('ssh tunnel command',
                                        shell=True,
                                        stdout=subprocess.PIPE,
                                        stdin=subprocess.PIPE,
                                        stderr=subprocess.PIPE).communicate(input=bytepin)
    except Exception as e:
        print(e)
When I run it I get the following 2 prompts.
Password:
Enter PIN for 'PIV_II (PIV Card Holder pin)':
The first one is getpass.getpass() and the second is another process that requires the YubiKey PIN. It's clear that .communicate() is not working here, and from what I can tell it's because the ssh process spawns another process (the PIN prompt) that requires the PIN for authentication.
Is there any way to collect the PIN up front, using something like getpass, and pass it directly to the 2nd process? Currently this PIN prompt interrupts the rest of my application, so I would like to control it.
I found part of the solution: calling communicate() with no arguments halts the process at the prompt, so I do not need getpass before launching the subprocess. It has given me another issue, but I will take that to a new thread.
launchtunnel = subprocess.Popen(checktunnelexists[1], shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
launchtunnel.communicate()[0]
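The underlying problem is that ssh (and the PIV PIN dialog) reads from the controlling terminal, /dev/tty, not from the pipe attached to stdin, so communicate(input=...) never reaches it. A minimal Unix-only sketch of the usual workaround is to give the child its own pseudo-terminal; pexpect wraps this same mechanism with pattern matching and timeouts built in. The argv and prompt heuristic below are illustrative assumptions, not the questioner's actual tunnel command.

```python
import os
import pty

def answer_prompt(argv, answer):
    """Run argv on a pseudo-terminal and write `answer` once a
    prompt ending in ':' appears; return the remaining output.
    Unix-only; a toy stand-in for pexpect."""
    pid, fd = pty.fork()
    if pid == 0:  # child: the new pty is now its controlling terminal
        os.execvp(argv[0], argv)
    data = b""
    while b":" not in data:  # wait for e.g. "Enter PIN ...:"
        data += os.read(fd, 1024)
    os.write(fd, answer + b"\n")
    out = b""
    try:
        while True:
            out += os.read(fd, 1024)
    except OSError:  # raised when the child closes the terminal
        pass
    os.waitpid(pid, 0)
    return out
```

Because the child believes it is talking to a real terminal, prompts that bypass stdin (getpass-style, /dev/tty reads) arrive on the pty master and can be answered programmatically.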

NodeJS. Child_process.spawn. Handle process' input prompt

I'm currently working on a web interface for git, accessing git itself via child_process.spawn. Everything is fine while there is a simple "command -> response" mechanism, but I cannot understand what I should do with command prompts (git fetch asking for a password, for example). Hypothetically there is some event fired, but I don't know what to listen to. All I see is "git_user@myserver's password: _" in the command line where the node.js process itself is running.
It would be great to redirect this request into my web application, but is it even possible?
I've tried listening for message, data, pipe, end, close, and readable on all streams (stdout, stdin, stderr), but none of them fires on the password prompt.
Here is my working solution (without mentioned experiments):
var out = "";
var err = "";
var proc = spawn(exe, cmd);
proc.on("exit", function(exitCode) {
});
proc.stdout.on("data", function(data) {
    out += data;
});
proc.stderr.on("data", function(data) {
    err += data;
});
proc.on("close", function(code) {
    if (!code) func(out);
    else return errHandler(err);
});
Can you please help me with my investigations?
UPDATE
Current situation: on my git web interface there is a button "FETCH" (as an example, for a simple "git fetch"). When I press it, an http request is generated and sent to a node.js server created by http.createServer(callback).listen(8080). The callback receives my request and creates child_process.spawn('git', ['-C', 'path/to/local/repo', 'fetch']). All this time I see only a loading screen on my web interface, but if I switch to the command-line window where the node script is running I see a password prompt. Now let's pretend I can't switch to the console, because I work remotely.
I want to see the password prompt on my web interface. It would be very easy to achieve if, for instance, child_process emitted some event on child.stdin (or somewhere else) when the child prompts for user input. In that case I would send the string "Come on, dude, git wants to know your password! Enter it here: _______" back to the web client (via response.end(str)), wait for the next http request containing the password, and then simply child.stdin.write(pass) it to the git process.
Is this solution possible? Or is there something else NOT involving the command line of the parent process?
UPDATE2
Just tried attaching listeners for all possible events described in the official documentation: stdout and stderr (readable, data, end, close, error), stdin (drain, finish, pipe, unpipe, error), child (message, exit, close, disconnect).
Tried the same listeners on process.stdout, process.stderr after piping git streams to it.
Nothing fires on password request...
The main reason your code doesn't surface the prompt is that you only find out what happened with your git process after it has executed.
The major reason to use spawn is that the spawned process can be configured, and stdout and stderr are Readable streams in the parent process.
I just tried this code out and it worked pretty well. Here is an example of spawning a process to perform a git push. However, as you may know, git will ask you for a username and password.
var spawn = require('child_process').spawn;
var git = spawn('git', ['push', 'origin', 'master']);
git.stderr.on('data', function(data) {
// do something with it
});
git.stderr.pipe(process.stderr);
git.stdout.pipe(process.stdout);
Make a local git repo and setup things so that you can do the above push command. However, you can really do any git command.
Copy this into a file called git_process.js.
Run with node git_process.js
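None of the stream events fire because git prompts on the controlling terminal, not on the piped stdout/stderr. git does, however, honor the GIT_ASKPASS environment variable: when it is set and no terminal prompt is possible (GIT_TERMINAL_PROMPT=0 suppresses it), git runs that program and reads the secret from its stdout, which is exactly the hook a web interface needs. A sketch in Python to keep it compact (from Node the mechanism is identical: pass env to child_process.spawn); the repo path and password are placeholders:

```python
import os
import stat
import subprocess
import tempfile

def write_askpass_helper(password):
    """Write a tiny executable script that prints `password`;
    pointing GIT_ASKPASS at it lets git obtain credentials
    without a terminal."""
    f = tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False)
    f.write("#!/bin/sh\necho '%s'\n" % password)
    f.close()
    os.chmod(f.name, stat.S_IRWXU)  # owner read/write/execute only
    return f.name

def git_fetch(repo, password):
    # Hypothetical usage: run fetch with the askpass hook installed,
    # so the password collected from the web client is fed to git.
    env = dict(os.environ,
               GIT_ASKPASS=write_askpass_helper(password),
               GIT_TERMINAL_PROMPT="0")
    return subprocess.run(["git", "-C", repo, "fetch"],
                          env=env, capture_output=True)
```

For ssh remotes the analogous variable is SSH_ASKPASS, which ssh only consults when it has no controlling terminal; a git credential helper is the more durable long-term option.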
Don't know if this would help, but I found the only way to intercept prompts from child processes was to set the detached option to true when spawning a new child process.
Like you, I couldn't find any info on prompts from child processes in node on the interwebs. One would expect the prompt to go to stdout, and then you would have to write to stdin. If I remember correctly, you may find the prompt being sent to stderr.
It's a bit amazing to me that others haven't had this problem. Maybe we're just doing it wrong.

Graylog2 ssh stream rules

I have been given a task to set up a new stream that catches all failed ssh logins. I have never used graylog before and I am really bad at regex.
I have figured out that you need to create a new stream, make all the failed ssh login messages get caught in that stream, and then set an alarm on it.
You can create a stream, called for example
SSH accepted/failed
Then create a rule where you enter:
Field: message
Type: match regular expression
Value (failed): Failed password for .+ from .+
Then create a second rule on the same stream with the value: Accepted password for .+ from .+
Then you will have a stream that collects Failed and Accepted logins for your SSH.
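The two patterns can be sanity-checked against sample sshd log lines before wiring them into Graylog. The log lines below are typical OpenSSH output, used here as assumed samples:

```python
import re

# The same regexes the stream rules use
FAILED = re.compile(r"Failed password for .+ from .+")
ACCEPTED = re.compile(r"Accepted password for .+ from .+")

# Representative sshd messages (assumed samples, not from the question)
failed_line = "sshd[1023]: Failed password for root from 10.0.0.5 port 51213 ssh2"
accepted_line = "sshd[1023]: Accepted password for admin from 10.0.0.7 port 50022 ssh2"
```

Note that "Failed password for invalid user bob from ..." lines also match the failed pattern, since `.+` absorbs the "invalid user" prefix, which is usually what you want for an alert on failed logins.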
