Python Parallel SSH - Netmiko/Napalm - Cisco SMB switches stuck at sending command - python-3.x

I am trying to determine the vendor and version (using Python NAPALM and parallel-ssh) of network switches (Huawei VRP5/8, Cisco Catalyst and Cisco SMB (SF/SG)):
admin#server:~$ python3
Python 3.8.10 (default, Nov 26 2021, 20:14:08)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from napalm import get_network_driver
>>> driver = get_network_driver('ios')
>>> device = driver('ip', 'username', 'password')
>>> device.open()
>>> print(device.get_facts())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/altepro/.local/lib/python3.8/site-packages/napalm/ios/ios.py", line 811, in get_facts
show_ver = self._send_command('show version')
File "/home/altepro/.local/lib/python3.8/site-packages/napalm/ios/ios.py", line 165, in _send_command
output = self.device.send_command(command)
File "/home/altepro/.local/lib/python3.8/site-packages/netmiko/utilities.py", line 600, in wrapper_decorator
return func(self, *args, **kwargs)
File "/home/altepro/.local/lib/python3.8/site-packages/netmiko/base_connection.py", line 1694, in send_command
raise ReadTimeout(msg)
netmiko.exceptions.ReadTimeout:
Pattern not detected: '\x1b\\[Ksg300\\-ab\\-1\\#' in output.
Things you might try to fix this:
1. Explicitly set your pattern using the expect_string argument.
2. Increase the read_timeout to a larger value.
Where sg300-ab-1 is the sysname of the switch (Cisco SMB - SG300 in this case, but I have tested this on several versions and types of the SMB lineup).
Things that I have tried:
Tried several versions of netmiko, napalm (and its drivers, including ios-350) and parallel-ssh. Tried several fresh Linux servers with fresh installs of napalm and parallel-ssh.
SSH is tested using the same server and credentials and it works without any problems.
When I use parallel-ssh the device doesn't even raise an exception or time out - it just gets stuck at the command:
output = client.run_command(cmd)
from pssh.clients import ParallelSSHClient

hosts = ['192.168.1.50']
client = ParallelSSHClient(hosts, user='my_user', password='my_pass')
cmd = 'show version'
output = client.run_command(cmd)

for host_out in output:
    for line in host_out.stdout:
        print(line)
Thanks for any kind of help!

It looks like the prompt isn't getting recognized properly. I'm not very familiar with either ParallelSSHClient or napalm, but I have worked with netmiko, and that looks like where the error is. Here are some steps that can possibly get you closer to figuring out what's happening. I suspect the prompt is not being read correctly from the device.
Set up debugging and a netmiko session and run a simple command
import logging
import netmiko

logging.basicConfig(level=logging.DEBUG)

session = netmiko.ConnectHandler(
    host='192.168.1.50',
    username='my_user',
    password='my_pass',
    device_type='cisco_ios')
results = session.send_command('show version')
If this fails with the same error, then it's the prompt (possibly the \x1b escape character). Try again, but with a simpler expect_string, like what's expected at the end of the prompt:
session.send_command('show version', expect_string="#")
If this gets you a result, then it's something about how the prompt is being set for this device.
To see what's being found for the prompt:
session.find_prompt()
Edit:
Based on what you're reporting, the issue seems to be the control code \x1b[ being included in the prompt. It's possible this can be disabled on the device itself, but I'm unfamiliar with that platform. The napalm API doesn't expose netmiko's send_command method, but it should still be fixable. This solution is a hack to make things work, not something I'd recommend relying on.
Establish a class that will act as your fix. This will be instantiated with the netmiko session (device.device) and will be used to replace the send_command method.
class HackyFix:
    def __init__(self, session):
        self.session = session
        self.original_send_command = session.send_command

    def send_command(self, command):
        original_prompt = self.session.find_prompt()
        # Strip the ANSI "clear to end of line" code (ESC [ K) out of the prompt.
        fixed_prompt = original_prompt.replace("\x1b[K", "")
        print(
            f"send_command intercepted. {original_prompt} replaced with {fixed_prompt}"
        )
        return self.original_send_command(command, expect_string=fixed_prompt)
Then in your existing napalm code, add this right after device.open():
hackyfix = HackyFix(device.device)
device.device.send_command = hackyfix.send_command
Now all of napalm's calls to send_command will go through your custom fix that will find the prompt and modify it before passing it to expect_string.
Last edit.
It's an ANSI escape code that's being thrown in by the SG300. Specifically, it's the one that clears from the cursor to the end of the line. It's also a known issue with the SG300. The good news is that someone made a napalm driver to support it. One big difference between the SG300 driver and the IOS driver is that the netmiko device_type is cisco_s300. When this device_type is used, strip_ansi_escape_codes is run against the output.
Behavior of that escape code tested in bash:
$ printf "This gets cleared\r"; code="\x1b[K"; printf "${code}This is what you see\n"
This is what you see
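As a rough Python illustration of the same cleanup (just a sketch of stripping that escape code from a captured prompt, not netmiko's actual strip_ansi_escape_codes implementation):

import re

raw_prompt = "\x1b[Ksg300-ab-1#"                      # what the SG300 sends back
clean_prompt = re.sub(r"\x1b\[\d*K", "", raw_prompt)  # drop the ESC[K "clear line" code
print(clean_prompt)                                   # sg300-ab-1#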
You can validate that setting cisco_s300 as the device_type fixes the issue:
session = netmiko.ConnectHandler(
    host='192.168.1.50',
    username='my_user',
    password='my_pass',
    device_type='cisco_s300')
results = session.send_command('show version')
This should give a result with no modification to the expect_string value. If that works and you're looking to get results sooner rather than later, the following is a better fix than the hacky fix above.
from napalm.ios import IOSDriver


class QuickCiscoSG300Driver(IOSDriver):
    def __init__(self, hostname, username, password, timeout=60, optional_args=None):
        super().__init__(hostname, username, password, timeout, optional_args)

    def open(self):
        device_type = "cisco_s300"
        self.device = self._netmiko_open(
            device_type, netmiko_optional_args=self.netmiko_optional_args
        )


device = QuickCiscoSG300Driver("192.168.1.50", "my_user", "my_pass")
device.open()
device.get_facts()
Or you can get the driver (better option, unless this happens to be the driver you already tried)

Related

Python: subprocess + isql on windows: a bytes-like object is required, not 'str'

This is a company-issued laptop and I can't install any new software on it. I can install Python modules to my own directory, though. I need to run something on Sybase and there are 10+ servers. Manual operation is very time consuming, hence I'm looking at the option of Python + subprocess.
I did some research on using the subprocess module in Python to run the isql command. However, my version doesn't work. The error message is TypeError: a bytes-like object is required, not 'str'. This error popped up from the "communicate" line.
I can see my "isql" has connected successfully as I can get an isql.pid.
Anything I missed here?
Thank you
import subprocess
from subprocess import Popen, PIPE
import keyring
from textwrap import dedent

server = "MYDB"
cmd = r"C:\Sybase15\OCS-15_0\bin\isql.exe"
interface = r"C:\Sybase15\ini\sql.ini"
c = keyring.get_credential("db", None)
username = c.username
password = c.password

isql = Popen([cmd, '-I', interface,
              '-S', server,
              '-U', username,
              '-P', password,
              '-w', '99999'], stdin=PIPE, stdout=PIPE)
output = isql.communicate("""\
SET NOCOUNT ON
{}
go
""".format("select count(*) from syslogins"))[0]
From the communicate() documentation:
If streams were opened in text mode, input must be a string. Otherwise, it must be bytes.
By default, streams are opened in binary mode, but you can change that to text mode by passing the text=True argument to your Popen() call.
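For example, a sketch of the same Popen call with text mode enabled (reusing the cmd/interface/server/username/password variables from the question; text=True needs Python 3.7+, on older versions universal_newlines=True does the same thing):

isql = Popen([cmd, '-I', interface,
              '-S', server,
              '-U', username,
              '-P', password,
              '-w', '99999'],
             stdin=PIPE, stdout=PIPE, text=True)  # text=True: the pipes accept/return str

output = isql.communicate("""\
SET NOCOUNT ON
select count(*) from syslogins
go
""")[0]
print(output)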

AttributeError: 'Timer' object has no attribute '_seed'

This is the code I used; I found it at https://github.com/openai/universe#breaking-down-the-example . As I'm getting an error on the remote manager, I had to copy this code to run it, but it is still giving me the error below.
import gym
import universe  # register the universe environments

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)  # automatically creates a local docker container
observation_n = env.reset()

while True:
    action_n = [[('KeyEvent', 'ArrowUp', True)] for ob in observation_n]  # your agent here
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
I'm getting this when I try to run the above script. I tried every possible way to solve it, but it still causes the same error. There is not even one thread about this. I don't know what to do now; please tell me if any of you have solved it.
I'm using Ubuntu 18.04 LTS on VirtualBox, which is running on Windows 10.
WARN: Environment '<class 'universe.wrappers.timer.Timer'>' has deprecated methods '_step' and '_reset' rather than 'step' and 'reset'. Compatibility code invoked. Set _gym_disable_underscore_compat = True to disable this behavior.
Traceback (most recent call last):
File "gymtest1.py", line 4, in <module>
env = gym.make("flashgames.CoasterRacer-v0")
File "/home/mystery/.local/lib/python3.6/site-packages/gym/envs/registration.py", line 167, in make
return registry.make(id)
File "/home/mystery/.local/lib/python3.6/site-packages/gym/envs/registration.py", line 125, in make
patch_deprecated_methods(env)
File "/home/mystery/.local/lib/python3.6/site-packages/gym/envs/registration.py", line 185, in patch_deprecated_methods
env.seed = env._seed
AttributeError: 'Timer' object has no attribute '_seed'
So I think what you need to do is add a few lines to the Timer class, because gym checks whether the environment implements certain functions (_step, _reset, _seed, etc.).
So all you need to do (I think) is add this at the end of the Timer class:
def _seed(self, seed_num=0):  # this is so that you can get consistent results
    pass  # optionally, you could add: random.seed(seed_num)
    return
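If you'd rather not edit the installed universe package, the same stub can be attached from your own script before gym.make() runs. This is a monkeypatch sketch; the universe.wrappers.timer.Timer import path is taken from the warning message, so treat it as an assumption:

import gym
import universe  # register the universe environments
from universe.wrappers.timer import Timer  # class path as shown in the deprecation warning

# Give Timer the _seed hook that gym's compatibility shim expects.
if not hasattr(Timer, '_seed'):
    Timer._seed = lambda self, seed=None: []

env = gym.make('flashgames.DuskDrive-v0')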

Controlling a minecraft server with python

I've searched a lot for this and have not yet found a definitive solution. The closest thing I've found is this:
import shutil
from os.path import join
import os
import time
import sys

minecraft_dir = ('server directory')
world_dir = ('server world directory')

def server_command(cmd):
    os.system('screen -S -X stuff "{}\015"'.format(cmd))

on = "1"
while True:
    command = input()
    command = command.lower()
    if on == "1":
        if command == ("start"):
            os.chdir(minecraft_dir)
            os.system('"C:\Program Files\Java\jre1.8.0_111\bin\java.exe" -Xms4G -Xmx4G -jar craftbukkit-1.10.2.jar nogui java')
            print("Server started.")
            on = "0"
    else:
        server_command(command)
When I launch this program and type 'start', the CMD flashes up and closes instantly. Instead I want the CMD to stay open with the Minecraft server running in it. I'm not sure why this happens or what the problem is; any help would be greatly appreciated.
P.S. I have edited this to my needs (such as removing a backup script that was unnecessary), but it didn't work before either. The original link is: https://github.com/tschuy/minecraft-server-control
os.system will simply run the command then return to your Python script, with no way to further communicate with it.
On the other hand, using subprocess.Popen gives you access to the process while it runs, including writing to its .stdin, which is how you send data to the server:
import os
import subprocess

def server_command(cmd):
    process.stdin.write(cmd + "\n")  # just write the command to the input stream

process = None
executable = '"C:\Program Files\Java\jre1.8.0_111\bin\java.exe" -Xms4G -Xmx4G -jar craftbukkit-1.10.2.jar nogui java'

while True:
    command = input()
    command = command.lower()
    if process is None:  # server not started yet
        if command == ("start"):
            os.chdir(minecraft_dir)
            process = subprocess.Popen(executable, stdin=subprocess.PIPE)
            print("Server started.")
    else:
        server_command(command)
You can also pass stdout=subprocess.PIPE so you can read its output, and stderr=subprocess.PIPE to read from its error stream (if any).
As well, instead of process.stdin.write(cmd + "\n") you could use the optional file parameter of the print function, so this:
print(cmd, file=process.stdin)
will write the data to process.stdin formatted the same way that print normally does, e.g. ending with a newline for you unless you pass end= to override it.
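Putting those pieces together, a minimal sketch (reusing the executable string from above; text=True assumes Python 3.7+, otherwise use universal_newlines=True):

import subprocess

process = subprocess.Popen(
    executable,
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,  # open the pipes in text mode so plain strings can be written/read
)

print("say hello from python", file=process.stdin)  # send a console command to the server
process.stdin.flush()             # deliver it right away instead of waiting for the buffer
print(process.stdout.readline())  # blocks until the server prints a line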
Neither of the above answers worked in the environment I tried them in.
I think the best way is to use RCON, not sending keys to a window.
RCON is the protocol used by games to run commands.
Many Python libraries support Minecraft RCON, and the default server.properties file has an option for RCON.
We will use the Python module mcrcon.
Install it. It works on Windows, Mac and Linux.
Type:
pip install mcrcon
Let's configure your server to allow RCON.
In server.properties, find the line 'enable-rcon' and make it look like this:
enable-rcon=true
(Your server needs to be restarted for changes to server.properties to take effect.)
Find the line 'rcon.password' and set it to any password you will remember.
You can leave the port at the default of 25575.
Now, open your terminal and type:
mcrcon localhost
Or your server IP.
You will be prompted to enter the password you set.
Then you can run commands and will get the result.
But we are doing this with Python, not the PyPI mcrcon command-line script - so do this:
from mcrcon import MCRcon as r

with r('localhost', 'insertyourpasswordhere') as mcr:
    resp = mcr.command('/list')
    print(resp)  # There are 0/20 players online: - this will be different for you.

using python, selenium and phantomjs on openshift (socket binding permission denied?)

Alright, I'm at the end of my tether trying to get PhantomJS to work with selenium in an OpenShift environment. I've downloaded the phantomjs binary using ssh and can even run it in the shell. But when it comes to starting a webdriver service using selenium, I keep getting this traceback error no matter what args I put in.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/var/lib/openshift/576e22027628e1fb13000211/python/virtenv/venv/lib/python3.3/site-packages/selenium/webdriver/phantomjs/webdriver.py", line 50, in __init__
service_args=service_args, log_path=service_log_path)
File "/var/lib/openshift/576e22027628e1fb13000211/python/virtenv/venv/lib/python3.3/site-packages/selenium/webdriver/phantomjs/service.py", line 50, in __init__
service.Service.__init__(self, executable_path, port=port, log_file=open(log_path, 'w'))
File "/var/lib/openshift/576e22027628e1fb13000211/python/virtenv/venv/lib/python3.3/site-packages/selenium/webdriver/common/service.py", line 33, in __init__
self.port = utils.free_port()
File "/var/lib/openshift/576e22027628e1fb13000211/python/virtenv/venv/lib/python3.3/site-packages/selenium/webdriver/common/utils.py", line 36, in free_port
free_socket.bind(('0.0.0.0', 0))
PermissionError: [Errno 13] Permission denied
Not sure what's going on; am I supposed to bind to an IP address? If so, I tried using service args but that hasn't helped.
I came across the same issue trying to run PhantomJS on my OpenShift-hosted Django application, running on a Python 3 gear. Finally I managed to make it work; this is how.
The main issue to overcome is that OpenShift does not allow applications to bind to localhost (neither '0.0.0.0' nor '127.0.0.1'). So the point is to bind to the actual IP address of your OpenShift gear instead.
You have to deal with this issue at the ghostdriver level as well as within the Python selenium binding.
ghostdriver (phantomJS binary)
Unfortunately, as explained brilliantly by Paolo Bernardi in this post: http://www.bernardi.cloud/2015/02/25/phantomjs-with-ghostdriver-on-openshift/ you have to use a patched version of phantomjs for this, as the released version doesn't allow binding to a specified IP. The binary linked by Paolo did not work on my Python 3 cartridge, yet this one worked perfectly: https://github.com/jrestful/server/blob/master/seo/phantomjs-1.9.8-patched.tar.gz?raw=true (see the question "Trying to run PhantomJS on OpenShift: cannot patch GhostDriver so that it can bind on the server IP address" for details)
Upload this phantomjs binary to app-root/data/phantomjs/bin (for example) and make sure it is runnable :
> chmod 711 app-root/data/phantomjs/bin/phantomjs
You can now check that you can bind to your IP like this (I chose port number 15002 for my app; I reckon you can pick any value you want above 15000):
> echo $OPENSHIFT_PYTHON_IP
127.13.XXX.XXX
> app-root/data/phantomjs/bin/phantomjs --webdriver=127.13.XXX.XXX:15002
PhantomJS is launching GhostDriver...
[INFO - 2017-03-24T13:16:36.031Z] GhostDriver - Main - running on port 127.13.XXX.XXX:15002
Ok, now kill this process and proceed to step 2: Python webdriver.
Custom python-selenium webdriver for PhantomJS
The point is to add the IP address to bind to as a parameter of the PhantomJS Webdriver.
First, I defined new settings to adapt to Openshift's constraint in my settings.py
PHANTOMJS_BIN_PATH = os.path.join(os.getenv('OPENSHIFT_DATA_DIR'), 'phantomjs', 'bin', 'phantomjs')
PHANTOMJS_LOG_PATH = os.path.join(os.getenv('OPENSHIFT_LOG_DIR'), 'ghostdriver.log')
(make sure app-root/logs/ is writable, maybe you'll have to chmod it)
Then, I had to override the PhantomJS Webdriver class, to provide the ip address as an argument. Here is my own implementation:
from selenium.webdriver import phantomjs
from selenium.webdriver.common import utils
from selenium.webdriver.remote.webdriver import WebDriver as RemoteWebDriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities


class MyPhantomJSService(phantomjs.service.Service):
    def __init__(self, executable_path, port=0, service_args=None, log_path=None, ip=None):
        if ip is None:
            self.ip = '0.0.0.0'
        else:
            self.ip = ip
        phantomjs.service.Service.__init__(self, executable_path, port, service_args, log_path)

    def command_line_args(self):
        return self.service_args + ["--webdriver=%s:%d" % (self.ip, self.port)]

    def is_connectable(self):
        return utils.is_connectable(self.port, host=self.ip)

    @property
    def service_url(self):
        """
        Gets the url of the GhostDriver Service
        """
        return "http://%s:%d/wd/hub" % (self.ip, self.port)


class MyPhantomWebDriver(RemoteWebDriver):
    """
    Wrapper to communicate with PhantomJS through Ghostdriver.
    You will need to follow all the directions here:
    https://github.com/detro/ghostdriver
    """

    def __init__(self, executable_path="phantomjs",
                 ip=None, port=0, desired_capabilities=DesiredCapabilities.PHANTOMJS,
                 service_args=None, service_log_path=None):
        """
        Creates a new instance of the PhantomJS / Ghostdriver.
        Starts the service and then creates new instance of the driver.
        :Args:
         - executable_path - path to the executable. If the default is used it assumes the executable is in the $PATH
         - ip - IP address to bind to: this is the whole point of this monkeypatch
         - port - port you would like the service to run, if left as 0, a free port will be found.
         - desired_capabilities: Dictionary object with non-browser specific
           capabilities only, such as "proxy" or "loggingPref".
         - service_args : A List of command line arguments to pass to PhantomJS
         - service_log_path: Path for phantomjs service to log to.
        """
        self.service = MyPhantomJSService(
            executable_path,
            port=port,
            service_args=service_args,
            log_path=service_log_path,
            ip=ip)
        self.service.start()
        try:
            RemoteWebDriver.__init__(
                self,
                command_executor=self.service.service_url,
                desired_capabilities=desired_capabilities)
        except Exception:
            self.quit()
            raise
        self._is_remote = False

    def quit(self):
        """
        Closes the browser and shuts down the PhantomJS executable
        that is started when starting the PhantomJS
        """
        try:
            RemoteWebDriver.quit(self)
        except Exception:
            # We don't care about the message because something probably has gone wrong
            pass
        finally:
            self.service.stop()
Finally, invoke this custom webdriver instead of webdriver.PhantomJS(...), like this:
from .myphantomjs import MyPhantomWebDriver
browser = MyPhantomWebDriver(executable_path=settings.PHANTOMJS_BIN_PATH, service_log_path=settings.PHANTOMJS_LOG_PATH, ip=os.getenv('OPENSHIFT_PYTHON_IP'), port=15002)
From then on, you can use the browser object normally
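After that, browser behaves like any other selenium WebDriver; a minimal usage sketch:

browser.get('http://example.com')   # drive PhantomJS through the patched GhostDriver service
print(browser.title)
print(browser.current_url)
browser.quit()                      # also stops GhostDriver via MyPhantomJSService.stop()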

Running neo4j-Python code in Eclipse with Pydev under ArchLinux

So I installed neo4j on ArchLinux (AUR Link) and want to test it using Python 3.2.
I am using Python 3.2 and Eclipse with PyDev.
I tried the following code from the neo4j website, although I think it was still Python 2.7 code, and I tried to convert it to Python 3.2 code.
Here's the code:
import os

libpath = '/usr/share/java/neo4j'
os.environ['CLASSPATH'] = ';'.join([os.path.abspath(p) for p in os.listdir(libpath)])

from neo4j import GraphDatabase

# Create a database
db = GraphDatabase('/home/USERNAME/.db/neo4j/HelloWorld')

# All write operations happen in a transaction
with db.transaction:
    firstNode = db.node(name='Hello')
    secondNode = db.node(name='world!')
    # Create a relationship with type 'knows'
    relationship = firstNode.knows(secondNode, name='graphy')

# Read operations can happen anywhere
message = ' '.join([firstNode['name'], relationship['name'], secondNode['name']])
print(message)

# Delete the data
with db.transaction:
    firstNode.knows.single.delete()
    firstNode.delete()
    secondNode.delete()

# Always shut down your database when your application exits
db.shutdown()
But I get following error message:
Traceback (most recent call last):
File "/home/USERNAME/PATH/TO/src/neo4j-HelloWorld.py", line 12, in <module>
from neo4j import GraphDatabase
File "/usr/lib/python3.2/site-packages/neo4j_embedded-1.6-py3.2.egg/neo4j/__init__.py", line 29, in <module>
from neo4j.core import GraphDatabase, Direction, NotFoundException, BOTH, ANY, INCOMING, OUTGOING
File "/usr/lib/python3.2/site-packages/neo4j_embedded-1.6-py3.2.egg/neo4j/core.py", line 19, in <module>
from _backend import *
ImportError: No module named _backend
I just can't figure out what's wrong!
I tried to set the CLASSPATH as described here, but it doesn't change anything.
I would really appreciate any help!
Did you run the code through 2to3?
If not, I suggest you do.
I think the problem is that the relative import syntax changed in 3.x; see PEP 328 for details.
E.g. the offending import in core.py should probably say from ._backend import *
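In other words, the fix inside the installed package's core.py is a one-line change (the 2to3 import fixer should produce the same result):

# Python 2 implicit relative import - breaks on Python 3:
# from _backend import *

# Python 3 explicit relative import:
from ._backend import *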
