I am trying to initiate an SSH connection from my local machine to a host via pwntools, and I keep getting a ValueError, even though when I SSH into the same host from the terminal I get no errors.
My code is as follows:
from pwn import ssh
username='existing_username'
hostname='X.X.X.X'
password='correct_password'
s1=ssh(user=username,host=hostname, password=password)
The error I get is
ValueError: q must be exactly 160, 224, or 256 bits long
Any ideas on how to find the root cause of this and resolve my problem?
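To narrow down where the ValueError comes from, one thing to try is turning on pwntools' debug logging before opening the connection, so you can see how far the SSH handshake gets before the error is raised. This is only a diagnostic sketch reusing the placeholders above, not a fix:

from pwn import *

context.log_level = 'debug'   # log the SSH handshake details leading up to the error

s1 = ssh(user='existing_username', host='X.X.X.X', password='correct_password')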
I'm using Debian and connecting it to Bitvise SSH Server. I configured everything in /etc/ssh/sshd_config, but when I try to connect using root and its password I get this error:
The SSH session has terminated with error. Reason: FlowSocketReader: Error receiving bytes. Windows error 10054: An existing connection was forcibly closed by the remote host.
Is there any way to fix this? I've been searching all night, please help.
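If the goal is to log in as root with a password, two settings in /etc/ssh/sshd_config on the Debian side are worth double-checking before digging deeper. These lines are an assumption about the setup described above, not a confirmed fix, and sshd must be restarted (sudo systemctl restart ssh) after editing:

# /etc/ssh/sshd_config
PermitRootLogin yes          # Debian's default (prohibit-password) rejects root password logins
PasswordAuthentication yes   # allow password authentication at all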
I have been trying to get a connection between TwinCAT 3 on Windows and Python on Ubuntu. I already have the connection between TwinCAT 3 on Windows and Python on Windows working, but not to Ubuntu. I have a virtual machine set up through Oracle VM VirtualBox. I tried many things but so far have had no success in creating the connection.
I have a bridged adapter network and tried to open the port on the IP address of the virtual machine in Linux through sudo ufw allow.
I have the following code:
import pyads

pyads.open_port()
pyads.add_route('10.11.104.206.1.1', '127.0.0.1')
pyads.close_port()

plc = pyads.Connection('10.11.104.206.1.1', 851)
plc.open()
try:
    # try to connect to PLC
    plc.read_state()
    print('Connection succeeded')
except Exception:
    print('Connection failed')
And this is the error I get:
2020-11-22T22:45:46+0100 Error: Connect TCP socket failed with: 111
Traceback (most recent call last):
File "/home/laurence/ws_moveit/devel/lib/moveit_tutorials/move_panda_LKO.py", line 15, in <module>
exec(compile(fh.read(), python_script, 'exec'), context)
File "/home/laurence/ws_moveit/src/moveit_tutorials/doc/move_panda_LKO/scripts/move_panda_LKO.py", line 64, in <module>
pyads.add_route('10.11.104.206.1.1','127.0.0.1')
File "/usr/local/lib/python3.8/dist-packages/pyads/ads.py", line 188, in add_route
return adsAddRoute(adr.netIdStruct(), ip_address)
File "/usr/local/lib/python3.8/dist-packages/pyads/pyads_ex.py", line 155, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/pyads/pyads_ex.py", line 177, in adsAddRoute
raise ADSError(error_code)
pyads.pyads_ex.ADSError: ADSError: target port not found ADS Server not started (6).
These are the NetIds/IP addresses:
Route name     IP address       AMS NetId           Transport
CX-52EE70      169.254.64.202   5.82.238.112.1.1    TCP_IP
LEENLAPTOP19   127.0.0.1        10.11.104.206.1.1   TCP_IP
I have tried combinations with other NetIds/IP addresses, so sometimes I get other errors (110, 113), but usually 111, which means connection refused. I do not know what I am doing wrong. Any ideas?
Please make sure the PLC runtime is running (i.e. a PLC program is loaded and in Run mode) when you connect. If the PLC is in Config or exception mode, the PLC runtime ADS port (851, or 801 for TC2) is not present. That is what ADS error 6, target port not found, is trying to tell us.
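As a quick check from the Python side, here is a small sketch reusing the placeholder NetId and port from the question: if the runtime is up it prints the ADS state, and if the PLC is still in Config mode it fails with the same error 6, because port 851 simply does not exist yet:

import pyads

plc = pyads.Connection('10.11.104.206.1.1', 851)   # placeholder NetId/port from the question
plc.open()
try:
    ads_state, device_state = plc.read_state()
    # pyads.constants.ADSSTATE_RUN (== 5) means the runtime is in Run mode
    print('PLC runtime reachable, ADS state:', ads_state)
except pyads.ADSError as err:
    print('PLC runtime port 851 not reachable:', err)
finally:
    plc.close()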
OS: Windows 10
TensorFlow and Keras successfully imported, Python 3.7.9
tf.__version__
>>> '2.1.0'
keras.__version__
>>> '2.2.4-tf'
Problem
I tried calling load_data() on any of the datasets available in tf.keras, such as:
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()
gives this error:
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
During handling of the above exception, another exception occurred:
.
.
.
URLError: <urlopen error [WinError 10054] An existing connection was forcibly closed by the remote host>
During handling of the above exception, another exception occurred:
.
.
.
Exception: URL fetch failure on https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-
idx1-ubyte.gz: None -- [WinError 10054] An existing connection was forcibly closed by the remote host
The three dots stand for a bunch of traceback lines omitted here.
Does anyone know how to solve this? I've been looking for possible solutions, but the closest I can find is about fixing a certification/verification issue; I think mine is about the URL.
I know the workaround is to download the dataset from Kaggle etc., but I want to know what causes this. Thanks, guys.
EDIT: it's not a URL problem. I'm unable to access https://storage.googleapis.com using IDM, but the files can be downloaded directly in the browser, so I guess it's a security issue.
Finally, after 5 hours of reading here and there...
Please check the solution by CRLannister here https://github.com/tensorflow/tensorflow/issues/33285
What it doesn't mention is where data_utils.py is located in the case of Windows and an Anaconda environment. It's located here:
~\Anaconda3\envs\*your_env*\Lib\site-packages\tensorflow_core\python\keras\utils\data_utils.py
just add the following after all the import statements:
import requests
requests.packages.urllib3.disable_warnings()

import ssl

try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    # Legacy Python that doesn't verify HTTPS certificates by default
    pass
else:
    # Handle target environment that doesn't support HTTPS verification
    ssl._create_default_https_context = _create_unverified_https_context
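If you would rather not edit a file inside site-packages, the same override can be applied in your own script before calling load_data(). This is just a sketch of the same idea; note that it disables certificate verification for the download, so treat it as a workaround rather than a proper fix:

import ssl
import tensorflow as tf

# same idea as the data_utils.py patch, applied locally before the download
ssl._create_default_https_context = ssl._create_unverified_context

(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.fashion_mnist.load_data()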
I'm trying to make an FTPS (or FTP) connection to an FTP server. This is done with Python 3.8.5 32-bit via Visual Studio Code.
Here is the code:
import ftplib
session = ftplib.FTP_TLS('server address')
#session.connect ('server address', 991)
session.login(user='username',passwd='password')
#session.prot_p()
session.set_pasv(True)
session.cwd("files")
print(session.pwd())
filename = "ftpTest.txt"
my_file = open('filepath\\ftpTest.txt', 'wb') # Open a local file to store the downloaded file
session.retrbinary('RETR ' + filename, my_file.write, 1024)
session.quit()
I am able to get session.pwd() (which displays /files), but the connection times out at line 11 (session.retrbinary) after approximately 22 seconds with the following error:
Exception has occurred: TimeoutError
[WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
I have tried setting session.set_pasv to both True and False, following Python ftplib timing out. Setting it to True raised the TimeoutError, and setting it to False raised the following error at line 11:
Exception has occurred: error_perm
500 Illegal PORT command
and I also tried setting a different port (991), following Python SSL FTP connection timing out, and it raised the TimeoutError at line 3.
Using FTP without TLS raised the following error at line 4 (session.login):
Exception has occurred: error_perm
530 Non-anonymous sessions must use encryption.
Turning off my McAfee LiveSafe firewall didn't help either.
By the way, file transfer works with FileZilla; I was able to transfer freely.
Setting up the secure data connection and changing the session's af to INET6 seemed to work for me. This was suggested to me by a colleague, and why it works is beyond me. If anyone can provide a proper explanation, please do.
Code:
import socket

session.login(user='username', passwd='password')
session.prot_p()                # secure the data connection as well as the control connection
session.af = socket.AF_INET6
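For reference, here is how those two changes slot into the script from the question; the server address, credentials and file name are the same placeholders as above, and this is a sketch of the working combination rather than a verified recipe:

import ftplib
import socket

session = ftplib.FTP_TLS('server address')
session.login(user='username', passwd='password')
session.prot_p()                # encrypt the data channel, not just the control channel
session.af = socket.AF_INET6    # makes ftplib use EPSV for the data connection
session.set_pasv(True)
session.cwd("files")

filename = "ftpTest.txt"
with open(filename, 'wb') as my_file:   # local file to store the download
    session.retrbinary('RETR ' + filename, my_file.write, 1024)
session.quit()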
I installed Docker, ran the first Cassandra node, and used cqlsh to run some commands, which worked fine. I installed the Python driver, but when I run the commands below I get the following error. I have seen many Stack Overflow questions about this, and not many people were able to answer them. Please share your ideas. I have been wanting to use Cassandra for a while but could never come up with a good solution for this problem. Thanks.
>>> from cassandra.cluster import Cluster
>>> cluster=Cluster()
>>> keyspace='north'
>>> session=Cluster(['192.168.1.xx']).connect()
Error
cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'192.168.1.xx': ConnectionRefusedError(111, "Tried connecting to [('192.168.1.xx', 9042)]. Last error: Connection refused")})
When I tried to replace the IP address with the Cassandra node I created ('node1' in my case), it gives me this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "cassandra/cluster.py", line 826, in cassandra.cluster.Cluster.__init__
File "/usr/lib/python3.5/socket.py", line 732, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -5] No address associated with hostname
I actually solved this by using the container's IP address inside Docker. I was quite confused about which address I should give, until I ran this command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
I was not aware that I needed to specify the container that holds the cluster node, so I had always been giving the IP address of the machine.
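Putting it together, a short sketch of the resulting connection; 172.17.0.2 is only a placeholder for whatever address docker inspect reports for your container, and 'north' is the keyspace from the question:

from cassandra.cluster import Cluster

# address reported by `docker inspect` for the Cassandra container (placeholder)
cluster = Cluster(['172.17.0.2'], port=9042)
session = cluster.connect()
session.set_keyspace('north')
print(session.execute('SELECT release_version FROM system.local').one())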
The IP address you have provided is not valid: 192.168.1.xx.
You need to provide the IP address (or a valid hostname) of at least one node in your cluster.