As context, I'm writing controller software in Python for an RTM3004 oscilloscope for laboratory use. It worked just fine on my laptop (Win10) and on the previous laboratory computer (Linux), but after switching to a new measurement computer I can no longer connect to the device using PyVISA. Here is a console log of the error, which occurs whenever I attempt to connect to the device via PyVISA:
Python 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyvisa as visa
>>> rm= visa.ResourceManager()
>>> rm.list_resources()
('ASRL1::INSTR', 'ASRL3::INSTR', 'USB0::2733::470::103028::0::INSTR')
>>> ID='USB0::2733::470::103028::0::INSTR'
>>> instr=rm.open_resource(ID)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Hbeam\AppData\Local\Programs\Python\Python39\lib\site-packages\pyvisa\highlevel.py", line 3304, in open_resource
res.open(access_mode, open_timeout)
File "C:\Users\Hbeam\AppData\Local\Programs\Python\Python39\lib\site-packages\pyvisa\resources\resource.py", line 297, in open
self.session, status = self._resource_manager.open_bare_resource(
File "C:\Users\Hbeam\AppData\Local\Programs\Python\Python39\lib\site-packages\pyvisa\highlevel.py", line 3232, in open_bare_resource
return self.visalib.open(self.session, resource_name, access_mode, open_timeout)
File "C:\Users\Hbeam\AppData\Local\Programs\Python\Python39\lib\site-packages\pyvisa_py\highlevel.py", line 167, in open
sess = cls(session, resource_name, parsed, open_timeout)
File "C:\Users\Hbeam\AppData\Local\Programs\Python\Python39\lib\site-packages\pyvisa_py\sessions.py", line 323, in __init__
self.after_parsing()
File "C:\Users\Hbeam\AppData\Local\Programs\Python\Python39\lib\site-packages\pyvisa_py\usb.py", line 81, in after_parsing
self.interface = self._intf_cls(
File "C:\Users\Hbeam\AppData\Local\Programs\Python\Python39\lib\site-packages\pyvisa_py\protocols\usbtmc.py", line 293, in __init__
self.usb_dev.set_configuration()
File "C:\Users\Hbeam\AppData\Local\Programs\Python\Python39\lib\site-packages\usb\core.py", line 905, in set_configuration
self._ctx.managed_set_configuration(self, configuration)
File "C:\Users\Hbeam\AppData\Local\Programs\Python\Python39\lib\site-packages\usb\core.py", line 113, in wrapper
return f(self, *args, **kwargs)
File "C:\Users\Hbeam\AppData\Local\Programs\Python\Python39\lib\site-packages\usb\core.py", line 159, in managed_set_configuration
self.backend.set_configuration(self.handle, cfg.bConfigurationValue)
File "C:\Users\Hbeam\AppData\Local\Programs\Python\Python39\lib\site-packages\usb\backend\libusb0.py", line 509, in set_configuration
_check(_lib.usb_set_configuration(dev_handle, config_value))
File "C:\Users\Hbeam\AppData\Local\Programs\Python\Python39\lib\site-packages\usb\backend\libusb0.py", line 447, in _check
raise USBError(errmsg, ret)
usb.core.USBError: [Errno None] b'libusb0-dll:err [set_configuration] could not set config 1: win error: The parameter is incorrect.\r\n'
And here is the information I get out of pyvisa-info:
Machine Details:
Platform ID: Windows-10-10.0.18362-SP0
Processor: Intel64 Family 6 Model 158 Stepping 13, GenuineIntel
Python:
Implementation: CPython
Executable: c:\users\hbeam\appdata\local\programs\python\python39\python.exe
Version: 3.9.4
Compiler: MSC v.1928 64 bit (AMD64)
Bits: 64bit
Build: Apr 6 2021 13:40:21 (#tags/v3.9.4:1f2e308)
Unicode: UCS4
PyVISA Version: 1.11.3
Backends:
ivi:
Version: 1.11.3 (bundled with PyVISA)
Binary library: Not found
py:
Version: 0.5.2
ASRL INSTR: Available via PySerial (3.5)
USB INSTR: Available via PyUSB (1.1.1). Backend: libusb0
USB RAW: Available via PyUSB (1.1.1). Backend: libusb0
TCPIP INSTR: Available
TCPIP SOCKET: Available
GPIB INSTR:
Please install linux-gpib (Linux) or gpib-ctypes (Windows, Linux) to use this resource type. Note that installing gpib-ctypes will give you access to a broader range of functionality.
No module named 'gpib'
I've been racking my brain over this for two workdays now with no solution in sight, so I thought it best to ask for help, since I'm obviously no programming genius. Thank you in advance.
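For reference, a minimal sketch of the connection code the controller runs, with the backend made explicit ('@py' is PyVISA's selector for the pure-Python pyvisa-py backend, the only one usable here since pyvisa-info reports the ivi backend's binary library as "Not found"). The open_resource call is where the traceback above is raised:

import pyvisa

# Explicitly request the pure-Python backend (pyvisa-py).
rm = pyvisa.ResourceManager('@py')
print(rm.visalib)            # which VISA implementation is actually in use
print(rm.list_resources())

scope = rm.open_resource('USB0::2733::470::103028::0::INSTR')
print(scope.query('*IDN?'))  # standard SCPI identification query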
I use fabric 2.6.0, paramiko 2.9.2, and invoke 1.4.0.
Is this a bug or some incompatibility? I got an error like this:
File "/usr/local/lib/python3.7/dist-packages/paramiko/message.py",
line 274, in add_string
self.add_int(len(s)) TypeError: object of type 'bool' has no len()
When I set dry=True, I get this:
>>> conn.run('touch hello.txt', dry=True)
touch hello.txt
<Result cmd='touch hello.txt' exited=0>
Here's the complete error that I got:
Python 3.7.5 (default, Dec 9 2021, 17:04:37)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from fabric import Connection
>>> conn = Connection('192.168.1.16')
>>> conn.open()
>>> conn.is_connected
True
>>> conn.run('touch hello.txt')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<decorator-gen-3>", line 2, in run
File "/usr/local/lib/python3.7/dist-packages/fabric/connection.py", line 30, in opens
return method(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/fabric/connection.py", line 723, in run
return self._run(self._remote_runner(), command, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/invoke/context.py", line 102, in _run
return runner.run(command, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/invoke/runners.py", line 380, in run
return self._run_body(command, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/invoke/runners.py", line 431, in _run_body
self.start(command, self.opts["shell"], self.env)
File "/usr/local/lib/python3.7/dist-packages/fabric/runners.py", line 57, in start
self.channel.update_environment(env)
File "/usr/local/lib/python3.7/dist-packages/paramiko/channel.py", line 72, in _check
return func(self, *args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/paramiko/channel.py", line 332, in update_environment
self.set_environment_variable(name, value)
File "/usr/local/lib/python3.7/dist-packages/paramiko/channel.py", line 72, in _check
return func(self, *args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/paramiko/channel.py", line 361, in set_environment_variable
m.add_string(value)
File "/usr/local/lib/python3.7/dist-packages/paramiko/message.py", line 274, in add_string
self.add_int(len(s))
TypeError: object of type 'bool' has no len()
And here is my ssh config:
Host *
Port 22
User ubuntu
IdentityFile /home/ubuntu/.ssh/id_rsa
When I run ssh 192.168.1.16 from the shell, I can successfully connect to the remote machine.
After debugging for a while, I realized that this error happens because of a misconfiguration.
What actually happened is that I put some environment variables in fabric.yaml so they would be loaded automatically when Fabric runs, and the problem happens when I open a connection to the remote server.
What happens behind the scenes when I run a command with Fabric is that Fabric opens an SSH connection using Paramiko and then tries to send the local environment variables I set in fabric.yaml to the remote machine (this is what causes the error).
You can't send environment variables from the local machine to the remote machine over SSH without adding SendEnv to your local ssh config and AcceptEnv to the remote machine's sshd_config. The alternative is to enable PermitUserEnvironment yes (this is for the remote machine only), and don't forget to create an environment file in your remote home directory (the home you land in when you ssh). Both options are sketched below.
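For illustration, a minimal sketch of the two options; MY_VAR is just a placeholder for whatever variable fabric.yaml sets:

# Option 1: forward a specific variable over SSH
# local ~/.ssh/config
Host 192.168.1.16
    SendEnv MY_VAR
# remote /etc/ssh/sshd_config
AcceptEnv MY_VAR

# Option 2: let the remote user define their own environment
# remote /etc/ssh/sshd_config
PermitUserEnvironment yes
# remote ~/.ssh/environment
MY_VAR=some_value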
Hope my mistake can help others who have a similar problem.
https://github.com/zzh8829/yolov3-tf2 is the project. I've installed all the correct versions of things, I think.
Google is telling me that it is probably a low-VRAM issue, but I am still looking around for other reasons. Please help.
I am using:
Windows 10 (don't say "there's your problem", I need it)
cuDNN 7.4.6
CUDA 10.0
tensorflow 2.0.0
python 3.6
I have a GTX 1660 Super (6 GB VRAM) with a Ryzen 7 2700X and 16 GB of RAM. I'm getting a GTX 1080 (8 GB) in a few days, which I'm going to add to the second PCIe slot.
The error is as follows:
2019-11-30 06:31:26.167368: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2019-11-30 06:31:27.843742: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
2019-11-30 06:31:27.853725: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
Traceback (most recent call last):
File ".\convert.py", line 34, in <module>
app.run(main)
File "C:\Program Files\Python36\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Program Files\Python36\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File ".\convert.py", line 25, in main
output = yolo(img)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 891, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 708, in call
convert_kwargs_to_constants=base_layer_utils.call_context().saving)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 860, in _run_internal_graph
output_tensors = layer(computed_tensors, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 891, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 708, in call
convert_kwargs_to_constants=base_layer_utils.call_context().saving)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 860, in _run_internal_graph
output_tensors = layer(computed_tensors, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 891, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\layers\convolutional.py", line 197, in call
outputs = self._convolution_op(inputs, self.kernel)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 1134, in __call__
return self.conv_op(inp, filter)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 639, in __call__
return self.call(inp, filter)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 238, in __call__
name=self.name)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 2010, in conv2d
name=name)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\ops\gen_nn_ops.py", line 1031, in conv2d
data_format=data_format, dilations=dilations, name=name, ctx=_ctx)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\ops\gen_nn_ops.py", line 1130, in conv2d_eager_fallback
ctx=_ctx, name=name)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [Op:Conv2D]
I had the same problem in the same repository.
The solution that worked for me and my team was to upgrade cuDNN to version 7.5 or higher (as opposed to your 7.4).
The instructions for updating can be found on Nvidia's site:
https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html
This could happen for a few reasons.
(1) As you mentioned, it may be a memory issue, which you could try to verify by allocating less memory to the GPU and seeing if the error still occurs; see also the memory-growth sketch after this list. You can do this in TF 2.0 like so (https://github.com/tensorflow/tensorflow/issues/25138#issuecomment-484428798):
import tensorflow as tf
tf.config.gpu.set_per_process_memory_fraction(0.75)
tf.config.gpu.set_per_process_memory_growth(True)
# your model creation, etc.
model = MyModel(...)
I see the code you're running sets dynamic memory growth if you have > 1 GPU (https://github.com/zzh8829/yolov3-tf2/blob/master/train.py#L46-L47), but since you only have 1 GPU, it is likely just trying to allocate all memory (>90%) at the start.
(2) Some users seem to have experienced this on Windows when there were other TensorFlow or similar processes using the GPU simultaneously, either by you or by other users: https://stackoverflow.com/a/53707323/10993413
(3) As always, make sure your PATH variables are correct. Sometimes, if you tried multiple installations and didn't clean things up properly, the PATH may pick up the wrong version first and cause an issue. If you add new paths to the beginning of PATH, they should be found first: https://www.tensorflow.org/install/gpu#windows_setup
(4) As mentioned by @xenotecc, you could try upgrading to a newer version of cuDNN, though I'm not sure this will help since your config is listed as supported in the TF docs: https://www.tensorflow.org/install/source#gpu. If this does solve it, it may have been a PATH issue after all, since you will likely update the PATHs after installing the newer version.
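Regarding (1): the tf.config.gpu.* calls above appear to come from a pre-release TF 2.0 API, so as a rough sketch (an illustration, not the repo's own code), the equivalent memory-growth setting on tensorflow 2.0.0 would look something like this:

import tensorflow as tf

# Grow GPU memory on demand instead of grabbing ~all of it up front,
# which is what typically trips CUDNN_STATUS_ALLOC_FAILED.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

# model creation, etc. goes after this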
Got the same error and resolved it with the following:
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_virtual_device_configuration(
    gpus[0], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=5000)])
(with a GTX 1660, 6 GB memory, tensorflow 2.0.1)
Simple fix:
Insert these lines under the imports in convert.py:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # hide the GPU so TensorFlow falls back to the CPU
This will ignore your GPU while loading the weights.
I have a Python script that runs on Python 3.4 and uses the keyboard package to allow for keybinds:
keyboard.add_hotkey("enter", self.listener.stop, suppress=True)
keyboard.add_hotkey("shift+enter", self.listener.finish, suppress=True)
When I run this on Windows, it works perfectly, listening to both hotkeys; it also works when run on Linux (CentOS).
At work I've set up an Ubuntu environment on my Windows machine via the Windows 10 feature (WSL) and the app store. However, this environment has a problem with these keyboard hotkeys.
/usr/local/lib/python3.6/dist-packages/keyboard-0.13.2-py3.6.egg/keyboard/_nixkeyboard.py:110: UserWarning: Failed to create a device file using `uinput` module. Sending of events may be limited or unavailable depending on plugged-in devices.
device = aggregate_devices('kbd')
Traceback (most recent call last):
File "main.py", line 32, in <module>
], 'test')
File "/mnt/.../can_controller.py", line 28, in __init__
self.__initialise_key_handler()
File "/mnt/.../can_controller.py", line 95, in __initialise_key_handler
keyboard.add_hotkey("enter", self.listener.stop, suppress=True)
File "/usr/local/lib/python3.6/dist-packages/keyboard-0.13.2-py3.6.egg/keyboard/__init__.py", line 637, in add_hotkey
_listener.start_if_necessary()
File "/usr/local/lib/python3.6/dist-packages/keyboard-0.13.2-py3.6.egg/keyboard/_generic.py", line 35, in start_if_necessary
self.init()
File "/usr/local/lib/python3.6/dist-packages/keyboard-0.13.2-py3.6.egg/keyboard/__init__.py", line 194, in init
_os_keyboard.init()
File "/usr/local/lib/python3.6/dist-packages/keyboard-0.13.2-py3.6.egg/keyboard/_nixkeyboard.py", line 113, in init
build_device()
File "/usr/local/lib/python3.6/dist-packages/keyboard-0.13.2-py3.6.egg/keyboard/_nixkeyboard.py", line 110, in build_device
device = aggregate_devices('kbd')
File "/usr/local/lib/python3.6/dist-packages/keyboard-0.13.2-py3.6.egg/keyboard/_nixcommon.py", line 168, in aggregate_devices
assert fake_device
AssertionError
If anybody knows how to fix this or has a good workaround, please let me know.
I'm trying to bring up pipenv on a Raspberry Pi Zero W. The symptom I'm seeing is that pexpect times out when trying to create a tutorial project.
Admittedly, the RPi is a small machine, but I was monitoring memory usage and swap space during the process, and it wasn't running out of memory or swap.
Any idea what it was trying to do? Or how I should debug this? Here's the stack trace:
pi@blue-server:~/testdir $ pipenv install requests
Creating a virtualenv for this project…
Using /usr/bin/python3 (3.5.3) to create virtualenv…
Traceback (most recent call last):
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/expect.py", line 109, in expect_loop
return self.timeout()
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/expect.py", line 82, in timeout
raise TIMEOUT(msg)
pexpect.exceptions.TIMEOUT: <pexpect.popen_spawn.PopenSpawn object at 0xb555c950>
searcher: searcher_re:
0: EOF
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pi/.local/bin/pipenv", line 11, in <module>
sys.exit(cli())
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/cli.py", line 478, in uninstall
keep_outdated=keep_outdated,
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/core.py", line 2077, in do_uninstall
ensure_project(three=three, python=python)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/core.py", line 620, in ensure_project
three=three, python=python, site_packages=site_packages
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/core.py", line 569, in ensure_virtualenv
do_create_virtualenv(python=python, site_packages=site_packages)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/core.py", line 936, in do_create_virtualenv
click.echo(crayons.blue(c.out), err=True)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/delegator.py", line 99, in out
self.__out = self._pexpect_out
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/delegator.py", line 87, in _pexpect_out
result += self.subprocess.read()
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/spawnbase.py", line 441, in read
self.expect(self.delimiter)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/spawnbase.py", line 341, in expect
timeout, searchwindowsize, async_)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/spawnbase.py", line 369, in expect_list
return exp.expect_loop(timeout)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/expect.py", line 119, in expect_loop
return self.timeout(e)
File "/home/pi/.local/lib/python3.5/site-packages/pipenv/vendor/pexpect/expect.py", line 82, in timeout
raise TIMEOUT(msg)
pexpect.exceptions.TIMEOUT: <pexpect.popen_spawn.PopenSpawn object at 0xb553a710>
searcher: searcher_re:
0: EOF
<pexpect.popen_spawn.PopenSpawn object at 0xb553a710>
searcher: searcher_re:
0: EOF
Here's the environment info:
pi@blue-server:~/foo $ uname -a
Linux blue-server 4.14.34+ #1110 Mon Apr 16 14:51:42 BST 2018 armv6l GNU/Linux
pi@blue-server:~/foo $ lsb_release -a
No LSB modules are available.
Distributor ID: Raspbian
Description: Raspbian GNU/Linux 9.4 (stretch)
Release: 9.4
Codename: stretch
pi@blue-server:~/foo $ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
pi@blue-server:~/foo $ free -m
              total        used        free      shared  buff/cache   available
Mem:            433          26         260           3         145         353
Swap:            99           0          99
Additional info:
I noticed it was timing out inside of a subprocess call. Using pdb, I traced it down to the command:
<Command ['/usr/bin/python3.5', '-m', 'pipenv.pew', 'new', 'foo-su43ObVR', '-d', '-p', '/usr/bin/python3.5']>
I tried replicating that call from the command line and it completed without error:
pi@blue-server:~/foo $ /usr/bin/python3.5 -m pipenv.pew new 'asdf' -d -p /usr/bin/python3.5
Already using interpreter /usr/bin/python3.5
Using base prefix '/usr'
New python executable in /home/pi/.local/share/virtualenvs/asdf/bin/python3.5
Also creating executable in /home/pi/.local/share/virtualenvs/asdf/bin/python
Installing setuptools, pip, wheel...done.
That seems hopeful, but I still don't know what causes the timeout.
TL;DR: the fix is to extend the timeout
Solved it. Because the RPi Zero is slow, it was timing out. Taking a clue from the discussion on GitHub, I noticed that it's now possible to extend the default timeout with environment variables. So this solved the problem:
pi@blue-server:~/testdir $ export PIPENV_TIMEOUT=500
pi@blue-server:~/testdir $ pipenv install requests
Per below, I'm not sure how to troubleshoot this pretty simple usage scenario.
I have a script (that I run about once a month) that does functionally the identical thing and which was working as of a month ago.
I'd appreciate any pointers on places to start looking into why this does not work.
$ python3
Python 3.6.1 (default, Mar 23 2017, 16:49:06)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from selenium import webdriver
>>> from splinter import Browser
>>> chrome_options = webdriver.ChromeOptions()
>>> browser = Browser('chrome')
>>> browser.cookies.add({'aaa':'bbb'})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dummyuser/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/splinter/driver/webdriver/cookie_manager.py", line 28, in add
self.driver.add_cookie({'name': key, 'value': value})
File "/Users/dummyuser/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 708, in add_cookie
self.execute(Command.ADD_COOKIE, {'cookie': cookie_dict})
File "/Users/dummyuser/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 256, in execute
self.error_handler.check_response(response)
File "/Users/dummyuser/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py", line 194, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unable to set cookie
(Session info: chrome=62.0.3202.94)
(Driver info: chromedriver=2.33.506106 (8a06c39c4582fbfbab6966dbb1c38a9173bfb1a2),platform=Mac OS X 10.13.1 x86_64)
You should open the URL first and load the cookies; then you can open the next URL with the cookies. You can also do it like this if you want to open the same URL:
import pickle
from selenium import webdriver

driver = webdriver.Chrome(executable_path=r'X:\home\xxx\chromedriver.exe')
cookies = pickle.load(open("cookies.pkl", "rb"))
driver.get("https://www.douban.com/")
for cookie in cookies:
    driver.add_cookie(cookie)
driver.get("https://www.douban.com/")
Hope this helps.
Florent B.'s answer works for me as well; I just want to put it in the right place.
browser.cookies.add needs to be called after some browser.visit(...).
see Florent's comment:
The method browser.cookies.add is bound to the current domain which is undefined in your example. You need to set the domain first with driver.get('http://...')
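For illustration, a minimal sketch of that ordering with splinter (the URL is just a placeholder for the site whose domain the cookie belongs to):

from splinter import Browser

browser = Browser('chrome')
browser.visit('http://example.com')   # establishes the current domain first
browser.cookies.add({'aaa': 'bbb'})   # the cookie is now attached to that domain
browser.visit('http://example.com')   # reload so the page sees the cookie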