Chromedriver is started with xvfb-run chromedriver --disable-dev-shm-usage --enable-chrome-logs --allowed-ips=127.0.0.1.
Here's what I know:
bash-4.2# chromedriver --version
ChromeDriver 110.0.5481.77 (65ed616c6e8ee3fe0ad64fe83796c020644d42af-refs/branch-heads/5481@{#839})
bash-4.2# google-chrome --version
Google Chrome 110.0.5481.77
bash-4.2# curl -XPOST -H "Content-Type: application/json;charset=UTF-8" -H "Accept: application/json;charset=UTF-8" -d '{"capabilities":{"firstMatch":[{"goog:chromeOptions":{"args":["--no-sandbox","--user-data-dir=/tmp/chrome/2","--data-path=/tmp/chrome/3","--disk-cache-dir=/tmp/chrome/4","--no-zygote","--disable-dev-shm-usage","--disable-gpu","--single-process"]}}],"alwaysMatch":{"browserName":"chrome","pageLoadStrategy":"none"}}}' http://localhost:9515/session
{"value":{"error":"disconnected","message":"disconnected: Unable to receive message from renderer\n (failed to check if window was closed: disconnected: not connected to DevTools)\n (Session info: chrome=110.0.5481.77)","stacktrace":"#0 0x562dfdcfad93 \u003Cunknown>\n#1 0x562dfdac92d7 \u003Cunknown>\n#2 0x562dfdab36d3 \u003Cunknown>\n#3 0x562dfdab1ee2 \u003Cunknown>\n#4 0x562dfdab2682 \u003Cunknown>\n#5 0x562dfdab25d4 \u003Cunknown>\n#6 0x562dfdaa4170 \u003Cunknown>\n#7 0x562dfdaa39c0 \u003Cunknown>\n#8 0x562dfdaa2d35 \u003Cunknown>\n#9 0x562dfdb32653 \u003Cunknown>\n#10 0x562dfdb29353 \u003Cunknown>\n#11 0x562dfdaf8e40 \u003Cunknown>\n#12 0x562dfdafa038 \u003Cunknown>\n#13 0x562dfdd4e8be \u003Cunknown>\n#14 0x562dfdd528f0 \u003Cunknown>\n#15 0x562dfdd32f90 \u003Cunknown>\n#16 0x562dfdd53b7d \u003Cunknown>\n#17 0x562dfdd24578 \u003Cunknown>\n#18 0x562dfdd78348 \u003Cunknown>\n#19 0x562dfdd784d6 \u003Cunknown>\n#20 0x562dfdd92341 \u003Cunknown>\n#21 0x7fa628df844b start_thread\n"}}
Looking into the chromedriver verbose logs, Chrome was happily sitting at a new tab and then things went haywire:
[1675906446.284][DEBUG]: DevTools WebSocket Response: Target.getTargets (id=1) (session_id=) browser {
"targetInfos": [ {
"attached": false,
"browserContextId": "D38C850EC0824ADB75AD7FE1597FE262",
"canAccessOpener": false,
"targetId": "1AFD8F644C4F1088FA776DA626B4F18D",
"title": "New Tab",
"type": "page",
"url": "chrome://newtab/"
} ]
}
[1675906446.284][DEBUG]: DevTools WebSocket Command: Target.attachToTarget (id=2) (session_id=) browser {
"flatten": true,
"targetId": "1AFD8F644C4F1088FA776DA626B4F18D"
}
[0209/013406.300892:ERROR:file_io_posix.cc(144)] open /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq: No such file or directory (2)
[0209/013406.300954:ERROR:file_io_posix.cc(144)] open /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq: No such file or directory (2)
[1675906446.312][SEVERE]: Unable to receive message from renderer
We know the cpufreq thing is a red herring, as it comes from the crash reporter.
But here's the insane catch: if I replace the chrome binary with
cat /opt/google/chrome/chrome
#!/bin/bash
strace -o /tmp/strace -f /opt/google/chrome/chrome-binary "$@"
Then it works. If I remove the strace and keep the shell script, it crashes again. I also tried Chrome 98 and 102; they crash just the same.
Here is the payload from above nicely formatted:
{
"capabilities": {
"firstMatch": [
{
"goog:chromeOptions": {
"args": [
"--no-sandbox",
"--user-data-dir=/tmp/chrome/2",
"--data-path=/tmp/chrome/3",
"--disk-cache-dir=/tmp/chrome/4",
"--no-zygote",
"--disable-dev-shm-usage",
"--disable-gpu",
"--single-process"
]
}
}
],
"alwaysMatch": {
"browserName": "chrome",
"pageLoadStrategy": "none"
}
}
}
Note: I added pageLoadStrategy after the error surfaced and it makes no difference, but I wanted to show I've tried.
This is some sort of CentOS: a Dockerfile based on public.ecr.aws/lambda/provided:al2, running locally (it fails the same way on AWS Lambda). Here's how:
FROM public.ecr.aws/lambda/provided:al2
COPY chrome-deps.txt /tmp/
RUN yum install -q -y $(cat /tmp/chrome-deps.txt) https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
COPY install-chrome.sh /tmp/
RUN /usr/bin/bash /tmp/install-chrome.sh
The dependency list is probably overkill to spell out in full, but:
acl adwaita-cursor-theme adwaita-icon-theme alsa-lib at-spi2-atk at-spi2-core
atk avahi-libs cairo cairo-gobject colord-libs cryptsetup-libs cups-libs dbus
dbus-libs dconf desktop-file-utils device-mapper device-mapper-libs elfutils-default-yama-scope
elfutils-libs emacs-filesystem fribidi gdk-pixbuf2 glib-networking gnutls graphite2
gsettings-desktop-schemas gtk-update-icon-cache gtk3 harfbuzz hicolor-icon-theme hwdata jasper-libs
jbigkit-libs json-glib kmod kmod-libs lcms2 libX11 libX11-common libXau libXcomposite libXcursor libXdamage
libXext libXfixes libXft libXi libXinerama libXrandr libXrender libXtst libXxf86vm libdrm libepoxy
liberation-fonts liberation-fonts-common liberation-mono-fonts liberation-narrow-fonts liberation-sans-fonts
liberation-serif-fonts libfdisk libglvnd libglvnd-egl libglvnd-glx libgusb libidn libjpeg-turbo libmodman
libpciaccess libproxy libsemanage libsmartcols libsoup libthai libtiff libusbx libutempter libwayland-client
libwayland-cursor libwayland-egl libwayland-server libxcb libxkbcommon libxshmfence lz4 mesa-libEGL mesa-libGL
mesa-libgbm mesa-libglapi nettle pango pixman qrencode-libs rest shadow-utils systemd systemd-libs trousers ustr
util-linux vulkan vulkan-filesystem wget which xdg-utils xkeyboard-config
xorg-x11-server-Xvfb unzip
The install script is not much:
#!/usr/bin/bash
CHROMEVERSION=$(curl -s https://chromedriver.storage.googleapis.com/LATEST_RELEASE)
curl -s https://chromedriver.storage.googleapis.com/$CHROMEVERSION/chromedriver_linux64.zip > /tmp/chromedriver_linux64.zip
unzip /tmp/chromedriver_linux64.zip -d /opt/bin/
I downloaded chromium.br from https://github.com/Sparticuz/chrome-aws-lambda/blob/2fb71eb0ca2e08292973efe3ca59182debbbffc8/bin/chromium.br. While this repo is deprecated, the Chromium 110 build from the new repo doesn't seem to work.
brotli -d chromium.br
In the Dockerfile: COPY chromium /usr/bin/google-chrome, and in the install script pin the driver version:
CHROMEVERSION=$(curl -s https://chromedriver.storage.googleapis.com/LATEST_RELEASE_106.0.5249)
I still have no idea what's going on, but I'll take it. Being stuck on Chromium 106 is fine for the next few years.
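The pinning matters because a ChromeDriver release only supports the Chrome major version it was built for. A quick sanity check that the two binaries agree on the major version, using the version strings printed earlier (a sketch, not part of any tool used here):

```python
import re

def major_version(version_line):
    """Extract the major version from output like 'Google Chrome 110.0.5481.77'."""
    match = re.search(r"(\d+)\.\d+\.\d+\.\d+", version_line)
    return int(match.group(1))

# Version strings as printed by the binaries earlier in this post.
driver = major_version("ChromeDriver 110.0.5481.77 (65ed616c6e8ee3fe0ad64fe83796c020644d42af)")
chrome = major_version("Google Chrome 110.0.5481.77")
print(driver == chrome)
```

Running the equivalent check in the container build would catch a mismatch between LATEST_RELEASE and the pinned Chromium before anything hits Lambda.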
From what I understand, Ansible inventory plugins (rather than the older dynamic-inventory scripts) are the new way of handling dynamic hosts, e.g. from cloud providers.
So, first I set the Azure credentials in my environment:
± env | grep AZ
AZURE_SECRET=asdf
AZURE_TENANT=asdf
AZURE_SUBSCRIPTION_ID=asdf
AZURE_CLIENT_ID=asdf
Next, I wrote an ansible.cfg with the following content:
± cat ansible.cfg
[inventory]
enable_plugins = azure_rm
Finally, I wrote the YAML file with the minimal settings shown on the Ansible inventory plugin page:
± cat foo.azure_rm.yaml
---
plugin: azure_rm
When I run the ansible-inventory binary on that file, I get:
± ansible-inventory -i foo.azure_rm.yaml --list
[WARNING]: * Failed to parse /path/to/foo.azure_rm.yaml with azure_rm plugin: Unicode-objects must be encoded before hashing
[WARNING]: Unable to parse /path/to/foo.azure_rm.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
"_meta": {
"hostvars": {}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {}
}
Summing up: The main problem seems to be the line:
[WARNING]: * Failed to parse /path/to/foo.azure_rm.yaml with azure_rm plugin: Unicode-objects must be encoded before hashing
Help, anyone?
I think this is a bug in the plugin itself. Adding the debug flag to Ansible gives me the following stack trace:
File "/usr/local/lib/python3.6/site-packages/ansible/inventory/manager.py", line 273, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 235, in parse
self._get_hosts()
File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 292, in _get_hosts
self._process_queue_batch()
File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 412, in _process_queue_batch
result.handler(r['content'], **result.handler_args)
File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 357, in _on_vm_page_response
self._hosts.append(AzureHost(h, self, vmss=vmss))
File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 466, in __init__
self.default_inventory_hostname = '{0}_{1}'.format(vm_model['name'], hashlib.sha1(vm_model['id']).hexdigest()[0:4])
It seems this was only recently fixed: https://github.com/ansible/ansible/pull/46608. So you'll either have to wait for 2.8 or use the development version.
I've fixed it in a GitHub fork and use pipenv to include this version in my environment. It's really a backport from devel, where the problem is already fixed. Maybe I'll do a PR against Ansible in the coming days to get it into stable-2.7, but the better option may be to wait for 2.8 in May.
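The failing line in the traceback hashes vm_model['id'], which is a str; under Python 3, hashlib.sha1() only accepts bytes, hence "Unicode-objects must be encoded before hashing". The fix amounts to encoding before hashing, roughly like this (a sketch of the idea, not the exact patched code; the VM model dict is a made-up example):

```python
import hashlib

def default_inventory_hostname(vm_model):
    # Python 3's hashlib rejects str input; encode the id to bytes first.
    digest = hashlib.sha1(vm_model['id'].encode('utf-8')).hexdigest()
    # Same naming scheme as the traceback: "<name>_<first 4 hex chars>".
    return '{0}_{1}'.format(vm_model['name'], digest[0:4])

# Hypothetical VM model, just to exercise the function.
vm = {'name': 'myvm', 'id': '/subscriptions/0000/virtualMachines/myvm'}
print(default_inventory_hostname(vm))
```

Passing vm_model['id'] unencoded is harmless on Python 2 (where str is bytes), which is why the plugin only breaks under Python 3.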
I had the same issue and solved it by using Python 3.
You can check your Ansible Python version with the following command:
ansible --version | grep "python version"
python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]
Install the Python 3 packages:
pip3 install ansible azure azure-cli
If needed, export the env variable for authentication:
export ANSIBLE_AZURE_AUTH_SOURCE=cli
Then run ansible-inventory with:
python3 $(which ansible-inventory) -i my.azure_rm.yaml --graph
My my.azure_rm.yml file looks like this:
plugin: azure_rm
include_vm_resource_groups:
- my_resource_group_rg
auth_source: cli
I am trying to create a build system to run pandoc and save as pdf using pdflatex:
{
"selector": "text.html.markdown",
"working_dir": "$file_path",
"cmd": [
"C:\\Users\\Administrator\\AppData\\Local\\Pandoc",
"-f", "markdown",
"-t", "latex",
"--pdf-engine=pdflatex",
"-s",
"-o",
"C:\\Users\\Administrator\\Desktop\\output.pdf",
"$file"
]
}
I get an "Access is denied" error and cannot figure out what's wrong:
[WinError 5] Access is denied
[cmd: ['C:\\Users\\Administrator\\AppData\\Local\\Pandoc', '-f', 'markdown', '-t', 'latex', '--pdf-engine=pdflatex', '-s', '-o', 'C:\\Users\\Administrator\\Desktop\\output.pdf', 'C:\\Users\\Administrator\\Desktop\\CP10328R-TEST.md']]
[dir: C:\Users\Administrator\Desktop]
I have confirmed that I can write a file to the desktop without any issues, so the "Access is denied" error must relate to something other than the output file. I have also tested pdflatex as the engine: I can create PDFs from LaTeX without errors using a Texworks install on the same machine, so the issue cannot be with LaTeX. Finally, I have confirmed the path to pandoc is correct.
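One possibility worth ruling out (my assumption, not confirmed above): [WinError 5] is also what Windows reports when the command path is a directory rather than an executable, and C:\Users\Administrator\AppData\Local\Pandoc looks like an install folder rather than a path ending in pandoc.exe. The behaviour is easy to reproduce outside Sublime:

```python
import subprocess
import tempfile

# Trying to execute a directory fails with an access/permission error on
# both Windows (WinError 5) and POSIX (Errno 13), mirroring the build error.
with tempfile.TemporaryDirectory() as d:
    try:
        subprocess.run([d])
    except PermissionError as e:
        print("PermissionError:", e)
```

If that is the cause, pointing "cmd" at the pandoc executable itself should clear the error.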
I'm trying to install cl-jupyter (on Debian).
I've done the following steps:
install gcl (apt-get install gcl)
install sbcl (apt-get install sbcl)
run command python3 ./install-cl-jupyter.py
try to run the command sbcl --load ./cl-jupyter.lisp, but I get the following and don't know what to do. I've tried choosing every option, but nothing happened.
Output:
This is SBCL 1.2.4.debian, an implementation of ANSI Common Lisp.
More information about SBCL is available at <http://www.sbcl.org/>.
SBCL is free software, provided as is, with absolutely no warranty.
It is mostly in the public domain; some portions are provided under
BSD-style licenses. See the CREDITS and COPYING files in the
distribution for more information.
debugger invoked on a SB-C::INPUT-ERROR-IN-LOAD in thread
#<THREAD "main thread" RUNNING {10039CE993}>:
READ error during LOAD:
Package ASDF does not exist.
Line: 3, Column: 29, File-Position: 150
Stream: #<SB-SYS:FD-STREAM
for "file /home/ivan/all/language_packages/cl-jupyter-master/cl-jupyter.lisp"
{10039D64A3}>
Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.
restarts (invokable by number or by possibly-abbreviated name):
0: [ABORT ] Abort loading file "/home/ivan/all/language_packages/cl-jupyter-master/./cl-jupyter.lisp".
1: [CONTINUE] Ignore runtime option --load "./cl-jupyter.lisp".
2: Skip rest of --eval and --load options.
3: Skip to toplevel READ/EVAL/PRINT loop.
4: [EXIT ] Exit SBCL (calling #'EXIT, killing the process).
(SB-C:COMPILER-ERROR SB-C::INPUT-ERROR-IN-LOAD :CONDITION #<SB-INT:SIMPLE-READER-PACKAGE-ERROR "Package ~A does not exist." {10039D9E83}> :STREAM #<SB-SYS:FD-STREAM for "file /home/ivan/all/language_packages/cl-jupyter-master/cl-jupyter.lisp" {10039D64A3}>)
Update the file ipython/your_kernel_name/kernel.json
from
{"language": "lisp", "display_name": "SBCL Lisp", "argv": ["sbcl", "--non-interactive", "--load", "/home/ivan/all/language_packages/cl-jupyter-master/cl-jupyter.lisp", "/home/ivan/all/language_packages/cl-jupyter-master/src", "/home/ivan/all/language_packages/cl-jupyter-master", "{connection_file}"]}
to
{
"argv": [
"sbcl","--non-interactive", "--load",
"/path/to/cl-jupyter/cl-jupyter.lisp",
"/path/to/cl-jupyter/src",
"/path/to/cl-jupyter",
"{connection_file}"
],
"display_name": "SBCL Lisp",
"language": "lisp"
}
(information from here)
Now I can see the SBCL Lisp kernel in Jupyter, but it doesn't work and breaks every time I try to write and run something.
Please help!
I've done it! Now it works fine.
1) I forgot to install Quicklisp.
2) I just put the right paths to cl-jupyter.lisp, /src, and /cl-jupyter here:
{
"argv": [
"sbcl","--non-interactive", "--load",
"/path/to/cl-jupyter/cl-jupyter.lisp",
"/path/to/cl-jupyter/src",
"/path/to/cl-jupyter",
"{connection_file}"
],
"display_name": "SBCL Lisp",
"language": "lisp"
}
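Since a wrong path in argv makes the kernel fail at startup, a quick way to check the edited kernel.json is to verify that every absolute path in it exists. This is a hypothetical helper, not part of cl-jupyter; the example input is the template above, so its placeholder paths are all reported as missing:

```python
import json
import os

def missing_paths(kernel_json_text):
    """Return argv entries that look like absolute paths but don't exist on disk."""
    spec = json.loads(kernel_json_text)
    return [arg for arg in spec["argv"]
            if arg.startswith("/") and not os.path.exists(arg)]

# The kernel.json template from above; replace /path/to with real paths.
example = ('{"argv": ["sbcl", "--non-interactive", "--load", '
           '"/path/to/cl-jupyter/cl-jupyter.lisp", "/path/to/cl-jupyter/src", '
           '"/path/to/cl-jupyter", "{connection_file}"], '
           '"display_name": "SBCL Lisp", "language": "lisp"}')
print(missing_paths(example))
```

An empty list means every path in the kernel spec resolves and the kernel should at least start.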
It works in the development environment but not in production.
Production environment error:
$e->getMessage():
"no decode delegate for this image format
'/tmp/sws5725c638311df5.24370631' # error/svg.c/ReadSVGImage/2871"
Generate tmpname:
$tmpname = '/tmp/'.uniqid('sws', true);
Method:
public function load($filename)
{
try
{
$this->filename = $filename;
$this->image = new Imagick($filename);
return $this;
}
catch(ImagickException $e)
{
error_log($e->getMessage(), 3, "/tmp/log.log");
OPException::raise(new OPInvalidImageException('Could not create jpeg with ImageMagick library'));
}
}
Delegates:
convert -list configure | grep DELEGATES
bzlib djvu fftw fontconfig freetype jbig jpeg jng jp2 lcms2 lqr lzma openexr pango png rsvg tiff x11 xml wmf zlib
Versions (dev and prod environments):
ImageMagick 6.7.7-10
Ubuntu 14.04
PHP 5.5.9-1ubuntu4.16
Any suggestions? Thanks for your attention.
Temporary file
Reviewing the temporary file with vim, I found an error message coming from the server in XML format, which is why the Imagick library treated it as an SVG file.
vim /tmp/sws5725c638311df5.24370631
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>RequestTimeTooSkewed</Code>
<Message>The difference between the request time and the current time is too large.</Message>
<RequestTime>Sun, 01 May 2016 06:43:17 GMT</RequestTime>
<ServerTime>2016-05-01T06:28:06Z</ServerTime>
<MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds>
<RequestId>382A337F3660C5A9</RequestId>
<HostId>+2RI3xW5KBd64P13x4yaWly6VdZzn1okvMAYgKvbFYlGnLp87udxpSBCVx7iQNk7zgnkK/ckUXA=</HostId>
</Error>
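The two timestamps in that response are just over 15 minutes apart, which exceeds the MaxAllowedSkewMilliseconds the server allows; the arithmetic:

```python
from datetime import datetime

# Timestamps copied from the S3 error XML above; both are UTC, so naive
# datetimes are fine for computing the difference.
request_time = datetime.strptime("Sun, 01 May 2016 06:43:17 GMT",
                                 "%a, %d %b %Y %H:%M:%S %Z")
server_time = datetime.strptime("2016-05-01T06:28:06Z",
                                "%Y-%m-%dT%H:%M:%SZ")

skew_ms = abs((request_time - server_time).total_seconds()) * 1000
print(skew_ms)           # clock skew in milliseconds
print(skew_ms > 900000)  # compares against the 900000 ms (15 min) limit
```

Anything over the 900000 ms limit triggers RequestTimeTooSkewed, which is why syncing the clock fixes it.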
Solution
Synchronize the operating system time with the server time:
sudo service ntp stop
sudo ntpdate ntp.ubuntu.com
sudo service ntp start
Thanks for the comments.
I have a problem with my build system specified in project settings. Currently my project settings look like this
{
"build_systems":
[
{
"file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
"name": "Anaconda Python Builder",
"selector": "source.python",
"shell_cmd": "$project_path/bin/python -u $file"
}
],
"folders":
[
{
"file_exclude_patterns":
[
"pip-selfcheck.json",
"pyvenv.cfg"
],
"folder_exclude_patterns":
[
"lib",
"include",
"bin"
],
"follow_symlinks": true,
"path": "."
}
],
"settings":
{
"binary_file_patterns":
[
"*.jpg",
"*.jpeg",
"*.png",
"*.gif",
"*.ttf",
"*.tga",
"*.dds",
"*.ico",
"*.eot",
"*.pdf",
"*.swf",
"*.jar",
"*.zip",
"client/node_modules/**",
"data/**"
]
}
}
The actual problem is in the line:
"shell_cmd": "$project_path/bin/python -u $file"
Every time I close Sublime and reopen it, my shell_cmd gets substituted with this:
"shell_cmd": "\"python\" -u \"$file\""
Which breaks my build. Is there a way to fix this? What do I do to disable this automatic substitution?
Mac OS X 10.11.3
Sublime Text 3103
I already answered this question in Anaconda's issue tracker, but I'll answer it here too for any other user who lands on this question with a similar problem.
That specific build system is used by Anaconda itself, which is why it is called "Anaconda Python Builder": it updates the shell_cmd with whatever you configured as the python_interpreter setting in your Anaconda settings file (general, user, or per project).
If you have specific needs for your build system, you should create a new build entry with your own options and stick to that one. Anaconda's build system is a convenience for users who need their configured Python interpreter instead of the one embedded in ST3.
I solved this without a custom build system by adding "python_interpreter": "full/path/to/python" to the settings dictionary; the python part of the Anaconda Python Builder's shell_cmd gets replaced with it.
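In the project settings shown in the question, that addition looks roughly like this (the interpreter path is a placeholder to adapt):

```json
"settings":
{
    "python_interpreter": "$project_path/bin/python"
}
```

With that in place, the automatic substitution produces the interpreter you want instead of the bare "python".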