Selenium 2.45 with Firefox 35.0.1 not working in production only - Linux

I am using Selenium 2.45 with the Firefox 35.0.1 headless browser. Things are fine in the dev and test environments, but in production I am getting an error:
Driver info: driver.version: FirefoxDriver
org.openqa.selenium.WebDriverException: Failed to connect to binary FirefoxBinary(/usr/bin/firefox) on port 7055; process output follows:
Xlib: extension "RANDR" missing on display ":1".
process 20275: D-Bus library appears to be incorrectly set up; failed to read machine uuid: Failed to open "/var/lib/dbus/machine-id": No such file or directory
See the manual page for dbus-uuidgen to correct this issue.
D-Bus not built with -rdynamic so unable to print a backtrace
Xlib: extension "RANDR" missing on display ":1".
process 20300: D-Bus library appears to be incorrectly set up; failed to read machine uuid: Failed to open "/var/lib/dbus/machine-id": No such file or directory
See the manual page for dbus-uuidgen to correct this issue.
D-Bus not built with -rdynamic so unable to print a backtrace
Build info: version: '2.45.0', revision: '5017cb8e7ca8e37638dc3091b2440b90a1d8686f', time: '2015-02-27 09:10:26'
System info: host: 'prod', ip: '127.0.0.1', os.name: 'Linux', os.arch: 'amd64', os.version: '2.6.32-431.1.2.0.1.el6.x86_64', java.version: '1.7.0_65'
Production Environment:
1) Downloaded firefox-35.0.1
[prod@prod ~]$ ls /usr/local/
bin etc firefox firefox-35.0.1.tar.bz2 games include lib lib64 libexec sbin share src
2) soft linked to /usr/bin/firefox
[prod@prod ~]$ ll /usr/bin/firefox
lrwxrwxrwx 1 root root 26 Jun 11 15:59 /usr/bin/firefox -> /usr/local/firefox/firefox
[prod@prod ~]$
3) Ran Xvfb
[prod@prod ~]$ ps ax | grep Xvfb
15425 ? S 0:00 sudo Xvfb +extension RANDR :1 -screen 0 1024x768x24
15426 ? S 0:00 Xvfb +extension RANDR :1 -screen 0 1024x768x24
23102 pts/6 S+ 0:00 grep Xvfb
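For reference, the setup above corresponds to commands along these lines (the archive location and the DISPLAY export are assumptions, not taken from the question):
cd /usr/local
tar xjf firefox-35.0.1.tar.bz2                       # step 1: unpacks into /usr/local/firefox
ln -s /usr/local/firefox/firefox /usr/bin/firefox    # step 2: the symlink shown above
Xvfb +extension RANDR :1 -screen 0 1024x768x24 &     # step 3: the virtual display
export DISPLAY=:1                                    # processes must be pointed at display :1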
Test Environment:
[root@vc-stage ~]# ll /usr/bin/firefox
lrwxrwxrwx 1 root root 26 May 24 21:32 /usr/bin/firefox -> /usr/local/firefox/firefox
[root@stage ~]#
[root@stage ~]# ls /usr/local/
bin etc firefox firefox-35.0.1.tar.bz2 games include lib lib64 libexec sbin share src
[root@stage ~]#
[root@stage ~]# ps ax | grep Xvfb
3899 pts/5 S+ 0:00 grep Xvfb
27393 ? S 0:01 Xvfb +extension RANDR :1 -screen 0 1024x768x24
[root@stage ~]#
The only difference between test and prod is that in test I am running everything as the root user, while in prod I am running as a sudo user.
Update: the error message is gone without any changes whatsoever. Now it simply does not create the Firefox driver.

Everything was all right except that one package was missing in production: dbus. After installing and configuring the package, everything worked fine.
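For the record, on an EL6 system like the one in the System info above, the fix amounts to something like this (yum is assumed from the distribution; dbus-uuidgen is the tool the error message itself points to):
sudo yum install dbus
sudo dbus-uuidgen --ensure    # creates /var/lib/dbus/machine-id if it is missing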

Related

httpd won't start with custom conf files and mod_wsgi built with Python 3.9

I am working on a Red Hat CentOS 7 box. I have installed Python 3.9.2 into a folder under /opt/python3.9. I am in the midst of moving my Django server to production and have chosen to use Apache (I have installed httpd-devel) with mod_wsgi, following their instructions to make sure it gets configured correctly.
I installed Apache:
sudo yum install httpd
sudo yum install httpd-devel
then
wget https://github.com/GrahamDumpleton/mod_wsgi/archive/refs/tags/4.9.2.tar.gz
mv 4.9.2.tar.gz ./mod_wsgi-4.9.2.tar.gz
tar xvfz mod_wsgi-4.9.2.tar.gz
cd mod_wsgi*
./configure --with-python=/opt/python3.9/bin/python39
make
sudo make install
all with no errors.
sudo systemctl enable httpd
sudo systemctl start httpd
But as soon as I try to use the demo here (which basically entails adding a conf file to /etc/httpd/conf.d/, called wsgi.conf, and a response file to /var/www/html/, called test_wsgi.py, then restarting Apache), it throws an error and tells me to check journalctl -xe.
Jun 08 21:04:01 ip-172-31-18-8.ec2.internal httpd[11893]: AH00526: Syntax error on line 2 of /etc/httpd/conf.d/wsgi.conf:
Jun 08 21:04:01 ip-172-31-18-8.ec2.internal httpd[11893]: Invalid command 'WSGIScriptAlias', perhaps misspelled or defined by a module not included in the server configuration
Jun 08 21:04:01 ip-172-31-18-8.ec2.internal systemd[1]: httpd.service: main process exited, code=exited, status=1/FAILURE
Jun 08 21:04:01 ip-172-31-18-8.ec2.internal systemd[1]: Failed to start The Apache HTTP Server.
-- Subject: Unit httpd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit httpd.service has failed.
--
-- The result is failed.
Jun 08 21:04:01 ip-172-31-18-8.ec2.internal systemd[1]: Unit httpd.service entered failed state.
Jun 08 21:04:01 ip-172-31-18-8.ec2.internal systemd[1]: httpd.service failed.
Jun 08 21:04:01 ip-172-31-18-8.ec2.internal sudo[11888]: pam_unix(sudo:session): session closed for user root
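From what I can tell, this first error just means the wsgi_module was never loaded before its directives were used; a minimal wsgi.conf would need a LoadModule line ahead of WSGIScriptAlias, roughly like this (paths assumed to match the demo):
LoadModule wsgi_module /usr/lib64/httpd/modules/mod_wsgi.so
WSGIScriptAlias /test_wsgi /var/www/html/test_wsgi.py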
I am 95% certain that if I did what was suggested here that it would compile mod_wsgi for Python 2.7 and I don't want to use python2.7... that's why I compiled mod_wsgi for python 3.9.2.
If I try to use my django.conf file instead of the one I linked in the demo, I get a different error that might be more helpful as a clue:
httpd.conf: Syntax error on line 3 of /etc/httpd/conf.d/django.conf: Cannot load /usr/lib64/httpd/modules/mod_wsgi.so into server: libpython3.9.so.1.0: cannot open shared object file: No such file or directory
Line 3 is:
LoadModule wsgi_module /usr/lib64/httpd/modules/mod_wsgi.so
Output of ldd /usr/lib64/httpd/modules/mod_wsgi.so:
[ec2-user@ip-172-31-18-8 ~]$ ldd /usr/lib64/httpd/modules/mod_wsgi.so
linux-vdso.so.1 (0x00007ffd7b50d000)
libpython3.9.so.1.0 => /opt/python39/lib/libpython3.9.so.1.0 (0x00007ff0ebf85000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007ff0ebd4e000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ff0ebb30000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007ff0eb92c000)
libutil.so.1 => /lib64/libutil.so.1 (0x00007ff0eb729000)
libm.so.6 => /lib64/libm.so.6 (0x00007ff0eb3e9000)
libc.so.6 => /lib64/libc.so.6 (0x00007ff0eb03e000)
/lib64/ld-linux-x86-64.so.2 (0x00007ff0ec781000)
I can verify that /opt/python39/lib/libpython3.9.so.1.0 exists, and I have it in my LD_RUN_PATH and LD_LIBRARY_PATH variables.
Output of sudo apachectl -V:
Server version: Apache/2.4.53 ()
Server built: Apr 12 2022 12:00:44
Server's Module Magic Number: 20120211:124
Server loaded: APR 1.7.0, APR-UTIL 1.6.1, PCRE 8.32 2012-11-30
Compiled using: APR 1.7.0, APR-UTIL 1.6.1, PCRE 8.32 2012-11-30
Architecture: 64-bit
Server MPM: prefork
threaded: no
forked: yes (variable process count)
Server compiled with....
-D APR_HAS_SENDFILE
-D APR_HAS_MMAP
-D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
-D APR_USE_PROC_PTHREAD_SERIALIZE
-D APR_USE_PTHREAD_SERIALIZE
-D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
-D APR_HAS_OTHER_CHILD
-D AP_HAVE_RELIABLE_PIPED_LOGS
-D DYNAMIC_MODULE_LIMIT=256
-D HTTPD_ROOT="/etc/httpd"
-D SUEXEC_BIN="/usr/sbin/suexec"
-D DEFAULT_PIDLOG="/run/httpd/httpd.pid"
-D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
-D DEFAULT_ERRORLOG="logs/error_log"
-D AP_TYPES_CONFIG_FILE="conf/mime.types"
-D SERVER_CONFIG_FILE="conf/httpd.conf"
You are seeing this error...
libpython3.9.so.1.0: cannot open shared object file: No such file or directory
...because httpd has no idea it's supposed to look in /opt/python3.9/lib to find the necessary shared library. There are several ways of resolving this problem:
1) Set -rpath when linking the module. This embeds a path to the library in the compiled binary. You would set it by running make like this inside the mod_wsgi-4.9.2 directory:
make LDFLAGS='-L/opt/python3.9/lib -Wl,-rpath,/opt/python3.9/lib'
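If you go this route, you can confirm the path was embedded in the installed module with a quick check (not part of the original answer):
readelf -d /usr/lib64/httpd/modules/mod_wsgi.so | grep -iE 'rpath|runpath'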
2) Set LD_LIBRARY_PATH in httpd's environment. This provides httpd with an additional list of directories to search for shared libraries. We can test it like this:
LD_LIBRARY_PATH=/opt/python3.9/lib httpd -DFOREGROUND
To set it persistently, you'd want to customize the httpd service unit: run systemctl edit httpd, and in the editor that comes up, add the following content:
[Service]
Environment=LD_LIBRARY_PATH=/opt/python3.9/lib
This creates /etc/systemd/system/httpd.service.d/override.conf. Then run systemctl daemon-reload to refresh the cached version of the unit file, and restart your httpd service.
3) Edit the global library search path by creating /etc/ld.so.conf.d/python3.9.conf with the following content:
/opt/python3.9/lib
Then run:
ldconfig
Any of the above options should get things running for you.
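One caveat when verifying: running ldd from your own shell is misleading here, because your LD_LIBRARY_PATH makes the library resolvable even though httpd never sees that variable. To check resolution the way httpd does (assuming GNU env for the -u flag):
env -u LD_LIBRARY_PATH ldd /usr/lib64/httpd/modules/mod_wsgi.so | grep libpython
Before applying one of the fixes this should print "libpython3.9.so.1.0 => not found"; afterwards it should resolve.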

Failed to connect to containerd: failed to dial

I just installed Docker CE on Ubuntu 14.04, following the official instructions using the repository.
The installation went successfully and the daemon is running:
$ ps aux | grep docker
[...] /usr/bin/dockerd --raw-logs [...]
My user is in the docker group:
$ groups
[...] docker
The CLI can't seem to communicate (same with sudo):
$ docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
Is the docker daemon running?
The socket seems to have the correct permissions:
$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Feb 4 16:21 /var/run/docker.sock
The log does seem to complain about some issues, though:
$ sudo tail -f /var/log/upstart/docker.log
Failed to connect to containerd: failed to dial "/var/run/docker/containerd/docker-containerd.sock": dial unix:///var/run/docker/containerd/docker-containerd.sock: timeout
/var/run/docker.sock is up
time="2018-02-04T16:22:21.031459040+01:00" level=info msg="libcontainerd: started new docker-containerd process" pid=17147
INFO[0000] starting containerd module=containerd revision=89623f28b87a6004d4b785663257362d1658a729 version=v1.0.0
INFO[0000] setting subreaper... module=containerd
containerd: invalid argument
time="2018-02-04T16:22:21.056685023+01:00" level=error msg="containerd did not exit successfully" error="exit status 1" module=libcontainerd
Any advice to make this work? Relogging and restarting Docker have already been done, of course.
As @bobbear suggested, and as is actually mentioned in the official docs, one of the prerequisites is:
Version 3.10 or higher of the Linux kernel. The latest version of the kernel available for your platform is recommended.
After checking my kernel version:
$ uname -a
Linux [...] 3.2.[...]-generic [...]-Ubuntu [...] x86_64
I searched for candidates:
$ apt-cache search linux-image
And installed my new_kernel:
$ sudo apt-get install \
linux-image-new_kernel \
linux-headers-new_kernel \
linux-image-extra-new_kernel
The same situation happened to me. It is because your Linux kernel version is too low. Check it with uname -r; if the version is below 3.10 (for example, Debian 7 Wheezy defaults to 3.2), then even if you install docker-ce successfully, you still will not be able to start the Docker daemon. That is why most answers on the web that just tell you to 'restart' this or that did not help: they did not consider this problem.
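A small sketch to check this programmatically before installing (uses sort -V from GNU coreutils; the 3.10 minimum comes from the docs quoted above):
required="3.10"
current="$(uname -r | cut -d- -f1)"    # e.g. "3.2.0" from "3.2.0-126-generic"
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "kernel $current is >= $required: OK for Docker"
else
    echo "kernel $current is too old for Docker; upgrade first"
fi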

Remote LLDB debugging - Docker container

I'm trying to set up remote debugging with LLDB 4.0.1.
There's a Docker (17.06.0-ce) container with Arch Linux.
The Docker container is set to privileged mode, so LLDB can be started in the container.
The container contains core_service, which is a Rust executable.
Commands run inside container
(lldb) target create target/debug/core_service
Current executable set to 'target/debug/core_service' (x86_64).
(lldb) process launch
Process 182 launched: '/srv/core_service/target/debug/core_service' (x86_64)
The problem arises with remote debugging: lldb-server is started inside the container with lldb-server platform --server --listen 0.0.0.0:1234.
I can connect from the host lldb to the container's lldb-server, but I can't attach to or create processes.
Commands run on host (lldb-server in container = localhost:1234)
(lldb) platform select remote-linux
Platform: remote-linux
Connected: no
(lldb) platform connect connect://localhost:1234
Platform: remote-linux
Triple: x86_64-*-linux-gnu
OS Version: 4.12.4 (4.12.4-1-ARCH)
Kernel: #1 SMP PREEMPT Fri Jul 28 18:54:18 UTC 2017
Hostname: 099bd76c07c9
Connected: yes
WorkingDir: /srv/core_service
(lldb) target create target/debug/core_service
Current executable set to 'target/debug/core_service' (x86_64).
(lldb) process launch
error: connect remote failed (Connection refused)
error: process launch failed: Connection refused
How can I fix it? Are there any Docker or Arch Linux settings that would cause this error?
It seems like there's some problem with lldb-server permissions in the Docker container.
Commands run on host (lldb-server in container)
(lldb) platform shell ps -A
PID TTY TIME CMD
1 ? 00:00:00 bash
9 ? 00:00:00 nginx
10 ? 00:00:00 nginx
11 ? 00:00:00 lldb-server
25 ? 00:00:00 core_service
59 ? 00:00:00 lldb-server
68 ? 00:00:00 ps
(lldb) platform shell kill -9 25
(lldb) platform process launch target/debug/core_service
error: connect remote failed (Connection refused)
error: Connection refused
(lldb) platform process launch anything
error: connect remote failed (Connection refused)
error: Connection refused
But I can't figure out what it could be. lldb-server runs as root in the container, and I can execute shell commands using lldb.
Both the platform port (1234 in your case) and a gdbserver port (randomly generated by default) are needed. You can force the gdbserver port with the lldb-server option --gdbserver-port.
Tested on Fedora 29 x86_64:
docker run --privileged -p 5000:5000 -p 5001:5001 fedora bash -c 'dnf -y install lldb;lldb-server platform --server --listen 0.0.0.0:5000 --gdbserver-port 5001'
and
echo 'int main(){}' >main.c;gcc -g -o main main.c;lldb -o 'platform select remote-linux' -o 'platform connect connect://localhost:5000' -o "target create ./main" -o 'b main' -o 'process launch'
(lldb) process launch
Process 45 stopped
* thread #1, name = 'main', stop reason = breakpoint 1.1
frame #0: 0x000000000040110f main`main at main.c:1
-> 1 int main(){}
Process 45 launched: '/root/main' (x86_64)
(lldb) _
This may be because the server cannot see any process on the host: it is still wrapped in its own PID namespace. When you launch the LLDB server, use the host PID namespace:
docker run --pid=host --privileged <yourimage>
Hopefully this will allow your container to see all the host processes.
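Combining this with the port setup from the earlier answer, a full server invocation might look like this (the image name is a placeholder):
docker run --pid=host --privileged -p 5000:5000 -p 5001:5001 myimage \
  lldb-server platform --server --listen 0.0.0.0:5000 --gdbserver-port 5001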

What starts this docker process on my laptop?

Every time I boot up my Lubuntu 16.04 laptop I can see I have a running docker container:
$ ps -ef | grep docker
root 1724 1 3 21:17 ? 00:01:30 /usr/bin/dockerd -H fd://
root 1774 1724 0 21:17 ? 00:00:04 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc
root 4750 1774 0 21:17 ? 00:00:00 docker-containerd-shim 72541a4648b890132985daf2357d1130b8b5208cf12ede607b93ab2987629719 /var/run/docker/libcontainerd/72541a4648b890132985daf2357d1130b8b5208cf12ede607b93ab2987629719 docker-runc
stephane 10755 1793 0 22:07 pts/0 00:00:00 grep docker
It serves a Jenkins application on port 80; requesting localhost/ in the browser redirects to http://localhost/login?from=%2F and shows a Jenkins warning page:
Unlock Jenkins
To ensure Jenkins is securely set up by the administrator, a password has been written to the log (not sure where to find it?) and this file on the server:
A wget request shows:
$ wget localhost/
--2017-05-23 22:09:55-- http://localhost/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2017-05-23 22:09:55 ERROR 403: Forbidden.
How can I know which service is firing up this docker process?
I looked in the /etc/init.d/ directory:
$ l /etc/init.d/
alsa-utils* checkroot-bootclean.sh* halt* mattermostd* nginxd* rc* single* uuidd*
anacron* checkroot.sh* hostname.sh* mountall-bootclean.sh* ntp* rc.local* skeleton whoopsie*
apachedsd* console-setup* httpd* mountall.sh* ondemand* rcS* ssh* x11-common*
apparmor* cron* hwclock.sh* mountdevsubfs.sh* openvpn* README tomcatd*
apport* cups* irqbalance* mountkernfs.sh* php-fpm* reboot* udev*
avahi-daemon* cups-browsed* keyboardd* mountnfs-bootclean.sh* plymouth* redis* ufw*
bluetooth* dbus* killprocs* mountnfs.sh* plymouth-log* resolvconf* umountfs*
bootmisc.sh* docker* kmod* mysqld* postfix* rsync* umountnfs.sh*
cgroupfs-mount* dropboxd* lightdm* networking* pppd-dns* rsyslog* umountroot*
checkfs.sh* grub-common* mariadbd* network-manager* procps* sendsigs* urandom*
The /etc/init.d/docker file is mine, but removing it does not help: I removed it, rebooted, and there is still a docker process:
$ ps -ef | grep docker
root 1560 1 5 22:15 ? 00:00:06 /usr/bin/dockerd -H fd://
root 1645 1560 0 22:15 ? 00:00:00 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc
root 4644 1645 0 22:15 ? 00:00:00 docker-containerd-shim 069db46cca05d43c35f05ff50aaa836507cbf69e4e3d9443b6b859d0edb5b076 /var/run/docker/libcontainerd/069db46cca05d43c35f05ff50aaa836507cbf69e4e3d9443b6b859d0edb5b076 docker-runc
stephane 5520 1741 0 22:17 pts/0 00:00:00 grep docker
So I searched for anything docker-related in all these files, but found nothing:
$ cd /etc/init.d/
[stephane@stephane-ThinkPad-X301 init.d]
$ grep.sh docker
[stephane@stephane-ThinkPad-X301 init.d]
This docker process is there every time I start my laptop, even when offline.
What starts this docker process?
Lubuntu 16.04 comes with systemd by default. At some point you must have started up a Jenkins instance in Docker - it's hard to tell exactly what started the process initially. However, systemd would be what is currently causing it to start. In order to stop it from running, run the following commands:
systemctl status docker <- Find out if systemd thinks docker is running.
It'll likely show something like this:
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2017-05-21 22:59:46 EDT; 1 day 17h ago
Docs: http://docs.docker.com
Main PID: 1314 (dockerd-current)
Tasks: 14 (limit: 8192)
CGroup: /system.slice/docker.service
└─1314 /usr/bin/dockerd-current --add-runtime oci=/usr/libexec/docker/docker-runc-current --default-runtime=oci --containerd /run/containerd.sock --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --selinux-enabled --log-driver=journald
To stop it, run systemctl stop docker and then systemctl disable docker. As a last resort if this doesn't work, you can run systemctl mask docker.
Docker is being started by systemd in your environment. You can disable the entire engine by running:
sudo systemctl disable docker
sudo systemctl stop docker
You can also stop only the container that is running (the shim and Jenkins application):
sudo docker ps # lists the running containers along with their container id
sudo docker update --restart=no $container_id
sudo docker stop $container_id
If you know that you do not need this container and want to permanently delete it, you can run this instead of the above two last commands:
sudo docker rm -f $container_id
The -f switch also stops the container if it's currently running.
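For example, if the Jenkins container is the only one running, you can capture its ID in one step (docker ps -q prints bare container IDs):
container_id=$(sudo docker ps -q)
sudo docker update --restart=no $container_id
sudo docker stop $container_id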
Edit: from your comment, your container is running under swarm mode, which is redeploying it. To stop that, first find the stack or service that is running it:
sudo docker stack ls
sudo docker service ls
If you see a stack listed, you can remove that with:
sudo docker stack rm $stack_name
If there are no stacks listed, or they don't apply to this container, you can delete the service with:
sudo docker service rm $service_name

JBoss5 and JBoss7 installation conflict on the same Linux machine

I had a properly working JBoss7 installation, but recently my team mate installed JBoss 5.1.0.GA on my machine. Since then I have been facing two problems and am still unable to resolve them.
Whenever I stop JBoss with the init.d script, I get this error:
[root ~]# service jboss stop
Stopping jboss-as: [ OK ]
[root ~]# *** JBossAS process (25571) received KILL signal ***
grep: /var/run/jboss-as/jboss-as-standalone.pid: No such file or directory
Could there be a conflict with the process ID file that JBoss generates to check whether the server is running or not?
I suspect there is a conflict with the other JBoss5 installation.
The second issue is that I am unable to connect to the server via jboss-cli.sh:
[root bin]# sh jboss-cli.sh
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
[disconnected /] connect localhost
The controller is not available at localhost:9999
[disconnected /] connect localhost
One thing I want you to check: the result of the ps auxwww | grep jboss command.
I can see two processes with their PIDs; is this a conflict?
root 25970 0.0 0.0 161476 1960 pts/0 S 07:58 0:00 su - jboss -c LAUNCH_JBOSS_IN_BACKGROUND=1 JBOSS_PIDFILE=/var/run/jboss-as/jboss-as-standalone.pid /usr/share/jboss-as/bin/standalone.sh -c standalone.xml
jboss 25973 0.0 0.0 106096 1344 ? Ss 07:58 0:00 /bin/sh /usr/share/jboss-as/bin/standalone.sh -c standalone.xml
jboss 26022 8.7 8.7 1027368 342776 ? Sl 07:58 0:45 /usr/java/jdk1.7.0_25/bin/java -D[Standalone] -server -XX:+TieredCompilation -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Djboss.server.default.config=standalone.xml -Dspring.profiles.active=dev -Dorg.jboss.boot.log.file=/usr/share/jboss-as/standalone/log/boot.log -Dlogging.configuration=file:/usr/share/jboss-as/standalone/configuration/logging.properties -jar /usr/share/jboss-as/jboss-modules.jar -mp /usr/share/jboss-as/modules -jaxpmodule javax.xml.jaxp-provider org.jboss.as.standalone -Djboss.home.dir=/usr/share/jboss-as -c standalone.xml
root 26365 0.0 0.0 103244 848 pts/0 S+ 08:07 0:00 grep jboss
I can see multiple processes started with the sh standalone.sh command. Is this the interference?
If there is no PID file, then you might have a permissions issue on /var/run/jboss-as.
You should also check JBOSS_HOME/standalone/log/console.log to see if there are any errors there.
Replying after a long time, but we were able to run two instances of JBoss successfully by using a port binding offset; when connecting via jboss-cli.sh, provide your incremented port.
For example, if you set a port offset of 2, then connect to port 10001, i.e. 9999 + 2.
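As a concrete sketch (in JBoss AS 7 the system property is jboss.socket.binding.port-offset; paths assumed):
# start the second instance with every port shifted by 2
./standalone.sh -Djboss.socket.binding.port-offset=2
# the native management port becomes 9999 + 2 = 10001
./jboss-cli.sh --controller=localhost:10001 --connect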
