Selenium remote webdriver not accepting Chrome options on Synology NAS. Works fine on desktop - python-3.x

Changing from Windows to Synology has broken my docker compose setup.
The Chrome options are no longer being respected by the Selenium container.
The download location has changed and Chrome is asking for download confirmation.
I have built a Python app to log in and download a report.
I have dockerised it with two separate containers: a Selenium standalone Chrome browser and a Python 3 image. It works fine on my Windows 10 PC.
But when I set it up on my DS918+, some of the Chrome options are not being respected in the Chrome container.
If I use the debug image I can VNC in and confirm the download manually, and it does download fine.
Any help with the Chrome options would be appreciated.
Compose file
version: '3'
services:
  pythoncode:
    build: ./app
    volumes:
      - ./app:/usr/src/app
    networks:
      testing_net:
        ipv4_address: 172.28.1.1
    environment:
      - PYTHONUNBUFFERED=1
      - EmailUser=
      - EmailSender=
      - EmailPass=
      - NZCUser=
      - NZCPass=
  browser:
    image: selenium/standalone-chrome-debug
    ports:
      - "4444:4444"
      - "5900:5900"
    volumes:
      - ./app/Downloads:/home/seluser/Downloads
    depends_on:
      - pythoncode
    networks:
      testing_net:
        ipv4_address: 172.28.1.2
networks:
  testing_net:
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
Python options
from selenium import webdriver

capabilities_chrome = {
    'browserName': 'chrome',
    # 'proxy': {
    #     'proxyType': 'manual',
    #     'sslProxy': '50.59.162.78:8088',
    #     'httpProxy': '50.59.162.78:8088'
    # },
    'goog:chromeOptions': {
        'args': [
        ],
        'prefs': {
            # 'download.default_directory': "",
            # 'download.directory_upgrade': True,
            'download.prompt_for_download': False,
            'plugins.always_open_pdf_externally': True,
            'safebrowsing_for_trusted_sources_enabled': False
        }
    }
}

# driver = webdriver.Chrome(
#     executable_path=, desired_capabilities=capabilities_chrome)
driver = webdriver.Remote(
    'http://172.28.1.2:4444/wd/hub', capabilities_chrome)
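For reference, the same prefs can also be built through a ChromeOptions object, which serialises them under goog:chromeOptions itself. A minimal sketch, assuming Selenium 3.x and assuming the download directory should match the container side of the Downloads volume mount above:

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option('prefs', {
    # assumed path: the container side of ./app/Downloads:/home/seluser/Downloads
    'download.default_directory': '/home/seluser/Downloads',
    'download.prompt_for_download': False,
    'plugins.always_open_pdf_externally': True,
    'safebrowsing_for_trusted_sources_enabled': False
})

# to_capabilities() produces the same capability shape as the dict above
driver = webdriver.Remote(
    command_executor='http://172.28.1.2:4444/wd/hub',
    desired_capabilities=options.to_capabilities())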
Selenium debug report
2020-03-17 17:00:07,283 INFO Included extra file "/etc/supervisor/conf.d/selenium-debug.conf" during parsing
2020-03-17 17:00:07,283 INFO Included extra file "/etc/supervisor/conf.d/selenium.conf" during parsing
2020-03-17 17:00:07,287 INFO supervisord started with pid 8
2020-03-17 17:00:08,290 INFO spawned: 'xvfb' with pid 11
2020-03-17 17:00:08,292 INFO spawned: 'fluxbox' with pid 12
2020-03-17 17:00:08,294 INFO spawned: 'vnc' with pid 13
2020-03-17 17:00:08,296 INFO spawned: 'selenium-standalone' with pid 14
2020-03-17 17:00:09,298 INFO success: xvfb entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2020-03-17 17:00:09,298 INFO success: fluxbox entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2020-03-17 17:00:09,298 INFO success: vnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2020-03-17 17:00:09,298 INFO success: selenium-standalone entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
17:00:09.306 INFO [GridLauncherV3.parse] - Selenium server version: 3.141.59, revision: e82be7d358
17:00:09.634 INFO [GridLauncherV3.lambda$buildLaunchers$3] - Launching a standalone Selenium Server on port 4444
2020-03-17 17:00:09.747:INFO::main: Logging initialized @1154ms to org.seleniumhq.jetty9.util.log.StdErrLog
17:00:10.317 INFO [WebDriverServlet.<init>] - Initialising WebDriverServlet
17:00:10.515 INFO [SeleniumServer.boot] - Selenium Server is up and running on port 4444
17:00:11.223 INFO [ActiveSessionFactory.apply] - Capabilities are: {
  "browserName": "chrome",
  "goog:chromeOptions": {
    "args": [
    ],
    "prefs": {
      "download.prompt_for_download": false,
      "plugins.always_open_pdf_externally": true,
      "safebrowsing_for_trusted_sources_enabled": false
    }
  }
}
17:00:11.227 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.grid.session.remote.ServicedSession$Factory (provider: org.openqa.selenium.chrome.ChromeDriverService)
Starting ChromeDriver 80.0.3987.106 (f68069574609230cf9b635cd784cfb1bf81bb53a-refs/branch-heads/3987@{#882}) on port 2548
Only local connections are allowed.
Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
[1584464411.379][SEVERE]: bind() failed: Cannot assign requested address (99)
17:00:14.338 INFO [ProtocolHandshake.createSession] - Detected dialect: W3C
17:00:14.392 INFO [RemoteSession$Factory.lambda$performHandshake$0] - Started new session f885300040124a841b3627e1fdf57233 (org.openqa.selenium.chrome.ChromeDriverService)
[1584464414.961][SEVERE]: Timed out receiving message from renderer: 0.100
[1584464415.146][SEVERE]: Timed out receiving message from renderer: 0.100
[1584464415.248][SEVERE]: Timed out receiving message from renderer: 0.100
[1584464415.350][SEVERE]: Timed out receiving message from renderer: 0.100
17:01:03.576 INFO [ActiveSessions$1.onStop] - Removing session f885300040124a841b3627e1fdf57233 (org.openqa.selenium.chrome.ChromeDriverService)
Trapped SIGTERM/SIGINT/x so shutting down supervisord...
2020-03-17 17:01:08,949 WARN received SIGTERM indicating exit request
2020-03-17 17:01:08,949 INFO waiting for xvfb, selenium-standalone, vnc, fluxbox to die
2020-03-17 17:01:08,950 INFO stopped: selenium-standalone (terminated by SIGTERM)
2020-03-17 17:01:08,951 INFO stopped: vnc (terminated by SIGTERM)
2020-03-17 17:01:08,954 INFO stopped: fluxbox (terminated by SIGTERM)
2020-03-17 17:01:08,955 INFO stopped: xvfb (terminated by SIGTERM)

Related

Cannot Proxy Requests From React Node.js App to Express Backend (Docker)

I'm having issues understanding how to proxy my requests to Express routes in my backend while accounting for the local development use case and the docker containerized use case. What I'm trying to set up is a situation in which I have "proxy" configured for "http://localhost:8080" on my local env and http://api:8080 configured for my container. What I have thus far is createProxyMiddleware configured like so...
module.exports = function(app) {
  console.log(process.env.API_URL);
  app.use(
    '/api',
    createProxyMiddleware({
      target: process.env.API_URL,
      changeOrigin: true,
    })
  );
};
And my docker-compose file is configured like so...
version: "3.7"
services:
client:
image: webapp-client
build: ./client
restart: always
environment:
- API_URL=http://api:8080
volumes:
- ./client:/client
- /client/node_modules
labels:
- "traefik.enable=true"
- "traefik.http.routers.client.rule=PathPrefix(`/`)"
- "traefik.http.routers.client.entrypoints=web"
- "traefik.port=3000"
depends_on:
- api
networks:
- webappnetwork
api:
image: webapp-api
build: ./api
restart: always
ports:
- "8080:8080"
volumes:
- ./api:/api
- /api/node_modules
networks:
- webappnetwork
traefik:
image: "traefik:v2.5"
container_name: "traefik"
restart: always
command:
- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
ports:
- "80:80"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
- webappnetwork
networks:
webappnetwork:
external: true
volumes:
pg-data:
pgadmin:
Upon startup, the container logs...
[HPM] Proxy created: / -> http://api:8080
My axios calls look like this...
const <config_name> = {
  method: 'post',
  url: '/<route>',
  headers: {
    'Content-Type': 'application/json'
  },
  data: dataInput
}
As you can see, I set the environment variable and pass it into the createProxyMiddleware method, but for some reason this config doesn't work and gives a 404 when I try to hit a route. Any help with this would be greatly appreciated!

How to configure trojan to make it fall back to the site correctly?

I use the jwilder/nginx-proxy image for automatic HTTPS, and I deploy the trojan-go service through the compose.yml file. The content of the compose.yml file is shown below. I can open the HTTPS website correctly by the domain name, but trojan-go does not fall back to the website correctly, and the log shows
github.com/p4gefau1t/trojan-go/proxy.(*Node).BuildNext:stack.go:29 invalid redirect address. check your http server: trojan_web:80 | dial tcp 172.18.0.2:80: connect: connection refused. Where is the problem? Thank you very much!
version: '3'
services:
  trojan-go:
    image: teddysun/trojan-go:latest
    restart: always
    volumes:
      - ./config.json:/etc/trojan-go/config.json
      - /opt/trojan/nginx/certs/:/opt/crt/:ro
    environment:
      - "VIRTUAL_HOST=domain name"
      - "VIRTUAL_PORT=38232"
      - "LETSENCRYPT_HOST=domain name"
      - "LETSENCRYPT_EMAIL=xxx@gmail.com"
    expose:
      - "38232"
  web1:
    image: nginx:latest
    restart: always
    expose:
      - "80"
    volumes:
      - /opt/trojan/nginx/html:/usr/share/nginx/html:ro
    environment:
      - VIRTUAL_HOST=domain name
      - VIRTUAL_PORT=80
      - LETSENCRYPT_HOST=domain name
      - LETSENCRYPT_EMAIL=xxx@gmail.com
networks:
  default:
    external:
      name: proxy_nginx-proxy
The content of the trojan-go config.json is shown below:
{
  "run_type": "server",
  "local_addr": "0.0.0.0",
  "local_port": 38232,
  "remote_addr": "trojan_web",
  "remote_port": 80,
  "log_level": 1,
  "password": [
    "mypasswd"
  ],
  "ssl": {
    "verify": true,
    "verify_hostname": true,
    "cert": "/opt/crt/domain name.crt",
    "key": "/opt/crt/domain name.key",
    "sni": "domain name"
  },
  "router": {
    "enabled": true,
    "block": [
      "geoip:private"
    ]
  }
}
(PS: I have confirmed that the trojan-go service and the web container are on the same internal network and can communicate with each other.)

AWS cli working but boto3 not finding profile

I am running a Python script to connect to AWS SSM.
My docker-compose has this volume set up:
- ~/.aws/:/home/airflow/.aws
Boto3 Code:
import os

LOCALHOST = 1
SERVICE = 'ssm'
PROFILE = 'profile3'

# File path
CURRENT_PATH = os.path.dirname(os.path.realpath(__file__))

def get_aws_client(localhost=None):
    """
    Creates boto3 aws client for any service.
    :param localhost: Parameter that enables use of roles in localhost.
    :return: aws client object
    """
    if localhost is not None:
        globals().update(LOCALHOST=localhost)
    boto_object = Boto3AwsClient(localhost=LOCALHOST, profile=PROFILE)
    aws_client = boto_object.aws_client_connect(service=SERVICE)
    return aws_client
It returns:
botocore.exceptions.ProfileNotFound: The config profile (profile3) could not be found
If I run:
docker exec -it webserver bash
and print:
cat /home/airflow/.aws/credentials
cat /home/airflow/.aws/config
I see for credentials:
[default]
aws_access_key_id = XXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXxxxxxxxxxxxxxXXXXXXXXXXX
For config:
[default]
region=eu-west-1
output=json
[profile profile3]
region=eu-west-1
role_arn=arn:aws:iam::333333333333:role/AllowBlablahblah
source_profile=default
[profile profile2]
region=eu-west-1
role_arn=arn:aws:iam::22222222222:role/AllowBliblihblih
source_profile=default
[profile profile1]
region=eu-west-1
role_arn=arn:aws:iam::1111111111111:role/AllowBlubluhbluh
source_profile=default
And I can even run these without problems:
aws s3 ls
aws s3 ls --profile profile3
So I guess the config and credentials files are not really missing, and there is no format issue, since the AWS CLI works.
I don't know what's going on here. Any idea?
Dockerfile:
FROM apache/airflow:2.1.2-python3.8
ARG AIRFLOW_USER_HOME=/opt/airflow
ENV PYTHONPATH "${PYTHONPATH}:/"
ADD ./environtment_config/docker_src ./environtment_config/docker_src
RUN pip install -r environtment_config/docker_src/requirements.pip
Full docker-compose:
version: '3'
services:
webserver:
image: own-airflow2
command: webserver
ports:
- 8080:8080
healthcheck:
test: [ "CMD", "curl", "--fail", "http://localhost:8080/health" ]
interval: 10s
timeout: 10s
retries: 5
restart: always
build:
context: .
dockerfile: Dockerfile3
env_file:
- ./airflow.env
container_name: webserver
volumes:
- ./database_utils:/database_utils
- ./maintenance:/maintenance
- ./utils:/utils
- ./dags:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./airflow_sqlite:/opt/airflow
- ~/.aws/:/home/airflow/.aws
scheduler:
image: own-airflow2
command: scheduler
healthcheck:
test: [ "CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"' ]
interval: 10s
timeout: 10s
retries: 5
restart: always
container_name: scheduler
build:
context: .
dockerfile: Dockerfile3
env_file:
- ./airflow.env
volumes:
- ./database_utils:/database_utils
- ./maintenance:/maintenance
- ./utils:/utils
- ./dags:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./airflow_sqlite:/opt/airflow
- ~/.aws/:/home/airflow/.aws
depends_on:
- webserver
EDIT:
I forgot to say that I added env vars such as:
#Boto3
AWS_CONFIG_FILE=/home/airflow/.aws/config
AWS_SHARED_CREDENTIALS_FILE=/home/airflow/.aws/credentials
to specify clearly the correct paths of the files.
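For reference, a minimal sketch that reproduces just the profile lookup without my Boto3AwsClient wrapper (assuming it uses boto3.Session underneath):

import os
import boto3

# same paths as the env vars above; setdefault is harmless if they are already set
os.environ.setdefault('AWS_CONFIG_FILE', '/home/airflow/.aws/config')
os.environ.setdefault('AWS_SHARED_CREDENTIALS_FILE', '/home/airflow/.aws/credentials')

session = boto3.Session(profile_name='profile3')
print(session.available_profiles)  # should include 'profile3' if the config file is read
ssm = session.client('ssm')        # raises ProfileNotFound if the profile is not visible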

Fix DNS on a docker-compose selenium grid so the selenium node connects to a docker-compose hostname

I have a selenium grid running under docker-compose on a Jenkins machine. My docker-compose includes a simple web server that serves up a single page application, and a test-runner container that orchestrates tests.
version: "3"
services:
hub:
image: selenium/hub
networks:
- selenium
privileged: true
restart: unless-stopped
container_name: hub
ports:
- "4444:4444"
environment:
- SE_OPTS=-browserTimeout 10 -timeout 20
chrome:
image: selenium/node-chrome-debug
networks:
- selenium
privileged: true
restart: unless-stopped
volumes:
- /dev/shm:/dev/shm
depends_on:
- hub
environment:
- HUB_HOST=hub
- HUB_PORT=4444
- SE_OPTS=-browserTimeout 10 -timeout 20
ports:
- "5900:5900"
firefox:
image: selenium/node-firefox-debug
networks:
- selenium
privileged: true
restart: unless-stopped
volumes:
- /dev/shm:/dev/shm
depends_on:
- hub
environment:
- HUB_HOST=hub
- HUB_PORT=4444
- SE_OPTS=-browserTimeout 10 -timeout 20
ports:
- "5901:5900"
runner:
build:
context: ./
dockerfile: ./python.dockerfile
security_opt:
- seccomp=unconfined
cap_add:
- SYS_PTRACE
command: sleep infinity
networks:
- selenium
volumes:
- ./:/app
depends_on:
- hub
- app
- chrome
- firefox
environment:
HUB_CONNECTION_STRING: http://hub:4444/wd/hub
TEST_DOMAIN: "app"
app:
image: nginx:alpine
networks:
- selenium
volumes:
- ../dist:/usr/share/nginx/html
ports:
- "8081:80"
networks:
selenium:
When my tests run (in the runner container above) I can load the home page as long as I use an IP address -
def test_home_page_loads(self):
    host = socket.gethostbyname(self.test_domain)  # this is the TEST_DOMAIN env var above
    self.driver.get(f"http://{host}")
    header = WebDriverWait(self.driver, 40).until(
        EC.presence_of_element_located((By.ID, 'welcome-message')))
    assert(self.driver.title == "My Page Title")
    assert(header.text == "My Header")
But I can't use the hostname app. The following times out -
def test_home_page_with_hostname(self):
    self.driver.get("http://app/")
    email = WebDriverWait(self.driver, 10).until(
        EC.presence_of_element_located((By.ID, 'email')))
The problem I'm facing is that I can't do all this using IP addresses because the web app is connecting to an external IP and I need to configure the API for CORS requests.
I'd assumed the problem was that the chrome container couldn't reach the app container - the issue was that the web server on the app container wasn't serving pages for the hostname I was using. Updating the Nginx conf to include the correct server has fixed the issue.
I can now add the hostname to the access-control-allow-origin settings on the APIs that the webpage is using.
I'm attaching a basic working config here for anyone else looking to do something similar.
docker-compose.yml
version: "3"
services:
hub:
image: selenium/hub
networks:
- selenium
privileged: true
restart: unless-stopped
container_name: hub
ports:
- "4444:4444"
environment:
- SE_OPTS=-browserTimeout 10 -timeout 20
chrome:
image: selenium/node-chrome-debug
networks:
- selenium
privileged: true
restart: unless-stopped
volumes:
- /dev/shm:/dev/shm
depends_on:
- hub
environment:
- HUB_HOST=hub
- HUB_PORT=4444
- SE_OPTS=-browserTimeout 10 -timeout 20
ports:
- "5900:5900"
firefox:
image: selenium/node-firefox-debug
networks:
- selenium
privileged: true
restart: unless-stopped
volumes:
- /dev/shm:/dev/shm
depends_on:
- hub
environment:
- HUB_HOST=hub
- HUB_PORT=4444
- SE_OPTS=-browserTimeout 10 -timeout 20
ports:
- "5901:5900"
runner:
build:
context: ./
dockerfile: ./python.dockerfile
security_opt:
- seccomp=unconfined
cap_add:
- SYS_PTRACE
command: sleep infinity
networks:
- selenium
volumes:
- ./:/app
depends_on:
- hub
- webserver
- chrome
- firefox
environment:
HUB_CONNECTION_STRING: http://hub:4444/wd/hub
TEST_DOMAIN: "webserver"
webserver:
image: nginx:alpine
networks:
- selenium
volumes:
- ../dist:/usr/share/nginx/html
- ./nginx_conf:/etc/nginx/conf.d
ports:
- "8081:80"
networks:
selenium:
default.conf
server {
    listen 80;
    server_name webserver;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
The 'runner' container is based on the python:3 docker image and includes pytest. A simple working test looks like -
test.py
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import os
import pytest
import socket

# Fixture for Chrome
@pytest.fixture(scope="class")
def chrome_driver_init(request):
    hub_connection_string = os.getenv('HUB_CONNECTION_STRING')
    test_domain = os.getenv('TEST_DOMAIN')
    chrome_driver = webdriver.Remote(
        command_executor=hub_connection_string,
        desired_capabilities={
            'browserName': 'chrome',
            'version': '',
            "chrome.switches": ["disable-web-security"],
            'platform': 'ANY'})
    request.cls.driver = chrome_driver
    request.cls.test_domain = test_domain
    yield
    chrome_driver.close()

@pytest.mark.usefixtures("chrome_driver_init")
class Basic_Chrome_Test:
    driver = None
    test_domain = None
    pass

class Test_Atlas(Basic_Chrome_Test):
    def test_home_page_loads(self):
        self.driver.get(f"http://{self.test_domain}")
        header = WebDriverWait(self.driver, 40).until(
            EC.presence_of_element_located((By.ID, 'welcome-message')))
        assert(self.driver.title == "My Page Title")
        assert(header.text == "My Header")
This can be run with something like docker exec -it $(docker-compose ps -q runner) pytest test.py (exec into the runner container and run the tests using pytest).
This framework can then be added to a Jenkins step -
Jenkinsfile
stage('Run Functional Tests') {
    steps {
        echo 'Running Selenium Grid'
        dir("${env.WORKSPACE}/functional_testing") {
            sh "/usr/local/bin/docker-compose -f ${env.WORKSPACE}/functional_testing/docker-compose.yml -p ${currentBuild.displayName} run runner ./wait-for-webserver.sh pytest tests/atlas_test.py"
        }
    }
}
wait-for-webserver.sh
#!/bin/bash
# wait-for-webserver.sh

set -e

cmd="$@"

while ! curl -sSL "http://hub:4444/wd/hub/status" 2>&1 \
        | jq -r '.value.ready' 2>&1 | grep "true" >/dev/null; do
    echo 'Waiting for the Grid'
    sleep 1
done

while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' http://webserver)" != "200" ]]; do
    echo 'Waiting for Webserver'
    sleep 1
done

>&2 echo "Grid & Webserver are ready - executing tests"
exec $cmd
Hope this is useful for someone.

Gitlab 'Gateway Timeout' behind traefik proxy

So I'm trying to set up a gitlab-ce instance on docker swarm using traefik as reverse proxy.
This is my proxy stack;
version: '3'
services:
  traefik:
    image: traefik:alpine
    command: --entryPoints="Name:http Address::80 Redirect.EntryPoint:https" --entryPoints="Name:https Address::443 TLS" --defaultentrypoints="http,https" --acme --acme.acmelogging="true" --acme.email="freelyformd@gmail.com" --acme.entrypoint="https" --acme.storage="acme.json" --acme.onhostrule="true" --docker --docker.swarmmode --docker.domain="mydomain.com" --docker.watch --web
    ports:
      - 80:80
      - 443:443
      - 8080:8080
    networks:
      - traefik-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  traefik-net:
    external: true
And my gitlab stack
version: '3'
services:
  omnibus:
    image: 'gitlab/gitlab-ce:latest'
    hostname: 'lab.mydomain.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://lab.mydomain.com'
        nginx['listen_port'] = 80
        nginx['listen_https'] = false
        registry_external_url 'https://registry.mydomain.com'
        registry_nginx['listen_port'] = 80
        registry_nginx['listen_https'] = false
        gitlab_rails['gitlab_shell_ssh_port'] = 2222
        gitlab_rails['gitlab_email_from'] = 'lab@mydomain.com'
        gitlab_rails['gitlab_email_reply_to'] = 'lab@mydomain.com'
    ports:
      - 2222:22
    volumes:
      - gitlab_config:/etc/gitlab
      - gitlab_logs:/var/log/gitlab
      - gitlab_data:/var/opt/gitlab
    networks:
      - traefik-net
    deploy:
      labels:
        traefik.enable: "port"
        traefik.frontend.rule: 'Host: lab.mydomain.com, Host: registry.mydomain.com'
        traefik.port: 80
      placement:
        constraints:
          - node.role == manager
  runner:
    image: 'gitlab/gitlab-runner:v1.11.4'
    volumes:
      - gitlab_runner_config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock
volumes:
  gitlab_config:
  gitlab_logs:
  gitlab_data:
  gitlab_runner_config:
networks:
  traefik-net:
    external: true
traefik-net is an overlay network
So when I deploy using docker stack deploy and visit lab.mydomain.com, I get the Gateway Timeout error. When I execute curl localhost within the gitlab container, it seems to work fine. Not sure what the problem is; any pointers would be appreciated.
Turns out all I had to do was set the traefik label traefik.docker.network to traefik-net; see https://github.com/containous/traefik/issues/1254. A sketch of where the label goes is below.
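A minimal sketch against the gitlab stack above (only the traefik.docker.network label is new; the other labels are unchanged from the stack file):

deploy:
  labels:
    traefik.docker.network: traefik-net   # added: tells traefik which network to reach the service on
    traefik.frontend.rule: 'Host: lab.mydomain.com, Host: registry.mydomain.com'
    traefik.port: 80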
