I am trying to run the mainline Linux kernel v5.4.0-rc3 as a DomU. My setup details are below.
Target hw: Pine64+
Architecture: aarch64 (arm64)
Xen version: 4.6.5
Dom0: Mainline Linux kernel v5.4.0-rc3
DomU config file:
kernel = "path to kernel image"
memory = "128"
name = "domU"
vcpus = 1
disk = [ 'phy:/dev/loop0,xvda,w' ]
extra = "earlyprintk=xenboot console=hvc0 root=/dev/xvda debug rw init=/bin/sh"
I created the DomU as below.
ubuntu@LXC_NAME:~/workspace/domu$ sudo losetup /dev/loop0 rootfs.ext4
ubuntu@LXC_NAME:~/workspace/domu$ sudo xl -vvv create -d domu.config
But the DomU boot failed with the following messages.
libxl: debug: libxl_device.c:337:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=phy
libxl: debug: libxl_event.c:639:libxl__ev_xswatch_register: watch w=0x238e0bd0 wpath=/local/domain/0/backend/vbd/1/51712/state token=3/0: register slotnum=3
libxl: debug: libxl_create.c:1586:do_domain_create: ao 0x238de720: inprogress: poller=0x238de7b0, flags=i
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x238e0bd0 wpath=/local/domain/0/backend/vbd/1/51712/state token=3/0: event epath=/local/domain/0/backend/vbd/1/51712/state
libxl: debug: libxl_event.c:884:devstate_callback: backend /local/domain/0/backend/vbd/1/51712/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x238e0bd0 wpath=/local/domain/0/backend/vbd/1/51712/state token=3/0: event epath=/local/domain/0/backend/vbd/1/51712/state
libxl: debug: libxl_event.c:880:devstate_callback: backend /local/domain/0/backend/vbd/1/51712/state wanted state 2 ok
libxl: debug: libxl_event.c:677:libxl__ev_xswatch_deregister: watch w=0x238e0bd0 wpath=/local/domain/0/backend/vbd/1/51712/state token=3/0: deregister slotnum=3
libxl: debug: libxl_device.c:991:device_backend_callback: calling device_backend_cleanup
libxl: debug: libxl_event.c:691:libxl__ev_xswatch_deregister: watch w=0x238e0bd0: deregister unregistered
libxl: error: libxl.c:1991:libxl__get_domid: failed to get own domid (domid)
libxl: error: libxl_device.c:1041:device_hotplug: Failed to get domid
libxl: debug: libxl_event.c:691:libxl__ev_xswatch_deregister: watch w=0x238e0cd0: deregister unregistered
libxl: error: libxl_create.c:1176:domcreate_launch_dm: unable to add disk devices
libxl: error: libxl.c:1991:libxl__get_domid: failed to get own domid (domid)
libxl: error: libxl_device.c:849:libxl__initiate_device_remove: unable to get my domid
libxl: debug: libxl_event.c:691:libxl__ev_xswatch_deregister: watch w=0x238d9210: deregister unregistered
libxl: error: libxl.c:1991:libxl__get_domid: failed to get own domid (domid)
libxl: error: libxl.c:1684:devices_destroy_cb: libxl__devices_destroy failed for 1
libxl: debug: libxl.c:1738:devices_destroy_cb: forked pid 679 for destroy of domain 1
libxl: debug: libxl_event.c:1874:libxl__ao_complete: ao 0x238de720: complete, rc=-3
libxl: debug: libxl_event.c:1843:libxl__ao__destroy: ao 0x238de720: destroy
libxl: debug: libxl.c:1477:libxl_domain_destroy: ao 0x238d8a90: create: how=(nil) callback=(nil) poller=0x238de7b0
libxl: error: libxl.c:1610:libxl__destroy_domid: non-existant domain 1
libxl: error: libxl.c:1568:domain_destroy_callback: unable to destroy guest with domid 1
libxl: error: libxl.c:1495:domain_destroy_cb: destruction of domain 1 failed
libxl: debug: libxl_event.c:1874:libxl__ao_complete: ao 0x238d8a90: complete, rc=-21
libxl: debug: libxl.c:1486:libxl_domain_destroy: ao 0x238d8a90: inprogress: poller=0x238de7b0, flags=ic
libxl: debug: libxl_event.c:1843:libxl__ao__destroy: ao 0x238d8a90: destroy
xc: debug: hypercall buffer: total allocations:97 total releases:97
xc: debug: hypercall buffer: current allocations:0 maximum allocations:3
xc: debug: hypercall buffer: cache current size:3
xc: debug: hypercall buffer: cache hits:87 misses:3 toobig:7
DomU Complete boot log
xl list confirms that the domU was not created.
ubuntu@LXC_NAME:~/workspace/domu$ sudo xl list
Name ID Mem VCPUs State Time(s)
(null) 0 256 2 r----- 9.8
I am finding it difficult to understand the root cause of this issue. Could anyone shed light on what went wrong?
Xen boot log
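(One check suggested by the repeated libxl "failed to get own domid" errors and the "(null)" Dom0 name above: whether XenStore is reachable and populated in Dom0 at all, since libxl reads its own domid from XenStore. A sketch, assuming the standard xenstore utilities:)
# xenfs must be mounted for the toolstack to reach XenStore
mount | grep xenfs || sudo mount -t xenfs xenfs /proc/xen
# libxl reads the relative XenStore path "domid"
sudo xenstore-read domid
sudo xenstore-ls /local/domain/0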
Are the drive used at creation and the drive accessed by the instance the same? If so, can you attach another disk to the domain and try accessing the domain again?
For example, to attach it to the domain, use the below:
xm block-attach Domain-0 file:/home/xen/vmdisk0 xvda w
and then add the corresponding entry to the config file:
disk=['file:/home/xen/vmdisk0,xvda,w']
There are some additional parameters that can be set in the config file, such as "builder" (the domain build function) and device_model; and if you are using a physical disk drive you should reference the device directly, e.g. /dev/sdb.
If you are loading a physical image as mentioned above, you have to use something like this: disk = [ 'file:/home/XEN_MI.img,hda,w' ]
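Note that Xen 4.6 ships the xl toolstack (xm/xend was removed in Xen 4.5), so the equivalent attach would be something along these lines (a sketch; the domain name and path follow the examples above):
# Attach the image file to the running domain as a second disk
sudo xl block-attach domU 'file:/home/xen/vmdisk0,xvdb,w'
# Confirm the state of the domain's block devices
sudo xl block-list domU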
Hardware/IDE Context:
Part/board: Genuine STM32F103C8 (BluePill)
Programmer: ST-Link V2
IDE: STM32CubeIDE 1.5.1 on fully-updated Windows 10
Flashing utility/debugger: OpenOCD
In attempting to build/flash a simple PC_13 LED blinky program to my BluePill board, I experience errors from OpenOCD like so:
Open On-Chip Debugger 0.10.0+dev-01288-g7491fb4 (2020-10-27-17:36)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : Listening on port 6666 for tcl connections
Info : Listening on port 4444 for telnet connections
Info : STLINK V2J37S7 (API v2) VID:PID 0483:3748
Info : Target voltage: 3.256346
Info : Unable to match requested speed 8000 kHz, using 4000 kHz
Info : Unable to match requested speed 8000 kHz, using 4000 kHz
Info : clock speed 4000 kHz
Info : stlink_dap_op_connect(connect)
Info : SWD DPIDR 0x1ba01477
Info : STM32F103C8Tx.cpu: hardware has 6 breakpoints, 4 watchpoints
Info : STM32F103C8Tx.cpu: external reset detected
Info : starting gdb server for STM32F103C8Tx.cpu on 3333
Info : Listening on port 3333 for gdb connections
Info : accepting 'gdb' connection on tcp/3333
Error: timed out while waiting for target halted
Error executing event gdb-attach on target STM32F103C8Tx.cpu:
TARGET: STM32F103C8Tx.cpu - Not halted
Info : device id = 0x20036410
Info : flash size = 64kbytes
Info : accepting 'gdb' connection on tcp/3333
Error: timed out while waiting for target halted
Error executing event gdb-attach on target STM32F103C8Tx.cpu:
TARGET: STM32F103C8Tx.cpu - Not halted
Error: timed out while waiting for target halted
Error executing event gdb-flash-erase-start on target STM32F103C8Tx.cpu:
TARGET: STM32F103C8Tx.cpu - Not halted
Error: Target not halted
Error: failed erasing sectors 0 to 5
Error: flash_erase returned -304
shutdown command invoked
Info : dropped 'gdb' connection
shutdown command invoked
I'm interested in using OpenOCD-based flashing for my project to make use of some STM32F103C8 clone boards I have lying around, but the upload process works again when I switch the flashing mode/"Debug Probe" in STM32CubeIDE back to ST-Link (ST-Link GDB Server) from ST-Link (OpenOCD).
This is a peculiar error to me, especially since I specifically remember this exact configuration (STM32CubeIDE + OpenOCD + ST-Link + STM32F103C8) working a couple of months ago. Does anyone have any ideas as to what could be causing this? I have set the OpenOCD debugger to use the standard auto-generated config file.
Also please let me know if there is any more information/details you'd need to help diagnose this issue. I'd be happy to provide anything necessary.
EDIT 2/22/2021:
Here is a copy of the auto-generated (by STM32CubeIDE) OpenOCD .cfg file:
# This is an genericBoard board with a single STM32F103C8Tx chip
#
# Generated by STM32CubeIDE
# Take care that such file, as generated, may be overridden without any early notice. Please have a look to debug launch configuration setup(s)
source [find interface/stlink-dap.cfg]
set WORKAREASIZE 0x5000
transport select "dapdirect_swd"
set CHIPNAME STM32F103C8Tx
set BOARDNAME genericBoard
# Enable debug when in low power modes
set ENABLE_LOW_POWER 1
# Stop Watchdog counters when halt
set STOP_WATCHDOG 1
# STlink Debug clock frequency
set CLOCK_FREQ 8000
# Reset configuration
# use hardware reset, connect under reset
# connect_assert_srst needed if low power mode application running (WFI...)
reset_config srst_only srst_nogate connect_assert_srst
set CONNECT_UNDER_RESET 1
set CORE_RESET 0
# ACCESS PORT NUMBER
set AP_NUM 0
# GDB PORT
set GDB_PORT 3333
# BCTM CPU variables
source [find target/stm32f1x.cfg]
#SWV trace
tpiu config disable
Ultimately, after some further research and trial & error, I found a fix that seems to work for me. I noticed that when the error with halting the CPU came up, the correct program seemed to have been loaded and the RESET button just needed to be toggled manually. These are the OpenOCD settings I ended up with:
Changes from the default configuration:
SWD Frequency: 8 → 4 MHz
This is technically not required for it to work, but OpenOCD automatically falls back to 4 MHz during the upload anyway
Reset Mode: Connect Under Reset → None
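In terms of the auto-generated .cfg above, those two changes correspond roughly to the following overrides (a sketch; CLOCK_FREQ and CONNECT_UNDER_RESET are variables from the generated file, and plain "reset_config none" is my assumption for what the IDE's "None" reset mode emits):
# SWD clock at 4 MHz instead of 8 MHz
set CLOCK_FREQ 4000
# Reset mode "None": attach over SWD without asserting the reset line
reset_config none
set CONNECT_UNDER_RESET 0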
This works for me with the following output:
Open On-Chip Debugger 0.10.0+dev-01288-g7491fb4 (2020-10-27-17:36)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : Listening on port 6666 for tcl connections
Info : Listening on port 4444 for telnet connections
Info : STLINK V2J37S7 (API v2) VID:PID 0483:3748
Info : Target voltage: 3.254751
Info : clock speed 4000 kHz
Info : stlink_dap_op_connect(connect)
Info : SWD DPIDR 0x1ba01477
Info : STM32F103C8Tx.cpu: hardware has 6 breakpoints, 4 watchpoints
Info : STM32F103C8Tx.cpu: external reset detected
Info : starting gdb server for STM32F103C8Tx.cpu on 3333
Info : Listening on port 3333 for gdb connections
Info : accepting 'gdb' connection on tcp/3333
Info : device id = 0x20036410
Info : flash size = 64kbytes
undefined debug reason 8 - target needs reset
Info : accepting 'gdb' connection on tcp/3333
undefined debug reason 8 - target needs reset
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x08000474 msp: 0x20005000
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x08000474 msp: 0x20005000
shutdown command invoked
Info : dropped 'gdb' connection
shutdown command invoked
I have deployed this PostgreSQL image to the IBM Cloud, Google Cloud Platform and Microsoft Azure using Kubernetes. https://github.com/paunin/PostDock
It deployed successfully on all 3 platforms with identical configurations and an identical process, but on the IBM Cloud it fails at runtime with the error "psql: FATAL: password authentication failed for user "replica_user"".
You can find the logs from all 3 cloud platforms below. Has anyone experienced this?
IBM Cloud Log
>>> Setting up STOP handlers...
>>> STARTING SSH (if required)...
>>> SSH is not enabled!
>>> STARTING POSTGRES...
>>> TUNING UP POSTGRES...
>>> Cleaning data folder which might have some garbage...
psql: FATAL: password authentication failed for user "replica_user"
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node2-service" (172.30.65.206) and accepting
TCP/IP connections on port 5432?
>>> Auto-detected master name: ''
>>> Setting up repmgr...
>>> Setting up repmgr config file '/etc/repmgr.conf'...
>>> Setting up upstream node...
cat: /var/lib/postgresql/data/standby.lock: No such file or directory
>>> Previously Locked standby upstream node LOCKED_STANDBY=''
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
psql: FATAL: password authentication failed for user "replica_user"
>>>>>> Db replica_db is still not accessable on cyclos-postgres-node1-service:5432 (will try 30 times more)
....
The last couple of lines are then repeated many times.
This is the log file from deploying the same application, using an identical process, on the Google Cloud. It works just fine on the Google Cloud Platform.
Google Cloud Log
>>> Setting up STOP handlers...
>>> STARTING SSH (if required)...
>>> SSH is not enabled!
>>> STARTING POSTGRES...
>>> TUNING UP POSTGRES...
>>> Cleaning data folder which might have some garbage...
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node1-service" (10.52.0.11) and accepting
TCP/IP connections on port 5432?
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node2-service" (10.52.0.12) and accepting
TCP/IP connections on port 5432?
>>> Auto-detected master name: ''
>>> Setting up repmgr...
>>> Setting up repmgr config file '/etc/repmgr.conf'...
>>> Setting up upstream node...
cat: /var/lib/postgresql/data/standby.lock: No such file or directory
>>> Previously Locked standby upstream node LOCKED_STANDBY=''
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node1-service" (10.52.0.11) and accepting
TCP/IP connections on port 5432?
>>>>>> Db replica_db is still not accessable on cyclos-postgres-node1-service:5432 (will try 30 times more)
>>>>>> Db replica_db is still not accessable on cyclos-postgres-node1-service:5432 (will try 29 times more)
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node1-service" (10.52.0.11) and accepting
TCP/IP connections on port 5432?
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node1-service" (10.52.0.11) and accepting
TCP/IP connections on port 5432?
>>>>>> Db replica_db is still not accessable on cyclos-postgres-node1-service:5432 (will try 28 times more)
>>>>>> Db replica_db exists on cyclos-postgres-node1-service:5432!
>>> REPLICATION_UPSTREAM_NODE_ID=1
>>> Sending in background postgres start...
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
>>>>>> Db replica_db exists on cyclos-postgres-node1-service:5432!
>>> Starting standby node...
>>> Instance hasn't been set up yet.
>>> Clonning primary node...
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
NOTICE: destination directory '/var/lib/postgresql/data' provided
INFO: connecting to upstream node
INFO: Successfully connected to upstream node. Current installation size is 34 MB
INFO: checking and correcting permissions on existing directory /var/lib/postgresql/data ...
>>>>>> Db replica_db exists on cyclos-postgres-node1-service:5432!
NOTICE: starting backup (using pg_basebackup)...
INFO: executing: '/usr/lib/postgresql/9.5/bin/pg_basebackup -l "repmgr base backup" -D /var/lib/postgresql/data -h cyclos-postgres-node1-service -p 5432 -U replica_user -c fast -X stream '
NOTICE: standby clone (using pg_basebackup) complete
NOTICE: you can now start your PostgreSQL server
HINT: for example : pg_ctl -D /var/lib/postgresql/data start
HINT: After starting the server, you need to register this standby with "repmgr standby register"
[REPMGR EVENT] Node id: 2; Event type: standby_clone; Success [1|0]: 1; Time: 2018-02-02 13:24:32.87843+00; Details: Cloned from host 'cyclos-postgres-node1-service', port 5432; backup method: pg_basebackup; --force: Y
>>> Configuring /var/lib/postgresql/data/postgresql.conf
>>>>>> Will add configs to exists file
>>> Starting postgres...
>>> Waiting for local postgres server start...
>>> Wait db replica_db on cyclos-postgres-node2-service:5432(user: replica_user,password: *******), will try 60 times with delay 10 seconds (TIMEOUT=600)
LOG: incomplete startup packet
LOG: incomplete startup packet
LOG: database system was interrupted; last known up at 2018-02-02 13:24:31 UTC
FATAL: the database system is starting up
psql: FATAL: the database system is starting up
>>>>>> Db replica_db is still not accessable on cyclos-postgres-node2-service:5432 (will try 60 times more)
LOG: entering standby mode
LOG: redo starts at 0/2000028
LOG: consistent recovery state reached at 0/20000F8
LOG: database system is ready to accept read only connections
LOG: started streaming WAL from primary at 0/3000000 on timeline 1
>>>>>> Db replica_db exists on cyclos-postgres-node2-service:5432!
>>> Waiting for replication on this node is over(if any in progress): CLEAN_UP_ON_FAIL=, INTERVAL=30
>>> Replication is done
>>> Unregister the node if it was done before
DELETE 0
>>> Registering node with role standby
INFO: connecting to standby database
INFO: connecting to master database
INFO: retrieving node list for cluster 'postgres_cluster'
INFO: registering the standby
[REPMGR EVENT] Node id: 2; Event type: standby_register; Success [1|0]: 1; Time: 2018-02-02 13:24:51.891592+00; Details:
INFO: standby registration complete
NOTICE: standby node correctly registered for cluster postgres_cluster with id 2 (conninfo: user=replica_user password=replica_pass host=cyclos-postgres-node2-service dbname=replica_db port=5432 connect_timeout=2)
Locking standby (NEW_UPSTREAM_NODE_ID=1)...
>>> Starting repmgr daemon...
[2018-02-02 13:24:53] [NOTICE] looking for configuration file in current directory
[2018-02-02 13:24:53] [NOTICE] looking for configuration file in /etc
[2018-02-02 13:24:53] [NOTICE] configuration file found at: /etc/repmgr.conf
[2018-02-02 13:24:53] [INFO] connecting to database 'user=replica_user password=replica_pass host=cyclos-postgres-node2-service dbname=replica_db port=5432 connect_timeout=2'
[2018-02-02 13:24:53] [INFO] connected to database, checking its state
[2018-02-02 13:24:53] [INFO] connecting to master node of cluster 'postgres_cluster'
[2018-02-02 13:24:53] [INFO] retrieving node list for cluster 'postgres_cluster'
[2018-02-02 13:24:53] [INFO] checking role of cluster node '1'
[2018-02-02 13:24:53] [INFO] checking cluster configuration with schema 'repmgr_postgres_cluster'
[2018-02-02 13:24:53] [INFO] checking node 2 in cluster 'postgres_cluster'
[2018-02-02 13:24:53] [INFO] reloading configuration file
[2018-02-02 13:24:53] [INFO] configuration has not changed
[2018-02-02 13:24:53] [INFO] starting continuous standby node monitoring
ERROR: cannot execute DELETE in a read-only transaction
STATEMENT: DELETE FROM repmgr_postgres_cluster.repl_nodes WHERE conninfo LIKE '%host=cyclos-postgres-node3-service%'
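(The final ERROR above is the standby rejecting a write; whether a given node is currently a read-only standby can be verified with the standard pg_is_in_recovery() function. A sketch using the service names from the log:)
# Returns 't' on a standby, 'f' on the master
psql -h cyclos-postgres-node2-service -p 5432 -U replica_user -d replica_db -c "SELECT pg_is_in_recovery();"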
And on the Azure Cloud, it works just fine as well.
Azure Cloud Log
>>> Setting up STOP handlers...
>>> STARTING SSH (if required)...
>>> SSH is not enabled!
>>> STARTING POSTGRES...
>>> TUNING UP POSTGRES...
>>> Cleaning data folder which might have some garbage...
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node2-service" (10.244.0.9) and accepting
TCP/IP connections on port 5432?
>>> Auto-detected master name: 'cyclos-postgres-node1-service'
>>> Setting up repmgr...
>>> Setting up repmgr config file '/etc/repmgr.conf'...
>>> Setting up upstream node...
cat: /var/lib/postgresql/data/standby.lock: No such file or directory
>>> Previously Locked standby upstream node LOCKED_STANDBY=''
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
>>>>>> Db replica_db exists on cyclos-postgres-node1-service:5432!
>>> REPLICATION_UPSTREAM_NODE_ID=1
>>> Sending in background postgres start...
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
>>>>>> Db replica_db exists on cyclos-postgres-node1-service:5432!
>>> Starting standby node...
>>> Instance hasn't been set up yet.
>>> Clonning primary node...
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
NOTICE: destination directory '/var/lib/postgresql/data' provided
INFO: connecting to upstream node
>>>>>> Db replica_db exists on cyclos-postgres-node1-service:5432!
INFO: Successfully connected to upstream node. Current installation size is 34 MB
INFO: checking and correcting permissions on existing directory /var/lib/postgresql/data ...
NOTICE: starting backup (using pg_basebackup)...
INFO: executing: '/usr/lib/postgresql/9.5/bin/pg_basebackup -l "repmgr base backup" -D /var/lib/postgresql/data -h cyclos-postgres-node1-service -p 5432 -U replica_user -c fast -X stream '
NOTICE: standby clone (using pg_basebackup) complete
NOTICE: you can now start your PostgreSQL server
HINT: for example : pg_ctl -D /var/lib/postgresql/data start
HINT: After starting the server, you need to register this standby with "repmgr standby register"
[REPMGR EVENT] Node id: 2; Event type: standby_clone; Success [1|0]: 1; Time: 2018-02-02 06:50:47.340146+00; Details: Cloned from host 'cyclos-postgres-node1-service', port 5432; backup method: pg_basebackup; --force: Y
>>> Configuring /var/lib/postgresql/data/postgresql.conf
>>>>>> Will add configs to exists file
>>> Starting postgres...
>>> Waiting for local postgres server start...
>>> Wait db replica_db on cyclos-postgres-node2-service:5432(user: replica_user,password: *******), will try 60 times with delay 10 seconds (TIMEOUT=600)
LOG: incomplete startup packet
LOG: database system was interrupted; last known up at 2018-02-02 06:50:46 UTC
LOG: incomplete startup packet
FATAL: the database system is starting up
psql: FATAL: the database system is starting up
>>>>>> Db replica_db is still not accessable on cyclos-postgres-node2-service:5432 (will try 60 times more)
LOG: entering standby mode
LOG: redo starts at 0/2000028
LOG: consistent recovery state reached at 0/2000130
LOG: database system is ready to accept read only connections
LOG: started streaming WAL from primary at 0/3000000 on timeline 1
>>>>>> Db replica_db exists on cyclos-postgres-node2-service:5432!
>>> Waiting for replication on this node is over(if any in progress): CLEAN_UP_ON_FAIL=, INTERVAL=30
>>> Replication is done
>>> Unregister the node if it was done before
DELETE 0
>>> Registering node with role standby
INFO: connecting to standby database
INFO: connecting to master database
INFO: retrieving node list for cluster 'postgres_cluster'
INFO: registering the standby
[REPMGR EVENT] Node id: 2; Event type: standby_register; Success [1|0]: 1; Time: 2018-02-02 06:51:05.083455+00; Details:
INFO: standby registration complete
NOTICE: standby node correctly registered for cluster postgres_cluster with id 2 (conninfo: user=replica_user password=replica_pass host=cyclos-postgres-node2-service dbname=replica_db port=5432 connect_timeout=2)
Locking standby (NEW_UPSTREAM_NODE_ID=1)...
>>> Starting repmgr daemon...
[2018-02-02 06:51:05] [NOTICE] looking for configuration file in current directory
[2018-02-02 06:51:05] [NOTICE] looking for configuration file in /etc
[2018-02-02 06:51:05] [NOTICE] configuration file found at: /etc/repmgr.conf
[2018-02-02 06:51:05] [INFO] connecting to database 'user=replica_user password=replica_pass host=cyclos-postgres-node2-service dbname=replica_db port=5432 connect_timeout=2'
[2018-02-02 06:51:06] [INFO] connected to database, checking its state
[2018-02-02 06:51:06] [INFO] connecting to master node of cluster 'postgres_cluster'
[2018-02-02 06:51:06] [INFO] retrieving node list for cluster 'postgres_cluster'
[2018-02-02 06:51:06] [INFO] checking role of cluster node '1'
[2018-02-02 06:51:06] [INFO] checking cluster configuration with schema 'repmgr_postgres_cluster'
[2018-02-02 06:51:06] [INFO] checking node 2 in cluster 'postgres_cluster'
[2018-02-02 06:51:06] [INFO] reloading configuration file
[2018-02-02 06:51:06] [INFO] configuration has not changed
[2018-02-02 06:51:06] [INFO] starting continuous standby node monitoring
ERROR: cannot execute DELETE in a read-only transaction
STATEMENT: DELETE FROM repmgr_postgres_cluster.repl_nodes WHERE conninfo LIKE '%host=cyclos-postgres-node3-service%'
I was able to run this on a paid cluster in IBM Cloud and it appears to be working. I did NOT use persistent volumes. Please note that persistent volumes are not available on free clusters, so if you are testing on a free cluster with persistent volumes you will run into issues.
My cluster has 3 workers of size u2c.2x4 (the smallest available) and is on the default Kubernetes version for IBM Cloud (1.8.6), if that helps you debug at all. Please try again, or if your setup is different from mine, let me know and I can try with a matching setup.
$ kubectl logs --namespace=mysystem mysystem-db-node1-0
>>> Setting up STOP handlers...
>>> STARTING SSH (if required)...
>>> SSH is not enabled!
>>> STARTING POSTGRES...
>>> TUNING UP POSTGRES...
>>> Cleaning data folder which might have some garbage...
psql: could not translate host name "mysystem-db-node1-service" to address: Name or service not known
psql: could not translate host name "mysystem-db-node2-service" to address: Name or service not known
>>> Auto-detected master name: ''
>>> Setting up repmgr...
>>> Setting up repmgr config file '/etc/repmgr.conf'...
>>> Setting up upstream node...
>>> Sending in background postgres start...
>>> Waiting for local postgres server start...
>>> Wait db replica_db on mysystem-db-node1-service:5432(user: replica_user,password: *******), will try 60 times with delay 10 seconds (TIMEOUT=600)
psql: could not translate host name "mysystem-db-node3-service" to address: Name or service not known
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
psql: could not connect to server: Connection refused
Is the server running on host "mysystem-db-node1-service" (172.30.207.54) and accepting
TCP/IP connections on port 5432?
selecting default shared_buffers ... >>>>>> Db replica_db is still not accessable on mysystem-db-node1-service:5432 (will try 60 times more)
128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /var/lib/postgresql/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/data -l logfile start
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....LOG: could not bind IPv6 socket: Cannot assign requested address
HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
LOG: database system was shut down at 2018-02-14 15:40:14 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
done
server started
CREATE DATABASE
CREATE ROLE
/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/entrypoint.sh
>>> Configuring /var/lib/postgresql/data/postgresql.conf
>>>>>> Config file was replaced with standard one!
>>>>>> Adding config 'wal_keep_segments'='250'
>>>>>> Adding config 'shared_buffers'='300MB'
>>>>>> Adding config 'archive_command'=''/bin/true''
>>> Creating replication user 'replica_user'
CREATE ROLE
>>> Creating replication db 'replica_db'
LOG: received fast shutdown request
LOG: aborting any active transactions
LOG: autovacuum launcher shutting down
waiting for server to shut down....LOG: shutting down
LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
LOG: database system was shut down at 2018-02-14 15:40:16 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
LOG: incomplete startup packet
LOG: incomplete startup packet
>>>>>> Db replica_db exists on mysystem-db-node1-service:5432!
>>> Registering node with role master
INFO: connecting to master database
INFO: master register: creating database objects inside the 'repmgr_mysystem_cluster' schema
INFO: retrieving node list for cluster 'mysystem_cluster'
[REPMGR EVENT] Node id: 1; Event type: master_register; Success [1|0]: 1; Time: 2018-02-14 15:40:27.337393+00; Details:
[REPMGR EVENT] will execute script '/usr/local/bin/cluster/repmgr/events/execs/master_register.sh' for the event
[REPMGR EVENT::master_register] Node id: 1; Event type: master_register; Success [1|0]: 1; Time: 2018-02-14 15:40:27.337393+00; Details:
[REPMGR EVENT::master_register] Locking master...
[REPMGR EVENT::master_register] Unlocking standby...
NOTICE: master node correctly registered for cluster 'mysystem_cluster' with id 1 (conninfo: user=replica_user password=replica_pass host=mysystem-db-node1-service dbname=replica_db port=5432 connect_timeout=2)
>>> Starting repmgr daemon...
[2018-02-14 15:40:27] [NOTICE] looking for configuration file in current directory
[2018-02-14 15:40:27] [NOTICE] looking for configuration file in /etc
[2018-02-14 15:40:27] [NOTICE] configuration file found at: /etc/repmgr.conf
[2018-02-14 15:40:27] [INFO] connecting to database 'user=replica_user password=replica_pass host=mysystem-db-node1-service dbname=replica_db port=5432 connect_timeout=2'
[2018-02-14 15:40:27] [INFO] connected to database, checking its state
[2018-02-14 15:40:27] [INFO] checking cluster configuration with schema 'repmgr_mysystem_cluster'
[2018-02-14 15:40:27] [INFO] checking node 1 in cluster 'mysystem_cluster'
[2018-02-14 15:40:27] [INFO] reloading configuration file
[2018-02-14 15:40:27] [INFO] configuration has not changed
[2018-02-14 15:40:27] [INFO] starting continuous master connection check
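If the hostname or authentication errors persist, it may be worth confirming that the node services resolve and have endpoints before digging into repmgr itself. A sketch, with the namespace and names taken from the log above:
# Confirm the services exist and have endpoints
kubectl get svc --namespace=mysystem
kubectl get endpoints --namespace=mysystem
# Resolve a peer service from inside a pod
kubectl exec --namespace=mysystem mysystem-db-node1-0 -- getent hosts mysystem-db-node2-service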
I'm building some CentOS VMs with VMware, with no internet access, so I've downloaded and set up local repositories, including this one.
I then installed docker-engine.x86_64, and when starting the Docker daemon I get the following errors:
[root]# dockerd
DEBU[0000] docker group found. gid: 993
...
...
DEBU[0001] Error retrieving the next available loopback: open /dev/loop-control: no such device
ERRO[0001] There are no more loopback devices available.
ERRO[0001] [graphdriver] prior storage driver "devicemapper" failed: loopback attach failed
DEBU[0001] Cleaning up old mountid : start.
FATA[0001] Error starting daemon: error initializing graphdriver: loopback attach failed
After manually adding the loop module, which controls loop devices, with this command:
insmod /lib/modules/3.10.0-327.36.2.el7.x86_64/kernel/drivers/block/loop.ko
The error changes to:
[graphdriver] prior storage driver "devicemapper" failed: devicemapper: Error running deviceCreate (CreatePool) dm_task_run failed
I've read that it could be because I don't have enough disk space, but I don't think that's the case. Any idea?
[root]# df -k .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos-root 51887356 2436256 49451100 5% /
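(For what it's worth, whether the loop driver is available can be checked directly; /dev/loop-control only appears once the loop module is loaded:)
ls -l /dev/loop-control /dev/loop0
lsmod | grep loop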
I got the "There are no more loopback devices available" error, which stopped dockerd from running.
I fixed it by ensuring the storage driver was 'overlay':
# /usr/bin/dockerd -D --storage-driver=overlay
This was on Debian Jessie and docker running as a systemd service/unit.
To make it permanent, I created a systemd drop-in:
$ cat /etc/systemd/system/docker.service.d/docker.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --storage-driver=overlay
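After creating the drop-in, systemd needs to re-read its unit files before the service restart takes effect (standard systemd commands, assuming the unit is named docker.service):
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ docker info | grep -i 'storage driver'   # confirm the daemon picked up overlay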
I am trying to configure an Appium grid with the node configuration below.
The hub is receiving the proper capabilities from TestNG, but the hub is sending the wrong capabilities to my only two nodes (the node configs are as follows).
Please suggest where I am going wrong.
Emulator Node config
Command used to run the node: node appium.js --port 4723 --nodeconfig G:\Selenium2\Grid\AppiumEmulatorNode.json
{
"capabilities": [{
"browserName": "Emulator_5.1.0",
"version": "5.1.0",
"maxInstances": 1,
"platform": "ANDROID"
}],
"configuration": {
"cleanUpCycle": 2000,
"timeout": 30000,
"proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
"url": "http://192.168.0.104:4723/wd/hub",
"host": 192.168.0.104,
"port": 4723,
"maxSession": 1,
"register": true,
"registerCycle": 5000,
"hubPort": 4444 ,
"hubHost": "192.168.0.104"
}
}
Real device Node config
Command used to run the node: node appium.js --port 4724 --nodeconfig G:\Selenium2\Grid\AppiumRealDevice.json
{
"capabilities": [{
"browserName": "Sony Xperia SP",
"version": "5.1.1",
"maxInstances": 1,
"platform": "ANDROID"
}],
"configuration": {
"cleanUpCycle": 2000,
"timeout": 30000,
"proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
"url": "http://192.168.0.104:4724/wd/hub",
"host": "192.168.0.104",
"port": 4724,
"maxSession": 1,
"register": true,
"registerCycle": 5000,
"hubPort": 4444 ,
"hubHost": "192.168.0.104"
}
}
Testng file
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd" >
<suite name="TestsSuite" parallel="tests" thread-count="2" verbose="5">
<test name="Test1">
<parameter name="UdidOrDeviceName" value="YT910LQ91K"></parameter>
<parameter name="MobileVersion" value="5.2.0"></parameter>
<classes>
<class name="com.apps.AppiumTest.AppiumMessengerTest" />
</classes>
</test>
<test name="Test2">
<parameter name="UdidOrDeviceName" value="192.168.113.101:5555"></parameter>
<parameter name="MobileVersion" value="5.1.0"></parameter>
<classes>
<class name="com.apps.AppiumTest.AppiumMessengerTest" />
</classes>
</test>
</suite>
Grid hub Output
19:47:56.092 INFO - Registered a node http://192.168.0.104:4723
19:51:05.917 INFO - Registered a node http://192.168.0.104:4724
19:51:31.419 INFO - Got a request to create a new session: Capabilities [{appActivity=com.android.mms.ui.ConversationList, appPackage=com.android.mms, platformVersion=5.2.0, newCOmmandTimeout=1000, platformName=Android, deviceName=YT910LQ91K}]
19:51:31.433 INFO - Available nodes: [http://192.168.0.104:4723, http://192.168.0.104:4724]
19:51:31.438 INFO - Got a request to create a new session: Capabilities [{appActivity=com.android.mms.ui.ConversationList, appPackage=com.android.mms, platformVersion=5.1.0, newCOmmandTimeout=1000, platformName=Android, deviceName=192.168.113.101:5555}]
19:51:31.438 INFO - Trying to create a new session on node http://192.168.0.104:4723
19:51:31.460 INFO - Trying to create a new session on test slot {maxInstances=3, platformName=ANDROID, deviceName=192.168.113.101:5555, version=5.1.0}
19:51:31.476 INFO - Available nodes: [http://192.168.0.104:4724, http://192.168.0.104:4723]
19:51:31.486 INFO - Trying to create a new session on node http://192.168.0.104:4724
19:51:31.501 INFO - Trying to create a new session on test slot {browserName=Sony Xperia SP, maxInstances=1, version=5.1.1, platform=ANDROID}
Emulator Node output
This node is receiving the wrong set of data from the Selenium Grid. It should receive the emulator capabilities, but it received the real-device capabilities.
C:\Program Files (x86)\Appium\node_modules\appium\bin>node appium.js --port 4723 --nodeconfig G:\Selenium2\Grid\AppiumEmulatorNode.json
info: Welcome to Appium v1.4.16 (REV ae6877eff263066b26328d457bd285c0cc62430d)
info: Appium REST http interface listener started on 0.0.0.0:4723
info: [debug] Non-default server args: {"nodeconfig":"G:\\Selenium2\\Grid\\AppiumEmulatorNode.json"}
info: Console LogLevel: debug
info: [debug] starting auto register thread for grid. Will try to register every 5000 ms.
info: [debug] Appium successfully registered with the grid on 192.168.0.104:4444
info: --> GET /wd/hub/status {}
info: [debug] Responding to client with success: {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}}}
info: <-- GET /wd/hub/status 200 28.871 ms - 105 {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}}}
info: <-- GET /wd/hub/status 200 11.314 ms - 105 {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}}}
info: --> POST /wd/hub/session {"desiredCapabilities":{"appActivity":"com.android.mms.ui.ConversationList","appPackage":"com.android.mms","platformVersion":"5.2.0","newCOmmandTimeout":1000,"platformName":"Android","deviceName":"YT910LQ91K"}}
info: Client User-Agent string: Apache-HttpClient/4.5.1 (Java/1.8.0_65)
Real Device Node output
This node is receiving the wrong set of data from the Selenium Grid. It should receive the real-device capabilities, but it received the emulator capabilities.
C:\Program Files (x86)\Appium\node_modules\appium\bin>node appium.js --port 4724 --nodeconfig G:\Selenium2\Grid\AppiumRealDevice.json
info: Welcome to Appium v1.4.16 (REV ae6877eff263066b26328d457bd285c0cc62430d)
info: Appium REST http interface listener started on 0.0.0.0:4724
info: [debug] Non-default server args: {"port":4724,"nodeconfig":"G:\\Selenium2\\Grid\\AppiumRealDevice.json"}
info: Console LogLevel: debug
info: [debug] starting auto register thread for grid. Will try to register every 5000 ms.
info: [debug] Appium successfully registered with the grid on 192.168.0.104:4444
info: --> GET /wd/hub/status {}
info: [debug] Responding to client with success: {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}}}
info: <-- GET /wd/hub/status 200 22.103 ms - 105 {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}}}
info: --> GET /wd/hub/status {}
info: [debug] Responding to client with success: {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}}}
info: <-- GET /wd/hub/status 200 12.602 ms - 105 {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}}}
info: --> GET /wd/hub/status {}
info: [debug] Responding to client with success: {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}}}
info: <-- GET /wd/hub/status 200 11.690 ms - 105 {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}}}
info: --> GET /wd/hub/status {}
info: [debug] Responding to client with success: {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}}}
info: <-- GET /wd/hub/status 200 11.460 ms - 105 {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}}}
info: --> GET /wd/hub/status {}
info: [debug] Responding to client with success: {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}}}
info: <-- GET /wd/hub/status 200 11.927 ms - 105 {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}}}
info: --> POST /wd/hub/session {"desiredCapabilities":{"appActivity":"com.android.mms.ui.ConversationList","appPackage":"com.android.mms","platformVersion":"5.1.0","newCOmmandTimeout":1000,"platformName":"Android","deviceName":"192.168.113.101:5555"}}
info: Client User-Agent string: Apache-HttpClient/4.5.1 (Java/1.8.0_65)
info: [debug] The following desired capabilities were provided, but not recognized by appium. They will be passed on to any other services running on this server. : newCOmmandTimeout
info: [debug] Didn't get app but did get Android package, will attempt to launch it on the device
info: [debug] Creating new appium session 515cbed7-46c0-4fce-a2e0-652976c5a659
info: Starting android appium
info: [debug] Getting Java version
info: Java version is: 1.8.0_71
info: [debug] Checking whether adb is present
info: [debug] Using adb from C:\Users\Harsha\AppData\Local\Android\sdk\platform-tools\adb.exe
warn: No app capability, can't parse package/activity
info: [debug] Using fast reset? true
info: [debug] Preparing device for session
info: [debug] Not checking whether app is present since we are assuming it's already on the device
info: Retrieving device
info: [debug] Trying to find a connected android device
info: [debug] Getting connected devices...
info: [debug] executing cmd: C:\Users\Harsha\AppData\Local\Android\sdk\platform-tools\adb.exe devices
info: [debug] 2 device(s) connected
info: Found device 192.168.113.101:5555
info: [debug] Setting device id to 192.168.113.101:5555
info: [debug] Waiting for device to be ready and to respond to shell commands (timeout = 5)
info: [debug] executing cmd: C:\Users\Harsha\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.113.101:5555 wait-for-device
info: [debug] executing cmd: C:\Users\Harsha\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.113.101:5555 shell "echo 'ready'"
info: [debug] Starting logcat capture
info: [debug] Getting device API level
info: [debug] executing cmd: C:\Users\Harsha\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.113.101:5555 shell "getprop ro.build.version.sdk"
info: [debug] Device is at API Level 22
info: Device API level is: 22
info: [debug] Extracting strings for language: default
info: [debug] Apk doesn't exist locally
info: [debug] Could not get strings, but it looks like we had an old strings file anyway, so ignoring
info: [debug] executing cmd: C:\Users\Harsha\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.113.101:5555 shell "rm -rf /data/local/tmp/strings.json"
info: [debug] Not uninstalling app since server not started with --full-reset
info: [debug] Skipping install since we launched with a package instead of an app path
info: [debug] Forwarding system:4724 to device:4724
info: [debug] executing cmd: C:\Users\Harsha\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.113.101:5555 forward tcp:4724 tcp:4724
info: [debug] Pushing appium bootstrap to device...
info: [debug] executing cmd: C:\Users\Harsha\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.113.101:5555 push "C:\\Program Files (x86)\\Appium\\node_modules\\appium\\build\\android_bootstrap\\AppiumBootstrap.jar" /data/local/tmp/
info: [debug] Pushing settings apk to device...
info: [debug] executing cmd: C:\Users\Harsha\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.113.101:5555 install "C:\Program Files (x86)\Appium\node_modules\appium\build\settings_apk\settings_apk-debug.apk"
info: [debug] Pushing unlock helper app to device...
info: [debug] executing cmd: C:\Users\Harsha\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.113.101:5555 install "C:\Program Files (x86)\Appium\node_modules\appium\build\unlock_apk\unlock_apk-debug.apk"
info: --> GET /wd/hub/status {}
info: [debug] Responding to client with success: {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}},"sessionId":"515cbed7-46c0-4fce-a2e0-652976c5a659"}
info: <-- GET /wd/hub/status 200 69.204 ms - 156 {"status":0,"value":{"build":{"version":"1.4.16","revision":"ae6877eff263066b26328d457bd285c0cc62430d"}},"sessionId":"515cbed7-46c0-4fce-a2e0-652976c5a659"}
info: Starting App
info: [debug] Attempting to kill all 'uiautomator' processes
info: [debug] Getting all processes with 'uiautomator'
info: [debug] executing cmd: C:\Users\Harsha\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.113.101:5555 shell "ps 'uiautomator'"
info: [debug] No matching processes found
info: [debug] Running bootstrap
info: [debug] spawning: C:\Users\Harsha\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.113.101:5555 shell uiautomator runtest AppiumBootstrap.jar -c io.appium.android.bootstrap.Bootstrap -e pkg com.android.mms -e disableAndroidWatchers false
info: [debug] [UIAUTOMATOR STDOUT] INSTRUMENTATION_RESULT: shortMsg=java.lang.IllegalStateException
info: [debug] [UIAUTOMATOR STDOUT] INSTRUMENTATION_RESULT: longMsg=UiAutomationService android.accessibilityservice.IAccessibilityServiceClient$Stub$Proxy#1034d97calready registered!
info: [debug] [UIAUTOMATOR STDOUT] INSTRUMENTATION_CODE: 0
info: [debug] UiAutomator exited
info: [debug] executing cmd: C:\Users\Harsha\AppData\Local\Android\sdk\platform-tools\adb.exe -s 192.168.113.101:5555 shell "echo 'ping'"
info: [debug] Attempting to uninstall app
info: [debug] Not uninstalling app since server not started with --full-reset
info: [debug] Cleaning up android objects
error: UiAutomator quit before it successfully launched
info: [debug] Cleaning up appium session
error: Failed to start an Appium session, err was: Error: UiAutomator quit before it successfully launched
info: [debug] Error: UiAutomator quit before it successfully launched
at [object Object].<anonymous> (C:\Program Files (x86)\Appium\node_modules\appium\lib\devices\android\android.js:205:23)
at [object Object].<anonymous> (C:\Program Files (x86)\Appium\node_modules\appium\lib\devices\android\android-hybrid.js:249:5)
at Object.async.eachSeries (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\async\lib\async.js:142:20)
at [object Object].androidHybrid.stopChromedriverProxies (C:\Program Files (x86)\Appium\node_modules\appium\lib\devices\android\android-hybrid.js:233:9)
at [object Object].<anonymous> (C:\Program Files (x86)\Appium\node_modules\appium\lib\devices\android\android.js:200:10)
at [object Object].<anonymous> (C:\Program Files (x86)\Appium\node_modules\appium\lib\devices\android\android.js:222:9)
at [object Object].androidCommon.uninstallApp (C:\Program Files (x86)\Appium\node_modules\appium\lib\devices\android\android-common.js:478:5)
at [object Object].<anonymous> (C:\Program Files (x86)\Appium\node_modules\appium\lib\devices\android\android.js:220:12)
at [object Object].<anonymous> (C:\Program Files (x86)\Appium\node_modules\appium\lib\devices\android\android.js:229:11)
at C:\Program Files (x86)\Appium\node_modules\appium\node_modules\appium-adb\lib\adb.js:901:7
at [object Object].<anonymous> (C:\Program Files (x86)\Appium\node_modules\appium\node_modules\appium-adb\lib\adb.js:180:9)
at ChildProcess.exithandler (child_process.js:194:7)
at emitTwo (events.js:87:13)
at ChildProcess.emit (events.js:172:7)
at maybeClose (internal/child_process.js:818:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:211:5)
info: [debug] Responding to client with error: {"status":33,"value":{"message":"A new session could not be created. (Original error: UiAutomator quit before it successfully launched)","origValue":"UiAutomator quit before it successfully launched"},"sessionId":null}
Here's how the Grid decides which node a test should be routed to.
It uses something called a Capability Matcher to help it decide which of its nodes a test should be routed to. If you don't provide a custom capability matcher [you can take a look at this blog of mine to understand how to plug in your own custom variant], it falls back to the DefaultCapabilityMatcher to do this matching job.
Since you haven't set any of the attributes that the default capability matcher understands [see here to learn what those are], it always matches the first node with a test. That is why you are seeing the mix-up.
It only understands browserName, platform, version and applicationName, and for browserName and platform you cannot pass arbitrary values; you can only use predefined values. See the BrowserType and Platform Java classes.
So to solve your problem, you have two options.
Option 1: Build your own capability matcher that suits your needs (you can refer to my blog post linked above) and work with it.
Option 2: Use the capability "applicationName": give it a unique value for each of your nodes, and then have your test add this extra capability at the test level, as sketched below.
That should resolve your problem.
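For example, Option 2 would look something like this (a sketch; the applicationName values are arbitrary labels of your choosing). In the real-device node config:
"capabilities": [{
    "browserName": "Sony Xperia SP",
    "version": "5.1.1",
    "maxInstances": 1,
    "platform": "ANDROID",
    "applicationName": "realDevice"
}]
On the test side, add the same label to the desired capabilities before requesting the session, e.g. capabilities.setCapability("applicationName", "realDevice"); the DefaultCapabilityMatcher will then route that test only to the node advertising that label.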
I'm having trouble installing Docker on Mac OS X Yosemite (10.10.4): when I try the Docker Quickstart Terminal from the Docker Toolbox I get this:
. '/Applications/Docker/Docker Quickstart Terminal.app/Contents/Resources/Scripts/start.sh'
bash-3.2$ . '/Applications/Docker/Docker Quickstart Terminal.app/Contents/Resources/Scripts/start.sh'
Creating Machine default...
executing: /usr/local/bin/VBoxManage
STDOUT: Oracle VM VirtualBox Command Line Management Interface Version 5.0.2
(C) 2005-2015 Oracle Corporation
All rights reserved.
Usage:
VBoxManage [<general option>] <command>
STDERR:
Creating VirtualBox VM...
Creating SSH key...
Creating disk image...
Creating 20000 MB hard disk image...
Converting from raw image file="stdin" to file="/Users/arbi/.docker/machine/machines/default/disk.vmdk"...
Creating dynamic image with size 20971520000 bytes (20000MB)...
executing: /usr/local/bin/VBoxManage createvm --basefolder /Users/arbi/.docker/machine/machines/default --name default --register
STDOUT: Virtual machine 'default' is created and registered.
UUID: e0f2a54b-b11a-47e2-9f3e-450f6fea78c8
Settings file: '/Users/arbi/.docker/machine/machines/default/default/default.vbox'
STDERR:
VM CPUS: 1
VM Memory: 2048
executing: /usr/local/bin/VBoxManage modifyvm default --firmware bios --bioslogofadein off --bioslogofadeout off --bioslogodisplaytime 0 --biosbootmenu disabled --ostype Linux26_64 --cpus 1 --memory 2048 --acpi on --ioapic on --rtcuseutc on --natdnshostresolver1 off --natdnsproxy1 off --cpuhotplug off --pae on --hpet on --hwvirtex on --nestedpaging on --largepages on --vtxvpid on --accelerate3d off --boot1 dvd
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage modifyvm default --nic1 nat --nictype1 82540EM --cableconnected1 on
STDOUT:
STDERR:
using 192.168.99.1 for dhcp address
executing: /usr/local/bin/VBoxManage list hostonlyifs
STDOUT: Name: vboxnet0
GUID: 786f6276-656e-4074-8000-0a0027000000
DHCP: Disabled
IPAddress: 192.168.99.1
NetworkMask: 255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:00
MediumType: Ethernet
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet0
STDERR:
executing: /usr/local/bin/VBoxManage modifyvm default --nic2 hostonly --nictype2 82540EM --hostonlyadapter2 vboxnet0 --cableconnected2 on
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage storagectl default --name SATA --add sata --hostiocache on
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage storageattach default --storagectl SATA --port 0 --device 0 --type dvddrive --medium /Users/arbi/.docker/machine/machines/default/boot2docker.iso
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage storageattach default --storagectl SATA --port 1 --device 0 --type hdd --medium /Users/arbi/.docker/machine/machines/default/disk.vmdk
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage guestproperty set default /VirtualBox/GuestAdd/SharedFolders/MountPrefix /
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage guestproperty set default /VirtualBox/GuestAdd/SharedFolders/MountDir /
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage sharedfolder add default --name Users --hostpath /Users --automount
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage setextradata default VBoxInternal2/SharedFoldersEnableSymlinksCreate/Users 1
STDOUT:
STDERR:
Starting VirtualBox VM...
executing: /usr/local/bin/VBoxManage showvminfo default --machinereadable
STDOUT: name="default"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="e0f2a54b-b11a-47e2-9f3e-450f6fea78c8"
CfgFile="/Users/arbi/.docker/machine/machines/default/default/default.vbox"
SnapFldr="/Users/arbi/.docker/machine/machines/default/default/Snapshots"
LogFldr="/Users/arbi/.docker/machine/machines/default/default/Logs"
hardwareuuid="e0f2a54b-b11a-47e2-9f3e-450f6fea78c8"
memory=2048
. . .
SharedFolderNameMachineMapping1="Users"
SharedFolderPathMachineMapping1="/Users"
STDERR:
using 192.168.99.1 for dhcp address
executing: /usr/local/bin/VBoxManage list hostonlyifs
STDOUT: Name: vboxnet0
GUID: 786f6276-656e-4074-8000-0a0027000000
DHCP: Disabled
IPAddress: 192.168.99.1
NetworkMask: 255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:00
MediumType: Ethernet
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet0
STDERR:
executing: /usr/local/bin/VBoxManage modifyvm default --nic2 hostonly --nictype2 82540EM --hostonlyadapter2 vboxnet0 --cableconnected2 on
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage modifyvm default --natpf1 delete ssh
STDOUT:
STDERR: VBoxManage: error: Code NS_ERROR_INVALID_ARG (0x80070057) - Invalid argument value (extended info not available)
VBoxManage: error: Context: "RemoveRedirect(Bstr(ValueUnion.psz).raw())" at line 1766 of file VBoxManageModifyVM.cpp
executing: /usr/local/bin/VBoxManage modifyvm default --natpf1 ssh,tcp,127.0.0.1,52532,,22
STDOUT:
STDERR:
executing: /usr/local/bin/VBoxManage startvm default --type headless
STDOUT: Waiting for VM "default" to power on...
VM "default" has been successfully started.
STDERR:
Error creating machine: exit status 1
You will want to check the provider to make sure the machine and associated resources were properly removed.
Starting machine default...
exit status 1
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
Setting environment variables for machine default...
host is not running
docker is configured to use the default machine with IP
For help getting started, check out the docs at https://docs.docker.com
default is not running. Please start this with docker-machine start default
When I try to create the machine manually, it fails again:
$ docker-machine create --driver virtualbox default
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Error creating machine: exit status 1
You will want to check the provider to make sure the machine and associated resources were properly removed.
But then when I open VirtualBox, I see the default machine powered off. If I try to start it manually, it fails and I get the following error:
Failed to open a session for the virtual machine default.
Failed to load VMMR0.r0 (VERR_VMM_SMAP_BUT_AC_CLEAR).
Result Code: NS_ERROR_FAILURE (0x80004005)
Component: ConsoleWrap
Interface: IConsole {872da645-4a9b-1727-bee2-5585105b9eed}
Any idea why it's failing to start the default machine?
I had to downgrade to VirtualBox 4.3 to make the Docker host start successfully.
Uninstall VirtualBox:
To uninstall VirtualBox, open the disk image (dmg) file again and double-click on the uninstall icon contained therein.
Then re-install the Docker toolbox.
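After reinstalling, the half-created machine usually has to be removed before retrying (standard docker-machine commands):
$ docker-machine rm -f default
$ docker-machine create --driver virtualbox default
$ eval "$(docker-machine env default)"   # load the connection variables into this shell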