site unavailable after install and reboot and plonectl start - linux

Ubuntu 10.04 system. A new Plone install went fine; I created some content and everything seemed fine. After a kernel update and a reboot, Plone is running but will not serve any pages to a browser; a browser request just times out. I can telnet to port 8080 on the system and send an HTTP GET by hand, and nothing comes back. The log file for client1 in a ZEO install keeps repeating:
2011-08-10T16:59:57 INFO ZServer HTTP server started at Wed Aug 10 16:59:57 2011
Hostname: 0.0.0.0
Port: 8080
------
2011-08-10T16:59:57 INFO Zope Set effective user to "plone"
------
2011-08-10T17:00:02 INFO ZEO.ClientStorage zeostorage ClientStorage (pid=24596) created RW/normal for storage: '1'
------
2011-08-10T17:00:02 INFO ZEO.cache created temporary cache file '<fdopen>'
------
2011-08-10T17:00:02 INFO ZEO.ClientStorage zeostorage Testing connection <ManagedClientConnection ('127.0.0.1', 8100)>
------
2011-08-10T17:00:02 INFO ZEO.zrpc.Connection(C) (127.0.0.1:8100) received handshake 'Z3101'
------
2011-08-10T17:00:02 INFO ZEO.ClientStorage zeostorage Server authentication protocol None
------
2011-08-10T17:00:02 INFO ZEO.ClientStorage zeostorage Connected to storage: ('dns', 8100)
------
2011-08-10T17:00:02 INFO ZEO.ClientStorage zeostorage No verification necessary -- empty cache
------
2011-08-10T17:00:22 INFO ZServer HTTP server started at Wed Aug 10 17:00:22 2011
Hostname: 0.0.0.0
Port: 8080
I haven't been able to find any other info on what is causing this, nor can I find any documentation on debugging a Plone install.
Thanks for any help you can provide.
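For completeness, the by-hand check mentioned above is roughly equivalent to the following one-liner (a sketch, using nc instead of interactive telnet):
printf 'GET / HTTP/1.0\r\n\r\n' | nc -w 10 localhost 8080    # no response arrives before the timeout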

Forgive the aborted earlier answer; I misread the log snippet. The repeated log entries you're seeing are what you'd expect from repeated restarts. Are you repeatedly restarting the instance? If not, then it seems your instance is restarting on its own. Shut down the instance and start it using "bin/instance fg" (or the ZEO-client equivalent sketched below) and see if that gives you more information.
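If it helps, here is a minimal sketch of that workflow for a ZEO install like yours, assuming the standard buildout layout where the client script is bin/client1 (the path below is just a placeholder):
cd /path/to/zeocluster      # your buildout directory
bin/plonectl stop           # stop the running clients and ZEO server
bin/zeoserver start         # bring the ZEO storage server back up
bin/client1 fg              # run the client in the foreground so startup errors print to the console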

Related

tails os and proxychains - getting denied connection

I've been trying to run a wallet app in Tails OS ver 4.28 with no success. I'm getting a "denied" error when using it with proxychains. Being a noob here, could someone tell me what I'm doing wrong? I've included the terminal output and proxychains config info for reference.
amnesia@amnesia:~/Persistent$ chmod +x Neuron-v0.101.2-x86_64.AppImage
amnesia@amnesia:~/Persistent$ proxychains ./Neuron-v0.101.2-x86_64.AppImage
ProxyChains-3.1 (http://proxychains.sf.net)
|S-chain|-<>-127.0.0.1:9050-<><>-127.0.0.1:8114-<--denied
06:16:58.553 › Network: connection dropped
|DNS-request| localhost
|DNS-request| localhost
|DNS-response| localhost is 127.0.0.1
|S-chain|-<>-127.0.0.1:9050-|DNS-request| localhost
<><>-127.0.0.1:8114-<--denied
|DNS-response| localhost is 127.0.0.1
|S-chain|-<>-127.0.0.1:9050-<><>-127.0.0.1:8114-|DNS-request| localhost
<--denied
06:17:00.145 › Network: fail to connect to the network. Is CKB node running?
06:17:00.323 › Network: switched to: {
id: 'mainnet',
name: 'default node',
remote: 'http://localhost:8114',
genesisHash: '0x92b197aa1fba0f63633922c61c92375c9c074a93e85963554f5499fe1450d0e5',
type: 0,
chain: 'ckb'
}
06:17:01.453 › Main window: The main window is ready to show
|DNS-response| localhost is 127.0.0.1
|S-chain|-<>-127.0.0.1:9050-|DNS-request| localhost
<><>-127.0.0.1:8114-<--denied
|DNS-response| localhost is 127.0.0.1
|S-chain|-<>-127.0.0.1:9050-<><>-127.0.0.1:8114-|DNS-request| localhost
<--denied
|DNS-response| localhost is 127.0.0.1
|S-chain|-<>-127.0.0.1:9050-<><>-127.0.0.1:8114-<--denied
|DNS-response| localhost is 127.0.0.1
|DNS-request| localhost
|DNS-request| localhost
|S-chain|-<>-127.0.0.1:9050-<><>-127.0.0.1:8114-<--denied
06:17:03.705 › CKB: external RPC on default uri not detected, starting bundled CKB node.
06:17:03.707 › CKB: Initializing node...
06:17:03.708 › CKB: init: config file detected, skip ckb init.
06:17:03.708 › CKB: starting node...
06:17:04.116 › CKB: process closed
|DNS-response| localhost is 127.0.0.1
|S-chain|-<>-127.0.0.1:9050-|DNS-request| localhost
<><>-127.0.0.1:8114-<--denied
|DNS-response| localhost is 127.0.0.1
|S-chain|-<>-127.0.0.1:9050-<><>-127.0.0.1:8114-<--denied
|DNS-request| localhost
|DNS-response| localhost is 127.0.0.1
|S-chain|-<>-127.0.0.1:9050-<><>-127.0.0.1:8114-|DNS-request| localhost
<--denied
^C|DNS-response|: localhost does not exist
Aborted
proxychains config file:
#dynamic_chain
strict_chain
#random_chain
#chain_len = 2
#quiet_mode
proxy_dns
# Some timeouts in milliseconds
tcp_read_time_out 15000
tcp_connect_time_out 8000
[ProxyList]
# add proxy here ...
# meanwile
# defaults set to "tor"
socks4 127.0.0.1 9050
Appreciate any assistance in this matter.
So, I haven't gotten a response from the community yet. I did some exploration and made the following changes in the proxychains config file (sketched below):
enabled dynamic_chain and commented out strict_chain
replaced socks4 with socks5
This got rid of the denied errors, but gave me a timeout instead.
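For reference, the relevant lines of the modified config now look like this (only the changes described above; everything else stays as shown earlier):
dynamic_chain
#strict_chain
#random_chain
proxy_dns
tcp_read_time_out 15000
tcp_connect_time_out 8000
[ProxyList]
socks5 127.0.0.1 9050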
I reached out to the wallet tech team for assistance. They responded that wallet synchronization fails when it is behind a firewall, VPN, or anti-virus, and that running the wallet behind a proxy also disrupts synchronization. I had a very slim hope that this would work, and it faded quickly with their response. This closes out this open question.
I solved the problem by connecting to the internet (usually not connected) and running "sudo apt-get update". After the update was done (a few seconds) I restarted Tails (USB stick variant) and the problem was gone.

GitLab Health Check without token

I've got GitLab 10.5.6. I'd like to use Health Check information in my monitoring system. I can configure it by using the Health Check endpoints with a health check access token, but as that solution is deprecated, I want to use the IP whitelist instead, and I'm having some problems with it.
Following this article https://docs.gitlab.com/ee/administration/monitoring/ip_whitelist.html I edited /etc/gitlab/gitlab.rb and added this line (this GitLab instance was installed around version 7 or even older, I think):
gitlab_rails['monitoring_whitelist'] = ['127.0.0.0/8', '192.168.0.1', 'X.X.X.X', 'Y.Y.Y.Y']
where X.X.X.X is the IP of my computer and Y.Y.Y.Y is the IP of the GitLab server. After that I ran a reconfiguration (gitlab-ctl reconfigure) and started testing. The logs below are from the production.log file.
Running curl http://127.0.0.1:8888/-/readiness on server Y.Y.Y.Y returns proper JSON with the expected data:
Started GET "/-/readiness" for 127.0.0.1 at 2018-03-24 20:01:31 +0100
Processing by HealthController#readiness as /
Completed 200 OK in 27ms (Views: 0.6ms | ActiveRecord: 0.5ms)
Running curl http://Y.Y.Y.Y:8888/-/readiness on server Y.Y.Y.Y returns an error:
Started GET "/-/readiness" for Y.Y.Y.Y at 2018-03-24 21:20:04 +0100
Processing by HealthController#readiness as /
Filter chain halted as :validate_ip_whitelisted_or_valid_token! rendered or redirected
Completed 404 Not Found in 2ms (Views: 1.0ms | ActiveRecord: 0.0ms)
Accessing http://Y.Y.Y.Y:8888/-/readiness through the Firefox browser on computer X.X.X.X returns an error:
Started GET "/-/readiness" for X.X.X.X at 2018-03-24 20:03:04 +0100
Processing by HealthController#readiness as HTML
Filter chain halted as :validate_ip_whitelisted_or_valid_token! rendered or redirected
Completed 404 Not Found in 2ms (Views: 0.8ms | ActiveRecord: 0.0ms)
Accessing http://Y.Y.Y.Y:8888/-/readiness?token=ZZZZZZZZZZZZZ through the Firefox browser on computer X.X.X.X returns proper JSON with the expected data.
I have no idea what else to check. Maybe some additional configuration is missing from /etc/gitlab/gitlab.rb, as it's quite an old GitLab instance.
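For reference, the whitelist change and the checks described above boil down to the following (a sketch, assuming the Omnibus layout and the non-standard port 8888 used above):
# /etc/gitlab/gitlab.rb
gitlab_rails['monitoring_whitelist'] = ['127.0.0.0/8', '192.168.0.1', 'X.X.X.X', 'Y.Y.Y.Y']
# apply the change, then test the endpoint
sudo gitlab-ctl reconfigure
curl http://127.0.0.1:8888/-/readiness      # 200 OK, JSON as expected
curl http://Y.Y.Y.Y:8888/-/readiness        # 404, filter chain halted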

PostgreSQL on IBM Cloud Kubernetes returns "psql: FATAL: password authentication failed for user "replica_user"" error. Works on GCP and Azure

I have deployed this PostgreSQL image to the IBM Cloud, Google Cloud Platform and Microsoft Azure using Kubernetes: https://github.com/paunin/PostDock
I deployed it to all three platforms with identical configurations and an identical process. Only the IBM Cloud deployment fails, with the error "psql: FATAL: password authentication failed for user "replica_user"".
You can find the logs from all three cloud platforms below. Has anyone experienced this?
IBM Cloud Log
>>> Setting up STOP handlers...
>>> STARTING SSH (if required)...
>>> SSH is not enabled!
>>> STARTING POSTGRES...
>>> TUNING UP POSTGRES...
>>> Cleaning data folder which might have some garbage...
psql: FATAL: password authentication failed for user "replica_user"
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node2-service" (172.30.65.206) and accepting
TCP/IP connections on port 5432?
>>> Auto-detected master name: ''
>>> Setting up repmgr...
>>> Setting up repmgr config file '/etc/repmgr.conf'...
>>> Setting up upstream node...
cat: /var/lib/postgresql/data/standby.lock: No such file or directory
>>> Previously Locked standby upstream node LOCKED_STANDBY=''
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
psql: FATAL: password authentication failed for user "replica_user"
>>>>>> Db replica_db is still not accessable on cyclos-postgres-node1-service:5432 (will try 30 times more)
....
The last couple of lines are then repeated many times.
This is the log file from deploying the same application with an identical process on the Google Cloud. It works just fine on the Google Cloud Platform.
Google Cloud Log
>>> Setting up STOP handlers...
>>> STARTING SSH (if required)...
>>> SSH is not enabled!
>>> STARTING POSTGRES...
>>> TUNING UP POSTGRES...
>>> Cleaning data folder which might have some garbage...
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node1-service" (10.52.0.11) and accepting
TCP/IP connections on port 5432?
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node2-service" (10.52.0.12) and accepting
TCP/IP connections on port 5432?
>>> Auto-detected master name: ''
>>> Setting up repmgr...
>>> Setting up repmgr config file '/etc/repmgr.conf'...
>>> Setting up upstream node...
cat: /var/lib/postgresql/data/standby.lock: No such file or directory
>>> Previously Locked standby upstream node LOCKED_STANDBY=''
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node1-service" (10.52.0.11) and accepting
TCP/IP connections on port 5432?
>>>>>> Db replica_db is still not accessable on cyclos-postgres-node1-service:5432 (will try 30 times more)
>>>>>> Db replica_db is still not accessable on cyclos-postgres-node1-service:5432 (will try 29 times more)
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node1-service" (10.52.0.11) and accepting
TCP/IP connections on port 5432?
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node1-service" (10.52.0.11) and accepting
TCP/IP connections on port 5432?
>>>>>> Db replica_db is still not accessable on cyclos-postgres-node1-service:5432 (will try 28 times more)
>>>>>> Db replica_db exists on cyclos-postgres-node1-service:5432!
>>> REPLICATION_UPSTREAM_NODE_ID=1
>>> Sending in background postgres start...
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
>>>>>> Db replica_db exists on cyclos-postgres-node1-service:5432!
>>> Starting standby node...
>>> Instance hasn't been set up yet.
>>> Clonning primary node...
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
NOTICE: destination directory '/var/lib/postgresql/data' provided
INFO: connecting to upstream node
INFO: Successfully connected to upstream node. Current installation size is 34 MB
INFO: checking and correcting permissions on existing directory /var/lib/postgresql/data ...
>>>>>> Db replica_db exists on cyclos-postgres-node1-service:5432!
NOTICE: starting backup (using pg_basebackup)...
INFO: executing: '/usr/lib/postgresql/9.5/bin/pg_basebackup -l "repmgr base backup" -D /var/lib/postgresql/data -h cyclos-postgres-node1-service -p 5432 -U replica_user -c fast -X stream '
NOTICE: standby clone (using pg_basebackup) complete
NOTICE: you can now start your PostgreSQL server
HINT: for example : pg_ctl -D /var/lib/postgresql/data start
HINT: After starting the server, you need to register this standby with "repmgr standby register"
[REPMGR EVENT] Node id: 2; Event type: standby_clone; Success [1|0]: 1; Time: 2018-02-02 13:24:32.87843+00; Details: Cloned from host 'cyclos-postgres-node1-service', port 5432; backup method: pg_basebackup; --force: Y
>>> Configuring /var/lib/postgresql/data/postgresql.conf
>>>>>> Will add configs to exists file
>>> Starting postgres...
>>> Waiting for local postgres server start...
>>> Wait db replica_db on cyclos-postgres-node2-service:5432(user: replica_user,password: *******), will try 60 times with delay 10 seconds (TIMEOUT=600)
LOG: incomplete startup packet
LOG: incomplete startup packet
LOG: database system was interrupted; last known up at 2018-02-02 13:24:31 UTC
FATAL: the database system is starting up
psql: FATAL: the database system is starting up
>>>>>> Db replica_db is still not accessable on cyclos-postgres-node2-service:5432 (will try 60 times more)
LOG: entering standby mode
LOG: redo starts at 0/2000028
LOG: consistent recovery state reached at 0/20000F8
LOG: database system is ready to accept read only connections
LOG: started streaming WAL from primary at 0/3000000 on timeline 1
>>>>>> Db replica_db exists on cyclos-postgres-node2-service:5432!
>>> Waiting for replication on this node is over(if any in progress): CLEAN_UP_ON_FAIL=, INTERVAL=30
>>> Replication is done
>>> Unregister the node if it was done before
DELETE 0
>>> Registering node with role standby
INFO: connecting to standby database
INFO: connecting to master database
INFO: retrieving node list for cluster 'postgres_cluster'
INFO: registering the standby
[REPMGR EVENT] Node id: 2; Event type: standby_register; Success [1|0]: 1; Time: 2018-02-02 13:24:51.891592+00; Details:
INFO: standby registration complete
NOTICE: standby node correctly registered for cluster postgres_cluster with id 2 (conninfo: user=replica_user password=replica_pass host=cyclos-postgres-node2-service dbname=replica_db port=5432 connect_timeout=2)
Locking standby (NEW_UPSTREAM_NODE_ID=1)...
>>> Starting repmgr daemon...
[2018-02-02 13:24:53] [NOTICE] looking for configuration file in current directory
[2018-02-02 13:24:53] [NOTICE] looking for configuration file in /etc
[2018-02-02 13:24:53] [NOTICE] configuration file found at: /etc/repmgr.conf
[2018-02-02 13:24:53] [INFO] connecting to database 'user=replica_user password=replica_pass host=cyclos-postgres-node2-service dbname=replica_db port=5432 connect_timeout=2'
[2018-02-02 13:24:53] [INFO] connected to database, checking its state
[2018-02-02 13:24:53] [INFO] connecting to master node of cluster 'postgres_cluster'
[2018-02-02 13:24:53] [INFO] retrieving node list for cluster 'postgres_cluster'
[2018-02-02 13:24:53] [INFO] checking role of cluster node '1'
[2018-02-02 13:24:53] [INFO] checking cluster configuration with schema 'repmgr_postgres_cluster'
[2018-02-02 13:24:53] [INFO] checking node 2 in cluster 'postgres_cluster'
[2018-02-02 13:24:53] [INFO] reloading configuration file
[2018-02-02 13:24:53] [INFO] configuration has not changed
[2018-02-02 13:24:53] [INFO] starting continuous standby node monitoring
ERROR: cannot execute DELETE in a read-only transaction
STATEMENT: DELETE FROM repmgr_postgres_cluster.repl_nodes WHERE conninfo LIKE '%host=cyclos-postgres-node3-service%'
And on the Azure Cloud, it works just fine as well.
Azure Cloud Log
>>> Setting up STOP handlers...
>>> STARTING SSH (if required)...
>>> SSH is not enabled!
>>> STARTING POSTGRES...
>>> TUNING UP POSTGRES...
>>> Cleaning data folder which might have some garbage...
psql: could not connect to server: Connection refused
Is the server running on host "cyclos-postgres-node2-service" (10.244.0.9) and accepting
TCP/IP connections on port 5432?
>>> Auto-detected master name: 'cyclos-postgres-node1-service'
>>> Setting up repmgr...
>>> Setting up repmgr config file '/etc/repmgr.conf'...
>>> Setting up upstream node...
cat: /var/lib/postgresql/data/standby.lock: No such file or directory
>>> Previously Locked standby upstream node LOCKED_STANDBY=''
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
>>>>>> Db replica_db exists on cyclos-postgres-node1-service:5432!
>>> REPLICATION_UPSTREAM_NODE_ID=1
>>> Sending in background postgres start...
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
>>>>>> Db replica_db exists on cyclos-postgres-node1-service:5432!
>>> Starting standby node...
>>> Instance hasn't been set up yet.
>>> Clonning primary node...
>>> Waiting for upstream postgres server...
>>> Wait db replica_db on cyclos-postgres-node1-service:5432(user: replica_user,password: *******), will try 30 times with delay 10 seconds (TIMEOUT=300)
NOTICE: destination directory '/var/lib/postgresql/data' provided
INFO: connecting to upstream node
>>>>>> Db replica_db exists on cyclos-postgres-node1-service:5432!
INFO: Successfully connected to upstream node. Current installation size is 34 MB
INFO: checking and correcting permissions on existing directory /var/lib/postgresql/data ...
NOTICE: starting backup (using pg_basebackup)...
INFO: executing: '/usr/lib/postgresql/9.5/bin/pg_basebackup -l "repmgr base backup" -D /var/lib/postgresql/data -h cyclos-postgres-node1-service -p 5432 -U replica_user -c fast -X stream '
NOTICE: standby clone (using pg_basebackup) complete
NOTICE: you can now start your PostgreSQL server
HINT: for example : pg_ctl -D /var/lib/postgresql/data start
HINT: After starting the server, you need to register this standby with "repmgr standby register"
[REPMGR EVENT] Node id: 2; Event type: standby_clone; Success [1|0]: 1; Time: 2018-02-02 06:50:47.340146+00; Details: Cloned from host 'cyclos-postgres-node1-service', port 5432; backup method: pg_basebackup; --force: Y
>>> Configuring /var/lib/postgresql/data/postgresql.conf
>>>>>> Will add configs to exists file
>>> Starting postgres...
>>> Waiting for local postgres server start...
>>> Wait db replica_db on cyclos-postgres-node2-service:5432(user: replica_user,password: *******), will try 60 times with delay 10 seconds (TIMEOUT=600)
LOG: incomplete startup packet
LOG: database system was interrupted; last known up at 2018-02-02 06:50:46 UTC
LOG: incomplete startup packet
FATAL: the database system is starting up
psql: FATAL: the database system is starting up
>>>>>> Db replica_db is still not accessable on cyclos-postgres-node2-service:5432 (will try 60 times more)
LOG: entering standby mode
LOG: redo starts at 0/2000028
LOG: consistent recovery state reached at 0/2000130
LOG: database system is ready to accept read only connections
LOG: started streaming WAL from primary at 0/3000000 on timeline 1
>>>>>> Db replica_db exists on cyclos-postgres-node2-service:5432!
>>> Waiting for replication on this node is over(if any in progress): CLEAN_UP_ON_FAIL=, INTERVAL=30
>>> Replication is done
>>> Unregister the node if it was done before
DELETE 0
>>> Registering node with role standby
INFO: connecting to standby database
INFO: connecting to master database
INFO: retrieving node list for cluster 'postgres_cluster'
INFO: registering the standby
[REPMGR EVENT] Node id: 2; Event type: standby_register; Success [1|0]: 1; Time: 2018-02-02 06:51:05.083455+00; Details:
INFO: standby registration complete
NOTICE: standby node correctly registered for cluster postgres_cluster with id 2 (conninfo: user=replica_user password=replica_pass host=cyclos-postgres-node2-service dbname=replica_db port=5432 connect_timeout=2)
Locking standby (NEW_UPSTREAM_NODE_ID=1)...
>>> Starting repmgr daemon...
[2018-02-02 06:51:05] [NOTICE] looking for configuration file in current directory
[2018-02-02 06:51:05] [NOTICE] looking for configuration file in /etc
[2018-02-02 06:51:05] [NOTICE] configuration file found at: /etc/repmgr.conf
[2018-02-02 06:51:05] [INFO] connecting to database 'user=replica_user password=replica_pass host=cyclos-postgres-node2-service dbname=replica_db port=5432 connect_timeout=2'
[2018-02-02 06:51:06] [INFO] connected to database, checking its state
[2018-02-02 06:51:06] [INFO] connecting to master node of cluster 'postgres_cluster'
[2018-02-02 06:51:06] [INFO] retrieving node list for cluster 'postgres_cluster'
[2018-02-02 06:51:06] [INFO] checking role of cluster node '1'
[2018-02-02 06:51:06] [INFO] checking cluster configuration with schema 'repmgr_postgres_cluster'
[2018-02-02 06:51:06] [INFO] checking node 2 in cluster 'postgres_cluster'
[2018-02-02 06:51:06] [INFO] reloading configuration file
[2018-02-02 06:51:06] [INFO] configuration has not changed
[2018-02-02 06:51:06] [INFO] starting continuous standby node monitoring
ERROR: cannot execute DELETE in a read-only transaction
STATEMENT: DELETE FROM repmgr_postgres_cluster.repl_nodes WHERE conninfo LIKE '%host=cyclos-postgres-node3-service%'
I was able to run this on a paid cluster in IBM Cloud and it appears to be working. I did not use persistent volumes. Please note that persistent volumes are not available on free clusters, so if you are testing on a free cluster and using persistent volumes you will run into issues.
My cluster has 3 workers of size u2c.2x4 (the smallest available) and is on the default version of Kubernetes for IBM Cloud (1.8.6), if that helps you debug at all. Please try again, or if your setup is different from mine, let me know and I can try with a matching setup.
$ kubectl logs --namespace=mysystem mysystem-db-node1-0
>>> Setting up STOP handlers...
>>> STARTING SSH (if required)...
>>> SSH is not enabled!
>>> STARTING POSTGRES...
>>> TUNING UP POSTGRES...
>>> Cleaning data folder which might have some garbage...
psql: could not translate host name "mysystem-db-node1-service" to address: Name or service not known
psql: could not translate host name "mysystem-db-node2-service" to address: Name or service not known
>>> Auto-detected master name: ''
>>> Setting up repmgr...
>>> Setting up repmgr config file '/etc/repmgr.conf'...
>>> Setting up upstream node...
>>> Sending in background postgres start...
>>> Waiting for local postgres server start...
>>> Wait db replica_db on mysystem-db-node1-service:5432(user: replica_user,password: *******), will try 60 times with delay 10 seconds (TIMEOUT=600)
psql: could not translate host name "mysystem-db-node3-service" to address: Name or service not known
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
psql: could not connect to server: Connection refused
Is the server running on host "mysystem-db-node1-service" (172.30.207.54) and accepting
TCP/IP connections on port 5432?
selecting default shared_buffers ... >>>>>> Db replica_db is still not accessable on mysystem-db-node1-service:5432 (will try 60 times more)
128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /var/lib/postgresql/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/data -l logfile start
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....LOG: could not bind IPv6 socket: Cannot assign requested address
HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
LOG: database system was shut down at 2018-02-14 15:40:14 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
done
server started
CREATE DATABASE
CREATE ROLE
/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/entrypoint.sh
>>> Configuring /var/lib/postgresql/data/postgresql.conf
>>>>>> Config file was replaced with standard one!
>>>>>> Adding config 'wal_keep_segments'='250'
>>>>>> Adding config 'shared_buffers'='300MB'
>>>>>> Adding config 'archive_command'=''/bin/true''
>>> Creating replication user 'replica_user'
CREATE ROLE
>>> Creating replication db 'replica_db'
LOG: received fast shutdown request
LOG: aborting any active transactions
LOG: autovacuum launcher shutting down
waiting for server to shut down....LOG: shutting down
LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
LOG: database system was shut down at 2018-02-14 15:40:16 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
LOG: incomplete startup packet
LOG: incomplete startup packet
>>>>>> Db replica_db exists on mysystem-db-node1-service:5432!
>>> Registering node with role master
INFO: connecting to master database
INFO: master register: creating database objects inside the 'repmgr_mysystem_cluster' schema
INFO: retrieving node list for cluster 'mysystem_cluster'
[REPMGR EVENT] Node id: 1; Event type: master_register; Success [1|0]: 1; Time: 2018-02-14 15:40:27.337393+00; Details:
[REPMGR EVENT] will execute script '/usr/local/bin/cluster/repmgr/events/execs/master_register.sh' for the event
[REPMGR EVENT::master_register] Node id: 1; Event type: master_register; Success [1|0]: 1; Time: 2018-02-14 15:40:27.337393+00; Details:
[REPMGR EVENT::master_register] Locking master...
[REPMGR EVENT::master_register] Unlocking standby...
NOTICE: master node correctly registered for cluster 'mysystem_cluster' with id 1 (conninfo: user=replica_user password=replica_pass host=mysystem-db-node1-service dbname=replica_db port=5432 connect_timeout=2)
>>> Starting repmgr daemon...
[2018-02-14 15:40:27] [NOTICE] looking for configuration file in current directory
[2018-02-14 15:40:27] [NOTICE] looking for configuration file in /etc
[2018-02-14 15:40:27] [NOTICE] configuration file found at: /etc/repmgr.conf
[2018-02-14 15:40:27] [INFO] connecting to database 'user=replica_user password=replica_pass host=mysystem-db-node1-service dbname=replica_db port=5432 connect_timeout=2'
[2018-02-14 15:40:27] [INFO] connected to database, checking its state
[2018-02-14 15:40:27] [INFO] checking cluster configuration with schema 'repmgr_mysystem_cluster'
[2018-02-14 15:40:27] [INFO] checking node 1 in cluster 'mysystem_cluster'
[2018-02-14 15:40:27] [INFO] reloading configuration file
[2018-02-14 15:40:27] [INFO] configuration has not changed
[2018-02-14 15:40:27] [INFO] starting continuous master connection check

zabbix agent tries to speak with server

I want to create a Zabbix proxy and a Zabbix agent and set up the agent to speak through the proxy. I have created Docker containers for this (zabbix-proxy and zabbix-agent).
proxy.conf:
Server=192.10.30.58 # address of server
ServerPort=10051
Hostname=DFS
agent.conf:
Server=ZabbixProxy # the zabbix-proxy container name
ListenPort=10050
Hostname=Agent
I have also created in Zabbix:
A proxy named DFS.
A host named DFS with 192.10.30.3:10051.
A host named Agent with 192.18.0.4:10050 (an internal IP where the agent is running).
I can see data from Monitoring-> Latest Data for both the proxy and the agent.
So, it works.
But, in my log I can see that for the agent it gives me:
INFO success: zabbix-agentd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
failed to accept an incoming connection: connection from "192.10.30.58" rejected, allowed hosts: "ZabbixProxy"
(192.10.30.3:10051 is the external IP of the proxy.)
It seems that the agent tries to speak with the server as well, but I don't know why.
If, in agent.conf, I put the proxy address 192.10.30.3 instead of ZabbixProxy (the name of the zabbix-proxy container), I still get the same errors, and I also can't get Latest Data for the agent.
If I use ServerActive=ZabbixProxy or ServerActive=192.10.30.3:10051, I receive:
...
INFO spawned: 'zabbix-agentd' with pid 51
2017-04-12 16:37:55,916 INFO exited: zabbix-agentd (exit status 1; not expected)
2017-04-12 16:37:57,928 INFO spawned: 'zabbix-agentd' with pid 52
2017-04-12 16:37:57,988 INFO exited: zabbix-agentd (exit status 1; not expected)
2017-04-12 16:38:01,001 INFO spawned: 'zabbix-agentd' with pid 53
2017-04-12 16:38:01,061 INFO exited: zabbix-agentd (exit status 1; not expected)
2017-04-12 16:38:02,063 INFO gave up: zabbix-agentd entered FATAL state, too many start retries too quickly
and of course now the agent doesn't work at all.
The Server parameter is for passive items - incoming connections to the agent. The agent connects out to the server (or proxy) based on the ServerActive parameter, which seems to be misconfigured in your case. A sketch of an agent.conf along those lines is below.
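A minimal agent.conf pointing everything at the proxy might look like this (a sketch, assuming the proxy container listens on the default trapper port 10051):
Server=ZabbixProxy              # passive checks: hosts allowed to connect TO the agent
ServerActive=ZabbixProxy:10051  # active checks: the agent connects OUT to the proxy
ListenPort=10050
Hostname=Agent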

Why is my application not being deployed on OpenShift?

I believe I have everything set up properly for my server, but I keep getting this error:
Starting NodeJS cartridge
Tue Jan 05 2016 10:49:19 GMT-0500 (EST): Starting application 'squadstream' ...
Waiting for application port (8080) become available ...
Application 'squadstream' failed to start (port 8080 not available)
-------------------------
Git Post-Receive Result: failure
Activation status: failure
Activation failed for the following gears:
568be5b67628e1805b0000f2 (Error activating gear: CLIENT_ERROR: Failed to
execute: 'control start' for /var/lib/openshift/568be5b67628e1805b0000f2/nodejs
#<IO:0x0000000082d2a0>
#<IO:0x0000000082d228>
)
Deployment completed with status: failure
postreceive failed
I have my git repo set up with all the steps followed properly.
https://github.com/ammark47/SquadStreamServer
Edit: I have another app on OpenShift that is on port 8080. I'm not sure if that makes a difference.
If the other application is running on the same gear, then it is binding to port 8080 first, making it unavailable for your second application. You will need to run each application on its own gear. Also, you need to make sure that you are binding to port 8080 on the correct IP address for your gear; you can't bind to 0.0.0.0 or 127.0.0.1. A sketch of what that binding looks like is below.
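On OpenShift v2 gears the NodeJS cartridge exposes the address and port to bind to as environment variables, so the server setup would look roughly like this (a sketch, not your actual code):
// server.js - binds to the gear-local address provided by the cartridge
var http = require('http');

// OPENSHIFT_NODEJS_IP / OPENSHIFT_NODEJS_PORT are set by the NodeJS cartridge;
// fall back to localhost:8080 when running outside OpenShift.
var ip   = process.env.OPENSHIFT_NODEJS_IP   || '127.0.0.1';
var port = process.env.OPENSHIFT_NODEJS_PORT || 8080;

http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('ok\n');
}).listen(port, ip, function () {
    console.log('Listening on ' + ip + ':' + port);
});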
