Version details
OS: Ubuntu 18.04.5 LTS
aziot-edge: bionic,now 1.2.3-1 amd64
aziot-identity-service: bionic,now 1.2.2-1 amd64
docker: Docker version 20.10.8+azure, build 3967b7d28e15a020e4ee344283128ead633b3e0c
Verifying the installation shows that aziot-identityd is in the "Down - activating" state:
# sudo iotedge system status
System services:
aziot-edged Running
aziot-identityd Down - activating
aziot-keyd Running
aziot-certd Running
aziot-tpmd Ready
aziot-identityd is in a bad state because:
aziot-identityd.service: Down - activating : Printing the last 10 log lines.
-- Logs begin at Fri 2020-11-06 12:29:56 IST, end at Fri 2021-09-10 19:07:13 IST. --
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [INFO] - Could not reconcile Identities with current device data. Reprovisioning.
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [INFO] - Updated device info for Edge1.
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [ERR!] - Failed to provision with IoT Hub, and no valid device backup was found: Hub client error
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [ERR!] - service encountered an error
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [ERR!] - caused by: Hub client error
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [ERR!] - caused by: internal error
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 2021-09-10T13:37:10Z [ERR!] - 0: <unknown>
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN aziot-identityd[1871]: 1: <unknown>
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN systemd[1]: aziot-identityd.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 19:07:10 vm-DevIoTEdge1-poc-CentIN systemd[1]: aziot-identityd.service: Failed with result 'exit-code'.
iotedge check shows two configuration-related errors (four errors in total):
# iotedge check --verbose
Configuration checks (aziot-identity-service)
---------------------------------------------
√ keyd configuration is well-formed - OK
√ certd configuration is well-formed - OK
√ tpmd configuration is well-formed - OK
√ identityd configuration is well-formed - OK
√ daemon configurations up-to-date with config.toml - OK
√ identityd config toml file specifies a valid hostname - OK
√ aziot-identity-service package is up-to-date - OK
√ host time is close to reference time - OK
√ preloaded certificates are valid - OK
√ keyd is running - OK
√ certd is running - OK
√ identityd is running - OK
× read all preloaded certificates from the Certificates Service - Error
could not load cert with ID "aziot-edged-trust-bundle"
Caused by:
parameter "id" has an invalid value
caused by: not found
√ read all preloaded key pairs from the Keys Service - OK
√ ensure all preloaded certificates match preloaded private keys with the same ID - OK
Connectivity checks (aziot-identity-service)
--------------------------------------------
√ host can connect to and perform TLS handshake with iothub AMQP port - OK
√ host can connect to and perform TLS handshake with iothub HTTPS / WebSockets port - OK
√ host can connect to and perform TLS handshake with iothub MQTT port - OK
Configuration checks
--------------------
√ aziot-edged configuration is well-formed - OK
√ configuration up-to-date with config.toml - OK
√ container engine is installed and functional - OK
× configuration has correct URIs for daemon mgmt endpoint - Error
SocketError - SocketErrorCode (TimedOut) : Operation timed out
One or more errors occurred. (Got bad response: )
caused by: docker returned exit code: 1, stderr = SocketError - SocketErrorCode (TimedOut) : Operation timed out
One or more errors occurred. (Got bad response: )
√ aziot-edge package is up-to-date - OK
√ container time is close to host time - OK
‼ DNS server - Warning
Container engine is not configured with DNS server setting, which may impact connectivity to IoT Hub.
Please see https://aka.ms/iotedge-prod-checklist-dns for best practices.
You can ignore this warning if you are setting DNS server per module in the Edge deployment.
caused by: Could not open container engine config file /etc/docker/daemon.json
caused by: No such file or directory (os error 2)
√ production readiness: container engine - OK
‼ production readiness: logs policy - Warning
Container engine is not configured to rotate module logs which may cause it run out of disk space.
Please see https://aka.ms/iotedge-prod-checklist-logs for best practices.
You can ignore this warning if you are setting log policy per module in the Edge deployment.
caused by: Could not open container engine config file /etc/docker/daemon.json
caused by: No such file or directory (os error 2)
× production readiness: Edge Agent's storage directory is persisted on the host filesystem - Error
Could not check current state of edgeAgent container
caused by: docker returned exit code: 1, stderr = Error: No such object: edgeAgent
× production readiness: Edge Hub's storage directory is persisted on the host filesystem - Error
Could not check current state of edgeHub container
caused by: docker returned exit code: 1, stderr = Error: No such object: edgeHub
√ Agent image is valid and can be pulled from upstream - OK
Connectivity checks
-------------------
√ container on the default network can connect to upstream AMQP port - OK
√ container on the default network can connect to upstream HTTPS / WebSockets port - OK
√ container on the default network can connect to upstream MQTT port - OK
√ container on the IoT Edge module network can connect to upstream AMQP port - OK
√ container on the IoT Edge module network can connect to upstream HTTPS / WebSockets port - OK
√ container on the IoT Edge module network can connect to upstream MQTT port - OK
30 check(s) succeeded.
2 check(s) raised warnings.
4 check(s) raised errors.
The config.toml file only has manual provisioning with a connection string.
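For reference, a manual-provisioning section in config.toml looks roughly like the sketch below; the hostname, device ID, and key are placeholders, not my real values:
[provisioning]
source = "manual"
connection_string = "HostName=<your-hub>.azure-devices.net;DeviceId=<your-device-id>;SharedAccessKey=<your-key>"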
I had this error because my IoT Hub's "Public network access" setting was set to "Disabled".
You can correct this by doing the following:
Go to the Azure portal and open the IoT Hub resource in question.
Go to the Networking menu option.
Change "Public network access" to either "All networks" or "Selected IP ranges", depending on your use case. Remember that if you select "Selected IP ranges", you must add the VM/IoT device's IP address to the list of allowed IP addresses. (A CLI alternative is sketched below.)
I came across this issue many times while working in an enterprise environment. My findings relate more to the environment and the security aspects of the whole system.
In my case, the working environment was Red Hat Linux, with Azure hosted on-premises behind an added proxy server layer. One general piece of advice for solving the most common issues in such an environment is to grant all the necessary rwx (read, write, execute) permissions.
Pinpointing the problem asked here: the identity daemon is failing because the aziot trust bundle is not loading properly.
read all preloaded certificates from the Certificates Service - Error
could not load cert with ID "aziot-edged-trust-bundle"
Check that the trust bundle certificate is properly set up for use with the device identity certificate, as sketched below.
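If you do preload a trust bundle, the relevant entry in config.toml is a single line pointing at the PEM file (the path below is only an example); if you do not intend to use a custom trust bundle, make sure config.toml does not reference one, so the check no longer looks for the "aziot-edged-trust-bundle" ID:
trust_bundle_cert = "file:///etc/aziot/trust-bundle.pem"
After editing config.toml, re-apply it with sudo iotedge config apply.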
The second error is related to the daemon management socket:
× configuration has correct URIs for daemon mgmt endpoint - Error
SocketError - SocketErrorCode (TimedOut) : Operation timed out
One or more errors occurred. (Got bad response: )
caused by: docker returned exit code: 1, stderr = SocketError - SocketErrorCode (TimedOut) : Operation timed out
One or more errors occurred. (Got bad response: )
This can be resolved by manually granting ownership/permissions on mgmt.sock in the /var/lib/iotedge location, for example as sketched below.
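A minimal sketch of what I mean; the exact socket path and the iotedge user/group name can differ between installs, so treat both as assumptions and check first:
# check where the management socket lives and who currently owns it
ls -l /var/lib/iotedge/mgmt.sock
# hand it to the IoT Edge service account (user/group name is an assumption; adjust to your system)
sudo chown iotedge:iotedge /var/lib/iotedge/mgmt.sock
sudo systemctl restart aziot-edged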
Nevertheless, there can be a variety of reasons for IoT Edge provisioning (DPS or otherwise) to fail and, in turn, for edgeAgent and edgeHub to not start. It is better to get to the root of the issue and resolve it from there.
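Whatever the root cause turns out to be, the aggregated service logs and the built-in checks are the quickest way to confirm whether a fix worked; for example:
# aggregated logs from the aziot services and aziot-edged
sudo iotedge system logs
# re-run the health checks after each change
sudo iotedge check --verbose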
I'm using a Raspberry Pi with Raspbian. I want to use Kafka to stream data from a camera to my phone. I downloaded this package from the Kafka website, which contains both ZooKeeper and Kafka:
https://www.apache.org/dyn/closer.cgi?path=/kafka/2.4.1/kafka_2.12-2.4.1.tgz
First I started ZooKeeper with the zookeeper-server-start.sh script located in the bin directory by running:
"sudo bin/zookeeper-server-start.sh config/zookeeper.properties" in the terminal. I got back:
[2020-04-07 17:56:44,843] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:44,854] WARN config/zookeeper.properties is relative. Prepend ./ to indicate that you're sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:44,936] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:44,937] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:44,975] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-04-07 17:56:44,976] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-04-07 17:56:44,978] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-04-07 17:56:44,979] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2020-04-07 17:56:45,010] INFO Log4j found with jmx enabled. (org.apache.zookeeper.jmx.ManagedUtil)
[2020-04-07 17:56:45,263] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:45,264] WARN config/zookeeper.properties is relative. Prepend ./ to indicate that you're sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:45,268] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:45,269] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-04-07 17:56:45,271] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2020-04-07 17:56:45,319] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-04-07 17:56:45,483] INFO Server environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,484] INFO Server environment:host.name=Rupert (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,485] INFO Server environment:java.version=11.0.6 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,486] INFO Server environment:java.vendor=Raspbian (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,487] INFO Server environment:java.home=/usr/lib/jvm/java-11-openjdk-armhf (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,488] INFO Server environment:java.class.path=/usr/local/kafka/bin/../libs/activation-1.1.1.jar:/usr/local/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/usr/local/kafka/bin/../libs/argparse4j-0.7.0.jar:/usr/local/kafka/bin/../libs/audience-annotations-0.5.0.jar:/usr/local/kafka/bin/../libs/commons-cli-1.4.jar:/usr/local/kafka/bin/../libs/commons-lang3-3.8.1.jar:/usr/local/kafka/bin/../libs/connect-api-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-basic-auth-extension-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-file-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-json-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-mirror-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-mirror-client-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-runtime-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-transforms-2.4.1.jar:/usr/local/kafka/bin/../libs/guava-20.0.jar:/usr/local/kafka/bin/../libs/hk2-api-2.5.0.jar:/usr/local/kafka/bin/../libs/hk2-locator-2.5.0.jar:/usr/local/kafka/bin/../libs/hk2-utils-2.5.0.jar:/usr/local/kafka/bin/../libs/jackson-annotations-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-core-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-databind-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-dataformat-csv-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-datatype-jdk8-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-base-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-module-paranamer-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-module-scala_2.12-2.10.0.jar:/usr/local/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/usr/local/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/usr/local/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/usr/local/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/usr/local/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/usr/local/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/usr/local/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/usr/local/kafka/bin/../libs/jaxb-api-2.3.0.jar:/usr/local/kafka/bin/../libs/jersey-client-2.28.jar:/usr/local/kafka/bin/../libs/jersey-common-2.28.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/usr/local/kafka/bin/../libs/jersey-hk2-2.28.jar:/usr/local/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/usr/local/kafka/bin/../libs/jersey-server-2.28.jar:/usr/local/kafka/bin/../libs/jetty-client-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-continuation-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-http-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-io-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-security-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-server-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-servlet-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-servlets-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-util-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jopt-simple-5.0.4.jar:/usr/local/kafka/bin/../libs/kafka_2.12-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka_2.12-2.4.1-sources.jar:/usr/local/kafka/bin/../libs/kafka-clients-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-log4j-appender-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-streams-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-streams-examples-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-strea
ms-scala_2.12-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-streams-test-utils-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-tools-2.4.1.jar:/usr/local/kafka/bin/../libs/log4j-1.2.17.jar:/usr/local/kafka/bin/../libs/lz4-java-1.6.0.jar:/usr/local/kafka/bin/../libs/maven-artifact-3.6.1.jar:/usr/local/kafka/bin/../libs/metrics-core-2.2.0.jar:/usr/local/kafka/bin/../libs/netty-buffer-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-codec-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-common-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-handler-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-resolver-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-transport-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-transport-native-epoll-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-transport-native-unix-common-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/kafka/bin/../libs/paranamer-2.8.jar:/usr/local/kafka/bin/../libs/plexus-utils-3.2.0.jar:/usr/local/kafka/bin/../libs/reflections-0.9.11.jar:/usr/local/kafka/bin/../libs/rocksdbjni-5.18.3.jar:/usr/local/kafka/bin/../libs/scala-collection-compat_2.12-2.1.2.jar:/usr/local/kafka/bin/../libs/scala-java8-compat_2.12-0.9.0.jar:/usr/local/kafka/bin/../libs/scala-library-2.12.10.jar:/usr/local/kafka/bin/../libs/scala-logging_2.12-3.9.2.jar:/usr/local/kafka/bin/../libs/scala-reflect-2.12.10.jar:/usr/local/kafka/bin/../libs/slf4j-api-1.7.28.jar:/usr/local/kafka/bin/../libs/slf4j-log4j12-1.7.28.jar:/usr/local/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/usr/local/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/usr/local/kafka/bin/../libs/zookeeper-3.5.7.jar:/usr/local/kafka/bin/../libs/zookeeper-jute-3.5.7.jar:/usr/local/kafka/bin/../libs/zstd-jni-1.4.3-1.jar (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,498] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib/arm-linux-gnueabihf/jni:/lib/arm-linux-gnueabihf:/usr/lib/arm-linux-gnueabihf:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,499] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,500] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,501] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,502] INFO Server environment:os.arch=arm (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,503] INFO Server environment:os.version=4.19.66-v7+ (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,504] INFO Server environment:user.name=root (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,506] INFO Server environment:user.home=/root (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,507] INFO Server environment:user.dir=/usr/local/kafka (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,508] INFO Server environment:os.memory.free=493MB (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,509] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,510] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,529] INFO minSessionTimeout set to 6000 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,531] INFO maxSessionTimeout set to 60000 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,539] INFO Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /tmp/zookeeper/version-2 snapdir /tmp/zookeeper/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
[2020-04-07 17:56:45,661] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
[2020-04-07 17:56:45,699] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-07 17:56:45,773] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-07 17:56:45,965] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
[2020-04-07 17:56:45,994] INFO Reading snapshot /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileSnap)
[2020-04-07 17:56:46,076] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-04-07 17:56:46,377] INFO Using checkIntervalMs=60000 maxPerMinute=10000 (org.apache.zookeeper.server.ContainerManager)
I believe that this is correct, but please do bring anything to my attention.
Next I tried to run Kafka using "sudo bin/kafka-server-start.sh config/server.properties" and after running for 10 seconds it returned:
[2020-04-07 17:49:40,577] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-04-07 17:49:47,215] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2020-04-07 17:49:47,222] INFO starting (kafka.server.KafkaServer)
[2020-04-07 17:49:47,233] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2020-04-07 17:49:47,558] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2020-04-07 17:49:47,679] INFO Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,681] INFO Client environment:host.name=Rupert (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,682] INFO Client environment:java.version=11.0.6 (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,682] INFO Client environment:java.vendor=Raspbian (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,683] INFO Client environment:java.home=/usr/lib/jvm/java-11-openjdk-armhf (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,684] INFO Client environment:java.class.path=/usr/local/kafka/bin/../libs/activation-1.1.1.jar:/usr/local/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/usr/local/kafka/bin/../libs/argparse4j-0.7.0.jar:/usr/local/kafka/bin/../libs/audience-annotations-0.5.0.jar:/usr/local/kafka/bin/../libs/commons-cli-1.4.jar:/usr/local/kafka/bin/../libs/commons-lang3-3.8.1.jar:/usr/local/kafka/bin/../libs/connect-api-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-basic-auth-extension-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-file-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-json-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-mirror-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-mirror-client-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-runtime-2.4.1.jar:/usr/local/kafka/bin/../libs/connect-transforms-2.4.1.jar:/usr/local/kafka/bin/../libs/guava-20.0.jar:/usr/local/kafka/bin/../libs/hk2-api-2.5.0.jar:/usr/local/kafka/bin/../libs/hk2-locator-2.5.0.jar:/usr/local/kafka/bin/../libs/hk2-utils-2.5.0.jar:/usr/local/kafka/bin/../libs/jackson-annotations-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-core-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-databind-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-dataformat-csv-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-datatype-jdk8-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-base-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-module-paranamer-2.10.0.jar:/usr/local/kafka/bin/../libs/jackson-module-scala_2.12-2.10.0.jar:/usr/local/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/usr/local/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/usr/local/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/usr/local/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/usr/local/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/usr/local/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/usr/local/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/usr/local/kafka/bin/../libs/jaxb-api-2.3.0.jar:/usr/local/kafka/bin/../libs/jersey-client-2.28.jar:/usr/local/kafka/bin/../libs/jersey-common-2.28.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/usr/local/kafka/bin/../libs/jersey-hk2-2.28.jar:/usr/local/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/usr/local/kafka/bin/../libs/jersey-server-2.28.jar:/usr/local/kafka/bin/../libs/jetty-client-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-continuation-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-http-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-io-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-security-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-server-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-servlet-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-servlets-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jetty-util-9.4.20.v20190813.jar:/usr/local/kafka/bin/../libs/jopt-simple-5.0.4.jar:/usr/local/kafka/bin/../libs/kafka_2.12-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka_2.12-2.4.1-sources.jar:/usr/local/kafka/bin/../libs/kafka-clients-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-log4j-appender-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-streams-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-streams-examples-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-strea
ms-scala_2.12-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-streams-test-utils-2.4.1.jar:/usr/local/kafka/bin/../libs/kafka-tools-2.4.1.jar:/usr/local/kafka/bin/../libs/log4j-1.2.17.jar:/usr/local/kafka/bin/../libs/lz4-java-1.6.0.jar:/usr/local/kafka/bin/../libs/maven-artifact-3.6.1.jar:/usr/local/kafka/bin/../libs/metrics-core-2.2.0.jar:/usr/local/kafka/bin/../libs/netty-buffer-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-codec-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-common-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-handler-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-resolver-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-transport-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-transport-native-epoll-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/netty-transport-native-unix-common-4.1.45.Final.jar:/usr/local/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/kafka/bin/../libs/paranamer-2.8.jar:/usr/local/kafka/bin/../libs/plexus-utils-3.2.0.jar:/usr/local/kafka/bin/../libs/reflections-0.9.11.jar:/usr/local/kafka/bin/../libs/rocksdbjni-5.18.3.jar:/usr/local/kafka/bin/../libs/scala-collection-compat_2.12-2.1.2.jar:/usr/local/kafka/bin/../libs/scala-java8-compat_2.12-0.9.0.jar:/usr/local/kafka/bin/../libs/scala-library-2.12.10.jar:/usr/local/kafka/bin/../libs/scala-logging_2.12-3.9.2.jar:/usr/local/kafka/bin/../libs/scala-reflect-2.12.10.jar:/usr/local/kafka/bin/../libs/slf4j-api-1.7.28.jar:/usr/local/kafka/bin/../libs/slf4j-log4j12-1.7.28.jar:/usr/local/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/usr/local/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/usr/local/kafka/bin/../libs/zookeeper-3.5.7.jar:/usr/local/kafka/bin/../libs/zookeeper-jute-3.5.7.jar:/usr/local/kafka/bin/../libs/zstd-jni-1.4.3-1.jar (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,693] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib/arm-linux-gnueabihf/jni:/lib/arm-linux-gnueabihf:/usr/lib/arm-linux-gnueabihf:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,695] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,696] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,697] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,698] INFO Client environment:os.arch=arm (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,699] INFO Client environment:os.version=4.19.66-v7+ (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,700] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,701] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,702] INFO Client environment:user.dir=/usr/local/kafka (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,703] INFO Client environment:os.memory.free=975MB (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,704] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,704] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,769] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$#114918a (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:47,887] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2020-04-07 17:49:48,017] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2020-04-07 17:49:48,130] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:48,313] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2020-04-07 17:49:48,449] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:48,557] INFO Socket error occurred: localhost/0:0:0:0:0:0:0:1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:49,678] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:49,682] INFO Socket error occurred: localhost/0:0:0:0:0:0:0:1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:50,784] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:50,787] INFO Socket error occurred: localhost/0:0:0:0:0:0:0:1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:51,890] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:51,894] INFO Socket error occurred: localhost/127.0.0.1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:52,997] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:53,000] INFO Socket error occurred: localhost/0:0:0:0:0:0:0:1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:54,114] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:54,118] INFO Socket error occurred: localhost/127.0.0.1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:54,353] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[2020-04-07 17:49:55,221] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:55,375] INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
[2020-04-07 17:49:55,396] INFO EventThread shut down for session: 0x0 (org.apache.zookeeper.ClientCnxn)
[2020-04-07 17:49:55,412] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[2020-04-07 17:49:55,449] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:259)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:255)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:113)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1858)
at kafka.server.KafkaServer.createZkClient$1(KafkaServer.scala:375)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:399)
at kafka.server.KafkaServer.startup(KafkaServer.scala:207)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:84)
at kafka.Kafka.main(Kafka.scala)
[2020-04-07 17:49:55,480] INFO shutting down (kafka.server.KafkaServer)
[2020-04-07 17:49:55,577] INFO shut down completed (kafka.server.KafkaServer)
[2020-04-07 17:49:55,583] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2020-04-07 17:49:55,612] INFO shutting down (kafka.server.KafkaServer)
My assumption is that Kafka can't reach ZooKeeper, but I really have no clue.
Is there a way I can test whether ZooKeeper is working, perhaps something like the check sketched below?
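For context, this is the kind of quick sanity check I have in mind; it assumes the default client port 2181 and uses the zookeeper-shell.sh script that ships in the same Kafka download:
# is anything listening on the ZooKeeper client port?
ss -tlnp | grep 2181
# try a trivial ZooKeeper operation through the shell bundled with Kafka
bin/zookeeper-shell.sh localhost:2181 ls /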
Thanks to anyone who helps, stay healthy.
When I pressed Ctrl+Shift+C to get rid of the text that showed up afterwards, it also undid what needed to happen.
I'm trying to run a simple query on a table with only 10 rows:
select MAX(Column3) from table;
However, the Spark application runs indefinitely with the following messages:
2017-05-10T16:23:40,397 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1360312263) connection to /0.0.0.0:8032 from ubuntu sending #1841
2017-05-10T16:23:40,397 DEBUG [IPC Client (1360312263) connection to /0.0.0.0:8032 from ubuntu] ipc.Client: IPC Client (1360312263) connection to /0.0.0.0:8032 from ubuntu got value #1841
2017-05-10T16:23:40,397 DEBUG [main] ipc.ProtobufRpcEngine: Call: getApplicationReport took 0ms
2017-05-10T16:23:41,397 DEBUG [main] security.UserGroupInformation: PrivilegedAction as:ubuntu (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:323)
2017-05-10T16:23:41,398 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1360312263) connection to /0.0.0.0:8032 from ubuntu sending #1842
2017-05-10T16:23:41,398 DEBUG [IPC Client (1360312263) connection to /0.0.0.0:8032 from ubuntu] ipc.Client: IPC Client (1360312263) connection to /0.0.0.0:8032 from ubuntu got value #1842
2017-05-10T16:23:41,398 DEBUG [main] ipc.ProtobufRpcEngine: Call: getApplicationReport took 1ms
2017-05-10T16:23:41,399 DEBUG [main] security.UserGroupInformation: PrivilegedAction as:ubuntu (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:323)
2017-05-10T16:23:41,399 DEBUG [IPC Parameter Sending Thread #0] ipc.Client: IPC Client (1360312263) connection to /0.0.0.0:8032 from ubuntu sending #1843
2017-05-10T16:23:41,399 DEBUG [IPC Client (1360312263) connection to /0.0.0.0:8032 from ubuntu] ipc.Client: IPC Client (1360312263) connection to /0.0.0.0:8032 from ubuntu got value #1843
2017-05-10T16:23:41,399 DEBUG [main] ipc.ProtobufRpcEngine: Call: getApplicationReport took 0ms
The issue was related to an unhealthy node, so YARN was not able to assign the task. The solution was to increase the maximum disk utilization percentage YARN allows in yarn-site.xml, because my disk was at 97% usage:
<property>
<name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
<value>99</value>
</property>
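After freeing up space or raising this limit, the NodeManager has to pick up the new configuration before the node is reported healthy again. The commands below are only a sketch, assuming a standard Hadoop 2.x layout with HADOOP_HOME set:
# restart the NodeManager so it re-reads yarn-site.xml
$HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager
# confirm the node is no longer listed as unhealthy
yarn node -list -all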
I'm trying to setup a gridgain cluster in a cloud environment (opensciencedatacloud.org).
I've verified that UDP multicast is available and port 47400 is open in this environment, but unfortunately GridGain is unable to find the other nodes when they are launched. Do you have a clue why it is not working?
Below you can find a cluster node log:
INFO o.g.grid.kernal.GridKernal%nextflow - Config URL: n/a
INFO o.g.grid.kernal.GridKernal%nextflow - Daemon mode: off
INFO o.g.grid.kernal.GridKernal%nextflow - OS: Linux 2.6.32-358.2.1.el6.x86_64 amd64
INFO o.g.grid.kernal.GridKernal%nextflow - OS user: root
INFO o.g.grid.kernal.GridKernal%nextflow - Language runtime: Groovy
INFO o.g.grid.kernal.GridKernal%nextflow - VM information: Java(TM) SE Runtime Environment 1.7.0_51-b13 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 24.51-b03
INFO o.g.grid.kernal.GridKernal%nextflow - VM total memory: 0.83GB
INFO o.g.grid.kernal.GridKernal%nextflow - Remote Management [restart: off, REST: on, JMX (remote: off)]
INFO o.g.grid.kernal.GridKernal%nextflow - GRIDGAIN_HOME=/root
INFO o.g.grid.kernal.GridKernal%nextflow - VM arguments: [-Djava.awt.headless=true]
WARN o.g.grid.kernal.GridKernal%nextflow - SMTP is not configured - email notifications are off.
INFO o.g.grid.kernal.GridKernal%nextflow - Configured caches ['allSessions']
INFO o.g.grid.kernal.GridKernal%nextflow - 3-rd party licenses can be found at: /root/libs/licenses
INFO o.g.grid.kernal.GridKernal%nextflow - Local node user attribute [ROLE=worker]
[gridgain-#5%pub-nextflow%] WARN o.g.grid.kernal.GridDiagnostic - Initial heap size is less than 512MB (59MB). It is highly recommended to allocate at least 512MB of initial heap to run GridGain. Use -Xms512m -Xmx512m to set initial heap size.
INFO o.g.grid.kernal.GridKernal%nextflow - Non-loopback local IPs: 172.16.1.98, fe80:0:0:0:78b5:53ff:fe01:643b%3, fe80:0:0:0:f816:3eff:fe54:f4e8%2, 172.17.42.1
INFO o.g.grid.kernal.GridKernal%nextflow - Enabled local MACs: FA163E54F4E8, 7AB55301643B
INFO o.g.g.s.c.t.GridTcpCommunicationSpi - IPC shared memory server endpoint started [port=48100, tokDir=/root/work/ipc/shmem/cf5dbd14-4bb8-420b-998f-820056aa6d1c-2646]
INFO o.g.g.s.c.t.GridTcpCommunicationSpi - Successfully bound shared memory communication to TCP port [port=48100, locHost=0.0.0.0/0.0.0.0]
INFO o.g.g.s.c.t.GridTcpCommunicationSpi - Successfully bound to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0]
WARN o.g.g.s.c.noop.GridNoopCheckpointSpi - Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
INFO o.g.grid.kernal.GridKernal%nextflow - Security status [authentication=off, secure-session=off]
WARN o.g.g.k.p.cache.GridCacheProcessor - Cache write synchronization mode is set to FULL_ASYNC. All single-key 'put' and 'remove' operations will return 'null', all 'putx' and 'removex' operations will return 'true'.
WARN o.g.g.k.p.cache.GridCacheProcessor - Automatically set write order mode to PRIMARY for write synchronization mode [writeSynchronizationMode=FULL_ASYNC, cacheName=allSessions]
WARN o.g.g.k.p.cache.GridCacheProcessor - Query indexing is disabled (queries will not work) for cache: 'allSessions'. To enable change GridCacheConfiguration.isQueryIndexEnabled() property.
INFO o.g.g.k.p.cache.GridCacheDgcManager - <allSessions> DGC trace log disabled.
INFO o.g.g.k.p.cache.GridCacheProcessor - Started cache [name=allSessions, mode=REPLICATED]
INFO org.eclipse.jetty.server.Server - jetty-9.0.5.v20130815
INFO o.e.jetty.server.ServerConnector - Started ServerConnector#7b9617a0{HTTP/1.1}{0.0.0.0:8080}
INFO o.g.g.k.p.r.p.h.j.GridJettyRestProtocol - Command protocol successfully started [name=Jetty REST, host=/0.0.0.0, port=8080]
INFO o.g.g.k.p.r.p.t.GridTcpRestProtocol - Command protocol successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
INFO o.g.g.s.d.tcp.GridTcpDiscoverySpi - Successfully bound to TCP port [port=47500, localHost=/172.16.1.98]
WARN o.g.g.s.d.t.i.m.GridTcpDiscoveryMulticastIpFinder - GridTcpDiscoveryMulticastIpFinder has no pre-configured addresses (it is recommended in production to specify at least one address in GridTcpDiscoveryMulticastIpFinder.getAddresses() configuration property)
>>> +------------------------------------------------------------------------------------+
>>> GridGain ver. platform-os-6.0.2#20140323-sha1:f9c796a1b29d2d7ce2737e681cbe578b5315d79f
>>> +------------------------------------------------------------------------------------+
>>> OS name: Linux 2.6.32-358.2.1.el6.x86_64 amd64
>>> CPU(s): 2
>>> Heap: 0.83GB
>>> VM name: 2646#node.novalocal
>>> Grid name: nextflow
>>> Local node [ID=CF5DBD14-4BB8-420B-998F-820056AA6D1C, order=1]
>>> Local node addresses: [node.novalocal/172.16.1.98]
>>> Local ports: TCP:8080 TCP:11211 TCP:47100 TCP:47500 TCP:48100
>>> GridGain documentation: http://www.gridgain.com/documentation
INFO o.g.g.k.m.d.GridDiscoveryManager - Topology snapshot [ver=1, nodes=1, CPUs=2, heap=0.83GB]
Software firewalls usually block multicast packets. Can you try with the firewall disabled on your system, for example as sketched below?
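A rough way to test that hypothesis on an EL6-based image like the one in your log (re-enable the firewall and open the GridGain ports 47100/47400/47500 properly afterwards if this turns out to be the cause):
# temporarily stop the firewall on every node (RHEL/CentOS 6 style; adjust for your distro)
sudo service iptables stop
# or just inspect the current rules for anything dropping multicast or the discovery/communication ports
sudo iptables -L -n -v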
Ubuntu 10.04 system, new Plone install. The install went fine, I created some content, and everything seemed fine. One kernel update and a reboot later, Plone is running but will not serve any pages to a browser; a browser attempt just times out. I can telnet to port 8080 on the system and send an HTTP GET by hand, and nothing comes back. The log file for client1 in a ZEO install keeps repeating:
2011-08-10T16:59:57 INFO ZServer HTTP server started at Wed Aug 10 16:59:57 2011
Hostname: 0.0.0.0
Port: 8080
------
2011-08-10T16:59:57 INFO Zope Set effective user to "plone"
------
2011-08-10T17:00:02 INFO ZEO.ClientStorage zeostorage ClientStorage (pid=24596) created RW/normal for storage: '1'
------
2011-08-10T17:00:02 INFO ZEO.cache created temporary cache file '<fdopen>'
------
2011-08-10T17:00:02 INFO ZEO.ClientStorage zeostorage Testing connection <ManagedClientConnection ('127.0.0.1', 8100)>
------
2011-08-10T17:00:02 INFO ZEO.zrpc.Connection(C) (127.0.0.1:8100) received handshake 'Z3101'
------
2011-08-10T17:00:02 INFO ZEO.ClientStorage zeostorage Server authentication protocol None
------
2011-08-10T17:00:02 INFO ZEO.ClientStorage zeostorage Connected to storage: ('dns', 8100)
------
2011-08-10T17:00:02 INFO ZEO.ClientStorage zeostorage No verification necessary -- empty cache
------
2011-08-10T17:00:22 INFO ZServer HTTP server started at Wed Aug 10 17:00:22 2011
Hostname: 0.0.0.0
Port: 8080
I haven't been able to find any other info on what is causing this, nor can I find any documentation on debugging a Plone install.
Thanks for any help you can provide.
Forgive the earlier aborted answer; I misread the log snippet. The repeated log entries you're seeing are what you'd expect from repeated restarts. Are you repeatedly restarting the instance? If not, then it seems your instance is restarting on its own. Shut down the instance and start it in the foreground with "bin/instance fg" and see if that gives you more information.
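Since your log mentions client1 in a ZEO setup, the script is probably bin/client1 rather than bin/instance; a minimal sketch of what I mean, run from the buildout directory:
# stop the misbehaving client, then run it in the foreground so tracebacks land on the console
bin/client1 stop
bin/client1 fg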