My goal is to reset a device that is already connected to IoT Central by removing the X.509 certificate file and deleting the key pair from the HSM.
Sep 22 17:04:11 bobcat systemd[1]: aziot-identityd.service: Scheduled restart job, restart counter is at 18119.
Sep 22 17:04:11 bobcat systemd[1]: Stopped Azure IoT Identity Service.
Sep 22 17:04:11 bobcat aziot-edged[404821]: 2022-09-22T17:04:11Z [WARN] - The daemon could not start up successfully: Could not retrieve device information
Sep 22 17:04:11 bobcat aziot-edged[404821]: 2022-09-22T17:04:11Z [WARN] - caused by: HTTP request error
Sep 22 17:04:11 bobcat aziot-edged[404821]: 2022-09-22T17:04:11Z [WARN] - caused by: connection error: Connection reset by peer (os error 104)
Sep 22 17:04:11 bobcat aziot-edged[404821]: 2022-09-22T17:04:11Z [WARN] - Requesting device reprovision.
Sep 22 17:04:11 bobcat aziot-edged[404821]: 2022-09-22T17:04:11Z [WARN] - The reprovisioning operation failed
Sep 22 17:04:11 bobcat systemd[1]: Started Azure IoT Identity Service.
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 2022-09-22T17:04:11Z [ERR!] - Failed to provision with IoT Hub, and no valid device backup was found: internal error
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 2022-09-22T17:04:11Z [ERR!] - service encountered an error
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 2022-09-22T17:04:11Z [ERR!] - caused by: internal error
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 2022-09-22T17:04:11Z [ERR!] - caused by: could not create certificate
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 2022-09-22T17:04:11Z [ERR!] - caused by: internal error
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 2022-09-22T17:04:11Z [ERR!] - caused by: could not create certificate
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 2022-09-22T17:04:11Z [ERR!] - caused by: old device identity certificate, which should contain the common name field required for registration, could not be retrieved
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 2022-09-22T17:04:11Z [ERR!] - 0: <unknown>
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 1: <unknown>
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 2: <unknown>
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 3: <unknown>
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 4: <unknown>
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 5: <unknown>
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 6: <unknown>
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 7: <unknown>
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 8: <unknown>
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 9: <unknown>
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 10: <unknown>
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 11: __libc_start_main
Sep 22 17:04:11 bobcat aziot-identityd[404892]: 12: <unknown>
Sep 22 17:04:11 bobcat systemd[1]: aziot-identityd.service: Main process exited, code=exited, status=1/FAILURE
Sep 22 17:04:11 bobcat systemd[1]: aziot-identityd.service: Failed with result 'exit-code'.
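Since the error chain bottoms out at the old device identity certificate (whose Common Name carries the registration ID) being gone, re-provisioning after the reset needs a fresh X.509 identity. A minimal self-signed sketch with openssl — the device ID `my-device-01` and the file names are placeholders, and a production device would use a CA-issued certificate instead:

```shell
# Placeholder registration/device ID -- must match what IoT Central expects
DEVICE_ID=my-device-01

# Generate a new key pair and a self-signed cert with CN = device ID
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout "${DEVICE_ID}.key.pem" \
  -out "${DEVICE_ID}.cert.pem" \
  -subj "/CN=${DEVICE_ID}"

# Verify the Common Name before pointing the identity service at the files
openssl x509 -in "${DEVICE_ID}.cert.pem" -noout -subject
```

After installing the new certificate and key, point `/etc/aziot/config.toml` at them and run `sudo iotedge config apply` (or `sudo aziotctl config apply` on a non-Edge device) so aziot-identityd re-provisions with the new identity instead of looping on the missing old one.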
Related
Apache web server fails to restart. The server had been running well and suddenly failed.
What could cause httpd.service to fail to start, and what is the solution?
Running apachectl configtest returns: symbol lookup error: /usr/local/apache/bin/httpd: undefined symbol: apr_crypto_init
Running systemctl status httpd.service:
httpd.service - Web server Apache
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2022-10-04 22:36:27 CST; 1min 24s ago
Process: 13030 ExecStop=/usr/local/apache/bin/apachectl graceful-stop (code=exited, status=127)
Process: 3911 ExecStart=/usr/local/apache/bin/apachectl start (code=exited, status=127)
Main PID: 851 (code=exited, status=0/SUCCESS)
Oct 04 22:36:27 hwsrv-985893.hostwindsdns.com systemd[1]: Starting Web server Apache...
Oct 04 22:36:27 hwsrv-985893.hostwindsdns.com apachectl[3911]: /usr/local/apache/bin/httpd: symbol lookup error: /usr/local/apache/bin/httpd: undefined symbol: apr_crypto_init
Oct 04 22:36:27 hwsrv-985893.hostwindsdns.com systemd[1]: httpd.service: control process exited, code=exited status=127
Oct 04 22:36:27 hwsrv-985893.hostwindsdns.com systemd[1]: Failed to start Web server Apache.
Oct 04 22:36:27 hwsrv-985893.hostwindsdns.com systemd[1]: Unit httpd.service entered failed state.
Oct 04 22:36:27 hwsrv-985893.hostwindsdns.com systemd[1]: httpd.service failed.
Running journalctl -xe:
Oct 04 22:51:54 hwsrv-985893.hostwindsdns.com kernel: net_ratelimit: 75 callbacks suppressed
Oct 04 22:51:56 hwsrv-985893.hostwindsdns.com sshd[4063]: Failed password for root from 61.177.172.114 port 36803 ssh2
Oct 04 22:51:56 hwsrv-985893.hostwindsdns.com sshd[4065]: Failed password for root from 218.92.0.195 port 33236 ssh2
Oct 04 22:51:56 hwsrv-985893.hostwindsdns.com sshd[4063]: Received disconnect from 61.177.172.114 port 36803:11: [preauth]
Oct 04 22:51:56 hwsrv-985893.hostwindsdns.com sshd[4063]: Disconnected from 61.177.172.114 port 36803 [preauth]
Oct 04 22:51:56 hwsrv-985893.hostwindsdns.com sshd[4063]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.114 user=root
Oct 04 22:51:56 hwsrv-985893.hostwindsdns.com sshd[4065]: pam_succeed_if(sshd:auth): requirement "uid >= 1000" not met by user "root"
Oct 04 22:51:59 hwsrv-985893.hostwindsdns.com sshd[4065]: Failed password for root from 218.92.0.195 port 33236 ssh2
Oct 04 22:51:59 hwsrv-985893.hostwindsdns.com sshd[4065]: Received disconnect from 218.92.0.195 port 33236:11: [preauth]
Oct 04 22:51:59 hwsrv-985893.hostwindsdns.com sshd[4065]: Disconnected from 218.92.0.195 port 33236 [preauth]
Oct 04 22:51:59 hwsrv-985893.hostwindsdns.com sshd[4065]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.195 user=root
Oct 04 22:51:59 hwsrv-985893.hostwindsdns.com kernel: net_ratelimit: 65 callbacks suppressed
Oct 04 22:52:05 hwsrv-985893.hostwindsdns.com kernel: net_ratelimit: 77 callbacks suppressed
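An `undefined symbol: apr_crypto_init` at startup usually means the httpd binary was built against a newer APR-util (one with crypto support) than the libaprutil it finds at runtime — typically after upgrading httpd without rebuilding apr-util, or with a stale library earlier on the loader path. A quick check, sketched with a small helper; the httpd path is taken from the log above, and the block only probes it if it actually exists:

```shell
# has_symbol LIB SYMBOL: does a shared object export SYMBOL?
has_symbol() { nm -D "$1" 2>/dev/null | grep -qw "$2"; }

HTTPD=/usr/local/apache/bin/httpd   # path from the log above
if [ -x "$HTTPD" ]; then
  # Which libaprutil does the dynamic loader actually pick?
  APRUTIL=$(ldd "$HTTPD" | awk '/libaprutil/ {print $3; exit}')
  echo "httpd loads: $APRUTIL"
  if ! has_symbol "$APRUTIL" apr_crypto_init; then
    echo "that libaprutil lacks apr_crypto_init:"
    echo "rebuild apr-util with --with-crypto (e.g. --with-openssl),"
    echo "or fix the loader path so httpd finds the apr-util it was built against"
  fi
fi
```

If the loaded libaprutil lacks the symbol, rebuilding apr-util with `./configure --with-crypto --with-openssl` and then rebuilding/relinking httpd against it is the usual fix.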
Unable to start Apache2 server after I modified dir.conf, even after changing it back
I modified /etc/apache2/mods-enabled/dir.conf and moved index.php to the front, before
index.html, so the file contents changed from
"DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm" to
"DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm". But even
after changing it back to the original order, apache2 still does not start; it gives the
same error. I tried restarting, and stopping and starting, but none of that works.
**Here are the details of systemctl status apache2.service**
● apache2.service - LSB: Apache2 web server
Loaded: loaded (/etc/init.d/apache2; bad; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Tue 2020-11-24 12:28:31 IST; 5min ago
Docs: man:systemd-sysv-generator(8)
Process: 20300 ExecStop=/etc/init.d/apache2 stop (code=exited, status=0/SUCCESS)
Process: 25755 ExecStart=/etc/init.d/apache2 start (code=exited, status=1/FAILURE)
Nov 24 12:28:31 localhost apache2[25755]: *
Nov 24 12:28:31 localhost apache2[25755]: * The apache2 configtest failed.
Nov 24 12:28:31 localhost apache2[25755]: Output of config test was:
Nov 24 12:28:31 localhost apache2[25755]: AH00534: apache2: Configuration error: More than one MPM loaded.
Nov 24 12:28:31 localhost apache2[25755]: Action 'configtest' failed.
Nov 24 12:28:31 localhost apache2[25755]: The Apache error log may have more information.
Nov 24 12:28:31 localhost systemd[1]: apache2.service: Control process exited, code=exited status=1
Nov 24 12:28:31 localhost systemd[1]: Failed to start LSB: Apache2 web server.
Nov 24 12:28:31 localhost systemd[1]: apache2.service: Unit entered failed state.
Nov 24 12:28:31 localhost systemd[1]: apache2.service: Failed with result 'exit-code'.
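The AH00534 error ("More than one MPM loaded") in the status output above is unrelated to dir.conf: it fires when two or more `mpm_*.load` links are enabled at once (commonly both prefork and event after installing PHP). A sketch of the check and fix, assuming the Debian/Ubuntu layout implied by the paths above; the state-changing commands are left commented since they modify the system:

```shell
MODS=/etc/apache2/mods-enabled   # Debian/Ubuntu layout
if [ -d "$MODS" ]; then
  # Two or more lines of output here reproduces AH00534
  ls "$MODS" | grep '^mpm_' || true
  # Keep exactly one MPM, e.g. prefork if you use mod_php:
  #   sudo a2dismod mpm_event mpm_worker
  #   sudo a2enmod mpm_prefork
  #   sudo apachectl configtest && sudo systemctl restart apache2
fi
```

`apachectl configtest` (or `apache2ctl configtest`) should report "Syntax OK" once only one MPM remains, after which the service starts normally.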
**Here are the details of journalctl -xe**
Nov 24 12:44:42 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:42 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
Nov 24 12:44:43 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:43 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
Nov 24 12:44:44 localhost systemd[1]: Failed to start MySQL Community Server.
-- Subject: Unit mysql.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mysql.service has failed.
--
-- The result is failed.
Nov 24 12:44:44 localhost systemd[1]: mysql.service: Unit entered failed state.
Nov 24 12:44:44 localhost systemd[1]: mysql.service: Failed with result 'exit-code'.
Nov 24 12:44:44 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:44 localhost systemd[1]: mysql.service: Service hold-off time over, scheduling restart.
Nov 24 12:44:44 localhost systemd[1]: Stopped MySQL Community Server.
-- Subject: Unit mysql.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mysql.service has finished shutting down.
Nov 24 12:44:44 localhost systemd[1]: Starting MySQL Community Server...
-- Subject: Unit mysql.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mysql.service has begun starting up.
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.545855Z 0 [Warning] Changed limits: max_open_files: 1024 (requested 5000)
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.545908Z 0 [Warning] Changed limits: table_open_cache: 431 (requested 2000)
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.705792Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --expl
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.707018Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.32-0ubuntu0.16.04.1) starting as process 538
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.709650Z 0 [ERROR] Could not open file '/var/log/mysql/error.log' for error logging: No suc
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.709678Z 0 [ERROR] Aborting
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.709703Z 0 [Note] Binlog end
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.709770Z 0 [Note] /usr/sbin/mysqld: Shutdown complete
Nov 24 12:44:44 localhost systemd[1]: mysql.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 12:44:44 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
Nov 24 12:44:44 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:45 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
Nov 24 12:44:45 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:45 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
I tried to create a Kubernetes cluster (v1.2.3) on Azure with a CoreOS cluster. I followed the documentation (http://kubernetes.io/docs/getting-started-guides/coreos/azure/).
Then I cloned the repo (git clone https://github.com/kubernetes/kubernetes) and made one minor change in the file docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml: I changed the kube version from v1.1.2 to v1.2.3.
Then I created the cluster by running ./create-kubernetes-cluster.js. The cluster was created successfully, but on the master node the API server didn't start.
I checked the log, and it showed: Cloud provider could not be initialized: unknown cloud provider "vagrant". I could not work out why this was happening.
This is my log of kube-apiserver.service:
-- Logs begin at Sat 2016-07-23 12:41:36 UTC, end at Sat 2016-07-23 12:44:19 UTC. --
Jul 23 12:43:06 anudemon-master-00 systemd[1]: Started Kubernetes API Server.
Jul 23 12:43:06 anudemon-master-00 kube-apiserver[1964]: I0723 12:43:06.299966 1964 server.go:188] Will report 172.16.0.4 as public IP address.
Jul 23 12:43:06 anudemon-master-00 kube-apiserver[1964]: F0723 12:43:06.300057 1964 server.go:211] Cloud provider could not be initialized: unknown cloud provider "vagrant"
Jul 23 12:43:06 anudemon-master-00 systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=255/n/a
Jul 23 12:43:06 anudemon-master-00 systemd[1]: kube-apiserver.service: Unit entered failed state.
Jul 23 12:43:06 anudemon-master-00 systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
Jul 23 12:43:16 anudemon-master-00 systemd[1]: kube-apiserver.service: Service hold-off time over, scheduling restart.
Jul 23 12:43:16 anudemon-master-00 systemd[1]: Stopped Kubernetes API Server.
Jul 23 12:43:16 anudemon-master-00 kube-apiserver[2015]: I0723 12:43:16.428476 2015 server.go:188] Will report 172.16.0.4 as public IP address.
Jul 23 12:43:16 anudemon-master-00 kube-apiserver[2015]: F0723 12:43:16.428534 2015 server.go:211] Cloud provider could not be initialized: unknown cloud provider "vagrant"
Jul 23 12:43:16 anudemon-master-00 systemd[1]: Started Kubernetes API Server.
Jul 23 12:43:16 anudemon-master-00 systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=255/n/a
Jul 23 12:43:16 anudemon-master-00 systemd[1]: kube-apiserver.service: Unit entered failed state.
Jul 23 12:43:16 anudemon-master-00 systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
Jul 23 12:43:26 anudemon-master-00 systemd[1]: kube-apiserver.service: Service hold-off time over, scheduling restart.
Jul 23 12:43:26 anudemon-master-00 systemd[1]: Stopped Kubernetes API Server.
Jul 23 12:43:26 anudemon-master-00 systemd[1]: Started Kubernetes API Server.
Jul 23 12:43:26 anudemon-master-00 kube-apiserver[2024]: I0723 12:43:26.756551 2024 server.go:188] Will report 172.16.0.4 as public IP address.
Jul 23 12:43:26 anudemon-master-00 kube-apiserver[2024]: F0723 12:43:26.756654 2024 server.go:211] Cloud provider could not be initialized: unknown cloud provider "vagrant"
Jul 23 12:43:26 anudemon-master-00 systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=255/n/a
Jul 23 12:43:26 anudemon-master-00 systemd[1]: kube-apiserver.service: Unit entered failed state.
Jul 23 12:43:26 anudemon-master-00 systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
Jul 23 12:43:36 anudemon-master-00 systemd[1]: kube-apiserver.service: Service hold-off time over, scheduling restart.
Jul 23 12:43:36 anudemon-master-00 systemd[1]: Stopped Kubernetes API Server.
Jul 23 12:43:36 anudemon-master-00 systemd[1]: Started Kubernetes API Server.
Jul 23 12:43:36 anudemon-master-00 kube-apiserver[2039]: I0723 12:43:36.872849 2039 server.go:188] Will report 172.16.0.4 as public IP address.
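The fatal log line means kube-apiserver is being launched with `--cloud-provider=vagrant`, which the guide's templates pass for v1.1.2 but which v1.2.x no longer recognizes (the log itself says "unknown cloud provider"). One likely fix is to clear that flag in the apiserver unit inside `kubernetes-cluster-main-nodes-template.yml` before creating the cluster. The excerpt below is illustrative — the binary path and other flags are placeholders; only the `--cloud-provider` change is the point:

```
# In the kube-apiserver unit of kubernetes-cluster-main-nodes-template.yml
# was: --cloud-provider=vagrant
ExecStart=/opt/kubernetes/server/bin/kube-apiserver \
    --cloud-provider= \
    ...
```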
Have you had a look at kubernetes-anywhere (https://github.com/kubernetes/kubernetes-anywhere)? Much work has been done there, and it now probably has all the right bits to deploy your cluster with Azure-specific cloud provider integrations.
I am trying to boot Fedora 20 with serial output, so I modified the boot command line:
menuentry 'Fedora (3.18.0) 20 (Heisenbug)' --class fedora --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-690525b7662a4bbca483ccdfdac3f6dc-advanced-d27ee4d5-522c-48e8-abc5-73b42bd81ae4' {
load_video
insmod gzio
insmod part_gpt
insmod ext2
set root='hd1,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd1,gpt2 --hint-efi=hd1,gpt2 --hint-baremetal=ahci1,gpt2 86088439-feab-4ac8-9cca-792414d9fff0
else
search --no-floppy --fs-uuid --set=root 86088439-feab-4ac8-9cca-792414d9fff0
fi
linuxefi /vmlinuz-3.18.0 root=UUID=d27ee4d5-522c-48e8-abc5-73b42bd81ae4 ro text no_console_suspend hpet=disable console=ttyS0,115200 console=tty0
initrdefi /initramfs-3.18.0.img
}
The serial output seemed to stop at
'A start job is running for Show Plymouth Boot Screen'
and did not go on. Here are the journalctl messages:
Jan 06 19:02:13 localhost.localdomain systemd[1]: Mounted /boot.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Mounting /boot/efi...
Jan 06 19:02:13 localhost.localdomain systemd[1]: Started Activation of DM RAID sets.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Starting Encrypted Volumes.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Reached target Encrypted Volumes.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Mounted /boot/efi.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Starting Local File Systems.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Reached target Local File Systems.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Started Mark the need to relabel after reboot.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Started Reconfigure the system on administrator request.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Started Relabel all filesystems, if necessary.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Starting Tell Plymouth To Write Out Runtime Data...
Jan 06 19:02:13 localhost.localdomain systemd[1]: Starting Trigger Flushing of Journal to Persistent Storage...
Jan 06 19:02:13 localhost.localdomain systemd[1]: Starting Recreate Volatile Files and Directories...
Jan 06 19:02:13 localhost.localdomain systemd[1]: Starting Security Auditing Service...
Jan 06 19:02:13 localhost.localdomain auditd[468]: Error - audit support not in kernel
Jan 06 19:02:13 localhost.localdomain auditd[468]: Cannot open netlink audit socket
Jan 06 19:02:13 localhost.localdomain auditd[468]: The audit daemon is exiting.
Jan 06 19:02:13 localhost.localdomain auditctl[469]: Error - audit support not in kernel
Jan 06 19:02:13 localhost.localdomain auditctl[469]: Error - audit support not in kernel
Jan 06 19:02:13 localhost.localdomain auditctl[469]: Cannot open netlink audit socket
Jan 06 19:02:13 localhost.localdomain systemd[1]: Started Recreate Volatile Files and Directories.
Jan 06 19:02:13 localhost.localdomain systemd[1]: auditd.service: main process exited, code=exited, status=1/FAILURE
Jan 06 19:02:13 localhost.localdomain systemd[1]: Failed to start Security Auditing Service.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Unit auditd.service entered failed state.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Starting Update UTMP about System Reboot/Shutdown...
Jan 06 19:02:13 localhost.localdomain systemd-journal[394]: Permanent journal is using 24.0M (max 601.3M, leaving 902.0M of free 2.1G, current limit 601.3M).
Jan 06 19:02:13 localhost.localdomain systemd-journal[394]: Time spent on flushing to /var is 172.987ms for 1168 entries.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Started Trigger Flushing of Journal to Persistent Storage.
Jan 06 19:02:13 localhost.localdomain systemd[1]: Started Update UTMP about System Reboot/Shutdown.
Jan 06 19:02:17 localhost.localdomain kernel: random: nonblocking pool is initialized
I finally solved this problem by appending:
console=tty console=ttyS0,115200n8
rather than:
console=tty0 console=ttyS0,115200
to the boot command line.
I don't know why, but it works, thank god.
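To make the working parameters survive kernel updates, the usual approach is to set them in `/etc/default/grub` rather than editing the generated menu entry (paths below assume a standard Fedora EFI install like the one shown above):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX="text no_console_suspend hpet=disable console=tty console=ttyS0,115200n8"
```

then regenerate the config with `sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg`.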
I connected the Tomcat connector (mod_jk) with Apache on Red Hat Linux. After a restart, Apache no longer forwards requests to JBoss. Here is my stack trace.
Please help me work out how to solve this issue.
[Mon Sep 15 01:42:38 2014] [5411:140090475009792] [info] ajp_service::jk_ajp_common.c (2673): (worker1) sending request to tomcat failed (recoverable), because of error during request sending (attempt=1)
[Mon Sep 15 01:42:38 2014] [5411:140090475009792] [info] jk_open_socket::jk_connect.c (758): connect to ::1:8009 failed (errno=111)
[Mon Sep 15 01:42:38 2014] [5411:140090475009792] [info] ajp_connect_to_endpoint::jk_ajp_common.c (1019): Failed opening socket to (::1:8009) (errno=111)
[Mon Sep 15 01:42:38 2014] [5411:140090475009792] [error] ajp_send_request::jk_ajp_common.c (1663): (worker1) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=111)
[Mon Sep 15 01:42:38 2014] [5411:140090475009792] [info] ajp_service::jk_ajp_common.c (2673): (worker1) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2)
[Mon Sep 15 01:42:38 2014] [5411:140090475009792] [error] ajp_service::jk_ajp_common.c (2693): (worker1) connecting to tomcat failed.
[Mon Sep 15 01:42:38 2014] [5411:140090475009792] [info] jk_handler::mod_jk.c (2806): Service error=-3 for worker=worker1
[Mon Sep 15 01:42:40 2014] [5622:140090483402496] [info] jk_open_socket::jk_connect.c (758): connect to ::1:8009 failed (errno=111)
[Mon Sep 15 01:42:40 2014] [5622:140090483402496] [info] ajp_connect_to_endpoint::jk_ajp_common.c (1019): Failed opening socket to (::1:8009) (errno=111)
[Mon Sep 15 01:42:40 2014] [5622:140090483402496] [error] ajp_send_request::jk_ajp_common.c (1663): (worker1) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=111)
I had installed version 1.2.40 of the Tomcat connector; later I moved back to the older version 1.2.35, and then it worked.
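errno=111 is ECONNREFUSED: mod_jk is dialing ::1:8009 (IPv6 localhost) and nothing is listening there. That also plausibly explains the version effect — mod_jk gained IPv6 support around 1.2.38, so 1.2.40 can resolve `localhost` to ::1, while the older 1.2.35 only tried IPv4, where the JBoss AJP connector was actually bound. Instead of staying on the old connector, one can pin the worker to IPv4; the keys below are standard workers.properties settings, and `worker1` is the worker name from the log:

```
# workers.properties
worker.worker1.type=ajp13
worker.worker1.host=127.0.0.1
worker.worker1.port=8009
```

It is also worth confirming that the AJP connector is enabled on the JBoss side and actually listening, e.g. with `netstat -ltn | grep 8009`.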