Oracle listener in Pacemaker cluster - Linux

I need some help. I have created a cluster with 2 nodes. I created all the resources, but the listener has errors and the Pacemaker cluster status shows that the Oracle listener has stopped.
In the web interface I have the following error messages:
Failed to start listener_or on Mon Oct 25 16:30:52 2021 on node lha1: Listener pdb1 appears to have started, but is not running properly:
and
Unable to get metadata for resource agent 'ocf:heartbeat:oralsnr' (timeout)
In pacemaker cluster status I have the following error message:
listener_or_start_0 on lha1 'error' (1): call=35, status='complete', exitreason='Listener pdb1 appears to have started, but is not running properly: ', last-rc-change='2021-10-25 16:30:52 +03:00', queued=0ms, exec=393ms
However, I can connect to the database.
Do you have any idea how I could solve this issue?

The issue was that the resource agent couldn't see the Oracle listener.
The solution was to:
Create tnsnames.ora
Write the configuration (example below)
Reload the services
After this, pcs no longer reported any errors.
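For reference, a minimal sketch of what that configuration might look like, assuming the listener serves a service named pdb1 on node lha1 and the default port 1521 (host, service name, and path are placeholders derived from the error messages above, not the actual configuration):

# $ORACLE_HOME/network/admin/tnsnames.ora (hypothetical entry)
PDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = lha1)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = pdb1))
  )

# reload the listener configuration and clear the failed action in Pacemaker
lsnrctl reload
pcs resource cleanup listener_or

After the cleanup, Pacemaker retries the start, and the monitor should come back clean once the agent can resolve the listener.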

Related

Azure-IoT - Raspberry Pi 3 Forever script exited with code: 7

I'm running a forever script that sends data from a Raspberry Pi 3 to Azure IoT Hub, using the following:
root@raspberrypi3:~# forever start /home/pi/azure/iam/ble_azure.js
After working for about 1.5 days, I stopped getting messages at Azure IoT Hub. When I checked forever list I got the following:
root@raspberrypi3:~# forever list
info: Forever processes running
data: uid command script forever pid id logfile uptime
data: [0] NWgI /usr/bin/nodejs /home/pi/azure/iam/ble_azure.js 8990 3784 /root/.forever/NWgI.log 0:21:17:38.742
When I checked the log file I get this error message:
/home/pi/azure/iam/node_modules/applicationinsights/AutoCollection/Exceptions.js:27
throw error;
^
NotConnectedError: mqtt.js returned client disconnecting error
at translateError (/home/pi/azure/iam/node_modules/azure-iot-device-mqtt/lib/mqtt-translate-error.js:25:11)
at MqttTwinReceiver._handleError (/home/pi/azure/iam/node_modules/azure-iot-device-mqtt/lib/mqtt-twin-receiver.js:201:42)
at /home/pi/azure/iam/node_modules/azure-iot-device-mqtt/lib/mqtt-twin-receiver.js:64:18
at MqttClient._checkDisconnecting (/home/pi/azure/iam/node_modules/mqtt/lib/client.js:314:7)
at MqttClient.subscribe (/home/pi/azure/iam/node_modules/mqtt/lib/client.js:423:12)
at /home/pi/azure/iam/node_modules/azure-iot-device-mqtt/lib/mqtt-twin-receiver.js:62:22
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickCallback (internal/process/next_tick.js:104:9)
error: Forever detected script exited with code: 7
error: Script restart attempt #34
state has changed poweredOn
started scanning
[IoT hub Client] Connect error: mqtt.js returned premature close error
Throughout the log file, the MQTT error keeps happening, and forever handled it successfully each time. What I can't understand is why, after 1.5 days, I get this error:
error: Forever detected script exited with code: 7
error: Script restart attempt #34
Also, why do I keep getting this MQTT error? Why does it keep disconnecting?
NotConnectedError: mqtt.js returned client disconnecting error
forever --version
v0.15.3
root@raspberrypi3:~# uname -a
Linux raspberrypi3 4.9.35-v7+ #1014 SMP Fri Jun 30 14:47:43 BST 2017 armv7l GNU/Linux
Thanks
If you're using Client.fromConnectionString to instantiate the client object, the SDK disconnects and reconnects every 45 minutes to renew the shared access signature token (this doesn't happen with AMQP, which uses a different authentication mechanism). It might be that when re-establishing the connection the client hits the "premature close" error that we've been tracking in this issue.
There are two things that could help limit potential errors linked to disconnecting/reconnecting:
Use X509 certificates instead of connection strings to authenticate.
Create a client object using Client.fromSharedAccessSignature and build a long-lived signature that does not require disconnecting and reconnecting as often.
Last but not least, the next release of the SDK (1.2.0) will include a retry/reconnect logic that is way more robust than what was there before. I'll update the issue to point to it when it is released.
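As a sketch of the second option, a long-lived token can be generated ahead of time and handed to Client.fromSharedAccessSignature, for example with the Azure CLI (this assumes the azure-iot CLI extension is installed; hub name, device id, and duration below are placeholders):

# generate a device SAS token valid for 30 days (2592000 seconds)
az iot hub generate-sas-token --hub-name myHub --device-id myRaspberryPi --duration 2592000
# pass the printed SAS string to Client.fromSharedAccessSignature(sas, Mqtt) in ble_azure.js

The longer the token lives, the less often the client has to tear down and rebuild the MQTT connection, which is where the premature-close error was showing up.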

chef-server-ctl reconfigure / Creating an admin user on Chef server

I am fairly new to Linux (and brand new to Chef) and I have run into an issue when setting up my Chef server. I am trying to create an admin user with the command
sudo chef-server-ctl user-create admin Admin Ladmin admin@example.com
examplepass -f admin.pem
but I keep getting this error:
ERROR: Connection refused connecting...
ERROR: Connection refused connecting to https://127.0.0.1/users/, retry 5/5
ERROR: Network Error: Connection refused - Connection refused
connecting to https://..., giving up
Check your knife configuration and network settings
I also noticed that when I ran chef-server-ctl I got this output:
[2016-12-21T13:24:59-05:00] ERROR: Running exception handlers Running
handlers complete
[2016-12-21T13:24:59-05:00] ERROR: Exception
handlers complete Chef Client failed. 0 resources updated in 01 seconds
[2016-12-21T13:24:59-05:00] FATAL: Stacktrace dumped to
/var/opt/opscode/local-mode-cache/chef-stacktrace.out
[2016-12-21T13:24:59-05:00] FATAL: Please provide the contents of the
stacktrace.out file if you file a bug report
[2016-12-21T13:24:59-05:00] FATAL:
Chef::Exceptions::CannotDetermineNodeName: Unable to determine node
name: configure node_name or configure the system's hostname and fqdn
I read that this error is due to a prerequisite mistake but I'm uncertain as to what it means or how to fix it. So any input would be greatly appreciated.
Your server does not have a valid FQDN (i.e., a fully qualified host name). You'll have to fix this before installing Chef server.
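A rough sketch of the fix on a systemd-based distribution (the hostname, domain, and IP below are placeholders, not values from the question):

# give the machine a fully qualified hostname
sudo hostnamectl set-hostname chef-server.example.com
# make sure it resolves locally, e.g. add a line like this to /etc/hosts:
#   10.0.0.5   chef-server.example.com   chef-server
# then reconfigure and retry the user-create command
sudo chef-server-ctl reconfigure

Once the system reports a proper FQDN, the local Chef run behind chef-server-ctl should no longer fail with CannotDetermineNodeName.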

HDInsight Emulator not running on Windows / connection exception

I'm trying to set up an HDInsight emulator on a Windows 8.1 PC following these instructions: https://azure.microsoft.com/en-us/documentation/articles/hdinsight-hadoop-emulator-get-started/
When trying to run a MapReduce job, I get a connection error.
How can I solve or further investigate this issue?
Details below.
Prerequisites:
Installed Azure Powershell and Azure SDK for VS 2015
Installed HDInsight Emulator for Azure incl. Hortonworks Data Platform
Started local hdp services (13 services running)
Connected Visual Studio to Emulator (had to follow troubleshooting point 2: replacing IP addresses in core-site.xml with '*' due to dynamic IP)
Created directories and copied text files as suggested
Problem:
When trying to run the first example, I get the following error:
16/01/11 10:36:39 INFO mapreduce.Job: Job job_1452503376359_0003 failed with state FAILED due to: Application application_1452503376359_0003 failed 2 times due to AM Container for appattempt_1452503376359_0003_000002 exited with exitCode: -1000 due to: Call From EH3HOST/192.168.56.1 to EH3HOST:8020 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
.Failing this attempt.. Failing the application.
16/01/11 10:36:39 INFO mapreduce.Job: Counters: 0
The following worked for me:
Search for XML files containing <your own host name>:8020 inside the c:\hdp\hdp-<Version Number>\etc\hadoop\ folder. (e.g. EH3HOST:8020)
You should find at least
mapred-site.xml
core-site.xml
yarn-site.xml
Replace all occurrences within these files with 127.0.0.1:8020.
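If it helps, the affected files can be located from a Windows command prompt before editing them (EH3HOST and the c:\hdp path are taken from the example above; adjust them to your own host name and install directory):

findstr /s /i /m "EH3HOST:8020" c:\hdp\*.xml

This lists every XML file under c:\hdp that still references the host name on port 8020, so nothing is missed when replacing the value with 127.0.0.1:8020.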

Neo4j local server does not start

I am running Linux in VirtualBox and am having an issue that I did not encounter on my machine with Linux as the primary OS.
When launching the neo4j service via sudo ./neo4j start in /opt/neo4j-community-2.3.1/bin, I get a timeout with the message Failed to start within 120 seconds. Neo4j Server may have failed to start, please check the logs.
My log from /opt/neo4j-community-2.3.1/data/graph.db/messages.log says:
http://pastebin.com/wUA715QQ
and data/log/console.log says:
2016-01-06 02:07:03.404+0100 INFO Successfully started database
2016-01-06 02:07:03.603+0100 INFO Successfully stopped database
2016-01-06 02:07:03.604+0100 INFO Successfully shutdown Neo4j Server
2016-01-06 02:07:03.608+0100 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.security.auth.FileUserRepository@9ab182' was successfully initialized, but failed to start. Please see attached cause exception. Starting Neo4j failed: Component 'org.neo4j.server.security.auth.FileUserRepository@9ab182' was successfully initialized, but failed to start. Please see attached cause exception.
org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.security.auth.FileUserRepository@9ab182' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.server.exception.ServerStartupErrors.translateToServerStartupError(ServerStartupErrors.java:67)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:234)
at org.neo4j.server.Bootstrapper.start(Bootstrapper.java:97)
at org.neo4j.server.CommunityBootstrapper.start(CommunityBootstrapper.java:48)
at org.neo4j.server.CommunityBootstrapper.main(CommunityBootstrapper.java:35)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.server.security.auth.FileUserRepository@9ab182' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:462)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:194)
... 3 more
Caused by: java.nio.file.AccessDeniedException: /opt/neo4j-community-2.3.1/data/dbms/auth
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.Files.readAllBytes(Files.java:3152)
at org.neo4j.server.security.auth.FileUserRepository.loadUsersFromFile(FileUserRepository.java:208)
at org.neo4j.server.security.auth.FileUserRepository.start(FileUserRepository.java:73)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
... 5 more
Any idea why the server won't start?
Check the permissions on /opt/neo4j-community-2.3.1/data/dbms/auth
See the line that says:
Caused by: java.nio.file.AccessDeniedException: /opt/neo4j-community-2.3.1/data/dbms/auth
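A minimal sketch of checking and repairing it (adjust the owner to whichever account actually runs the Neo4j process; the chown target below is an assumption):

ls -l /opt/neo4j-community-2.3.1/data/dbms/auth
# if the file is owned by another user, hand the data directory back to the account that runs Neo4j
sudo chown -R $USER:$USER /opt/neo4j-community-2.3.1/data

The AccessDeniedException in the stack trace is FileUserRepository failing to read that auth file, so fixing its ownership (or permissions with chmod) should let the server start.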

cassandra: Exception connecting to localhost/9160. Reason: Connection refused

I am unable to connect to Cassandra. By this I mean that when I try cassandra-cli or pycassaShell, I get the following error:
Exception connecting to localhost/9160. Reason: Connection refused.
I have tried using a few non-default ports, but with no joy.
I am running Cassandra version 1.0.7.
I have tried bouncing the service using restart, stop, start, and force-restart.
Run the jps command as the root user and kill CassandraDaemon if you see it. After this, start Cassandra again.
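Roughly, that looks like this (the service command assumes a packaged install; the PID is whatever jps prints on your machine):

jps | grep CassandraDaemon     # note the PID if a stale daemon is listed
kill <pid>                     # replace <pid> with the number printed above
sudo service cassandra start

A hung CassandraDaemon can hold ports and lock files without actually serving clients, so clearing it out lets a fresh instance start cleanly.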
My problem was that Cassandra was not even starting. The solution was to start Cassandra!
I had the same issue in Cassandra. After renaming var/lib/cassandra/data to var/lib/cassandra/data1, Cassandra started working fine. If the maximum memory allotted to Cassandra is full, it will throw a connection refused exception.

Resources