MongoDB stops responding on Linux

MongoDB service crashes and below is the error log:
ERROR: mmap private failed with out of memory. (64 bit build)
Assertion: 13636:file /var/lib/mongodb/_tmp_repairDatabase_4/dbname.0 open/create failed in createPrivateMap (look in log for more information)
I could not start MongoDB again until I cleared the data from the MongoDB data directory.

Which OS do you use?
Did you look at:
MongoDB: out of memory
or
http://pranavl.wordpress.com/2012/09/12/mongodb-error-mmap-failed-with-out-of-memory-64-bit-build/
I am not an expert in Mongo, but I run it without any problems.
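On a 64-bit Linux build this mmap failure usually points at a virtual-memory or overcommit limit on the mongod process rather than at physical RAM alone. As a rough first check (the idea of inspecting and lifting ulimits is an assumption on my side, not from the original post), you can look at the limits the running mongod actually has:
# Show the address-space and open-files limits of the running mongod process
grep -E 'Max address space|Max open files' /proc/$(pgrep -x mongod)/limits
# Check whether the kernel refuses to overcommit memory (2 means strict accounting)
cat /proc/sys/vm/overcommit_memory
# If the address-space limit is capped, lifting it in the shell that starts mongod is one option
ulimit -v unlimited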

Failed to destroy regular DB in YugabyteDB YSQL

[Question posted by a user on YugabyteDB Community Slack]
I found a lot of "IO error" entries in the tserver's ERROR log:
E0622 18:12:44.155575 460196 tablet_metadata.cc:400] Failed to destroy regular DB at: /home/yugabyte/data/yb-data/tserver/data/rocksdb/table-00004200000030008000000000004254/tablet-71c67832a05b4effa7462823983c7e6a: IO error (yb/rocksdb/util/env_posix.cc:317): /home/yugabyte/data/yb-data/tserver/data/rocksdb/table-00004200000030008000000000004254/tablet-71c67832a05b4effa7462823983c7e6a/LOCK: No such file or directory
E0622 18:12:44.158552 460196 tablet_metadata.cc:400] Failed to destroy regular DB at: /home/yugabyte/data/yb-data/tserver/data/rocksdb/table-000042000000300080000000000042e4/tablet-ac80c3319e784abaa9eb93d6caca6faf: IO error (yb/rocksdb/util/env_posix.cc:317): /home/yugabyte/data/yb-data/tserver/data/rocksdb/table-000042000000300080000000000042e4/tablet-ac80c3319e784abaa9eb93d6caca6faf/LOCK: No such file or directory
E0622 18:12:44.163367 460196 tablet_metadata.cc:400] Failed to destroy regular DB at: /home/yugabyte/data/yb-data/tserver/data/rocksdb/table-000042000000300080000000000042b7/tablet-c156f2fc85154a4c8d835660aa1c5244: IO error (yb/rocksdb/util/env_posix.cc:317): /home/yugabyte/data/yb-data/tserver/data/rocksdb/table-000042000000300080000000000042b7/tablet-c156f2fc85154a4c8d835660aa1c5244/LOCK: No such file or directory
E0622 18:12:44.166888 460196 tablet_metadata.cc:400] Failed to destroy regular DB at: /home/yugabyte/data/yb-data/tserver/data/rocksdb/table-0000420000003000800000000000420c/tablet-48c3441b5a6f469c99407b8cc04b7d26: IO error (yb/rocksdb/util/env_posix.cc:317): /home/yugabyte/data/yb-data/tserver/data/rocksdb/table-0000420000003000800000000000420c/tablet-48c3441b5a6f469c99407b8cc04b7d26/LOCK: No such file or directory
I have checked the disk and the network, and both work fine.
I can also create/drop/select/update/insert through psql.
Can I ignore these errors, or is there anything I can do to follow up on this issue?
If these logs were only generated right after the tserver started, they do not repeat, and everything works fine, you can ignore them.
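One quick way to confirm they are one-off startup noise is to count the messages now and again later; if the count does not grow, they only appeared at startup. The log path below assumes the default yb-data layout visible in the paths above; adjust it to your installation:
# Count the "Failed to destroy regular DB" lines in the tserver ERROR log
grep -c "Failed to destroy regular DB" /home/yugabyte/data/yb-data/tserver/logs/yb-tserver.ERROR
# Re-run the same command after some time; a stable count means the errors are not recurring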

MongoDB suddenly stopped working on production server

I have a server running Node and MongoDB. Mongo is suddenly throwing errors; I managed to get it started by rebooting the server, but the error reappears as soon as it tries to handle a request.
The error when I try to run mongo in a shell is:
MongoDB shell version v4.4.6
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect#src/mongo/shell/mongo.js:374:17
#(connect):2:6
exception: connect failed
exiting with code 1
My mongod.conf has the correct data path; I also tried setting it manually using mongod --dbpath and can confirm that it is correct.
mongod --repair doesn't work either.
Any ideas?
It turned out the server was out of storage: running df showed that there was no disk space left.
I ended up adding a volume and moving my database and files there.
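For anyone hitting the same "Connection refused", a disk-space check is cheap to do before anything else. A minimal sketch, assuming the default dbPath /var/lib/mongodb and the default log file location (both are assumptions; use whatever your mongod.conf actually points at):
# Free space per filesystem, human-readable
df -h
# Size of the MongoDB data directory
du -sh /var/lib/mongodb
# The mongod log normally records why it exited, e.g. "No space left on device"
tail -n 50 /var/log/mongodb/mongod.log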

Error while making a connection in Oracle SQL Developer on Linux

I am trying to make a new connection in Oracle SQL Developer as SYSDBA, and when I hit Test or Connect I get this error message:
Status : Failure -Test failed: IO Error: The Network Adapter could not establish the connection
The default port is 1521, but 1522 is also fairly common.
Check whether the database is up; a database that is not running will cause exactly this error.
Also check whether you are able to connect using sqlplus, for example as sketched below.
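A rough command-line version of those checks (the host name, port, and service name below are placeholders, not values from the question):
# On the database server: is the listener running and does it know about the service?
lsnrctl status
# From the client machine: is the listener port reachable at all?
nc -vz dbhost 1521
# Try the same credentials SQL Developer uses, directly with sqlplus
sqlplus sys@//dbhost:1521/ORCL as sysdba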

Neo4j refused to connect

Characteristics:
Linux
Neo4j version 3.2.1
Remote access
Installation
I installed Neo4j and gave the folder chmod 777.
I'm running it remotely on my machine and I have already enabled non-local access.
Running neo4j start I get this message:
Active database: graph.db
Directories in use:
home: /home/cloudera/Muna/apps/neo4j
config: /home/cloudera/Muna/apps/neo4j/conf
logs: /home/cloudera/Muna/apps/neo4j/logs
plugins: /home/cloudera/Muna/apps/neo4j/plugins
import: /home/cloudera/Muna/apps/neo4j/import
data: /home/cloudera/Muna/apps/neo4j/data
certificates: /home/cloudera/Muna/apps/neo4j/certificates
run: /home/cloudera/Muna/apps/neo4j/run
Starting Neo4j.
WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
Started neo4j (pid 9469). It is available at http://0.0.0.0:7474/
There may be a short delay until the server is ready.
See /home/cloudera/Muna/apps/neo4j/logs/neo4j.log for current status.
but it is not connecting in the browser.
Running neo4j console I get:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 409600000 bytes for AllocateHeap
# An error report file with more information is saved as:
# /home/cloudera/hs_err_pid18598.log
Where could the problem be coming from?
Firstly, you should set the maximum open files to 40000, which is the recommended value; then you will not get the WARNING. See the Linux configuration notes: http://neo4j.com/docs/1.6.2/configuration-linux-notes.html
Secondly, 'failed to allocate memory' means that the Java virtual machine cannot allocate the heap it was started with.
It can be a misconfiguration, or the machine physically does not have enough free memory.
Please read the memory sizing guidelines here:
https://neo4j.com/docs/operations-manual/current/performance/
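A minimal sketch of both fixes, assuming Neo4j runs as the cloudera user from the tarball install shown above and reads conf/neo4j.conf; the 512m heap is a placeholder to adapt to what the machine can actually spare:
# Raise the open-file limit persistently for the user that runs Neo4j
echo "cloudera soft nofile 40000" | sudo tee -a /etc/security/limits.conf
echo "cloudera hard nofile 40000" | sudo tee -a /etc/security/limits.conf
# In conf/neo4j.conf, size the heap to something the machine can provide:
# dbms.memory.heap.initial_size=512m
# dbms.memory.heap.max_size=512m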

How to connect to a DB2 SQL database with Node.js?

For Linux, one can use the API page found here, and connecting is straightforward. For OS X, one will run into the following error when trying to connect to an existing DB2 database:
{ [Error: [IBM][CLI Driver] SQL1042C An unexpected system error occurred. SQLSTATE=58004 ] error: '[node-odbc] SQL_ERROR', message: '[IBM][CLI Driver] SQL1042C An unexpected system error occurred. SQLSTATE=58004\n', state: 'HY000' }
Does anyone know how to fix this problem?
The latest answer on this issue gives the fix:
export DYLD_LIBRARY_PATH=/Users/.../<project_folder>/node_modules/ibm_db/installer/clidriver/lib/icc
node app.js
You have to do this every time you enter the shell, so you may as well put it in your .profile or .bash_profile, as sketched below.
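A small sketch of making that permanent (the clidriver path is the elided one from the answer above; substitute your real project folder):
# Append the export to the login-shell profile so every new shell picks it up
echo 'export DYLD_LIBRARY_PATH=/Users/.../<project_folder>/node_modules/ibm_db/installer/clidriver/lib/icc' >> ~/.bash_profile
# Reload the profile in the current shell and start the app
source ~/.bash_profile
node app.js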
