MongoDB "open files limit" error during repairDatabase - linux

I'm trying to run repairDatabase on MongoDB on Ubuntu 16.04, but it fails with the error "errno:24 Too many open files" ("code" : 16818).
I've raised "ulimit -n" up to 1024000 and restarted the server, but I'm still getting the same error.
It does not seem possible to raise it any higher, and I'm out of ideas. Please help!

We have faced a similar issue. First, please check the number of file descriptors the "mongod" process is using while the repairDatabase() command runs; you can verify this with "lsof -p <mongod_pid>" (sketched below). Also note that if you want to change the "max number of processes" limit, you need to edit the "/etc/security/limits.conf" file and add an entry for the mongod user.
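For example, a minimal check sequence, assuming a single mongod process managed by systemd under the stock service name (paths and names may differ on your box):

# count the file descriptors mongod currently holds open
lsof -p $(pidof mongod) | wc -l

# the limits that actually apply to the running process
grep "open files" /proc/$(pidof mongod)/limits

# "ulimit -n" in your shell does not affect a daemon started by systemd;
# check the service's own limit as well
systemctl show mongod | grep LimitNOFILE

On systemd distributions, entries in /etc/security/limits.conf apply to PAM login sessions, so a daemon may need the limit raised in its unit file (LimitNOFILE) and a restart before the new value takes effect.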
Edit:
Also, there is already a feature request to open one file per database; currently "wiredtiger" opens one file per collection and one per index. One should also seriously look into horizontal scaling by sharding if cost is not a serious issue.

Related

Snakemake cannot write metadata

I'm having trouble getting snakemake-minimal=7.8.5 to run on Windows 10. I can execute rules, but Snakemake terminates due to an error regarding the metadata:
Failed to set marker file for job started ([Errno 2] No such file or directory: 'C:\\test\\project\\.snakemake\\incomplete\\cnVucy9leHBlcmltZW50XzAzL2RmX2ludGVuc2l0aWVzX3Byb3RlaW5Hcm91cHNfbG9uZ18yMDE3XzIwMThfMjAxOV8yMDIwX04wNTAxNV9NMDQ1NDcvUV9FeGFjdGl2ZV9IRl9YX09yYml0cmFwX0V4YWN0aXZlX1Nlcmllc19zbG90XyM2MDcwLzE0X2V4cGVyaW1lbnRfMDNfZGF0YS5pcHluYg=='). Snakemake will work, but cannot ensure that output files are complete in case of a kill signal or power loss. Please ensure write permissions for the directory C:\test\project\.snakemake
I tried the following to troubleshoot:
changing the project folder: Documents, my user folder, and, as above, the root folder of my C: drive
checking the security settings: Controlled Folder Access (ransomware protection), see discussion -> it is deactivated
If I erase the .snakemake folder, it is re-created upon execution, so I assume I have write access. However, some security setting is disallowing the long filename with the hash.
I tried the same workflow on a different Windows 10 machine and there I don't get the error, so I assume it is some Windows issue.
Did anyone encounter the same error and found a solution?
I agree it is due to the length of the path. By default, the maximum path length (MAX_PATH) on Windows is 260 characters, and the path you pasted has a length of 262. You can edit the registry to allow longer paths (see the sketch below). Also consider opening an issue with Snakemake to improve the documentation or otherwise address this issue for Windows machines.
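As a sketch: on Windows 10 version 1607 and later, the documented way to lift the MAX_PATH limit is the LongPathsEnabled registry value; from an elevated prompt:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f

Note that the calling application must itself be long-path aware for this to take effect, so it may not be enough for every tool in a workflow.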

Generate auto increment sequence in logstash

I am pushing logs to Elasticsearch from Logstash and then need to get the logs back in the order they were written. Sorting by timestamp does not help because there can be multiple log statements with the same timestamp. I followed the solution in Include monotonically increasing value in logstash field? and it worked perfectly on my Windows system.
But when the code was moved to the Linux production environment, Logstash does not start up, failing with the error below:
reason=>"Couldn't find any filter plugin named 'seq'. Are you sure
this is correct? Trying to load the seq filter plugin resulted in this
error: no such file to load -- logstash/filters/seq", :level=>:error}
Check that the seq.rb file is in the filter folder.
Also check that the line endings of seq.rb are Unix (LF). If you transferred the file from a Windows machine to Linux, the problem might come from there.
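A quick way to check both from the shell; the plugin path below is illustrative and depends on where Logstash is installed on your system:

# confirm the plugin file sits where Logstash expects it
ls /opt/logstash/lib/logstash/filters/seq.rb

# "with CRLF line terminators" in the output means Windows line endings
file /opt/logstash/lib/logstash/filters/seq.rb

# convert CRLF to LF in place
dos2unix /opt/logstash/lib/logstash/filters/seq.rb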

Error checking database <weird chars> File does not exist

After a server crash, I get a weird problem concerning database fixup. The console constantly throws a block of "Error checking database <weird chars> File does not exist" errors. I did not find any databases with these names.
Here is an image, as I am not allowed to directly include pics:
https://pbs.twimg.com/media/CA87BQfUcAA21Cq.png:large
How does Domino know which databases to fix up?
How may I get rid of these errors?
Any ideas appreciated.
Rene
Apparently I found a clue myself:
http://www-01.ibm.com/support/docview.wss?uid=swg1LO78425
So my next steps were running fixup -j and compact at the command-line level, without the server being up (sketched below). I also deleted dbdirman.nsf, as suggested by Torsten.
I then stumbled over a corrupt database which caused fixup to crash. After moving the DB out of the way and recreating it from backup, the server could be started without an issue.
For now, the problem seems to be solved.
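For reference, a sketch of that offline sequence on a Windows server, assuming the standard program directory (on UNIX platforms the binaries are named without the leading "n"):

rem run these with the Domino server shut down
cd /d "C:\Program Files\IBM\Domino"
nfixup -j
ncompact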

message text not available when using psloglist

I'm using psloglist to analyze the saved event log from my Windows 2003 server; however, the critical information I need is not retrieved properly, and "message text not available. insertion strings" is appended instead. I've been searching for a long while and am still unable to find any solution or the root cause. Has anybody come across the same and could give some help with this? Thanks.
psloglist \\localhost -d 7 application -o "Source" | find "MessageText"
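One hedged suggestion: Windows renders an event's message text from the message DLLs registered for the event source, so reading a saved log on a machine where the originating applications are not installed typically yields "message text not available". If possible, run psloglist against the machine that generated the events (the server name below is illustrative):

psloglist \\originalserver -d 7 application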

Apache Cassandra. UnavailableException While Trying to insert a record

I am very new to Cassandra and have somehow configured it. I was following This Link.
Everything was fine, but at the end, when I try to insert a record, it gives me the following exception. I have been trying to fix this since this afternoon and have googled a lot, but could not get anywhere.
Any help on this will be greatly appreciated.
[default#DEMO] set Users[1234][name] = scott;
null
UnavailableException()
at org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:16077)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:801)
at org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:785)
at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:909)
at org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:222)
at org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:201)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:328)
[default#DEMO]
Thank you
Achyuth
This is an old one, but I want to share my experience.
I had the same issue when I was setting up the QA environment. Everything was configured fine, including cassandra-topology.properties. But nodetool ring displayed the unknown DC value for all nodes, since the default is set to UNKNOWN. That told me that cassandra-topology.properties was not right in some way. After trying several things with no luck, I decided to create my own cassandra-topology.properties file and re-type everything with vi; then it started fine.
So if you have this issue, run nodetool ring first to see if the DC is set to what it should be.
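For example (the host and datacenter names here are illustrative):

# ask any node for the ring view and check the datacenter column
nodetool -h 127.0.0.1 ring

# every node should report the DC configured in
# cassandra-topology.properties (e.g. DC1), not the UNKNOWN default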
