When I run /etc/init.d/arangodb3 upgrade, I get this error:
2018-08-05T13:05:44Z [8290] INFO file-descriptors (nofiles) hard limit is 131072, soft limit is 131072
2018-08-05T13:05:45Z [8290] ERROR duplicate entry for collection name 'test'
2018-08-05T13:05:45Z [8290] ERROR collection id 50339084 has same name as already added collection 50333892
2018-08-05T13:05:45Z [8290] ERROR error while opening database: duplicate name
2018-08-05T13:05:45Z [8290] FATAL cannot start database: duplicate name
I now want to access the database and delete the 'test' collection, but arangosh keeps telling me that the connection is refused. Does anybody have any idea how to fix this?
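For reference, once the server is reachable again, dropping the collection from arangosh would normally be a one-liner (a sketch; the immediate blocker is that arangod aborts on startup, so there is nothing for arangosh to connect to):

arangosh --server.endpoint tcp://127.0.0.1:8529
db._drop("test");   // drops the 'test' collection via the arangosh JS API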
I am getting an error while publishing results on Sonar:
Error querying database. Cause: org.apache.ibatis.executor.result.ResultMapException: Error attempting to get column 'RAWLINEHASHES' from result set. Cause: java.sql.SQLException: ORA-01555: snapshot too old: rollback segment number 2 with name "_SYSSMU2_111974964$" too small
The pipeline had been executing for 2 hours 30 minutes.
Can you please help?
The error that you are getting is ORA-01555, which is an Oracle error message. It means "snapshot too old": a long-running query needed undo data that had already been overwritten, so Oracle could no longer build a read-consistent view of the data.
Your pipeline is executing something against an Oracle database, and after it has run for a long time, Oracle raises this error.
For ways to avoid this error, see: https://blog.enmotech.com/2018/09/10/ora-01555-snapshot-old-error-ways-to-avoid-ora-01555-snapshot-too-old-error/
I'm sending a delete operation to Movilizer with only a key and the pool, but it gives me this error:
Cannot delete primary group (Do not set group in delete command to delete entire entry)
Why?
It was because I wasn't updating my ACK. This error had occurred before, and that time it was also caused by the ACK. Thanks.
I am trying to drop a database in MongoDB. I switch to the database using the command use test and then run db.dropDatabase(), but it gives me an error message:
{
    ...
    "errmsg" : "not master",
    "codeName" : "NotMaster"
    ...
}
Anyone know what the problem is?
This just means you are not connected to the primary node of the replica set. Write commands such as db.dropDatabase() will only run on the primary. Since MongoDB elects a new primary automatically during failover, the node you are connected to may have become a secondary replica.
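For reference, you can check from the mongo shell which member you are connected to before issuing writes (a minimal sketch using the standard replica set helpers):

db.isMaster().ismaster   // true only when connected to the primary
rs.status()              // the primary member is listed with stateStr: "PRIMARY"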
During the migration of Sonar (from 4.5.7 to 5.4), we faced an issue. The migration failed with this message:
Cannot insert duplicate key row in object 'dbo.file_sources' with unique index 'file_sources_file_uuid_uniq'. The duplicate key value is...
My database is MS SQL Server, configured with the FRENCH_CI_AS collation. I tried changing it to FRENCH_CS_AS, but it didn't solve the problem.
I've observed that each time we restart the migration, the number of processed files is different, BUT it always fails while processing the same file.
Any idea?
I'm receiving:
[Error: Bad BSON Document: illegal CString]
This happens when using the Node MongoDB driver while iterating over one of my collections with Cursor.each. The error seems to make some of my documents disappear: they cannot be found through the iteration, even though they are individually accessible when I look them up using Collection.findOne().
Does this mean that my data is corrupted in some way?
Thanks to @wdberkeley for all the help in the comments above, which helped me track down my problem.
It turns out that I did have a single corrupted document in my collection, inserted during an unclean shutdown of Mongo. I was unaware, though, of how that one document would affect the rest of my queries.
When you perform a collection.find() and then start iterating over the collection with the cursor, the cursor will stop and be unable to go any further if it encounters an error such as [Error: Bad BSON Document: illegal CString].
This happens with both cursor.forEach and cursor.nextObject. As a result, I was unable to reach any of the documents that came after the corrupted one in the collection, even though I could access those documents individually with collection.findOne, as in the sketch below.
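For illustration, the failing iteration looked roughly like this (a sketch assuming the 2.x Node driver API; the connection string and collection name are made up):

// Sketch: iterating with the Node MongoDB driver (2.x-era API). Once the
// cursor reaches the corrupted document, the error is surfaced and no later
// documents can be reached through this cursor.
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
  if (err) throw err;
  var cursor = db.collection('mycollection').find();
  cursor.each(function (err, doc) {
    if (err) {
      // e.g. [Error: Bad BSON Document: illegal CString]
      console.error('cursor stopped:', err);
      return db.close();
    }
    if (doc === null) return db.close(); // null signals the end of the cursor
    console.log(doc._id);
  });
});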
The only solution in this scenario was to run db.repairDatabase, which removed the corrupted document and solved the problem for me.
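For reference, the repair can be run from the mongo shell against the affected database ('mydb' is a placeholder; note that repairDatabase rewrites the data files and discards data it cannot read, so take a backup first):

use mydb
db.repairDatabase()   // drops unreadable documents while rebuilding the data files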