Getting the following error: 13279:can't find self in the replset config when configuring replica sets - linux

I am configuring a 3-node MongoDB replica set on Linux. I am using the following config
fork = true
bind_ip = 127.0.0.1
port = 27017
verbose = true
dbpath = /opt/mongoDB/data/db
logpath = /opt/mongoDB/log/mongod.log
logappend = true
journal = true
replSet = rs1
keyFile = /opt/mongoDB/mongodb/bin/conf/keyfile
to start the server. After starting the server, I connected to it using the mongo command-line tool.
When I ran rs.initiate() I got
{
"info2" : "no configuration explicitly specified -- making one",
"me" : "host-ip:27017",
"ok" : 0,
"errmsg" : "couldn't initiate : can't find self in the replset config"
}
I tried providing the config document to rs.initiate() and still got the same error.
This is what shows up in the log file.
Mon Oct 14 13:27:33.218 [rsStart] replSet info no seed hosts were specified on the --replSet command line
Mon Oct 14 13:27:34.118 [conn1] run command admin.$cmd { replSetInitiate: { _id: "rs1", members: [ { _id: 0.0, host: "host-ip:27017" } ] } }
Mon Oct 14 13:27:34.118 [conn1] replSet replSetInitiate admin command received from client
Mon Oct 14 13:27:34.118 [conn1] replSet replSetInitiate config object parses ok, 1 members specified
Mon Oct 14 13:27:34.118 [conn1] getallIPs("host-ip"): [ip address]
Mon Oct 14 13:27:34.118 BackgroundJob starting: ConnectBG
Mon Oct 14 13:27:34.118 [conn1] User Assertion: 13279:can't find self in the replset config
Mon Oct 14 13:27:34.119 [conn1] replSet replSetInitiate exception: can't find self in the replset config
Mon Oct 14 13:27:34.119 [conn1] command admin.$cmd command: { replSetInitiate: { _id: "rs1", members: [ { _id: 0.0, host: "host-ip:27017" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:230 reslen:107 1ms
What should I do to resolve this error?

I could not initiate a MongoDB replica set on a CentOS machine either, following this tutorial:
http://docs.mongodb.org/manual/tutorial/deploy-replica-set
rs.initiate()
{
"errmsg":"couldn't initiate : can't find self in the replset config on port 27011"
}
Then I passed a config document as a parameter to rs.initiate(rsconfig)
var rsconfig = {"_id":"rs1","members":[{"_id":1,"host":"127.0.0.1:27011"}]}
then added the remaining members with rs.add(..), or configured all members at once
var rsconfig = {"_id":"rs1","members":[{"_id":1,"host":"127.0.0.1:27011"},{"_id":2,"host":"127.0.0.1:27012"},{"_id":3,"host":"127.0.0.1:27013"}]}
Check the document with print(JSON.stringify(rsconfig)), then run
rs.initiate(rsconfig)
After several seconds, check
rs.status()

Don't set bind_ip to 127.0.0.1; change it to the machine's hostname or IP address (such as 192.168.0.1).
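For example, a minimal sketch of the question's config with only the bind address changed (192.168.0.1 is a placeholder; use your machine's actual address or hostname):

```ini
fork = true
# bind to the machine's address (or hostname), not 127.0.0.1
bind_ip = 192.168.0.1
port = 27017
dbpath = /opt/mongoDB/data/db
logpath = /opt/mongoDB/log/mongod.log
logappend = true
replSet = rs1
```

If mongod still can't find itself, check that the host name used in the replica set config resolves to a local interface (an /etc/hosts entry is often the fix).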


Getting hudson.remoting.ChannelClosedException error in Jenkins

I have an error while running a pipeline in Jenkins using a Kubernetes cloud.
Everything works fine until the moment of the npm install, where I get Cannot contact nodejs-rn5f3: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel#3b1e0041:nodejs-rn5f3": Remote call on nodejs-rn5f3 failed. The channel is closing down or has closed down
How can I fix this error?
Here are my logs :
[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
‘nodejs-rn5f3’ is offline
Agent nodejs-rn5f3 is provisioned from template nodejs
---
apiVersion: "v1"
kind: "Pod"
metadata:
labels:
jenkins: "slave"
jenkins/label-digest: "XXXXXXXXXXXXXXXXXXXXXXXXXX"
jenkins/label: "nodejs"
name: "nodejs-rn5f3"
spec:
containers:
- args:
- "cat"
command:
- "/bin/sh"
- "-c"
image: "node:15.5.1-alpine3.10"
imagePullPolicy: "IfNotPresent"
name: "node"
resources:
limits: {}
requests: {}
tty: true
volumeMounts:
- mountPath: "/home/jenkins/agent"
name: "workspace-volume"
readOnly: false
workingDir: "/home/jenkins/agent"
- env:
- name: "JENKINS_SECRET"
value: "********"
- name: "JENKINS_AGENT_NAME"
value: "nodejs-rn5f3"
- name: "JENKINS_WEB_SOCKET"
value: "true"
- name: "JENKINS_NAME"
value: "nodejs-rn5f3"
- name: "JENKINS_AGENT_WORKDIR"
value: "/home/jenkins/agent"
- name: "JENKINS_URL"
value: "http://XX.XX.XX.XX/"
image: "jenkins/inbound-agent:4.3-4"
name: "jnlp"
resources:
requests:
cpu: "100m"
memory: "256Mi"
volumeMounts:
- mountPath: "/home/jenkins/agent"
name: "workspace-volume"
readOnly: false
hostNetwork: false
nodeSelector:
kubernetes.io/os: "linux"
restartPolicy: "Never"
volumes:
- emptyDir:
medium: ""
name: "workspace-volume"
Running on nodejs-rn5f3 in /home/jenkins/agent/workspace/something
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] checkout
Selected Git installation does not exist. Using Default
[... cloning repository]
[Pipeline] container
[Pipeline] {
[Pipeline] sh
+ ls -la
total 1240
drwxr-xr-x 5 node node 4096 Feb 26 07:33 .
drwxr-xr-x 4 node node 4096 Feb 26 07:33 ..
-rw-r--r-- 1 node node 1689 Feb 26 07:33 package.json
and some other files and folders
[Pipeline] sh
+ cat package.json
{
[...]
"dependencies": {
[blabla....]
},
"devDependencies": {
[blabla...]
}
}
[Pipeline] sh
+ npm install
Cannot contact nodejs-rn5f3: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel#3b1e0041:nodejs-rn5f3": Remote call on nodejs-rn5f3 failed. The channel is closing down or has closed down
At this stage, here are the logs of the container jnlp in my pod nodejs-rnf5f3 :
INFO: Connected
Feb 26, 2021 8:05:53 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Read side closed
Feb 26, 2021 8:05:53 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Terminated
Feb 26, 2021 8:05:53 AM jenkins.slaves.restarter.JnlpSlaveRestarterInstaller$FindEffectiveRestarters$1 onReconnect
INFO: Restarting agent via jenkins.slaves.restarter.UnixSlaveRestarter#1a39588e
Feb 26, 2021 8:05:55 AM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: nodejs-rnf5f3
Feb 26, 2021 8:05:55 AM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Feb 26, 2021 8:05:55 AM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 4.3
Feb 26, 2021 8:05:55 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/agent/remoting as a remoting work directory
Feb 26, 2021 8:05:55 AM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting
Feb 26, 2021 8:05:55 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: WebSocket connection open
Feb 26, 2021 8:05:58 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connected
.... same as above
I don't know where this error comes from. Is it related to resource usage?
Here is the usage of my containers:
POD NAME CPU(cores) MEMORY(bytes)
jenkins-1-jenkins-0 jenkins-master 61m 674Mi
nodejs-rnf5f3 jnlp 468m 104Mi
nodejs-rnf5f3 node 1243m 1284Mi
My cluster is an e2-medium in GKE with 2 nodes.
If I had to bet (though it's just a wild guess), I'd say the pod was killed due to running out of memory (OOMKilled).
The ChannelClosedException is a symptom, not the problem.
It's kind of hard to debug because the agent pod is being deleted; you can try kubectl get events in the relevant namespace, but events are only kept for 1 hour by default.
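One thing worth checking, given the numbers above (the node container peaking around 1.2 GiB on an e2-medium), is giving the build container explicit resources in the pod template so the scheduler can place it properly and an OOM kill shows up clearly in the pod status. A sketch of the relevant container fragment; the values here are guesses to adapt, not measured requirements:

```yaml
# fragment of the pod template: explicit resources for the build container
- name: "node"
  image: "node:15.5.1-alpine3.10"
  resources:
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
```

With a memory limit set, kubectl describe pod on the agent should report the terminated container's reason as OOMKilled if memory was indeed the cause.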

mongod ERROR: child process failed, exited with error number 14

I got the following error when I tried to restart the db after the server (a Linux VM) rebooted without shutting down the db first. I saw someone post the same error over one and a half years ago, but the solution proposed there didn't apply to my situation, because mine is not a YAML config issue (the db had been running for quite a while). I also included the log at the end. Thanks for any help.
sudo mongod --fork --logpath /nas/is1/bin/mongodb/data/db/mongodb.log --dbpath /nas/is1/bin/mongodb/data/db
about to fork child process, waiting until server is ready for connections.
forked process: 20085
ERROR: child process failed, exited with error number 14
Output in the log file:
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] MongoDB starting : pid=20085 port=27017 dbpath=/data/mongodb/data/db 64-bit host=raboso
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] db version v3.2.1
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] allocator: tcmalloc
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] modules: none
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] build environment:
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] distarch: x86_64
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] target_arch: x86_64
2017-01-19T15:33:45.286-0500 I CONTROL [initandlisten] options: { processManagement: { fork: true }, storage: { dbPath: "/data/mongodb/data/db" }, systemLog: { destination: "file", path: "/data/mongodb/data/db/mongodb.log" } }
2017-01-19T15:33:45.329-0500 I - [initandlisten] Detected data files in /data/mongodb/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2017-01-19T15:33:45.346-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=112G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2017-01-19T15:33:54.009-0500 E STORAGE [initandlisten] WiredTiger (-31802) [1484858034:9041][20085:0x7f0fcf72bcc0], file:sizeStorer.wt, WT_SESSION.open_cursor: sizeStorer.wt read error: failed to read 4096 bytes at offset 49152: WT_ERROR: non-specific WiredTiger error
2017-01-19T15:33:54.011-0500 I - [initandlisten] Invariant failure: ret resulted in status UnknownError -31802: WT_ERROR: non-specific WiredTiger error at src/mongo/db/storage/wiredtiger/wiredtiger_size_storer.cpp 67
2017-01-19T15:33:54.022-0500 I CONTROL [initandlisten]
0x12cf722 0x127ac14 0x1266dad 0x1058db2 0x10425ea 0x103f540 0xf679a8 0x93bc91 0x9403b9 0x7f0fce33bb35 0x939829
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"400000","o":"ECF722"},{"b":"400000","o":"E7AC14"},{"b":"400000",
"o":"E66DAD"},{"b":"400000","o":"C58DB2"},{"b":"400000","o":"C425EA"},{"b":"400000",
"o":"C3F540"},{"b":"400000","o":"B679A8"},{"b":"400000","o":"53BC91"},{"b":"400000",
"o":"5403B9"},{"b":"7F0FCE31A000","o":"21B35"},{"b":"400000","o":"539829"}],
"processInfo":{ "mongodbVersion" : "3.2.1", "gitVersion" : "a14d55980c2cdc565d4704a7e3ad37e4e535c1b2",
"compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.10.0-514.2.2.el7.x86_64",
"version" : "#1 SMP Wed Nov 16 13:15:13 EST 2016", "machine" : "x86_64" },
"somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFEF9CD5000", "elfType" : 3 },
{ "b" : "7F0FCF31B000", "path" : "/lib64/librt.so.1", "elfType" : 3 }, { "b" : "7F0FCF117000",
"path" : "/lib64/libdl.so.2", "elfType" : 3 }, { "b" : "7F0FCEE0F000", "path" : "/lib64/libstdc++.so.6",
"elfType" : 3 }, { "b" : "7F0FCEB0D000", "path" : "/lib64/libm.so.6", "elfType" : 3 },
{ "b" : "7F0FCE8F7000", "path" : "/lib64/libgcc_s.so.1", "elfType" : 3 }, { "b" : "7F0FCE6DB000",
"path" : "/lib64/libpthread.so.0", "elfType" : 3 }, { "b" : "7F0FCE31A000", "path" : "/lib64/libc.so.6",
"elfType" : 3 }, { "b" : "7F0FCF523000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3 } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x32) [0x12cf722]
mongod(_ZN5mongo10logContextEPKc+0x134) [0x127ac14]
mongod(_ZN5mongo17invariantOKFailedEPKcRKNS_6StatusES1_j+0xAD) [0x1266dad]
mongod(_ZN5mongo20WiredTigerSizeStorerC1EP15__wt_connectionRKSs+0x222) [0x1058db2]
mongod(_ZN5mongo18WiredTigerKVEngineC2ERKSsS2_S2_mbbb+0x6DA) [0x10425ea]
mongod(+0xC3F540) [0x103f540]
mongod(_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv+0x588) [0xf679a8]
mongod(_ZN5mongo13initAndListenEi+0x321) [0x93bc91]
mongod(main+0x149) [0x9403b9]
libc.so.6(__libc_start_main+0xF5) [0x7f0fce33bb35]
mongod(+0x539829) [0x939829]
----- END BACKTRACE -----
2017-01-19T15:33:54.022-0500 I - [initandlisten]
***aborting after invariant() failure
If a system running MongoDB with the WiredTiger storage engine crashes or experiences an unclean shutdown, MongoDB may not be able to recover data files on restart if the crash/shutdown interrupted a WiredTiger checkpoint.
MongoDB cannot automatically recover data files on restart.
Sadly, there is no workaround: you can either restore data from a backup or resync from another replica set member.
WiredTiger (-31802) [1484858034:9041][20085:0x7f0fcf72bcc0], file:sizeStorer.wt, WT_SESSION.open_cursor: sizeStorer.wt read error: failed to read 4096 bytes at offset 49152: WT_ERROR: non-specific WiredTiger error
The above error suggests that your database files are corrupted. Try repairing them with:
mongod --repair --dbpath /path/to/data/db
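A typical repair sequence, if mongod normally runs as a service, looks like the sketch below. The service name and the mongod user are assumptions that vary by distribution; match them to your install, and note that repair may not recover everything, so restoring from a backup or resyncing from another replica set member (as noted earlier in this thread) is preferable when possible.

```shell
# stop the running instance first
sudo service mongod stop

# run the repair against the same dbpath the server uses
sudo mongod --repair --dbpath /path/to/data/db

# running repair as root may leave files owned by root;
# restore ownership to the service user before restarting
# (the "mongod" user/group is an assumption; check your system)
sudo chown -R mongod:mongod /path/to/data/db
sudo service mongod start
```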

MongoDB is not reporting to close connections

I am running through the getting started tutorial to evaluate MongoDB for use in production.
Tailing the server log showed the following. In the last 5 lines, mongod reports that it is closing connections, but the count of open connections is not decreasing. Is this expected behaviour?
$ tail -f /usr/local/var/log/mongodb/mongo.log
2015-11-29T15:10:13.425+0000 I JOURNAL [journal writer] Journal writer thread started
2015-11-29T15:10:13.425+0000 I CONTROL [initandlisten] MongoDB starting : pid=86710 port=27017 dbpath=/usr/local/var/mongodb 64-bit host=As-MacBook-Air.local
2015-11-29T15:10:13.425+0000 I CONTROL [initandlisten]
2015-11-29T15:10:13.425+0000 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
2015-11-29T15:10:13.425+0000 I CONTROL [initandlisten] db version v3.0.7
2015-11-29T15:10:13.425+0000 I CONTROL [initandlisten] git version: nogitversion
2015-11-29T15:10:13.425+0000 I CONTROL [initandlisten] build info: Darwin As-MacBook-Air.local 13.4.0 Darwin Kernel Version 13.4.0: Wed Mar 18 16:20:14 PDT 2015; root:xnu-2422.115.14~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
2015-11-29T15:10:13.425+0000 I CONTROL [initandlisten] allocator: system
2015-11-29T15:10:13.425+0000 I CONTROL [initandlisten] options: { config: "/usr/local/etc/mongod.conf", net: { bindIp: "127.0.0.1" }, storage: { dbPath: "/usr/local/var/mongodb" }, systemLog: { destination: "file", logAppend: true, path: "/usr/local/var/log/mongodb/mongo.log" } }
2015-11-29T15:10:13.435+0000 I NETWORK [initandlisten] waiting for connections on port 27017
2015-11-29T15:14:21.158+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52667 #1 (1 connection now open)
2015-11-29T15:14:21.169+0000 I NETWORK [conn1] end connection 127.0.0.1:52667 (0 connections now open)
2015-11-29T15:14:21.175+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52668 #2 (1 connection now open)
2015-11-29T15:14:21.176+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52669 #3 (2 connections now open)
2015-11-29T15:14:21.176+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52670 #4 (3 connections now open)
2015-11-29T15:14:21.176+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52671 #5 (4 connections now open)
2015-11-29T15:14:21.179+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52672 #6 (5 connections now open)
2015-11-29T15:14:21.183+0000 I NETWORK [conn2] end connection 127.0.0.1:52668 (4 connections now open)
2015-11-29T15:14:21.184+0000 I NETWORK [conn3] end connection 127.0.0.1:52669 (3 connections now open)
2015-11-29T15:14:21.184+0000 I NETWORK [conn4] end connection 127.0.0.1:52670 (2 connections now open)
2015-11-29T15:14:21.184+0000 I NETWORK [conn5] end connection 127.0.0.1:52671 (1 connection now open)
2015-11-29T15:14:21.184+0000 I NETWORK [conn6] end connection 127.0.0.1:52672 (0 connections now open)
2015-11-29T15:25:39.136+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52799 #7 (1 connection now open)
2015-11-29T15:25:39.137+0000 I NETWORK [conn7] end connection 127.0.0.1:52799 (0 connections now open)
2015-11-29T15:25:39.142+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52800 #8 (1 connection now open)
2015-11-29T15:25:39.142+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52801 #9 (2 connections now open)
2015-11-29T15:25:39.144+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52802 #10 (3 connections now open)
2015-11-29T15:25:39.144+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52803 #11 (4 connections now open)
2015-11-29T15:25:39.145+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52804 #12 (5 connections now open)
2015-11-29T15:25:39.153+0000 I INDEX [conn9] allocating new ns file /usr/local/var/mongodb/test.ns, filling with zeroes...
2015-11-29T15:25:39.209+0000 I STORAGE [FileAllocator] allocating new datafile /usr/local/var/mongodb/test.0, filling with zeroes...
2015-11-29T15:25:39.209+0000 I STORAGE [FileAllocator] creating directory /usr/local/var/mongodb/_tmp
2015-11-29T15:25:39.328+0000 I STORAGE [FileAllocator] done allocating datafile /usr/local/var/mongodb/test.0, size: 64MB, took 0.118 secs
2015-11-29T15:25:39.374+0000 I WRITE [conn9] insert test.restaurants query: { address: { street: "2 Avenue", zipcode: "10075", building: "1480", coord: [ -73.9557413, 40.7720266 ] }, borough: "Manhattan", cuisine: "Italian", grades: [ { date: new Date(1412121600000), grade: "A", score: 11 }, { date: new Date(1389830400000), grade: "B", score: 17 } ], name: "Vella", restaurant_id: "41704620", _id: ObjectId('565b18f333b46874536e90de') } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, MMAPV1Journal: { acquireCount: { w: 8 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 179 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } }, Metadata: { acquireCount: { W: 4 } } } 221ms
2015-11-29T15:25:39.374+0000 I COMMAND [conn9] command test.$cmd command: insert { insert: "restaurants", documents: [ { address: { street: "2 Avenue", zipcode: "10075", building: "1480", coord: [ -73.9557413, 40.7720266 ] }, borough: "Manhattan", cuisine: "Italian", grades: [ { date: new Date(1412121600000), grade: "A", score: 11 }, { date: new Date(1389830400000), grade: "B", score: 17 } ], name: "Vella", restaurant_id: "41704620", _id: ObjectId('565b18f333b46874536e90de') } ], ordered: true } keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, MMAPV1Journal: { acquireCount: { w: 8 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 179 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } }, Metadata: { acquireCount: { W: 4 } } } 221ms
2015-11-29T15:25:39.384+0000 I NETWORK [conn8] end connection 127.0.0.1:52800 (4 connections now open)
2015-11-29T15:25:39.384+0000 I NETWORK [conn9] end connection 127.0.0.1:52801 (4 connections now open)
2015-11-29T15:25:39.384+0000 I NETWORK [conn10] end connection 127.0.0.1:52802 (3 connections now open)
2015-11-29T15:25:39.385+0000 I NETWORK [conn11] end connection 127.0.0.1:52803 (2 connections now open)
2015-11-29T15:25:39.385+0000 I NETWORK [conn12] end connection 127.0.0.1:52804 (2 connections now open)
I started the MongoDB server with:
$ mongod --config /usr/local/etc/mongod.conf
Other details:
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.9.5
BuildVersion: 13F1096
$ mongod -version
db version v3.0.7
git version: nogitversion
$ cat /usr/local/etc/mongod.conf
systemLog:
destination: file
path: /usr/local/var/log/mongodb/mongo.log
logAppend: true
storage:
dbPath: /usr/local/var/mongodb
net:
bindIp: 127.0.0.1
In my experience, MongoDB drivers hold a pool of connections that they dispense on request; not every request translates into opening a connection and closing it when the response is delivered.
I've seen a large number of connections when working with the same MongoDB from multiple services, and we had to limit the number of connections our app created and reuse existing ones rather than create a new one for each request.
Hope it helps.
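The reuse-versus-reopen distinction above can be sketched generically. This is an illustration of pooling in general, not of any MongoDB driver's internals; the ConnectionPool class and its connect factory are hypothetical:

```python
import queue

class ConnectionPool:
    """Hand out idle connections when available; open new ones only when the pool is empty."""

    def __init__(self, connect):
        self._connect = connect      # factory that opens a new "connection"
        self._idle = queue.Queue()   # connections waiting to be reused
        self.opened = 0              # how many real connections were ever created

    def acquire(self):
        try:
            return self._idle.get_nowait()  # reuse an idle connection if there is one
        except queue.Empty:
            self.opened += 1
            return self._connect()          # open a new one only when needed

    def release(self, conn):
        self._idle.put(conn)                # return to the pool instead of closing

pool = ConnectionPool(connect=lambda: object())
for _ in range(100):      # 100 sequential "requests"
    c = pool.acquire()
    pool.release(c)
print(pool.opened)        # prints 1: the single connection was reused every time
```

This is why a server's open-connection count reflects the pool size, not the request rate: connections outlive individual requests.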

Not authorized for query on admin.system.namespaces on mongodb

I started a new mongo instance, created a user, and authenticated, but when I run "show collections", the shell says the user is not authorized. I do not know why.
# mongo admin
MongoDB shell version: 2.4.3
connecting to: admin
Server has startup warnings:
Thu May 23 18:23:56.735 [initandlisten]
Thu May 23 18:23:56.735 [initandlisten] ** NOTE: This is a 32 bit MongoDB binary.
Thu May 23 18:23:56.735 [initandlisten] ** 32 bit builds are limited to less than 2GB of data (or less with --journal).
Thu May 23 18:23:56.735 [initandlisten] ** See http://dochub.mongodb.org/core/32bit
Thu May 23 18:23:56.735 [initandlisten]
> db = db.getSiblingDB("admin")
admin
> db.addUser({user:"sa",pwd:"sa",roles:["userAdminAnyDatabase"]})
{
"user" : "sa",
"pwd" : "75692b1d11c072c6c79332e248c4f699",
"roles" : [
"userAdminAnyDatabase"
],
"_id" : ObjectId("519deedff788eb914bc429b5")
}
> show collections\
Thu May 23 18:26:50.103 JavaScript execution failed: SyntaxError: Unexpected token ILLEGAL
> show collections
Thu May 23 18:26:52.418 JavaScript execution failed: error: {
"$err" : "not authorized for query on admin.system.namespaces",
"code" : 16550
} at src/mongo/shell/query.js:L128
> db.auth("sa","sa")
1
> show collections
Thu May 23 18:27:22.307 JavaScript execution failed: error: {
"$err" : "not authorized for query on admin.system.namespaces",
"code" : 16550
} at src/mongo/shell/query.js:L128
I had the same problem, but I found this tutorial and it helped me.
http://www.hacksparrow.com/mongodb-add-users-and-authenticate.html
use:
db.addUser('sa', 'sa')
instead of
db.addUser({user:"sa",pwd:"sa",roles:["userAdminAnyDatabase"]})
As Robert says, admin users only have rights to administer, not to write to databases.
So you have to create a custom user for your database. There are different ways; I chose the dbOwner way.
(I use Ubuntu Server, mongo 2.6.3 and Robomongo.)
To do this, first create your admin user as mongo suggests:
Type mongo in your Linux shell, then run these commands in the mongo shell:
use admin
db.createUser({user:"mongoadmin",pwd:"chooseyouradminpassword",roles:[{role:"userAdminAnyDatabase",db:"admin"}]})
db.auth("mongoadmin","chooseyouradminpassword")
exit
Edit the mongo conf file with:
nano /etc/mongod.conf
(You can use vi if nano is not installed.)
Activate authentication by uncommenting/adding the line auth=true.
If you want to use Robomongo from another machine, change the line bind_ip=127.0.0.1 to bind_ip=0.0.0.0 (you should probably add more protection in production).
Then type in the Linux shell:
service mongod restart
mongo
And in mongo shell :
use admin
db.auth("mongoadmin","chooseyouradminpassword")
use doomnewdatabase
db.createUser({user:"doom",pwd:"chooseyourdoompassword",customData:{desc:"Just me as I am"},roles : [{role:"dbOwner",db:"doomnewdatabase"}]})
db.auth("doom","chooseyourdoompassword")
show collections
(customData is not required).
If you want to test whether it works, type this in the mongo shell:
db.products.insert( { item: "card", qty: 15 } )
show collections
db.products.find()
Good luck! I hope it helps you and others; I searched for this information for hours.
I had the same problem and this is how I solved it:
db = db.getSiblingDB('admin')
db.addUser(
{ user: "mongoadmin",
pwd: "adminpass",
roles: ['clusterAdmin', 'userAdminAnyDatabase', 'readAnyDatabase'] } )
For MongoDB version 2.6 use:
db.createUser(
{
user: "testUser",
pwd: "password",
roles: [{role: "readWrite", db:"yourdatabase"}]
})
See the docs
I solved it like so, for MongoDB 2.6+ (currently 3.x):
db.createUser(
{
user: "username",
pwd: "password",
roles: [ { role: "root", db: "admin" } ]
}
)
Note that for the role field we use root instead of userAdminAnyDatabase.
I would try granting the read role to the user; userAdminAnyDatabase only grants the ability to administer users, not to read data.
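A minimal sketch of that in the mongo shell. On MongoDB 2.6+ this can be done with grantRolesToUser (on 2.4, roles are set when the user is created); the user and database names below are the question's own:

```javascript
// MongoDB 2.6+: add the read role on admin to the existing "sa" user
db.getSiblingDB("admin").grantRolesToUser("sa", [ { role: "read", db: "admin" } ])
```

After re-authenticating, show collections should then succeed on the admin database.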

HTML::TagFilter command line vs apache

I installed HTML::TagFilter from CPAN on a Fedora machine
This snippet works just fine on the command line :
my $tf = new HTML::TagFilter;
$tf->deny_tags( { TABLE => {style => ["BORDER-BOTTOM"]} });
$tf->deny_tags( { TABLE => {prevstyle => ['any']} });
$str = $tf->filter($str);
But when the same code is run on Apache, I am getting this error:
[Fri Dec 14 16:11:48 2012] [error] Can't locate object method "new" via
package "HTML::TagFilter" at
/usr/local/lib/perl5/site_perl/5.10.0/HTML/TagFilter.pm line 320.
What could be the source of this error?
