I'm trying to set up a continuous replication job on a single-server deployment of CouchDB 2.2, from a local db to a local db, without spreading a user's password around.
I can get replication working by creating a document in the _replicator db of this form:
{
  "_id": "my-replication-job-id",
  "_rev": "1-5dd6ea5ad8479bb30f84dac02aaba59c",
  "source": "http://username:password@localhost:5984/source_db_name",
  "target": "http://username:password@localhost:5984/target_db_name",
  "continuous": true,
  "owner": "user-that-created-this-job"
}
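For context, I'm creating that document with a plain HTTP PUT along these lines (just a sketch; the admin credentials and database names are placeholders):
curl -X PUT http://admin:adminpassword@localhost:5984/_replicator/my-replication-job-id \
  -H "Content-Type: application/json" \
  -d '{"source": "http://username:password@localhost:5984/source_db_name", "target": "http://username:password@localhost:5984/target_db_name", "continuous": true}'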
Is there any way to do this without inserting the user's password in plaintext (or a base64-encoded version of it) in the source and target fields of my replication job? This is all running on the same server... both databases and the replication job itself.
I see that CouchDB 2.2 added the ability for replication jobs to maintain their authenticated state using a cookie, but my understanding is that the username and password are still necessary to initiate an authenticated session with the source and target DBs.
I should add that I have require_valid_user = true configured.
Thanks in advance.
It may be this issue:
https://github.com/apache/couchdb/issues/1550
This has now been fixed, and I believe the fix will appear in 2.2.1.
I'm getting this error when setting up the third-party Mongo integration with the Google Ops Agent:
AuthenticationFailed: SCRAM authentication failed, storedKey mismatch
Google Cloud Ops Agent Config
Mongo Integration
mongodb:
  type: mongodb
  insecure: false
  cert_file: /etc/ssl/mongodb.pem
  key_file: /etc/ssl/mongodb.pem
  ca_file: /etc/ssl/mongodb.pem
  endpoint: 127.0.0.1:443
  username: opsplease
  password: Williwork$
  collection_interval: 60s
  insecure_skip_verify: true
The full error:
"jsonPayload": {
"context": "conn4277",
"severity": "I",
"attributes": {
"authenticationDatabase": "admin",
"principalName": "user",
"mechanism": "SCRAM-SHA-256",
"client": "1.1.1.1:34542",
"result": "AuthenticationFailed: SCRAM authentication failed, storedKey mismatch"
},
"id": 20249,
"message": "Authentication failed",
"component": "ACCESS"
},
I'm using a config that works on other instances, and I've validated that the username and password work and that the user has the permissions it needs on the database.
I can't find any useful answers relating to this error, but it does seem to be fairly common outside of the Google Ops Agent. Any insight or suggestions would be appreciated.
The problem turned out to be the password itself, which contained a character that the Ops Agent could not handle without escaping.
No other system had an issue with this password, including Robo3T and the mongo shell itself connecting to the replica set. I had tried putting the password string in quotes, but that did not fix it either.
The character that caused the issue is $. It needs to be escaped in the Ops Agent config:
password: Williwork\$
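In context, the working receiver block looks like this (the same settings as the config above; only the password line changes):
mongodb:
  type: mongodb
  insecure: false
  cert_file: /etc/ssl/mongodb.pem
  key_file: /etc/ssl/mongodb.pem
  ca_file: /etc/ssl/mongodb.pem
  endpoint: 127.0.0.1:443
  username: opsplease
  password: Williwork\$   # the $ must be backslash-escaped, not percent-encoded
  collection_interval: 60s
  insecure_skip_verify: true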
This is how the connection string looks from the mongo shell (with the $ percent-encoded as %24):
mongodb://opsplease:Williwork%24@server1:443,server2:443,server3:443/?authSource=admin&replicaSet=rs0&readPreference=primary&ssl=true&tlsAllowInvalidCertificates=true&tlsAllowInvalidHostnames=true
After more testing, I found that you cannot use percent-encoding in the Ops Agent config:
password: Williwork%24
If you try this, the same error does not show up in any logs, but the metrics data never appears either, which is arguably worse than an explicit error.
I looked all over for solutions to this and nothing was relevant to the relatively new Ops Agent, so I'm marking my own question as answered for anyone who finds this in the future. I wish there were a log from the Ops Agent showing it choking on the value parsing, but /var/log/messages and /var/log/google-cloud-ops-agent/subagents/logging-module.log contain nothing helpful for this issue.
[Question posted by a user on YugabyteDB Community Slack]
I am trying to migrate an existing application from PostgreSQL to YugabyteDB using a cluster with 3 nodes.
The smoke tests run fine, but I receive the following error as soon as I use more than one concurrent user:
com.yugabyte.util.PSQLException: ERROR: Query error: Restart read required at: { read: { physical: 1648067607419747 } local_limit: { physical: 1648067607419747 } global_limit: <min> in_txn_limit: <max> serial_no: 0 }
at com.yugabyte.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
at com.yugabyte.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2365)
at com.yugabyte.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)
at com.yugabyte.jdbc.PgStatement.executeInternal(PgStatement.java:490)
at com.yugabyte.jdbc.PgStatement.execute(PgStatement.java:408)
at com.yugabyte.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:162)
at com.yugabyte.jdbc.PgPreparedStatement.execute(PgPreparedStatement.java:151)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.execute(ProxyPreparedStatement.java:44)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.execute(HikariProxyPreparedStatement.java)
at org.jooq.tools.jdbc.DefaultPreparedStatement.execute(DefaultPreparedStatement.java:219)
at org.jooq.impl.Tools.executeStatementAndGetFirstResultSet(Tools.java:4354)
at org.jooq.impl.AbstractResultQuery.execute(AbstractResultQuery.java:230)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:340)
at org.jooq.impl.AbstractResultQuery.fetch(AbstractResultQuery.java:284)
at org.jooq.impl.SelectImpl.fetch(SelectImpl.java:2843)
at org.jooq.impl.DefaultDSLContext.fetch(DefaultDSLContext.java:4749)
I am using version 11.2-YB-2.13.0.1-b0.
It is a clinical data repository implemented using Spring Boot and JOOQ. The application exposes a REST API to store and query clinical documents inside the database.
I am executing a JMeter test plan that creates and queries random documents with 10 concurrent users over a fixed period (5 minutes).
Until now we were using PostgreSQL, which has Read Committed as its default isolation level. So I assume I have to change the isolation level at the application level, as Spring defaults to the isolation level defined by the database.
Please note that the default isolation levels of PostgreSQL and YugabyteDB are not the same.
Read Committed isolation is supported only if the gflag yb_enable_read_committed_isolation is set to true. By default this gflag is false, in which case Read Committed in YugabyteDB's transactional layer falls back to the stricter Snapshot Isolation (so READ COMMITTED and READ UNCOMMITTED in YSQL also use Snapshot Isolation).
Can you change the isolation level as described above for YugabyteDB? Please refer to this doc for more details: https://docs.yugabyte.com/latest/explore/transactions/isolation-levels/
It should work much better after the change.
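A minimal sketch of enabling that flag, assuming the gflag is passed on the yb-tserver command line (adapt this to however you manage tserver flags, and restart each node afterwards):
# Enable Read Committed support on every yb-tserver in the cluster, then restart it.
yb-tserver <your usual flags> --yb_enable_read_committed_isolation=true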
We are creating a Node.js-based solution that uses MongoDB by means of Mongoose. We recently started adding support for Atlas, but we would like to be able to fall back to non-Atlas queries when Atlas is not available for a given connection.
I can't assume the software will be using MongoDB Cloud. Although I could make assumptions based on the URL, I'd still need a way to do something like:
const available: boolean = MyModel.connection.isAtlasAvailable()
The reason we want this is that if we assume Atlas is available and the client then uses a locally hosted MongoDB, the following code will break, since $search is Atlas-specific:
const results = await Person.aggregate([
  {
    $search: {
      index: 'people_text_index',
      deleted: { $ne: true },
      text: {
        query: filter.query,
        path: {
          wildcard: '*'
        }
      },
      count: {
        type: 'total'
      }
    }
  },
  {
    $addFields: {
      mongoMeta: '$$SEARCH_META'
    }
  },
  { $skip: offset },
  { $limit: limit }
]);
I suppose I could surround this with a try/catch and then fall back to a non-Atlas search, but I'd rather check something is doable before trying an operation.
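i.e. something along these lines (a rough sketch; fallbackTextSearch is just a placeholder for whatever non-Atlas query we would run instead):
let results;
try {
  // Attempt the Atlas-only $search pipeline shown above.
  results = await Person.aggregate([ /* $search pipeline from above */ ]);
} catch (err) {
  // Fall back when the server rejects the $search stage.
  results = await fallbackTextSearch(filter, offset, limit);
}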
Is there any way to check whether MongoDB Atlas is available, for a given connection? As an extension to the question, does Mongoose provide a general pattern for checking for feature availability, such as if the connection supports transactions?
I suppose I could surround this with a try/catch and then fall back to a non-Atlas search, but I'd rather check something is doable before trying an operation.
As an isAtlasCluster() check, it would be more straightforward to use a regex match to confirm the hostname in the connection URI ends in mongodb.net as used by MongoDB Atlas clusters.
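A minimal sketch of that check (the function name and regex are illustrative, not part of Mongoose):
// Returns true when the connection string points at an Atlas hostname.
// This only inspects the URI; it does not prove that Atlas Search indexes exist.
function isAtlasCluster(connectionUri) {
  return /\.mongodb\.net\b/i.test(connectionUri);
}

isAtlasCluster('mongodb+srv://cluster0.abcde.mongodb.net/mydb'); // true
isAtlasCluster('mongodb://localhost:27017/mydb');                // false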
However, it would also be much more efficient to set a feature flag based on the connection URI when your application is initialised rather than using try/catch within the model on every request (which will add the latency of at least one failed round trip for every search request).
I would also note that checking for an Atlas connection is not equivalent to checking if Atlas Search is configured for a collection. If your application requires some initial configuration of search indexes, you may want to have a more explicit feature flag configured by an app administrator or enabled as part of search index creation.
There are a few more considerations depending on the destination cluster tier:
Atlas free and shared tier clusters support fewer indexes, so a complex application may have a minimum cluster tier requirement.
Atlas Serverless instances (currently in preview) do not currently support Atlas Search (see Serverless Instance Limitations).
As an extension to the question, does Mongoose provide a general pattern for checking for feature availability, such as if the connection supports transactions?
Multi-document transactions are supported in all non-EOL versions of MongoDB server (4.2+) as long as you are connected to a replica set or sharded cluster deployment using the WiredTiger storage engine (default for new deployments since MongoDB 3.2). MongoDB 4.0 also supports multi-document transactions, but only for replica set deployments using WiredTiger.
If your application has a requirement for multi-document transaction support, I would also check that on startup or make it part of your application deployment prerequisites.
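If you do want a startup check, something along these lines should work (a sketch using the connection's underlying driver; on servers older than 4.4.2 you would use the isMaster command instead of hello):
// A 'hello' response with a setName indicates a replica set member, and
// msg === 'isdbgrid' indicates mongos; either can support transactions
// given MongoDB 4.2+ and WiredTiger as described above.
async function supportsTransactions(connection) {
  const hello = await connection.db.admin().command({ hello: 1 });
  return Boolean(hello.setName || hello.msg === 'isdbgrid');
}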
Overall this feels like complexity that should be covered by the prerequisites and setup of your application rather than by runtime checks, which may cause your application to behave unexpectedly even if the initial deployment seems fine.
Regarding Kafka-ZooKeeper security using DIGEST-MD5 authentication: I am trying to rotate/change the credentials/password in both the server (ZooKeeper) and client (Kafka) JAAS config files.
We have a cluster of 3 ZooKeeper nodes and 3 Kafka broker nodes with the JAAS configuration files below.
kafka.conf
Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="super"
  password="password";
};
zookeeper.conf
Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_super="password";
};
To rotate, we do a rolling restart of the server (ZooKeeper) instances after updating the credential (password). Then, while rolling-restarting the clients (Kafka instances) one at a time after updating the same credential/password for the super user, we notice the following in the server logs:
[2019-06-15 17:17:38,929] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-06-15 17:17:38,929] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
These INFO-level messages eventually result in an unclean shutdown and restart of the broker, which impacts writes and reads for longer than expected. I have tried commenting out requireClientAuthScheme=sasl in the ZooKeeper zoo.cfg (https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication) to allow any client to authenticate to ZooKeeper, but with no success.
As an alternative approach, I also tried updating the credential/password in the JAAS config dynamically using sasl.jaas.config, and I get the same exception documented in this JIRA (reference: https://issues.apache.org/jira/browse/KAFKA-8010).
Does anyone have any suggestions? Thanks in advance.
In Fauxton, I've set up a replication rule from a CouchDB v1.7.1 database to a new CouchDB v2.3.0 database.
The source does not have any authentication configured. The target does. I've added the username and password to the Job Configuration.
It looks like the replication got stuck somewhere in the process. 283.8 KB (433 documents) are present in the new database. The source contains about 18.7 MB (7215 docs) of data.
When restarting the database, I'm always getting the following error:
[error] 2019-02-17T17:29:45.959000Z nonode@nohost <0.602.0> --------
throw:{unauthorized,<<"unauthorized to access or create database
http://my-website.com/target-database-name/">>}:
Replication 5b4ee9ddc57bcad01e549ce43f5e31bc+continuous failed to
start "https://my-website.com/source-database-name/ "
-> "http://my-website.com/target-database-name/ " doc
<<"shards/00000000-1fffffff/_replicator.1550593615">>:<<"1e498a86ba8e3349692cc1c51a00037a">>
stack:[{couch_replicator_api_wrap,db_open,4,[{file,"src/couch_replicator_api_wrap.erl"},{line,114}]},{couch_replicator_scheduler_job,init_state,1,[{file,"src/couch_replicator_scheduler_job.erl"},{line,584}]}]
I'm not sure what is going on here. From the logs I understand there's an authorization issue, but the database is already present (hence it has already been partially replicated).
What does this error mean and how can it be resolved?
The reason for this error is that the CouchDB v2.3.0 instance was being re-initialized on reboot. It required me to fill in the cluster configuration again.
Therefore, the replication could not continue until I had the configuration re-applied.
The issue with having to re-apply the cluster configuration has been solved in another SO question.
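For reference, a quick way to see whether the node has lost its cluster configuration is the membership endpoint (a sketch; substitute your own host and admin credentials):
curl -s http://admin:password@localhost:5984/_membership
# A node that has lost its setup often reports itself as nonode@nohost here
# (as in the log above) instead of the node name you expect.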