CouchDB: after replicating to a clean version of the database, users get "not authorized" (Node.js)

After replicating the database to remove tombstones, it started throwing "you are not authorized to access this db".
(Screenshots: Restlet error, PouchDB error)
What I had to do was manually add a new user, then remove them again, and that made it happy.
I guess that means, like the indexes, something needs to be reset or rescanned. Is there any way I can do this operation through a script? My script currently runs in Node: it uses PouchDB to replicate with a filter that removes all tombstones, then shuts down the CouchDB service, swaps dbname.couch and the .dbname_design file and folder with the clean versions, and starts the service again.
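The filtered replication described above can be sketched roughly like this. It is a minimal sketch: the database URLs are placeholders, and it assumes the pouchdb package is installed.

```javascript
// Replication filter that drops tombstones (deleted docs).
function noTombstones(doc) {
  return !doc._deleted;
}

// Run the filtered replication. Requires `npm install pouchdb`;
// the URLs passed in are placeholders for your own databases.
function replicateClean(sourceUrl, targetUrl) {
  const PouchDB = require('pouchdb'); // loaded here so the filter stays usable on its own
  const source = new PouchDB(sourceUrl);
  const target = new PouchDB(targetUrl);
  return source.replicate.to(target, { filter: noTombstones });
}

// e.g. replicateClean('http://localhost:5984/dbname',
//                     'http://localhost:5984/dbname_clean');
```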
Thanks
--Edit: I have narrowed it down a bit. It looks like creating a new database adds a new _admin role, and removing that role fixes the permissions. Is there a way to prevent this role from being added, or alternatively, to remove it through a script, curl, Node, etc.? I can only find documentation on removing users, not roles.
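For removing a role through a script: CouchDB exposes a database's permissions as a `_security` object that you can GET, edit, and PUT back. A hedged sketch in Node follows; the URL, credentials, and role name are placeholders for your setup, and the built-in fetch assumes Node 18+.

```javascript
// A CouchDB _security object looks like:
// { admins: { names: [...], roles: [...] }, members: { names: [...], roles: [...] } }
// Remove one role from the admins section without touching anything else.
function stripAdminRole(security, role) {
  const admins = security.admins || { names: [], roles: [] };
  return {
    ...security,
    admins: { ...admins, roles: (admins.roles || []).filter(r => r !== role) },
  };
}

// Sketch: fetch the current _security, strip the role, write it back.
// dbUrl and auth (e.g. 'admin:password') are placeholders.
async function removeAdminRole(dbUrl, role, auth) {
  const headers = {
    Authorization: 'Basic ' + Buffer.from(auth).toString('base64'),
    'Content-Type': 'application/json',
  };
  const current = await (await fetch(`${dbUrl}/_security`, { headers })).json();
  const updated = stripAdminRole(current, role);
  await fetch(`${dbUrl}/_security`, {
    method: 'PUT',
    headers,
    body: JSON.stringify(updated),
  });
}
```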


REST for user impossible?

My ArangoDB server contains a database (mydb) with a collection (thiscol) and in the collection sits a bit of data. I can login with a user Thijs and look around using the web interface.
I cannot get user Thijs to use the REST API; I have tried setting the password, granting access, restarting, and making a new user.
The REST interface always returns 401 Unauthorized. I have obviously quadruple-checked the password.
If I use the root user it all works fine.
And I'd rather not start with Foxx services at this point, because it seems to be an enormous amount of work to implement a REST service that already exists but is not 'available'.
So is the REST API only implemented for a single hard-coded username, or am I missing something here?
REST call example: http://localhost:8529/_api/collection
(Screenshot of the failing GET, added per request)
Ah, you need to specify the database to use in the URL; without a database it defaults to _system. Try localhost:8529/_db/mydb/_api/collection, and see this documentation page for more info.
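To make the fix concrete: the URL just needs the /_db/&lt;name&gt; prefix in front of the API path. A trivial helper, where the host and database name are whatever your setup uses:

```javascript
// Build an ArangoDB REST URL that targets a specific database.
// Without the /_db/<name> prefix, requests default to the _system database.
function arangoUrl(host, db, endpoint) {
  return `${host}/_db/${encodeURIComponent(db)}${endpoint}`;
}

// e.g. arangoUrl('http://localhost:8529', 'mydb', '/_api/collection')
//   -> 'http://localhost:8529/_db/mydb/_api/collection'
```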
Glad to help get it resolved.

SailsJS: Undo API Generation & Clean, Safe Remove for API

I'm looking to remove/delete an API I generated using sails generate api
I had run sails generate api user and went on to implementing the sails-generate-auth plugin (tried sails-auth too), and the library said something already exists in my /models directory. I deleted user from models, ran the plugin's command again, and it gave the same message for my /services directory, but there are no files in that directory (except for .gitkeep)!
I would like to run a sails command, something like sails remove api user, to repeal anything configured for this endpoint -- and essentially start over (this time using sails-generate-auth).
How can I repeal an "api"?
There is no command for deleting an API. You can remove your API by deleting the controller and model files of the API.
When you run sails-generate-auth it automatically creates a new User API. It returns an error whenever something it needs to create already exists, so you'll need to delete the files it complains about as well as the files it has already created...
You also might want to check out this guide, as it might help you a lot with the process of implementing OAuth authentication in your Sails app: https://www.bearfruit.org/2014/07/21/tutorial-easy-authentication-for-sails-js-apps/

PouchDB - start local, replicate later

Does it create any major problems if we always create and populate a PouchDB database locally first, and then later sync/authenticate with a centralised CouchDB service like Cloudant?
Consider this simplified scenario:
1. You're building an accommodation booking service such as a hotel search or Airbnb.
2. You want people to be able to favourite/heart properties without having to create an account, and will use PouchDB to store this list (i.e. the idea is not to break their flow by making them create an account when it isn't strictly necessary).
3. If users wish to opt in, they can later create an account and receive credentials for a "server side" database to sync with.
At the point of step 3, once I've created a per-user CouchDB database server-side and assigned credentials to pass back to the browser for sync/replication, how can I link that up with the PouchDB data already created? i.e.
Can PouchDB somehow just reuse the existing database for this sync, therefore pushing all existing data up to the hosted CouchDB database, or..
Instead do we need to create a new PouchDB database and then copy over all docs from the existing (non-replicated) one to this new (replicated) one, and then delete the existing one?
I want to make sure I'm not painting myself into any corner I haven't thought of, before we begin the first stage, which is supporting non-replicated PouchDB.
It depends on what kind of data you want to sync from the server, but in general, you can replicate a pre-existing database into a new one with existing documents, just so long as those document IDs don't conflict.
So probably the best idea for the star-rating model would be to create documents client-side with IDs like 'star_<timestamp>' to ensure they don't conflict with anything. Then you can aggregate them with a map/reduce function.
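The suggested ID scheme might look like the sketch below. The 'star_' prefix and document shape are just illustrations of the idea, not a PouchDB API; the aggregate is shown in memory, where a map/reduce view would do the same job server-side.

```javascript
// Favourites get a 'star_' prefix so their IDs can't collide with
// docs created server-side once the user opts in to sync.
function starDocId(timestamp) {
  return `star_${timestamp}`;
}

function makeStarDoc(propertyId, now = Date.now()) {
  return { _id: starDocId(now), type: 'star', propertyId };
}

// Count the stars among a set of docs; a CouchDB map/reduce view
// keyed on the same prefix would compute this server-side.
function countStars(docs) {
  return docs.filter(d => d._id.startsWith('star_')).length;
}
```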

Chef server migration: How to update the client.pem in nodes?

I am attempting to migrate from 1 chef server to another using knife-backup. However, knife-backup does not seem to update the nodes, and all my nodes are still pointing to the old server in their respective client.rb files, and their validation.pem and client.pem are still paired with the old server.
Consequently, I updated all the client.rb and validation.pem files manually.
However, I still need to update client.pem. Obviously, one way to do so would be to bootstrap the node again against the new server, but I do not want to do that, since deploying to these nodes again could cause a loss of data.
Is there any way to update client.pem in the nodes without having to bootstrap or run chef-client? One way would be to get the private key and do it manually, but I am not sure how to do that.
Thanks!
PS: Please feel free to suggest any other ideas for migration as well!
It's the chef server "client" entities that contain the public keys matching the private key ("client.pem") files on each client server. The knife backup plugin reportedly restores chef clients. Have you tried just editing the chef server URL (in the "client.rb") and re-running chef-client?
Additional note:
You can discard the "validation.pem" files. These are used during bootstrap to create new client registrations. Additionally, your new Chef server most likely has an alternative validation key.
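Re-pointing the client.rb files across many nodes can be scripted. A minimal sketch (in Node, to match the rest of the thread) that rewrites only the chef_server_url line; this is a hypothetical helper, not a Chef tool, and the paths/URL are placeholders:

```javascript
// Rewrite the chef_server_url line of a client.rb, leaving the rest intact.
// Hypothetical helper, not part of Chef itself.
function repointClientRb(contents, newUrl) {
  return contents.replace(/^(\s*chef_server_url\s+).*$/m, `$1'${newUrl}'`);
}

// Applying it to a file would look like (paths/URL are placeholders):
// const fs = require('fs');
// fs.writeFileSync('/etc/chef/client.rb',
//   repointClientRb(fs.readFileSync('/etc/chef/client.rb', 'utf8'),
//                   'https://new-chef.example/organizations/acme'));
```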

Get Schema error when making Data sync in Azure

I finished setting up the Azure sync hub and installing the client agent and database.
Then I defined the dataset.
After that, whichever database I chose, clicking "get latest schema" produced an error.
The error is:
The get schema request is either taking a long time or has failed.
When I checked the log, it said:
Getting schema information for the database failed with the exception "There is already an open DataReader associated with this Command which must be closed first. For more information, provide tracing id ‘xxxx’ to customer support."
Any ideas?
The current release has a maximum of 500 tables per sync group; the drop-down for the tables list is restricted to the same limit.
Here's a quick workaround:
1. Script the tables you want to sync.
2. Create a new temporary database and run the script to create the tables you want to sync.
3. Register and add the new temporary database as a member of the sync group.
4. Use the new temporary database to pick the tables you want to sync.
5. Add all the other databases that you want to sync with (on-premise databases and the hub database).
6. Once the provisioning is done, remove the temporary database from the sync group.
