Migrate couchdb data from 0.10.0? - couchdb

For a Linux system, I've backed up an old database from CouchDB 0.10.0, basically a tar archive of the /var/lib/couchdb directory.
What is the procedure to convert this data to the format required by CouchDB 1.0.1? If I simply restore the files to their original location, they are not found. If I place them in /var/lib/couchdb/1.0.1, I get the following error:
{"error":"kill","reason":"{gen_server,call,\n [couch_server,\n {open,<<\"test\">>,\n [{user_ctx,\n {user_ctx,null,\n [<<\"_admin\">>],\n <<\"{couch_httpd_auth, default_authentication_handler}\">>}}]},\n infinity]}"}
(In this case the database is named test.couch, I placed test.couch in /var/lib/couchdb/1.0.1/test.couch and tried to open it from the URL: http://localhost:5984/test/)
edit: oops, the solution was pretty obvious. Copying was the right thing to do, but I forgot to change permissions.
So, to restore a backed up couchdb database, all that is needed is:
sudo chown couchdb:couchdb backup/test.couch
sudo mv backup/test.couch /var/lib/couchdb/1.0.1

You could try replication between a 0.10 and a 1.0.1 server, although I'm pretty sure that 1.0.1 can read 0.10 databases. Is there more information in couch.log?
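If you do go the replication route, a minimal sketch would be to trigger a one-off pull replication on the 1.0.1 server (the old server's hostname here is a placeholder, not from the original post):
# create_target makes the target database on the 1.0.1 server if it does not exist yet
curl -X POST http://localhost:5984/_replicate \
     -H "Content-Type: application/json" \
     -d '{"source": "http://old-host:5984/test", "target": "test", "create_target": true}'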

Related

Source class "\Magento\Framework\DB\Adapter\Pdo\Mysql" for "Magento\Framework\DB\Adapter\Pdo\MysqlFactory" generation does not exist

I'm trying to install Magento 2 on localhost, and I get an error when I try to connect the database.
The error is:
Source class "\Magento\Framework\DB\Adapter\Pdo\Mysql" for "Magento\Framework\DB\Adapter\Pdo\MysqlFactory" generation does not exist.
OS: linux mint 19.1 x64.
DB: MySQL.
I created a database (magento) and a user (magento), assigning all privileges (roughly as sketched below).
When I run:
mysql -u magento -p
and then enter the password, I can access it, so all is fine here.
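For reference, this is roughly how such a database and user are typically created (the password here is a placeholder):
mysql -u root -p -e "CREATE DATABASE magento;
  CREATE USER 'magento'@'localhost' IDENTIFIED BY 'secret';
  GRANT ALL PRIVILEGES ON magento.* TO 'magento'@'localhost';
  FLUSH PRIVILEGES;"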
The path dir is: /var/www/html/magento2
I'm following this tutorial: https://tecadmin.net/install-magento-on-ubuntu-16-04/
What should I do to solve this?
Solved.
I replaced the Magento Full Release with Sample Data with the Full Release with NO Sample Data, and that solved it.

Easiest way to copy/duplicate a RethinkDB database?

How can I easily duplicate my production database (mydb) to create a dev database (mydb-dev)?
The rethinkdb restore command seems to have no option to specify the name of the output database. It only has an option to select which database to restore from the dump. I'm using RethinkDB 1.16.3.
You can use rethinkdb export, extract the archive, and rename the directory inside before importing it:
$ rethinkdb export
$ cd rethinkdb_export_2015-04-05T13:54:43
$ mv mydb mydb_dev
$ rethinkdb import -d ./
The thinker tool by internalfx also allows you to clone a database into a differently named database, using its --targetDB= option.

puppet: Could not back up <file>: Got passed new contents for sum

I had a question I was hoping someone might have an answer to. Essentially, what I'm doing is trying to ensure I'm always using a fixed, slightly older version of phpunit, which I've placed in my module's file resources.
The manifest:
file { "/usr/bin/phpunit":
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => 0755,
  source => "puppet:///modules/php/phpunit",
}
Preparation: I download the current ('wrong') version of phpunit and place it in /usr/bin.
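For example, something along these lines (the download URL and exact steps are an assumption for illustration, not from the original post):
# fetch the current phpunit phar and install it as /usr/bin/phpunit
wget -O /tmp/phpunit https://phar.phpunit.de/phpunit.phar
chmod 0755 /tmp/phpunit
sudo mv /tmp/phpunit /usr/bin/phpunit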
So the first run puppet succeeds:
Notice: Compiled catalog for <hostname> in environment production in 3.06 seconds
Notice: /Stage[main]/Php/File[/usr/bin/phpunit]/content: content changed '{md5}9f61f732829f4f9e3d31e56613f1a93a' to '{md5}38789acbf53196e20e9b89e065cbed94'
Notice: /Stage[main]/Httpd/Service[httpd]: Triggered 'refresh' from 1 events
Notice: Finished catalog run in 15.86 seconds
Then I download the current (still 'wrong') version of phpunit and place it in /usr/bin again.
This time the puppet run fails.
Notice: Compiled catalog for <hostname> in environment production in 2.96 seconds
Error: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
Error: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
Error: /Stage[main]/Php/File[/usr/bin/phpunit]/content: change from {md5}9f61f732829f4f9e3d31e56613f1a93a to {md5}38789acbf53196e20e9b89e065cbed94 failed: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
What gives? If I delete the file ( /var/lib/puppet/clientbucket/9/f/6/1/f/7/3/2/9f61f732829f4f9e3d31e56613f1a93a/ ) from my filebucket it will work again... for the next run, but not the one after that.
What am I doing wrong?
I'd appreciate any input and thanks in advance.
Been having this error as well. I solved it with a combination of two previous answers.
Firstly I had to delete /var/lib/puppet/clientbucket on the client node by running:
sudo rm -r /var/lib/puppet/clientbucket
Just doing this will only let it run once more.
Then I had to set backup => false to stop it from recreating the clientbucket file; skipping either step failed to solve it for me. The accepted answer is incorrect in saying there is
"no solution other than upgrading".
I was able to fix the same problem by removing /var/lib/puppet/clientbucket on the client node.
This node has been running out of disk space, so puppet has probably incorrectly stored empty files there.
As a workaround, you can set backup => false in the file resource. This is a little unsafe, of course.
This has no solution other than to upgrade, since there is a bug in certain versions of Puppet where files containing both UTF-8 and binary characters are handled incorrectly, resulting in this error message.
https://tickets.puppetlabs.com/browse/PUP-1038
The ridiculously overcomplicated workaround I used is to ship a .tar file in the file resource, which notifies an exec that untars it and places the actual executable in the correct directory, making sure the timestamp of the latter is newer than that of the former.
It's far from ideal, but it works in cases like mine where upgrading Puppet to the most current version isn't an attractive option.
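A rough sketch of the command such an exec resource might run (the archive path and member name are assumptions for illustration):
# unpack the shipped archive into /usr/bin and bump the timestamp of the extracted binary
tar -xf /opt/packages/phpunit.tar -C /usr/bin phpunit && touch /usr/bin/phpunit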

How to change location of Influxdb storage folder?

I've installed the package from the official site following the instructions. By default the physical location of the database folder is /opt/influxdb/shared.
I've tried to change the relevant property in the config file, and I think I've written it properly, but after that I can't start the influxdb service.
[storage]
dir = "/media/alex/Second/InfluxStorage/data/db" //my settings
How can I change the default database directory?
EDIT: This is for InfluxDB v1.x only. It has been reported to not work for InfluxDB v2.x.
Make a new directory where you want to put your data and set the appropriate permissions, e.g.:
mkdir /new/path/to/influxdb
sudo chown influxdb:influxdb /new/path/to/influxdb
Edit the following three lines of your /etc/influxdb/influxdb.conf (/usr/local/etc/influxdb.conf on macOS) so that they point to your new location:
# under [meta]
dir = "/new/path/to/influxdb/meta"
# under [data]
dir = "/new/path/to/influxdb/data"
wal-dir = "/new/path/to/influxdb/wal"
Restart the InfluxDB daemon.
sudo service influxdb restart # Ubuntu/Debian
brew services restart influxdb # macOS/homebrew
Done!
If you want to move existing data, simply copy it (the current location can be found in influxdb.conf; /var/lib/influxdb on Ubuntu/Debian) to the new location before editing influxdb.conf, and make sure the new folder has the appropriate permissions/ownership.
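A minimal sketch of that copy, assuming the default Ubuntu/Debian layout and the new path used above:
# stop the daemon so files are not written to while copying
sudo service influxdb stop
sudo cp -R /var/lib/influxdb/. /new/path/to/influxdb/
sudo chown -R influxdb:influxdb /new/path/to/influxdb
# point the [meta], [data] and wal dirs in influxdb.conf at the new path, then:
sudo service influxdb start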
There is some information about backups/restores on the official docs, however just plain copying worked for me.
The above was tested on InfluxDB v1.2 on macOS/Ubuntu/Raspbian.
For InfluxDB 2.0:
In InfluxDB 2.0 the data directories are below ~/.influxdbv2 by default.
Actually, there are two separate data stores: bolt (various key-value configuration data) and engine (the TSM database).
From the documentation, to change the location of the bolt database:
Default: ~/.influxdbv2/influxd.bolt
influxd flag: influxd --bolt-path=~/.influxdbv2/influxd.bolt
Environment variable: export INFLUXD_BOLT_PATH=~/.influxdbv2/influxd.bolt
Configuration file: bolt-path: /users/user/.influxdbv2/influxd.bolt
From the documentation, to change the location of the engine database (a combined sketch follows the list below):
Default: ~/.influxdbv2/engine
influxd flag: influxd --engine-path=~/.influxdbv2/engine
Environment variable: export INFLUXD_ENGINE_PATH=~/.influxdbv2/engine
Configuration file: engine-path: /users/user/.influxdbv2/engine
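Putting the two together, a minimal sketch for starting influxd with both paths relocated (the target directory is a placeholder):
# relocate both the bolt (key-value) and engine (TSM) storage
export INFLUXD_BOLT_PATH=/data/influxdbv2/influxd.bolt
export INFLUXD_ENGINE_PATH=/data/influxdbv2/engine
influxd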

Authentication error from server: SASL(-13): user not found: unable to canonify

Ok, so I'm trying to configure and install svnserve on my Ubuntu server. So far so good, up to the point where I try to configure sasl (to prevent plain-text passwords).
So; I installed svnserve and made it run as a daemon (also installed it as a startup script with the command svnserve -d -r /var/svn).
My repository is in /var/svn and has following configuration (to be found in /var/svn/myrepo/conf/svnserve.conf) (I left comments out):
[general]
anon-access = none
auth-access = write
realm = my_repo
[sasl]
use-sasl = true
min-encryption = 128
max-encryption = 256
On to SASL: I created an svn.conf file in /usr/lib/sasl2/:
pwcheck_method: auxprop
auxprop_plugin: sasldb
sasldb_path: /etc/my_sasldb
mech_list: DIGEST-MD5
I created it in that folder as the article at this link suggested: http://svnbook.red-bean.com/nightly/en/svn.serverconfig.svnserve.html#svn.serverconfig.svnserve.sasl (and also because it existed and was listed as a result when I executed locate sasl).
Right after that I executed this command:
saslpasswd2 -c -f /etc/my_sasldb -u my_repo USERNAME
Which also asked me for a password twice, which I supplied. All going great.
When issuing the following command:
sasldblistusers2 -f /etc/my_sasldb
I get the - correct, as far as I can see - result:
USERNAME#my_repo: userPassword
Restarted svnserve, also restarted the whole server, and tried to connect.
This was the result from my TortoiseSVN client:
Authentication error from server: SASL(-13): user not found: unable to canonify
user and get auxprops
I have no clue at all what I'm doing wrong. I've been scouring the web for the past few hours, but haven't found anything except that I might need to move the svn.conf file to another location, for example the install location of Subversion itself. which svn returns /usr/bin/svn, so I moved svn.conf to /usr/bin (although that doesn't feel right to me).
Still doesn't work, even after a new reboot.
I'm running out of ideas. Anyone else?
EDIT
I tried changing this (according to what some other forums on the internet told me to do): in the file /etc/default/saslauthd, I changed
START=no
MECHANISMS="pam"
to
START=yes
MECHANISMS="sasldb"
(Actually I had already changed START=no to START=yes before, but I forgot to mention it). But still no luck (I did reboot the whole server).
It looks like svnserve falls back to default values for SASL...
Check that /etc/sasl2/svn.conf is readable by the svnserve process owner.
If /etc/sasl2/svn.conf is owned by user root, group root, with permissions -rw------- (600), svnserve uses the default values.
You will not be warned by any log file entry.
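A quick way to check and fix this (the user svnserve runs as may differ on your system; adjust accordingly):
ls -l /etc/sasl2/svn.conf
# make the file world-readable, or chown it to the svnserve user instead
sudo chmod 644 /etc/sasl2/svn.conf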
see section 4 of https://svn.apache.org/repos/asf/subversion/trunk/notes/sasl.txt:
This file must be named svn.conf, and must be readable by the svnserve process.
(It took me more than 3 days to understand both svnserve-sasl-ldap and this pitfall at the same time.)
I recommend installing the package cyrus-sasl2-doc and reading the section Cyrus SASL for System Administrators carefully.
I expect this is caused by the SASL API behaviour for this call:
result = sasl_server_new(SVN_RA_SVN_SASL_NAME,
                         hostname, b->realm,
                         localaddrport, remoteaddrport,
                         NULL, SASL_SUCCESS_DATA,
                         &sasl_ctx);
if (result != SASL_OK)
  {
    svn_error_t *err = svn_error_create(SVN_ERR_RA_NOT_AUTHORIZED, NULL,
                                        sasl_errstring(result, NULL, NULL));
    SVN_ERR(write_failure(conn, pool, &err));
    return svn_ra_svn__flush(conn, pool);
  }
As you can see, svnserve has no specific handling for this access failure; only OK or a generic error is expected...
I looked in /var/log/messages and found
localhost svnserve: unable to open Berkeley db /etc/sasldb2: No such file or directory
When I created the sasldb at the above location and got the permissions right, it worked. It looks like it ignores, or does not use, the configured sasldb_path.
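A sketch of what that looked like (the realm and username mirror those used earlier; the group name is an assumption, so match it to whatever user svnserve runs as):
sudo saslpasswd2 -c -f /etc/sasldb2 -u my_repo USERNAME
# make sure the svnserve process can read the database
sudo chown root:svn /etc/sasldb2
sudo chmod 640 /etc/sasldb2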
There was another suggestion that rebooting solved the problem but that option was not available to me.
