Creating an additional dbfarm in MonetDB - Linux

I have created a dbfarm in MonetDB. Then I moved the dbfarm's directory to another location, and the dbfarm stopped working. So I'm trying to fix that by deleting the old dbfarm and/or creating a new one.
The problem is that when I try to create a new dbfarm with
monetdbd start newDbfarm/
I get the error:
monetdbd: binding to stream socket port 50000 failed: Address already in use
How can I solve this?
I'm working with the latest MonetDB (the Oct2014 release).
Update
I've managed to somehow fix this by using
monetdbd set port=50001 newDbfarm/
before the
monetdbd start newDbfarm/
and then I have to always specify the port when using monetdb:
monetdb -p50001 create voc1
Is there a way to just delete the old dbfarm? or change the default so I will always go to the new dbfarm?

You can stop monetdbd from serving the old dbfarm and point it at the new one:
monetdbd stop oldDbfarm/
monetdbd start newDbfarm/
It might take a while for this to complete, especially if there are queries running.
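If the goal is to retire the old dbfarm entirely and have the new one answer on the default port 50000 (so plain monetdb works without -p), a minimal sketch; the paths are placeholders, and the rm is only safe once you are sure nothing in the old farm is needed (note that a brand-new farm is made with monetdbd create, not start):
monetdbd stop oldDbfarm/             # releases the default port 50000
rm -rf oldDbfarm/                    # a dbfarm is just a directory on disk
monetdbd set port=50000 newDbfarm/   # move the new farm back to the default port
monetdbd start newDbfarm/
monetdb create voc1                  # no -p needed any more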

Related

RHEL osbuild-composer system repository override is not working

As per the documentation (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/composing_a_customized_rhel_system_image/managing-repositories_composing-a-customized-rhel-system-image), I tried to override the system repository with a custom baseurl, but blueprint depsolve shows the error below:
# composer-cli blueprints depsolve Test1-blueprint
2022-06-09 08:06:58,841: Test1-blueprint: This system does not have any valid subscriptions. Subscribe it before specifying rhsm: true in sources.
And after the next service restart, osbuild-composer does not start:
ERROR: Info Error: Get "http://localhost/api/v1/projects/source/info/appstream": dial unix /run/weldr/api.socket: connect: connection refused
Am I missing something here?
Having all manner of issues with this myself. A trawl of my /var/log/messages file suggests that, for me at least, osbuild-composer is failing to start due to the non-existence of /etc/osbuild-composer/osbuild-composer.toml. The actual error is 'permission denied', but the file doesn't exist.
This is on RHEL 8.5, just updated to 8.6 this morning, and the problem is the same.
Edit: I've removed everything and reverted to using the lorax backend, as per chapter 2.2 in the linked doc (the same one I was following). My 'composer-cli compose types' command now at least works. Fingers crossed.
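For reference, the override flow from the linked document looks roughly like the following; a sketch assuming stock RHEL 8 paths and file names, with the mirror URL left to you:
sudo mkdir -p /etc/osbuild-composer/repositories
sudo cp /usr/share/osbuild-composer/repositories/rhel-8.json /etc/osbuild-composer/repositories/
# edit the copied file: point the "baseurl" entries at your custom mirror, and
# remove any "rhsm": true entries if the host has no Red Hat subscription attached
sudo systemctl restart osbuild-composer.socket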

Mongodb - Empty collections after EC2 crash

I had a node application running on EC2 managed by Elastic Beanstalk.
Elastic Beanstalk removed the instance and recreated a new one due to some issue.
I had Mongo store its db in a separate Elastic Block Store volume, and I re-attached the volume, mounted it, etc.
However, when I tried to start mongodb using systemctl, I got various errors.
I tried --repair and chown'ing the data directory to mongod, and it finally started, but the user db was gone: the application re-created it and all collections are empty. I do, however, see large collection-x-xxxxxxx.wt and index-x-xxxxx.wt files in the data directory.
What am I doing wrong?
Is there any way to recover the data?
PS: I did try the --repair before I saw the warning about how it would remove all corrupted data
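For context, the recovery attempts described above amount to roughly the following; the device name and paths are placeholders, not the asker's actual values:
sudo mount /dev/xvdf /data                  # mount the re-attached EBS volume
sudo chown -R mongod:mongod /data/db        # the chown mentioned above
mongod --dbpath /data/db --repair           # beware: --repair discards data it cannot salvage
sudo systemctl start mongod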
I was able to restore from the collection-x-xxxxx.wt files....
In my 'corrupted' Mongo data directory, there was a WiredTiger.wt.orig file and a WiredTiger.wt file.
If I tried to start mongod after removing the '.orig' extension, mongod would not start and showed errors like WiredTiger.wt read error: failed to read 4096 bytes at offset 73728: WT_ERROR: non-specific WiredTiger error.
Searching for how to restore a corrupted 'WiredTiger' file, I came across a Medium article about repairing MongoDB after a corrupted WiredTiger file.
Steps from the article as I followed them:
1. Stop mongod (the one with the empty collections).
2. Point mongod to a new data directory.
3. Start mongod, and create a new db and new collections with the same names as the ones from the corrupted Mongo db.
4. Insert at least one dummy record into each of these collections.
5. Find the names of the collection*.wt files in this new location using db.<insert-collectionName>.stats(); look at the uri property of the output.
6. Stop mongod.
7. Copy each collection-x-xxxxx.wt from the corrupted directory to the new directory and rename it to the corresponding file name from step 5 (see the sketch after this list).
7.1. For example, if your collection named 'testCollection' had the wt collection file name collection-1-1111111.wt in the corrupted directory and collection-6-6666666.wt in the new directory, copy 'collection-1-1111111.wt' into the new directory and rename it to collection-6-6666666.wt.
7.2. To find the wt file of, say, 'testCollection', you can open the collection-x-xxxxx.wt files in a text editor and scroll past the 'gibberish' to see your actual data matching the documents from 'testCollection' (mine is not encrypted at rest).
8. Repeat the copy-and-rename step for all the collections you have.
9. Run mongod against the new db path with the --repair switch; you can see Mongo fixing things in the logs.
10. Start the db.
Once done, verify the collections, mongodump from the new db, mongorestore to a fresh db, and recreate the indexes.
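A minimal shell sketch of steps 5 through 9, with hypothetical file names and paths standing in for the real ones:
# step 5 (mongo shell): db.testCollection.stats() reports the backing file
# in its uri output, e.g. statistics:table:collection-6-6666666
systemctl stop mongod                                        # step 6
cp /data/corrupt-db/collection-1-1111111.wt /data/new-db/    # step 7
mv /data/new-db/collection-1-1111111.wt /data/new-db/collection-6-6666666.wt
mongod --dbpath /data/new-db --repair                        # step 9
systemctl start mongod                                       # step 10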
That article was a godsend; I can't believe it worked. Thank you, Ido Ozeri from Medium.

Unable to start Kudu master

While starting kudu-master, I am getting the error below and am unable to start the Kudu cluster.
F0706 10:21:33.464331 27576 master_main.cc:71] Check failed: _s.ok() Bad status: Invalid argument: Unable to initialize catalog manager: Failed to initialize sys tables async: on-disk master list (hadoop-master:7051, slave2:7051, slave3:7051) and provided master list (:0) differ. Their symmetric difference is: :0, hadoop-master:7051, slave2:7051, slave3:7051
It is a cluster of 8 nodes, and I have provided 3 masters as given below in master.gflagfile on the master nodes.
--master_addresses=hadoop-master,slave2,slave3
TL;DR
If this is a new installation, and working under the assumption that the master IP addresses are correct, I believe the easiest solution is to (a shell sketch follows this list):
Stop the Kudu masters
Nuke the <kudu-data-dir>/master directory
Start the Kudu masters
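A minimal sketch of those three steps, assuming a packaged install with a systemd unit named kudu-master and the default data directory under /var/lib/kudu; adjust both to your layout:
sudo systemctl stop kudu-master     # on every master node
sudo rm -rf /var/lib/kudu/master    # wipes the master's on-disk state, including the stale master list
sudo systemctl start kudu-master    # on every master node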
Explanation
I believe the most common (if not the only) cause of this error (Failed to initialize sys tables async: on-disk master list (hadoop-master:7051, slave2:7051, slave3:7051) and provided master list (:0) differ.) is a Kudu master node being added incorrectly. The error suggests that kudu-master thinks it's running on a single node rather than a 3-node cluster.
Maybe you did not intend to "add a node", but that's most likely what happened. I'm saying this because I had the same problem: after some googling and debugging, I discovered that during the installation I had started kudu-master before putting the correct IP addresses in master.gflagfile, so kudu-master was spun up thinking it was running on a single node, not a 3-node cluster. Using the steps above to cleanly install kudu-master again solved my problem.
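For what it's worth, a master.gflagfile along these lines avoids the single-node bootstrap; 7051 is Kudu's default master port, and the fs_* paths here are assumptions:
--master_addresses=hadoop-master:7051,slave2:7051,slave3:7051
--fs_wal_dir=/var/lib/kudu/master
--fs_data_dirs=/var/lib/kudu/master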

Using memcached failover servers in nodejs app

I'm trying to set up a robust memcached configuration for a nodejs app with the node-memcached driver, but it does not seem to use the specified failover servers when one server dies.
My local experiment goes as follows:
In the shell:
memcached -p 11212
In node:
var MC = require('memcached');
var c = new MC('localhost:11211',   // this process does not exist
    { failOverServers: ['localhost:11212'] });
c.get('foo', console.log); // this will eventually time out
c.get('foo', console.log); // repeat 5 or 6 times to exceed the retries number
// wait until all the connection errors appear in the console
// at this point, the failover server should be in use
c.get('foo', console.log); // this still times out :(
Any ideas of what we might be doing wrong?
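As a sanity check before running the experiment, it is worth confirming that the failover instance itself answers; using nc here is just one convenient probe, not part of node-memcached:
echo stats | nc localhost 11212    # should print a list of STAT lines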
It seems that the failover feature is somewhat buggy in node-memcached.
To enable failover you must set the remove option:
var c = new MC('localhost:11211',   // this process does not exist
    { failOverServers: ['localhost:11212'],
      remove: true });
Unfortunately, this is not going to work because of the following error:
[depricated] HashRing#replaceServer is removed.
[depricated] the API has no replacement
That is, when trying to replace a dead server with a replacement from the failover list, node-memcached triggers a deprecation error in the HashRing library (which, in turn, is maintained by the same author as node-memcached). IMHO, feel free to open a bug :-)
This happens when your Node.js server is not getting any session id from memcached.
Please check that the memcached session settings in your php.ini file are configured properly:
session.save_handler = memcache
session.save_path = 'tcp://localhost:11212'

Server Error in '/DotNetNuke_Community' Application

I'm getting the following error when attempting to run DotNetNuke 7.1 from IIS.
Object reference not set to an instance of an object.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.NullReferenceException: Object reference not set to an instance of an object.
Source Error:
Line 572: //first call GetProviderPath - this insures that the Database is Initialised correctly
Line 573: //and also generates the appropriate error message if it cannot be initialised correctly
Line 574: string strMessage = DataProvider.Instance().GetProviderPath();
Line 575: //get current database version from DB
Line 576: if (!strMessage.StartsWith("ERROR:"))
I've tried running it from Visual Studio 2012 after downloading and extracting the source code to a folder, but I get the same error (also, VS loads about 13 instances of its built-in webserver, which can't be correct).
Clearly, there is something wrong with the database. From what I've read in the past, there should have been a start-up configuration page (for configuring settings the first time you run the project).
I did look at the local version of IIS (running on Windows 8) and it created the site fine there; however, for some reason the internal webserver attempts to run instead (and the option to run on an external IIS is greyed out).
Anyone run into this problem with DNN Community edition? I've tried running as admin and setting permissions with no luck at all.
Any way to fix this?
OK, the key is to delete the Database.mdf file completely.
Then create a new, empty database of your choice in SQL Server (2008 or greater).
Create a new user account with db_owner access (as it must be able to create tables, etc.).
Change the connection strings in release.config and development.config to connect to that database (see the example after these steps).
Delete the web.config file.
Rename either config file to "web.config".
Set the default project to the web project in VS.
Set the default page to Default.aspx.
Run.
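For illustration, the connection-string entry would look something like this; SiteSqlServer is the key name DNN expects, while the server, database, and credentials are placeholders:
<add name="SiteSqlServer"
     connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=DNN_Dev;User ID=dnnuser;Password=your-password"
     providerName="System.Data.SqlClient" />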
I made the erroneous assumption that running the app would rename the config file for me (not sure why I assumed that).
SOLVED!
