We are currently using Node.js with Knex to connect to MySQL.
We have plans to migrate our database to Cloud Spanner.
So I wanted to know whether Knex.js has support for Cloud Spanner.
I did not see any related articles on the official website (http://knexjs.org/).
If not, is there any ORM that supports both MySQL and Cloud Spanner and would require minimal changes coming from Knex.js?
We continued using Knex.js for our Spanner operations, and it is working fine so far. We build the queries with Knex, convert them to raw SQL using querybuilder.toSQL(), and bind the parameters ourselves.
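For illustration, here is a minimal sketch of that flow, assuming the @google-cloud/spanner client; the table, instance, and database names are hypothetical, and the rewrite of mysql-style "?" placeholders into Spanner's named parameters is one way to do the binding, not the only one:

```js
const { Spanner } = require('@google-cloud/spanner');
const knex = require('knex')({ client: 'mysql' }); // used only to build SQL

const spanner = new Spanner();
const database = spanner
  .instance('my-instance')   // assumption: your instance ID
  .database('my-database');  // assumption: your database ID

async function getActiveUsers() {
  // Build the query with knex, but do not execute it through knex
  const query = knex('users').select('id', 'name').where('active', true);

  // toSQL().toNative() yields the raw SQL plus its bindings; Spanner
  // expects named parameters (@p0, @p1, ...), so we substitute them
  // for the mysql-style "?" placeholders.
  const { sql, bindings } = query.toSQL().toNative();
  const params = {};
  let i = 0;
  const spannerSql = sql.replace(/\?/g, () => {
    const name = `p${i}`;
    params[name] = bindings[i++];
    return `@${name}`;
  });

  const [rows] = await database.run({ sql: spannerSql, params });
  return rows;
}
```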
The Google public docs list the client libraries that can be used with Google Cloud Spanner. Node.js is supported, so I believe Knex.js should also work. A recommendation is to modify your code so that Knex.js outputs the generated SQL, to help with debugging in case certain statements don't work.
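One lightweight way to do that, assuming you execute statements through a knex instance, is knex's 'query' event; if you only use knex to build statements, you can log the output of toSQL() instead:

```js
// Log every statement knex generates before it runs
knex.on('query', (data) => {
  console.log('SQL:', data.sql, 'bindings:', data.bindings);
});

// Or, when only building queries, inspect the generated SQL directly
console.log(knex('users').where('id', 1).toSQL().sql);
```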
Related
I'm creating a database visualizer in a React/Node app to connect to different databases. I'm able to connect to all the standard databases: MySQL, MSSQL, Postgres, Cassandra, etc. I'm looking for a Node.js connector for SnappyDB, which I couldn't find on Google, although one is available for Java (JDBC). Any suggestions or alternative approaches are appreciated.
I am currently migrating data from AWS Redshift to Oracle ADW. I use Postgres to create a mock database and run integration tests to simulate how my queries would run in a production environment. Postgres is a good candidate for mocking a Redshift database because the two are similar, but that is not the case for Oracle ADW. I'm wondering if anyone has suggestions on how I could create a mock database with the same syntax constraints as Oracle ADW.
I already know how to create mock connections to write unit tests; however, these integration tests help us validate our pipelines end to end.
I would recommend using the Oracle XE database. It is free to use, it is "real" Oracle, and it supports most basic and several advanced features, with some resource limitations. It can be deployed in a variety of ways; I'm not sure what would work best for you, but you can check it out here: https://www.oracle.com/database/technologies/appdev/xe.html
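As a rough sketch, your integration tests could point at a local XE instance through the node-oracledb driver; the schema name is an assumption, and XEPDB1 is the default pluggable database in recent XE releases:

```js
const oracledb = require('oracledb');

async function runSmokeTest() {
  const connection = await oracledb.getConnection({
    user: 'test_user',                          // assumption: a test schema you created
    password: process.env.ORACLE_TEST_PASSWORD, // assumption: set in your CI env
    connectString: 'localhost:1521/XEPDB1',
  });
  try {
    // Exercise Oracle-specific syntax that Postgres would not catch
    const result = await connection.execute('SELECT SYSDATE FROM dual');
    console.log(result.rows);
  } finally {
    await connection.close();
  }
}
```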
We are using a Node.js framework. I need to mock a DynamoDB insert function. I am not sure how to set this up, and all the examples I've Googled are for queries. Can anyone help me out? Thank you.
You might need something similar to https://www.npmjs.com/package/dynamoose to connect Node.js and DynamoDB. It is a library similar to mongoose, which is used for Node.js and MongoDB connections. It not only handles the connection, it is also an object modeling tool for Amazon's DynamoDB.
Look here for Put and BatchWriteItem operation samples from AWS. The Node.js samples are in the midst of moving from v2 of the SDK to the newer v3 AWS JS SDK.
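For reference, here is a minimal sketch of an insert with the v3 SDK that is easy to mock in a test; the table name is hypothetical, and the commented jest lines show one way to stub the client so nothing hits AWS:

```js
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, PutCommand } = require('@aws-sdk/lib-dynamodb');

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// The insert function under test
async function insertUser(user) {
  return client.send(new PutCommand({ TableName: 'Users', Item: user }));
}

// In a jest test, stub the send method so the insert never reaches AWS:
//   jest.spyOn(client, 'send').mockResolvedValue({});
//   await insertUser({ id: '1', name: 'Ada' });
//   expect(client.send).toHaveBeenCalledTimes(1);
```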
How do you perform queries without specifying the shard key in the MongoDB API, and how do you query across partitions?
In the SQL API, the latter is enabled by setting EnableCrossPartitionQuery to true on the request, but I'm not able to find anything like that for the MongoDB API. Queries that worked on an unsharded collection now fail (queries that specify the shard key work as expected).
The queries fail regardless of whether I use the AsQueryable extension syntax or the aggregation framework.
As far as I know, there is no property similar to EnableCrossPartitionQuery in the Cosmos DB Mongo API. In fact, Cosmos DB is an independent server implementation that does not directly align with MongoDB server versions and features.
Cosmos DB supports a subset of the MongoDB API and translates requests into the Cosmos DB SQL equivalent. It has some different behaviours and results, particularly in its implementation of partitioning compared to MongoDB's sharding, but the onus is on Cosmos DB to improve its emulation of MongoDB.
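In the meantime, the practical workaround is to include the shard (partition) key in every filter so each query targets a single partition; a minimal sketch with the Node.js mongodb driver for illustration (the collection and key names are made up):

```js
const { MongoClient } = require('mongodb');

// Assumes 'region' is the collection's shard key; adding it to the
// filter routes the query to a single partition, which Cosmos DB accepts.
async function findActiveUsers(connectionString) {
  const client = await MongoClient.connect(connectionString);
  const users = client.db('mydb').collection('users');
  return users.find({ region: 'emea', status: 'active' }).toArray();
}
```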
You could also add feedback here to get official assistance, or consider using MongoDB Atlas on Azure if you'd like full MongoDB feature support.
Hope that helps.
This was confirmed as a bug by the Product Group team! It will be fixed in the first two weeks of September, in case anyone runs into the same problem in the meantime.
I'm looking into using the gcloud Node API to access the Datastore API, but I was curious whether it supports query caching in a similar manner to ndb. If not, what's the best way to make sure repeated queries are cached?
As far as I know, gcloud-node isn't planning to become a full-fledged ORM (like ndb is for Python). Also, as Patrick Costello noted in the comments above, ndb doesn't cache query results, but individual entities instead.
I think if you want caching of query results (or individual entities), you'd have to cache them manually by running your own Memcached server (http://memcached.org/) and interacting with it using the memcached package (https://www.npmjs.com/package/memcached)
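A minimal sketch of that approach, assuming a local Memcached server and the @google-cloud/datastore package; the kind, filter, and cache-key scheme are all hypothetical:

```js
const Memcached = require('memcached');
const { Datastore } = require('@google-cloud/datastore');

const memcached = new Memcached('localhost:11211');
const datastore = new Datastore();

// Promise wrappers around the callback-based memcached client
function cacheGet(key) {
  return new Promise((resolve, reject) =>
    memcached.get(key, (err, data) => (err ? reject(err) : resolve(data)))
  );
}

function cacheSet(key, value, ttlSeconds) {
  return new Promise((resolve, reject) =>
    memcached.set(key, value, ttlSeconds, (err) => (err ? reject(err) : resolve()))
  );
}

async function getActiveUsersCached() {
  const cacheKey = 'query:users:active'; // hypothetical key scheme
  const cached = await cacheGet(cacheKey);
  if (cached) return cached;

  const query = datastore.createQuery('User').filter('active', '=', true);
  const [entities] = await datastore.runQuery(query);

  await cacheSet(cacheKey, entities, 60); // cache results for 60 seconds
  return entities;
}
```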
I ended up using nsql-cache-datastore, which is integrated into gstore-node. Guide: https://medium.com/google-cloud/how-to-add-a-cache-layer-to-the-google-datastore-in-node-js-ffb402cd0e1c
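For context, the setup from that guide looks roughly like this (a sketch assuming the nsql-cache and nsql-cache-datastore packages; the kind and ID are made up):

```js
const { Datastore } = require('@google-cloud/datastore');
const NsqlCache = require('nsql-cache');
const dsAdapter = require('nsql-cache-datastore');

const datastore = new Datastore();
const cache = new NsqlCache({ db: dsAdapter(datastore) });

// Reads go through the cache; a miss falls back to Datastore
// and populates the cache automatically.
async function getUser(id) {
  const key = datastore.key(['User', id]);
  return cache.keys.read(key);
}
```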
Looks like I can use the App Engine memcache service, accessible through this Node library: https://github.com/GoogleCloudPlatform/appengine-nodejs