migrate int id to uuid string using prisma - node.js

I want to migrate my database column from an int id to a string uuid format.
id Int @id @default(autoincrement())
// to
id String @id @default(uuid())
After rewriting the Prisma schema as above, the migration works, but the existing ids are just stringified sequences like '1' or '2', and I want to reset them to UUIDs. I also couldn't find out which algorithm this uuid() function uses or where it runs (is it derived from the database, the Rust engine, or the client?). What is the best way to reassign the ids? Can I generate UUIDs from Node clients?
[In Addition]
I found an explanation in the referenced documentation: the value is generated by Prisma itself.
You can still use uuid() when using introspection by manually
changing your Prisma schema and generating Prisma Client,
in that case the values will be generated by Prisma's query engine.
But can we call it from the client API?

I found that Prisma uses UUID v4 by default, so any library or programming language that generates v4 UUIDs will produce compatible values.
I posted a discussion thread in the Prisma repository, too.
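Since the stored values are plain v4 UUIDs, one way to reassign the existing ids is a one-off Node script run after the migration. Here is a minimal sketch, assuming the schema already reads id String @id @default(uuid()), a model named User (rename to match your schema), and no foreign keys referencing these ids (those would need the same rewrite):

// backfill-uuids.js (hypothetical file name)
const { PrismaClient } = require('@prisma/client')
const { randomUUID } = require('crypto') // built into Node 14.17+, generates v4 UUIDs

const prisma = new PrismaClient()

async function main() {
  // the migration left the old integer ids behind as strings ('1', '2', ...)
  const users = await prisma.user.findMany({ select: { id: true } })
  for (const user of users) {
    // swap each carried-over id for a fresh v4 UUID
    await prisma.user.update({
      where: { id: user.id },
      data: { id: randomUUID() },
    })
  }
}

main().finally(() => prisma.$disconnect())

New rows will keep getting uuid() values from Prisma's query engine; the script only rewrites the rows carried over by the migration.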

Related

Google Cloud Python Lib - Get Entity By ID or Key

I've been working on a python3 script that is given an Entity Id as a command line argument. I need to create a query or some other way to retrieve the entire entity based off this id.
Here are some things I've tried (self.entityId is the id provided on the commandline):
entityKey = self.datastore_client.key('Asdf', self.entityId, namespace='Asdf')
query = self.datastore_client.query(namespace='asdf', kind='Asdf')
query.key_filter(entityKey)
query_iter = query.fetch()
for entity in query_iter:
    print(entity)
Instead of query.key_filter(), i have also tried:
query.add_filter('id', '=', self.entityId)
query.add_filter('__key__', '=', entityKey)
query.add_filter('key', '=', entityKey)
So far, none of these have worked. However, a generic non-filtered query does return all the Entities in the specified namespace. I have been consulting the documentation at: https://googleapis.dev/python/datastore/latest/queries.html and other similar pages of the same documentation.
A simpler answer is to fetch the entity directly by its key, i.e.:
self.datastore_client.get(self.datastore_client.key('Asdf', self.entityId, namespace='asdf'))
However, given that you are casting both entity.key.id and self.entityId, you'll want to check your data to see whether you are using key names or key ids. Alternatives to the above are:
If you are using key ids but self.entityId is a string:
self.datastore_client.get(self.datastore_client.key('Asdf', int(self.entityId), namespace='asdf'))
If you are using key names but self.entityId is an int:
self.datastore_client.get(self.datastore_client.key('Asdf', str(self.entityId), namespace='asdf'))
I've fixed this problem myself. Because I could not get any filter approach to work, I ended up doing a query for all Entities in the namespace, and then did a conditional check on entity.key.id, comparing it to the id passed on the command line.
query = self.datastore_client.query(namespace='asdf', kind='Asdf')
query_iter = query.fetch()
for entity in query_iter:
    if int(entity.key.id) == int(self.entityId):
        # do some stuff with the entity data
It is actually very easy to do, although not so clear from the docs.
Here's the working example:
>>> key = client.key('EntityKind', 1234)
>>> client.get(key)
<Entity('EntityKind', 1234) {'property': 'value'}>

Seeder method for Azure Database in EF Core 2

What is the proper method to seed data into an Azure Database? Currently in development I have a seeder method that inserts the first couple of users as well as products. The Users (including admin user) username and password are hardcoded into the Seed method, is this an acceptable practice?
As far as the products are concerned, I have a json file with the product names and descriptions - which in development the seeder method iterates through and inserts the data.
To answer your question, "The Users (including admin user) username and password are hardcoded into the Seed method, is this an acceptable practice?":
No, you should not keep passwords in cleartext format; you can keep them in encrypted form and seed that value instead.
In EF Core 2.1, the seeding workflow is quite different. There is now Fluent API logic to define the seed data in OnModelCreating. Then, when you create a migration, the seeding is transformed into migration commands to perform inserts, and is eventually transformed into SQL that that particular migration executes. Further migrations will know to insert more data, or even perform updates and deletes, depending on what changes you make in the OnModelCreating method.
Suppose the three classes in my model are Magazine, Article and Author. A magazine can have one or more articles and an article can have one author. There's also a PublicationsContext that uses SQLite as its data provider and has some basic SQL logging set up.
Let's take an example with a single entity type.
Let’s start by seeing what it looks like to provide seed data for a magazine—at its simplest.
The key to the new seeding feature is the HasData Fluent API method, which you can apply to an Entity in the OnModelCreating method.
Here’s the structure of the Magazine type:
public class Magazine
{
    public int MagazineId { get; set; }
    public string Name { get; set; }
    public string Publisher { get; set; }
    public List<Article> Articles { get; set; }
}
It has a key property, MagazineId, two strings and a list of Article types. Now let’s seed it with data for a single magazine:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Magazine>().HasData(
        new Magazine { MagazineId = 1, Name = "MSDN Magazine" });
}
A couple things to pay attention to here: First, I’m explicitly setting the key property, MagazineId. Second, I’m not supplying the Publisher string.
Next, I’ll add a migration, my first for this model. I happen to be using Visual Studio Code for this project, which is a .NET Core app, so I’m using the CLI migrations command, “dotnet ef migrations add init.” The resulting migration file contains all of the usual CreateTable and other relevant logic, followed by code to insert the new data, specifying the table name, columns and values:
migrationBuilder.InsertData(
    table: "Magazines",
    columns: new[] { "MagazineId", "Name", "Publisher" },
    values: new object[] { 1, "MSDN Magazine", null });
Inserting the primary key value stands out to me here—especially after I’ve checked how the MagazineId column was defined further up in the migration file. It’s a column that should auto-increment, so you may not expect that value to be explicitly inserted:
MagazineId = table.Column<int>(nullable: false)
    .Annotation("Sqlite:Autoincrement", true)
Let’s continue to see how this works out. Using the migrations script command, “dotnet ef migrations script,” to show what will be sent to the database, I can see that the primary key value will still be inserted into the key column:
INSERT INTO "Magazines" ("MagazineId", "Name", "Publisher")
VALUES (1, 'MSDN Magazine', NULL);
That’s because I’m targeting SQLite. SQLite will insert a key value if it’s provided, overriding the auto-increment. But what about with a SQL Server database, which definitely won’t do that on the fly?
I switched the context to use the SQL Server provider to investigate and saw that the SQL generated by the SQL Server provider includes logic to temporarily set IDENTITY_INSERT ON. That way, the supplied value will be inserted into the primary key column. Mystery solved!
You can use HasData to insert multiple rows at a time, though keep in mind that HasData is specific to a single entity. You can’t combine inserts to multiple tables with HasData. Here, I’m inserting two magazines at once:
modelBuilder.Entity<Magazine>()
    .HasData(
        new Magazine { MagazineId = 2, Name = "New Yorker" },
        new Magazine { MagazineId = 3, Name = "Scientific American" }
    );
For a complete example, you can browse through this sample repo.
Hope it helps.

Meteor Mongo BulkOp turning ObjectID into plain object

While using Meteor, I sometimes access the underlying Node Mongo driver so I can make bulk updates and inserts.
const bulk = Coll.rawCollection().initializeOrderedBulkOp();
bulk.insert({key_id: Mongo.Collection.ObjectID()}); // note key_id is an ObjectID
...
bulk.execute();
But the value of the key_id field ends up being the plain subdocument {_str: '...'} when I look in the database after the insert.
Is there any way to use bulk operations in Node's Mongo library (whatever it is Meteor uses) and keep ObjectID's as Mongo's ObjectID type?
(There's many posts about the nature of the different ID types, and explaining Minimongo, etc. I'm interested specifically about the bulk operations converting ObjectID's into plain objects, and solving that issue.)
From Neil's top-level comment:
On a native method you would actually need to grab the native implementation. You should be able to access from the loaded driver through MongoInternals [...]
Mongo.Collection.ObjectID is not a plain ObjectId representation, and is actually a complex object for Meteor internal use. Hence why the native methods don't know how to use the value.
So if you have some field which is an ObjectId, and you're using some method of a Meteor Collection's rawCollection (for example,
.distinct
.aggregate
.initializeOrderedBulkOp
.initializeUnorderedBulkOp
), you'll want to convert your ObjectIds using
const convertedID = new MongoInternals.NpmModule.ObjectID(
originalID._str
);
// then use in one of the arguments to your function or something
const query = {_id: convertedID};
before calling the method on them.
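Putting it together, a minimal sketch of a bulk insert with the conversion applied (Coll is assumed to be an existing Meteor collection):

const bulk = Coll.rawCollection().initializeOrderedBulkOp();

// Meteor's wrapper only carries a _str property, which the native driver
// doesn't recognize, so convert it before handing it to the bulk op
const meteorId = new Mongo.ObjectID();
const nativeId = new MongoInternals.NpmModule.ObjectID(meteorId._str);

bulk.insert({ key_id: nativeId }); // stored as a real BSON ObjectId
bulk.execute();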

Create a Couchbase Document without Specifying an ID

Is it possible to insert a new document into a Couchbase bucket without specifying the document's ID? I would like to use Couchbase's Java SDK to create a document and have Couchbase determine the document's UUID, with Groovy code similar to the following:
import com.couchbase.client.java.CouchbaseCluster
import com.couchbase.client.java.Cluster
import com.couchbase.client.java.Bucket
import com.couchbase.client.java.document.JsonDocument
import com.couchbase.client.java.document.json.JsonObject
// Connect to localhost
CouchbaseCluster myCluster = CouchbaseCluster.create()
// Connect to a specific bucket
Bucket myBucket = myCluster.openBucket("default")
// Build the document
JsonObject person = JsonObject.empty()
.put("firstname", "Stephen")
.put("lastname", "Curry")
.put("twitterHandle", "#StephenCurry30")
.put("title", "First Unanimous NBA MVP)
// Create the document
JsonDocument stored = myBucket.upsert(JsonDocument.create(person));
No, Couchbase documents have to have a key; that's the whole point of a key-value store, after all. However, if you don't care what the key is (for example, because you retrieve documents through queries rather than by key), you can just use a UUID or any other unique value when creating the document.
It seems there is no way to have Couchbase generate the document IDs for me. At the suggestion of another developer, I am using UUID.randomUUID() to generate the document IDs in my application. The approach is working well for me so far.
Reference: https://forums.couchbase.com/t/create-a-couchbase-document-without-specifying-an-id/8243/4
As you already found out, generating a UUID is one approach.
If you want to generate a more meaningful ID, for instance a "foo" prefix followed by a sequence number, you can make use of atomic counters in Couchbase.
The atomic counter is a document that contains a long, on which the SDK relies to guarantee a unique, incremented value each time you call bucket.counter("counterKey", 1, 2). This code would take the value of the counter document "counterKey", increment it by 1 atomically and return the incremented value. If the counter doesn't exist, it is created with the initial value 2, which is the value returned.
This is not automatic, but a Couchbase way of creating sequences / IDs.
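The question uses the Java SDK; as a rough sketch of the same pattern with the Couchbase Node SDK (2.x-era callback API; the counter key, prefix, and option names here are assumptions, so check the signatures for your SDK version):

// build "foo"-prefixed sequential ids from an atomic counter document
bucket.counter('idCounter', 1, { initial: 1 }, (err, res) => {
  if (err) throw err
  const docId = 'foo' + res.value // e.g. foo1, foo2, ...
  bucket.insert(docId, { firstname: 'Stephen' }, (err2) => {
    if (err2) throw err2
  })
})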

Storing schema-less complex json object in Cassandra

I have a schema-less json object that I wish to store in Cassandra DB using spring-cassandra. I learned that Cassandra supports a Map type, but it doesn't accept Map<String, Object> as a data model.
I need to query on the fields of the json so storing it as a blob is out of question. Is there anyway I can do this?
PS: I've looked at Storing JSON object in CASSANDRA, the answer didn't seem applicable to my use case as my json could be very complex.
Did you look at UDTs (user-defined types)?
You can define a UDT like this:
CREATE TYPE my_json(
    property1 text,
    property2 int,
    property3 map<text, text>,
    property4 map<int, frozen<another_json_type>>,
    ...
)
Note that a UDT nested inside a collection must be declared frozen.
And then in Java use Map<String, UserType>
Note: UserType comes from the Java driver: https://github.com/datastax/java-driver/blob/2.1/driver-core/src/main/java/com/datastax/driver/core/UserType.java
You cannot create a user type in Java; you can only get it from the metadata of your table. See this: https://github.com/datastax/java-driver/blob/3.0/driver-core/src/test/java/com/datastax/driver/core/UserTypesTest.java#L62-L81
1) One solution is to integrate Solr search and index this table first.
2) Then write a Solr analyzer to parse the JSON and put it under various fields in Solr while indexing.
3) Then use a Solr-supported query such as: select * from table where solr_query = "{search expression syntax}"
