How to create my own data type in Prisma?
I want to create a data type to store TIME WITH TIME ZONE in PostgreSQL.
You can use the DateTime data type and add a native type attribute that maps it to PostgreSQL's time with time zone type.
Here's how you can define it in the schema file:
model Time {
  time DateTime @db.Timetz(6)
}
Here's a reference for using native database types in the docs.
I am following the blog below, which explains how to create an operator and import another CR into an existing one.
http://heidloff.net/article/accessing-third-party-custom-resources-go-operators/
Here https://github.com/nheidloff/operator-sample-go/blob/aa9fd15605a54f712e1233423236bd152940f238/operator-application/controllers/application_controller.go#L276, the spec is created with hardcoded properties.
I want to import the spark operator types in my operator.
https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/pkg/apis/sparkoperator.k8s.io/v1beta2/types.go
This Spark operator has, say, 100+ types/properties. By following the above blog, I could create the Go object, but it would be hardcoded. I want to create a dynamic object based on the user-provided values in the CR YAML, e.g. a customer might provide 25 attributes, sometimes 50, for a Spark app. I need to have the object created dynamically from the user's YAML. Can anybody please help me out?
If you set the spec type to be a JSON object, the Spec can contain arbitrary JSON/YAML. You don't have to have a strongly typed Spec object; your operator can decode it and do whatever you want with it during your reconcile operation, as long as you can serialize and deserialize it from JSON. You should be able to set it to json.RawMessage, I think?
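A minimal sketch of that idea, assuming a kubebuilder/controller-runtime project; runtime.RawExtension is used here for the free-form field (it already ships with deepcopy support), and all type, field, and package names are illustrative rather than taken from the blog:

package v1alpha1

import (
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// MyAppSpec carries an arbitrary, user-provided block instead of a strongly typed struct.
type MyAppSpec struct {
	// +kubebuilder:pruning:PreserveUnknownFields
	SparkApp runtime.RawExtension `json:"sparkApp,omitempty"`
}

// MyApp is the custom resource wrapping the free-form spec.
type MyApp struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              MyAppSpec `json:"spec,omitempty"`
}

// DecodeSparkApp unmarshals whatever the user put under spec.sparkApp into a
// generic map that the reconciler can inspect, however many attributes were set.
func (a *MyApp) DecodeSparkApp() (map[string]interface{}, error) {
	out := map[string]interface{}{}
	if len(a.Spec.SparkApp.Raw) == 0 {
		return out, nil
	}
	err := json.Unmarshal(a.Spec.SparkApp.Raw, &out)
	return out, err
}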
What do you mean by hardcoded properties?
If I understood it correctly, you want to define an API for a resource which uses both types from an external operator and your own custom types. You can extend your API using the types of specific properties, such as ScheduledSparkApplicationSpec from this. Here is an example API definition in Go:
import (
	v1 "k8s.io/api/core/v1"

	"github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta2"
)

type MyKindSpec struct {
	// using the external third-party API (you need to import it)
	SparkAppTemplate v1beta2.ScheduledSparkApplicationSpec `json:"sparkAppTemplate,omitempty"`

	// using the Kubernetes core API (you need to import it)
	Container v1.Container `json:"container,omitempty"`

	// using custom types
	MyCustomType MyCustomType `json:"myCustomType,omitempty"`
}

type MyCustomType struct {
	FirstField  string `json:"firstField,omitempty"`
	SecondField []int  `json:"secondField,omitempty"`
}
Tech: NestJS, GraphQL (schema first)
When I create a custom scalar named 'DateTime' and generate TypeScript definitions from the schema, 'export type DateTime = any' is created.
When I create a custom scalar named 'Json' and generate TypeScript definitions, 'export type Json = any' is created.
But when I create a custom scalar named 'Date' and generate TypeScript definitions, there is no 'export type Date = any'.
It behaves as the JavaScript built-in 'Date' object.
'https://typegraphql.com/docs/scalars.html' <== The docs say they provide a built-in Date scalar type; is this exactly equal to the JavaScript Date object?
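For reference, a minimal sketch of how the generated scalar aliases can be controlled in NestJS schema-first codegen, assuming a recent @nestjs/graphql whose GraphQLDefinitionsFactory supports customScalarTypeMapping; the paths and mappings below are illustrative:

// generate-typings.ts (illustrative)
import { GraphQLDefinitionsFactory } from '@nestjs/graphql';
import { join } from 'path';

const definitionsFactory = new GraphQLDefinitionsFactory();

definitionsFactory.generate({
  typePaths: ['./src/**/*.graphql'],
  path: join(process.cwd(), 'src/graphql.ts'),
  // Map custom scalars to concrete TS types instead of the default `any`.
  customScalarTypeMapping: {
    DateTime: 'Date',
    Json: 'Record<string, unknown>',
  },
});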
I am trying to run a query from the results of another using the with method in the Objection ORM.
ex:
Model.query().with(alias, query).select(columns).from(alias);
According to the Knex documentation, which is linked from the Objection docs, this should work fine. However, when I run the code, Objection prepends the schema name to the alias and I get an error stating that relation schema.alias does not exist. I tried using raw, but this did not help either.
ex:
Model.query().with(alias, query).select(columns).from(raw(alias));
Is there a way for me to select the table/alias defined in the with method without Objection prepending the schema to it?
The query method of the model I was using was overridden with code that specified the schema:
ex:
class MyModel extends BaseModel {
  static query() {
    return super.query().withSchema(schema);
  }
}
To get around this issue I used the query method of the parent class directly rather than the overridden query method of the model I was using.
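A sketch of that workaround under the same assumptions: calling the parent's static query with the model bound keeps the normal query builder but skips the overridden withSchema call. The alias and column names below are illustrative.

// Bypass MyModel.query() and use BaseModel's implementation, still bound to MyModel.
const rows = await BaseModel.query.call(MyModel)
  .with('recent_orders', (qb) =>
    qb.select('id', 'customer_id').from('orders').where('status', 'shipped')
  )
  .select('*')
  .from('recent_orders');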
This solves my current problem, but does not answer the question of whether one could omit the prepended schema name in the from method.
I have a Cosmos DB with a container that contains documents with varying structures.
I am using the Java SQL API for reading the documents from this container.
The issue I am having is that the API methods for querying/reading the container expect a model class as an input param and return instances of that model class. Because my container contains documents that have varying fields and depth, it is not possible for me to create a model class to represent this.
I need to be able to read/query the documents and then parse it myself and extract the values that I am looking for.
Any ideas? I have used "Object" in the API methods, e.g. queryItems, and then it returns a LinkedHashMap that I can parse myself. Is this the way to do it? It looks a bit "raw", but I have not found a better way.
Below is a typical example from the SDK doc. I cannot create a "Family" model class in my code, because the structure can vary from document to document - both which fields are stored and the depth.
private void queryItems() {
    CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions();
    queryOptions.setQueryMetricsEnabled(true);

    CosmosPagedIterable<Family> familiesPagedIterable = container.queryItems(
        "SELECT * FROM Family WHERE Family.lastName IN ('Andersen', 'Wakefield', 'Johnson')",
        queryOptions, Family.class);

    familiesPagedIterable.iterableByPage(10).forEach(cosmosItemPropertiesFeedResponse -> {
        logger.info("Got a page of query result with {} items(s) and request charge of {}",
            cosmosItemPropertiesFeedResponse.getResults().size(),
            cosmosItemPropertiesFeedResponse.getRequestCharge());
        logger.info("Item Ids {}", cosmosItemPropertiesFeedResponse
            .getResults()
            .stream()
            .map(Family::getId)
            .collect(Collectors.toList()));
    });
}
Per my understanding, it's determined by the SDK function's input parameters and output data type. Indeed, the sample code for both Java and Spring depends on the data model. So it is reasonable for you to use Object in your code because of the varying documents.
And it's true that we can't design a data model that contains all the properties in the documents, but I think it's also a good idea to define a model that contains only the properties required. I mean that a query may not need every property, so the query model should exclude the ones it doesn't use.
I think I found the proper solution:
Create a model class and define the members with unknown depth and structure as JsonNode.
Then the model class can be used, and the values of the JsonNode can be accessed using convenient methods.
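A minimal sketch of that approach, assuming Jackson on the classpath (the Cosmos Java SDK uses it for serialization); the class and field names here are illustrative:

import com.fasterxml.jackson.databind.JsonNode;

public class FlexibleDocument {
    private String id;
    private JsonNode details;   // arbitrary, varying structure lands here as a raw JSON tree

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public JsonNode getDetails() { return details; }
    public void setDetails(JsonNode details) { this.details = details; }
}

Usage (illustrative): query with the flexible model and walk the tree yourself.

CosmosPagedIterable<FlexibleDocument> docs = container.queryItems(
    "SELECT * FROM c", new CosmosQueryRequestOptions(), FlexibleDocument.class);
docs.forEach(doc ->
    logger.info("lastName: {}", doc.getDetails().path("lastName").asText(null)));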
I am trying to mock SQLAlchemy's session.add such that when I insert with session.add(order) and commit, it gives me order.orderId back, which I will use to extend the test case further.
session.add(order)
session.commit()
return order.orderId
I have mocked the sessionmaker, i.e. mocker.patch.object(file_name, "create_pgdb_engine"), which returns me a mock, but I am getting an error like:
Instance has a NULL identity key. If this is an auto-generated value, check that the database table allows generation of new primary key values, and that the mapped Column object is configured to expect these generated values. Ensure also that this flush() is not occurring at an inappropriate time, such as within a order() event.
Trying to mock lower-level database components to create mock objects is probably not a very good strategy for testing. If you're trying to mock SQLAlchemy objects for unit testing purposes, look into factory_boy. Check out the full example here of how a test framework can be constructed using SQLAlchemy and factory_boy.
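A minimal sketch of the factory_boy approach, assuming an Order model mapped with SQLAlchemy and an in-memory SQLite test session; the module path, model, and field names are illustrative:

import factory
from factory.alchemy import SQLAlchemyModelFactory
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

from myapp.models import Base, Order  # hypothetical module and model

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
Session = scoped_session(sessionmaker(bind=engine))

class OrderFactory(SQLAlchemyModelFactory):
    class Meta:
        model = Order
        sqlalchemy_session = Session
        # Commit after creation so auto-generated keys like orderId are populated.
        sqlalchemy_session_persistence = "commit"

    customer_name = factory.Faker("name")  # illustrative non-key field

def test_new_order_gets_an_id():
    order = OrderFactory()
    assert order.orderId is not None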