How to use AWS Amplify @searchable to search multiple keys of a model? - node.js

I am using amplify graphql api. Where I have an Item model with different attributes.
I am trying to create an autocomplete or autosuggest api with user-added input.
type Item @model @searchable {
  id: ID!
  name: String!
  description: String
  category: String
  fullAddress: String!
  street: String
  area: String
  district: String
  city: String
  state: String!
  zip: Int!
}
Right now I am querying it like this:
query SearchEvent($query: String) {
  searchEvents(filter: {
    name: {
      match: $query
    }
  }) {
    items {
      id
      name
    }
  }
}
But this only gives me a full-word match against a single key; in this example it matches on name.
How do I query to get a response with any match or suggestions from any key of the item object?
For example, if my item name is Laptop and the API query is la, it should return the Laptop item and the names of other matching items.
Likewise, if the API query is ala, it should return names like Alabama and Alaska, plus any item names matching ala, with a limit of, say, 10.
Anyway, is this possible? Any lead will be helpful.
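A sketch of the kind of multi-field prefix query I'm hoping is possible, assuming the generated filter input supports matchPhrasePrefix and an or list (I'm using searchItems here, the query Amplify would generate for the Item model above; field names follow my schema):

// Sketch only: multi-field prefix suggestions via the @searchable filter.
// Assumes the generated SearchableItemFilterInput exposes matchPhrasePrefix and "or".
import { API, graphqlOperation } from 'aws-amplify';

const suggestItems = /* GraphQL */ `
  query SuggestItems($term: String, $limit: Int) {
    searchItems(
      filter: {
        or: [
          { name: { matchPhrasePrefix: $term } }
          { city: { matchPhrasePrefix: $term } }
          { state: { matchPhrasePrefix: $term } }
          { category: { matchPhrasePrefix: $term } }
        ]
      }
      limit: $limit
    ) {
      items {
        id
        name
      }
    }
  }
`;

// e.g. fetch up to 10 suggestions for the prefix "ala"
API.graphql(graphqlOperation(suggestItems, { term: 'ala', limit: 10 }))
  .then(res => console.log(res.data.searchItems.items));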

Related

REST API for search by products, brands and more using a single search input

I'm new to Node and MongoDB.
I want to implement a search REST API where a single param is passed to the API, the Mongo collections are searched against the category and subCategory values, and the matching objects for that keyword are returned, just like the Flipkart search bar with suggested keywords. What should I follow to achieve this? I only have knowledge of basic CRUD operations. Any suggestions or reference practices are helpful to me. Thank you.
You can follow two approaches for the above implementation.
1) Basic approach: create a search collection which would have fields like
Search
_id, name, description, type (brand, product, etc.), type_id (brand_id, product_id), logo (a brand logo, product logo, etc.).
On every product, brand, etc. addition we would create an entry in the search collection.
Similarly, on deletion, we would remove that product or brand from the search collection.
We would have an endpoint like http://<domain_name>/search/:string
which would respond with a result like:
{
  "data": [
    {
      "_id": "507f1f77bcf86cd799439011",
      "name": "Clothing",
      "description": "Sample description about clothing",
      "type": "brand",
      "type_id": "675f1f77bcf86cd799439124", // brand id reference
      "logo": "http://<domain_name>/logo/675f1f77bcf86cd799439124"
    },
    {
      "_id": "5d3f1f77bcf86cd799439234",
      "name": "Monitor",
      "description": "Sample description about Monitor",
      "type": "product",
      "type_id": "5j5f1f77bcf86cd799439987", // product id reference
      "logo": "http://<domain_name>/logo/5j5f1f77bcf86cd799439987"
    },
    {
      "_id": "507f1f77bcf86cd799439333",
      "name": "Mobile",
      "description": "Sample description about Mobile",
      "type": "brand",
      "type_id": "876f1f77bcf86cd799439444", // brand id reference
      "logo": "http://<domain_name>/logo/876f1f77bcf86cd799439444"
    }
  ]
}
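A minimal sketch of that endpoint using Express and the official mongodb driver, with the search collection described above (database, collection and field names are assumptions):

// Sketch only: case-insensitive prefix search on the "search" collection.
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
const client = new MongoClient('mongodb://localhost:27017'); // adjust connection string

app.get('/search/:string', async (req, res) => {
  // Note: escape user input before building a real regex.
  const results = await client.db('shop').collection('search') // assumed db name
    .find({ name: { $regex: '^' + req.params.string, $options: 'i' } })
    .limit(10)
    .toArray();
  res.json({ data: results });
});

client.connect().then(() => app.listen(3000));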
2) Sophisticated approach: instead of using a search collection, you can go with Elasticsearch for a faster and more robust approach.
You can use MongoDB's text search feature,
or you can go with Elasticsearch, as per your requirements.
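For the MongoDB text search option, a small sketch (collection and field names are assumptions based on the product example above); note that $text matches whole words, so it suits keyword search rather than prefix autocomplete:

// Sketch only: create a text index once, then query it with $text.
const { MongoClient } = require('mongodb');

async function textSearch(term) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const products = client.db('shop').collection('products'); // assumed names

  // One-time setup: index the fields you want keyword matches on.
  await products.createIndex({ name: 'text', category: 'text', subCategory: 'text' });

  // Full-word keyword search across the indexed fields, best matches first.
  const results = await products
    .find({ $text: { $search: term } }, { projection: { score: { $meta: 'textScore' } } })
    .sort({ score: { $meta: 'textScore' } })
    .limit(10)
    .toArray();

  await client.close();
  return results;
}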

How to version records of a table in a database?

I have a table in a database that stores products to buy (let's call the table Product). It's necessary to track changes to every product. I have two ideas for versioning the records.
The first is to create a new record in Product by copying the product, overriding the fields that differ, and keeping references in the record to the older and newer versions. In that case a record in the Product table is read-only, except for the field that indicates whether the product is archived or not.
The second is to create two tables: Product and ArchivisedProduct. The Product records are editable, but for each change a new record is created in ArchivisedProduct where only the differences are stored (so apart from the id, all fields are nullable), and the tables hold references to each other.
Do you know of any tool that could manage that process and works well with Node.js, Prisma, PostgreSQL and Apollo? winston/papertrail was recommended to me for this, but from reading the docs it seems that it only creates logs.
Example database structure for more clarity:
1st example:
type Product {
  id: Int! @id
  name: String!
  price: Float!
  archived: Boolean!
  productVersionIds: [Product]!
}
2nd example:
type Product {
  id: Int! @id
  name: String!
  price: Float!
  archivisedProductIds: [ArchivisedProduct]! @relation(name: "ProductToArchiva")
}
type ArchivisedProduct {
  id: Int! @id
  name: String
  price: Float
  product: Product! @relation(name: "ProductToArchiva")
}
Depending on how many products you intend to store, it may be simpler to have each Product version stored in the ProductVersion model, and then keep tabs on the latest Product (the "head") in a Product model.
You'd have:
type ProductVersion {
  id: Int!
  version: Int!
  name: String!
  price: Float!

  @@id([id, version])
}
type Product {
  productId: Int! @id
  headVersion: Int!
  productVersion: ProductVersion! @relation(fields: [productId, headVersion], references: [id, version])
}
For each change to a Product, you'd store the new ProductVersion containing the information, and update the corresponding Product to point the headVersion to the new ProductVersion. That would all be part of a single transaction to ensure consistency.
To query your list of products you'd use the Product object and join the ProductVersion.
If you store a lot of products and joining is a concern, you can consider keeping a copy of the whole ProductVersion data in the Product instead of using a relation through the headVersion field.
Note that this also implies you'd compute diffs at runtime rather than having them stored directly in the database itself.
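A rough sketch of that write path with the Prisma client, assuming the ProductVersion/Product schema above (model and field names follow the example and may need adjusting):

// Sketch only: create the new version and move the head pointer in one transaction.
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();

async function changeProduct(productId, currentVersion, changes) {
  const nextVersion = currentVersion + 1;

  await prisma.$transaction([
    // 1. Store the full new version of the product (changes = { name, price, ... }).
    prisma.productVersion.create({
      data: { id: productId, version: nextVersion, ...changes },
    }),
    // 2. Point the product's head at the version just created.
    prisma.product.update({
      where: { productId },
      data: { headVersion: nextVersion },
    }),
  ]);
}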

GraphQL: How nested should I make my schema?

This past year I converted an application to use GraphQL. It's been great so far; during the conversion I essentially ported all the services that backed my REST endpoints to back GraphQL queries and mutations. The app is working well, but I would like to continue to evolve my object graph.
Let's say I have the following relationships:
User -> Team -> Boards -> Lists -> Cards -> Comments
I currently have two different nested schemas. User -> Team:
type User {
id: ID!
email: String!
role: String!
name: String!
resetPasswordToken: String
team: Team!
lastActiveAt: Date
}
type Team {
id: ID!
inviteToken: String!
owner: String!
name: String!
archived: Boolean!
members: [String]
}
Then I have Boards -> Lists -> Cards -> Comments
type Board {
id: ID!
name: String!
teamId: String!
lists: [List]
createdAt: Date
updatedAt: Date
}
type List {
id: ID!
name: String!
order: Int!
description: String
backgroundColor: String
cardColor: String
archived: Boolean
boardId: String!
ownerId: String!
teamId: String!
cards: [Card]
}
type Card {
id: ID!
text: String!
order: Int
groupCards: [Card]
type: String
backgroundColor: String
votes: [String]
boardId: String
listId: String
ownerId: String
teamId: String!
comments: [Comment]
createdAt: Date
updatedAt: Date
}
type Comment {
id: ID!
text: String!
archived: Boolean
boardId: String!
ownerId: String
teamId: String!
cardId: String!
createdAt: Date
updatedAt: Date
}
Which works great. But I'm curious how nested I can truly make my schema. If I added the rest to make the graph complete:
type Team {
id: ID!
inviteToken: String!
owner: String!
name: String!
archived: Boolean!
members: [String]
boards: [Board] # newly added
}
This would achieve a much deeper graph. However, I worry about how complicated mutations would become. Specifically, from the board schema downwards I need to publish subscription updates for all actions. If I add a comment, publishing the entire board as an update is incredibly inefficient, while building subscription logic for each create/update of every nested type seems like a ton of code to achieve something simple.
Any thoughts on what the right depth is for object graphs, keeping in mind that every object besides a user needs to be broadcast to multiple users?
Thanks
GraphQL's purpose is to save you from making several round-trip queries, so I'm sure that the nested structure is the right way. With security in mind, add a GraphQL depth-limiting library.
GraphQL style guides suggest keeping all complex structures in separate object types (as you have: Comment, Team, Board...).
Then making a complex query/mutation is up to you.
I'd like you to expand on this sentence:
If I add a comment, publishing the entire board as an update is incredibly inefficient
I'm not sure about this, as you have the id of the Card. Adding a new comment will trigger a mutation that creates a new Comment record and updates the Card with the new comment.
So the structure of your data on the backend will define the way you fetch it, but not so much the way you mutate it.
Take a look at the GitHub GraphQL API for example:
each of the mutations is a small function for updating/creating a piece of the complex tree, even though there's a nested structure of types on the backend.
In addition, for general knowledge of approaches to designing mutations, I'd suggest this article.
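To illustrate the per-entity publishing idea (publishing just the new comment rather than the whole board), a sketch using the graphql-subscriptions package; the COMMENT_ADDED topic, the addComment mutation and the db helper are illustrative names, not from the question:

// Sketch only: publish just the new comment, filtered by the card it belongs to.
const { PubSub, withFilter } = require('graphql-subscriptions');

const pubsub = new PubSub();

const resolvers = {
  Mutation: {
    addComment: async (_, { cardId, text }, { db }) => {
      const comment = await db.comments.create({ cardId, text }); // assumed data layer
      pubsub.publish('COMMENT_ADDED', { commentAdded: comment });
      return comment;
    },
  },
  Subscription: {
    commentAdded: {
      subscribe: withFilter(
        () => pubsub.asyncIterator('COMMENT_ADDED'),
        // Only deliver the event to clients watching this card.
        (payload, variables) => payload.commentAdded.cardId === variables.cardId
      ),
    },
  },
};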
You can use nesting in GraphQL like this:
type NestedObject {
title: String
content: String
}
type MainObject {
id: ID!
myObject: [NestedObject]
}
In the above code, the type definition of NestedObject gets injected into the myObject array. To understand it better, you can see it as:
type MainObject {
id: ID!
myObject: [
{
title: String
content: String
}
]
}
I hope this solves your problem!

How to get data from a MySQL relation table in Prisma

In datamodel.graphql
type Ride {
rideId: String
productId: String
passenger: Passenger
origin: Origin
destination: Destination
dateTime: DateTime
feedback: String
}
type Passenger {
id: ID! @unique
firstName: String
lastName: String
}
type Destination {
# The unique ID of the destination.
id: ID! @unique
latitude: Float
longitude: Float
address: String
}
type Origin {
id: ID! @unique
latitude: Float
longitude: Float
address: String
}
type Report {
productId: String
passenger: Passenger
description: String
}
I deployed this datamodel and it generated a MySQL DB along with auto-generated queries and mutations.
It creates "Ride", "Passenger", "Report" and "Origin" tables in MySQL, but it didn't create any columns for passenger, origin, or destination in the "Ride" table.
Instead it creates separate relation tables for these, like _PassengerToRide, _OriginToRide, and _DestinationToRide.
Because no relation is established in the "Ride" table, I couldn't get the details from the Passenger, Origin and Destination tables when I query rides(). Is this the correct way to define datamodel.graphql?
Based on your description, this query should "just work":
query {
rides {
feedback
passenger {
id
}
origin {
id
}
destination {
id
}
}
}
Prisma uses the relation table approach you mentioned to keep track of relations between two nodes; for example, the table _OriginToRide corresponds to the relation @relation(name: "OriginToRide") from your datamodel.
You don't have to change anything on the SQL level to connect the relations afterwards.
Note: The above applies to Prisma database connectors with activated migrations. If your connector doesn't do migrations, different approaches to represent relations are supported. The respective datamodel to support this can be generated by introspecting your existing database schema.

How to update an index with new variables in Elasticsearch?

I have an index 'user' which has a mapping with the fields "first", "last", and "email". The name fields get indexed at one point, and then the email field gets indexed at a separate point. I want these writes to go to the same document id though, corresponding to one user_id parameter. So something like this:
function indexName(client, id, name) {
  return client.update({
    index: 'user',
    type: 'text',
    id: id,
    body: {
      doc: {
        first: name.first,
        last: name.last
      }
    }
  });
}

function indexEmail(client, id, email) {
  return client.update({
    index: 'user',
    type: 'text',
    id: id,
    body: {
      doc: {
        email: email
      }
    }
  });
}
When running:
indexName(client, "Jon", "Snow").then(indexEmail(client, "jonsnow#gmail.com"))
I get an error message saying that the document has not been created yet. How do I account for a document with a variable number of fields? And how do I create the index if it has not been created and then subsequently update it as I go?
The function you are using, client.update, updates part of a document. What you actually need is to first create the document using the client.create function.
To create an index, you need the indices.create function.
As for the variable number of fields in a document type, it is not a problem because Elasticsearch supports dynamic mapping. However, it is advisable to provide a mapping when creating the index and try to stick to it. Elasticsearch's default mapping can create problems for you later on, e.g. analyzing UUIDs or email addresses, which then become difficult (or impossible) to search and match.
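A sketch of that flow with the legacy elasticsearch JavaScript client used in the question (the explicit mapping and field types are assumptions; adjust for your Elasticsearch version):

// Sketch only: create the index with an explicit mapping, create the document,
// then apply partial updates to the same id.
async function setupAndIndex(client, id, name, email) {
  // Create the index with a mapping for the 'text' type, if it doesn't exist yet.
  const exists = await client.indices.exists({ index: 'user' });
  if (!exists) {
    await client.indices.create({
      index: 'user',
      body: {
        mappings: {
          text: {
            properties: {
              first: { type: 'keyword' },
              last: { type: 'keyword' },
              email: { type: 'keyword' }
            }
          }
        }
      }
    });
  }

  // Create the document first...
  await client.create({
    index: 'user',
    type: 'text',
    id: id,
    body: { first: name.first, last: name.last }
  });

  // ...then partial updates to the same id succeed.
  await client.update({
    index: 'user',
    type: 'text',
    id: id,
    body: { doc: { email: email } }
  });
}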
