Here is a scenario that I'm trying to reason through using CouchDB. Let's say there are Sellers that can post items for sale, and Buyers that can place offers on those items. How could this work in CouchDB so that the following are true:
Sellers can query to see their posts and all offers made on them
Buyers can query to see their offers and the post it was made on
users cannot see offers made by other users unless said offer was on an item they posted
The main problem I see is the lack of any "foreign key" equivalent, so I could imagine offers being an array on the post; but then I need to make sure that, when queried, you only see the offers you're allowed to see.
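To make the two options concrete, here is a minimal sketch of the "foreign key" style alternative, with offers as separate documents that reference the post (all field names are assumptions):

// Offers as separate documents pointing back at the post, instead of
// an array embedded in the post document:
// { "_id": "post:123",  "type": "post",  "sellerId": "seller-1" }
// { "_id": "offer:456", "type": "offer", "postId": "post:123",
//   "buyerId": "buyer-7", "sellerId": "seller-1", "amount": 100 }

// View map function: each offer is emitted under both parties' IDs, so
// ?key="seller-1" returns offers on that seller's posts and
// ?key="buyer-7" returns only that buyer's own offers.
function (doc) {
  if (doc.type === "offer") {
    emit(doc.sellerId, { postId: doc.postId, amount: doc.amount });
    emit(doc.buyerId, { postId: doc.postId, amount: doc.amount });
  }
}

Note that the view key alone doesn't stop a user from querying someone else's key; CouchDB has no per-document read control, so enforcing this usually means per-user databases or a proxy in front.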
Currently I'm just using CouchDB on its own, without, for example, an Express server in front of it.
I'm working on a SaaS system that allows purchases to be made through a client's own payment gateway. We have one client that wants to use Stripe as their gateway; however, as they are using Corporate Purchase Cards (CPC), it is necessary to pass Level 3 transaction details through. I've been trying to get details from Stripe on how we ensure that Level 3 data can be passed through successfully, but I'm not really getting anywhere in terms of definitive information we can work with.
Stripe say that their system supports Level 3 data, and that we just need to provide the data in the first place; however, there is nothing in their documentation about this, and the example we have been provided only allows for a single item to be listed. We will need to support a basket of different items.
We are using the Payment Intents flow and already support adding metadata to the transaction. We've been told that adding metadata for SKU, Unit of Measure, Unit Price and Extended Price will allow Level 3 processing, but this seems to fall short of the information listed in other sources (not to mention that it does not allow listing multiple items in the order, since metadata keys must be unique).
Based on that, our metadata population looks like this (values hard-coded for example purposes):
// Metadata attached to the PaymentIntent (values hard-coded for example purposes).
Dictionary<string, string> nRetVar = new Dictionary<string, string>();
nRetVar.Add("Customer", "John Smith");
nRetVar.Add("Email", "John.Smith@example.com");
nRetVar.Add("Order Number", "12345");
nRetVar.Add("Order Date", "2020-02-06");
nRetVar.Add("SKU", "ABCD1234");
nRetVar.Add("Unit of Measure", "1 Pack");
nRetVar.Add("Unit Price", "$10.00");
nRetVar.Add("Extended Price", "$15.00");
Stripe support never seem to directly answer any of the questions we ask about this, so it's proving very hard to make progress. Does anyone have enough experience with this to confirm whether this metadata is enough to count as Level 3, or is there more that we need to be adding?
Stripe supports Level 3 data in their API on both Charge and PaymentIntent. This feature, though, is currently "gated", which means you need to get access to it on your specific account; it's a bit like a long-running beta. You should contact their support team again and ask them to enable Level 3 data on PaymentIntent for your account.
The fields they are expecting are specific to that feature; they do not go inside metadata. The documentation is also gated, meaning you can only see it once you get access to the feature, to avoid confusing other developers who don't have access.
You can see what the shape looks like in stripe-java, for example on Charge here. The feature is not directly supported on PaymentIntent in the library, though, as it is still private.
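For anyone who gets access, here is a rough sketch of the shape, inferred from stripe-java's Charge.Level3 class (treat all field names as assumptions until Stripe unlocks the documentation for your account; amounts are in the smallest currency unit):

// Level 3 data is a dedicated property on the request, not metadata.
const level3 = {
  merchant_reference: "12345",            // e.g. your order number
  customer_reference: "John Smith",
  shipping_address_zip: "94103",
  shipping_from_zip: "94111",
  shipping_amount: 500,
  line_items: [                           // one entry per basket line,
    {                                     // which covers multi-item orders
      product_code: "ABCD1234",
      product_description: "Widget, 1 Pack",
      quantity: 1,
      unit_cost: 1000,
      tax_amount: 83,
      discount_amount: 0,
    },
  ],
};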
I am designing an e-commerce system using a microservices architecture. Suppose I have three contexts: a product catalog, inventory and pricing.
It seems clear to me that each has its own responsibility. But to serve the show case (the product list) I need to make a request to the product catalog, get a list of IDs, and then use them to query the inventory microservice to check availability (in stock or out of stock). Besides that, I need to make a request to pricing to get the price of each product.
So basically, serving one fundamental feature makes me execute three requests (like a join) across three microservices. I have been reading about microservices architecture, and when you are dealing with many "joins" it is possible that these contexts should really be a single one. But, IMO, it seems clear that each context has a different set of responsibilities.
The other option is to create a "search" microservice that aggregates all this information (product + pricing + inventory). We can use a domain event to notify "search" that something has changed, so we can serve the show case with a single request. This looks like CQRS.
The question is...
Is there a correct approach?
Which one is better? What are the trade-offs?
You can try to include some information from one domain context in other domain contexts,
so your product catalog domain would contain the number of items and the price from the inventory and pricing domains.
These copies would be read-only (value objects) and should be updated by events from the inventory and pricing domains.
In your use case the trusted source of truth is the inventory domain, so if any synchronization failure happens, the inventory will still reject any order because of availability.
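A minimal sketch of that flow, assuming a generic event-bus client and made-up event names:

// `bus` stands in for whatever message-broker client you use.
// The catalog keeps read-only copies of price and stock, refreshed
// only by events; it never edits these values itself.
const catalog = new Map(); // productId -> { name, price, inStock }

bus.subscribe("pricing.price-changed", ({ productId, price }) => {
  const product = catalog.get(productId);
  if (product) product.price = price;
});

bus.subscribe("inventory.stock-changed", ({ productId, quantity }) => {
  const product = catalog.get(productId);
  if (product) product.inStock = quantity > 0;
});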
In your case I think it is better to create a separate search microservice to aggregate the data from all of them, as a search almost always spans multiple domain areas like product, inventory and so on.
You can use events from the other microservices (event sourcing) to populate the data in search.
It seems that what you need is to show, in a single view, information coming from different microservices (contexts).
You can use the ViewModel Composition technique, where an infrastructure component (a request handler) intercepts the HTTP request and allows microservices to participate in the response, looking for the microservices that say "hey, I have that info" (inventory has the info about stock, pricing about price, and so on). This infrastructure component composes a dynamic view model on the fly, with the info coming from different microservices.
I've never implemented it, but have a look at this video explaining it, from minute 17:35 to 21:00:
https://www.youtube.com/watch?v=KkzvQSuYd5I
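Though, as said, I've never implemented it, a rough sketch could look like this (an Express-style handler; the service URLs and response fields are assumptions):

// Each microservice contributes its slice; the handler composes a
// dynamic view model on the fly.
app.get("/products/:id/view", async (req, res) => {
  const [product, stock, price] = await Promise.all([
    fetch(`http://catalog/products/${req.params.id}`).then((r) => r.json()),
    fetch(`http://inventory/stock/${req.params.id}`).then((r) => r.json()),
    fetch(`http://pricing/prices/${req.params.id}`).then((r) => r.json()),
  ]);
  res.json({ ...product, inStock: stock.quantity > 0, price: price.amount });
});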
Hope it helps.
Update on 14-Feb-2019
Probably this will answer your question in more detail https://stackoverflow.com/a/54676222/1235935
I think the right approach here is to use event sourcing to pre-aggregate the show case data with product description, inventory and price. A separate microservice is probably not needed. This pre-aggregated data (a.k.a. a materialized view) can be stored in the same microservice that handles the user request to display products (probably the order creation service).
The events could be generated by log-based Change Data Capture (CDC) from the databases of the product, inventory and pricing services, and written to their respective topics in a log-structured streaming platform (e.g. Kafka or AWS Kinesis), as mentioned here. This will also ensure "read your own write" guarantees in the product, inventory and pricing services.
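As a hedged sketch, consuming those CDC topics to maintain the materialized view could look like this (topic names, record fields and the consumer API are all assumptions):

// Runs inside the service that renders the show case: each CDC event
// upserts one slice of the pre-aggregated row.
const showcase = new Map(); // productId -> { description, price, inStock }

consumer.subscribe(["product.cdc", "inventory.cdc", "pricing.cdc"]);
consumer.on("message", ({ topic, value }) => {
  const row = JSON.parse(value);
  const view = showcase.get(row.productId) || {};
  if (topic === "product.cdc") view.description = row.description;
  if (topic === "pricing.cdc") view.price = row.price;
  if (topic === "inventory.cdc") view.inStock = row.quantity > 0;
  showcase.set(row.productId, view);
});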
We have a bot that will be used by different customers, and depending on their database and sector of activity we're going to get different answers from the bot and inputs from users. The intents etc. will be the same; for now we don't plan to make a custom bot for each customer.
What would be the best way to separate data per customer within Chatbase?
I'm not sure if we should use:
A new API key for each customer (would we hit a limitation then?)
Differentiating them by the platform filter (seems not to be appropriate)
Differentiating them by the version filter (same; it would feel a bit weird to me)
Custom Events, though I'm not sure how
For example, in Dialogflow we pass the customer name/ID as a context parameter.
Thank you for your question. You listed the two workarounds I would suggest; I will detail the pros and cons:
New API key for each customer: it could become unwieldy to have to switch bots every time you want to look at a different customer's metrics. You should also create a general API key (bot) to which you send all messages in order to get aggregate metrics. This would mean making two API calls per message.
Differentiate by the version filter: this would be the preferred method; however, it could lengthen load times for your reports as your number of users grows. The advantage is that all of your metrics are in one place, and they are aggregated while only having to send one API call per message.
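For what it's worth, a sketch of the version-filter approach against the generic message API might look like this (the endpoint and field names are from memory; verify them against the Chatbase docs):

// One Chatbase call per message, tagging the customer in `version`.
await fetch("https://chatbase.com/api/message", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    api_key: CHATBASE_KEY,       // your single shared bot key
    type: "user",                // "user" or "agent"
    user_id: "end-user-42",
    time_stamp: Date.now(),
    platform: "Dialogflow",
    message: "I want to order a pack",
    intent: "order.create",
    version: "customer-acme",    // the per-customer tag to filter on
  }),
});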
I'm having a problem understanding how basic communication between microservices should be done, and I haven't been able to find a good solution or standard way to do this in other questions. Let's use this basic example.
I have an invoice service that returns invoices; every invoice contains information (IDs) about the user and the products. If I have a view in which I need to render the invoices for a specific user, I just make a simple request:
const request = require("request"); // npm "request" package

const url = "http://my-domain.com/api/v2/invoices";
const params = { userId: 1 };
request({ url, qs: params, json: true }, (e, r, body) => {
  const results = body; // an array of 1000 invoices for user 1
});
Now, for this specific view, I will need to make another request per item to get the details of each product on each invoice:
results.forEach((invoice) => {
  invoice.items.forEach((itemId) => {
    const url = `http://my-domain.com/api/v2/products/${itemId}`;
    request({ url, json: true }, (e, r, product) => {
      // Do something else with the product...
    });
  });
});
I know the code example is not perfect, but you can see that this will generate a huge number of requests (at least 1,000) to the product service, and that is just for one user. Now imagine 1,000 users making this kind of request.
What is the right way to get the information about all the products without making this many requests, so as to avoid performance issues?
I found some workarounds for this kind of scenario, such as:
Create an API endpoint that accepts a list of IDs in order to make a single request (sketched after this list).
Duplicate the information from the Product service within the invoice service and find a way to keep them in sync.
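A minimal sketch of that first workaround, assuming the endpoint accepts a comma-separated ids query parameter:

// Collect the distinct product IDs across all invoices, then fetch
// them with one batch request instead of one request per item.
const ids = [...new Set(results.flatMap((invoice) => invoice.items))];
const batchUrl = `http://my-domain.com/api/v2/products?ids=${ids.join(",")}`;
request({ url: batchUrl, json: true }, (e, r, products) => {
  const byId = new Map(products.map((p) => [p.id, p]));
  // Each invoice line can now be resolved in memory via byId.get(itemId).
});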
In a microservices architecture, are these the right ways to deal with this kind of issue? To me, they look like simple workarounds.
Edit #1: based on Remus Rusanu's response.
As per Remus' recommendation, I decided to isolate my services and describe them a little better.
As shown in the image above, the microservices are now isolated (in particular the billing service) and they now own their data. With this structure I ensure that the billing service is able to work even if there are async jobs pending, or even if the other two services are down.
If I need to create a new invoice, I can call the other two microservices (Users, Inventory) synchronously and then update the data in the "cache" tables (Users, Inventory) in my billing service.
Is it also fair to assume these "cache" tables are read-only? I assume they are, since only the user/inventory services should be able to modify this information, to preserve isolation and authority over the information.
You need to isolate the services so that they do not share state/data. The design in your question is a single macroservice split into three correlated storage silos. Case in point: you cannot interpret a result from the 'Invoicing' service w/o correlating the data with the 'Products' response(s).
Isolated microservices mean they own their data and can operate independently. An invoice is complete as returned from the 'Invoices' service: it contains the product names, the customer name, all the information on the invoice. All the data comes from its own storage. A separate microservice could be 'Inventory', which manages all the product inventories, current stock etc. It would also have its own data, in its own storage. A 'product' can exist in both storage mediums, and there once was a logical link between them (when the invoice was created), but the link is severed now. The 'Inventory' microservice can change its products (e.g. remove one, add new SKUs etc.) without affecting existing invoices (this is not only a microservice isolation requirement, it is also a basic accounting requirement). I'm not going to go into details here about what a product's 'identity' is in real life.
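As a sketch of what "complete as returned" means (field names are illustrative only):

// The invoice owns a copy of everything it needs; changes made in
// 'Inventory' after creation cannot alter it.
const invoice = {
  id: "inv-2019-0042",
  customerName: "John Smith",            // copied from the customer record
  lines: [
    { sku: "ABCD1234", description: "Widget, 1 Pack",
      unitPrice: 10.0, quantity: 1 },    // copied at creation time
  ],
  total: 10.0,
};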
If you find yourself asking questions like the ones you're asking, it likely means you do not have microservices. You should think about your microservice boundaries while considering what happens if you replace all communication with async, queue-based requests (a response may come 6 days later): if the semantics break, the boundary is probably wrong; if the semantics hold, you are on the right track.
It all depends on the resilience requirements that you have. Do you want your microservice to function when the other microservices are down or not?
The first solution that you presented is the least resilient: if either the users or the products microservice goes down, the invoice microservice also goes down. Is this what you want? On the other hand, this architecture is the simplest. A variation of this architecture is to let the client make the join requests; this leads to a chatty conversation, but it has the advantage that the client can replace the missing information with defaults when the other microservices are down.
The second solution offers the most resilience, but it is more complex; an event-driven architecture helps a lot in this case. In this architecture the microservices act as swimming lanes: a failure in one of the microservices does not propagate to the others.
I'm trying to add a feature to my website that involves the typical "invite your friends" flow, with help from a contact importer (CloudSponge). It's pretty popular and gets the job done, but I need something faster.
The problem with CloudSponge is that they return all contacts in one call, which can mean a long wait for someone with a lot of contacts.
I looked at their REST calls and there doesn't seem to be a way to load contacts in pieces. Do any of these contact-importing services allow you to pull in a few contacts at a time (let's say 50), so that we can show our user the first 50 contacts and load the rest while updating the view? That way they don't have to wait forever for all the contacts to be pulled.
I've looked at other APIs, like Context.IO, but can't seem to find a solution to this one.
I built the CloudSponge API.
Early on, we decided to support imports across a variety of providers while exposing a simple and consistent interface. Pagination and rolling or real-time access to contacts were things that were excluded in order to do that. To provide end-user feedback on the progress of the import, we added the /events endpoint.
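For example, a client might poll it roughly like this to drive a progress indicator while the import runs (treat the path, parameters and response shape as a sketch rather than exact documentation):

// Poll for import progress so the UI can update incrementally.
// `importId` and `renderProgress` are hypothetical.
const poll = setInterval(async () => {
  const resp = await fetch(`https://api.cloudsponge.com/events?import_id=${importId}`);
  const events = await resp.json();
  renderProgress(events);
  if (events.some((ev) => ev.event_type === "complete")) clearInterval(poll);
}, 1000);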
So far, import speed hasn't been a major issue, for a couple of reasons:
In general, end users with an address book of 10000+ contacts are rare (although this may not be the case for certain niches).
End users who do have this many contacts in their address book usually understand that it will take a while to import.
Having said that, the speed is something that we can definitely improve upon. Here are a few ideas:
We can allow for returning only a subset of all contacts by default. For example, we currently return all contacts for Gmail, which is usually a much larger number of contacts than are actually stored in 'my contacts'.
We can implement parallel paginated imports on the server side. This will make our server process work harder and faster to download the user's contacts from, say, Gmail. This adds complexity on our side but keeps the API untouched.
We can implement your suggestion: add a rolling or real-time access to contacts in our API, either in an extended endpoint or a new version of our interface.
I'm happy to work with you on exploring these to improve our service. Send us an email: support@cloudsponge.com
Graeme