Made2Manage ERP Web service method information - m2m

I am using the Made2Manage ERP web service but want to know more about its methods.
Which methods can I use to get Customer and Sales Order data?

HTTP GET
Use GET requests to retrieve resource representation/information only – and not to modify it in any way. As GET requests do not change the state of the resource, these are said to be safe methods. Additionally, GET APIs should be idempotent, which means that making multiple identical requests must produce the same result every time until another API (POST or PUT) has changed the state of the resource on the server.
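The safe/idempotent property described above can be sketched in Node.js. The store contents and function names here are hypothetical, purely to illustrate that a GET handler reads state without changing it:

```javascript
// Hypothetical in-memory resource store standing in for a real backend.
const store = { customers: [{ id: 1, name: "Acme" }] };

// A GET handler body: returns a representation of the resource
// without mutating the store in any way (safe), so calling it any
// number of times yields the same result (idempotent) until a
// POST/PUT elsewhere changes the state.
function getCustomer(id) {
  return store.customers.find((c) => c.id === id) ?? null;
}

console.log(getCustomer(1)); // same result on every call
```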

Related

Check if request to api is made from frontend or backend application

In order to restrict actions to specific tokens, I need to check whether a request made to my API comes from a frontend or a backend application.
I want to implement the same behavior as Stripe, where using a secret key on the client side results in an error.
So is there a safe way to check this?
The answer could be the User-Agent header, but in fact there is no reliable way to tell whether a request came from a browser or from another API, for example, because the User-Agent can be manipulated.
Well-behaved "bots" (such as common search engine spiders) will identify themselves with a User Agent specific to them.
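To make the limitation concrete, a heuristic User-Agent check might look like the following. This is a sketch only: the patterns are illustrative, and since any client can forge its User-Agent, this can never be a security boundary.

```javascript
// Heuristic ONLY. The User-Agent header can be forged, so this cannot
// reliably distinguish a browser from another server-side client.
function looksLikeBrowser(userAgent) {
  if (!userAgent) return false; // many HTTP libraries send no UA at all
  // Common browser UA fragments (illustrative, not exhaustive).
  return /Mozilla|Chrome|Safari|Firefox|Edg/i.test(userAgent);
}

console.log(looksLikeBrowser("Mozilla/5.0 (Windows NT 10.0)")); // true
console.log(looksLikeBrowser("curl/8.4.0"));                    // false
```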

DocuSign dynamic/multiple webhook urls

Is there any guidelines/recommendations for the webhook URL that I can use for setting up the event notifications?
I am thinking of using something like this: /webhook/v1/{uniqueAppID}. The uniqueAppID changes for every envelope; I dynamically construct the URL and set it on the EventNotification object while creating the envelope.
The unique app ID is used to track the response from DocuSign, so if there is any issue parsing the response, I know which envelope/app ID the notification was for.
I read that the system will retry delivery for failures until a successful acknowledgement is received from the webhook. In my case, it will be like having multiple webhooks. Will this setup cause any issues with retrying failures? Would setting up the URL like /webhook/v1?uniqueAppID={uniqueAppID} help?
Thank You
Great questions.
First up, you don't have to use any kind of endpoint/URL scheme. You could have them all go to one place. The information you get from DocuSign in the payload lets you know everything you need about the envelope, and if you need additional information, you can make API calls to determine it.
However, I agree that if you need this information, using different endpoints gives you greater flexibility. Also, if these are different apps, you could in theory deploy them separately, so the endpoint code could change without affecting the other apps.
As for retries: DocuSign retries only when it does not get a positive HTTP response (200, 201, etc.) from your web server. If DocuSign gets a 401 or 500, or no response at all, it will retry again in 24 hours.
That should not impact your decision about the design.
One last thing to consider: your listener cannot be behind a firewall/VPN, etc. I highly recommend you consider a public cloud provider (Azure, AWS, Google) for your app to avoid networking issues.
When using envelope-level webhooks, the triggers and destination URI are embedded into that envelope. After the envelope enters a predefined state like 'sent' or 'completed', the writeback targets the URI that you provided. Unless you intentionally change this, it should remain envelope-specific.
This is different from our typical Connect setup, which has a single URI per listener; envelope writebacks are directed to the listener URI at the time they're processed.
Any subsequent failure or retry attempts would follow the standard guidelines outlined here: How to get docusign webhook to retry on failure?
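The per-envelope URL idea from the question could be sketched as a small builder for the eventNotification block of the envelope definition. The base URL and uniqueAppId value here are hypothetical; the field names follow the DocuSign eSignature REST API's eventNotification shape:

```javascript
// Sketch: build an envelope-level eventNotification with a per-envelope URL.
// "example.com" and the uniqueAppId are placeholders for your own values.
function buildEventNotification(uniqueAppId) {
  return {
    url: `https://example.com/webhook/v1/${uniqueAppId}`,
    // With acknowledgement required, non-2xx responses trigger DocuSign retries.
    requireAcknowledgment: "true",
    envelopeEvents: [
      { envelopeEventStatusCode: "sent" },
      { envelopeEventStatusCode: "completed" },
    ],
  };
}

console.log(buildEventNotification("app-123").url);
// https://example.com/webhook/v1/app-123
```

The object would be assigned to the envelope definition's eventNotification property before calling the envelope-creation API.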

How do i authenticate my api (node app) when being accessed by my UI?

I've looked at various approaches to this but I'm yet to find one that is perfect for my set up.
I have a UI that calls the API to present data to a user. The UI doesn't require anyone to log in, and I don't want to authenticate every user. What I'm trying to do is simply protect the API, because anyone with the URL to the API would be able to access the data.
In my mind, I can create a password/API key of some sort and store it as an environment variable that is sent from the UI to the API whenever a request is made. On the API side, that API key/password is validated before access is allowed.
Does this sound wise? Would you suggest something else? Have I missed anything? All thoughts are welcome.
The solution sounds good to me. You can do something like this:
back-end: create a new REST endpoint (something like /api/get-token) in your Node.js application which generates a token (or ID), saves it locally (perhaps in a DBMS) and returns it as the response. Then, in every endpoint you need to protect, you extract the token from the request and, before sending any "private" data, check whether the supplied token is valid.
front-end: as soon as your application is loaded, send a request to the newly created REST endpoint and store the returned token locally (localStorage should be fine). Then, every time the UI needs to access "private" data, simply supply the token with the request.
The implementation should be pretty straightforward.

How to access multiple remote services in transaction-like manner

I have an endpoint responsible for creating a paid subscription. However, in order to create subscription I need to access multiple different services in succession:
1) create subscription with a token provided by front-end (generated by a direct call from front-end app to payment system) (Call to the payment system)
2) get Billing information to save in database (Call to the payment system)
3) save some of billing info (f_name, l_name) and provided shipping info (Call to the database)
4) subscribe customer to the mailing list (Call to the email service provider)
Any of these steps can fail because a service is unavailable, because of problems with the internet connection in the DC, or for any number of other reasons outside the developers' control. Is there any option to process all of this in a transaction-like manner to avoid partial completion? E.g., we create the subscription but don't write to the database.
I am using Node.js, if this helps.
Have a look at the Saga pattern for microservices. This could essentially be laid out as a service which you contact when you want to create a subscription. It knows every step involved and on top of that, also knows how to roll back every transaction, should any step fail.
Upon making a request, the service would just start doing all the necessary requests/queries and then either:
Return successfully
Rollback all transactions that have happened so far and return an error
This obviously relies on all of your services being able to revert to known good state.
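The rollback behavior described above can be sketched as a small orchestrator in Node.js. The step and compensation functions here are hypothetical stand-ins for the payment, database, and email calls:

```javascript
// Saga-style orchestration sketch: run steps in order; on failure,
// run the compensations of the completed steps in reverse order.
async function runSaga(steps) {
  const done = [];
  try {
    for (const step of steps) {
      await step.action();
      done.push(step);
    }
    return { ok: true };
  } catch (err) {
    for (const step of done.reverse()) {
      if (step.compensate) await step.compensate();
    }
    return { ok: false, error: err.message };
  }
}

// Usage: the second step fails, so the first step's compensation runs.
const log = [];
const steps = [
  {
    action: async () => log.push("create subscription"),
    compensate: async () => log.push("cancel subscription"),
  },
  {
    action: async () => { throw new Error("billing service down"); },
    compensate: async () => log.push("refund"),
  },
];
const result = runSaga(steps);
result.then((r) => console.log(r.ok, log.join(", ")));
// false create subscription, cancel subscription
```

Note the failed step's own compensation is not run, only those of steps that completed.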
Another approach would be to use two-phase/n-phase commits, but they may impose a significant performance penalty, which is undesirable for something user-facing.
You may want to read through this discussion on HackerNews where this problem is discussed in far more detail.

Custom logic app connector

We are creating a multi-tenant application. To allow users to create business logic, we want to use Logic Apps.
Therefore I want to create a web app which will expose the DocumentDB change feed.
When creating a logic app, you can choose between different out of the box connectors. How can we get ours included in the list? Is there any documentation on that?
The idea is to get the logic app running with every document insert.
To achieve this, I have two options: Polling triggers and Webhook triggers.
I prefer the polling trigger because it will be less work than implementing logic to handle all the subscribed URLs per tenant. Does anyone have concerns/suggestions about this approach?
The location header should become my continuation token from the DocumentDB change feed. Is that correct? The flow I have in mind:
1) Logic app calls my API the first time without a location header.
2) My API calls DocumentDB without a continuation token, which returns the docs one by one, because the max doc count is set to 1.
3) My API returns the first document retrieved and sets retry-after to 0 and location to the new continuation token it received. If no documents are found, the API returns the result as in step 5.
4) Logic app starts a new instance to handle the document and calls the API again with the continuation token in the header.
Steps 3 and 4 are repeated until all documents are processed. Because I am only processing one document per logic app instance, Azure should be able to scale for me automatically?
5) When all documents are processed, the API returns a 202 status code with the location header set to the latest continuation token and retry-after set to 15.
6) After 15 seconds, the logic app calls our API with the latest continuation token, which triggers the process again.
Is my solution feasible? What if I need to stop or clone the logic app configuration for some reason? How can I know what the latest continuation token was, or do I need to save my continuation tokens in some data store?
Yes, what you've described should be supported. You can use your own connector in a logic app by clicking the dropdown above the search box and selecting an API from API Management or App Services, as detailed here and here.
The continuation token can be preserved in the "trigger state" of the location header assuming you are using the 202 polling pattern above. So for example the header may be https://mydocdbconnector.azurewebsites.net/api/trigger?triggerstate={thisCouldBeTheContinuationToken} -- that way on subsequent polls the last continuation token is sent back to the trigger and can be used in the operation. Trigger state is preserved as long as the trigger remains unchanged in the definition (enabled/disabling/etc all preserve trigger state).
The only part I'm not clear on is your multi-tenant requirements. I assume you mean you want each of your users to be able to trigger on their own DocumentDB instance. The best-supported pattern for this today is a logic app per customer, each with its own trigger state and trigger. This could leverage a custom connector as well. This is the pattern used by services like Microsoft Flow, which is built on Logic Apps.
Let me know if that helps.
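The 202 polling contract discussed above could be sketched as follows. The getNextDocument feed function, route path, and triggerstate parameter name are assumptions for illustration; only the status-code/header convention reflects the Logic Apps polling pattern:

```javascript
// Sketch of a polling-trigger handler: return 200 + the document when one
// is available (retry-after 0 so Logic Apps polls again immediately), or
// 202 + retry-after 15 when the feed is drained. The continuation token
// rides in the location header as "trigger state".
function handleTrigger(continuationToken, getNextDocument) {
  const { doc, nextToken } = getNextDocument(continuationToken);
  const location = `/api/trigger?triggerstate=${encodeURIComponent(nextToken)}`;
  if (doc) {
    return { status: 200, headers: { location, "retry-after": "0" }, body: doc };
  }
  return { status: 202, headers: { location, "retry-after": "15" }, body: null };
}

// Hypothetical one-document change feed for demonstration.
const fakeFeed = (token) =>
  token === ""
    ? { doc: { id: "doc1" }, nextToken: "ct-1" }
    : { doc: null, nextToken: token };

console.log(handleTrigger("", fakeFeed).status);     // 200
console.log(handleTrigger("ct-1", fakeFeed).status); // 202
```

Since the token is round-tripped through the location header, the service itself stays stateless; persisting tokens in your own store is only needed if you must survive the trigger being deleted or redefined.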
