Does UPS offer any real-time tracking API? - webhooks

I am using UPS APIs to get rates and create shipments, and now I'd like to track shipments in real time. I know there are a lot of services that provide tracking aggregation for different couriers, including UPS, via webhooks, but I am looking for a "native" UPS API for real-time tracking that would not require me to keep polling for updates.
I went over the UPS API docs and could not find anything. Is there an API in UPS that offers webhook-type notifications or something similar? If not, how can services like EasyPost, ParcelMonitor, TrackingMore and others offer such functionality?

I'd like to track shipments in real time
"Real-time tracking" is a misnomer in the shipping technology industry. EasyPost does provide webhook events to notify you of updates, but those updates will never come in "real-time" and no tracking service provider should ever make such a claim. There will always be some kind of delay between when the carrier updates their system and when the updates are able to flow through providers like EasyPost. The delay may be short, but any delay at all means that "real-time" goes out the window.
Being an EasyPost employee myself, I have to disclose my bias and say that I believe EasyPost can help you get updates as promptly as possible for UPS shipments.
The nice thing about setting up webhooks with EasyPost is that you don't have to build your own logic to repeatedly poll for new tracking updates. This saves network bandwidth, reduces the complexity of your code, and can get you updates sooner than polling on your own intermittent schedule.
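To illustrate the receiving side, here is a minimal webhook endpoint sketch using Flask. The route path is made up, and the payload fields ("description", "result", "tracking_code", "status") follow EasyPost's documented event format at the time of writing, so verify them against the current docs:

```python
# Minimal webhook receiver sketch (Flask). The route and payload field
# names are assumptions based on EasyPost's event format; verify against
# the current EasyPost documentation.
from flask import Flask, request

app = Flask(__name__)

@app.route("/easypost-webhook", methods=["POST"])
def easypost_webhook():
    event = request.get_json(force=True)
    if event.get("description") == "tracker.updated":
        tracker = event.get("result", {})
        # React to the pushed status instead of polling for it.
        print(tracker.get("tracking_code"), tracker.get("status"))
    # Acknowledge quickly so the provider doesn't re-deliver the event.
    return "", 200
```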

Related

Azure Functions - Concurrency Issue

I'm planning a project and working through all the potential issues I might face. One that I keep running into, and which might be specific to my project, is concurrency. Let me explain the scenario:
An HTTP-triggered Azure Function that does the following:
Gets the client's available credit; if it is zero, auto-charges the client's card.
Deducts credit from the client for the request.
Processes the request and returns to the client.
Where I see an issue is getting the available credit and auto-charging the card. Because multiple instances of the function can run at once, I might auto-charge the card multiple times, and on top of that, reading and deducting the credit will be affected.
I want the scaling of Azure Functions but can't figure out a way around these concurrency issues. Any insight or pointers in the right direction would be very much appreciated.
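A common way to make the read-and-deduct step safe under concurrent instances is to replace the read-then-write with a single atomic conditional update in the data store. A minimal sketch, assuming a SQL store with hypothetical table and column names:

```python
# Sketch: avoid read-then-write races by making the deduction atomic.
# Table and column names (accounts, credit, client_id) are hypothetical.
import sqlite3

def deduct_credit(conn: sqlite3.Connection, client_id: int, amount: int) -> bool:
    """Atomically deduct `amount` if enough credit remains."""
    cur = conn.execute(
        "UPDATE accounts SET credit = credit - ? "
        "WHERE client_id = ? AND credit >= ?",
        (amount, client_id, amount),
    )
    conn.commit()
    # rowcount is 0 if another instance already spent the credit, so only
    # one instance ever "wins" and proceeds to charge the card.
    return cur.rowcount == 1
```

The same idea works with an ETag-guarded conditional replace in Cosmos DB, and the card charge itself should additionally be made idempotent (e.g. with an idempotency key) so a retry can never bill twice.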

How to deal with an API that rate-limits requests?

For a small app, rate limits are no problem.
But for an app with traffic, you can hit the limits easily.
HTTP is request-response driven. Just because your backend is stuck at a rate limit, you can't really hold off sending the response until the limit allows you to resume making your API calls.
What do you do?
I can think of several scenarios:
Wait it out: it sucks, but sometimes it's an easy fix, as you don't need to do anything.
Queue it: this is a lot of work compared to just making an API call. You first have to store the request in a database, then have a background task go through the database and do the work. Also, the user would be told "it is processing", not "it's done".
Use a lot of APIs: very hacky... and a lot of trouble to manage. Say you are using Amazon; now you would have to create, verify, and validate something like 10 accounts. That's not even possible where you need to verify with, say, a domain name, since Amazon would know account abc already owns it.
To expand on what your queueing options are:
Unless you can design the problem of hitting this rate limit out of existence, as @Hammerbot walks through, I would go with some implementation of a queue. The solution can scale in complexity and robustness according to the loads you're facing and how many rate-limited APIs you're dealing with.
Recommended
Use a library to take care of this for you. Node-rate-limiter looks promising. You would still have to decide how to handle user interaction (make them wait, or write to a db/cache service and notify them later).
"Simplest case" - not recommended
You can implement a minimally functioning queue and back it with a database or cache. I've done this before and it was fine, initially. Just remember you'll need to implement your own retry logic and worry about things like queue starvation**; see the sketch below. Basically, the caveats of rolling your own <insert thing whose implementation someone has already worried about> should be taken into consideration.
** For example, your calls keep failing for some reason, and all of a sudden your background process is endlessly retrying large numbers of failing queue work elements and your app runs out of memory.
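To make the caveat concrete, here is a sketch of such a minimal DB-backed queue worker with a retry cap. The work_queue schema and the do_api_call helper are hypothetical:

```python
# Minimal DB-backed queue sketch with a retry cap, so a burst of failures
# can't grow into the endless-retry scenario in the footnote above.
# The work_queue schema and do_api_call() are hypothetical.
import sqlite3

MAX_ATTEMPTS = 5

def process_queue(conn: sqlite3.Connection, do_api_call) -> None:
    rows = conn.execute(
        "SELECT id, payload FROM work_queue "
        "WHERE state = 'pending' AND attempts < ? ORDER BY id LIMIT 10",
        (MAX_ATTEMPTS,),
    ).fetchall()
    for row_id, payload in rows:
        try:
            do_api_call(payload)
            conn.execute(
                "UPDATE work_queue SET state = 'done' WHERE id = ?", (row_id,)
            )
        except Exception:
            # Count the failure; items past MAX_ATTEMPTS stop being retried.
            conn.execute(
                "UPDATE work_queue SET attempts = attempts + 1 WHERE id = ?",
                (row_id,),
            )
        conn.commit()
```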
Complex case:
You have a bunch of API calls that all get rate-limited and those calls are all made at volumes that make you start considering decoupling your architecture so that your user-facing app doesn't have to worry about handling this asynchronous background processing.
High-level architecture:
Your user-facing server pushes work units of different types onto different queues. Each queue corresponds to a differently rate-limited process (e.g. 10 queries per hour, 1,000 queries per day). You then have a "rate-limit service" that acts as a gate to consuming work units off the different queues: horizontally distributed workers consume items from a queue only when the rate-limit service says they can. The results of these workers can then be written to a database, and a background process can notify your users of the result of the asynchronous work you had to perform.
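To make the gate concrete, here is a minimal token-bucket sketch of the check such a rate-limit service performs before releasing a work unit. It is in-process for illustration only; a real deployment would keep the bucket state in shared storage such as Redis so all workers see the same counts:

```python
# Minimal token-bucket sketch of the rate-limit "gate" described above.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: float):
        self.rate = rate_per_sec      # refill rate
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to the time elapsed since the last check.
        self.tokens = min(
            self.capacity, self.tokens + (now - self.updated) * self.rate
        )
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # the worker may consume one work unit
        return False      # leave the item on the queue for now

# e.g. one bucket per rate-limited API: 1000 queries/day, bursts of up to 50
daily_gate = TokenBucket(rate_per_sec=1000 / 86400, capacity=50)
```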
Of course, in this case you're wading into a whole world of infrastructure concerns.
For further reading, you could look at Lyft's rate-limiting service (which I think implements the token-bucket algorithm to handle rate limiting). You could use Amazon's Simple Queue Service (SQS) for the queues and AWS Lambda as the queue consumers.
There are two reasons why rate limits may cause you problems.
Chronic (that is, a sustained situation): you are hitting rate limits because your sustained demand exceeds your allowance.
In this case, consider a local cache, so you don't ask for the same thing twice. Hopefully the API you are using has a reliable "last-modified" date so you can detect when your cache is stale. With this approach, your API calling is to refresh your cache, and you serve requests from your cache.
If that doesn't help, you need higher rate limits.
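A minimal sketch of that cache-refresh pattern, using the requests library and an If-Modified-Since conditional GET; it assumes the upstream API sends Last-Modified and answers 304 when nothing changed, which not every API does:

```python
# Sketch: serve from a local cache; call the API only to refresh it.
# Assumes the API supports Last-Modified / If-Modified-Since with 304s.
import requests

_cache = {}  # url -> {"last_modified": str, "body": bytes}

def fetch(url: str) -> bytes:
    headers = {}
    entry = _cache.get(url)
    if entry:
        headers["If-Modified-Since"] = entry["last_modified"]
    resp = requests.get(url, headers=headers, timeout=10)
    if resp.status_code == 304 and entry:
        return entry["body"]  # cache is still fresh; reuse the stored body
    resp.raise_for_status()
    _cache[url] = {
        "last_modified": resp.headers.get("Last-Modified", ""),
        "body": resp.content,
    }
    return resp.content
```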
Acute: your application makes bursts of calls that exceed the rate limit, but on average your demand is under the limit, so you have a short-term problem. I have settled on a brute-force solution for this ("shoot first, ask permission later"): I burst until I hit the rate limit, then I use retry logic, which is easy as my preferred tool is Python, which supports this well. The returned error is trapped and retry handling takes over. I think every mature library has something like this.
https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html
The default retry logic is to back off in increasingly big steps of time.
This has a starvation risk, I think. If multiple clients use the same API, they share the same rate limit as a pool. On your nth retry, your backoff may be so long that newer clients with shorter backoff times steal your slots: by the time your long backoff expires, the rate limit has already been consumed by a younger competitor, so you now retry even longer, making the problem worse. At the limit, this is just the chronic situation again: the real problem is that your total rate limit is insufficient; starvation just means you aren't sharing it fairly among jobs. An improvement is a less naive algorithm; it's the same contention problem you see with locking in computer science (introducing randomisation is a big improvement). Once again, a mature library will be aware of this and should help with built-in retry options.
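For example, urllib3's Retry helper (mounted on a requests session here) gives you exponential backoff in a few lines; the specific retry counts and backoff factor below are illustrative:

```python
# Exponential-backoff retries for rate-limited responses, using urllib3's
# Retry via a requests adapter. The values shown are illustrative.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util import Retry

retry = Retry(
    total=5,                      # give up after 5 attempts
    backoff_factor=1,             # sleeps grow exponentially between tries
    status_forcelist=[429, 503],  # retry rate-limit and overload responses
    allowed_methods=["GET"],      # only retry idempotent calls
)
session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry))

resp = session.get("https://api.example.com/resource", timeout=10)
```

Note that Retry also honors the Retry-After header by default, which is the polite way to wait out an API that tells you exactly when to come back.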
I think that this depends on which API you want to call and for what data.
For example, Facebook limits API calls to 200 requests per hour per user. So if your app grows and you are using their OAuth implementation correctly, you shouldn't be limited here.
Now, what data do you need? Do you really need to make all these calls? Can the information you fetch be stored on your server?
Let's imagine that you need to display an Instagram feed on a website. At each visitor request, you reach Instagram to get the pictures you need. When your app grows, you hit the API limit because you have more visitors than the Instagram API allows. In this case, you should definitely store the data on your server once per hour and let your users reach your database rather than Instagram's.
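As a sketch of that approach, a scheduled job can refresh the stored feed once per hour while visitor requests only ever read the local copy. The feed URL and the in-memory store below are placeholders for your real API and database:

```python
# Sketch: refresh third-party data on a schedule instead of per visitor.
# FEED_URL and the in-memory store are placeholders.
import json, threading, time
import requests

FEED_URL = "https://api.example.com/feed"  # hypothetical endpoint
_store = {"feed": []}                      # stand-in for your database

def refresh_feed_forever(interval_sec: int = 3600) -> None:
    while True:
        try:
            resp = requests.get(FEED_URL, timeout=10)
            resp.raise_for_status()
            _store["feed"] = resp.json()   # one API call per hour, total
        except requests.RequestException:
            pass                           # keep serving the stale copy
        time.sleep(interval_sec)

threading.Thread(target=refresh_feed_forever, daemon=True).start()

def handle_visitor_request() -> str:
    # Visitors read the local copy; their traffic never touches the API.
    return json.dumps(_store["feed"])
```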
Now let's say that you need specific information for every user at each request. Isn't it possible to let each user handle their own connection to the API? Either by implementing the API's OAuth 2 flow, or by asking users for their API credentials (not very secure, I think...)?
Finally, if you really can't change the way you are working now, I don't see any options other than the ones you listed here.
EDIT: And finally, as @Eric Stein stated in his comment, some APIs allow you to raise your API limit by paying (a lot of SaaS companies do that), so if your app grows, you should be able to afford to pay for those services (they are delivering value to you; it's fair to pay them back).

What is the actual difference between Azure Notification Hubs' Telemetry options?

While researching Azure Notification Hubs, I saw there are two Telemetry options available (source):
"Limited"
"Rich"
I have found only very brief descriptions on the Pricing and FAQ pages, which is not enough information to decide whether I need the "Rich" telemetry or whether the "Limited" telemetry is enough. Additionally, those descriptions only talk about the "Rich" option:
Standard namespaces have access to Per Message Telemetry and Push Notification Services Feedback
Rich telemetry: You can use Notification Hubs Per Message Telemetry to track any push requests and Platform Notification System Feedback for debugging.
Also, a tweet asking @AzureSupport for help only led to the FAQ page, and eventually they asked me to post this very question on SO.
The only option left besides asking here is to actually try it out, but that would incur a monthly fee and extra effort.
The main difference between the two is that "Limited" gives you access to counts of various events (registrations, sends, etc.); pretty much everything you see as graphs on the Notification Hubs blades in the Azure portal.
"Rich" (or Per Message Telemetry) gives you access to detailed information about every single push: things like feedback from PNS and many other things. You can think about it as if you were to send requests directly to PNS yourself and log pretty much any meaningful information about those.
Let me know in the comments if I can clarify.
I finally found a useful MSDN page, which has some answers about what telemetry is provided, how, and to whom. It says "This API is only available for Standard tier notification hubs", which means Rich telemetry only, not Limited.
If the PNS platform involved supports it, then Success means the notification was delivered to the device. Oddly, despite copious other error codes, there seems to be none for "accepted by the PNS, but not yet/failed to deliver to the device."
It provides counts of outcomes, so if you want per-device results, you'll have to send to only one device at a time.

Instagram API Subscribe inconsistently working

I have experienced significant variability when using the Instagram subscription API. For the most part, the API does not post updates to my endpoint as specified during the subscribe initiation. I believe the subscription is configured correctly, as updates from my personal account are received.
There are reports across the web of significant delays. However, in my experience, accounts that work do so within seconds, but in most instances no subscription messages are ever received.
There has also been discussion on the web about queuing of updates sent through the subscription API. That would make some sense; however, a queue would suggest that updates would eventually be received.
I have requested basic permissions, which is sufficient to request public media from each registered account. Yet I have a gut feeling that these permissions could be the problem, so I have started the process of requesting public_content.
At this stage there seems to be a number of developers experiencing similar issues, yet no resolutions.
Has anybody been able to resolve this issue?
I'm subscribed to aspect=media object=user and experiencing a similar issue.
For some users, I'm notified 95% of the time. For other users, I've never been notified of a single post.
In this post nithinisreddy mentions that the data is being "sampled". I think this is the reason. Hopefully it improves after the tags/locations subscriptions are deprecated.

Maximo Control Desk (7.5.1.2) - Measuring business hours spent in a specific status

We're transitioning to a managed service provider for our IT service desk and deskside and we're working out the details of their SLAs. Many of the SLAs are based on ticket status. An example of this is the following:
"Measures the amount of time it takes to assess, schedule, test, and package application packages before they are available for User Acceptance Testing."
My first thought was to try using SLAs to measure this, as they neatly tie together calendars and priorities, but I'm having a really hard time finding any information about how I could do this.
Now I'm looking into using the TKSTATUS.STATUSTRACKING attribute on tickets, but I believe this just tracks straight 24/7 elapsed time rather than taking any calendars into consideration.
Has anyone tried this before? Any suggestions?
We are in a similar process, but we are measuring vendors' time on site for our work orders. Instead of using SLAs, we have different statuses that mark different events. Then, when we want to know how long an event took, we look at the time between status changes in the WOSTATUS table.
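As an illustration of that approach, here is a sketch that computes the elapsed time between two status changes per work order from the WOSTATUS history table. The status values are examples, the column names follow common Maximo conventions but should be verified against your schema, and the result is raw wall-clock time, so any business-hours calendar math would still sit on top of this:

```python
# Sketch: elapsed time between two status changes per work order, read
# from Maximo's WOSTATUS history table. Status values are examples, and
# this measures raw elapsed time, not calendar-aware business hours.
from datetime import datetime

QUERY = """
    SELECT wonum, status, changedate
    FROM wostatus
    WHERE status IN ('INPRG', 'COMP')
    ORDER BY wonum, changedate
"""

def time_in_status(conn) -> dict:
    # `conn` is any DB connection whose execute() yields rows (sqlite3-style).
    started = {}  # wonum -> datetime when work began (INPRG)
    elapsed = {}  # wonum -> timedelta from INPRG to COMP
    for wonum, status, changedate in conn.execute(QUERY):
        when = datetime.fromisoformat(str(changedate))
        if status == "INPRG":
            started[wonum] = when
        elif status == "COMP" and wonum in started:
            elapsed[wonum] = when - started[wonum]
    return elapsed
```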
