I would like to get user Google Fit activity updates pushed to my server (via webhook) so that there is no need to make requests periodically.
Currently I have a cron job to fetch the data, but it has two problems: infrastructure complexity and the delay between data generation and data processing.
I have seen that there is a Google Reports API, but it does not support Google Fit.
https://developers.google.com/admin-sdk/reports/v1/guides/push
First of all, sorry for the English; it is not my mother tongue.
What I'm trying to do is the following:
1 - Get the Azure Advisor recommendations (cost-related) from the REST API
2 - Filter the recommendations according to their recommendationTypeId
3 - Post them into a channel
The problem is that, when posting, the Logic App fires several recommendations at once. I would like to pace this better, e.g. send each recommendation after some elapsed time.
I thought I could create a thread to post the results, but Logic Apps and a curl POST don't seem to allow this kind of operation, and I'm pretty new to this kind of thing.
Is there a way to break each recommendation out of the huge JSON and then post them with a time window between each one? Is there a better way to bring Advisor alerts into Slack so my cost team would see them there?
Thanks in advance
You need to enable chunking on the Logic App HTTP connector, which splits large data into chunks:
Open the settings of the HTTP connector.
Set Allow chunking to On.
Here is the Microsoft document which describes chunking in Logic Apps.
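For reference, in the Logic App code view the same toggle shows up on the HTTP action as a runtimeConfiguration block. A rough sketch (the action name and URI here are placeholders, not your actual workflow):

```json
"HTTP_Get_Advisor_Recommendations": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://management.azure.com/..."
  },
  "runtimeConfiguration": {
    "contentTransfer": {
      "transferMode": "Chunked"
    }
  }
}
```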
There is a one-year-old question, How can I retrieve through an API *Live Metrics* of Microsoft Application Insights, about whether it is possible to pull the Live Metrics data that Application Insights generates for an application through some API.
Right now I don't see anything live-related in the official documentation (https://dev.applicationinsights.io/reference), and the answer to that old question was also that there is no way to get them.
But maybe someone knows whether the Application Insights team's plans have changed this year and they are working on such an API?
It might be really useful to pull that data in real time through an API into your own alerting/metrics system, to process data from different microservices/applications and display it in an aggregated way in real time.
As an example, one could build something like OpServer has, but based on different applications and their Application Insights data.
As of right now, there is no way to get it.
Note: I work on the Application Insights team at Microsoft.
Live Metrics data is not persistently stored anywhere, and there is no API to retrieve it. The data is collected only while someone is actively on the Live Metrics portal page; the moment the browser window is closed, the data is gone as well.
If your goal is to get metrics and other telemetry in real time, you can do that by implementing your own ITelemetryProcessor. Most people use an ITelemetryProcessor to "filter" out unwanted telemetry, but that is not a rule: all telemetry passes through the telemetry processor chain, and you can choose to filter the data or do something else with it. In your case, you want to send it instantly to some real-time service. In fact, Live Metrics (internally known as QuickPulse) is implemented as a telemetry processor (https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/WEB/Src/PerformanceCollector/Perf.Shared/QuickPulseTelemetryProcessor.cs#L158).
General doc about TelemetryProcessor:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/api-filtering-sampling#create-a-telemetry-processor-c
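To make that concrete, here is a minimal C# sketch of a forwarding processor. The Action<ITelemetry> sink is a stand-in for whatever client pushes to your own real-time service:

```csharp
using System;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Forwards every telemetry item to a caller-supplied sink (e.g. your own
// real-time service), then passes it along the processor chain unchanged.
public class RealTimeForwardingProcessor : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;
    private readonly Action<ITelemetry> _forward;

    public RealTimeForwardingProcessor(ITelemetryProcessor next, Action<ITelemetry> forward)
    {
        _next = next;
        _forward = forward;
    }

    public void Process(ITelemetry item)
    {
        // Keep this fast and non-blocking so it doesn't slow the pipeline down.
        _forward(item);

        _next.Process(item); // continue the chain (filtering, sampling, etc.)
    }
}
```

You would hook it into the pipeline with TelemetryConfiguration.Active.TelemetryProcessorChainBuilder.Use(next => new RealTimeForwardingProcessor(next, item => { /* push to your service */ })) followed by .Build(), or via ApplicationInsights.config as described in the doc above.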
I am planning to create a mobile application for Android and iOS users. I think I will give Xamarin a try, since I will be alone on this project and I don't have a lot of time.
I want the mobile app on both platforms to get data from the API and then, if new data is available, notify the user with a notification.
How should the mobile app work in this kind of project? I mean, should I run a background service and check every x seconds/minutes with an HTTP request? In that case, which time interval? Should I use WebSockets instead in this case?
The app might be used by many people, so I would like to know the usual approach in this kind of project: getting changes very fast, without overloading the server with too many connections or anything else.
I'm confused about this and need some pointers; any related mobile application/server experience would be appreciated!
EDIT:
As suggested by a user, here is some additional info:
The API is homemade, RESTful, built in NodeJS and using JWT.
Each user should get messages from the server on their device ASAP, even when the app is in the background or closed.
Maybe in the future there will be a way for users to send messages to each other.
You have to implement push notifications.
It is quite easy to implement this in Xamarin: just send the push notification to the device, and in the notification-received callback send the API request to retrieve the updated data.
Here is the document for sending push notifications from a custom API.
https://learn.microsoft.com/en-us/appcenter/push/pushapi
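On the client side, a minimal Xamarin sketch using the App Center Push SDK that the linked API pairs with; ApiClient.RefreshAsync is a hypothetical helper standing in for your own REST call:

```csharp
using Microsoft.AppCenter;
using Microsoft.AppCenter.Push;

// In your app startup (e.g. App.xaml.cs OnStart), before AppCenter.Start:
Push.PushNotificationReceived += async (sender, e) =>
{
    // e.CustomData carries key/value pairs your server sets when it sends
    // the notification through the App Center Push API.
    if (e.CustomData != null && e.CustomData.ContainsKey("resource"))
    {
        // Hypothetical helper: fetch the updated data from your NodeJS API.
        await ApiClient.RefreshAsync(e.CustomData["resource"]);
    }
};

AppCenter.Start("ios={iOS app secret};android={Android app secret}", typeof(Push));
```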
I'm not a mobile developer, so take this with a grain of salt.
The answer to this really depends on what you're doing, which informs how often to check the API. If it's a messaging app, for example, you could have it check every couple of minutes to see if there are undelivered messages, then check more frequently for the next X minutes (to facilitate a conversation in real time).
If it's a GPS navigation app to be used while driving, you'd need much more frequent requests.
As for the API, that also depends on what type of API it is and the number of requests you can make to it. Is it a commercial API where you get x number of calls per hour? Is it an API that you built? Etc.
Basically, you need to give more information in order to get more specific answers.
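To make the "check every couple of minutes, then more frequently after activity" idea concrete, here is a minimal C# sketch; the interval values and the checkForUpdates delegate are placeholders to tune for your app:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class AdaptivePoller
{
    // Placeholder intervals: slow when idle, fast after recent activity.
    private static readonly TimeSpan IdleInterval = TimeSpan.FromMinutes(2);
    private static readonly TimeSpan ActiveInterval = TimeSpan.FromSeconds(15);
    private static readonly TimeSpan ActiveWindow = TimeSpan.FromMinutes(5);

    private DateTime _lastActivity = DateTime.MinValue;

    // checkForUpdates calls your API and returns true when new data arrived.
    public async Task RunAsync(Func<Task<bool>> checkForUpdates, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            if (await checkForUpdates())
                _lastActivity = DateTime.UtcNow;

            // Poll faster while a conversation seems to be in progress.
            bool active = DateTime.UtcNow - _lastActivity < ActiveWindow;
            await Task.Delay(active ? ActiveInterval : IdleInterval, ct);
        }
    }
}
```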
I am trying to generate some "quick reply templates", i.e. possible replies based on the previous messages in a chat thread, using Api.ai/Dialogflow.
I have trained the api.ai agent to some extent to generate replies for a few selected queries. Now I want to enhance it to generate replies for more queries, but training an agent manually for a large number of queries is not practically possible. Is there any way to train the api.ai chatbot dynamically by analysing the previous chat threads I already have stored in a DB, or by using the data from ongoing chats?
The users are sellers, so I assume they will only talk about their products; the questions will be somewhat similar in every chat thread.
Looks like there is now the ability to train via the API (https://dialogflow.com/docs/training), along with uploading text files with training lists.
You can add more training phrases using the POST and PUT methods of the /intents endpoint.
Any changes made via the API to alter the agent's behavior initiate training in the same way as saving an intent does, so the agent is trained with the changes delivered through the API.
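A minimal C# sketch of creating an intent with training phrases over HTTP; the endpoint, version parameter, and JSON field names are assumptions based on the v1 REST docs of that era, so check the current reference before relying on them:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class DialogflowTraining
{
    // Creates a new intent with a couple of training phrases (v1-style API).
    public static async Task CreateIntentAsync(string developerAccessToken)
    {
        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", developerAccessToken);

            // Assumed v1 intent body shape: userSays phrases plus a canned reply.
            var json = @"{
              ""name"": ""shipping-time"",
              ""auto"": true,
              ""userSays"": [
                { ""data"": [ { ""text"": ""when will my order ship"" } ] },
                { ""data"": [ { ""text"": ""how long does delivery take"" } ] }
              ],
              ""responses"": [
                { ""messages"": [ { ""type"": 0, ""speech"": ""Orders usually ship within 2 business days."" } ] }
              ]
            }";

            var response = await http.PostAsync(
                "https://api.dialogflow.com/v1/intents?v=20150910",
                new StringContent(json, Encoding.UTF8, "application/json"));
            response.EnsureSuccessStatusCode();
        }
    }
}
```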
There currently isn't an API for training.
If you have a log of the queries sent to your agent (via the API or your webhook), you could "train" your agent by using those logs to determine the most common unanswered queries: look at how many queries matched the default fallback intent, and create new intents and responses for those queries using Dialogflow's API: https://dialogflow.com/docs/reference/agent/intents#post_intents
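The mining step of that idea is plain log processing. A minimal C# sketch, assuming you log each query together with the intent it matched (LoggedQuery is a hypothetical type for your own log entries):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical shape of one logged chat query.
public class LoggedQuery
{
    public string Text { get; set; }
    public string MatchedIntent { get; set; }
}

public static class FallbackMiner
{
    // Returns the most frequent queries that fell through to the fallback
    // intent -- the strongest candidates to become new intents.
    public static IEnumerable<(string Query, int Count)> TopUnanswered(
        IEnumerable<LoggedQuery> log, int take = 20)
    {
        return log
            .Where(q => q.MatchedIntent == "Default Fallback Intent")
            .GroupBy(q => q.Text.Trim().ToLowerInvariant())
            .Select(g => (Query: g.Key, Count: g.Count()))
            .OrderByDescending(x => x.Count)
            .Take(take);
    }
}
```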
I need a way to implement error logging and to give admins a way to retry any failure that occurs within a SuiteScript.
Here are my thoughts on the implementation:
Let's say that for a RESTlet I log the incoming data (or the incoming data in any User Event script) to a text file, along with its status as success or failure. Later, a scheduled script would process that text file and send those errors to my .Net API, where I can provide a way for admins to retry.
Could anyone suggest how this is normally done in NetSuite projects?
For similar systems, I typically advise creating Custom Records. Your custom record can have a field to store the raw data (JSON, XML, etc.) as well as a Status (Succeeded, Failed, Retry, etc.). For retry mechanisms, you could have a User Event on the Custom Record that retries immediately upon creation of the record, and then, if that fails, a Map/Reduce that runs on a regular schedule to clean things up.
If the native Execution Logs aren't providing enough functionality for you in that respect, you can add a Custom Record for "logging" as well, but I'd suggest trying the native logs first. The Script Execution Log UI provides reasonable searching/filtering capabilities.