I am testing a web service where, based on different promo codes, I retrieve multiple offers per promo code. I have now parameterized the promo code, and I need to get the response time for each request. Where can I get these details?
Training. You can find out how to get response times by taking in-person training, by taking computer-based training, by going through the downloadable tutorial, or by reading the PDF version of the Virtual User Generator user's guide.
I want to create giveaways that require participants to follow the Twitter account of the giveaway creator.
My first idea was to use the Twitter API (endpoint: "/2/users/:id/followers"). This works fine for me; however, I always run into rate limits. The API allows me to send 15 requests every 15 minutes and returns a maximum of 1000 users per request. Since many accounts have more than 15,000 followers, and since many requests happen at the same time (many users want to participate in a giveaway), this solution is not suitable for me.
My second idea was to use web scraping instead (e.g. node-fetch). I was following along with this tutorial: However, doing so I always run into the issue that Twitter uses random strings to name their HTML elements. You can see in the picture that there is no defined class by which to grab the elements.
So my main question is: how can I access these elements?
[Screenshot: random follower of my Twitter account]
I also have a follow-up question regarding the effectiveness of this method. Assume I have multiple people who want to participate within a short amount of time (e.g. 10 people in 5 minutes), and they all need to follow a big Twitter account (e.g. 100k followers).
Is it efficient to scrape all 100k followers each time, or should I instead fetch the 100k followers once, save them to my database, and use my database to check each user later?
As a side note, I am using node.js and node-fetch; however, I have no problem switching frameworks. In addition, I think the element grabbing as well as the performance question should be universal.
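For reference, here is a rough sketch of the fetch-once-and-cache idea with node-fetch. The bearer token, the example user ID, and the in-memory Set (standing in for a real database table) are all assumptions:

```js
// Sketch: page through all followers once, cache the IDs, then check
// participants locally instead of hitting the API per participant.
import fetch from 'node-fetch';

const BEARER_TOKEN = process.env.TWITTER_BEARER_TOKEN; // assumed app credential

async function fetchAllFollowerIds(userId) {
  const ids = new Set(); // stand-in for a database table
  let nextToken;
  while (true) {
    const url = new URL(`https://api.twitter.com/2/users/${userId}/followers`);
    url.searchParams.set('max_results', '1000'); // API maximum per request
    if (nextToken) url.searchParams.set('pagination_token', nextToken);
    const res = await fetch(url.toString(), {
      headers: { Authorization: `Bearer ${BEARER_TOKEN}` },
    });
    if (res.status === 429) {
      // Rate limited: wait out the 15-minute window, then retry this page.
      await new Promise((resolve) => setTimeout(resolve, 15 * 60 * 1000));
      continue;
    }
    const body = await res.json();
    for (const user of body.data ?? []) ids.add(user.id);
    nextToken = body.meta?.next_token;
    if (!nextToken) break;
  }
  return ids;
}

// Checking a participant then becomes a local lookup, not an API call:
// const followers = await fetchAllFollowerIds('2244994945');
// console.log(followers.has(participantId));
```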
Thanks for your help :)
They're going to detect your server's excessive calls. There is a Twitter Developer Portal where you can request elevated access, which may raise the limits for you.
https://developer.twitter.com
I’m trying to understand the Microsoft Graph API’s rate limiting/throttling for the OneDrive/SharePoint and Excel endpoints. I know that they don’t publish these limits (see here), but I am hoping that if I provide some information about my use case, someone might be able to provide ballpark figures.
Here is some information about the use case:
There is a spreadsheet that is kept in OneDrive or SharePoint
This spreadsheet runs calculations
Users send in inputs and get back outputs from this spreadsheet
There are 200 users
Each user makes about 10 write requests (sending in inputs) and 10 read requests (getting back outputs) per day, i.e., 2,000 write requests and 2,000 read requests per day
Given the information above, is there any way to know if users will encounter the 429 ("Too many requests") or 503 ("Server Too Busy") error?
And if so:
How often?
Is there a strategy for dealing with this?
I'm ultimately trying to decide if this is an appropriate tool for an enterprise-grade app or if I should go with SpreadsheetWeb.
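On the strategy question: the mitigation Microsoft's throttling guidance gives is to honor the Retry-After header on 429/503 responses and retry with backoff. A minimal sketch, assuming node-fetch and an already-acquired access token:

```js
// Sketch: retry throttled Graph calls, honoring Retry-After when present.
import fetch from 'node-fetch';

async function graphFetch(url, options = {}, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, options);
    if (res.status !== 429 && res.status !== 503) return res;
    // Graph reports Retry-After in seconds on throttled responses; fall back
    // to exponential backoff if the header is missing.
    const retryAfter = Number(res.headers.get('retry-after')) || 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
  }
  throw new Error(`Still throttled after ${maxRetries} retries: ${url}`);
}
```

This doesn't answer how often you'd hit the limits, but it turns a 429/503 into a delay rather than a failure.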
I have been searching for the past few days for a way to retrieve the endpoint utterances and their scores for a dashboard I am working on. The problem is that I'm lost among the APIs: there seem to be many, but I cannot find the exact one that fits my need.
In the API documentation here, there is one that gets the example utterances. What I want to get is the actual endpoint utterances.
Can anyone point me to the API to use? Thanks in advance.
@Jeff, actually the answer was there in the API docs you linked, though perhaps not under the most obvious name.
You're looking for Download application query logs, which has this request URL:
https://[location].api.cognitive.microsoft.com/luis/api/v2.0/apps/{appId}/querylogs/[?includeResponse]
GET Download application query logs - Gets the query logs of the past month for the application.
The response is a 200 with a CSV file containing the query logs for the past month.
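Calling it is a plain GET with your authoring key in the Ocp-Apim-Subscription-Key header. A minimal sketch with node-fetch; the region, app ID, and output file name are placeholders:

```js
// Sketch: download the past month's query logs as a CSV file.
import fetch from 'node-fetch';
import fs from 'node:fs/promises';

const location = 'westus';      // placeholder: your authoring region
const appId = '<YOUR_APP_ID>';  // placeholder
const authoringKey = process.env.LUIS_AUTHORING_KEY;

const url = `https://${location}.api.cognitive.microsoft.com/luis/api/v2.0/apps/${appId}/querylogs`;
const res = await fetch(url, {
  headers: { 'Ocp-Apim-Subscription-Key': authoringKey },
});
await fs.writeFile('querylogs.csv', await res.text());
```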
Got this one working now; I got it from the forums.
https://<REGION>.api.cognitive.microsoft.com/luis/api/v2.0/apps/<APP_ID>/versions/0.1/intents/<INTENT_ID>/suggest?multiple-intents=true
I'm just a bit confused: I don't see what the INTENT_ID is used for here. I'm not sure whether this was intended or is a fault in the API design.
But anyway, it did the job: I got all the user utterances and their confidence scores.
Hope this helps someone.
I am trying to generate some "quick reply templates", i.e. possible replies based on the previous messages in a chat thread, using Api.ai/Dialogflow.
I have trained the api.ai agent to some extent, so it generates replies for only some selected queries. Now I want to enhance it to generate replies for more queries, but training an agent manually for a large number of queries is not practically possible. Is there any way to train the api.ai chatbot dynamically, either by analyzing the previous chat threads I already have stored in a database or by using the data from ongoing chats?
The users are sellers, so I assume they will talk about their products only; the questions will be somewhat similar in every chat thread.
Looks like there is now the ability to train via the API: https://dialogflow.com/docs/training, along with uploading text files with training lists.
You can add more Training Phrases using the POST and PUT API methods for the /intents endpoint.
Any changes made via the API to alter the agent's behavior initiate training in the same way as when you save an intent; this trains the agent with the changes delivered through the API.
There currently isn't an API for training.
If you have a log of the queries to your agent (via the API or your webhook), you could "train" your agent by using those logs to determine the most common unanswered queries (i.e., those matching the default fallback intent) and creating new intents and responses for them using Dialogflow's API: https://dialogflow.com/docs/reference/agent/intents#post_intents
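As a rough sketch, creating one of those intents through the v1 /intents endpoint linked above might look like the following. The payload field names follow the v1 intent format and should be double-checked against the reference; the developer access token comes from your agent's settings:

```js
// Sketch: create a new intent with one training phrase and one text response.
import fetch from 'node-fetch';

async function createIntent(devToken, question, answer) {
  const res = await fetch('https://api.dialogflow.com/v1/intents?v=20150910', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${devToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      name: question.slice(0, 50),                // intent name
      auto: true,                                 // enable ML matching
      userSays: [{ data: [{ text: question }] }], // training phrase
      responses: [{ messages: [{ type: 0, speech: answer }] }], // text reply
    }),
  });
  return res.json();
}
```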
I am trying to extract some data from this website, which refreshes every minute. I have researched web scraping and tried Chrome extensions, but none seem to work for me.
Some background information about the website: it is a website where people go to monitor bid prices for COE (Certificate of Entitlement, for cars in Singapore). Every alternate Wednesday, from 1430 to 1600, I have to manually copy and paste the data into an Excel spreadsheet before it refreshes every minute.
[Screenshot: Details for COE]
I have attached screenshots to illustrate further.
This is the website to scrape: https://www.onemotoring.com.sg/1m/coe/coeDetail.html
You can do this at very low cost with AWS Lambda and Node.js.
Create a Lambda function and trigger it on the cron schedule at which you want to crawl the website. You can use a library like
https://github.com/bda-research/node-crawler
to simplify crawling.
To get the exact nodes in the page, use server-side jQuery (e.g., cheerio) or any script that can extract elements from the crawled page.
Once you have the details, you can store them in DynamoDB, a NoSQL database with very low latency. An ODM like https://github.com/clarkie/dynogels lets you access DynamoDB with very little code.
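Putting it together, a sketch of the Lambda handler might look like this. The CSS selector, DynamoDB table name, and key schema are assumptions you would adapt to the actual page and your data model:

```js
// Sketch: crawl the COE page on a scheduled trigger, extract table cells with
// the server-side jQuery (cheerio) instance node-crawler exposes, and store a
// snapshot in DynamoDB.
const Crawler = require('crawler');
const AWS = require('aws-sdk');

const db = new AWS.DynamoDB.DocumentClient();

exports.handler = (event, context, callback) => {
  const c = new Crawler({
    callback: (error, res, done) => {
      if (error) { done(); return callback(error); }
      const $ = res.$; // server-side jQuery over the fetched page
      const cells = $('table td') // assumed selector; adapt to the page
        .map((i, el) => $(el).text().trim())
        .get();
      db.put(
        {
          TableName: 'CoeBids', // assumed table with scrapedAt as the key
          Item: { scrapedAt: new Date().toISOString(), cells },
        },
        (err) => { done(); callback(err, err ? undefined : 'stored'); }
      );
    },
  });
  c.queue('https://www.onemotoring.com.sg/1m/coe/coeDetail.html');
};
```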
Hope it helps.