Gatling active users drop to zero during longer runs - performance-testing

When I run my Gatling script for a longer duration, say more than an hour, I see the active user count drop to zero a few minutes into the test. I have tried atOnceUsers(nbUsers) and constantUsersPerSec(rate) during(duration), but the end result looks like the one below.
I am not able to figure out what is causing this, because if the active user count really dropped to zero, then logically there should be no requests going out. One additional piece of info: all my requests complete within the 200 ms mark.
Any help would be appreciated.

This looks like an error either in the collected statistics or in the report generation. During execution Gatling generates a simulation.log file which contains data about each user and request. You can check there whether new users stopped being generated after some time; but if requests were still being executed, I would bet that only the reports are wrong. Gatling uses Highcharts (an external library) to generate reports, and in a few versions there were issues with the plots, e.g. some series were not drawn. Maybe try an older (or newer) version of gatling-highcharts.
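If you want to check the raw data rather than the HTML report, a quick script can tally active users straight from simulation.log. This is a rough sketch, not an official Gatling tool; the log's column layout varies between Gatling versions, so the column indices below are assumptions you may need to adjust:

    // tallyActiveUsers.ts — count concurrently active users per second
    // from a Gatling simulation.log. Assumes tab-separated USER records
    // carrying a START/END marker and an epoch-millisecond timestamp.
    import { readFileSync } from "fs";

    const deltas = new Map<number, number>(); // second bucket -> net user change

    for (const line of readFileSync("simulation.log", "utf8").split("\n")) {
      const cols = line.split("\t");
      if (cols[0] !== "USER") continue;
      const event = cols[3];      // "START" or "END" (index is version-dependent)
      const ts = Number(cols[4]); // epoch millis (index is version-dependent)
      if (!Number.isFinite(ts)) continue;
      const bucket = Math.floor(ts / 1000);
      if (event === "START") deltas.set(bucket, (deltas.get(bucket) ?? 0) + 1);
      else if (event === "END") deltas.set(bucket, (deltas.get(bucket) ?? 0) - 1);
    }

    // Walk the buckets in order, accumulating the running active-user count.
    let active = 0;
    for (const bucket of [...deltas.keys()].sort((a, b) => a - b)) {
      active += deltas.get(bucket)!;
      console.log(new Date(bucket * 1000).toISOString(), active);
    }

If this count stays above zero while the report's chart drops to zero, the data is fine and only the report rendering is at fault.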

Related

Persistent Spotify 429 errors - with ridiculous retry-after suggestion of 76,000s (about 21hr)

I am working on an application that uses the Spotify Web API to build and maintain playlists for the user based on a given recipe (just a JSON that basically represents a logic scheme). Currently the application is in development mode. I use delays between each API call I make, currently about 400ms, and I also had delays of 7.5s whenever I got the occasional 429 error (too many requests).
Anyway, I recently made it so that all of the playlist recipes get rebuilt in an infinite loop. So the process is always running and making API calls about every 100ms, in order to keep all of the playlists up to date with the recipes. However, after letting this loop run for about 10 minutes, I started persistently getting 429s, even after retrying after 7.5s and longer.
Apparently the 429 responses contain a header called 'retry-after' which is how long Spotify suggests waiting before making another call (as I said, before I was just using a fixed 7.5s delay on 429s). I am seeing that the value I am receiving for 'retry-after' is on the order of about 76,000s (21 hours).
But I thought that the rate limits are enforced over a 30s window...
(see https://developer.spotify.com/documentation/web-api/guides/rate-limits/) So why is my 'retry-after' header so high?
This is mostly a design philosophy question, so I think the code itself is mostly irrelevant, but if you'd like to take a look it's available here: https://github.com/jakefoglia/Smart-Playlist-Manager
site/SPM-core/maintainer.js : contains the 'infinite loop'
site/SPM-core/spotify_api_hook.js : contains most of the API calls
The 30s window is presented in the documentation only as an example, not as a description of how the API actually works. As you correctly say, the Retry-After header (whose value is in seconds) is all the information you need to decide how long to wait before making the next call.
Each time your app "violates" the rate limit by making an early request, it gets "punished" with an increased delay period, and since the app apparently never consulted the header and repeatedly violated the limit, the delay grew this high. This did not result in a shutdown, blocking, rejection, or anything similar, because the header only suggests the duration of a delay rather than enforcing it.
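In practice that means reading the header on every 429 and sleeping at least that long before retrying. A minimal sketch, assuming Node 18+ with the global fetch API (spotifyGet is a hypothetical helper, not part of the questioner's code):

    // Retry-After-aware GET: on a 429, wait however many seconds Spotify
    // asks for (plus a small buffer) instead of a fixed 7.5s delay.
    async function spotifyGet(url: string, token: string): Promise<unknown> {
      while (true) {
        const res = await fetch(url, {
          headers: { Authorization: `Bearer ${token}` },
        });
        if (res.status !== 429) return res.json();
        const retryAfter = Number(res.headers.get("retry-after") ?? "5");
        await new Promise((resolve) => setTimeout(resolve, (retryAfter + 1) * 1000));
      }
    }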

Microsoft Flow with File Created Action is not triggered all the time

I have a OneDrive-synced local folder, and files added to this folder are synced to a SharePoint site. I also have a Flow that gets triggered for every file added.
The detailed article about what I am achieving here can be found here.
The problem is that it is not triggered every time. Let's say I added 100 files and the Flow triggered only 78 times. Is there any limitation on how many times a Flow can run in a given timeframe? Has anyone else faced this issue? Any help is really appreciated.
Finally, after spending a few hours, I got it working with 120 files at the same time. The flow runs smoothly and efficiently now. Here is what I did.
Click on the three dots on your trigger in the flow, and then click on settings.
On the settings screen, enable Split On (without this my Flow was not getting triggered) and supply the array value; clicking the array dropdown will show you the matching value. Then turn on Concurrency Control and set the Degree of Parallelism to the maximum (50 as of now).
According to Microsoft:
Concurrency Control limits the number of concurrent runs of the flow; leave it off to run as many as possible at the same time. Concurrency control changes the way new runs are queued. It cannot be undone once enabled.
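For reference, these two settings map to the splitOn and runtimeConfiguration properties on the trigger in the flow's underlying Logic Apps-style definition. The snippet below is an illustrative sketch, not exported from a real flow:

    "triggers": {
      "When_a_file_is_created": {
        "splitOn": "@triggerBody()?['value']",
        "runtimeConfiguration": {
          "concurrency": { "runs": 50 }
        }
      }
    }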

Execute a particular function every time the date changes in the user's local time

I am saving a counter number in user storage.
I want to provide some content to the user which changes daily using this counter.
So every time the counter increases by 1 the content will change.
The problem is the timezone difference.
Is there any way to run a function daily which will increase this counter by 1? I could use setInterval(), which is built into Node.js, but that won't be an accurate "daily" update for all users.
User storage is only available to you as a developer when the Action is active. This data is not available once the Action is closed, so you wouldn't be able to asynchronously update the field. If you do want asynchronous access, I'd suggest using an external database and only storing the database row key in the user's userStorage. That way you can access the data and modify it whenever you want.
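A minimal sketch of that pattern with the actions-on-google Node.js client; createUserRow and loadUserRow are hypothetical database helpers you would implement against whatever store you choose:

    import { dialogflow } from "actions-on-google";

    // Hypothetical database helpers — implement against your own store.
    declare function createUserRow(data: { counter: number }): Promise<string>;
    declare function loadUserRow(key: string): Promise<{ counter: number }>;

    const app = dialogflow<{}, { rowKey?: string }>();

    app.intent("Default Welcome Intent", async (conv) => {
      // Keep only the row key in userStorage; the row itself lives in a
      // database that a scheduled job can update while the Action is closed.
      if (!conv.user.storage.rowKey) {
        conv.user.storage.rowKey = await createUserRow({ counter: 0 });
      }
      const row = await loadUserRow(conv.user.storage.rowKey);
      conv.ask(`Today's content is item #${row.counter}.`);
    });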
The setInterval method will run a function periodically, but it may not work the way you want. It only runs the function while the runtime is active, and a lot of services will shut down a runtime after a period of inactivity. Cloud Functions, for example, spin up on demand and shut down when not in use. Additionally, Cloud Functions can run in several parallel instances, which would execute a setInterval callback several times in parallel and increment the counter more times than you want.
Using a dedicated Cron service would help reduce the number of simultaneous executions while also ensuring it runs when you want.
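For example, with the node-cron package (an assumption here; a hosted scheduler like Cloud Scheduler hitting an HTTP endpoint works the same way), the increment runs exactly once per day in a single process:

    import cron from "node-cron";

    // Hypothetical helper that bumps every user's counter in the database.
    declare function incrementAllCounters(): Promise<void>;

    // "0 4 * * *" = every day at 04:00 server time.
    cron.schedule("0 4 * * *", async () => {
      await incrementAllCounters();
    });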
You are unable to directly access the user's timezone within the Action, meaning you won't be able to determine the end of a day. You can get the content to change every day, but it'll have some sort of offset. To get around this, you could have several cron jobs which run for different segments of users.
Using the conv.user.locale field, you can derive the user's language. en-US is generally going to be American users, who generally live in the US. While this could produce odd behavior for travelers, it lets you bucket users into a particular period of execution. If you run the task overnight, say at 1 AM or 4 AM, users will probably be unaware of the exact time and just know that the content updates overnight.
You could use the location helper to get the user's location more precisely. This may be a bit unnecessary, but you could use that value to determine their timezone and then derive that user's "midnight" to put in the correct Cron bucket.
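A sketch of that bucketing; the locale-to-hour table is an illustrative assumption (and ignores daylight saving), not a real API:

    // Map a locale to the UTC hour at which its cron bucket should run,
    // aiming for roughly 1-4 AM local time in that locale's main region.
    const bucketHourUtc: Record<string, number> = {
      "en-US": 9,  // ~1 AM Pacific / 4 AM Eastern
      "en-GB": 2,  // ~2 AM London
      "ja-JP": 17, // ~2 AM Tokyo (which falls on the previous UTC day)
    };

    function cronBucketFor(locale: string): number {
      return bucketHourUtc[locale] ?? 0; // default bucket: midnight UTC
    }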

Cirqus: "Expected an event with sequence number 0." exception

Every once in a while I receive exceptions in Cirqus when trying to process commands. It happens with different types of commands, but it always happens with this specific aggregate root type (let's say it's a registration form). We haven't deleted events or messed with the Events table in any way, so I'm wondering what else can cause the issue.
The exact (but anonymized) error message is: Tried to apply event with sequence number 12 to aggregate root of type RegistrationForm with ID d863ac79-6bc0-480d-9d83-30b7696e7ea1 with current sequence number -1. Expected an event with sequence number 0.
For example, to debug the latest instance of the exception, I queried the database for this aggregate ID and got 37 events back. I checked the sequence numbers and they seemed correct. I also checked that the global sequence numbers were at least in chronological order. Then I checked whether the "meta" column had a different global sequence number than the record itself, but that also checked out OK.
What I find most confusing is that other registration forms are able to go through. Looking at our logs, there's no pattern I can identify, and it only happens about 3-5% of the time.
I guess what I'm wondering is: what can cause this issue? How can I debug it? And how can I prevent it from happening in the future?
System specifics: We're running under .net 4.5, using Cirqus 0.63.12 (and then also tested on 0.66.4), using Postgres 9.4 as the database (and using v0.63.12 of the Cirqus.Postgres package).
I found the issue! The PostgreSQL event store's SQL code was missing an ORDER BY clause, so in some cases my events were returned out of order. I submitted this pull request as a proposed fix: https://github.com/d60/Cirqus/pull/75
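For anyone hitting the same symptom: the general shape of the fix is to make the event-loading query order explicit. Table and column names below are illustrative, not Cirqus's actual schema:

    -- Without ORDER BY, PostgreSQL may return rows in any order it likes.
    SELECT seq_no, global_seq_no, data
    FROM events
    WHERE agg_id = @aggregateRootId
    ORDER BY seq_no ASC;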

NCA R12 with LoadRunner 12.02 - nca_get_top_window returns NULL

The connection is successfully established by nca_connect_server(), but when I try to capture the currently open window using nca_get_top_window(), it returns NULL. Because of this, all subsequent requests fail.
It depends on how you obtained your script: whether it was recorded or written manually.
If the script was written manually, there is no guarantee that it can be replayed, since the sequence of API calls (and/or their parameters) may not be valid. If the script was recorded, there might be a missed correlation or something similar. A common way to spot the issue is to compare recording and replay behavior (by comparing the log files from these two stages; make sure you use the fully extended logging level) to find out what goes wrong on replay and why, and how it diverges from the recorded activity.
