On one Node.js VM, I have 6 instances. When I hit all 6 instances separately and monitor their console logs together, I see a flurry of activity across all logs for about 2 seconds, followed by roughly 30 seconds of inactivity. This pattern repeats.
This cycle occurs at exactly the same time for all 6 instances, regardless of when the JMeter test is started. The resulting TPS is very low.
Is this behavior normal for Node.js, or does something need to be tuned, and if so, what?
I configured a Java-based Selenium WebDriver test in Apache JMeter with the following setup:
Number of Threads (Users): 10
Ramp-up period (seconds): 120
Loop Count: 1
I ticked "Delay Thread creation until needed" to save resources.
My expectation regarding the functionality:
I expected that with 10 users and a 120-second ramp-up, the users would start one after another, with JMeter waiting about 12 seconds (120 / 10) before starting each subsequent thread.
The issue is:
The threads sometimes start after 11 seconds, sometimes after 12.
I don't know why this happens; I would like the threads to start exactly 12 seconds apart.
The question is:
Is there any way to tell JMeter to wait exactly 12 seconds before starting the next thread?
Here is a picture of the started jobs with their timestamps:
I don't think you will be able to achieve this level of precision with the normal Thread Group's ramp-up approach. A better idea would be the Ultimate Thread Group (installable via the JMeter Plugins Manager), which gives full flexibility in defining the ramp-up, ramp-down, and hold-load times.
Example setup:
Example output:
To get only one execution of the "job" per virtual user, you can use a Throughput Controller configured like this:
You can add a Flow Control Action sampler to pause for an exact time; it allows pauses to be included without generating a sample. For variable delays, set the pause time to zero and add a Timer as a child.
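For illustration only (this is not JMeter code), the exact spacing being asked for amounts to scheduling each virtual user at an absolute slot of start + i * 12 seconds, rather than relying on ramp-up division, which is best-effort. A minimal Python sketch of that idea:

import threading
import time

SPACING = 12  # seconds between user starts (120 s ramp-up / 10 users)

def user_job(user_id):
    print("user %d started at %s" % (user_id, time.strftime("%H:%M:%S")))

start = time.monotonic()
threads = []
for i in range(10):
    # Sleep until this user's absolute slot so rounding errors don't accumulate.
    time.sleep(max(0.0, start + i * SPACING - time.monotonic()))
    t = threading.Thread(target=user_job, args=(i,))
    t.start()
    threads.append(t)
for t in threads:
    t.join()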
I use the Application Insights "Availability" feature to check a website's availability and send an alert if it is down.
Application Insights sends an alert every 5 minutes, even though the "alert failure time window" is 15 minutes. The test frequency is 5 minutes.
So I get an alert after 5 minutes, then after 10 minutes, then after 15 minutes. I get 3 alerts when I need only one, after 15 minutes. It looks like a bug to me.
How can I prevent the Application Insights Availability feature from sending alerts every 5 minutes?
The email (notification) is sent the moment the alert condition is satisfied. It does not wait for the alert failure time window.
Example: if an alerting rule sends a notification when 3 locations out of 5 turn red, and 3 locations turn red within the first second, the notification is sent during that same second. It will not wait 5 (or 15) minutes.
This is by design, with the goal of reducing TTD (time to detect).
There are two ways to handle noise:
Configure retries (test will retry 2 times during red => green state switch)
Increase the number of locations to trigger alert (for instance, 14 out of 16)
Either way, only one notification is supposed to be sent, not one every 5/15 minutes. Multiple notifications suggest either a bug in tracking the alert's current state (a product bug) or an application that fails intermittently (the alerting rule constantly changes state green => red => green => ..., so an email is sent on every transition). Do you get an alert every 5 minutes when the tests are red all the time?
The alert failure time window defines what a failed location means. A 5-minute test interval with a 5-minute alert failure window means the single most recent result determines whether a location failed. A 5-minute test interval with a 15-minute window means the last 3 results determine it: if any of those 3 test runs failed, the location is considered failed (even though the 2 results after it might have been successes).
Increasing the alert failure time window makes the alerting rule more aggressive (and noisier for intermittently failing apps).
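As a sketch of the evaluation described above (my reading of it, not product code), with a 5-minute test interval and a 15-minute window the last 3 results decide whether a location counts as failed:

TEST_INTERVAL_MIN = 5
FAILURE_WINDOW_MIN = 15
RESULTS_IN_WINDOW = FAILURE_WINDOW_MIN // TEST_INTERVAL_MIN  # = 3

def location_failed(recent_results):
    # recent_results holds True for success, False for failure, newest last.
    window = recent_results[-RESULTS_IN_WINDOW:]
    # One failure anywhere in the window marks the location as failed,
    # even if later results succeeded.
    return any(not ok for ok in window)

print(location_failed([False, True, True]))  # True: old failure still inside window
print(location_failed([True, True, True]))   # False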
I'm running load tests on AWS Lambda with Charles Proxy, but I'm confused by the timeline chart it produces. I set up a test with 100 concurrent connections and expect varying degrees of latency, but I also expect all 100 requests to start at the same time (hence the concurrent setting in Charles Proxy's advanced repeat feature). Instead, some requests appear to start late, if I understand the chart correctly.
With only 100 invocations, I should be well within the concurrency limit set by AWS Lambda, so why are these requests being kicked off late (see requests 55 - 62 in the attached image)?
Lambda can take from a few hundred milliseconds to 1-2 seconds to start up when it's in a "cold state". Cold means it needs to download your package, unpack it, load it into memory, and then start executing your code. After execution, the container is kept alive for roughly 5 to 30 minutes (a "warm state"). If you send another request while it's warm, container startup is much faster.
You probably had a few containers already warm when you started your test; those started faster. Since the other requests arrived concurrently, Lambda needed to start more containers, and those started from a cold state, hence the time differences you see in the chart.
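A rough way to see the cold/warm split outside Charles Proxy is to invoke the function 100 times concurrently with boto3 and compare per-request latency (a sketch; the function name is a placeholder and AWS credentials are assumed to be configured):

import json
import time
from concurrent.futures import ThreadPoolExecutor

import boto3

lam = boto3.client("lambda")

def timed_invoke(i):
    t0 = time.monotonic()
    lam.invoke(FunctionName="my-test-function",  # placeholder name
               Payload=json.dumps({"n": i}).encode())
    return time.monotonic() - t0

with ThreadPoolExecutor(max_workers=100) as pool:
    latencies = sorted(pool.map(timed_invoke, range(100)))

# Cold containers typically show up as a slow tail well above the median.
print("fastest %.2fs, median %.2fs, slowest %.2fs"
      % (latencies[0], latencies[50], latencies[-1]))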
In my code I run a cron job every five seconds, and I've been getting the same warning ever since.
This is the API call I used:
sched.add_cron_job(test_3, second="*/5")
And I get a warning:
WARNING:apscheduler.scheduler:Execution of job "test_3 (trigger: cron[second='*/5'], next run at: 2013-11-28 15:56:30)" skipped: maximum number of running instances reached (1)
I tried a time gap of 2 minutes, but it doesn't solve the issue.
How can I overcome this issue?
I used proc.terminate() to stop the execution of my method, so that the first instance is terminated before a new one starts.
Also provide a timing mechanism so your process completes well within the scheduled interval (a minute, an hour, a day, etc.). In my application I used sleep(in_seconds) as the timing mechanism.
I had a similar problem, and it turned out the job ('test_3' in your case) was simply taking longer than its interval, more than 5 seconds (or the 2 minutes you tried).
APScheduler is trying to re-execute your job while the previous run is still in progress.
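If overlapping runs are acceptable, APScheduler can also be told to allow more than one concurrent instance of the job. A sketch using the APScheduler 3.x API (add_cron_job above is the older 2.x interface); the job body is a placeholder:

from apscheduler.schedulers.blocking import BlockingScheduler

def test_3():
    pass  # the job body from the question

sched = BlockingScheduler()
# max_instances=2 lets a new run start even if the previous one is still
# going, instead of being skipped with the "maximum number of running
# instances reached" warning. The cleaner fix is still to make test_3
# finish within its 5-second slot.
sched.add_job(test_3, "cron", second="*/5", max_instances=2)
sched.start()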
I have 3 different jobs set up in crontab (call them jobA, jobB, jobC) that run at different intervals and start at different times during the day. For example, jobA runs once per hour at 5 mins past the hour, jobB runs every 30 mins at 9 and 39 mins past the hour, and jobC runs every 15 mins. They are not dependent on each other, but for various reasons they can NOT be running at the same time.
The problem is that sometimes one of the jobs takes a long time to run and another one starts before the first one is done, causing issues.
Is there some way to queue or spool these jobs so that one will not start until the currently running one has finished? I tried using this solution, but it does not guarantee that pending jobs will resume in the order they were supposed to start. A queue would be best, but I cannot find anything on how to do this.
You can't do that using cron alone. Cron runs a specific command at a specific time. You can do it with the solution you proposed, but that adds a lot more complexity.
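As an illustration of that lock-based approach, here is a minimal Python wrapper (a sketch; the lock path is a placeholder) that each crontab entry would invoke instead of running the job directly. It serializes the jobs, but as the question points out, it does not guarantee FIFO order among waiting jobs:

import fcntl
import subprocess
import sys

LOCK_PATH = "/tmp/cronjobs.lock"  # placeholder path shared by all three jobs

def run_exclusively(cmd):
    with open(LOCK_PATH, "w") as lock:
        # Blocks until no other job holds the lock, then runs the command.
        fcntl.flock(lock, fcntl.LOCK_EX)
        return subprocess.call(cmd)

if __name__ == "__main__":
    # A crontab entry would look like: runlock.py /path/to/jobA
    sys.exit(run_exclusively(sys.argv[1:]))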
I suggest coding the requirement in a high-level language like Java and using a multi-threaded program to achieve what you need.
Control-M is another scheduling tool, with a lot of other features as well. You would be able to implement the above use case in it.