Delay of 10 seconds is increasing - Arduino delay

Why is the delay of 10 seconds not a constant value? It keeps increasing.
I am using an Arduino Nano v3 with the serial port set to 9600 bps.
void setup() {
  Serial.begin(9600); // match the 9600 bps terminal setting
}

void loop() {
  serialPrint("Saaabbb--hghgE");
}

void serialPrint(String message) {
  Serial.println(millis());
  Serial.println(message);
  delay(10000);
}
OUTPUT
0
Saaabbb--hghgE
10000
Saaabbb--hghgE
20000
Saaabbb--hghgE
30001
Saaabbb--hghgE
40001
Saaabbb--hghgE
50001
Saaabbb--hghgE
60002
Saaabbb--hghgE
70003
Saaabbb--hghgE
80004
Saaabbb--hghgE
90004
Saaabbb--hghgE
100004
Saaabbb--hghgE
110005
Saaabbb--hghgE
120006
Saaabbb--hghgE
130007
Saaabbb--hghgE
140007
Saaabbb--hghgE
150007

The extra millisecond that creeps in every few iterations is the time spent actually processing your code.
Serial.println(millis()) takes time to complete, so the total time per iteration is processing time plus your added delay.
For example:
void loop() {
  serialPrint("Saaabbb--hghgE");
}

void serialPrint(String message) {
  Serial.println(millis()); // takes 0.5 ms (for example)
  Serial.println(message);  // takes 0.5 ms (for example)
  delay(10000);             // takes 10000 ms
}
Therefore the total time between millis() readings is 10001 ms.
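If the accumulating offset matters, a common alternative (not part of the original answer) is to schedule on millis() instead of delay(), so per-iteration processing time cannot pile up into drift. A minimal sketch, reusing the 10-second interval and message from the question:

unsigned long nextTick = 0;
const unsigned long INTERVAL = 10000; // 10 s period

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Compare against an absolute deadline so processing cost cannot accumulate.
  if (millis() >= nextTick) {
    Serial.println(millis());
    Serial.println("Saaabbb--hghgE");
    nextTick += INTERVAL; // advance the deadline, not "now + 10000"
  }
}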

Yeah, it's just the time needed to process your commands. I don't know your project, but since mine have never needed millisecond accuracy, I think you can just ignore it.

Related

Laravel-Excel keeps browser busy for 140 seconds after completion of import: how do I correct it?

Using the import to models option, I am importing an XLS file with about 15,000 rows.
Using the microtime_float function, the script times the import and echoes how long it takes: 29.6 secs, i.e. less than 30 seconds. At that time, I can see the database has all 15k+ records as expected, so no issues there.
The problem is that the browser is kept busy, and at 1 min 22 secs, 1 min 55 secs, and 2 min 26 secs it prompts me to either wait or kill the process. I keep clicking wait and it finally ends at 2 min 49 secs.
This is a terrible user experience; how can I cut off this extra wait time?
It's a very basic setup: the route calls importcontroller#import via HTTP GET, and the code is as follows:
public function import()
{
    ini_set('memory_limit', '1024M');

    $start = $this->microtime_float();

    Excel::import(new myImport, 'myfile.xls', null, \Maatwebsite\Excel\Excel::XLS);

    $end = $this->microtime_float();
    $t = $end - $start;

    return "Time: $t";
}
The class uses certain concerns as follows:
class myImport implements ToModel, WithBatchInserts, WithChunkReading, WithStartRow
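If the goal is to hand the response back to the browser before the heavy lifting finishes, Laravel-Excel's documented queueing support may help: when the import class also implements ShouldQueue (together with WithChunkReading), Excel::import dispatches the chunks as queued jobs instead of processing them in the request. A minimal sketch, assuming a configured queue driver; MyModel, the column mapping, and the start row are hypothetical:

use Illuminate\Contracts\Queue\ShouldQueue;
use Maatwebsite\Excel\Concerns\ToModel;
use Maatwebsite\Excel\Concerns\WithBatchInserts;
use Maatwebsite\Excel\Concerns\WithChunkReading;
use Maatwebsite\Excel\Concerns\WithStartRow;

class myImport implements ToModel, WithBatchInserts, WithChunkReading, WithStartRow, ShouldQueue
{
    public function model(array $row)
    {
        return new MyModel(['name' => $row[0]]); // hypothetical mapping
    }

    public function batchSize(): int
    {
        return 1000;
    }

    public function chunkSize(): int
    {
        return 1000;
    }

    public function startRow(): int
    {
        return 2; // hypothetical: skip a header row
    }
}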

Node.js server and Apache ab tool: unexpected behavior

While testing a simple node server (written with Hapi.js):
'use strict';

var Hapi = require("hapi");

var count = 0;

const server = Hapi.server({
    port: 3000,
    host: 'localhost'
});

server.route({
    method: 'GET',
    path: '/test',
    handler: (request, h) => {
        count++;
        console.log(count);
        return count;
    }
});

const init = async () => {
    await server.start();
};

process.on('unhandledRejection', (err) => {
    process.exit(1);
});

init();
start the server:
node ./server.js
run the Apache ab tool:
/usr/bin/ab -n 200 -c 30 localhost:3000/test
Env details:
OS: CentOS release 6.9
Node: v10.14.1
Hapi.js: 17.8.1
With multiple concurrent requests (-c 30) I found an unexpected result: the request handler function was called more times than the number of requests to be performed (-n 200).
Ab output example:
Benchmarking localhost (be patient)
Server Software:
Server Hostname: localhost
Server Port: 3000
Document Path: /test
Document Length: 29 bytes
Concurrency Level: 30
Time taken for tests: 0.137 seconds
Complete requests: 200
Failed requests: 0
Write errors: 0
Total transferred: 36081 bytes
HTML transferred: 6119 bytes
Requests per second: 1459.44 [#/sec] (mean)
Time per request: 20.556 [ms] (mean)
Time per request: 0.685 [ms] (mean, across all concurrent requests)
Transfer rate: 257.12 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 0
Processing: 15 17 1.5 16 20
Waiting: 2 9 3.9 9 18
Total: 15 17 1.5 16 21
Percentage of the requests served within a certain time (ms)
50% 16
66% 16
75% 17
80% 18
90% 20
95% 20
98% 21
99% 21
100% 21 (longest request)
And the node server printed out 211 log lines. Across various tests the mismatch varies but is always present:
-n 1000 -c 1 -> 1000 logs
-n 1000 -c 2 -> ~1000 logs
-n 1000 -c 10 -> ~1001 logs
-n 1000 -c 70 -> ~1008 logs
-n 1000 -c 1000 -> ~1020 logs
It seems that as concurrency increases, the mismatch increases.
I couldn't figure out whether the ab tool performs more HTTP requests or the node server responds more times than necessary.
Could you please help?
It's very strange, and I don't get the same results as you on my machine. I would be very surprised if it was ab that was issuing different numbers of actual requests.
Things I would try:
Write a simple server using Express rather than Hapi (see the sketch after this list). If the issue still occurs, you at least know it's not a problem with Hapi.
Intercept the network calls using Fiddler.
ab -X localhost:8888 -n 100 -c 30 http://127.0.0.1:3000/test will use the Fiddler proxy, which will then let you see the actual calls across the network interface.
Wireshark if you need more power and you're feeling brave (I'd only use it if Fiddler has let you down).
If after all of these you are still finding an issue, then it has been narrowed down to an issue with node itself; I'm not sure what else it could be. Try using node 8 rather than 10.
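A minimal sketch of such an Express server, mirroring the port, route, and counter from the question (assumes the express package is installed):

'use strict';

const express = require('express');

const app = express();
let count = 0;

// Same counting handler as the Hapi version, to isolate framework effects.
app.get('/test', (req, res) => {
    count++;
    console.log(count);
    res.send(String(count));
});

app.listen(3000, 'localhost');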
Using the Fiddler proxy, I found that the ab tool runs more requests than the number of requests to be performed (example: -n 200).
By running a series of consecutive tests:
# 11 consecutive times
/usr/bin/ab -n 200 -c 30 -X localhost:8888 http://localhost:3000/test
Both the proxy and the node server report a total of 2209 requests. It looks like ab is less imprecise with the proxy in the middle, but still imprecise.
In general, and more importantly, I never found a mismatch between the requests passed through the proxy and the requests received by the node server.
Thanks!

gtk_css_value_inherit_free: code should not be reached

I am making a plugin in Python for Rhythmbox.
I randomly get an error when starting the plugin. Then, after some seconds, I restart Rhythmbox and the plugin runs OK.
What are the likely causes of this error?
Error:
gtkcssinheritvalue.c:33:gtk_css_value_inherit_free: code should not be reached
gtkcssinheritvalue.c:
29 static void
30 gtk_css_value_inherit_free (GtkCssValue *value)
31 {
32   /* Can only happen if the unique value gets unreffed too often */
33   g_assert_not_reached ();
34 }
https://github.com/GNOME/gtk/blob/gtk-3-6/gtk/gtkcssinheritvalue.c
All suggestions are welcome. Thanks.

How do I make a PSGI program do costly initialisation only once per process, not per thread?

cross-post: http://perlmonks.org/?node_id=1191821
Consider app.psgi:
#!perl
use 5.024;
use strictures;
use Time::HiRes qw(sleep);

my $dsn = 'dbi:blahblah'; # from config file

sub mock_connect {
    my ($dsn) = @_;
    my $how_long_it_takes = 3 + rand;
    sleep $how_long_it_takes;
    return $how_long_it_takes;
}

sub main {
    my ($dsn) = @_;
    state $db_handle = mock_connect($dsn);
    return sub { [200, [], ["connect took $db_handle seconds\n"]] };
}

my $app = main($dsn);
Measuring plackup (HTTP::Server::PSGI: Accepting connections at http://0:5000/):
› perl -MBenchmark=timeit,timestr,:hireswallclock -E"say timestr timeit 10, sub { system q(curl http://localhost:5000) }"
connect took 3.0299610154043 seconds
connect took 3.0299610154043 seconds
connect took 3.0299610154043 seconds
connect took 3.0299610154043 seconds
connect took 3.0299610154043 seconds
connect took 3.0299610154043 seconds
connect took 3.0299610154043 seconds
connect took 3.0299610154043 seconds
connect took 3.0299610154043 seconds
connect took 3.0299610154043 seconds
2.93921 wallclock secs ( 0.03 usr + 0.06 sys = 0.09 CPU) # 107.53/s (n=10)
Measuring thrall (Starting Thrall/0.0305 (MSWin32) http server listening at port 5000):
› perl -MBenchmark=timeit,timestr,:hireswallclock -E"say timestr timeit 10, sub { system q(curl http://localhost:5000) }"
connect took 3.77111188120125 seconds
connect took 3.15455510265111 seconds
connect took 3.77111188120125 seconds
connect took 3.15455510265111 seconds
connect took 3.77111188120125 seconds
connect took 3.64333342488772 seconds
connect took 3.15455510265111 seconds
connect took 3.77111188120125 seconds
connect took 3.85268922343767 seconds
connect took 3.64333342488772 seconds
17.4764 wallclock secs ( 0.02 usr + 0.09 sys = 0.11 CPU) # 90.91/s (n=10)
This performance is not acceptable because the initialisation happens several times, despite the state variable. How do you make it so it happens only once?
For whatever reason, the thrall program hard-codes a "loader" parameter in its configuration section:
my $runner = Plack::Runner->new(
    server     => 'Thrall',
    env        => 'deployment',
    loader     => 'Delayed',
    version_cb => \&version,
);
$runner->parse_options(@ARGV);
That string "Delayed" refers to the module Plack::Loader::Delayed, which delays loading the .psgi file until the first request comes in. That would match your benchmark results. (If you re-run the benchmark without killing thrall, you'll see identical output.)
You may try running thrall -L +Plack::Loader app.psgi, which reverts the "loader" parameter to the default value hard-coded in Plack::Runner.
Isn't this what the --preload-app option to Starman does?
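For what it's worth, yes: Starman's --preload-app loads the application in the master process before forking workers, so load-time initialisation like the state variable above runs only once. A hypothetical invocation (with the usual caveat that real database handles are generally not fork-safe and should be reconnected per child):

# preload app.psgi in the master, then fork 4 workers
starman --preload-app --workers 4 app.psgi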

Analysis of nodeload result

Can anyone explain the nodeload result below?
./nl.js -c 1 -n 10000 -i 1 "http://localhost:3000/"
Server: localhost:3000
HTTP Method: GET
Document Path: /
Concurrency Level: 1
Number of requests: 10000
Body bytes transferred: 3516274
Elapsed time (s): 1172.70
Requests per second: 9.23
Mean time per request (ms): 107.95
Time per request standard deviation: 187.76
Percentages of requests served within a certain time (ms)
Min: 38
Avg: 107.9
50%: 84
95%: 141
99%: 1076
Max: 5820
How is the percentage of requests calculated?
Thanks
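For context on the question: a line like "95%: 141" means 95% of the 10000 requests completed within 141 ms, i.e. it is the 95th percentile of the recorded response times. A minimal sketch of the usual nearest-rank computation (the sample data is hypothetical, not nodeload's internals):

// percentile(samples, 95) returns the smallest latency such that at least
// 95% of the samples are less than or equal to it (nearest-rank method).
function percentile(samples, p) {
    const sorted = [...samples].sort((a, b) => a - b);
    const rank = Math.ceil((p / 100) * sorted.length) - 1;
    return sorted[Math.max(0, rank)];
}

const latenciesMs = [38, 84, 90, 141, 1076, 5820]; // hypothetical samples
console.log(percentile(latenciesMs, 50)); // 90
console.log(percentile(latenciesMs, 95)); // 5820 for this tiny sample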
