Why does execution time differ when running the same program multiple times? - linux
Consider a Node.js CPU-bound program that generates prime numbers:
// generatePrimes.js
// long-running / CPU-bound calculation
function generatePrimes(start, range) {
  const primes = []
  let isPrime = true
  const end = start + range
  for (let i = start; i < end; i++) {
    // trial division: check divisors from 2 up to sqrt(end)
    for (let j = 2; j < Math.sqrt(end); j++) {
      if (i !== j && i % j === 0) {
        isPrime = false
        break
      }
    }
    if (isPrime) {
      primes.push(i)
    }
    isPrime = true
  }
  return primes
}

function main() {
  const min = 2
  const max = 1e7
  console.log(generatePrimes(min, max))
}

if (require.main === module)
  main()

module.exports = { generatePrimes }
My HW/OS configuration:
Laptop running Ubuntu Linux 20.04.2 LTS desktop environment, with 4 cores / 8 hardware threads:
$ inxi -C -M
Machine: Type: Laptop System: HP product: HP Laptop 17-by1xxx v: Type1ProductConfigId serial: <superuser/root required>
Mobo: HP model: 8531 v: 17.16 serial: <superuser/root required> UEFI: Insyde v: F.32 date: 12/14/2018
CPU: Topology: Quad Core model: Intel Core i7-8565U bits: 64 type: MT MCP L2 cache: 8192 KiB Speed: 700 MHz min/max: 400/4600 MHz Core speeds (MHz): 1: 700 2: 700 3: 700 4: 700 5: 700 6: 700 7: 700 8: 700
$ echo "CPU threads: $(grep -c processor /proc/cpuinfo)"
CPU threads: 8
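Note that /proc/cpuinfo counts logical CPUs (hardware threads), not physical cores; lscpu reports both (a quick check, assuming the standard util-linux tools):

```shell
# Physical cores vs. logical CPUs (hardware threads) as seen by the scheduler:
lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket)'
nproc   # number of logical CPUs
```

On this machine that is 4 cores x 2 threads = 8 logical CPUs, so 6 truly parallel CPU-bound processes already contend for the 4 physical cores.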
Now, let's measure the elapsed time:
$ /usr/bin/time -f "%e" node generatePrimes.js
[
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37,
41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89,
97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151,
157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223,
227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281,
283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359,
367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433,
439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503,
509, 521, 523, 541,
... 664479 more items
]
7.99
OK: running the program once, the elapsed time is ~8 seconds.
But now consider the bash script testGeneratePrimes.sh, which runs the same program 6 times, first in sequence and then in parallel, measuring the elapsed time of each run:
#!/bin/bash

# Get number of runs, as command line argument
if [ $# -eq 0 ]
then
    echo
    echo -e " run n processes, in sequence and in parallel."
    echo
    echo -e " usage:"
    echo -e " $0 <number of runs>"
    echo
    echo -e " examples:"
    echo -e " $0 6"
    echo -e " run 6 times"
    echo
    exit 1
fi

numRuns=$1

# run a single instance of the process 'node generatePrimes'
runProcess() {
    /usr/bin/time -f "%e" node generatePrimes.js > /dev/null
}

echo
echo "SEQUENCE TEST: running generatePrimes, $numRuns successive sequential times"
echo
for i in $(seq $numRuns); do
    runProcess
done

echo
echo "PARALLEL TEST: running generatePrimes, $numRuns in parallel (background processes)"
echo
for i in $(seq $numRuns); do
    runProcess &
done
wait < <(jobs -p)
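A hypothetical variant of runProcess worth trying: pin each parallel instance to its own logical CPU with taskset, to rule out process migration. In this self-contained sketch the real workload (node generatePrimes.js) is replaced by a small shell loop:

```shell
#!/bin/bash
# Sketch: pin background run $1 to logical CPU $1 (assumes util-linux taskset).
numRuns=4

runPinned() {
    # in the real test this line would run: node generatePrimes.js
    taskset -c "$1" sh -c 'i=0; while [ $i -lt 100000 ]; do i=$((i+1)); done'
}

for i in $(seq 0 $((numRuns - 1))); do
    runPinned "$i" &
done
wait   # plain 'wait' blocks until all background jobs finish
```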
Running the script (6 processes):
$ ./testGeneratePrimes.sh 6
SEQUENCE TEST: running generatePrimes, 6 successive sequential times
8.16
9.09
11.44
11.57
12.93
12.00
PARALLEL TEST: running generatePrimes, 6 in parallel (background processes)
25.99
26.16
30.51
30.64
31.60
31.60
I see that:
in the sequence test, elapsed times increase with each run, from ~8 seconds up to ~12 seconds?!
in the parallel test, elapsed times range from ~25 seconds up to ~31 seconds?!
That's insane.
I do not understand why! Maybe it's a Linux scheduler limitation? A CPU hardware limitation/issue?
I also tried the commands:
nice -20 /usr/bin/time -f "%e" node generatePrimes.js
taskset -c 0-7 /usr/bin/time -f "%e" node generatePrimes.js
but neither made any significant difference to the behavior described.
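Another thing worth inspecting before comparing wall-clock times is the cpufreq governor and the live core frequencies (a sketch; the sysfs paths assume the intel_pstate driver, and the governor switch needs root):

```shell
# current governor per logical CPU ("powersave" is the Ubuntu default)
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# live core frequencies, to watch for frequency drops under load
grep 'cpu MHz' /proc/cpuinfo

# switch to the performance governor (linux-tools package on Ubuntu)
sudo cpupower frequency-set -g performance
```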
Questions:
Why do the elapsed times vary so much?
Is there any way to configure Linux so that it does not limit per-process CPU usage?
BTW, related question: running multiple nodejs worker threads: why such a large overhead/latency?
UPDATE (MORE TESTS)
Following Nate Eldredge's suggestion (see comments), here is some info gathered with cpupower and sensors:
$ sudo cpupower -c all info
analyzing CPU 0:
perf-bias: 6
analyzing CPU 1:
perf-bias: 6
analyzing CPU 2:
perf-bias: 6
analyzing CPU 3:
perf-bias: 6
analyzing CPU 4:
perf-bias: 6
analyzing CPU 5:
perf-bias: 6
analyzing CPU 6:
perf-bias: 6
analyzing CPU 7:
perf-bias: 6
# set perf-bias to max performance
$ sudo cpupower set -b 0
$ sudo cpupower -c all info -b
analyzing CPU 0:
perf-bias: 0
analyzing CPU 1:
perf-bias: 0
analyzing CPU 2:
perf-bias: 0
analyzing CPU 3:
perf-bias: 0
analyzing CPU 4:
perf-bias: 0
analyzing CPU 5:
perf-bias: 0
analyzing CPU 6:
perf-bias: 0
analyzing CPU 7:
perf-bias: 0
$ sudo cpupower monitor
| Nehalem || Mperf || Idle_Stats
CPU| C3 | C6 | PC3 | PC6 || C0 | Cx | Freq || POLL | C1 | C1E | C3 | C6 | C7s | C8 | C9 | C10
0| 0,04| 2,53| 0,00| 0,00|| 4,23| 95,77| 688|| 0,00| 0,00| 0,08| 0,10| 2,38| 0,00| 24,65| 0,78| 67,90
4| 0,04| 2,53| 0,00| 0,00|| 3,68| 96,32| 675|| 0,00| 0,00| 0,02| 0,06| 0,76| 0,00| 10,83| 0,03| 84,75
1| 0,03| 1,94| 0,00| 0,00|| 6,39| 93,61| 656|| 0,00| 0,00| 0,04| 0,06| 1,88| 0,00| 16,19| 0,00| 75,63
5| 0,04| 1,94| 0,00| 0,00|| 1,35| 98,65| 689|| 0,00| 0,02| 1,19| 0,04| 0,40| 0,33| 3,89| 0,82| 92,02
2| 0,56| 25,49| 0,00| 0,00|| 12,88| 87,12| 673|| 0,00| 0,00| 0,84| 0,74| 28,61| 0,03| 34,48| 3,44| 19,81
6| 0,56| 25,48| 0,00| 0,00|| 4,30| 95,70| 676|| 0,00| 0,00| 0,03| 0,09| 1,48| 0,00| 22,66| 1,11| 70,52
3| 0,19| 3,61| 0,00| 0,00|| 3,67| 96,33| 658|| 0,00| 0,00| 0,02| 0,07| 1,36| 0,00| 14,85| 0,03| 80,16
7| 0,19| 3,60| 0,00| 0,00|| 6,21| 93,79| 679|| 0,00| 0,00| 0,28| 0,19| 3,48| 0,76| 31,10| 1,50| 56,75
$ sudo cpupower monitor ./testGeneratePrimes.sh 6
SEQUENCE TEST: running generatePrimes, 6 successive sequential times
8.18
9.06
11.66
11.29
11.30
11.21
PARALLEL TEST: running generatePrimes, 6 in parallel (background processes)
20.83
20.95
28.42
28.42
28.47
28.52
./testGeneratePrimes.sh took 91,26958 seconds and exited with status 0
| Nehalem || Mperf || Idle_Stats
CPU| C3 | C6 | PC3 | PC6 || C0 | Cx | Freq || POLL | C1 | C1E | C3 | C6 | C7s | C8 | C9 | C10
0| 0,20| 1,98| 0,00| 0,00|| 33,88| 66,12| 2008|| 0,00| 0,04| 0,18| 0,18| 1,83| 0,02| 16,67| 0,22| 47,04
4| 0,20| 1,98| 0,00| 0,00|| 10,46| 89,54| 1787|| 0,00| 0,09| 0,32| 0,37| 4,01| 0,03| 25,21| 0,19| 59,37
1| 0,26| 2,40| 0,00| 0,00|| 24,52| 75,48| 1669|| 0,00| 0,06| 0,18| 0,20| 2,17| 0,00| 14,90| 0,17| 57,84
5| 0,26| 2,40| 0,00| 0,00|| 32,07| 67,93| 1662|| 0,00| 0,07| 0,19| 0,14| 1,40| 0,02| 9,31| 0,53| 56,33
2| 0,93| 13,33| 0,00| 0,00|| 31,31| 68,69| 2025|| 0,00| 0,05| 0,43| 1,00| 18,21| 0,01| 26,18| 1,74| 21,23
6| 0,93| 13,33| 0,00| 0,00|| 11,98| 88,02| 1711|| 0,00| 0,19| 0,31| 0,22| 2,63| 0,03| 18,87| 0,76| 65,04
3| 0,15| 0,98| 0,00| 0,00|| 47,38| 52,62| 2627|| 0,00| 0,07| 0,17| 0,13| 1,35| 0,01| 7,88| 0,10| 42,80
7| 0,15| 0,98| 0,00| 0,00|| 59,25| 40,75| 2235|| 0,00| 0,06| 0,18| 0,18| 1,58| 0,00| 9,31| 0,63| 28,91
$ sensors && testGeneratePrimes.sh 6 && sensors
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +46.0°C (high = +100.0°C, crit = +100.0°C)
Core 0: +45.0°C (high = +100.0°C, crit = +100.0°C)
Core 1: +43.0°C (high = +100.0°C, crit = +100.0°C)
Core 2: +45.0°C (high = +100.0°C, crit = +100.0°C)
Core 3: +44.0°C (high = +100.0°C, crit = +100.0°C)
BAT0-acpi-0
Adapter: ACPI interface
in0: 12.89 V
curr1: 0.00 A
amdgpu-pci-0100
Adapter: PCI adapter
vddgfx: 65.49 V
edge: +511.0°C (crit = +104000.0°C, hyst = -273.1°C)
power1: 1.07 kW (cap = 30.00 W)
pch_cannonlake-virtual-0
Adapter: Virtual device
temp1: +42.0°C
acpitz-acpi-0
Adapter: ACPI interface
temp1: +47.0°C (crit = +120.0°C)
temp2: +53.0°C (crit = +127.0°C)
SEQUENCE TEST: running generatePrimes, 6 successive sequential times
8.36
9.76
11.35
11.38
11.22
11.24
PARALLEL TEST: running generatePrimes, 6 in parallel (background processes)
21.06
21.14
28.50
28.55
28.62
28.65
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +54.0°C (high = +100.0°C, crit = +100.0°C)
Core 0: +51.0°C (high = +100.0°C, crit = +100.0°C)
Core 1: +50.0°C (high = +100.0°C, crit = +100.0°C)
Core 2: +54.0°C (high = +100.0°C, crit = +100.0°C)
Core 3: +50.0°C (high = +100.0°C, crit = +100.0°C)
BAT0-acpi-0
Adapter: ACPI interface
in0: 12.89 V
curr1: 0.00 A
amdgpu-pci-0100
Adapter: PCI adapter
vddgfx: 65.49 V
edge: +511.0°C (crit = +104000.0°C, hyst = -273.1°C)
power1: 1.07 kW (cap = 30.00 W)
pch_cannonlake-virtual-0
Adapter: Virtual device
temp1: +46.0°C
acpitz-acpi-0
Adapter: ACPI interface
temp1: +55.0°C (crit = +120.0°C)
temp2: +57.0°C (crit = +127.0°C)