How to record audio and play it back in Ionic 2?

I am currently working on an Ionic 2 app that lets users record their own sound, stop the recording, and play the sound back. According to ionic-native, there are two plugins for this: MediaPlugin and MediaCapture. I have tried MediaPlugin, but I ran into problems starting the recording, stopping it, and playing it back.
Has anybody had experience with this plugin? I have gone through the Ionic docs and some other blogs, but I still can't get it working. I am very new to this; thank you for your efforts, and I appreciate all of your ideas.
These are the logs I got from the emulator when starting the recording:
I/MPEG4Writer( 401): limits: 2147483647/0 bytes/us, bit rate: 12200 bps and the estimated moov size 3072 bytes
D/Genyd ( 56): Received Set Clipboard
D/Genymotion( 56): Received Set Clipboard
D/dalvikvm( 379): GC_CONCURRENT freed 717K, 13% free 6011K/6864K, paused 0ms+1ms, total 10ms
E/genymotion_audio( 401): get_next_buffer() pcm_read error -1
W/PluginManager( 1116): THREAD WARNING: exec() call to Media.startRecordingAudio blocked the main thread for 10037ms. Plugin should use CordovaInterface.getThreadPool().
E/genymotion_audio( 401): get_next_buffer() pcm_read error -16
E/genymotion_audio( 401): get_next_buffer() pcm_read error -16
E/genymotion_audio( 401): get_next_buffer() pcm_read error -16
E/genymotion_audio( 401): get_next_buffer() pcm_read error -16
E/genymotion_audio( 401): get_next_buffer() pcm_read error -16
E/genymotion_audio( 401): get_next_buffer() pcm_read error -16
I/MPEG4Writer( 401): setStartTimestampUs: 10031588
I/MPEG4Writer( 401): Earliest track starting time: 10031588
E/genymotion_audio( 401): get_next_buffer() pcm_read error -16
E/genymotion_audio( 401): get_next_buffer() pcm_read error -16
E/genymotion_audio( 401): get_next_buffer() pcm_read error -16
E/genymotion_audio( 401): get_next_buffer() pcm_read error -16
E/genymotion_audio( 401): get_next_buffer() pcm_read error -16
E/genymotion_audio( 401): get_next_buffer() pcm_read error -16
These are the logs I got from the emulator when stopping the recording:
I/MPEG4Writer( 401): Received total/0-length (42/0) buffers and encoded 42 frames. - audio
I/MPEG4Writer( 401): Audio track drift time: 0 us
D/MPEG4Writer( 401): Stopping Audio track source
E/genymotion_audio( 401): get_next_buffer() pcm_read error -16
D/MPEG4Writer( 401): Audio track stopped
D/MPEG4Writer( 401): Stopping writer thread
D/MPEG4Writer( 401): 0 chunks are written in the last batch
D/MPEG4Writer( 401): Writer thread stopped
I/MPEG4Writer( 401): The mp4 file will not be streamable.
D/MPEG4Writer( 401): Stopping Audio track
D/AudioPlayer( 1116): renaming /storage/emulated/0/tmprecording.3gp to /storage/emulated/0/../Documents/undefined-.wav
E/AudioPlayer( 1116): FAILED renaming /storage/emulated/0/tmprecording.3gp to /storage/emulated/0/../Documents/undefined-.wav
W/PluginManager( 1116): THREAD WARNING: exec() call to Media.stopRecordingAudio blocked the main thread for 135ms. Plugin should use CordovaInterface.getThreadPool().
Here is my home.ts code:
import { Component } from '@angular/core';
import { NavController, Platform } from 'ionic-angular';
import { MediaPlugin } from 'ionic-native';

@Component({
  templateUrl: 'build/pages/home/home.html'
})
export class HomePage {
  private _platform: Platform;
  private _fileRecord: MediaPlugin;
  private _pathFile: string;
  private _nameFile: string; // note: never assigned, hence the 'undefined-' in the log above

  constructor(private navCtrl: NavController, platform: Platform) {
    this._platform = platform;
  }

  public startRecord(): void {
    this._pathFile = this.getPathFileRecordAudio();
    this._fileRecord = new MediaPlugin(this._pathFile);
    this._fileRecord.startRecord();
  }

  public stopRecord(): void {
    this._fileRecord.stopRecord();
  }

  private startPlay(): void {
    this._fileRecord = new MediaPlugin(this._pathFile);
    this._fileRecord.play();
  }

  private getPathFileRecordAudio(): string {
    let path: string = (this._platform.is('ios') ? '../Library/NoCloud/' : '../Documents/');
    return path + this._nameFile + '-' + '.wav';
  }
}

Have you checked out the comments in the source for the plugin? There are notes about how to tweak things. Someone had something similar here, but you gave far more detail about the errors (and I copied the plugin comments into that answer).

Related

Using pino multistream with synchronous logging

From what I understand, Pino (v7.5.1) does synchronous logging by default. From the docs:
In Pino's standard mode of operation log messages are directly written to the output stream as the messages are generated with a blocking operation.
I am using pino.multistream like so:
const pino = require('pino')
const pretty = require('pino-pretty')
const fs = require('fs') // needed for fs.createWriteStream below
const logdir = '/Users/punkish/Projects/z/logs'
const streams = [
    {stream: fs.createWriteStream(`${logdir}/info.log`, {flags: 'a'})},
    {stream: pretty()},
    {level: 'error', stream: fs.createWriteStream(`${logdir}/error.log`, {flags: 'a'})},
    {level: 'debug', stream: fs.createWriteStream(`${logdir}/debug.log`, {flags: 'a'})},
    {level: 'fatal', stream: fs.createWriteStream(`${logdir}/fatal.log`, {flags: 'a'})}
]
Strangely, Pino is behaving asynchronously: a curl operation prints its output out of sequence, before earlier events logged with log.info.
log.info('1')
// ... code to do 1 something
log.info('2')
// ... code to do 2 something
log.info('3')
// ... code to do 3 something
const execSync = require('child_process').execSync
execSync(`curl --silent --output ${local} '${remote}'`)
and my console output is
1
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 39.5M 100 39.5M 0 0 108M 0 --:--:-- --:--:-- --:--:-- 113M
2
3
This is a bit annoying and confusing. Maybe this is not Pino's fault and curl is causing the problem, but if I replace Pino logging with console.log the order is as expected. So it seems the problem is Pino behaving asynchronously. How can I go back to synchronous logging?
The trick is to call pino.destination({...}) to create a SonicBoom output stream, a Pino-specific alternative to fs.createWriteStream. The SonicBoom options include a boolean sync property; you also need to pass sync in pretty({...}).
const pino = require('pino')
const pretty = require('pino-pretty')
const logdir = '/Users/punkish/Projects/z/logs'
const createSonicBoom = (dest) =>
    pino.destination({dest: dest, append: true, sync: true})

const streams = [
    {stream: createSonicBoom(`${logdir}/info.log`)},
    {stream: pretty({
        colorize: true,
        sync: true,
    })},
    {level: 'error', stream: createSonicBoom(`${logdir}/error.log`)},
    {level: 'debug', stream: createSonicBoom(`${logdir}/debug.log`)},
    {level: 'fatal', stream: createSonicBoom(`${logdir}/fatal.log`)}
]
Test:
const log = pino({ level: 'info' }, pino.multistream(streams))
console.log('Before-Fatal')
log.fatal('Fatal')
log.error('Error')
log.warn('Warn')
console.log('After-Warn, Before-Info')
log.info('Info')
console.log('After-Info')
Output:
Before-Fatal
[1234567890123] FATAL (1234567 on host): Fatal
[1234567890127] ERROR (1234567 on host): Error
[1234567890127] WARN (1234567 on host): Warn
After-Warn, Before-Info
[1234567890128] INFO (1234567 on host): Info
After-Info
It seems that using pino.multistream (or multiple transports, which appear to have the same effect as multistream) automatically forces Pino to behave asynchronously, and there is no way around it. Since synchronous logging matters more to me than speed in this project, I will look for an alternative logging solution.

Flashing via OpenOCD does not allow the embedded program to run, but running via GDB with an OpenOCD bridge works fine

I am running into an issue using Rust for embedded development: I can run and debug programs just fine, but if I flash them so that they run without being connected to my computer, they do not work.
For reference, I am using an STM32F303 chip. This also seems to be a recent issue, since I haven't had this problem before.
Code being flashed (it's just blinky):
#![feature(used)]
#![no_std]

extern crate cortex_m;
extern crate cortex_m_rt;
extern crate panic_abort; // panicking behavior
extern crate stm32f30x_hal as hal;

use hal::prelude::*;
use hal::stm32f30x;
use hal::delay::Delay;

fn main() {
    let cp = cortex_m::Peripherals::take().unwrap();
    let dp = stm32f30x::Peripherals::take().unwrap();
    let mut flash = dp.FLASH.constrain();
    let mut rcc = dp.RCC.constrain();
    let clocks = rcc.cfgr.freeze(&mut flash.acr);
    let mut gpioc = dp.GPIOC.split(&mut rcc.ahb);
    let mut led1 = gpioc
        .pc13
        .into_push_pull_output(&mut gpioc.moder, &mut gpioc.otyper);
    let mut delay = Delay::new(cp.SYST, clocks);
    loop {
        led1.set_high();
        delay.delay_ms(1_000_u16);
        led1.set_low();
        delay.delay_ms(1_000_u16);
    }
}

// As we are not using interrupts, we just register a dummy catch-all
// handler
#[link_section = ".vector_table.interrupts"]
#[used]
static INTERRUPTS: [extern "C" fn(); 240] = [default_handler; 240];

extern "C" fn default_handler() {
    loop {}
}
Output of OpenOCD program command:
$ openocd -f interface/jlink.cfg -f target/stm32f3x.cfg -c "program target/thumbv7em-none-eabihf/debug/cortex-m-quickstart verify reset exit"
Open On-Chip Debugger 0.10.0
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "jtag". To override use 'transport select <transport>'.
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
jtag_ntrst_delay: 100
none separate
cortex_m reset_config sysresetreq
Info : No device selected, using first device.
Info : J-Link EDU Mini V1 compiled Mar 16 2017 12:04:38
Info : Hardware version: 1.00
Info : VTarget = 3.178 V
Info : clock speed 1000 kHz
Info : JTAG tap: stm32f3x.cpu tap/device found: 0x4ba00477 (mfg: 0x23b (ARM Ltd.), part: 0xba00, ver: 0x4)
Info : JTAG tap: stm32f3x.bs tap/device found: 0x06422041 (mfg: 0x020 (STMicroelectronics), part: 0x6422, ver: 0x0)
Info : stm32f3x.cpu: hardware has 6 breakpoints, 4 watchpoints
adapter speed: 1000 kHz
Info : JTAG tap: stm32f3x.cpu tap/device found: 0x4ba00477 (mfg: 0x23b (ARM Ltd.), part: 0xba00, ver: 0x4)
Info : JTAG tap: stm32f3x.bs tap/device found: 0x06422041 (mfg: 0x020 (STMicroelectronics), part: 0x6422, ver: 0x0)
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x1ffff1bc msp: 0x20001258
Info : Reduced speed from 8000 kHz to 4000 kHz (maximum).
adapter speed: 8000 kHz
** Programming Started **
auto erase enabled
Info : device id = 0x10036422
Info : flash size = 256kbytes
wrote 14336 bytes from file target/thumbv7em-none-eabihf/debug/cortex-m-quickstart in 0.688176s (20.344 KiB/s)
** Programming Finished **
** Verify Started **
verified 13028 bytes in 0.077558s (164.041 KiB/s)
** Verified OK **
** Resetting Target **
adapter speed: 1000 kHz
Info : JTAG tap: stm32f3x.cpu tap/device found: 0x4ba00477 (mfg: 0x23b (ARM Ltd.), part: 0xba00, ver: 0x4)
Info : JTAG tap: stm32f3x.bs tap/device found: 0x06422041 (mfg: 0x020 (STMicroelectronics), part: 0x6422, ver: 0x0)
shutdown command invoked
As can be seen, there aren't any issues when flashing the program, but it doesn't run. Once again, everything works fine under GDB, and the code runs without any issues.
Any help, advice, or even general ideas of what to do would be greatly appreciated!
[Update]
It looks like the issue is my BOOT0 pin, which is pulled high instead of low. This means that when GDB isn't connected and setting the PC, the MCU tries to boot from system memory instead of main flash. This explains why it works when debugging but not on its own.
For anyone following the instructions in the Embedded Rust Book: if you have been using semihosting to print messages while debugging (I used cortex_m_semihosting::hprintln), be aware that it causes the same symptoms as above. Semihosting works by executing a breakpoint instruction that the debugger intercepts, so without a debugger attached the first hprintln call halts the core. I spent hours trying to figure out why the program would not start when run by itself, but removing the semihosting crate and the hprintln call immediately fixed the problem.

OpenShift MongoError: auth fails, how to resolve?

EDIT: attaching my app.js. I am using
git add app.js
git commit -m "updated app.js"
git push
to push code from my local machine, and my app.js code is as follows:
/*
 * RESTful server
 */
// define express middleware
var express = require('express');
// require mongoose, this middleware helps in modeling data for mongodb
var mongoose = require('mongoose');
// require passport, this middleware helps in authentication
var passport = require('passport');
// require body-parser, this middleware parses request bodies
var bodyParser = require('body-parser');
var flash = require('connect-flash');
// define port on which the node app will run
//var port = process.env.PORT || 8000;
var server_port = process.env.OPENSHIFT_NODEJS_PORT || 8080;
var server_ip_address = process.env.OPENSHIFT_NODEJS_IP || '127.0.0.1';

var app = express();
app.use(bodyParser.json());
app.use(passport.initialize());
app.use(passport.session());
app.use(flash());
// ======================================================================
app.listen(server_port,server_ip_address);
console.log('The magic happens on port ' + 'http://'+server_ip_address+':'+server_port);
EDIT:
I commented out all the MongoDB connection code, so my app.js now contains only plain Express code, yet I still see the same output from "rhc tail -a app". I am not sure why the Node.js cartridge keeps trying to connect to MongoDB even though there is no such code in app.js. Is it possible that the log was generated earlier and I am being shown the same log? Can I clear the log file and test again? Can somebody please help me?
I deployed my Node.js (Express) app to the OpenShift server and I am hitting "MongoError: auth fails", even though I am providing credentials to the MongoDB server.
Initially, when the Node child process starts, it tries to connect to "mongodb://admin:XXXXXX@ip:port", but it should connect to "mongodb://admin:XXXXXX@ip:port/admin", since the credentials reside in the admin.system.users collection.
I am using Mongoose to connect to MongoDB, so I changed my connect call to mongoose.connect('mongodb://admin:XXXXXX@ip:port/admin');. I still see the child process trying to connect to "mongodb://admin:XXXXXX@ip:port", but at a later point it connects correctly, and I can see the console output of the following code:
mongoose.connection.once('connected', function() {
    console.log("Connected to database G");
});
I tested a few routes and they are working fine. I want to understand why it behaves this way, and whether I can ignore this error or how to resolve it.
Thanks in advance.
You should be using process.env.OPENSHIFT_MONGODB_DB_URL instead of forming your own URL. This environment variable has the following format:
mongodb://admin:LX3eZCP6yxxx@123e4b9a5973ca07ca00002f-appname.rhcloud.com:12345/
Attaching the output of my "rhc tail -a app" command:
==> app-root/logs/nodejs.log-20150328020443 <==
DEBUG: Starting child process with 'node app.js'
mongodb://admin:pass@550f3e705973cab149000009-app.rhcloud.com:59281/
mongodb://admin:pass@550f3e705973cab149000009-app.rhcloud.com:59281/
The magic happens on port http://127.9.17.129:8080
/var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/runtime/repo/node_modules/m
ongoose/node_modules/mongodb/lib/mongodb/connection/base.js:246
throw message;
^
MongoError: auth fails
at Object.toError (/var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/runt
ime/repo/node_modules/mongoose/node_modules/mongodb/lib/mongodb/utils.js:114:11)
at /var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/runtime/repo/node_mo
dules/mongoose/node_modules/mongodb/lib/mongodb/db.js:1194:31
==> app-root/logs/nodejs.log-20150327071155 <==
at EventEmitter.emit (events.js:98:17)
DEBUG: Program node app.js exited with code 8
DEBUG: Starting child process with 'node app.js'
mongodb://admin:pass@550f3e705973cab149000009-app.rhcloud.com:59281/
mongodb://admin:pass@550f3e705973cab149000009-app.rhcloud.com:59281/
The magic happens on port http://127.9.17.129:8080
/var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/runtime/repo/node_modules/m
ongoose/node_modules/mongodb/lib/mongodb/connection/base.js:246
throw message;
^
MongoError: auth fails
==> app-root/logs/nodejs.log <==
DEBUG: program 'app.js'
DEBUG: --watch '/var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/data/.nod
ewatch'
DEBUG: --ignore 'undefined'
DEBUG: --extensions 'node|js|coffee'
DEBUG: --exec 'node'
DEBUG: Starting child process with 'node app.js'
DEBUG: Watching directory '/var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/
data/.nodewatch' for changes.
admin:pass@550f3e705973cab149000009-app.rhcloud.com:59281
The magic happens on port http://127.9.17.129:8080
Connected to database G
==> app-root/logs/haproxy.log <==
[WARNING] 087/140540 (417258) : Server express/local-gear is UP, reason: Layer7
check passed, code: 200, info: "HTTP status check returned code <3C>200<3E>", ch
eck duration: 1ms. 1 active and 0 backup servers online. 0 sessions requeued, 0
total in queue.
[WARNING] 088/001408 (417258) : Server express/local-gear is DOWN, reason: Layer
4 connection problem, info: "Connection refused", check duration: 0ms. 0 active
and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 088/001408 (417258) : proxy 'express' has no server available!
[WARNING] 088/002019 (417258) : Server express/local-gear is UP, reason: Layer7
check passed, code: 200, info: "HTTP status check returned code <3C>200<3E>", ch
eck duration: 29ms. 1 active and 0 backup servers online. 0 sessions requeued, 0
total in queue.
[WARNING] 088/110018 (417258) : Server express/local-gear is DOWN, reason: Layer
4 connection problem, info: "Connection refused", check duration: 0ms. 0 active
and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 088/110018 (417258) : proxy 'express' has no server available!
[WARNING] 088/110112 (417258) : Server express/local-gear is UP, reason: Layer7
check passed, code: 200, info: "HTTP status check returned code <3C>200<3E>", ch
eck duration: 1ms. 1 active and 0 backup servers online. 0 sessions requeued, 0
total in queue.
[WARNING] 088/110502 (417258) : Server express/local-gear is DOWN, reason: Layer
4 connection problem, info: "Connection refused", check duration: 0ms. 0 active
and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 088/110502 (417258) : proxy 'express' has no server available!
[WARNING] 088/110556 (417258) : Server express/local-gear is UP, reason: Layer7
check passed, code: 200, info: "HTTP status check returned code <3C>200<3E>", ch
eck duration: 1ms. 1 active and 0 backup servers online. 0 sessions requeued, 0
total in queue.
==> app-root/logs/nodejs.log-20150328074316 <==
The magic happens on port http://127.9.17.129:8080
/var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/runtime/repo/node_modules/m
ongoose/node_modules/mongodb/lib/mongodb/connection/base.js:246
throw message;
^
MongoError: auth fails
at Object.toError (/var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/runt
ime/repo/node_modules/mongoose/node_modules/mongodb/lib/mongodb/utils.js:114:11)
at /var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/runtime/repo/node_mo
dules/mongoose/node_modules/mongodb/lib/mongodb/db.js:1194:31
at /var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/runtime/repo/node_mo
dules/mongoose/node_modules/mongodb/lib/mongodb/db.js:1903:9
at Server.Base._callHandler (/var/lib/openshift/550f3c0ffcf933066f0001b8/app
-root/runtime/repo/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connec
tion/base.js:453:41)
at /var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/runtime/repo/node_mo
dules/mongoose/node_modules/mongodb/lib/mongodb/connection/server.js:487:18
==> app-root/logs/haproxy_ctld.log <==
I, [2015-03-22T18:06:38.808186 #415579] INFO -- : Starting haproxy_ctld
I, [2015-03-27T14:20:21.556898 #15736] INFO -- : Starting haproxy_ctld
I, [2015-03-29T12:18:29.365873 #417278] INFO -- : Starting haproxy_ctld
I, [2015-03-29T12:18:37.485326 #417532] INFO -- : Starting haproxy_ctld
==> app-root/logs/nodejs.log-20150323084556 <==
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/
runtime/repo/app.js:43:16)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
DEBUG: Program node app.js exited with code 8
==> app-root/logs/nodejs.log-20150328012640 <==
at /var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/runtime/repo/node_mo
dules/mongoose/node_modules/mongodb/lib/mongodb/db.js:1194:31
at /var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/runtime/repo/node_mo
dules/mongoose/node_modules/mongodb/lib/mongodb/db.js:1903:9
at Server.Base._callHandler (/var/lib/openshift/550f3c0ffcf933066f0001b8/app
-root/runtime/repo/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connec
tion/base.js:453:41)
at /var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/runtime/repo/node_mo
dules/mongoose/node_modules/mongodb/lib/mongodb/connection/server.js:487:18
at MongoReply.parseBody (/var/lib/openshift/550f3c0ffcf933066f0001b8/app-roo
t/runtime/repo/node_modules/mongoose/node_modules/mongodb/lib/mongodb/responses/
mongo_reply.js:68:5)
at null.<anonymous> (/var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/ru
ntime/repo/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/ser
ver.js:445:20)
at EventEmitter.emit (events.js:95:17)
at null.<anonymous> (/var/lib/openshift/550f3c0ffcf933066f0001b8/app-root/ru
ntime/repo/node_modules/mongoose/node_modules/mongodb/lib/mongodb/connection/con
nection_pool.js:207:13)
at EventEmitter.emit (events.js:98:17)
DEBUG: Program node app.js exited with code 8

Node.js crash with multiple requests

My Node.js server crashes randomly in live use (and always under a Web Stress Tool run with 10+ request threads). Below is the code that I believe to be the root cause.
main.js
--------
app = express();
---------
app.get('/image/*', actions.download);
actions.js
var request = require('request');
exports.download = function (req, res) {
    var url = <Amazon s3 URL>; // placeholder for the real S3 URL
    req.pipe(request(url)).pipe(res);
};
When the server crashes, I get the following error in nohup:
stream.js:94
throw er; // Unhandled stream error in pipe.
^
Error: socket hang up
at createHangUpError (http.js:1476:15)
at Socket.socketOnEnd [as onend] (http.js:1572:23)
at Socket.g (events.js:180:16)
at Socket.emit (events.js:117:20)
at _stream_readable.js:943:16
at process._tickCallback (node.js:419:13)
Detailed log when I ran with sudo NODE_DEBUG=net node main.js and subjected the server to a stress test with 10 threads:
NET: 3017 Socket._read readStart
NET: 3017 afterWrite 0 { domain: null, bytes: 335, oncomplete: [Function: afterWrite] }
NET: 3017 afterWrite call cb
NET: 3017 onread ECANCELED 164640 4092 168732
NET: 2983 got data
NET: 2983 onSocketFinish
NET: 2983 oSF: not ended, call shutdown()
NET: 2983 destroy undefined
NET: 2983 destroy
NET: 2983 close
NET: 2983 close handle
Error: read ECONNRESET
at errnoException (net.js:904:11)
at TCP.onread (net.js:558:19)
This is caused by libuv in src/unix/stream.c. Here we have:
if (stream->shutdown_req) {
/* The UV_ECANCELED error code is a lie, the shutdown(2) syscall is a
* fait accompli at this point. Maybe we should revisit this in v0.11.
* A possible reason for leaving it unchanged is that it informs the
* callee that the handle has been destroyed.
*/
uv__req_unregister(stream->loop, stream->shutdown_req);
uv__set_artificial_error(stream->loop, UV_ECANCELED);
stream->shutdown_req->cb(stream->shutdown_req, -1);
stream->shutdown_req = NULL;
}
I've found the reason for this problem:
stream->shutdown_req is assigned by int uv_shutdown(...), so someone called uv_shutdown. Who called uv_shutdown?
uv_shutdown is not a simple function; see here. In the Node.js bindings it is wrapped as StreamWrap::Shutdown.
StreamWrap::Shutdown is registered in Node.js via SET_INSTANCE_METHOD("shutdown", StreamWrap::Shutdown, 0), and the shutdown method is part of wrappers/pipe_wrap and wrappers/tcp_wrap.
So someone called shutdown from nodejs/lib; it can be lib/net or lib/tls.
That means shutdown is called from the function onCryptoStreamFinish or the function onSocketFinish.
So you need to find who sent the shutdown request in your case. onread ECANCELED means that the stream (for example socket1.pipe(socket2)) has been killed.
BTW, I think you can work around your issue by using the special technique from lib/tls for destroying piped sockets:
pair.encrypted.on('close', function() {
  process.nextTick(function() {
    // Encrypted should be unpiped from socket to prevent possible
    // write after destroy.
    pair.encrypted.unpipe(socket);
    socket.destroy();
  });
});

Apachebench request count and Node.js script counter don't match

No doubt I'm doing something stupid, but I've been having problems running a simple Node.js app using the Nerve micro-framework. Testing with ApacheBench, it seems that the code in my single controller is being invoked more often than the app itself is being called.
I've created a test script like so:
'use strict';
(function () {
    var path = require('path');
    var sys = require('sys');
    var nerve = require('/var/www/libraries/nerve/nerve');
    var nerveCounter = 0;

    // r_server is a Redis client created elsewhere (not shown here)
    r_server.on("error", function (err) {
        console.log("Error " + err);
    });

    var app = [
        ["/", function(req, res) {
            console.log("nc = " + ++nerveCounter);
        }]
    ];
    nerve.create(app).listen(80);
}());
Start the server. From another box, run a load test:
/usr/sbin/ab -n 5000 -c 50 http://<snip>.com/
...
Complete requests: 5000
...
Percentage of the requests served within a certain time (ms)
...
100% 268 (longest request)
But the node script itself prints all the way up to:
nc = 5003
rc = 5003
In other words, the server is called 5000 times but the controller code runs 5003 times.
Any ideas what I'm doing wrong?
Updated
I changed the tone and content of this question significantly to reflect the help Colum, Alfred and GregInYEG gave me in realising that the problem did not lie with Redis or Nerve and probably lies with apachebench.
Program:
const PORT = 3000;
const HOST = 'localhost';
const express = require('express');
const app = module.exports = express.createServer();
const redis = require('redis');
const client = redis.createClient();
app.get('/incr', function(req, res) {
    client.incr('counter', function(err, reply) {
        res.send('incremented counter to:' + reply.toString() + '\n');
    });
});

app.get('/reset', function(req, res) {
    client.del('counter', function(err, reply) {
        res.send('resetted counter\n');
    });
});

app.get('/count', function(req, res) {
    client.get('counter', function(err, reply) {
        res.send('counter: ' + reply.toString() + '\n');
    });
});

if (!module.parent) {
    app.listen(PORT, HOST);
    console.log("Express server listening on port %d", app.address().port);
}
Conclusion
It works without any flaws on my computer:
$ cat /etc/issue
Ubuntu 10.10 \n \l
$ uname -a
Linux alfred-laptop 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 01:41:57 UTC 2010 i686 GNU/Linux
$ node -v
v0.2.6
$ npm install express hiredis redis
npm info build Success: redis@0.5.2
npm info build Success: express@1.0.3
npm info build Success: hiredis@0.1.6
$ ./redis-server --version
Redis server version 2.1.11 (00000000:0)
$ git clone -q git@gist.github.com:02a3f7e79220ea69c9e1.git gist-02a3f7e7; cd gist-02a3f7e7; node index.js
$ #from another tab
$ clear; curl http://localhost:3000/reset; ab -n 5000 -c 50 -q http://127.0.0.1:3000/incr > /dev/null; curl http://localhost:3000/count;
resetted counter
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 3000
Document Path: /incr
Document Length: 25 bytes
Concurrency Level: 50
Time taken for tests: 1.172 seconds
Complete requests: 5000
Failed requests: 4991
(Connect: 0, Receive: 0, Length: 4991, Exceptions: 0)
Write errors: 0
Total transferred: 743893 bytes
HTML transferred: 138893 bytes
Requests per second: 4264.61 [#/sec] (mean)
Time per request: 11.724 [ms] (mean)
Time per request: 0.234 [ms] (mean, across all concurrent requests)
Transfer rate: 619.61 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.5 0 7
Processing: 4 11 3.3 11 30
Waiting: 4 11 3.3 11 30
Total: 5 12 3.2 11 30
Percentage of the requests served within a certain time (ms)
50% 11
66% 13
75% 14
80% 14
90% 15
95% 17
98% 19
99% 24
100% 30 (longest request)
counter: 5000
