In order to tell NetworkManager to create a Wi-Fi access point over D-Bus using Node.js with the node-dbus library I need to provide an SSID as a byte array. As Node.js doesn't have the Blob class from client-side JavaScript my understanding is that I need to use a Buffer for this, but it's not working.
I can successfully turn a byte array into a string with the following code:
let bytes = new Uint8Array(ssidBytes);
let string = new TextDecoder().decode(bytes);
How do I reverse this to get a byte array from a string?
I've tried:
let ssidBytes = Buffer.from(ssid);
And I've tried:
let ssidBytes = [];
for (let i = 0; i < ssid.length; ++i) {
ssidBytes.push(ssid.charCodeAt(i));
}
Assuming there isn't another error in my code (or the library I'm using), neither of these seems to have the desired effect.
For more background information see https://github.com/Shouqun/node-dbus/issues/228 and https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues/663
Thanks
I found the solution in another post, here https://stackoverflow.com/a/36389863/1782967
let ssid = 'my-ap';
let ssidByteArray = [];
let buffer = Buffer.from(ssid);
for (var i = 0; i < buffer.length; i++) {
ssidByteArray.push(buffer[i]);
}
A more compact solution that does the same:
const ssid = 'my-ap';
const ssidByteArray = Array.from(Buffer.from(ssid));
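As a sanity check (assuming the SSID is plain UTF-8 text, which Buffer.from(string) produces by default), the round trip back through TextDecoder should reproduce the original string:
const ssid = 'my-ap';
const ssidByteArray = Array.from(Buffer.from(ssid)); // UTF-8 bytes by default

// Decode the byte array back to a string to confirm the conversion is lossless
const roundTripped = new TextDecoder().decode(new Uint8Array(ssidByteArray));
console.log(roundTripped === ssid); // true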
How to write a single file while reading from multiple input streams of the exact same file from different locations with NodeJS.
In case it's still not clear, maybe this helps:
I want to get more download performance. Let's say we have 2 locations for the same file and each can only provide 10 MB/s downstream, so I want to download one part from the first location and another part from the second location in parallel, to reach 20 MB/s in total.
So both streams need to be joined somehow, and both streams need to know the byte range they are downloading.
I have 2 examples:
var http = require('http')
var fs = require('fs')
// will write to disk __dirname/file1.zip
function writeFile(fileStream){
//...
}
// This example assumes downloading from 2 http locations
http.get('http://location1/file1.zip', writeFile)
http.get('http://location2/file1.zip', writeFile)
var fs = require('fs')
// will write to disk __dirname/file1.zip
function writeFile(fileStream){
//...
}
// this example is reading the same file from 2 different disks
writeFile(fs.createReadStream('/mount/volume1/file1.zip'))
writeFile(fs.createReadStream('/mount/volume2/file1.zip'))
How I think it would work:
The read stream needs to check whether a given content range has already been written before reading the next chunk from each source, and maybe each stream should start reading at a different (or random) position in the file.
If the total file content length is X, we divide it into smaller chunks and create a map where each entry has a fixed content length, so we know which parts we already have and which parts are currently being downloaded.
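A minimal sketch of such a chunk map, assuming a known total size and a fixed chunk length (all names here are illustrative, not from any library):
// Build a plan of fixed-size chunks and assign each one to a source (round robin).
function buildChunkPlan(totalSize, chunkSize, sources) {
  const plan = [];
  for (let start = 0; start < totalSize; start += chunkSize) {
    plan.push({
      start,
      end: Math.min(start + chunkSize, totalSize) - 1, // inclusive end, as in a Range header
      source: sources[plan.length % sources.length],
      status: 'pending' // 'pending' -> 'downloading' -> 'done'
    });
  }
  return plan;
}

// Example: a 25 MB file split into 5 MB chunks over two mirrors
const plan = buildChunkPlan(25 * 1024 * 1024, 5 * 1024 * 1024, [
  'http://location1/file1.zip',
  'http://location2/file1.zip'
]);
console.log(plan.length); // 5 chunks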
Trying to answer this question myself.
We can try a simple, optimistic parallel read:
const fs = require('fs');

const SIZE = 64; // read in 64 byte intervals
let buffers = [];
let bytesRead = 0;

function readParallel(filepath, callback) {
  fs.open(filepath, 'r', function (err, fd) {
    fs.fstat(fd, function (err, stats) {
      const fileSize = stats.size;
      while (bytesRead < fileSize) {
        const size = Math.min(SIZE, fileSize - bytesRead);
        const buffer = Buffer.alloc(size);
        const position = bytesRead; // read from this offset in the file
        const offset = 0;           // write to the start of the buffer
        const read = fs.readSync(fd, buffer, offset, size, position);
        buffers.push(buffer);
        bytesRead += read;
      }
      fs.close(fd, () => callback(null, Buffer.concat(buffers)));
    });
  });
}
// At the end: Buffer.concat(buffers) === the file content
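A possible way to call it, assuming the callback is invoked with the concatenated Buffer as in the corrected snippet above (the file paths are just placeholders):
readParallel('./file1.zip', (err, fileContent) => {
  // fileContent is the whole file as a single Buffer
  fs.writeFileSync('./file1.copy.zip', fileContent);
});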
fs.createReadStream() has an option you can pass to specify the start position:
let f = fs.createReadStream("myfile.txt", {start: 1000});
You could also open a normal file descriptor with fs.open(), then fs.read() one byte from a position right before where you want the stream to be positioned (using the position argument to fs.read()). Then you can pass that file descriptor into fs.createReadStream() as an option, and the stream will start with that file descriptor and position (though the start option to fs.createReadStream() is obviously a bit simpler).
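For reference, a minimal sketch of handing an already-open descriptor to fs.createReadStream(); here the start offset is passed explicitly rather than relying on the descriptor's current position (error handling omitted):
const fs = require('fs');

fs.open('myfile.txt', 'r', (err, fd) => {
  // The stream takes ownership of the fd and begins reading at byte 1000.
  const stream = fs.createReadStream(null, { fd, start: 1000 });
  stream.on('data', (chunk) => console.log(chunk.length));
});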
Using csv-parse with csv-stringify from the CSV Project.
const fs = require('fs');
const parse = require('csv-parse');
const stringify = require('csv-stringify')
const stringifier = stringify();
const writeFile = fs.createWriteStream('out.csv');
fs.createReadStream('file1.csv').pipe(parse()).pipe(stringifier).pipe(writeFile);
fs.createReadStream('file2.csv').pipe(parse()).pipe(stringifier).pipe(writeFile);
Here I parse each file separately (using a different parse stream for each source), then pipe both to the same stringify stream which concatenates them, then write to destination.
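One caveat worth mentioning: pipe() ends its destination by default when the source ends, so the first file to finish can close the shared stringify stream before the second one is done. If that bites, one option (mirroring the same setup) is to disable auto-end on the shared destination and end it manually once both sources have finished:
const fs = require('fs');
const parse = require('csv-parse');
const stringify = require('csv-stringify');

const stringifier = stringify();
const writeFile = fs.createWriteStream('out.csv');
stringifier.pipe(writeFile);

let pending = 2;
const done = () => { if (--pending === 0) stringifier.end(); };

for (const file of ['file1.csv', 'file2.csv']) {
  const source = fs.createReadStream(file).pipe(parse());
  source.pipe(stringifier, { end: false }); // don't let the first stream end the shared destination
  source.on('end', done);
}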
Range Locking
The answer is advisory locking; it is as simple as the way torrent clients do it:
1. split the whole file (or a part of it) into multiple smaller parts
2. lock each file range and fetch that range from a list of sources
3. use the file created in step 1 as the driver for a FIFO queue; it contains all the metadata
To get a file from multiple sources, a JS implementation would look like the following (assuming all sources serve the same file; I put no error handling in here):
const queue = [];
const sources = ['https://example.com/file', 'https://example1.com/file'];

// (assumes we are inside an async function or an ES module with top-level await)
const fileSize = await fetch(sources[0], { method: 'HEAD' })
  .then(({ headers }) => Number(headers.get('content-length')));
const targetBuffer = new Uint8Array(fileSize);

const charset = 'x-user-defined';
// Maps the bytes into the Unicode private use area so you can get raw bits back as chars
const binaryRawEnablingHeader = `text/plain; charset=${charset}`;
const requestDefaults = {
  headers: {
    'Content-Type': binaryRawEnablingHeader,
    'range': 'bytes=2-5,10-13'
  }
};

const downloadPlan = null; // TODO: some logic that puts those bytes into the target (WiP)

// use response.text() and then convert that to bytes via the
// Unicode private use area 0xF700-0xF7FF:
const convertToAbyte = (chars) =>
  new Array(chars.length)
    .fill(0)
    .map((_abyte, offset) => chars.charCodeAt(offset) & 0xff);
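A hypothetical sketch (names and offsets are placeholders, not part of the original) of how a single fetched range could be converted and copied into targetBuffer; note that this relies on the response text really being decoded with the x-user-defined charset, which fetch() itself does not guarantee:
// Hypothetical helper: fetch one byte range, convert the text to bytes, copy into place.
async function fetchRangeInto(source, start, end, target) {
  const response = await fetch(source, {
    headers: {
      'Content-Type': binaryRawEnablingHeader,
      'range': `bytes=${start}-${end}`
    }
  });
  const chars = await response.text();
  target.set(convertToAbyte(chars), start); // write the chunk at its absolute offset
}

// e.g. the first 1024 bytes from the first source (again inside an async context)
await fetchRangeInto(sources[0], 0, 1023, targetBuffer);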
I'd heard on the grapevine a while ago that reading from process.env is a hit to performance in Node. I wondered if someone can clarify whether this is still the case, and calls to process.env should be avoided or whether it makes no difference to performance?
Thanks!
You can set up your own test for this using process.hrtime(). Let's try reading it a bunch of times and see what we get:
const time = process.hrtime();
const NS_PER_SEC = 1e9;
const loopCount = 10000000;
let hrTime1 = process.hrtime(time);
for (var i = 0; i < loopCount; i++)
{
let result = process.env.TEST_VARIABLE
}
let hrTime2 = process.hrtime(time);
let ns1 = hrTime1[0] * NS_PER_SEC + hrTime1[1];
let ns2 = hrTime2[0] * NS_PER_SEC + hrTime2[1];
console.log(`Read took ${(ns2 - ns1)/loopCount} nanoseconds`);
The result on my machine (oldish Windows tower, Node v8.11.2):
Read took 222.5536641 nanoseconds
So around 0.2 microseconds per read.
This is pretty fast. When we talk about performance issues, everything is relative. If you really need to read this very frequently, it would be best to cache it.
To make this clear, let's test both scenarios:
// Cache
const test = process.env.TEST_VARIABLE;
let loopCount = 10000000;
console.time("process.env cached");
for (var i = 0; i < loopCount; i++) { let result = test }
console.timeEnd("process.env cached");
// No cache
loopCount = 10000000;
console.time("process.env uncached");
for (var i = 0; i < loopCount; i++) { let result = process.env.TEST_VARIABLE }
console.timeEnd("process.env uncached");
This takes ~10ms when caching, and ~2s when no variable is used to cache the value.
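In practice, the usual way to cache is to read the environment once at startup into a plain config object (the module name here is just an example) and require that everywhere else:
// config.js - read process.env once at module load time
module.exports = {
  testVariable: process.env.TEST_VARIABLE,
  port: Number(process.env.PORT) || 3000
};

// elsewhere.js - subsequent reads are plain property lookups
const config = require('./config');
console.log(config.testVariable);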
I'm trying to separate the 4 channels that are in a Buffer I receive from a 4-mic array ReSpeaker. I'm using Node.js and currently I use a spawn command like:
spawn('arecord', ['-r16000', '-fS16_LE', '-traw', '-c4', '-Dac108'])
and then pipe the output into a transformer where I split the Buffer into the 4 channels and save them to separate files to check the result:
const stream = require("stream");
const fs = require('fs');
class ChannelTransformer extends stream.Transform {
constructor(options) {
var write_1 = fs.createWriteStream('ch1', {encoding: 'binary'});
var write_2 = fs.createWriteStream('ch2', {encoding: 'binary'});
var write_3 = fs.createWriteStream('ch3', {encoding: 'binary'});
var write_4 = fs.createWriteStream('ch4', {encoding: 'binary'});
options.readableObjectMode = true;
options.writableObjectMode = true;
options.highWaterMark = 20000;
options.transform = (chunk, encoding, callback) => {
let channels = [[],[],[],[]];
for (let i = 0; i < chunk.length; i++) {
channels[i % 4].push(chunk[i])
}
write_1.write(new Uint8Array(channels[0]));
write_2.write(new Uint8Array(channels[1]));
write_3.write(new Uint8Array(channels[2]));
write_4.write(new Uint8Array(channels[3]));
callback();
};
super(options);
}
}
As a result of this code I get 4 files, and if I import them with Audacity I find that ch2 and ch4 have been correctly separated, while ch1 and ch3 are corrupted and result in white noise.
Am I missing something in the separation? I thought the audio was stored in the pattern:
[[ch1_0],[ch2_0],[ch3_0],[ch4_0],[ch1_1],[ch2_1],...]
Also, I don't get why, if the pattern I follow is not correct, 2 of the channels were separated successfully.
I've also tried to cast the chunk into something else like:
let source = new Int8Array(chunk);
and then in the for loop:
channels[i%4].push(source[i])
with different types like Float32Array, Uint8Array, Uint16Array, and Int16Array,
but the results are the same.
I've already verified that the 4-mic array is working correctly by using the command:
arecord -r16000 -fS16_LE -traw -c4 -Dac108 -I ch1 ch2 ch3 ch4
which produces 4 files as expected, each containing one channel.
For every test, I block each mic with my finger for a couple of seconds while speaking, so I can tell the difference between the channels.
Can anyone help me, or offer some hints?
Thanks!
OK, I figured out what the problem was.
Basically, I was recording with bitWidth = 16, and the Buffer object in Node is an instance of Uint8Array, so the pattern I was following was indeed correct, but because of the 8-bit array I had to assign 2 elements per channel, since the layout of the 8-bit array is:
[[ch1],[ch1],[ch2],[ch2],[ch3],[ch3],[ch4],[ch4],...]
Also, casting the array was useless, because the resulting 16-bit array didn't merge two 8-bit elements into one 16-bit element; instead, each 8-bit element became the low 8 bits of a 16-bit element whose high 8 bits are 0, like this:
8bitArray = [[11111111],[11111111],[22222222],[22222222],...]
16bitArray (casted) = [[0000000011111111],[0000000011111111],[0000000022222222],...]
So I created a method that merges the 8-bit array into the correct per-channel arrays, so it can be handled correctly based on the bitWidth you are recording with:
var bitMultipler = bitWidth / 8; // so I can handle any bitWidth with the same code
let channelsMap = new Map<number, Array<any>>();
for (let channel: number = 0; channel < totalChannels; channel++) {
channelsMap.set(channel, new Array())
}
/**
For each channel, push as many elements as needed based on the bitMultipler.
*/
let i = 0
while (i < chunk.length) {
for (let channel = 0; channel < totalChannels; channel++) {
for (let indexMultipler = 0; indexMultipler < bitMultipler; indexMultipler++) {
channelsMap.get(channel).push(chunk[i]);
i++;
}
}
}
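To check the result, one could then turn each entry of channelsMap into a Buffer and write it to a per-channel raw file, for example (file names here are just placeholders):
// Hypothetical follow-up: dump each de-interleaved channel to its own raw file.
const fs = require('fs');

for (const [channel, bytes] of channelsMap) {
  // bytes is a plain array of 8-bit values, still in the original sample order
  fs.appendFileSync(`ch${channel + 1}.raw`, Buffer.from(bytes));
}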
Is there a limit to the length of console.log output in Node.js? The following prints numbers up to 56462, then stops. This came up because we were returning datasets from MySQL and the output would just quit after 327k characters.
var out = "";
for (i = 0; i < 100000; i++) {
out += " " + i;
}
console.log(out);
The string itself seems fine, as this returns the last few numbers up to 99999:
console.log(out.substring(out.length - 23));
Returns:
99996 99997 99998 99999
This is using Node v0.6.14.
Have you tried writing that much on a machine with more memory?
According to Node source code console is writing into a stream: https://github.com/joyent/node/blob/cfcb1de130867197cbc9c6012b7e84e08e53d032/lib/console.js#L55
And streams may buffer the data into memory: http://nodejs.org/api/stream.html#stream_writable_write_chunk_encoding_callback
So if you put really a lot of data into a stream, you may hit the memory ceiling.
I'd recommend you split up your data and feed it into the process.stdout.write method; here's an example: http://nodejs.org/api/stream.html#stream_event_drain
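A small sketch of that idea (reusing the out string from the question): split the big string into chunks and pause for the 'drain' event whenever write() reports that the internal buffer is full:
// Write a large string in chunks, respecting back-pressure from stdout.
function writeInChunks(str, chunkSize) {
  let offset = 0;
  function writeNext() {
    while (offset < str.length) {
      const chunk = str.slice(offset, offset + chunkSize);
      offset += chunkSize;
      // write() returns false when the internal buffer is full;
      // wait for 'drain' before writing more.
      if (!process.stdout.write(chunk)) {
        process.stdout.once('drain', writeNext);
        return;
      }
    }
  }
  writeNext();
}

writeInChunks(out, 16 * 1024);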
I would recommend writing the output to a file when using Node > 6.0:
const fs = require('fs');
const { Console } = require('console');

const output = fs.createWriteStream('./stdout.log');
const errorOutput = fs.createWriteStream('./stderr.log');
// custom simple logger
const logger = new Console(output, errorOutput);
// use it like console
var count = 5;
logger.log('count: %d', count);
// in stdout.log: count 5
I'm trying to play some mp3 files in Node.js. The thing is that I manage to play them one by one, or even, as I want, in parallel. But what I also want is to be able to control the amplitude (gain), so that I can create a crossfade in the end. Could anyone help me understand what I need to do? (I want to use it in node-webkit, so I need a solution that is Node.js based with no external dependencies.)
This is what I've got so far:
var lame = require('lame'), Speaker = require('speaker'), fs = require('fs');
var audioOptions = {channels: 2, bitDepth: 16, sampleRate: 44100};
var decoder = lame.Decoder();
var stream = fs.createReadStream("music/ge.mp3", audioOptions).pipe(decoder).on("format", function (format) {
this.pipe(new Speaker(format))
}).on("data", function (data) {
console.log(data)
})
I customized the npm package pcm-volume to do that. To crossfade, provide two pcm audio buffers (output of your decoders). Pipe the result to your Speaker object.
Here is the main part of the modifications. In this case the crossfade happens at the scale of the provided buffer, but you can change that.
var l = buf.length;
var out = Buffer.alloc(l);
for (var i = 0; i < l; i += 2) {
  var volumeSunrise = 0.5 * this.volume * (1 - Math.cos(Math.PI * i / l));
  var volumeSunset = 0.5 * this.volume * (1 + Math.cos(Math.PI * i / l));
  var uint = Math.round(volumeSunrise * buf.readInt16LE(i) + volumeSunset * this.sunsetBuffer.readInt16LE(i));
  // you may want to clamp so that -32768 <= uint <= 32767 here, in case you use a volume higher than 1
  out.writeInt16LE(uint, i);
}
this.push(out);
callback();
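For context, here is a hypothetical wiring of such a crossfade transform between two decoders and the speaker; CrossfadeVolume is a placeholder name for the customized pcm-volume stream described above, not an actual API of that package:
// Hypothetical usage: decoderB's PCM feeds the transform as the "sunset" signal,
// while decoderA is piped through it and faded in as the "sunrise" signal.
const lame = require('lame');
const Speaker = require('speaker');
const fs = require('fs');

const decoderA = fs.createReadStream('music/ge.mp3').pipe(lame.Decoder());
const decoderB = fs.createReadStream('music/other.mp3').pipe(lame.Decoder());

decoderA.on('format', (format) => {
  const crossfader = new CrossfadeVolume(decoderB); // placeholder constructor
  decoderA.pipe(crossfader).pipe(new Speaker(format));
});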