Write float value in Node buffer - node.js

I am reading sensor data and need to send it over Bluetooth, so I am using the noble/bleno library to notify subscribers every time the value changes. I am confused about how to send the data as a buffer.
I have a value like value = 24.3756.
So I need to write this into a buffer:
let buf = new Buffer(2);
buf.writeIntLE(buf);
But when I convert it to an ArrayBuffer, the value shows only 24 (everything after the decimal point is lost).
How do I send the full float value and read it back as an ArrayBuffer?

First of all, please read the Node.js Buffer documentation.
Then don't use deprecated functions: if you need to create a new buffer, use Buffer.alloc or Buffer.allocUnsafe. When you create the buffer, make sure it is big enough to hold the data you want to write (a 32-bit float needs 4 bytes, not 2). Finally, use a method that matches the data you are writing: you are writing a floating-point number, so you have to use buf.writeFloatBE/buf.writeFloatLE. If you do everything I mentioned you'll end up with a correct solution:
const buffer = Buffer.allocUnsafe(4);
buffer.writeFloatLE(24.3756);
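If the receiving side reads the bytes as an ArrayBuffer, something along these lines should round-trip the value (a sketch; note that a single-precision float only keeps about 7 significant digits, so you get back approximately 24.3756):
// Read it back directly from the same buffer
console.log(buffer.readFloatLE(0)); // ~24.3756

// Or view the same bytes through an ArrayBuffer / Float32Array
const arrayBuffer = buffer.buffer.slice(
    buffer.byteOffset,
    buffer.byteOffset + buffer.byteLength
);
console.log(new Float32Array(arrayBuffer)[0]); // ~24.3756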

Related

How to save records with an Asset field using server-to-server CloudKit JS

I want to use server-to-server CloudKit JS to save a record with an Asset field.
The Asset field is an m4a audio file; after saving, the audio file is corrupted and won't play.
Apple's docs are not clear about the Asset field:
In a record that is being saved to the database, the value of an Asset field must be a window.Blob type. In the code fragment above, the type of the assetFile variable is window.File.
Docs:
https://developer.apple.com/documentation/cloudkitjs/cloudkit/database/1628735-saverecords
But in Node.js there is no Blob or File, so I filled it with a Buffer, like this:
var path = require("path");
var fs = require("fs");

var dstFile = path.join(__dirname, "../test.m4a");
var data = fs.readFileSync(dstFile);
let buffer = Buffer.from(data);
var rec = {
    recordType: "MyAttachment",
    fields: {
        ext: { value: ".m4a" },
        file: { value: buffer }
    }
};
//console.debug(rec);
mydatabase.newRecordsBatch().create(rec).commit().then(function (response) {
    if (response.hasErrors) {
        console.log(">>> saveAttachFile record failed");
        console.warn(response.errors[0]);
    } else {
        var createdRecord = response.records[0];
        console.log(">>> saveAttachFile record success:", createdRecord);
    }
});
The record is saved successfully, but when I download the audio from icloud.developer.apple.com/dashboard, the file is corrupted and won't play.
What's wrong with it? Thanks for any replies.
I was having the same problem and have found a working solution!
Remembering that CloudKitJS needs you to define your own fetch method, I implemented a custom one to see what was going on. I then attached a debugger on the custom fetch to inspect the data that was passing through it.
After stepping through the caller, I found that all asset values are transformed using their toString() method, but only when the library is embedded in Node.js; this is determined by the absence of the global window object.
When toString() is called on a Buffer, its contents are encoded to UTF-8 (by default), which causes binary assets to become malformed. If you're using node-fetch for your fetch implementation, it supports Buffer and stream.Readable, so this toString() call does nothing but harm.
The most unobtrusive fix I've found is to swap the toString() method on any Buffer or stream.Readable instances passed as asset field values. You should probably use stream.Readable, by the way, so that you don't load the entire asset into memory when uploading.
Anyway, here's what it looks like in practice:
// Put this somewhere in your implementation
const swizzleBuffer = (buffer) => {
    buffer.toString = () => buffer;
    return buffer;
};
// Use this asset value instead
{ asset: swizzleBuffer(fs.readFileSync(path)) }
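The same trick should also work for the streaming case recommended above; a sketch, assuming your fetch implementation (e.g. node-fetch) accepts stream.Readable bodies:
const fs = require('fs');

// Streaming variant: the whole asset is never held in memory at once
const swizzleStream = (readable) => {
    readable.toString = () => readable;
    return readable;
};

// Use this asset value instead
// { asset: swizzleStream(fs.createReadStream(path)) }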
Please be aware that this workaround mutates a Buffer in an ugly way (since Buffer apparently can't be extended). It's probably a good idea to design an API which doesn't use Buffer arguments so that you can mutate instances that only you create yourself to avoid unintended side effects anywhere else in your code.
Also, be sure to vendor (make a local copy of) CloudKitJS in your project, as this behavior may change in the future.
ORIGINAL ANSWER
I ran into the same problem and solved it by encoding my data using Base64. It appears that there's a bug in their SDK which mangles Buffer instances containing non-ASCII characters (which, um, seems problematic).
Anyway, try something like this:
const assetField = { value: Buffer.from(data.toString('base64'), 'ascii') }
Side note:
You'll need to decode the asset(s) on the device before using them. There's no way to do this efficiently without writing your own routines, as the methods included on Data / NSData instances require all data to be in memory.
This is a problem with CloudKitJS (and not the native CloudKit client / service), so the other option is to write your own routine to upload assets.
Neither of these options seems particularly great, but rolling your own at least means there aren't extra steps for clients to take in order to use the asset.

How to read an SVG file from a stream with Magick.NET

My application allows the user to upload images and send them to the service, which then converts them to another format and sends them back. We are adding support for the SVG file format, and I am running into an issue with reading the file from a byte array.
The issue is that when I initialize a MagickImageInfo object with the SVG Stream object, I get this error:
"no decode delegate for this image format '' # error/blob.c/BlobToImage/355"
I played around with it and am able to get past this error if I instead create a MagickImage object and supply it with an instance of MagickReadSettings where I set the Format to SVG explicitly.
The core problem is that the MagickImage code needs a hint as to what kind of file it is when it's an SVG. For other file types, it seems to be able to infer what kind of file it is. However, while I am able to supply the MagickImage class with what format the file is, the MagickImageInfo class doesn't have any parameters that I can give it to hint at the file type.
One possible solution would be to write the file to disk, then have MagickImageInfo class read the file from disk, but I really don't want to do this as it adds complexity to the service and makes it depend on disk write access.
Relevant code:
Working code:
var readSettings = new MagickReadSettings() { Format = MagickFormat.Svg };
using (MagickImage image = new MagickImage(stream, readSettings))
{
    image.Write(@"C:\test"); // Actual code doesn't write to disk
}
Not working code:
MagickImageInfo info = new MagickImageInfo(stream);
It appears that you found a missing feature. I found your post here and added an extra overload for the MagickImageInfo constructor. The following will be available in Magick.NET 7.0.3.9 and higher:
var readSettings = new MagickReadSettings() { Format = MagickFormat.Svg };
MagickImageInfo info = new MagickImageInfo(stream, readSettings);
Feel free to open an issue next time here: https://github.com/dlemstra/Magick.NET or here: https://magick.codeplex.com/discussions

Arduino: read GET values after ? in URL with Ethernet shield

Sorry if this has been asked before, but I cannot find anything on the internet. I just want to host a page on the Arduino Ethernet shield and, when I visit it from a browser with GET parameters (e.g. http://xxx.xxx.xx.xx/led.html?red=255,green=0,blue=255), change the LED's color. I cannot find how to send data from the browser to the Arduino.
The answer involves the Ethernet library (Ethernet.h), which provides the Client and Server interfaces and the low-level I/O; buffering the input and giving it context is up to you.
In the following code snippet ( of the sample code at http://arduino.cc/en/Tutorial/WebServer )
...
char c = client.read();
Serial.write(c);
...
the line char c = client.read() takes a byte from the request stream and assigns it to a char; the next line echoes that byte to the serial monitor, and the sample then performs conditional logic on each byte as it is read.
The conditional logic in that sample only cares about detecting a newline character \n on a blank line (the end of the HTTP headers), but the bytes being read, one per iteration, actually make up the raw request.
At minimal, a RAW HTTP GET Request looks like this:
GET /?first=John&Last=Doe HTTP/1.1
Host: localhost
So, to read the query string, you'll need to buffer the bytes being read from the stream.
Then you'll likely collect the whole buffer into a string and perform string operations on it, along with your conditional logic.

Can you pass meta data along with a stream?

When I pipe something like an image file through a stream, is there any way to send a meta object along with it?
My server gets sent an image from a user. The image gets pushed through a set of streams that perform various actions.
The final stream emits a data event and passes the resulting image buffer into a callback, but I lose all context for the user. I need to keep the resulting image tied to the user's ID and some other metadata.
Ideal:
stream.on('data', function(img, meta){
...
})
Thanks for any possible solutions!
In short, no, there's nothing built into Node.js to support including metadata with streams. You do have some other options, though, including:
You could use a closure to track the meta data separately from the stream. For example:
function handleImage(imageStream) {
    var meta = {...};
    imageStream.pipe(otherStreams).on('data', function(image) {
        // you now have `image` and `meta` variables at your disposal here.
    });
}
The downside of this is that the metadata is not available to your otherStreams.
This is a good solution if your other streams are third-party code outside of your control, or if they don't need to know about the metadata.
You could do something similar to HTTP headers, where all the data up to a certain point is metadata and everything after it is the image. (In HTTP, the delimiter is wherever \n\n occurs first.) All of your streams in the chain have to know about this and handle it, though.
If you know your metadata will always be in one chunk and none of your streams split or merge chunks, then you could simplify this a bit and just say that the first (or last) chunk is always metadata (see the sketch after this answer).
Switch to an object stream like Amoli mentioned in his answer. Here you would pass {image: imgData, meta: {...}}. You would then have to update your other streams to expect this format.
The main downside of this method, though, is that you either have to pass the metadata multiple times, cache it somewhere for each stream that needs it, or pass your entire image as one chunk (which kind of kills the entire point of "streams"). And, from what I've been told, node.js can optimize text/binary streams better than object streams. So, this probably isn't a good approach for your situation.
https://github.com/dominictarr/mux-demux might be helpful here. It combines multiple streams into one, so you could have separate image and meta streams. I'm not sure how well it would work for your situation though. You'd probably need to update all of your streams to be aware of it.
I know I said that all but the first option require modifying the other streams, but there is a way around that: you could create a generic "stream wrapper" that splits up the image and meta data and passes just the image data through the main stream, and has the meta data bypass it and go on to the next one down the chain. This gets ugly fast though, so probably not the best idea.
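To illustrate the "first chunk is metadata" simplification, a minimal sketch might look like this (assuming nothing upstream splits or merges chunks and that the metadata chunk is JSON):
const { Transform } = require('stream');

// Treat the very first chunk as JSON metadata; pass everything else through.
class FirstChunkMeta extends Transform {
    constructor(options) {
        super(options);
        this.meta = null;
    }
    _transform(chunk, encoding, callback) {
        if (this.meta === null) {
            this.meta = JSON.parse(chunk.toString('utf8'));
            this.emit('meta', this.meta); // downstream code can listen for this
            return callback(); // swallow the metadata chunk
        }
        callback(null, chunk); // pass image data through unchanged
    }
}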
Basically, whenever you want to read or write any objects which are not strings or buffers, you’ll need to put your stream into objectMode
Example (source):
function S3Lister(s3, options) {
    options || (options = {});
    stream.Readable.call(this, { objectMode: true });

    this.s3 = s3; // a knox-like client.
    this.marker = options.start;
    this.connecting = false;
    this.ended = false;
}
util.inherits(S3Lister, stream.Readable);
We set the stream to use objectMode as we want to return not just data but also some metadata.
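Applied to the image case, an object-mode transform along these lines (names are illustrative) would let each image buffer carry its metadata with it:
const { Transform } = require('stream');

// Wrap each image buffer together with per-user metadata in a single object,
// so downstream object-mode streams see both.
function withMeta(meta) {
    return new Transform({
        readableObjectMode: true, // emit objects downstream
        transform(chunk, encoding, callback) {
            callback(null, { image: chunk, meta: meta });
        }
    });
}

// imageStream.pipe(withMeta({ userId: 42 })).on('data', function (obj) {
//     // obj.image and obj.meta are both available here
// });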
For more information:
Node.js Docs stream object mode
An introduction to Node's streams
I created a module called metastream for this type of thing. (It is in npm).

Handling chunked responses from process.stdout 'data' event

I have some code which I can't seem to fix. It looks as follows:
var childProcess = require('child_process');
var spawn = childProcess.spawn;
var child = spawn('./simulator', []);

child.stdout.on('data', function (data) {
    console.log(data);
});
This is all on the backend of my web application, which runs a specific type of simulation. The simulator executable is a C program that runs a loop, waiting to be passed data via its standard input. When the inputs for the simulation come in (i.e. from the client), I parse them and then write data to the child process's stdin as follows:
child.stdin.write(INPUTS);
Now the data coming back is 40,000 bytes, give or take, but it seems to be getting broken into chunks of 8192 bytes. I've tried adjusting the standard output buffer of the C program, but that doesn't fix it. Is there a limit on the size of a 'data' event imposed by Node.js? I need the data to come back as one chunk.
The buffer chunk sizes are applied in node. Nothing you do outside of node will solve the problem. There is no way to get what you want from node without a little extra work in your messaging protocol. Any message larger than the chunk size will be chunked. There are two ways you can handle this issue.
If you know the total output size before you start to stream out of C, prepend the message length to the data so the Node process knows how many bytes to collect before treating the message as complete (a sketch of this follows below).
Determine a special character you can append to the message you are sending from the C program. When Node sees that character, it knows it has reached the end of that message.
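For example, a minimal sketch of the length-prefix approach, assuming the C program writes a 4-byte big-endian length in front of each message:
let pending = Buffer.alloc(0);

child.stdout.on('data', function (chunk) {
    pending = Buffer.concat([pending, chunk]);
    // A message is complete once the declared number of bytes has arrived
    while (pending.length >= 4) {
        const messageLength = pending.readUInt32BE(0);
        if (pending.length < 4 + messageLength) break; // wait for more chunks
        const message = pending.slice(4, 4 + messageLength);
        pending = pending.slice(4 + messageLength);
        console.log('complete message:', message.length, 'bytes');
    }
});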
If you are dealing with I/O in a web application, you really want to stick with the async methods. You need something like the following (untested); there is a good example of how to consume the Stream API in the docs:
var data = '';

child.stdout.on('data', function (chunk) {
    data += chunk;
});

child.stdout.on('end', function () {
    // do something with var data
});
I ran into the same problem. I tried many different things and was starting to get annoyed. I tried prepending and appending with special characters. Maybe I was stupid but I just couldn't get it right.
I ran into a module called linerstream which basically parses every chunk until it sees an EOF. You can use it like this:
const Linerstream = require('linerstream');

child.stdout.pipe(new Linerstream()).on('data', (data) => {
    // data here is complete and not chunked
});
The important part is that you do have to write each message to stdout as a line that ends with a newline; otherwise it doesn't know where the message ends.
I can say this worked for me. Hopefully it helps other people.
ppejovic's solution works, but I prefer concat-stream.
var concat = require('concat-stream');

child.stdout.pipe(concat(function (data) {
    // all your data ready to be used.
}));
There are a number of good stream helpers worth looking into based on your problem area. Take a look at substack's stream-handbook.
