I am using a third-party library that requires I pass a Uint8List to display an image in a PDF. Their example has me obtain the image as a File and read the bytes out.
PdfBitmap(file.readAsBytesSync())
This system is great when I am obtaining an image from a server, but I want to display an image stored in local assets.
What I tried to implement was this code:
Future<File> getImageFileFromAssets(String path) async {
  final byteData = await rootBundle.load('assets/$path');
  final file = File('${(await getTemporaryDirectory()).path}/$path');
  await file.writeAsBytes(byteData.buffer.asUint8List(byteData.offsetInBytes, byteData.lengthInBytes));
  return file;
}
This returns the error 'No implementation found for method getTemporaryDirectory on channel plugins.flutter.io/path_provider'.
If anyone knows how to get an asset image as a File on the web, it would be greatly appreciated.
Why would you want to write byte data to a file just to read it again? Just pass your byte data directly to the constructor that requires it. This should be changed in both your web and mobile implementations, as it avoids the disk round trip and will end up being far faster.
final byteData = await rootBundle.load('assets/$path');
PdfBitmap(byteData.buffer.asUint8List())
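For reference, here is a minimal sketch of the helper with the temp file removed; it should work on web as well, since nothing touches the file system (the asset name in the usage comment is just an example):
import 'dart:typed_data';

import 'package:flutter/services.dart' show rootBundle;

// Load a bundled asset and return its raw bytes.
Future<Uint8List> getImageBytesFromAssets(String path) async {
  final byteData = await rootBundle.load('assets/$path');
  return byteData.buffer
      .asUint8List(byteData.offsetInBytes, byteData.lengthInBytes);
}

// Usage with the PDF library from the question:
// final image = PdfBitmap(await getImageBytesFromAssets('logo.png'));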
I'm working with the Etsy API, uploading images like in this example, and it requires the images to be in binary format. Here is how I'm getting the image binary data:
async function getImageBinary(url) {
  const imageUrlData = await fetch(url);
  const buffer = await imageUrlData.buffer();
  return buffer.toString("binary");
}
However, Etsy says it is not a valid image file. How can I get the image in the correct format, or convert it into a valid binary format?
Read this for a working example of the Etsy API:
https://github.com/etsy/open-api/issues/233#issuecomment-927191647
The Etsy API is buggy and its guide is inconsistent. You might think of using 'binary' encoding for the buffer because the docs say the data type is string, but you actually don't need to. Just keep the default encoding and pass the raw buffer.
Also, there is currently a bug with image upload: try removing the Content-Type header. It's best to read the link above.
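As a rough sketch of what the working upload looks like (the endpoint path, form field name, and header names here are assumptions based on the Etsy v3 docs and the linked issue, so verify them there):
const fs = require("fs");
const fetch = require("node-fetch");
const FormData = require("form-data");

async function uploadListingImage(shopId, listingId, filePath, apiKey, accessToken) {
  const form = new FormData();
  // Append the raw Buffer with the default encoding - no toString("binary")
  form.append("image", fs.readFileSync(filePath), { filename: "photo.jpg" });

  const response = await fetch(
    `https://openapi.etsy.com/v3/application/shops/${shopId}/listings/${listingId}/images`,
    {
      method: "POST",
      // Let form-data supply the multipart Content-Type (with boundary);
      // do not set an image/jpeg Content-Type yourself.
      headers: {
        "x-api-key": apiKey,
        Authorization: `Bearer ${accessToken}`,
        ...form.getHeaders(),
      },
      body: form,
    }
  );
  return response.json();
}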
I want to use server-to-server CloudKit JS to save a record with an Asset field.
The Asset field is an m4a audio file. After saving, the audio file is corrupted and will not play.
Apple's documentation is not clear about the Asset field:
In a record that is being saved to the database, the value of an Asset field must be a window.Blob type. In the code fragment above, the type of the assetFile variable is window.File.
Docs:
https://developer.apple.com/documentation/cloudkitjs/cloudkit/database/1628735-saverecords
But in Node.js there is no Blob or File, so I filled it with a Buffer, like this:
var dstFile = path.join(__dirname,"../test.m4a");
var data = fs.readFileSync(dstFile);
let buffer = Buffer.from(data);
var rec = {
  recordType: "MyAttachment",
  fields: {
    ext: { value: ".m4a" },
    file: { value: buffer }
  }
}
//console.debug(rec);
mydatabase.newRecordsBatch().create(rec).commit().then(function (response) {
  if (response.hasErrors) {
    console.log(">>> saveAttachFile record failed");
    console.warn(response.errors[0]);
  } else {
    var createdRecord = response.records[0];
    console.log(">>> saveAttachFile record success:", createdRecord);
  }
});
The record is saved successfully.
But when I download the audio from icloud.developer.apple.com/dashboard, the audio file is corrupted and will not play.
What's wrong with it? Thank you for any replies.
I was having the same problem and have found a working solution!
Remembering that CloudKitJS needs you to define your own fetch method, I implemented a custom one to see what was going on, then attached a debugger to it to inspect the data passing through.
After stepping through the caller, I found that all asset values are transformed using their toString() method, but only when the library is embedded in Node.js (which it detects by the absence of the global window object).
When toString() is called on a Buffer, its contents are encoded to UTF-8 (by default), which causes binary assets to become malformed. If you're using node-fetch for your fetch implementation, it supports Buffer and stream.Readable, so this toString() call does nothing but harm.
The most unobtrusive fix I've found is to swap out the toString() method on any Buffer or stream.Readable instances passed as asset field values. You should probably use stream.Readable, by the way, so that you don't load the entire asset into memory when uploading.
Anyway, here's what it looks like in practice:
// Put this somewhere in your implementation
const swizzleBuffer = (buffer) => {
  buffer.toString = () => buffer;
  return buffer;
};
// Use this asset value instead
{ asset: swizzleBuffer(fs.readFileSync(path)) }
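The same swizzle should work for the stream.Readable route mentioned above; a sketch (with fs.createReadStream standing in for however you open the asset):
const swizzleStream = (stream) => {
  stream.toString = () => stream;
  return stream;
};

// Use this asset value to avoid loading the whole file into memory
{ asset: swizzleStream(fs.createReadStream(path)) }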
Please be aware that this workaround mutates a Buffer in an ugly way (since Buffer apparently can't be extended). It's probably a good idea to design an API that doesn't take Buffer arguments, so that you only mutate instances you create yourself and avoid unintended side effects elsewhere in your code.
Also, be sure to vendor (make a local copy of) CloudKitJS in your project, as this behavior may change in the future.
ORIGINAL ANSWER
I ran into the same problem and solved it by encoding my data using Base64. It appears that there's a bug in their SDK which mangles Buffer instances containing non-ascii characters (which, um, seems problematic).
Anyway, try something like this:
const assetField = { value: Buffer.from(data.toString('base64'), 'ascii') }
Side note:
You'll need to decode the asset(s) on the device before using them. There's no way to do this efficiently without writing your own routines, as the methods included on Data / NSData instances require all data to be in memory.
This is a problem with CloudKitJS (and not the native CloudKit client / service), so the other option is to write your own routine to upload assets.
Neither of these options seems particularly great, but rolling your own at least means there aren't extra steps for clients to take in order to use the asset.
My application allows the user to upload images and send them to the service, which then converts them to another format and sends them back. We are adding support for the SVG file format, and I am running into an issue with reading the file from a byte array.
The issue is that when I initialize a MagickImageInfo object with the SVG Stream object, I get this error:
"no decode delegate for this image format '' # error/blob.c/BlobToImage/355"
I played around with it and am able to get past this error if I instead create a MagickImage object and supply it with an instance of MagickReadSettings where I set the Format to SVG explicitly.
The core problem is that the MagickImage code needs a hint as to what kind of file it is when it's an SVG. For other file types, it seems to be able to infer what kind of file it is. However, while I am able to supply the MagickImage class with what format the file is, the MagickImageInfo class doesn't have any parameters that I can give it to hint at the file type.
One possible solution would be to write the file to disk, then have MagickImageInfo class read the file from disk, but I really don't want to do this as it adds complexity to the service and makes it depend on disk write access.
Relevant code:
Working code:
var readSettings = new MagickReadSettings() { Format = MagickFormat.Svg };
using (MagickImage image = new MagickImage(stream, readSettings))
{
    image.Write(@"C:\test"); // Actual code doesn't write to disk
}
Not working code:
MagickImageInfo info = new MagickImageInfo(stream);
It appears that you found a missing feature. I found your post here and added an extra overload for the MagickImageInfo constructor. The following will be available in Magick.NET 7.0.3.9 and higher:
var readSettings = new MagickReadSettings() { Format = MagickFormat.Svg };
MagickImageInfo info = new MagickImageInfo(stream, readSettings);
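The info object then exposes the basic metadata without a full decode; continuing the snippet above (property names from MagickImageInfo):
// Read the detected format and dimensions without decoding the whole image
Console.WriteLine("{0}: {1}x{2}", info.Format, info.Width, info.Height);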
Feel free to open an issue next time here: https://github.com/dlemstra/Magick.NET or here: https://magick.codeplex.com/discussions
I am using a Web Activity to launch the default Firefox camera from my web app on Firefox OS. I am able to launch the default camera and take a picture, and I get this.result as the return value in the pick success handler.
Now I need to get the file path where the image gets saved, and also the image file name.
I tried to parse this.result.blob, but couldn't find a path or file-related parameter.
Below is the code I'm using
var activity = new MozActivity({
  // Ask for the "pick" activity
  name: "pick",
  // Provide the data required by the filters of the activity
  data: {
    type: "image/jpeg"
  }
});
activity.onsuccess = function() {
  var picture = this.result;
  console.log("A picture has been retrieved");
};
The image file name is not returned, as you can see from the code. If you need the file name (I can't really think of a very good use case, to be honest), you can iterate over the pictures storage in the DeviceStorage API and get the last saved file. It's probably the one from the camera (compare blobs to be totally sure).
In your success handler, you will get the file name if you use:
this.result.blob.name
And you can get a URL for the file (an object URL, not a device file path) as:
window.URL.createObjectURL(this.result.blob);
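Putting the two together, a sketch of the success handler (the object URL is usable inside your app, e.g. as an img src, but it is not a device-storage path):
activity.onsuccess = function() {
  var blob = this.result.blob;
  console.log("File name: " + blob.name);
  // Create a URL that can be used inside the app (e.g. as an <img> src)
  var url = window.URL.createObjectURL(blob);
  console.log("Blob URL: " + url);
};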
I am using abcPDF to dynamically create PDFs.
I want to save these PDFs for clients to retrieve any time they want. The easiest way (and the way I do now on my current server) is to simply save the finished PDF to the file system.
It seems I am stuck with using blobs. Luckily, abcPDF can save to a stream as well as a file. Now, how do I wire up a stream to a blob? I have found code that shows the blob taking a stream, like:
blob.UploadFromStream(theStream, options);
The abcPDF function looks like this:
theDoc.Save(theStream)
I do not know how to bridge this gap.
Thanks!
Brad
As an alternative that doesn't require holding the entire file in memory, you might try this:
using (var stream = blob.OpenWrite())
{
    theDoc.Save(stream);
}
EDIT
Adding a caveat here: if the save method requires a seekable stream, I don't think this will work.
Given the situation, and not knowing the full list of overloads of abcPDF's Save() method, it seems that you need a MemoryStream. Something like:
using (MemoryStream ms = new MemoryStream())
{
    theDoc.Save(ms);
    ms.Seek(0, SeekOrigin.Begin);
    blob.UploadFromStream(ms, options);
}
This should do the job. But if you are dealing with big files and expecting a lot of traffic (lots of simultaneous PDF creations), you might just go for a temp file: write the PDF to a temp file, then immediately upload the temp file to the blob.
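For example, a sketch of the temp-file variant, assuming the same blob client as above (the exact upload-from-file method name can differ between storage SDK versions):
// Write the PDF to a temp file, upload it, then clean up
string tempPath = Path.GetTempFileName();
try
{
    theDoc.Save(tempPath);           // abcPDF can save directly to a file path
    blob.UploadFromFile(tempPath);   // upload the file's contents to the blob
}
finally
{
    File.Delete(tempPath);           // remove the temp file either way
}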