I'm trying to upload images to ImgBB using Node.js and GraphQL.
I have an uploadImage mutation that takes an image in the form of a data URL string and goes as follows:
import parseDataUrl from "data-uri-to-buffer";
// ...
{
  Mutation: {
    async uploadImage(_, { dataUrl }) {
      const buffer = parseDataUrl(dataUrl); // https://www.npmjs.com/package/data-uri-to-buffer
      if (buffer.byteLength > 10 * 10 ** 6)
        throw new Error("The image exceeds the maximum size of 10 MB");
      const body = new FormData();
      body.append("image", buffer);
      const result = await fetch(
        `https://api.imgbb.com/1/upload?key=${process.env.IMGBB_KEY}`,
        {
          method: "post",
          headers: { ...body.getHeaders() },
          body
        }
      ).then<any>(result => result.json());
      if (!result.success || !result.url) {
        const msg = result.error?.message;
        throw new Error(
          `There was an error during upload${msg ? `: ${msg}` : ""}`
        );
      }
      return result.url;
    }
  }
}
body.getHeaders() contains:
{
  'content-type': 'multipart/form-data; boundary=--------------------------656587403243047934588601'
}
(I'm using node-fetch)
But no matter the combinations of query params, headers and body I use, I always end up getting this error:
{
  status_code: 400,
  error: { message: 'Undefined array key "scheme"', code: 0 },
  status_txt: 'Bad Request'
}
I can't find anything about this error. Do you have any ideas?
There are multiple things you can do to resolve your issue.
An important thing with FormData: you need to explicitly provide a file name for the image buffer if one isn't already included. Uploading without a name makes the API throw the same error you mentioned.
body.append("image", buffer, "addSomeImageName.png");
The API does not explicitly require any headers, so you can remove them:
{
  method: "post",
  headers: { ...body.getHeaders() }, // This can be removed.
  body
}
The logic you are using to check the result is faulty and will always throw an error even when the upload succeeds, because the URL lives at result.data.url rather than result.url.
if (!result.success || !result.url) { // Check for the success flag only.
  const msg = result.error?.message;
  throw new Error(
    `There was an error during upload${msg ? `: ${msg}` : ""}`
  );
}
This is the block I tested, and it works fine:
import parseDataUrl from "data-uri-to-buffer";
import fetch from "node-fetch";
import FormData from "form-data";

async function uploadImage({ dataUrl }) {
  const buffer = parseDataUrl(dataUrl);
  if (buffer.byteLength > 10 * 10 ** 6)
    throw new Error("The image exceeds the maximum size of 10 MB");
  const body = new FormData();
  body.append("image", buffer, "someImageName.png");
  const result = await fetch(
    `https://api.imgbb.com/1/upload?key=<your-api-key>`,
    {
      method: "post",
      // headers: { ...body.getHeaders() },
      body
    }
  ).then(result => result.json())
  // .then(data => {
  //   console.log(data); // Working fine here too.
  // });
  console.log("-------result--------\n", result); // Result is fine.
  // Logic testing.
  console.log(result.success);
  console.log(result.url);
  console.log(!result.success);
  console.log(!result.url);
  console.log(!result.success || !result.url);
  if (!result.success || !result.url) {
    const msg = result.error?.message;
    console.log(`There was an error during upload${msg ? `: ${msg}` : ""}`);
    // throw new Error(
    //   `There was an error during upload${msg ? `: ${msg}` : ""}`
    // );
  }
  return result.url;
}
console.log("--------------------- console result ------------------------")
console.log(uploadImage({
dataUrl: "data:image/gif;base64,R0lGODlhEAAQAMQAAORHHOVSKudfOulrSOp3WOyDZu6QdvCchPGolfO0o/XBs/fNwfjZ0frl3/zy7////wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAkAABAALAAAAAAQABAAAAVVICSOZGlCQAosJ6mu7fiyZeKqNKToQGDsM8hBADgUXoGAiqhSvp5QAnQKGIgUhwFUYLCVDFCrKUE1lBavAViFIDlTImbKC5Gm2hB0SlBCBMQiB0UjIQA7"
}));
One thing that seems odd to me:
const buffer = parseDataUrl(dataUrl); // https://www.npmjs.com/package/data-uri-to-buffer
The package in the comment only offers the function dataUriToBuffer, which you are not using. Are you using parseDataUrl from jsdom? And are you using node-fetch's own FormData implementation, as the documentation says you should?
Please append your question with all relevant import statements, and please also share the contents of body.getHeaders().
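For reference, the documented usage would look something like this (a sketch; recent versions of the package use a named export, and the exact return shape varies by version):
// data-uri-to-buffer's documented API (v4+ uses a named export):
import { dataUriToBuffer } from "data-uri-to-buffer";

const buffer = dataUriToBuffer(dataUrl); // exposes the parsed MIME type via .type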
I'm trying to implement a multiple file loader in my web app, i.e. I want to upload a folder containing multiple files to a given path on my local machine.
I'm using Svelte for the frontend part and Express for the backend part.
So far I've done the following. Client side:
async function load_report() {
  let first_input = document.getElementById("folder");
  let files_list = Array.from(first_input.files);
  let local_files_paths = files_list.map((f) => f.webkitRelativePath);
  const payload = new FormData();
  for (let i = 0; i < files_list.length; i++) {
    payload.append("files", files_list[i]);
    payload.append("local_files_paths", local_files_paths[i]);
  }
  try {
    isUploading = true; // to disable the buttons for other uploading operations
    const response = await fetch("/upload_report", {
      //signal: signal,
      method: "POST", // or "PUT"
      body: payload,
      // No content-type! With a FormData object, the Fetch API sets this automatically.
      // Doing so manually can lead to an error.
    });
    isUploading = false;
    let fetch_msg = await response.json();
    if (!response.ok) {
      upload_report_error = `${fetch_msg["title"]}: ${fetch_msg["message"]}`;
      alert.addAlertToStore(
        "danger",
        fetch_msg["title"],
        fetch_msg["message"]
      );
      console.error(fetch_msg["message"]);
    } else {
      alert.addAlertToStore(
        "success",
        fetch_msg["title"],
        fetch_msg["message"]
      );
    }
  } catch (e) {
    console.error(e);
  } finally {
    //$alert = `Todo '${name}' has been added`;
    first_input.value = null;
  }
}
and also in my onMount() function:
onMount(() => {
  window.onbeforeunload = function () {
    if (isUploading) {
      return "You are uploading! CHILL OUT!";
    }
  };
});
Server-side, the endpoint is handled in this way:
app.post("/upload_report", async (req, res) => {
  if (!req.files) {
    return res.status(400).send({ title: 'upload_folder_error', message: 'No files were uploaded.' });
  }
  let date;
  try {
    date = req.body.local_files_paths[0].match(/([0-9_-]+)\//)[1];
  } catch (e) {
    return res.status(500).send({ title: 'upload_folder_error', message: e.message });
  }
  if (fs.existsSync(`./app/${date}/folder`)) {
    return res.status(500).send({ title: 'upload_folder_error', message: 'this folder already exists' });
  }
  for (let i = 0; i < req.body.local_files_paths.length; i++) {
    let _file = req.files.files[i];
    // move photo to uploads directory
    _file.mv(`*some_path*`);
  }
  console.log('DONE');
  return res.status(200).send({ title: 'upload_folder_success', message: 'Folder correctly uploaded' });
});
My problem is that when I try to close the tab, a pop-up appears asking whether I really want to close it, and I can either confirm or cancel. But when I confirm, I would expect the fetch to be aborted as a consequence (correct me if I'm wrong); instead it still reaches the server (i.e. I can see my server logging "DONE") and the files are correctly uploaded. What am I missing here?
PS. I also tried using an AbortController and calling abort() inside the onbeforeunload event, but that seemed wrong to me; a rough sketch of the attempt is below.
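(A reconstructed sketch of that attempt, not my exact code:)
const controller = new AbortController();

// inside load_report():
const response = await fetch("/upload_report", {
  method: "POST",
  body: payload,
  signal: controller.signal, // ties the request to the controller
});

// inside onMount():
window.onbeforeunload = function () {
  if (isUploading) {
    controller.abort(); // fires even if the user then cancels the dialog
    return "You are uploading! CHILL OUT!";
  }
};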
Could you please help me to figure it out? Thanks!
I'm using html-pdf to generate a PDF document on the server, and I'm then exporting it to the frontend to be downloaded. The PDF generates correctly on the server, but all the client downloads is a PDF with the correct number of pages and no content. I'm saving the file locally on the server, and if I pull that document up in my browser it is indeed formatted how I want it; everything is correct. But no matter what I do, all the client receives is several blank pages.
I've read in several SO posts about byte-shaving, and that the content isn't being received correctly because of incorrect formatting (i.e. the document is in UTF-8 or some such thing) and needs to be in base64. I was successful in creating a base64 document (not shown here), but that created a new problem ("Document not readable" when the document was downloaded); it also broke the server-side document, which was then blank with no content.
I'm at my wits' end. The document is correct, but I cannot receive it correctly on the client. I'm hoping some fresh eyes can make sense of what I can't see.
server
const pdf = require("html-pdf");
const asyncHandler = require("../middleware/async");
const { error: errorHandler } = require("../middleware/error");
const fs = require("fs");
const path = require("path");
const Deck = require("../models/deckModel");
const DeckListTemp = require("../templates/DeckListTemp");
const Dynamic = require("../models/dynamicModel");
/**
 * @description Generate PDF document of deck list
 * @param {Number} deckId - Deck ID
 * @returns {String} - PDF document
 * @author Austin Howard
 * @version 1.0.0
 * @since 1.0.0
 * @date 2022-5-30
 */
module.exports = asyncHandler(async (req, response, next) => {
  try {
    // find the deck
    const deck = await Deck.findById(req.params.deckId);
    // get the logo of the application
    const logo = await Dynamic.findOne({ type: "Logo" });
    // check to see if the deck and logo exist
    if (!deck || !logo) {
      // the Express response object is named `response` here (`res` was undefined in this scope)
      return response.status(404).json({
        message: "Cannot find deck or logo on Server",
      });
    }
    // need to sort cards by name (sort is synchronous, no await needed)
    deck.cards.sort((a, b) => {
      if (a.name < b.name) {
        return -1;
      } else if (a.name > b.name) {
        return 1;
      } else {
        return 0;
      }
    });
    // console log one card so we can see what we're working with
    // console.log(deck.cards[0]);
    // pass the deck to the template
    const html = DeckListTemp(deck, logo.value);
    // set up the html-to-pdf converter
    // we need to url encode the deck name in case there are any special characters
    // basically we just want to make sure any forward slashes are not in the deck name
    // so we replace them with a space
    // since any forward slash would cause the pdf to go to a subdirectory
    // and we don't want that
    deck.deck_name = deck.deck_name.replace(/\//g, " ");
    // we need to make the pdf a promise so we can await its creation
    await Promise.all([
      // create the pdf
      pdf
        .create(html, {
          encoding: "UTF-8",
          format: "A3",
          border: {
            top: "1in",
            right: "1in",
            bottom: "1in",
            left: "1in",
          },
        })
        .toFile(
          `${__dirname}../../../public/pdf/${deck.deck_name
            .substring(0, 50)
            .replace(/\//g, " ")}.pdf`,
          async function (err, res) {
            try {
              if (err) {
                console.log(err);
              } else {
                console.log(`Document created for ${deck.deck_name}`);
                // create a readable stream from the file
                console.log(
                  `creating readable stream from ${deck.deck_name
                    .substring(0, 50)
                    .replace(/\//g, " ")}.pdf`
                );
                // pipe the readable stream to the response
                // need to create a buffer then decode that buffer as base64 so that we can send it to the client
                // (readFileSync and Buffer.from are synchronous, so no await is needed)
                const buffer = Buffer.from(
                  fs.readFileSync(
                    `${__dirname}../../../public/pdf/${deck.deck_name
                      .substring(0, 50)
                      .replace(/\//g, " ")}.pdf`
                  )
                );
                response.setHeader(
                  "Content-disposition",
                  `attachment; filename=${deck.deck_name}.pdf`
                );
                await response.sendFile(
                  path
                    .join(
                      `${__dirname}../../../public/pdf/${deck.deck_name
                        .substring(0, 50)
                        .replace(/\//g, " ")}.pdf`
                    )
                    .toString("base64"),
                  (err) => {
                    if (err) {
                      console.log(err);
                    } else {
                      console.log("downloading");
                    }
                  }
                );
                // fs.unlinkSync(
                //   `${__dirname}../../../public/pdf/${deck.deck_name
                //     .substring(0, 50)
                //     .replace(/\//g, " ")}.pdf`
                // );
              }
            } catch (error) {
              console.error(error);
            }
          }
        ),
    ]);
  } catch (error) {
    console.error(error);
    response.status(500).json({
      success: false,
      message: `Server Error - ${error.message}`,
    });
  }
});
client
import axios from "axios";
import {
  DECK_PREVIEW_REQUEST,
  DECK_PREVIEW_SUCCESS,
} from "../../constants/deckConstants";
import { setAlert } from "../../utils/alert";

export const generatePdf = (deck) => async (dispatch) => {
  try {
    dispatch({ type: DECK_PREVIEW_REQUEST });
    const config = {
      headers: {
        "Content-Type": "application/force-download",
      },
    };
    const data = await axios.post(`/api/deck/${deck._id}/pdf`, config);
    // console.log(data);
    // Create blob link to download
    // const blob = new Blob([data.data], { type: "application/pdf" });
    console.log(data.data);
    const url = window.URL.createObjectURL(
      new Blob([data.data], { type: "application/octet-stream" })
    );
    const link = document.createElement("a");
    link.href = url;
    link.setAttribute("download", `${deck.deck_name}.pdf`);
    // Append the link element to the page
    document.body.appendChild(link);
    // Start download
    link.click();
    // Clean up and remove the link
    link.parentNode.removeChild(link);
    dispatch({ type: DECK_PREVIEW_SUCCESS });
  } catch (error) {
    console.error(error);
    dispatch(setAlert(`problem generating pdf ${error}`, "danger"));
  }
};
Hi, I am quite new to docxtemplater, but I absolutely love how it works. Right now I am able to generate a new docx document as follows:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const { Storage } = require('@google-cloud/storage');
var PizZip = require('pizzip');
var Docxtemplater = require('docxtemplater');

admin.initializeApp();
const storage = new Storage(); // instantiate the client used below
const BUCKET = 'gs://myapp.appspot.com';

exports.test2 = functions.https.onCall((data, context) => {
  // The error object contains additional information when logged with JSON.stringify
  // (it contains a properties object containing all suberrors).
  function replaceErrors(key, value) {
    if (value instanceof Error) {
      return Object.getOwnPropertyNames(value).reduce(function(error, key) {
        error[key] = value[key];
        return error;
      }, {});
    }
    return value;
  }

  function errorHandler(error) {
    console.log(JSON.stringify({ error: error }, replaceErrors));
    if (error.properties && error.properties.errors instanceof Array) {
      const errorMessages = error.properties.errors.map(function (error) {
        return error.properties.explanation;
      }).join("\n");
      console.log('errorMessages', errorMessages);
      // errorMessages is a humanly readable message looking like this:
      // 'The tag beginning with "foobar" is unopened'
    }
    throw error;
  }

  let file_name = 'example.docx'; // this is the file saved in my firebase storage
  const File = storage.bucket(BUCKET).file(file_name);
  const readable = File.createReadStream(); // renamed from `read` so the handlers below match
  var buffers = [];
  readable.on('data', (buffer) => {
    buffers.push(buffer);
  });
  readable.on('end', () => {
    var buffer = Buffer.concat(buffers);
    var zip = new PizZip(buffer);
    var doc;
    try {
      doc = new Docxtemplater(zip);
      doc.setData({
        first_name: 'Fred',
        last_name: 'Flinstone',
        phone: '0652455478',
        description: 'Web app'
      });
      try {
        doc.render();
        doc.pipe(remoteFile2.createReadStream()); // this is the line that throws
      } catch (error) {
        errorHandler(error);
      }
    } catch (error) {
      errorHandler(error);
    }
  });
});
My issue is that I keep getting an error saying that doc.pipe is not a function. I am quite new to Node.js; is there a way to have the newly generated doc saved directly to Firebase Storage after doc.render()?
Taking a look at the type of doc, we find that it is a Docxtemplater object, and doc.pipe is not a function of that class. To get the file out of Docxtemplater, we need to use doc.getZip() to return the file (this will be either a JSZip v2 or PizZip instance, based on what we passed to the constructor). Now that we have the zip object, we need to generate the binary data of the zip, which is done using generate({ type: 'nodebuffer' }) (to get a Node.js Buffer containing the data). Unfortunately, because the docxtemplater library doesn't support JSZip v3+, we can't make use of the generateNodeStream() method to get a stream to use with pipe().
With this buffer, we can either reupload it to Cloud Storage or send it back to the client that is calling the function.
The first option is relatively simple to implement:
import { v4 as uuidv4 } from 'uuid';

/* ... */

const contentBuffer = doc.getZip()
  .generate({ type: 'nodebuffer' });

const targetName = "compiled.docx";
const targetStorageRef = admin.storage().bucket()
  .file(targetName);

await targetStorageRef.save(contentBuffer);

// send back the bucket-name pair to the caller
return { bucket: targetStorageRef.bucket.name, name: targetName };
However, sending the file itself back to the client isn't as easy, because it involves switching to an HTTP Event Function (functions.https.onRequest): a Callable Cloud Function can only return JSON-compatible data. Here we have a middleware function that takes a callable's handler function but supports returning binary data to the client.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import corsInit from "cors";

admin.initializeApp();

const cors = corsInit({ origin: true }); // TODO: Tighten

function callableRequest(handler) {
  if (!handler) {
    throw new TypeError("handler is required");
  }

  return (req, res) => {
    cors(req, res, (corsErr) => {
      if (corsErr) {
        console.error("Request rejected by CORS", corsErr);
        res.status(412).json({ error: "cors", message: "origin rejected" });
        return;
      }

      // for validateFirebaseIdToken, see https://github.com/firebase/functions-samples/blob/main/authorized-https-endpoint/functions/index.js
      validateFirebaseIdToken(req, res, async () => { // async so the handler can be awaited; validateFirebaseIdToken won't pass errors to `next()`
        try {
          const data = req.body;
          const context = {
            auth: req.user ? { token: req.user, uid: req.user.uid } : null,
            instanceIdToken: req.get("Firebase-Instance-ID-Token"), // this is used with FCM
            rawRequest: req
          };

          let result: any = await handler(data, context);

          if (result && typeof result === "object" && "buffer" in result) {
            res.writeHead(200, [
              ["Content-Type", result.contentType],
              ["Content-Disposition", "attachment; filename=" + result.filename]
            ]);
            res.end(result.buffer);
          } else {
            result = functions.https.encode(result);
            res.status(200).send({ result });
          }
        } catch (err) {
          if (!(err instanceof functions.https.HttpsError)) {
            // This doesn't count as an 'explicit' error.
            console.error("Unhandled error", err);
            err = new functions.https.HttpsError("internal", "INTERNAL");
          }

          const { status } = err.httpErrorCode;
          const body = { error: err.toJSON() };
          res.status(status).send(body);
        }
      });
    });
  };
}

functions.https.onRequest(callableRequest(async (data, context) => {
  /* ... */

  const contentBuffer = doc.getZip()
    .generate({ type: "nodebuffer" });
  const targetName = "compiled.docx";

  return {
    buffer: contentBuffer,
    contentType: "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    filename: targetName
  };
}));
In your current code, there are a number of odd segments where you have nested try-catch blocks and variables in different scopes. To help combat this, we can make use of File#download(), which returns a Promise that resolves with the file contents in a Node.js Buffer, and File#save(), which returns a Promise that resolves when the given Buffer has been uploaded.
Rolling this together for reuploading to Cloud Storage gives:
// This code is based off the examples provided for docxtemplater
// Copyright (c) Edgar HIPP [Dual License: MIT/GPLv3]

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import PizZip from "pizzip";
import Docxtemplater from "docxtemplater";

admin.initializeApp();

// The error object contains additional information when logged with JSON.stringify
// (it contains a properties object containing all suberrors).
function replaceErrors(key, value) {
  if (value instanceof Error) {
    return Object.getOwnPropertyNames(value).reduce(
      function (error, key) {
        error[key] = value[key];
        return error;
      },
      {}
    );
  }
  return value;
}

function errorHandler(error) {
  console.log(JSON.stringify({ error: error }, replaceErrors));
  if (error.properties && error.properties.errors instanceof Array) {
    const errorMessages = error.properties.errors
      .map(function (error) {
        return error.properties.explanation;
      })
      .join("\n");
    console.log("errorMessages", errorMessages);
    // errorMessages is a humanly readable message looking like this:
    // 'The tag beginning with "foobar" is unopened'
  }
  throw error;
}

exports.test2 = functions.https.onCall(async (data, context) => {
  const file_name = "example.docx"; // this is the file saved in my firebase storage
  const templateRef = admin.storage().bucket().file(file_name);
  const template_content = (await templateRef.download())[0];

  const zip = new PizZip(template_content);

  let doc;
  try {
    doc = new Docxtemplater(zip);
  } catch (error) {
    // Catch compilation errors (errors caused by the compilation of the template: misplaced tags)
    errorHandler(error);
  }

  doc.setData({
    first_name: "Fred",
    last_name: "Flinstone",
    phone: "0652455478",
    description: "Web app",
  });

  try {
    doc.render();
  } catch (error) {
    errorHandler(error);
  }

  const contentBuffer = doc.getZip().generate({ type: "nodebuffer" });

  // do something with contentBuffer
  // e.g. reupload to Cloud Storage
  const targetName = "compiled.docx"; // defined here so it can be returned below
  const targetStorageRef = admin.storage().bucket().file(targetName);
  await targetStorageRef.save(contentBuffer);

  return { bucket: targetStorageRef.bucket.name, name: targetName };
});
In addition to returning a bucket-name pair to the caller, you may also consider returning an access URL: a signed URL that can last up to 7 days; a download token URL (like getDownloadURL(), process described here), which lasts until the token is revoked; a Google Storage URI (gs://BUCKET_NAME/FILE_NAME), which is not an access URL itself but can be passed to a client SDK that can access the file if the client passes your storage security rules; or the file's public URL, after the file has been marked public.
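For example, a signed URL could be generated like so (a sketch; seven days is the documented maximum lifetime for v4 signed URLs):
// Generate a v4 signed URL for the uploaded file, valid for the 7-day maximum.
const [signedUrl] = await targetStorageRef.getSignedUrl({
  version: "v4",
  action: "read",
  expires: Date.now() + 7 * 24 * 60 * 60 * 1000,
});

return { bucket: targetStorageRef.bucket.name, name: targetName, url: signedUrl };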
Based on the above code, you should be able to merge in returning the file directly yourself.
I have a use case that needs to use Headless Chrome Network (https://chromedevtools.github.io/devtools-protocol/tot/Network/) to intercept all image requests and find out the image size before saving it (basically discarding small images such as icons).
However, I am unable to figure out a way to load the image data in memory before saving it. I need to load it into an Img object to get its width and height. Network.getResponseBody takes a requestId, which I don't have access to in Network.requestIntercepted. Also, Network.loadingFinished always gives me 0 in the encodedDataLength variable, and I have no idea why. So my questions are:
1. How do I intercept all responses from jpg/png requests and get the image data, without saving the file to disk via a URL string and loading it back?
2. BEST: how do I get the image dimensions from the response headers? Then I don't have to read the data into memory at all.
My code is below:
const chromeLauncher = require('chrome-launcher');
const CDP = require('chrome-remote-interface');
const file = require('fs');

(async function() {
  async function launchChrome() {
    return await chromeLauncher.launch({
      chromeFlags: [
        '--disable-gpu',
        '--headless'
      ]
    });
  }

  const chrome = await launchChrome();
  const protocol = await CDP({
    port: chrome.port
  });

  const {
    DOM,
    Network,
    Page,
    Emulation,
    Runtime
  } = protocol;

  await Promise.all([Network.enable(), Page.enable(), Runtime.enable(), DOM.enable()]);
  await Network.setRequestInterceptionEnabled({ enabled: true });

  Network.requestIntercepted(({ interceptionId, request, resourceType }) => {
    if ((request.url.indexOf('.jpg') >= 0) || (request.url.indexOf('.png') >= 0)) {
      console.log(JSON.stringify(request));
      console.log(resourceType);
      if (request.url.indexOf("/unspecified.jpg") >= 0) {
        console.log("FOUND unspecified.jpg");
        console.log(JSON.stringify(interceptionId));
        // console.log(JSON.stringify(Network.getResponseBody(interceptionId)));
      }
    }
    Network.continueInterceptedRequest({ interceptionId });
  });

  Network.loadingFinished(({ requestId, timestamp, encodedDataLength }) => {
    console.log(requestId);
    console.log(timestamp);
    console.log(encodedDataLength);
  });

  Page.navigate({
    url: 'https://www.yahoo.com/'
  });

  Page.loadEventFired(async () => {
    protocol.close();
    chrome.kill();
  });
})();
This should get you 90% of the way there. It gets the body of each image request; you'd still need to base64-decode, check the size, save, etc. (see the sketch after the code).
const CDP = require('chrome-remote-interface');

const sizeThreshold = 1024;

async function run() {
  let client;
  try {
    client = await CDP();
    const { Network, Page } = client;

    // enable events
    await Promise.all([Network.enable(), Page.enable()]);

    // commands
    const _url = "https://google.co.za";
    let _pics = [];

    Network.responseReceived(async ({ requestId, response }) => {
      let url = response ? response.url : null;
      if ((url.indexOf('.jpg') >= 0) || (url.indexOf('.png') >= 0)) {
        const { body, base64Encoded } = await Network.getResponseBody({ requestId }); // throws promise error returning null/undefined so can't destructure. Must be different in inspect shell to app?
        _pics.push({ url, body, base64Encoded });
        console.log(url, body, base64Encoded);
      }
    });

    await Page.navigate({ url: _url });
    await sleep(5000);
    // TODO: process _pics - base64Encoded, check body.length > sizeThreshold, save etc...
  } catch (err) {
    if (err.message && err.message === "No inspectable targets") {
      console.error("Either chrome isn't running or you already have another app connected to chrome - e.g. `chrome-remote-interface inspect`");
    } else {
      console.error(err);
    }
  } finally {
    if (client) {
      await client.close();
    }
  }
}

function sleep(milliseconds = 1000) {
  if (milliseconds == 0)
    return Promise.resolve();
  return new Promise(resolve => setTimeout(() => resolve(), milliseconds));
}

run();
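To finish that TODO, the processing could look roughly like this (a sketch; measuring the decoded byte length and the file-naming scheme are assumptions, not part of the original code):
const fs = require('fs');

function savePics(pics) {
  pics.forEach((pic, i) => {
    // getResponseBody returns binary bodies base64-encoded; decode before measuring
    const data = Buffer.from(pic.body, pic.base64Encoded ? 'base64' : 'utf8');
    if (data.length > sizeThreshold) {
      const ext = pic.url.indexOf('.png') >= 0 ? '.png' : '.jpg';
      fs.writeFileSync(`./pic_${i}${ext}`, data); // hypothetical naming scheme
    }
  });
}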
I am using the npm package react-native-fetch-blob.
I have followed all the steps from the git repository to use the package.
I then imported the package using the following line:
var RNFetchBlob = require('react-native-fetch-blob');
I am trying to request a BLOB containing an image from a server.
This is my main method.
fetchAttachment: function(attachment_uri) {
  var authToken = 'youWillNeverGetThis!';
  var deviceId = '123';
  var xAuthToken = deviceId + '#' + authToken;
  //Authorization : 'Bearer access-token...',
  // send http request in a new thread (using native code)
  RNFetchBlob.fetch('GET', config.apiRoot + '/app/' + attachment_uri, {
    'Origin': 'http://10.0.1.23:8081',
    'X-AuthToken': xAuthToken
  })
    // when response status code is 200
    .then((res) => {
      // the conversion is done in native code
      let base64Str = res.base64();
      // the following conversions are done in js, it's SYNC
      let text = res.text();
      let json = res.json();
    })
    // Status code is not 200
    .catch((errorMessage, statusCode) => {
      // error handling
    });
}
I keep receiving the following error:
"Possible Unhandled Promise Refection(id: 0): TypeError: RNFetchBlob.fetch is not a function".
Any ideas?
The issue is that you are using an ES5-style require statement with a library written against ES6/ES2015. You have two options:
ES5:
var RNFetchBlob = require('react-native-fetch-blob').default
ES6:
import RNFetchBlob from 'react-native-fetch-blob'
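A quick sanity check that the import is wired up correctly (a sketch):
import RNFetchBlob from 'react-native-fetch-blob';

console.log(typeof RNFetchBlob.fetch); // should print "function", not "undefined"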
My import looks like this: import RNFetchBlob from 'rn-fetch-blob';
but I've got an error: TypeError: RNFetchBlob.scanFile is not a function
My code:
const downloadAudio = async () => {
  const { config, fs } = RNFetchBlob;
  const meditationFilesPath =
    Platform.OS == 'android'
      ? `${fs.dirs.DownloadDir}/meditations/${id}`
      : `${fs.dirs.DocumentDir}/meditations/${id}`;
  let audio_URL = track;
  let options = {
    fileCache: true,
    path: meditationFilesPath + `/${id}.mp3`,
    addAndroidDownloads: {
      // Related to Android only
      useDownloadManager: true,
      notification: true,
      path: meditationFilesPath + `/${id}.mp3`,
      description: 'Audio',
    },
  };
  try {
    const resAudio = await config(options).fetch('GET', audio_URL.uri);
    if (resAudio) {
      const audio = await RNFetchBlob.fs.scanFile([
        { path: resAudio.path(), mime: 'audio/mpeg' },
      ]);
      console.log('res -> ', audio);
      Alert.alert('Audio Downloaded Successfully.');
    }
  } catch (error) {
    console.error('error from downloadAudio', error);
  }
};
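One likely culprit: the library documents fs.scanFile as Android-only (it wraps Android's MediaScanner), so a platform guard may be what's missing. A sketch:
// fs.scanFile is Android-only; skip the MediaScanner step on iOS.
if (Platform.OS === 'android') {
  await RNFetchBlob.fs.scanFile([
    { path: resAudio.path(), mime: 'audio/mpeg' },
  ]);
}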