How to fix ENOENT: no such file or directory? - node.js

I am writing Node.js code to upload a picture from my Raspberry Pi to my Google Drive.
I have tried to upload an image file produced in the same folder, but Node.js seems to ignore the file: it always returns ENOENT even though the file is present. I have verified the existence of the file manually, and the path and the filename are correct. I have also verified this by printing the filename to the console, and it matches.
var fileName1 = Date.now();
const path = require('path');
const fs1 = require('fs');
var fN = fileName1 + ".jpg";
console.log("Only filename : " + fN);
const finalPath = path.join(__dirname, fN);
console.log("Final filename : " + finalPath);
var media = {
  mimeType: 'image/jpeg',
  // PATH OF THE FILE FROM YOUR COMPUTER
  body: fs1.createReadStream(finalPath)
};
Output
Only filename : 1571724785329.jpg
Final filename : /home/pi/nodecode/1571724785329.jpg
events.js:174
throw er; // Unhandled 'error' event
^
Error: ENOENT: no such file or directory, open '/home/pi/nodecode/1571724785329.jpg'

Thanks a lot, everyone, for sharing your views and trying to work out an answer for me. I found the problem: the file had not yet been created at the instant Node.js tried to locate it. The file creation and the upload were happening in parallel, so I used Python instead of Node.js.

Your code seems to work for me. The only problem I see is the way you are creating the file name: Date.now() returns the current timestamp, so a name generated at upload time will practically never match a filename generated from an earlier call.
Thus the sequence of code execution is the only plausible cause of your problem. For example, the file you expect to load into the readable stream has not yet been created when the code that creates the read stream executes.
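To illustrate the timestamp point: the name has to be generated once and the same value reused for both writing and reading the file. A sketch (makeName is an illustrative helper, not from the question):

```javascript
// Build a timestamp-based filename from a given timestamp.
function makeName(timestamp) {
  return `${timestamp}.jpg`;
}

// If the name is regenerated later, any elapsed time changes it:
const nameAtCapture = makeName(Date.now());        // used when the file is written
const nameAtUpload = makeName(Date.now() + 1500);  // 1.5 s later: a different name

// Instead, generate the name once and pass the same variable to both steps.
const fileName = makeName(Date.now());
```
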
TRY
Debugging the code execution sequence to find out whether the file is created before or after fs1.createReadStream(finalPath) executes.
var fileName1 = "wf";
const path = require('path');
const fs1 = require('fs');
var fN = fileName1 + ".jpg";
console.log("Only filename : " + fN);
const finalPath = path.join(__dirname, fN);
console.log("Final filename : " + finalPath);
var media = {
  mimeType: 'image/jpeg',
  // PATH OF THE FILE FROM YOUR COMPUTER
  body: fs1.createReadStream(finalPath)
};
console.log(media);
The output I got:
Only filename : wf.jpg
Final filename : D:\dir1\dir2\myproject\wf.jpg
{ mimeType: 'image/jpeg',
  body:
   ReadStream {
     _readableState:
      ReadableState {
        objectMode: false,
        highWaterMark: 65536,
        buffer: BufferList { head: null, tail: null, length: 0 },
        length: 0,
        pipes: null,
        pipesCount: 0,
     mode: 438,
     start: undefined,
     end: Infinity,
     autoClose: true,
     pos: undefined,
     bytesRead: 0,
     closed: false } }

Related

ytdl-core no stream data

I am making a NodeJS Music bot for discord, and I suddenly encountered a problem. The bot properly joins the channel, lights up (indicating it is speaking), but then there's no audio. After trying to find the root cause of the problem, I believe it to be a problem with the ytdl() function from the ytdl-core module.
const stream = await ytdl(song.url, {
  filter: 'audioonly',
  type: 'opus',
  highWaterMark: waterMark
});
Looking at the result of stream, I found this:
PassThrough {
  _readableState: ReadableState {
    objectMode: false,
    highWaterMark: 524288,
    buffer: BufferList { head: null, tail: null, length: 0 },
    length: 0,
    ...
Which means I am not getting any buffer/stream data. It is indeed "playing", but because there is no data, only silence can be heard.
I tried using pipe() and it worked just fine, but I can't play the piped output as-is through my music bot.
The ytdl function behaves like a synchronous function.
"ytdl-core": "^4.10.1"
const stream = await ytdl(song.url); pauses code execution until ytdl has finished downloading the stream.
console.log("YTDL Start");
const stream = await ytdl(song.url, {
  filter: 'audioonly',
  type: 'opus',
  highWaterMark: waterMark
});
console.log("Is it done?");
stream.on('close', () => {
  console.log('Read stream closed');
});
stream.on('finish', () => {
  console.log('Read stream Finished');
});

Failure code 4 while uploading file to SFTP server using Node.js

I have written code to establish an SFTP connection and transfer files to the SFTP server using the Node.js sftp.put command. I can establish the connection successfully, but I cannot read/write files on the server: I get the following error while transferring the file. I have attached the code below.
Code
let sftp = new client();
let filename = "sound.mp3";
const filePath = path.join(__dirname, '../audio', filename);
const putConfig = {
  flags: 'w',       // w - write and a - append
  encoding: null,   // use null for binary files
  mode: 0o666,      // mode to use for created file (rwx)
  autoClose: true   // automatically close the write stream when finished
};
sftp.connect({
  host: 'host',
  port: '22',
  username: 'xxxx',
  password: 'xxx'
}).then(() => {
  return sftp.put(filePath, '/', putConfig);
}).then(data => {
  console.log(data, 'the data info');
}).catch(err => {
  console.log(err, 'catch error');
});
Error
Error: put->put: Failure /data
at fmtError (D:\project\node_modules\ssh2-sftp-client\src\utils.js:53:18)
at SftpClient.put (D:\project\node_modules\ssh2-sftp-client\src\index.js:684:13)
at processTicksAndRejections (internal/process/task_queues.js:93:5) {
code: 4,
custom: true
}
D:\project\node_modules\ssh2\lib\protocol\crypto\poly1305.js:20
RuntimeError: abort(Error: put->put: Failure /data). Build with -s ASSERTIONS=1 for more info.
at process.J (D:\project\node_modules\ssh2\lib\protocol\crypto\poly1305.js:20:53)
at process.emit (events.js:210:5)
at process.EventEmitter.emit (domain.js:475:20)
at processPromiseRejections (internal/process/promises.js:201:33)
at processTicksAndRejections (internal/process/task_queues.js:94:32)
The second argument of SftpClient.put is the path to the target remote file, not just the target remote folder.
So it should be:
return sftp.put(filePath, '/' + filename, putConfig)

create password pdf file in lambda nodejs

I have a problem in AWS Lambda, using Node.js to create a password for a copied PDF file.
const aws = require("aws-sdk");
const fs = require("fs");
const QPDF = require("node-qpdf");
const s3 = new aws.S3();

exports.handler = async (event) => {
  const params = {
    Bucket: "BucketName",
    Key: "key"
  };
  const s3Object = await s3.getObject(params).promise();
  fs.writeFileSync('/tmp/test.pdf', s3Object.Body.toString('base64'), { 'encoding': 'base64' });
  var options = {
    keyLength: 128,
    password: 'abc123',
    restrictions: {
      print: 'low',
      useAes: 'y'
    }
  };
  QPDF.encrypt('/tmp/test.pdf', options, (err) => {
    if (err) console.log(err, err.stack);
  });
  fs.exists('/tmp/test.pdf', function (exists) {
    console.log(exists);
    const file = fs.readFileSync('/tmp/test.pdf');
    console.log(file);
    const params = {
      Bucket: "BucketName",
      Key: "test.pdf",
      Body: file
    };
    s3.upload(params, (err, data) => {
      if (err) console.log(err);
      console.log(data);
    });
  });
};
This is my code. I could copy my PDF file in S3 and successfully upload the copied PDF file to S3.
So as the next step, I wanted to add a password to that copied PDF file.
So I used this code:
var options = {
  keyLength: 128,
  password: 'abc123',
  restrictions: {
    print: 'low',
    useAes: 'y'
  }
};
QPDF.encrypt('/tmp/test.pdf', options, (err) => {
  if (err) console.log(err, err.stack);
});
but there is an error.
Response:
{
  "errorType": "Error",
  "errorMessage": "/bin/sh: qpdf: command not found\n",
  "trace": [
    "Error: /bin/sh: qpdf: command not found",
    "",
    " at Socket.<anonymous> (/opt/nodejs/node_modules/node-qpdf/index.js:124:17)",
    " at Object.onceWrapper (events.js:300:26)",
    " at Socket.emit (events.js:210:5)",
    " at Socket.EventEmitter.emit (domain.js:476:20)",
    " at addChunk (_stream_readable.js:308:12)",
    " at readableAddChunk (_stream_readable.js:289:11)",
    " at Socket.Readable.push (_stream_readable.js:223:10)",
    " at Pipe.onStreamRead (internal/stream_base_commons.js:182:23)"
  ]
}
I installed the qpdf and node-qpdf npm packages and checked the installed modules.
What is the problem?
QPDF is a command-line program; you'll need to have it installed system-wide before using it.
I tested your code on Ubuntu after installing QPDF and it worked. You can refer to the repository link above for other systems.
sudo apt-get install qpdf
You need to build a standalone package and add it to the zip you upload to AWS Lambda. Here is more information on what needs to be done to generate the package: https://github.com/qpdf/qpdf/issues/352
You need to have the QPDF command-line program available at run time. AWS Lambda has the concept of Layers, which is designed to solve exactly this kind of issue: you upload your program as a zip to a layer, and then reference that layer when creating the Lambda function.
You can read more about it here:
https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
The good thing is that you can keep your code and your command-line program separate, and layers can be shared across different Lambda functions.
Hope this helps.

Getting 403 from google drive api

I am trying to access google drive using the node client. This will run on a server in a background process without user involvement. In preparation, I have done the following:
Created a service account
Enabled Drive API access on the account whose drive I am accessing
Shared a particular folder in the drive with the service account (called MyFolder below).
I am able to successfully authenticate as the service account and list files inside the directory. However, I am not able to download any of the files. When I try, I apparently get a 403 error. It's kind of buried in the error message but that appears to be the issue. Here is my code:
const fs = require('fs');
const { google } = require('googleapis');
const auth = require('./service-creds.json');

(async () => {
  let jwtClient = new google.auth.JWT(auth.client_email, null,
    auth.private_key, ['https://www.googleapis.com/auth/drive']);
  try {
    const tokens = await jwtClient.authorize();
    let drive = google.drive('v3');
    const res1 = await drive.files.list({
      auth: jwtClient, q: `name = 'MyFolder'`
    });
    const folder = res1.data.files[0];
    const res2 = await drive.files.list({
      auth: jwtClient,
      q: `'${folder.id}' in parents`
    });
    // print out all files under MyFolder
    res2.data.files.forEach(f => console.log(f.name, f.id));
    const dest = fs.createWriteStream('./myfile.csv');
    const file = res2.data.files[0];
    const response = await drive.files.export({
      fileId: file.id,
      mimeType: file.mimeType,
      auth: jwtClient
    }, {
      responseType: 'stream'
    });
    response.data.on('error', err => {
      console.log(err);
    }).on('end', () => {
      console.log('done');
    }).pipe(dest);
  }
  catch (err) {
    console.log('The API returned an error: ', err);
  }
})();
Here is part of the resulting error:
The API returned an error:
... at Gaxios.<anonymous> (/api-test/node_modules/gaxios/build/src/gaxios.js:73:27)
Response {
  size: 0,
  timeout: 0,
  [Symbol(Body internals)]: {
    body: Gunzip {
      _readableState: [Object],
      readable: true,
      domain: null,
      _events: [Object],
      _eventsCount: 7,
      _maxListeners: undefined,
      _writableState: [Object],
      writable: true,
      allowHalfOpen: true,
      _transformState: [Object],
      bytesRead: 0,
      _opts: [Object],
      _chunkSize: 16384,
      _flushFlag: 2,
      _finishFlushFlag: 2,
      _scheduledFlushFlag: 0,
      _handle: [Object],
      _hadError: false,
      _buffer: <Buffer 00 00 00 00 00 00 00 00 34 00 00 00 00 00 00 00 ... >,
      _offset: 0,
      _level: -1,
      _strategy: 0 },
    disturbed: false,
    error: null },
  [Symbol(Response internals)]: {
    url: 'https://www.googleapis.com/drive/v3/files/123abc123abc/export?mimeType=text%2Fplain',
    status: 403,
    statusText: 'Forbidden',
    headers: Headers { [Symbol(map)]: [Object] },
    counter: 0 } }
I have not been able to find anything in the error that states why the 403 is being thrown. It appears to be zipped up but I have not been able to successfully unzip any part of it.
You want to download a text/plain file from Google Drive.
If my understanding is correct, how about this modification?
Modification points:
I think the reason for your issue is that you are downloading a file with the mimeType text/plain using the files.export method of the Drive API.
Google Docs files (Spreadsheets, Documents, Slides and so on) can be downloaded with the files.export method of the Drive API.
When you want to download files other than Google Docs, please use the files.get method.
When I tried to download a text/plain file using the files.export method, I could confirm that the same error occurs.
In order to reflect the points above, please modify as follows.
Modified script:
From:
const response = await drive.files.export({
  fileId: file.id,
  mimeType: file.mimeType,
  auth: jwtClient
}, {
  responseType: 'stream'
});
To:
const response = await drive.files.get({
  fileId: file.id,
  alt: "media",
  auth: jwtClient
}, {
  responseType: 'stream'
});
Reference:
Download files
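A small helper sketch for making that choice programmatically: native Google Docs formats (Docs, Sheets, Slides, ...) all use MIME types with the application/vnd.google-apps. prefix, so the file's mimeType tells you which method applies (the helper name is illustrative):

```javascript
// Native Google Docs formats share the application/vnd.google-apps. MIME prefix;
// those must be exported via files.export. Everything else is downloaded
// with files.get and alt: 'media'.
function isGoogleDoc(mimeType) {
  return typeof mimeType === 'string' &&
    mimeType.startsWith('application/vnd.google-apps.');
}

// Usage:
// const method = isGoogleDoc(file.mimeType) ? 'files.export' : 'files.get';
```
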

How to Get size of all files in a Directory in b2 storage of backblaze.com using backblaze-b2 of nodejs

The documentation is available here, but surprisingly there is no API to get a directory's size the way the du command does on Linux:
https://www.backblaze.com/b2/docs
There are APIs for files, but none for directory size: https://www.backblaze.com/b2/docs/files.html
await b2.authorize();
await b2.listFileNames(bucketid);
await b2.getFileInfo(fileId); // gets the file info, but a directory has null in the id field
We get this result from the above:
{ accountId: '11111111',
  action: 'folder',
  bucketId: '44444444444',
  contentLength: 0,
  contentSha1: null,
  contentType: null,
  fileId: null,
  fileInfo: {},
  fileName: 'test/testinside/',
  uploadTimestamp: 0 }
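Since there is no single call, one hedged approach is to list every file under the folder's prefix and sum the contentLength fields from the response shape shown above. The summing helper below is pure; the commented listing loop assumes backblaze-b2's listFileNames accepts bucketId/prefix/startFileName options and returns { files, nextFileName } in its data, so verify against the version you use:

```javascript
// Sum contentLength over file entries returned by listFileNames.
// Folder placeholder entries (action: 'folder') have contentLength 0.
function totalSize(files) {
  return files.reduce((sum, f) => sum + (f.contentLength || 0), 0);
}

// Sketch of the paginated listing (option/field names assumed from the
// backblaze-b2 README):
// async function directorySize(b2, bucketId, prefix) {
//   let size = 0;
//   let startFileName;
//   do {
//     const res = await b2.listFileNames({ bucketId, prefix, startFileName, maxFileCount: 1000 });
//     size += totalSize(res.data.files);
//     startFileName = res.data.nextFileName;
//   } while (startFileName);
//   return size;
// }
```
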