I am trying to unzip a file called Restart.Manager.zip which contains a single item, Restart Manager.app. This code seems to unzip the file correctly, but upon launching the extracted .app, I get the error "The application “Restart Manager” can’t be opened."
const JSZip = require('jszip');
const fs = require('fs');
const jetpack = require('fs-jetpack');
const originalFs = require('original-fs');

async function extractZip(filePath, destination) {
  fs.readFile(filePath, function(err, data) {
    if (!err) {
      var zip = new JSZip();
      zip.loadAsync(data).then(function(contents) {
        Object.keys(contents.files).forEach(function(filename) {
          const file = zip.file(filename);
          if (file) {
            file.async('nodebuffer').then(function(content) {
              var dest = destination + '/' + filename;
              if (filename.endsWith('.asar')) {
                originalFs.writeFileSync(dest, content);
              } else {
                jetpack.write(dest, content);
              }
            });
          }
        });
      });
    }
  });
}
extractZip('/Users/me/Desktop/Restart.Manager.zip', '/Users/me/Desktop')
Manually unzipping the .zip file creates a working .app so I'm not sure where the code is messing up.
Here is the file on GitHub releases for testing: https://github.com/itw-creative-works/restart-manager-download-server/releases/download/installer/Restart.Manager.zip but feel free to use your own zipped .app file (although it should probably be an Electron app in which case you can find one here https://www.electronjs.org/apps)
I have tried zipping things like a .png and it unzips fine, which makes me think the code has problems with .app bundles, or possibly with the fact that the .app contains a .asar file, which Electron supposedly has problems handling through the fs module: https://github.com/electron/electron/issues/1658
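For reference, here is a minimal sketch of an extraction loop that also recreates directory entries and restores Unix file modes, which the executables inside an .app bundle need to stay launchable. This assumes JSZip 3.x, where each loaded entry exposes dir and unixPermissions; note it still does not recreate symlinks, which the frameworks inside an .app typically contain:

const JSZip = require('jszip');
const fs = require('fs');
const path = require('path');

async function extractPreservingModes(filePath, destination) {
  const zip = await JSZip.loadAsync(fs.readFileSync(filePath));
  for (const filename of Object.keys(zip.files)) {
    const entry = zip.files[filename];
    const dest = path.join(destination, filename);
    if (entry.dir) {
      // Recreate directory entries instead of skipping them.
      fs.mkdirSync(dest, { recursive: true });
    } else {
      fs.mkdirSync(path.dirname(dest), { recursive: true });
      fs.writeFileSync(dest, await entry.async('nodebuffer'));
      if (entry.unixPermissions) {
        // Restore mode bits so files like Contents/MacOS/* stay executable.
        fs.chmodSync(dest, entry.unixPermissions & 0o777);
      }
    }
  }
}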
My code successfully downloads some of the files but not all. Execution just stops in the middle without any errors, and the last file may be only partially downloaded. It always fails to download some files, even if I change the number of files or use different files. File size doesn't seem to matter.
I've tried many things, but it seems I'm not able to catch any exceptions or errors when it stops. I've tried using try-catch and process.on events but haven't been able to catch anything.
I'm pretty sure a few months ago I used this kind of code to download hundreds of files without any problems.
Here is a simplified version of my current code.
const { Storage } = require('@google-cloud/storage');

const storage = new Storage({ keyFilename: 'D:/myProject/myKeyFile.json' });
var folder = 'D:/myProject/downloadedFiles';
var bucketName = 'bucket_1';

async function downloadFile(fileName) {
  var fullPath = folder + '/' + fileName;
  const options = {
    destination: fullPath,
  };
  await storage.bucket(bucketName).file(fileName).download(options);
  console.log(`gs://${bucketName}/${fileName} downloaded to ${fullPath}.`);
}

async function downloadFiles() {
  var filenames = ['file1', 'file2', 'file3', 'file4', 'file5', 'file6'];
  for (var i = 0; i < filenames.length; i++) {
    await downloadFile(filenames[i]).catch(console.error);
  }
}

downloadFiles().catch(console.error);
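One way to at least surface a silent hang as a catchable error is to race each download against a timeout. A sketch (withTimeout and the 60-second limit are my own additions, not part of the GCS client):

// Hypothetical helper: reject if `promise` takes longer than `ms`.
function withTimeout(promise, ms, label) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms} ms: ${label}`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Then, inside downloadFile:
// await withTimeout(storage.bucket(bucketName).file(fileName).download(options), 60000, fileName);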
It turns out the download only fails on that specific computer, so my code is not the problem. My guess is that it's network related, maybe because a network switch was replaced some time ago.
Basically I want to do the equivalent of this: How to strip path while archiving with TAR, but with the tar commands imported into NodeJS. Currently I'm doing this:
const fs = require('fs');
const zlib = require('zlib');
const tar = require('tar');
const { Readable } = require('stream');
const { pipeline } = require('stream');

const gzip = zlib.createGzip();
const pack = new tar.Pack({ prefix: '' });
const source = Readable.from('public/images/');
const destination = fs.createWriteStream('public/archive.tar.gz');

pipeline(source, pack, gzip, destination, (err) => {
  if (err) {
    console.error('An error occurred:', err);
    process.exitCode = 1;
  }
});
But doing so leaves me with files like: "/public/images/a.png" and "public/images/b.png", when what I want is files like "/a.png" and "/b.png". I want to know how I can add to this process to strip out the unneeded directories, while keeping the files where they are.
You need to change the working directory:
// cwd: The current working directory for creating the archive. Defaults to process.cwd().
new tar.Pack({ cwd: "./public/images" });
const source = Readable.from('.'); // '.' adds the cwd itself, so entries are stored relative to it
Source: documentation of node-tar
Example: https://github.com/npm/node-tar/blob/main/test/pack.js#L93
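Putting that together with the question's pipeline, a minimal sketch (assuming node-tar v6 and the same file layout as the question):

const fs = require('fs');
const zlib = require('zlib');
const tar = require('tar');
const { Readable } = require('stream');
const { pipeline } = require('stream');

const gzip = zlib.createGzip();
// cwd makes entry paths relative to public/images, stripping the prefix.
const pack = new tar.Pack({ cwd: 'public/images' });
// '.' adds the cwd itself, so files land in the archive as './a.png', './b.png', ...
const source = Readable.from('.');
const destination = fs.createWriteStream('public/archive.tar.gz');

pipeline(source, pack, gzip, destination, (err) => {
  if (err) {
    console.error('An error occurred:', err);
    process.exitCode = 1;
  }
});

The high-level API can do the same in one call: tar.c({ gzip: true, cwd: 'public/images', file: 'public/archive.tar.gz' }, ['.']).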
Total newbie in Gulp; I'd really appreciate some assistance..
I am trying to name & create a new file using a string that exists in another file. This will give me the name of the white label that was deployed onto the server.
The content of the file that holds the string is (among other things) {"TITLE":"name_env"}
name_env should be the new name of the file, with the suffix .web,
meaning that the new file would be named name_env.web
What I've come up with so far is:
gulp.task('label', function () {
  var str = require('path/to/file/file.json');
  return file('label', str, { src: true })
    .pipe(gulp.dest('build/'));
});
Am I on the right track?
Hopefully I've managed to explain myself..
Here's a gulpfile which will do your task (assuming the dist folder already exists!!):
var gulp = require('gulp');
var fs = require('fs');

gulp.task('label', function(done) {
  var config = JSON.parse(fs.readFileSync('path/to/file.json', 'utf8'));
  // 'wx' fails if the file already exists instead of overwriting it
  fs.writeFile('dist/' + config.TITLE + '.web', JSON.stringify(config), { flag: 'wx' }, function(err) {
    if (err) return done(err);
    console.log("It's saved!");
    done();
  });
});
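With file.json containing {"TITLE":"name_env"}, running gulp label should then produce dist/name_env.web.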
I have a very simple node lambda function which reads the contents of a packaged file in it. I upload the code as a zip file. The directory structure is as follows.
index.js
readme.txt
Then I have in my index.js file:
const fs = require('fs');

fs.readFile('/var/task/readme.txt', function (err, data) {
  if (err) throw err;
});
I keep getting the following error: ENOENT: no such file or directory, open '/var/task/readme.txt'.
I tried ./readme.txt also.
What am I missing?
Try this, it works for me:
'use strict'
let fs = require("fs");
let path = require("path");

exports.handler = (event, context, callback) => {
  // To debug your problem
  console.log(path.resolve("./readme.txt"));

  // Solution is to use an absolute path using `__dirname`
  fs.readFile(__dirname + '/readme.txt', function (err, data) {
    if (err) throw err;
  });
};
To debug why your code is not working, add the line below to your handler:
console.log(path.resolve("./readme.txt"));
On AWS Lambda the node process might be running from some other folder, and since you provided a relative path it looks for the readme.txt file relative to that folder; the solution is to use an absolute path.
What worked for me was the comment by Vadorrequest to use process.env.LAMBDA_TASK_ROOT. I wrote a function to get a template file in a /templates directory when I'm running it locally on my machine with __dirname or with the process.env.LAMBDA_TASK_ROOT variable when running on Lambda:
const fs = require('fs');
const path = require('path');

function loadTemplateFile(templateName) {
  const fileName = `./templates/${templateName}`
  let resolved
  if (process.env.LAMBDA_TASK_ROOT) {
    resolved = path.resolve(process.env.LAMBDA_TASK_ROOT, fileName)
  } else {
    resolved = path.resolve(__dirname, fileName)
  }
  console.log(`Loading template at: ${resolved}`)
  try {
    const data = fs.readFileSync(resolved, 'utf8')
    return data
  } catch (error) {
    const message = `Could not load template at: ${resolved}, error: ${JSON.stringify(error, null, 2)}`
    console.error(message)
    throw new Error(message)
  }
}
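Calling it is then the same locally and on Lambda; the template name here is just a hypothetical example:

const welcome = loadTemplateFile('welcome.html'); // resolves against LAMBDA_TASK_ROOT when deployed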
This is an oldish question but comes up first when attempting to sort out what's going on with file paths on Lambda.
Additional Steps for Serverless Framework
For anyone using the Serverless framework to deploy (which probably uses webpack to build), you will also need to add the following to your webpack config file (just after target: 'node'):
// assume target: 'node', is here
node: {
  __dirname: false,
},
Without this piece, using __dirname with Serverless will STILL not get you the desired absolute directory path.
I went through this using the Serverless framework, and the problem really was that the file was not included in the package. Just add the following lines to serverless.yml:
package:
  individually: false
  include:
    - src/**
const filepath = path.join(process.env.LAMBDA_TASK_ROOT, 'filename.text');
const fileData2 = fs.readFileSync(filepath, 'utf-8');
I was using fs.promises.readFile(). Couldn't get it to error out at all. The file was there, and LAMBDA_TASK_ROOT seemed right to me as well. After I changed to fs.readFileSync(), it worked.
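One possible explanation (an assumption on my part, not something the poster confirms): a rejected fs.promises.readFile() that is never awaited can fail without surfacing an error, whereas fs.readFileSync() throws immediately. A sketch:

const path = require('path');
const fsp = require('fs').promises;

exports.handler = async (event) => {
  // If this promise were not awaited (or returned), an ENOENT rejection
  // could go unobserved and the read would appear to "do nothing".
  return fsp.readFile(path.join(process.env.LAMBDA_TASK_ROOT, 'readme.txt'), 'utf-8');
};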
I had the same problem and tried applying all these wonderful solutions above - which didn't work.
The problem was that I had set up one of the folder names with one letter in upper case when it was really lowercase.
So when I tried to fetch the content of /src/SOmething/some_file.txt
while the folder was really /src/Something/ - I got this error...
Windows (local environment) is case-insensitive, while AWS is not!!!
I have the following code:
Meteor.methods({
  saveFile: function(blob, name, path, encoding) {
    var path = cleanPath(path), fs = __meteor_bootstrap__.require('fs'),
        name = cleanName(name || 'file'), encoding = encoding || 'binary',
        chroot = Meteor.chroot || 'public';
    // Clean up the path. Remove any initial and final '/' -we prefix them-,
    // any sort of attempt to go to the parent directory '..' and any empty directories in
    // between '/////' - which may happen after removing '..'
    path = chroot + (path ? '/' + path + '/' : '/');

    // TODO Add file existance checks, etc...
    fs.writeFile(path + name, blob, encoding, function(err) {
      if (err) {
        throw (new Meteor.Error(500, 'Failed to save file.', err));
      } else {
        console.log('The file ' + name + ' (' + encoding + ') was saved to ' + path);
      }
    });

    function cleanPath(str) {
      if (str) {
        return str.replace(/\.\./g,'').replace(/\/+/g,'').
          replace(/^\/+/,'').replace(/\/+$/,'');
      }
    }
    function cleanName(str) {
      return str.replace(/\.\./g,'').replace(/\//g,'');
    }
  }
});
Which I took from this project
https://gist.github.com/dariocravero/3922137
The code works fine and saves the file; however, it repeats the call several times, and each time it causes Meteor (Windows version 0.5.4) to reset. The F12 console fills with repeated 503 errors (screenshot omitted). The Meteor console loops over the startup code each time the 503 happens and repeats the console logs in the saveFile function.
Furthermore, in the target directory the image thumbnail keeps displaying, then shows as broken, then displays a valid thumbnail again, as if fs is writing it multiple times.
Here is the code that calls the function:
"click .savePhoto":function(e, template){
e.preventDefault();
var MAX_WIDTH = 400;
var MAX_HEIGHT = 300;
var id = e.srcElement.id;
var item = Session.get("employeeItem");
var file = template.find('input[name='+id+']').files[0];
// $(template).append("Loading...");
var dataURL = '/.bgimages/'+file.name;
Meteor.saveFile(file, file.name, "/.bgimages/", function(){
if(id=="goodPhoto"){
EmployeeCollection.update(item._id, { $set: { good_photo: dataURL }});
}else{
EmployeeCollection.update(item._id, { $set: { bad_photo: dataURL }});
}
// Update an image on the page with the data
$(template.find('img.'+id)).delay(1000).attr('src', dataURL);
});
},
What's causing the server to reset?
My guess would be that since Meteor has built-in automatic directory scanning in search of file changes (in order to implement automatic relaunching of the application on the newest code base), the file you are creating is what's actually causing the server reset.
Meteor doesn't scan directories beginning with a dot (so-called "hidden" directories), such as .git for example, so you could use this behaviour to your advantage by setting the path of your files to a .directory of your own.
You should also consider using writeFileSync, since Meteor methods are intended to run synchronously (inside node fibers), contrary to the usual node way of asynchronous calls. In this code it's no big deal, but, for example, you couldn't use any Meteor mechanics inside the writeFile callback:
asynchronousCall(function(error, result) {
  if (error) {
    // handle error
  } else {
    // do something with result
    Collection.update(id, result); // error! Meteor code must run inside a fiber
  }
});

var result = synchronousCall();
Collection.update(id, result); // good to go!
Of course there is a way to turn any asynchronous call into a synchronous one using fibers/futures, but that's beyond the scope of this question: I recommend this EventedMind episode on node Futures to understand this specific area.
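Putting both suggestions together, a minimal sketch reusing the variables from the question's saveFile method (the .uploads directory name is just an example):

// Write synchronously, into a dot-directory that Meteor's watcher ignores,
// so creating the file no longer triggers a server restart.
var dir = (Meteor.chroot || 'public') + '/.uploads/';
fs.writeFileSync(dir + name, blob, encoding);
console.log('The file ' + name + ' (' + encoding + ') was saved to ' + dir);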