I am trying to read a file in Node.js, but I am getting tired of writing so many callbacks. Is there a way I can just read a file in one line?
If you're just loading a config or a template, you can use the sync read method:
var fs = require('fs');
var fileData = fs.readFileSync('myFileName');
If you need to do it as the reply to an HTTP request, you can use the streaming API:
function (req, res) {
  fs.createReadStream('myFileName').pipe(res); // stream the file straight into the response
}
Callbacks are king, but you can use anonymous callbacks...
fs.readFile('/etc/passwd', function (err, data) {
  if (err) throw err;
  console.log(data);
});
http://nodejs.org/api/fs.html#fs_fs_readfile_filename_options_callback
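Note that all of the calls above hand you a Buffer by default. If you want a string, pass an encoding; a minimal sketch:
var fs = require('fs');
// With an encoding, readFileSync returns a string instead of a Buffer
var text = fs.readFileSync('myFileName', 'utf8');
console.log(text);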
Related
I have a file readData.txt that contains comma-separated numbers like "10,20,30,40,50,........".
Now I want to write the sum of those values to another file called sumfile.txt. I'm using the fs.readFile and fs.writeFile functions, which are asynchronous.
I have tried using Promises and it worked, but I'm curious whether this can be done without Promises; that is what I'm trying to achieve.
If anybody knows any other way, I'll be thankful.
You can use the callback parameter of fs.readFile:
fs.readFile('/etc/passwd', (err, data) => {
  if (err) throw err;
  console.log(data);
});
You can use the callback parameter of fs.writeFile:
const data = new Uint8Array(Buffer.from('Hello Node.js'));
fs.writeFile('message.txt', data, (err) => {
  if (err) throw err;
  console.log('The file has been saved!');
});
EDIT
You can do this synchronously as well, using fs.readFileSync:
fs.readFileSync('<path to file>');
and fs.writeFileSync.
But it is better to keep things async. It is difficult at first, but all your struggles will be rewarded.
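For the readData.txt / sumfile.txt case from the question, here is a minimal callback-only sketch (no Promises), assuming the file holds comma-separated numbers:
const fs = require('fs');

fs.readFile('readData.txt', 'utf8', (err, data) => {
  if (err) throw err;
  // Sum the comma-separated values; non-numeric fragments are skipped
  const sum = data.split(',')
    .map(Number)
    .filter((n) => !Number.isNaN(n))
    .reduce((a, b) => a + b, 0);
  // Start the write only after the read has finished
  fs.writeFile('sumfile.txt', String(sum), (err) => {
    if (err) throw err;
    console.log('The sum has been saved!');
  });
});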
Currently I am using putObject to upload a large file to AWS S3 with a REST API call.
var params = {
  Bucket: 'lambdacushbu',
  Key: req.files.image.name,
  Body: req.files.image.data
};
s3.putObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else { // successful response
    console.timeEnd('Uploadtime');
    console.log("uploaded", data);
    res.json({
      'status': 'Uploaded',
      'url': data.Location
    });
  }
});
But it looks like it's asynchronous; I want the above in synchronous mode. Also, a timeout occurs even though the file is uploaded to AWS S3.
So how can I increase the timeout value? I tried the connect-timeout package:
app.use(timeout('600000'));
but it didn't work.
Try using the upload function instead of putObject. That should solve your timeout problem.
Here is a documentation for that function: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#upload-property
A synchronous call will definitely lower your app's performance. Can you provide more details about your problem so we can find an async solution?
EDIT:
Here is how you should return the response in your controller:
router.post('/your-route',
  // additional middlewares
  function(req, res, next) {
    var params = {
      Bucket: 'lambdacushbu',
      Key: req.files.image.name,
      Body: req.files.image.data
    };
    s3.upload(params, function(err, data) {
      if (err) { res.json(err); }
      else {
        res.json({
          'status': 'Uploaded',
          'url': data.Location
        });
      }
    });
  }
);
And make sure you don't call res.json() or res.send() anywhere else in this route.
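If you still hit the timeout on large files, you can also raise the SDK's own HTTP timeout when constructing the client (a sketch, assuming the AWS SDK for JavaScript v2, where the default is 120000 ms):
var AWS = require('aws-sdk');
// Allow up to 10 minutes per request instead of the default 2 minutes
var s3 = new AWS.S3({ httpOptions: { timeout: 600000 } });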
I started using Node.js recently and I'm trying to create an API that will get some information from the web, compile it, and show it to the user.
My question is the following:
router.get('/', function (req, res, next) {
  https.get(pageUrl, function (res) {
    res.on('data', function (responseBuffer) {
      //Important info;
      info = responseBuffer;
    });
  });
  res.render('page', { important: info });
});
How can I wait until I have the info variable and only then call res.render? Because right now, if I try to wait, the program just ends without the result.
Thanks.
Assuming your https.get call gives you a stream with an 'end' event [1], you can do the following:
router.get('/', function (req, res, next) {
  https.get(pageUrl, function (res) {
    var info;
    res.on('data', function (responseBuffer) {
      //Important info;
      info = responseBuffer;
    });
    res.on('end', function () {
      res.render('page', { important: info });
    });
  });
});
Note that the above code will not work because you shadowed the base res parameter with the res parameter from the https.get callback.
Also, note that the 'data' event may be emitted several times (again, assuming a standard stream implementation[1]), so you should accumulate the results inside your info variable.
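Putting both fixes together, a corrected sketch (the inner parameter renamed to response, chunks accumulated):
router.get('/', function (req, res, next) {
  https.get(pageUrl, function (response) { // renamed so it no longer shadows res
    var chunks = [];
    response.on('data', function (chunk) {
      chunks.push(chunk); // 'data' can fire many times
    });
    response.on('end', function () {
      var info = Buffer.concat(chunks).toString();
      res.render('page', { important: info });
    });
  });
});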
[1] Could you please post more information about your code, such as where the https library comes from (is it the standard HTTPS lib?).
Personal thought: I highly suggest using the request module, available on npm via npm install request, for HTTP(S) requests to external services. It has a neat interface, is simple to use, and handles a lot of situations for you (redirects are one example, JSON and Content-Type another).
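For example, with request the accumulation above collapses to a single callback (a sketch, reusing pageUrl from your code):
var request = require('request');

router.get('/', function (req, res, next) {
  request(pageUrl, function (err, response, body) {
    if (err) return next(err);
    res.render('page', { important: body }); // body is the full response text
  });
});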
I am using Node.js streams to fetch data from my LevelDB database (using leveldb). Here is a snippet of the code I use:
app.get('/my/route', function (req, res, next) {
  leveldb.createValueStream(options)
    .pipe(map.obj(function (hash) {
      level.get(somekeys, function (err, item) {
        return item
      })
    }))
    .pipe(res)
})
So far, I have been using the list-stream module to get the data on the client side.
Now I'd like to retrieve the information on the client side as a stream. I've read this post (http://www.kdelemme.com/2014/04/24/use-socket-io-to-stream-tweets-between-nodejs-and-angularjs/) on how to do it using socket.io, but I don't want to use socket.io.
Is there any simple way to do it?
This can be done with Shoe. You have to compile the client code with browserify, and then you can have a stream in the browser that receives the data from the server.
createValueStream is basically a read stream, so you can listen to its events, e.g. data and end: https://github.com/substack/node-levelup#createValueStream
You just need to listen to the end event to finish the stream.
app.get('/my/route', function (req, res, next) {
  leveldb.createValueStream(options)
    .pipe(map.obj(function (hash) {
      level.get(somekeys, function (err, item) {
        return item
      })
    }))
    .pipe(res)
    .on('end', res.end)
})
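If you'd rather use events than pipe, here is a minimal sketch of the same route (assuming the values are strings or Buffers that can be written to the response directly):
app.get('/my/route', function (req, res, next) {
  var stream = leveldb.createValueStream(options)
  stream.on('data', function (value) {
    res.write(value) // forward each value as it arrives
  })
  stream.on('end', function () {
    res.end() // close the HTTP response once the stream is done
  })
  stream.on('error', next)
})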
There are several tutorials that describe how to scrape websites with request and cheerio. In these tutorials they send the output to the console or stream the DOM with fs into a file as seen in the example below.
request(link, function (err, resp, html) {
  if (err) return console.error(err)
  var $ = cheerio.load(html),
      img = $('#img_wrapper').data('src');
  console.log(img);
}).pipe(fs.createWriteStream('img_link.txt'));
But what if I would like to process the output during script execution? How can I access the output or send it back to the calling function? I could, of course, load img_link.txt and get the information from there, but this would be too costly and doesn't make sense.
You can wrap request in a function that will call back with the HTML:
function getHtml(link, callback) {
  request(link, function (err, resp, body) {
    callback(err, body);
  });
}
Then assign it to module.exports and use it in any other module.
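For example (getHtml is just the name given to the wrapper above; file names are hypothetical):
// scraper.js
module.exports = getHtml;

// elsewhere
var getHtml = require('./scraper');
getHtml(link, function (err, html) {
  if (err) return console.error(err);
  // html is ready to be processed here
});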
Remove the pipe altogether.
request(link, function (err, resp, html) {
  if (err) return console.error(err);
  var $ = cheerio.load(html);
  var img = $('#img_wrapper').data('src'); // img now holds the src attr of the image
  return img; // note: this only returns from the callback, not to your calling code
});
Update
From your comments, it seems like your request function is working as expected, but the problem is rather accessing the data from another module.
I suggest you read this: Purpose of Node.js module.exports and how you use it.
This is also a good article describing how require and exports work.
1. Put the code above in a module.
2. Use module.exports to expose it.
3. Require the module in another file.
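A minimal sketch of all three steps (the file names imgScraper.js and main.js are hypothetical; the selector comes from your code):
// imgScraper.js
var request = require('request');
var cheerio = require('cheerio');

module.exports = function (link, callback) {
  request(link, function (err, resp, html) {
    if (err) return callback(err);
    var $ = cheerio.load(html);
    callback(null, $('#img_wrapper').data('src')); // hand the src attr to the caller
  });
};

// main.js
var getImgSrc = require('./imgScraper');
getImgSrc(link, function (err, img) {
  if (err) return console.error(err);
  console.log(img); // the src attr is available here
});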