I am using abcPDF to dynamically create PDFs.
I want to save these PDFs for clients to retrieve any time they want. The easiest way (and the way I do it now on my current server) is to simply save the finished PDF to the file system.
It seems I am stuck with using blobs. Luckily, abcPDF can save to a stream as well as to a file. Now, how do I wire up a stream to a blob? I have found code that shows the blob taking a stream, like:
blob.UploadFromStream(theStream, options);
The abcPDF function looks like this:
theDoc.Save(theStream)
I do not know how to bridge this gap.
Thanks!
Brad
As an alternative that doesn't require holding the entire file in memory, you might try this:
using (var stream = blob.OpenWrite())
{
theDoc.Save(stream);
}
EDIT
Adding a caveat here: if the save method requires a seekable stream, I don't think this will work.
Given the situation, and not knowing the full list of overloads of abcPDF's Save() method, it seems that you need a MemoryStream. Something like:
using(MemoryStream ms = new MemoryStream())
{
theDoc.Save(ms);
ms.Seek(0, SeekOrigin.Begin);
blob.UploadFromStream(ms, options);
}
This should do the job. But if you are dealing with big files and you are expecting a lot of traffic (lots of simultaneous PDF creations), you might just go for a temp file. Write the PDF to a temp file, then immediately upload the temp file to the blob.
I have a UDP client that grabs some data from another source and writes it to a file on the server. Since this is a large amount of data, I don't want the end user to wait until it is fully written to the server before they can download it. So I made a Node.js server that grabs the latest data from the file and sends it to the user.
Here is the code:
var stream = fs.createReadStream(filename)
    .on("data", function(data) {
        response.write(data);
    });
The problem here is: if the download starts when the file is only, for example, 10 MB, fs.createReadStream will only read my file up to 10 MB. Even if, 2 minutes later, the file has increased to 100 MB, the stream will never know about the new data. How can I do this in Node? I would like to somehow refresh the fs state, or perhaps wait for new data using the fs module. Or is there some kind of fs file-content watcher?
EDIT:
I think the code below better describes what I would like to achieve; however, in this code it keeps reading forever, and I don't have any variable from fs.read that can help me stop it:
fs.open(filename, 'r', function(err, fd) {
    var bufferSize = 1000,
        chunkSize = 512,
        buffer = new Buffer(bufferSize),
        bytesRead = 0;

    while (true) { // check if file has new content inside
        fs.read(fd, buffer, 0, chunkSize, bytesRead);
        bytesRead += buffer.length;
    }
});
Node has a built-in method for this in the fs module. It is tagged as unstable, so it can change in the future.
It's called fs.watchFile(filename[, options], listener).
You can read more about it here: https://nodejs.org/api/fs.html#fs_fs_watchfile_filename_options_listener
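For the tail-follow case in the question, a minimal sketch might look like this (it assumes the file only ever grows, and reuses the `filename` and `response` variables from the question):
var fs = require('fs');

// Poll the file's stats; whenever it grows, stream only the newly appended bytes.
fs.watchFile(filename, { interval: 1000 }, function (curr, prev) {
    if (curr.size > prev.size) {
        fs.createReadStream(filename, { start: prev.size, end: curr.size - 1 })
            .on('data', function (data) {
                response.write(data);
            });
    }
});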
But I highly suggest you use one of the actively maintained modules instead, like
watchr.
From its readme:
Better file system watching for Node.js. Provides a normalised API to the
file watching APIs of different node versions, nested/recursive file
and directory watching, and accurate detailed events for
file/directory changes, deletions and creations.
The module page is here: https://github.com/bevry/watchr
(I've used the module in a couple of projects and it works great; I'm not related to it in any other way.)
You need to store the last known size of the file somewhere, for example in a database.
Read the file size first.
Load your file.
Then make a script that checks whether the file has changed.
You can query the size with jQuery.post to obtain your result and decide in JavaScript whether you need to reload.
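As a rough sketch of that polling idea on the Node side (the /filesize path and port are invented for this illustration; `filename` is the file from the question), the client remembers the last size it saw and asks again later:
var http = require('http');
var fs = require('fs');

http.createServer(function (request, response) {
    if (request.url === '/filesize') {
        // Report the current size; the client compares it with the last size it stored.
        fs.stat(filename, function (err, stats) {
            if (err) {
                response.writeHead(500);
                return response.end();
            }
            response.writeHead(200, { 'Content-Type': 'application/json' });
            response.end(JSON.stringify({ size: stats.size }));
        });
    } else {
        response.writeHead(404);
        response.end();
    }
}).listen(3000);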
Hi there, I'm using PrimeFaces 5 / JSF 2 and Tomcat!
Can someone show me or give me an idea of how to store PDFs for a limited time on an application server (I'm using Tomcat) and then download them (if that's what the user requests)? This functionality relates to invoices, so I can't use the dataExporter.
To be more specific, I have pretty much implemented this, but I don't feel so sure about it. One big question is... where do I store my generated files? I've browsed around and people said that it's not OK to save the files in the web app or in the Tomcat directory. What other solution do I have?
Make use of the File#createTempFile() facility. The servlet-container-managed temporary folder is available as an application-scoped attribute with ServletContext.TEMPDIR as the key.
File tempDir = (File) externalContext.getApplicationMap().get(ServletContext.TEMPDIR);
File tempPdfFile = File.createTempFile("generated-", ".pdf", tempDir);
// Write to it.
Then just pass the autogenerated file name around to the one responsible for serving it. E.g.
String tempPdfFileName = tempPdfFile.getName();
// ...
Finally, once the one responsible for serving it is called with the file name as parameter, for example a simple servlet, then just stream it as follows:
File tempDir = (File) getServletContext().getAttribute(ServletContext.TEMPDIR);
File tempPdfFile = new File(tempDir, tempPdfFileName);
response.setHeader("Content-Type", "application/pdf");
response.setHeader("Content-Length", String.valueOf(tempPdfFile.length()));
response.setHeader("Content-Disposition", "inline; filename=\"generated.pdf\"");
Files.copy(tempPdfFile.toPath(), response.getOutputStream());
See also:
How to save generated file temporarily in servlet based web application
Recommended way to save uploaded files in a servlet application
Your question is vague, but if my understanding is correct:
First, if you want to store the PDFs for a limited time, you can create a job that cleans up your PDFs every day, week, or whatever interval you need.
For the download side, you can use <p:fileDownload> (http://www.primefaces.org/showcase/ui/file/download.xhtml) to download any file from the application server.
I'm writing records to a file in Node.js and I need to rotate the file with a new one every so many lines or after a duration, but I can't lose any lines in the process. If I use fs.createWriteStream to create a new stream, I end up losing lines by overwriting the old stream. Any advice would be much appreciated.
Don't overwrite the old stream. Create the new stream as a separate resource.
var activestream;

function startup() {
    activestream = fs.createWriteStream('path');
}

function record(line) {
    activestream.write(line);
}

function rotate() {
    var newstream = fs.createWriteStream('path2');
    activestream.end();
    activestream = newstream;
}
... something like that should work. Obviously you'll have to figure out how to manage the paths.
I am going to give an unconventional answer here.
You can consider a ready-made library like winston, which comes with well-tested functionality to do exactly what you want. Granted, it's meant for writing logs, but you can write your CSV entries just as easily.
Another big advantage of using winston is that it supports multiple transports, so you can not only write to files and rotate them, you can also write to other media such as MongoDB and a few others.
It also does things like conditional writing, and you can define custom log levels to write different kinds of records to your file.
I recommend you check it out before cooking your own solution.
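For illustration, a minimal sketch using winston's size-based file rotation (assuming winston 3.x; the file name and limits here are made up):
var winston = require('winston');

// File transport with built-in size-based rotation.
var recorder = winston.createLogger({
    // Write the raw message only, so the file contains plain records (e.g. CSV rows).
    format: winston.format.printf(function (info) { return info.message; }),
    transports: [
        new winston.transports.File({
            filename: 'records.csv',   // illustrative name
            maxsize: 10 * 1024 * 1024, // rotate once the file reaches ~10 MB
            maxFiles: 5                // keep the five most recent files
        })
    ]
});

recorder.info('some,csv,record');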
I would recommend writing your own stream manager similar to Jason's, but instead of ending the stream when rotate is requested, let the write finish, pause the stream, rotate the file, then resume the stream. Only one stream per file should be required and you shouldn't need to recreate it.
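A loose sketch of that pause-and-rotate idea (the file names are invented; lines that arrive while the file is being swapped are held and replayed so nothing is lost):
var fs = require('fs');

var active = fs.createWriteStream('records-0.log');
var pending = [];      // lines that arrive while a rotation is in progress
var rotating = false;
var index = 0;

function record(line) {
    if (rotating) {
        pending.push(line); // hold the line instead of dropping it
    } else {
        active.write(line);
    }
}

function rotate() {
    rotating = true;
    active.end(function () { // let queued writes flush, then close the old file
        index += 1;
        active = fs.createWriteStream('records-' + index + '.log');
        rotating = false;
        while (pending.length) {
            active.write(pending.shift()); // replay the held lines into the new file
        }
    });
}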
So this is the setup I'm working with:
I am on an express server which must stream an archived binary payload to a browser (does not matter if it is zip, tar or tar.gz - although zip would be nice).
On this server, I have a websocket open that connects to another server which is sending me binary payloads of individual files in a directory. I get these payloads streamed, piece-by-piece, as buffers, and I'm doing this serially (that is - file-by-file - there aren't multiple websockets open at one time, and there is one websocket per file). This is the websocket library I'm using: https://github.com/einaros/ws
I would like to go through each file, open a websocket, and then append the buffers to an archiver as they come through the websockets. When data is appended to the archiver, it would be nice if I could stream the output of the archiver to the browser (via the response object with response.write). So, basically, as I'm getting the payload from the websocket, I would like that payload streamed through an archiver and then to the response. :-)
Some things I have looked into:
node-zipstream - This is nice because it gives me an output stream I can pipe directly to response.write. However, it doesn't appear to support nested files/folders, and, more importantly, it only accepts an input stream. I have looked at the source code (which is quite terse and readable), and it seems as though, if I were able to have access to the update function within ZipStream.prototype.addFile, I could just call that each time on the message event when I get a binary buffer from the websocket. This is quite messy/hacky though, and, given that this library already doesn't seem to support nested files/folders, I'm not sure I will be going with it.
node-archiver - This suffers from the same issue as node-zipstream (probably because it was inspired by it) where it allows me to pipe the output, but I cannot append multiple buffers for the same file within the archive (and then manually signal when the last buffer has been appended for a given file). However, it does allow me to have nested folders, which is a clear win over node-zipstream.
Is there something I'm not aware of, or is this just a really crazy thing that I want to do?
The only alternative I see at this point is to wait for the entire payload to be streamed through a websocket and then append with node-archiver, but I really would like to reap the benefit of true streaming/archiving on-the-fly.
I've also thought about the possibility of creating a read stream of sorts just to serve as a proxy object that I can pass into node-archiver and then just append the buffers I get from the websocket to this read stream. Looking at various read streams, I'm not sure how to do this though. The only way I could think of was creating a writestream, piping buffers to it, and having a readstream read from that writestream. Am I on the correct thought process here?
As always, thanks for any help/direction you can offer SO community.
EDIT:
Since I just opened this question, and I'm new to node, there may be a better answer than the one I provided. I will keep this question open and accept a better answer if one presents itself within a few days. As always, I will upvote any other answers, even if they're ridiculous, as long as they're correct and allow me to stream on-the-fly as mine does.
I figured out a way to get this working with node-archiver. :-)
It was based off my hunch of creating a temporary "proxy stream" of sorts, inspired by this SO question: How to create streams from string in Node.Js?
The basic gist is (CoffeeScript syntax):
archive = archiver 'zip'
archive.pipe response # where response is the http response

# and then for each file...
fileName = ... # known file name
fileSize = ... # known file size
ws = ...       # create websocket
proxyStream = new Stream()
numBytesStreamed = 0

archive.append proxyStream, name: fileName

ws.on 'message', (dataBuffer) ->
  numBytesStreamed += dataBuffer.length
  proxyStream.emit 'data', dataBuffer
  if numBytesStreamed is fileSize
    proxyStream.emit 'end'
    # function/indicator to do this for the next file in the folder

# and then when you're completely done...
archive.finalize (err, bytesOfArchive) ->
  if err?
    # do whatever
  else
    # unless you somehow knew this ahead of time
    response.addTrailers
      'Content-Length': bytesOfArchive
    response.end()
Note that this is not the complete solution I implemented. There is still a lot of logic dealing with getting the files, their paths, etc. Not to mention error-handling.
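For reference, the same proxy-stream idea can also be written in plain JavaScript with stream.PassThrough instead of a bare Stream (the appendViaWebSocket name is made up; fileName, fileSize, ws, and response are obtained the same way as above, and archive.finalize is still called once every file has been appended):
var archiver = require('archiver');
var stream = require('stream');

var archive = archiver('zip');
archive.pipe(response); // where response is the http response

// ... and then for each file:
function appendViaWebSocket(fileName, fileSize, ws) {
    var proxy = new stream.PassThrough();
    var received = 0;

    archive.append(proxy, { name: fileName });

    ws.on('message', function (dataBuffer) {
        received += dataBuffer.length;
        proxy.write(dataBuffer); // feed the archiver as the data arrives
        if (received >= fileSize) {
            proxy.end();         // tell the archiver this entry is complete
        }
    });
}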
When I pipe something like an image file through a stream, is there any way to send a meta object along with it?
My server gets sent an image from a user. The image gets pushed through a set of streams that perform various actions.
The final stream emits a data event, it passes the resulting image buffer into a callback but I lose all context for the user. I need to keep the resulting image tied to the user's id and some other meta data.
Ideal:
stream.on('data', function(img, meta){
...
})
Thanks for any possible solutions!
In short, no, there's nothing built into Node.js to support including metadata with streams. You do have some other options, though, including:
You could use a closure to track the meta data separately from the stream. For example:
function handleImage(imageStream) {
    var meta = {...};
    imageStream.pipe(otherStreams).on('data', function(image) {
        // you now have `image` and `meta` variables at your disposal here.
    });
}
The downside of this is that the metadata is not available to your otherStreams.
This is a good solution if your other streams are third-party code outside of your control, or if they don't need to know about the metadata.
You could do something similar to HTTP headers, where all the data up to a certain point is metadata, and everything after it is the image. (In HTTP, the delimiter is wherever \n\n occurs first.) All of your streams in the chain have to know about this and handle it, though.
If you know your metadata will always be in one chunk and none of your streams split or merge chunks, then you could simplify this a bit and just say that the first (or last) chunk is always metadata.
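As a rough sketch of that simplified "first chunk is metadata" variant (the MetaPrefixed name and the 'meta' event are invented here, and it assumes the metadata really does arrive as a single complete JSON chunk):
var stream = require('stream');
var util = require('util');

function MetaPrefixed(options) {
    stream.Transform.call(this, options);
    this.sawMeta = false;
}
util.inherits(MetaPrefixed, stream.Transform);

MetaPrefixed.prototype._transform = function (chunk, encoding, callback) {
    if (!this.sawMeta) {
        this.sawMeta = true;
        this.emit('meta', JSON.parse(chunk.toString())); // hand the metadata to listeners
        return callback();                               // and don't pass it downstream
    }
    callback(null, chunk); // everything after the first chunk is image data
};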
Switch to an object stream like Amoli mentioned in his answer. Here you would pass {image: imgData, meta: {...}}. You would then have to update your other streams to expect this format.
The main downside of this method, though, is that you either have to pass the metadata multiple times, cache it somewhere for each stream that needs it, or pass your entire image as one chunk (which kind of kills the entire point of "streams"). And, from what I've been told, node.js can optimize text/binary streams better than object streams. So, this probably isn't a good approach for your situation.
https://github.com/dominictarr/mux-demux might be helpful here. It combines multiple streams into one, so you could have separate image and meta streams. I'm not sure how well it would work for your situation though. You'd probably need to update all of your streams to be aware of it.
I know I said that all but the first option require modifying the other streams, but there is a way around that: you could create a generic "stream wrapper" that splits up the image and meta data and passes just the image data through the main stream, and has the meta data bypass it and go on to the next one down the chain. This gets ugly fast though, so probably not the best idea.
Basically, whenever you want to read or write any objects that are not strings or buffers, you'll need to put your stream into objectMode.
Example (source):
var stream = require('stream');
var util = require('util');

function S3Lister (s3, options) {
    options || (options = {});
    stream.Readable.call(this, { objectMode : true });

    this.s3         = s3; // a knox-like client.
    this.marker     = options.start;
    this.connecting = false;
    this.ended      = false;
}
util.inherits(S3Lister, stream.Readable);
We set the stream to use objectMode as we want to return not just data but also some metadata.
For more information:
Node.js Docs stream object mode
An introduction to Node's streams
I created a module called metastream for this type of thing. (It is in npm).