I have an OLE storage file which needs to be written by one writer and read by multiple readers. When I try to open a stream with CFile::shareDenyWrite, the stream doesn't open; the call returns false. The stream opens if I use shareExclusive, but then I have to make the storage file shareExclusive as well.
Is there any way to open OleStorage files with one writer and multiple readers?
Yes, but you should be using StgOpenStorageEx to work with OLE storage files, NOT CFile. Read the remarks on STGM_DIRECT_SWMR (single writer, multiple readers) for more information:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa380342%28v=vs.85%29.aspx
You'll need to make sure that you've initialized COM before using COM-based functions (just add AfxOleInit() in your InitInstance and the framework handles the start-up and shut-down).
I work on Linux. How do I know that a gzip file is ready? I have a server that polls files in the directory /dir/. There is another, independent process that gzips files into /dir/. How can my server know that a file is ready?
There is no ready-made solution for this. Looking at the last-modification timestamp of the file (mtime) is not reliable, because writes could be delayed if the system is overloaded (or if the input to the gzip operation is not ready), or the generating process may stop writing because it has crashed.
Usually, when applications need to do something like this, they write the temporary file under a different name, following a specific pattern. The reading process recognizes the temporary files and skips them, assuming they are still a work in progress and incomplete. Once the writer is finished, it renames the file to its final name (which is an atomic operation), and only then does the reader pick it up. This approach became popular with Dan Bernstein's maildir format:
Using maildir format
In maildir, a different directory is used for staging, but the general principle is the same.
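The renaming trick is straightforward to implement on the writer side. Here is a minimal sketch in Node.js, assuming you control the process that produces the gzip files (the file names are made up for illustration): write under a temporary name that readers are taught to skip, then rename atomically once everything is flushed.

const fs = require('fs');
const zlib = require('zlib');

// Write under a temporary name that readers skip (here, a leading dot).
// rename() on POSIX is atomic within one filesystem, so a polling reader
// sees either no file or the complete file, never a partial one.
fs.createReadStream('/dir/input.dat')
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream('/dir/.output.gz.tmp'))
  .on('finish', () => {
    fs.rename('/dir/.output.gz.tmp', '/dir/output.gz', err => {
      if (err) throw err;
    });
  });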
It is also possible to use lock files and POSIX advisory locking, but they add complexity. In some cases, however, they can be employed in such a way that busy waiting/polling/periodic probing is not necessary.
How can I implement a system where multiple Node.js processes write to the same file with fs.createWriteStream, such that they don't overwrite each other's data? The default behaviour of fs.createWriteStream is that the file is cleared out when the method is called. My goal is to clear out the file once, and then have all subsequent writers only append data.
Should I use fs.createWriteStream and then fs.appendFile? Or is there a way to open up a stream for each process, not just for the first process to open the file?
Should I use fs.createWriteStream and then fs.appendFile?
You can use either.
With fs.createWriteStream you have to change the flags option like this:
fs.createWriteStream('your_file', {
  flags: 'a' // default is 'w' (truncate); 'a' appends, creating the file if needed ('a+' additionally opens it for reading)
})
This creates the file if it doesn't exist, or opens it with write access if it does, and sets the file position to the end (append mode).
How to use fs.appendFile should be clear from its name, and it does pretty much the same: it appends data to a file, creating the file if it doesn't exist.
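For example, a minimal fs.appendFile call looks like this (the file name is just an example):

const fs = require('fs');

// Appends the line, creating 'your_file' first if it does not exist.
fs.appendFile('your_file', 'one more line\n', err => {
  if (err) throw err;
});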
Now for the problem of multiple processes accessing the same file. The operating system will not serialize the writers for you: depending on the platform and flags, a second open for writing may either fail or succeed and interleave its data with the first writer's. Therefore you need to make each process wait for the file to be released before it writes. You will probably need a library for that.
this one for example: https://www.npmjs.com/package/lockup
or this one: https://github.com/Perennials/mutex-node
you can also find a lot more here: https://www.npmjs.com/browse/keyword/lock
or here: https://www.npmjs.com/browse/keyword/mutex
I have not tried any of those libraries, but the ones I posted and several others on the list should do exactly what you need.
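Since I haven't verified the exact APIs of the packages above, here is a sketch of the general shape using proper-lockfile, another lock package from npm (the file name is made up): acquire the lock, append, and release in a finally block so the lock can't leak.

const fs = require('fs');
const lockfile = require('proper-lockfile'); // npm install proper-lockfile

async function appendSafely(line) {
  // Waits (with retries) until no other cooperating process holds the lock.
  // Note: 'your_file' must already exist before it can be locked.
  const release = await lockfile.lock('your_file', { retries: 5 });
  try {
    await fs.promises.appendFile('your_file', line);
  } finally {
    await release();
  }
}

appendSafely('hello from process ' + process.pid + '\n');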
Writing to a single file from multiple processes while ensuring data integrity is a fairly complex operation that you can orchestrate using file locking.
However, you have two simpler approaches:
Writing to a temporary file for each process, and then concatenating the files at the end of the operations.
Sending what you need to write to a dedicated, single process and delegating the writing to it (see the sketch below). Keep in mind that sending messages among processes can be expensive.
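A minimal sketch of the second approach with Node's built-in cluster module (the file name is an example): the master process is the only one that opens the file, and the workers just send it messages.

const cluster = require('cluster');
const fs = require('fs');

if (cluster.isMaster) {
  // Only the master touches the file: a single writer, so no contention.
  const out = fs.createWriteStream('shared.log', { flags: 'a' });
  for (let i = 0; i < 4; i++) {
    const worker = cluster.fork();
    worker.on('message', msg => out.write(msg.line));
  }
} else {
  // Workers never open the file; they delegate the write to the master.
  process.send({ line: `hello from worker ${process.pid}\n` }, () => process.exit(0));
}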
I have created a worker thread.
One thread prints the natural numbers to a .txt file that it creates, and my intention is to open the same file and print the even numbers into it.
I am able to print into different files by creating a new .txt file in the second thread.
But I need the file created by the first thread to be opened so that the even numbers are printed there.
Please help me out.
There are a couple of ways I can think of to do this:
Use a critical section around the file open/write/close sections in each of the two threads (you'll probably need to close the file after each write, before you release the critical section).
Use a third thread to do all the file writing and just pass messages from the other two threads to it to tell it what to write to file.
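You didn't say which language you're working in, but the second option is the same pattern everywhere. Here is a sketch of it using Node's worker_threads, purely to illustrate the message-passing shape (the file name and the 1..10 range are made up): one thread owns the file, and the number-generating threads only post messages to it.

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');
const fs = require('fs');

if (isMainThread) {
  // The main thread is the single writer; the generator threads never touch the file.
  const out = fs.createWriteStream('numbers.txt');
  for (const kind of ['natural', 'even']) {
    new Worker(__filename, { workerData: kind })
      .on('message', line => out.write(line));
  }
} else {
  const step = workerData === 'even' ? 2 : 1;
  for (let n = step; n <= 10; n += step) {
    parentPort.postMessage(`${workerData}: ${n}\n`);
  }
}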
I'm a little confused between these methods; I hope somebody could enlighten me on the differences between fs.open->fs.write, fs.writeFile, and fs.createWriteStream.
fs.open and fs.write are for low-level access, similar to what you get when you code in C: fs.open opens a file and returns a file descriptor, and fs.write writes to it.
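For example (the file name is made up):

const fs = require('fs');

fs.open('example.txt', 'w', (err, fd) => {
  if (err) throw err;
  fs.write(fd, 'hello\n', err => {
    if (err) throw err;
    fs.close(fd, () => {}); // you manage the descriptor's lifetime yourself
  });
});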
A fs.WriteStream is a stream that opens the file in the background and queues writes until the file is ready. Also, as it implements the stream API, you can use it in a more generic way, just like a network stream. You'll want this when, for example, a user uploads a file to your server: take the incoming HTTP POST stream and pipe() it to the WriteStream. Very easy.
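For instance, that upload case sketched with the built-in http module (the file name and port are made up):

const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  // The request body is a readable stream; pipe it straight into the file.
  req.pipe(fs.createWriteStream('upload.bin'))
     .on('finish', () => res.end('stored\n'));
}).listen(8080);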
fs.writeFile is a high-level method for writing a bunch of data you have in RAM to a file. It doesn't support streaming, so it's a bad idea for large files or performance-critical work. You'll want it if you write out small JSON files or the like in your code.
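For example, dumping a small object as JSON (the file name is made up):

const fs = require('fs');

const config = { retries: 3, verbose: true };
fs.writeFile('config.json', JSON.stringify(config, null, 2), err => {
  if (err) throw err;
});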
Forget for a second the question of why on earth you would do such a thing - if, for whatever reason, two FileAppenders are configured with the same file, will this setup work?
Log4j's FileAppender does not allow two JVMs to write to the same file. If you try, you'll get a corrupt log file. However, logback, log4j's successor, allows two appenders, even in different JVMs, to write to the same file in its prudent mode.
It doesn't directly answer your question, but log4net's FileAppender has a LockingModel property that you can set to lock only when the file is actually in use. So if you had two FileAppenders working in the same thread with MinimalLock set, it would probably work perfectly fine. On different threads, you might hit a deadlock once in a while.
The FileAppender supports pluggable file locking models via the LockingModel property. The default behavior, implemented by FileAppender.ExclusiveLock is to obtain an exclusive write lock on the file until this appender is closed. The alternative model, FileAppender.MinimalLock, only holds a write lock while the appender is writing a logging event.
A cursory web search didn't turn up any useful results about implementing MinimalLock in log4j.
From Log4j FAQ a3.3
How do I get multiple processes to log to the same file?
You may have each process log to a SocketAppender. The receiving SocketServer (or SimpleSocketServer) can receive all the events and send them to a single log file.
As to what that actually means in practice, I will be investigating myself.
I also found the following workaround on another SO question:
Code + Example