I am using QFileSystemWatcher to monitor changes to a log file.
The log file is created and updated with the Boost.Log library.
When I log several messages in one method, the fileChanged signal is emitted only once (for the last message), even though I can see the file being updated after each log message is added.
So, the code for the QFileSystemWatcher is:
std::string fn = "app.log";
logging::init_log(fn);
QFileSystemWatcher* watcher = new QFileSystemWatcher();
auto success = QObject::connect(watcher, SIGNAL(fileChanged(QString)), this, SLOT(handleFileChanged(QString)));
Q_ASSERT(success);
watcher->addPath(QString::fromStdString(fn));
Adding the log messages:
void a(){
/* some code */
logging::write_log("test error", logging::kError);
logging::write_log("test info", logging::kInfo);
}
QFileSystemWatcher emits the signal only once, for the Info-level message.
In a file manager I can see the file being updated after each call ("test error", "test info").
In the log file initialization I use
sink->locked_backend()->auto_flush(true);
so the file updates immediately.
How can I fix this? Or is there another approach to handling log file updates so that the messages can be shown in a GUI?
Similar filesystem event notifications are usually collapsed into one unless they are consumed by a reader. For example, if the writer writes 10 bytes to a file, the thread that monitors that file for writes will typically see one event instead of 10. This is explicitly outlined in the notes of the inotify man page on Linux, which is likely what QFileSystemWatcher uses internally.
This should not matter for any correct implementation of filesystem monitoring software. The notification only allows the monitor to notice that some event happened (e.g. a write occurred); it is up to the software to discover further details about the event (e.g. the amount of data that was written, and the writing position).
If you aim to just display the written logs, you should be able to simply read the file contents from the current reading position to the end of the file. That read operation may return one log record or more. It can also return an incomplete log record, if the C++ standard library is implemented in a certain way (e.g. when auto_flush is disabled, and the stream buffer fills its internal buffer with part of the log record content before issuing the write). The monitoring software should parse the read content to separate log records and detect incomplete ones (e.g. by splitting the data on newline characters).
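As a rough illustration, the handleFileChanged slot from the question could track the last read offset in a member variable and consume everything appended since. This is a minimal sketch under those assumptions; MyWidget, m_pos (a qint64 member initialized to 0) and appendToGui are invented names, not part of the original code.
#include <QFile>

void MyWidget::handleFileChanged(const QString &path)
{
    QFile file(path);
    if (!file.open(QIODevice::ReadOnly | QIODevice::Text))
        return;

    file.seek(m_pos);                        // continue from the previous position
    const QByteArray chunk = file.readAll(); // read everything up to EOF
    m_pos = file.pos();                      // remember where we stopped

    // Split the chunk into records; the trailing element may be an
    // incomplete record if the writer has not flushed a full line yet.
    for (const QByteArray &line : chunk.split('\n')) {
        if (!line.isEmpty())
            appendToGui(QString::fromUtf8(line));
    }
}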
I'm writing a client-side application which should read in a file, transform its content and then export the result. To do this, I decided on Re-Frame.
Now, I'm just starting to wrap my head around Re-Frame and ClojureScript itself, and I got the following to work:
Somewhere in my view functions, I send this whenever a new file gets selected via a simple HTML input.
[:input {:class "file-input" :type "file"
:on-change #(re-frame/dispatch
[::events/file-name-change (-> % .-target .-value)])}]
What I get is something like C:\fakepath\file-name.txt, with fakepath actually being part of it.
My event handler currently only splits the path and saves the file name, to which my input above is subscribed in order to display the selected file.
(re-frame/reg-event-db
::file-name-change
(fn [db [_ new-name]]
(assoc db :file-name (last (split new-name #"\\")))))
Additionally, I want to read in the file so I can process it locally later. Assuming I'd just change my on-change action and the event handler to do this instead, how would I go about it?
I've searched for a while but found next to nothing. The only things that came up were other frameworks and such, but I don't want to introduce a new dependency for each and every new problem.
I'm assuming you want to do everything in the client using HTML5 APIs (e.g. no actual upload to a server).
This guide from MDN may come in handy: https://developer.mozilla.org/en-US/docs/Web/API/File/Using_files_from_web_applications
It seems you can subscribe to the event triggered when the user selects the file(s); then you can obtain a list of said files and inspect the files' contents through the File API: https://developer.mozilla.org/en-US/docs/Web/API/File
In your case, you'll need to save a reference to the FileList object from the event somewhere, and re-use it later.
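For instance, here is a hedged ClojureScript sketch of the on-change handler using js/FileReader; the ::events/file-contents-change handler name is made up for illustration, not part of your code.
[:input {:class "file-input" :type "file"
         :on-change
         (fn [e]
           (let [file   (aget (-> e .-target .-files) 0)
                 reader (js/FileReader.)]
             ;; dispatch the file contents once the asynchronous read finishes
             (set! (.-onload reader)
                   #(re-frame/dispatch
                     [::events/file-contents-change (-> % .-target .-result)]))
             (.readAsText reader file)))}]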
I'm trying to use Tail (https://www.npmjs.com/package/tail) to export Minecraft server log data to Discord (the Discord bot part works, so I have excluded it from here).
If I say something in the game and then check "latest.log", it has been changed accordingly. However, using this script, the bot only sees a change if I open "latest.log" in Notepad; it doesn't work otherwise. The bot will recognize changes as long as "latest.log" is open in the background, which is an annoyance but not too big of a deal.
However, my friend is the one I was making this for, and for him Tail only updates the moment he opens "latest.log". That means he would need to keep opening that file for Tail to see the changes, instead of just letting it run in the background.
const Tail = require('tail').Tail;

const fileToTail = "C:/Users/user/Downloads/logs/latest.log";
const tail = new Tail(fileToTail);

tail.on("line", function(data) {
  // Working code that sends data
});

tail.on("error", function(error) {
  console.log('ERROR: ', error);
});
What could be causing the discrepancy between the two of us, and what can I do so that the bot can see the file changes without the user opening the file? Thanks in advance!
If you are using chokidar, you should pay attention to whether you are using fs.watch vs. fs.watchFile. If you are using fs.watch, you may not successfully catch changes (which might be what you are experiencing).
See below, from the official docs for chokidar's options:
usePolling (default: false). Whether to use fs.watchFile (backed by
polling), or fs.watch. If polling leads to high CPU utilization,
consider setting this to false. It is typically necessary to set this
to true to successfully watch files over a network, and it may be
necessary to successfully watch files in other non-standard
situations. Setting to true explicitly on MacOS overrides the
useFsEvents default. You may also set the CHOKIDAR_USEPOLLING env
variable to true (1) or false (0) in order to override this option.
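For example, a minimal sketch that forces polling (the path is taken from the question; the interval value is an arbitrary choice):
const chokidar = require('chokidar');

// usePolling makes chokidar fall back to fs.watchFile-style polling,
// which tends to be more reliable on Windows and over network drives
const watcher = chokidar.watch('C:/Users/user/Downloads/logs/latest.log', {
  usePolling: true,
  interval: 500 // poll every 500 ms
});

watcher.on('change', function(path) {
  console.log(path + ' changed');
});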
I am trying to implement Paul Calhoun's Apache FOP solution for creating PDFs from XPages (from Notes In 9 #102). I am getting the following Java exception when trying to run the xAgent that does the processing: Can't get a Writer while an OutputStream is already in use
The only change that I have made to Paul's code was the package name. I have isolated the exception to the SSJS line: var jce: DominoXMLFO2PDF = new DominoXMLFO2PDF(); All that line does is instantiate the class; there is no custom constructor. I don't believe it is the code itself, but some configuration issue. The SSJS code is in the beforeRenderResponse event where it should be; I haven't changed anything on the xAgent.
I have copied the jar files from Paul's sample database to mine, and I have verified that the build paths are the same between the two databases. Everything compiles fine (after I did all this). This appears to be an XPages-only exception.
Here's what's really going on with this error:
XPages are essentially servlets... everything that happens in an XPage is just layers on top of a servlet engine. There are basically two types of data that a servlet can send back to whatever is initiating the connection (e.g. a browser): text and binary.
An ordinary XPage sends text -- specifically, HTML. Some xAgents also send text, such as JSON or XML. In any of these scenarios, however, Domino uses a Java Writer to send the response content, because Writers are optimized for sending Character data.
When we need to send binary content, we use an OutputStream instead, because streams are optimized for sending generic byte data. So if we're sending PDF, DOC/XLS/PPT, images, etc., we need to use a stream, because we're sending binary data, not text.
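To make that concrete, here is a hedged SSJS sketch of the binary case (the content type and variable names are assumptions for illustration, not Paul's actual code):
var response = facesContext.getExternalContext().getResponse();
response.setContentType("application/pdf"); // promise binary content
var out = response.getOutputStream();       // a stream, not a writer
// ... write the generated PDF bytes to `out` ...
out.flush();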
The catch (as you'll soon see, that's a pun) is that we can only use one per response.
Once any HTTP client is told what the content type of a response is, it makes assumptions about how to process that content. So if you tell it to expect application/pdf, it's expecting to only receive binary data. Conversely, if you tell it to expect application/json, it's expecting to only receive character data. If the response includes any data that doesn't match the promised content type, that nearly always invalidates the entire response.
So Domino in its infinite wisdom protects us from making this mistake by only allowing us to send one or the other in a single request, and throws an exception if we disobey that rule.
Unfortunately... if there's any exception in our code when we're trying to send binary content, Domino wants to report that to the consumer... which tries to invoke the output writer to send HTML reporting that something went wrong. Except we already got a handle on the output stream, so Domino isn't allowed to get a handle on the output writer, because that would violate its own rule against only using one per response. This, in turn, throws the exception you reported, masking the exception that actually caused the problem (in your case, probably a ClassNotFoundException).
So how do we make sure that we see the real problem, and not this misdirection? We try:
try {
  /*
   * Move all your existing code here...
   */
} catch (e) {
  print("Error generating dynamic PDF: " + e.toString());
} finally {
  facesContext.responseComplete();
}
There are two reasons this is a preferred approach:
If something goes wrong with our code, we don't let Domino throw an exception about it. Instead, we log it (instead of using print to send it to the console and log, you could also toss it to OpenLog, or whatever your preferred logging mechanism happens to be). This means that Domino doesn't try to report the error to the user, because we've promised that we already reported it to ourselves.
By moving the crucial facesContext.responseComplete() call (which is what ultimately tells Domino not to send any content of its own) to the finally block, this ensures it will get executed. If we left it inside the try block, it would get skipped if an exception occurs, because we'd skip straight to the catch... so even though Domino isn't reporting our exception because we caught it, it still tries to invoke the response writer because we didn't tell it not to.
If you follow the above pattern, and something's wrong with your code, then the browser will receive an incomplete or corrupt file, but the log will tell you what went wrong, rather than reporting an error that has nothing to do with the root cause of the problem.
I almost deleted this question, but decided to answer it myself since there is very little out on Google when you search for the exception.
The issue was in the xAgent: there is an importPackage line that was incorrect. Fixing this made everything work. The exception verbiage, "Can't get a Writer while an OutputStream is already in use", is quite misleading. I don't know what else triggers this exception, but an alternative description would be "Java class ??yourClass?? not found".
If you found this question, then you likely have the same issue. I would ignore what the exception actually says and check your package statements throughout your application. The Java code will error on its own, but the SSJS that references the Java will not error until runtime; focus on that code.
Updating the response headers after writing the body can solve this kind of problem, for example:
HttpServletResponse response = (HttpServletResponse) facesContext.getExternalContext().getResponse();
response.getWriter().write("<html><body>...</body></html>");
response.setContentType("text/html");
response.setHeader("Cache-Control", "no-cache");
response.setCharacterEncoding("UTF-8");
I am writing a component that reads data from a specific file type. Currently, it has a property for the file path; I would like for this block to quit as hard as possible when passed an invalid file or no file is found.
Throwing an exception causes it to stop execution, but it also deletes the block from the chalkboard while I am testing (?), which makes me think there is a more "approved" way to do it.
My current solution is something like:
LOG_ERROR( MyReader_i, "Unable to open file at " + Filepath );
return FINISH;
Is there another way to stop if something is wrong, that will hopefully stop all downstream processing as well?
Have you taken a look at the Data Reader component in the basic components? It also has a file path as an input. It deals with this during the onConfigure call as shown below:
def onconfigure_prop_InputFile(self, oldvalue, newvalue):
    self.InputFile = newvalue
    if not os.path.exists(self.InputFile):
        self._log.error("InputFile path provided can not be accessed")
And then again in the service function by returning NOOP.
def process(self):
    if (self.Play == False):
        return NOOP
    if not (os.path.exists(self.InputFile)):
        return NOOP
This isn't the only way to deal with invalid input, however. It's a design decision that is up to the developer.
If you'd like additional components downstream to know about an issue elsewhere in the chain, you have a few options. You could use the End of Stream (EOS) bit, available in bulkio port implementations, to signal to downstream components that there is no additional data; they can then use this information to clean up and shut down (see the sketch below). You could also use messaging to send a message out to an event channel, and anyone who has subscribed to that channel can be made aware of it. Again, it's a design decision.
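As a rough sketch of the EOS option (the port name port_dataFloat_out and the helper name are assumptions for illustration; the pushPacket(data, timestamp, EOS, streamID) signature is from the bulkio Python API):
import bulkio

def finish_stream(self, stream_id):
    # an empty packet with EOS=True tells downstream consumers that
    # this stream is finished, so they can clean up and shut down
    self.port_dataFloat_out.pushPacket([], bulkio.timestamp.now(), True, stream_id)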
So this is the setup I'm working with:
I am on an express server which must stream an archived binary payload to a browser (does not matter if it is zip, tar or tar.gz - although zip would be nice).
On this server, I have a websocket open that connects to another server which is sending me binary payloads of individual files in a directory. I get these payloads streamed, piece-by-piece, as buffers, and I'm doing this serially (that is - file-by-file - there aren't multiple websockets open at one time, and there is one websocket per file). This is the websocket library I'm using: https://github.com/einaros/ws
I would like to go through each file, open a websocket, and then append the buffers to an archiver as they come through the websocket. When data is appended to the archiver, it would be nice if I could stream the output of the archiver to the browser (via the response object with response.write). So, basically, as I'm getting the payload from the websocket, I would like that payload streamed through an archiver and then to the response. :-)
Some things I have looked into:
node-zipstream - This is nice because it gives me an output stream I can pipe directly to response.write. However, it doesn't appear to support nested files/folders, and, more importantly, it only accepts an input stream. I have looked at the source code (which is quite terse and readable), and it seems as though, if I were able to have access to the update function within ZipStream.prototype.addFile, I could just call that each time on the message event when I get a binary buffer from the websocket. This is quite messy/hacky though, and, given that this library already doesn't seem to support nested files/folders, I'm not sure I will be going with it.
node-archiver - This suffers from the same issue as node-zipstream (probably because it was inspired by it) where it allows me to pipe the output, but I cannot append multiple buffers for the same file within the archive (and then manually signal when the last buffer has been appended for a given file). However, it does allow me to have nested folders, which is a clear win over node-zipstream.
Is there something I'm not aware of, or is this just a really crazy thing that I want to do?
The only alternative I see at this point is to wait for the entire payload to be streamed through a websocket and then append with node-archiver, but I really would like to reap the benefit of true streaming/archiving on-the-fly.
I've also thought about the possibility of creating a read stream of sorts just to serve as a proxy object that I can pass into node-archiver, and then just appending the buffers I get from the websocket to this read stream. Looking at various read streams, I'm not sure how to do this, though. The only way I could think of was creating a write stream, piping buffers to it, and having a read stream read from that write stream. Am I on the right track here?
As always, thanks for any help/direction you can offer, SO community.
EDIT:
Since I just opened this question, and I'm new to node, there may be a better answer than the one I provided. I will keep this question open and accept a better answer if one presents itself within a few days. As always, I will upvote any other answers, even if they're ridiculous, as long as they're correct and allow me to stream on-the-fly as mine does.
I figured out a way to get this working with node-archiver. :-)
It was based off my hunch of creating a temporary "proxy stream" of sorts, inspired by this SO question: How to create streams from string in Node.Js?
The basic gist is (CoffeeScript syntax):
archiver = require 'archiver'
{Stream} = require 'stream'

archive = archiver 'zip'
archive.pipe response # where response is the http response

# and then for each file...
fileName = ... # known file name
fileSize = ... # known file size
ws = ... # create websocket
proxyStream = new Stream()
numBytesStreamed = 0

archive.append proxyStream, name: fileName

ws.on 'message', (dataBuffer) ->
  numBytesStreamed += dataBuffer.length
  proxyStream.emit 'data', dataBuffer
  if numBytesStreamed is fileSize
    proxyStream.emit 'end'
    # function/indicator to do this for the next file in the folder

# and then when you're completely done...
archive.finalize (err, bytesOfArchive) ->
  if err?
    # do whatever
  else
    # unless you somehow knew this ahead of time
    response.addTrailers
      'Content-Length': bytesOfArchive
    response.end()
Note that this is not the complete solution I implemented. There is still a lot of logic dealing with getting the files, their paths, etc. Not to mention error-handling.
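As a side note, on newer Node versions a stream.PassThrough could stand in for the hand-rolled proxy stream, since writing to it and ending it handles the 'data'/'end' events for you. A hedged sketch in plain JavaScript, with response, ws, fileName and fileSize playing the same roles as above:
const { PassThrough } = require('stream');
const archiver = require('archiver');

const archive = archiver('zip');
archive.pipe(response); // `response` is the HTTP response, as above

const proxyStream = new PassThrough();
archive.append(proxyStream, { name: fileName });

let numBytesStreamed = 0;
ws.on('message', function(dataBuffer) {
  numBytesStreamed += dataBuffer.length;
  proxyStream.write(dataBuffer); // honors backpressure, unlike emit('data')
  if (numBytesStreamed === fileSize)
    proxyStream.end(); // signals 'end' to the archiver
});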