Problems using Tail (Node.js module) to read file as it is updated - node.js

I'm trying to use Tail (https://www.npmjs.com/package/tail) to export Minecraft server log data to Discord (The discord bot part works, so I have excluded it from here).
If I say something in the game and then check "latest.log", it has been changed accordingly. However, with this script the bot only sees a change if I open "latest.log" in Notepad; it doesn't work otherwise. The bot will keep recognizing changes as long as "latest.log" stays open in the background, which is an annoyance but not too big of a deal.
However, I was making this for a friend, and for him Tail only updates the moment he opens "latest.log". That means he would need to keep opening the file for Tail to see it, instead of just letting the bot run in the background.
const Tail = require('tail').Tail;

const fileToTail = "C:/Users/user/Downloads/logs/latest.log";
const tail = new Tail(fileToTail);

tail.on("line", function (data) {
  // Working code that sends data to Discord
});

tail.on("error", function (error) {
  console.log('ERROR: ', error);
});
What could be causing the discrepancy between the two of us, and what can I do so that the bot can see the file changes without the user opening the file? Thanks in advance!

If you are using chokidar, pay attention to whether you are using fs.watch vs. fs.watchFile. If you are using fs.watch, you may not successfully catch changes (which might be what you are experiencing).
See this excerpt from the official chokidar docs on the relevant option:
usePolling (default: false). Whether to use fs.watchFile (backed by polling), or fs.watch. If polling leads to high CPU utilization, consider setting this to false. It is typically necessary to set this to true to successfully watch files over a network, and it may be necessary to successfully watch files in other non-standard situations. Setting to true explicitly on MacOS overrides the useFsEvents default. You may also set the CHOKIDAR_USEPOLLING env variable to true (1) or false (0) in order to override this option.
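For illustration, a minimal polling-based watcher using chokidar might look like the sketch below. The log path is taken from the question; the interval value is just a starting point, not a recommendation:
const chokidar = require('chokidar');

// Poll the Minecraft log instead of relying on native fs events,
// which some setups (network drives, other non-standard situations)
// do not deliver reliably.
const watcher = chokidar.watch('C:/Users/user/Downloads/logs/latest.log', {
  usePolling: true, // fall back to fs.watchFile
  interval: 1000    // poll once per second; tune for CPU vs. latency
});

watcher.on('change', (path) => {
  console.log(`${path} changed`);
  // read the newly appended lines here and forward them to Discord
});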

Text in Bash terminal getting overwritten! Using JS, Node.js (npms are: inquirer, console.table, and mysql)

Short 10sec video of what is happening: https://drive.google.com/file/d/1YZccegry36sZIPxTawGjaQ4Sexw5zGpZ/view
I have a CLI app that asks a user for a selection, then returns a response from a mysql database. The CLI app is run in node.js and prompts questions with Inquirer.
However, after the table of information is returned, the next prompt overwrites the table data, making it mostly unreadable. The prompt should appear on its own lines beneath the rest of the data, not overlap it. The functions that gather and return the data are asynchronous (they have to be in order to loop), but I have tried it with just a short list of ordinary synchronous functions for testing purposes, and the same problem exists. I have also tried it with and without console.table, and the prompt still overwrites the response, whether it is printed as a console table or as an object list.
I have enabled checkwinsize in Bash with
shopt -s checkwinsize
And it still persists.
Is it Bash? Is it a problem with Inquirer?
I was having this issue as well. In the .then method of my prompt I was using switch/case to determine what to console.log depending on the user's selection. Then at the bottom I had an if statement checking whether they selected 'Finish'; if they didn't, I called the prompt function (which I named 'questionUser') again.
That's where I was having the issue: calling questionUser again was overwriting my console logs.
What I did was I wrapped the questionUser call in a setTimeout like this:
if (data.option !== 'Finish') {
  setTimeout(() => {
    questionUser();
  }, 1000);
}
This worked for me and kept my console log data from being deleted.
Hopefully this helps anyone else having this issue; I couldn't find an answer to this specific question anywhere.
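For context, here is a rough sketch of the pattern described above. The choice names and the sample data are made up for illustration; in the real app the rows would come from the mysql query:
const inquirer = require('inquirer');

// Stand-in for the rows that would come back from the mysql query.
const results = [{ id: 1, name: 'example' }];

function questionUser() {
  inquirer
    .prompt([{
      type: 'list',
      name: 'option',
      message: 'What would you like to do?',
      choices: ['View table', 'Finish']
    }])
    .then((data) => {
      switch (data.option) {
        case 'View table':
          console.table(results);
          break;
      }
      if (data.option !== 'Finish') {
        // Re-prompt after a short delay so Inquirer does not redraw
        // over the output that was just printed.
        setTimeout(() => questionUser(), 1000);
      }
    });
}

questionUser();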

My Discord.js bot uses a command handler. How can I then create play/skip/pause/resume/etc commands in different files?

I set up the command handler for my bot using the Discord.js guide (I'd say I am relatively new to Discord.js, as well as to JavaScript itself). However, as all my commands are in different files, is there a way I can share variables between the files? I've tried experimenting with exporting modules, but sadly could not get it to work.
For example (I think it's somewhat understandable, but still): to skip a song you must first check whether any audio is actually streaming (which is all handled in the play file), then end the current stream and move on to the next one in the queue (the variable for which also lives in the play file).
I have gotten a separate music bot up and running, but all the code is in one file, linked together by if/else if/else chains. Perhaps I could just copy this code into the main file for my other bot instead of using the command handler for those specific commands?
I assume that there is a way to do this that is quite obvious, and I apologize if I am wasting people's time.
Also, I don't believe code is required for this question but if I'm wrong, please let me know.
Thank you in advance.
EDIT:
I have also read this question multiple times beforehand and have tried the solution, although I haven't gotten it to work.
A simple way to "carry over" variables without exporting anything is to assign them to a property of your client. That way, wherever you have your client (or bot) variable, you also have access to the needed information without requiring a file.
For example...
ready.js (assuming you have an event handler; otherwise your ready event)
// create one queue array per guild the bot is in
client.queue = {};
for (const guild of client.guilds.values()) client.queue[guild.id] = [];
play.js
const queue = client.queue[message.guild.id];
queue.push({ song: 'Old Town Road', requester: message.author.id });
queue.js
const queue = client.queue[message.guild.id];
message.channel.send(`**${queue.length}** song${queue.length !== 1 ? 's' : ''} queued.`)
.catch(console.error);
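Following the same pattern, a hypothetical skip.js could look like the sketch below. It additionally assumes (this is not from the answer above) that play.js saved the active voice dispatcher on the client, e.g. as client.dispatchers[guild.id], when it started streaming:
// skip.js (sketch)
const queue = client.queue[message.guild.id];
const dispatcher = client.dispatchers && client.dispatchers[message.guild.id];

if (!dispatcher) {
  message.channel.send('Nothing is playing right now.').catch(console.error);
} else {
  queue.shift();     // drop the song that is currently playing
  dispatcher.end();  // ending the stream lets the play logic start the next song
}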

Nightmare doesn't run twice in a row - NodeJS

EDIT
I have noticed that removing the .end() call appears to solve the issue, but the Nightmare docs describe .end() as: "Completes any queue operations, disconnect and close the electron process."
Now while this does solve the problem, am I now just opening more and more electron processes each time the route is called, which will eventually cause the server to run out of memory, or is this a safe way to fix the issue?
ORIGINAL TEXT
Please consider the following problem:
I am developing a Node based service that will allow the user to request screenshot of a particular URL.
For this I am using Nightmare to visit the URL, wait 2 seconds, take a screenshot, which is saved to the disk, convert it to base64, delete the image and then return the base64 string.
console.log('Nightmare starts');
nightmare
  .goto(url)
  .wait(2000)
  .screenshot(filename)
  .end()
  .then(function (result) {
    fs.exists(filename, function (exists) {
      if (exists) {
        var data = fs.readFileSync(filename);
        var base64 = data.toString('base64');
        fs.unlink(filename);
        var output = { 'message': 'success', 'map_image': base64 };
        res.send(output);
      }
    });
  })
  .catch(function (error) {
    console.error('Search failed:', error);
  });
console.log("Nightmare Finished");
The above code works just fine the first time it runs. However, any subsequent calls just log "Nightmare starts" and "Nightmare Finished" instantly, with the actual code in between not running. I don't appear to get any errors; nothing is caught if I wrap it in a try/catch. Node requires a restart before it will work again.
Something worth noting is that I am running on a headless Ubuntu machine; as Electron (one of the Nightmare dependencies) appears to need a GUI, I am using xvfb to launch Node with the following command:
xvfb-run --auto-servernum --server-num=1 node server.js
I'm assuming this may be an issue with some resource not being released correctly on the first run, but any assistance would be appreciated.
Also, I'm open to any constructive criticism of my code; I'm very new to Node and I'm sure I'm not writing it in the most optimal way (sync file loading, etc.).
It appears that you are simply misplacing where you are creating the Nightmare instances. I cannot help much more without a larger code snippet and more information.
Way 1
Create a Nightmare instance every time and close it after you are done with your task. It will take some time to boot up each instance, but it also lessens the memory load. Not to mention you can have multiple Nightmare instances for different users.
Way 2
Don't call .end() and instead re-use the same Nightmare instance. Have multiple Nightmare instances and queue the screenshot calls. The websites will load fast and there is no instance boot-up time, but a longer queue means a longer wait.
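As a rough sketch of the first approach, assuming an Express route (the route path, query parameter, and temp-file naming are illustrative and not from the question):
const Nightmare = require('nightmare');
const express = require('express');
const fs = require('fs');

const app = express();

app.get('/screenshot', (req, res) => {
  // A fresh instance per request: .end() then tears down only this
  // Electron process, so the next request starts cleanly.
  const nightmare = Nightmare();
  const filename = '/tmp/shot-' + Date.now() + '.png'; // illustrative temp path

  nightmare
    .goto(req.query.url)
    .wait(2000)
    .screenshot(filename)
    .end()
    .then(() => {
      const base64 = fs.readFileSync(filename).toString('base64');
      fs.unlinkSync(filename);
      res.send({ message: 'success', map_image: base64 });
    })
    .catch((error) => {
      console.error('Screenshot failed:', error);
      res.status(500).send({ message: 'error' });
    });
});

app.listen(3000);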

NodeJS large directory file changes

I am working on a security dashboard of sorts. It watches for changes to files across an entire home directory containing hundreds of sites (all Joomla, so a lot of files).
In order to stay on top of potential security issues we want to watch for file changes efficiently, without creating unnecessary CPU/memory overhead. We want to watch at a fairly fast interval, but I know it's a balancing act when you want to keep a side process from using more CPU than it should.
I have tried to use "watch" with the following code, running in the home directory:
var watch, fs;
watch = require('watch');
fs = require('fs');

watch.createMonitor(__dirname, {
  interval: 500,
  filter: function (file, stat) {
    // only track index.php files
    return file.indexOf('index.php') !== -1;
  }
}, function (monitor) {
  monitor.filter(function (file) {
    console.log(file);
  });
  monitor.on('created', function (file, stat) {
    console.log(file + ' new');
  });
  monitor.on('changed', function (file, stat) {
    console.log(file + ' changed');
  });
  monitor.on('removed', function (file, stat) {
    console.log(file + ' deleted');
  });
});
However, this spikes the CPU to over 100% of a single core (sometimes two) out of eight. Memory usage also climbs to about 20% of 8 GB pretty quickly. And that is just to set up the watch handlers on all the files, before it can actually detect any file changes.
I know the issue is that it walks through each file individually, and the filter only prevents a file from being tracked after it has already been visited. Typically all I need to watch is the index.php in every directory, down to a point where the layout is consistent (with some exceptions).
Is there a module already built to do this? Or is this something new? All the modules I find assume a small directory (like watching LESS files or something), so they are not built for this sort of application at all.
Any ideas? I know this code will need to be scrapped, as I can see no way to avoid the CPU overhead with it.
Do not use the 'watch' package; just use fs.watch(...).
The 'watch' package:
- consistent API across platforms
- very slow, because it is implemented mostly in Node; look at the source to see how it works
- source code: https://github.com/mikeal/watch/blob/master/main.js
fs.watch(...):
- inconsistent API; not all OSes are fully supported
- very fast, because it reuses OS-level features
- documentation: http://nodejs.org/docs/latest/api/fs.html#fs_fs_watch_filename_options_listener
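A minimal sketch of what the fs.watch approach could look like; watchDir is just an illustrative helper, and recursive watching is OS-dependent:
const fs = require('fs');
const path = require('path');

// Watch one directory with fs.watch and only react to the index.php
// files the dashboard cares about.
function watchDir(dir) {
  fs.watch(dir, (eventType, filename) => {
    if (filename && filename.indexOf('index.php') !== -1) {
      console.log(eventType + ': ' + path.join(dir, filename));
    }
  });
}

// fs.watch is not recursive on every platform (notably Linux), so you
// would call watchDir() once per site directory, e.g. by walking the
// home directory at startup.
watchDir(__dirname);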

Websockets with Streaming Archives

So this is the setup I'm working with:
I am on an express server which must stream an archived binary payload to a browser (does not matter if it is zip, tar or tar.gz - although zip would be nice).
On this server, I have a websocket open that connects to another server which is sending me binary payloads of individual files in a directory. I get these payloads streamed, piece-by-piece, as buffers, and I'm doing this serially (that is - file-by-file - there aren't multiple websockets open at one time, and there is one websocket per file). This is the websocket library I'm using: https://github.com/einaros/ws
I would like to go through each file, open a websocket, and then append the buffers to an archiver as they come through the websockets. When data is appended to the archiver, it would be nice if I could stream the output of the archiver to the browser (via the response object with response.write). So, basically, as I'm getting the payload from the websocket, I would like that payload streamed through an archiver and then to the response. :-)
Some things I have looked into:
node-zipstream - This is nice because it gives me an output stream I can pipe directly to response.write. However, it doesn't appear to support nested files/folders, and, more importantly, it only accepts an input stream. I have looked at the source code (which is quite terse and readable), and it seems as though, if I were able to have access to the update function within ZipStream.prototype.addFile, I could just call that each time on the message event when I get a binary buffer from the websocket. This is quite messy/hacky though, and, given that this library already doesn't seem to support nested files/folders, I'm not sure I will be going with it.
node-archiver - This suffers from the same issue as node-zipstream (probably because it was inspired by it) where it allows me to pipe the output, but I cannot append multiple buffers for the same file within the archive (and then manually signal when the last buffer has been appended for a given file). However, it does allow me to have nested folders, which is a clear win over node-zipstream.
Is there something I'm not aware of, or is this just a really crazy thing that I want to do?
The only alternative I see at this point is to wait for the entire payload to be streamed through a websocket and then append with node-archiver, but I really would like to reap the benefit of true streaming/archiving on-the-fly.
I've also thought about the possibility of creating a read stream of sorts just to serve as a proxy object that I can pass into node-archiver and then just append the buffers I get from the websocket to this read stream. Looking at various read streams, I'm not sure how to do this though. The only way I could think of was creating a writestream, piping buffers to it, and having a readstream read from that writestream. Am I on the correct thought process here?
As always, thanks for any help/direction you can offer, SO community.
EDIT:
Since I just opened this question, and I'm new to node, there may be a better answer than the one I provided. I will keep this question open and accept a better answer if one presents itself within a few days. As always, I will upvote any other answers, even if they're ridiculous, as long as they're correct and allow me to stream on-the-fly as mine does.
I figured out a way to get this working with node-archiver. :-)
It was based off my hunch of creating a temporary "proxy stream" of sorts, inspired by this SO question: How to create streams from string in Node.Js?
The basic gist is (coffeescript syntax):
archive = archiver 'zip'
archive.pipe response // where response is the http response
// and then for each file...
fileName = ... // known file name
fileSize = ... // known file size
ws = .... // create websocket
proxyStream = new Stream()
numBytesStreamed = 0
archive.append proxyStream, name: fileName
ws.on 'message', (dataBuffer) ->
numBytesStreamed += dataBuffer.length
proxyStream.emit 'data', dataBuffer
if numBytesStreamed is fileSize
proxyStream.emit 'end'
// function/indicator to do this for the next file in the folder
// and then when you're completely done...
archive.finalize (err, bytesOfArchive) ->
if err?
// do whatever
else
// unless you somehow knew this ahead of time
res.addTrailers
'Content-Length': bytesOfArchive
res.end()
Note that this is not the complete solution I implemented. There is still a lot of logic dealing with getting the files, their paths, etc. Not to mention error-handling.
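For readers more comfortable with plain JavaScript, roughly the same idea can be sketched with a PassThrough stream standing in for the hand-rolled proxy stream. The websocket, response object, and per-file metadata are assumed to come from elsewhere:
const archiver = require('archiver');
const { PassThrough } = require('stream');

const archive = archiver('zip');
archive.pipe(response); // response is the HTTP response object

// Called once per file as its websocket is opened.
function appendFileFromSocket(ws, fileName, fileSize) {
  const proxyStream = new PassThrough();
  let numBytesStreamed = 0;

  // archiver starts pulling from proxyStream when it reaches this entry
  archive.append(proxyStream, { name: fileName });

  ws.on('message', (dataBuffer) => {
    numBytesStreamed += dataBuffer.length;
    proxyStream.write(dataBuffer);
    if (numBytesStreamed === fileSize) {
      proxyStream.end(); // signals archiver that this entry is complete
    }
  });
}

// ...and once every file has been appended:
// archive.finalize();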
