JSON file randomly corrupting from node script - node.js

I'm making a Discord bot that queues two people together for a game. It does this by storing their Discord ID, queue status, and opponent in a JSON file, which looks like this for each user:
{
  "discordId": "296062947329966080",
  "dateAdded": "2019-03-11T02:34:01.303Z",
  "queueStatus": "notQueuing",
  "opponent": null
},
When one person queues up with a command, it sets their "queueStatus" to "queuing"; when another person with "queuing" status is found, it sets each user's opponent to the other and tells both users that they are opponents. The problem is that the JSON file randomly becomes corrupted during a write, and something like this ends up at the bottom:
"dateAdded": "2019-03-11T02:34:01.303Z",
"queueStatus": "notQueuing",
"opponent": null
}
]
}537"
}
]
}
My only idea is that two people queuing at the same time causes two writes to the file at the same time, corrupting it, and that fs.writeFileSync would fix it. But if I use fs.writeFileSync, the entire rest of the Discord bot pauses and stops working until the write is done, which isn't a very practical solution.

The data being stored in the JSON file should be migrated to MongoDB or another DB. CRUD operations on a single static file from multiple jobs/sources are not a scalable solution, and moving this data into a database will resolve the pauses and stoppages.
Check out this video on YouTube by freeCodeCamp.org.
However, if the JSON file is required or still preferred, I would recommend using an EventEmitter to create a single blocking queue for reading and writing.
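As a rough illustration of the single-queue idea, here is a minimal sketch that serializes writes through a promise chain rather than an EventEmitter (the effect is the same: one write at a time, without blocking the event loop); the file path and the saveUsers helper are my assumptions, not the asker's actual code.
const fs = require('fs').promises;

const FILE = './users.json'; // hypothetical path to the queue file
let writeChain = Promise.resolve();

// Each caller appends its write to the chain, so writes run strictly one
// after another, without blocking the event loop like fs.writeFileSync does.
function saveUsers(users) {
  writeChain = writeChain
    .then(() => fs.writeFile(FILE, JSON.stringify(users, null, 2)))
    .catch(err => console.error('write failed:', err));
  return writeChain;
}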

Related

how to save an image from discord using discord bot node js

So far I've only been able to get text and links from what other people in my Discord channel type, but I want to be able to save posted images/gifs. Is there any way I can do this through the bot, or is it impossible? I'm using discord.js.
Images in Discord.js come in the form of MessageAttachments via Message#attachments. By looping through the attachments, we can retrieve the raw file via MessageAttachment#attachment and the file name (and thus its type) via MessageAttachment#name. Then we use node's FileSystem to write the file onto the system. Here's a quick example. This example assumes you already have the message event and the message variable.
const fs = require('fs');

msg.attachments.forEach(a => {
  fs.writeFileSync(`./${a.name}`, a.attachment); // Write the file to the system synchronously.
});
Please note that in a real-world scenario you should surround the synchronous call with a try/catch statement to handle errors.
Also note that, according to the docs, the attachment can be a stream. I have yet to see this happen in the real world, but if it does, it might be worth checking whether a.attachment is a Stream, and then using fs.createWriteStream and piping the file into it.
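A hedged sketch of that defensive check, combining both cases; whether the attachment ever actually arrives as a Stream depends on discord.js internals, so treat this as speculative:
const fs = require('fs');
const { Stream } = require('stream');

msg.attachments.forEach(a => {
  if (a.attachment instanceof Stream) {
    // Stream case: pipe the attachment straight to disk.
    a.attachment.pipe(fs.createWriteStream(`./${a.name}`));
  } else {
    // Buffer case: write it out synchronously, as above.
    fs.writeFileSync(`./${a.name}`, a.attachment);
  }
});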

How to think asynchronously with nodejs?

I just started developing with Node.js, and I'm confused by the async model. I believe there is a way to turn most SYNC use cases into ASYNC ones. For example: with SYNC, we load some data and wait until it returns, then show it to the user; with ASYNC, we load the data and return immediately, telling the user the data will be presented later. I can understand why ASYNC is used in this scenario.
But here I have a use case. I'm building a web app that allows a user to place an order (buying something). Before saving the order data into the db, I want to put some user data together with the order data (I'm using a document NoSQL db, by the way). So with SYNC, after I get the order data, I make a SYNC call to the database and wait for the returned user data. Once I have it, I integrate the two and ingest the result into the db.
I think there might be an issue if I make an ASYNC call to the db to query the user data, because the user data may come back after I've already saved the order to the db, and that's not what I want.
So in this case, how can I do this thing ASYNCHRONOUSLY?
A couple of things here. First, if your application already has the user data (the user is already logged in), then this information should be stored in the session so you don't have to access the DB. If you are allowing the user to register at the time of purchase, you would simply pass a callback function that handles saving the order into the call that saves the user data. Without knowing specifically what your code looks like, something like this is what you are looking for.
function saveOrder(userData, orderData, callback) {
  // save the user data to the DB
  db.save(userData, function(rec) {
    // if you need to add the user ID or something to the order...
    orderData.userId = rec.id; // this would be dependent on your DB of choice
    // save the order data to the DB
    db.save(orderData, callback);
  });
}
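A hypothetical call site for the sketch above, assuming an Express-style handler and the same single-record callback convention the imaginary db.save uses; req/res and the field names are assumptions:
app.post('/orders', function(req, res) {
  // Both saves have completed by the time this callback fires.
  saveOrder(req.body.user, req.body.order, function(orderRec) {
    res.json(orderRec);
  });
});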
Sync code goes something like this: step by step, one after the other. There can be ifs and loops (for) etc.; we all get it.
fetchUserDataFromDB();
integrateOrderDataAndUserData();
updateOrderData();
Think of async programming with Node.js as event driven, like UI programming: code (a function) is executed when an event occurs, e.g. on a click event the framework calls back the registered clickHandler.
Node.js async programming can be thought of along the same lines. When the (async) db query completes, your callback is called. When the order data is updated, your callback is called. The above code goes something like this:
function nodejsOrderHandler(req, res)
{
  var orderData;
  db.queryAsync(..., onqueryasync);

  function onqueryasync(userdata)
  {
    // integrate user data with order data
    db.update(updateParams, onorderupdate);
  }

  function onorderupdate(e, r)
  {
    // handle error
    // write response
  }
}
JavaScript closures provide the way to keep state in variables across functions.
There is certainly much more to async programming, and there are helper modules that provide basic constructs like chain, parallel, join etc. as you write more involved async code, but this probably gives you a quick idea.
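For instance, here is a minimal sketch of the same fetch-then-update chain using the popular async module's waterfall helper (one of the "chain" constructs mentioned above); db.queryAsync, db.update, userQuery, integrate, and orderData are placeholders carried over from the pseudocode, not a real API:
const async = require('async');

async.waterfall([
  // step 1: fetch the user data
  cb => db.queryAsync(userQuery, cb),
  // step 2: integrate it with the order data, then update the db
  (userData, cb) => db.update(integrate(orderData, userData), cb)
], (err, result) => {
  if (err) { /* handle error */ }
  // write response
});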

Websockets with Streaming Archives

So this is the setup I'm working with:
I am on an express server which must stream an archived binary payload to a browser (does not matter if it is zip, tar or tar.gz - although zip would be nice).
On this server, I have a websocket open that connects to another server which is sending me binary payloads of individual files in a directory. I get these payloads streamed, piece-by-piece, as buffers, and I'm doing this serially (that is - file-by-file - there aren't multiple websockets open at one time, and there is one websocket per file). This is the websocket library I'm using: https://github.com/einaros/ws
I would like to go through each file, open a websocket, and then append the buffers to an archiver as they come through the websockets. When data is appended to the archiver, it would be nice if I could stream the output of the archiver to the browser (via the response object with response.write). So, basically, as I'm getting the payload from the websocket, I would like that payload streamed through an archiver and then to the response. :-)
Some things I have looked into:
node-zipstream - This is nice because it gives me an output stream I can pipe directly to response.write. However, it doesn't appear to support nested files/folders, and, more importantly, it only accepts an input stream. I have looked at the source code (which is quite terse and readable), and it seems as though, if I were able to have access to the update function within ZipStream.prototype.addFile, I could just call that each time on the message event when I get a binary buffer from the websocket. This is quite messy/hacky though, and, given that this library already doesn't seem to support nested files/folders, I'm not sure I will be going with it.
node-archiver - This suffers from the same issue as node-zipstream (probably because it was inspired by it) where it allows me to pipe the output, but I cannot append multiple buffers for the same file within the archive (and then manually signal when the last buffer has been appended for a given file). However, it does allow me to have nested folders, which is a clear win over node-zipstream.
Is there something I'm not aware of, or is this just a really crazy thing that I want to do?
The only alternative I see at this point is to wait for the entire payload to be streamed through a websocket and then append with node-archiver, but I really would like to reap the benefit of true streaming/archiving on-the-fly.
I've also thought about the possibility of creating a read stream of sorts just to serve as a proxy object that I can pass into node-archiver and then just append the buffers I get from the websocket to this read stream. Looking at various read streams, I'm not sure how to do this though. The only way I could think of was creating a writestream, piping buffers to it, and having a readstream read from that writestream. Am I on the correct thought process here?
As always, thanks for any help/direction you can offer SO community.
EDIT:
Since I just opened this question, and I'm new to node, there may be a better answer than the one I provided. I will keep this question open and accept a better answer if one presents itself within a few days. As always, I will upvote any other answers, even if they're ridiculous, as long as they're correct and allow me to stream on-the-fly as mine does.
I figured out a way to get this working with node-archiver. :-)
It was based off my hunch of creating a temporary "proxy stream" of sorts, inspired by this SO question: How to create streams from string in Node.Js?
The basic gist is (CoffeeScript syntax):
archive = archiver 'zip'
archive.pipe response # where response is the http response

# and then for each file...
fileName = ... # known file name
fileSize = ... # known file size
ws = .... # create websocket
proxyStream = new Stream()
numBytesStreamed = 0

archive.append proxyStream, name: fileName

ws.on 'message', (dataBuffer) ->
  numBytesStreamed += dataBuffer.length
  proxyStream.emit 'data', dataBuffer
  if numBytesStreamed is fileSize
    proxyStream.emit 'end'
    # function/indicator to do this for the next file in the folder

# and then when you're completely done...
archive.finalize (err, bytesOfArchive) ->
  if err?
    # do whatever
  else
    # unless you somehow knew this ahead of time
    response.addTrailers
      'Content-Length': bytesOfArchive
    response.end()
Note that this is not the complete solution I implemented. There is still a lot of logic dealing with getting the files, their paths, etc. Not to mention error-handling.
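For reference, a hedged JavaScript sketch of the same "proxy stream" idea using node's built-in stream.PassThrough, which packages exactly the write-side/read-side pairing speculated about in the question; archive, ws, fileName, and fileSize are carried over from the CoffeeScript above:
const { PassThrough } = require('stream');

const proxyStream = new PassThrough();
let numBytesStreamed = 0;

archive.append(proxyStream, { name: fileName });

ws.on('message', (dataBuffer) => {
  numBytesStreamed += dataBuffer.length;
  proxyStream.write(dataBuffer); // data flows on to the archiver as it arrives
  if (numBytesStreamed === fileSize) {
    proxyStream.end(); // tells node-archiver this entry is complete
  }
});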

Can MessageChannel overflow

I am working on an AS3 project in FDT6. I am using the latest Flex 4.6 and AIR 3.7.
I have a worker.swf file that is embedded into the main application to do threading work with.
I am using the MessageChannel class to pass information between the two.
In my main class I have defined
private var mainToWorker:MessageChannel;
private var workerToMain:MessageChannel;
mainToWorker = Worker.current.createMessageChannel(worker);
workerToMain = worker.createMessageChannel(Worker.current);
On the mainToWorker channel I only ever send messages. In these messages I send a byte array of information. The information is an object that contains a 'command' property and a 'props' property, basically acting like a function call: the command is a function name and the props is an object that contains data for that function.
mainToWorkerMutex.lock();
mainToWorker.send(ByteArrayUtils.ObjectToByteArray({command:"DoSomething", props:{propA:1,propB:7}}));
mainToWorkerMutex.unlock();
The same occurs for the workerToMain var, except I only send byte data that contains the 'command' and 'props' parameters.
workerToMainMutex.lock();
workerToMain.send(ByteArrayUtils.ObjectToByteArray({command:"complete", props:{return:"result"}}));
workerToMainMutex.unlock();
As a sanity check I make sure that the message channels are getting what they should.
It is working fine when I build it in FDT; however, when it is built using an ANT script through Flash Builder, I sometimes get the 'command' events coming back through on the workerToMain channel.
I am sending quite a lot of data through the message channel. Is it possible that I am overloading it and causing a buffer overflow into the other message channel somehow? How could that only be happening in FB?
I have checked my code many times and I am sure there is nothing in my own code that is sending that message back.
I had a similar issue. When sending many byte arrays through channels, sometimes the things I received were not the things I had actually sent. I had 4 channels (message channel to worker, message channel to main, data channel to worker, data channel to main).
I noticed that the data channel to main was affecting the message channel to worker. When I turned off the data channel to main, the message channel to worker started working just fine :D...
They seem to have a big issue there with sending byte arrays.
But what helped me was using a shareable byte array (at first it was not shareable) for communication via channels, but only for communication: as soon as I receive such a byte array, I copy it to another byte array and parse the copy.
This removed the problem (I ran quite hard stress tests there)...
Cheers
P.S. I'm also using static functions (like your ByteArrayUtils) to create the byte arrays used for communication, but that seems fine; I even ran tests using non-static functions.
So, it looks like I have found the issue. It's the ByteArray that is doing it.
ByteArray.toString() sometimes mangles your data, meaning you can't really trust it.
http://www.actionscript.org/forums/showthread.php3?t=155067
If you read the comment by "Jim Freer", he mentions how strings sometimes do this.
My solution was to switch to using a JSON-encoded string instead of ByteArray data in the message channel. The reason I was using byte array data to begin with is that I wanted to preserve class definition information, which JSON doesn't do.

Resolve MongoDB reference

I am currently building a chatting app with nodejs and mongoDB.
Basically I have two collections to maintain in the db.
user = {
  _id: ObjectId("1234"),
  account: "stan123"
}

thread = {
  _user: ObjectId("1234"),
  messages: [
    {
      body: "hi",
      _user: ObjectId("1234")
    },
    {
      body: "second msg",
      _user: ObjectId("1234")
    }
  ]
}
I am planning to pass the thread model, with all user info resolved, to the client side so that I can construct my widget with it.
I searched for solutions to this. Some suggest making extra calls from the client side to get the data.
However, I am worried that as the number of messages grows, there will be a considerable number of HTTP calls, which might hurt site speed.
I know some drivers can resolve DBRefs automatically and keep the code clean.
However, based on
http://docs.mongodb.org/manual/applications/database-references/
I decided to use plain ids to maintain references, to keep things as simple as possible.
My plan is to resolve all references on the server side. My current approach is to get the length of the message array first, then loop through the messages and make a second query to resolve each message's user info separately.
In each query callback I do messageToResolve++ and check if(messageToResolve >= thread.messages.length).
When the condition is met, I send the resolved model to the client and end the response.
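A minimal sketch of that counter-based approach, assuming the node-mongodb-native callback style and a 'users' collection; the resolveThread and done names are mine, not from the question:
function resolveThread(db, thread, done) {
  var messageToResolve = 0;
  thread.messages.forEach(function(msg) {
    db.collection('users').findOne({ _id: msg._user }, function(err, user) {
      if (err) return done(err);
      msg.user = user; // attach the resolved user to its message
      messageToResolve++;
      // once every message's user is resolved, hand the thread back
      if (messageToResolve >= thread.messages.length) {
        done(null, thread);
      }
    });
  });
}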
This is not a case where I would consider embedding the user data, because that would be painful whenever the user data needs updating.
(Messages are embedded because they exist only while the thread exists.)
I am not sure if this is a good way to do it.
Does anyone have a better solution?
Sorry if I didn't explain my problem and solution clearly enough.
And thanks in advance.
