How to delete the first object from a JSON file - node.js

I'm trying to build a queue in Node.js and want to delete the first JSON object in my JSON file.
I create and store users in a JSON file and then add them to the queue JSON file by reading users.json. The problem is that when I get a user from users.json it comes as an object within an array, and I can't filter it to compare the id. So how can I do this?
// helper method to read the JSON file.
const fs = require('fs');
// dataPath holds the path of the JSON file being read/written (e.g. queue.json)

const readFile = (callback, returnJson = false, filePath = dataPath, encoding = 'utf8') => {
  fs.readFile(filePath, encoding, (err, data) => {
    if (err) {
      throw err;
    }
    callback(returnJson ? JSON.parse(data) : data);
  });
};
// helper method to write to the JSON file.
const writeFile = (fileData, callback, filePath = dataPath, encoding = 'utf8') => {
  fs.writeFile(filePath, fileData, encoding, (err) => {
    if (err) {
      throw err;
    }
    callback();
  });
};
// That's how I try to delete the first user from the queue, but it deletes the user with index 1.
app.delete('/popLine', (req, res) => {
  readFile(data => {
    // const userId = req.params["id"];
    delete data[1]; // remove the first element from the line
    writeFile(JSON.stringify(data, null, 2), () => {
      res.status(200).send(`queue id: removed`);
    });
  },
  true);
});
// That's how a user is stored in queue.json
"5": [
{
"name": "teresa may",
"email": "parliament",
"gender": "male",
"id": 3
}
],

I would prefer loading the JSON file as a plain object (if its size is manageable), deleting the first entry by key, and then persisting the file back to disk (overwriting it), either periodically or immediately, whichever is necessary.
You can use require to load a JSON file as an object. Note, however, that if you re-require the changed file at runtime to reload it, you will not get the changes made to the file in between but only the content cached from the last require. In that case you need to clear the entry from the require cache before reloading.
I am assuming the file is on the order of KBs; for larger files I wouldn't take this approach, but would instead look at moving that data into a NoSQL document database.
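As a rough sketch of that idea (the file name queue.json and its location are assumptions on my part; Object.keys returns integer-like keys in ascending order, so the first key is the oldest queue entry):
const fs = require('fs');
const path = require('path');

const queuePath = path.join(__dirname, 'queue.json'); // assumed location of the queue file

// require loads and parses the JSON file once and caches the result
let queue = require('./queue.json');

// delete the first entry by key (the smallest id for a queue keyed by numbers)
const firstKey = Object.keys(queue)[0];
if (firstKey !== undefined) {
  delete queue[firstKey];
}

// persist the updated object back to disk (overwrite)
fs.writeFile(queuePath, JSON.stringify(queue, null, 2), 'utf8', (err) => {
  if (err) throw err;
});

// to re-read the file later and pick up changes made on disk in the meantime,
// clear the cached entry first, otherwise require returns the old content
delete require.cache[require.resolve('./queue.json')];
queue = require('./queue.json');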
Hope this helps :)

Related

updating JSON issue, how to correctly update?

I have a file issuesData.json and I want to update it in a POST request. This is my code.
I try to read the file, parse it into an array, push the new issue, and then re-write the file.
app.post("/api/issues", (req, res, next) => {
const issueObj = req.body;
fs.readFile("issuesData.json", (err: Error, data: string | Buffer) => {
if (err) {
res.status(500).send(err);
} else {
const stringData = data.toString();
const issueFile = [...JSON.parse(stringData)];
const updatedIssueFile = issueFile.push(issueObj);
fs.writeFile(
"issuesData.json",
JSON.stringify(updatedIssueFile),
(err: Error) => {
if (err) {
res.status(500).send(err);
} else {
res.status(200).send("Issue has updated");
}
}
);
}
});
});
1) Is this good practice?
2) If this is TypeScript, what should the types of req, res, and next be?
3) Is this a good way to update the JSON?
If you're just writing to a file, you might not need to read its contents and append your issueObj to the issueFile array. You could instead write the issueObj to a new line in your file; something like the appendFile function would help (https://nodejs.org/api/fs.html#fs_fs_appendfile_path_data_options_callback).
Currently, as your file grows, the read operations will take longer and longer and will affect performance. However, just writing will ensure you don't incur that overhead for each POST request.
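One thing to watch out for in the posted code either way: Array.prototype.push returns the new length, not the array, so JSON.stringify(updatedIssueFile) writes a number to the file; stringify issueFile instead if you keep the read-modify-write approach. As a rough sketch of the append-only alternative (the file name issues.ndjson is just an example), each POST could append one JSON record per line:
const fs = require('fs');

app.post('/api/issues', (req, res) => {
  const issueObj = req.body;
  // append one JSON document per line (NDJSON); no need to read the whole file first
  fs.appendFile('issues.ndjson', JSON.stringify(issueObj) + '\n', (err) => {
    if (err) {
      res.status(500).send(err);
    } else {
      res.status(200).send('Issue has been added');
    }
  });
});
Reading the issues back is then a matter of splitting the file on newlines and calling JSON.parse on each line.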

How can I store Firebase data to a local JSON file

In my case I got the data from Firestore; now how can I save it to serversettings.json?
var temp = {}
let query = db.collection('guilds')
let data = query.get().then(snapshot => {
  snapshot.forEach(doc => {
    console.log(doc.id, '=>', doc.data());
  })
})
so I get the output as:
637301291068030997 => { welcomeChannelID: '<#648968505160630285>',
guildMemberCount: 4,
guildOwnerID: '348832732647784460',
guildOwner: 'Ethical Hacker',
prefix: '.',
guildID: '637301291068030997',
guildName: 'test server 3' }
and this:
GUqGqFanJuN7cRJx4S2w => {}
I need to save that data to serversettings.json
await fs.writeFile("../serversettings.json", JSON.stringify(temp), function(err) {
  if (err) throw err;
  console.log('done');
})
Here temp is a variable where multiple entries are stored, like a: {}, b: {}, ....
I tried var temp = {} with temp.table = [] and then temp.table.push(doc.id, ':', doc.data()), but I get an empty output, so what can I do to get the expected output?
Also, adding to that: how can I update the values if that object is already present in the JSON? Will the above function work the same way, i.e. will it override only the changed value or delete all the other values? For example, to update prefix from . to , and then call await fs.writeFile("../serversettings.json", JSON.stringify(temp), ...) where temp only contains that guild id and the prefix field: will it update only the prefix and not delete anything else in that array?
Here is the code that adds the data to the temp variable:
var temp = {}
temp.guilds = [] // after some lines
snapshot.forEach(doc => {
  console.log(doc.id, '=>', doc.data()); // output is above this code
  temp.guilds.push(doc.id = doc.data()) // output is below this code
})
The above code's output:
{ guilds:
   [ { guildID: '637301291068030997', // the doc.id field is missing here
       guildName: 'test server 3',
       welcomeChannelID: '-',
       guildMemberCount: 4,
       guildOwnerID: '348832732647784460',
       guildOwner: 'Ethical Hacker',
       prefix: '.' },
     {} // the thing missing before {} is (some number); that object is empty anyway, so no worries there
   ]
}
A fast solution for your issue would be to replace
let data = query.get().then(snapshot => {
with
await query.get().then(snapshot => {
so that your temp object can be filled before the program proceeds to save the file (note that await only works inside an async function).
I haven't used writeFile yet, but here's what its documentation says:
When file is a filename, asynchronously writes data to the file, replacing the file if it already exists.
I don't think your object will be so large that a complete overwrite would be a problem, unless it changes very often. In that case, I guess you'd have to use a different method that supports an offset, so that you can write only what has changed, but that seems like overkill.
Regarding the format of your JSON file, I think what you're trying to do is this:
var temp = {};
temp.guilds = {};
snapshot.forEach(doc => {
  console.log(doc.id, '=>', doc.data());
  temp.guilds[doc.id] = doc.data();
});
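Putting both suggestions together, a rough sketch (the surrounding async function and the use of fs.promises are my own framing; db and the 'guilds' collection come from the question):
const fs = require('fs');

async function saveGuildSettings() {
  const temp = { guilds: {} };

  // wait for Firestore before touching the file
  const snapshot = await db.collection('guilds').get();
  snapshot.forEach(doc => {
    // key each guild by its document id so it can be found and updated later
    temp.guilds[doc.id] = doc.data();
  });

  // writeFile replaces the whole file, so always write the complete object
  await fs.promises.writeFile('../serversettings.json', JSON.stringify(temp, null, 2));
  console.log('done');
}
Updating a single prefix then means changing it on the in-memory object (for example temp.guilds[guildId].prefix = ',') and writing the whole object back out; nothing else is lost as long as temp still holds all the guilds.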
I'm not sure, but if Firebase has a method to convert the snapshot data to JSON, I think a solution like this could work:
let query = db.collection('guilds')
let data = query.get().then(async snapshot => {
  let temp = snapshot.toJSON() // assuming such a method exists
  await fs.writeFile("../serversettings.json", temp, function(err) {
    if (err) throw err;
    console.log('done');
  })
})

update a json in node

{
aps: []
}
I read it like this:
let apartments = require("path to json file");
apartments.aps.push(apa); // apa is a valid object
fs.writeFile("path", JSON.stringify(apartments));
aps will contain objects like this:
{ "id": 0, "address": "something" }
but after I push and write, in my JSON file I see
[object Object]
Because what you read from the file is a string. JSON is a text format for representing JavaScript objects (hence the name); you'll need to parse it (for example with the built-in JSON.parse) before you can use it as an object.
Here's a simple working example:
const fs = require('fs');
const data = require('./message.json');
// add new value
data.new = 'new value';
fs.writeFile('message.json', JSON.stringify(data), (err) => {
if (err) throw err;
console.log('The file has been saved!');
});
Original content:
{"a":1,"b":2}
Modified content:
{"a":1,"b":2,"new":"new value"}
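Applied to the aps structure from the question (the file name apartments.json is a placeholder), a sketch might look like this:
const fs = require('fs');
const apartments = require('./apartments.json'); // require parses the file into an object

const apa = { id: 0, address: 'something' }; // example entry
apartments.aps.push(apa);

// write the serialized object back, not the object itself,
// otherwise the file ends up containing "[object Object]"
fs.writeFile('./apartments.json', JSON.stringify(apartments, null, 2), (err) => {
  if (err) throw err;
  console.log('apartments.json updated');
});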

uploaded files displaying base64 data in the browser

I'm using couchdb to store attachments that I need to display in the browser.
The data is uploaded from an html input and then processed when saveDoc is called:
getFileData: function(file){
  var reader = new FileReader();
  return new Promise(function(accept, reject){
    reader.onload = (e) => {
      accept(e.target.result)
    };
    reader.readAsDataURL(file);
  })
},
saveDoc: function(name, type, filedata, url){
  console.log(filedata)
  var self = this
  return new Promise(function(accept, reject){
    self.getData(url).then(data => {
      var rev = data['_rev']
      console.log(url + ' is the url')
      console.log(name + ' is the filename')
      documentation.attachment.insert(url, name, filedata, type,
        { rev: rev }, function(err, body) {
          if (!err){
            console.log(body);
          }
          else {
            console.log(err)
          }
        })
    }).catch(err => {
      console.log(err)
    })
  })
},
I don't get any errors while uploading from the console. But when I navigate to where the attachment should be in the console, I see a browser message telling me the data can't be displayed (for pdf/images), or I see a base64 string that looks like this:
data:image/png;base64,iVBOR...
when the attachment is an html document.
(The data being logged on saveDoc looks like this:
data:application/pdf;base64,JVBER...)
The correct content type as well as a reasonable length is being displayed in my couchdb admin with metadata on the files, so there are no obvious header problems. Can anyone think of any other reason this might not be working in the browser?
Edit
To give some more detail, I uploaded a pdf in Fauxton, which works as expected and displays in the browser. I then uploaded the same pdf using my saveDoc function, and it somehow added a huge amount of data to the length of the document.
version uploaded in Fauxton:
"_attachments": {
"03_IKB-RH_FUB_mitDB.pdf": {
"content_type": "application/pdf",
"revpos": 5,
"digest": "md5-tX7qKPT6b7Ek90GbIq9q8A==",
"length": 462154,
"stub": true
}
}
version uploaded programmatically:
"_attachments": {
"03_IKB-RH_FUB_mitDB.pdf": {
"content_type": "application/pdf",
"revpos": 4,
"digest": "md5-Zy8zcwHmXsfwtleJNV5xHw==",
"length": 616208,
"stub": true
}
}
The data property of a particular ._attachments{} member should be base64, not a data URL.
When you convert your file to a data URL, you do get base64-encoded binary, but with a special prefix. Truncate the prefix and save only the base64-encoded tail.
Basically, to obtain valid base64 you should remove the data:*/*;base64, prefix from the beginning of the data-URL-encoded string.
UPD: after diving deeper it turned out that the .attachment.insert(url, name, filedata, type) method accepts binary (not base64) as filedata. To obtain binary from a FileReader, fr.readAsArrayBuffer() should be used instead of fr.readAsDataURL().
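A small sketch of the reader under that assumption (wrapping the result in a Node Buffer assumes the insert call runs in a Node/browserify context like the rest of the posted code; adjust if your client library expects a raw ArrayBuffer or Blob):
// read the file as raw binary instead of a data URL
getFileData: function (file) {
  const reader = new FileReader();
  return new Promise(function (accept, reject) {
    reader.onload = (e) => {
      // e.target.result is an ArrayBuffer; wrap it for the attachment insert
      accept(Buffer.from(e.target.result));
    };
    reader.onerror = reject;
    reader.readAsArrayBuffer(file);
  });
},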

Stream data from Cassandra to file considering backpressure

I have a Node app that collects vote submissions and stores them in Cassandra. The votes are stored as base64-encoded encrypted strings. The API has an endpoint called /export that should get all of these vote strings (possibly > 1 million), convert them to binary and append them one after the other in a votes.egd file. That file should then be zipped and sent to the client. My idea is to stream the rows from Cassandra, converting each vote string to binary and writing to a WriteStream.
I want to wrap this functionality in a Promise for easy use. I have the following:
streamVotesToFile(query, validVotesFileBasename) {
  return new Promise((resolve, reject) => {
    const writeStream = fs.createWriteStream(`${validVotesFileBasename}.egd`);

    writeStream.on('error', (err) => {
      logger.error(`Writestream ${validVotesFileBasename}.egd error`);
      reject(err);
    });

    writeStream.on('drain', () => {
      logger.info(`Writestream ${validVotesFileBasename}.egd drained`);
    })

    db.client.stream(query)
      .on('readable', function() {
        let row = this.read();
        while (row) {
          const envelope = new Buffer(row.vote, 'base64');
          if (!writeStream.write(envelope + '\n')) {
            logger.error(`Couldn't write vote`);
          }
          row = this.read()
        }
      })
      .on('end', () => { // No more rows from Cassandra
        writeStream.end();
        writeStream.on('finish', () => {
          logger.info(`Stream done writing`);
          resolve();
        });
      })
      .on('error', (err) => { // err is a response error from Cassandra
        reject(err);
      });
  });
}
When I run this it is appending all the votes to a file and downloading fine. But there are a bunch of problems/questions I have:
If I make a request to the /export endpoint and this function runs, all other requests to the app are extremely slow or just don't finish until the export request is done. I'm guessing that's because the event loop is being hogged by all of these events from the Cassandra stream (thousands per second)?
All the votes seem to be written to the file fine, yet I get false back from almost every writeStream.write() call and see the corresponding logged message (see code)?
I understand that I need to consider backpressure and the 'drain' event for the WritableStream, so ideally I would use pipe() and pipe the votes to a file because that has built-in backpressure support (right?), but since I need to process each row (convert it to binary and possibly add data from other row fields in the future), how would I do that with pipe?
This is the perfect use case for a TransformStream:
const { Transform } = require('stream');

// rows arrive as objects from the Cassandra driver, so enable object mode on the writable side
const myTransform = new Transform({
  writableObjectMode: true,
  transform(row, encoding, callback) {
    // Transform the row into something else
    const item = Buffer.from(row['vote'], 'base64');
    callback(null, item);
  }
});

// fileStream is the write stream for the output file (fs.createWriteStream(...) in the question)
client.stream(query, params, { prepare: true })
  .pipe(myTransform)
  .pipe(fileStream);
See more information on how to implement a TransformStream in the Node.js API Docs.
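To keep the Promise wrapper from the question, one option (a sketch assuming Node 10+ where stream.pipeline is available, and reusing db.client from the question) is to promisify stream.pipeline, which wires up backpressure and error propagation across all three stages:
const fs = require('fs');
const { Transform, pipeline } = require('stream');
const { promisify } = require('util');

const pipelineAsync = promisify(pipeline);

function streamVotesToFile(query, validVotesFileBasename) {
  // decode each base64 vote string and append a newline, as in the original code
  const toBinary = new Transform({
    writableObjectMode: true,
    transform(row, encoding, callback) {
      callback(null, Buffer.concat([Buffer.from(row.vote, 'base64'), Buffer.from('\n')]));
    }
  });

  // resolves when the file stream finishes, rejects if any stage errors
  return pipelineAsync(
    db.client.stream(query),
    toBinary,
    fs.createWriteStream(`${validVotesFileBasename}.egd`)
  );
}
With this in place there is no need to handle 'drain' manually; write() returning false in the original code was not an error either, it only signals that the stream's internal buffer is above the high-water mark, which pipe()/pipeline() already waits out for you.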
