Persist data on disk using chrome extension API - google-chrome-extension

I am trying to save some data that should still be available after the browser is restarted, i.e. the data should persist. I am using the Chrome Storage Sync API for this, but when I restart my browser, I get an empty object back from chrome.storage.sync.get.
Here is my sample code:
SW.methods.saveTaskListStore = function() {
  chrome.storage.sync.set({
    'taskListStore': SW.stores.taskListStore
  }, function() {
    if (SW.callbacks.watchProcessSuccessCallback) {
      SW.callbacks.watchProcessSuccessCallback(SW.messages.INFO_DATA_SAVED);
      SW.callbacks.watchProcessSuccessCallback = null;
    }
  });
};
SW.methods.loadTaskListStore = function() {
  SW.stores.loadTaskListStore = [];
  chrome.storage.sync.get('taskListStore', function(taskFeed) {
    var tasks = taskFeed.tasks;
    if (tasks && !tasks.length) {
      SW.stores.loadTaskListStore = tasks;
    }
  });
};
I guess I am using the wrong API.

If this is not a copy-paste error, you are storing under the key taskListStore and trying to get the data under the key loadTaskListStore.
Besides that, according to the documentation on StorageArea.get(), the callback receives an object containing the requested items in their key-value mappings. Thus, in your case, you should do:
chrome.storage.sync.get("taskListStore", function(items) {
  if (items.taskListStore) {
    var tasks = items.taskListStore.tasks;
    ...
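Putting both fixes together, a minimal sketch of a corrected load function, assuming the value you store is the task array itself and that the store is meant to be SW.stores.taskListStore throughout (adjust the property access if you actually store a wrapper object):

SW.methods.loadTaskListStore = function() {
  SW.stores.taskListStore = [];
  chrome.storage.sync.get('taskListStore', function(items) {
    // items maps each requested key to the value that was saved under it
    var tasks = items.taskListStore;
    if (tasks && tasks.length) {
      SW.stores.taskListStore = tasks;
    }
  });
};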

Related

Firebase Functions and Express: listen to firestore data live

I have a website whose frontend runs on Firebase Hosting and whose server, written with Node.js and Express, runs on Firebase Functions.
What I want is to have redirect links on my website, so I can map, for example, mywebsite.com/youtube to my YouTube channel. I create these links from my admin panel and add them to my Firestore database.
My data is roughly something like this (each document has createdAt, creatorEmail, name and url fields).
The first way I approached this was by querying my Firestore database on every request, but that is expensive and slow.
Another way I tried was setting up some kind of background listener on the Firestore database that would always provide up-to-date data, but unfortunately that did not work because Firebase Functions suspends the main function when the current request execution ends.
Lastly, and most conveniently, I configured an API route that is called from my admin panel whenever the data changes, and I save the new data to a JSON file. This worked locally but not in production, because apparently Firebase Functions runs on a read-only file system, so you can't edit any files after they are deployed. After some research I found out that Firebase Functions allows writing to the tmp directory, so I went forward with this and tried deploying it. But again, Firebase Functions was resetting the tmp folder when a request execution ended.
Here is my API request handler, which updates the utm_data.json file in the tmp directory:
// my firestore provider
const db = require('../db');
const fs = require('fs');
const os = require('os');
const mkdirp = require('mkdirp');

const updateUrlsAPI = (req, res) => {
  // we wanna get the utm list from firestore, and update the file
  // tmp/utm_data.json
  // query data from firestore
  db.collection('utmLinks').get().then(async function(querySnapshot) {
    try {
      // get the path to `tmp` folder depending on
      // the os running this program
      let tmpFolderName = os.tmpdir();
      // create `tmp` directory if not exists
      await mkdirp(tmpFolderName);
      let docsData = querySnapshot.docs.map(doc => doc.data());
      let tmpFilePath = tmpFolderName + '/utm_data.json';
      let strData = JSON.stringify(docsData);
      fs.writeFileSync(tmpFilePath, strData);
      res.send('200');
    } catch (error) {
      console.log("error while updating utm_data.json: ", error);
      res.send(error);
    }
  });
};
And this is my code for reading the utm_data.json file on an incoming request:
const os = require('os');

const readUrlsFromJson = (req, res) => {
  var url = req.path.split('/');
  // the url will be in the format of: 'mywebsite.com/routeName'
  var routeName = url[1];
  try {
    // read the file ../tmp/utm_data.json
    // {
    //   'createdAt': Date
    //   'creatorEmail': string
    //   'name': string
    //   'url': string
    // }
    // our [routeName] should match [name] of the doc
    let tmpFolderName = os.tmpdir();
    let tmpFilePath = tmpFolderName + '/utm_data.json';
    // read links list file and assign it to the `utms` variable
    let utms = require(tmpFilePath);
    if (!utms || !utms.length) {
      return undefined;
    }
    // find the link matching the routeName
    let utm = utms.find(utm => utm.name == routeName);
    if (!utm) {
      return undefined;
    }
    // if we found the doc,
    // then we'll redirect to the url
    res.redirect(utm.url);
  } catch (error) {
    console.error(error);
    return undefined;
  }
};
Is there something I am doing wrong, and if not, what is an optimal solution for this case?
You can initialize the Firestore listener in global scope. From the documentation,
The global scope in the function file, which is expected to contain the function definition, is executed on every cold start, but not if the instance has already been initialized.
This should keep the listener active even after the function's execution has completed, for as long as that specific instance keeps running (roughly ~30 minutes). Try refactoring the code as shown below:
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

let listener = false;
// Store all utmLinks in global scope
let utmLinks: any[] = [];

const initListeners = () => {
  functions.logger.info("Initializing listeners");
  admin
    .firestore()
    .collection("utmLinks")
    .onSnapshot((snapshot) => {
      snapshot.docChanges().forEach(async (change) => {
        functions.logger.info(change.type, "document received");
        switch (change.type) {
          case "added":
            utmLinks.push({ id: change.doc.id, ...change.doc.data() });
            break;
          case "modified": {
            const index = utmLinks.findIndex(
              (link) => link.id === change.doc.id
            );
            utmLinks[index] = { id: change.doc.id, ...change.doc.data() };
            break;
          }
          case "removed":
            utmLinks = utmLinks.filter((link) => link.id !== change.doc.id);
            break;
          default:
            break;
        }
      });
    });
  return;
};

// The HTTPs function
export const helloWorld = functions.https.onRequest(
  async (request, response) => {
    if (!listener) {
      // Cold start, no listener active
      initListeners();
      listener = true;
    } else {
      functions.logger.info("Listeners already initialized");
    }
    response.send(JSON.stringify(utmLinks, null, 2));
  }
);
This example stores all the UTM links in an array in global scope. The array won't be persisted across new instances, but you won't have to query each link on every request, and the onSnapshot() listener will keep utmLinks updated.
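For the redirect use case in the question, the same in-memory array can back the route handler. A minimal sketch in plain JavaScript, assuming the same listener/utmLinks globals and initListeners() function as above (the redirect export name is illustrative):

// Illustrative: redirect handler reading from the in-memory utmLinks array
exports.redirect = functions.https.onRequest((request, response) => {
  if (!listener) {
    initListeners();
    listener = true;
  }
  const routeName = request.path.split("/")[1];
  const utm = utmLinks.find((link) => link.name === routeName);
  if (!utm) {
    response.status(404).send("Link not found");
    return;
  }
  response.redirect(utm.url);
});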
If you want to persist this data permanently and avoid querying on every cold start, then you can try a Compute Engine VM, which keeps running, unlike Cloud Functions instances that are eventually terminated.

Loading data from API to file and then import it

I want to load some credentials from an API into a Node.js application, and then use them whenever necessary.
Currently there is a file that stores some information, reads the credentials directly from environment variables, and exports everything. This file is then imported wherever necessary, something like this:
creds.js
module.exports = {
  key: process.env.KEY || 'diqndwqn',
  id: process.env.ID || 'dqw2231qzaxc',
  db: {
    user: 'dqdwmkovvoij',
    pw: 'ofo9v8#$w'
  }
}
What I want is to make a call to an API from which I retrieve these values, so that they can still be imported afterwards, but the call is made only once at startup. A solution I can imagine is something like a singleton, where the API call happens only the first time. I know I could also export a promise, but I do not want to request the credentials several times, only once when the server starts. Any clean alternatives?
You could make a simple class with a populate() function and getters and export it as a singleton.
class MyCreds {
  constructor() {
    this.key = null;
    this.id = null;
    this.db = { user: null, pw: null };
  }

  async populate() {
    let creds = await whatever();
    this.key = creds.key;
    this.id = creds.id;
    this.db.user = creds.user;
    this.db.pw = creds.pw;
  }
}

const myCreds = new MyCreds();
module.exports = myCreds;
Then at the very beginning of your process you populate it with await require('my-creds').populate(), and everywhere else you access it the same way you currently do, with require('my-creds').id.
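A minimal usage sketch, assuming the class above lives in a local creds.js file and the entry point can run async code at startup:

// index.js (illustrative)
const myCreds = require('./creds');

async function main() {
  // fetch the credentials exactly once, before anything else needs them
  await myCreds.populate();

  // any later require('./creds') returns the same populated instance
  const creds = require('./creds');
  console.log(creds.id, creds.db.user);
}

main().catch((err) => {
  console.error('Failed to load credentials:', err);
  process.exit(1);
});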

Correct way to organise this process in Node

I need some advice on how to structure this function, as at the moment things are not happening in the correct order due to Node being asynchronous.
This is the flow I want to achieve. I don't need help with the code itself, but with the order of operations to achieve the end result, plus any suggestions on how to make it efficient:
1. Node routes a GET request to my controller.
2. The controller reads a .csv file on the local system and opens a read stream using the fs module.
3. Then use the csv-parse module to convert it to an array line by line (many hundreds of thousands of lines).
4. Start a try/catch block.
5. With the current row from the CSV, take a value and try to find it in MongoDB.
6. If found, take the ID and store the line from the CSV together with this ID as a foreign ID in a separate database.
7. If not found, create an entry in the DB, take the new ID and then do step 6.
8. Print to the terminal the row number being worked on (ideally, at some point I would like to be able to send this value to the page and have it update like a progress bar as the rows are completed).
Here is a small part of the code structure that I am currently using:
const fs = require('fs');
const parse = require('csv-parse');

function addDataOne(req, id) {
  const modelOneInstance = new InstanceOne({ ...code });
  const resultOne = modelOneInstance.save();
  return resultOne;
}

function addDataTwo(req, id) {
  const modelTwoInstance = new InstanceTwo({ ...code });
  const resultTwo = modelTwoInstance.save();
  return resultTwo;
}

exports.add_data = (req, res) => {
  const fileSys = 'public/data/';
  const parsedData = [];
  let i = 0;
  fs.createReadStream(`${fileSys}${req.query.file}`)
    .pipe(parse({}))
    .on('data', (dataRow) => {
      let RowObj = {
        one: dataRow[0],
        two: dataRow[1],
        three: dataRow[2],
        etc,
        etc
      };
      try {
        ModelOne.find(
          { propertyone: RowObj.one, propertytwo: RowObj.two },
          '_id, foreign_id'
        ).exec((err, searchProp) => {
          if (err) {
            console.log(err);
          } else {
            if (searchProp.length > 1) {
              console.log('too many returned from find function');
            }
            if (searchProp.length === 1) {
              addDataOne(RowObj, searchProp[0]).then((result) => {
                searchProp[0].foreign_id.push(result._id);
                searchProp[0].save();
              });
            }
            if (searchProp.length === 0) {
              let resultAddProp = null;
              addDataTwo(RowObj).then((result) => {
                resultAddProp = result;
                addDataOne(req, resultAddProp._id).then((result) => {
                  resultAddProp.foreign_id.push(result._id);
                  resultAddProp.save();
                });
              });
            }
          }
        });
      } catch (error) {
        console.log(error);
      }
      i++;
      let iString = i.toString();
      process.stdout.clearLine();
      process.stdout.cursorTo(0);
      process.stdout.write(iString);
    })
    .on('end', () => {
      res.send('added');
    });
};
I have tried to make the functions use async/await, but it seems to conflict with the fs.createReadStream or csv-parse functionality, probably due to my inexperience and incorrect use of the code...
I appreciate that this is a long question about the fundamentals of the code, but some tips/advice/pointers on how to get this going would be appreciated. I had it working when the data was sent one record at a time via a POST request from Postman, but I can't implement the next stage, which is to read from the CSV file that contains many records.
First of all, you can merge the following checks into one query:
if (searchProp.length === 1) {
if (searchProp.length === 0) {
Use the upsert option in MongoDB's findOneAndUpdate query to update the document or insert it if it does not exist, as sketched below.
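A minimal sketch of that combined lookup-or-create step with Mongoose (model and field names come from the question; the helper name findOrCreateProp is illustrative):

// Illustrative: find the matching ModelOne document, or create it in one round trip
function findOrCreateProp(RowObj) {
  return ModelOne.findOneAndUpdate(
    { propertyone: RowObj.one, propertytwo: RowObj.two },
    { $setOnInsert: { propertyone: RowObj.one, propertytwo: RowObj.two } },
    { upsert: true, new: true } // return the (possibly newly created) document
  ).exec();
}

// Usage inside the row handler:
// const prop = await findOrCreateProp(RowObj);
// const result = await addDataOne(RowObj, prop._id);
// prop.foreign_id.push(result._id);
// await prop.save();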
Secondly, don't do this work in the main request handler. Use a queue mechanism; it will be much more efficient.
The queue I personally use is Bull:
https://github.com/OptimalBits/bull#basic-usage
It also provides the progress-reporting functionality you need, as in the sketch below.
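A rough sketch of that idea with Bull, assuming a local Redis instance and reusing RowObj from the question (the queue name and the commented-out helpers are illustrative):

const Queue = require('bull');

// Illustrative: one job per CSV row, processed off the request path
const rowQueue = new Queue('csv-rows'); // defaults to redis://127.0.0.1:6379

rowQueue.process(async (job) => {
  const RowObj = job.data;
  // ... run the findOneAndUpdate / addDataOne logic for this row here ...
  await job.progress(100); // report per-job progress for a progress bar
});

// In the stream's 'data' handler, enqueue instead of hitting MongoDB directly:
// rowQueue.add(RowObj);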
Also, regarding using async/await with a read stream, plenty of examples can be found online, such as: https://humanwhocodes.com/snippets/2019/05/nodejs-read-stream-promise/
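As a rough illustration of that approach (assuming a Node version where readable streams are async iterables and the same csv-parse version used in the question), the parser stream can be consumed row by row with for await, so each row's database work finishes before the next row starts:

const fs = require('fs');
const parse = require('csv-parse');

async function processFile(filePath) {
  // the piped parser stream is an async iterable of parsed rows
  const parser = fs.createReadStream(filePath).pipe(parse({}));
  let i = 0;
  for await (const dataRow of parser) {
    // await the MongoDB work for this row before moving on,
    // e.g. the findOrCreateProp / addDataOne calls sketched above
    i++;
    process.stdout.write(`\r${i}`);
  }
  return i;
}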

react-beautiful-dnd and mysql state management

Okay, I'm confused as to the best way to do this.
The following pieces are in play: a Node.js server, a client-side React app (with Redux), and a MySQL DB.
In the client app I have lists (many, but for this issue assume one) that I want to be able to reorder by drag and drop.
In the MySQL DB the items are stored to represent a linked list (with a nextKey, lastKey and productionKey (primary), along with the data fields):
// mysql columns: [productionKey, lastKey, nextKey, ...(other data)]
The current issue I'm having is a render issue: it stutters after every change.
I'm using these two functions to get the initial order and to reorder:
function SortLinkedList(linkedList) {
  var sortedList = [];
  var map = new Map();
  var currentID = null;
  for (var i = 0; i < linkedList.length; i++) {
    var item = linkedList[i];
    if (item?.lastKey === null) {
      currentID = item?.productionKey;
      sortedList.push(item);
    } else {
      map.set(item?.lastKey, i);
    }
  }
  while (sortedList.length < linkedList.length) {
    var nextItem = linkedList[map.get(currentID)];
    sortedList.push(nextItem);
    currentID = nextItem?.productionKey;
  }
  const filteredSafe = sortedList.filter(x => x !== undefined);
  // undefined entries appear because the server has not fully updated yet, so the linked list is broken
  // nothing will render without this
  return filteredSafe;
}
const reorder = (list, startIndex, endIndex) => {
  const result = Array.from(list);
  const [removed] = result.splice(startIndex, 1);
  result.splice(endIndex, 0, removed);
  const adjustedResult = result.map((x, i, arr) => {
    if (i == 0) {
      x.lastKey = null;
    } else {
      x.lastKey = arr[i - 1].productionKey;
    }
    if (i == arr.length - 1) {
      x.nextKey = null;
    } else {
      x.nextKey = arr[i + 1].productionKey;
    }
    return x;
  });
  return adjustedResult;
};
I've got this function to get the items
const getItems = (list, jobList) => {
  return list.map((x, i) => {
    const jobName = jobList.find(y => y.jobsessionkey == x.attachedJobKey)?.JobName;
    return {
      id: `ProductionCardM${x.machineID}ID${x.productionKey}`,
      attachedJobKey: x.attachedJobKey,
      lastKey: x.lastKey,
      machineID: x.machineID,
      nextKey: x.nextKey,
      productionKey: x.productionKey,
      content: jobName
    };
  });
};
My onDragEnd:
const onDragEnd = (result) => {
  // dropped outside the list
  if (!result.destination) {
    return;
  }
  const items = reorder(
    state.items,
    result.source.index,
    result.destination.index,
  );
  dispatch(sendAdjustments(items));
  // sends update to server
  // server updates mysql
  // server sends back update events from mysql in packets
  // props sent to DnD component are updated
};
So the actual bug looks like a graphics glitch: as items get temporarily filtered out in the SortLinkedList function, the divs jump around. Is there a smoother way to handle this client -> server -> DB -> server -> client data flow that results in consistent handling in DnD?
UPDATE:
Still trying to solve this. I have currently implemented a lock pattern:
useEffect(() => {
  if (productionLock) {
    setState({
      items: SortLinkedList(getItems(data, jobList)),
      droppables: [{ id: "Original: not Dynamic" }]
    });
    setLoading(false);
  } else {
    console.log("locking first");
    setLoading(true);
  }
}, [productionLock]);
where productionLock is set to true and false by triggers from the server.
Basically: the app sends the data to the server, the server processes the request and sends new data back, and when it's finished the server sends the unlock signal.
That should make this update happen only once, but it does not; it still re-renders on each state update pushed to the app from the server.
What’s the code for sendAdjustments()?
You should update locally first; otherwise DnD pulls the item back to its original position while you wait for the backend to finish, which makes it appear glitchy. E.g. (see the sketch after this list):
1. Set the newly reordered list locally as your state.
2. Send the network request.
3. If it fails, revert the local list state back to the original list.
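A minimal sketch of that optimistic-update flow in onDragEnd, reusing reorder and sendAdjustments from the question and assuming sendAdjustments is a thunk that returns a promise (the local-only setItems action is illustrative):

const onDragEnd = async (result) => {
  if (!result.destination) {
    return; // dropped outside the list
  }
  const previousItems = state.items;
  const items = reorder(state.items, result.source.index, result.destination.index);

  // 1. Update local state immediately so the card stays where it was dropped
  dispatch(setItems(items)); // illustrative local-only action

  try {
    // 2. Then persist the new order on the server
    await dispatch(sendAdjustments(items));
  } catch (err) {
    // 3. On failure, roll back to the previous order
    dispatch(setItems(previousItems));
  }
};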

node-imap module fetch gmail list folders(labels)

I'm trying to fetch the Gmail folder list (labels, actually).
I'm using Node.js and this module: https://github.com/mscdex/node-imap
I want to fetch all folders and subfolders.
The documentation that the author left is not very clear.
Any ideas about this?
Finally, after some hard work, I found the answer.
This is how I get the folders, not only for Gmail but for every email system that uses the IMAP standard.
After connecting to the IMAP server, get all folders with this function:
function getFolders(username, callback) {
  var folders = [];
  if (Connection[username]) {
    Connection[username].once('ready', function() {
      Connection[username].getBoxes(function (err, boxes) {
        if (err) {
          // TODO : parse some error here
        } else {
          folders = imapNestedFolders(boxes);
        }
        return callback(err, folders);
      });
    });
  } else {
    return framework.httpError(500, self);
  }
}
Then parse the folders into a nicely nested tree JSON object with this function:
function imapNestedFolders(folders) {
  var FOLDERS = [];
  var folder = {};
  for (var key in folders) {
    if (folders[key].attribs.indexOf('\\HasChildren') > -1) {
      var children = imapNestedFolders(folders[key].children);
      folder = {
        name: key,
        children: children
      };
    } else {
      folder = {
        name: key,
        children: null
      };
    }
    FOLDERS.push(folder);
  }
  return FOLDERS;
}
You might also need to change the Connection variable to whatever you use. These functions work with multiple connections, which is why Connection is a collection indexed by username. You can read more about how to use multiple connections in node-imap here:
How can i handle multiple imap connections in node.js?
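For context, a minimal sketch of how such a connection might be created and stored before calling getFolders (the credentials, host settings and the Connection map are assumptions; see the node-imap README for the full option list):

const Imap = require('imap');

// Illustrative: map of username -> Imap connection, as used by getFolders above
const Connection = {};

function openConnection(username, password) {
  Connection[username] = new Imap({
    user: username,
    password: password,
    host: 'imap.gmail.com', // Gmail's IMAP endpoint; adjust for other providers
    port: 993,
    tls: true
  });
  Connection[username].connect();
}

// openConnection('me@gmail.com', 'app-password');
// getFolders('me@gmail.com', function (err, folders) {
//   console.log(JSON.stringify(folders, null, 2));
// });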
