I want to save some value (say a user ID) from my extension and then access it on other pages. I cannot do it using a cookie because I can only access cookies within the extension.
Is there a way to do that?
I am sorry, it might just be me being thick but I still don't really understand what you are trying to do.
If you want to create a variable that persists across different domains, you need to use chrome.storage. It is similar to HTML5 localStorage but with some significant improvements (persisting across domains being a main one for me). You can read more about setting and getting the values here:
http://developer.chrome.com/extensions/storage.html
If this isn't what you need, please try to give a specific example of what you are trying to do and I will try again.
EDIT: Please see below for examples.
You can set a variable into storage like this (the function is an optional param):
var storage = chrome.storage.local;
var myVal = "foo";

storage.set({'myVar': myVal}, function() {
    alert('myVar has been saved in local storage');
});
You can retrieve it like this (function is NOT optional):
storage.get('myVar', function(items) {
    if (items.myVar) {
        alert(items.myVar);
    }
});
You can have a default value returned when 'myVar' is undefined by passing an object to get, e.g. storage.get({'myVar': ''}, ...) - this saves checking with the "if (items.myVar) {" line.
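For example, a minimal sketch of the default-value form (the 'n/a' default here is just illustrative):

storage.get({'myVar': 'n/a'}, function(items) {
    // items.myVar is 'n/a' when nothing was stored, otherwise the saved value
    alert(items.myVar);
});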
EDIT: Please see below for issues to watch out for.
Firstly, the methods are asynchronous. So...
alert("1");
storage.get('myVar', function(items) {
alert("2");
});
alert("3");
will probably output 1, then 3, then 2. Not a big deal, but knowing this beforehand saves a lot of restructuring later.
Secondly,
please note that there is a limit to the amount of storage space you have.
For chrome.storage.sync (where the variables are automatically synced across all of that user's browsers) the storage space is very limited, at about 100KB (there are other limits as well, such as a maximum of 512 separate variables - see the documentation link above). Basically, chrome.storage.sync is for a few small variables only.
chrome.storage.local has much more storage space, at about 5MB (unless you set the unlimitedStorage permission). Unlike chrome.storage.sync, there are no other limits on chrome.storage.local.
If a set will take you over the storage limit, then it fails.
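A minimal sketch of detecting a failed set, checking chrome.runtime.lastError inside the callback (the variable names are reused from the examples above):

storage.set({'myVar': myVal}, function() {
    if (chrome.runtime.lastError) {
        // The write failed, e.g. because it would have exceeded the quota
        console.warn('set failed: ' + chrome.runtime.lastError.message);
    }
});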
Thirdly, using a variable as your variable name does NOT work:
var myVal="foo";
var myVarName='myVar';
storage.set({myVarName: myVal}, function() {
alert('THIS WILL NOT WORK');
});
If, like me, you really want or need to do something like this, the way to do it is to create an object, assign your variables to it as properties, and then put that object into storage. For example:
var myVarName = 'myVar';
var mySecondVarName = 'mySecondVar';
var myVal = "foo";
var mySecondVal = "bar";

var obj = {};
obj[myVarName] = myVal;
obj[mySecondVarName] = mySecondVal;
// add as many vars as you want to save to the storage
// (as long as you do not exceed the max storage space mentioned above)

storage.set(obj); // storage is chrome.storage.local, as defined earlier
You can then get the values with:
chrome.storage.local.get(myVarName, function (items) {
    if (items[myVarName]) {
        alert('You got the value that was set to myVarName');
    }
});
Or if you wanted the value for mySecondVarName:
chrome.storage.local.get(mySecondVarName, function (items) {
    if (items[mySecondVarName]) {
        alert('You got the value that was set to mySecondVarName');
    }
});
And these can be nested.
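As a side note, if your extension targets a Chrome version with ES2015 support, computed property names let you skip building the object by hand; a minimal sketch (this is a modern alternative, not part of the original answer):

var myVarName = 'myVar';
var myVal = "foo";
// The [myVarName] syntax uses the variable's value ('myVar') as the key
storage.set({[myVarName]: myVal}, function() {
    alert('Saved using a computed property name');
});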
I want to use server-to-server CloudKit JS to save a record with an Asset field. The Asset field is an m4a audio file; after saving, the audio file is corrupted and won't play.
Apple's documentation is not clear about the Asset field:
In a record that is being saved to the database, the value of an Asset field must be a window.Blob type. In the code fragment above, the type of the assetFile variable is window.File.
Docs:
https://developer.apple.com/documentation/cloudkitjs/cloudkit/database/1628735-saverecords
But in Node.js there is no Blob or File, so I filled the field with a Buffer, like this:
var dstFile = path.join(__dirname, "../test.m4a");
var data = fs.readFileSync(dstFile);
let buffer = Buffer.from(data);

var rec = {
    recordType: "MyAttachment",
    fields: {
        ext: { value: ".m4a" },
        file: { value: buffer }
    }
};
//console.debug(rec);
mydatabase.newRecordsBatch().create(rec).commit().then(function (response) {
    if (response.hasErrors) {
        console.log(">>> saveAttachFile record failed");
        console.warn(response.errors[0]);
    } else {
        var createdRecord = response.records[0];
        console.log(">>> saveAttachFile record success:", createdRecord);
    }
});
The record is saved successfully. But when I download the audio from icloud.developer.apple.com/dashboard, the audio file is corrupted and won't play.
What is wrong with it? Thank you for any replies.
I was having the same problem and have found a working solution!
Remembering that CloudKitJS needs you to define your own fetch method, I implemented a custom one to see what was going on. I then attached a debugger on the custom fetch to inspect the data that was passing through it.
After stepping through the caller, I found that all asset values are transformed using its toString() method only when the library is embedded in NodeJS. This is determined by the absence of the global window object.
When toString() is called on a Buffer, its contents are encoded to UTF-8 (by default), which causes binary assets to become malformed. If you're using node-fetch for your fetch implementation, it supports Buffer and stream.Readable, so this toString() call does nothing but harm.
The most unobtrusive fix I've found is to swap the toString() method on any Buffer or stream.Readable instances passed as asset field values. You should probably use stream.Readable, by the way, so that you don't load the entire asset into memory when uploading.
Anyway, here's what it looks like in practice:
// Put this somewhere in your implementation
const swizzleBuffer = (buffer) => {
    buffer.toString = () => buffer;
    return buffer;
};
// Use this asset value instead
{ asset: swizzleBuffer(fs.readFileSync(path)) }
Please be aware that this workaround mutates a Buffer in an ugly way (since Buffer apparently can't be extended). It's probably a good idea to design an API which doesn't use Buffer arguments, so that you only mutate instances you create yourself and avoid unintended side effects elsewhere in your code.
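If you take the stream.Readable route suggested above, the same swizzle should in principle work on the stream; a minimal, untested sketch (swizzleStream is my own name, not from the library):

const fs = require('fs');

// Hypothetical stream variant of the workaround: override toString so the
// library passes the stream through unchanged instead of stringifying it
const swizzleStream = (stream) => {
    stream.toString = () => stream;
    return stream;
};

// Usage, assuming your fetch implementation (e.g. node-fetch) accepts stream.Readable bodies:
// { asset: swizzleStream(fs.createReadStream(path)) }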
Also, be sure to vendor (make a local copy of) CloudKitJS in your project, as the behavior may change in the future.
ORIGINAL ANSWER
I ran into the same problem and solved it by encoding my data using Base64. It appears that there's a bug in their SDK which mangles Buffer instances containing non-ascii characters (which, um, seems problematic).
Anyway, try something like this:
const assetField = { value: Buffer.from(data.toString('base64'), 'ascii') }
Side note:
You'll need to decode the asset(s) on the device before using them. There's no way to do this efficiently without writing your own routines, as the methods included in Data / NSData instances requires all data to be in memory.
This is a problem with CloudKitJS (and not the native CloudKit client / service), so the other option is to write your own routine to upload assets.
Neither of these options seems particularly great, but rolling your own at least means there aren't extra steps for clients to take in order to use the asset.
The end result I need is to send multiple images to a web browser from a database.
The images are stored as blobs.
I know I can stream them out of the database into a file and then just give the browser the URL of the file.
I also know I can hand off a Base64 string to the browser so it can render the image.
My question is: which option is the most optimal, or best practice? Keep in mind that if I go the streaming route, I would have to check whether the image has changed since the last time I displayed it, and if it has changed, I have to re-stream it out of the database.
I have been playing with node-oracledb for Node.js and was able to successfully extract one blob into a file, but I am also having trouble streaming multiple files.
This is a two-question post.
Which is the most optimal:
1. Send a Base64 string - I kind of like this method because I don't have to worry about streaming out the file and checking whether it has changed, since it comes straight from the database. My concern is whether the browser/Node.js can handle it; I know those strings can be very large, and I could be sending more than one image at a time.
2. Stream the blobs into files.
The second question is how I can get multiple blobs out. Below is my code for streaming just one file; I found this example on GitHub, lobstream1.js:
https://raw.githubusercontent.com/oracle/node-oracledb/master/examples/lobstream1.js
Focusing on the code:
// Stream a LOB to a file
var dostream = function(lob, cb) {
    if (lob.type === oracledb.CLOB) {
        console.log('Writing a CLOB to ' + outFileName);
        lob.setEncoding('utf8');  // set the encoding so we get a 'string' not a 'buffer'
    } else {
        console.log('Writing a BLOB to ' + outFileName);
    }

    var errorHandled = false;

    lob.on(
        'error',
        function(err) {
            console.log("lob.on 'error' event");
            if (!errorHandled) {
                errorHandled = true;
                lob.close(function() {
                    return cb(err);
                });
            }
        });
    lob.on(
        'end',
        function() {
            console.log("lob.on 'end' event");
        });
    lob.on(
        'close',
        function() {
            // console.log("lob.on 'close' event");
            if (!errorHandled) {
                return cb(null);
            }
        });

    var outStream = fs.createWriteStream(outFileName);
    outStream.on(
        'error',
        function(err) {
            console.log("outStream.on 'error' event");
            if (!errorHandled) {
                errorHandled = true;
                lob.close(function() {
                    return cb(err);
                });
            }
        });

    // Switch into flowing mode and push the LOB to the file
    lob.pipe(outStream);
};
I fixed spooling out the images with this method; I did change dostream a bit.
for (var x = 0; x < result.rows.length; x++) {
    outputFileName = x + '.jpg';
    console.log(outputFileName);
    console.log(x);
    var lob = result.rows[x][0];
    dostream(lob, outputFileName);
    // cb(null, lob);
}
Thank you for any help.
Given all the detail you provided in subsequent comments including the average image size, number of distinct images, memory available to Node.js, number of concurrent users, and the fact that it's "very critical to have the images up to date", here's my initial take...
For the first implementation, stick to the KISS principle and avoid over-engineering. Disable browser caching and don't cache images in Node.js. Instead, rely on the driver and Oracle Database to do the heavy lifting for you.
As for the table storing the images, try to use SecureFile LOBs over BasicFile LOBs (they are known to perform better) if possible. Also, look at the caching options available to both (CACHE, CACHE READS, and NOCACHE). Consider enabling the CACHE READS option based on your stated workload, but work with your DBA to ensure the buffer cache is sized appropriately so you will not impact others.
You can rely on the connection pool's connection request queue to help control how many people are fetching files concurrently. In fact, you might want to create a separate pool just for this purpose so that people fetching LOBs aren't blocking people doing other things in the application. For example, let's say you normally have one connection pool with 10 connections. You could create two connection pools with 5 connections each (use the connection pool cache to make this easy). Then, in the code path that fetches lobs, use the lob pool and use the other pool for everything else.
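Here's a minimal sketch of that two-pool setup using the connection pool cache; the aliases, credentials, and connect string below are placeholders, not part of the original answer:

const oracledb = require('oracledb');

async function initPools() {
    // Pool dedicated to LOB fetches
    await oracledb.createPool({
        poolAlias: 'lobPool',
        user: 'app_user', password: 'secret', connectString: 'localhost/XEPDB1',
        poolMin: 5, poolMax: 5
    });
    // Pool for everything else
    await oracledb.createPool({
        poolAlias: 'default',
        user: 'app_user', password: 'secret', connectString: 'localhost/XEPDB1',
        poolMin: 5, poolMax: 5
    });
}

// In the LOB code path, get connections from the dedicated pool:
// const conn = await oracledb.getConnection('lobPool');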
Given this setup, I'd also recommend NOT streaming the LOBs. Using the driver's ability to buffer the LOBs in Node.js will greatly simplify the code and you should have plenty of memory given such a small number of concurrent users/file fetches.
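A minimal sketch of the buffering approach, assuming a node-oracledb version that supports fetchAsBuffer (the table and column names are made up):

const oracledb = require('oracledb');

// Return BLOB columns directly as Node.js Buffers instead of Lob streams
oracledb.fetchAsBuffer = [oracledb.BLOB];

async function getImage(id) {
    const conn = await oracledb.getConnection('lobPool');
    try {
        const result = await conn.execute(
            'SELECT image FROM product_images WHERE id = :id', [id]);
        return result.rows[0][0];  // a Buffer, ready to send to the browser
    } finally {
        await conn.close();
    }
}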
The biggest problem with this scenario is that the images are pretty large and they'll always be flowing from the database through Node.js to the browser. But since you'll be on an internal network, this might not be much of a problem. If it does turn out to be a problem, you can start to add caching in either the browser or Node.js, based on what makes the most sense.
Unless you do something like tiling or inline Base64 encoding, each image needs its own URL, so each invocation of node-oracledb would return just one image. You could do some kind of caching by writing to disk, but this adds extra I/O - you will need to test to measure your own system's performance and memory requirements. Regarding accessing multiple images in node-oracledb, there is some code in https://github.com/oracle/node-oracledb/issues/1041#issuecomment-459002641 that may be useful.
I am trying to build a logging mechanism to log changes made to a record. I am currently logging the previous and the new record. However, as the site is very busy, I expect the log file to grow seriously large. To avoid this, I plan to capture only the modified fields.
Is there a way to capture only the modifications made to a record (in React), so my {request.body} will have fewer fields?
My server side is built with Node.js and the client side is React.
One approach you might want to consider is to add an onChange (universal) or onTextChanged (native) listener to the text field and store the form updates in local state/variables.
Then, when the user performs an action (submit, etc.), you send only the updated data to the logging module, as sketched below.
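As an illustration, a minimal React sketch of that idea; the field names and the /api/log endpoint are placeholders:

import React, { useState } from 'react';

function RecordForm({ record }) {
    // Only the fields the user actually edits end up in `changes`
    const [changes, setChanges] = useState({});

    const handleChange = (e) => {
        const { name, value } = e.target;
        setChanges((prev) => ({ ...prev, [name]: value }));
    };

    const handleSubmit = (e) => {
        e.preventDefault();
        // Send just the modified fields to the server/logging module
        fetch('/api/log', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(changes)
        });
    };

    return (
        <form onSubmit={handleSubmit}>
            <input name="title" defaultValue={record.title} onChange={handleChange} />
            <input name="status" defaultValue={record.status} onChange={handleChange} />
            <button type="submit">Save</button>
        </form>
    );
}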
The best way I found, and what works for me, is this:
On the API server side, where I handle the update request, before hitting the database I take the difference between the previous record and {request.body} using lodash, and pass the result to my database update function.
var _ = require('lodash');

const difference = (object, base) => {
    function changes(object, base) {
        return _.transform(object, function (result, value, key) {
            if (!_.isEqual(value, base[key])) {
                result[key] = (_.isObject(value) && _.isObject(base[key])) ? changes(value, base[key]) : value;
            }
        });
    }
    return changes(object, base);
};

module.exports = difference;
I saved the above code in a file named diff.js and included it in my server-side file.
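For illustration, usage on the server side might look like this; the route, getRecordById, updateRecord, and logger are placeholders, not part of the original answer:

const difference = require('./diff');

app.put('/records/:id', async (req, res) => {
    const previous = await getRecordById(req.params.id);   // hypothetical data access helper
    const changedFields = difference(req.body, previous);  // only the modified fields
    logger.info({ recordId: req.params.id, changes: changedFields });
    await updateRecord(req.params.id, changedFields);
    res.sendStatus(204);
});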
It worked well.
Thanks for the idea...
I'm new to Node but I figured I'd jump right in and start converting a PHP app into Node/Express. It's a bilingual app that uses gettext with PO/MO files. I found a Node module called node-gettext. I'd rather not convert the PO files into another format right now, so it seems this library is my only option.
So my concern is that right now, before every page render, I'm doing something like this:
exports.home_index = function(req, res)
{
    var gettext = require('node-gettext'),
        gt = new gettext();
    var fs = require('fs');

    gt.textdomain('de');
    var fileContents = fs.readFileSync('./locale/de.mo');
    gt.addTextdomain('de', fileContents);

    res.render(
        'home/index.ejs',
        { gt: gt }
    );
};
I'll also be using the translations in classes, so with how it's set up now I'd have to load the entire translation file again every time I want to translate something in another place.
The translation file is about 50k and I really don't like having to do file operations like this on every page load. In Node/Express, what would be the most efficient way to handle this (aside from a database)? Usually a user won't even be changing their language after the first time (if they're changing it from English).
EDIT:
Ok, I have no idea if this is a good approach, but it at least lets me reuse the translation file in other parts of the app without reloading it everywhere I need to get translated text.
In app.js:
var express = require('express'),
    app = express(),
    ...
    gettext = require('node-gettext'),
    gt = new gettext();
Then, also in app.js, I create the variable app.locals.gt to contain the gettext/translation object, and I include my middleware function:
app.locals.gt = gt;
app.use(locale());
In my middleware file I have this:
module.exports = function locale()
{
    return function(req, res, next)
    {
        // do stuff here to populate lang variable

        var fs = require('fs');

        req.app.locals.gt.textdomain(lang);
        var fileContents = fs.readFileSync('./locales/' + lang + '.mo');
        req.app.locals.gt.addTextdomain(lang, fileContents);

        next();
    };
};
It doesn't seem like a good idea to assign the loaded translation file to app, since depending on the current request that file will be in one of two languages. If I assigned the loaded translation file to app instead of to a request variable, could that mix up users' languages?
Anyway, I know there's got to be a better way of doing this.
The simplest option would be to do the following:
Add this in app.js:
var languageDomains = {};
Then modify your Middleware:
module.exports = function locale()
{
    return function(req, res, next)
    {
        // do stuff here to populate lang variable

        if ( !req.app.locals.languageDomains[lang] ) {
            var fs = require('fs');
            var fileContents = fs.readFileSync('./locales/' + lang + '.mo');
            req.app.locals.languageDomains[lang] = true;
            req.app.locals.gt.addTextdomain(lang, fileContents);
        }

        req.textdomain = req.app.locals.gt.textdomain(lang);

        next();
    };
};
By checking whether the file has already been loaded, you prevent the load from happening multiple times, and the domain data stays resident in the server's memory. The downside of this simple solution is that if you ever change the contents of your .mo files whilst the server is running, the changes won't be taken into account. However, this code could be extended to keep an eye on the mtime of the files and reload accordingly, or to make use of fs.watchFile, if required:
if ( !req.app.locals.languageDomains[lang] ) {
    var fs = require('fs'), filename = './locales/' + lang + '.mo';
    var fileContents = fs.readFileSync(filename);
    fs.watchFile(filename, function (curr, prev) {
        req.app.locals.gt.addTextdomain(lang, fs.readFileSync(filename));
    });
    req.app.locals.languageDomains[lang] = true;
    req.app.locals.gt.addTextdomain(lang, fileContents);
}
Warning: It should also be noted that using sync versions of functions outside of server initialisation is not a good idea, because they can freeze the thread. You'd be better off changing your sync loading to the async equivalent, along the lines of the sketch below.
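A minimal sketch of the async equivalent (the same middleware, just swapping readFileSync for fs.readFile; error handling is kept deliberately simple):

module.exports = function locale()
{
    return function(req, res, next)
    {
        // do stuff here to populate lang variable

        if ( req.app.locals.languageDomains[lang] ) {
            req.textdomain = req.app.locals.gt.textdomain(lang);
            return next();
        }

        var fs = require('fs');
        fs.readFile('./locales/' + lang + '.mo', function (err, fileContents) {
            if (err) { return next(err); }
            req.app.locals.languageDomains[lang] = true;
            req.app.locals.gt.addTextdomain(lang, fileContents);
            req.textdomain = req.app.locals.gt.textdomain(lang);
            next();
        });
    };
};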
After the above changes, rather than passing gt to your template, you should be able to use req.textdomain instead. It seems that the gettext library supports a number of operations directly on each domain object, which means you hopefully don't need to refer to the global gt object on a per-request basis (which will be changing its default domain on each request):
Each domain supports:
getTranslation
getComment
setComment
setTranslation
deleteTranslation
compilePO
compileMO
Taken from here:
https://github.com/andris9/node-gettext/blob/e193c67fdee439ab9710441ffd9dd96d027317b9/lib/domain.js
update
A little bit of further clarity.
Once the server has loaded the file into memory the first time, it should remain there for all subsequent connections it receives (for any visitor/request) because it is stored globally and won't be garbage collected, unless you remove all references to the data, which would mean gettext would need to have some kind of unload/forget-domain method.
Node is different to PHP in that its environment is shared and wraps its own HTTP server (if you are using something like Express), which means it is very easy to remember data globally as it has a constant environment that all the code is executed within. PHP is always executed after the HTTP server has received and dealt with the request (e.g. Apache). Each PHP response is then executed in its own separate run-time, which means you have to rely on databases, sessions and cache stores to share even simple information and most resources.
further optimisations
Obviously with the above you are still running translations on each page load, which means the gettext library will still be working through the translation data resident in memory, and that takes up processing time. To get around this, it would be best to make sure your URLs have something that makes them unique for each language, i.e. my-page/en/ or my.page.fr or even jp.domain.co.uk/my-page, and then enable some kind of full-page caching using something like memcached or express-view-cache. However, once you start caching pages you need to make certain there aren't any regions that are user specific; if there are, you need to start implementing more complicated systems that are sensitive to those areas.
Remember: The golden rule of optimisation, don't do so before you need to... basically meaning I wouldn't worry about page caching until you know it's going to be an issue, but it is always worth bearing in mind what your options are, as it should shape your code design.
update 2
Just to illustrate a bit further on the behaviour of a server running in JavaScript, and how the global behaviour is not just a property of req.app, but in fact any object that is further up the scope chain.
So, as an example, instead of adding var languageDomains = {}; to your app.js, you could instantiate it further up the scope of wherever your middleware is placed. It's best to keep your global entities in one place however, so app.js is the better place, but this is just for illustration.
var languageDomains = {};

module.exports = function locale()
{
    /// you can still access languageDomains here, and it will behave
    /// globally for the entire server.
    languageDomains[lang]
}
So basically, whereas with PHP the entire code-base is re-executed on each request (so languageDomains would be instantiated anew each time), in Node the only part of the code to be re-executed is the code within locale() (because it is triggered as part of a new request). This function will still have a reference to the already existing and defined languageDomains via the scope chain. Because languageDomains is never reset (on a per-request basis) it will behave globally.
Concurrent users
Node.js is single threaded. This means that in order for it to be concurrent i.e. handle multiple requests at the "same" time, you have to code your app in such a way that each little part can be executed very quickly and then slip into a waiting state, whilst another part of another request is dealt with.
This is the reason for the asynchronous and callback nature of Node, and the reason to avoid Sync calls whilst your app is running. Any one Sync request could halt or freeze execution of the thread and delay handling for all other requests. The reason why I state this is to give you a better idea of how multiple users might interact with your code (and global objects).
Basically, once a request is being dealt with by your server, it is its only focus until that particular execution cycle ends, i.e. your request handler stops calling other code that needs to run synchronously. Once that happens, the next queued item is dealt with (a callback or something); this could be part of another request, or it could be the next part of the current request.
I am a newbie trying to write a basic extension. For my extension to work I need to initialize some data, so inside my background.js I declared something like this:
localStorage["frequency"] = 1; // I want 1 as the default value. This line is not inside any method; it's just the first line of background.js.
Users can go to the Options page and change this variable to any value using the GUI. As soon as the user changes it in the UI, I update the value.
Now the problem is, to my understanding, background.js reloads every time the machine is restarted. So every time I restart my machine and open Chrome, the frequency value is changed back to 1. To avoid this, where do I need to initialize this value?
You could just use a specific default key. So if frequency is not set you would try default-frequency. The default keys are then still set or defined in the background.js.
I like to do that in one step, in a function like this:
function storageGet(key, defaultValue) {
    var item = localStorage.getItem(key);
    if (item === null) return defaultValue;
    else return item;
}
(According to the specification, localStorage.getItem must return null if no value has been set.)
So for your case it would look something like
var f = storageGet("frequency",1);
Furthermore, you might be interested in checking out the chrome.storage API. It's used similarly to localStorage but provides additional functionality which might be useful for your extension. In particular, it can synchronize the user's data across different Chrome browsers.
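For example, a minimal sketch of the same default-value pattern with chrome.storage (note the API is asynchronous, unlike localStorage):

// 1 is returned as the default whenever 'frequency' has never been set
chrome.storage.sync.get({frequency: 1}, function(items) {
    var f = items.frequency;
    // use f here; this callback runs asynchronously
});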
edit: I changed the if statement with regard to apsillers' objection. But since the specification says it ought to be null, I think it makes sense to check for that instead of undefined.
This is another solution:
// background.js
initializeDefaultValues();

function initializeDefaultValues() {
    if (localStorage.getItem('default_values_initialized')) {
        return;
    }
    // set default values for your variables here
    localStorage.setItem('frequency', 1);
    localStorage.setItem('default_values_initialized', true);
}
I think the problem lies with your syntax. To get and set your localStorage values try using this:
// to set
localStorage.setItem("frequency", 1);
// to get
localStorage.getItem("frequency");