I would like to save the initial state of a workbook (or of the Excel application) at the beginning, so that I can always get back to it regardless of any modifications my add-in makes to the workbook.
I tried the following code in Home.js:
(function() {
    "use strict";

    Office.initialize = function(reason) {
        $(document).ready(function() {
            app.initialize();
            initial();
            $('#getInitial').click(getInitial);
        });
    };

    var ctxInitial;

    function initial() {
        ctxInitial = new Excel.RequestContext();
    }

    function getInitial() {
        Excel.run(function () {
            var wSheetName = 'Sheet1';
            var worksheet = ctxInitial.workbook.worksheets.getItem(wSheetName);
            var usedRange = worksheet.getUsedRange();
            usedRange.load(["values"]);
            return ctxInitial.sync().then(function() {
                document.getElementById("area").value += usedRange.values.toString();
            });
        });
    }
})();
At the very beginning, my test prints the initial values of the worksheet correctly. However, after some manual modification of cell values, getInitial prints the current state of the worksheet rather than the initial values.
Does anyone know the best practice to achieve this?
A context does not store workbook state (How could it? It's a JavaScript object that lives in a completely separate world from the document). A context is merely a pipeline (or, if you will, an accumulator of commands) of what actions to dispatch.
There is no real reason to hang on to a context object, except for using it as a way of creating two objects from the same context (e.g., range1.getIntersection(range2), since objects must be from the same context in order to interact). But beyond that, the context's life can (and generally should) be as short as possible. That's why in Excel.run we always create a new context for you, and dispose of it at the end.
On a related note, and for the same reasoning, it makes no sense to do an Excel.run and NOT use the context that it provides (or to use a different context, as you do in your example). You could just as easily run your code without an Excel.run; it gains you nothing to have it be in an Excel.run block if you're reusing an existing context (and note that you won't get the automatic object tracking that you would with a clean Excel.run).
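Instead, the pattern that gets you what you want is to copy the data out of the proxy objects into plain JavaScript variables inside a short-lived Excel.run, and write those values back when you want to restore. Below is a minimal sketch of that idea (it assumes the sheet is named Sheet1 and that writing the saved values back to the recorded address counts as a "restore"; a full implementation would also need to handle formatting, inserted rows, and so on):

var initialAddress, initialValues;

function saveInitialState() {
    return Excel.run(function (ctx) {
        var usedRange = ctx.workbook.worksheets.getItem('Sheet1').getUsedRange();
        usedRange.load(['address', 'values']);
        return ctx.sync().then(function () {
            // Copy the data out of the Range proxy into plain JS variables;
            // these survive after the context is disposed.
            initialAddress = usedRange.address;
            initialValues = usedRange.values;
        });
    });
}

function restoreInitialState() {
    return Excel.run(function (ctx) {
        // Write the snapshot back to the same cells it came from.
        var range = ctx.workbook.worksheets.getItem('Sheet1').getRange(initialAddress);
        range.values = initialValues;
        return ctx.sync();
    });
}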
Hope this helps!
~ Michael Zlatkovsky, developer on Office Extensibility team, MSFT
I am very new to Node.js and Socket.IO. Can this code lead to a race condition on the counter variable? Should I use a locking library to safely update the counter variable?
"use strict";
module.exports = function (opts) {
var module = {};
var io = opts.io;
var counter = 0;
io.on('connection', function (socket) {
socket.on("inc", function (msg) {
counter += 1;
});
socket.on("dec" , function (msg) {
counter -= 1;
});
});
return module;
};
No, there is no race condition here. JavaScript in node.js is single-threaded and event-driven, so only one socket.io event handler is ever executing at a time. This is one of the nice programming simplifications that comes from the single-threaded model. It runs a given thread of execution to completion, and then and only then does it grab the next event from the event queue and run it.
Hopefully you do realize that the same counter variable is accessed by all socket.io connections. While this isn't a race condition, it means that there's only one counter that all socket.io connections are capable of modifying.
If you wanted a per-connection counter (a separate counter for each connection), then you could define the counter variable inside the io.on('connection', ...) handler.
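For example, a sketch of that per-connection variant:

io.on('connection', function (socket) {
    var counter = 0; // a separate counter for each connected socket

    socket.on("inc", function (msg) {
        counter += 1;
    });
    socket.on("dec", function (msg) {
        counter -= 1;
    });
});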
The race conditions you do have to watch out for in node.js are when you make an async call and then continue the rest of your coding logic in the async callback. While the async operation is underway, other node.js code can run and can change publicly accessible variables you may be using. That is not the case in your counter example, but it does occur with lots of other types of node.js programming.
For example, this could be an issue:
var flag = false;

function doSomething() {
    // set flag indicating we are in a fs.readFile() operation
    flag = true;
    fs.readFile("somefile.txt", function(err, data) {
        // do something with data
        // clear flag
        flag = false;
    });
}
In this case, immediately after we call fs.readFile(), we return control back to node.js. It is free at that time to run other operations. If another operation runs this same code while the read is underway, it will tromp on the value of flag and we'd have a concurrency issue.
So, you have to be aware that any time you start an async operation and the rest of your logic continues in its callback, other code can run in the meantime and any shared variables can be accessed during that window. You either need to make a local copy of shared data or you need to provide appropriate protections for shared data.
In this particular case, the flag could be incremented and decremented rather than simply set to true or false, and it would probably serve the desired purpose of keeping track of whether this file is currently being read or not.
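A sketch of that counter-based variant:

var fs = require('fs');

var readCount = 0; // number of fs.readFile() operations currently in flight

function doSomething() {
    // increment instead of setting a boolean, so overlapping reads
    // don't clobber each other's state
    readCount++;
    fs.readFile("somefile.txt", function (err, data) {
        // do something with data
        readCount--; // this read is done
    });
}

// elsewhere: readCount > 0 means the file is still being read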
Shorter answer:
"Race condition" is when you execute a series of ordered asynchronous functions and because of their async nature they won't finish processing in their original order.
In your code, you are executing a series of ordered synchronous process (increasing or decreasing the counter), So they finish instantly after they start, resulting in ordered output. So no racing here!
I have written an event handler in an Excel Online add-in. It is activated by a button (activate); then, when a user clicks on another cell or range, the address is written to a textarea (myTextArea). The whole thing works.
However, once a new cell is selected, a green loading symbol is shown near the selection and WORKING... is shown at the bottom of Excel; it takes almost 0.5 seconds.
I am just surprised that it takes so long for such a simple action. Does anyone know if it is possible to make this event handler faster? Otherwise, is there any other means than event handling to make this seamless?
(function() {
    "use strict";

    Office.initialize = function(reason) {
        $(document).ready(function() {
            app.initialize();
            $('#activate').click(addSelectionChangedEventHandler);
        });
    };

    function addSelectionChangedEventHandler() {
        Office.context.document.addHandlerAsync(Office.EventType.DocumentSelectionChanged, MyHandler);
    }

    function MyHandler(eventArgs) {
        doStuffWithNewSelection();
    }

    function doStuffWithNewSelection() {
        Excel.run(function(ctx) {
            var selectedRange = ctx.workbook.getSelectedRange();
            selectedRange.load(["address"]);
            return ctx.sync().then(function() {
                write(selectedRange.address);
            });
        }).then(function() {
            console.log("done");
        }).catch(function(error) {
            ...
        });
    }

    function write(message) {
        document.getElementById("myTextarea").value = message;
    }
})();
What you're seeing is network lag. The selection-changed event -- once registered -- originates on the server and triggers the code in Office.js that fires your event handler. Your event handler, in turn, creates a local request for getting the selected Range object and its address, sends it over to the server as part of ctx.sync(), and then waits to hear back from the server before firing the .then.
There's not anything you can do to optimize this flow -- you will pay a pretty high per-transaction cost on Excel Online, and event handlers only add one extra step to that cost. On the other hand, the good news is that the Excel.run/ctx-based model does allow you to batch multiple requests into one, drastically reducing the number of roundtrips that would otherwise be required. That is, fetching the values of 10 different ranges is almost identical in speed to fetching just one; whereas it would be 10 times more expensive if each call were made individually.
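For example, here is a sketch of batching several reads into a single round trip (the range addresses are made up for illustration):

Excel.run(function (ctx) {
    var sheet = ctx.workbook.worksheets.getItem("Sheet1");

    // Queue up all the loads locally first...
    var r1 = sheet.getRange("A1:A10");
    var r2 = sheet.getRange("B1:B10");
    var r3 = sheet.getRange("C1:C10");
    r1.load("values");
    r2.load("values");
    r3.load("values");

    // ...then pay for only one round trip to the server.
    return ctx.sync().then(function () {
        console.log(r1.values, r2.values, r3.values);
    });
});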
Hope this helps!
~ Michael Zlatkovsky, developer on Office Extensibility team, MSFT
Note: My question is about the way of including/passing the dispatcher instance around, not about how the pattern is useful.
I am studying the Flux Architecture and I cannot get my head around the concept of the dispatcher (instance) potentially being included everywhere...
What if I want to trigger an Action from my Model Layer? It feels weird to me to include an instance of an object in my Model files... I feel like this is missing some injection pattern...
I have the impression that the exact PHP equivalent is something (that feels) horribly similar to:
<?php
$dispatcher = require '../dispatcher_instance.php';

class MyModel {
    ...
    public function someMethod() {
        ...
        $dispatcher->...
    }
}
I think my question is not exactly only related to the Flux Architecture but more to the NodeJS "way of doing things"/practices in general.
TLDR:
No, it is not bad practice to pass around the instance of the dispatcher in your stores
All data stores should have a reference to the dispatcher
The invoking/consuming code (in React, this is usually the view) should only have references to the action-creators, not the dispatcher
Your code doesn't quite align with React because you are creating a public mutable function on your data store.
The ONLY way to communicate with a store in Flux is via message passing which always flows through the dispatcher.
For example:
var Dispatcher = require('MyAppDispatcher');
var ExampleActions = require('ExampleActions');

var _data = 10;

var ExampleStore = assign({}, EventEmitter.prototype, {
    getData() {
        return _data;
    },

    emitChange() {
        this.emit('change');
    },

    dispatcherKey: Dispatcher.register(payload => {
        var {action} = payload;
        switch (action.type) {
            case ACTIONS.ADD_1:
                _data += 1;
                ExampleStore.emitChange();
                ExampleActions.doThatOtherThing();
                break;
        }
    })
});

module.exports = ExampleStore;
By closing over _data instead of having a data property directly on the store, you can enforce the message passing rule. It's a private member.
Also important to note, although you can call Dispatcher.emit() directly, it's not a good idea.
There are two main reasons to go through the action-creators:
Consistency - This is how your views and other consuming code interact with the stores
Easier Refactoring - If you ever remove the ADD_1 action from your app, this code will throw an exception rather than silently failing by sending a message that doesn't match any of the switch statements in any of the stores
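For reference, a matching action-creator might look like this (a sketch; it assumes the dispatcher exposes a dispatch() method, that the payload has the {action: {type}} shape the store above destructures, and that ACTIONS is the same constants object the store uses):

var Dispatcher = require('MyAppDispatcher');

var ExampleActions = {
    add1() {
        // All mutations flow through the dispatcher as messages.
        Dispatcher.dispatch({
            action: {type: ACTIONS.ADD_1}
        });
    },

    doThatOtherThing() {
        Dispatcher.dispatch({
            action: {type: ACTIONS.DO_THAT_OTHER_THING}
        });
    }
};

module.exports = ExampleActions;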
Main Advantages to this Approach
Loose coupling - Adding and removing features is a breeze. Stores can respond to any event in the system by adding one line of code.
Less complexity - One-way data flow makes wrapping your head around the data flow a lot easier. Fewer interdependencies.
Easier debugging - You can debug every change in your system with a few lines of code.
Debugging example:
var MyAppDispatcher = require('MyAppDispatcher');

MyAppDispatcher.register(payload => {
    console.debug(payload);
});
I have a number of event handlers in my page that were accessing global functions (functions defined in Script tags on the page). For instance:
<button id="ClearText" onclick="cleartb()">Clear Text Box</button>
That cleartb() function simply sits on the page:
<script>
    function cleartb() {
        vm.essayText('');
        return;
    }
</script>
Now, vm is my page's view model (but for this question, all that matters is that it was simply a global variable available to the entire page) and I use functions and values it exposes in several event handlers, alert messages, etc.
The problem is that I've moved the definition of vm into a RequireJS AMD module called vm.js:
define(["knockout", "jquery"], function (ko, $) {
    var essayText = 'Hello World!';
    ...
    return {
        essayText: essayText
    };
});
When my onclick event handler runs or I refer to vm in any manner, I get a "vm undefined" error (as expected).
Question 1:
How can I give my page access to the vm variable defined in an AMD module, especially if I don't want to "pollute" the global namespace? Is there a best practice here?
Question 2:
Ultimately, I don't even want cleartb() on the page, because it really is a view-model-specific operation. Although I think I can figure out what to do once I have the (an?) answer to Question 1, I would be interested to know how best to move the cleartb function into the vm AMD module so that I can still call it from my onclick event handler.
Note that I want values and function still to be exposed from a vm variable so that I can continue to use vm.cleartb() or inspect the value of vm.essayText() (it's a KO observable). (In other words, I don't want to solve the problem with a cleartb(vm) solution.)
Thank you for any help!
<script>
    function cleartb() {
        vm.essayText('');
        return;
    }
    alert(window.cleartb);
</script>
Actually, this approach already pollutes the global window object, so I think your first requirement doesn't quite make sense. Given that, you can do it this way:
define(["knockout", "jquery"], function (ko, $) {
    var essayText = 'Hello World!', varToBeExported;
    ...
    window.varToBeExported = {
        'cleartb': cleartb
    };
    return {
        essayText: essayText
    };
});
But if it isn't necessary, you should use the RequireJS way instead: require(['your module'], ...).
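That is, instead of an inline onclick attribute, wire the handler up inside a require callback (a sketch; it assumes your module is registered under the name 'vm' and that it exports cleartb):

require(['vm'], function (vm) {
    // vm stays out of the global namespace; it only lives in this closure.
    document.getElementById('ClearText').addEventListener('click', function () {
        vm.cleartb();
    });
});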
I'm attempting to load a store catalog into MongoDb (2.2.2) using Node.js (0.8.18) and Mongoose (3.5.4) -- all on Windows 7 64bit. The data set contains roughly 12,500 records. Each data record is a JSON string.
My latest attempt looks like this:
var fs = require('fs');
var odir = process.cwd() + '/file_data/output_data/';
var mongoose = require('mongoose');
var Catalog = require('./models').Catalog;
var conn = mongoose.connect('mongodb://127.0.0.1:27017/sc_store');

exports.main = function(callback){
    var catalogArray = fs.readFileSync(odir + 'pc-out.json','utf8').split('\n');
    var i = 0;
    Catalog.remove({}, function(err){
        while(i < catalogArray.length){
            new Catalog(JSON.parse(catalogArray[i])).save(function(err, doc){
                if(err){
                    console.log(err);
                } else {
                    i++;
                }
            });
            if(i === catalogArray.length - 1) return callback('database populated');
        }
    });
};
I have had a lot of problems trying to populate the database. Under previous scenarios (and this one), node pegs the processor and eventually runs out of memory. Note that in this scenario, I'm trying to allow Mongoose to save a record, and then iterate to the next record once the record saves.
But the iterator inside of the Mongoose save function never gets incremented. In addition, it never throws any errors. But if I put the iterator (i) outside of the asynchronous call to Mongoose, it will work, provided the number of records that I try to load are not too big (I have successfully loaded 2,000 this way).
So my questions are: Why isn't the iterator inside of the Mongoose save call ever incremented? And, more importantly, what is the best way to load a large data set into MongoDb using Mongoose?
Rob
i is your index to where you're pulling input data from in catalogArray, but you're also trying to use it to keep track of how many have been saved which isn't possible. Try tracking them separately like this:
var i = 0;
var saved = 0;
Catalog.remove({}, function(err){
    while(i < catalogArray.length){
        new Catalog(JSON.parse(catalogArray[i])).save(function(err, doc){
            saved++;
            if(err){
                console.log(err);
            } else {
                if(saved === catalogArray.length) {
                    return callback('database populated');
                }
            }
        });
        i++;
    }
});
UPDATE
If you want to add tighter flow control to the process, you can use the async module's forEachLimit function to limit the number of outstanding save operations to whatever you specify. For example, to limit it to one outstanding save at a time:
var async = require('async');

Catalog.remove({}, function(err){
    async.forEachLimit(catalogArray, 1, function (catalog, cb) {
        new Catalog(JSON.parse(catalog)).save(function (err, doc) {
            if (err) {
                console.log(err);
            }
            cb(err);
        });
    }, function (err) {
        callback('database populated');
    });
});
Rob,
The short answer:
You created an infinite loop. You're thinking synchronously and with blocking, but JavaScript operates asynchronously and without blocking. What you are trying to do is like trying to directly turn the feeling of hunger into a sandwich. You can't. The closest thing is to use the feeling of hunger to motivate you to go to the kitchen and make one. Don't try to make JavaScript block. It won't work. Instead, learn async.forEachLimit. It will do what you want here.
You should probably review asynchronous design patterns and understand what it means on a deeper level. Callbacks are not simply an alternative to return values. They are fundamentally different in how and when they are executed. Here is a good primer: http://cs.brown.edu/courses/csci1680/f12/handouts/async.pdf
The long answer:
There is an underlying problem here, and that is your lack of understanding of what non-blocking IO and asynchronous mean. I'm not sure if you are breaking into node development or this is just a one-off project, but if you do plan to continue using node (or any asynchronous language), then it is worth the time to understand the difference between synchronous and asynchronous design patterns, and the motivations for them. That is why putting the loop counter's increment inside an asynchronous callback is a logic error that creates an infinite loop.
In non-computer-science terms, that means your increment to i will never occur. The reason is that JavaScript executes a single block of code to completion before any asynchronous callbacks are called. So in your code, your loop runs over and over without i ever incrementing, and in the background you are storing the same document in mongo over and over. Each iteration of the loop starts sending the document at index 0 to mongo; the callbacks can't fire until your loop ends and all other code outside the loop runs to completion. So the callbacks queue up. But your loop runs again, since i++ is never executed (remember, callbacks are queued until your code finishes), inserting record 0 again and queueing yet another callback to execute AFTER your loop is complete. This goes on and on until your memory is filled with callbacks waiting to inform your infinite loop that document 0 has been inserted millions of times.
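You can see the same effect with nothing but a timer (a sketch):

var done = false;

setTimeout(function () {
    done = true; // queued, but it can never run...
}, 0);

while (!done) {
    // ...because this loop never yields control back to the event loop,
    // so the queued callback never gets a chance to execute
}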
In general, there is no way to make JavaScript block without doing something really, really bad. For example, something tantamount to setting your kitchen on fire to fry some eggs for that sandwich I talked about in the "short answer".
My advice is to take advantage of libs like async: https://github.com/caolan/async. JohnnyHK mentioned it here, and he was right to do so.