There is a web application that does not have a database of its own, but communicates with a remote one through an API. An API call takes some time, so we make it asynchronously. From the user's point of view, the application must remain highly responsive.
Let's assume the user is changing some data. To store the data we need to make an API call, and we start showing the new data right after making it. But what can we do if the call fails? We need to restore the old values and show some kind of warning to the user. But the user may have already left the page where the data were changed and be looking at a completely different page. What are the general patterns for handling such situations?
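For illustration, here is a minimal sketch of that flow (render, api.save, and showWarning are hypothetical placeholders, not from any particular library):
// Hypothetical sketch: optimistic update with rollback on failure.
async function saveOptimistically(newData, oldData) {
    render(newData);              // show the new data immediately
    try {
        await api.save(newData);  // the asynchronous API call
    } catch (err) {
        render(oldData);          // the call failed: restore the old values
        showWarning('Saving failed; your changes were reverted.');
    }
}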
If you are using .NET 4.5 you can do this with async/await. If the web client that you are calling provides an asynchronous API that returns a Task, you can simply await the call inside a try/catch block. The await causes the method to return immediately, so the user will continue to see the old data while the call is executing. Once the web client call completes, the method "resumes" after the await and you can update your data.
If the web client call throws an exception, the method resumes in the catch block and you can display an error to the user.
public async Task CallAPI()
{
    try
    {
        var client = ...
        await client.CallAPI();
    }
    catch (Exception ex)
    {
        // show warning message
    }
}
If your web client does not provide an asynchronous API you can achieve the equivalent with the Task Parallel Library.
public void CallAPI1()
{
    Task.Factory.StartNew(() =>
    {
        var client = ...
        client.CallAPI();
    }).ContinueWith(t =>
    {
        if (t.Exception != null)
        {
            // display error
        }
        else
        {
            // update web page with the new data
        }
    },
    CancellationToken.None,
    TaskContinuationOptions.None,
    TaskScheduler.FromCurrentSynchronizationContext());
}
I want to understand what kind of Firestore database is installed on my box.
The code is running on Node.js 9.
If I disconnect from the internet for X minutes and then reconnect, I can see all the cached transactions going to Firestore (adds, updates, deletes).
If I add the firebase.firestore().enablePersistence() line after firebase.initializeApp(fbconfig), I get this error:
Error enabling offline persistence. Falling back to persistence
disabled: FirebaseError: [code=unimplemented]: This platform is either
missing IndexedDB or is known to have an incomplete implementation.
Offline persistence has been disabled.
Now, my question is: if I don't have persistence enabled, or can't have it, how come internal transactions still go on while my device is disconnected from the internet? Am I really interpreting what I see correctly?
To me, the fact that the console.log() inside the then() of batch.commit or transaction.update does not appear right away (only when the internet comes back) suggests that I have some kind of internal database persistence, don't you think?
Thanks in advance for your help.
UPDATE
When sendUpdate is called, it looks like batch.commit is executed, because I can see something going on in listenMyDocs(), but the console.log "Commit successfully!" is not shown until the internet comes back:
function sendUpdate(response) {
    const db = firebase.firestore();
    let batch = db.batch();
    let ref = db.collection('my-collection')
        .doc('my-doc')
        .collection('my-doc-collection')
        .doc('my-new-doc');
    batch.update(ref, { "variable": response.state });
    batch.commit().then(() => {
        console.log("Commit successfully!");
    }).catch((error) => {
        console.error("Commit error: ", error);
    });
}
function listenMyDocs() {
    const firebase = connector.getFirebase()
    const db = firebase.firestore()
        .collection('my-collection')
        .doc('my-doc')
        .collection('my-doc-collection');
    const query = db.where('var1', '==', "true")
        .where('var2', '==', false);
    query.onSnapshot(snapshot => {
        snapshot.docChanges().forEach(change => {
            if (change.type === 'added') {
                console.log('ADDED');
            }
            if (change.type === 'modified') {
                console.log('MODIFIED');
            }
            if (change.type === 'removed') {
                console.log('DELETED');
            }
        });
    });
}
the console.log "Commit successfully!" is not shown until the internet comes back
This is the expected behavior. Completion listeners fire once the data is committed on the server.
Local events may fire before completion, to allow your UI to update optimistically. If the server changes the outcome that the client raised events for (for example, if the server rejects a write), the client will fire reconciliatory events (so if an add was rejected, it will fire a change.type === 'removed' event once that is detected).
I am not entirely sure if this applies to batch updates though, and it might be tricky to test that from a Node.js script as those usually bypass the security rules.
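If you want to tell these local (latency-compensated) events apart from server-confirmed ones, one rough sketch, assuming a reasonably recent JS SDK (the includeMetadataChanges option name has varied across SDK versions), is to check the snapshot metadata's hasPendingWrites flag:
// Sketch: distinguish local echoes from server-acknowledged writes.
// includeMetadataChanges asks for extra events when only the metadata
// (such as the pending-write state) changes.
query.onSnapshot({ includeMetadataChanges: true }, snapshot => {
    snapshot.docChanges().forEach(change => {
        const source = change.doc.metadata.hasPendingWrites ? 'local' : 'server';
        console.log(change.type + ' (' + source + ')');
    });
});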
We are using a bot built with the Microsoft Bot Framework, written in Node.js. During the execution flow of a dialog, we present the user with certain information, then some server-side processing is done via SOAP, and the result of this SOAP call is needed before the next waterfall step starts.
In short, we have the below piece of code:
bot.dialog('changedefaultlogingroupDialog', [
    async function (session, args, next) {
        wargs[0] = 'change default login group';
        var sourceFile = require('./fetchSharePointUserDetail.js');
        session.privateConversationData.userSharepointEmail = global.DEVSharepointBotRequestorEmailID;
        console.log('\nsession.privateConversationData.userSharepointEmail:' + session.privateConversationData.userSharepointEmail);
        var get_SharepointUserId_args = ['get Sharepoint user id', session.privateConversationData.userSharepointEmail];
        sourceFile.login(get_SharepointUserId_args);
        setTimeout(() => {
            global.DEVSharepointTeamcenterUserID = require('./fetchSharePointUserDetail.js').DEVTeamcenterUserId;
            console.log('\nglobal.DEVSharepointTeamcenterUserID:' + global.DEVSharepointTeamcenterUserID + '\n');
            console.log("Request has been made from directline channel by user id <" + global.DEVSharepointTeamcenterUserID + ">");
            session.privateConversationData.requestor_id = global.DEVSharepointTeamcenterUserID;
            session.privateConversationData.create_ques = session.message.text;
            next();
        }, 3000);
    },
    async function (session, result, next) {
        // Do processing here that depends on session.privateConversationData.requestor_id
    }
]);
As you can see from the example above, the setTimeout call waits 3 seconds for the SOAP response to be retrieved. While this worked in our DEV landscape, it failed in our PRD landscape. So I wanted to know the more appropriate way of doing this: is await a correct fit in this context? I am asking because this is in the Bot Framework context and I am not sure whether that has any side effects.
Please suggest.
Thanks,
Pavan.
Await is the correct way to look at this.
I'm not familiar with the Bot Framework, but I'm guessing that the asynchronous part of your code happens during the login, so
await sourceFile.login(get_SharepointUserId_args);
would be where the asynchronous call is. It could also be in fetchSharePointUserDetail.js.
There is also likely a better way to load that file as a module, so that you are calling functions on a returned object rather than reading variables back from code that executes as a side effect of require().
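For example, a rough sketch of the first waterfall step, assuming sourceFile.login() can be changed to return a Promise that resolves with the user id (the original code reads it back from a global instead):
async function (session, args, next) {
    var sourceFile = require('./fetchSharePointUserDetail.js');
    var get_SharepointUserId_args = ['get Sharepoint user id',
        session.privateConversationData.userSharepointEmail];
    try {
        // Assumption: login() returns a Promise resolving to the user id.
        const userId = await sourceFile.login(get_SharepointUserId_args);
        session.privateConversationData.requestor_id = userId;
        session.privateConversationData.create_ques = session.message.text;
        next(); // only advance the waterfall once the SOAP result is in hand
    } catch (err) {
        console.error('SOAP login failed:', err);
        session.endDialog('Sorry, something went wrong.');
    }
},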
I'm using it simply, like below:
class Bot {
    constructor(token) {
        let _baseApiURL = `https://api.telegram.org`;
        //code here
    }

    getAPI(apiName) {
        return axios.get(`${this.getApiURL()}/${apiName}`);
    }

    getApiURL() {
        return `${this.getBaseApiUrl()}/bot${this.getToken()}`;
    }

    getUpdates(fn) {
        this.getAPI('getUpdates')
            .then(res => {
                this.storeUpdates(res.data);
                fn(res.data);
                setTimeout(() => {
                    this.getUpdates(fn);
                }, 1000);
            })
            .catch(err => {
                console.log('::: ERROR :::', err);
            });
    }
}

const bot = new Bot('mytoken');
bot.getUpdates(updates => {
    // handle updates
});
I'd like to know whether there is a better way to listen for Telegram updates, instead of using a timeout to redo the Ajax call to the getUpdates API.
Telegram supports polling or webhooks, so you can use the latter to avoid polling the getUpdates API:
Getting updates
There are two mutually exclusive ways of receiving updates for your
bot — the getUpdates method on one hand and Webhooks on the other.
Incoming updates are stored on the server until the bot receives them
either way, but they will not be kept longer than 24 hours.
Regardless of which option you choose, you will receive JSON-serialized Update objects as a result.
More info on: https://core.telegram.org/bots/api#getting-updates
You can use telegraf to easily set up a webhook, or to handle the polling for you, with a great API.
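For instance, a minimal webhook setup with telegraf might look like the sketch below (the domain and port are placeholders, and option names can vary between telegraf versions):
const { Telegraf } = require('telegraf');

const bot = new Telegraf('mytoken');
bot.on('message', ctx => ctx.reply('Got your update!'));

// Telegram pushes updates to your HTTPS endpoint; no getUpdates polling.
bot.launch({
    webhook: {
        domain: 'https://example.com', // placeholder: your public HTTPS URL
        port: 3000
    }
});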
I have been trying to use a service worker within an IIS-hosted web site that caches some of the site's static content. The site is an internal application that uses Windows Authentication. I have been able to register and run a service worker without too much hassle, but as soon as I open the caches and start adding files to the cache, the promise fails with an authorization failure: the returned HTTP result is 401 Unauthorized. This is the usual response for the first few requests, until the browser and the server are able to negotiate the authorization.
I will post some code soon that should help with the explanation.
EDIT
var staticCacheName = 'app-static-v1';

console.log("I AM ALIVE");

this.addEventListener('install', function (event) {
    console.log("AND I INSTALLED!!!!");
    var urlsToCache = [
        //...many js files to cache
        '/scripts/numeral.min.js?version=2.2.0',
        '/scripts/require.js',
        '/scripts/text.js?version=2.2.0',
        '/scripts/toastr.min.js?version=2.2.0',
    ];
    event.waitUntil(
        caches.open(staticCacheName).then(function (cache) {
            return cache.addAll(urlsToCache);
        }).catch(function (error) {
            console.log(error);
        })
    );
});
This is just a guess, given the lack of code, but if you're doing something like:
caches.open('my-cache').then(cache => {
    return cache.add('page1.html'); // Or cache.addAll(['page1.html', 'page2.html']);
});
you're taking advantage of the implicit Request object creation (see section 6.4.4.4.1) that happens when you pass in a string to cache.add()/cache.addAll(). The Request object that's created uses the default credentials mode, which is 'omit'.
What you can do instead is explicitly construct a Request object containing the credentials mode you'd prefer, which in your case would likely be 'same-origin':
caches.open('my-cache').then(cache => {
    return cache.add(new Request('page1.html', {credentials: 'same-origin'}));
});
If you have a bunch of URLs that you're passing as an array to cache.addAll(), you can .map() them to a corresponding array of Requests:
var urls = ['page1.html', 'page2.html'];

caches.open('my-cache').then(cache => {
    return cache.addAll(urls.map(url => new Request(url, {credentials: 'same-origin'})));
});
I have a simple app I'm building using Play + AngularJS that requires authentication before most routes can be accessed. The login flow includes a "remember me" feature that stores a session ID into the browser's local storage; the ID gets mapped to a valid authorized database session entry on the server side any time a user returns to the app.
The problem I'm having is that I do the session checking (extract cookie & compare against server) in the run() function of the module:
.run(function ($rootScope, $http, $cookieStore, $location) {
    // <snip>

    // check if there is already a session?
    var sessionId = window.localStorage["session.id"];
    if (sessionId == null) {
        sessionId = $cookieStore.get("session.id");
    }
    if (sessionId != null) {
        $http.get("/sessions/" + sessionId)
            .success(function (data) {
                $http.defaults.headers.common['X-Session-ID'] = data.id;
                $cookieStore.put("session.id", data.id);
                $rootScope.user = data.user;
            })
            .error(function () {
                // remove the cookie, since it's dead
                $cookieStore.remove("session.id");
                window.localStorage.removeItem("session.id");
                $location.path("/login");
            });
    } else {
        if ($location.path() != "/login" && $location.path() != "/signup") {
            $location.path("/login");
        }
    }
});
The problem is that this function executes an AJAX call, and I don't know whether the session is valid until it completes. However, the controller that loads (via the route selected by $routeProvider) can fire off another AJAX call that often kicks off before the first one finishes, resulting in a race condition and the initial request getting a 401 response code.
So my question is: how can I force run() (with its associated $http call) to complete before any other part of the app runs? I have tried using $q/promises here and it doesn't seem to make a difference (perhaps run functions don't honor promises). I've been advised to use the resolve feature in $routeProvider, but I don't know exactly what to do, and I'm not super excited about having to put that in for every route anyway.
I assume this is a pretty common use case and it gets solved every day. Hopefully someone can give me some direction with my code, or share their approaches for "remember me" and AngularJS.
You need to manually bootstrap your app after you get the session from the server. It's easy if you use jQuery, for example; even without jQuery you can use an injector to access $http before bootstrapping:
$.get(server, function () {
    // success: set a variable with the session result
}).fail(function () {
    // failed :( redirect to login, or set the session to false/null, etc.
}).always(function () {
    // always bootstrap, in both cases, and register the result,
    // e.g. angular.module('app').constant('session', sessionResult);
});
I'm on my phone right now, but this should give you the idea.
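To make that concrete, here is a minimal sketch without jQuery, using a throwaway injector to get $http before bootstrapping (it assumes your app module is named 'app', that ng-app has been removed from the HTML, and it reuses the /sessions/ endpoint from the question):
angular.element(document).ready(function () {
    // Throwaway injector: provides $http before the app is bootstrapped.
    var $http = angular.injector(['ng']).get('$http');
    var sessionId = window.localStorage['session.id'];

    $http.get('/sessions/' + sessionId)
        .then(function (response) {
            angular.module('app').constant('session', response.data);
        }, function () {
            angular.module('app').constant('session', null);
        })
        .finally(function () {
            // Bootstrap manually instead of via the ng-app directive.
            angular.bootstrap(document, ['app']);
        });
});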