I have 3 instances of a Node.js worker role running on Windows Azure. I'm trying to maintain sessions between all the instances.
Azure Queue seems like the recommended approach, but how do I ensure all the instances receive the session, as the queue deletes the session once a single instance has de-queued it?
Azure Table storage isn't really suitable for my application, as the sessions are too frequent and don't need to be stored for more than 10 seconds.
A queue isn't a good mechanism for session state; it's for message-passing. Once one instance reads a queue message, it becomes invisible to the other instances while that instance processes it. Also: what would you do with the message when done with it? Update it and then make it visible again? The issue is that you cannot choose which "session" to read. It's an almost-FIFO queue (messages that aren't processed properly can reappear), not a key/value store.
To create an accessible session repository, you can take advantage of Azure's in-role (or dedicated role) caching, which is a distributed cache across your role instances. You can use Table Storage too - just simple key/value type of reads/writes. And Table Storage is included in the node.js Azure SDK.
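If you did go the Table Storage route, the reads and writes really are just key/value operations. A minimal sketch in TypeScript using the current @azure/data-tables package (the table name, connection-string variable, and payload shape are placeholders, and the table is assumed to already exist):

```typescript
import { TableClient } from "@azure/data-tables";

// Assumes a storage connection string is available in an environment variable.
const sessions = TableClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING!,
  "sessions"
);

export async function writeSession(sessionId: string, data: object): Promise<void> {
  // PartitionKey/RowKey act as the "key"; the JSON payload is the "value".
  await sessions.upsertEntity({
    partitionKey: "session",
    rowKey: sessionId,
    payload: JSON.stringify(data),
  });
}

export async function readSession(sessionId: string): Promise<object | null> {
  try {
    const entity = await sessions.getEntity<{ payload: string }>("session", sessionId);
    return JSON.parse(entity.payload);
  } catch {
    return null; // not found (expiry/cleanup is up to your own logic)
  }
}
```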
That said: let's go the cache route here. Since your sessions are short-lived, and (I'm guessing) don't take up too much memory, you can start with an in-role cache (the cache shares the worker role RAM with your node code, taking a percentage of memory). The cache is also memcache-compatible, which is easy to access from a node application.
If you take a look at this answer, I show where caching is accessed. You'll need to set up the cache this way, but also set up the memcache server gateway by adding an internal endpoint called memcache_default. Then, point your memcache client class to the internal endpoint. Done.
The full instructions (and details around the memcache gateway vs. client shim, which you'd use when setting up a dedicated cache role) are here. You'll see that the instructions are slightly different if using a dedicated cache, as it's then recommended to use a client shim in your node app's worker role.
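As a rough illustration (not the full setup): once the memcache_default internal endpoint is in place, the node side can be as simple as pointing a standard memcache client at it. A sketch using the memcached npm package - the endpoint address, key names, and 10-second lifetime are placeholders you'd resolve from your own role environment:

```typescript
import Memcached from "memcached"; // plus @types/memcached for TypeScript

// Hypothetical address of the memcache_default internal endpoint;
// in a real worker role you would resolve the host:port at runtime.
const cache = new Memcached("127.0.0.1:11211");

const sessionId = "abc123";
const sessionData = { userId: 42, lastSeen: Date.now() };

// Store the session for 10 seconds (matching the short lifetime in the question).
cache.set(`session:${sessionId}`, JSON.stringify(sessionData), 10, (err) => {
  if (err) console.error("cache write failed", err);
});

// Any of the three instances can read it back while it's still alive.
cache.get(`session:${sessionId}`, (err, value) => {
  if (!err && value) {
    const session = JSON.parse(value as string);
    console.log("session", session);
  }
});
```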
We currently use Redis to store sessions for one application running on Azure. We use it so that when Azure scales the application, the session is not stored locally in the application itself. We are in the process of moving another application to Azure as well, and I'd like to know whether we can use that same Redis instance, or whether having two applications storing sessions in the same place might cause issues.
I wouldn't use different keys on the same Redis server and database. Your two web apps become conjoined twins: if one web app acts up, it can tank the other one.
If you're deploying the same web app code to two or more web apps, you could use the same Redis server but different databases within that server. An Azure Redis Cache instance is a single server with 16 databases. However, databases within the same server share the service limits.
Or, the absolute best thing to do is to firewall the two web apps from each other by using two different Azure Redis Cache services.
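If you do go the shared-server route, the database split is just a connection option. A minimal sketch with the ioredis client - the host name and access key are placeholders:

```typescript
import Redis from "ioredis";

// Same Azure Redis Cache server, two logical databases (Azure Redis exposes 0-15).
const appASessions = new Redis({
  host: "mycache.redis.cache.windows.net", // placeholder
  port: 6380,
  password: "<access-key>",
  tls: {},  // Azure Redis requires TLS on port 6380
  db: 0,    // first web app
});

const appBSessions = new Redis({
  host: "mycache.redis.cache.windows.net",
  port: 6380,
  password: "<access-key>",
  tls: {},
  db: 1,    // second web app
});
```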
It depends on your key. If you use a key that is only the user name, then both applications read the same session state. You could avoid that by creating a composite key with the application name as part of the key.
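A sketch of that composite-key idea (TypeScript with ioredis; the key layout and one-hour TTL are just illustrative):

```typescript
import Redis from "ioredis";

const redis = new Redis(); // connection details omitted

// Prefixing the key with the application name keeps the two apps'
// sessions from colliding even on the same server and database.
export async function saveSession(appName: string, userName: string, session: object): Promise<void> {
  await redis.set(`${appName}:session:${userName}`, JSON.stringify(session), "EX", 3600);
}

export async function loadSession(appName: string, userName: string): Promise<object | null> {
  const raw = await redis.get(`${appName}:session:${userName}`);
  return raw ? JSON.parse(raw) : null;
}
```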
I would use separate instances:
ensuring unique key names across applications adds complexity and risk
high load on one application can impact other applications
if the Redis instance goes down, all your applications fail
I am trying to design my Service Fabric service, which makes a SOAP call to an external service, so that it still works when deployed over 2 or more clusters: if the service in one cluster has made the connection to the external service, the service in the other cluster shouldn't try to make the connection, and vice versa.
I can't think of a better way to design this without storing the state in a database, which introduces a host of issues such as locking, race conditions, etc. What are some designs that fit this scenario? Any suggestions would be highly appreciated.
There is no way to do that out of the box on Service Fabric.
You have to find an approach to orchestrate these calls between clusters/services. You could:
Create a service in one of the clusters to delegate the calls to the other services, and store the info about connections in a single service.
Put a message in a queue and have each service take one message to open a connection (this can be one variation of the approach above).
Store every active call in a shared cache (Redis). Before you attempt to make the call, check whether the connection is already active somewhere; when the connection closes, remove it from the cache so other services are able to open the connection. Also enable expiration so the entry is cleared in case of service failure (see the sketch after this list).
Store the state in a database, as you suggested.
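For the shared-cache option, the usual trick is an atomic SET with NX (only set if the key doesn't exist yet) plus an expiry, so whichever cluster registers the connection first wins and a crashed service can't hold it forever. A rough sketch with ioredis - the key name, owner id, and TTL are assumptions:

```typescript
import Redis from "ioredis";

const redis = new Redis(); // shared Redis reachable from both clusters; connection details omitted

// Returns true only for the one service that managed to register the connection.
export async function tryAcquireSoapConnection(ownerId: string, ttlSeconds = 60): Promise<boolean> {
  const result = await redis.set("external-soap-connection", ownerId, "EX", ttlSeconds, "NX");
  return result === "OK";
}

// Call this periodically while the connection is open to keep the entry from expiring.
export async function renewSoapConnection(ttlSeconds = 60): Promise<void> {
  await redis.expire("external-soap-connection", ttlSeconds);
}

// Call this when the SOAP connection closes so another cluster can take over.
export async function releaseSoapConnection(): Promise<void> {
  await redis.del("external-soap-connection");
}
```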
I have an MVC application deployed to an Azure Web App. The web app needs to scale out to multiple instances.
I want to use the ASP.NET Session object to store some lightweight user data so that it can be retrieved quickly. I believe the session will be in-proc with the ARR affinity setting ON.
I have the following questions:
Is it OK to use the Session object in Azure Web Apps? Is the same in-proc session guaranteed when ARR affinity is on?
If ARR affinity is turned off, should I use the Session object at all?
Since using Session itself makes the application slow, what are the other alternatives for storing small data (e.g. user profile data, once authenticated) within an Azure Web App/MVC application for quick access?
Using in-proc sessions in the cloud is a strict no. The reason to host in the cloud is to have high availability, which is achieved by having a distributed environment.
To answer your question: the ARR affinity cookie will affinitize a client's requests to a specific instance. However, if the process restarts or the app domain recycles, all the sessions will be lost. This is one of the primary reasons why out-of-proc session state management is suggested.
I would recommend against using in-proc session state in any cloud scenario. I understand speed is a concern for you; for this, consider using Redis Cache. Refer to the documentation here: https://learn.microsoft.com/en-us/azure/redis-cache/cache-aspnet-session-state-provider
HTH
I am planning to have multiple Azure Mobile Service instances, so the first requirement I have is to share the access token of an authenticated user across the different app instances. I found this article https://cgillum.tech/2016/03/07/app-service-token-store/ that states that right now we cannot share the tokens, as they are stored locally on the machine, and placing them in blob storage is not recommended for production apps. What possible solutions do I have at this time?
I have read the blog you mentioned about the App Service Token Store. As it says about where the tokens live:
Internally, all these tokens are stored in your app’s local file storage under D:/home/data/.auth/tokens. The tokens themselves are all encrypted in user-specific .json files using app-specific encryption keys and cryptographically signed as per best practice.
I found this article https://cgillum.tech/2016/03/07/app-service-token-store/ that states that right now we can not share the tokens as it is stored locally on machine.
As Azure-runtime-environment states about the persisted files that an Azure Web App can deal with:
They are rooted in d:\home, which can also be found using the %HOME% environment variable.
These files are persistent, meaning that you can rely on them staying there until you do something to change them. Also, they are shared between all instances of your site (when you scale it up to multiple instances). Internally, the way this works is that they are stored in Azure Storage instead of living on the local file system.
Moreover, Azure App Service enables ARR affinity to keep a client's subsequent requests talking to the same instance. You could disable the session affinity cookie, and then requests would be distributed across all the instances. For more details, you could refer to this blog.
Additionally, I have tried disabling ARR affinity and scaling my mobile service to multiple instances, and I could always browse https://[my-website].azurewebsites.net/.auth/me to retrieve information about the currently logged-in user.
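For reference, this is roughly what that check looks like from code rather than the browser. A sketch only - the site URL is the same placeholder as above, and the X-ZUMO-AUTH header carries the Easy Auth session token a mobile client already holds:

```typescript
// Query Easy Auth's /.auth/me endpoint for the logged-in user's identity and tokens.
export async function getCurrentUser(siteUrl: string, authToken: string): Promise<unknown> {
  const response = await fetch(`${siteUrl}/.auth/me`, {
    headers: { "X-ZUMO-AUTH": authToken }, // in a browser the auth cookie is sent instead
  });
  if (!response.ok) {
    throw new Error(`/.auth/me failed with status ${response.status}`);
  }
  return response.json();
}
```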
Per my understanding, you could implement authentication/authorization yourself by using auth middleware in your app, but this requires more work. Since the platform takes care of it for you, I assume that you could leverage Easy Auth and the Token Store and scale your mobile service to multiple instances without worrying about anything.
What is the best way to refresh the data in all my Azure instances?
I have several instances each running the same web role. On role startup data is read from blob storage into local store. Intermittently there will be a new update to data used by the site. I want to be able to notify each instance programmatically to get the updated data from blob storage into the instance's local storage so that each instance becomes refreshed.
One idea is to write a custom client program to call a web service on the web role, passing in a role ID to update. If the instance that receives the request matches the role ID, it refreshes itself; if not, the client tries again until all instances report that they are refreshed. Is this a good approach, or is there a built-in method in Azure for doing this?
I have considered a separate thread which intermittently checks for a refresh flag, though I'm worried my instances will become out of sync.
There is not a huge amount of data so I could put it in the Azure cache. However, I am concerned about the network latency with using the cache. Does the Azure cache handle this well?
Another idea is to just reboot the instances one after another (with the refresh operation being performed on the role start up).
I think one possible way you could do this is to use a value (e.g. a timestamp) in a configuration setting - you can then programmatically update the configuration and use the RoleEnvironment.Changing event to monitor for the change on all your instances - http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.changing.aspx
If you do this, make sure you intercept the event in all your roles - and make sure you parse the changes (looking for Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironmentConfigurationSettingChange) and leave the Cancel parameter set to false to prevent your instances from being rebooted.
Adding to Stuart's answer, let me address your proposed techniques:
I don't think the client-calling-web-service technique is practical, as you've now added a web service as well as a client driver to call it, and you have no ability to contact individual web role instances - the load balancer hides the individual instances from you.
You can store data in the cache, but the only way to update your data "now" is to expire items in the cache. If you have all of your data in a single cache item with a well-known key, then it's easy to expire. If it's across multiple keys, then you have a more complex task, and you won't be able to expire the items atomically (unless you clear the entire cache). Plus, cache expires on its own anyway - you'd have to deal with re-loading if that occurs.
You can use a background thread on your role instances, that looks for a blob (maybe with a single zip file), and copy the zip down to local storage when its id changes (you can store a unique id in the blob's metadata, for instance). Then you could just check this periodically (maybe every minute?).
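A sketch of that polling loop, if your role happens to be node-based - written against the current @azure/storage-blob package, with the container, blob, and metadata names as placeholders:

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

const contentBlob = BlobServiceClient
  .fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING!) // placeholder connection string
  .getContainerClient("site-data")
  .getBlobClient("content.zip");

let lastSeenId: string | undefined;

// Check the blob's metadata once a minute; only re-download when the
// publisher has bumped the "contentid" metadata value.
setInterval(async () => {
  const props = await contentBlob.getProperties();
  const currentId = props.metadata?.contentid;
  if (currentId && currentId !== lastSeenId) {
    await contentBlob.downloadToFile("./local-store/content.zip"); // Node.js-only helper
    lastSeenId = currentId;
    // ...unzip and swap the refreshed content into the local store here...
  }
}, 60_000);
```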
The reboot idea has a good side and a bad side. On the good side, you're going to avoid side-effects caused by changing local content while your role instance is still running. The bad side is that the role instance will be offline for a few minutes during reboot.
Stuart's suggestion of using a configuration setting is a good one. Just be sure you can update your files without breaking your app. If this cannot be done safely, don't handle the Changing event - just let the role instance recycle.