Debugging a Web application using Azure Caching - azure

I would like to find a best practice for debugging an existing ASP.NET MVC application that is a Web Role already hosted in Azure.
The application uses Windows Azure Caching. The configuration file has been defined with the settings of an Azure account:
<dataCacheClient name="default">
  <hosts>
    <host name="xxxx.cache.windows.net" cachePort="22233" />
  </hosts>
</dataCacheClient>
I would like to debug the code. What's the best approach in this case?
I have already tried changing the host to localhost, but it doesn't work.
Thank you,
PS: I have installed the new SDK 1.8

There is no locally-installed equivalent to Azure Shared Caching. Windows Server AppFabric Caching is somewhat close, but not exactly the same.
You could try to get IT to unblock the ports so you can use Azure. Though if you have multiple developers on a project, each dev would need their own cache instance to avoid stepping on each other's data.
Another option is to completely encapsulate your caching in interfaces. Then you can use something completely different to develop on. In the past I've used the MemoryCache in-memory store for development. You could also use AppFabric Caching, or memcached, or really anything else. You just need to be aware of the differences between your dev and production systems.
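As a minimal sketch of that encapsulation idea: the interface and class names below (`ICacheProvider`, `InMemoryCacheProvider`) are illustrative, not from any Azure SDK. In development you register the in-memory implementation; in production you would swap in one backed by the Azure cache client.

```csharp
using System;
using System.Runtime.Caching;

// Hypothetical cache abstraction; names are illustrative.
public interface ICacheProvider
{
    T Get<T>(string key) where T : class;
    void Set(string key, object value, TimeSpan ttl);
    void Remove(string key);
}

// Development implementation backed by System.Runtime.Caching.MemoryCache.
public class InMemoryCacheProvider : ICacheProvider
{
    private readonly MemoryCache _cache = MemoryCache.Default;

    public T Get<T>(string key) where T : class
    {
        return _cache.Get(key) as T;
    }

    public void Set(string key, object value, TimeSpan ttl)
    {
        // Absolute expiration relative to now.
        _cache.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
    }

    public void Remove(string key)
    {
        _cache.Remove(key);
    }
}
```

The rest of the application only ever sees `ICacheProvider`, so switching between the dev and production cache is a single binding change in your IoC container.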
Edit: Another option would be switching from Shared Caching to caching in your roles (I'm not sure what the official name for this is, these days). I believe this works locally too. The major drawback is that it's only visible within one Hosted Service. If you only have one Hosted Service anyway, that's no problem. If you have several Hosted Services that need to share data, then it could be an issue.

Related

Sitecore DMS + Azure. Why the two don't mix?

I am exploring the idea of hosting my CD environment in Windows Azure. I read that the current release of the DMS does not play ball in the cloud, however, no detailed explanation was given. Apparently Azure support is planned for the second quarter of 2013, but in the meantime, I'd like to know why it doesn't work so that I can explore potential workarounds.
For instance, is the issue related to sticky sessions (or lack thereof)? Or, is it related to the DMS compatibility with SQL Azure?
It will be an issue with sticky sessions. Since the DMS does all its work server-side, it needs proper session state management to work. You could do this on Azure using IaaS, but then you would be responsible for installing and maintaining the Sitecore deployment on the OS rather than using the built-in deployment features.
See this post by Jakob Leander for more info: http://www.sitecore.net/Community/Technical-Blogs/Jakob-Leander/Posts/2013/01/Why-we-love-Sitecore-on-Azure.aspx

A little confused about Azure

I've been reading about Azure's storage system, and worker roles and web roles.
Do you HAVE to develop an application specifically for Azure with this? It looks like you can remote desktop into Azure and set up an application in IIS like you normally can on a Windows server, right? I'm a little confused because the docs read as though you need to develop an Azure-specific application.
Looking to move to the cloud, but I don't want to have to rework my application for it.
Thanks for any clarification.
Changes to the ASP.NET application are minimal (for the most part the web application will just work in Azure)
But you don't remote connect to deploy. You actually build a package (zip) with a manifest (xml) which has information about how to deploy your app, and you give it to Azure. In turn, Azure will take care of allocating servers and deploying your app.
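To give a feel for that manifest, here is a sketch of a minimal `ServiceDefinition.csdef` for a single web role; the service and role names are illustrative, and a real package also pairs this with a `ServiceConfiguration.cscfg` holding instance counts and settings.

```xml
<!-- Sketch of a minimal service definition; names are illustrative. -->
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MyWebRole" vmsize="Small">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="HttpIn" endpointName="HttpIn" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```

Azure reads this definition to decide what servers to allocate and how to wire up the endpoints, which is exactly why you don't need to RDP in to deploy.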
There are several elements to think about here -
Code-wise - to a large degree this is 'just' .NET running on IIS and Windows, so everything is very familiar and all the past learnings, best practices, etc. apply.
On top of that you may want to leverage some Azure-specific capabilities - for example table storage, or queues, or interacting with your deployment - for which you might need to learn a few more APIs, but these aren't big, are well thought out, and are kept quite simple, so there isn't a big learning curve. Good architecture, of course, would abstract these away to prevent or reduce lock-in, but that's a design choice.
Outside the code, however, there's a bit more to think about -
You'd like to think about your deployment - because RDP-ing into a machine and making changes that way takes away many of the benefits of PaaS - namely the ability of the platform to 'self-heal' by automatically re-deploying your application should a server fail.
You would also like to think about monitoring - which would need to be done slightly differently.
Last - the cloud enables different scenarios, and provides a scale-out model rather than a scale-up model, which you might want to take advantage of, but it might require doing things a little bit differently.
So - bottom line - yes - you could probably get an application into Azure very quickly, without really having learned much, but to do things properly, and to really gain from the platform, you'd want to learn a bit more about it. The good thing is - it's not much, and it all feels very familiar, just another 'framework' for .NET (and Java, amongst others...).
You can just build a pretty vanilla web application with a SQL backend and get it to work on Azure with minimal Azure dependencies. This application will then be pretty portable to another server or cloud platform.
But as you have seen, there are a number of Azure-specific features. These are generally optional and you can do without them, although in building highly scalable sites they are useful.
Azure is a platform, so under normal circumstances you should not need to remote desktop in and fiddle with stuff. RDP is really just for use in desperate debugging situations.

mvc-mini-profiler - working with a load balanced web role (azure et al)

I believe that the mvc mini profiler is a bit of a 'God-send'.
I have incorporated it in a new MVC project which is targeting the Azure platform.
My question is - how to handle profiling across server (role instance) barriers?
Is this is even possible?
I don't understand why you would need to profile these apps any differently. You want to profile how your app behaves on the production server - go ahead and do it.
A single request will still be executed on a single instance, and you'll get the data from that same instance. If you want to profile services located on a different physical tier as well, that would require a different approach, involving communication through internal endpoints, which I'm sure the mini profiler doesn't support out of the box. However, the modification shouldn't be that complicated.
That said, if I wanted to profile physically separated tiers, I would go about it differently: profile each tier independently, because that's how I would go about optimizing it. If you wrap the call to your other tier in a profiler statement, you can see where the problem lies and still be able to solve it.
By default the mvc-mini-profiler stores and delivers its results using HttpRuntime.Cache. This is going to cause some problems in a multi-instance environment.
If you are using multiple instances, then some ways you might be able to make this work are:
to change the Http Cache to an AppFabric Cache implementation (or some MemCached implementation)
to use an alternative storage strategy for your profile results (the code includes SqlServerStorage as an example)
Obviously, whichever strategy you choose will require more time/resources than just the single instance implementation.
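As a sketch of the second strategy, switching the profiler's storage to SQL Server is a one-line configuration change at startup. The API names below follow older MiniProfiler releases and may differ in your version, so treat this as an assumption to verify against the release you have.

```csharp
using StackExchange.Profiling;
using StackExchange.Profiling.Storage;

public static class ProfilerConfig
{
    // Call this from Application_Start so every instance writes its
    // results to a shared SQL database instead of HttpRuntime.Cache.
    public static void Configure(string connectionString)
    {
        MiniProfiler.Settings.Storage = new SqlServerStorage(connectionString);
    }
}
```

Because all instances behind the load balancer read and write the same database, a result recorded on one instance can be rendered by whichever instance serves the follow-up request.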

Dispose cache from multiple web role instances from azure application

I am having a problem related to cache disposal in an Azure cloud application.
I am using the MVC3 structure, with 2 instances.
As we know, Windows Azure automatically allocates a web role instance to serve a web request based on load balancing.
The problem is that when I dispose of a cache entry with HttpRuntime.Cache.Remove("CacheName"), it is removed only from the web role instance that Microsoft has currently allotted to me,
and is not removed from the other instance.
Please help me: can I remove a cache entry from the two instances at the same time,
using any C# code?
This is a good reason to use a distributed cache. Synchronizing cache adds and removes individually across many instances and caches is hard to do well. Any code or solution that attempts to solve the issue will be pretty hacky. Moving the caching out to a distributed cache will solve the problem for you correctly.
Have you looked at the Windows Azure AppFabric Caching solution?
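A minimal sketch of the difference, assuming the Windows Azure Caching client (`Microsoft.ApplicationServer.Caching`) is configured in web.config: because the cache lives outside the role instances, one `Remove` call takes effect for all of them, unlike `HttpRuntime.Cache`, which is per-instance.

```csharp
using Microsoft.ApplicationServer.Caching;

public class SharedCache
{
    private readonly DataCache _cache;

    public SharedCache()
    {
        // Reads the <dataCacheClient name="default"> section from web.config.
        var factory = new DataCacheFactory();
        _cache = factory.GetDefaultCache();
    }

    public void Remove(string key)
    {
        // Removes the entry from the shared cache, so every role
        // instance sees it gone - no per-instance synchronization needed.
        _cache.Remove(key);
    }
}
```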

Like-for-Like SimpleDB Offline

We are currently using Amazon's SimpleDB for a web service. The data is very simple and doesn't require anything like SQL. Its basically a 'property bag'.
We are due to demo our project somewhere where we may not have Internet access and thus may not be able to access SimpleDB. This has only just become apparent, and I have been asked to look for a service that we can run on a local server that would provide a like-for-like replacement (i.e. calls to SimpleDB would work the same against this service), so that we could just point our code at it rather than the real AWS SimpleDB service, without any code change.
Is anyone else doing something similar? What are you using?
We also use Azure, so rather than change our app to work with one service online and another offline, we may change it to only use Azure as this can be run offline and still work.
Windows Azure table storage does not really work offline per se. The storage emulator can be run without an Internet connection. However, it is an emulator, so it does not have 100% fidelity with the cloud service, and it is not tuned for any kind of performance comparison. You could use it for demos, but I would not suggest using the emulator for any kind of 'real' work. Crazy thing about cloud services... they don't work very well offline. ;)
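For completeness, pointing the app at the emulator is just a connection-string change; the shortcut value `UseDevelopmentStorage=true` is the standard emulator connection string, while the setting name below is illustrative.

```xml
<!-- Sketch: the setting name is illustrative; the value is the
     standard shortcut for the local storage emulator. -->
<appSettings>
  <add key="StorageConnectionString" value="UseDevelopmentStorage=true" />
</appSettings>
```

Swapping that value back to a real account connection string is all it takes to target the cloud service again.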
You could maybe use a local version of Redis - http://redis.io/ - but this would definitely require some recoding; the calls are not like-for-like.
If the application was written to be testable (meaning you are using something like the repository pattern), you could possibly stub the calls and point them at either a very lite DB or a file.
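A sketch of that stubbing idea, assuming the 'property bag' shape the question describes; the interface and class names are hypothetical. The in-memory stub stands in for SimpleDB during an offline demo, while the production implementation would wrap the real SimpleDB client behind the same interface.

```csharp
using System.Collections.Generic;

// Hypothetical repository abstraction over the 'property bag' store.
public interface IPropertyBagRepository
{
    IDictionary<string, string> Get(string itemName);
    void Put(string itemName, IDictionary<string, string> attributes);
}

// Offline stub: keeps items in memory instead of calling SimpleDB.
public class InMemoryPropertyBagRepository : IPropertyBagRepository
{
    private readonly Dictionary<string, IDictionary<string, string>> _items =
        new Dictionary<string, IDictionary<string, string>>();

    public IDictionary<string, string> Get(string itemName)
    {
        IDictionary<string, string> attrs;
        return _items.TryGetValue(itemName, out attrs) ? attrs : null;
    }

    public void Put(string itemName, IDictionary<string, string> attributes)
    {
        _items[itemName] = attributes;
    }
}
```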
As a reference for anyone who ends up here looking for the same...
We eventually used mdb on Node.js, which uses the same API calls as SimpleDB. All we had to do was point our app at a new service endpoint URL (our mdb Node.js server, which was handily available as a VMware appliance that we ran in VMware Player).
This worked perfectly, but thankfully we never actually needed it as we could access the real SimpleDB.
https://github.com/robtweed/node-mdb
http://gradvs1.mgateway.com/main/index.html?path=mdb
Neil