I am exploring the idea of hosting my CD environment in Windows Azure. I read that the current release of the DMS does not play ball in the cloud; however, no detailed explanation was given. Apparently Azure support is planned for the second quarter of 2013, but in the meantime I'd like to know why it doesn't work so that I can explore potential workarounds.
For instance, is the issue related to sticky sessions (or the lack thereof)? Or is it related to DMS compatibility with SQL Azure?
It will be the sticky sessions issue. Since the DMS does all of its work server-side, it needs proper session state management to work. You could do this on Azure using IaaS, but then you would be responsible for installing and maintaining the Sitecore deployment on the OS rather than using the built-in deployment features.
See this post by Jakob Leander for more info: http://www.sitecore.net/Community/Technical-Blogs/Jakob-Leander/Posts/2013/01/Why-we-love-Sitecore-on-Azure.aspx
When using Azure web/worker roles, users can specify osVersion to explicitly set the "Guest OS image" version. This ensures that when Microsoft issues new critical updates, they first show up in a newer "OS image", which users can explicitly specify and test their service against.
How is the same achieved with Azure Service Fabric? Suppose I deployed my service into Azure Service Fabric and it has been running for a month, and then Microsoft issues updates for the OS on the server where the service is running - how are they applied such that I can test them first to ensure they don't break the service?
Brett is correct. A Service Fabric cluster is based on Azure VMSS, and the expectation is that the customer is responsible for patching the OS. https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-upgrade/
We have heard from the majority of SF customers that this is not at all desirable and that they do not want to be responsible for OS patching.
A feature enabling opt-in automatic OS patching is indeed a very high priority within the Azure Compute team. The exact details of how best to offer this are still in design; however, the intent is to have this functionality enabled before the end of the year.
Although that is the right long-term solution, to mitigate this issue in the short term the SF team is working on a set of steps that will enable customers to opt into having their VMs patched via Windows Update in a safe manner. Once the steps are tested out, we will blog about them and publish a document detailing the steps. Expect that in the next couple of months.
As I understand it, you are currently responsible for managing patching on SF cluster nodes yourself. Apparently moving this to be an SF-managed feature is planned, but I have no idea how far down the road it might be.
I personally would make this a high priority. Having used Cloud Services for many years, I have come to rely on never having to patch my VMs manually. SF is a large step backwards in this particular area.
It'd be great to hear from an Azure PM on this...
Automatic image-based patching in Service Fabric, like Cloud Services.
Today you do not have that option. The image-based patching capability is a work in progress. I posted a roadmap to get there on the team blog: https://blogs.msdn.microsoft.com/azureservicefabric/2017/01/09/os-patching-for-vms-running-service-fabric/ Try out the script and report any issues you hit. Looking forward to your feedback.
Large parts of Service Fabric are a rolling dumpster fire of steps backwards. Whole new hosts of problems have been introduced that the IIS/WAS/WCF team had already solved and that now need to be developed all over again. The concept of releasing a PaaS platform while requiring OS patch management is laughable. To add insult to injury, there is no migration path from "classic cloud PaaS" to this stuff. WEEEE, I get to write my very own service host, something that was provided out of the box for a decade by WAS. Not all of us were scared off by the ability to control all aspects of service host communication options via configuration. Now we get to use code, so a tweak to channel configuration requires a full patch/release cycle!
I was about to use StrongLoop for restructuring a Node.js project when I saw that they are integrating StrongLoop into IBM API Connect. The official page even points there.
However, from a brief look at the features, I saw that the IBM API Gateway, in contrast to StrongLoop Arc (actually the StrongLoop Process Manager), does not offer the free clustering capabilities on deployment that PM did. Clustering is mentioned as a paid feature...
I believe this is quite a setback: integrating a good product and limiting its open-sourced services...
The API Gateway is different / separate from StrongPM and Arc in general. Some of the features previously available in Arc are now also part of API Connect's API Designer.
If you build an app on top of the LoopBack framework (which still is and should remain OSS), you can deploy that in whatever manner you wish with full clustering.
For example, you could deploy to Bluemix as a Node.js application and run multiple instances of your app for clustering / high-availability.
Or, you could deploy your application using StrongPM on your own hardware and get clustering that way.
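For instance, if all you need is to run multiple worker processes of your Node.js app yourself, a minimal sketch using Node's built-in cluster module (independent of StrongPM or API Connect) might look like the following; the ./server/server.js entry point is the LoopBack convention and is assumed here:

```js
// cluster.js - minimal DIY clustering sketch, no StrongPM required.
var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core.
  os.cpus().forEach(function () { cluster.fork(); });
  // Replace any worker that dies so the app stays available.
  cluster.on('exit', function () { cluster.fork(); });
} else {
  // Each worker runs the (assumed) LoopBack app entry point.
  require('./server/server.js');
}
```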
If that doesn't answer your question, please clarify and I'll do what I can to provide updates.
I've been reading about Azure's storage system, and about worker roles and web roles.
Do you HAVE to develop an application specifically for Azure with this? It looks like you can remote desktop into Azure and set up an application in IIS like you normally can on a Windows server, right? I'm a little confused because the docs read as though you need to develop an Azure-specific application.
Looking to move to the cloud, but I don't want to have to rework my application for it.
Thanks for any clarification.
Changes to the ASP.NET application are minimal (for the most part, the web application will just work in Azure).
But you don't remote connect to deploy. You actually build a package (zip) with a manifest (xml) which has information about how to deploy your app, and you give it to Azure. In turn, Azure will take care of allocating servers and deploying your app.
There are several elements to think about here -
Code-wise - to a large degree this is 'just' .NET running on IIS and Windows, so everything is very familiar and all the past learnings, best practices, etc. apply.
On top of that you may want to leverage some Azure-specific capabilities - for example table storage, or queues, or interacting with your deployment - for which you might need to learn a few more APIs, but these aren't big, and they are well thought out and kept quite simple, so there's not a big learning curve. Good architecture, of course, would look to abstract these away to prevent/reduce lock-in, but that's a design choice.
Outside the code, however, there's a bit more to think about -
You'll want to think about your deployment - because RDP-ing into a machine and making changes that way takes away many of the benefits of PaaS - namely the ability of the platform to 'self-heal' by automatically re-deploying your application should a server fail.
You'll also want to think about monitoring - which would need to be done slightly differently.
Last - the cloud enables different scenarios, and provides a scale-out model rather than a scale-up model, which you might want to take advantage of, but it might require doing things a little bit differently.
So - bottom line - yes - you could probably get an application into Azure very quickly, without really having to learn much of anything, but to do things properly, and to really gain from the platform, you'll want to learn a bit more about it. The good thing is - it's not much, and it all feels very familiar, just another 'framework' for .NET (and Java, amongst others...)
You can just build a pretty vanilla web application with a SQL backend and get it to work on Azure with minimal Azure dependencies. This application will then be pretty portable to another server or cloud platform.
But as you have seen, there are a number of Azure-specific features. These are generally optional and you can do without them, although they are useful when building highly scalable sites.
Azure is a platform, so under normal circumstances you should not need to remote desktop in and fiddle with stuff. RDP is really just for use in desperate debugging situations.
I'm writing an application in Node.js for a spare-time, bootstrap project. I have a Windows background, and Windows Azure with its three-month free trial currently seems like the simplest way to develop, deploy and host the project.
However Windows Azure appears to get expensive after the free trial expires, and in any case I'd like the option to host on non-MS platforms, so I have a couple of questions:
I can see from the tutorial that I need some Windows-specific code to pick up the port number on which the app should listen - are there many more examples of Windows- or Azure-specific code requirements further down the line?
I'd like to take a NoSQL approach to data storage since I'm more interested in flexibility and performance than in referential integrity or structural consistency - would it be difficult to wrap Azure Tables in a data access layer that would be reasonably portable to other NoSQL databases such as MongoDB or the various cloud offerings?
Finally, the catch-all question - is there anything else I should be looking out for?
Tackling your second question: there are modules in the NPM registry that can help you here.
Firstly, Microsoft has recently released the Azure SDK for Node as an NPM module. This has a rich API that will help you interface with Azure Tables.
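For instance, a rough sketch of writing an entity with that SDK might look like the following; the 'posts' table and the entity fields are made up for illustration, and the storage credentials are assumed to come from the AZURE_STORAGE_ACCOUNT / AZURE_STORAGE_ACCESS_KEY environment variables:

```js
// Rough sketch using the 'azure' npm package (npm install azure).
// Assumes AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_ACCESS_KEY are set.
var azure = require('azure');
var tables = azure.createTableService();

tables.createTableIfNotExists('posts', function (err) {
  if (err) throw err;
  // PartitionKey + RowKey together identify the entity.
  var entity = { PartitionKey: 'blog', RowKey: '1', title: 'Hello Azure Tables' };
  tables.insertEntity('posts', entity, function (err) {
    if (err) throw err;
    console.log('entity stored');
  });
});
```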
There are also NoSQL clients available in the NPM registry for most solutions (including MongoDB).
If you keep your data access simple, you should be able to make use of the various NoSQL clients that are available and create a nice little module layer that sits above all the ones you need to support.
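As a rough illustration of that kind of module layer (all names here are invented), the rest of the app could depend only on a couple of generic calls, with the backing store hidden behind them:

```js
// store.js - hypothetical data-access layer; the rest of the app only calls
// save()/get(), so the backing store (Azure Tables, MongoDB, ...) can be
// swapped without touching application code.

// In-memory reference implementation; an Azure Tables or MongoDB version
// would export the same two functions.
var data = {};

exports.save = function (collection, id, doc, callback) {
  data[collection] = data[collection] || {};
  data[collection][id] = doc;
  callback(null);
};

exports.get = function (collection, id, callback) {
  var found = data[collection] && data[collection][id];
  callback(null, found || null);
};
```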
You could even create a public GitHub repository and publish your hard work to the NPM registry so other people can help you develop it.
I have built an app on Windows Azure's Node.js support as well, and there is virtually no lock-in if you stick to npm modules and open platforms.
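On the port question specifically, the 'Windows-specific' part is really just reading the listen address from the environment, which is portable. A minimal sketch (the 3000 fallback for local development is an arbitrary choice):

```js
// server.js - take the listen address from the environment when the host
// provides one (as Azure does via process.env.PORT), otherwise default locally.
var http = require('http');

var port = process.env.PORT || 3000;

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello from node\n');
}).listen(port);
```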
You should also check into Microsoft's BizSpark program - you get two reserved instances free for two years, plus storage. It's a great program.
I'm not very skilled with Azure, and googling hasn't given me any more answers on this topic.
I have an ASP.NET web page that uses the R (D)COM interface to do some complex calculations. I'm evaluating moving everything to the Azure platform.
I saw that it's easy to move web pages to Azure; however, since I need RSERVER installed on the machine, I need to move everything.
I was thinking of creating a VHD image and publishing the entire image to Azure, but I'm not sure this is the best solution.
I am not familiar with RSERVER, but here are some guidelines you may follow:
By default all Windows Azure servers run in 64-bit mode. This is important for the COM interfaces.
You may run any executable as a Startup Task in a regular Windows Azure Web/Worker role. Frankly, you can create very complex startup scripts. You may use the Windows Azure Bootstrapper to ease the solution. The trick is that RSERVER must support an unattended/silent install.
I would stick to the least friction solution - which would be using a normal Windows Azure Web Role and a Startup Task.
If that is not working for you, you may consider preparing a VHD image and use the Windows Azure VM Role.
I've written a very similar answer to what I'd write for you here. The thing is, the Azure VM role is technically a good solution, depending on what you need to do with it. You can generally create really good solutions with a fairly minimal amount of effort to let legacy code work with Azure, despite all the shortcomings of the VM role.
In general, if you have a lot of custom installation you need to do, absolutely create the Azure VM role. But make sure you get the communication with it right; it's not going to behave exactly like a web or worker role. Although, if I remember correctly, you still have endpoints and configuration there, so you can expose your program to the outside. Personally, however, my architectures are much more queue-based (as described in the answer highlighted above), so I'd opt for writing a bridge program in the VM.