Why does the package upload speed differ so much between the various upload methods?
The methods I've used to deploy are:
1. Within VS2013: very slow uploads
2. PowerShell cmdlets: again very slow
3. Azure Portal: blazingly fast
I would not complain if the speed differed by a few seconds, or heck, a minute or two, but I'm talking tens of minutes for a 45 MB deployment package.
Methods 1 and 2 take over 45 minutes using high-speed broadband in New Zealand.
Uploading via method 3 takes just 30 seconds.
Something is not quite right.
Our network is not in question; I was able to reproduce this from other networks too.
I can confirm that deployment from Visual Studio takes much longer than a direct upload of the package, and as far as I know this has been a known and common issue since the very first version of Azure.
Nonetheless I would like to point out that upload via the Azure Portal is a mere data transfer: after the package has been deployed, it takes the service some more time to become responsive, whereas after a VS deployment the service is responsive immediately once the deployment is done. From this I conclude that the difference in deployment time might be due to provisioning of cloud infrastructure (creating, running or re-configuring host OS VMs) for your cloud services.
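If you want to quantify that "time to responsive" rather than eyeball it, one rough way is to poll the site after the upload finishes. This is only a sketch; the URL and the polling intervals are placeholders, not anything Azure-specific:

using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading;

class WarmupTimer
{
    static void Main()
    {
        // Placeholder URL - substitute your cloud service endpoint.
        const string url = "http://myservice.cloudapp.net/";
        var watch = Stopwatch.StartNew();

        using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) })
        {
            while (true)
            {
                try
                {
                    var response = client.GetAsync(url).Result;
                    if (response.IsSuccessStatusCode)
                        break;                       // service is answering requests
                }
                catch (Exception)
                {
                    // not up yet (DNS, connection refused, timeout, ...)
                }
                Thread.Sleep(TimeSpan.FromSeconds(10));
            }
        }

        Console.WriteLine("Service responsive after {0:F0}s", watch.Elapsed.TotalSeconds);
    }
}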
From what I've seen of Azure publishing using Visual Studio vs the Azure Portal, there seems to be no significant difference except for the time it takes to upload the package. As the original poster stated, the Portal is extremely fast, whereas Visual Studio takes forever to upload the package. I didn't realize the difference for a long time until I tried using the Portal one day just to see how that process worked, and I was floored by the difference in speed.
I'd definitely recommend using the Portal over Visual Studio. One caveat: you seem to need to use Visual Studio the first time around to create the Azure Tools certificate. I'm not sure if there's a way to create it without Visual Studio.
I think I've seen why Visual Studio takes so long to do the upload. If you start up Fiddler while the update is happening, you'll see that Visual Studio uploads the package in 64 KB blocks, one after the other. As you're in NZ you get our wonderful high latency to US servers, so sending lots of smallish requests in sequence is far from optimal.
Dedicated storage explorers are usually much more clever: if the file you're uploading is large, they will use larger blocks and upload them in parallel.
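To illustrate the difference, here is a minimal sketch of that kind of parallel block upload using the classic Microsoft.WindowsAzure.Storage SDK. The container name, package path and the block/parallelism settings are just assumptions to show the idea, not anything Visual Studio itself does:

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class PackageUploader
{
    static void Main()
    {
        // Assumed connection string and names - replace with your own.
        var account = CloudStorageAccount.Parse("<storage connection string>");
        var container = account.CreateCloudBlobClient()
                               .GetContainerReference("deployments");
        container.CreateIfNotExists();

        var blob = container.GetBlockBlobReference("MyApp.cspkg");
        blob.StreamWriteSizeInBytes = 4 * 1024 * 1024;   // 4 MB blocks instead of tiny ones

        var options = new BlobRequestOptions
        {
            // Upload several blocks at the same time to hide the NZ -> US latency.
            ParallelOperationThreadCount = 8,
            // Use block upload (rather than a single PUT) above this size.
            SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024
        };

        using (var stream = File.OpenRead(@"bin\app.publish\MyApp.cspkg"))
        {
            blob.UploadFromStream(stream, null, options, null);
        }
    }
}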
Related
I have an Umbraco website set up in Azure. The front-end website loads fine, but in the back office it takes more than 15 seconds from when you hit "Save and Publish" until it shows the check mark denoting success. I set up a test copy of the site on an Azure VM, pointing to the same Azure SQL database that the Azure website uses, and I don't get this problem there.
I just spent some time debugging this same scenario. We had a situation where once someone saved a node, it would take 30 seconds before the UI would become responsive again. Network trace confirmed that we were waiting on API calls back from Umbraco.
We were on an S0 SQL instance, so I bumped it to S1 and the perf got worse!? (guessing indexes rebuilding?).
We already have a few Azure-specific web.config options set (like useTempStorage="Sync" in our ExamineSettings.config). I ended up adding the line below and now our saves went from 30-35s to 1-2s!
<add key="umbracoContentXMLUseLocalTemp" value="true" />
This is from the load balancing guide available here - https://our.umbraco.org/documentation/Getting-Started/Setup/Server-Setup/load-balancing/flexible
Umbraco's Save and Publish is fairly DB intensive. An S1 instance is possibly not beefy enough. We use S2 for our dev sites, and even that can take up to 5 seconds to save and publish, depending on whether anything else is hitting the database.
You may also have issues with other code running that's slowing things down. Things like Examine indexing can be quite slow on Azure. It could also be a plugin that's slowing things down.
How complex are your DocTypes? Also, which version of Umbraco are you running? Some older versions have bugs that cause performance issues on Azure.
At the moment we are running our application on AWS Elastic Beanstalk but are trying to determine the suitability of Azure.
Our biggest issue is the amount of wasted CPU time we are paying for but not using. We are running on t2.small instances as these have the minimum amount of RAM we need, but we never use even the base amount of CPU time allotted (20% for a t2.small). We need lots of CPU power during short bursts of the day, and bringing more instances online in advance of those bursts is the only way we can handle it.
AWS Lambda looks like a good solution for us, but we have dependencies on Windows components like SAPI, so we have to run inside Windows VMs.
Looking at Azure Cloud Services, we thought a Web role would be the best fit for our app, but it seems a Web role is nothing more than a Windows Server 2012 VM with IIS enabled. So as the app scales it just brings on more of these VMs, which is exactly what we have at the moment. Does Azure have a service similar to Lambda where you only pay for the CPU processing time you use?
The reason for our inefficient use of CPU resources is that our speech generation app uses lots of 3rd party voices but can only run single-threaded when calling into SAPI, because the voice engine is prone to crashing when multithreading. We have no control over this voice engine. It must have access to the system registry and Windows SAPI, so the ideal solution would be to somehow wrap all the dependencies in a package, deploy that onto Azure, and then kick off multiple instances of it. What that package would look like, I have no idea.
Microsoft just announced a new serverless compute service as an alternative to AWS Lambda, called Azure Functions:
https://azure.microsoft.com/en-us/services/functions/
http://www.zdnet.com/article/microsoft-releases-preview-of-new-azure-serverless-compute-service-to-take-on-aws-lambda/
With Azure Functions you only pay for what you use: compute is metered to the nearest 100 ms and priced per GB-second, based on the time your function runs and the memory size of the function space you choose. Function space size can range from 128 MB to 1536 MB, and the first 400,000 GB-s are free.
Azure Function requests are charged per million requests, with the first 1 million requests free.
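To give a sense of what "a function" means here, this is roughly what a minimal queue-triggered C# script function looked like in the preview-era templates. Treat it as a sketch: the trigger binding lives in a separate function.json, and the parameter and log types shown are what the templates used, not anything specific to your workload:

// run.csx - executed only when a message arrives; you pay only for run time.
using System;

public static void Run(string myQueueItem, TraceWriter log)
{
    // Do the actual work here, e.g. generate speech for one request.
    log.Info($"Processing queue item: {myQueueItem}");
}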
Based on the documentation on the Azure website here: https://azure.microsoft.com/en-in/campaigns/azure-vs-aws/mapping/, the services closest to AWS Lambda are WebJobs and Logic Apps.
The most direct equivalent of Lambda on Azure is Azure Automation, which does a lot of what Lambda does except that it runs PowerShell instead of Node etc. It isn't as tightly integrated into other services as Lambda is, but it has the same model: you write a script, and it is executed on demand.
I presume by SAPI you are referring to the Speech API? If so, you can create PowerShell modules for Azure Automation, and they can include DLL files. In that case you could create a module that wraps the SAPI DLL, and that should do what you are looking for.
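As a rough illustration only (the class and method names here are made up, and System.Speech is just one managed route onto SAPI), the DLL inside such a module could be as small as this:

// Sketch of a tiny wrapper assembly around the managed SAPI layer (System.Speech).
// Compile this into a DLL and ship it inside the PowerShell module.
using System.Speech.Synthesis;

public class SpeechWrapper                 // hypothetical name
{
    // Render the given text to a WAV file using the installed SAPI voices.
    public void SpeakToWav(string text, string outputPath)
    {
        using (var synth = new SpeechSynthesizer())
        {
            synth.SetOutputToWaveFile(outputPath);
            synth.Speak(text);             // single-threaded, as the voice engine requires
        }
    }
}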
If you want a full compute environment without the complexity of managing multiple machines yourself, you could use Azure Batch, which would be the Azure-recommended way of running the kind of workload you describe.
The cost/benefit you need to evaluate is how much faster your solution would run on a native .NET stack (in Batch), and whether performance is significantly degraded when run from PowerShell.
Personally I would give Automation a try, it is surprisingly powerful.
There is something called a "Cloud Service" in Azure which allows you to run code on a plain VM. Scaling options on these include things like CPU %, queue size, etc. If you can schedule your needs, Azure lets you easily set up scheduled scaling, e.g. 4 VMs from 8:00 AM until 8:10 AM, and of course in Azure you pay by the minute, so it could be a feasible solution.
I'd say more, but the documentation in Azure is really so good that I'd be offending them by offering my "translation" here. Check out azure.com for more info :)
I am on a Windows Azure trial to evaluate migrating a number of commercial ASP.NET sites to Azure from dedicated hosting. All was going OK ... until just now!
Some background: the sites are set up under Web Roles (as opposed to Web Sites) using SQL Azure and SQL Reporting. The site content was under the X: drive (there was also a B: drive that seemed to be mapped to the same location). There are several days left of the trial.
Without any apparent warning my test sites suddenly stopped working. Examining the server (through RDP) I saw that the B: and X: drives had disappeared (just C:, D: and E: were left, I think), and in IIS the application pools and sites had disappeared. In the Portal, however, nothing seemed to have changed: the same services and config seemed to be there.
Then about 20 minutes later the missing drives, app pools and sites reappeared and my test sites started working again. However, the B: drive was gone and there was now an F: drive (showing the same content as X:); also, the MS ReportViewer 2008 control that I had installed earlier in the day was gone. It is almost as if the server had been replaced with another (but with the IIS config restored from the original).
As you can imagine, this makes me worried! If this is something that could happen in production there is no way I would consider hosting commercial sites for clients on Azure (unless there is some redundancy system available to keep a site up when such a failure occurs).
Can anyone explain what may have happened, if this is possible/predictable under a live subscription, and if so how to work around it?
One other thing to keep in mind is that an Azure Web Role is not persistent. I'm not sure how you installed the MS Report Viewer 2008 control but anything you add or install outside of a deployment package when you push your solution to Azure is not guaranteed to be available at some future point.
I admit that I don't fully understand the whole picture when it comes to the overall architecture of Azure, but I do know that Web Roles can and do re-create themselves from time to time. When a role instance recycles, it returns to the state it was in when it was first deployed. This is why Microsoft suggests using at least 2 instances of your role: while one or the other may recycle, they will never both recycle at the same time, which is part of what underpins the 99.9% uptime guarantee.
You might also want to consider an Azure VM. They are persistent but require you to maintain the server in terms of updates and software much in the way I suspect you are already doing with your dedicated hosting.
I've been hosting my solution in a large (4 core) web role, also using SQL Azure, for about two years and have had great success with it. I have roughly 3,000 users and rarely see the utilization of my web role go over 2% (meaning I've got a lot of room to grow). Overall it is a great hosting solution in my opinion.
According to the Azure SLA, Microsoft guarantees uptime of 99.9% or higher on all its products per billing month. (20 minutes of downtime in a month is roughly 0.05% of the month, well under the 0.1% the SLA allows, so, not being critical, they would still be within their SLA.)
The current status shows that SQL databases were having issues in US North last night, but all services appear to be up at the moment.
Personally, I have seen the dashboard go down and report very weird problems, but the services I had deployed worked just fine all the way through it. When I experienced this problem it was reported on the Azure status dashboard, the platform status page and the Twitter feed.
While I have seen bumps, they are few and far between, and I find reliability to be perceptibly higher than other providers that I have worked with.
As for workarounds, I would suggest standard mode for your websites and increasing the number of instances of the site. You might also try looking into the new add-ons that are available with the latest Azure release; Active Cloud Monitoring by MetricsHub might be what you require.
It sounds like you're expecting the web role to act as a Virtual Machine instance.
Web Roles aren't persistent (the machine can be destroyed and recreated at any time), so you should do any additional required setup as a startup task in your Azure project (never install software manually).
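As a rough sketch only (the helper name here is made up), such setup code can either go in a <Startup> task in the service definition or in the role's entry point, so it runs again every time the instance is re-imaged:

using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Runs on every (re)start of the instance, so anything installed here
        // survives the role being torn down and rebuilt by the fabric.
        InstallReportViewer();   // hypothetical helper, e.g. run a silent installer
        return base.OnStart();
    }

    private void InstallReportViewer()
    {
        // e.g. download the redistributable from blob storage and run it /quiet
    }
}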
Because roles aren't persistent and get re-imaged, you also need at least 2 instances so that rolling upgrades (i.e. Windows security patches, hotfixes and so on) can be performed automatically without your entire deployment being taken offline.
If this doesn't suit your use case then you should look at Azure Virtual Machines, but you'll need to manage updates and so on yourself. It's usually better to use Web Roles properly as you can then do scaling and so on a lot more easily.
I've been working with Windows Azure and Amazon Web Services EC2 for a good many months now (almost getting to the years range) and I've seen something over and over that seems troubling.
When I deploy a .NET build to a Windows Azure web role (or worker role) it usually takes 6-15 minutes to start up. In AWS EC2 it takes about the same time to start the image and then a minute or two to deploy the app to IIS (depending of course on how it's set up).
However, when I boot up an AWS instance with SUSE Linux and Mono to run .NET, I get one of these booted and code deployed to it in about 2-3 minutes (again, depending on how it's set up).
What is going on with Windows OS images that causes them to take so long to boot up in the cloud? I don't want FUD; I'm curious about the specific details of what goes on that causes this. Any specific technical information regarding this would be greatly appreciated. Thanks.
As announced at PDC, Azure will soon start to offer full IIS on Azure web roles. Somewhere in the keynote demo by Don Box, he showed that this allows you to use the standard "publish" options in Visual Studio to deploy to the cloud very quickly.
If I recall correctly, part of what happens when starting a new Azure role is configuring the network components, and I remember a speaker at a conference mentioning once that this was very time consuming. This might explain why adding additional instances to an already running role is usually faster (but not always: I have seen that take much more than 15 minutes as well on occasion).
Edit: also see this PDC session.
I don't think the EC2 behavior is specific to the cloud. Just compare boot times of Windows and Linux on a local system - in my experience, Linux just boots faster. Typically, this is because the number of services/demons launched is smaller, as is the number of disk accesses that each of them needs to make during startup.
As for Azure launch times: it's difficult to tell, and not comparable to machine boots (IMO). Nobody outside Microsoft knows exactly what Azure does when launching an application. It might be that they need to assemble the VM image first, or that a lot of logging/reporting happens that slows things down.
Don't forget, there is a fabric controller that needs to check fault domains and deploy your VMs across multiple fault domains (to give you high availability, at least when you have two or more instances). I can't say for sure, but that logic itself might take some extra time. This might also explain why the network setup could be a little complicated.
This would of course explain the difference (if any) between boot times in the cloud and boot times for Windows locally or on Amazon. Any difference between operating systems is entirely down to the way the OS is built!
For each BizTalk application we have a setup.bat, which creates the BizTalk application, creates file drops, builds the code, GACs the assemblies, registers resources, creates ports (using VBScripts) and applies bindings. We also have a cleanup.bat which performs the opposite of setup.bat.
These scripts are then run via NAnt, and finally used by CruiseControl.NET. They allow us to set up a BizTalk app on any machine that has BizTalk and the latest source and tools downloaded.
What do others do to "bootstrap" BizTalk applications in a repeatable and automated manner?
I've seen BizTalk NAnt tasks; are they faster than VBScript?
The setup.bat runs slower on our BizTalk build machine by a factor of about 3! Disk, CPU, memory and paging all look comfortable. A full build/deploy takes 2 hours before any tests have run; we have about 20 BizTalk apps plus assorted C# services and custom components. Aside from a new machine or a rebuild, any ideas? Our build machine has 4 GB of RAM and dual hyper-threaded cores, and is a roughly 5-year-old server. What are your build machines like?
Michael Stephenson has written some great blogs on automated BizTalk builds, take a look at link text
We have used a utility which Mike posted to codeplex which will create an MS build script for a BizTalk application - this has worked very well for us. You can find this at link text
We use NAnt as well for our BizTalk deployment. Specifically, we use a combination of calling the BizTalk related NAntContrib tasks (which all begin with bts) and using the <exec> task to call the command line btstask.exe directly.
At some level they are all using the same underlying technology to talk to the BizTalk server, so it's hard to say whether NAnt is faster than something like VBScript.
I will say that in my experience BizTalk appears to be a resource hog. Since it's hard to change that, the only thing we do have control over is the amount of resources we give it. Therefore, if builds are taking too long, and one has the time/money to do so, throw bigger and badder hardware at it. This is generally the cheapest way, as the amount of time we developers put into making sub-marginal improvements to build times can end up costing far more than hardware. For example, we've noticed that moving to 8 GB of memory can make all the difference, literally transforming the entire experience.
I just create an MSI through the BizTalk Administrator. I keep my binding information separate from the MSI, so developers need to bind ports by importing the binding files, but that is easy.
In cases where assemblies need to be deployed into the GAC, I use a batch file that runs gacutil, then installs the MSI and finally binds the ports.
This approach is easy to maintain and, more importantly, easy for others to understand and troubleshoot.
Regarding BizTalk being a resource hog: first look at SQL Server and make sure you limit it to some reasonable amount of memory (by default it takes whatever it can get, which is usually most of the available memory). That one change alone makes a significant difference.
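For example, one quick way to cap it is to set 'max server memory' via sp_configure. The sketch below does it from C# with an assumed local connection string and an assumed 2 GB cap; you could just as well run the same statements in Management Studio:

using System.Data.SqlClient;

class CapSqlMemory
{
    static void Main()
    {
        // Assumed local dev connection string - adjust for your build machine.
        using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
        {
            conn.Open();
            var sql = @"EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
                        EXEC sp_configure 'max server memory (MB)', 2048; RECONFIGURE;";
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.ExecuteNonQuery();   // caps SQL Server at roughly 2 GB of RAM
            }
        }
    }
}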
You should also consider running only minimal software during development: disable the anti-virus, or at least exclude the relevant directories from being uselessly scanned when developers compile and deploy. Avoid running MS Word, Messenger, etc. on systems that have little RAM (2 GB or less) while developing a BizTalk solution.
On developers' workstations, enable the BizTalk messagebox archive and purge job as explained here:
http://msdn.microsoft.com/en-us/library/aa560754.aspx
Keeping the database small saves valuable disk space which can help to improve overall performance.
There are quite a few solutions out there -
Rob Bowman mentioned Michael Stephenson's msbuild generator
Also on CodePlex you can find another framework by Scott Colestock, Thomas Abraham and Tim Rayburn.
There's also a minor addition by me, playing with Oslo, but that's not half as mature as these two. It does use the SDC tasks, though, which are a great starting point if you wish to create your own MSBuild-based solution.