I've started porting an existing ASP.NET web app to Windows Azure and have noticed that the development process is really slow. Each time I make a change to my code and want to view it, I have to effectively redeploy it to the local dev cloud (using Start Debugging (F5) or Start Without Debugging (Ctrl+F5)). The process itself takes over a minute, during which time Visual Studio is completely unresponsive.
Am I doing something wrong, or is this simply how development for Azure goes?
My specs:
Visual Studio 2008 9.0.30729.1 SP
5 projects running on .NET 3.5 SP1
Azure SDK 1.1 (February 2010)
Single instance of a single web role
Dual-core AMD 64 machine with 8GB RAM, 64-bit Windows 7, fully patched
The main project itself is quite large (3k files, ~200k lines) but compiles normally in 10-15 seconds
If your web role has limited functionality, you might be able to just set the Web project as the Active Project in your VS solution and run from there.
For example, my web role doesn't call into table storage, blob storage, etc.; it just makes some Azure logging calls and interacts with SQL Azure. So sometimes I just set the web project as the startup project in the VS debugger, not the Azure project, and run from there. I've written my logging calls to check whether Azure is available before they write, so they don't execute in this situation.
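For illustration, here is a minimal sketch of that kind of guard, assuming the logging goes through a trace listener (the SafeLogger name is hypothetical; RoleEnvironment comes from Microsoft.WindowsAzure.ServiceRuntime):

    using System.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class SafeLogger
    {
        // Only log when the code is actually running under the Azure
        // fabric (cloud or dev fabric); when the plain web project is
        // started outside Azure this becomes a no-op.
        public static void LogMessage(string message)
        {
            if (RoleEnvironment.IsAvailable)
            {
                Trace.TraceInformation(message);
            }
        }
    }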
Of course, if you're doing lots with table storage, queues, blobs, etc. then this is not for you.
Normally on a development machine we just compile and run the solution. With Azure development there is an additional step: the project is deployed to the dev fabric, which involves copying the complete web site content into a dynamically created deployment folder. Since you have a large number of files, all of them have to be copied into a new folder every time you press F5 or Ctrl+F5, which is likely the delay you are noticing.
This scenario also highlights the inflexibility of deploying the solution over the fabric: any time you change any content (static or dynamic) in the website, the complete site has to be packaged and re-uploaded to your production server.
In my case, when I changed the port from 80 to something else (under endpoints), the speed returned to normal.
Microsoft's Steve Marx has a blog post about running a website from a mounted VHD in Azure. This may be a good development pattern, since you simply update the contents of a VHD stored in blob storage instead of having to redeploy to the fabric each time.
An automated Windows update this morning left my Windows Server 2012 R2 Classic Virtual Machine on Azure in a semi-crashed state. The VM is a web server, and all the files and applications in it are still accessible via the browser. In other words, IIS and a number of other services are still running. Unfortunately, however, the VM is not accessible via Remote Desktop and is unresponsive to commands from the Azure management interface on the portal.azure.com website.
This type of error is quite common and is reported on many other websites. The error has been happening to Windows users (not just Windows Server) for many years, and none of the solutions online work for Azure users, because they involve restarting from a CD, pressing Shift+F8 during boot, issuing DOS commands, restoring from backup, or unchecking certain properties in VMware or other software.
Does anybody have a real solution for this problem on Microsoft Azure?
After struggling with this for weeks, I think I was able to fix it with the help of Microsoft support! I decided to post the solution here in case it helps someone in the future. Here are the three things you need to do:
1. Restore the VM from a backup taken prior to the crash. The VM with the "Undoing Changes" crash is pretty much toast at this point. Then proceed to steps 2 and 3 to ensure that the next batch of Windows Updates won't crash it again.
2. On your new VM, make sure the TEMP and TMP environment variables both point to C:\Windows\TEMP. In my case, they were both pointing to a temporary folder in the logged-in user's profile.
3. Ensure that C:\Windows\TEMP stays empty. I achieved this with a scheduled task that runs a simple BAT file once a day to delete all files and folders inside C:\Windows\TEMP. A Microsoft representative told me that even though you may have plenty of space on your C:\ drive, the Windows TEMP folder is really not supposed to grow much beyond 500 MB. When it gets very large you may have problems with Windows Updates (mine was just under 500 MB when the updates were failing).
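The original fix used a BAT file run by Task Scheduler; as a rough alternative sketch, a small C# console program scheduled the same way could do the cleanup (the path and the daily schedule are assumptions; anything currently in use is simply skipped):

    using System;
    using System.IO;

    class ClearWindowsTemp
    {
        static void Main()
        {
            // Delete everything under C:\Windows\TEMP, skipping files or
            // folders that are currently locked; schedule this to run
            // daily with Windows Task Scheduler.
            var temp = new DirectoryInfo(@"C:\Windows\TEMP");

            foreach (FileInfo file in temp.GetFiles())
            {
                try { file.Delete(); }
                catch (IOException) { /* in use; leave it for the next run */ }
                catch (UnauthorizedAccessException) { }
            }

            foreach (DirectoryInfo dir in temp.GetDirectories())
            {
                try { dir.Delete(true); }
                catch (IOException) { }
                catch (UnauthorizedAccessException) { }
            }
        }
    }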
I would recommend contacting Azure support as something may have to be done by an engineer to fix the issue and unfortunately classic VMs don't have the redeploy feature.
I added only inbound port 3389 (RDP), and it works well now.
When I publish to Azure my tiny, tiny test project, configured in the latest vNext RC2, I get the following error upon first load after an extremely long wait:
The specified CGI application encountered an error and the server terminated the process.
Subsequently, if the app is anything more than the very simple "Hello World" project below (i.e. it uses some MVC, etc.), it is extremely unresponsive, failing to load some images and taking minutes to load each page. Although sometimes it's suddenly fast for a little while, then slow again.
In RC2 there were some changes to the hosting setup, but all these have been implemented in my tiny test project.
I have also seen this question and ensured I am publishing the exact correct version of the CLR; for what it's worth, the same thing happens whether I use the full or Core CLR.
Here is the example project (publishing profiles removed):
https://www.dropbox.com/s/hpkrj6c74eaytjz/TinyProject.zip?dl=1
If I create a new RC1 project, the problem doesn't surface, but as soon as I update it to RC2 the problem persists.
In the end I solved this by creating an App Service Plan that was anything other than the free or shared option, in my case B1 (screenshot from the Visual Studio Azure SDK).
Has your Azure Web App had an RC1 instance uploaded to it prior to your RC2? Your project looks OK to me; I can't see anything wrong at first glance with your project.json, Startup.cs or hosting.json files. I had an RC1 instance on a Web App, and when trying to upload RC2, nothing would work, just a long wait until it eventually timed out with a 503 error. I deleted the Web App and published just the RC2 version (using the same DNX build as yourself) and everything works fine (so far!).
Also, if you turn on Diagnostic logging in your Web App, does that provide any more info?
I now have my Windows Azure environment set up so that I can access my Worker Role with Remote Desktop. However, I'm not sure how to proceed at the moment. After much digging I found a web site that was offline, but in Google's cache there was mention of attaching the Visual Studio debugger to the Worker Role running in the Azure cloud. But I only have Visual Developer (not Studio) 2010, and I have searched all over and as far as I can see there is no option to attach to a remote server. I am able to publish my project to the Azure cloud without error, and I have a "healthy" instance of my Worker Role showing as active and running.
I did connect with RDP through the Azure Management portal. The login worked fine and up came the remote desktop window. I searched through much of what I could find and was unable to find my Worker Role. I must have the wrong impression of RDP, because I had hoped to see the Worker Role's main display form when I logged in, just like I do when I debug it locally in the Cloud Emulator. But instead all I saw was a blank desktop with some base level server inspection and management routines. I even checked the Event Viewer for Application related messages and saw none.
So now I'm stuck wondering if my Worker Role is actually running or not, despite the seemingly positive status messages from the Management Portal, and I still want to attach to my Worker Role for debugging through Visual Developer, if it's possible, but I am unable to figure out how.
Anyone with experience in this area that can give me some solid tips on what to do next, please respond.
UPDATE: I believe my Worker Role may be running, because I opened a command window, ran netstat, and saw it listening on the correct port. However, that may just be my Worker Role shell class that launches my custom EXE as a spawned process. I still haven't confirmed whether the custom EXE is running.
UPDATE-2: Just ran TaskList from a command window and the custom EXE is listed.
UPDATE-3: Everything is working; I just ran a remote test of the service, so that's not a problem. I still want to know how to attach to the Worker Role from Visual Developer 2010 for remote debugging, and whether it's possible to see the custom EXE's display form like I do when debugging locally in the Cloud Emulator.
-- roschler
There is a set of articles here that goes into detail on how to set up remote debugging in Azure:
http://blogs.u2u.be/peter/post/2011/06/21/Remote-debugging-an-Azure-Worker-role-using-Azure-Connect-Remote-desktop-and-the-remote-debugger.aspx
http://blogs.u2u.be/peter/post/2011/06/24/Remote-debugging-an-Azure-worker-role-using-Azure-Connect-remote-desktop-and-remote-debugger-part-2.aspx
http://blogs.u2u.be/peter/post/2011/06/26/Remote-debugging-a-Windows-Azure-Worker-Role-using-Azure-Connect-Remote-desktop-and-the-remote-debugger-part-3.aspx
The key takeaway is that you don't need to actually install Visual Studio on Azure, you only need to copy the Remote Debugger bits and then use Azure Connect to add your developer machine to the Virtual Network.
You can set up remote debugging with Visual Studio 2012:
http://code.msdn.microsoft.com/Remote-Debugging-Windows-dedaaec9
When you say:
But instead all I saw was a blank desktop with some base level server inspection and management routines.
this is exactly what you get with an Azure VM. It's a basic OS install, plus the bare minimum of Azure stuff it needs to run and the code you've uploaded. There's no fancy monitoring or health checks available on the machine by default; you're expected to provide those yourself if you want them without having to RDP into the machine to check on it.
RDP is very good for tracking down certain problems, like checking that a startup task will run, checking which directories items are installed in, and just generally being nosey. If you need extra tools to track down a problem, you can install them while you're connected to the server. For example, I have RDPed into a server and installed the Microsoft Debugging Tools to track down a memory issue.
I suppose you could remote into your VM, install Visual Studio there, and debug the process...
I also suppose it might be possible to enable remote debugging (not sure what's involved there, but such a thing exists, and it works over TCP) and debug from a local instance of Visual Studio.
To my knowledge, neither is commonly done.
Based on other answers, you would be better off writing a log file to local storage. You can read the file over RDP if you really have to. Keep in mind, debugging on Azure isn't really simple, and rightly so.
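A minimal sketch of that approach, assuming a local storage resource named "LogStorage" has been declared in ServiceDefinition.csdef (the resource name and file name are placeholders):

    using System;
    using System.IO;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class FileLogger
    {
        // Appends a line to a log file inside the role's local storage
        // resource. "LogStorage" must be declared in ServiceDefinition.csdef,
        // e.g. <LocalStorage name="LogStorage" sizeInMB="100" />.
        public static void Write(string message)
        {
            LocalResource storage = RoleEnvironment.GetLocalResource("LogStorage");
            string path = Path.Combine(storage.RootPath, "worker.log");
            File.AppendAllText(path, DateTime.UtcNow + " " + message + Environment.NewLine);
        }
    }

You can then RDP in and open the file from the local resource folder whenever you need to see what the role is doing.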
What I was thinking, though, was that maybe you could run the process using the user's credentials. I can't verify it at the moment, but you'd have a better shot at seeing the UI when you RDP in.
I have a VM (#1) with SharePoint 2010 and SQL Server 2008 installed. It suits our needs in terms of load and capacity. If it breaks down, we can revert it to a snapshot.
The development process still goes on.
The question is: what is a good approach for updating the production VM?
Variant 1:
Have a copy of the production VM (#2).
When an iteration is finished (development -> testing -> fixing) and we are ready to make the update, we swap the VMs (#1 <--> #2).
The testing VM #2 becomes production and #1 becomes the testing one.
"+": We go live with a fully tested solution.
"-": A mechanism to keep the VMs in sync is required.
Variant 2:
Develop, test and fix on the production VM.
Publish changes when we are ready.
"+": We don't need a mechanism to keep the VMs in sync.
"-": Crashes can happen more often.
Any suggestion will be appreciated.
TIA
Do NOT do your development on the production VM. SharePoint is very easy to break during development and you will more than likely bring down your production environment at some point. The risk is basically too high.
Do your development on a separate system. Package your solution/changes properly as a WSP and test it on another system (a staging environment sitting between your dev environment and production, essentially a copy of production). Once it passes all testing on your staging server, deploy the WSP to production.
Swapping systems is a pain in the arse when it comes to SharePoint - you have Alternate Access Mappings, IIS bindings etc. to worry about - and it takes more time and effort than simply uploading a new WSP and hitting "deploy" (obviously, after testing on your staging server).
It really depends on "what" you are developing:
Content only
It is often easier to do this on the production machine. The standard publishing framework should be sufficient to hide changes from standard users.
Or you could use the content deployment features of SharePoint to move it from the dev farm to the production farm.
Or you could use the content deployment tool
Webparts, simple components
In 2010, these should be done as sandboxed solutions (if you can get away with it); then they are very simple to upload/change in your prod environment. Build in your dev environment, then upload through the SharePoint web pages.
Complex components
Things that require a farm administrator to install on the server (jobs, complex web parts, etc.) should be developed with Visual Studio on your dev box, then moved to prod in a fully tested way. Try to have a testing/staging environment.
A combination of all of the above
This usually depends on how much risk your client is willing to accept. For instance, if it is a brand-new server, there is not much risk of taking down users of the system.
I would not recommend installing Visual Studio on a PRODUCTION SharePoint server.
I just built a simple Hello World Windows Azure service containing just one web role, using Visual Studio 2008 and Windows Azure Tools for VS 1.2. I am pretty new to this and have been trying to deploy the application all afternoon. I'm in Australia and deploying to the "Anywhere Asia" region.
I have pretty much followed the info provided on MSDN; the upload gets to 95% and then, after about ten minutes, the deployment disappears. I have also tried the old Windows Azure developer portal, and 30 minutes later I still cannot access the service; its status is either Busy or Stopped.
I have the introductory offer for an extra small compute instance on the subscription I am deploying to. Can anyone with Windows Azure experience elaborate on deploying apps and on the status of my application? I am very keen to get into the platform and this issue has just about spoiled my weekend.
Most likely it is related to using UseDevelopmentStorage=true in a connection string. I have accidentally done this a couple of times myself, and things just magically don't work with no explanation. Missing DLLs are usually a little harder to track down, as the application may or may not start depending on where the failure happens. Trace logging and/or infrastructure logging is the best way to find out whether a DLL is missing, if you can get your application to run that far.
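If it helps, this is roughly what the SDK 1.x diagnostics wiring looks like in a role's OnStart; it's a sketch only, and "DiagnosticsConnectionString" is the setting name the Visual Studio template generates, so adjust it if yours differs:

    using System;
    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Transfer trace logs to storage every minute so startup
            // failures (missing DLLs, bad connection strings) leave
            // something behind to look at.
            DiagnosticMonitorConfiguration config =
                DiagnosticMonitor.GetDefaultInitialConfiguration();
            config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
            DiagnosticMonitor.Start("DiagnosticsConnectionString", config);

            return base.OnStart();
        }
    }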
As pointed out already, the best place to start is making the simplest "Hello World!" you possibly can and start extending from there. Yes it will take you a while to make progress but the experiences you gain from this will be invaluable moving forward.
Two things to check before deployment:
1. Change the roles' connection strings to point to Azure Storage instead of UseDevelopmentStorage (see the sketch after this list).
2. All references that don't belong to the .NET Framework should be set to Copy Local = True.
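For point 1, here is a sketch of how role code typically reads that setting with the SDK 1.x storage client ("DataConnectionString" is a placeholder; the important part is that the value in ServiceConfiguration.cscfg is a real account, e.g. DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..., rather than UseDevelopmentStorage=true):

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class Storage
    {
        public static CloudStorageAccount GetAccount()
        {
            // Let FromConfigurationSetting read the value out of the
            // role's service configuration (.cscfg).
            CloudStorageAccount.SetConfigurationSettingPublisher((settingName, publish) =>
            {
                publish(RoleEnvironment.GetConfigurationSettingValue(settingName));
            });

            return CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        }
    }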
I would guess that the deployment is going through successfully but that the role instances are not able to start. The most common causes of this are either references to development storage while deployed (UseDevelopmentStorage=true) or a reference to an assembly with Copy Local != true.