IIS Virtual Directory in Azure

I've been told that you can create virtual directories in IIS hosted on Azure, but I'm struggling to find any info on this as it's a relatively new feature. I'd like to point the virtual directory to an Azure Drive (XDrive, an NTFS drive) so that I can reference resources on the drive.
I'm migrating an on-premises website to Azure and need to minimise the amount of rework / redevelopment required. Currently the website has access to shared content folders, and I'm trying to mimic a similar set-up due to tight timescales.
Does anyone have any knowledge of this, or pointers? I can't find any information on how to do it.
Any pointers you have would be great.
Thanks
Steve

I haven't had a moment to check myself, but get the latest copy of the Windows Azure Platform Training Kit. I'm fairly certain it has a hands-on lab that demonstrates the new feature. However, I do not believe that lab includes creating a virtual directory on an Azure Drive. Even if you can point it there, you may run into some .NET security limitations. http://www.microsoft.com/downloads/en/details.aspx?FamilyID=413e88f8-5966-4a83-b309-53b7b77edf78&displaylang=en
Another resource to look into might be the work Cory Fowler is doing: http://blog.syntaxc4.net/ He's been spending some time of late really digging into the internals of the new 1.3 roles, so he might be able to lend you a hand.

I've been kicking this issue around for some time now. I can upload a VHD to Azure, and I can create a virtual directory in Azure that points to a physical location on my PC (when running in the dev fabric), but here is the catch:
I can't find any examples of doing both at the same time, i.e. mounting a drive and then mapping a virtual directory to it.
I've had a look in the 1.3 SDK and at various blogs, but I can't see any pointers on this - I guess I may have got hold of the wrong end of the stick. If anyone knows how, or whether, this can be done, that would be great.
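For what it's worth, here is a minimal, untested sketch of the second half of that idea - creating the virtual directory at runtime - on the assumption that the role's startup code has already mounted the Azure Drive via CloudDrive.Mount and captured the drive letter (which isn't known until runtime). The site name, virtual directory name, and paths below are all placeholders; this is guesswork rather than a recipe:

    # A sketch only: assumes the Azure Drive is already mounted (e.g. by
    # CloudDrive.Mount in the role's OnStart) and that this script runs
    # elevated on the web role instance itself, where the site exists.
    Import-Module WebAdministration
    $driveLetter = "X:"   # placeholder: use the letter returned by CloudDrive.Mount
    # Note: under SDK 1.3 full IIS the site name is generated per deployment
    # (something like "<deploymentId>_<roleName>_0_Web"), not "Default Web Site".
    New-WebVirtualDirectory -Site "Default Web Site" `
                            -Name "SharedContent" `
                            -PhysicalPath "$driveLetter\Content"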

Related

Azure Storage - File Share - Move 16m files in nested folders

Posting here as Server Fault doesn't seem to have the detailed Azure knowledge.
I have an Azure Storage account with a file share. The file share is connected to an Azure VM through a mapped drive. An FTP server on the VM accepts a stream of files and stores them directly in the file share.
There are no other connections. Only I have Azure admin access; a limited number of support people have access to the VM.
Last week, for unknown reasons, 16 million files, nested in many sub-folders (by origin and date), moved instantly into an unrelated subfolder three levels deep.
I'm baffled as to how this can happen. There is a clear, instant cut-off point when the files moved.
As a result, I'm seeing increased costs on LRS. I'm assuming because internally Azure storage is replicating the change at my expense.
I have attempted to copy the files back using a VM and AzCopy. This process crashed midway through, leaving me with a half-completed copy operation. The failed attempt took days, which makes me confident the original move wasn't one of the support guys dragging a folder by accident.
Questions:
Is it possible to instantly move so many files? If so, how?
Is there a solid way I can move the files back, taking into account the half-copied files? I mean an Azure back-end operation, rather than writing an app / PowerShell / AzCopy.
Is there a cost-efficient way of doing this? (I'm on the Transaction Optimised tier.)
Do I have a case here to get Microsoft to do something? We didn't move them, so I assume something internal messed up.
Thanks
A tool that supports server-side copy (like AzCopy) can move the files quickly, because the data is copied within the storage service rather than being pulled down to your machine and pushed back up. If you want to investigate the root cause, I recommend opening a support case: the Azure support team can, on a best-effort basis, help guide you on this.
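As a rough sketch (the account, share, and folder names are placeholders, and the SAS token is elided), a server-side copy between two locations in the same file share might look like this:

    # Placeholders throughout - substitute your own account, share, folders and SAS.
    # Because both source and destination are Azure Files URLs, AzCopy performs a
    # server-side copy: the data does not flow through the machine running AzCopy.
    $sas = "?sv=<your-SAS-token>"
    azcopy copy `
        "https://mystorageacct.file.core.windows.net/myshare/wrong/sub/folder$sas" `
        "https://mystorageacct.file.core.windows.net/myshare$sas" `
        --recursive `
        --overwrite ifSourceNewer   # may help skip files a previous half-finished run already copied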

Azure VM Security Center scans seem to be inaccurate; how to force a re-scan?

I'm having an issue with Azure Security Center that I'm hoping someone could help me out with.
I have multiple Red Hat Linux VMs in several resource groups under one subscription. When I initially ran through the capabilities of Azure Security Center, there were multiple recommendations for system updates, firewall settings, etc. that I was told to apply on my systems, which I went ahead and did. My scans on the VMs themselves come up clean, with everything installed and up to date.
Unfortunately, Security Center thinks that they are still out of date, and if I check the last scan time, some are over two weeks old. Is there any way to force a re-scan of the system? Or am I just supposed to wait until Azure feels like re-scanning?
Example: some VMs that aren't re-scanning
Also, a related question: Security Center has re-scanned systems that I know are up to date, but still recommends system updates. How is Azure scanning these systems? Is there somewhere that I can specify the versions for the updates that it is suggesting I update?
Any help would be greatly appreciated. Thanks!

Deploying an application to a Linux server on Google Compute Engine

My developer has written a web scraping app on Linux on his private machine, and asked me to provide him with a Linux server. I set up an account on Google Compute Engine and created a Linux instance with enough resources and a sufficiently large SSD drive. Three weeks later he is claiming that working on Google is too complex, quote: "google is complex because their deployment process is separate for all modules. especially i will have to learn about how to set a scheduler and call remote scripts (it looks they handle these their own way)."
He suggests I create an account on Hostgator.com.
I appreciate that I am non-technical, but it cannot be that difficult to use Linux on Google?! Am I missing something? Is there any advice you could give me?
Regarding the suggestion to create an account on Hostgator to utilize what I presume would be a VPS in lieu of a virtual machine on GCE, I would suggest seeking a more concrete example from the developer.
For instance, take the comment about the "scheduler" - let's refer to it as some process that needs to execute on a regular basis (see the sketch after these questions):
How is this 'process' currently accomplished on the private machine?
How would it be done on the VPS?
What is preventing this 'process' from being done on the GCE VM?
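On that last point: a GCE VM is an ordinary Linux machine, so the standard cron scheduler is available on it just as it would be on a private machine or a VPS. A minimal sketch, with a made-up script path and schedule:

    # Hypothetical crontab entry (added with `crontab -e` on the GCE VM).
    # Runs the scraping script at the top of every hour, appending output to a log.
    0 * * * * /home/scraper/run_scrape.sh >> /home/scraper/scrape.log 2>&1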

Azure Virtual Machine Capture

I have a Windows Server running as a Virtual Machine on Azure that I have installed SQL Enterprise on. I installed SQL Server onto a new drive (E:) so that the C: drive would remain for the OS.
I followed the instructions on how to use sysprep and basically capture the image to use going forward for new instances. After following these steps and deploying a new vm with this image, nothing worked. It thought SQL was installed (it wasn't). It also didn't know anything about the additional drives or VHDs.
I came across this blog post from the Azure team, and it references a PowerShell command, Save-AzureVMImage, that may be what I'm looking for with the new "Virtual Machine Image".
Ultimately what I want is to have an image that I can use to deploy a new fully functional Windows Server instance with SQL Enterprise installed and the additional VHDs being used... Can someone point me in the right hemisphere on this please...
Until Build 2014, Save-AzureVMImage only captured the OS disk and not the data disks; since your SQL Server is installed on a separate mapped drive - a data disk - it will not be part of the snapshot/sysprep process.
There is something called VM Images, recently launched, which captures data disks alongside the OS disk. You will have to update the Azure cmdlets to get the new options for capturing an image of a running VM. Refer to the blogs below for a more detailed solution:
http://vishwanathsrikanth.wordpress.com/2014/04/16/windows-azure-vmimages-updates-to-clonevm-powershell-script/
http://blogs.msdn.com/b/windowsazure/archive/2014/04/14/vm-image-blog-post.aspx
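As a minimal sketch of the updated capture (the cloud service, VM, and image names are placeholders, and this assumes the April 2014 or later Azure PowerShell cmdlets):

    # Placeholders throughout - substitute your own cloud service, VM and image names.
    # With the updated cmdlets, -OSState marks the image as a sysprepped (generalized)
    # VM Image, and the capture includes the attached data disks, not just the OS disk.
    Save-AzureVMImage -ServiceName "mycloudservice" `
                      -Name "myvm" `
                      -ImageName "sql-enterprise-base" `
                      -OSState Generalized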
Happy Coding !!

Is it possible to NGen dlls for use in Azure Websites?

We are currently using MVC3, .NET 4.5, EF 6.1, MSSQL 2008 (dev) and SQL Azure (test and live). Our application is quite complicated, and we are encountering significant warm-up lags of around 30 seconds after an application pool recycle. We use external auto-ping services to keep the sites warm, which is OK-ish... However, it would be a much better solution to just deploy native images, so that whenever an app pool recycles, for whatever reason, we know the application will load as quickly as possible.
Hence the reason for investigating NGEN.
However I am unsure whether this is possible for Azure Websites. Some questions I have:
1) NGen requires admin privileges. As I understand it, I would need admin privileges to install native images on Azure Websites - or can I generate them on a local "same CPU" machine and copy them across?
2) Native images now require Full Trust. I believe this is no issue with WAWS.
3) Does NGen only install into the native image cache, rather than producing some sort of file that could be copied to a different location? (See the sketch below for the sort of invocation I mean.)
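For reference, this is the kind of invocation I'm asking about (the DLL path is just an example); as far as I understand it, ngen install compiles into the machine-wide native image cache rather than producing a standalone file:

    # Typical NGen usage from an elevated prompt (the DLL path is an example only).
    # "ngen install" compiles the assembly and stores the result in the machine-wide
    # native image cache (under %WINDIR%\assembly\NativeImages_*), not as a portable
    # file alongside the original DLL.
    C:\Windows\Microsoft.NET\Framework64\v4.0.30319\ngen.exe install "D:\home\site\wwwroot\bin\MyApp.dll"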
Thanks in advance.
