Is there a way to create a Generation 2 VM using the Azure SDK?

Azure supports UEFI through Generation 2 VMs.
I am able to create a Generation 2 VM using the Azure web console, but I cannot find a way to specify the generation of the VM through the Azure SDK.
I have found a link in Microsoft Docs to create a managed disk using PowerShell:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/generation-2#frequently-asked-questions
I looked into the online documentation of the Azure ComputeClient#virtual_machines#create_or_update() API, but I still cannot find anything in the Python docs about specifying the HyperVGeneration for the VM.

Yes. It's kind of counterintuitive, but it goes like this: you specify the VM generation on the disk; the VM created off of that disk will then be of the same generation.
If you already have a Gen2 disk, you just pick it up and specify it when creating the VM. However, I had to create the disk from a VHD file. So when you're creating the disk, you're going to need an IWithCreate instance and then chain a call to the WithHyperVGeneration method. Like this (C#):
public async Task<IDisk> MakeDisk(string vhdPath)
{
    // "name" here is whatever name you want to give the managed disk.
    return await Azure.Disks.Define(name)
        .WithRegion(Region.EuropeWest)
        .WithExistingResourceGroup("my-resources")
        .WithWindowsFromVhd(vhdPath)
        .WithStorageAccount("saname")
        .WithHyperVGeneration(HyperVGeneration.V2) // <--- This is how you specify the generation
        .WithSku(DiskSkuTypes.PremiumLRS)
        .CreateAsync();
}
Then create the VM:
var osDisk = await MakeDisk("template.vhd");
var vm = await Azure.VirtualMachines.Define("template-vm")
    .WithRegion(Region.EuropeWest)
    .WithExistingResourceGroup("the-rg")
    .WithExistingPrimaryNetworkInterface("some-nic")
    .WithSpecializedOSDisk(osDisk, OperatingSystemTypes.Windows) // <-- Pay attention
    .WithSize(VirtualMachineSizeTypes.StandardB2s)
    .CreateAsync();

Related

How to create an Azure Batch pool based on a custom VM image using the Java SDK

I want to use a custom Ubuntu VM image that I had created for my batch job. I can create a new pool by selecting the custom image in the Azure portal itself, but I wanted to write a build script that does the same using the Azure Batch Java SDK. This is what I was able to come up with:
List<NodeAgentSku> skus = client.accountOperations().listNodeAgentSkus().findAll({ it.osType() == OSType.LINUX })
String skuId = null
ImageReference imageRef = new ImageReference().withVirtualMachineImageId('/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP_NAME/providers/Microsoft.Compute/images/$CUSTOM_VM_IMAGE_NAME')

for (NodeAgentSku sku : skus) {
    for (ImageReference imgRef : sku.verifiedImageReferences()) {
        if (imgRef.publisher().equalsIgnoreCase(osPublisher) && imgRef.offer().equalsIgnoreCase(osOffer) && imgRef.sku() == '18.04-LTS') {
            skuId = sku.id()
            break
        }
    }
}

VirtualMachineConfiguration configuration = new VirtualMachineConfiguration()
configuration.withNodeAgentSKUId(skuId).withImageReference(imageRef)
client.poolOperations().createPool(poolId, poolVMSize, configuration, poolVMCount)
But I am getting this exception:
Caused by: com.microsoft.azure.batch.protocol.models.BatchErrorException: Status code 403, {
"odata.metadata":"https://analyticsbatch.eastus.batch.azure.com/$metadata#Microsoft.Azure.Batch.Protocol.Entities.Container.errors/#Element","code":"AuthenticationFailed","message":{
"lang":"en-US","value":"Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:bf9bf7fd-2ef5-497b-867c-858d081137e6\nTime:2019-04-17T23:08:17.7144177Z"
},"values":[
{
"key":"AuthenticationErrorDetail","value":"The specified type of authentication SharedKey is not allowed when external resources of type Compute are linked."
}
]
}
I definitely think the way I am getting the skuId is wrong. Since client.accountOperations().listNodeAgentSkus() does not list the custom image, I just thought of deriving the skuId from the Ubuntu version that I had used to create the custom image.
So what is the correct way to create a pool using a custom VM image for an Azure Batch account with the Java SDK?
You must use Azure Active Directory credentials in order to create a pool with a custom image. It is in the prerequisites section of the Batch Custom Image doc.
This is a frequently asked question:
Custom Image under AzureBatch ImageReference class not working
Azure Batch Pool: How do I use a custom VM Image via Python?
As the error shows, you need to authenticate to Azure first, and then you can create the pool with the custom image as you want.
First, you need an Azure Batch account, which you can create in the Azure portal or with the Azure CLI. You can also create the Batch account through Java; see Manage the Azure Batch Account through Java.
Then you need to authenticate to your Batch account. There are two ways to do that:
Use the account name, key, and URL to create a BatchSharedKeyCredentials instance for authentication with the Azure Batch service. The BatchClient class is the simplest entry point for creating and interacting with Azure Batch objects.
BatchSharedKeyCredentials cred = new BatchSharedKeyCredentials(batchUri, batchAccount, batchKey);
BatchClient client = BatchClient.open(cred);
The other way is to use AAD (Azure Active Directory) authentication to create the client, which is what a custom image pool requires. See this document for details.
BatchApplicationTokenCredentials cred = new BatchApplicationTokenCredentials(batchEndpoint, clientId, applicationSecret, applicationDomain, null, null);
BatchClient client = BatchClient.open(cred);
Then you can create the pool with the custom image as you want. Just like this:
System.out.println("Creating a pool using a custom image.");
VirtualMachineConfiguration configuration = new VirtualMachineConfiguration();
configuration.withNodeAgentSKUId(skuId).withImageReference(imageRef);
client.poolOperations().createPool(poolId, poolVMSize, configuration, poolVMCount);
System.out.println("Created a Pool: " + poolId);
For more details, see Azure Batch Libraries for Java.

Cannot connect to DocumentDb directly after having deployed the DocumentDb account

I have an ARM template that I use to deploy a DocumentDB account as well as other Azure resources to a resource group. I want my ARM template to set up a Stream Analytics job that uses the DocumentDB as output. In order to do this, the DocumentDB account created by the ARM template needs to have a database and a collection set up as well. I cannot find a way to do this from an ARM template, so I have written a PowerShell cmdlet to create the database and collection for me.
The Stream Analytics job cannot be created by the first ARM template since it depends on having the database and collection created first. Instead I have to divide the deployment into two ARM templates, the first setting up the DocDb account and the second setting up the SA job.
The problem is that I cannot create a database in the DocDB account directly after having deployed the account via the ARM template. I get an exception with the following message: "The remote name could not be resolved: 'test.documents.azure.com'" when I try to execute the CreateDatabaseAsync method with the DocDbEndpoint and AuthKey I get back from the ARM template deployment.
Are there any timing issues after having deployed Azure resources using an ARM template before you can access them programmatically? This does not seem to be a problem with other Azure resources created this way.
Any help on this matter is highly appreciated, as well as advice on good practices for working with ARM templates, DocumentDB, and Stream Analytics jobs.
Update 2016-03-23
Code for setting up the connection to the DocumentDB to create the database.
Uri endpointUri = new Uri(documentDbEndPoint);
DocumentClient client = new DocumentClient(endpointUri, authKey);
var db = await client.CreateDatabaseAsync(new Database { Id = databaseId });
return db;
Where the documentDbEndPoint is in the form of: https://name.documents.azure.com:443/ and name is the name of my DocDB account just created by the ARM template deployment.
I have the code in a library which I can either call from a Console application or from a Powershell script by loading the library with:
Add-Type -Path <path to library dll file>
No matter whether I use the PowerShell script or the console application, I get the same error if I try to create a database right after having created the DocDB account using the ARM template. If I wait an hour or so, both the PowerShell script and the console application work and can create a database in the account.
It seems like there is some kind of timing issue with Azure setting up the DNS records for the newly created DocDB account before it can be accessed using the DocDB API.
Update 2 2016-03-23
I just tried to create a DocDB account directly from the portal, and doing this instead of creating it from an ARM template makes it possible to create a database in the account with my PowerShell script and console application immediately.
This timing issue has since been fixed, and you should be able to use the account right after the ARM template deployment.
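If you ever hit a similar transient delay again (for example DNS propagation right after a deployment), wrapping the first call in a retry loop is a common mitigation. This is only a sketch with a hypothetical helper name, reusing the DocumentClient setup from the question; the attempt count and delay are arbitrary values picked for illustration:
// Sketch: retry CreateDatabaseAsync to ride out transient delays right after
// the ARM deployment. documentDbEndPoint, authKey and databaseId are the same
// values used in the question. Requires the Microsoft.Azure.Documents and
// Microsoft.Azure.Documents.Client namespaces.
private static async Task<ResourceResponse<Database>> CreateDatabaseWithRetryAsync(
    string documentDbEndPoint, string authKey, string databaseId)
{
    var client = new DocumentClient(new Uri(documentDbEndPoint), authKey);
    const int maxAttempts = 10;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return await client.CreateDatabaseAsync(new Database { Id = databaseId });
        }
        catch (Exception) when (attempt < maxAttempts)
        {
            // Endpoint not reachable/resolvable yet - wait and try again.
            await Task.Delay(TimeSpan.FromSeconds(30));
        }
    }
}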

Tool or usage example to generate and view SAS (Shared Access Signatures) of both Azure Block Blob and Azure File Share

I am looking for a tool or usage example to generate and view SAS (Shared Access Signatures) for both Azure block blobs and Azure file shares. There are lots of examples for block blobs and containers, but what about Azure file share SAS examples or tools?
The ability to create a Shared Access Signature on a File Service share was announced in the latest version of the REST API. You must use Storage Client Library 5.0.0 for that purpose.
First, install this library from Nuget:
Install-Package WindowsAzure.Storage -Version 5.0.0
Then the process of creating a SAS on a File Service share is very similar to creating a SAS on a blob container. Please see the sample code below:
static void FileShareSas()
{
    // accountName / accountKey are your storage account credentials.
    var account = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
    var fileClient = account.CreateCloudFileClient();
    var share = fileClient.GetShareReference("share");
    var sasToken = share.GetSharedAccessSignature(new Microsoft.WindowsAzure.Storage.File.SharedAccessFilePolicy()
    {
        Permissions = Microsoft.WindowsAzure.Storage.File.SharedAccessFilePermissions.List,
        SharedAccessExpiryTime = new DateTimeOffset(DateTime.UtcNow.AddDays(1))
    });
}
In the above code, we're creating a SAS with List permission that will expire one day from the current date/time (in UTC).
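To actually use the token, you append it to the URI of the share or of a file in it. A small follow-up sketch, continuing from the code above; "myfile.txt" is just a placeholder name, and note the sample token only grants List permission, so you would add Read to SharedAccessFilePermissions if you want to download individual files:
// Continuing from the sample above: combine a URI with the SAS token
// (GetSharedAccessSignature returns the token as a query string starting with '?').
var shareSasUrl = share.Uri.AbsoluteUri + sasToken;

// For a specific file (requires Read permission on the SAS):
var file = share.GetRootDirectoryReference().GetFileReference("myfile.txt");
var fileSasUrl = file.Uri.AbsoluteUri + sasToken;
Console.WriteLine(fileSasUrl);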
Also, if you're looking for a tool to do so, may I suggest you take a look at Cloud Portam (disclosure: I am building this tool). We recently released functionality to manage SAS on a share.

Can a Worker Role process call Antimalware for Azure Cloud Services programmatically?

I'm trying to find a solution that I can use to perform virus scanning on files that have been uploaded to Azure blob storage. I wanted to know if it is possible to copy the file to local storage on a Worker Role instance, call Antimalware for Azure Cloud Services to perform the scan on that specific file, and then depending on whether the file is clean, process the file accordingly.
If the Worker Role cannot call the scan programmatically, is there a definitive way to check if a file has been scanned and whether it is clean or not once it has been copied to local storage (I don't know if the service does a real-time scan when new files are added, or only runs on a schedule)?
There isn't a direct API that we've found, but the anti-malware services conform to the standards used by Windows desktop virus checkers in that they implement the IAttachmentExecute COM API.
So we ended up implementing a file upload service that writes the uploaded file to a Quarantine local resource, then calling the IAttachmentExecute API. If the file is infected then, depending on the anti-malware service in use, it will either throw an exception, silently delete the file or mark it as inaccessible. So by attempting to read the first byte of the file, we can test if the file remains accessible.
// Requires: using System; using System.IO; using System.Runtime.InteropServices;
// clientGuid is any GUID identifying your application (e.g. Guid.NewGuid()),
// and path points at the uploaded file in the Quarantine local resource.
var type = Type.GetTypeFromCLSID(new Guid("4125DD96-E03A-4103-8F70-E0597D803B9C"));
var svc = (IAttachmentExecute)Activator.CreateInstance(type);
try
{
    svc.SetClientGuid(ref clientGuid);
    svc.SetLocalPath(path);
    svc.Save();   // triggers the anti-malware scan
}
finally
{
    svc.ClearClientState();
}

// If the file was infected it may now have been deleted or locked, so reading
// the first byte tells us whether it is still accessible (i.e. considered clean).
using (var fileStream = File.OpenRead(path))
{
    fileStream.ReadByte();
}

[Guid("73DB1241-1E85-4581-8E4F-A81E1D0F8C57")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IAttachmentExecute
{
    void SetClientGuid(ref Guid guid);
    void SetLocalPath(string pszLocalPath);
    void Save();
    void ClearClientState();
}
I think the best way for you to find out is simply to take an Azure VM (IaaS) and activate the Microsoft Antimalware extension. Then you can log into it and do all the necessary checks and tests against the service.
Later, you can apply all of this to the Worker Role (there is a similar PaaS extension available for that, called PaaSAntimalware).
See the next excerpt from https://msdn.microsoft.com/en-us/library/azure/dn832621.aspx:
"In PaaS, the VM agent is called GuestAgent, and is always available on Web and Worker Role VMs. (For more information, see Azure Role Architecture.) The VM agent for Role VMs can now add extensions to the cloud service VMs in the same way that it does for persistent Virtual Machines.
The biggest difference between VM Extensions on role VMs and persistent VMs is that with role VMs, extensions are added to the cloud service first and then to the deployments within that cloud service.
Use the Get-AzureServiceAvailableExtension cmdlet to list all available role VM extensions."

Copying storage data from one Azure account to another

I would like to copy a very large storage container from one Azure storage account into another (which also happens to be in another subscription).
I would like an opinion on the following options:
Write a tool that would connect to both storage accounts and copy blobs one at a time using CloudBlob's DownloadToStream() and UploadFromStream(). This seems to be the worst option because it will incur costs when transferring the data and also be quite slow because data will have to come down to the machine running the tool and then get re-uploaded back to Azure.
Write a worker role to do the same - this should theoretically be faster and not incur any cost. However, this is more work.
Upload the tool to a running instance bypassing the worker role deployment and pray the tool finishes before the instance gets recycled/reset.
Use an existing tool - have not found anything interesting.
Any suggestions on the approach?
Update: I just found out that this functionality has finally been introduced (REST APIs only for now) for all storage accounts created on July 7th, 2012 or later:
http://msdn.microsoft.com/en-us/library/windowsazure/dd894037.aspx
You can also use AzCopy that is part of the Azure SDK.
Just click the download button for Windows Azure SDK and choose WindowsAzureStorageTools.msi from the list to download AzCopy.
After installing, you'll find AzCopy.exe here: %PROGRAMFILES(X86)%\Microsoft SDKs\Windows Azure\AzCopy
You can get more information on using AzCopy in this blog post: AzCopy – Using Cross Account Copy Blob
As well, you could remote desktop into an instance and use this utility for the transfer.
Update:
You can also copy blob data between storage accounts using Microsoft Azure Storage Explorer (reference link).
Since there's no direct way to migrate data from one storage account to another, you'd need to do something like what you were thinking. If this is within the same data center, option #2 is the best bet, and will be the fastest (especially if you use an XL instance, giving you more network bandwidth).
As far as complexity, it's no more difficult to create this code in a worker role than it would be with a local application. Just run this code from your worker role's Run() method.
To make things more robust, you could list the blobs in your containers, then place specific file-move request messages into an Azure queue (and optimize by putting more than one object name per message). Then use a worker role thread to read from the queue and process objects. Even if your role is recycled, at worst you'd reprocess one message. For performance increase, you could then scale to multiple worker role instances. Once the transfer is complete, you simply tear down the deployment.
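Here is a rough sketch of that queue-based pattern, with hypothetical names and one blob name per message for simplicity (assuming the Microsoft.WindowsAzure.Storage client, 2.x or later): the producer enqueues copy requests, and each worker role instance drains the queue from its Run() loop.
// Sketch only: producer lists the source blobs and enqueues one message per blob;
// workers dequeue and copy. Names and structure are illustrative.
using System;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Queue;

public static class QueuedBlobCopy
{
    // Producer: enqueue one copy request per source blob.
    public static void EnqueueCopyRequests(CloudBlobContainer source, CloudQueue queue)
    {
        queue.CreateIfNotExists();
        foreach (var item in source.ListBlobs(null, useFlatBlobListing: true))
        {
            var blob = (ICloudBlob)item;
            queue.AddMessage(new CloudQueueMessage(blob.Name));
        }
    }

    // Consumer: call this in a loop from the worker role's Run() method.
    public static void ProcessOne(CloudBlobContainer source, CloudBlobContainer target, CloudQueue queue)
    {
        var msg = queue.GetMessage(TimeSpan.FromMinutes(10)); // hidden while we work on it
        if (msg == null) return;

        var sourceBlob = source.GetBlockBlobReference(msg.AsString);
        var targetBlob = target.GetBlockBlobReference(msg.AsString);
        using (var stream = sourceBlob.OpenRead())
        {
            targetBlob.UploadFromStream(stream);
        }

        // Delete only after a successful copy; if the role is recycled mid-copy,
        // the message reappears and the blob is simply copied again.
        queue.DeleteMessage(msg);
    }
}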
UPDATE - On June 12, 2012, the Windows Azure Storage API was updated, and now allows cross-account blob copy. See this blog post for all the details.
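With that API the copy runs entirely on the storage service side, so whatever issues the request (a worker role or even your local console) never touches the data. A minimal sketch, assuming the Microsoft.WindowsAzure.Storage client where CloudBlockBlob exposes StartCopyFromBlob (renamed StartCopy in later versions); the source blob must be readable by the destination service, e.g. public or via a SAS appended to its URI:
// Sketch: server-side, cross-account blob copy. The destination service pulls
// the bytes straight from the source URI; no data flows through this machine.
using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage.Blob;

public static class CrossAccountCopy
{
    public static void Copy(Uri sourceBlobUriWithSas, CloudBlobContainer targetContainer, string targetBlobName)
    {
        var targetBlob = targetContainer.GetBlockBlobReference(targetBlobName);

        // StartCopyFromBlob only schedules the copy; poll CopyState until it is done.
        targetBlob.StartCopyFromBlob(sourceBlobUriWithSas);
        do
        {
            Thread.Sleep(1000);
            targetBlob.FetchAttributes();
        } while (targetBlob.CopyState.Status == CopyStatus.Pending);
    }
}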
Here is some code that leverages the .NET SDK for Azure, available at http://www.windowsazure.com/en-us/develop/net:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.WindowsAzure.StorageClient;
using System.IO;
using System.Net;

namespace benjguinAzureStorageTool
{
    class Program
    {
        private static Context context = new Context();

        static void Main(string[] args)
        {
            try
            {
                string usage = string.Format("Possible Usages:\n"
                    + "benjguinAzureStorageTool CopyContainer account1SourceContainer account2SourceContainer account1Name account1Key account2Name account2Key\n"
                    );
                if (args.Length < 1)
                    throw new ApplicationException(usage);
                int p = 1;
                switch (args[0])
                {
                    case "CopyContainer":
                        if (args.Length != 7) throw new ApplicationException(usage);
                        context.Storage1Container = args[p++];
                        context.Storage2Container = args[p++];
                        context.Storage1Name = args[p++];
                        context.Storage1Key = args[p++];
                        context.Storage2Name = args[p++];
                        context.Storage2Key = args[p++];
                        CopyContainer();
                        break;
                    default:
                        throw new ApplicationException(usage);
                }
                Console.BackgroundColor = ConsoleColor.Black;
                Console.ForegroundColor = ConsoleColor.Yellow;
                Console.WriteLine("OK");
                Console.ResetColor();
            }
            catch (Exception ex)
            {
                Console.WriteLine();
                Console.BackgroundColor = ConsoleColor.Black;
                Console.ForegroundColor = ConsoleColor.Yellow;
                Console.WriteLine("Exception: {0}", ex.Message);
                Console.ResetColor();
                Console.WriteLine("Details: {0}", ex);
            }
        }

        private static void CopyContainer()
        {
            CloudBlobContainer container1Reference = context.CloudBlobClient1.GetContainerReference(context.Storage1Container);
            CloudBlobContainer container2Reference = context.CloudBlobClient2.GetContainerReference(context.Storage2Container);
            if (container2Reference.CreateIfNotExist())
            {
                Console.WriteLine("Created destination container {0}. Permissions will also be copied.", context.Storage2Container);
                container2Reference.SetPermissions(container1Reference.GetPermissions());
            }
            else
            {
                Console.WriteLine("destination container {0} already exists. Permissions won't be changed.", context.Storage2Container);
            }
            foreach (var b in container1Reference.ListBlobs(
                new BlobRequestOptions(context.DefaultBlobRequestOptions)
                { UseFlatBlobListing = true, BlobListingDetails = BlobListingDetails.All }))
            {
                var sourceBlobReference = context.CloudBlobClient1.GetBlobReference(b.Uri.AbsoluteUri);
                var targetBlobReference = container2Reference.GetBlobReference(sourceBlobReference.Name);
                Console.WriteLine("Copying {0}\n to\n{1}",
                    sourceBlobReference.Uri.AbsoluteUri,
                    targetBlobReference.Uri.AbsoluteUri);
                using (Stream targetStream = targetBlobReference.OpenWrite(context.DefaultBlobRequestOptions))
                {
                    sourceBlobReference.DownloadToStream(targetStream, context.DefaultBlobRequestOptions);
                }
            }
        }
    }
}
It's very simple with AzCopy. Download the latest version from https://azure.microsoft.com/en-us/documentation/articles/storage-use-azcopy/ and in AzCopy type:
Copy a blob within a storage account:
AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer1 /Dest:https://myaccount.blob.core.windows.net/mycontainer2 /SourceKey:key /DestKey:key /Pattern:abc.txt
Copy a blob across storage accounts:
AzCopy /Source:https://sourceaccount.blob.core.windows.net/mycontainer1 /Dest:https://destaccount.blob.core.windows.net/mycontainer2 /SourceKey:key1 /DestKey:key2 /Pattern:abc.txt
Copy a blob from the secondary region
If your storage account has read-access geo-redundant storage enabled, then you can copy data from the secondary region.
Copy a blob to the primary account from the secondary:
AzCopy /Source:https://myaccount1-secondary.blob.core.windows.net/mynewcontainer1 /Dest:https://myaccount2.blob.core.windows.net/mynewcontainer2 /SourceKey:key1 /DestKey:key2 /Pattern:abc.txt
I'm a Microsoft Technical Evangelist and I have developed a sample and free tool (no support/no guarantee) to help in these scenarios.
The binaries and source-code are available here: https://blobtransferutility.codeplex.com/
The Blob Transfer Utility is a GUI tool to upload and download thousands of small/large files to/from Windows Azure Blob Storage.
Features:
Create batches to upload/download
Set the Content-Type
Transfer files in parallel
Split large files in smaller parts that are transferred in parallel
The 1st and 3rd features are the answer to your problem.
You can learn from the sample code how I did it, or you can simply run the tool and do what you need to do.
Write your tool as a simple .NET Command Line or Win Forms application.
Create and deploy a dummy web/worker role with RDP enabled
Login to the machine via RDP
Copy your tool over the RDP connection
Run the tool on the remote machine
Delete the deployed role.
Like you, I am not aware of any off-the-shelf tools that support a copy-between function.
You may like to consider just installing Cloud Storage Studio into the role though and dumping to disk then re-uploading. http://cerebrata.com/Products/CloudStorageStudiov2/Details.aspx?t1=0&t2=7
You could use 'Azure Storage Explorer' (free) or some other such tool. These tools provide a way to download and upload content. You will need to manually create containers and tables, and of course this will incur a transfer cost, but if you are short on time and your content is of reasonable size then this is a viable option.
I recommend using azcopy; you can copy a whole storage account, a container, a directory, or a single blob. Here is an example of cloning a whole storage account:
azcopy copy 'https://{SOURCE_ACCOUNT}.blob.core.windows.net{SOURCE_SAS_TOKEN}' 'https://{DESTINATION_ACCOUNT}.blob.core.windows.net{DESTINATION_SAS_TOKEN}' --recursive
You can get the SAS tokens from the Azure Portal. Navigate to the storage account overview (for both source and destination), then in the side navigation click on "Shared access signature" and generate your own.
More examples here
I had to do something similar to move 600 GB of content from a local file system to Azure Storage. After a couple of iterations of code, I finally ended up taking 'Azure Storage Explorer' and extending it with the ability to select folders instead of just files, have it recursively drill into the multiple selected folders, and load a list of source/destination copy statements into an Azure queue. Then, in the upload section of 'Azure Storage Explorer', I added a queue section to pull from the queue and execute the copy operation.
Then I launched about 10 instances of the 'Azure Storage Explorer' tool and had each pull from the queue and execute the copy operations. I was able to move the 600 GB of items in just over 2 days. I also added smarts to use the modified timestamps on files, so it skips files that have already been copied and does not re-add a file to the queue if it is already in sync. Now I can run "updates" or syncs within an hour or two across the whole library of content.
Try CloudBerry Explorer. It copies blob within and between subscriptions.
For copying between subscriptions, edit the storage account container's access from Private to Public Blob.
The copying process took a few hours to complete. If you choose to reboot your machine, the process will continue. You can check the status by refreshing the target storage account's container in the Azure management UI and checking the timestamp; the value keeps updating until the copy process completes.
