I want to import a number of private virtual machines that only I can launch using the ARM REST API.
How do I do that? I cannot find instructions.
This question is a little unclear - do you have a number of already pre-defined virtual machine images that you want to start up, is it multiple copies of the same machine for a load-balanced scenario, or something else?
Also, you say "only I can launch" - what do you mean by that? By definition, when you describe your resources using Azure Resource Manager, you're essentially making a desired state configuration that you then deploy to Azure, and it will create all those machines for you.
If it's simply a question of creating the configuration file, you can try out a tool such as http://Armviz.io to design your template visually. Alternatively, if you already have a group of resources that you'd like to capture into a template, go here:
http://capturegroup.azurewebsites.net
Related
I want to deploy an Azure Resource Manager template with 2 VMs, one Windows and the other Linux. I read about using the copy variable, but that's basically deploying the same resource multiple times. I couldn't figure out a way to deploy 2 different instances of the same resource type in the same template. Need your help. Thanks!
You could use a bunch of variables for that, but since Windows and Linux VM inputs are pretty different, I suggest you do not do this - it's way too much customization. It's easier to just deploy the 2 VMs as 2 individual resources.
You can use arrays to achieve your goal. Define a variable that lists the OS type for each copy iteration:

"osType": [
    "windows",
    "linux"
]

Then define a variable per OS image (for example osimagewindows and osimagelinux) and pick the right one inside the copy loop like this:

variables(concat('osimage', variables('osType')[copyIndex()]))
P.S. That's too much trouble for the value you're getting - don't bother (unless you want to do this as an exercise).
If your VMs are largely identical except for the OS disk, take a look at this sample:
https://github.com/Azure/azure-quickstart-templates/blob/master/101-vm-simple-zones/azuredeploy.json#L194
If you wanted to add in something like Password vs. SSH authn, see: https://github.com/Azure/azure-quickstart-templates/blob/master/100-marketplace-sample/azuredeploy.json#L298-L299
To use the copy loop, you'd need arrays for those conditionals, but once you add up the duplicate resources you'd otherwise have (e.g. NICs, public IPs), dealing with the arrays and conditionals in a copy loop may very well be simpler than duplicating the resources.
That help?
I'm trying to set up a situation where I drop files into a folder on one Azure VM, and they're automatically copied to another Azure VM. I was thinking about mapping a drive from the receiver to the sender and using a file watch/copy program to send the files over the mapped drive.
What's a good recommendation for a file watch/copy program that's simple and efficient, and what security setups do I need to get the two Azure boxes to "talk" to each other? They're in the same account/resource group/etc, so I'm not going outside of a virtual network or anything like that.
By default, VMs in the same virtual network can talk to each other (this is true even if default NSGs are applied). So you wouldn't have to do anything special to get that type of communication working.
To answer the second part, you might want to consider just using the built-in File Classification Infrastructure (FCI) rules to execute a short script that does the copy. See this link for a short intro to FCI rules.
Alternatively, you could use a service such as Azure Files to share files between those servers over CIFS. It really depends on why you are trying to have a copy of the file on two servers.
Hope that helps!
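If you do go the watch-and-copy route described in the question, here is a minimal sketch in Python using the watchdog package (an assumption - any file-watching library, or plain polling, would work just as well); the watched folder and the mapped share path are placeholders:

import shutil
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

WATCH_DIR = r"C:\drop"                  # local folder you drop files into (placeholder)
DEST_DIR = r"\\receiver-vm\incoming"    # mapped/mounted share on the other VM (placeholder)

class CopyHandler(FileSystemEventHandler):
    def on_created(self, event):
        if not event.is_directory:
            # In practice you may want to wait until the file is fully written first.
            shutil.copy2(event.src_path, DEST_DIR)

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(CopyHandler(), WATCH_DIR, recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()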
My lab just got a sponsorship from Microsoft Azure and I'm exploring how to utilize it. I'm new to industrial-level cloud services and pretty confused by the many terminologies and concepts. In short, here is my scenario:
I want to run the same algorithm on multiple datasets, i.e. data parallelism.
The algorithm is implemented in C++ on Linux (Ubuntu 16.04). I did my best to use static linking, but it still depends on some dynamic libraries. However, these dynamic libraries can easily be installed via apt.
Each dataset is structured, meaning the data (images, other files...) are organized in folders.
The ideal system configuration would be a bunch of identical VMs and a shared file system. Then I could submit my jobs with 'qsub' from a script or something. Is there a way to do this on Azure?
I investigated the Batch service, but I'm having trouble installing dependencies after creating the compute nodes. I also had trouble with storage: so far I have only seen examples of using Batch with Blob storage, which is unstructured.
So are there any other services in Azure that can meet my requirements?
I somehow figured it out myself based on this article: https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-classic-hpcpack-cluster/. Here is my solution:
Create an HPC Pack cluster with a Windows head node and a set of Linux compute nodes. There are several useful templates in the Marketplace.
From the head node, we can execute commands on the Linux compute nodes, either inside HPC Cluster Manager or using "clusrun" in PowerShell. We can easily install dependencies on the compute nodes via apt-get.
Create a file share inside one of the storage accounts. This can be mounted by all machines inside the cluster.
One glitch here is that, for encryption reasons, you cannot mount the file share on Linux machines outside of Azure. There are two solutions I can think of: (1) mount the file share on the Windows head node and share files from there, e.g. over FTP or SSH; (2) create another Linux VM (as a bridge), mount the file share on that VM, and use "scp" to communicate with it from outside. Since I'm not familiar with Windows, I adopted the latter solution.
For the executable, I simply uploaded the binary compiled on my local machine. Most dependencies are statically linked, but there are still a few dynamic objects. I uploaded these shared objects to Azure as well and set LD_LIBRARY_PATH when executing programs on the compute nodes.
Job submission is done on the Windows head node. To make it more flexible, I wrote a Python script that writes XML files; the Job Manager can load these XML files to create a job. Here are some instructions: https://msdn.microsoft.com/en-us/library/hh560266(v=vs.85).aspx
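For illustration, here is a minimal sketch of that kind of script. Note that the element and attribute names below (Job, Tasks, Task, CommandLine, ...) are assumptions and should be checked against the HPC Pack job XML schema described in the link above, and the paths and dataset names are placeholders:

import xml.etree.ElementTree as ET

# One task per dataset; names and paths are placeholders.
datasets = ["dataset01", "dataset02", "dataset03"]

job = ET.Element("Job", Name="myExperiment")
tasks = ET.SubElement(job, "Tasks")
for d in datasets:
    # Each task runs the same binary against a different dataset on the shared mount.
    ET.SubElement(
        tasks, "Task",
        Name=d,
        CommandLine="/mnt/share/bin/my_algo /mnt/share/data/{}".format(d),
    )

ET.ElementTree(job).write("myExperiment.xml", xml_declaration=True, encoding="utf-8")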
I believe there should be a more elegant solution with the Azure Batch service, but so far my small cluster runs pretty well with HPC Pack. Hope this post can help somebody.
Azure Files could provide you with a shared file solution for your Ubuntu boxes - details are here:
https://azure.microsoft.com/en-us/documentation/articles/storage-how-to-use-files-linux/
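As a quick sketch (run as root; the account name, share name, and key are placeholders, and the exact mount options should be checked against the article above), mounting the share over CIFS from a script could look like this:

import subprocess

ACCOUNT = "mystorageaccount"   # placeholder
SHARE = "myshare"              # placeholder
KEY = "<storage-account-key>"  # placeholder
MOUNT_POINT = "/mnt/myshare"

subprocess.check_call(["mkdir", "-p", MOUNT_POINT])
subprocess.check_call([
    "mount", "-t", "cifs",
    "//{}.file.core.windows.net/{}".format(ACCOUNT, SHARE),
    MOUNT_POINT,
    "-o", "vers=3.0,username={},password={},dir_mode=0777,file_mode=0777".format(ACCOUNT, KEY),
])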
Again, depending on your requirements, you can create a pseudo folder structure in Blob storage via the use of containers and "/" characters in the naming of your blobs.
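As a rough sketch of what that naming looks like (this uses the legacy azure-storage Python SDK's BlockBlobService as an assumption; the account, container, and file names are placeholders):

from azure.storage.blob import BlockBlobService

# Blobs have no real folders, but "/" in the blob name gives a browsable pseudo-hierarchy.
svc = BlockBlobService(account_name="mystorageaccount", account_key="<key>")  # placeholders

# Upload a local file under a "folder" path inside the container.
svc.create_blob_from_path(
    "datasets",                         # container
    "dataset01/images/img_0001.png",    # blob name with "/" acting as pseudo folders
    "/local/path/img_0001.png",
)

# List everything under one pseudo folder by prefix.
for blob in svc.list_blobs("datasets", prefix="dataset01/"):
    print(blob.name)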
To David's point, whilst Batch is generally looked at for these kinds of workloads, it may not fit your solution. VM Scale Sets (https://azure.microsoft.com/en-us/documentation/articles/virtual-machine-scale-sets-overview/) would allow you to scale your compute capacity either by load or on a schedule, depending on your workload's behavior.
My developer has written a web scraping app on Linux on his private machine, and asked me to provide him with a Linux server. I set up an account on Google Compute Engine and created a Linux VM with enough resources and a sufficiently large SSD drive. Three weeks later he is claiming that working on Google is too complex - quote: "google is complex because their deployment process is separate for all modules. especially i will have to learn about how to set a scheduler and call remote scripts (it looks they handle these their own way)."
He suggests I create an account on Hostgator.com.
I appreciate that I am non-technical, but it cannot be that difficult to use Linux on Google?! Am I missing something? Is there any advice you could give me?
Regarding the suggestion to create an account on Hostgator to utilize what I presume would be a VPS in lieu of a virtual machine on GCE, I would suggest seeking a more concrete example from the developer.
For instance, take the comment about the "scheduler" - let's refer to it as some process that needs to execute on a regular basis:
How is this 'process' currently accomplished on the private machine?
How would it be done on the VPS?
What is preventing this 'process' from being done on the GCE VM?
I have an existing program that I would like to upload to the cloud without rewriting it, and I'm wondering if that is possible.
For example, can I upload and run a Photoshop instance in the cloud and use it?
Of course not the GUI, but Photoshop has a communication SDK, so a web program should be able to control it!
As far as I can see, Worker Roles look good, but they have to be written in a specific way and I can't rewrite Photoshop!
Thanks for your attention!
As long as your existing program is 64-bit compatible and either has an installer that supports unattended/silent install or is xcopy-deployable, you can use it in Azure.
For a program that requires installation and supports unattended/silent install, you can use a startup task.
For a program that is just xcopy-deployable, put it in a folder of your worker role and make sure the "Copy to Output Directory" attribute of all required files is set to "Copy always". Then you can use it.
However, the bigger question is: what are you going to do with that "existing program" in Azure if you do not have APIs to work with?
Here's the thing: the Worker Role should be what you need - it's essentially a virtual machine running a slightly different version of Windows that you can RDP to and use normally. You can safely run more or less anything up there, but you need to automate the deployment (e.g. using startup tasks). As this can prove a bit problematic, Microsoft has created the VM Role: you create your own deployment, and that's what gets brought up when you instantiate the machine.
However! This machine is stateless, meaning that files it creates aren't saved if it gets restarted. So you need to ensure the files are saved somewhere else, e.g. in blob storage (intended for just such a purpose).
What I would do in your case is create a VM Role with Photoshop installed and a custom piece of software next to it that accepts requests via Azure Queues, does the processing, saves the result to blob storage, and then sends the file onwards to whoever requested it.
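As a rough sketch of that queue-driven worker (this uses the legacy azure-storage Python SDK purely for illustration; the queue and container names, the message format, and the process_with_photoshop() call are placeholders/assumptions):

import os
import shutil
import time
from azure.storage.queue import QueueService
from azure.storage.blob import BlockBlobService

queue = QueueService(account_name="myaccount", account_key="<key>")    # placeholders
blobs = BlockBlobService(account_name="myaccount", account_key="<key>")

def process_with_photoshop(input_path, output_path):
    # Placeholder: in reality, invoke the Photoshop communication SDK here.
    shutil.copyfile(input_path, output_path)

while True:
    # Poll the queue for work; each message is assumed to hold a local input file path.
    for msg in queue.get_messages("photoshop-jobs", num_messages=1):
        input_path = msg.content
        output_path = input_path + ".out"
        process_with_photoshop(input_path, output_path)
        # Save the result where the requester (or another role) can pick it up.
        blobs.create_blob_from_path("results", os.path.basename(output_path), output_path)
        queue.delete_message("photoshop-jobs", msg.id, msg.pop_receipt)
    time.sleep(5)   # simple polling back-off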