I have an existing program that I would like to upload to the cloud without rewriting it, and I'm wondering if that is possible.
For example, can I upload and run a Photoshop instance in the cloud and use it?
Not the GUI, of course, but Photoshop has a communication SDK, so a web program should be able to control it!
As far as I can see, worker roles look promising, but they have to be written in a specific way, and I can't rewrite Photoshop!
Thanks for your attention!
As long as your existing program is 64-bit compatible and either has an installer that supports unattended/silent installation or is xcopy-deployable, you can use it in Azure.
For a program that requires installation and supports unattended/silent install, you can use a startup task.
For a program that is just xcopy-deployable, put it in a folder of your worker role project and make sure the "Copy to Output Directory" property of all required files is set to "Copy always". Then you can use it.
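Very roughly, using the tool from role code then looks something like this (an untested C# sketch; "Tools\mytool.exe" is just a placeholder for whatever files you marked "Copy always"):

    using System;
    using System.Diagnostics;
    using System.IO;

    // Minimal sketch: run an xcopy-deployed tool from worker role code.
    // "Tools\mytool.exe" is a hypothetical relative path standing in for the
    // files added to the role project with "Copy always".
    public static class ToolRunner
    {
        public static void Run(string arguments)
        {
            // Files copied to the output directory end up next to the role binaries.
            string toolPath = Path.Combine(
                AppDomain.CurrentDomain.BaseDirectory, @"Tools\mytool.exe");

            using (var process = Process.Start(toolPath, arguments))
            {
                process.WaitForExit();
            }
        }
    }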
However, the bigger question is what you are going to do with that "existing program" in Azure if you do not have APIs to work with it.
Here's the thing: the worker role should be what you need. It's essentially a virtual machine running a slightly different version of Windows that you can RDP into and use normally. You can run more or less anything up there, but you need to automate the deployment (e.g. using startup tasks). As this can prove a bit problematic, Microsoft has created the VM Role: you create your own image, and that's what gets spun up when you instantiate the machine.
However, this machine is stateless, meaning that files it creates aren't preserved if it gets restarted. So you need to ensure the files are saved somewhere else, e.g. in blob storage (intended for just such a purpose).
What I would do in your case is create a VM Role with Photoshop installed and, next to it, a custom piece of software that accepts requests via Azure queues, does the processing, saves the result to blob storage, and then sends the file onwards to whoever requested it.
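Sketched in C# (untested; the queue name "jobs", the container name "results", and the RunPhotoshopJob helper are placeholders, and the storage calls use the Windows Azure storage client library):

    using System;
    using System.IO;
    using System.Threading;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;
    using Microsoft.WindowsAzure.Storage.Queue;

    public class ProcessingLoop
    {
        public void Run(string storageConnectionString)
        {
            var account = CloudStorageAccount.Parse(storageConnectionString);
            var queue = account.CreateCloudQueueClient().GetQueueReference("jobs");
            var results = account.CreateCloudBlobClient().GetContainerReference("results");
            queue.CreateIfNotExists();
            results.CreateIfNotExists();

            while (true)
            {
                CloudQueueMessage msg = queue.GetMessage();
                if (msg == null) { Thread.Sleep(5000); continue; }

                // The message is assumed to carry the path of an input file that is
                // already on the VM; how Photoshop is driven (scripting SDK, droplet,
                // command line) is up to you.
                string inputFile = msg.AsString;
                string outputFile = Path.ChangeExtension(inputFile, ".out.psd");
                RunPhotoshopJob(inputFile, outputFile);   // hypothetical helper

                // Persist the result outside the stateless VM.
                using (var fs = File.OpenRead(outputFile))
                {
                    results.GetBlockBlobReference(Path.GetFileName(outputFile))
                           .UploadFromStream(fs);
                }

                queue.DeleteMessage(msg);
            }
        }

        private void RunPhotoshopJob(string input, string output)
        {
            // Placeholder for whatever mechanism drives Photoshop.
        }
    }

From there, notifying the requester can be as simple as dropping a message on a reply queue with the blob's URL.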
I need to create tools so that non-experienced/non-technical users can use (meaning connect to and start/stop) a virtual machine on Azure. For connecting, the RDP connection does a good enough job and is easy to get the hang of. On the other hand, to start/stop a virtual machine you normally need access to the Azure portal, which (on top of not being straightforward for a non-technical user) causes some access policy problems. One option could be to just leave the virtual machine always on, but then we are billed for 100% of the time even though the user only needs it for a couple of hours a week.
That's why I investigated the possibility of creating a script that could be packaged into an executable file, so that the virtual machine would start automatically when the executable is clicked. I have already seen this Stack Overflow question:
Start azure virtual machine without azure portal
which suggests creating an Azure PowerShell script that would start the virtual machine. The only problem is that launching a PowerShell script is beyond the technical level of the person who would use it. On top of that, the Azure add-on for PowerShell needs to be installed (if I understand correctly), which may not be possible depending on the machine and the rights the user has on it.
So my question: do you have any idea how I could make a simple program (for example, an executable that would run on any machine without any dependencies) that would start an Azure virtual machine?
One solution I thought about, though it seems very complicated: create a "super low cost" virtual machine that is on 100% of the time and an executable that instructs this VM to start the other virtual machine on demand?
Thanks for your help
I have a problem with the idea that a PowerShell script is beyond a user who can run an exe file. If built properly, a .ps1 should just be a double-click, exactly like an exe.
Aside from that, you have a couple of hurdles to look at.
Your user doesn't have access to the resources they need to interact with.
This can be handled by passing custom PSCredential objects through the script and pulling the credentials from a file. You would build the credential file with ConvertFrom-SecureString and then import it with ConvertTo-SecureString. The biggest problem with this is that if the user can see where that file is stored, they could potentially write a script to access that file and gain privileged access.
Your user doesn't have permission to run the PowerShell resources needed to execute the script. For this, you'd need to build run-as permissions into the script, and I think creating an exe might be the best avenue for that, although you could have the initial script call another shell with elevated permissions and work through that.
There are tools out there, like PowerGUI, that will compile a .ps1 file into an exe. A properly compiled and secured exe would hide the scripts that call out to the secure-string files and also allow custom run-as permissions to be built into the program.
My lab just got a sponsorship from Microsoft Azure and I'm exploring how to use it. I'm new to industrial-scale cloud services and pretty confused by the many terminologies and concepts. In short, here is my scenario:
I want to run the same algorithm on multiple datasets, i.e. data parallelism.
The algorithm is implemented in C++ on Linux (Ubuntu 16.04). I did my best to use static linking, but it still depends on some dynamic libraries. However, these dynamic libraries can easily be installed via apt.
Each dataset is structured, meaning the data (images, other files, ...) are organized in folders.
The ideal system configuration would be a bunch of identical VMs and a shared file system, so I could submit my jobs with 'qsub' from a script or something similar. Is there a way to do this on Azure?
I investigated the Batch service, but I'm having trouble installing dependencies after creating the compute nodes. I also had trouble with storage: so far I have only seen examples of using Batch with blob storage, which is unstructured.
So, are there any other services in Azure that can meet my requirements?
I somehow figured it out myself based on this article: https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-classic-hpcpack-cluster/. Here is my solution:
Create an HPC Pack cluster with a Windows head node and a set of Linux compute nodes. There are several useful templates in the Marketplace.
From the head node, we can execute commands on the Linux compute nodes, either inside HPC Cluster Manager or using "clusrun" from PowerShell. This makes it easy to install dependencies on the compute nodes via apt-get.
Create a file share inside one of the storage accounts. This can be mounted by all machines in the cluster.
One glitch here is that, for some encryption-related reason, you cannot mount the file share on Linux machines outside of Azure. Two solutions came to mind: (1) mount the file share on the Windows head node and share files from there via FTP or SSH; (2) create another Linux VM (as a bridge), mount the file share on that VM, and use "scp" to move files in and out from outside. Since I'm not familiar with Windows, I adopted the latter solution.
For the executable, I simply uploaded the binary compiled on my local machine. Most dependencies are statically linked, but there are still a few shared objects. I uploaded those shared objects to Azure as well and set LD_LIBRARY_PATH when executing the program on the compute nodes.
Job submission is done on the Windows head node. To make it more flexible, I wrote a Python script that writes XML job-description files, which the Job Manager can load to create a job. Here are some instructions: https://msdn.microsoft.com/en-us/library/hh560266(v=vs.85).aspx
I believe there should be a more elegant solution with the Azure Batch service, but so far my small cluster runs pretty well with HPC Pack. I hope this post can help somebody.
Azure Files could provide a shared file solution for your Ubuntu boxes; details are here:
https://azure.microsoft.com/en-us/documentation/articles/storage-how-to-use-files-linux/
Again, depending on your requirements, you can create a pseudo folder structure in blob storage by using containers together with "/" separators in your blob naming strategy.
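For example (an untested C# sketch with the .NET storage client; the "datasets" container and "set01/..." names are made up, and the same naming trick works from the other SDKs too):

    using System;
    using System.IO;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    public static class DatasetLayout
    {
        public static void UploadAndList(string connectionString, string localFile)
        {
            var account = CloudStorageAccount.Parse(connectionString);
            var container = account.CreateCloudBlobClient().GetContainerReference("datasets");
            container.CreateIfNotExists();

            // "/" in the blob name acts as a virtual folder separator;
            // nothing is actually nested.
            using (var fs = File.OpenRead(localFile))
            {
                container.GetBlockBlobReference("set01/images/img001.png").UploadFromStream(fs);
            }

            // Listing with a prefix then behaves like listing a folder.
            foreach (IListBlobItem item in container.ListBlobs("set01/images/", useFlatBlobListing: true))
            {
                Console.WriteLine(item.Uri);
            }
        }
    }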
To David's point, whilst Batch is generally the first thing looked at for these kinds of workloads, it may not fit your solution. VM Scale Sets (https://azure.microsoft.com/en-us/documentation/articles/virtual-machine-scale-sets-overview/) would allow you to scale your compute capacity, either by load or by schedule, depending on your workload's behaviour.
For my system, I have a back-end process that uses a 3rd party command line tool to do some occasional processing. This tool writes to and reads from the file system (I point it at some files, it works its magic, and then it writes out the results to another file).
This is obviously easy to do with an Azure Virtual Machine. Just write a Windows Service to manage this command line tool and have it read from a Queue to get the processing jobs.
To this point, however, I've been able to do everything in Azure without having to resort to a full blown VM. I like that. I like not having to worry about applying patches and other maintenance, downtime and the like.
So, my question is, is there something in Azure that would let me have this service without resorting to a VM? Would a "Worker Role" be able to accomplish this? Can it read and write to/from the file system? Can it handle 3rd party tools with a bunch of arbitrary dependencies? Can I launch another process from C# code within the worker role?
Would a "Worker Role" be able to accomplish this?
Absolutely! Remember that a worker role is also a full-blown VM (running the same OS that powers Azure Virtual Machines).
Can it read and write to/from the file system?
Yes, but there's a catch: you can't read/write to any arbitrary location on the VM. You do have full access to a special folder on that VM called local storage. You can read more about it here: http://msdn.microsoft.com/en-us/library/azure/ee758708.aspx
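For instance, a minimal sketch (the "ScratchSpace" name is made up and would have to match a LocalStorage declaration in your ServiceDefinition.csdef):

    using System.IO;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // Resolve the role's local storage folder at runtime. "ScratchSpace" is a
    // hypothetical resource name declared in ServiceDefinition.csdef, e.g.
    //   <LocalResources>
    //     <LocalStorage name="ScratchSpace" sizeInMB="1024" cleanOnRoleRecycle="true" />
    //   </LocalResources>
    public static class Scratch
    {
        public static string GetWorkingFolder()
        {
            LocalResource resource = RoleEnvironment.GetLocalResource("ScratchSpace");
            string folder = Path.Combine(resource.RootPath, "work");
            Directory.CreateDirectory(folder);
            return folder;
        }
    }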
Can it handle 3rd party tools with a bunch of arbitrary dependencies?
Yes, again! But again, there's a catch. Since these VMs are stateless, anything you install after the VM is stood up for you by Microsoft is not guaranteed to be there if Microsoft decides to tear down that VM for whatever reason. If you need to install any additional software, you have to install it via startup tasks. You can read about them here: http://msdn.microsoft.com/en-us/library/azure/hh180155.aspx.
Can I launch another process from C# code within the worker role?
Though I have not tried it personally, I think it is possible, because you get a VM running the latest version of Windows Server.
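Something along these lines should work (untested sketch; toolPath and the argument format are placeholders for whatever your command line tool expects, with the files living in the local storage folder from above):

    using System.Diagnostics;

    public static class ToolLauncher
    {
        public static int Run(string toolPath, string inputFile, string outputFile)
        {
            var psi = new ProcessStartInfo
            {
                FileName = toolPath,
                Arguments = "\"" + inputFile + "\" \"" + outputFile + "\"",
                UseShellExecute = false,
                RedirectStandardOutput = true
            };

            using (Process p = Process.Start(psi))
            {
                string log = p.StandardOutput.ReadToEnd();   // capture the tool's console output
                p.WaitForExit();
                return p.ExitCode;
            }
        }
    }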
I've got 6 websites, 2 databases, and 1 cloud service environment set up on my account.
I used the cloud service to run some tasks via the Windows Task Manager; everything was installed on my D drive, but between last week and today, the 8th of March, the folder containing the exe to run has been removed.
I had also installed TortoiseSVN to get the files deployed, and it is not installed anymore.
I wonder if somebody has a clue about my problem
Best Regards
Franck merlin
If you're using Cloud Services (web/worker roles), these are stateless virtual machines. That is: Windows Azure provides the operating system, then brings your deployment package into the environment after bootup. Every single virtual machine instance booted this way starts from a clean OS image, along with the exact same set of code bits from you.
Should you RDP into the box and manually install anything, it is going to be temporary at best. Your changes will likely survive reboots; however, if the OS needs updating (especially the underlying host OS), they will be lost as a fresh OS image is brought up.
This is why, with Cloud Services, all customizations should be done via startup tasks or the OnStart() event. You should never manually install anything via RDP since:
Your changes will be temporary
Your changes won't propagate to additional instances; you'll be required to RDP into every single box to perform the same changes.
You may want to download the Azure Training Kit and look through some of the Cloud Service labs to get a better feel for startup tasks.
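If you go the OnStart() route, the shape of it is roughly this (untested sketch; "setup.exe /quiet" is a placeholder for your own unattended installer, and anything that needs admin rights really belongs in an elevated startup task instead):

    using System.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Run a packaged, unattended installer before the role starts handling work.
            using (var installer = Process.Start("setup.exe", "/quiet"))
            {
                installer.WaitForExit();
            }
            return base.OnStart();
        }
    }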
In addition to what David said, check out http://blogs.msdn.com/b/kwill/archive/2012/10/05/windows-azure-disk-partition-preservation.aspx for the scenarios where the different drives will be destroyed.
Also take a look at http://blogs.msdn.com/b/kwill/archive/2012/09/19/role-instance-restarts-due-to-os-upgrades.aspx which points you to the RSS feed and MSDN article where you can see that a new OS is currently being deployed.
I am developing a website that I intend to run within Windows Azure using a single web role. The site will make use of the Sphinx search engine, which will need to run as a Windows service. So, my question is this: is it possible to install the Sphinx search Windows service inside of a web role?
From my initial research into Azure, I am thinking "yes", because the web role is a VM running IIS. Therefore I should be able to remote in, install the service, and it should work. :)
Does this sound right?
Installing software via RDP is not a viable solution with web/worker role instances, as these changes won't persist. You need to install it either from a startup script or from OnStart(). Since you want to install it as a Windows service, that implies a startup script, because it needs elevated permissions. Note: the installer must support unattended mode, where all parameters are specified via the command line with no human interaction.
What about scalability? If you have more than one instance of your web role running, can Sphinx run across two instances? From what I read, it supports ODBC-compliant databases, and you might be able to use it against Windows Azure SQL Database. If that's the case, can two Sphinx engines run on two different machines accessing the same data store? If so, this sounds like a viable solution.
If installation cannot be automated, or you need something additional like MySQL, you may want to consider placing the Sphinx search engine inside a Virtual Machine (new as of June 2012). You can spin up a Windows Server 2008 VM, RDP into it, and configure it exactly how you want it.
Strictly speaking, yes, you could do that. However, this assumes that you would be running a single VM instance and that the instance would never need restarting.
You should consider looking at Azure worker roles for any functionality that would normally exist as a Windows service.
After reading your answers and thinking about it a bit more, I think dropping the idea of installing a service is the best course of action. I've been looking at the API for Lucene.NET (this may be the same for Sphinx), and it's possible to encapsulate the writing and managing of indexes, etc., within code, so there is no need for a service.
For Azure, there is a library for managing index files using both local and Azure storage which could be of use. Scenarios I've read about show that it's possible to have a web role that processes HTTP requests and performs the searches, and a worker role that accepts DB changes via a queue and writes them to the indexes.
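As a rough idea of what "no service needed" looks like in code, here is an untested Lucene.Net (3.0.x) sketch writing straight to a local index; the Azure index-storage library mentioned above could be swapped in for the FSDirectory to keep the index in blob storage (its exact API isn't shown here):

    using System.IO;
    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Documents;
    using Lucene.Net.Index;
    using Lucene.Net.Store;
    using Version = Lucene.Net.Util.Version;

    public static class Indexer
    {
        public static void AddDocument(string indexPath, string id, string body)
        {
            using (var dir = FSDirectory.Open(new DirectoryInfo(indexPath)))
            using (var writer = new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_30),
                                                IndexWriter.MaxFieldLength.UNLIMITED))
            {
                var doc = new Document();
                doc.Add(new Field("id", id, Field.Store.YES, Field.Index.NOT_ANALYZED));
                doc.Add(new Field("body", body, Field.Store.YES, Field.Index.ANALYZED));
                writer.AddDocument(doc);
                writer.Commit();
            }
        }
    }

A web role could call a similar search-side helper directly for queries, while a worker role drains the change queue and calls AddDocument.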