Deploying an application to a Linux server on Google Compute Engine

My developer has written a web scraping app on Linux on his private machine, and asked me to provide him with a Linux server. I set up an account on Google Compute Engine and created a Linux instance with enough resources and a sufficiently large SSD. Three weeks later he claims that working on Google is too complex. To quote him: "google is complex because their deployment process is separate for all modules. especially i will have to learn about how to set a scheduler and call remote scripts (it looks they handle these their own way)."
He suggests I create an account on Hostgator.com.
I appreciate that I am non-technical, but it cannot be that difficult to use Linux on Google?! Am I missing something? Is there any advice you could give me?

Regarding the suggestion to create an account on HostGator to use what I presume would be a VPS in lieu of a virtual machine on GCE, I would suggest asking the developer for a more concrete example.
For instance, take the comment about the "scheduler"; let's call it some process that needs to run on a regular basis, and ask:
How is this 'process' currently accomplished on the private machine?
How would it be done on the VPS?
What is preventing this 'process' from being done on the GCE VM?
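On a plain Linux VM, whether a GCE instance or a HostGator VPS, the usual answer to all three questions is cron. As a purely illustrative, hedged sketch (the paths, schedule, and script names below are assumptions, not details from the actual app), the scraper gets a small wrapper and a crontab entry:

    #!/usr/bin/env python3
    """Hypothetical cron wrapper for the recurring scrape job; all names are assumptions."""
    import logging
    import subprocess

    logging.basicConfig(filename="/var/log/scraper.log", level=logging.INFO)

    def main():
        # Run the developer's scraper exactly as he runs it on his own machine.
        result = subprocess.run(["/opt/scraper/run.sh"], capture_output=True, text=True)
        logging.info("exit=%s stdout=%s stderr=%s",
                     result.returncode, result.stdout, result.stderr)

    if __name__ == "__main__":
        main()

    # Crontab entry (edit with `crontab -e`) to run it hourly, same as on any other Linux box:
    # 0 * * * * /usr/bin/python3 /opt/scraper/cron_wrapper.py

There is nothing GCE-specific in any of this: a GCE instance is just a Linux machine with root access, so whatever cron job, systemd timer, or shell script works on the private machine or on a VPS works the same way there.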


How does RunKit make their virtual servers?

There are many websites providing cloud coding environments, such as Cloud9 and repl.it. They must use server virtualisation technologies. For example, Cloud9's workspaces are powered by Docker Ubuntu containers; every workspace is a fully self-contained VM (see details).
I would like to know if there are other technologies for building sandboxed environments. For example, RunKit seems to have a lighter-weight solution:
It runs a completely standard copy of Node.js on a virtual server created just for you. Every one of npm's 300,000+ packages are pre-installed, so try it out
Does anyone know how RunKit achieves this?
You can see more in "Tonic is now RunKit - A Part of Stripe!" (see discussion):
we attacked the problem of time traveling debugging not at the application level, but directly on the OS by using the bleeding edge virtualization tools of CRIU on top of Docker.
The details are in "Time Traveling in Node.js Notebooks"
we were able to take a different approach thanks to an ambitious open source project called CRIU (which stands for checkpoint and restore in user space).
The name says it all. CRIU aims to give you the same checkpointing capability for a process tree that virtual machines give you for an entire computer.
This is no small task: CRIU incorporates a lot of lessons learned from earlier attempts at similar functionality, and years of discussion and work with the Linux kernel team. The most common use case of CRIU is to allow migrating containers from one computer to another
The next step was to get CRIU working well with Docker
Part of that setup is being open-sourced, as mentioned in this Hacker News thread.
It uses Linux containers, currently powered by Docker.
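To make the CRIU-on-Docker idea concrete, here is a rough sketch of a checkpoint/restore cycle driven from Python through the Docker CLI. This assumes Docker's experimental checkpoint support (which uses CRIU under the hood) is enabled; the container and checkpoint names are hypothetical, and this only illustrates the mechanism, not RunKit's actual code:

    import subprocess

    def docker(*args):
        # Thin wrapper around the Docker CLI; raises if the command fails.
        return subprocess.run(["docker", *args], check=True, capture_output=True, text=True)

    # Freeze the running container's entire process tree to disk (CRIU does the heavy lifting).
    docker("checkpoint", "create", "notebook-sandbox", "step-42")

    # ...later, resume the container from exactly where it was frozen.
    docker("start", "--checkpoint", "step-42", "notebook-sandbox")

Checkpointing the whole process tree at the OS level, rather than instrumenting Node.js itself, is what makes the "time travel" cheap: restoring an earlier checkpoint rewinds the sandbox to that exact state.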

Migrate instance from EC2 to Google Cloud

I have a running Linux instance in Amazon EC2. I'd like to migrate this instance to a Google Cloud VM instance with as little work as possible, ideally a kind of copy-and-paste solution. How can I do this?
You can import an Amazon Machine Image (AMI) to Google Compute Engine, but it's not just one operation. There is a section in the Google Compute Engine documentation that shows the steps you need to follow in order to achieve your goal.
I hope it helps.
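Roughly, those steps boil down to exporting the instance's disk from AWS, uploading it to a Cloud Storage bucket, and creating a Compute Engine image from it. A hedged sketch of the upload-and-import part, scripted from Python around the gsutil/gcloud CLIs (bucket, file, and image names are made up, and the exact commands and flags should be checked against the current documentation):

    import subprocess

    def run(cmd):
        # Helper: run a CLI command and fail loudly if it errors.
        subprocess.run(cmd, check=True)

    # 1. Upload the exported disk (e.g. a VMDK or raw image) to a Cloud Storage bucket.
    run(["gsutil", "cp", "exported-ami-disk.vmdk", "gs://my-migration-bucket/"])

    # 2. Import it as a Compute Engine image.
    run(["gcloud", "compute", "images", "import", "migrated-ec2-image",
         "--source-file=gs://my-migration-bucket/exported-ami-disk.vmdk"])

    # 3. Boot a VM from the imported image.
    run(["gcloud", "compute", "instances", "create", "migrated-vm",
         "--image=migrated-ec2-image", "--zone=us-central1-a"])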
With GCP you can use the import feature, which forwards to the CloudEndure site, where you can migrate your existing server, whether it is a virtual machine (on a cloud or not) or even a physical machine, to GCP.
You can also import EC2 instances running the Amazon Linux AMI from AWS.
CloudEndure also provides live migration, meaning it keeps replicating continuously as long as you don't power on the migrated VM on GCP.
It can also be used for a one-time migration.
The Amazon Linux AMI can still be updated on GCP as well, so no problems there.
Migration takes a few hours, depending on the size of the source machine. You might need to change the disk device paths in /etc/fstab to reflect their names on GCP (/dev/xvdf --> /dev/sdb, for example).
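To illustrate that last point: the /etc/fstab change is just a device-name swap. A hedged sketch of a small helper that applies such a mapping (the mapping below is only the example from this answer; confirm the real device names on the migrated VM with lsblk before touching anything):

    from pathlib import Path

    # Device-name mapping taken from the example above; adjust after checking `lsblk`.
    DEVICE_MAP = {"/dev/xvdf": "/dev/sdb"}

    fstab = Path("/etc/fstab")
    original = fstab.read_text()
    Path("/etc/fstab.bak").write_text(original)   # keep a backup before editing

    updated = original
    for old, new in DEVICE_MAP.items():
        updated = updated.replace(old, new)
    fstab.write_text(updated)                      # run as root

In practice, switching the fstab entries to filesystem UUIDs (as reported by blkid) avoids this class of problem entirely, since UUIDs survive the move between hypervisors.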
The easiest one-step solution would be to use a third-party tool to do it for you. There are many cloud migration vendors that would make this process nearly zero effort. I did it with CloudEndure and it went OK, but it obviously involves costs, so make sure to check those first.
Here is an end-to-end video that gives an idea of how to do the migration from EC2 to Google Cloud:
https://www.youtube.com/watch?v=UT1gPToi7Sg

Scheduler on Azure

I need to run some kind of scheduling service on Windows Azure, but which option is the best and most resilient?
Currently I have a Windows Service running Quartz, which works okay, but it runs on a Windows Server; I need this to run in the cloud.
The tasks read from and write to a database, and some will send emails.
I've looked over all the possible solutions on Stack Overflow, but they appear to be old and not updated for the latest Azure platform.
Any suggestions or pointers?
The most suitable solution might be a worker role; Microsoft has a tutorial for exactly what you're looking for: http://www.windowsazure.com/en-us/develop/net/tutorials/multi-tier-web-site/4-worker-role-a/
This would definitely be a less expensive solution than spinning up a virtual machine, but it might require some work.
I ended up using Azure Mobile Services and the Scheduler that comes with it, which works a treat.
I run a Worker Role using Quartz.NET to schedule stuff. Works great!
https://github.com/quartznet/quartznet
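For what it's worth, the shape of such a worker role is just a long-lived process that wakes up on a schedule and does the database and email work. A rough sketch of that pattern follows, written in Python purely for illustration (the answer above uses Quartz.NET in C#, and Quartz would use cron-style triggers rather than a fixed interval):

    import time
    from datetime import datetime, timedelta

    INTERVAL = timedelta(minutes=15)   # assumed schedule, not taken from the question

    def run_tasks():
        # Placeholder for the real work: read/write the database, send the emails.
        print(f"{datetime.utcnow().isoformat()} running scheduled tasks")

    # A worker role's Run() method is essentially this loop; the platform restarts it if it dies.
    while True:
        started = datetime.utcnow()
        run_tasks()
        remaining = (started + INTERVAL - datetime.utcnow()).total_seconds()
        time.sleep(max(remaining, 0))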
Obviously, that would be difficult to do in the cloud, since you won't be able to install services or anything else that runs in the background. A less-than-perfect solution would be to have a workstation under your control handle the scheduling and send updates to the web server, which would then write them to the DB server. Otherwise, you should self-host the website, application, etc.

Best solution to host a (command line) Windows application?

I have a Windows application that does some calculations and is called from command line. On my Windows machine, I have a PHP script running under Apache that executes the application and shows the output.
Is there any hosting solution that I can use to do the same? I can't figure out whether EC2 or Azure is the right choice. Basically, I need a web server plus the ability to execute my application.
Suggestions? Thanks.
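For context, the pattern being described is simply: a web request comes in, the command-line binary is spawned, and its output is returned. A minimal sketch of that shape (written in Python rather than the PHP actually used, and with a made-up binary path) just to make clear what a host must allow:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import subprocess

    BINARY = r"C:\apps\mycalc\mycalc.exe"   # hypothetical path to the command-line application

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Run the command-line application and return its stdout to the browser.
            result = subprocess.run([BINARY], capture_output=True, text=True)
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(result.stdout.encode())

    HTTPServer(("", 8080), Handler).serve_forever()

Any host that gives you a full OS process (an EC2 or Azure VM, an Azure web/worker role, etc.) can support this; the usual blocker is shared hosting that disables process execution, which is why the answers below focus on VMs and roles.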
You can host your application on AppHarbor, the .NET Platform-as-a-Service. You can either port your web frontend to .NET or try to get your PHP stuff working with Phalanger. AppHarbor is working on Background Tasks, which might be a good match for your workload.
I would just run the PHP script you already have under IIS in a Windows Azure web role.
If it is a Windows application and you have the source code, I would go with an Azure Worker Role. The advantage of using a PaaS (such as Azure) instead of an IaaS (such as Amazon) is that you won't have to bother with keeping the server up to date.
The real investment in time will be in rewriting your application to work as a Worker Role. The time needed depends on how your application works right now: if it uses a lot of disk access it might be difficult, and perhaps an Amazon server would be better; but if it only crunches numbers in memory, an Azure Worker Role is a very good candidate.
The real advantage of using an Amazon server is that you probably won't need to do any work at all, except maintaining the server.
As described in the question, both Azure and EC2 will do the job very well. This is the kind of task both systems are designed for.
So the question really becomes: which is best? That depends on two things: what the application needs to do, and your own experience and preference.
As it's a Windows application, there should probably be a leaning towards Azure. While EC2 supports Windows, the tooling and support resources for Azure are probably deeper at this point.
If cost is a factor then a (somewhat outdated) resource is here: http://blog.mccrory.me/2010/10/30/public-cloud-hourly-cost-comparison/ -- the conclusion is that, by and large, Azure and Amazon are roughly similar for compute charges.
Steve Marx has a blog post that describes how to run another web server (i.e. not IIS) on Azure.
This potentially has everything you need: you can deploy Apache and your executable and run them in exactly the same way.
Alternatively, you can deploy your executable alongside a bit of code in a worker role that runs the application periodically, depending on your exact requirements.

IIS Virtual Directory in Azure

I've been told that you can create virtual directories in IIS hosted on Azure, but I'm struggling to find any info on this as it's a relatively new feature. I'd like to point the virtual directory at an Azure Drive (XDrive, NTFS drive) so that I can reference resources on the drive.
I'm migrating an on premise website onto Azure and need to minimise the amount of rework / redevelopment required. Currently the website has access to shared content folders and I'm trying to mimic a similar set up due to tight time scales.
Does anyone have any knowledge of this or pointers for me as I can't find any information on how to do this?
Any information / pointers you have would be great
Thanks
Steve
I haven't had a moment to check myself, but get the latest copy of the Windows Azure Platform Training Kit. I'm fairly certain it has a hands-on lab that demonstrates the new feature. However, I do not believe that lab includes creating a virtual directory on an Azure Drive. Even if you can point it there, you may run into some .NET security limitations. http://www.microsoft.com/downloads/en/details.aspx?FamilyID=413e88f8-5966-4a83-b309-53b7b77edf78&displaylang=en
Another resource to look into might be the work Cory Fowler is doing at http://blog.syntaxc4.net/. He's been spending some time of late really digging into the internals of the new 1.3 roles, so he might be able to lend you a hand.
I've been kicking this issue around for some time now. I can upload a VHD to Azure, and I can create a virtual directory in Azure that points to a physical location on my PC (when running in the dev fabric), but here is the catch:
I can't find any examples of doing both at the same time, i.e. mounting a drive and then mapping a virtual directory to it.
I've had a look in the 1.3 SDK and at various blogs, but I can't see any pointers on this; I guess I may have got hold of the wrong end of the stick. If anyone knows how, or whether, this can be done, that would be great.
