ProFTPD incredibly slow SFTP and FTP connections on EC2 - Amazon

I was using DigitalOcean for a long time, but I wanted to give it a shot on Amazon EC2 machines. I've created my environment, but when I set up ProFTPD and configure it as an SFTP server, it transfers files incredibly slowly, like bytes per second, not even kbps. It was the same for FTP. I only have this issue on the Amazon EC2 server; it never happened on DigitalOcean's. I tried everything from Google but nothing helped.
Is there any solution?

Related

Tunnel between local Linux machine (behind NAT) and AWS instance (Linux)

Is there any utility through which I can create a tunnel between a local Linux machine and an AWS instance?
I used this: http://www.rkeene.org/projects/info/wiki/142
It's good, but multiple connections don't work properly.
Please suggest other possibilities.
The simplest way I see is establishing a VPN connection between your external machine and your AWS VPC. You could just have an EC2 instance running OpenVPN facing the internet on AWS, and set up a client on the other end. You could even use Amazon VPN, but it implements IPsec, which could be a little trickier to connect to from your Linux box. Another advantage of OpenVPN is that you can have as many clients as you want, coming from anywhere.
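As a rough sketch of the client side of that setup, assuming the EC2 instance already runs an OpenVPN server with UDP 1194 open in its security group; the hostname, certificate filenames and paths below are placeholders, not values from the original setup:

    # Hypothetical client config; ec2-host.example.com, ca.crt, client.crt and client.key are placeholders.
    cat > client.ovpn <<'EOF'
    client
    dev tun
    proto udp
    remote ec2-host.example.com 1194
    resolv-retry infinite
    nobind
    persist-key
    persist-tun
    ca ca.crt
    cert client.crt
    key client.key
    verb 3
    EOF
    # Start the tunnel from the machine behind NAT (only outbound UDP is needed, so NAT is not a problem).
    sudo openvpn --config client.ovpn

Because the client only makes an outbound connection, this works from behind NAT without any port forwarding, and the server end can accept multiple clients at once.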

Use a Rackspace cloud image on Amazon EC2?

I have a Rackspace (UK) cloud instance, running Ubuntu 11.10, which has taken 10+ man-hours to install all the packages (and custom application code) I need, tighten security, test, etc.
I can take a snapshot of that and start another instance on Rackspace UK. That worked nicely. Because I've got /etc under git source control, I could see that the files the start-up process altered were:
network files (IP address, default gateway)
root password
/etc/hostname
About the only post-startup steps I needed were a DNS entry and dpkg-reconfigure postfix to set the new machine name.
I'm assuming, but haven't tested yet, that I could use this image with Rackspace US. But what about with Amazon EC2 (or any other cloud provider, for that matter)? Can I just download the image, upload it to Amazon S3, and start new instances with it? If not, is there a utility I can run to convert from one Linux image format to another?
The poor man's approach is to use rsync between servers. Rackspace has a 3-part guide on this, starting here:
http://www.rackspace.com/knowledge_center/index.php/Migrating_a_Linux_Server_From_Command_Line_Stage_1
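As a rough illustration of that rsync approach (not the exact commands from the guide), something along these lines is typically run on the new server, excluding hardware- and network-specific files; the hostname and exclude list here are assumptions you would adapt to your distro:

    # Pull the old server's filesystem onto the new one, preserving permissions, owners and links.
    # old-server is a placeholder; extend the exclude list for anything machine-specific.
    rsync -aHAXv --numeric-ids \
        --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' --exclude='/boot/*' \
        --exclude='/etc/fstab' --exclude='/etc/hostname' --exclude='/etc/network/*' \
        root@old-server:/ /

After the copy you would redo the same per-machine steps the question lists (network files, hostname, DNS entry, dpkg-reconfigure postfix) and reboot.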

Linode vs Amazon EC2 vs Heroku for a project with Amazon S3

I have a project in Ruby on Rails 3.1, like Flickr, Tumblr, Pinterest, etc., with a lot of pictures to maintain.
My project uses a MongoDB database.
I'm using Amazon S3 to host the pictures.
I want to know which hosting combination gives the best quality/price: Linode + S3, Amazon EC2 + S3, or Heroku + Amazon S3.
I need it to scale well because the project is growing fast.
Any other suggestion is welcome :D.
After much reading I am still not clear.
If you want to save the most money, then I'd go with Linode (Amazon EC2 might cost about the same, though). With Linode, for $19.95/month you get 20 GB of space where you can host your website and database. If you're using S3, then you can use most of the 20 GB for your database. Not only that, but on Linode the add-ons that would cost you money on Heroku are free (Solr/Sphinx, background jobs, email, etc.). Compare this to Heroku, where a 20 GB shared database alone costs $15/month. Then you need to pay monthly if you want Solr, background jobs, etc.
On Linode it's free because you run and maintain your own virtual private server (VPS). Which brings me to one of the most important things to consider here: Linode will save you money, but it will cost you more time, since you have to manage everything yourself.
For what it's worth, I am currently in the process of moving much of my hosting over from Heroku to Linode because of the costs involved, and because as a Rails developer I feel it's important to understand how to manage my own web server.
There are a lot of other advantages to having your own VPS, though: for example, hosting multiple websites, creating multiple databases used by other web apps, running your own email server, etc.
Update: April 2014
An even cheaper alternative to Linode is DigitalOcean; their cheapest plan is currently $5/month.
From a purely performance point of view, you'll get better results with EC2 or Heroku, since both are part of the Amazon infrastructure (Heroku runs on EC2).
But this only benefits you if your pictures are processed by your application server. If your pictures are served directly to the client from S3, using Linode will have no impact :)

Keeping Elasticsearch alive on an Amazon EC2 Linux instance

I have Elasticsearch running on a Linux instance on Amazon EC2. I use Tunnelier to connect to the instance. I'm new to EC2 and Tunnelier (I'm more familiar with Windows Server and Remote Desktop). The problem is that when I disconnect the Tunnelier console, my Elasticsearch server is no longer available to clients connecting to it. I would like to know how to keep the Elasticsearch server alive, serving client requests, without having to keep a Tunnelier session active.
I guess I didn't ask this properly. Anyway, I found the answer here: http://www.elasticsearch.org/tutorials/2011/08/22/elasticsearch-on-ec2.html. Really, really helpful, thanks a million to the author. It helped me set up Elasticsearch as a service on EC2.
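The linked tutorial uses the service-wrapper approach of that era; on a current system the same idea (run Elasticsearch as a background service that survives SSH/Tunnelier disconnects) can be sketched with a systemd unit roughly like the one below, where the install path and user are assumptions rather than anything from the tutorial (package installs usually create an equivalent unit for you):

    # Illustrative unit file; /opt/elasticsearch and the 'elasticsearch' user are assumed to exist.
    sudo tee /etc/systemd/system/elasticsearch.service <<'EOF'
    [Unit]
    Description=Elasticsearch
    After=network.target

    [Service]
    User=elasticsearch
    ExecStart=/opt/elasticsearch/bin/elasticsearch
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF
    sudo systemctl daemon-reload
    sudo systemctl enable --now elasticsearch   # starts it now and on every boot

Once it runs as a service, the process is owned by the init system rather than your login session, so closing the Tunnelier console no longer takes it down.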

Are Amazon Machine Images (AMIs) static, or can their code be modified and rebuilt?

I have a customer who wants me to do some customisations of the ERP system opentaps, which they use via the opentaps Amazon Elastic Compute Cloud (EC2) images. I've only worked with it on a normal server and don't know anything about images in the cloud. When I SSH in with the details the client gave me, there is no sign of the ERP installation directory I'd expect to see. I originally expected that the image wouldn't be accessible, but the client assured me it was. I suppose they could be confused.
Would one have to create a new image and swap it out, or is there a way to alter the source and rebuild, like on a normal server?
Something is not quite clear to me here. First of all, EC2 images running in the cloud are just like normal virtual servers, so if you have access to the running instance, there is no difference between an instance in the cloud and an instance on another PC in your home, for example.
You have to find out how opentaps is installed on the provided AMIs, then make your modifications, create an image from the modified instance, and save it to S3 for backup if necessary.
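For the "create an image from the modified instance" step, a hedged example using the AWS CLI (the instance ID and image name below are placeholders):

    # Snapshot the running, modified instance into a new AMI that can be launched later.
    aws ec2 create-image \
        --instance-id i-0123456789abcdef0 \
        --name "opentaps-customised-$(date +%Y%m%d)" \
        --description "opentaps with customer-specific changes"

The resulting AMI can then be used to launch replacement instances, so you effectively "swap out" the image while still having edited the source on a live server first.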
If you want to start with a fresh instance, you can spin up any Linux/Windows distro on EC2, install opentaps yourself your own way, and you are done.