I want to build an FTP server on the Microsoft Azure platform.
The server will probably be based on vsftpd (on an Ubuntu Linux server), but that is not set in stone - I could pick another free FTP service.
I have two issues:
Endpoints - if I use passive mode, I need to allocate a port range for it, say 8000 to 8100. BUT the "Endpoints" interface only gives me the option to allocate 20 ports.
I need to allocate at least one terabyte of storage on this server. How can that be done with this machine?
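For reference, the passive-mode part of my planned vsftpd.conf looks something like this (the port range and address are just examples):

    # /etc/vsftpd.conf - passive mode settings (example values)
    pasv_enable=YES
    pasv_min_port=8000
    pasv_max_port=8100
    # The VM's public IP, so clients are told a reachable address
    pasv_address=<public-ip-of-the-vm>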
Thank you!
There are a lot of articles written on this subject; one of the most recent is here.
For Virtual Machines, the limit on InputEndpoints is actually 150, so the 20-port ceiling you are seeing is an interface constraint rather than a platform one - you can add the rest from the command line. Refer to the latest Azure limits compilation here.
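As a sketch, you can script the endpoint creation instead of clicking through the portal. This uses the classic (ASM-era) cross-platform CLI; the VM name and port range are assumptions, so verify the exact syntax against your CLI version:

    # Add one input endpoint per passive-mode port
    # ("myvm" and the 8000-8100 range are examples).
    for port in $(seq 8000 8100); do
        azure vm endpoint create myvm "$port" "$port"
    done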
As for 1 TB of storage - check the VM sizes for Azure Virtual Machines: A0 (the smallest) supports at most 1 data disk of 1 TB, while A4 and A7 support 16 disks of 1 TB each (so a total of 16 TB per VM of size A4 or A7).
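For example, attaching a new empty data disk with the same classic CLI might look like this (1023 GB was the per-disk maximum at the time; the VM name is a placeholder):

    # Attach a new, empty ~1 TB data disk to the VM (classic xplat CLI sketch).
    azure vm disk attach-new myvm 1023
    # Repeat up to the size's disk limit and stripe inside the guest
    # if you need more than one disk's worth of space or IOPS.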
As for "built-in" endpoints - you can freely and surely remove them. Especially when you do not use them.
I'm interested in creating a VM in Azure and downloading it to my own machine to use in Hyper-V. My past couple of attempts at creating a VM have resulted in a 127 GB image. Can anyone tell me what the absolute smallest Windows VM available is and how I can choose it during setup? There don't seem to be any options for anything smaller than this.
You can use Windows VM images labeled as smalldisk; those have a 30 GB OS disk. Any Linux VM will have a 30 GB OS disk by default.
If you're not using the portal, you can specify the OS disk size. I've never tried to shrink it below 30 GB, but I can't imagine why it wouldn't work (unless there isn't enough space on the disk).
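For example, with the modern az CLI the OS disk size can be set at creation time - a minimal sketch, assuming a resource group named myRG and the UbuntuLTS image alias:

    # Create a Linux VM with an explicitly small 30 GB OS disk
    # (names and size are placeholders).
    az vm create \
        --resource-group myRG \
        --name smallvm \
        --image UbuntuLTS \
        --size Standard_B1s \
        --os-disk-size-gb 30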
Azure recently announced a new B-series VM size, B1ls, which has the smallest memory and lowest cost among Azure VM instances. This offering is in response to customers who were looking for entry-level offerings. B1ls has 512 MiB of memory and 1 vCPU, and it costs only $0.0052 per hour (US East).
B1ls is available only on Linux.
Reference: https://azure.microsoft.com/en-us/updates/b-series-update-b1ls-is-now-available/
I have an on-premises solution that consists of a server (1 machine) and X users (each on 1 machine). All the users run the same Win32 application. The question is: how do I translate this into an Azure environment? Each of the users' machines has 4 CPUs and 8 GB of RAM (this is necessary).
Do I have to configure a new machine with 4 CPUs and 8 GB per user, or is there a more efficient way to get this done? Otherwise this is not economically viable.
I was thinking about using XenApp and only one VM for all the users to solve this problem, but I'm not quite sure.
Any help is welcome.
You could automate the provisioning of the VMs with ARM templates. Your virtualization-inception idea (one big VM hosting all the users) could also work, but then: how do you guarantee the 8 GB of RAM that each user needs? It would be good to have some more details about the requirements.
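As a sketch of the ARM-template route: deploy one template with a VM-count parameter, and pick a size that matches the 4-CPU/8-GB requirement (Standard_F4s_v2 has 4 vCPUs and 8 GiB). The template file and its parameter names below are assumptions:

    # Deploy N identical worker VMs from one ARM template
    # (user-vms.json and its vmCount/vmSize parameters are hypothetical).
    az deployment group create \
        --resource-group myRG \
        --template-file user-vms.json \
        --parameters vmCount=10 vmSize=Standard_F4s_v2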
I'm about to create my first Cassandra cluster, starting from the first node :) But I've immediately run into a dilemma about how to set up the drives, so any advice will be appreciated. Here are the musts:
The node must run as a Hyper-V (Windows Server 2012 R2) VM
I have two 256 GB SSDs available for it
Preferably Ubuntu 14.04 guest OS
My options:
Create a dynamic striped volume (basically software RAID0) in the host OS (Windows Server), and then create a VHDX on top of it to be used by the guest Ubuntu;
No RAID; simply create two VHDXs (one per SSD), and create a guest Ubuntu that uses both VHDXs. Later, within Ubuntu, use one of the drives for the commit log and the other for SSTables (see the cassandra.yaml sketch at the end of this question);
Do not create VHDXs at all, but pass through the physical SSDs to the newly created Ubuntu guest, then software-RAID0 both SSDs during the Ubuntu setup process (see the mdadm sketch right after this list);
Similar to the previous, but without software RAID0, assigning one drive to the commit log and the other to SSTables.
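For the passthrough option, the in-guest stripe would be built with mdadm - a minimal sketch, where the device names and mount point are assumptions:

    # Stripe the two passed-through SSDs into one RAID0 device
    # (/dev/sdb and /dev/sdc are examples - check lsblk first).
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    sudo mkfs.ext4 /dev/md0
    sudo mount /dev/md0 /var/lib/cassandra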
Which of these configurations would suit Cassandra best? Are there any resources (or experience) on the performance differences?
It is also important to know that the following are NOT important:
SSD lifetime - whether an SSD survives one year or ten does not matter at all.
Fault tolerance - I'm not afraid of the zero fault tolerance of a RAID0 configuration. Fault tolerance of the system will be achieved by using multiple nodes and an appropriate replication policy, so the failure of one node is not important.
Also, I'll admit I would be happiest with the first option, since it lets me keep using my existing VM snapshot-based backup infrastructure, and maybe even add another VHDX on the same RAID0 to be used by another, non-I/O-intensive VM.
Finally, when it comes to VHDX on top of SSD - dynamically expanding or fixed?
Many thanks!
I forgot to say (not sure if it's important, but...):
The cluster should be write-optimized. The expected ingest rate is 50,000 data points per second, with rare reads - probably no more than one per second.
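For reference, the split-drive layout in options 2 and 4 maps to two settings in cassandra.yaml (the mount points below are examples):

    # cassandra.yaml - put the commit log and SSTables on separate drives
    commitlog_directory: /mnt/ssd1/cassandra/commitlog
    data_file_directories:
        - /mnt/ssd2/cassandra/data

Keeping the commit log on its own drive is the classic Cassandra write-path advice, which seems relevant given the 50,000 writes/second target.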
Let's imagine I have created an Azure virtual machine, a small one initially. I have installed SQL Server and created databases, and I am also hosting a website with IIS on the virtual machine.
I can see that the small machine's performance is not up to the mark, so I want to upgrade to a larger, more powerful one. I know I can do this from the Azure portal.
My question: since I have already fully configured this machine, with databases and websites running on the small VM, will I lose all my data and hosted websites if I change the size of the virtual machine (VM) from small to large in the Azure portal? I am worried that with this upgrade I may lose my data and website.
You will not lose (all of) your data when you scale.
Why "all of"? Because your data is on the system drive (C:), which by default (if you have not turned this off) has read/write host caching enabled. The write cache can cause some data corruption when the VM is not gracefully shut down, or while changing its size - and this is the only issue you have to worry about.
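If you want to check or change the host-caching setting before resizing, something like this should work on an ARM VM (the generic --set property path is an assumption - verify it against your API version):

    # Inspect the OS disk's host-caching mode, then switch it to read-only
    # ("myRG" and "myVM" are placeholders).
    az vm show -g myRG -n myVM --query "storageProfile.osDisk.caching"
    az vm update -g myRG -n myVM --set storageProfile.osDisk.caching=ReadOnly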
Changing VM size is a fairly common task that people do almost daily, especially when using IaaS as a dev/test environment.
It is also a recommended corrective action to take if you are having issues with booting up the VM.
So, go ahead and change the size. As a precaution, you can stop IIS before resizing to avoid data loss; this only makes sense if your application has logic that writes files to the local (C:) drive.
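The resize itself can also be done from the command line - a minimal sketch with the modern CLI, where the resource names and target size are assumptions:

    # Resize the VM (this reboots it; Standard_D3_v2 is just an example target).
    az vm resize -g myRG -n myVM --size Standard_D3_v2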
I am attempting to increase the size of a virtual machine in my Azure subscription from an A2 (2 cores, 3.5 GB) machine to a D3 (4 cores, 14 GB) machine. The only options available for this particular VM on the configure tab > virtual machine size are:
- A0
- A1
- A2
- A3
- A4
I do not see an A5 or a D3 virtual machine size available, although these are available for other virtual machines within my subscription. We have had this VM and a couple of others with the same issue running for about a year and a half; the newer VMs in our subscription (as well as machines in the create gallery) can all be scaled to the memory- and CPU-intensive sizes (A5, or D3, D4).
Is there any pathway that will allow me to upgrade this older VM to a newer specification of Virtual Machine?
According to the Azure MSDN article "Virtual Machine and Cloud Service Sizes for Azure" at:
http://msdn.microsoft.com/en-us/library/azure/dn197896.aspx
You can't increase the size of a "Basic Tier" VM to larger than A4, so it looks like you will need to use the "Standard Tier" instead.
If the option to switch to the "Standard Tier" is not available for this VM, the explanation may be that VMs created before April 16, 2013 cannot be upgraded to larger than A4 because of the older datacenter hardware on which they reside. The article includes an explanation of this issue and a link to a troubleshooting guide with workarounds for the error "Failed to configure virtual machine" with A5, A6 or A7 VM size, at:
https://social.msdn.microsoft.com/Forums/azure/en-US/9693f56c-fcd3-4d42-850e-5e3b56c7d6be/error-failed-to-configure-virtual-machine-with-a5-a6-or-a7-vm-size?forum=WAVirtualMachinesforWindows
This blog article visualizes which VM sizes can be changed into which, using tables. Note that the blog's information could be old, but it shows that some VM sizes cannot be changed to, even though they are selectable in the list.
The tables from the blog give the answer; note that the first table covers "ASM" (classic) deployments while the second covers "ARM".
Changing VM size depends strongly (one could say entirely) on the underlying Azure infrastructure, so sometimes the only way to resolve the issue is simply to create a new VM.
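On the ARM side, you can at least check which sizes the VM's current hardware cluster can actually resize it to before concluding you need a new VM - a sketch with assumed resource names:

    # List the sizes this specific VM can be resized to on its current
    # hardware cluster.
    az vm list-vm-resize-options -g myRG -n myVM -o table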