How to create a VM instance in Google Compute Engine? - linux

I am new to Google Compute Engine. I want to create a web server with the following properties:
1 Core
Red Hat Enterprise Linux 7.1 64 bit
RAM 8 GB
HDD: 100 GB
SSH, JDK 1.7
Apache Web server as the proxy to Jboss App Server
Enable HTTP/80 and HTTPS/443 on public IP
Access Mode – SSH/SCP
I created a new instance with Red Hat Enterprise Linux 7.1 and machine type n1-standard-2, which provides 2 CPU cores and 7.5 GB of RAM. Can I define exactly one core with a 100 GB disk and 8 GB of RAM? And how can I define the access mode SSH/SCP?

*I'd like to add this update to my answer: it is now possible to customize the machine type based on your hardware requirements.*
When creating a Compute Engine VM instance you need to specify a machine type. With the predefined types there is no way to specify the exact amount of CPU and memory, but you can select a machine type that comes close to your hardware requirements.
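For example, here is a minimal sketch of a custom machine type request; the instance name and zone are placeholders, and since 1 vCPU is normally limited to about 6.5 GB of memory unless extended memory is enabled, the sketch uses 2 vCPUs with 8 GB:
$ gcloud compute instances create my-web-server --custom-cpu 2 --custom-memory 8GB --zone us-central1-a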
For the persistent disk, you can use the gcloud command-line tool to create a disk with the desired size:
$ gcloud compute disks create DISK_NAME --image IMAGE --size 100GB --zone ZONE
Then create your VM instance using your root persistent disk:
$ gcloud compute instances create INSTANCE_NAME --disk name=DISK_NAME,boot=yes --zone ZONE
Since automatic resizing of root persistent disks is not supported by Compute Engine for the Red Hat Enterprise Linux operating system, you will need to manually repartition your disk. You can visit this article for information about repartitioning a root persistent disk.
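As a rough sketch of what that manual resize typically looks like on a RHEL 7 image, assuming the root filesystem is XFS on the first partition of /dev/sda (verify with lsblk and df -T before running anything):
$ sudo yum install -y cloud-utils-growpart   # provides the growpart tool
$ sudo growpart /dev/sda 1                   # extend partition 1 to fill the 100 GB disk
$ sudo xfs_growfs /                          # grow the mounted XFS root filesystem online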

Can I define exactly one core with 100 GB HD and 8 GB RAM?
No, you can only use predefined machine shapes with the preassigned amounts of CPU and RAM.
See Kamran's answer for how to create a disk of a different size, that is separate from CPU and RAM.
And how can I define access mode SSH/SCP?
That's automatically done for you; the VM is already running an SSH server. Note that by default it uses SSH keys, not passwords. To connect to your GCE VM, see these docs; the command looks like:
gcloud compute ssh INSTANCE-NAME --project=PROJECT --zone=ZONE
You can also connect to your instance via your web browser by using the SSH button in the Developers Console.
To use scp, use the flags that are provided for the ssh command, e.g.,
scp -i KEY_FILE \
-o UserKnownHostsFile=/dev/null \
-o CheckHostIP=no \
-o StrictHostKeyChecking=no \
[source-files ...] \
USER@IP_ADDRESS:[dest-location]
or vice versa to copy them back.
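For example, a hypothetical invocation copying a local WAR file up to the instance; the file name, username, and external IP below are placeholders, and the key path assumes the key gcloud usually generates at ~/.ssh/google_compute_engine:
scp -i ~/.ssh/google_compute_engine \
-o UserKnownHostsFile=/dev/null \
-o CheckHostIP=no \
-o StrictHostKeyChecking=no \
myapp.war \
myuser@203.0.113.10:/tmp/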

Related

Docker services scaling: container bottleneck at 249 containers

AMD 24-core Threadripper and 200 GB RAM
Ubuntu 20
Docker, latest version
Docker Swarm mode (but only one host)
I have my Docker stack compose file.
Scaling the service up, I have no problems up to 249 containers, but then I hit a bottleneck and don't know why.
Do I need to change settings somewhere to remove the bottleneck?
I already have
fs.inotify.max_queued_events = 100000000
fs.inotify.max_user_instances = 100000000
fs.inotify.max_user_watches = 100000000
in /etc/sysctl.conf
because I hit a bottleneck with inotify instances at nearly 100 containers and solved that problem this way.
But I can't scale past 249 containers.
One issue is certainly going to be IP availability if this is Docker Swarm, as overlay networks, the default on Swarm, are automatically /24 and thus limited to roughly 254 usable addresses.
So:
a. attach the service to a manually created network that you have scoped to be larger than /24 (see the sketch below), and
b. ensure that the service's endpoint mode is dnsrr, as VIPs cannot be used safely on networks larger than /24.
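A minimal sketch of both steps, using a hypothetical network name big_net, service name web, and image; the subnet and replica count are placeholders to adjust for your stack:
# create an overlay network with room for far more than 254 tasks
docker network create --driver overlay --subnet 10.100.0.0/16 big_net
# run the service on that network with DNS round-robin instead of a VIP
docker service create --name web --network big_net --endpoint-mode dnsrr --replicas 300 nginx:alpine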

How to get the number of workers, cores, and RAM from an HDI cluster

Here is my scenario: I am creating an HDI cluster and installing my custom application using an ARM template.
I need the following values to be configured for my application using a shell script; I am installing the application using the CustomScript option in the ARM template.
Number of worker nodes
Number of cores per worker node
RAM per worker node
RAM per head node
Number of worker nodes
You could use the Ambari REST API to get the number of worker nodes.
PASSWORD=<>
CLUSTERNAME=<>
#Worker nodes
curl -u admin:$PASSWORD -sS -G "https://$CLUSTERNAME.azurehdinsight.net/api/v1/clusters/$CLUSTERNAME/services/HDFS/components/DATANODE" > worker.txt
cat worker.txt | grep total_count | awk -F: '{print $2}' | sed 's/,//g'
RAM per worker node
Do you mean the VM's maximum RAM? If yes, every worker node VM should have the same RAM; a VM's RAM and cores are determined by the VM size. For more information, please refer to this link. If you want to get the values from a script, I suggest writing a configuration file, such as:
Standard_DS1_v2 1 3.5
Standard_DS2_v2 2 7
You could then extract the cores and memory with awk; here is an example:
mem=`cat configure.txt | grep "Standard_DS1_v2" | awk '{print $3}'`
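As an alternative to maintaining that lookup table by hand, Ambari's hosts resource also reports hardware details. A rough sketch follows; the host name is a placeholder, and the field names cpu_count and total_mem are how I recall Ambari exposing them, so verify against your cluster's actual API output:
HOST=<worker node FQDN>
curl -u admin:$PASSWORD -sS -G "https://$CLUSTERNAME.azurehdinsight.net/api/v1/clusters/$CLUSTERNAME/hosts/$HOST" > host.txt
cat host.txt | grep '"cpu_count"' | awk -F: '{print $2}' | sed 's/,//g'     # cores on that host
cat host.txt | grep '"total_mem"' | awk -F: '{print $2}' | sed 's/,//g'     # memory, reported in KB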

Setting up a resource-light 3-4 node Cassandra test cluster (e.g. in Linux containers)?

I am trying to see if it is possible to set up a 3 or 4 node Cassandra cluster, with minimal resource requirements, on something like a single VM, either inside Linux containers or directly on the VM using different port numbers or virtual NICs/IPs.
This will be used for an application demonstration where I would like to show datastore high availability, data partitioning, and dynamic addition/removal of a cluster node.
The setup would run on a VM on a laptop, so resources (the vRAM and vCPUs that can be allocated for this purpose) are a constraint. The actual data stored would also be quite limited: say everything in a single keyspace, about 10 tables with around 10 columns each, and 1000 rows.
From your description, it sounds like ccm might be the tool for you. With it, you can create local clusters on your laptop (or in a VM, I suppose), add nodes, delete nodes, and so on. It can be easily installed on Linux, macOS, or Windows, so you don't need to use a VM; that's your choice, though I imagine you would see some performance degradation inside one.
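A minimal sketch of the kind of workflow ccm supports, assuming Cassandra 3.11.4 is a version available to it; the cluster and node names are placeholders:
ccm create demo -v 3.11.4 -n 3 -s      # create and start a 3-node cluster on loopback addresses
ccm status                             # check that all nodes are up
ccm add node4 -i 127.0.0.4 -j 7400 -b  # add a fourth node, bootstrapping it into the ring
ccm node4 start
ccm node2 stop                         # simulate a node failure for the high-availability demo
ccm remove                             # tear everything down afterwards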

Separate cpuset for jobs

Is there a way to restrict cpus and memory for users running scripts directly, but allow more cpus and memory on job submission?
I am running Torque/PBS on an Ubuntu 14.04 server and want to allow "normal" usage of 8 CPUs and 16 GB of RAM, with the rest dedicated as a "mom" resource for the cluster. A normal cgroups/cpuset configuration also restricts the running jobs.
If you configure Torque with --enable-cpuset, the mom will automatically create a cpuset for each job. Torque isn't really equipped to use only part of a machine, but a hack that might work in conjunction with using only half the machine is to specify np= in the nodes file; the mom will then restrict jobs to the first X CPUs.
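A rough sketch of that hack, assuming a 32-core node named node01 where only 24 cores should be handed to Torque jobs, and assuming the default server_priv location (both the counts and the path are assumptions, adjust to your install):
# expose only 24 of node01's cores to the batch system
echo "node01 np=24" | sudo tee -a /var/spool/torque/server_priv/nodes
# restart pbs_server afterwards so the new node definition takes effect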

AWS Amazon offers 160GB of space for a small instance. On booting SUSE Linux the total root partition space is only 10GB

AWS Amazon offers 160GB of space for a small instance. On booting SUSE Linux, the total root partition space I got is 10GB. With df -h I only see /dev/sda1 with 10GB of space. Where is the rest of the 150GB? How can I claim this space? I don't want to use EBS as it costs extra, and 160GB of space suffices for my needs. Please help.
The extra 150GB is given as ephemeral storage, i.e. data on it does not survive instance termination, in contrast to the data on your root storage. During launch, you can select where your ephemeral disks should be made available as a device in your machine (this is the -b option when using the command line, or the "Instance Storage" tab when launching via the AWS console). You can then simply mount it in your running instance.
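A minimal sketch of claiming that space once the instance is running, assuming the ephemeral disk shows up as /dev/sdb (the device name depends on the AMI and the block-device mapping you chose, so check fdisk -l or lsblk first):
sudo mkfs.ext4 /dev/sdb              # format it once; this erases anything already on the disk
sudo mkdir -p /mnt/ephemeral
sudo mount /dev/sdb /mnt/ephemeral
df -h /mnt/ephemeral                 # should now show the extra ~150GB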
