Expand Root Volume when Creating EC2 from API - Node.js

If you are creating an EC2 instance through the Node.js API call runInstances, I found it tricky to expand the root volume when the AMI comes with low disk space (usually 8 GB).
Using this block adds another EBS volume to the instance without expanding the root volume:
BlockDeviceMappings: [
  {
    DeviceName: "/dev/sdh",
    Ebs: {
      VolumeSize: 100
    }
  }
],
What can we do to expand the root volume?
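For reference, a root volume can also be sized at launch time by naming the AMI's root device in BlockDeviceMappings instead of a secondary device. This is only a sketch: the root device name varies by AMI ("/dev/xvda" below is an assumption, as is the AMI id — check the AMI's actual root device name first, e.g. with describeImages):

```javascript
// Sketch: expanding the root volume at launch by overriding the AMI's
// root device mapping. "/dev/xvda" and the AMI id are assumptions --
// substitute the real root device name and image id for your AMI.
const params = {
  ImageId: "ami-12345678",      // placeholder AMI id
  InstanceType: "t2.micro",
  MinCount: 1,
  MaxCount: 1,
  BlockDeviceMappings: [
    {
      DeviceName: "/dev/xvda",  // the AMI's root device, not a new device
      Ebs: {
        VolumeSize: 100,        // desired root size in GiB
        DeleteOnTermination: true
      }
    }
  ]
};
// ec2.runInstances(params, (err, data) => { /* ... */ });
```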

Have you tried modifyVolume(params = {}, callback) ⇒ AWS.Request? You can also modify volume attributes with modifyVolumeAttribute(params = {}, callback) ⇒ AWS.Request.
Both are mentioned in the same documentation link you have shared.
Thanks
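The modifyVolume call suggested above can be sketched as follows (the volume id is a placeholder; note that after the API call succeeds, the filesystem inside the instance still has to be grown, e.g. with growpart and resize2fs):

```javascript
// Sketch: growing an existing EBS volume with modifyVolume.
// "vol-0abcdef1234567890" is a placeholder volume id.
const params = {
  VolumeId: "vol-0abcdef1234567890", // placeholder volume id
  Size: 100                          // new size in GiB (must be >= current size)
};
// ec2.modifyVolume(params, (err, data) => { /* ... */ });
```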

I have a similar issue when calling run-instances: I wanted to exclude some volumes, but that's not possible with run-instances, even though the AWS Console UI allows both expanding and excluding volumes.
You can launch the instance with run-instances, then modify the volume and increase its size in a separate command.
If you find any other solution, please let me know by answering the question below:
Exclude EBS volume while creating an instance from the AMI


Node-RED instance in Kubernetes with custom settings.js and other files

I am building a service which creates on-demand Node-RED instances on Kubernetes. This service needs to have custom authentication, and some other service-specific data in a JSON file.
Every instance of Node-RED will have a Persistent Volume associated with it, so one way I thought of doing this was to attach the PVC to a pod, copy the files into the PV, and then start the Node-RED deployment over the modified PVC.
I use the following script to accomplish this:
from os import path
import tarfile
import tempfile

from kubernetes.stream import stream

def paste_file_into_pod(self, src_path, dest_path):
    dir_name = path.dirname(src_path)
    bname = path.basename(src_path)
    # Tar the file inside the pod and stream the archive back out
    exec_command = ['/bin/sh', '-c',
                    'cd {src}; tar cf - {base}'.format(src=dir_name, base=bname)]
    with tempfile.TemporaryFile() as tar_buffer:
        resp = stream(self.k8_client.connect_get_namespaced_pod_exec,
                      self.kube_methods.component_name, self.kube_methods.namespace,
                      command=exec_command,
                      stderr=True, stdin=True,
                      stdout=True, tty=False,
                      _preload_content=False)
        print(resp)
        while resp.is_open():
            resp.update(timeout=1)
            if resp.peek_stdout():
                out = resp.read_stdout()
                tar_buffer.write(out.encode('utf-8'))
            if resp.peek_stderr():
                print('STDERR: {0}'.format(resp.read_stderr()))
        resp.close()
        tar_buffer.flush()
        tar_buffer.seek(0)
        with tarfile.open(fileobj=tar_buffer, mode='r:') as tar:
            subdir_and_files = [tarinfo for tarinfo in tar.getmembers()]
            tar.extractall(path=dest_path, members=subdir_and_files)
This seems like a very messy way to do this. Can someone suggest a quick and easy way to start Node-RED in Kubernetes with a custom settings.js and some additional config files?
The better approach is not to use a PV for flow storage, but to use a storage plugin to save flows in a central database. Several such plugins already exist, using databases like MongoDB.
You can extend the existing Node-RED container to include a modified settings.js in /data that includes the details for the storage and authentication plugins, and use environment variables to set the instance-specific values at start-up.
Examples here: https://www.hardill.me.uk/wordpress/tag/multi-tenant/
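A minimal sketch of that approach (assumptions: the stock nodered/node-red image, a settings.js you maintain yourself that reads instance-specific values from environment variables, and a placeholder storage-plugin package name):

```dockerfile
# Sketch: extend the official Node-RED image with a customised settings.js.
FROM nodered/node-red

# Install the storage/authentication plugins referenced in settings.js
# (package name is a placeholder -- substitute the plugin you actually use).
RUN npm install node-red-contrib-your-storage-plugin

# Bake the customised settings into the image; /data is the Node-RED
# user directory in this image.
COPY settings.js /data/settings.js
```

Each tenant instance can then be configured purely through environment variables at deploy time, with no per-instance file copying.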

Terraform AWS EKS worker node spot instance

I am following this blog to run Terraform to spin up an EKS cluster:
https://github.com/berndonline/aws-eks-terraform/blob/master/
I just want to change my EC2 worker node type to a spot instance:
https://github.com/berndonline/aws-eks-terraform/blob/master/eks-worker-nodes.tf
I googled and narrowed it down to the launch configuration section.
Any ideas how to change the EC2 type to a spot instance?
Please go through the official document for the resource aws_launch_configuration;
it already gives a sample of how to set a spot instance:
resource "aws_launch_configuration" "as_conf" {
  image_id      = "${data.aws_ami.ubuntu.id}"
  instance_type = "m4.large"
  spot_price    = "0.001"

  lifecycle {
    create_before_destroy = true
  }
}
Notes:
Spot instance prices keep changing depending on demand. If you are not familiar with spot pricing, use the instance type's on-demand price as your maximum price.
Even if you set the on-demand price, AWS only charges the current spot price (often several times lower), unless spot capacity runs out. AWS will never charge more than the price you set.
Please also go through the AWS documentation for details: https://aws.amazon.com/ec2/spot/pricing/

How to detach/remove EBS volumes from AWS EMR using Terraform?

Currently in Terraform, the ebs_config option is used to specify the size and number of EBS volumes attached to an instance group in EMR. When no ebs_config is specified, a default 32 GB EBS volume is attached to the core node in addition to the root volume. My requirement is to have no EBS volumes attached to the core node. How do I specify that in Terraform?
Currently I use the following code:
name           = "CoreInstanceGroup"
instance_role  = "CORE"
instance_type  = "m4.xlarge"
instance_count = "1"

ebs_config {
  size                 = 1
  type                 = "gp2"
  volumes_per_instance = 1
}
Terraform doesn't allow size and volumes_per_instance to be 0.
I managed to figure out that this is not a Terraform issue but how AWS EMR works. When you specify an 'EBS only' instance type (say m2.4xlarge), EMR automatically attaches an EBS storage volume in addition to the root volume. If you specify an instance type with local SSD storage instead of 'EBS only' (say r3.xlarge), EMR doesn't attach an extra EBS volume.

Pyrax API: Error in creating Compute/Memory flavors with boot-volumes

For background knowledge: Compute/Memory nova instances in Rackspace don't come with a local root volume; Rackspace's policy is to create them with an external bootable SSD volume. Now the question:
I am trying to create a Compute-flavor instance in Rackspace using the pyrax API, the way Rackspace does in its UI (https://support.rackspace.com/how-to/boot-a-server-from-a-cloud-block-storage-volume/), as follows:
pyrax.cloudservers.servers.create(hostname, image.id, flavor.id,
                                  block_device_mapping,
                                  security_groups=security_groups,
                                  nics=networks, key_name=key)
where
block_device_mapping = {"vda": "59fb72d5-0b33-46c2-b10b-33fed25c5f74:::1"}
and the long UUID is the volume_id of the volume I create before server creation using
pyrax.cloud_blockstorage.create(name=volume_name, size=volume_size,
                                volume_type=volume_type)
I get an error saying:
Policy doesn't allow memory_flavor:create:image_backed to be performed. (HTTP 403)
For other flavors which come with a local root volume (needless to say, I don't reference those with the 'block_device_mapping' param), the pyrax instance-creation call works fine.
Here is a little thread on the topic in the pyrax/rackspace repo on GitHub that discusses the issue: https://github.com/rackspace/pyrax/issues/484
Is there something I am missing?
When a bootable volume is created, image_id (the OS image id) should be specified so the volume can boot:
pyrax.cloud_blockstorage.create(name=volume_name, size=volume_size,
                                volume_type=volume_type, image=image.id)
Also, the block_device_map needs some more params:
block_device_map = [{
    'boot_index': '0',
    'source_type': 'image',
    'destination_type': 'volume',
    'delete_on_termination': True,
    'uuid': image.id,
    'volume_size': int(requested_size),
    'device_name': 'vda'
}]
And here's the final catch for actually avoiding the 403 Forbidden error:
when creating the server instance, don't specify the image id again in the pyrax call params, otherwise pyrax gets confused about which image to boot the instance from. Just pass None as the image in pyrax.cloudservers.servers.create():
pyrax.cloudservers.servers.create(
    hostname,
    image=None,
    flavor=flavor.id,
    block_device_mapping_v2=block_device_map,
    security_groups=security_groups,
    nics=networks,
    key_name=key)

Measure resource usage of Docker container on exit

I create containers which compile/interpret a user's code and pass the result back to the browser (just like JSFiddle). Now, I need to know how much CPU and memory have been used to execute that code. I don't need it in real time but on the container's exit, so that I can pass these two parameters back to the client along with the others.
I tried using pseudo-files as described here, but there is no such location on my server (Ubuntu 14.04). How can I measure these parameters?
Docker has a stats API:
https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/get-container-stats-based-on-resource-usage
"cpu_usage" : {
    "percpu_usage" : [
        8646879,
        24472255,
        36438778,
        30657443
    ],
    ...
}
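A sketch of consuming that payload (a single snapshot can be fetched with stream=false on the stats endpoint just before the container exits). The numbers below are a hand-made stub shaped like the excerpt above, not real measurements, and the total_usage/memory_stats fields are taken from the documented stats response:

```python
import json

# Hand-made stub shaped like the stats excerpt above (not real data);
# per-CPU counters in the stats response are nanoseconds of CPU time.
sample_stats = json.loads("""
{
  "cpu_usage": {
    "percpu_usage": [8646879, 24472255, 36438778, 30657443],
    "total_usage": 100215355
  },
  "memory_stats": {
    "max_usage": 6651904
  }
}
""")

def summarize(stats):
    """Return (total CPU time in ns, peak memory in bytes) from a stats payload."""
    cpu_total = stats["cpu_usage"]["total_usage"]
    mem_peak = stats["memory_stats"]["max_usage"]
    return cpu_total, mem_peak

cpu_ns, mem_bytes = summarize(sample_stats)
```

These two numbers are exactly what the question wants to return to the client alongside the compile/run result.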
