Spawn containers on ACI using @azure/arm-containerinstance - node.js

I am working on a data-processing microservice. The microservice is dockerized and now I want to deploy it. To do that, I am trying to manage containers in Azure Container Instances from an Azure Function written in Node.js.
The first thing I wanted to test was spawning containers within a group. My idea was:
const oldConfig = await client.containerGroups.get(
  'resourceGroup',
  'resourceName'
);
const response = await client.containerGroups.createOrUpdate(
  'resourceGroup',
  'resourceName',
  {
    osType: oldConfig.osType,
    containers: [
      ...oldConfig.containers,
      {
        name: 'test',
        image: 'hello-world',
        resources: {
          requests: {
            memoryInGB: 1,
            cpu: 1,
          },
        },
      },
    ],
  }
);
I've added osType because the docs and the TypeScript interface say it's required, but when I do this I receive the error "to update osType you need to remove and create group containers". When I remove osType, the request succeeds, but nothing changes in ACI. I cannot recreate the whole group for every new container, because I want the containers to process jobs and terminate on their own.

Not all properties can be updated. See the details below, quoted from the documentation:
Not all container group properties can be updated. For example, to change the restart policy of a container, you must first delete the container group, then create it again.
Changes to these properties require container group deletion prior to redeployment:
- OS type
- CPU, memory, or GPU resources
- Restart policy
- Network profile
So the container group will not change after you update the osType. You need to delete the container group and create it again with the changes. Get more details about updating container groups in the documentation.
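If recreating the group is acceptable, a minimal sketch of the delete-then-create flow could look like the following. This is only a sketch: the delete method is deleteMethod in older versions of @azure/arm-containerinstance and beginDeleteAndWait in newer ones, and location must be supplied when the group is created again.
const oldConfig = await client.containerGroups.get('resourceGroup', 'resourceName');

// The group has to be removed first, because osType and similar properties
// can only be set at creation time.
await client.containerGroups.deleteMethod('resourceGroup', 'resourceName');

// Recreate the group with the previous containers plus the new one.
await client.containerGroups.createOrUpdate('resourceGroup', 'resourceName', {
  location: oldConfig.location,
  osType: oldConfig.osType,
  containers: [
    ...oldConfig.containers,
    {
      name: 'test',
      image: 'hello-world',
      resources: { requests: { memoryInGB: 1, cpu: 1 } },
    },
  ],
});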

Related

How to add a security group to an existing RDS with CDK without cyclic-dependency

I have a multi-stack application where I want to deploy an RDS in one stack and then in a later stack deploy a Fargate cluster that connects to the RDS.
Here is how the RDS gets defined:
this.rdsSG = new ec2.SecurityGroup(this, `ecsSG`, {
  vpc: props.vpc,
  allowAllOutbound: true,
});
this.rdsSG.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(5432), 'Ingress 5432');
this.aurora = new rds.ServerlessCluster(this, `rds`, {
  engine: rds.DatabaseClusterEngine.AURORA_POSTGRESQL,
  parameterGroup: rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql10'),
  vpc: props.vpc,
  securityGroups: [this.rdsSG],
  // more properties below
});
With that ingress rule everything works: since both the RDS and Fargate are in the same VPC, they can communicate fine. It worries me, though, that the database is open to the world, even if it is in its own VPC.
const ecsSG = new ec2.SecurityGroup(this, `ecsSG`, {
  vpc: props.vpc,
  allowAllOutbound: true,
});
const service = new ecs.FargateService(this, `service`, {
  cluster,
  desiredCount: 1,
  taskDefinition,
  securityGroups: [ecsSG],
  assignPublicIp: true,
});
How can I remove the ingress rule and instead allow inbound connections to the RDS from that ecsSG, given that it gets deployed later? If I try to make the following call from the deploy stack, I get a cyclic dependency error:
props.rdsSG.connections.allowFrom(ecsSG, ec2.Port.allTcp(), 'Aurora RDS');
Thanks for your help!
This turned out to be easier than I thought: you can just flip the connection. Rather than trying to modify the RDS to accept the ECS security group, use allowTo on the ECS security group to establish a connection to the RDS instance.
ecsSG.connections.allowTo(props.rds, ec2.Port.tcp(5432), 'RDS Instance');
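For illustration, a minimal sketch of the flipped call in the later service stack, assuming the RDS stack exposes the cluster and VPC to the service stack via props (the prop names here are hypothetical); this way only the service stack references the earlier RDS stack, and the RDS stack never has to know about the ECS security group:
// Service stack, deployed after the RDS stack.
const ecsSG = new ec2.SecurityGroup(this, 'ecsSG', {
  vpc: props.vpc,            // VPC shared from the earlier stack
  allowAllOutbound: true,
});

// Flip the direction: instead of modifying the RDS security group to accept
// the ECS one, let the ECS security group open a connection to the cluster.
ecsSG.connections.allowTo(props.rds, ec2.Port.tcp(5432), 'RDS Instance');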
Also, going the other way round, the RDS security group might be better described by the aws_rds module rather than the aws_ec2 module: https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_rds/CfnDBSecurityGroup.html (couldn't post a comment due to low rep).
Just as an additional possibility here: what works for me is not defining any security group at all. Just create the service and the DB, and connect the two:
const service = new ecsPatterns.ApplicationLoadBalancedEc2Service(
  this,
  'app-service',
  {
    cluster,
    ...
  },
);
const dbCluster = new ServerlessCluster(this, 'DbCluster', {
  engine: dbEngine,
  ...
});
dbCluster.connections.allowDefaultPortFrom(service.service);

Ansible to update HKEY on azure batch node

As part of an Ansible workflow, I am looking to update the Windows images of an Azure Batch pool at runtime with Ansible, in order to disable Windows Update.
I have created an Azure Batch account:
- name: Create Batch Account
  azure_rm_batchaccount:
    resource_group: MyResGroup
    name: mybatchaccount
    location: eastus
    auto_storage_account:
      name: mystorageaccountname
    pool_allocation_mode: batch_service
I know for a fact that I can use a start task on the Azure Batch nodes and execute a cmd to set the registry key NoAutoUpdate = 1.
I have an Ansible snippet ready:
- name: "Ensure 'Configure Automatic Updates' is set to 'Disabled'"
win_regedit:
path: HKLM:\Software\Policies\Microsoft\Windows\Windowsupdate\Au
name: "NoAutoUpdate"
data: "1"
type: dword
I would like to execute it at runtime in the Azure Batch pool.
Does anyone know how this can be achieved with Ansible?
To run something on boot in a Batch pool, you should simply include it as part of your start task (https://learn.microsoft.com/en-us/rest/api/batchservice/pool/add#starttask).
In this instance, however, you should likely just make use of the built-in Azure functionality to turn off automatic updates: https://learn.microsoft.com/en-us/rest/api/batchservice/pool/add#windowsconfiguration
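For illustration, a minimal sketch of the relevant fragment of the pool definition (property names follow the pool add REST API linked above; the pool id, VM size and image values are placeholders): setting windowsConfiguration.enableAutomaticUpdates to false disables automatic updates when the pool is created, so no start task or registry edit is needed.
{
  "id": "mypool",
  "vmSize": "standard_d2s_v3",
  "virtualMachineConfiguration": {
    "imageReference": {
      "publisher": "microsoftwindowsserver",
      "offer": "windowsserver",
      "sku": "2019-datacenter"
    },
    "nodeAgentSKUId": "batch.node.windows amd64",
    "windowsConfiguration": {
      "enableAutomaticUpdates": false
    }
  },
  "targetDedicatedNodes": 1
}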

IoTEdge sometimes re-creates the container

We're running IoT Edge modules. Inside our module, we update a bunch of files. We noticed that most of the time, if the host is restarted, the container is restarted and the files we updated still exist.
A few times, however, we noticed that when the host restarted, the container was re-created from the original image and all data changes were lost.
Our understanding is that IoT Edge uses the Docker restart policy "always", which should always preserve the container's data.
I would have the following suggestions:
- Do not store important data on the container's writable layer, i.e. do not rely on the restart policy.
- The reason for rebuilding the container could be that a new version of your module image was deployed, so the container was recreated from the new image.
- Set up your module deployment manifest properly by using the module container createOptions to attach a local volume to the container (createOptions -> HostConfig -> Binds), and store your data there. This will survive any recreation of your module container. For example, something like:
"createOptions": {
"HostConfig": {
"Binds": [
"/app/db:/app/db"
]
}
}
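For reference, a rough sketch of where this sits in the deployment manifest (the module name, image, and bound paths are placeholders); in the raw manifest JSON, createOptions is usually supplied as a serialized string:
"modules": {
  "mymodule": {
    "version": "1.0",
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
      "image": "myregistry.azurecr.io/mymodule:1.0",
      "createOptions": "{\"HostConfig\":{\"Binds\":[\"/app/db:/app/db\"]}}"
    }
  }
}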

Nodejs application in docker swarm service, update has no effect

Short and simple; how do I update a replicated nodejs application in a docker swarm?
Expected behavior: once the update is triggered, the service receives some form of signal, e.g. SIGINT or SIGTERM.
What actually happens: nothing... no signals, no updated service. I have to remove the service and create it again with the updated image.
I'm using dockerode to update the service. The documentation for the Docker API on this subject is broken (one cannot expand the sub-menus, for example UpdateConfig), making it hard to know if I'm missing any additional specification.
If I run the command docker service update <SERVICE>, the expected behavior takes place.
The ForceUpdate flag must be increased by one each time you update, for the update to take place when you do not version your image.
const serviceOptions = { ... }  // desired service spec, elided here
const service = this.docker.getService(serviceName)
const serviceInspected = await this.serviceInspector.inspectService(serviceName)

serviceOptions.registryAuthFrom = 'spec'
// if we do not specify the correct version, we can not update the service
serviceOptions.version = serviceInspected.Version.Index
// it's not documented by docker that we need to increase this force update flag
// by one each time we attempt to update...
serviceOptions.TaskTemplate.ForceUpdate = serviceInspected.Spec.TaskTemplate.ForceUpdate + 1

const response = await service.update(serviceOptions)
response.output.Warnings && this.log.info(response.output.Warnings)
I still can't capture a SIGTERM signal in the container, but at least now I can update my service.

Pyrax API: Error in creating Compute/Memory flavors with boot-volumes

For background: Compute/Memory Nova instances in Rackspace don't come with a local root volume; Rackspace's policy is to create them with an external bootable SSD volume. Now the question:
I am trying to create a Compute flavor instance in Rackspace using the pyrax API, in the way Rackspace does in its UI (https://support.rackspace.com/how-to/boot-a-server-from-a-cloud-block-storage-volume/), as follows:
pyrax.cloudservers.servers.create(hostname, image.id,
                                  flavor.id, block_device_mapping,
                                  security_groups=security_groups,
                                  nics=networks, key_name=key)
where
block_device_mapping = {"vda": "59fb72d5-0b33-46c2-b10b-33fed25c5f74:::1"},
and the long UUID is the volume_id of the volume I create before server creation, using
pyrax.cloud_blockstorage.create(name=volume_name, size=volume_size,
                                volume_type=volume_type).
I get an error saying:
Policy doesn't allow memory_flavor:create:image_backed to be performed.(HTTP 403).
Also, for other flavors which come with a local root volume (needless to say, I don't reference those with the 'block_device_mapping' param), the pyrax API for instance creation works fine.
Here is a little thread on the topic in the pyrax/rackspace repo on GitHub that discusses the issue: https://github.com/rackspace/pyrax/issues/484
Is there something I am missing?
When the bootable volume is created, an image_id (OS image ID) should be specified so the volume can boot:
pyrax.cloud_blockstorage.create(name=volume_name, size=volume_size,
                                volume_type=volume_type, image=image.id)
Also, the block_device_map needs some more params:
block_device_map = [{
    'boot_index': '0',
    'source_type': 'image',
    'destination_type': 'volume',
    'delete_on_termination': True,
    'uuid': image.id,
    'volume_size': int(requested_size),
    'device_name': 'vda'
}]
And here's the final catch for actually not getting a 403 Forbidden error:
While creating the server instance, don't specify the image ID again in the pyrax call params, otherwise pyrax gets confused about which image to boot the instance from. Hence, just pass None for the image in the params for pyrax.cloudservers.servers.create(), as follows:
pyrax.cloudservers.servers.create(
    hostname,
    image=None,
    flavor=flavor.id,
    block_device_mapping_v2=block_device_map,
    security_groups=security_groups,
    nics=networks,
    key_name=key)
