Pyrax API: Error in creating Compute/Memory flavors with boot-volumes - rackspace-cloud

For background: Compute and Memory nova instances in Rackspace don't come with a local root volume; Rackspace's policy is to create them with an external bootable SSD volume. Now the question:
I am trying to create a Compute flavor instance in Rackspace using the pyrax API, the way Rackspace does it in its UI (https://support.rackspace.com/how-to/boot-a-server-from-a-cloud-block-storage-volume/), as follows:
pyrax.cloudservers.servers.create(hostname, image.id, flavor.id,
                                  block_device_mapping,
                                  security_groups=security_groups,
                                  nics=networks, key_name=key)
where
block_device_mapping = {"vda": "59fb72d5-0b33-46c2-b10b-33fed25c5f74:::1"},
the long identifier is the volume_id (a UUID) of the volume I create before server creation using
pyrax.cloud_blockstorage.create(name=volume_name, size=volume_size,
volume_type=volume_type).
I get an error saying:
Policy doesn't allow memory_flavor:create:image_backed to be performed.(HTTP 403).
Also, for other flavors that come with a local root volume (needless to say, I don't reference those with the 'block_device_mapping' param), the pyrax API for instance creation works fine.
Here is a thread on the topic in the pyrax/rackspace repo on GitHub that discusses the issue: https://github.com/rackspace/pyrax/issues/484
Is there something I am missing?

When the bootable volume is created, image_id (the OS image id) should be specified so the volume can boot:
pyrax.cloud_blockstorage.create(name=volume_name, size=volume_size,
                                volume_type=volume_type, image=image.id)
The block_device_map also needs some more params:
block_device_map = [{
    'boot_index': '0',
    'source_type': 'image',
    'destination_type': 'volume',
    'delete_on_termination': True,
    'uuid': image.id,
    'volume_size': int(requested_size),
    'device_name': 'vda'
}]
And here is the final catch for actually avoiding the 403 Forbidden error:
When creating the server instance, don't specify the image id again in the pyrax call params, otherwise pyrax gets confused about which image to boot the instance from. Instead, pass None for the image in the params for pyrax.cloudservers.servers.create():
pyrax.cloudservers.servers.create(
    hostname,
    image=None,
    flavor=flavor.id,
    block_device_mapping_v2=block_device_map,
    security_groups=security_groups,
    nics=networks,
    key_name=key)
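Packaging the mapping as a helper makes the shape easier to reuse; this is just a sketch, and the image id and size values below are placeholders rather than real resources:

```python
# Sketch: build the block_device_mapping_v2 entry described above.
# `image_id` and `requested_size` are placeholder values, not real resources.

def build_boot_volume_mapping(image_id, requested_size, device_name="vda"):
    """Return a block_device_mapping_v2 list that boots from a new volume."""
    return [{
        "boot_index": "0",            # first (and only) boot device
        "source_type": "image",       # create the volume from an OS image
        "destination_type": "volume", # back the instance with a block-storage volume
        "delete_on_termination": True,
        "uuid": image_id,
        "volume_size": int(requested_size),
        "device_name": device_name,
    }]

mapping = build_boot_volume_mapping("59fb72d5-0b33-46c2-b10b-33fed25c5f74", "50")
print(mapping[0]["volume_size"])  # 50
```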

Related

Spawn containers on ACI using #azure/arm-containerinstance

I am working on a data-processing microservice. I have this microservice dockerized and now I want to deploy it. To achieve that, I am trying to manage containers in Azure Container Instances using an Azure Function written in node.js.
The first thing I wanted to test is spawning containers within a group. My idea was:
const oldConfig = await client.containerGroups.get(
  'resourceGroup',
  'resourceName'
);
const response = await client.containerGroups.createOrUpdate(
  'resourceGroup',
  'resourceName',
  {
    osType: oldConfig.osType,
    containers: [
      ...oldConfig.containers,
      {
        name: 'test',
        image: 'hello-world',
        resources: {
          requests: {
            memoryInGB: 1,
            cpu: 1,
          },
        },
      },
    ],
  }
);
I've added osType because the docs and the interface say it's required, but when I include it I receive the error "to update osType you need to remove and create group containers". When I remove osType, the request is successful, but the ACI does not change. I cannot recreate the whole group for every new container, because I want them to process jobs and terminate by themselves.
Not all properties can be updated. See the details below:
Not all container group properties can be updated. For example, to
change the restart policy of a container, you must first delete the
container group, then create it again.
Changes to these properties require container group deletion prior to
redeployment:
OS type
CPU, memory, or GPU resources
Restart policy
Network profile
So the container group will not change after you update the osType. You need to delete the container group and create it again with the changes. See the Update documentation for more details.
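To illustrate the delete-and-recreate approach, here is a minimal sketch of building the new group spec from the old one (written in Python for brevity; the same payload shape applies to the node.js call). The group, container names, and images are hypothetical:

```python
# Sketch: build the payload for recreating a container group with one extra
# container. In practice you would delete the existing group first, then pass
# this payload to createOrUpdate. All names/images here are hypothetical.

def with_extra_container(old_group, name, image, memory_gb=1, cpu=1):
    """Return a new container-group spec that keeps the old containers
    and appends one more; osType is carried over unchanged."""
    return {
        "osType": old_group["osType"],
        "containers": old_group["containers"] + [{
            "name": name,
            "image": image,
            "resources": {"requests": {"memoryInGB": memory_gb, "cpu": cpu}},
        }],
    }

old = {"osType": "Linux", "containers": [{"name": "worker", "image": "myapp"}]}
new = with_extra_container(old, "test", "hello-world")
print(len(new["containers"]))  # 2
```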

Invalid resource name creating a connection to azure storage

I'm trying to create a project in the labeling tool from Azure Form Recognizer. I have successfully deployed the web app, but I'm unable to start a project. I get this error every time I try:
I have tried with several app instances and changing the project name and connection name; none of those work. The only common factor here is that the error seems related to the connection.
As I see it:
1) I can either start a new project or use one on the cloud:
First I tried to create a new project:
I filled the fields with these values:
Display Name: Test-form
Source Connection: <previously created connection>
Folder Path: None
Form Recognizer Service Uri: https://XXX-test.cognitiveservices.azure.com/
API Key: XXXXX
Description: None
And got the error from the question's title:
"Invalid resource name creating a connection to azure storage "
I tried several combinations of names; none of them worked.
Then I tried the option "Open a cloud project" and got the same error instantly, hence I deduce the issue is with the connection settings.
Now, in the connection settings I have this:
At first glance, since the values are accepted and the connection is created, I guess it is correct, but it is the only point of failure I can think of.
Regarding the storage container settings, I added the required CORS configuration, and I have used the container to train models with Form Recognizer, so that part does work.
At this point I am pretty much stuck, since the error message does not give me many clues about where the error is.
I was facing a similar error today
You have to add the container name before "?sv..." in your SAS URI in the connection settings:
https://****.blob.core.windows.net/**trainingdata**?sv=2019-10-10..
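A small sketch of that fix as a string helper (the account name and SAS token below are made-up placeholders):

```python
# Sketch: insert the container name into a SAS URI that lacks it.
# The account name and SAS token here are fabricated placeholders.

def add_container_to_sas_uri(sas_uri, container):
    """Insert `container` between the blob endpoint and the '?sv=' token."""
    base, sep, token = sas_uri.partition("?")
    return base.rstrip("/") + "/" + container + sep + token

uri = "https://myaccount.blob.core.windows.net?sv=2019-10-10&sig=abc"
print(add_container_to_sas_uri(uri, "trainingdata"))
# https://myaccount.blob.core.windows.net/trainingdata?sv=2019-10-10&sig=abc
```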

Expand Root Volume when Creating Ec2 from API

If you are creating an EC2 instance from the Node.js API call RunInstances, I found it tricky to expand the root volume when the AMI comes with low disk space (usually 8 GB).
If you use the following block, it will add another EBS volume to the EC2 instance without expanding the root volume:
BlockDeviceMappings: [
  {
    DeviceName: "/dev/sdh",
    Ebs: {
      VolumeSize: 100
    }
  }
],
What can we do to expand the root volume?
Have you tried modifyVolume(params = {}, callback) ⇒ AWS.Request? You can also modify volume attributes with modifyVolumeAttribute(params = {}, callback) ⇒ AWS.Request.
Both are mentioned in the same documentation link you have shared.
Thanks
I have a similar issue when calling run-instances: I wanted to exclude some volumes, but that's not possible in run-instances. If you do the operation through the AWS Console, the UI does allow both expanding the volume and excluding volumes.
You can launch the instance using run-instances and then modify the volume to increase its size in a separate command.
If you find any other solution, please let me know by answering the question below:
Exclude EBS volume while create instance from the AMI
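For the original root-volume question, one launch-time alternative to a post-launch modifyVolume is to map the AMI's root device name itself with a larger size in BlockDeviceMappings. A sketch of that parameter shape in Python; note the "/dev/xvda" device name is an assumption and varies per AMI:

```python
# Sketch: a BlockDeviceMappings entry that resizes the AMI's root device at
# launch instead of attaching a second volume. "/dev/xvda" is an assumed root
# device name; check your AMI's actual root device before using it.

def root_volume_mapping(root_device_name, size_gb):
    """Return a BlockDeviceMappings list targeting the AMI's root device."""
    return [{
        "DeviceName": root_device_name,  # must match the AMI's root device
        "Ebs": {"VolumeSize": size_gb, "DeleteOnTermination": True},
    }]

params = root_volume_mapping("/dev/xvda", 100)
print(params[0]["Ebs"]["VolumeSize"])  # 100
```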

Pulling image from ECR via docker-py

I have a script that retrieves a login for ECR, authenticates a DockerClient instance with the login credentials (reauth set to True), and then attempts to pull a nominated container image.
The code seems to work perfectly when running on my local machine interacting with docker daemon on an EC2 instance, but when running from the EC2 instance I am constantly getting
404 Client Error: Not Found ("repository XXXXXXXX.dkr.ecr.eu-west-2.amazonaws.com/autohld-runner not found: does not exist or no pull access")
The same repo is being used for both executing the code locally and remotely on the EC2 instance. I have tried setting the access to the image within ECR to allow pull for both everyone and my AWS ID. I have granted the role assigned to the EC2 instance Full Admin access also. All with no joy.
If I perform the same tasks on the EC2 instance via command line with the exact same repo URI (copied from the error), it works with no issue.
Is there something I am missing within docker-py?
url = "tcp://127.0.0.1:2375"
dockerd = docker.DockerClient(base_url=url, version='auto')
dockerd.login(username=ecr.username, password=ecr.password, email='none', registry=ecr.registry, reauth=True)
dockerd.images.pull(ecr.get_repo(instance.tags['Container']), tag='latest')
get_repo returns the full URI as reported in the error message, the Container element is the name 'autohld-runner'
Thanks
It seems that if the registry has been accessed via the CLI, then an auth token is cached and Docker remembers it, allowing subsequent calls to work. However, in this case the instance is starting up completely fresh and using the login method within docker-py.
That doesn't seem to pass the credentials on to the pull. I have found that passing a dictionary of auth parameters via the auth_config named argument works:
auth_creds = {'username': ecr.username, 'password': ecr.password}
dockerd.images.pull(ecr.get_repo(instance.tags['Container']), tag='latest', auth_config=auth_creds)
HTH
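For completeness, the ECR credentials come back base64-encoded as "user:password" from GetAuthorizationToken; a small helper can turn that token into the auth_config dict shown above. The token below is a fabricated example, not real credentials:

```python
import base64

# Sketch: decode an ECR authorization token into the auth_config dict that
# docker-py's pull() accepts. The sample token is fabricated for illustration.

def ecr_auth_config(authorization_token):
    """Decode the base64 'user:password' token returned by ECR's
    GetAuthorizationToken and return docker-py auth_config kwargs."""
    decoded = base64.b64decode(authorization_token).decode()
    user, _, password = decoded.partition(":")
    return {"username": user, "password": password}

token = base64.b64encode(b"AWS:s3cr3t").decode()
print(ecr_auth_config(token))  # {'username': 'AWS', 'password': 's3cr3t'}
```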

Measure resource usage of Docker container on exit

I create containers which compile/interpret a user's code and pass the result back to the browser (just like JSFiddle). Now, I need to know how much CPU and memory have been used to execute that code. I don't need it in real time but on the container's exit, so that I can pass these two parameters back to the client with the others.
I tried using pseudo-files like here, but there is no such location on my server (Ubuntu 14.04). How can I measure these parameters?
Docker has a stats API:
https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/get-container-stats-based-on-resource-usage
The response includes, for example:
"cpu_usage" : {
"percpu_usage" : [
8646879,
24472255,
36438778,
30657443
],
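From two consecutive stats samples you can derive a CPU-usage percentage, roughly the formula the docker CLI applies. A sketch with illustrative numbers (the field names follow the stats API's cpu_stats object):

```python
# Sketch: compute a CPU-usage percentage from two consecutive 'cpu_stats'
# samples returned by the Docker stats API. The sample numbers are invented.

def cpu_percent(prev, cur):
    """prev/cur are 'cpu_stats' dicts from the stats API."""
    cpu_delta = (cur["cpu_usage"]["total_usage"]
                 - prev["cpu_usage"]["total_usage"])
    sys_delta = cur["system_cpu_usage"] - prev["system_cpu_usage"]
    if sys_delta <= 0:
        return 0.0
    ncpus = len(cur["cpu_usage"]["percpu_usage"])
    return (cpu_delta / sys_delta) * ncpus * 100.0

prev = {"cpu_usage": {"total_usage": 100}, "system_cpu_usage": 1000}
cur = {"cpu_usage": {"total_usage": 200, "percpu_usage": [0, 0]},
       "system_cpu_usage": 2000}
print(cpu_percent(prev, cur))  # 20.0
```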
