Refresh IP address for Azure VM via REST API

I am trying to use the REST API to change the public IP address of my Ubuntu virtual machine on Azure.
In the web interface, stopping and starting the VM usually causes the public IP to change. However, just stopping and starting the VM with curl requests to the API does not trigger an IP change.
I can request the current status of the IP configuration using a GET request (see the docs here), but I cannot find any function to refresh it. I also tried setting the IP to static and back to dynamic before turning the VM back on, but that did not work either.
I found this similar question here, but when I tried that approach, I got the following error message:
{ "error": {
"code": "IpConfigDeleteNotSupported",
"message": "IP Configuration ipconfig1 cannot be deleted. Deletion and renaming of primary IP Configuration is not supported",
"details": [] }
I have also created a secondary IP configuration. The first one is called ipconfig1; the second one I named "alternative". This seems to be a second network interface, and I have associated a second IP address with it. But I am still getting the same error.
My final request looks like this:
curl -X PUT \
  -H "Authorization: Bearer MYTOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "NETWORKINTERFACE542",
        "id": "GROUP",
        "location": "westeurope",
        "properties": {
          "provisioningState": "Succeeded",
          "ipConfigurations": [
            {
              "name": "alternative",
              "properties": {
                "privateIPAllocationMethod": "Dynamic",
                "subnet": {
                  "id": "/subscriptions/xx-xx-xx-xx/resourceGroups/GROUP/providers/Microsoft.Network/virtualNetworks/GROUP-vnet/subnets/default"
                },
                "primary": true,
                "privateIPAddressVersion": "IPv4"
              }
            }
          ],
          "dnsSettings": {
            "dnsServers": [],
            "appliedDnsServers": []
          },
          "enableAcceleratedNetworking": true,
          "enableIPForwarding": false
        },
        "type": "Microsoft.Network/networkInterfaces"
      }' \
  "https://management.azure.com/subscriptions/xx-xx-xx-xx/resourceGroups/GROUP/providers/Microsoft.Network/networkInterfaces/NETWORKINTERFACE542?api-version=2020-07-01"
(Where the CAPS terms are stand-ins for my actual variable names)
I am still getting the same error, even though I am not even referencing ipconfig1 in my request.
Is there any way to achieve an IP reset?

As you mentioned: in the web interface, stopping and starting the VM usually causes the public IP to change.
Generally, the stop operation in the web UI actually performs a deallocate operation, so you need to use the REST API Deallocate and Start operations to trigger a public IP address change.
Virtual Machines - Deallocate
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/deallocate?api-version=2020-12-01
Virtual Machines - Start
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/start?api-version=2020-12-01
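For example, a minimal sketch with curl, mirroring the placeholder style used in the question (MYTOKEN, GROUP, and MYVM are stand-ins):

# Deallocate the VM; this releases a dynamically allocated public IP
curl -X POST -d '' -H "Authorization: Bearer MYTOKEN" \
  "https://management.azure.com/subscriptions/xx-xx-xx-xx/resourceGroups/GROUP/providers/Microsoft.Compute/virtualMachines/MYVM/deallocate?api-version=2020-12-01"

# Once deallocation has completed, start the VM again;
# a new dynamic public IP is assigned on start
curl -X POST -d '' -H "Authorization: Bearer MYTOKEN" \
  "https://management.azure.com/subscriptions/xx-xx-xx-xx/resourceGroups/GROUP/providers/Microsoft.Compute/virtualMachines/MYVM/start?api-version=2020-12-01"

Both calls are asynchronous (they return 202 Accepted), so wait for the deallocation to finish, for example by polling the VM's instance view, before issuing the start request.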

Related

Creating a nested Python dictionary

I'm working on converting the nested Python dictionary from whatportistoll to a new format with a new organisation of keys, but I'm running into a key-skipping problem.
Here is the format of the existing nested dictionary:
{
  "_default": {
    "0": {
      "name": "Service name",
      "port": "Port Number",
      "protocol": "Transport Protocol",
      "description": "Description"
    },
    "1": {
      "name": "",
      "port": "0",
      "protocol": "tcp,udp",
      "description": "Port 0 is reserved by IANA, it is technically invalid to use, but possible. It is sometimes used to fingerprint machines, because different operating systems respond to this port in different ways. Some ISPs may block it because of exploits. Port 0 can be used by applications when calling the bind() command to request the next available dynamically allocated source port number."
    },
    ...
  }
}
Here is the targeted format:
{
  "0": {
    "tcp": {
      "name": "",
      "port": "0",
      "protocol": "tcp,udp",
      "description": "Port 0 is reserved by IANA, it is technically invalid to use, but possible. It is sometimes used to fingerprint machines, because different operating systems respond to this port in different ways. Some ISPs may block it because of exploits. Port 0 can be used by applications when calling the bind() command to request the next available dynamically allocated source port number."
    },
    "udp": {
      "name": "",
      "port": "0",
      "protocol": "tcp,udp",
      "description": "Port 0 is reserved by IANA, it is technically invalid to use, but possible. It is sometimes used to fingerprint machines, because different operating systems respond to this port in different ways. Some ISPs may block it because of exploits. Port 0 can be used by applications when calling the bind() command to request the next available dynamically allocated source port number."
    }
  },
  "1": {
    "tcp": {
      "name": "tcpmux",
      "port": "1",
      "protocol": "tcp",
      "description": "Scans against this port are commonly used to test if a machine runs SGI Irix (as SGI is the only system that typically has this enabled). This service is almost never used in practice.RFC1078 - TCPMUX acts much like Sun's portmapper, or Microsoft's end-point mapper in that it allows services to run on arbitrary ports. In the case of TCPMUX, however, after the \"lookup\" phase, all further communication continues to run over that port.builtins.c in Xinetd before 2.3.15 does not check the service type when the tcpmux-server service is enabled, which exposes all enabled services and allows remote attackers to bypass intended access restrictions via a request to tcpmux port 1 (TCP/UDP). References: [CVE-2012-0862] [BID-53720] [OSVDB-81774]Trojans that use this port: Breach.2001, SocketsDeTroieAlso see: CERT: CA-95.15.SGI.lp.vul"
    },
    "udp": {}
  },
  ...
}
Could you please help me resolve this issue?
Your help is much appreciated.
Best regards,
Jihane
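A sketch of one way to perform this conversion (assuming the record keyed "0" is a header row that merely labels the columns, and that a record should be copied under each protocol named in its comma-separated protocol field, leaving the other protocol's entry empty):

def convert(old):
    # Build the per-port, per-protocol layout from the flat "_default" table.
    new = {}
    for row in old["_default"].values():
        # Skip the header row that only labels the columns.
        if row.get("port") == "Port Number":
            continue
        port = row["port"]
        protocols = {p.strip() for p in row["protocol"].split(",")}
        # setdefault merges rows that describe the same port under different protocols.
        entry = new.setdefault(port, {"tcp": {}, "udp": {}})
        for proto in ("tcp", "udp"):
            if proto in protocols:
                entry[proto] = dict(row)  # copy the full record under each matching protocol
    return new

new_format = convert(old_format)

Applied to the first snippet, this produces the targeted per-port layout shown above, with "udp": {} for tcp-only entries.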

Unable to scale Azure VMSS beyond 15 instances

We have a few Azure VMSS deployments in our subscription, created over the course of a couple of years. The earliest one can scale up to about 50 instances, but one created a month ago cannot scale past 15 instances. When I try to do so, I get:
Failed to update autoscale configuration for 't0'. {
"error": {
"details": [],
"code": "InboundNatPoolFrontendPortRangeSmallerThanRequestedPorts",
"message": "The frontend port range for the inboundNATpool /subscriptions/a48dea64-5847-4d79-aaa6-036530430809/resourceGroups/int-aowappcompat/providers/Microsoft.Network/loadBalancers/LB-int-aowappcompat-t0/inboundNatPools/EtwListenerNatPool-qs8az5dmgu is smaller than the requested number of ports 28 in VM scale set /subscriptions/a48dea64-5847-4d79-aaa6-036530430809/resourceGroups/INT-AOWAPPCOMPAT/providers/Microsoft.Compute/virtualMachineScaleSets/t0."
}
}.
I've tried to find an answer to how to fix this, but there's virtually nothing out there for InboundNatPoolFrontendPortRangeSmallerThanRequestedPorts except for an unhelpful Stack Overflow answer. We've gone through the ARM template as well as all the various UIs for the load balancers, public IP addresses, etc., and diff'd the old and new ARM templates trying to find the source of the disparity, with no luck. Unfortunately, I'm not a networking whiz, so my knowledge here is fairly shallow.
UPDATE: Here's a relevant (maybe?) snippet from my template:
"inboundNatPools": [{
"name": "LoadBalancerBEAddressNatPool",
"properties": {
"backendPort": "3389",
"frontendIPConfiguration": {
"id": "[variables('lbIPConfig0')]"
},
"frontendPortRangeEnd": "4500",
"frontendPortRangeStart": "3389",
"protocol": "tcp"
}
}
]
I believe this is not a duplicate of the referenced question, as my problem doesn't seem to be an issue with the port range being too small. As you can see from the snippet, the range covers 1111 ports, and I can't even scale to 16 instances. Similarly, comments about overprovisioning don't seem relevant either, as with 1111 ports I should be able to reach 16 instances without issue. Of course, there may be something I'm not understanding here.
Any tips? Thanks!
Scale sets currently default to overprovisioning VMs, and I have faced the same issue. To avoid it, you could wait for the VM instances to reach a successfully provisioned, running state when you manually scale out. Also, make sure you do not exceed the core quota limits of your subscription.
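If overprovisioning is indeed what consumes your NAT pool ports, you can also disable it on the scale set itself. A sketch of the relevant ARM template fragment (only the overprovision flag is the point here; the rest of your resource stays as it is):

{
  "type": "Microsoft.Compute/virtualMachineScaleSets",
  "apiVersion": "2020-12-01",
  "name": "t0",
  "properties": {
    "overprovision": false
  }
}

With overprovisioning disabled, the scale set creates exactly the requested number of VMs, so the inbound NAT pool no longer needs frontend ports for the temporary surplus instances that overprovisioning spins up and later deletes.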

Subscribe to all tenants using a wildcard in Eclipse Hono

I'm trying to consume data from Hono. I do so by following the "Starting a consumer" guide in the Hono documentation.
I'm currently trying to subscribe to all tenants by adding --tenant.id=* at the end of the mvn command. This results in the following command:
mvn spring-boot:run -Drun.arguments=--hono.client.host=localhost,--hono.client.username=consumer#HONO,--hono.client.password=verysecret,--destination.TopicTemplate=gw/\!{tenant}/\!{device}/alp,--destination.Host=localhost,--destination.Port=11883,--tenant.id=*
I'm not getting any messages when I subscribe like this. When I subscribe using the example command (only for the DEFAULT_TENANT), I do consume the messages.
The current user permission looks like this:
"consumer#HONO": {
"mechanism": "PLAIN",
"password": "verysecret",
"authorities": [ "application" ]
}
The current application role looks like this:
"application": [
{
"resource": "telemetry/*",
"activities": [ "READ" ]
},
{
"resource": "event/*",
"activities": [ "READ" ]
},
{
"resource": "control/*",
"activities": [ "READ", "WRITE" ]
}
Both of them are still the original ones from Hono github.
EDIT: The consumer also subscribes to event/<tenant>; in my case this is event/*. Events published on the topics event/DEFAULT_TENANT and event/MY_TENANT are consumed. However, the consumer for telemetry/* seems not to be registered.
I've finally found out what was going on.
It seems the message is blocked in the Qpid Dispatch Router because of the following error: "Parse tree match not found".
This can be resolved by changing the Qpid configuration. In this configuration you should be able to find the following records:
["linkRoute", {
"prefix": "event/",
"direction": "in",
"connection": "broker"
}],
["linkRoute", {
"prefix": "event/",
"direction": "out",
"connection": "broker"
}],
["address", {
"prefix": "telemetry/",
"distribution": "balanced"
}],
This creates link routes (in and out) for the event topic but not for the telemetry topic. Adding the corresponding records for the telemetry topic resolves the problem:
["linkRoute", {
"prefix": "event/",
"direction": "in",
"connection": "broker"
}],
["linkRoute", {
"prefix": "event/",
"direction": "out",
"connection": "broker"
}],
["linkRoute", {
"prefix": "telemetry/",
"direction": "in",
"connection": "broker"
}],
["linkRoute", {
"prefix": "telemetry/",
"direction": "out",
"connection": "broker"
}],
["address", {
"prefix": "telemetry/",
"distribution": "balanced"
}],
Hono does not (as of now) support consuming messages of all tenants. The consumer is always scoped to a single tenant only. This is also reflected in the (northbound) Telemetry and Event API specifications.
The usage of wildcard characters in order to receive data for multiple/all tenants is not supported. The change you have made to the Dispatch Router configuration may have led you to believe that it does work. However, defining the telemetry address to use link routing instead of the default message routing has some consequences you should be aware of:
All telemetry messages will be routed to the message broker (Artemis) instead of being routed directly to consumers attached to the Dispatch Router. This means that all messages will be written to a queue/topic in Artemis. Depending on the Artemis configuration, this might also mean that (telemetry) messages get persisted, which will have quite a negative impact on throughput.
Your clients/consumers will now explicitly depend on the (Artemis) broker's support for wildcards in AMQP 1.0 link source addresses in order to receive messages from multiple addresses. While this might be what you want to achieve in the first place, beware that it ties your application to the specific implementation of the AMQP Messaging Network (in this case Artemis), which is not part of Hono.

Error when creating AKS cluster using a reference for the subnet ID

I'm getting an error when I try to deploy an AKS cluster using an ARM template, if the vnetSubnetID in the agentPoolProfiles property is a reference. I've used this exact template before without problems (on October 4th), but now I'm seeing the error with multiple different clusters, both when deploying through a VSTS pipeline and manually using PowerShell.
The property is set up like this:
"agentPoolProfiles": [
{
"name": "agentpool",
"count": "[parameters('agentCount')]",
"vmSize": "[parameters('agentVMSize')]",
"osType": "Linux",
"dnsPrefix": "[variables('agentsEndpointDNSNamePrefix')]",
"osDiskSizeGB": "[parameters('agentOsDiskSizeGB')]",
"vnetSubnetID": "[reference(concat('Microsoft.Network/virtualNetworks/', variables('vnetName'))).subnets[0].id]"
}
]
The variable 'vnetName' is based on an input parameter I use for the cluster name, and the vnet itself definitely exists; it is deployed as part of the same template.
If I try to deploy a new cluster I get the following error:
Message: {
"code": "InvalidParameter",
"message": "The value of parameter agentPoolProfile.vnetSubnetID is invalid.",
"target": "agentPoolProfile.vnetSubnetID"
}
If I try to re-deploy a cluster, with no changes to the template or input parameters since it last worked, I get the following error:
Message: {
"code": "PropertyChangeNotAllowed",
"message": "Changing property 'agentPoolProfile.vnetSubnetID' is not allowed.",
"target": "agentPoolProfile.vnetSubnetID"
}
Has something changed that means I can no longer get the vnet ID at runtime? Does it need to be passed in as a parameter now? If something has changed, is there anywhere I can find out the details?
Edit: Just to clarify, for re-deploying a cluster, I have checked and there are no new subnets, and I'm seeing the same behavior on 3 different clusters with different VNets.
Switching from reference() to resourceId() did fix the problem, so that has been marked as the answer, but I'm still no clearer on why reference() stopped working; I will update here as well if I figure it out.
I think what happened is that subnets[0].id returns the wrong (a different) subnet ID, and that is what the error points out: you cannot change the subnet ID after deploying the cluster.
Probably somebody created a new subnet in the vnet. But I'd say that overall the approach is flawed: you should build the ID with the resourceId() function, or just pass it in as a parameter.
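For illustration, a sketch of the agent pool property built with resourceId() (the subnet name 'default' is an assumption; substitute the actual name of your subnet):

"vnetSubnetID": "[resourceId('Microsoft.Network/virtualNetworks/subnets', variables('vnetName'), 'default')]"

Unlike reference(), resourceId() is computed purely from the names, so it does not depend on the runtime state or the ordering of the subnets array.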

Deploying a containerised Node.js application through Mesos-Marathon

I am using Marathon to deploy my Docker containerised Node.js application. My Marathon app spec is as follows:
{
"id": "<some-name>",
"cmd": null,
"cpus": 1,
"mem": 2800,
"disk": 30720,
"instances": 1,
"container": {
"docker": {
"image": "<some-docker-registry-IP>:5000/<repo>",
"network": "BRIDGE",
"privileged": true,
"forcePullImage": true,
"parameters": [
{
"key": "net",
"value": "host"
}
],
"portMappings": [
{
"containerPort": <some-port>,
"hostPort": <some-port>,
"protocol": "tcp",
"name": null
}
]
},
"type": "DOCKER"
}
}
The problem, however, is that once the application runs out of memory, the whole server it is deployed on ends up being restarted. I need my services to listen on the private IP of the host machine, which is why I am using --net=host.
Is it possible to just kill the task, freeing up the memory, so that Marathon can re-spawn it without restarting/shutting down the server? Or is there any other way to make the Docker container routable to the outside world without using --net=host?
Basically, I think there is a problem with your Node application if it shows memory-leaking behavior. That's the first point I'd address.
The second is that you should use something like pm2 in your application's Docker image, which will take care of restarting your application (inside the container itself) when it encounters a problem.
Furthermore, you could implement a Marathon health endpoint, so that Marathon can recognize that the application actually has problems.
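For example, a health check section in the app spec might look like this (a sketch; the /health path and the exact thresholds are assumptions to adapt to your application):

"healthChecks": [
  {
    "protocol": "HTTP",
    "path": "/health",
    "portIndex": 0,
    "gracePeriodSeconds": 300,
    "intervalSeconds": 60,
    "timeoutSeconds": 20,
    "maxConsecutiveFailures": 3
  }
]

When the endpoint fails maxConsecutiveFailures times in a row, Marathon kills the task and re-spawns it, which is essentially the "kill and restart the task, not the server" behavior you are asking about.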
To achieve some redundancy, I'd strongly advise that you run at least two instances of the application and use Mesos-DNS plus a load balancer like marathon-lb on the public slave node(s), which will take care of the routing. This also allows you to use bridged networking if you want to.
