Adding a DNS record with gcloud works fine:
gcloud dns record-sets transaction start -z my-zone
gcloud dns record-sets transaction add -z my-zone --name "some_domain.com" --ttl 0 --type TXT "test"
gcloud dns record-sets transaction execute -z my-zone
But when I try to remove that entry:
gcloud dns record-sets transaction start -z my-zone
gcloud dns record-sets transaction remove -z my-zone --name "some_domain.com" --ttl 300 --type TXT "test"
gcloud dns record-sets transaction execute -z my-zone
I always get this error:
ERROR: (gcloud.dns.record-sets.transaction.remove) Invalid value for
'parameters.name': 'some_domain.com' (code: 400)
The DNS zone file standard requires complete domain names to end with a trailing '.' character. Since this is a common mistake, other gcloud dns ... commands automatically append a trailing '.' to domain names if the user forgets to add one. However, this particular command does not seem to be doing that. This will be fixed soon.
Meanwhile, to work around it, add a trailing '.' to the domain name. So:
gcloud dns record-sets transaction remove -z my-zone --name "some_domain.com." --ttl 300 --type TXT "test"
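Putting the workaround together, the full removal transaction from the question becomes:
gcloud dns record-sets transaction start -z my-zone
gcloud dns record-sets transaction remove -z my-zone --name "some_domain.com." --ttl 300 --type TXT "test"
gcloud dns record-sets transaction execute -z my-zone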
Alternatively, you can use import/export as follows:
gcloud dns record-sets export -z my-zone RECORDS-FILE
Edit RECORDS-FILE to remove the records you do not need. Then:
gcloud dns record-sets import -z my-zone --delete-all-existing RECORDS-FILE
If you want to clear all the records you have created, leaving the NS and SOA records intact, you can use /dev/null as the import file:
gcloud dns record-sets import -z my-zone --delete-all-existing /dev/null
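To double-check that only the NS and SOA records are left afterwards, you can list the zone's record sets (a verification step added here, not part of the original answer):
gcloud dns record-sets list -z my-zone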
In waf-regional you can insert an IP into an existing set, but how can I do the same thing in WAFv2?
When I tried, it replaced the whole IP set; I just want to add one IP to an existing IP set.
After some research, I was able to do this with the existing API. Assign values to all the variables at the start of the script.
# Get IP set
aws wafv2 get-ip-set --name=$NAME --scope REGIONAL --id=$ID --region $REGION > /root/IP_SET_OUTPUT
# Get token from the JSON
LOCK_TOKEN=$(jq -r '.LockToken' /root/IP_SET_OUTPUT)
# Get IP list from the JSON
arr=( $(jq -r '.IPSet.Addresses[]' /root/IP_SET_OUTPUT) )
# Add our ip to the list
arr+=( "${IP}/${BLOCK}" )
echo "${arr[#]}"
# Update IP set
aws wafv2 update-ip-set --name=$NAME --scope=REGIONAL --id=$ID --addresses "${arr[#]}" --lock-token=$LOCK_TOKEN --region=$REGION
You can't. The API was changed such that you cannot make delta changes anymore.
You would need to do get-ip-set, make your changes to the returned JSON model, and then call update-ip-set with the full address list.
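As a rough sketch of that read-modify-write flow (variable names are placeholders matching the script above), removing a single address could look like this:
# Fetch the current IP set and its lock token
aws wafv2 get-ip-set --name "$NAME" --scope REGIONAL --id "$ID" --region "$REGION" > ipset.json
LOCK_TOKEN=$(jq -r '.LockToken' ipset.json)
# Keep every address except the one we want to drop
NEW_ADDRESSES=$(jq -r --arg ip "${IP}/${BLOCK}" '.IPSet.Addresses[] | select(. != $ip)' ipset.json)
# Write the full, updated list back; $NEW_ADDRESSES is left unquoted on purpose
# so each remaining address becomes a separate value for --addresses
aws wafv2 update-ip-set --name "$NAME" --scope REGIONAL --id "$ID" --addresses $NEW_ADDRESSES --lock-token "$LOCK_TOKEN" --region "$REGION"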
Using the Azure CLI, I need to get the instance IDs and private IPs of my VMSS instances in one command.
I've already tried:
az vmss nic list-vm-nics: but I only get the private IP
az vmss list-instances: but I only get the instance ID
Do you know a CLI command to get both of them?
My need is to find the instances that are unhealthy in my Application Gateway (the backend pool is my VMSS) and delete them.
I successfully got the IPs of the unhealthy instances (with the command az network application-gateway show-backend-health), but I need to map these IPs to instance IDs in order to use this command: az vmss delete-instances
And with all the az vmss commands, I can't find a way to map a private IP to an instance ID...
The goal is to run a job that automatically deletes unhealthy instances.
Thanks for your help!
Valentin
You could filter the virtual machine ID by a known private IP address like this:
az vmss nic list -g resourcegroupname --vmss-name vmssname --query "[?ipConfigurations[0].privateIpAddress == '10.0.0.7'].virtualMachine.id" -o tsv
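If you then need just the instance ID (the last segment of that resource ID) for az vmss delete-instances, you could trim the path, for example:
az vmss nic list -g resourcegroupname --vmss-name vmssname --query "[?ipConfigurations[0].privateIpAddress == '10.0.0.7'].virtualMachine.id" -o tsv | awk -F'/' '{print $NF}'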
I found what I was looking for, and I used the query from Nancy, so I'm putting my script here in case it helps someone.
#!/bin/bash
set -e
set -o pipefail
#############
# VARIABLES #
#############
resource_group="your_rg"
appgw_name="your_appgw"
vmss_name="your_vmss"
subscription="your_sub"
##########
# SCRIPT #
##########
az account set --subscription ${subscription}
# Get Actual Backend IPs
backend_ips=$(az network application-gateway show-backend-health --resource-group ${resource_group} --name ${appgw_name} | grep 'address')
backend_ips=$(sed 's/"//g;s/ //g;s/address://g' <<< ${backend_ips})
# Get Backend Health
backend_health=$(az network application-gateway show-backend-health --resource-group ${resource_group} --name ${appgw_name} | grep 'health\"')
backend_health=$(sed 's/"//g;s/ //g;s/health://g' <<< ${backend_health})
# Get Backend Count
backend_count=$(echo ${backend_ips} | awk -F "," '{print NF-1}')
# Put backend ips and health into tab
IFS=',' read -ra backend_ips <<< "${backend_ips}"
IFS=',' read -ra backend_health <<< "${backend_health}"
# SEARCH FOR AN UNHEALTHY INSTANCE
for (( i=0; i < ${backend_count}; i++))
do
echo -e "\n########\n# [IP] ${backend_ips[$i]}"
echo -e "# [Health] ${backend_health[$i]}\n########\n"
if [ "${backend_health[$i]}" == "Unhealthy" ]
then
echo -e " -> L'instance ${backend_ips[$i]} est à l'état Unhealthy."
unhealthy_instance_id=$(az vmss nic list -g ${resource_group} --vmss-name ${vmss_name} --query "[?ipConfigurations[0].privateIpAddress == '${backend_ips[$i]}'].virtualMachine.id" -o tsv | cut -d "/" -f 11)
echo -e " -> Suppression de l'instance n°${unhealthy_instance_id}"
#az vmss delete-instances -g ${resource_group} -n ${vmss_name} --instance-ids ${unhealthy_instance_id}
fi
done
exit 0
I have deployed an Amazon EC2 cluster of 3 Ubuntu machines (2 of them make up the cluster and the last one is just a client that submits jobs and manages their storage). I connect to all of them via passwordless SSH.
What happens is that every time I restart these machines they get new public hostnames from Amazon, which I want to replace in my SSH configuration file located at ~/.ssh/config.
So far, I have figured out a way to get their names and hostnames using the Amazon CLI with the following command on my local machine (CentOS 7):
aws ec2 describe-instances --query "Reservations[*].Instances[*].[PublicDnsName,Tags]" --output=text | grep -vwE "None"
This prints something like
ec2-XX-XX-XXX-XXX.us-east-2.compute.amazonaws.com
Name datanode1
ec2-YY-YY-YYY-YYY.us-east-2.compute.amazonaws.com
Name namenode
ec2-ZZ-ZZ-ZZZ-ZZZ.us-east-2.compute.amazonaws.com
Name client
i.e. the hostname, a new line, the corresponding name, and so on. The IP fields above, like XX-XX-XXX-XXX, are basically 4 hyphen-separated numbers of 2 or 3 digits. The grep command simply removes the last, useless line. Now I want to find a way to write these hostnames into the SSH configuration file, or maybe regenerate it. It looks like this:
Host namenode
HostName ec2-YY-YY-YYY-YYY.us-east-2.compute.amazonaws.com
User ubuntu
IdentityFile ~/.ssh/mykey.pem
Host datanode1
HostName ec2-XX-XX-XXX-XXX.us-east-2.compute.amazonaws.com
User ubuntu
IdentityFile ~/.ssh/mykey.pem
Host client
HostName ec2-ZZ-ZZ-ZZZ-ZZZ.us-east-2.compute.amazonaws.com
User ubuntu
IdentityFile ~/.ssh/mykey.pem
Please note that I don't know how the Amazon CLI command sorts the output, but of course I can change the order of the machines in my SSH file, or maybe it is a good idea to delete it and recreate it.
Below is what I finally figured out, and it works. It is a Bash script you can save as a .sh file, e.g. script.sh, and execute. If it won't run, simply do chmod +x script.sh. I have added comments to clarify what it does.
#Ask the Amazon CLI for your hostnames, remove the last line, strip the "Name\t" prefix, join every 2 consecutive lines and save to a txt file
aws ec2 describe-instances --query "Reservations[*].Instances[*].[PublicDnsName,Tags]" --output=text | grep -vwE "None" | sed 's/Name\t//g' | sed 'N;s/\n/ /' > 'ec2instances.txt';
#Change the following variables based on your cluster
publicKey="mykey.pem";
username="ubuntu";
#Remove any preexisting SSH configuration file
rm -f config
touch config
while read line
do
#Read the line, keep the 1st word and save it as the public DNS
publicDns=$(echo "$line" | cut -d " " -f1);
#Read the line, keep the 2nd word and save it as the hostname you will be using locally to connect to your Amazon EC2
instanceHostname=$(echo "$line" | cut -d " " -f2);
#OK, we are now ready to build the SSH config entry
sshEntry="Host $instanceHostname\n";
sshEntry="$sshEntry HostName $publicDns\n";
sshEntry="$sshEntry User $username\n";
sshEntry="$sshEntry IdentityFile ~/.ssh/$publicKey\n";
#Append to the config file; '-e' enables interpretation of backslash escapes
echo -e "$sshEntry" >> config
#Below is the txt file you will be traversing in the loop
done < ec2instances.txt
#Done
rm -f ~/.ssh/config
mv config ~/.ssh/config
rm ec2instances.txt
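A possible simplification (my suggestion, not part of the original script): the describe-instances query can return the Name tag and the public DNS name on one tab-separated line per instance, which avoids the sed 'N;s/\n/ /' step:
aws ec2 describe-instances --query "Reservations[].Instances[].[Tags[?Key=='Name']|[0].Value,PublicDnsName]" --output text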
I spun up a docker-openvpn container in my (local) Kubernetes cluster to access my Services securely and debug dependent services locally.
I can connect to the cluster via the OpenVPN server. However, I can't resolve my Services via DNS.
I managed to get to the point where, after setting routes on the VPN server:
I can ping a Pod by IP (subnet 10.2.0.0/16)
I can ping a Service by IP (subnet 10.3.0.0/16, like the DNS server which is at 10.3.0.10)
I can curl a Service by IP and get the data I need
But when I nslookup kubernetes or any Service, I get:
nslookup kubernetes
;; Got recursion not available from 10.3.0.10, trying next server
;; Got SERVFAIL reply from 10.3.0.10, trying next server
I am still missing something for the data to come back from the DNS server, but I can't figure out what.
How do I debug this SERVFAIL issue in Kubernetes DNS?
EDIT:
Things I have noticed and am looking to understand:
nslookup resolves Service names in any Pod except the openvpn Pod
while nslookup works in those other Pods, ping does not
similarly, traceroute in those other Pods reaches the flannel layer at 10.0.2.2 and then stops
from this I guess ICMP is blocked at the flannel layer, which doesn't help me figure out where DNS is blocked
EDIT2:
I finally figured out how to get nslookup to work: I had to push the DNS search domains to the client with
push "dhcp-option DOMAIN-SEARCH cluster.local"
push "dhcp-option DOMAIN-SEARCH svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH default.svc.cluster.local"
added with the -p option of the docker-openvpn image, so I end up with:
docker run -v /etc/openvpn:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig \
-u udp://192.168.10.152:1194 \
-n 10.3.0.10 \
-n 192.168.10.1 \
-n 8.8.8.8 \
-n 75.75.75.75 \
-n 75.75.75.76 \
-s 10.8.0.0/24 \
-d \
-p "route 10.2.0.0 255.255.0.0" \
-p "route 10.3.0.0 255.255.0.0" \
-p "dhcp-option DOMAIN cluster.local" \
-p "dhcp-option DOMAIN-SEARCH svc.cluster.local" \
-p "dhcp-option DOMAIN-SEARCH default.svc.cluster.local"
Now nslookup works, but curl still does not.
Finally, my config looks like this:
docker run -v /etc/openvpn:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig \
-u udp://192.168.10.152:1194 \
-n 10.3.0.10 \
-n 192.168.10.1 \
-n 8.8.8.8 \
-n 75.75.75.75 \
-n 75.75.75.76 \
-s 10.8.0.0/24 \
-N \
-p "route 10.2.0.0 255.255.0.0" \
-p "route 10.3.0.0 255.255.0.0" \
-p "dhcp-option DOMAIN-SEARCH cluster.local" \
-p "dhcp-option DOMAIN-SEARCH svc.cluster.local" \
-p "dhcp-option DOMAIN-SEARCH default.svc.cluster.local"
-u for the VPN server address and port
-n for all the DNS servers to use
-s to define the VPN subnet (it defaults to 10.2.0.0, which is already used by Kubernetes)
-d to disable NAT (used in the first config above)
-p to push options to the client
-N to enable NAT: it seems critical for this setup on Kubernetes
The last part, pushing the search domains to the client, was the key to getting nslookup and friends to work.
Note that curl didn't work at first, but it started working after a few seconds. So it does work; it just takes a bit of time before curl is able to resolve.
Try curl -4. Maybe it's resolving to the AAAA record even though an A record is present.
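For example (the Service URL here is only illustrative):
curl -4 -v http://my-service.default.svc.cluster.local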
I have not been able to create a new CNAME record for a specific managed zone.
I can see there are examples for A and TXT entries like:
$ gcloud dns record-sets transaction add -z MANAGED_ZONE \
--name my.domain. --ttl 1234 --type A "1.2.3.4"
$ gcloud dns record-sets transaction add -z MANAGED_ZONE \
--name my.domain. --ttl 2345 --type TXT "Hello world" "Bye \
world"
But I keep getting a "too few arguments" error.
Currently I'm issuing:
$ gcloud dns record-sets -z=MYZONE transaction add\
--name="NAME" --type=CNAME --ttl 3600 --rrdatas="DEST"
I guess the issue is related to the rrdatas field but I have been unable to find any documentation.
The command does not have an rrdatas flag. You can just put the value you want for rrdatas at the end of the command as a positional argument. Also note that the -z zone flag should be provided after the full command name (record-sets transaction add), not in the middle of it. So this:
$ gcloud dns record-sets -z=MYZONE transaction add --type=CNAME \
--name="www.example.com." --ttl 3600 --rrdatas="target.example.com."
should be changed to this:
$ gcloud dns record-sets transaction add -z=MYZONE --type=CNAME \
--name="www.example.com." --ttl 3600 "target.example.com."
According to the record types documented for the API, the rrdatas value should point to a valid record, and fully-qualified DNS names must end with a period (.).
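Putting it together, a complete CNAME transaction might look like this (zone and names are placeholders):
gcloud dns record-sets transaction start -z MYZONE
gcloud dns record-sets transaction add -z MYZONE --type CNAME --name "www.example.com." --ttl 3600 "target.example.com."
gcloud dns record-sets transaction execute -z MYZONE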