Azure: Unable to connect to cluster (aks-engine) using kubectl

SOLUTION
I had appended feature gates to kube-apiserver.yaml on the master node. This broke the API server, so kubectl couldn't connect to the cluster. After removing them, everything worked fine.
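As a sketch of that cleanup (the manifest path /etc/kubernetes/manifests/kube-apiserver.yaml and the flag format below are assumptions for illustration, not taken from the original post), the offending flag can be stripped from the static pod manifest so the kubelet restarts the API server without it:

```python
# Hypothetical cleanup: drop any --feature-gates flag from a kube-apiserver
# static pod manifest. The MANIFEST content here is a made-up stand-in.
MANIFEST = """\
    command:
    - /usr/local/bin/kube-apiserver
    - --advertise-address=10.240.255.5
    - --feature-gates=SomeGate=true
    - --secure-port=443
"""

def strip_feature_gates(text: str) -> str:
    """Return the manifest with every --feature-gates line removed."""
    return "".join(
        line for line in text.splitlines(keepends=True)
        if "--feature-gates" not in line
    )

print(strip_feature_gates(MANIFEST))
```

On a real node this would read and rewrite the manifest file in place; the kubelet picks up the change and restarts the static pod automatically.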
PROBLEM
I deployed a Kubernetes cluster using aks-engine, but I get the error Unable to connect to the server: dial tcp 13.66.162.75:443: i/o timeout when I try to use kubectl. I'm able to access the master node through the serial console, but not over SSH (the same timeout occurs there).
$ KUBECONFIG=_output/kubeconfig/kubeconfig.westus2.json kubectl get node
Unable to connect to the server: dial tcp 13.66.162.75:443: i/o timeout
$ KUBECONFIG=_output/kubeconfig/kubeconfig.westus2.json kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Aks-Engine version - v0.28.1-linux-amd64
Kubernetes version - 1.10.12
Here is the kubeconfig.westus2.json file -
{
  "apiVersion": "v1",
  "clusters": [
    {
      "cluster": {
        "certificate-authority-data": "*****",
        "server": "https://masquerade-az.westus2.cloudapp.azure.com"
      },
      "name": "masquerade-az"
    }
  ],
  "contexts": [
    {
      "context": {
        "cluster": "masquerade-az",
        "user": "masquerade-az-admin"
      },
      "name": "masquerade-az"
    }
  ],
  "current-context": "masquerade-az",
  "kind": "Config",
  "users": [
    {
      "name": "masquerade-az-admin",
      "user": {"client-certificate-data": "****", "client-key-data": "*****"}
    }
  ]
}
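To confirm which endpoint kubectl is actually dialing when it times out, the server field can be read straight out of the kubeconfig; a minimal sketch (the config below is trimmed to just the fields used here):

```python
import json

# Trimmed stand-in for the kubeconfig shown above.
KUBECONFIG = json.dumps({
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [
        {
            "cluster": {"server": "https://masquerade-az.westus2.cloudapp.azure.com"},
            "name": "masquerade-az",
        }
    ],
})

def api_server(config_json: str) -> str:
    """Return the API server URL of the first cluster in a kubeconfig."""
    return json.loads(config_json)["clusters"][0]["cluster"]["server"]

print(api_server(KUBECONFIG))  # → https://masquerade-az.westus2.cloudapp.azure.com
```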
Screenshots of the inbound and outbound port rules were attached to the original post (not reproduced here).


Related

Connection refused when calling Dapr sidecar in Azure Container Apps

I have an ASP.NET 6 application using Dapr. It works in the local development environment, but when I deploy it to Azure Container Apps it is unable to reach the Dapr sidecar.
The application is defined in Bicep, with the following settings relevant to Dapr:
"dapr": {
"enabled": true,
"appId": "<redaced>",
"appProtocol": "http",
"appPort": 5032
}
The following environment variables are set on the container:
"env": [
  {
    "name": "DAPR_HTTP_PORT",
    "value": "3500"
  },
  {
    "name": "DAPR_GRPC_PORT",
    "value": "3500"
  }
]
I use the Dapr SDK, and make the following call: _daprClient.GetStateAsync<ICollection<string>>(STORE_NAME, KEY), which results in an exception with this message: ---> Grpc.Core.RpcException: Status(StatusCode="Internal", Detail="Error starting gRPC call. HttpRequestException: Connection refused (127.0.0.1:3500) SocketException: Connection refused", DebugException="System.Net.Http.HttpRequestException: Connection refused (127.0.0.1:3500).
The documentation for Azure Container Apps states that 3500 is the default port for the Dapr sidecar. Any ideas why this does not work?
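For context on those env vars: the Dapr SDK resolves the sidecar address from DAPR_HTTP_PORT and DAPR_GRPC_PORT, falling back to Dapr's documented defaults of 3500 (HTTP) and 50001 (gRPC). A minimal sketch of that resolution (the helper below is illustrative, not the SDK's actual code):

```python
# Illustrative only: mimic how a Dapr SDK client derives its sidecar
# endpoints from the environment. 3500/50001 are Dapr's documented defaults.
def sidecar_endpoints(env: dict) -> tuple:
    http_port = env.get("DAPR_HTTP_PORT", "3500")
    grpc_port = env.get("DAPR_GRPC_PORT", "50001")
    return (
        f"http://127.0.0.1:{http_port}",  # HTTP API
        f"http://127.0.0.1:{grpc_port}",  # gRPC API (used by GetStateAsync)
    )

# With the container settings above, both APIs are pointed at 3500:
print(sidecar_endpoints({"DAPR_HTTP_PORT": "3500", "DAPR_GRPC_PORT": "3500"}))
```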

How do I access another Docker container (running a Node Express server) from an AWS SAM Local Docker container?

We are developing locally using SAM Local to invoke a Lambda in an API Gateway. SAM Local does this in a Docker container (set up to be as close to the Lambda Node runtime as possible). We want this Lambda to access some data in an API mocking service, in the shape of some Node Express servers running in another container (this could also just be run locally if needed). Both containers are in a user-created Docker bridge network, created as follows:
docker network create sam-demo
The API mocking service is run and added to the bridge network:
docker run --network sam-demo --name mock -d -P mock:latest
The Lambda is invoked in debug mode and added to the bridge network:
sam local start-api -t template.json -d 9229 --docker-network sam-demo
Inspecting the bridge network reveals both the SAM local lambda (wizardly_knuth) and the mocks are there:
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
  "Driver": "default",
  "Options": {},
  "Config": [
    {
      "Subnet": "172.20.0.0/16",
      "Gateway": "172.20.0.1"
    }
  ]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
  "Network": ""
},
"ConfigOnly": false,
"Containers": {
  "5ebfa4295a56e4df840676a2e214891543fd4e8cb271ed70ddd67946ab451119": {
    "Name": "wizardly_knuth",
    "EndpointID": "xxx",
    "MacAddress": "02:42:ac:14:00:03",
    "IPv4Address": "172.20.0.3/16",
    "IPv6Address": ""
  },
  "d735c9aa840e4ce7180444cf168cd6b68451c9ca29ba87b7cb23edff11abea7b": {
    "Name": "mock",
    "EndpointID": "xxx",
    "MacAddress": "02:42:ac:14:00:02",
    "IPv4Address": "172.20.0.2/16",
    "IPv6Address": ""
  }
},
"Options": {},
"Labels": {}
}
Now, what should the URL be for the Lambda to hit the mock? According to the Docker docs it should be the IPv4Address of the mock container, i.e. http://172.20.0.2, but I'm not sure which port to use or how to find it.
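The container's bridge-network address can be read straight from the inspect output; a minimal sketch (the JSON below mirrors the Containers section shown above, with shortened IDs):

```python
import json

# Shortened stand-in for `docker network inspect sam-demo` output.
INSPECT_OUTPUT = json.dumps([{
    "Containers": {
        "5ebfa4": {"Name": "wizardly_knuth", "IPv4Address": "172.20.0.3/16"},
        "d735c9": {"Name": "mock", "IPv4Address": "172.20.0.2/16"},
    }
}])

def container_ip(inspect_json: str, name: str) -> str:
    """Look up a container's IPv4 address (without prefix length) by name."""
    network = json.loads(inspect_json)[0]
    for container in network["Containers"].values():
        if container["Name"] == name:
            return container["IPv4Address"].split("/")[0]
    raise KeyError(name)

print(container_ip(INSPECT_OUTPUT, "mock"))  # → 172.20.0.2
```

Note that on a user-defined bridge network Docker's embedded DNS also lets containers reach each other by name, so a URL like http://mock:&lt;container-port&gt; avoids hard-coding the IP; the port is whatever the Express server listens on inside the container, not a host-published one.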
I can exec into the mock and ping the SAM Local container successfully, BUT I can't do the same from the SAM Local container, as its shell doesn't have ping, curl, nc or anything else installed.
I can't hit the mock container directly from my machine either, as it is a Mac and I believe there are issues with doing so.
Any advice or next steps are greatly appreciated.
Much thanks,
Sam
UPDATE
In the end I gave up on this approach as I could not figure out what the URL for the Lambda should be to hit the mock within the Docker Bridge network.
The alternative approach was to just hit the mock Docker container directly from the Lambda using this URL (the mock container exposes port 3002):
http://docker.for.mac.localhost:3002/
Hope this might help somebody out.... please let me know if anyone solves the bridge network issue I originally posted about.
Thanks,
Sam

What is the proper configuration to persist edgeAgent storage?

I am working on an Azure IoT Edge project. Currently I am going through the production readiness checklist. I followed the documentation to use storage on the host filesystem for the edgeAgent and edgeHub modules.
When I run sudo iotedge check, edgeHub is OK but edgeAgent raises a warning:
‼ production readiness: Edge Agent's storage directory is persisted on the host filesystem - Warning
The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see https://aka.ms/iotedge-storage-host for best practices.
√ production readiness: Edge Hub's storage directory is persisted on the host filesystem - OK
Here is a snippet from the deployment template:
"systemModules": {
  "edgeAgent": {
    "type": "docker",
    "settings": {
      "image": "mcr.microsoft.com/azureiotedge-agent:1.0",
      "createOptions": {
        "HostConfig": {
          "Binds": [
            "/home/pi/iotedge/edgeAgent/storage/:/iotedge/storage/"
          ]
        }
      }
    },
    "env": {
      "storageFolder": {
        "value": "/iotedge/storage/"
      }
    }
  },
  "edgeHub": {
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
      "image": "mcr.microsoft.com/azureiotedge-hub:1.0",
      "createOptions": {
        "HostConfig": {
          "Binds": [
            "/home/pi/iotedge/edgeHub/storage:/iotedge/storage/"
          ],
          "PortBindings": {
            "5671/tcp": [
              {
                "HostPort": "5671"
              }
            ],
            "8883/tcp": [
              {
                "HostPort": "8883"
              }
            ],
            "443/tcp": [
              {
                "HostPort": "443"
              }
            ]
          }
        }
      }
    },
    "env": {
      "storageFolder": {
        "value": "/iotedge/storage/"
      }
    }
  }
},
As of release 1.0.9, there's an issue where edgeAgent's configuration doesn't update unless its image tag is updated. Two options from your current state:
Use a specific tag in the image settings (always recommended), e.g. mcr.microsoft.com/azureiotedge-agent:1.0.9.
Delete the edgeAgent container on the device: docker rm -f edgeAgent. It will be restarted in under 30 seconds, and the new storageFolder env var will be picked up.
Run iotedge check again after the container is updated, and this warning should go away.
I have followed the same documentation and was able to avoid the production readiness checklist warnings on my Raspberry Pi 3.
1) I have configured the "Binds" as per the documentation Link module storage to device storage
"Binds":["/etc/iotedge/storage/:/iotedge/storage/"]
2) I granted the user access to the HostStoragePath from an SSH terminal.
sudo chown 1000 /etc/iotedge/storage/
sudo chmod 700 /etc/iotedge/storage/
3) Restarted the Raspberry Pi 3 to make sure the access change takes effect.
Make sure the host storage folders are available on your edge device.
Make sure to give the user full access to those folders.
Try the deployment with your updated manifest; it should work.
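One quick sanity check for this kind of warning is that the container side of the Binds entry matches the storageFolder env var exactly; a minimal sketch using the paths from the manifest snippet above:

```python
# Compare the container path of the bind mount against storageFolder.
# Values copied from the edgeAgent section of the manifest above.
bind = "/home/pi/iotedge/edgeAgent/storage/:/iotedge/storage/"
storage_folder = "/iotedge/storage/"

host_path, container_path = bind.split(":", 1)  # split on the first colon only
matches = container_path == storage_folder
print(f"host path: {host_path}")
print(f"container path matches storageFolder: {matches}")
```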

Get host IP from Vagrant and use it with Ansible

I am playing around with Vagrant to set up some droplets and deploy my Node.js server with Ansible (I am using DigitalOcean). In some parts of my JS code I need to set the current IP. The problem is that I can't set the IP manually, since I get a random IP from DigitalOcean via Vagrant. How can I "get" this IP and make use of it in my Ansible script? Sure, I could just do a wget http://ipinfo.io/ip -qO - on the host itself or check it with ip, but I guess it should also be possible to get this info from Vagrant?
How can I "get" this IP and make use of it in my Ansible script?
Use ipify_facts:
- name: Get my public IP
  ipify_facts:
- debug: var=ipify_public_ip
From the documentation (specifically: http://docs.ansible.com/ansible/playbooks_variables.html#information-discovered-from-systems-facts), Ansible seems to have a pre-defined variable containing your networking information:
"ansible_eth0": {
  "active": true,
  "device": "eth0",
  "ipv4": {
    "address": "REDACTED",
    "netmask": "255.255.255.0",
    "network": "REDACTED"
  },
  "ipv6": [
    {
      "address": "REDACTED",
      "prefix": "64",
      "scope": "link"
    }
  ],
  "macaddress": "REDACTED",
  "module": "e1000",
  "mtu": 1500,
  "type": "ether"
},
I can't remember which is the public interface on DigitalOcean, but you should be able to use {{ ansible_eth0.ipv4.address }} in your playbook.
Side note, you can use this command to list all "discovered" variables:
ansible hostname -m setup
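Accessing a nested fact like that is just dictionary traversal; a minimal sketch with a stubbed facts dictionary (the address is a placeholder, the structure mirrors the ansible_eth0 output above):

```python
# Stubbed Ansible facts; structure mirrors the gathered ansible_eth0 fact.
facts = {
    "ansible_eth0": {
        "device": "eth0",
        "ipv4": {
            "address": "203.0.113.10",  # placeholder from the TEST-NET-3 range
            "netmask": "255.255.255.0",
        },
    }
}

# Equivalent of {{ ansible_eth0.ipv4.address }} in a playbook:
address = facts["ansible_eth0"]["ipv4"]["address"]
print(address)  # → 203.0.113.10
```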

How to run Logstash-forwarder from client machine

I am new to Logstash. I installed Elasticsearch, Kibana, Logstash and logstash-forwarder on Ubuntu following this tutorial, and everything works fine on the local machine.
Now I want to include a log file from another system, so I installed logstash-forwarder on the client machine, but it fails to run and I cannot figure out the mistake. I didn't install Logstash on the client machine, since Logstash is running on the server. If I have misunderstood anything, please let me know. What should the server and client access configuration be?
The logstash-forwarder config on the client:
{
  "network": {
    "servers": [ "server_ip_addr:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/syslog",
        "/var/log/auth.log"
      ],
      "fields": { "type": "syslog" }
    }
  ]
}
Thanks
Installing logstash-forwarder is enough.
Here are the steps to troubleshoot your problem.
Check if Logstash is listening on port 5000:
Log in to the logstash server and run:
telnet localhost 5000
If you can't telnet, make sure the logstash service is running properly.
Check if there is a firewall issue between the clients and the logstash server:
Log in to the client (where you installed the forwarder) and run:
telnet server_ip_addr 5000
If you can't connect, you need to open firewall port 5000 between the client and the logstash server.
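Where telnet isn't installed, the same reachability check can be scripted; a minimal sketch (the host and port are the example values from the steps above, with server_ip_addr as a placeholder):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# Same check as `telnet server_ip_addr 5000`, run from the client machine:
print(port_open("server_ip_addr", 5000))
```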
In the config file, the files part is fine, but could you update the network part as follows and let me know the result:
"network": {
  "servers": [ "server_ip_addr:5000" ],
  "timeout": 15,
  "ssl certificate": "/etc/pki/tls/certs/logstash-forwarder.crt",
  "ssl key": "/etc/pki/tls/certs/logstash-forwarder.key",
  "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
},
