Docker on Ubuntu unable to connect to localhost but works connecting to its ip - linux

I am running Ubuntu 18.04
$ uname -r
5.3.0-46-generic
I have docker installed
$ docker --version
Docker version 19.03.8, build afacb8b7f0
I have a simple docker image that exposes port 80. The Dockerfile that generated it was
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
COPY publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "SampleWebApp.dll"]
When I run a container for this image I can see the following:
$ docker run myimage:latest -p 8080:80
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: /
And if I list the running containers:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6f5bea7b329d registry.gitlab.com/whatever/myimage:latest "dotnet SampleWebApp…" 4 seconds ago Up 2 seconds 80/tcp dreamy_leavitt
So I can see that it exposes port 80/tcp, but not the 8080→80 mapping I asked for.
I'm not sure why it isn't published on port 8080, which is where I wanted to map it.
Also, the http://[::]:80 address seems confusing. I've read something about it being IPv6, but I have no idea what consequences that has or why plain IPv4 wouldn't work.
My interface info:
$ ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:71ff:fe7f:305 prefixlen 64 scopeid 0x20<link>
ether 02:42:71:7f:03:05 txqueuelen 0 (Ethernet)
RX packets 131843 bytes 105630866 (105.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 201439 bytes 268197990 (268.1 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp3s0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 1c:1b:0d:a4:83:16 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 118628 bytes 17999594 (17.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 118628 bytes 17999594 (17.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethca5fd09: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::3c56:d6ff:fe0c:846 prefixlen 64 scopeid 0x20<link>
ether 3e:56:d6:0c:08:46 txqueuelen 0 (Ethernet)
RX packets 7 bytes 533 (533.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 52 bytes 7342 (7.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlp6s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.135 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::8a58:c682:3833:3bb1 prefixlen 64 scopeid 0x20<link>
ether e4:be:ed:4f:0f:21 txqueuelen 1000 (Ethernet)
RX packets 519710 bytes 524989683 (524.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 439859 bytes 165781721 (165.7 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
So the docker0 interface has the address 172.17.0.1.
However, I cannot access my container using any of the following URLs:
$ curl http://localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused
$ curl http://localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
$ curl http://0.0.0.0:80
curl: (7) Failed to connect to 0.0.0.0 port 80: Connection refused
$ curl http://0.0.0.0:8080
curl: (7) Failed to connect to 0.0.0.0 port 8080: Connection refused
$ curl http://172.17.0.1:8080
curl: (7) Failed to connect to 172.17.0.1 port 8080: Connection refused
$ curl http://172.17.0.1:80
curl: (7) Failed to connect to 172.17.0.1 port 80: Connection refused
$ curl http://127.0.0.1:8080
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
$ curl http://127.0.0.1:80
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
So: no access using localhost, 127.0.0.1, or the Docker bridge IP.
If I inspect the container:
sasw@Z3:~$ docker inspect 6f5bea7b329d
[
{
"Id": "6f5bea7b329d05bcb534953745f376da9c7efbe54de5532f8648b618152b722a",
"Created": "2020-04-20T13:06:37.883347676Z",
"Path": "dotnet",
"Args": [
"SampleWebApp.dll",
"-p",
"8080:80"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 30636,
"ExitCode": 0,
"Error": "",
"StartedAt": "2020-04-20T13:06:38.295411125Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:e00403d6c5eb3ccbe3c5c7b6ec8cf8289158e4c9fbe6ff5872ea932e69d60f38",
"ResolvConfPath": "/var/lib/docker/containers/6f5bea7b329d05bcb534953745f376da9c7efbe54de5532f8648b618152b722a/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/6f5bea7b329d05bcb534953745f376da9c7efbe54de5532f8648b618152b722a/hostname",
"HostsPath": "/var/lib/docker/containers/6f5bea7b329d05bcb534953745f376da9c7efbe54de5532f8648b618152b722a/hosts",
"LogPath": "/var/lib/docker/containers/6f5bea7b329d05bcb534953745f376da9c7efbe54de5532f8648b618152b722a/6f5bea7b329d05bcb534953745f376da9c7efbe54de5532f8648b618152b722a-json.log",
"Name": "/dreamy_leavitt",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/8f56c544522ccb6556358601706cb900c405c19b47e54c25d8b3dac979100e5b-init/diff:/var/lib/docker/overlay2/81bfee49e33d9761a6ca78dfd6f3f9a54a9333b4d4fc9986e8084f6b45232f04/diff:/var/lib/docker/overlay2/c2add2cb2d687126c6826c7dd9e1c85be1473a53d6b878554aa87615701344a0/diff:/var/lib/docker/overlay2/ebd0b92c5111423fb8d1219f757e41013a1473bdbe5cf3553cecbd4337f76766/diff:/var/lib/docker/overlay2/9197af6ebe4c70f0a84c7c267b1ba069aa710d917abe9fb3fee13320a17ab765/diff:/var/lib/docker/overlay2/1f463e8667b6eecc7c251ac05316b8d5d32840bff13d9f5cb7853c88e6f1f40e/diff:/var/lib/docker/overlay2/b7c9450f53334bef02f50cc854b33140b97f4ff3d2343b3fcac7b20f647c454e/diff",
"MergedDir": "/var/lib/docker/overlay2/8f56c544522ccb6556358601706cb900c405c19b47e54c25d8b3dac979100e5b/merged",
"UpperDir": "/var/lib/docker/overlay2/8f56c544522ccb6556358601706cb900c405c19b47e54c25d8b3dac979100e5b/diff",
"WorkDir": "/var/lib/docker/overlay2/8f56c544522ccb6556358601706cb900c405c19b47e54c25d8b3dac979100e5b/work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "6f5bea7b329d",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": true,
"AttachStderr": true,
"ExposedPorts": {
"80/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"ASPNETCORE_URLS=http://+:80",
"DOTNET_RUNNING_IN_CONTAINER=true"
],
"Cmd": [
"-p",
"8080:80"
],
"Image": "registry.gitlab.com/ddd-malaga/continuous-deployment-gitlab-docker-dotnet:latest",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"dotnet",
"SampleWebApp.dll"
],
"OnBuild": null,
"Labels": {}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "4e53bd2bc6cb83b7c0cba9fcdf07eb564a11ca6b955514670ba3f464aa0a96b7",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"80/tcp": null
},
"SandboxKey": "/var/run/docker/netns/4e53bd2bc6cb",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "83976112bb202b79880777563cd1b06ef27781fd288b210b19fb499e3bf51c90",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "7589efd57cea8d2b04823657fcfc54225991bc58c93ff0e463b6f12acb28b853",
"EndpointID": "83976112bb202b79880777563cd1b06ef27781fd288b210b19fb499e3bf51c90",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
}
]
I can see the IP address 172.17.0.2. Again, I don't know where this comes from.
But now I can try to access the container's IP at the port I told it to map:
$ curl http://172.17.0.2:8080
curl: (7) Failed to connect to 172.17.0.2 port 8080: Connection refused
Surprisingly, if I access the same container IP on the exposed port 80, it works:
sasw@Z3:/$ curl http://172.17.0.2:80
Hello World!
If I stop the container, delete it and its images, and try again with the following arbitrary port mapping:
$ docker run myimage:latest -p 1234:1234
Status: Downloaded newer image for registry.gitlab.com/myimage:latest
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: /
it seems those ports are completely ignored, and the app remains listening on the container IP at port 80:
$ curl http://172.17.0.2:80
Hello World!
It's clear I am missing some knowledge here, and the links I find are not very useful, or they point me to IPv6 topics like https://docs.docker.com/config/daemon/ipv6/, which mentions an /etc/docker/daemon.json file that I don't even have.
Could anybody point me into the right direction to understand what's happening and why? Thanks!

It seems the problem was that none of my arguments to docker run were taking effect because I placed them AFTER the image name. Insane!
So this:
docker run myimage:latest -p 8080:80 --name whatever
runs the container but passes -p 8080:80 --name whatever to the container's entrypoint as arguments, completely ignoring the port mapping and the assigned name (you can see them show up under Args in the docker inspect output above).
However this:
docker run -p 8080:80 --name whatever myimage:latest
will map container port 80 to localhost:8080, so that the web app is available at http://localhost:8080
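A quick way to confirm a mapping actually took effect (a sketch using the same image and container name as above; requires the container to be running):

```shell
# Options go before the image name; anything after it is passed
# to the container's ENTRYPOINT as arguments.
docker run -d -p 8080:80 --name whatever myimage:latest

# Show the host binding for container port 80; an empty result
# means no port was published.
docker port whatever 80

# docker ps should now show 0.0.0.0:8080->80/tcp in the PORTS column.
docker ps --filter name=whatever
```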

Related

Docker Compose: Use static and dynamic network interface in container

I have an app that needs to access the internet and a local resource on a local area network, and I want to run it with Docker Compose. My host system has 4 physical network ports, and I connect two Ethernet cables: one to the local area network and one to the internet.
The host system can ping addresses on both networks successfully, but from inside the container I get no ping to the outside. I tried ping, and probed a specific port with nmap, to no avail.
I have the following output when I analyze the network interfaces:
# network configuration host-system
enp03s31f6:
IPv4: XXX.XXX.XXX.XX
IPv6: XXXX:XXXX:XXXX:XXXX
enp2s0:
IPv4: 192.168.163.222 # <- host computer's address in local network
IPv6: XXXX:XXXX:XXXX:XXXX <- somehow an IPv6 address is assigned but only IPv4 is relevant
My docker-compose.yml is posted below:
# docker-compose.yml
version: "3.6"

networks:
  dhcp_net:
  app_local_net:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "192.168.163.222"
    ipam:
      driver: default
      config:
        - subnet: "192.168.163.0/24"

services:
  main:
    build:
      context: .
    image: "my_custom_app"
    ports:
      - "192.168.163.222:520:520"
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"
    networks:
      dhcp_net:
      app_local_net:
        ipv4_address: 192.168.163.222
    command: ./start_my_app
Of course there are more services in the compose file; I left them out for better focus.
When I go into the container while it is running, I can successfully install iputils-ping and ping internet addresses. When I try to ping an address in the local network (app_local_net), the host is unreachable.
When I go and inspect the container's network, following output is generated:
docker network inspect my_app_app_local_net
[
{
"Name": "my_app_app_local_net",
"Id": "b43184e60537541a764c3479ece9e861c49169b7629f810f532276b9949b522f",
"Created": "2021-11-17T17:37:50.935949741+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "192.168.163.0/24"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"e3ae8336c0ac18f7d164a15bd9e7f590ffa18b8b1688900a7ad639f92ba7bcf2": {
"Name": "my_app_main_1",
"EndpointID": "4d2fa1e37a823144fe1ffd3ea4f0720d8be83d4fc17629d5921cbd781e775838",
"MacAddress": "02:42:c0:a8:a3:de",
"IPv4Address": "192.168.163.222/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.host_binding_ipv4": "192.168.163.222"
},
"Labels": {
"com.docker.compose.network": "app_local_net",
"com.docker.compose.project": "my_app",
"com.docker.compose.version": "1.29.2"
}
}
]
ifconfig shows the following:
root@container$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.21.0.2 netmask 255.255.0.0 broadcast 172.21.255.255
ether 02:42:ac:15:00:02 txqueuelen 0 (Ethernet)
RX packets 1132 bytes 1641413 (1.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 870 bytes 60160 (60.1 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.163.222 netmask 255.255.255.0 broadcast 192.168.163.255
ether 02:42:c0:a8:a3:de txqueuelen 0 (Ethernet)
RX packets 59 bytes 7508 (7.5 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6 bytes 252 (252.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 188.10.163.4 netmask 255.255.255.0 broadcast 188.10.163.255
ether 02:42:bc:0a:a3:04 txqueuelen 0 (Ethernet)
RX packets 25 bytes 2975 (2.9 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 16 bytes 1716 (1.7 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 16 bytes 1716 (1.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
If I run ping -I eth1 192.168.163.102 inside the container, it does not work.
I see the following questions as related and have already incorporated their advice, but somehow I am stuck:
Docker compose yml static IP addressing
How can I make docker-compose bind the containers only on defined network instead of 0.0.0.0?
Docker Compose with static public IP over LAN but different with Host IP
Provide static IP to docker containers via docker-compose
Is there anything I am missing? Thanks for any help in advance :-)

meteor Verifying Deployment - Connection refused

I am trying to deploy a Meteor application, but I am receiving the following error message in the Verifying Deployment section -
------------------------------------STDERR------------------------------------
: (7) Failed to connect to 172.17.0.2 port 3000: Connection refused
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 172.17.0.2 port 3000: Connection refused
=> Logs:
=> Setting node version
NODE_VERSION=14.17.4
v14.17.4 is already installed.
Now using node v14.17.4 (npm v6.14.14)
default -> 14.17.4 (-> v14.17.4 *)
=> Starting meteor app on port 3000
=> Redeploying previous version of the app
When I run sudo netstat -tulpn | grep LISTEN on the server, it shows this:
tcp 0 0 10.0.3.1:53 0.0.0.0:* LISTEN 609/dnsmasq
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 406/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 745/sshd: /usr/sbin
tcp6 0 0 :::22 :::* LISTEN 745/sshd: /usr/sbin
When I run sudo docker ps I receive the following output -
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e51b1b4bf3a3 mup-appName:latest "/bin/sh -c 'exec $M…" About an hour ago Restarting (1) 49 seconds ago appName
68b723183f3d mongo:3.4.1 "/entrypoint.sh mong…" 9 days ago Restarting (100) 9 seconds ago mongodb
In my firewall I have also opened port 3000.
If I check whether the app is running, it seems like no Docker container is actually up!!
Also, in my mup.js file I am using http and not https:
module.exports = {
  servers: {
    one: {
      host: 'xx.xx.xxx.xxx',
      username: 'ubuntu',
      pem: '/home/runner/.ssh/id_rsa'
    }
  },
  meteor: {
    name: 'appName',
    path: '../../',
    docker: {
      image: 'zodern/meteor:latest',
    },
    servers: {
      one: {}
    },
    buildOptions: {
      serverOnly: true
    },
    env: {
      PORT: 3000,
      ROOT_URL: 'http://dev-api.appName.com/',
      NODE_ENV: 'production',
      MAIL_URL: 'smtp://xxxx:xxx/eLPCB3nw3jubkq:@email-smtp.eu-north-1.amazonaws.com:587',
      MONGO_URL: 'mongodb+srv://xxx:xx@xxx.iiitd.mongodb.net/Development?retryWrites=true&w=majority'
    },
    deployCheckWaitTime: 15
  },
  proxy: {
    domains: 'dev.xxx.com',
    ssl: {
      letsEncryptEmail: 'info@xxx.com'
    }
  }
}
Any idea what might cause this issue?
I don't know why, but per the MUP docs the correct image name is zodern/meteor:root.
If your app is slow to start, increase deployCheckWaitTime. In my complex apps I set it to 600, just to ensure the app is up.
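Putting both suggestions together, the relevant fragment of the question's mup.js would look something like this (a sketch; 600 is just a generous wait time, tune it to your app):

```javascript
meteor: {
  // ...
  docker: {
    image: 'zodern/meteor:root'  // per the MUP docs, not :latest
  },
  // give a slow-starting app up to 600 seconds before the
  // deployment check declares failure
  deployCheckWaitTime: 600
}
```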

Impossible to activate HugePage on AKS nodes

Hi dear Stackoverflow community,
I'm struggling to activate HugePages on an AKS cluster.
I noticed that I first have to configure a node pool with HugePages support.
The only official Azure HugePages doc is about transparent huge pages (https://learn.microsoft.com/en-us/azure/aks/custom-node-configuration), but I don't know if it's sufficient...
Then I know that I also have to configure the pod.
I wanted to rely on this (https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/), but as step 2) is not working...
Despite everything I've done, I could not make it work.
If I follow the Microsoft documentation, my node pool spawns like this:
"kubeletConfig": {
"allowedUnsafeSysctls": null,
"cpuCfsQuota": null,
"cpuCfsQuotaPeriod": null,
"cpuManagerPolicy": null,
"failSwapOn": false,
"imageGcHighThreshold": null,
"imageGcLowThreshold": null,
"topologyManagerPolicy": null
},
"linuxOsConfig": {
"swapFileSizeMb": null,
"sysctls": {
"fsAioMaxNr": null,
"fsFileMax": null,
"fsInotifyMaxUserWatches": null,
"fsNrOpen": null,
"kernelThreadsMax": null,
"netCoreNetdevMaxBacklog": null,
"netCoreOptmemMax": null,
"netCoreRmemMax": null,
"netCoreSomaxconn": null,
"netCoreWmemMax": null,
"netIpv4IpLocalPortRange": "32000 60000",
"netIpv4NeighDefaultGcThresh1": null,
"netIpv4NeighDefaultGcThresh2": null,
"netIpv4NeighDefaultGcThresh3": null,
"netIpv4TcpFinTimeout": null,
"netIpv4TcpKeepaliveProbes": null,
"netIpv4TcpKeepaliveTime": null,
"netIpv4TcpMaxSynBacklog": null,
"netIpv4TcpMaxTwBuckets": null,
"netIpv4TcpRmem": null,
"netIpv4TcpTwReuse": null,
"netIpv4TcpWmem": null,
"netIpv4TcpkeepaliveIntvl": null,
"netNetfilterNfConntrackBuckets": null,
"netNetfilterNfConntrackMax": null,
"vmMaxMapCount": null,
"vmSwappiness": null,
"vmVfsCachePressure": null
},
"transparentHugePageDefrag": "defer+madvise",
"transparentHugePageEnabled": "madvise"
But my node is still like this:
# kubectl describe nodes aks-deadpoolhp-31863567-vmss000000|grep hugepage
Capacity:
attachable-volumes-azure-disk: 16
cpu: 8
ephemeral-storage: 129901008Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32940620Ki
pods: 110
Allocatable:
attachable-volumes-azure-disk: 16
cpu: 7820m
ephemeral-storage: 119716768775
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 28440140Ki
pods: 110
My Kubernetes version is 1.16.15.
I also saw that I should enable a feature gate like --feature-gates=HugePages=true (https://dev.to/dannypsnl/hugepages-on-kubernetes-5e7p), but I don't know how to do that in AKS; anyway, as my node is not advertising any HugePages capacity, I'm not sure it's useful for now.
I even tried to recreate the AKS cluster with a --kubeconfig, but everything remains the same: I cannot use HugePages...
Please, I need your help again; I'm completely lost with this AKS service...
Install kubectl-node-shell on your laptop:
curl -LO https://github.com/kvaps/kubectl-node-shell/raw/master/kubectl-node_shell
chmod +x ./kubectl-node_shell
sudo mv ./kubectl-node_shell /usr/local/bin/kubectl-node_shell
Get the node your pod runs on:
kubectl get pod <YOUR_POD> -o custom-columns=NODE:.spec.nodeName -n <YOUR_NAMESPACE>
If the node is <none>, that means your pod is in Pending state. Pick a random node instead:
kubectl get pod -n <YOUR_NAMESPACE>
Get inside your node:
kubectl node-shell <NODE>
Configure HugePages:
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
Restart the kubelet (still on the node, yes):
systemctl restart kubelet
Exit from node-shell with Ctrl-d.
Check HugePages are on (i.e. the values must not be 0):
kubectl describe node <NODE>|grep -i -e "capacity" -e "allocatable" -e "huge"
Either check your pod is no longer in Pending state, or launch your helm install/kubectl apply now!
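For completeness, once the node actually advertises hugepages-2Mi capacity, a pod requests them roughly like this (a sketch following the Kubernetes scheduling-hugepages doc linked in the question; pod and container names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hugepage-demo        # illustrative name
spec:
  containers:
  - name: app
    image: nginx             # illustrative image
    volumeMounts:
    - mountPath: /hugepages
      name: hugepage
    resources:
      limits:
        hugepages-2Mi: 100Mi # must be paired with a memory limit
        memory: 100Mi
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```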

Connection refused when connecting to the exposed port on docker container

Dockerfile looks like this:
FROM ubuntu:latest
LABEL Spongebob Dockerpants "s.dockerpants@comcast.net"
RUN apt-get update -y
RUN apt-get install -y python3-pip python3-dev build-essential
#Add source files
COPY . /app
ENV HOME=/app
WORKDIR /app
# Install Python web server and dependencies
RUN pip3 install -r requirements.txt
ENV FLASK_APP=app.py
# Expose port
EXPOSE 8090
#ENTRYPOINT ["python3"]
CMD ["python3", "app.py"]
CMD tail -f /dev/null
I started the container like this:
docker run --name taskman -p 8090:8090 -d task-manager-app:latest
I see the container running, and my localhost listening on 8090:
CORP\n0118236 # a-33jxiw0rv8is5 in ~/docker_pete/flask-task-manager on master*
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c1ac5cb27698 task-manager-app:latest "/bin/sh -c 'tail -f…" About a minute ago Up About a minute 0.0.0.0:8090->8090/tcp taskman
CORP\n0118236 # a-33jxiw0rv8is5 in ~/docker_pete/flask-task-manager on master*
$ sudo netstat -nlp | grep 8090
tcp6 0 0 :::8090 :::* LISTEN 1154/docker-proxy
I tried to reach port 8090 on the container via localhost, per the docker run command I issued, but the connection fails:
CORP\n0118236 # a-33jxiw0rv8is5 in ~/docker_pete/flask-task-manager on master*
$ curl http://localhost:8090
curl: (56) Recv failure: Connection reset by peer
I then inspected the port-binding, and it looks ok:
CORP\n0118236 # a-33jxiw0rv8is5 in ~/docker_pete/flask-task-manager on master*
$ sudo docker port c1ac5cb27698 8090
0.0.0.0:8090
When I do a docker inspect, I see this:
$ docker inspect c1ac5cb27698 | grep -A 55 "NetworkSettings"
"NetworkSettings": {
"Bridge": "",
"SandboxID": "7c2249761e4f48eef373c6744161b0709f312863c94fdc17138913952be698a0",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"8090/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8090"
}
]
},
"SandboxKey": "/var/run/docker/netns/7c2249761e4f",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "ea7552d0ba9e8f0c865fa4a0f24781811c7332a1e7473c48e88fa4dbe6e5e05d",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "cfb5be57fdeed8a08b1650b5706a00542c5249903ce33052ff3f0d3dab619675",
"EndpointID": "ea7552d0ba9e8f0c865fa4a0f24781811c7332a1e7473c48e88fa4dbe6e5e05d",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
}
I am able to ping the container from my localhost:
CORP\n0118236 # a-33jxiw0rv8is5 in ~/docker_pete/flask-task-manager on master*
$ ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=255 time=0.045 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=255 time=0.042 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=255 time=0.047 ms
^C
--- 172.17.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2053ms
rtt min/avg/max/mdev = 0.042/0.044/0.047/0.008 ms
Is there anything in the configuration that would be causing these connection refused? Is something wrong with the binding?
Your Dockerfile contains two CMD lines, but Docker only honors the last one.
CMD ["python3", "app.py"]
CMD tail -f /dev/null
The actual command executed inside your container is the tail command, which doesn't bind to or listen on the port. You can ping the container because the container itself is kept alive by the tail command.
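A minimal fix is to drop the second CMD so the web server itself is what the container runs (a sketch of the last lines of the question's Dockerfile; note the Flask app must also listen on 0.0.0.0:8090 inside the container, e.g. app.run(host='0.0.0.0', port=8090), or the published port still won't answer):

```dockerfile
EXPOSE 8090
# Run the web server as PID 1. The old second line,
# `CMD tail -f /dev/null`, silently replaced this command.
CMD ["python3", "app.py"]
```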

Multicontainer Docker application failing on deploy

So I have a problem deploying my application to Elastic Beanstalk at Amazon. My application is a multi-container Docker application that includes a Node server and MongoDB. Somehow the application crashes every time, and I get this bizarre error from MongoDB.
The error is as follows:
2018-05-28T12:53:02.510+0000 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 3867 processes, 32864 files. Number of processes should be at least 16432 : 0.5 times number of files.
2018-05-28T12:53:02.540+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2018-05-28T12:53:02.541+0000 I NETWORK [initandlisten] waiting for connections on port 27017
2018-05-28T12:53:03.045+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2018-05-28T12:53:03.045+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2018-05-28T12:53:03.045+0000 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
2018-05-28T12:53:03.045+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2018-05-28T12:53:03.047+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2018-05-28T12:53:03.161+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2018-05-28T12:53:03.161+0000 I CONTROL [signalProcessingThread] now exiting
2018-05-28T12:53:03.161+0000 I CONTROL [signalProcessingThread] shutting down with code:0
This is my Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "mongo-app",
      "host": {
        "sourcePath": "/var/app/mongo-app"
      }
    },
    {
      "name": "some-api",
      "host": {
        "sourcePath": "/var/app/some-api"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "mongo-app",
      "image": "mongo:latest",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 27017,
          "containerPort": 27017
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "mongo-app",
          "containerPath": "/data/db"
        }
      ]
    },
    {
      "name": "server",
      "image": "node:8.11",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8001
        }
      ],
      "links": [
        "mongo-app"
      ],
      "mountPoints": [
        {
          "sourceVolume": "some-api",
          "containerPath": "/some-data"
        }
      ]
    }
  ]
}
And this is my Dockerfile:
FROM node:8.11
RUN mkdir -p /api
WORKDIR /api
COPY package.json /api
RUN cd /api && npm install
COPY . /api
EXPOSE 8001
CMD ["node", "api/app.js"]
Any ideas why the application is crashing and does not deploy? It seems to me that MongoDB is causing the problem, but I can't understand or find the root of it.
Thank you in advance!
I spent a while trying to figure this out as well.
The solution: add a mount point with "containerPath": "/data/configdb". Mongo expects to be able to write to both /data/db and /data/configdb.
Also, you might want to bump "memory": 128 for Mongo to something higher.
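Applied to the question's Dockerrun.aws.json, that means a second volume and mount point for the mongo-app container, roughly like this (a sketch showing only the changed fragments; the mongo-configdb name and host sourcePath are my own choices, by analogy with the existing entries):

```json
"volumes": [
  { "name": "mongo-app",      "host": { "sourcePath": "/var/app/mongo-app" } },
  { "name": "mongo-configdb", "host": { "sourcePath": "/var/app/mongo-configdb" } },
  { "name": "some-api",       "host": { "sourcePath": "/var/app/some-api" } }
],
...
"mountPoints": [
  { "sourceVolume": "mongo-app",      "containerPath": "/data/db" },
  { "sourceVolume": "mongo-configdb", "containerPath": "/data/configdb" }
]
```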
