Zabbix Auto Discovery by ping - discovery

I would like to add all newly discovered hosts to a host group AutoDiscovered.
I have added:
Action: Auto Discovery Ping
Conditions: Discovery rule = Ping
Operation: Add to host groups: AutoDiscovered
Discovery Rule:
Name: Ping
Range: 192.168.0.0/23
Delay: 60
Checks: ICMP ping
Unique: IP address
Enabled: Y
I wait 60 seconds, but the Zabbix host does not start pinging hosts. I get no output from:
tcpdump -n not port 22 and host zabbix.server and not port 80 and not port 53
I had expected Zabbix to start pinging like crazy. Why does this not happen?

You just have to be patient: 1 hour later hosts started trickling in.
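If you want to see whether discovery is actually running before hosts show up, a couple of things to check (a sketch assuming a default package install; the log path may differ on your system):
# watch the discoverer processes in the server log
tail -f /var/log/zabbix/zabbix_server.log | grep -i discoverer
# capture only the ICMP traffic the discovery rule should generate
tcpdump -n icmp and net 192.168.0.0/23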

DigitalOcean Network Firewall allowing SSH connections on ports other than only 22

I have a droplet on DigitalOcean with IPv4 and IPv6 enabled. The droplet is behind a DigitalOcean Network Firewall with the following rules:
Inbound:
SSH TCP 22 All IPv4, All IPv6
HTTP TCP 80 All IPv4, All IPv6
HTTPS TCP 443 All IPv4, All IPv6
Outbound:
ICMP ICMP All IPv4 All IPv6
All TCP TCP All ports All IPv4 All IPv6
All UDP UDP All ports All IPv4
My understanding and expectation is that this will block all SSH attempts on ports other than port 22. However, when checking the sshd unit in the systemd journal, I see the following entries:
2022-12-29 03:00:32 Disconnected from invalid user antonio 43.153.179.44 port 45614 [preauth]
2022-12-29 03:00:32 Received disconnect from 43.153.179.44 port 45614:11: Bye Bye [preauth]
2022-12-29 03:00:31 Invalid user antonio from 43.153.179.44 port 45614
2022-12-29 02:58:37 Disconnected from invalid user desliga 190.129.122.3 port 1199 [preauth]
2022-12-29 02:58:37 Received disconnect from 190.129.122.3 port 1199:11: Bye Bye [preauth]
2022-12-29 02:58:37 Invalid user desliga from 190.129.122.3 port 1199
and many more of these lines, which suggests the firewall is not blocking SSH connections on ports other than 22.
The following graph shows the number of SSH connections to ports other than 22 in the last hour. The connections dropped after enabling the Network Firewall, but they have not stopped.
Could it be that the Network Firewall of DigitalOcean is broken?
What am I missing?
Is anyone seeing the same situation on their infrastructure?
The ports shown in the log are the remote (source) ports that the connections are coming from on the remote IPs; they do not indicate that those ports are listening on your server or allowed through the firewall. As you describe it, the firewall is configured to allow any remote IP and port to connect to your droplet on local ports 22, 80, and 443.
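You can confirm this on the droplet itself, for example with ss from iproute2 (a quick sanity check, not specific to DigitalOcean):
# sshd should only be listening on local port 22
sudo ss -tlnp | grep sshd
# established SSH sessions: the local port is always 22, the peer port is the
# ephemeral source port you see in the journal entries
sudo ss -tn state established '( sport = :22 )'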

Remote access to OpenShift Local (CRC) running on Win11

I've got CRC running on Windows 11 and I would like to connect there from a RHEL9 VM.
CRC listening on 127.0.0.1:6443
A port forwarding rule was created on the Windows machine to forward connections arriving on 192.168.1.156 (local interface) to 127.0.0.1:
$ netsh interface portproxy show v4tov4
Listen on ipv4: Connect to ipv4:
Address Port Address Port
192.168.1.156 9000 127.0.0.1 6443
Added a rule in the Windows firewall to allow connections to port 9000.
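For reference, rules like the ones above would typically be created with commands along these lines (a hypothetical reconstruction from the output shown; adjust the firewall rule name as needed):
netsh interface portproxy add v4tov4 listenaddress=192.168.1.156 listenport=9000 connectaddress=127.0.0.1 connectport=6443
netsh advfirewall firewall add rule name="CRC portproxy 9000" dir=in action=allow protocol=TCP localport=9000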
From the VM:
[test@workstation ~]$ telnet 192.168.1.156 9000
Trying 192.168.1.156...
Connected to 192.168.1.156.
Escape character is '^]'.
Connection closed by foreign host.
[test@workstation ~]$ oc login -u developer -p developer https://192.168.1.156:9000
The server is using a certificate that does not match its hostname: x509: certificate is valid for 10.217.4.1, not 192.168.1.156
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
Error from server (InternalError): Internal error occurred: unexpected response: 412
Any idea how I can fix this and connect from my VM to CRC?
Thanks

Ubuntu local IP address does not resolve

I set up a Hugo web server, which listens on localhost:30000.
The Ubuntu machine has the address 192.168.2.137.
When I do:
curl http://localhost:30000/ -> OK
curl http://127.0.0.1:30000/ -> OK
but,
curl http://192.168.2.137:30000/ -> curl: (7) Failed to connect to 192.168.2.131 port 30000: Connection refused
What could be the reason for that?
My /etc/netplan/00-installer-config.yaml looks like:
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    enp0s3:
      dhcp4: false
      addresses: [192.168.2.137/24]
      gateway4: 192.168.2.1
      nameservers:
        addresses:
          - 8.8.8.8
    lo:
      renderer: networkd
      match:
        name: lo
      addresses:
        - 192.168.2.137/24
I also added an entry to /etc/hosts:
192.168.2.137 localhost
You said that it listens on localhost, so if you use another interface it won't work, which is expected. You should make it listen on all interfaces, or on the enp0s3 address.
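For example, if the site is served with the hugo CLI, something along these lines should make it reachable on the LAN address (a sketch; adjust the port and baseURL to your setup):
# bind to all interfaces instead of the default 127.0.0.1
hugo server --bind 0.0.0.0 --port 30000 --baseURL http://192.168.2.137:30000/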

Python/Pod cannot reach the internet

I'm using python3 with microk8s to develop a simple web service.
The service works properly (with Docker on my local development machine), but the production machine (Ubuntu 18.04 LTS with microk8s in Azure) cannot reach the internet (SMTP/web REST API) once the pod has started (all internal services work).
Problem
The pod cannot ping hostnames, only IP addresses. After investigation, the pod works as expected except for external resources. nslookup seems to be OK, but ping is not working.
bash-5.1# ping www.google.com
ping: bad address 'www.google.com'
bash-5.1# nslookup www.google.com
Server: 10.152.183.10
Address: 10.152.183.10:53
Non-authoritative answer:
Name: www.google.com
Address: 74.125.68.103
Name: www.google.com
Address: 74.125.68.106
Name: www.google.com
Address: 74.125.68.99
Name: www.google.com
Address: 74.125.68.104
Name: www.google.com
Address: 74.125.68.105
Name: www.google.com
Address: 74.125.68.147
Non-authoritative answer:
Name: www.google.com
Address: 2404:6800:4003:c02::93
Name: www.google.com
Address: 2404:6800:4003:c02::63
Name: www.google.com
Address: 2404:6800:4003:c02::67
Name: www.google.com
Address: 2404:6800:4003:c02::69
bash-5.1# ping 74.125.68.103
PING 74.125.68.103 (74.125.68.103): 56 data bytes
64 bytes from 74.125.68.103: seq=0 ttl=55 time=1.448 ms
64 bytes from 74.125.68.103: seq=1 ttl=55 time=1.482 ms
^C
--- 74.125.68.103 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.448/1.465/1.482 ms
bash-5.1# python3
>>> import socket
>>> socket.gethostname()
'projects-dep-65d7b8685f-jzmxx'
>>> socket.gethostbyname('www.google.com')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
socket.gaierror: [Errno -3] Try again
Environments/Settings
host $ #In Host
host $ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic
host $ microk8s status
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dashboard
dns
ha-cluster
ingress
metrics-server
registry
storage
disabled:
ambassador
cilium
fluentd
gpu
helm
helm3
host-access
istio
jaeger
keda
knative
kubeflow
linkerd
metallb
multus
portainer
prometheus
rbac
traefik
# In Pod
bash-5.1 # python3
>>> import sys
>>> print({'version':sys.version, 'version-info': sys.version_info})
{'version': '3.9.3 (default, Apr 2 2021, 21:20:32) \n[GCC 10.2.1 20201203]', 'version-info': sys.version_info(major=3, minor=9, micro=3, releaselevel='final', serial=0)}
bash-5.1 #
bash-5.1 # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local ngqy0alqbw2elndk2awonodqmd.ix.internal.cloudapp.net
nameserver 10.152.183.10
options ndots:5
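For reference, with options ndots:5 a name with fewer than five dots (such as www.google.com) is also tried with the search suffixes above appended; querying the fully qualified name directly (note the trailing dot) skips the search list:
bash-5.1# nslookup www.google.com.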
You can confirm whether your pod network namespace can connect to external and internal VNet IPs with the following commands:
kubectl --namespace=kube-system exec -it ${KUBE-DNS-POD-NAME} -c kubedns -- sh
# run ping or nslookup against the metadata endpoint
Restarting the pod or container can fix hostnames not resolving to external addresses; alternatively, you can move the pod to a different node. Also, edit the Kubernetes DNS add-on manifest on the master (repeat for every master):
vi /etc/kubernetes/addons/kube-dns-deployment.yaml
and change the arguments for the healthz container as below:
"--cmd=nslookup bing.com 127.0.0.1 >/dev/null"
"--url=/healthz-dnsmasq"
"--cmd=nslookup bing.com 127.0.0.1:10053 >/dev/null"
"--url=/healthz-kubedns"
"--port=8080"
"--quiet"
You can also try restarting CoreDNS (in microk8s the DNS add-on deploys it as the coredns deployment in kube-system) with the following command:
kubectl -n kube-system rollout restart deployment/coredns
This will force the DNS pods to restart if the above condition occurs.
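A quick way to verify DNS works again from inside the pod after the restart (a sketch; <pod-name> is a placeholder for your pod):
kubectl -n kube-system rollout status deployment/coredns
kubectl exec -it <pod-name> -- nslookup www.google.com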
Thanking you,

How to check whether a certain port is open or blocked on other servers from a dev box?

I am trying to connect from a dev box to one of our staging Cassandra servers in our company on ports 9042 and 9160. Through the code I am not able to connect to it; the program hangs at my SELECT query.
So I am wondering: is there any way to figure out from my dev box whether these two ports are blocked on my Cassandra staging servers or not?
Below is the Cassandra staging server url which I am trying to connect from my dev box -
sc-host01.vip.slc.qa.host.com
And my dev box machine url is -
username-dyn-vm1-4.phx-os1.tratus.dev.host.com
Can anyone tell me how to figure out the possible reason why I am not able to connect to it?
How do I check from my dev box whether these ports are open or not on my Cassandra staging servers?
Update:-
ubuntu@username-dyn-vm1-4:~/build$ traceroute sc-host01.vip.slc.qa.host.com
traceroute to sc-host01.vip.slc.qa.host.com (10.109.107.64), 30 hops max, 60 byte packets
1 10.9.209.1 (10.9.209.1) 4.594 ms 6.628 ms 8.299 ms
2 * * *
3 * * *
4 * * *
5 * * *
6 * * *
7 stgcass01-1.vip.slc.qa.host.com (10.109.107.64) 7.907 ms 3.312 ms 3.950 ms
This is what I got when I ran nmap -
ubuntu@username-dyn-vm1-4:~/build$ nmap -p T:9160 sc-host01.vip.slc.qa.host.com
Starting Nmap 6.00 ( http://nmap.org ) at 2013-10-13 20:01 UTC
Nmap scan report for sc-host01.vip.slc.qa.host.com (10.109.107.64)
Host is up (0.0037s latency).
rDNS record for 10.109.107.64: stgcass01-1.vip.slc.qa.host.com
PORT STATE SERVICE
9160/tcp open apani1
Nmap done: 1 IP address (1 host up) scanned in 0.19 seconds
ubuntu@username-dyn-vm1-48493:~/build$ nmap -p T:9042 sc-host01.vip.slc.qa.host.com
Starting Nmap 6.00 ( http://nmap.org ) at 2013-10-13 20:02 UTC
Nmap scan report for sc-host01.vip.slc.qa.host.com (10.109.107.64)
Host is up (0.0049s latency).
rDNS record for 10.109.107.64: stgcass01-1.vip.slc.qa.host.com
PORT STATE SERVICE
9042/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 0.11 seconds
Does that mean the ports are open correctly and there is no problem?
And with telnet I get this -
ubuntu@username-dyn-vm1-4:~/build$ telnet sc-host01.vip.slc.qa.host.com 9042
Trying 10.109.107.64...
Connected to stgcass01-1.vip.slc.qa.host.com.
Escape character is '^]'.
^CConnection closed by foreign host.
ubuntu@username-dyn-vm1-4:~/build$ telnet sc-host01.vip.slc.qa.host.com 9160
Trying 10.109.107.64...
Connected to stgcass01-1.vip.slc.qa.host.com.
Have you tried telnet from the dev box?
telnet sc-host01.vip.slc.qa.host.com 9042
telnet sc-host01.vip.slc.qa.host.com 9160
If you get a telnet prompt back, you have connectivity. If it hangs there, the connection may be timing out. If the command fails outright, you may have firewall rules preventing access. You can try 'traceroute sc-host01.vip.slc.qa.host.com' to see the path the connection is trying to take.
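If telnet is not installed on the dev box, netcat does the same check (the -z flag only probes the port without sending data):
nc -vz sc-host01.vip.slc.qa.host.com 9042
nc -vz sc-host01.vip.slc.qa.host.com 9160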
