Missing "kernel: Firewall" messages - linux

Where are my iptables "Blocked" log messages? I wonder if this is an OpenVZ issue or something from the scripted install. Note: I'm highly technical, but not a server admin. Could the OpenVZ host be blocking and logging outside of my VPS?
I have two newly installed machines running text-mode CentOS 7 x64, with packages up to date via yum, and with iptables/CSF.
Also, I ensured machine #2 has all the packages that are on machine #1, though #2 has some extras.
OpenVZ VPS (installed with their image of CentOS 7 x64)
VMware VM (installed with official CentOS 7 x64 minimal mode)
I performed my extra installs/configs exactly the same on both machines, and I have these lines in /etc/csf/csf.conf:
TESTING = "0"
TCP_IN = "22,80,443"
UDP_IN = ""
On the VM, I get these entries in /var/log/messages when I nmap-scan it:
Apr 12 17:25:23 mach kernel: Firewall: *UDP_IN Blocked* IN=ens192 OUT= ...
Apr 12 17:25:55 mach kernel: Firewall: *TCP_IN Blocked* IN=ens192 OUT= ...
On the VPS, I'm NOT getting any Firewall entries in /var/log/messages when I nmap-scan it... though as far as I can tell it is properly blocking traffic.
How do I even proceed/diagnose this?
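One way to start narrowing this down from inside the VPS: OpenVZ containers share the host kernel, and hosts commonly restrict which netfilter modules a container may use, including the ipt_LOG target that produces these messages. A hedged diagnostic sketch (run as root on the VPS; LOGDROPIN is the chain CSF normally creates for logging inbound drops):

# Is logging enabled in CSF itself? DROP_LOGGING should be "1".
grep '^DROP_LOGGING' /etc/csf/csf.conf

# Does the container's kernel expose the LOG target at all?
cat /proc/net/ip_tables_targets

# Did CSF actually install LOG rules, and are packets hitting them?
iptables -S | grep -i LOG
iptables -L LOGDROPIN -n -v

# Is the kernel ring buffer receiving the messages even if
# rsyslog isn't writing them to /var/log/messages?
dmesg | grep 'Firewall:'

If LOG is missing from the targets list, the container simply cannot generate those kernel messages; logging, if it happens at all, happens on the OpenVZ host, and you would need your provider to enable the module for your container.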

How to confirm if TimeSync service is enabled on a RHEL 8.2 VM running in Azure?

I'm new to Linux, so I'm a bit confused about whether I need to do any best-practice time sync configuration with Azure or not.
From https://learn.microsoft.com/en-us/windows-server/networking/windows-time-service/accurate-time?redirectedfrom=MSDN#allowing-linux-to-use-hyper-v-host-time
The above link mentions: "For Linux guests running in Hyper-V, clients are typically configured to use the NTP daemon for time synchronization against NTP servers. If the Linux distribution supports the TimeSync version 4 protocol and the Linux guest has the TimeSync integration service enabled, then it will synchronize against the host time. This could lead to inconsistent time keeping if both methods are enabled."
How can I confirm this?
How can I confirm if the TimeSync service is enabled on my RHEL 8.2 VM running in Azure?
Also, how can I confirm if my NTP daemon is configured for time synchronization against NTP servers?
As part of my investigation I have run the following on the RHEL 8.2 VM (running in Azure).
My finding so far is that ntp is not configured directly: /etc/ntp.conf does not exist and, as recorded in earlier comments, the ntpq command is not found.
[user@vm-aep-dev-eastu ~]$ service ntpd status
Redirecting to /bin/systemctl status ntpd.service
Unit ntpd.service could not be found.
However, chronyd is active. Chrony appears to be synchronising the system clock with NTP servers.
systemctl status chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2020-07-16 08:58:39 UTC; 7h ago
Other details:
$ /sbin/lsmod | egrep -i "^hv|hyperv"
hv_utils 36864 2
hv_balloon 28672 0
hyperv_fb 20480 1
hv_netvsc 86016 0
hv_storvsc 20480 4
hid_hyperv 16384 0
hyperv_keyboard 16384 0
hv_vmbus 114688 7 hv_balloon,hv_utils,hv_netvsc,hid_hyperv,hv_storvsc,hyperv_keyboard,hyperv_fb
Thanks
From the document Time sync for Linux VMs in Azure,
On Ubuntu 19.10 and later versions, Red Hat Enterprise Linux, and CentOS 8.x, chrony is configured to use a PTP source clock. For more information about Red Hat and NTP, see Configure NTP.
If both chrony and VMICTimeSync sources are enabled simultaneously, you can mark one as prefer, which sets the other source as a backup. Because NTP services do not update the clock for large skews except after a long period, VMICTimeSync will recover the clock from paused-VM events far more quickly than NTP-based tools alone.
See the Time sync for Linux VMs in Azure document for more details.
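To make the "how can I confirm" part concrete: on Azure, the VMICTimeSync host time source is exposed to the guest as a PTP device, and chrony reports what it is actually tracking. A hedged sketch of checks (device names can vary by image):

# Is a Hyper-V PTP clock present? "hyperv" here means the host time source.
cat /sys/class/ptp/ptp0/clock_name

# What is chrony actually using? A refclock PHC line means the host clock;
# server/pool lines mean ordinary NTP servers.
grep -E '^(refclock|server|pool)' /etc/chrony.conf
chronyc sources -v

# Overall sync state and current offset
chronyc tracking

If both a PHC refclock and NTP servers appear, chrony picks between them, and the prefer option mentioned above controls which one wins.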

Unable to load bnxt_en driver intermittently on Linux OS backed by hypervisor

I have a VM backed by vCenter.
The ESXi host has a physical adapter, "Broadcom BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller", with SR-IOV enabled on it.
The VM is connected to one management network (vmxnet3) and two SR-IOV adapters (SR-IOV passthrough).
Upon booting the VM, only two networks show up (one management and one SR-IOV).
journalctl -k showed the following error:
[ 4832.408471] bnxt_en 0000:13:00.0 (unnamed net_device) (uninitialized): Error (timeout: 500015) msg {0x0 0x0} len:0
[ 4832.408930] bnxt_en: probe of 0000:13:00.0 failed with error -1
Rebooting the machine did not help at all.
For the adapter that probes successfully:
bnxt_en 0000:03:00.0 eth1: NIC Link is Up, 25000 Mbps full duplex, Flow control: ON - receive & transmit
bnxt_en 0000:03:00.0 eth1: FEC autoneg off encodings: None
I rescanned the PCI devices and rebooted multiple times without any success.
Any pointers would be really helpful.
We had a similar issue and were able to fix it.
In our case we saw the same error message on Debian 10, 11, and Oracle Linux 8, though we installed directly on hardware without a hypervisor.
It could still be the same issue, since you're using passthrough.
There are two ways to fix it:
Use UEFI boot
Disable PXE boot and keep BIOS/legacy boot
Both options fixed it for us.
Disabling PXE didn't work for us, but we can get the ports back online by running
echo 0000:af:00.0 > /sys/bus/pci/drivers/bnxt_en/bind
where 0000:af:00.0 is the PCI address of the port, which can be found via dmesg | grep bnxt_en by looking for the port or ports that failed.
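If the manual rebind works but the port is dead again after every boot, the write can be automated. A hedged sketch as a one-shot systemd unit (the PCI address 0000:af:00.0 and the unit name bnxt-rebind.service are assumptions; substitute your failing port):

# /etc/systemd/system/bnxt-rebind.service (hypothetical unit name)
[Unit]
Description=Rebind bnxt_en ports that failed to probe
Wants=network-pre.target
Before=network-pre.target

[Service]
Type=oneshot
# Unbind first in case a half-initialized binding exists;
# ignore errors so the bind still runs if the device was never bound.
ExecStart=/bin/sh -c 'echo 0000:af:00.0 > /sys/bus/pci/drivers/bnxt_en/unbind || true'
ExecStart=/bin/sh -c 'echo 0000:af:00.0 > /sys/bus/pci/drivers/bnxt_en/bind'

[Install]
WantedBy=multi-user.target

Enable it with systemctl daemon-reload && systemctl enable bnxt-rebind.service. This only papers over the failed probe; the UEFI/PXE firmware changes above address the root cause.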

Trying Wireguard on Raspberry Pi failed with "RTNETLINK answers: Operation not supported"

Steps I tried
I am trying to set up a WireGuard client on a Raspberry Pi.
This is the configuration I used:
# /etc/wireguard/wg0-client.conf
[Interface]
Address = 10.10.0.4/32
Address = fd86:ea04:1111::4/128
SaveConfig = true
PrivateKey = CLIENT-PRIVATE-KEY
DNS = 8.8.8.8
[Peer]
PublicKey = SERVER-PUBLIC-KEY
Endpoint = SERVER-PUBLIC-IP:PORT
AllowedIPs = 0.0.0.0/0, ::/0
After setting up the WireGuard config, I ran sudo wg-quick up wg0-client, and it failed like this:
pi@raspberrypi:~ $ sudo wg-quick up wg0-client
[#] ip link add wg0-client type wireguard
RTNETLINK answers: Operation not supported
Unable to access interface: Protocol not supported
[#] ip link delete dev wg0-client
Cannot find device "wg0-client"
The WireGuard server side has been working for a while with other devices, so I won't paste that info here.
OS and hardware context
/etc/os-release info
pi@raspberrypi:~ $ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
Contents of /sys/firmware/devicetree/base/model for hardware info:
Raspberry Pi 3 Model B Rev 1.2
I solved this the other day on my Pi 2 by removing WireGuard, updating/upgrading the kernel to the latest version, installing the kernel headers, and reinstalling WireGuard. It worked like a charm after that.
But you may only need the kernel headers.
You can try running sudo apt-get install raspberrypi-kernel-headers before anything else.
I'm on:
Linux raspberrypi 4.19.118-v7+ #1311 SMP Mon Apr 27 14:21:24 BST 2020 armv7l GNU/Linux
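The "RTNETLINK answers: Operation not supported" error means the kernel has no wireguard module to back ip link add, which is why rebuilding it against matching headers helps. A hedged verification sketch (assumes the DKMS-based wireguard package, as shipped for Raspbian Buster via backports):

# Headers for the running kernel, then rebuild/reinstall WireGuard
sudo apt-get install raspberrypi-kernel-headers
sudo apt-get install --reinstall wireguard

# Did DKMS build the module for the running kernel?
dkms status

# Can the module load now?
sudo modprobe wireguard && lsmod | grep wireguard

If modprobe succeeds, wg-quick up wg0-client should get past the ip link add step.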

Xen HVM domU VNC not refreshing screen

On one of our hypervisors running Xen (v.4.6.0 on top of Debian Jessie on a Dell R420), when we configure a domU for HVM and connect to the console via VNC, the connection displays a static image and appears to not accept mouse or keyboard input (leading you to think that the VM is frozen/not responsive). The behavior persists after closing and reconnecting over VNC, but the mouse/keyboard input from the previous session is now reflected (so if you tab three times, you can see that the appropriate radio or input button is highlighted after closing/opening the VNC connection, but you need to close the window again to see where the next input is, making it unusable).
We have Xen running smoothly on three other physical machines with HVM-configured domUs (2x Debian Jessie, 1x Ubuntu Xenial, all with v.4.6.0) and have been comparing what could be different; we noticed that QEMU could be updated on the troublesome Xen host. After upgrading QEMU from 1.2.2 to 1.2.5 (matching the version on the working hosts) and rebooting, the issue still persists. We have copied the VM config to another host with successful results, leading us to believe there is something isolated to this machine.
Results of cat /sys/hypervisor/properties/capabilities:
xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
Results of xl info:
host : vm-host
release : 3.16.0-4-amd64
version : #1 SMP Debian 3.16.7-ckt25-2+deb8u3 (2016-07-02)
machine : x86_64
nr_cpus : 16
max_cpu_id : 47
nr_nodes : 1
cores_per_socket : 8
threads_per_core : 2
cpu_mhz : 2500
hw_caps : bfebfbff:2c100800:00000000:00007f00:77bee3ff:00000000:00000001:00000281
virt_caps : hvm hvm_directio
total_memory : 32704
free_memory : 17945
sharing_freed_memory : 0
sharing_used_memory : 0
outstanding_claims : 0
free_cpus : 0
xen_major : 4
xen_minor : 6
xen_extra : .0
xen_version : 4.6.0
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler : credit
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset :
xen_commandline : placeholder dom0_mem=1024M,max:1024M dom0_max_vcpus=1 dom0_vcpus_pin no-real-mode edd=off
cc_compiler : gcc (Debian 5.3.1-8) 5.3.1 20160205
cc_compile_by : ijc
cc_compile_domain : debian.org
cc_compile_date : Tue Feb 9 17:46:27 UTC 2016
xend_config_format : 4
Sample domU config:
name="VM1"
uuid="91f4c306-101b-431b-bf73-2146b2a137fb"
vcpus=2
memory=2048
disk = [ "phy:/dev/vg1/centos,xvda2,w",
"file:/path/folder/images/CentOS-7-x86_64-Minimal-511.iso,xvdb:cdrom,r" ]
builder = "hvm"
boot = "dc"
vnc = "1"
vnclisten = "0.0.0.0"
vncdisplay = "0"
vncpasswd = "password"
vga ="stdvga"
videoram = 64
Any and all advice on how to get VNC working smoothly and properly would be greatly appreciated!
Try adding GRUB_GFXPAYLOAD_LINUX="keep" or GRUB_GFXPAYLOAD_LINUX="640x480" (or another resolution) to /etc/default/grub on the domU, then run update-grub2 (on the domU) and reboot. This helped me with the same error.
Thanks for the recommendation. It turned out that we had mixed versions of Xen and its dependencies installed (some 4.4, some 4.6). We ended up removing Xen and all related packages and reinstalling. During installation, we noticed that installing xen-hypervisor-4.6-amd64 was coming from the stretch repo (expected), but its dependencies were coming from the jessie main repo with older versions (e.g., libxen-4.4 instead of libxen-4.6). To solve it, we ran
apt-get -t stretch install xen-hypervisor-4.6-amd64
which properly installed all dependencies from stretch, and after a reboot, VNC connections to HVM domU were working as expected.
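For anyone who suspects the same mixed-version state, a hedged sketch of how to spot it before resorting to a full reinstall (standard dpkg/apt commands, nothing Xen-specific):

# List installed Xen-related packages with versions;
# a mix of 4.4 and 4.6 entries here indicates the broken state.
dpkg -l | grep -Ei 'xen|libxen'

# Show which repo each candidate version would come from
apt-cache policy xen-hypervisor-4.6-amd64

# Pull the hypervisor and its dependencies from stretch explicitly
apt-get -t stretch install xen-hypervisor-4.6-amd64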

After suspend guest OS hangs when using vagrant with nfs

Host OS: Ubuntu 15.10
Guest OS: Ubuntu 14.10
Using Vagrant with NFS and VirtualBox, with a static IP on the private network.
It works perfectly, except that after the host OS has been suspended, the entire guest OS becomes unusable.
This does not happen when using the normal VirtualBox shared folders.
It's not only the NFS shared folder that is unusable; the entire OS hangs.
Even syslog does not seem to see much action.
This is syslog on the guest, from waking up until vagrant halt completed:
Feb 26 07:15:33 vagrant kernel: [ 8375.252989] e1000: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Feb 26 07:16:11 vagrant kernel: [ 8413.109832] nfs: server 192.168.33.1 not responding, still trying
Feb 26 07:16:38 vagrant kernel: [ 8440.687476] nfs: server 192.168.33.1 not responding, still trying
Feb 26 07:17:01 vagrant CRON[3776]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Feb 26 07:20:33 vagrant rsyslogd: [origin software="rsyslogd" swVersion="7.4.4" x-pid="753" x-info="http://www.rsyslog.com"] exiting on signal 15.
How can this be fixed?
How should I debug it?
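No accepted answer is recorded here, but the syslog lines point at the classic failure mode: the guest's NFS client blocks hard on 192.168.33.1 (the host side of the private network) when it vanishes during suspend, and any process touching the mount hangs with it. One hedged mitigation is to use soft, interruptible NFS mount options in the Vagrantfile so stalled calls eventually return errors instead of hanging the guest (option values are illustrative, not tuned):

# Vagrantfile excerpt: NFS synced folder with non-blocking mount options
config.vm.synced_folder ".", "/vagrant",
  type: "nfs",
  mount_options: ["soft", "intr", "timeo=10", "retrans=3"]

Note the trade-off: soft mounts can surface I/O errors on interrupted writes. The safer habit is simply to vagrant suspend or vagrant halt the guest before suspending the host.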
