Snort installed on Ubuntu not sending alerts to syslog - linux

I have a Magento website setup on a linux machine that is based on a Bitnami
ready-made image.
The main goal is to be notified by email whenever there might be a potential attack on the site.
To achieve that I decided to install Snort IDS and email the alerts coming to the syslog using Swatch.
I've installed Snort by following this tutorial from Snort's official website.
I've just finished section 9 of that tutorial which means:
Installed all the prerequisites.
Installed Snort IDS on the machine.
Set up a test rule to alert when ICMP requests (pings) occur (a sketch of such a rule is shown below).
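For reference, a minimal sketch of what that kind of test rule looks like in local.rules (the msg text and sid here are my own placeholders, not necessarily the tutorial's exact values):
alert icmp any any -> $HOME_NET any (msg:"ICMP test alert"; sid:10000001; rev:1;)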
Next, to allow Snort to log alerts to syslog, I uncommented this line in the snort.conf file:
output alert_syslog: LOG_AUTH LOG_ALERT
I've tested the installation by running this command:
sudo /usr/local/bin/snort -A console -q -u snort -g snort -c /etc/snort/snort.conf -i eth0
While Snort was running I made a ping request from another system.
I can see alerts registering in Snort's log file, but nothing is added to the syslog.
Trial and error:
Ran Snort as the root user.
Set syslog to forward logs to another server (remote syslog).
I don't have a great deal of experience with Linux, so any help pointing me in the right direction will be very much appreciated.
Some facts:
Bitnami Magento Stack 1.9.1.0-0
Ubuntu 14.04.3 LTS
Snort 2.9.7.5

I've posted this question on linuxquestions.org as well and got an answer.
Following unSpawn's reply I reviewed the rsyslog conf files and found that auth logs are sent to the auth.log file.
That led to a quick fix of adding an additional .conf file to /etc/rsyslog.d with the content:
auth.* /var/log/syslog
Also, as suggested, I made some changes to the Snort execution command (omitting -q and -A console):
sudo /usr/local/bin/snort -u snort -g snort -c /etc/snort/snort.conf -i eth0
After restarting the rsyslog service I found the missing Snort alerts in syslog.
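For anyone following along, a minimal sketch of that drop-in (the filename 60-snort.conf is my own placeholder; the selector uses standard rsyslog facility.priority syntax):
# /etc/rsyslog.d/60-snort.conf (placeholder name)
# copy everything logged to the auth facility into the main syslog as well
auth.*    /var/log/syslog
Then reload rsyslog:
sudo service rsyslog restart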

Related

How to enable bmcweb in openbmc

I have now successfully built OpenBMC and run it on a server with an aspeed2500 BMC.
I can log in to OpenBMC and also SSH into it.
But I can't access the WebUI from a browser:
This site can't be reached
refused to connect.
ERR_CONNECTION_REFUSED
How can I access the WebUI from a browser?
First, feel free to reach out on the Discord (https://discord.gg/69Km47zH98) or on the email list and ask the experts for more detailed help.
I will share what I do when I want to know if BMC web is working on a machine.
Make sure the bitbake recipe is included
Make sure bmcweb is running and there are no error messages
Make sure the network is allowing bmcweb to receive and send messages
To make sure the recipe is included, I typically run
find -name bmcweb
in the bitbake build directory. It should be in rootfs. If you don't see bmcweb in the build directory, there is an issue with your recipes and it is not being included.
To make sure bmcweb is running on the BMC, I SSH in and run ps | grep bmcweb, journalctl -u bmcweb, or systemctl status bmcweb.
Typically these give me confidence that bmcweb is running, or an indication that it is not.
The network is the most difficult item for me to check. The netstat command will indicate what ports are open on the BMC, or from the host you can run nmap ${bmc_ip} to list open ports.
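As a rough sketch of that network check (the port number and Redfish path assume a stock bmcweb configuration, so adjust for your build):
# on the BMC: confirm something is listening on the HTTPS port bmcweb normally uses
netstat -tln | grep ':443'
# from the host: list the BMC's open ports
nmap ${bmc_ip}
# from the host: quick end-to-end probe of the Redfish service root
# (-k skips verification of the BMC's self-signed certificate)
curl -k https://${bmc_ip}/redfish/v1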
Those are the three steps I follow when I am unsure about bmcweb. Feel welcome to reach out on the Discord or the email list.

Zabbix server not running

I just installed Zabbix 5.0 LTS (the latest version of Zabbix) on RHEL 8. On logging in to the Zabbix front end, I get a message saying "Zabbix server is not working", and the bar below says: "Zabbix server is not running. The information displayed may not be current." Kindly provide help.
Edit: My server port is 10051.
On entering "service zabbix start", I get output:
Redirecting to /bin/systemctl start zabbix.service
Failed to start zabbix.service: Unit zabbix.service not found.
And on entering "systemctl restart zabbix-server zabbix-agent httpd php-fpm", I get:
Job for zabbix-server.service failed because the control process exited with error code.
See "systemctl status zabbix-server.service" and "journalctl -xe" for details.
Output of "journalctl -xe":
RHEL8 platform-python[5746]: SELinux is preventing zabbix_server from using the dac_override capability.
*** Plugin dac_override (91.4 confidence) suggests ************************
If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do
Turn on full auditing
#auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
#ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla
*** Plugin catchall (9.59 confidence) suggests *************************
If you believe that zabbix_server should have the dac_override capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
#ausearch -c 'zabbix_server' --raw | audit2allow -M my-zabbixserver
#semodule -X 300 -i my-zabbixserver.pp
RHEL8 dbus-daemon[779]: [system] Activating service name='org.fedoraproject.Setroubleshootd' requested by ':1.40' (uid=0 pid=748 comm="/usr/sbin/sedispatch " label="system_u:system_r:auditd_t:s0") (using servicehelper)
On entering "systemctl status zabbix-server.service", I get output:
zabbix-server.service - Zabbix server: Loaded: ....
Active: ....
Process: 4959 ExecStart=/usr/sbin/zabbix_server -c $CONFILE (code=exited, status=1/FAILURE)
RHEL8 systemd[1]: zabbix-server.service: Control process exited, code=exited status=1
RHEL8 systemd[1]: zabbix-server.service: Failed with result 'exit-code'.
RHEL8 systemd[1]: Failed to start Zabbix Server.
What do I do now?
Solved. I had to configure SELinux. Just open /etc/selinux/config (e.g. with "vim /etc/selinux/config") and change SELINUX from "enforcing" to "permissive". After that, reboot the system and the Zabbix server will start working.
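A minimal sketch of that change, assuming the standard RHEL 8 SELinux tooling:
# switch to permissive immediately (lasts until the next reboot)
setenforce 0
# make it persistent: in /etc/selinux/config set
SELINUX=permissive
# then reboot and confirm the mode
getenforce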
You will have seen content about Zabbix in the file /var/log/messages. In that file you will see text like the below:
run sealert -l 84e0b04d-d0ad-4347-8317-22e74f6cd020
Do this command:
#sealert -l 84e0b04d-d0ad-4347-8317-22e74f6cd020
Then you will see verbose output about this problem; to solve it you need to allow Zabbix in the environment. The audit2allow command does this.

X11 forwarding request failed on channel 0

When I do "ssh -X abcserver", I get the message "X11 forwarding request failed on channel 0".
I checked online, and it was suggested to solve it by switching "X11UseLocalhost no" to "X11UseLocalhost yes".
However, neither my manager nor I have that administrative privilege. I am wondering whether, apart from this solution, there is another option to solve the issue. I also don't have sudo privileges to install X11 on the server directly.
My local platform is:
Linux version 3.16.0-4-amd64 (debian-kernel@lists.debian.org)
(gcc version 4.8.4 (Debian 4.8.4-1) ) #1 SMP Debian 3.16.7-ckt25-2+deb8u3 (2016-07-02)
The remote platform is:
Linux version 3.13.0-88-generic (buildd@lgw01-16)
(gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) )
#135-Ubuntu SMP Wed Jun 8 21:10:42 UTC 2016
Adding the -v option to ssh when trying to log in will give a lot of debug information which might give a clue as to exactly what the problem is, for instance:
debug1: Remote: No xauth program; cannot forward with spoofing.
In my case, installing xauth on the server fixed the issue.
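A short sketch of that diagnostic flow (the apt-get line assumes a Debian/Ubuntu remote, as in the question):
# run the connection verbosely and watch for X11-related debug lines
ssh -vX abcserver
# if the debug output complains about a missing xauth program,
# install it on the remote host
sudo apt-get install xauth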
I had to edit the sshd config file on the remote server to fix the issue. It worked on Ubuntu 16.04 Server:
$ sudo vim /etc/ssh/sshd_config
Set `X11UseLocalhost no`
Save the file.
$ sudo service sshd restart
$ exit
Now it works!
$ ssh -X user@remotehost
$ xclock
sudo apt install xauth
change the line #AddressFamily any to AddressFamily inet in /etc/ssh/sshd_config
sudo service ssh restart
This is enough on Ubuntu 18.04 LTS.
After login with ssh -X (or after activating the PuTTY / KiTTY option "Enable X11 forwarding") you should see that the environment variable DISPLAY is automatically set to localhost:10.0 or similar. After the first successful login (with functional X11 forwarding) the file .Xauthority will be generated, which is another positive sign of success.
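For example, a quick check after logging in (the values shown are typical, not guaranteed):
echo $DISPLAY        # expect something like localhost:10.0
ls -l ~/.Xauthority  # should exist after the first successful forwarded login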
If you are interested to see and to understand the details of X11 forwarding within your session you can try with lsof -i -P|grep ssh.
1. Make sure that when you run ssh -X root@server you have root permission.
2. Update /etc/ssh/sshd_config and make sure this line is uncommented:
X11Forwarding yes
3. systemctl restart sshd
4. Exit from the server.
5. ssh -X root@server
6. virt-manager
In my case, as superuser, editing /etc/ssh/sshd_config on the remote host and changing the following line fixed it.
From
#X11Forwarding no
to
X11Forwarding yes
Then: pkill -HUP sshd on the remote host to make sshd reload its config, which also closes the sshd session.
After X11 forwarding suddenly stopped working, with no other change than moving the SSH server to another Wi-Fi network, I followed the answer to this seemingly completely different question and it worked.
In other words, it seems the solution for me was to specify AddressFamily inet in /etc/ssh/sshd_config.

Installing Apache on Windows Subsystem for Linux

Having just updated to the newest Windows 10 release (build 14316), I immediately started playing with WSL, the Windows Subsystem for Linux, which is supposed to run an Ubuntu installation on Windows.
Maybe I'm trying the impossible by attempting to install Apache on it, but if so, please explain to me why this isn't possible.
At any rate, during installation (sudo apt-get install apache2), I received the following error messages after the dependencies were downloaded and installed correctly:
initctl: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: No such file or directory
runlevel:/var/run/utmp: No such file or directory
* Starting web server apache2 *
* The apache2 configtest failed.
Output of config test was:
mktemp: failed to create directory via template '/var/lock/apache2.XXXXXXXXXX': No such file or directory
chmod: missing operand after '755'
Try 'chmod --help' for more information.
invoke-rc.d: initscript apache2, action "start" failed.
Setting up ssl-cert (1.0.33) ...
Processing triggers for libc-bin (2.19-0ubuntu6.7) ...
Processing triggers for ureadahead (0.100.0-16) ...
Processing triggers for ufw (0.34~rc-0ubuntu2) ...
WARN: / is group writable!
Now, I understand that there seem to be some folders and files missing for Apache2 to work. Before I start changing anything that will mess with my Windows installation, I want to ask whether there's a different way? Also, should I worry about / being group writable or is this just standard Windows behaviour?
In order to eliminate this warning:
Invalid argument: AH00076: Failed to enable APR_TCP_DEFER_ACCEPT
Add this to the end of /etc/apache2/apache2.conf
AcceptFilter http none
Note the following in your output
failed to create directory via template '/var/lock/apache2.XXXXXXXXXX': No such file
I tried listing /var/lock. It points to /run/lock, which doesn't exist.
Create the directory with
mkdir -p /run/lock
The install should now work (you may need to clean the installation first)
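A rough sketch of that recovery on a stock WSL Ubuntu image (dpkg --configure -a simply re-runs any package configuration that failed earlier, which is one way to read "clean the installation"):
sudo mkdir -p /run/lock
# re-run the configuration step that failed during install
sudo dpkg --configure -a
# or reinstall the package outright
sudo apt-get install --reinstall apache2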
You have to start bash.exe in administrator mode to avoid a lot of network-related problems.
I installed LAMP (Apache/MySQL/PHP) without any problem:
Start bash.exe in administrator mode
Type: sudo apt-get install lamp-server^
Add these two lines to /etc/apache2/apache2.conf:
ServerName localhost
AcceptFilter http none
Then you can start Apache:
/etc/init.d/apache2 start
Following the great advice here, I edited apache2.conf after receiving all the various errors above and inserted the following at the end of the file; Apache2 then worked great on the Debian WSL package:
ServerName localhost
AcceptFilter http none
AcceptFilter https none

mcelog: Cannot access bus threshold trigger `bus-error-trigger': Permission denied

Since this weekend I get a mail every hour from my server with the following message:
/etc/cron.hourly/mcelog.cron:
mcelog: Cannot access bus threshold trigger `bus-error-trigger': Permission denied
With the subject: "Cron <root@s1> run-parts /etc/cron.hourly"
On my VPS I run CentOS 6.7 and Plesk v12.0.18.
Does anyone know how I can fix this?
Thanks, Alexander
I've seen this on a couple of Plesk servers with SELinux enabled. The problem is that the security contexts of the scripts under /etc/mcelog are incorrect, so SELinux prevents mcelog from executing them. To fix this, run the following commands as root:
# semanage fcontext -a -t bin_t '/etc/mcelog/.*-error-trigger'
# restorecon -R /etc/mcelog
(If the semanage command is not available, install the policycoreutils-python package. You could just use chcon, but this would not survive a filesystem relabel.)
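For completeness, a small sketch of those two side notes (the package name is the CentOS 6 one mentioned above; the chcon form is the non-persistent alternative):
# install semanage if it is missing (CentOS 6)
yum install policycoreutils-python
# quick, non-persistent alternative to semanage + restorecon:
chcon -t bin_t /etc/mcelog/*-error-trigger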
See: http://forum.odin.com/threads/mcelog-cron-error.334110/
