how to reset mongodb cache - linux

Could you please tell me how to reset the MongoDB cache, to make sure that query results are not coming from cache?
Right now I do it by rebooting the server:
sudo reboot
Is there any other way?
Thank you.

Following user366534's comment, do this:
sudo service mongod stop
sudo sh -c "sync; echo 3 > /proc/sys/vm/drop_caches"   # flush page cache, dentries, and inodes
sudo service mongod start

MongoDB itself doesn't handle the caching. Instead, it's taken care of by the OS's virtual memory manager, since MongoDB uses memory-mapped files. See: Mongodb.org
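Since the cache in question is just the OS page cache, you can sanity-check the drop with free (column labels vary between procps versions):
free -h   # the buff/cache figure should shrink noticeably after the drop_caches step above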
I'm no Linux guru, but for dev purposes when I wanted to ensure a completely clean cache, I did a reboot as that seemed to be what made the difference.
I'm assuming you want to do this for dev/performance testing reasons rather than for production purposes too...
Update:
Check out this link: http://www.linuxask.com/questions/how-to-clear-cache-from-memory-in-linux

Related

Nodejs/Gcloud/kubectl any command we run from WSL2 is deadly slow

I have tried many solutions, yet no luck. I have a Linux automation script that runs a few gcloud commands with some conditions. I wrote the script with Node.js, but it is so incredibly slow that I can finish the task manually before the script completes its run.
The same happens with gcloud commands when I connect to a cluster, and with kubectl commands when I query something.
Please help!!
It could be a DNS config error on the WSL side. I had the same issue today; here's how I fixed it!
1. Checking the (deadly slow) response time
[tbg#~] time kubectl get deployments
No resources found in default namespace.
real 0m10.530s
user 0m0.087s
sys 0m0.043s
2. Checking the WSL/DNS configuration
[tbg#~] cat /etc/wsl.conf
[network]
generateResolvConf=false
[tbg#~] cat /etc/resolv.conf
nameserver XX.XXX.XXX.X
nameserver YYY.YY.YY.YY
nameserver 1.1.1.1
If you see that, remove the generateResolvConf=false line (and the static nameserver entries) to get back to automatic resolv.conf generation, then restart WSL (wsl --shutdown).
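A minimal sketch of that cleanup, assuming the files look exactly as shown above:
# drop the override and the stale resolv.conf, then let WSL regenerate it
sudo sed -i '/generateResolvConf/d' /etc/wsl.conf
sudo rm -f /etc/resolv.conf
# then from a Windows prompt:
wsl.exe --shutdown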
3. Checking the (fixed !) response time
[tbg#~] time kubectl get deployments
No resources found in default namespace.
real 0m1.212s
user 0m0.151s
sys 0m0.050s
I found out that my resolv.conf configuration was causing the latency by trying to reinstall kubectl with apt and noticing that apt was really slow too.
Right now, access to the /mnt folders in WSL2 is very slow, and by default the entire Windows PATH is appended to the Linux $PATH at launch, so any Linux binary that scans $PATH becomes unbearably slow.
To disable this feature, edit the /etc/wsl.conf to add the following section:
[interop]
appendWindowsPath = false
This keeps the Windows PATH out of the Linux $PATH; for now the best approach is to add the folders you need to $PATH manually.
Terminate the WSL distro (wsl.exe --terminate <distro_name>) to make the change take effect immediately, or run wsl.exe --shutdown and start the terminal again.
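If you still need a few Windows tools inside WSL, you can re-add just their folders; a minimal sketch (the path is only an example):
export PATH="$PATH:/mnt/c/Windows/System32"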
Refer to the stack link for more information.

Database 'neo4j' is unavailable. Cannot reset neo4j database

I have Neo4j Community 4.1.1 installed as a service on the Ubuntu command line on my Windows machine. I had been using Neo4j steadily for a month or two, but recently it has been preventing me from accessing the neo4j database. The Neo4j browser says:
Database 'neo4j' is unavailable. Run :sysinfo for more info.
I have tried uninstalling and reinstalling Neo4j, but that has not worked either. I had previously played around with the default listen address, but after the reinstall all config data is back to normal. Running ./neo4j-community-4.1.1/bin/cypher-shell under bin does not work. It says:
Unable to establish connection in 3000ms
If I run ./neo4j-community-4.1.1/bin/cypher-shell -a 192.168.0.19 it says:
Database 'neo4j' is unavailable
When I run ./neo4j-community-4.1.1/bin/neo4j-admin check-consistency --database=neo4j it also states:
.2020-08-18 22:12:16.868+0000 WARN [o.n.c.ConsistencyCheckService] Index was dirty on startup which means it was not shutdown correctly and need to be cleaned up with a successful recovery. Index file: /home/thomp105/neo4j-community-4.1.1/data/databases/neo4j/neostore.relationshipgroupstore.db.id.
I would love to reset everything from scratch, but I am unsure how.
At this point I cannot even access the browser at localhost:7474. It hangs indefinitely trying to load.
I am truly stumped. Does anyone have advice on how to navigate this issue?
It's not easy to guess the issue without seeing your system, but may I suggest trying to delete your default database, i.e. neo4j, physically from disk (e.g. rm -rf /home/thomp105/neo4j-community-4.1.1/data/databases/neo4j/), and then creating a database with a different name: open neo4j.conf, look for the setting that points at the default database (dbms.default_database in 4.x; dbms.active_database in 3.x), and change it to some other name.
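A minimal sketch of that reset, using the paths from the question; note that in 4.x the transaction logs live under data/transactions and should be removed along with the store (verify against your own layout):
# stop the server first
/home/thomp105/neo4j-community-4.1.1/bin/neo4j stop
rm -rf /home/thomp105/neo4j-community-4.1.1/data/databases/neo4j/
rm -rf /home/thomp105/neo4j-community-4.1.1/data/transactions/neo4j/
# then in conf/neo4j.conf set a fresh database name, e.g.:
#   dbms.default_database=fresh
/home/thomp105/neo4j-community-4.1.1/bin/neo4j start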
I had this problem running on a Linux server. The server was up, but I got this error on any query: Database 'neo4j' is unavailable. To troubleshoot, I ran sudo neo4j console and the problem went away. When I ran the console as the user neo4j, the problem came back.
$ /usr/share/neo4j/bin/neo4j console
Directories in use:
home: /var/lib/neo4j
config: /etc/neo4j
logs: /var/log/neo4j
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/run/neo4j
So I tried sudo chown -R neo4j:neo4j /var/lib/neo4j/data and the problem went away. Apparently, when I'd done a restore of the database I'd run the Neo4j server as root, but when the system runs Neo4j it does so as the user neo4j, which therefore couldn't read any of its data. It seems that an error like this would warrant an easy-to-parse error message, but verbosity is not the Neo4j way.
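If you suspect the same root cause, a quick way to spot offending files (path per the directory listing above):
sudo find /var/lib/neo4j/data ! -user neo4j -ls   # lists anything not owned by the neo4j user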

NFSClient issue on FreeBSD: "rpc.umntall: not found"

We have a FreeBSD 8 server that hadn't been restarted since it was first brought up. It has now been restarted, and we're trying to reconnect the NFS mount to it.
$ sudo /etc/rc.d/nfsclient start
NFS access cache time=60
rpc.umntall: not found
The obvious reason for the error rpc.umntall: not found is that the program doesn't exist on the computer.
Is there any other way to mount an NFS share from a server on the network than using the nfsclient script? Or can I force the client to move past the part of the script that requires rpc.umntall?
I only ask because it started fine before, and I'd be very surprised if we had removed any programs from it.
rpc.umntall is installed as part of the base system, usually in /usr/sbin/.
If you take a look at the contents of /etc/rc.d/nfsclient, you'll find this:
unmount_all()
{
    # If /var/db/mounttab exists, some nfs-server has not been
    # successfully notified about a previous client shutdown.
    # If there is no /var/db/mounttab, we do nothing.
    if [ -f /var/db/mounttab ]; then
        rpc.umntall -k
    fi
}
A cheap workaround would be to delete /var/db/mounttab.
However, if you want to fix the problem, you'll want to fix the missing rpc.umntall. Is it not in /usr/sbin/? If not, you could try to restore it from a published image, or you could attempt to build it from source.
If it's somewhere else on the computer, you could try to find it using find / | grep rpc.umntall.
If it exists in /usr/sbin but isn't working, that would likely mean something is wrong with the PATH variable used by your rc subsystem. You could double-check that by hardcoding the path to rpc.umntall right in the /etc/rc.d/nfsclient script.
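For example, in the unmount_all function shown above, the hardcoded call would look like this (a sketch; confirm the binary's actual location first):
if [ -f /var/db/mounttab ]; then
    /usr/sbin/rpc.umntall -k
fi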

Why does mongodb complain about transparent_hugepage?

Several questions already ask how to fix the MongoDB warning:
** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
** We suggest setting it to 'never'
But I'm wondering if it should be fixed. I get this warning from MongoDB 3.0.1 on an Ubuntu VM running on Google Cloud. Should I trust MongoDB that 'never' is better? Or should I trust Google/Ubuntu, who set it to 'always', presumably for a good reason? I imagine there are tradeoffs to consider, and I don't know what I'd be trading by keeping or fixing it.
Asking how to fix it is fine, but asking whether to fix it seems wiser.
Edit: MongoDB has addressed this issue since I wrote this answer. Their recommendation is at https://docs.mongodb.com/master/tutorial/transparent-huge-pages/ and probably ought to be your go-to solution. My original answer will still work, but I'd consider it a hack now that an official solution is available.
Original answer: According to the MongoDB documentation, http://docs.mongodb.org/manual/reference/transparent-huge-pages/, and this support ticket, https://jira.mongodb.org/browse/DOCS-2131, transparent_hugepage (THP) is designed to create fewer large memory blocks rather than many small memory blocks on systems with a lot of memory. This is great if your software needs large contiguous memory accesses. MongoDB, however, performs numerous smaller memory accesses regardless of available memory, and therefore runs better with THP disabled.
That makes me think either way will work, but you'll get better Mongo (or any database) performance with THP off, giving you smaller chunks of memory. If you don't have much memory anyway, THP probably ought to be off no matter what you run.
Several ways to do that are outlined in the link above. The most universally applicable appears to be editing rc.local.
$ sudo nano /etc/rc.local
Insert the following lines before the "exit 0" line:
...
if test -f /sys/kernel/mm/transparent_hugepage/khugepaged/defrag; then
    echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
exit 0
Note: Red Hat-based systems may use "redhat_transparent_hugepage" rather than "transparent_hugepage". You can check with:
ls /sys/kernel/mm/*transparent_hugepage*/enabled
cat /sys/kernel/mm/*transparent_hugepage*/enabled
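The active value is printed in brackets; after disabling, you would expect something like this (a sketch of the standard sysfs format; the listed values vary by kernel):
$ cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]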
To properly apply the changes, reboot (which will run rc.local), or run:
$ sudo su
# source /etc/rc.local
# service mongod restart
# exit
For Ubuntu using upstart scripts:
Since we deploy machines with Ansible, I don't like modifying rc files or GRUB configs.
I tried using sysfsutils / sysfs.conf, but ran into timing issues when starting the services on fast (or slow) machines. It looked like mongod was sometimes started before sysfsutils. Sometimes it worked, sometimes it didn't.
Since mongod is an upstart process I found that the cleanest solution was to add the file /etc/init/mongod_vm_settings.conf with the following content:
# Ubuntu upstart file at /etc/init/mongod_vm_settings.conf
#
# This file will set the correct kernel VM settings for MongoDB
# This file is maintained in Ansible
start on (starting mongod)
script
    echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
    echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
end script
This runs the script just before mongod starts.
Restart mongod (sudo service mongod restart) and you're done.
On Ubuntu I used the 'Init Script' option from this document: http://docs.mongodb.org/manual/tutorial/transparent-huge-pages/
None of these worked for me on an Amazon EC2 instance running Ubuntu 14.04, not even the init.d script recommended by MongoDB. I had to use the hugeadm tool, first installing it via apt-get and then running sudo hugeadm --thp-never; this post pointed me to hugeadm. I'm still trying to figure out how to disable the transparent_hugepage defrag; hugeadm doesn't seem to have an easy way to do that.
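Since hugeadm only appears to cover the enabled flag, one fallback is to write the defrag setting directly through the same sysfs interface used above (a sketch; not persistent across reboots):
sudo sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/defrag'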

PHP-FPM and capistrano, "No input file specified"

I use Capistrano to deploy new versions of a website to servers running nginx and PHP-FPM, and sometimes PHP-FPM seems to get a bit confused after deployment and expects old files to exist, generating the "No input file specified" error. I thought it could have had something to do with APC, which I uninstalled, but I realize the process doesn't get as far as checking anything with APC.
Is there a permission-friendly way to tell PHP-FPM after deployment that it needs to flush its memory (or similar)? I don't think I want to do sudo restarts.
rlimit_files isn't set in php-fpm.conf, and ulimit -n is 250000.
Nginx has its own rather aggressive file cache. It's worse when NFS is involved, since that has its own cache as well. Tell Capistrano to restart nginx after deployment.
It can also be an issue with your configuration, as Mohammad suggests, but then a restart wouldn't fix the issue, so you can tell the two apart.
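A minimal sketch of such a post-deploy step, assuming a default pid-file location (adjust to your install); USR2 tells the PHP-FPM master to gracefully reload its workers, and -s reload does the same for nginx:
# gracefully reload php-fpm workers and nginx after the symlink switch
sudo kill -USR2 "$(cat /var/run/php-fpm.pid)"
sudo nginx -s reload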
