SSD mount configuration for Varnish file storage

I have Varnish 5.1.1 on CentOS 6.5 and want to use a fresh SSD for file storage (my 64 GB of RAM fills up quickly, as I have a lot of objects to cache).
As recommended in the Varnish documentation, I have mounted a tmpfs partition for the working directory:
"The shmlog is usually located in /var/lib/varnish, and you can feel free to remove any data found in this directory if needed. Varnish suggests making sure this location is mounted on tmpfs, using /etc/fstab so that it gets mounted again on server reboot/start."
I have a dedicated 256 GB SSD drive for cache storage.
Do I have to mount it as tmpfs with noatime, like the working directory?
I did not find any suggestions on how to configure an SSD for Varnish's needs.

No, you do not have to mount anything special if you're just going to use your SSD. tmpfs is specifically a RAM-backed filesystem, so it does not apply to a physical SSD; and if you're not going to take advantage of the superior speed of RAM over SSD for the working directory, leaving /var/lib/varnish as it is on the default install is good enough.

/var/lib/varnish is used for logging, and Varnish logs a lot of data. Since it uses a circular buffer, size isn't an issue; the constant I/O, however, will wear your disks down.
TL;DR: always mount the working directory as tmpfs.
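A sketch of how this can fit together, assuming the SSD shows up as /dev/sdb1 and is mounted at /mnt/ssd-cache (both names hypothetical): the tmpfs line covers only the working directory, while Varnish's file storage backend points at the SSD.

```shell
# /etc/fstab entries (hypothetical device name /dev/sdb1 for the SSD):
#   tmpfs      /var/lib/varnish  tmpfs  defaults,noatime,size=512m  0 0
#   /dev/sdb1  /mnt/ssd-cache    ext4   defaults,noatime            0 0

# Point Varnish's file storage backend at the SSD mount:
varnishd -a :80 -f /etc/varnish/default.vcl \
         -s file,/mnt/ssd-cache/varnish_storage.bin,200G
```

noatime on the SSD mount is optional but avoids needless metadata writes; the 200G figure just leaves some headroom on a 256 GB drive.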

Related

NFS mount point as a disk device linux

A remote NAS server provides an NFS share (/myShare) to a Linux client machine.
From the Linux client, I mounted the NFS share (e.g. at /mnt/myShare).
My question is: is it possible to turn this /mnt/myShare into a disk device (e.g. /dev/mydevice)?
I would like to hand this disk to a container as if it were a physical disk, for storing its data.
Can device mapper be of help here? Any leads would be appreciated.
--kk
Is it possible to turn this /mnt/myShare into a disk device (e.g. /dev/mydevice)?
The answer is yes and no. Yes, because you can mount anything anywhere, i.e. you can:
mount -t nfs nas:/myShare /dev/mydevice
(provided that the directory /dev/mydevice exists).
No, because a disk is a file under /dev which basically exposes a set of sectors (or blocks); other OS components use that to present a filesystem, which is then mounted somewhere else.
You, instead, already have something that represents a filesystem. You can mount that filesystem wherever you want; 99% of your OS and your programs will not care.
But your share is not a disk, because it is a directory (part of a filesystem) exported by another machine, and this difference cannot be circumvented. I think you can live with this without problems but, if your question is meant literally, then no: an exported share is not a disk.
If you want to use a raw drive, you don't need a filesystem at all. Perhaps your NAS server can be configured to export its storage as an iSCSI target instead.
NFS itself doesn't expose storage as a block device.
But what you can do is the following:
Mount /myShare onto /mnt/myShare.
Create a file the size of all of myShare. For example, if myShare is 3 TB in size, run truncate -s 3T /mnt/myShare/loop.img. (At this point, if you wanted a filesystem, you could also run mkfs -t ext4 /mnt/myShare/loop.img.)
Set up the file as a loop device: sudo losetup /dev/loop7 /mnt/myShare/loop.img
You now have a 3TB block device on /dev/loop7, which you can see in the output of grep loop7 /proc/partitions.
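The steps above can be rehearsed locally without an NFS server (and, except for the final losetup, without root) by using a scratch directory as a stand-in for the mounted share; all paths below are hypothetical.

```shell
# Stand-in directory for the mounted share; in reality this would be:
#   sudo mount -t nfs nas:/myShare /mnt/myShare
share=/tmp/myShare
mkdir -p "$share"

# Sparse backing file: takes no real space until blocks are written.
truncate -s 1G "$share/loop.img"

# Optionally put a filesystem straight into the file:
#   mkfs -t ext4 -F "$share/loop.img"
# Then (root required) expose the file as a block device:
#   sudo losetup /dev/loop7 "$share/loop.img"
#   grep loop7 /proc/partitions

stat -c '%s bytes' "$share/loop.img"
```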

How to install the UBIFS filesystem for an SSD flash drive?

I have an SSD flash drive, and I want to set up a new machine to use the UBIFS filesystem. How can I do that? Ubuntu Desktop 17.0 doesn't offer this filesystem as an option.
And another question: with a new filesystem, could my machine run faster than with ext2, FAT and similar filesystems? After all, those are not designed or optimized for flash drives.
Thank you~
After some research I found the answer. SSDs are FTL (Flash Translation Layer) devices: their controller takes the raw flash and emulates an ordinary block device on top of it. So my SSD can't use UBIFS as its filesystem; UBIFS only supports raw flash.
This link explains it simply: https://digitalcerebrum.wordpress.com/random-tech-info/flash-memory/raw-flash-vs-ftl-devices/
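A quick way to check this on any Linux box: UBIFS sits on the MTD (raw flash) subsystem, so if /proc/mtd lists no devices, there is nothing for UBIFS to run on. The exact wording of the two messages below is of course arbitrary.

```shell
# UBIFS needs raw flash exposed through the MTD subsystem (/dev/mtdX).
# An SSD hides its flash behind an FTL and appears as a plain block
# device (/dev/sdX), so this check comes up empty on SSD-only machines.
if [ -r /proc/mtd ] && [ "$(grep -c '^mtd' /proc/mtd)" -gt 0 ]; then
    result="raw flash present: UBIFS is an option"
else
    result="no raw flash devices: UBIFS is not applicable"
fi
echo "$result"
```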

What are some ways to go about running a Vagrant VM in RAM?

I have a development environment running in a Vagrant VM (VirtualBox). Considering I have 11 GB of spare RAM, I thought I could run the VM completely in RAM.
Would anyone know of an approach to this, and would I gain much performance from it?
If you have that much memory available, you'll most probably have the image cached in the host OS cache anyway, so you don't need to worry about it.
I've tried putting image files onto a ramdisk on my MacBook and didn't see any improvement in a 5-minute run (most of which was apt-get installing stuff).
Traditionally, VirtualBox has opened disk image files as normal files, which results in them being cached by the host operating system like any other file. The main advantage of this is speed: when the guest OS writes to disk and the host OS cache uses delayed writing, the write operation can be reported as completed to the guest OS quickly while the host OS can perform the operation asynchronously. Also, when you start a VM a second time and have enough memory available for the OS to use for caching, large parts of the virtual disk may be in system memory, and the VM can access the data much faster. (VirtualBox manual, 5.7 "Host I/O Caching")
Also, the benefit you'll see depends greatly on the workload you run there: if it is dominated by CPU or network, tinkering with the storage won't help you a lot.
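The host-cache effect described above is easy to see for yourself with a scratch file standing in for a VM image (the path and size are arbitrary): the second read is served from the page cache and is typically much faster than the first.

```shell
# Rough illustration: after the first read, the file lives in the host
# page cache, so the second read comes from RAM instead of the disk.
dd if=/dev/zero of=/tmp/fake-vm.img bs=1M count=256 status=none
sync
time cat /tmp/fake-vm.img > /dev/null   # first read: may hit the disk
time cat /tmp/fake-vm.img > /dev/null   # second read: page cache
rm /tmp/fake-vm.img
```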

Cloning Linux partitions to external hdd

I want to copy the Linux Ubuntu 14.04 installation on my hard disk to an external hard disk. The purpose is to have the exact same OS bootable from the external HDD on another PC.
The disk drives are listed here; Ubuntu is installed on the 1.0 TB hard disk.
Partition 1 is NTFS, created and used by Windows; this is Partition 2, and this is Partition 3: http://imgur.com/PY0tujU
The external hard disk is here: http://imgur.com/51mVrO2
How can I make an exact bootable copy of Linux on my external hard disk (using disk dump)?
To achieve this, you should use cloning software; that makes things much easier. For example, when you use AOMEI Backupper to clone the OS, it will automatically include your system partition and the system reserved partition (together they make up a complete OS package), copying everything bit by bit.
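Since the question explicitly mentions disk dump: a raw dd copy of the whole disk also yields a bootable clone, provided the external disk is at least as large as the source. The device names in the comment are hypothetical and the command is destructive, so verify them with lsblk first; the runnable part below demonstrates the same bit-for-bit semantics on harmless scratch files.

```shell
# Real-disk version (DESTRUCTIVE; hypothetical names - verify with lsblk):
#   sudo dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=fsync
# This copies the partition table, boot loader and all partitions, so the
# external disk boots exactly like the original.

# Harmless demonstration on files:
dd if=/dev/urandom of=/tmp/src.img bs=1M count=8 status=none
dd if=/tmp/src.img of=/tmp/dst.img bs=1M status=none conv=fsync
cmp -s /tmp/src.img /tmp/dst.img && echo "clone is bit-for-bit identical"
```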

How to disable the filesystem cache on the root device?

I've got an ARM virtual machine running on top of KVM/QEMU, with a file mounted as the root filesystem. The VM doesn't have networking, so NFS mounting the root is out of the question. I am testing a particular transport mechanism for IO, so I'm kind of stuck with what I've got.
I want to send files into the guest, so I'd like to mount the file on the host, write things to it, and then unmount it to force a flush. The contents of the filesystem are trivial, and I have a backup, so I have no problem with corruption. Likewise, performance is not an issue.
The problem is that when I do this mount-write-unmount procedure, the guest never sees the file: when I run ls in the guest, the file isn't there. I'm guessing this is a result of the guest kernel's filesystem cache: the filesystem metadata is cached in memory, so the updates to the on-disk filesystem never appear.
I'm guessing that if I disable filesystem caching, all reads will be forced to disk, the backing file will actually be hit, and my file will appear. Any tips?
I can think of this, run inside the guest to drop its cached data and metadata:
sync
echo 3 > /proc/sys/vm/drop_caches
And this, on the host, to make QEMU open the image without host-side caching:
qemu -drive cache=none,file=file.img
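Putting the pieces together, a sketch of one full host-to-guest cycle (image and file names are hypothetical; the loop mount needs root, so this is an outline rather than something to run as-is):

```shell
# Host side: write a file into the guest's root image, then make sure
# it actually reaches the image file before the guest looks for it.
sudo mount -o loop rootfs.img /mnt/guestroot
sudo cp payload.bin /mnt/guestroot/root/
sudo umount /mnt/guestroot   # umount writes dirty pages back to rootfs.img
sync                         # flush anything still pending on the host

# Guest side: force a re-read from the (virtual) disk.
#   sync
#   echo 3 > /proc/sys/vm/drop_caches

# Or sidestep host-side caching entirely when launching the VM:
#   qemu-system-arm -drive file=rootfs.img,format=raw,cache=none ...
```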
