Cassandra moving data_file_directories - cassandra

Regarding the location of the Cassandra-created data files and system files: I need to move the "commitlog_directory", "data_file_directories" and "saved_caches_directory", which are set in the "cassandra.yaml" config file. Everything is currently at the default location "/var/lib/cassandra". The data is only some test data, plus of course the system-generated keyspaces, which are:
dse_perf
dse_system
OpsCenter
system
system_traces
There are also the commitlog and saved_caches.db to move.
I am thinking of moving the keyspace directories with Linux shell commands, but I'm very unsure whether they will somehow become corrupt. There is simply no space on the default drive and we need to move everything to the secondary and tertiary mounted drives.
Right now I'm in the process of moving all the files and resetting the yaml settings.
I have two questions -
Regarding the cassandra.yaml file: are there any other files besides this one that depend on the locations of commitlog_directory, data_file_directories and saved_caches_directory, so that their 'wrong location' would cause a failure once I move all these files? I am also concerned that the files inside the tables themselves (like the db files) hold references to their own location and would cause a failure once they are moved.
If I just change the three settings commitlog_directory, data_file_directories and saved_caches_directory, will DSE/Cassandra actually create all the system keyspaces (system_traces, dse_perf, system, OpsCenter, dse_system), the commitlog and the saved_caches.db, and will any other upstream config files be out of sync with that (same as the first part of question 1)?
It is a very new installation, so reinstalling would not be the end of the world, but I really don't want to because we have Kerberos and all kinds of other stuff on top of this cluster now.
The OS is Ubuntu 14.04 and the DSE version is 4.7.

I just finished doing this. My instances are in AWS EC2 so your process may vary, but in essence:
create a new volume and attach it to the instance; my new device was /dev/xvdg
create a new mount point: sudo mkdir /new_data
format the new volume: sudo mkfs -t ext4 /dev/xvdg
edit /etc/fstab so that your mount will survive reboots, adding this line: /dev/xvdg /new_data ext4 defaults,nofail,nobootwait 0 2
mount the new volume: sudo mount -a
make the new directories: sudo mkdir -p /new_data/lib/cassandra/commitlog
chown the ownership: sudo chown -R cassandra:cassandra /new_data/lib/cassandra
change cassandra.yaml to point to the new dirs
drain the node. If you're moving the data dir, copy over the data from the old location to the new location; if you're moving the commitlog only, just restart Cassandra (see the sketch below).
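For reference, a minimal sketch of the last two steps, assuming the example paths above and the packaged DSE service name (adjust both to your setup):

# cassandra.yaml (excerpt): point the three directories at the new volume
data_file_directories:
    - /new_data/lib/cassandra/data
commitlog_directory: /new_data/lib/cassandra/commitlog
saved_caches_directory: /new_data/lib/cassandra/saved_caches

# flush and stop writes, stop the service, copy the data, fix ownership, start again
nodetool drain
sudo service dse stop                  # "service cassandra stop" on a non-DSE install
sudo rsync -a /var/lib/cassandra/data/ /new_data/lib/cassandra/data/
sudo chown -R cassandra:cassandra /new_data/lib/cassandra
sudo service dse start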

I was able to move all the files and the commitlog as well. I changed the yaml and pointed it at where I wanted everything to go. Remember to run the following command on the new directories afterward:
chown -R cassandra:cassandra <new_cassandra_directory>
And voila! Everything is reading/writing as it should. Cassandra is neato.

Related

Unable to write over an SSHFS mounted folder with SLURM jobs

I have the following problem and I am not sure what is happening. I'll explain briefly.
I work on a cluster with several nodes which are managed via Slurm. All these nodes share the same disk storage (I think it uses NFSv4). My problem is that, since this storage is shared by a lot of users, we have a limited amount of disk space per user.
I use Slurm to launch Python scripts that run some code and save the output to a csv file and a folder.
Since I need more space than I am assigned, what I do is mount a remote folder via sshfs from a machine where I have plenty of disk. Then I configure the Python script to write to that folder via an environment variable named EXPERIMENT_PATH. The example script is the following:
Python script:
import os

root_experiment_dir = os.getenv('EXPERIMENT_PATH')
if root_experiment_dir is None:
    root_experiment_dir = os.path.expanduser("./")
print(root_experiment_dir)

experiment_dir = os.path.join(root_experiment_dir, 'exp_dir')

# create the experiment directory (ignore the error if it already exists)
try:
    os.makedirs(experiment_dir)
except OSError:
    pass

# append to results.csv if it exists, otherwise create it
file_results_dir = os.path.join(root_experiment_dir, 'exp_dir', 'results.csv')
if os.path.isfile(file_results_dir):
    f_results = open(file_results_dir, 'a')
else:
    f_results = open(file_results_dir, 'w')
If I launch this Python script directly, I can see the created folder and file on my remote machine whose folder has been mounted via sshfs. However, if I use sbatch to launch this script via the following bash commands:
export EXPERIMENT_PATH="/tmp/remote_mount_point/"
sbatch -A server -p queue2 --ntasks=1 --cpus-per-task=1 --time=5-0:0:0 --job-name="HOLA" --output='./prueba.txt' ./run_argv.sh "python foo.py"
where run_argv.sh is a simple bash script that takes its arguments from argv and runs them, i.e. the file contains:
#!/bin/bash
$*
then I observe that nothing has been written on my remote machine. I can check the mounted folder in /tmp/remote_mount_point/ and nothing appears there either. Only when I unmount the remote folder with fusermount -u /tmp/remote_mount_point/ can I see that a folder named /tmp/remote_mount_point/ has been created on the machine running the job, with the file inside it, but obviously nothing appears on the remote machine.
In other words, it seems that launching through Slurm bypasses the sshfs-mounted folder and creates a new one on the host machine, which is only visible once the remote folder is unmounted.
Does anyone know why this happens and how to fix it? I emphasize that this only happens if I launch everything through the Slurm manager. Otherwise, everything works.
I should emphasize that all the nodes in the cluster share the same disk space, so I guess the mounted folder is visible from all machines.
Thanks in advance.
I should emphasize that all the nodes in the cluster share the same disk space, so I guess the mounted folder is visible from all machines.
This is not how it works, unfortunately. To put it simply: you could say that mount points inside mount points (here SSHFS inside NFS) are "stored" in memory and not in the "parent" filesystem (here NFS), so the compute nodes have no idea there is an SSHFS mount on the login node.
For your setup to work, you would have to create the SSHFS mount inside your submission script (which can create a whole lot of new problems, for instance regarding authentication, etc.), roughly as sketched below.
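A minimal sketch of what that could look like, assuming key-based SSH authentication from the compute nodes to the remote machine is already set up (the host and remote path are placeholders):

#!/bin/bash
#SBATCH --ntasks=1 --cpus-per-task=1

# mount the remote folder on the compute node itself
MOUNT_POINT=/tmp/remote_mount_point
mkdir -p "$MOUNT_POINT"
sshfs user@remote-host:/path/with/plenty/of/disk "$MOUNT_POINT"

# run the job against the mount, then clean up
export EXPERIMENT_PATH="$MOUNT_POINT"
python foo.py
fusermount -u "$MOUNT_POINT"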
But before you dive into that, you should probably enquire whether the cluster has another filesystem ("scratch", "work", etc.) where you could temporarily store larger data than what the quota allows in your home filesystem.

How to fix problem with zfs mount after upgrade to 12.0-RELEASE?

So I had to upgrade my system from 11.1 to 12.0, and now the system does not boot. It stops with the error "Trying mount root zfs - Error 2 unknown filesystem".
And I no longer have the old kernel that was known-good and worked well.
So how do I fix the mount problem?
I had tried to boot with the old kernel, but after one of the freebsd-update upgrade attempts only the new kernel was left.
Expected: no problems after the upgrade.
Actual: the system cannot boot, failing with "Error 2 - unknown filesystem".
P.S.
I found that the /boot/kernel folder does not contain the opensolaris.ko module.
How can I copy this module to the /boot partition of the system from a LiveCD (the file exists on the LiveCD)?
Assuming you have a FreeBSD USB stick ready... you can import the pool into a live environment and then mount the individual datasets manually.
Assuming "zroot" is your pool name:
# mount -urw /
# zpool import -fR /mnt zroot
# zfs mount zroot/ROOT/default
# zfs mount -a      # in case you want the other datasets mounted as well
# cd /mnt
Now do whatever you want...
You can also roll back to the last working snapshot (if there is one).
In case your system is encrypted, you will need to decrypt it first.
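To address the missing opensolaris.ko specifically, here is a sketch of what the copy could look like from within the live environment, assuming the pool is imported at /mnt as above and the live media ships the matching 12.0 modules:

# cp /boot/kernel/opensolaris.ko /mnt/boot/kernel/
# cp /boot/kernel/zfs.ko /mnt/boot/kernel/      # keep zfs.ko and opensolaris.ko from the same build
# cd /
# zpool export zroot
# reboot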

Tired of creating /run/postgresql and setting read and execute rights after every reboot

I'm running Arch Linux, and I installed PostgreSQL like any other Arch package. I'm running postgres with a local database located in my user directory (postgres -D /home/user/data/). When I do so, I get the error FATAL: could not create lock file "/run/postgresql/.s.PGSQL.5432.lock": No such file or directory. Creating the directory /run/postgresql and giving the postgres user access solves this problem:
$ sudo mkdir /run/postgresql
$ sudo chmod a+w /run/postgresql
However, I'm tired of typing these commands every time I reboot, as /run gets cleared on reboot. I could write a script to execute them, but I feel like I'm doing this the wrong way to begin with. Is there any way I could let postgres create this directory itself, or maybe have it not use /run/postgresql for its lock files in the first place?
Postgres creates the lock file in /run/postgresql by default.
From the manpage:
-k directory
Specifies the directory of the Unix-domain socket on which postgres is
to listen for connections from client applications. The default is
normally /run/postgresql, but can be changed at build time.
Use -k directory to tell postgres to use a different directory.
Run your command as postgres -k /tmp -D /home/user/data/.
Solution 1 (by managing the temporary directory /run/postgresql, /var/run/postgresql)
The directory /run/postgresql is a temporary directory. The path /var/run/postgresql is usually a symbolic link to /run/postgresql.
systemd-tmpfiles is the mechanism for managing such temporary files and directories. It creates temporary directories during boot and sets their owner, group and permissions. It may read configuration files from three different locations; files in /etc/tmpfiles.d override files with the same name in /usr/lib/tmpfiles.d and /run/tmpfiles.d.
We can create the directory /run/postgresql on the fly at boot time using the systemd-tmpfiles mechanism by creating a postgresql configuration file as below:
echo "d /run/postgresql 0755 postgres postgres -" > /usr/lib/tmpfiles.d/postgresql.conf
Solution 2 (by relocating the PostgreSQL lock file location)
Another way to fix the issue is to relocate the PostgreSQL lock file location. We can do so using the query below:
ALTER SYSTEM SET unix_socket_directories='<any-existing-path-with-valid-permissions>, /tmp';
Here we can provide any path for the PostgreSQL lock file that already exists on the system and on which the postgres user has the permissions required to manage lock files.
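Equivalently, if you would rather not use ALTER SYSTEM, the same setting can go directly into postgresql.conf (a sketch; restart postgres afterwards):
# postgresql.conf
unix_socket_directories = '/tmp'    # or '<some-existing-path>, /tmp' as above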

AWS EC2: Moving /var to EBS

I've been banging my head against a wall for the last 5 hours or so.
I have a brand new CentOS 6 installation with Plesk. Once the machine is booted up, I try to move the /var folder to an attached EBS volume (/dev/xvdj):
#copy original /var to /dev/xvdj
mkdir /mnt/new
mount /dev/xvdj /mnt/new
cd /var
cp -Rax * /mnt/new
cd /
mv var var.old
#mount EBS as new /var
umount /dev/xvdj
mkdir /var
mount /dev/xvdj /var
I know that prior to moving /var I'm supposed to boot the instance into runlevel 1 (single user) to prevent anything from reading or writing /var. However, this locks me out of the instance, which I learned the hard way.
I tried to manually stop MySQL, the web server and the mail server, but after I move /var I can't bring these services back up; they just report [FAILED] when I attempt to start them. They also don't write anything into /var/log. At first glance the permissions of the directories inside /var look alright, and the symlinks exist too.
Any ideas?
This is a very common requirement for corporate clients; having a separate partition helps a lot when you need to increase the volume size at any given point in time.
Most people get stuck with an SSH connection problem after doing the partitioning, which is why a more generalized approach to partitioning is useful.
I have written a blog post for this with a detailed step-by-step procedure to perform such an operation on AWS EBS:
Steps to create separate /var partition on AWS EBS volume
Also, if you choose to do the partitioning using LVM, here is one more post with a detailed step-by-step procedure and screenshots:
Create root swap and LVM partition on AWS EBS volume
Hope this helps! :)
The best way to do that is probably offline. Detach your EBS disks from the first instance, attach them to another one, mount them and make the changes, including the fstab on the root EBS volume. Then detach them, attach them to the original instance again, and boot. That is how I would do it.
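A rough sketch of that offline approach, run from the helper instance, assuming the original root volume shows up there as /dev/xvdf1 and the new /var volume as /dev/xvdg (device names and mount options are examples only):

mkdir -p /mnt/oldroot /mnt/newvar
mount /dev/xvdf1 /mnt/oldroot                # root EBS volume of the original instance
mount /dev/xvdg /mnt/newvar                  # the volume that will become /var
rsync -aAX /mnt/oldroot/var/ /mnt/newvar/    # copy the existing /var content
echo '/dev/xvdj /var ext4 defaults,noatime 0 2' >> /mnt/oldroot/etc/fstab   # device name as seen on the original instance
umount /mnt/newvar /mnt/oldroot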

How do I give apache permission to use a directory on an NTFS partition?

I am running Linux (Lubuntu 12.10) on an older machine with a 20GB hard drive. I have a 1TB external hard drive with an NTFS partition on it. On that partition, there is a www directory that holds my web content. It is auto-mounted at startup as /media/t515/NTFS.
I would like to change the apache document directory from /var/www to /media/t515/NTFS/www.
I need to keep the partition as an NTFS partition, because I use the same hard drive on a different machine running WAMP.
I changed the file "default" in /etc/apache2/sites-available to point to the new location, and restarted the server. When I tried to go to localhost, I got the error:
403 Forbidden
You don't have permission to access / on this server.
I then changed the automount options in fstab to include the option "umask=0000", and then to "umask=2200", both to no avail. I still get the same error message.
I can access the NTFS partition with no problem from other applications, and when logged in as any user. But Apache seems to be unable (or unwilling) to access the partition. How do I give apache permission to use a directory on an NTFS partition?
After many, many attempts, here is what worked for me and nothing else did: changing the Apache configuration so that it no longer runs as www-data (the Apache user) but as my own user instead.
It is very simple to do. In my version of Apache, the two lines to be changed are in the /etc/apache2/envvars file (it may be another file in another version):
export APACHE_RUN_USER=www-data
export APACHE_RUN_GROUP=www-data
I replaced www-data with my user name (here toto :)):
export APACHE_RUN_USER=toto
export APACHE_RUN_GROUP=toto
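After editing envvars, Apache needs to be restarted for the change to take effect; on the Ubuntu/Debian packaging used here that would be something like:
sudo service apache2 restart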
In my experience I've always had to remount the drive with RW permissions. I found this:
sudo mount -t ntfs -o rw,auto,user,fmask=0022,dmask=0000 /dev/whatever /mnt/whatever
or:
For NTFS partitions, use the permissions option in fstab.
First unmount the ntfs partition.
Then edit /etc/fstab
Graphical gksu gedit /etc/fstab
Command line sudo -e /etc/fstab
Identify your partition UUID with blkid
sudo blkid
And add or edit a line for the ntfs partition
# change the "UUID" to your partition UUID
UUID=12102C02102CEB83 /media/windows ntfs-3g auto,users,permissions 0 0
Make a mount point (if needed)
sudo mkdir /media/windows
Now mount the partition
mount /media/windows
Of the options I gave you, auto will automatically mount the partition
when you boot, and users allows users to mount and unmount it.
You can then use chown and chmod on the ntfs partition.
Both found here: https://askubuntu.com/questions/11840/how-to-chmod-on-an-ntfs-or-fat32-partition
None of the answers above solve the issue; in fact, the problem is related to Apache itself, not to the filesystem or permissions.
The only thing you need to do is add:
<Directory "/www/mywebdirectoryinapartitioneddisk">
Require all granted
</Directory>
This will solve the issue.
Here is the post on my blog explaining everything in detail; it should also work on NTFS:
http://www.tbogard.com/2014/09/12/making-apache-server-to-read-a-partitioned-disk-the-definitive-solution/
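For context, a sketch of where that directive would typically live, assuming Apache 2.4 and the paths from the question (on Apache 2.2 you would use the older Order/Allow directives instead):

# site configuration excerpt (e.g. under /etc/apache2/sites-available/)
DocumentRoot /media/t515/NTFS/www
<Directory "/media/t515/NTFS/www">
    Options Indexes FollowSymLinks
    Require all granted
</Directory>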
It's actually quite simple:
1) Create a local user on the Windows host
2) Grant appropriate NTFS permissions to that user
3) Verify access (Windows only)
... THEN ...
4) Configure your NTFS mount on Linux to use the same Windows user and group (Linux user/group is irrelevant here)
5) Configure Apache to use that Linux group (Linux user/group is essential here)
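One way to realize steps 4 and 5 on the Linux side is sketched below, using ntfs-3g's uid/gid mount options rather than a full Windows user-mapping file (the UUID is a placeholder, and uid/gid 33 is assumed to be www-data, the user and group Apache runs as on Debian/Ubuntu; check with id www-data):

# /etc/fstab excerpt
UUID=XXXXXXXXXXXXXXXX /media/t515/NTFS ntfs-3g uid=33,gid=33,dmask=0027,fmask=0137 0 0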
