How to change the Rocket Chat Data Directory - ubuntu-server

I have installed Rocket.Chat by following the steps provided here: https://rocket.chat/docs/installation/manual-installation/ubuntu/snaps/
I am running Ubuntu Server 16.04.1.
To store the data (files), I want to make use of this partition:
/dev/sda3 413422648 71880 392327024 1% /media/data/sd1
Is there a configuration file where I can specify the storage path?

You can try to mount the new partition at the location of the data directory
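For the snap install, that could be done with a bind mount. A sketch, assuming the snap keeps its data under /var/snap/rocketchat-server/common and uses the service names below (verify both with `snap services` and the snap documentation before trusting them):

```shell
# Stop Rocket.Chat and its bundled MongoDB first
sudo systemctl stop snap.rocketchat-server.rocketchat-server.service
sudo systemctl stop snap.rocketchat-server.rocketchat-mongo.service

# Copy the existing data onto the big partition, preserving permissions
sudo rsync -a /var/snap/rocketchat-server/common/ /media/data/sd1/rocketchat/

# Bind-mount the new location over the snap's data directory
sudo mount --bind /media/data/sd1/rocketchat /var/snap/rocketchat-server/common

# Make the bind mount survive reboots
echo '/media/data/sd1/rocketchat /var/snap/rocketchat-server/common none bind 0 0' \
    | sudo tee -a /etc/fstab

sudo systemctl start snap.rocketchat-server.rocketchat-mongo.service
sudo systemctl start snap.rocketchat-server.rocketchat-server.service
```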

Related

How do I extend Oracle VM Template Oracle Database 19c /dev/sdb1 storage space

I would like to extend the storage of my Oracle VM template for the partition that holds the /u01/ directory, which houses all components for the database, APEX, and ORDS.
I would like to know the safest way to go about doing this. I read online that I would need to delete the partition using the fdisk command, but I am fearful of this and need some guidance.
If the disk wasn't set up as a logical volume to begin with, then your options are limited. Here's how I would do it (it isn't elegant, but it works):
Add a new disk to the system and format/mount it as /u01new
Stop all Oracle services
Use "cp -pR /u01/* /u01new" to copy everything from the old disk to the new one
Unmount /u01
Unmount /u01new and remount it as /u01
Restart all Oracle services
Once you're sure everything is working ok, drop the original disk
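The steps above might look like this on the command line. The device name /dev/sdc is an example, and this assumes /u01 is its own mount point (if it is just a directory on the root filesystem, move it aside with `mv` instead of unmounting):

```shell
# 1. Format and mount the newly added disk
sudo mkfs.ext4 /dev/sdc
sudo mkdir /u01new
sudo mount /dev/sdc /u01new

# 2-3. With all Oracle services stopped, copy preserving permissions
sudo cp -pR /u01/* /u01new/

# 4-5. Swap the mounts so the new disk serves /u01
sudo umount /u01
sudo umount /u01new
sudo mount /dev/sdc /u01

# 6. Update /etc/fstab so the new disk mounts at /u01 on boot,
#    then restart the Oracle services
```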

Adding SSD data disk to Azure classic VM and moving PSQL database files to it

I use Azure infrastructure to host a Django/PostgreSQL application. The app and the DB are on separate VMs. Each VM runs Ubuntu 14.04, and both are the classic flavor.
I've been using the OS disk for my DB's storage (capacity: 30 GB). This was fine in the early days when my DB was small, but now running out of disk space has become a real danger.
What steps can I take to procure more storage space for my DB VM? And what steps would I then need to execute to move the PostgreSQL DB to this newly procured storage?
I want to avoid downtime and data loss, and being an accidental DBA of sorts, I would love fool-proof steps explained in terms understandable to beginners. Thanks in advance!
Update:
After mounting the disk, the steps entail:
Editing the data_directory in /etc/postgresql/9.3/main/postgresql.conf to point to the new location (e.g. /home/data/postgresql/9.3/main)
Transferring contents of PG's data directory to /home/data via sudo rsync -av /var/lib/postgresql /home/data
Restarting PostgreSQL via sudo /etc/init.d/postgresql restart
Note: change steps accordingly if the additional storage is mounted somewhere other than /home/data
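Once restarted, a quick check can confirm PostgreSQL is actually serving from the new location (paths and the 9.3 version number assume the example above):

```shell
# Ask the running server which data directory it is using
sudo -u postgres psql -c 'SHOW data_directory;'
# should report /home/data/postgresql/9.3/main

# Keep the old directory around (renamed) until you're sure all is well
sudo mv /var/lib/postgresql /var/lib/postgresql.bak
```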
For now, only S-series VMs (such as DS and FS) can use SSD data disks. You can attach an SSD disk to a classic VM in the new Azure Portal. Please refer to these steps:
1. Create a new classic Premium storage account. (If you already have a Premium storage account, you do not need to create one.)
2. Attach the SSD disk in the new Azure Portal.
3. For Type, select Premium (SSD); for Location, select your storage account.
Once the new data disk is attached, you'll need to create a filesystem on it and mount it. Assuming you're using the ext4 filesystem, here's how to proceed:
sudo mkfs.ext4 /dev/sdc
sudo mkdir -p /home/data
sudo mount /dev/sdc /home/data
df -h #to view the attached disk
You can of course mount it at a location other than /home/data as well.
Next, to ensure the data drive gets remounted in the event of a system reboot, do the following:
sudo -i blkid #get uuid for the relevant disk just added
sudo nano /etc/fstab
And then add the following at the end of the file:
UUID=<uuid> /home/data ext4 defaults,nofail 1 2
E.g. it could be
UUID=753a5b1b-4461-74d5-f6e3-27e9ff3b6c56 /home/data ext4 defaults,nofail 1 2
Note that /home/data is the mount point you chose for the disk, so change it as needed.
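You can also verify the new fstab entry immediately instead of waiting for a reboot; a quick sanity check, using the example paths above:

```shell
# Unmount, then let mount -a remount everything listed in /etc/fstab;
# if the UUID line is wrong, this fails now instead of at boot time.
sudo umount /home/data
sudo mount -a
df -h /home/data   # the disk should appear mounted at /home/data again
```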
For a complete reference, go to: https://learn.microsoft.com/en-us/azure/virtual-machines/virtual-machines-linux-classic-attach-disk#initialize-a-new-data-disk-in-linux
First, consider migrating your classic VM to an ARM VM. The migration steps are described here: https://learn.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-ps-migration-classic-resource-manager
Those steps seem long, but they are not especially difficult or painful. Migrating from classic to ARM is optional, but it is strongly recommended, as it gives you more granular and flexible resource management, including security configuration.
Then, (1) add additional disk to your VM and (2) setup mounted disk to your database storage.
(1) Add additional disk to your VM: It does not require downtime; https://learn.microsoft.com/en-us/azure/virtual-machines/virtual-machines-linux-attach-disk-portal?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json
(2) Setting up PostgreSQL on the new disk is outside the scope of an Azure question, so I will omit that part of the answer.

Plex Media Server And encFS

I'm trying to spin up Plex Media Server in Docker, and I want to pass my media in as a volume, but encrypted. The flow:
1. Mount the volume from external storage on the underlying host.
2. Mount the volume into the Docker container as a volume.
3. Mount (decrypt) the data with encFS inside the container.
4. Access the data in Plex.
5. Enjoy your media.
The issue is this:
mount shows:
encfs on /media type fuse.encfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions)
The data is readable at the Unix level, BUT it is not readable by Plex (the directory shows up as an empty folder).
I suspect encFS or Plex itself doesn't support FUSE mounts...
Any ideas? Any flags for mounting? Any way to change the mount type (another "proxy" container would be acceptable)?
I haven't used encFS. However, in case it helps or you're not aware: the default Plex user (usually 'plex') must own the media files or be in a group that is assigned to them. In addition, if Plex shows an empty folder, it may simply be that the folder does not have read, write AND execute permissions set, i.e. chmod 775. (A directory needs the execute bit in order to list its contents, which is why 664 won't work.) I wrote a guide over on Tech-KnowHow for this yesterday, which outlines a few ways to get this done. If you need any help with it, just leave me a comment and I'll see what I can do while it's all still fresh in my mind.
Good luck!
Direct link: https://www.tech-knowhow.com/2016/03/how-to-plex-permissions-linux/
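The ownership and permission fixes described above might look like this; the 'plex' user name and the /media path are assumptions, so adjust them to your setup:

```shell
# Give the plex user ownership of the media tree
sudo chown -R plex:plex /media
# Directories need the execute bit so their contents can be listed
sudo find /media -type d -exec chmod 775 {} +
# Files only need to be readable
sudo find /media -type f -exec chmod 664 {} +
# Verify what the plex user can actually see
sudo -u plex ls -R /media
```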

Can I use GlusterFS volume storage directly without mounting?

I have set up a small GlusterFS cluster with 3+1 nodes.
They're all on the same LAN.
There are 3 servers and 1 laptop (on Wi-Fi) that is also a GlusterFS node.
The laptop often disconnects from the network. ;)
Use case I want to achieve is this:
I want my laptop to automatically synchronize with GlusterFS filesystem when it reconnects. (That's easy and done.)
But when the laptop is disconnected from the cluster, I still want to access the filesystem "offline" and modify, add, and remove files.
Obviously, the only way I can access the GlusterFS filesystem while offline from the cluster is to access the volume storage directly, i.e. the directory I specified when creating the Gluster volume. I guess that's the brick.
Is it safe to modify files inside storage?
Will they be replicated to the cluster when the node re-connects?
There are multiple questions in your list:
First: Can I access GlusterFS when my system is not connected to it?
If you set up a GlusterFS daemon and brick on your laptop, mount this local brick through Gluster as you usually would, and also add a replication target, then you can access your brick through Gluster as if it were not on your local system. The data will be synchronized with the replication target once you reconnect your system to the network.
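A sketch of that setup; the hostnames, volume name, and brick paths are made up for illustration, and a two-brick replica volume like this is prone to split-brain, so consider an arbiter brick in practice:

```shell
# One brick on the laptop, one on a server; Gluster replicates between them
sudo gluster peer probe server1
sudo gluster volume create lapvol replica 2 \
    laptop:/data/glusterfs/lapvol/brick server1:/data/glusterfs/lapvol/brick
sudo gluster volume start lapvol

# Always access the data through a Gluster mount, never via the brick path,
# so that changes are tracked and replicated on reconnect
sudo mount -t glusterfs localhost:/lapvol /mnt/lapvol
```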
Second: Can I edit files in my brick directly?
Technically you can: you can just navigate to your brick and edit a file. However, since Gluster will not know what you changed, the changes will not be replicated and you will create a split-brain situation. So it is certainly not advisable (don't do it unless you are willing to make the same change manually in your replication brick as well).
Tomasz, it is definitely not a good idea to manipulate the backend volume directly. Say you add a new file to the backend volume: GlusterFS is not aware of this change, and the file appears as a spurious file when the parent directory is accessed via the GlusterFS volume. I am not sure GlusterFS is ideal for your use case.

Remove filesystem with overlay changed bytes

I am developing software that uses a set of big files.
I cannot download all of them.
I need to reproduce a timeout error that cannot be reproduced otherwise.
There is a staging host. I mounted one of its remote folders with sshfs, but I cannot launch a local server instance against it, because the server may change these files: it requires write permissions.
With "sshfs -o ro" it fails to start.
Is it possible to save changes locally, so that they overlay the actual bytes of the remote files?
You should be able to use UnionFS or AUFS (or any other union mount filesystem, such as the kernel's OverlayFS) to combine the two folders. You would take the read-only sshfs mount and merge it with a local folder that takes precedence for writes: reads come from the remote filesystem until a file has been written, after which the local copy wins.
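A sketch using the kernel's OverlayFS, one of the union filesystems this family includes; the hostname and paths are examples (note that OverlayFS requires the upper and work directories to live on the same filesystem):

```shell
# Read-only view of the remote files over sshfs
mkdir -p /mnt/remote /tmp/upper /tmp/work /mnt/merged
sshfs -o ro user@stagehost:/data/bigfiles /mnt/remote

# Merge: reads fall through to the sshfs mount, writes land in /tmp/upper
sudo mount -t overlay overlay \
    -o lowerdir=/mnt/remote,upperdir=/tmp/upper,workdir=/tmp/work \
    /mnt/merged

# Point the local server instance at /mnt/merged; the remote files are
# never modified, local changes overlay them
```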