I set up an NFS/autofs-LDAP system connecting 5-6 Ubuntu boxes. All my computers export their drives for storing large files, and these are auto-mounted at
/drives/machine1/drive1
/drives/machine1/drive2
...
Inside each user's home directory, I ask users to set up a symbolic link pointing to one of the dedicated drives for storing large files. For example, for user1:
cd /homes/user1/
ln -s /drives/machine1/drive1/users/user1/workdir .
When a user logs in to any one of my boxes, he/she can use ~/workdir to work on data.
However, when the network is down and a user happens to be using machine1 as his desktop, I wonder whether the link ~/workdir can have a fallback, such as /local_mount/machine1/drive1, which is the original path in fstab and /etc/exports?
If a fallback link were supported, one would still be able to access all one's files without recreating the links.
Do Unix/Linux symbolic links support this feature? Is there any hack to make it possible?
You can set up a symbolic link at the unmounted /drives/machine1/drive1/users/user1/workdir location [aka recursive link] to point to /local_mount/machine1/drive1.
The [only?] problem:
You'll have to have the same /drives/machine1/drive1/users/* structure under the unmounted file system.
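A sketch of that fallback, run on machine1 (the paths follow the question; whether you can create entries under an autofs-managed path while it is unmounted depends on your map configuration):
sudo mkdir -p /drives/machine1/drive1/users/user1
sudo ln -s /local_mount/machine1/drive1/users/user1/workdir /drives/machine1/drive1/users/user1/workdir
When the NFS mount is active it hides this local tree; when it isn't, ~/workdir resolves through the local symlink to the local disk instead.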
I sometimes need to pay someone to perform some programming that exceeds my expertise. And sometimes that someone is someone I might not know.
My current need is to configure Apache, which happens to be running on CentOS.
Giving root access via SSH on my main physical server is not an option.
What are my options?
One thought is to create a VPS (with Linux as the guest) on my main physical server (also running Linux) using VirtualBox (or equivalent), have them do the work, figure out what they did, and manually implement the changes myself.
Does that seem secure? Are there better options? Thank you
I suggest looking into the chroot command.
chroot() changes the root directory of the calling process to that specified in path. This directory will be used for pathnames beginning with /. The root directory is inherited by all children of the calling process.
The implication of this is that once inside a chroot "jail" a user cannot see "outside" the jail. You've changed their root directory. You can include custom binaries, or none at all (I don't see why you'd want that, but the point being YOU decide what the developer can and can't see).
You can use a plain directory for the chroot, or you could use my personal favorite: a mounted file, so your "jail" is easily portable.
Unfortunately I am a Debian user, and I would use
debootstrap to build a minimal system into a small file (say, 5 GB), but there doesn't seem to be an official RPM equivalent. However, the process is fairly simple. Create a file; I would do so with dd if=/dev/zero of=jailFile bs=1M count=5120. Then we can mkfs.ext4 jailFile. Finally, we must mount it and include any files we wish the jailed user to use (this is what debootstrap does: it downloads all the default goodies in /bin and such), either manually or with a tool.
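Put together, a hedged sketch of the file-backed jail (the /mnt/jail mount point is my example; populate the tree with whatever binaries and libraries you decide the developer needs before chrooting in):
dd if=/dev/zero of=jailFile bs=1M count=5120    # create a 5 GB image file
mkfs.ext4 jailFile                              # format it as ext4
sudo mkdir -p /mnt/jail
sudo mount -o loop jailFile /mnt/jail           # loop-mount the image
# ...copy in /bin, /lib and friends for the jailed user, then:
sudo chroot /mnt/jail /bin/bash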
After these steps you can copy this file around, make backups, or move servers even. All with little to no effort on the user side.
From a short Google search there appears to be a third-party tool that does nearly the same thing as debootstrap, here. If you are comfortable compiling this tool, can build a minimal system manually, or can find an alternative, and the idea of a portable ext4 jail is appealing to you, I suggest this approach.
If the idea is unappealing, you can always chroot into a directory, which is very simple.
Here are some great links on chroot:
https://wiki.archlinux.org/index.php/Change_root
https://wiki.debian.org/chroot
http://www.unixwiz.net/techtips/chroot-practices.html
Also, here and here are great links about using chroot with the OpenSSH server.
On a side note: I do not think the question was off topic, but if you feel the answers here are inadequate, you can always ask on https://serverfault.com/ as well!
Controlling permissions is some of the magic at the core of the Linux world.
You... could add the individual as a non-root user, and then work towards providing specific access to the files you would like him to work on.
Doing this requires a fair amount of 'nixing to get right.
Of course, this is one route... If the user is editing something like an Apache configuration file, why not set up the file within a private Bitbucket or GitHub repository?
This way, you can see the changes that are made, confirm they are suitable, then pull them into production at your leisure.
Can I create and use an svn repository on an NTFS partition when working with svn in Linux? That is, repository on the NTFS partition and checkouts and commits to and from an EXT4 partition.
I realize that NTFS support in Linux is limited and does not support permissions or symbolic links, for example. Would that, or any other limitation, cause any issues?
The reason I am asking is because I am thinking about either 1) moving my repository to my Dropbox folder (which resides on an NTFS partition) or 2) moving my repository to a memory stick (which could potentially be NTFS partitioned).
My use case is very simple. I am the only person using the repository. Currently my repository resides on ext4 and I either access it from the same machine the repository is located on, or from a second machine through svn+ssh://. However, if I went with one of the options above, the access strategy would obviously change.
I would be hesitant to do this because, as you stated, NTFS partitions don't support Unix style permissions.
The Subversion repository directory is usually owned and can only be written to by the user who runs whatever Subversion server process is running. For example, if you're using Apache httpd, and your Apache user is called httpd, the user who owns the repository is httpd, and this would be the only user with write permissions on the files and directories.
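In practice that means something like the following (the repository path is an assumption for illustration):
sudo chown -R httpd:httpd /var/svn/repos    # the server user owns everything
sudo chmod -R o-rwx /var/svn/repos          # and nobody else can touch it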
An NTFS partition on a Windows box does have permissions set correctly, because the Subversion server process would use Windows permission settings. A Linux server will have problems.
Also, NTFS partitions are case-preserving but not case-sensitive; I don't know how this would affect a Subversion server process running on a Linux box. Again, a Windows Subversion server process would be fine with this. A Linux server may have problems.
Unfortunately, I can't say for certain one way or another. I've never tried it, nor seen it done. However, there is a post on the Wandisco Forum that covers this very scenario. The user was able to get around his problems, but I would be hesitant to say that all is beer and candy from then on.
Please say you're not doing this so you can share a file:// protocol Subversion repository among multiple users. This is a big, fat no-no. Instead, you should at least run the svnserve process and have users access your repository via the svn:// protocol. It's very simple to set up svnserve -- even as a Windows service. The only problem may be that port 3690 (the Subversion server port) is being blocked by your firewall or router.
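A minimal sketch of that setup (the repository root /var/svn is an assumption):
svnserve -d -r /var/svn               # run svnserve as a daemon serving /var/svn
svn checkout svn://yourhost/myrepo    # clients then reach it over svn:// on port 3690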
Dropbox multiboot NTFS folder sync.
In an earlier thread, closed by vanadium, people were wanting a solution for syncing Dropbox on multiple-boot systems in one NTFS directory. Vanadium had a good suggestion that I tweaked a little bit to solve it.
You must install Dropbox in Windows (or the other system) first and set up the Dropbox folder from Dropbox.
Reboot into the Linux system. (I used Ubuntu 18.)
Install Dropbox to the ext4 partition.
Open a file manager to the Home folder and delete the Dropbox directory. Leave this file manager open.
Open a new file manager to the main NTFS (or other) directory that the other OS's Dropbox folder is in.
Hit Ctrl + H, then drag the Dropbox folder to the directory you deleted it from. (This creates a symbolic link shortcut to the Dropbox folder you want.)
Now sync Dropbox in Linux.
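If you prefer the terminal over dragging folders around, the equivalent of the delete-and-drag steps above is roughly this (the NTFS mount point is an assumption; use whatever path your file manager shows):
rm -r ~/Dropbox                                # remove the freshly created local folder
ln -s /media/$USER/windows/Dropbox ~/Dropbox   # replace it with a link to the NTFS copy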
If you want Dropbox to load at startup, you must set the partition to auto-mount on startup. In the terminal:
1 - Write down the UUID of the drive that you want to mount by executing the following command:
sudo blkid
2 - Then edit the fstab:
sudo gedit /etc/fstab
3 - Add at the end of the fstab file:
UUID=D638F77338F7514B /media/baraldi/win_www ntfs defaults 0 0
Be sure the UUID matches what you recorded in the first step.
4 - Restart
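If you want to check the entry before restarting, you can mount everything from fstab by hand; an error here means the new line needs fixing:
sudo mount -a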
Or use the "Disks" app.
Load the Disks app (In System) and select the disk with the filesystem you want to mount on startup.
Then select the filesystem on that disk and click on the gears (for configuration).
Select "Edit Mount Options" from the popup menu.
On the setup options, click to check the "Mount on Startup" box. (This will add the entry to fstab when you click on "OK").
Reboot, and your filesystem should be available.
I agree with the other comments here regarding manually adding lines to fstab via the CLI/a text editor. If you take the time to look at your fstab file, it will help you understand what changes have been made, and, ultimately, the CLI method will become faster for you.
I have multiple websites on a dedicated server running under Linux/Apache. The sites need to access common data from a directory named 'DATA' under the doc root. I cannot replicate this directory for every site. I would like to put this under a common directory (say /DATA) and provide a symbolic link to this directory from the doc root for each of the sites.
www/DATA -> /DATA
Is there a better way of doing this?
If I put this common directory (/DATA) directly under the Linux root directory, can there be problems from a Linux standpoint, since the directory size can be several gigabytes and the subdirectories under /DATA will need to have write permissions?
Thanks
Use Alias along with the Directory directive. This will allow each site to access the directory via a URL path.
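A sketch of what that could look like in each site's configuration (the paths follow the question; Require all granted is the Apache 2.4 access syntax):
Alias "/DATA" "/DATA"
<Directory "/DATA">
    Require all granted
</Directory>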
I'm not sure what exactly it means that you'll have scripts accessing the directory to provide data. Executing shell scripts to read and produce data is a different story entirely, but you probably want to avoid this if that's what you're doing. Application pages could be included in the data directory and use a relative path to get to the data. Then all sites get the same scripts and data.
I don't know what your data is, but I'd probably opt to put it in a database. Think about how you have to update multiple machines if you have to scale your app. Maybe the data you have is simple and a DB is overkill.
In the Linux Bible, I've found that it is useful to install Linux across different partitions; for example, separating /var is beneficial because an attacker who fills the hard drive cannot stop the OS (since the web pages live in /var/www/), and the application in /usr (nginx, for example) keeps running. How can we do this?
I'm sorry for the question, because I'm new to Linux. The first time I tried to open another partition (the D: drive from Windows), I was asked to mount it first (I had made a shortcut to a document on D:, and the shortcut didn't work until I mounted the partition). So does it make sense to make five partitions (/boot, /usr, /var, /home, /tmp) to run the OS?
Do web hosts use the same strategy?
Even if you divide the partitions, an attacker can fill the logs and make the web service unstable. These are mostly, or by default, located in the /var/log folder; some distros even keep the web server's log folder under /etc/webserver/log.
There are also some upload-related flaws that let PHP upload features fill up the file limit on the /tmp folder.
So partitioning alone will not protect you at all. You must look at security from another perspective.
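That said, to answer the "how": the split is normally done at install time by giving each mount point its own partition, which ends up in /etc/fstab along these lines (a sketch only; the device names are assumptions and will differ on your system):
/dev/sda1  /boot  ext4  defaults  0  2
/dev/sda2  /      ext4  defaults  0  1
/dev/sda3  /usr   ext4  defaults  0  2
/dev/sda4  /var   ext4  defaults  0  2
/dev/sda5  /home  ext4  defaults  0  2
/dev/sda6  /tmp   ext4  defaults  0  2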
I'm writing a program for Linux that stores its data and settings in the home directory (e.g. /home/username/.program-name/stuff.xml). The data can take up 100 MB and more.
I've always wondered what should happen with the data and the settings when the system admin removes the program. Should I then delete these files from every (!) home directory, or should I just leave them alone? Leaving hundreds of MB in the home directories seems quite wasteful...
I don't think you should remove user data, since the program could be installed again in the future, or since the user could choose to move his data to another machine where the program is installed.
Anyway, this kind of stuff is usually handled by some removal script (it can be make uninstall; more often it's an uninstallation script run by your package manager). Different distributors have different policies. Some package managers have an option to specify whether to remove logs, configuration stuff (from /etc) and so on. None touches files in user homes, as far as I know.
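Debian's tools illustrate the convention (the package name is hypothetical): "remove" keeps system-wide configuration, "purge" deletes it, and neither touches anything under /home:
sudo apt remove program-name    # binaries removed, config in /etc kept
sudo apt purge program-name     # config in /etc removed too; ~/.program-name is left alone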
What happens if the home directories are shared between multiple workstations (i.e. NFS-mounted)? If you remove the program from one of those workstations and then go blasting the files out of every home directory, you'll probably really annoy the people who are still using the program on other workstations.