Linux disk partitioning and Nginx

In the Linux Bible book, I read that it can be useful to install Linux across different partitions. For example, separating /var would help prevent an attacker from filling the hard drive and stopping the OS (since the web pages are in /var/www/), while letting the application in /usr (nginx, for example) keep running. How can we do this?
I'm sorry for the question, because I'm new to Linux. The first time I tried to open another partition (the D: drive from Windows), it asked me to mount it first (I had made a shortcut to a document on D:, and the shortcut didn't work until I mounted the partition). So does it make sense to create five partitions (/boot, /usr, /var, /home, /tmp) to run the OS?
Do web hosts use the same strategy?

Even if you divide the disk into partitions, an attacker can still fill the logs and make the web service unstable. Those logs are usually located in /var/log by default; some distros even keep the log folder under /etc/webserver/log. There are also upload-related flaws where PHP upload features fill up the space in /tmp.
Partitioning will not protect you by itself. You have to look at security from another perspective.
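For reference, if you still want separate partitions, a rough sketch of what /etc/fstab might look like with /boot, /usr, /var, /home, and /tmp each on their own partition is shown below (the device names and mount options are placeholders, not something from the question):

# example /etc/fstab layout with separate partitions (device names are placeholders)
/dev/sda1  /      ext4  defaults               0 1
/dev/sda2  /boot  ext4  defaults               0 2
/dev/sda3  /usr   ext4  defaults               0 2
/dev/sda4  /var   ext4  defaults               0 2
/dev/sda5  /home  ext4  defaults               0 2
/dev/sda6  /tmp   ext4  defaults,noexec,nosuid 0 2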

Related

How to "safely" allow others to work on my server?

I sometimes have a need to pay someone to perform some programming which exceeds my expertise. And sometimes that someone is someone I might not know.
My current need is to configure Apache, which happens to be running on CentOS.
Giving root access via SSH on my main physical server is not an option.
What are my options?
One thought is to create a VPS (guest as Linux) on my main physical server (host OS also Linux) using VirtualBox (or equivalent), have them do the work, figure out what they did, and manually implement the changes myself.
Does that seem secure? Are there better options? Thank you.
I suggest looking into the chroot command.
chroot() changes the root directory of the calling process to that specified in path. This directory will be used for pathnames beginning with /. The root directory is inherited by all children of the calling process.
The implication of this is that, once inside a chroot "jail", a user cannot see "outside" of the jail: you've changed their root directory. You can include custom binaries, or none at all (I don't see why you'd want that, but the point is that YOU decide what the developer can and can't see).
You can use a directory for the chroot, or you could use my personal favorite: a mounted file, so your "jail" is easily portable.
Unfortunately I am a Debian user, and I would use debootstrap to build a minimal system into a small file (say, 5 GB), but there doesn't seem to be an official RPM equivalent. However, the process is fairly simple. Create a file; I would do so with dd if=/dev/zero of=jailFile bs=1M count=5120. Then we can run mkfs.ext4 jailFile. Finally, we must mount it and include any files we wish the jailed user to use (this is what debootstrap does: it downloads all the default goodies into /bin and such), either manually or with a tool.
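Put together, the sequence might look roughly like this (a sketch only; the file name, size, and mount point are arbitrary examples):

# create a 5 GB file to hold the jail (name and size are just examples)
dd if=/dev/zero of=jailFile bs=1M count=5120
# put a filesystem on it
mkfs.ext4 jailFile
# mount it via a loop device and populate it with whatever the jailed user needs
mkdir -p /mnt/jail
mount -o loop jailFile /mnt/jail
# ... copy in binaries, libraries, etc. ...
# then enter the jail
chroot /mnt/jail /bin/bash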
After these steps you can copy this file around, make backups, or move servers even. All with little to no effort on the user side.
From a short Google search there appears to be a third-party tool that does nearly the same thing as debootstrap, here. If you are comfortable compiling this tool, can build a minimal system manually, or can find an alternative, and the idea of a portable ext4 jail appeals to you, I suggest this approach.
If the idea is unappealing, you can always chroot a directory, which is very simple.
Here are some great links on chroot:
https://wiki.archlinux.org/index.php/Change_root
https://wiki.debian.org/chroot
http://www.unixwiz.net/techtips/chroot-practices.html
Also, here and here are great links about using chroot with OpenSSH Server.
On a side note: I do not think the question was off topic, but if you feel the answers here are inadequate, you can always ask on https://serverfault.com/ as well!
Controlling permissions is some of the magic at the core of the Linux world.
You... could add the individual as a non-root user, and then work towards providing specific access to the files you would like him to work on.
Doing this requires a fair amount of 'nixing to get right.
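As a rough sketch of that route (the user name, group, and config path below are made-up examples, not anything from the question):

# create a non-root account for the contractor (name is just an example)
useradd -m contractor
passwd contractor
# give that account write access only to the Apache config directory,
# here via a dedicated group (ACLs with setfacl are another option)
groupadd webconf
usermod -aG webconf contractor
chgrp -R webconf /etc/httpd/conf.d
chmod -R g+rw /etc/httpd/conf.d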
Of course, this is one route... If the user is editing something like an Apache configuration file, why not set up the file within a private Bitbucket or GitHub repository?
This way, you can see the changes that are made, confirm they are suitable, then pull them into production at your leisure.

Changing the order in which directories are searched for programs in Linux

I was recently in a situation where the Software Center on my Ubuntu installation was not starting. When I tried to launch it from the console, I found that Python was unable to find Gtk, although I hadn't removed it.
from gi.repository import Gtk,Gobject
ImportError: cannot import name Gtk
I came across a closely related question on Stack Overflow (I am unable to provide a link to the question as of now). The accepted solution (which also worked for me) was to remove the duplicate installation of GTK from /usr/local, as Gobject was present in this directory but not Gtk.
So I removed it, launched software-center again, and it started.
While I am happy that the problem is solved, I would like to know if removing files from /usr/local can cause severe problems.
Also, echo $PATH on my console gives:
/home/rahul/.local/bin:/home/rahul/.local/bin:/home/rahul/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/rahul/.local/bin:/home/rahul/.local/bin
which shows that /usr/local/bin is searched before /usr/bin. Should $PATH be modified so that the order of lookup is reversed? If yes, then how?
You actually don't want to change that.
/usr/local is a path that, according to the Linux Filesystem Hierarchy Standard, is dedicated to data specific to this host. Let's go a bit back in time: when computers were expensive, to use one you had to go to some lab, where many identical UNIX(-like) workstations were found. Since disk space was also expensive, and the machines were all identical and had more or less the same purpose (think of a university), they had /usr mounted from a remote file server (most likely through the NFS protocol), which was the only machine with big enough disks, holding all the applications that could be used from a workstation. This also allowed for ease of administration: adding a new application or upgrading another to a newer version could be done just once on the file server, and all machines would "see" the change instantly. This is why this scheme persisted even as bigger disks became inexpensive.
Now imagine that, for some reason, a single workstation needed a different version of an application, or maybe a new application was bought with only a few licenses and could thus be run only on selected machines: how do you handle this situation? This is why /usr/local was born, so that single machines could somehow override network-wide data with local data. For this to work, of course, /usr/local must point to a local partition, and things in that path must come before things in /usr in all search paths.
Nowadays, UNIX-like Linux machines are very often stand-alone machines, so you might think this scheme no longer makes sense, but you would be wrong: modern Linux distributions have package management systems which, more or less, play the role of the above-mentioned central file server. What if you need a different version of an application than what is available in the Ubuntu repository? You could install it manually, but if you put it in /usr, it could be overwritten by an update performed by the package management system. So you could just put it in /usr/local, as this path is usually guaranteed not to be altered in any way by the package management system. Again, it should be clear that in this case you want anything in /usr/local to be found before anything in /usr.
Hope you get the idea :).
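As a side note, if you ever want to check which copy of a program the shell will actually pick up given your current $PATH, something like this works (python is just an example command):

# list every match for "python" in PATH order; the first one listed is the one that runs
type -a python
# or, with GNU which:
which -a python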

NodeJS reading/writing files to network drive

I have a script that writes files to disk using fs.createWriteStream.
What I am trying to achieve now is write those files to a shared network drive.
With a directory like so - //hostname/scratch/reece
I am running this script on Windows, but this application will sit on Ubuntu/RHEL when I deploy it.
This is a crucial part of this script so any suggestions on how I can write to a network drive would be great.
The same would go for reading from a network drive and sending that back via HTTP.
Keeping in mind there would likely be hundreds of thousands of requests to write to this drive through my nodejs api, so I would like to avoid relying on background processes to handle the file transfer.
Any ideas on approach?
You will have to connect the drive to your server using a technology appropriate for that particular OS (it may be different on Ubuntu vs. Windows). You can then address that drive through whatever mount mechanism the OS uses.
In Windows, you can use either a drive letter or a UNC path. On Ubuntu, perhaps a mounted network file system volume.
This is one case where you aren't likely to make the exact same setup work on Windows vs. Ubuntu. If you put the appropriate root path name into a config file, the rest of your code can probably be identical. Beyond this, it isn't clear what you're asking.
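On the Linux side, for example, a share like //hostname/scratch could be mounted with CIFS and then used as an ordinary path from Node (the package manager commands, mount point, and credentials below are placeholders, assuming an Ubuntu host and a Windows/SMB share):

# install CIFS support (Ubuntu)
sudo apt-get install cifs-utils
# mount the share at a local path (username and password are placeholders)
sudo mkdir -p /mnt/scratch
sudo mount -t cifs //hostname/scratch /mnt/scratch -o username=youruser,password=secret
# the Node app can then write to /mnt/scratch/reece like any local directory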

Subversion (svn) repository on NTFS partition in Linux?

Can I create and use an svn repository on an NTFS partition when working with svn in Linux? That is, repository on the NTFS partition and checkouts and commits to and from an EXT4 partition.
I realize that NTFS support in Linux is limited and does not support permissions and symbolic links for example. Would that, or any other limitations, cause any issues?
The reason I am asking is because I am thinking about either 1) moving my repository to my Dropbox folder (which resides on an NTFS partition) or 2) moving my repository to a memory stick (which could potentially be NTFS partitioned).
My use case is very simple. I am the only person using the repository. Currently my repository resides on EXT4 and I either access it from the same machine the repository is located on, or from a second machine through svn+ssh://. However, if I went with one of the options above, the access strategy would obviously change.
I would be hesitant to do this because, as you stated, NTFS partitions don't support Unix style permissions.
The Subversion repository directory is usually owned by, and can only be written to by, the user who runs whatever Subversion server process is in use. For example, if you're using Apache httpd and your Apache user is called httpd, the user who owns the repository is httpd, and this is the only user with write permissions on the files and directories.
An NTFS partition on a Windows box would have permissions set correctly, because the Subversion server process would use Windows permission settings. A Linux server will have problems.
Also, NTFS partitions are case-preserving but not case-sensitive; I don't know how this would affect a Subversion server process running on a Linux box. Again, a Windows Subversion server process would be fine with this; a Linux server may have problems.
Unfortunately, I can't say for certain one way or another. I've never tried it, nor seen it done. However, there is a post on the Wandisco Forum that covers this very scenario. The user was able to get around his problems, but I would be hesitant to say that all is beer and candy from then on.
Please say you're not doing this so you can share a file:// protocol Subversion repository among multiple users. This is a big, fat no-no. Instead, you should at least run the svnserve process and have users access your repository via the svn:// protocol. It's very simple to set up svnserve -- even as a Windows service. The only problem may be that port 3690 (the Subversion server port) is being blocked by your firewall or router.
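For what it's worth, running svnserve and switching to svn:// access can be as simple as the following sketch (the repository root, repository name, and hostname are placeholders):

# run svnserve as a daemon, serving repositories under /var/svn (path is an example)
svnserve -d -r /var/svn
# clients then access a repository over the svn:// protocol (default port 3690)
svn checkout svn://yourhost/myrepo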
Dropbox multiboot NTFS folder sync
In an earlier thread, closed by vanadium, people were wanting a solution to sync Dropbox on multi-boot systems from one NTFS directory. Vanadium had a good suggestion that I tweaked a little bit to solve it.
You must install it in Windows (or the other system) and set up the Dropbox folder from Dropbox.
Reboot into the Linux system (I used Ubuntu 18).
Install Dropbox to the ext4 partition.
Open a file manager in the Home folder and delete the Dropbox directory. Leave this file manager open.
Open a new file manager in the NTFS (or other) directory that the other OS's Dropbox folder is in.
Hit Ctrl + H, then drag the Dropbox folder to the directory you deleted it from. (This creates a symbolic link to the Dropbox folder you want; a command-line equivalent is sketched after these steps.)
Now sync Dropbox in Linux.
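The drag-and-drop step above just creates a symlink; from a terminal, the equivalent would be something like this (the mount point matches the fstab example further below and is only a placeholder):

# replace the deleted ~/Dropbox with a symlink to the Dropbox folder on the NTFS partition
ln -s /media/baraldi/win_www/Dropbox ~/Dropbox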
If you want Dropbox to load at startup, you must set the partition to auto-mount at startup. From a terminal:
1 - Write down the UUID of the drive that you want to mount by executing the following command:
sudo blkid
2 - Then edit the fstab:
sudo gedit /etc/fstab
3 - Add at the end of the fstab file:
UUID=D638F77338F7514B /media/baraldi/win_www ntfs defaults 0 0
Be sure the UUID matches what you recorded in the first step
4 - Restart.
Or Use the "Disks" app.
Load the Disks app (In System) and select the disk with the filesystem you want to mount on startup.
Then select the filesystem on that disk and click on the gears (for configuration).
Select "Edit Mount Options" from the popup menu.
On the setup options, click to check the "Mount on Startup" box. (This will add the entry to fstab when you click on "OK").
Reboot, and your filesystem should be available.
I agree with other comments here regarding manually adding lines to fstab via CLI/text editor. If you take the time to look at your fstab file, it will help you understand what changes have been made and, ultimately, the CLI method will become faster for you.

Migrate data from one server to another

I bought a new server and I want to move all the data (directories, subdirectories, users, passwords, etc.) from my old server to it.
Is there a way to do that?
Thanks,
Do you have physical access to both servers? If so, you can use the dd command to clone the disk from the old server onto the disk that is going into the new server.
In order to do this though, both hard drives have to be installed in one of the servers.
You can also use netcat and dd to clone a disk over a network.
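A rough sketch of the netcat approach (device names, hostname, and port are placeholders; both disks must be unmounted while cloning, and the listen syntax varies slightly between netcat variants):

# on the new server: listen on a port and write the incoming stream to the new disk
nc -l -p 9000 | dd of=/dev/sdb bs=64K
# on the old server: read the old disk and send it over the network
dd if=/dev/sda bs=64K | nc new-server 9000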
For the directories and files, use an FTP client from your server if it allows you to; if not, just download all the content to your computer and upload it to the new server.
For the users and passwords, I guess they are in a database. Connect to the database using SSH, telnet, MySQL Admin, or any RDBMS client and export a dump file; then log in to the new server's SQL system and import that dump file.
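Assuming the data is in MySQL, for example, the dump and restore could look like this (the database name is a placeholder):

# on the old server: dump the database to a file
mysqldump -u root -p mydatabase > mydatabase.sql
# on the new server: create the database and load the dump
mysql -u root -p -e "CREATE DATABASE mydatabase"
mysql -u root -p mydatabase < mydatabase.sql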
In any case, you should give more details about both servers so we can help you. For example, are they shared hosting or dedicated machines? What kind of access do you have to them? Also, knowing their operating systems would help people answer you accurately.
In principle, yes.
If the hardware is similar (i.e. just more RAM and disk space, but the same CPU architecture and no special graphics card drivers), you might be able to copy every file and then install the boot loader once more (the boot loader config usually changes when the hard disk size changes).
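Copying "every file" is typically done with something like rsync over SSH; a sketch, with placeholder hostname and excludes for pseudo-filesystems, run from the new server:

# copy the whole filesystem from the old server, preserving permissions, ACLs,
# extended attributes, and hard links, while skipping runtime pseudo-filesystems
rsync -aAXH --numeric-ids \
  --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/tmp --exclude=/run \
  --exclude=/mnt --exclude=/media --exclude=/lost+found \
  root@old-server:/ /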
Or you can create a list of all services that you use, determine which config files each one uses and then just copy those. Ideally, you shouldn't copy them but compare the old and the new versions and merge them.
The most work-intensive way is to use a tool like Puppet. In a nutshell, Puppet allows you to create install scripts for services (along with all the configuration that you need). So if you need to install a service again (new hardware, second server), you just tell Puppet to do it. On the plus side, your whole installation will be documented, too. If you ever wonder why something is the way it is, you can look into the Puppet files.
Of course, this approach takes a lot of time and discipline, so it might not be worth it in your case. Apply common sense.

Resources