Sharing a folder between Linux machines to use it as a database directory - linux

I need to refresh the database with new dump files, but unfortunately that server machine doesn't have enough space. So I'm now trying to import the same dump files, which are already present on another machine (same network). Both machines run the same Linux OS, same version.
Now I'm planning to share the source dump folder and create a new directory in the destination database which will point to the network folder. But I'm not sure how to share a folder in Linux.
Any suggestion will be appreciated.

You probably want to share the directory with NFS. Here is a basic outline of the process.
On the server (where the files are):
yum -y install nfs-utils nfs-utils-lib   # your pkg manager may vary
vi /etc/exports
# add a line like below
/directory/I/am/sharing *(ro,sync)   # can replace * with an IP addr
service rpcbind start
service nfs start
chkconfig --levels 235 rpcbind on   # so they auto-start at boot
chkconfig --levels 235 nfs on
(open your firewall, if needed!)
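For example, with iptables the server-side openings might look like the lines below. This is only a sketch: at minimum rpcbind (port 111) and nfs (port 2049) must be reachable, and NFSv3's mountd picks a dynamic port unless you pin it (e.g. via /etc/sysconfig/nfs on RHEL-family systems).
iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT   # rpcbind + nfs over TCP
iptables -A INPUT -p udp -m multiport --dports 111,2049 -j ACCEPT   # same over UDP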
On the client (who wants to see the files):
yum -y install nfs-utils nfs-utils-lib
mkdir -p /the/mount/point // you choose the name
mount name.of.your.server:/directory/I/am/sharing /the/mount/point
(to make the mount happen at boot, add a line like this to /etc/fstab):
name.of.your.server:/directory/I/am/sharing /the/mount/point nfs ro 0 0
Notes:
* You may need portmap in place of rpcbind
* ro means read-only, I assumed you wanted 1-way sharing. You may want rw
* There are more detailed instructions all over the 'net -- google them
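Before mounting, you can sanity-check from the client that the export is actually visible (showmount ships with nfs-utils; the output shown is illustrative):
showmount -e name.of.your.server
# Export list for name.of.your.server:
# /directory/I/am/sharing *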

You don't have permission to access / on this server ubuntu 14.04

Agenda: to have a common project folder between Linux and Windows
I have changed my document root from /var/www/html to /media/mithun/Projects/test on my Ubuntu 14.04 machine.
I get this error:
Forbidden
You don't have permission to access / on this server.
Apache/2.4.7 (Ubuntu) Server at localhost Port 80
So I edited /etc/apache2/sites-available/000-default.conf (via sudo gedit) to change the document root:
# DocumentRoot /var/www/html
DocumentRoot /media/mithun/Projects/test
But a DocumentRoot of /var/www/test works, while the one on the Windows NTFS partition does not.
Even after referring to:
Error message "Forbidden You don't have permission to access / on this server"
Issue with my Ubuntu Apache Conf file. (Forbidden You don't have permission to access / on this server.)
No success :( So kindly assist me with it...
Note: Projects is a new volume (internal drive; in Windows it's the E: drive)
@Lmwangi - please check my updates for your reference below:
Output of : ls /etc/apparmor.d/
abstractions lightdm-guest-session usr.bin.evince usr.sbin.cupsd
cache local usr.bin.firefox usr.sbin.mysqld
disable sbin.dhclient usr.lib.telepathy usr.sbin.rsyslogd
force-complain tunables usr.sbin.cups-browsed usr.sbin.tcpdump
I tried killing apparmor:
sudo /etc/init.d/apparmor kill
I received this output: Usage: /etc/init.d/apparmor
{start|stop|restart|reload|force-reload|status|recache}
After this, I was also able to restart apache successfully.
Maybe the problem is simple: is your new root directory accessible to the www-data user?
Try:
$ chown -R www-data:www-data /media/mithun/Projects
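Since the DocumentRoot sits under /media, it is also worth checking that every directory along the path grants execute (search) permission; namei from util-linux prints the permission bits of each path component (a quick diagnostic, not a fix):
namei -m /media/mithun/Projects/test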
As you have discovered by now, you cannot just manipulate permissions on an NTFS partition (using tools like chmod).
However, you can try forcing a given owner/permissions for the entire partition when you mount it.
The way to do this depends on the NTFS utilities you are actually using (which I don't know, so I'm assuming you are using ntfs-3g).
E.g. mount the partition with the following parameters (replace /dev/sdX with your actual partition, and /path/where/the/drive/is/mounted with your target path):
mount -o gid=www-data /dev/sdX /path/where/the/drive/is/mounted
This should make all the files on the partition belong to the www-data group.
If the filesystem sets the group ownership explicitly, this still might not work.
In this case, you might need to set up a usermap that maps the Windows users/groups (as found on the partition) to your Linux users/groups.
The ntfs-3g.usermap utility will help you generate an initial usermap file, which you can then edit to your needs:
ntfs-3g.usermap /dev/sdX
Then pass the usermap to the mount options:
mount -o usermapping=/path/to/usermap.file /dev/sdX /path/where/the/drive/is/mounted
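To make such a mount permanent, an /etc/fstab entry along these lines should work (a sketch, assuming ntfs-3g; numeric IDs are safest for uid/gid here, and www-data is typically uid/gid 33 on Ubuntu):
/dev/sdX  /media/mithun/Projects  ntfs-3g  defaults,uid=33,gid=33,umask=022  0  0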
I suspect that you have apparmor enforcing rules that prevent Apache from reading non-whitelisted directory paths. I suggest that you
Edit the apparmor config for Apache to access your custom path. You'll need to hunt around /etc/apparmor.d/. You may also find running apparmor in non-enforcing (complain) mode helpful.
$ sudo aa-complain /etc/apparmor.d/*
Use mod_apparmor? See this
Or disable apparmor completely. See this
My order of preference would be 1, 3, 2. That should fix this for you :)
While using Ubuntu alongside Windows I faced the same issue, and it was resolved by remounting the drive with read and write access. The command below will do that:
sudo mount -o remount,rw /disk/location /disk/new_location
If it is still not working, then in Windows go to the power options and disable Fast Startup.
When you shut down a computer with Fast Startup enabled, Windows locks down the Windows hard disk. You won’t be able to access it from other operating systems if you have your computer configured to dual-boot. Even worse, if you boot into another OS and then access or change anything on the hard disk (or partition) that the hibernating Windows installation uses, it can cause corruption. If you’re dual booting, it’s best not to use Fast Startup or Hibernation at all.
Original article: https://www.howtogeek.com/243901/the-pros-and-cons-of-windows-10s-fast-startup-mode/

Configure Raspberry Pi as WIFI Access Point / Hotspot / File Server

So I have a Raspberry PI set up as an access point and I can connect to it as if it was a router to share an internet connection.
just like explained here: http://www.instructables.com/id/How-to-make-a-WiFi-Access-Point-out-of-a-Raspberry/
Now all I want this for is just so I can access files from the RPi and transfer them to other devices.
The question is how can I (after a device connected the RPi via WiFi) access files from the RPi?
You can install any number of server applications to share files, like FTP or HTTP. If you want to share files with computers running Microsoft Windows®, the best bet would probably be Samba. To do this from the command line, try the following steps:
sudo apt-get install samba samba-common-bin
and then after it's installed you need to edit the configuration:
sudo nano /etc/samba/smb.conf
Uncomment the line that says # security = user by removing the # from the beginning of the line. You also need to find where it says read only = yes in the [homes] section and change it to read only = no. Press [CTRL]+X to exit nano and press y to save.
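After those edits, the relevant parts of smb.conf should read roughly like this (abbreviated; your [homes] section will contain other settings too):
security = user
[homes]
   read only = no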
Then restart the SAMBA service with the new configuration:
sudo service samba restart
Finally, you have to add a password for each user. For the default user pi, just enter:
sudo smbpasswd -a pi
Repeat the above command for each user you wish to add.
You should now be able to access your Pi's files from your Windows computer by navigating to it just like any other shared folder: \\raspberrypi\pi or in my case I have to use the IP address because of my network setup \\192.168.0.209\pi

Backup a Lacie 2 Big NAS on a remote linux server

I want to back up my Lacie OS 3.x NAS (4 TB) to a remote server using the native web interface.
The best solution for me would be to use rsync; unfortunately I do not have SSH shell access on the disk.
I tried to back up my device with a "compatible rsync server", but without success:
Going to Backup > New Backup, Network backup, selecting all my shares, Rsync compatible server.
I'm typing working SSH credentials for my Debian backup server (which has rsync 3.0.9) and it doesn't list any rsync destination, so I can't continue the backup schedule.
The web interface also offers a "NetBackup Server" option, but I don't know how I could install that on Debian (I'm not even sure it's the Symantec product).
Also, the NAS provides working SFTP access, but I only want to back up modified files (backing up 4 TB every time is a bit greedy).
Any solution?
With some help, I finally discovered that rsync can be run as a daemon with preconfigured destinations:
On the Debian side, by creating an /etc/rsyncd.conf containing:
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
[documents]
path = /home/juan/Documents
comment = The documents folder of Juan
uid = juan
gid = juan
read only = no
list = yes
auth users = rsyncclient
secrets file = /etc/rsyncd.secrets
hosts allow = 192.168.1.0/255.255.255.0
And in /etc/rsyncd.secrets:
rsyncclient:passWord
user:password
Do not forget
chmod 600 /etc/rsyncd.secrets
And then launch
rsync --daemon
After that, I could finally see the rsync destination when configuring the backup on my NAS.
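From another machine on the allowed subnet you can verify the daemon is serving the module (illustrative; replace 192.168.1.5 with the Debian server's address):
rsync 192.168.1.5::
# should list: documents  The documents folder of Juan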
Source : http://www.jveweb.net/en/archives/2011/01/running-rsync-as-a-daemon.html

Create a Debian imaging server for windows 7

Issue
I have been tasked with creating a Debian imaging server for our company. Unfortunately my knowledge of both Linux and servers is very limited (this is part of an up-skilling program).
Steps
Currently I have tried to follow the below tutorials on creating a PXEBoot server and a ProxyDHCP:
ProxyDHCP: help.ubuntu.com/community/UbuntuLTSP/ProxyDHCP
PXE Boot : https://help.ubuntu.com/community/PXEInstallMultiDistro
PXE Boot : https://wiki.debian.org/PXEBootInstall#Installing_Debian_using_network_booting
Originally I had tried to use a configured DHCP server on the Linux server, which I had gotten working; however, my manager advised that they would prefer the DHCP to come from the router instead.
So I have used apt-get to install the applications below and followed sources to get the configs right. However, it still doesn't seem 100% correct (see Latest).
Task
So currently the task I have been set is per below:
Has to be in Debian
Has to be console based server only (no gui interface)
DHCP has to come from router
Server should deploy windows images
Images taken need to be compacted (all blank space removed)
I can only find Ubuntu guides for these PXEBoot and ProxyDHCP creations, and the problem with this is that the locations they refer to do not always exist in Debian.
So I am stuck with half the options available to me, and because I have a limited knowledge here, I cannot identify where I am going wrong, or if these locations are elsewhere.
Can anyone provide me with a tutorial, or a set of command lines to help?
I would really appreciate this.
Using
I am currently using (on Debian console):
TFTPD-HPA
DNSMASQ
iPXE
SysLinux
Latest
I have been able to get the dnsmasq and tftpd-hpa services "working". That is to say, when I run them they start. However, I still don't seem to be able to boot into an installation with this up and running.
I have another thread on forums.debian.net/viewtopic.php?f=5&t=118315
I have been able to fix my issue using 3 applications and a lot of research.
The applications I have used are DNSMASQ, TFTPD-HPA and SAMBA.
These applications have been configured as per below:
TFTPD-HPA
apt-get install tftpd-hpa
nano /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/tftpboot/"
TFTP_ADDRESS="<server address>:69"
TFTP_OPTIONS="-4 --secure --create"
RUN_DAEMON="yes"
OPTIONS="-l -s /tftpboot"
mkdir /tftpboot
mkdir /tftpboot/pxelinux.cfg
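Once pxelinux.0 is in /tftpboot, you can check the TFTP side from another machine before involving PXE at all (assumes a tftp client, e.g. the one in the tftp-hpa package):
tftp <server address> -c get pxelinux.0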
DNSMASQ
apt-get install dnsmasq
nano /etc/dnsmasq.conf
interface=eth0
port=0
log-dhcp
log-queries
log-facility=/var/log/dnsmasq.log
tftp-root=/tftpboot
dhcp-boot=pxelinux.0,<server name>,<server address>
dhcp-range=192.168.1.10,proxy,255.255.255.0
dhcp-no-override
pxe-prompt="Press F8 for boot menu", 2
pxe-service=X86PC, "comment", pxelinux
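dnsmasq points clients at pxelinux.0, which in turn looks for /tftpboot/pxelinux.cfg/default. As a starting point, a minimal menu that boots the Debian netboot installer might look like this (a sketch; it assumes the Debian netboot archive has been unpacked into /tftpboot and that menu.c32 from syslinux has been copied next to pxelinux.0):
DEFAULT menu.c32
PROMPT 0
TIMEOUT 100
LABEL debian-installer
  MENU LABEL Debian netboot installer
  KERNEL debian-installer/amd64/linux
  APPEND initrd=debian-installer/amd64/initrd.gz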
SAMBA
apt-get install samba
nano /etc/samba/smb.conf
[global]
workgroup = workgroup
server role = standalone server
dns proxy = no
wins support = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *Password\supdated\ssuccessfully*
syslog = 0
log file = /var/log/smb.log.%m
max log size = 1000
map to guest = bad user
usershare allow guests = yes
security = user
[images]
comment = Network SAMBA share
path = /tftpboot
create mask = 0775
guest ok = yes
browseable = yes
read only = no
writeable = yes

Setting up FTP on Amazon Cloud Server [closed]

I am trying to set up FTP on an Amazon Cloud Server, but without luck.
I searched all over the net and there are no concrete steps for how to do it.
I found these commands to run:
$ yum install vsftpd
$ ec2-authorize default -p 20-21
$ ec2-authorize default -p 1024-1048
$ vi /etc/vsftpd/vsftpd.conf
# --- Add following lines at the end of file ---
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
$ /etc/init.d/vsftpd restart
But I don't know where to write them.
Jaminto did a great job of answering the question, but I recently went through the process myself and wanted to expand on Jaminto's answer.
I'm assuming that you already have an EC2 instance created and have associated an Elastic IP Address to it.
Step #1: Install vsftpd
SSH to your EC2 server. Type:
> sudo yum install vsftpd
This should install vsftpd.
Step #2: Open up the FTP ports on your EC2 instance
Next, you'll need to open up the FTP ports on your EC2 server. Log in to the AWS EC2 Management Console and select Security Groups from the navigation tree on the left. Select the security group assigned to your EC2 instance. Then select the Inbound tab, then click Edit:
Add two Custom TCP Rules with port ranges 20-21 and 1024-1048. For Source, you can select 'Anywhere'. If you decide to set Source to your own IP address, be aware that your IP address might change if it is being assigned via DHCP.
Step #3: Make updates to the vsftpd.conf file
Edit your vsftpd conf file by typing:
> sudo vi /etc/vsftpd/vsftpd.conf
Disable anonymous FTP by changing this line:
anonymous_enable=YES
to
anonymous_enable=NO
Then add the following lines to the bottom of the vsftpd.conf file:
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
Make sure to replace pasv_address with your public-facing IP address.
To save changes, press escape, then type :wq, then hit enter.
Step #4: Restart vsftpd
Restart vsftpd by typing:
> sudo /etc/init.d/vsftpd restart
You should see a confirmation message as vsftpd shuts down and starts back up.
If this doesn't work, try:
> sudo /sbin/service vsftpd restart
Step #5: Create an FTP user
If you take a peek at /etc/vsftpd/user_list, you'll see the following:
# vsftpd userlist
# If userlist_deny=NO, only allow users in this file
# If userlist_deny=YES (default), never allow users in this file, and
# do not even prompt for a password.
# Note that the default vsftpd pam config also checks /etc/vsftpd/ftpusers
# for users that are denied.
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
news
uucp
operator
games
nobody
This is basically saying, "Don't allow these users FTP access." vsftpd will allow FTP access to any user not on this list.
So, in order to create a new FTP account, you may need to create a new user on your server. (Or, if you already have a user account that's not listed in /etc/vsftpd/user_list, you can skip to the next step.)
Creating a new user on an EC2 instance is pretty simple. For example, to create the user 'bret', type:
> sudo adduser bret
> sudo passwd bret
Step #6: Restricting users to their home directories
At this point, your FTP users are not restricted to their home directories. That's not very secure, but we can fix it pretty easily.
Edit your vsftpd conf file again by typing:
> sudo vi /etc/vsftpd/vsftpd.conf
Uncomment the line:
chroot_local_user=YES
Restart the vsftpd server again like so:
> sudo /etc/init.d/vsftpd restart
All done!
Appendix A: Surviving a reboot
vsftpd doesn't automatically start when your server boots. If you're like me, that means that after rebooting your EC2 instance, you'll feel a moment of terror when FTP seems to be broken - but in reality, it's just not running! Here's a handy way to fix that:
> sudo chkconfig --level 345 vsftpd on
Alternatively, if you are using Red Hat, another way to manage your services is this nifty graphical user interface, which controls which services should automatically start:
> sudo ntsysv
Now vsftpd will automatically start up when your server boots up.
Appendix B: Changing a user's FTP home directory
* NOTE: Iman Sedighi has posted a more elegant solution for restricting users' access to a specific directory. Please refer to his excellent solution posted as an answer *
You might want to create a user and restrict their FTP access to a specific folder, such as /var/www. In order to do this, you'll need to change the user's default home directory:
> sudo usermod -d /var/www/ username
In this specific example, it's typical to give the user permissions to the 'www' group, which is often associated with the /var/www folder:
> sudo usermod -a -G www username
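To confirm both changes took effect, you can check with standard tools (nothing vsftpd-specific):
id -nG username         # the group list should now include www
getent passwd username  # the sixth field shows the new home directory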
To enable passive ftp on an EC2 server, you need to configure the ports that your ftp server should use for inbound connections, then open a list of available ports for the ftp client data connections.
I'm not that familiar with linux, but the commands you posted are the steps to install the ftp server, configure the ec2 firewall rules (through the AWS API), then configure the ftp server to use the ports you allowed on the ec2 firewall.
So this step installs the FTP server (vsftpd):
> yum install vsftpd
These steps configure the FTP server:
> vi /etc/vsftpd/vsftpd.conf
-- Add following lines at the end of file --
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
> /etc/init.d/vsftpd restart
but the other two steps are more easily done through the Amazon console under EC2 Security Groups. There you need to configure the security group that is assigned to your server to allow connections on ports 20-21 and 1024-1048.
Thanks @clone45 for the nice solution. But I had just one important problem with Appendix B of his solution. Immediately after I changed the home directory to /var/www/html, I couldn't connect to the server through SSH or SFTP anymore, because it always showed the following error:
permission denied (public key)
or in FileZilla I received this error:
No supported authentication methods available (server: public key)
But I could still access the server through a normal FTP connection.
If you encounter the same error, just undo Appendix B of @clone45's solution by setting the user's default home directory back:
sudo usermod -d /home/username/ username
But when you set the user's default home directory back, the user has access to many other folders outside /var/www/html. So to secure your server, follow these steps:
1- Make sftponly group
Make a group for all users whose FTP and SFTP access you want to restrict to /var/www/html. To make the group:
sudo groupadd sftponly
2- Jail the chroot
To restrict this group's access to the server via SFTP, you must jail the chroot so that the group's users cannot access any folder except the html folder inside their home directory. To do this, open /etc/ssh/sshd_config in vim with sudo.
At the end of the file, comment out this line:
Subsystem sftp /usr/libexec/openssh/sftp-server
And then add this line below that:
Subsystem sftp internal-sftp
So we replaced the subsystem with internal-sftp. Then add the following lines below it:
Match Group sftponly
ChrootDirectory /var/www
ForceCommand internal-sftp
AllowTcpForwarding no
After adding these lines, I saved my changes and restarted the SSH service with:
sudo service sshd restart
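If sshd refuses to restart after this edit, sudo sshd -t is a quick way to find out why; it parses the config and prints nothing when the file is clean:
sudo sshd -t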
3- Add the user to the sftponly group
Any user whose access you want to restrict must be a member of the sftponly group. We add the user to sftponly with (note the -a, which appends to rather than replaces the user's existing groups):
sudo usermod -aG sftponly username
4- Restrict user access to just /var/www/html
To restrict the user's access to just the /var/www/html folder, we need to make a directory named 'html' in that user's home directory and then bind-mount /var/www onto /home/username/html as follows:
sudo mkdir /home/username/html
sudo mount --bind /var/www /home/username/html
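Note that a bind mount made this way does not survive a reboot; to make it persistent, an /etc/fstab entry with the same paths should work:
/var/www  /home/username/html  none  bind  0  0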
5- Set write access
If the user needs write access to /var/www/html, then you must jail the user at /var/www, which must have root:root ownership and permissions of 755. You then need to give /var/www/html ownership of root:sftponly and permissions of 775 with the following commands:
sudo chmod 755 /var/www
sudo chown root:root /var/www
sudo chmod 775 /var/www/html
sudo chown root:sftponly /var/www/html
6- Block shell access
If you want to prevent shell access as well, to make it more secure, just change the user's default shell to /bin/false as follows:
sudo usermod -s /bin/false username
Great Article... worked like a breeze on Amazon Linux AMI.
Two more useful commands:
To change the default FTP upload folder
Step 1:
edit /etc/vsftpd/vsftpd.conf
Step 2: Create a new entry at the bottom of the file:
local_root=/var/www/html
To apply read, write and delete permissions to the files under the folder, so that you can manage it with an FTP client:
find /var/www/html -type d -exec chmod 777 {} \;
In case you have ufw enabled, remember add ftp:
> sudo ufw allow ftp
It took me 2 days to realise that I had enabled ufw.
It will not be OK until you add your user to the www group with the following command:
sudo usermod -a -G www <USER>
This solves the permission problem.
Set the default path by adding this:
local_root=/var/www/html
Don't forget to update your iptables firewall, if you have one, to allow the 20-21 and 1024-1048 port ranges in.
Do this in /etc/sysconfig/iptables, adding lines like these:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 20:21 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1024:1048 -j ACCEPT
And restart iptables with the command:
sudo service iptables restart
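On RHEL-family systems you can also write the live rules back to /etc/sysconfig/iptables so they survive a reboot:
sudo service iptables save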
I've simplified clone45's steps:
Open the ports as he mentioned
sudo su
sudo yum install vsftpd
echo -n "Public IP of your instance: " && read publicip
echo -e "anonymous_enable=NO\npasv_enable=YES\npasv_min_port=1024\npasv_max_port=1048\npasv_address=$publicip\nchroot_local_user=YES" >> /etc/vsftpd/vsftpd.conf
sudo /etc/init.d/vsftpd restart
I followed clone45's answer all the way to the end. A great article! Since I needed FTP access to install plug-ins on one of my WordPress sites, I changed the home directory to /var/www/mysitename. Then I added my FTP user to the apache (or www) group like this:
sudo usermod -a -G apache myftpuser
After this I still saw this error on WP's plugin installation page: "Unable to locate WordPress Content directory (wp-content)". I searched and found this solution on a wordpress.org support thread: https://wordpress.org/support/topic/unable-to-locate-wordpress-content-directory-wp-content and added the following to the end of wp-config.php:
if (is_admin()) {
    add_filter('filesystem_method', create_function('$a', 'return "direct";'));
    define('FS_CHMOD_DIR', 0751);
}
After this my WP plugin was installed successfully.
Maybe worth mentioning in addition to clone45's answer:
Fixing Write Permissions for Chrooted FTP Users in vsftpd
The vsftpd version that comes with Ubuntu 12.04 Precise does not permit chrooted local users to write by default. By default you will have this in /etc/vsftpd.conf:
chroot_local_user=YES
write_enable=YES
In order to allow local users to write, you need to add the following parameter:
allow_writeable_chroot=YES
Note:
Issues with write permissions may show up as the following FileZilla errors:
Error: GnuTLS error -15: An unexpected TLS packet was received.
Error: Could not connect to server
References:
Fixing Write Permissions for Chrooted FTP Users in vsftpd
VSFTPd stopped working after update
In case you are getting "530 password incorrect", one more step is needed:
in the file /etc/shells, add the following line:
/bin/false
FileZilla is a good FTP tool to set up with Amazon Cloud.
Download the FileZilla client from https://filezilla-project.org/
Click on File -> Site Manager ->
New Site
Provide the Host name (the IP address of your Amazon cloud instance) and the port, if any
Protocol - SFTP (may change based on your requirement)
Logon Type - Normal (so the system will not ask for a password each time)
Provide user name and password.
Connect.
You need to do these steps only once; later it will upload content to the same IP address and the same site.
