NFS mounts failing if FIREHOL firewall started - linux

I am trying to set up NFS mounts between two machines on the same local network, but it seems I need to be more specific in my firewall (FireHOL) setup, as the client side cannot mount the exports.
I did look at netstat to determine which ports open up, but they seem to be non-static/changing.
I know it is firewall related, as disabling/stopping FireHOL makes the problem disappear.
Any specific areas I should investigate?

Well, first of all, you need to make sure that portmap is also enabled in your FireHOL configuration.
I am not certain about the low-level workings of NFS's ports, but it does not use the same ports every time.
You could do something like the following to enable the NFS ports as well as portmap (check rpcinfo -p).
This enables the RPC queries the firewall needs in order to learn the ports AFTER NFS has started (or restarted).
I also suggest using 'src' to restrict the client IPs you are serving to, if you don't already have it :)
Lastly, remember to restart the firewall/FireHOL AFTER NFS restarts, so the RPC queries report the current ports for the NFS service.
Example (where 192.168.152.176 is your client machine)
server portmap accept src 192.168.152.176
server nfs accept src 192.168.152.176
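For reference, rpcinfo -p on the server lists the ports the RPC services have registered; expect output roughly like the following (illustrative only - the exact ports will differ, and mountd in particular usually moves after a restart):
   program vers proto   port  service
    100000    2   tcp    111  portmapper
    100003    3   tcp   2049  nfs
    100005    3   udp  20048  mountd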

Related

Ubuntu 20.04: what are the security risks without firewall?
Installed Ubuntu 20.04, but forgot to enable the firewall using ufw.
SSH on port 22: login with keys (2048-bit), no passwords.
UsePAM is set to true; any risk?
Are there any other services that may have security holes without a firewall, through which hackers could break into the server?
Case for firewall
Yes you should enable the firewall. It's an important security layer.
Software has bugs. The firewall layer prevents some bugs or mistakes from causing harm.
Security is layered for the same reason airplanes have redundant systems. Even single engine airplanes are designed to glide when they lose thrust.
SSH and Services You Know About
While proper SSH configuration is another topic, it illustrates a reason firewalls are needed. Your config is on the right track, but without reading the entire man page you're still unsure whether it's secure.
If you're unsure about SSH, a firewall can limit access to source IPs that you define, adding another layer.
SSH is but one of a handful of services you're running that might be accessible over the public internet. Sometimes services become open to the public unintentionally.
Third Party Software
One type of bug is a software update or install that inadvertently opens a service and exposes that service to the public internet.
I frequently see application installs that open a private service bound to 0.0.0.0 when it should be bound to 127.0.0.1. If you don't know the difference, you aren't alone. Binding to 0.0.0.0 (or *) means the service listens on every interface, so it can be reachable from the public internet.
This isn't just a user-workstation problem. Package managers are susceptible to this too: NPM, Python pip, and APT can all run executables on your system.
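For illustration, Python's built-in HTTP server makes the difference easy to see (a minimal example; port 8000 is arbitrary):
python3 -m http.server 8000 --bind 127.0.0.1   # reachable only from this machine
python3 -m http.server 8000 --bind 0.0.0.0     # listens on every interface, potentially public
The same distinction applies to the listen/bind setting of any daemon.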
Checking for Open Services
Run sudo netstat -an to show active internet connections and listening sockets.
For example, here's output:
Active Internet connections
Proto Recv-Q Send-Q Local Address Foreign Address (state)
tcp4 31 0 192.168.1.17.53624 3.xxx.96.61.443 CLOSE_WAIT
tcp4 0 0 192.168.1.17.53622 162.xxx.35.136.443 ESTABLISHED
udp4 0 0 *.3722 *.*
[...]
I do not know what udp port 3722 is but my system will accept traffic from ANYWHERE to that port.
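To find out which process owns a mystery port like that (a common follow-up step; lsof is one option), you can ask lsof:
sudo lsof -nP -iUDP:3722
If whatever is listening there isn't something you need exposed, the firewall lets you block it even before you track down why it's listening.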
Closing
The firewall is a layer that lives lower in the network stack than applications and thus provides a layer to guard against configuration and application problems.
Enabling the firewall will prevent you from accidentally exposing something you didn't know was open - telnet, ftp, databases, Jupyter, to name a few.
Regarding ssh with passwords disabled and ssh keys, it's a good way to enable shell access, but be aware that if there is no passphrase on the ssh key and the private key is stolen, then the thief will have access.
Also, remember ssh only encrypts transport. If you trust everyone who has or can obtain root access, that's not a big deal, but if someone dishonest connects as root on the same host, then they can still spy on connections. Just something to be aware of.

Docker: intercept outbound traffic and change ip:port to another container

First of all, I want to say that I don't have much experience with advanced networking on Linux.
I have a task to deploy our .deb packages in containers. The applications are mostly tuned for operating on localhost, although they were designed to be able to run across a set of server machines (DB, application, client, etc.). Since the components of the app have been distributed between containers, I need to make them work together. The goal is to do it without any pre-setup sequences that change the IP addresses in the configs of the components, since the target IP is uncertain and an IP alias in /etc/hosts may not solve the problem.
Could I somehow intercept an outbound connection to localhost:5672 and forward it to, say, 172.18.0.4:5672, with the ability to correctly receive the return traffic from the resource it was forwarded to? Can you give me some examples of the script?
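One possible approach (a sketch only - it reuses the 172.18.0.4:5672 address from the question and assumes iptables and CAP_NET_ADMIN are available inside the container) is to rewrite loopback-destined packets with NAT rules:
sysctl -w net.ipv4.conf.all.route_localnet=1
iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 5672 -j DNAT --to-destination 172.18.0.4:5672
iptables -t nat -A POSTROUTING -p tcp -d 172.18.0.4 --dport 5672 -j MASQUERADE
The first line lets packets addressed to 127.0.0.1 be routed off the loopback interface, the DNAT rule redirects them to the other container, and the MASQUERADE rule gives them a routable source address; connection tracking then handles the replies automatically.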

UPnP portforwarding persist

I'm trying to set up port forwarding for several ports used for communication, but it seems the mappings are lost on reboot.
I'm using a script to create them, with the following syntax:
upnpc -a 192.168.1.95 22 22 TCP
...
Since my system is designed to stress the gateway until it reboots, I need these ports to be open again after a reboot. I could handle it in the software (re-running the script if the connection is lost), but I don't want to do that unless it is absolutely necessary.
Do you have any idea how to make a port forwarding with UPnP such that the forwarding persists after a reboot?
Port mappings are specifically not required to be persistent between gateway reboots; clients are supposed to keep an eye on their mappings and re-map when needed. The WANIPConnection spec v2 also does not even allow indefinite mappings: another reason to keep the client running as long as you need the mapping to exist.
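A minimal sketch of such a client-side refresh, reusing the upnpc syntax from the question (the grep pattern against upnpc -l output may need adjusting for your gateway, and the 5-minute interval is arbitrary):
while true; do
    upnpc -l | grep -q '22->192.168.1.95:22' || upnpc -a 192.168.1.95 22 22 TCP
    sleep 300
done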

Network share over internet from one host to another

Maybe it's a silly question or a bad idea, but I want to realize it.
I need to share my drives from one host (Linux) to another over the Internet and mount them on the destination host.
Both computers use different ISPs and are behind NAT (routers).
The source host is Linux.
The destination host is Windows/Mac.
First I tried NFS:
I opened 111 and 2049 on the source PC to the destination host on the router. The filesystems were exported to the destination host.
It didn't work. I guess NFS is designed only for local networks.
Second was Samba:
In the configuration I commented out the network/hosts-related lines under the global section to make the shares open to all.
Ports 139 and 445 were opened, but no luck. The servers were not pingable during the test; I don't know if that's important.
If you have any solutions, comments or suggestions to use other protocols, please reply.
Thanks in advance!
I have not heard of sharing storage over the Internet like this, because the network path and routing are not under your control. Too many things are uncontrollable; if you really want to do it, I think you should confirm the following before you start:
1. Do the two hosts have individual public Internet IP addresses? The two hosts should be able to ping each other.
2. Are the specific ports you want to use open, and does the firewall (hardware or software) allow them through? You can verify this with the telnet command: telnet host port
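For example, a quick reachability check from the destination host toward the source host's public address could look like this (203.0.113.10 is only a placeholder address):
telnet 203.0.113.10 2049    # NFS
telnet 203.0.113.10 445     # SMB/Samba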
In my opinion, both NFS and Samba work at the application layer; they can work locally and over the Internet. But over the Internet, many things at the network layer are outside our control, and it is also not safe to use them there.
Both hosts do have individual IPs but are not pingable. telnet was working for the mentioned hosts in both directions. Yes, I understand this could be slow or might not work at all. I guess I need to find some NAS solution, but that would cost quite some money.

How can I develop using a local VM server without using URLs with ports in them?

I'm setting up a linux server in a VM for my development.
Previously I've had PHP, MySQL etc etc all installed locally on my Mac. Apart from being a security risk, it's a drag to maintain and keep up to date, and there's a risk that an OS upgrade will wipe part of your setup out as the changes you make are fairly non-standard.
Having the entire server contained within a VM makes it easily upgradable and portable between machines. It means I can have the same configuration as the destination server and with shared folders even if the VM gets corrupted my work is safe on the host machine.
Previously with the local installation I was able to develop on convenient URLs like http://site.dev. I'd quite like to carry this over to the VM way of development but I'm struggling to figure out how, if it's possible at all.
Here's the problem:
In Bridged mode, the VM is part of the same network as the host. This is great but I can't choose a fixed IP address as I may be joining other networks and that address may be taken already. I'd like a consistent way of addressing my VM.
In NAT mode I can't directly address the VM without using port forwarding. I can use http://site.dev if I use the hosts file to forward that to localhost and then localhost:8080 forwards to the vm:80. The trouble is I have to access http://site.dev:8080 which is inconvenient for URL construction.
Does anyone know a way around this? I'm using ubuntu server and virtualbox.
Thanks!
The answer is to define a separate host-only network adapter and use that for host->guest communication.
You can do this by powering down the guest and adding the adapter in the VM settings. Once that's done you can boot the guest again and configure the new network interface however suits you best. I chose a fixed IP address in an unused range.
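For example (a sketch; 192.168.56.x is VirtualBox's default host-only range, and the interface name will vary), the guest can be given a static address on the host-only interface via /etc/network/interfaces (newer Ubuntu releases use netplan instead):
auto eth1
iface eth1 inet static
    address 192.168.56.10
    netmask 255.255.255.0
Then point the development hostname at that address in the host machine's /etc/hosts, so http://site.dev resolves without any port in the URL:
192.168.56.10  site.dev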
