X authority bypass - Linux

I'm trying to write an application that runs as a daemon and monitors
running X sessions. Right now I'm struggling to find documentation
regarding the X security model. Specifically, I'm attempting to
connect to running X displays from my daemon process. Calling
XOpenDisplay(dispName) doesn't work, I guess because my process
doesn't have permission to connect to this display. After a bit of
research, it looks like I need to do something with xauth.
In my test environment, the X server is started like this:
/usr/bin/X -br -nolisten tcp :0 vt7 -auth /var/run/xauth/A:0-QBEVDj
That file contains a single entry, which looks like this:
#ffff##: MIT-MAGIC-COOKIE-1 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
By adding an entry to ~/.Xauthority with the same hex key, I can
connect to the X server. However, this is difficult because I need to
programmatically find the auth file the X server is using (the
location of which I guess will change from distro to distro, and
probably from one boot to the next), then query it, then write a new
auth file. If the process is running as a daemon, it might not have a
home directory, so how do I know where to write the new entries?
Ideally, what I'm looking for is a way to bypass the need to have the
xauth cookie in ~/.Xauthority, or even to know what the cookie is at
all. I realise that this is unlikely (what good is a security model
if it's easily bypassed?), but I'm hoping someone on this list may have
a few good ideas. Is there a way to specify that my process is
privileged and thus should automatically be given access to any
display on the local machine?

You don't have to use a home directory if you set the XAUTHORITY environment variable, which gives the location of the .Xauthority file. Read the xauth man page.
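By way of illustration, a minimal sketch in C, assuming the daemon has already located the server's auth file (the path is the one from your example; a real daemon would have to discover it, e.g. from the -auth argument of the running X process):

#include <stdio.h>
#include <stdlib.h>
#include <X11/Xlib.h>

int main(void)
{
    /* Point Xlib at the server's own auth file instead of ~/.Xauthority.
       The path is illustrative; discover it at runtime in a real daemon. */
    setenv("XAUTHORITY", "/var/run/xauth/A:0-QBEVDj", 1);

    Display *dpy = XOpenDisplay(":0");
    if (!dpy) {
        fprintf(stderr, "cannot connect to display :0\n");
        return 1;
    }
    printf("connected to %s\n", DisplayString(dpy));
    XCloseDisplay(dpy);
    return 0;
}

Compile with cc demo.c -lX11. This sidesteps the "write a new auth file" step entirely: you reuse the server's own file (readable by root) rather than copying the cookie somewhere else.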
But, in general, it's hard to locate the auth file, for the reasons you mentioned; also, this "fishing for auth tokens" approach would only work for local displays.
With regard to letting root (or some other user) connect to an X server willy-nilly, you'd probably have to patch the source code to do this, and you'd have to use something like getpeereid() to obtain the connecting user's uid/gid. That call only works on Unix-domain sockets, which I presume are the type used for local connections anyway.
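On Linux, the getpeereid() equivalent is the SO_PEERCRED socket option; a sketch (the helper name is mine):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/socket.h>

/* Report the pid/uid/gid of the peer on a connected Unix-domain
   socket. This is the Linux analogue of BSD's getpeereid(). */
static void print_peer(int fd)
{
    struct ucred cred;
    socklen_t len = sizeof(cred);
    if (getsockopt(fd, SOL_SOCKET, SO_PEERCRED, &cred, &len) == 0)
        printf("peer pid=%d uid=%d gid=%d\n",
               (int)cred.pid, (int)cred.uid, (int)cred.gid);
}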

Xauth is not the only security mechanism for X. There is also another, less secure one that performs IP-based authentication (see xhost). If you switch your X server to this less secure mode, it will trust any connection coming from the defined set of IPs. This way you do not need to deal with Xauthority at all.
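For example, run from a session that already has access (the second, server-interpreted form is narrower than plain host-based auth):

xhost +192.168.1.5          # trust every connection from this IP
xhost +si:localuser:root    # or: trust a specific local user

Note that the test server above was started with -nolisten tcp, so IP-based entries only matter once TCP connections are enabled; the localuser form works over the local Unix socket.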

Related

How to provide destination MAC address to socket

I have an application on my Linux host that communicates over UDP with another machine via 10G Ethernet. The machine on the other end does not respond to ARP requests. I am able to get its MAC address through other means (a different interface, on
Is there a way to programmatically get this information into the ARP table without privileged status?
I know I can issue "sudo arp -s 1.2.3.4 AA:BB:CC:DD:EE:FF" on the command line every time I power it up.
I know I can add "1.2.3.4 AA:BB:CC:DD:EE:FF" to /etc/ethers.
I know that as a privileged user/process I can issue a SIOCSARP ioctl (sketched below).
All of these mechanisms require sudo/root access. I read something about giving the application the CAP_NET_ADMIN capability.
I'm looking for this capability so that the end users don't need to do any of the above. It seems like, if I can open a socket without sudo/root that determines the need for this network information, there should be a way for me to provide it without sudo/root.
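For reference, here is a sketch of the SIOCSARP route in C (IP/MAC are the placeholders from above; the interface name eth0 is an assumption). My understanding is that granting the binary CAP_NET_ADMIN, e.g. sudo setcap cap_net_admin+ep ./arpset, would let end users run it without sudo:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/sockios.h>
#include <net/if_arp.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct arpreq req;
    memset(&req, 0, sizeof(req));

    /* Protocol address: the peer that never answers ARP (placeholder). */
    struct sockaddr_in *sin = (struct sockaddr_in *)&req.arp_pa;
    sin->sin_family = AF_INET;
    inet_pton(AF_INET, "1.2.3.4", &sin->sin_addr);

    /* Hardware address: the MAC learned through other means (placeholder). */
    unsigned char mac[6] = { 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF };
    req.arp_ha.sa_family = ARPHRD_ETHER;
    memcpy(req.arp_ha.sa_data, mac, 6);

    req.arp_flags = ATF_PERM | ATF_COM;            /* static, complete entry */
    strncpy(req.arp_dev, "eth0", sizeof(req.arp_dev) - 1);  /* assumed iface */

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0 || ioctl(fd, SIOCSARP, &req) < 0) {
        perror("SIOCSARP");
        return 1;
    }
    close(fd);
    return 0;
}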
No, you can't edit ARP information as non-root. This makes sense: otherwise a malicious attacker would be able to modify ARP tables and completely disrupt network communication and compromise security.
The solution to your problem is to fix your network configuration.

Securing a simple Linux server that holds a MySQL database?

A beginner question, but I've looked through many questions on this site and haven't found a simple, straightforward answer:
I'm setting up a Linux server running Ubuntu to store a MySQL database.
It's important this server is as secure as possible; as far as I'm aware, my main concerns should be incoming DoS/DDoS attacks and unauthorized access to the server itself.
The database server only receives incoming data from one specific IP (101.432.XX.XX), on port 3000. I only want this server to be able to receive incoming requests from this IP, as well as prevent the server from making any outgoing requests.
I'd like to know:
What is the best way to prevent my database server from making outgoing requests and to receive incoming requests solely from 101.432.XX.XX? Would closing all ports except 3000 help achieve this?
Are there any other additions to the linux environment that can boost security?
I've taken very basic steps to secure my phpmyadmin portal (linked to the MySQL database), such as restricting access to solely my personal IP address.
To access the database server requires the SSH key (which itself is password protected).
A famous man once said "security is a process, not a product."
So you have a db server that should ONLY listen to one other server for db connections, and you have the specific IP for that one other server. There are several layers of restriction you can put in place to accomplish this:
1) Firewall
If your MySQL server is fortunate enough to be behind a firewall, you should be able to block out all connections by default and allow only certain connections on certain ports. I'm not sure how you've set up your db server, or whether the other server that wants to access it is on the same LAN or not or whether both machines are just virtual machines. It all depends on where your server is running and what kind of firewall you have, if any.
I often set up servers on Amazon Web Services. They offer security groups that allow you to block all ports by default and then allow access on specific ports from specific IP blocks using CIDR notation. I.e., you grant access in port/IP combination pairs. To let your one server get through, you might allow access on port 3000 to IP address 101.432.xx.xx.
The details will vary depending on your architecture and service provider.
2) IPTables
Linux machines can run a local firewall (i.e., a process that runs on each of your servers itself) called iptables. This is powerful stuff, but it's easy to lock yourself out of your server. There's a brief post here on SO, but be careful: keep in mind that you need to permit access on port 22 for all of your servers so that you can log in to them. If you can't connect on port 22, you'll never be able to log in using ssh again. I always try to take a snapshot of a machine before tinkering with iptables, lest I permanently lock myself out.
There is a bit of info here about iptables and MySQL also.
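As a rough sketch of what those rules might look like for this setup (101.432.XX.XX is the placeholder from the question; add the ACCEPT rules before flipping the default policies if you're connected over SSH):

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp -s 101.432.XX.XX --dport 3000 -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -P INPUT DROP
iptables -P OUTPUT DROP

The OUTPUT policy also covers the "prevent outgoing requests" half of the question: only replies to established connections get out.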
3) MySQL cnf file
MySQL has some configuration options that can limit db connections to localhost only - i.e., you can prevent any remote machine from connecting. I don't know offhand whether any of these options can limit the remote machines by IP address, but it's worth a look.
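The option in question is bind-address in my.cnf; it limits which local address mysqld listens on (all-or-one, not per-client - per-client filtering is what the firewall and GRANT layers are for). A sketch, assuming the server should listen only on a private address:

[mysqld]
bind-address = 10.0.0.5   # illustrative; use 127.0.0.1 to refuse all remote connections
port = 3000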
4) MySQL access control via GRANT, etc.
MySQL allows you very fine-grained control over who can access what in your system. Ideally, you would grant access to information or functions only on a need-to-know basis. In practice, this can be a hassle, but if security is what you want, you'll go the extra mile.
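A hedged example, using the question's placeholder address and a hypothetical appuser account and appdb schema:

CREATE USER 'appuser'@'101.432.XX.XX' IDENTIFIED BY 'a-strong-password';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'101.432.XX.XX';
FLUSH PRIVILEGES;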
To answer your questions:
1) YES, you should definitely try and limit access to your DB server's MySQL port 3000 -- and also port 22 which is what you use to connect via SSH.
2) Aside from the ones mentioned above, your limiting of phpMyAdmin to only your IP address sounds really smart -- but make sure you don't lock yourself out accidentally. I would also strongly suggest that you disable password access for ssh connections, forcing the use of key-pairs instead. You can find lots of examples on Google.
What is the best way to prevent my database server from making outgoing requests and receiving incoming requests solely from 101.432.XX.XX? Would closing all ports ex. 3000 be helpful in achieving this?
If you don't have access to a separate firewall, I would use iptables. There are a number of managers available for it. So yes. Remember that if you are using iptables, make sure you have a way of accessing the server OOB (short for out of band, meaning that if you make a mistake in iptables, you can still reach it via console/remote hands/IPMI, etc.).
Next up, when creating users, you should only allow that subnet range plus user/pass authentication.
Are there any other additions to the linux environment that can boost security? I've taken very basic steps to secure my phpmyadmin portal (linked to the MySQL database), such as restricting access to solely my personal IP address.
Ubuntu ships with something called AppArmor. I would investigate that. That can be helpful to prevent some shenanigans. An alternative is SELinux.
Further, take more steps with phpmyadmin. That is your weakest link in the security tool chain we are building.
To access the database server requires the SSH key (which itself is password protected).
If security is a concern, I would NOT use SSH-key-style access. Instead, I would use MySQL's native support for SSL certificate authentication. Here is how to configure it with phpMyAdmin.
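Server-side, that roughly means enabling SSL in my.cnf and requiring a client certificate per account (paths and account name below are illustrative):

[mysqld]
ssl-ca   = /etc/mysql/ca.pem
ssl-cert = /etc/mysql/server-cert.pem
ssl-key  = /etc/mysql/server-key.pem

GRANT USAGE ON *.* TO 'appuser'@'101.432.XX.XX' REQUIRE X509;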

Understanding Openstack noVNC security

I'm trying to get a deeper understanding of the architecture and design of OpenStack noVNC security. I found this document. It makes sense but is missing details. Can somebody confirm my understanding is right, or correct me if I'm wrong?
0) noVNC allows VNC clients to run in web browsers - good for clients without Java or a VNC client installed.
1) The VNC server is provided by the hypervisor. Every VM has its own VNC server, at port 59xx, not accessible from outside.
2) A websocket proxy bridges to the VNC server and provides service for the noVNC client (JavaScript in the browser), say at port 6080.
3) Simple security: security could alternatively be guaranteed by a VNC password, but that's not convenient to type every time and not easy to change, and every VM on the same hypervisor has to share the same password. Different compute nodes may use different VNC passwords.
4) To provide better access control, consoleauth is introduced, so we can use OpenStack authentication for VNC. When a new request for a remote console comes in, a dynamic access URL (with a token) is generated, cached/registered, and sent back to the client. Later, only previously registered connections are accepted.
I would like to know more about whether/how dynamic firewall rules are created, and whether/when the tokens are invalidated. I know the best way is to read the source code, but a high level description is also valuable. Thanks.
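For what it's worth, the client-visible half of step 4 is the URL handed back by the API, e.g. via the legacy nova CLI (host and token below are illustrative):

nova get-vnc-console <server-id> novnc
# => http://novncproxy.example.com:6080/vnc_auto.html?token=8d4b1b0e-...

If memory serves, the proxy validates the token against consoleauth on each new websocket connection, and tokens expire after a configurable TTL (600 seconds by default) rather than through dynamic firewall rules.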

Notify me when a socket binds, like inotify does for files

I am interested in finding out when things SSH into my boxen to create a reverse tunnel. Currently I'm using a big hack - just lsof with a few lines of script. So my goal is to see when a socket calls bind() and, ideally, get the port it binds to (it's listening locally since it's a reverse tunnel) and the remote host that I would be connecting to. My lsof hack is basically fine, except I don't get instant notifications and it's rather... hacky :)
This is easy for files; once a file does just about anything, inotify can tell me in Linux. Of course, other OSs have a similar capability.
I'm considering simply tailing the SSHD logs and parsing the output, but my little "tunnel monitor" daemon needs to be able to figure out the state of the tunnels at any point in time, even if it hasn't been running the whole time SSHD has.
I have a pretty evil hack I've been considering as well. It's a script that invokes GDB on /usr/sbin/sshd, then sets a breakpoint on bind(). Then it runs it with the options -d -p <listening port> (running a separate SSHD for these tunnels is fine). Then it waits for that breakpoint to get hit, and uses GDB's output to get the remote host's IP address and the local IP on which SSH is now listening. Again, that's text parsing and opens some other issues.
Is there a "good" way to do this?
I would use SystemTap for a problem like this. You can use it to probe the kernel to see when a bind is done by any process on the system. http://sourceware.org/systemtap/
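A minimal sketch of such a probe (argstr is the tapset's preformatted argument string; run it as root with stap):

# bindwatch.stp - report every bind() made by sshd
probe syscall.bind {
    if (execname() == "sshd")
        printf("%s (pid %d) bind %s\n", execname(), pid(), argstr)
}

Run with: sudo stap bindwatch.stp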

How can I download a file over multiple interfaces in OS X or Linux?

I have a large file I want to download from a server I have root access to. I also have several different, concurrent internet connections from my machine to the server at my disposal.
Do you know of any protocol, (S)FTP client, HTTP client, AFP client, or any other file transfer protocol server and client combination that supports multithreaded downloads over different connections?
One option would be the "old fashioned" multi-part file:
split -b 50m hugefile multiparthugefile_
That will create multiparthugefile_aa, multiparthugefile_ab and so on. To rejoin them, use the cat command:
cat multiparthugefile_* > hugefile_rejoined
To actually transfer the files using different interfaces, the wget --bind-address=ADDRESS flag should work:
--bind-address=ADDRESS bind to ADDRESS (hostname or IP) on local host.
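Putting the pieces together, assuming two local interfaces at illustrative addresses and the split parts hosted on the server (the hostname server is a placeholder):

wget --bind-address=192.168.1.10 http://server/multiparthugefile_aa &
wget --bind-address=10.0.0.10    http://server/multiparthugefile_ab &
wait
cat multiparthugefile_* > hugefile_rejoined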
This problem seems like something Bittorrent is positioned to do well, but I'm not sure exactly how you would do this..
Perhaps create a temporary tracker (or use something like OpenBitTorrent.com) and run multiple clients locally - as long as the clients support the LAN transfer feature, each client would grab different parts from the server and share them with the (local) clients. You'd end up with multiple copies of the file locally, but it would only be transferred over the internet once.
Any of these? You'll need a webserver hosting the same file on all the interfaces though.
In the case of HTTP or HTTPS, as long as the server supports range requests, you can fetch the ranges separately and stitch them together. I started working on a use case like the one you describe. If you are still interested, here is a link to my repository: https://github.com/m0hithreddy/MID.
The program (MID) uses the SO_BINDTODEVICE socket option to bind to a specific interface, so in most cases you need superuser permissions or the CAP_NET_RAW capability (which the root user has).
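The socket-level trick it uses looks roughly like this (a sketch; eth0 is an illustrative interface name):

#include <stdio.h>
#include <sys/socket.h>

/* Tie an outgoing TCP socket to one interface before connecting.
   Needs CAP_NET_RAW (or root) on most systems. */
int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    const char ifname[] = "eth0";
    if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE,
                   ifname, sizeof(ifname)) < 0) {
        perror("SO_BINDTODEVICE");
        return 1;
    }
    /* connect() and issue the HTTP range request as usual from here */
    return 0;
}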
MID determines the network interfaces to use for the download and adopts a two-step split for downloading the content.
First step: the file is divided among the network interfaces (in real time).
Second step: each interface's share is further divided among several HTTP range requests issued over that particular interface. (Note: the server has to support range requests in the first place to make all of this possible.)
MID supports the HTTP and HTTPS protocols.
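Without MID, curl can do a crude version of the same two-step split by hand, since it supports both interface binding and range requests (the offsets below assume a 100 MiB file cut in two; eth0/eth1 are illustrative):

curl --interface eth0 --range 0-52428799         http://server/hugefile -o part0 &
curl --interface eth1 --range 52428800-104857599 http://server/hugefile -o part1 &
wait
cat part0 part1 > hugefile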
Cheers :)
HTTP: check out one of the various download managers (e.g. Firefox with the http://www.downthemall.net/ extension).
There are also FTP downloaders that support multiple streams.
