I am looking for a hook that would let me tell the ServerConnector (or whatever other object in the Jetty architecture would be a more suitable hook point) NOT to accept an incoming TCP/IP connection on the web server ports (e.g. 80 or 443) based on the IP address of the remote party, because that is all the information available before accept() is called; only later, after accept, does the HTTP data become available from the TCP payload.
The idea is that I would attach a piece of software to that hook which, for each incoming connection, can answer: accept or revoke.
With revoke, the incoming connection would be IGNORED and the socket would stay closed, preventing the Jetty container from doing more work or using resources on it.
It would also make it possible to protect system features (e.g. a special admin menu) behind certain server ports by only accepting, for example, local addresses (10.32.x.x), etc. It would have been even nicer to ALSO be able to get at the MAC address of the connecting device and only allow access if that MAC address is in an authorisation list (i.e. the user had physical access to the LAN to connect his device; otherwise his MAC address would never be the one seen on the LAN, because there is no peer-to-peer MAC address exchange between hosts that are not on the same LAN).
In this way I intend to prevent hostile connections from consuming CPU, bandwidth and sockets just to send them the 'main' pages during attacks, and to force them into waiting for time-outs. At the same time it would additionally allow protecting access to non-public features based on 'physical local access'. The hook software could even revoke connections based on parts of the IP address, for instance the Chinese IP address range, etc.
I found the typical TCP/IP socket interface on the ServerConnector classes (open/close/accept/...), but I didn't find a place where I could register a pre-accept() hook or callback. I also looked at the ServerConnector's class ancestors, but I didn't really see anything of the sort either. I started with the org.eclipse.jetty.server class ServerConnector.
For clarity, and for people who are more comfortable with code, this is conceptually what I am looking for:
...setup Jetty server, factories, etc.
ServerConnector http2 = new ServerConnector(this.oServer, sslF, alpnF, h2F, https2F);
http2.setPort(8443);
http2.setPreAcceptHook(8443, oMyHook); // <-- INVENTED line
this.oServer.addConnector(http2);
The hook would be a simple method of the MyHook object that returns true or false:
boolean AcceptRevoke(IP, Port) // return true = accept, false = revoke
I would welcome any hint as to which direction to look in (a class name or so), or confirmation that such a feature is not available, or cannot exist and therefore never will. There may be constraints in container design that rule out a pre-accept revocation like the ones possible with plain socket libraries; if so, I am unaware of them and may simply be asking for an impossibility.
Many Thanks.
Product and Versions Info:
Jetty 9.3.0 (v20150612)
alpn-boot-8.1.3 (v20150130)
JRE 8 (Oracle 1.8.0 Build 45 - b15)
You have A LOT OF WORK ahead of you.
To do this, you'll essentially have to provide your own Socket implementation layer for Java. There is no layer you can hook onto in Jetty that allows you to do anything that you mentioned. The kinds of things you want to do are at a much lower level, something at the JVM and/or OS levels.
You'll need to handle java.net.Socket, java.net.ServerSocket, and java.nio.channels.ServerSocketChannel.
Which means you'll be writing your own javax.net.ServerSocketFactory, javax.net.SocketFactory, java.net.SocketImpl, and java.net.SocketImplFactory for regular Sockets.
And for SSL/TLS, you'll need to write/handle/manage your own javax.net.ssl.SSLSocket, javax.net.ssl.SSLServerSocket, javax.net.ssl.SSLSocketFactory, and javax.net.ssl.SSLServerSocketFactory.
And don't forget to grab all of the jetty-alpn code additions to the standard OpenJDK SSL libraries so that you can get HTTP/2 working for you as well (seeing as you are obviously using it, based on your question details).
Before you test this on Jetty, test your implementation on a simple ServerSocket and ServerSocketChannel setup.
After you confirm it does what you want, you'll need to wire it up into Jetty. For normal ServerSocket / ServerSocketChannel, you'll have to work with the system-wide java.nio.channels.spi.SelectorProvider.
For SSLServerSocket / SSLServerSocketChannel, you'll wire it up via a custom/overridden org.eclipse.jetty.util.ssl.SslContextFactory implementation of your own.
Or ...
You can use a combination of post-accept logic (built into Jetty) and OS level management of connections (such as iptables), which would be far simpler.
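For instance, a sketch of the iptables approach; the ports and address ranges below are purely illustrative assumptions, not taken from your setup:

```shell
# Drop SYNs from a blocked range before Jetty ever sees them
# (203.0.113.0/24 is a documentation range used here as a stand-in).
iptables -A INPUT -p tcp --dport 443 -s 203.0.113.0/24 -j DROP

# Restrict a hypothetical admin port to a local 10.32.0.0/16 range only.
iptables -A INPUT -p tcp --dport 8443 ! -s 10.32.0.0/16 -j DROP
```

Dropped packets never complete the TCP handshake, so the JVM spends no resources on them, which is exactly the pre-accept behavior asked for.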
The requirement you stated to hook into a point before accept is possible with a Java server, but is a special kind of effort where you extend the core Java networking and Socket behaviors.
As an alternative, you could probably just override the configure(Socket socket) method of the ServerConnector. If you want to reject the socket, just close the connection. This might cause some strange exceptions; if so, report them in Bugzilla and we'll clean up.
Alternatively, open a Bugzilla to ask us to make the accepted(Socket socket) method protected instead of private and you can reject there.
Note that with either of these, the client will see the connection as accepted and then immediately closed.
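A minimal sketch of that post-accept idea using only plain java.net classes rather than Jetty; the acceptRevoke policy below is a hypothetical stand-in for the AcceptRevoke(IP, Port) hook from the question:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class PostAcceptFilter {

    // Hypothetical hook, modeled on the AcceptRevoke(IP, Port) idea from
    // the question: return true to accept, false to revoke.
    public static boolean acceptRevoke(InetAddress ip, int port) {
        return ip.isLoopbackAddress(); // example policy: local clients only
    }

    // Accept one connection and apply the hook *after* accept().
    // Returns true if the connection was kept, false if it was closed.
    public static boolean serveOne(ServerSocket server) throws IOException {
        try (Socket s = server.accept()) { // TCP handshake already completed here
            if (!acceptRevoke(s.getInetAddress(), s.getPort())) {
                return false; // try-with-resources closes the socket immediately
            }
            // ... hand the socket to the real request handling here ...
            return true;
        }
    }
}
```

The key limitation is visible in the comment: by the time the check runs, the handshake has already happened, so the remote side sees accept-then-close rather than a silent drop.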
Related
I am trying to use ZeroMQ for communicating between 2 processes. The message contains instructions from one process for the second to execute, so from a security perspective it is quite important that only the proper messages are sent and received.
If I am worried about third parties who may try to intercept or send malicious messages to the process, am I correct in assuming that as long as my messages are sent/received on IP 127.0.0.1 I am always safe? Or is there any circumstance where this can be compromised?
Thanks for the help all!
Assumptions and security are usually two things you don't want to mix. The short answer to your question is that sending or receiving traffic to localhost (127.0.0.1) will not, under default conditions, send or receive traffic outside of the local host.
Of course if the machine itself is compromised then you are no longer secure at all.
You've applied the ipc tag, which I assume means that you're using the ipc:// protocol (if not, you should be, if all of the communication is happening on one box). In that case, you shouldn't be using IPv4 addresses at all (or localhost), but ipc endpoint names.
For ipc, you're not connecting or binding to an IP or DNS address, but to something much more akin to a local file name. You just need to make sure both processes refer to the same name, and that permissions are set so that both processes can appropriately access the directory (see the ZMQ docs for a little more info; search for ipc). The only difference between an ipc endpoint name and a filename is that you don't create the file yourself: ZMQ creates the resource so that both processes can communicate through the same thing.
As S.Richmond says, if your machine is compromised, then all bets are off, but there's no way to publish ipc endpoints to the internet if you use them appropriately.
Domain
I am designing a piece of software at the moment, and I would like to hide the necessary cryptographic operations behind a "crypto daemon" that accesses encrypted keys on disk and offers a high-level, application-specific interface to these operations.
The other programs have to:
Authenticate to the daemon (valid authentication allows the daemon to decrypt the keys on disk)
Issue commands to the daemon and receive answers
I have the idea of using TCP via localhost for these operations. After doing the TCP handshake, the program has to authenticate to the daemon and - if successful - crypto commands can be issued to the daemon.
Actual Question
At least two assumptions have to hold, otherwise this is insecure by design:
TCP channels on localhost cannot be hijacked/modified (except by the admin/root)
TCP channels on localhost are private (cannot be peeked at) (except by admin/root)
Are these assumptions true? Why? Why not? Is anything else flawed?
Loopback connections are in general not subject to man-in-the-middle attacks (from other non-root processes on the same box, or from outside the box).
If you intend to use loopback as a poor-man's-transport-security (instead of TLS) you will have to ensure that both ends of the connection are suitably protected:
On the client end, what is the consequence of an unauthorised low-privilege process being allowed to connect to the server? If client access is protected by a plaintext password, you might consider a brute-force attack against that password, and address that with password complexity and auditing of failures.
On the server end, is it conceivable for a low-privilege process to cause the server to terminate (eg crash it with malformed input or resource exhaustion), and replace it with a trojaned listener that would steal the password from a client or return misleading results? You might consider using Unix's low port number protection to stop this (but then you have to worry about running as root/setuid/fork).
It depends on your threat model how much of a concern you have with protecting against local access; you may take the position that your infra sec and monitoring is sufficient to discount the scenario of an attacker getting a low-privilege account. But in the general case, you may be better off biting the bullet and going with the well-known solution of TLS.
It's secure from everything except the kernel and other applications running on the local host. If you can trust those, it's secure. If not, not.
TCP packets to a localhost address get routed back at the IP layer itself. Since they never pass over the wire, they cannot be hijacked or altered in transit.
From Wikipedia (relevant info emphasized):
Implementations of the Internet Protocol Suite include a virtual network interface through which network application clients and servers can communicate when running on the same machine. It is implemented entirely within the operating system's networking software and passes no packets to any network interface controller. Any traffic that a computer program sends to a loopback IP address is simply and immediately passed back up the network software stack as if it had been received from another device.
Unix-like systems usually name this loopback interface lo or lo0.
Various IETF standards reserve the IPv4 address block 127/8 (from which 127.0.0.1 is most commonly used), the IPv6 address ::1, and the name localhost for this purpose.
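A small Java sketch illustrating the point: a connection made to 127.0.0.1 has loopback addresses on both ends and never touches a real network interface (class and method names here are mine):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class LoopbackDemo {

    // Binds a listener on 127.0.0.1, connects to it, and checks that both
    // the local and the remote endpoint of the client socket are loopback
    // addresses, i.e. the traffic is handled entirely inside the host.
    public static boolean bothEndsLoopback() {
        try (ServerSocket server =
                 new ServerSocket(0, 1, InetAddress.getLoopbackAddress());
             Socket client = new Socket(InetAddress.getLoopbackAddress(),
                                        server.getLocalPort())) {
            return client.getLocalAddress().isLoopbackAddress()
                && client.getInetAddress().isLoopbackAddress();
        } catch (IOException e) {
            return false;
        }
    }
}
```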
I'm developing a chat application using app.js, which is a webkit+node.js framework.
So I have a Node.js plus bridged web browser environment on both sides.
I want to build a file transfer feature somewhat similar to Skype's.
So, the initial idea is to:
1. Connect clients to the main server.
2. Each client gets the IP of the opposite one.
3. Start a socket or websocket server on both clients and connect them to each other.
4. The sender reads the file and transmits it to the receiver.
Questions are:
1. I'm not really sure that one client can "see" the other.
2. A file is binary data, but websockets are made for text messages, so I need some kind of encoding/decoding. I thought about base64, but it has roughly 33% of overhead. So I need something more efficient (base128?).
3. If websockets are not efficient, should I use TCP sockets instead? What problems can appear if I decide to use them?
Yeah, I know about node2node and BinaryJS; I just don't know whether I should use them. And I really want to do something myself.
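On the base64 point above: the overhead is exactly one third, since every 3 input bytes become 4 output characters (with padding for any tail bytes). A quick stdlib check, with a class name of my own invention:

```java
import java.util.Base64;

public class Base64Overhead {

    // Returns the base64-encoded length for n raw bytes,
    // illustrating the 4/3 (about 33%) size cost.
    public static int encodedLength(int n) {
        byte[] raw = new byte[n];
        return Base64.getEncoder().encodeToString(raw).length();
    }
}
```

So 300 raw bytes encode to 400 characters; there is no standard "base128" that avoids this while staying inside a text channel, since text encodings cannot use all 256 byte values.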
OK, with your communication looking like this:
(C->N)<->N<->(N->C)
(...) is installed on one client's machine. N's are node servers, C's are web clients.
This is out of your control. Some file-sharing apps send test packets from the central server to clients to check whether ports are open, NAT rules are configured correctly, etc. Your clients will start their own servers on some port; your master server can potentially create a test connection to these servers to see whether they're started correctly and open to the web, BEFORE telling other clients that they can send files.
Websockets are great for status messages from your servers to the web GUIs and for general client-to-client communication. For the actual file transfers, I would use TCP sockets; see the next answer. On the other hand, base64 encoding is really not a slow process; play with it and benchmark its performance, then back up your decision with data.
You could use a combination: websockets from your servers to the web GUIs, but TCP communication between the servers themselves. TCP servers (and streams) aren't hard to set up in Node, I see no disadvantages. It might actually be less complicated than installing node2node on those servers, since TCP is already built-in.
I'm writing a piece of P2P software, which requires a direct connection to the Internet. It is decentralized, so there is no always-on server that it can contact with a request for the server to attempt to connect back to it in order to observe if the connection attempt arrives.
Is there a way to test the connection for firewall status?
I'm thinking in my dream land where wishes were horses, there would be some sort of 3rd-party, public, already existent servers to whom I could send some sort of simple command, and they would send a special ping back. Then I could simply listen to see if that arrives and know whether I'm behind a firewall.
Even if such a thing does not exist, are there any alternative routes available?
Nantucket - does your service listen on UDP or TCP?
For UDP - what you are sort of describing is something the STUN protocol was designed for. It matches your definition of "some sort of simple command, and they would send a special ping back"
STUN is a very "ping-like" (UDP) protocol in which a server echoes back to a client the IP and port it sees the client as. The client can then compare the server's response with its locally enumerated IP address. If they match, the client host can self-determine that it is directly connected to the Internet. Otherwise, the client must assume it is behind a NAT - but for the majority of routers, the request has just created a port mapping that can be used for other P2P connection scenarios.
Further, you can use the RESPONSE-PORT attribute in the STUN binding request to make the server respond to a different port. This will effectively allow you to detect whether you are firewalled or not.
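The local-address comparison step can be sketched like this in Java; the class and method are hypothetical, and a real client would obtain the reported address from the STUN binding response rather than pass it in directly:

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

public class NatCheck {

    // Sketch of the comparison described above: if the address a STUN
    // server reports back matches one of our locally enumerated interface
    // addresses, we are likely directly connected; otherwise we are
    // probably behind a NAT.
    public static boolean isLocalAddress(InetAddress reported) {
        try {
            for (NetworkInterface nic :
                     Collections.list(NetworkInterface.getNetworkInterfaces())) {
                for (InetAddress addr :
                         Collections.list(nic.getInetAddresses())) {
                    if (addr.equals(reported)) {
                        return true;
                    }
                }
            }
        } catch (SocketException e) {
            // Could not enumerate interfaces; treat as "not local".
        }
        return false;
    }
}
```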
TCP - this gets a little tricky. STUN can partially be used to determine if you are behind a NAT, or you can simply make an HTTP request to whatismyip.com and parse the result to see if there's a NAT. But it stays tricky, as there's no service on the Internet that I know of that will test a TCP connection back to you.
With all the above in mind, the vast majority of broadband users are likely behind a NAT that also acts as a firewall. Either given by their ISP or their own wireless router device. And even if they are not, most operating systems have some sort of minimal firewall to block unsolicited traffic. So it's very limiting to have a P2P client out there than can only work on direct connections.
With that said, on Windows (and likely others), you can have your app's install package register with the Windows Firewall so your app is not blocked. But if you aren't targeting Windows, you may have to ask the user to manually adjust his firewall software.
Oh shameless plug. You can use this open source STUN server and client library which supports all of the semantics described above. Follow up with me offline if you need access to a stun service.
You might find this article useful
http://msdn.microsoft.com/en-us/library/aa364726%28v=VS.85%29.aspx
I would start with each OS and ask if firewall services are turned on. Secondly, I would attempt the socket connections and determine from the error codes whether connections are being reset or are timing out. I'm only familiar with Winsock coding, so I can't really say much for Linux or Mac OS.
I have a website and application which use a significant number of connections. It normally has about 3,000 connections statically open, and can receive anywhere from 5,000 to 50,000 connection attempts in a few seconds time frame.
I have had the problem of running out of local ports to open new connections due to TIME_WAIT status sockets. Even with tcp_fin_timeout set to a low value (1-5), this seemed to just be causing too much overhead/slowdown, and it would still occasionally be unable to open a new socket.
I've looked at tcp_tw_reuse and tcp_tw_recycle, but I am not sure which of these would be the preferred choice, or if using both of them is an option.
According to Linux documentation, you should use the TCP_TW_REUSE flag to allow reusing sockets in TIME_WAIT state for new connections.
It seems to be a good option when dealing with a web server that has to handle many short TCP connections left in a TIME_WAIT state.
As described here, TCP_TW_RECYCLE could cause some problems when using load balancers...
EDIT (to add some warnings ;) ):
As mentioned in the comment by @raittes, the "problems when using load balancers" part is about public-facing servers. When recycle is enabled, the server can't distinguish new incoming connections from different clients behind the same NAT device.
NOTE: net.ipv4.tcp_tw_recycle has been removed from Linux in 4.12 (4396e46187ca tcp: remove tcp_tw_recycle).
SOURCE: https://vincent.bernat.im/en/blog/2014-tcp-time-wait-state-linux
pevik mentioned an interesting blog post going the extra mile in describing all available options at the time.
Modifying kernel options must be seen as a last-resort option, and shall generally be avoided unless you know what you are doing... if that were the case you would not be asking for help over here. Hence, I would advise against doing that.
The most suitable piece of advice I can provide is pointing out the part describing what a network connection is: quadruplets (client address, client port, server address, server port).
If you can make the available ports pool bigger, you will be able to accept more concurrent connections:
Client address & client ports you cannot multiply (out of your control)
Server ports: you can only change this by tweaking a kernel parameter; less critical than changing TCP buckets or reuse, if you know how many ports you need to leave available for other processes on your system
Server addresses: adding addresses to your host and balancing traffic on them:
behind L4 systems already sized for your load or directly
resolving your domain name to multiple IP addresses (and hoping the load will be shared across addresses through DNS for instance)
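To put numbers on the quadruplet argument: with one client address and one server address/port, the only free component is the ephemeral client port, so on a default Linux range of 32768-60999 (an assumption; check /proc/sys/net/ipv4/ip_local_port_range on your system) you get about 28k concurrent connections, and each extra server address or port multiplies that limit:

```java
public class QuadrupletMath {

    // Maximum concurrent connections from one client address:
    // each extra server address or server port contributes a fresh
    // set of distinct (client addr, client port, server addr, server port)
    // quadruplets.
    public static long maxConnections(int ephemeralPorts,
                                      int serverAddresses,
                                      int serverPorts) {
        return (long) ephemeralPorts * serverAddresses * serverPorts;
    }
}
```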
According to the VMware document, the main difference is that TCP_TW_REUSE works only on outbound communications.
TCP_TW_REUSE uses server-side time-stamps to allow the server to use a time-wait socket port number for outbound communications once the time-stamp is larger than the last received packet. The use of these time-stamps allows duplicate packets or delayed packets from the old connection to be discarded safely.
TCP_TW_RECYCLE uses the same server-side time-stamps, however it affects both inbound and outbound connections. This is useful when the server is the first party to initiate connection closure. This allows a new client inbound connection from the source IP to the server. Due to this difference, it causes issues where client devices are behind NAT devices, as multiple devices attempting to contact the server may be unable to establish a connection until the Time-Wait state has aged out in its entirety.
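Summing up the advice above as a sketch of the corresponding sysctl changes (assuming a reasonably modern Linux; run as root):

```shell
# Allow reuse of TIME_WAIT sockets for new *outbound* connections,
# which is the safe option of the two discussed above.
sysctl -w net.ipv4.tcp_tw_reuse=1

# Do NOT enable tcp_tw_recycle: it breaks clients behind NAT devices
# and was removed entirely in Linux 4.12.
```

To make the setting persistent, put the net.ipv4.tcp_tw_reuse line in a file under /etc/sysctl.d/ instead of using sysctl -w.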