If you open a website and then follow a link within the same site by opening it in incognito mode, do browsers keep the connection open at the TCP level (as they would per https://www.rfc-editor.org/rfc/rfc2616#section-8.1)?
I've never heard of a standard covering incognito mode, but browsers certainly should not reuse TCP connections from non-incognito mode in incognito mode. I've just checked Firefox and Chrome: they open new TCP connections.
You can check this yourself. Capture a dump with Wireshark, apply the HTTP filter, find a request, and choose "Follow TCP Stream" from the context menu. You will see the whole TCP stream; it's easy to read.
I am facing the following situation:
I have several devices (embedded devices running Arch Linux), and I would like to have administrative access to each device at any time. The problem is that the devices are behind NAT, so establishing a connection from a server to a device is not possible. How could I achieve this?
I thought I could write a simple service that runs on each device and opens a connection to a server at startup. This TCP connection would remain open and could be used from the server to administer the device. But is it a good idea to keep TCP connections open for a long time? If I have a lot of devices, for example 1000, will I have a problem on the server side with 1000 open TCP connections?
Is there maybe another way?
Thanks a lot!
But is it a good idea to keep TCP connections open for a long time?
It's not necessarily a bad idea, although in practice the connections will fail from time to time (e.g. due to network reconfiguration, temporary network outages, etc.), so your clients should contain logic to reconnect automatically when that happens. Also note that TCP will usually not detect that a completely idle connection no longer has connectivity, so to avoid "zombie connections" that aren't actually connected, you may want to either enable SO_KEEPALIVE or have your clients and/or server send a (very occasional) bit of dummy data on the socket, just to goose the TCP stack into checking whether connectivity still exists.
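For illustration, here is a minimal client-side sketch of that reconnect-plus-heartbeat approach in Python; the server host, port, and intervals are made-up placeholders, and it assumes the server's protocol tolerates an occasional dummy byte:

```python
# Minimal sketch: reconnect forever, enable keepalives, and send occasional
# dummy data so a dead path is eventually noticed by the TCP stack.
import socket
import time

SERVER_HOST = "admin.example.com"   # placeholder address
SERVER_PORT = 4000                  # placeholder port
HEARTBEAT_INTERVAL = 60             # seconds between dummy bytes
RECONNECT_DELAY = 10                # seconds to wait before reconnecting

def run_client():
    while True:                                         # reconnect forever
        try:
            with socket.create_connection((SERVER_HOST, SERVER_PORT)) as sock:
                sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
                while True:
                    # The "very occasional bit of dummy data": if connectivity
                    # is gone, this send (or a later one) will eventually fail.
                    sock.sendall(b"\x00")
                    time.sleep(HEARTBEAT_INTERVAL)
        except OSError:
            time.sleep(RECONNECT_DELAY)                 # back off, then retry

if __name__ == "__main__":
    run_client()
```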
If i have a lot of devices, for example 1000, will i have a problem on the server side with 1000 open TCP connections?
Scaling is definitely an issue you'll need to think about. For example, select() is typically implemented to handle only up to a fixed number of descriptors (often 1024), and if your server uses the thread-per-connection model, you'll find that a process with 1000+ threads is not very efficient. Check out the C10K problem article for lots of interesting details about the various approaches and how well they scale (or don't).
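As a small illustration of one scalable approach, here is a sketch of an event-driven server using Python's selectors module (epoll-backed on Linux), which avoids both select()'s fixed descriptor limit and a thread per connection; the port and the echo handling are placeholders:

```python
# Sketch: single-threaded, event-driven server multiplexing many connections.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if data:
        conn.send(data)       # echo back; a real server would parse its protocol
    else:                     # peer closed the connection
        sel.unregister(conn)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 4000))      # placeholder port
server.listen(1024)
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _events in sel.select():
        key.data(key.fileobj)       # call the registered callback
```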
Is there maybe another way?
If you don't need immediate access to the clients, you could always have them check in periodically instead (e.g. once every 5 minutes). Alternatively, you could have them occasionally send a UDP packet to the server rather than keeping a TCP connection open all the time, just to let the server know they are present, and have the server indicate to them somehow (e.g. by updating a well-known web page that the clients check from time to time) when it wants one of them to open a full TCP connection. Or just use multiple servers to share the load...
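A device-side sketch of that periodic UDP check-in, with the server address, device ID, and interval all made up for illustration:

```python
# Sketch: announce presence with a small UDP datagram every few minutes
# instead of holding a TCP connection open.
import socket
import time

SERVER = ("admin.example.com", 4001)   # placeholder server
DEVICE_ID = b"device-0042"             # placeholder identifier
CHECKIN_INTERVAL = 300                 # e.g. once every 5 minutes

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    sock.sendto(DEVICE_ID, SERVER)     # fire-and-forget presence packet
    time.sleep(CHECKIN_INTERVAL)
```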
The only limit I know of is imposed by connection-state tracking in the iptables code. Check the value of net.ipv4.netfilter.ip_conntrack_max on both sides if you're using this, to make sure you have enough headroom for other activities.
If you set the socket option SO_KEEPALIVE before the connect() call, the kernel will send TCP keepalives to make sure the far end is still there. This will mean that connections won't linger forever in the event of a reboot.
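For example, a Python sketch of doing exactly that; the Linux-specific tuning knobs and the timing values are my own choices, not anything required by the answer above:

```python
# Sketch: turn on SO_KEEPALIVE before connect(), then (Linux-only) tune how
# quickly a dead peer is detected.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Start probing after 60 s of idleness, probe every 10 s, give up after
# 5 failed probes (so a vanished peer is noticed in roughly two minutes).
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

sock.connect(("admin.example.com", 4000))   # placeholder endpoint
```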
In Linux, can an application enable or disable TCP window scaling for the TCP/IP connections it creates, as opposed to making a system-wide modification through sysctl using the net.ipv4.tcp_window_scaling parameter?
No, you can't. There are no per-process APIs for sockets at all, just per-socket APIs and global kernel configuration.
But you don't need to modify the scale settings directly. You just need to set the socket receive buffer size you want prior to connecting; the appropriate window scale is then negotiated during the connect handshake. If you want no window scaling, make sure your socket receive buffer is < 64 KB before connecting. In the case of accepted sockets, that buffer size is set on the listening socket.
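As an illustration of that advice, a Python sketch that sets the receive buffer below 64 KB before connecting, and does the same on a listening socket for accepted connections; addresses and ports are placeholders, and the exact window-scale behaviour is ultimately up to the kernel:

```python
# Sketch: a receive buffer under 64 KB before connect() means no window
# scaling is needed for this connection.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 32 * 1024)   # well under 64 KB
sock.connect(("example.com", 80))                                 # placeholder peer

# For accepted sockets, set the buffer on the listening socket instead:
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 32 * 1024)
listener.bind(("0.0.0.0", 8080))
listener.listen(128)
```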
Is it possible to initiate a TCP connection from a browser?
If so, do there already exist browser extensions (especially for Firefox and Chrome) that do this? If no such extensions exist yet, do you know the core elements/functions needed to create a Firefox/Chrome browser-initiated TCP connection?
The Chrome browser (I think since v24 in the stable channel) lets you host a TCP server, and the samples indicate that it can connect to a telnet server, which means it is also capable of acting as a TCP client.
https://github.com/GoogleChrome/chrome-app-samples/tree/master/tcpserver
https://github.com/GoogleChrome/chrome-app-samples/tree/master/telnet
But these APIs are not standardized, so if you can work with WebSockets, prefer those.
http://developer.chrome.com/apps/socket.html
There are WebSockets, but they are limited to the WebSocket protocol outlined in RFC 6455.
So far, most modern web browsers support it.
I'm going through the whitepaper by RADVISION on NAT/firewall traversal for H.323 endpoints.
It suggests using ITU-T H.460.18, H.460.17, and H.460.19.
H.460.17 is a very clear approach to NAT traversal, but I'm not so clear about H.460.18.
Both present a clear solution for firewalls, but how is H.460.18 a solution for NAT traversal?
Regards,
H.460.18 works by opening pinholes when moving from one protocol/network connection to the next.
H.323 connects a call in the following classic way:
RAS is used over UDP to register to the gatekeeper
Q.931 is used over TCP (usually) to initiate a call
H.245 is used to negotiate media capabilities and open media channels
RTP/RTCP is used to send actual media
Now, to be able to open the Q.931 and H.245 connections, the endpoint needs to be listening on a TCP address for incoming connections. If the endpoint is behind a NAT, that is impossible to achieve.
So H.460.18 adds special messages to get these TCP connections opened from the inside out (i.e. in reverse).
On RAS, when a new TCP connection needs to be opened for Q.931, a RAS SCI (ServiceControlIndication) message is sent to the endpoint so that the endpoint opens the TCP connection for Q.931 itself instead of just waiting for an incoming connection.
On Q.931, when a new H.245 connection needs to be opened, it is already initiated over Q.931 today; but now it is always opened from the endpoint behind the NAT towards a public address.
To sum it up:
H.460.17 uses a single connection outbound from the endpoint to the gatekeeper and then just tunnels everything on top of it.
H.460.18 just opens up a new pinhole from one protocol to the next by having the endpoint behind a NAT do the connecting instead of doing the listening.
The problem with H.460.17 is that virtually no H.323 equipment supports it.
H.460.18 works nicely, even across vendors. It lets the endpoint behind the firewall poke a hole and then uses that hole for communication in both directions. It's rather simple once you read through the standards document. But beware that it is patented by Tandberg, so you have to get a (free) license before you can implement it.
You can look at the GNU Gatekeeper to see the details of how H.460.18 gets through the firewall.
I have a website and application which use a significant number of connections. It normally has about 3,000 connections statically open and can receive anywhere from 5,000 to 50,000 connection attempts within a few seconds.
I have had the problem of running out of local ports to open new connections, due to sockets in the TIME_WAIT state. Even with tcp_fin_timeout set to a low value (1-5), this seemed to cause too much overhead/slowdown, and it would still occasionally be unable to open a new socket.
I've looked at tcp_tw_reuse and tcp_tw_recycle, but I am not sure which of these would be the preferred choice, or if using both of them is an option.
According to the Linux documentation, you should use tcp_tw_reuse to allow reusing sockets in the TIME_WAIT state for new connections.
It seems to be a good option when dealing with a web server that has to handle many short TCP connections left in a TIME_WAIT state.
As described here, tcp_tw_recycle can cause problems when using load balancers...
EDIT (to add some warnings ;) ):
As mentioned in a comment by @raittes, the "problems when using load balancers" concern public-facing servers. When recycle is enabled, the server can't distinguish new incoming connections from different clients behind the same NAT device.
NOTE: net.ipv4.tcp_tw_recycle has been removed from Linux in 4.12 (4396e46187ca tcp: remove tcp_tw_recycle).
SOURCE: https://vincent.bernat.im/en/blog/2014-tcp-time-wait-state-linux
pevik mentioned an interesting blog post going the extra mile in describing all available options at the time.
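For reference, a small sketch (assuming a Linux host and root privileges) of checking and enabling the setting through its /proc/sys entry; the usual one-liner equivalent is sysctl -w net.ipv4.tcp_tw_reuse=1:

```python
# Sketch: read and enable net.ipv4.tcp_tw_reuse via procfs (requires root).
# Note that tcp_tw_reuse relies on TCP timestamps being enabled.
from pathlib import Path

TW_REUSE = Path("/proc/sys/net/ipv4/tcp_tw_reuse")

print("tcp_tw_reuse is currently", TW_REUSE.read_text().strip())
TW_REUSE.write_text("1\n")   # allow reusing TIME_WAIT sockets for new outbound connections
```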
Modifying kernel options must be seen as a last resort, and should generally be avoided unless you know what you are doing... and if that were the case, you would not be asking for help here. Hence, I would advise against it.
The most suitable piece of advice I can give is to point out what identifies a network connection: the quadruplet (client address, client port, server address, server port).
If you can make the available port pool bigger, you will be able to accept more concurrent connections:
Client addresses & client ports: you cannot multiply these (they are out of your control)
Server ports: you can only change these by tweaking a kernel parameter (less critical than changing TCP buckets or reuse), provided you know how many ports you need to leave available for other processes on your system; see the sketch after this list
Server addresses: add addresses to your host and balance traffic across them:
behind L4 systems already sized for your load, or directly
by resolving your domain name to multiple IP addresses (and hoping the load will be shared across addresses, e.g. via DNS round-robin)
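Regarding the "server ports" point above, here is a sketch (assuming Linux, root privileges, and that the kernel parameter meant is net.ipv4.ip_local_port_range) of widening the local port range; the chosen range is arbitrary:

```python
# Sketch: widen the ephemeral port range via procfs (requires root).
# Equivalent to: sysctl -w net.ipv4.ip_local_port_range="1024 65000"
from pathlib import Path

PORT_RANGE = Path("/proc/sys/net/ipv4/ip_local_port_range")

low, high = PORT_RANGE.read_text().split()
print(f"current range: {low}-{high}")

# Arbitrary wider range; keep out any ports other daemons must be able to bind.
PORT_RANGE.write_text("1024 65000\n")
```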
According to the VMware document, the main difference is that tcp_tw_reuse works only on outbound connections.
tcp_tw_reuse uses server-side timestamps to allow the server to reuse a TIME_WAIT socket's port number for outbound communications once the timestamp is newer than that of the last received packet. The use of these timestamps allows duplicate or delayed packets from the old connection to be discarded safely.
tcp_tw_recycle uses the same server-side timestamps, but it affects both inbound and outbound connections. This is useful when the server is the first party to initiate connection closure, as it allows a new inbound connection from the same source IP. Because of this difference, it causes issues where client devices sit behind NAT devices: multiple devices attempting to contact the server may be unable to establish a connection until the TIME_WAIT state has aged out in its entirety.