Can connection abort in J1939 for any request be destination specific? - protocols

When a node sends a request to the vehicle with PGN 0xEA00 (the Request PGN) in J1939, what are the possible reasons for a connection abort (TP.CM control byte 255, i.e. 0xFF) from the connection manager?
And is this connection abort destination specific?
-> Detailed reasons for a connection abort
-> If the connection is aborted, is the abort destination specific?
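For context on what the abort carries: the connection abort is a TP.CM message (PGN 0xEC00) whose first byte is the control value 255 (0xFF) and whose second byte is the abort reason. A minimal decoding sketch in Python; the reason texts paraphrase SAE J1939-21 as I understand it and should be checked against the standard, and the function name is mine:

```python
# Hedged sketch: decode a J1939 TP.CM_Conn_Abort payload (PGN 0xEC00).
# Reason texts paraphrase SAE J1939-21; codes above 3 are reserved/OEM-specific.
ABORT_REASONS = {
    1: "node is already in one or more connection-managed sessions",
    2: "system resources were needed for another task",
    3: "a timeout occurred",
}

def decode_conn_abort(data: bytes) -> dict:
    """Decode the 8-byte TP.CM payload; control byte 255 means Conn_Abort."""
    if len(data) != 8 or data[0] != 255:
        raise ValueError("not a TP.CM_Conn_Abort payload")
    reason = data[1]
    # Bytes 6-8 carry the PGN of the aborted session, least significant byte first.
    pgn = data[5] | (data[6] << 8) | (data[7] << 16)
    return {
        "reason": reason,
        "reason_text": ABORT_REASONS.get(reason, "reserved/OEM-specific"),
        "pgn": pgn,
    }
```

An abort for PGN 0xEA00 with reason 3 would arrive as `FF 03 FF FF FF 00 EA 00`.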

Related

close connection from proxy server to target server and not from client to proxy server

Objective:
Keep the connection between the client and the SOCKS proxy open, and reuse it to send multiple HTTPS requests to different targets (example targets: google.com, cloudflare.com) without closing the socket when switching between targets.
Step 1:
So I have a client which connects to the SOCKS proxy server over a TCP connection. That is the client socket (and the only socket/file descriptor used in this project).
client -> proxy
Step 2:
Then, after the connection is established and verified, the client does a TLS connect to the target server, which can be for example google.com (the DNS lookup is done before this).
Now we have connection:
client -> proxy -> target
Step 3:
Then client sends HTTPS request over it and receives response successfully.
Issue appears:
After that I want to explicitly close the connection between the proxy and the target so I can send a request to another target. This requires closing the TLS connection, and I don't know how to do that without also closing the connection between the client and the proxy, which is not acceptable.
Possible solutions?:
1:
Would sending a Connection: close\r\n header in a request to the current target close the connection only between the proxy and the target, without closing the socket?
2:
If I added Connection: close\r\n to the headers of every request, would that close the socket, making this an invalid solution?
Question:
(NodeJS) I made a custom https Agent which handles the Agent's callback(req, opts) method, where the opts argument holds the request options from what the client sent to the target (through the proxy). This callback returns a TLS socket after it is connected; I built the TLS socket connection outside of the callback and passed it to the agent. Is it possible to use this to close the connection between the proxy and the target using req.close(), and would this close the socket? Also, what is the point of req in the Agent's callback; can it be used in this case?
Any help is appreciated.
If you spin up Wireshark and look at what is happening through your proxy, you should quickly see that HTTP(S) requests are connection oriented, end-to-end (for HTTPS), and also time-boxed. If you stop and think about it, they are necessarily so, to avoid issues such as the confused deputy problem.
So the first bit to note is that for HTTPS, the proxy will only see the initial CONNECT request, and then from there on everything is just a TCP stream of TLS bytes. Which means that the proxy won't be able to see the headers (that is, unless your proxy is a MITM that intercepts the TLS handshake, and you haven't mentioned this, so I've assumed not).
The next bit is that the agent/browser will open connections in parallel (typically a half-dozen for a browser) and will also use pipelining and keep-alive to send multiple requests down the same connection.
Then there are connection limits imposed by the browser, and servers. These typically cap the number of requests, and the duration that they are held open, before speculatively closing them. If they didn't, any reasonably busy server would quickly exhaust all their TCP sockets.
So all-in, what you are looking to achieve isn't going to work.
That said, if you are looking to improve performance, the node client has a few things you can enable and tweak:
Enable TLS session reuse, which will make connections much more efficient to establish.
Enable keep-alive, which will funnel multiple requests through the same connection.
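As background on why the proxy-to-target leg is inherently per-target: in SOCKS5 (RFC 1928) the CONNECT request names exactly one destination, so switching targets means issuing a new CONNECT (and, in practice, a new tunnel). A minimal sketch of building that request in Python; the helper name is mine:

```python
def socks5_connect_request(host: str, port: int) -> bytes:
    """Build a SOCKS5 CONNECT request (RFC 1928) using a domain-name address.

    Layout: VER=5, CMD=1 (CONNECT), RSV=0, ATYP=3 (domain),
    then a length-prefixed hostname and a 2-byte big-endian port.
    """
    name = host.encode("idna")
    if len(name) > 255:
        raise ValueError("hostname too long for SOCKS5 domain address")
    return (b"\x05\x01\x00\x03"
            + bytes([len(name)])
            + name
            + port.to_bytes(2, "big"))
```

The destination is baked into the request bytes, which is why a tunnel to google.com cannot simply be repointed at cloudflare.com mid-stream.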

how does the cassandra server identify crashed client connections?

How does a Cassandra server running on Windows identify connections that were closed abnormally by a client, so that it closes them and allows new connections when the native_transport_max_concurrent_connections property is set?
In the cassandra.cluster module there is an idle_heartbeat_interval config property. This says how often idle connections are heartbeated to see if they are still active; if a connection is no longer active, it should be closed.
https://datastax.github.io/python-driver/api/cassandra/cluster.html
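For reference, idle_heartbeat_interval is a constructor argument of Cluster in the DataStax Python driver linked above (a driver-side setting, not a cassandra.yaml property). A hedged configuration sketch, assuming the cassandra-driver package is installed and a node is reachable at 127.0.0.1:

```python
from cassandra.cluster import Cluster

# idle_heartbeat_interval: seconds between heartbeats on idle connections
# (default 30). Connections that fail the heartbeat are closed, which frees
# a slot under native_transport_max_concurrent_connections on the server.
cluster = Cluster(
    contact_points=["127.0.0.1"],  # assumed node address
    idle_heartbeat_interval=30,
)
session = cluster.connect()
```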

Linux application doesn't ACK the FIN packet and retransmission occurs

I have a server running on linux(kernel 3.0.35) and a client running on Windows 7. Every time the client connects to the server a TCP retransmission occurs:
Here is the traffic from wireshark:
http://oi39.tinypic.com/ngsnyb.jpg
What I think happens is this:
The client (192.168.1.100) connects to the server (192.168.1.103), and the connection is successful. At some point the client decides to close the connection (FIN, ACK), but the server doesn't ACK the FIN.
Then the client starts a new connection; that connection is ACKed and is successful. In the meantime the Windows kernel keeps retransmitting the FIN, ACK packet and finally decides to send a reset.
At the moment the second connection is established, I don't receive the data that the client is sending (the packet with 16 bytes of payload) on the server side; I receive these bytes only after the RST packet.
On the server side I'm using the poll() function to check for POLLIN events, and I'm not notified of any data until the RST packet arrives.
So does anyone know why this situation is happening?
Your data bytes are not sent on that 52687 connection but rather the following 52690 connection. I'd guess that the server app is accepting only one connection at a time (the kernel will accept them in advance and just hold the data) and thus not seeing data from the second connection until the first connection is dead and it moves on to the next.
That doesn't explain why your FIN is not being ACK'd. It should. Perhaps there is something in the kernel that doesn't like open-then-close-with-no-data connections? Maybe some kind of attack mitigation? Firewall rules?
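The one-connection-at-a-time hypothesis above is avoidable by accepting inside the poll loop itself. A minimal Python sketch of a poll()-based server that accepts every pending connection, so data on a second connection is seen even while the first is still open (function and variable names are mine; POSIX only, since select.poll is unavailable on Windows):

```python
import select
import socket

def run_once(listener, poller, conns):
    """Handle one batch of poll events; return list of (fd, data) reads."""
    reads = []
    for fd, event in poller.poll(1000):
        if fd == listener.fileno():
            # Accept immediately instead of serializing connections.
            conn, _ = listener.accept()
            conn.setblocking(False)
            poller.register(conn, select.POLLIN)
            conns[conn.fileno()] = conn
        elif event & (select.POLLIN | select.POLLHUP):
            conn = conns[fd]
            data = conn.recv(4096)
            if data:
                reads.append((fd, data))
            else:
                # Empty read = peer's FIN; close so the kernel can ACK/clean up.
                poller.unregister(conn)
                conn.close()
                del conns[fd]
    return reads
```

Driving this loop continuously means a FIN on one connection never blocks reads on another.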

difference between socket failing with EOF and connection reset

For testing a networking application, I have written an asio port "proxy": it listens on a socket for the application client activity and sends all incoming packets to another socket where it is listened to by the application server, and back.
Now when either the application client or the server disconnects for various reasons, the "proxy" usually gets an EOF, but sometimes it receives a "connection reset".
Hence, the question: when does a socket fail with a "connection reset" error?
A TCP connection is "reset" when the local end attempts to send data to the remote end and the remote end answers with a packet with the RST flag set (instead of ACK). This almost always happens because the remote end doesn't know about any TCP connection that matches the remote and local addresses and the remote and local port numbers. Possible reasons include:
The remote end has been rebooted
A state-tracking firewall somewhere in the path has been rebooted/changed/added/removed
A load balancer has incorrectly directed the TCP connection to a different node than the one it was supposed to go to.
The remote IP address has changed hands (the new owner doesn't know anything about TCP connections belonging to the old owner).
The remote end considers that the TCP connection has been closed already (but somehow the local end doesn't agree).
Note that if the remote end answers the initial (SYN) packet in a TCP connection with a RST packet, it is considered "Connection refused" instead of "Connection reset by peer".
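The EOF-versus-reset distinction can be reproduced locally: closing a socket with SO_LINGER set to a zero timeout makes close() send RST instead of FIN, so the peer sees a reset instead of a clean EOF. A small loopback-only Python sketch (the helper name is mine; the linger trick is a common POSIX behavior, not something the question's proxy should do deliberately):

```python
import socket
import struct

def peer_shutdown_result(rst: bool) -> str:
    """Return 'eof' or 'reset' depending on how the peer closed the socket."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()
    if rst:
        # SO_LINGER with l_onoff=1, l_linger=0 makes close() send RST, not FIN.
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                        struct.pack("ii", 1, 0))
    conn.close()
    srv.close()
    try:
        data = cli.recv(1024)
        result = "eof" if data == b"" else "data"
    except ConnectionResetError:
        result = "reset"
    cli.close()
    return result
```

A graceful FIN surfaces as a zero-byte read (EOF); the lingered close surfaces as ConnectionResetError, which is exactly the split the asio proxy is observing.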

Unable to connect to azure from a specific server

I have an application using an Azure Service Bus queue which can't connect to the queue. On my PC it works fine, and on our dev server it also works fine. We have deployed it on our test box and are getting this error when trying to receive messages from the queue:
Microsoft.ServiceBus.Messaging.MessagingCommunicationException: Could
not connect to net.tcp://jeportal.servicebus.windows.net:9354/. The
connection attempt lasted for a time span of 00:00:14.9062482. TCP
error code 10060: A connection attempt failed because the connected
party did not properly respond after a period of time, or established
connection failed because connected host has failed to respond
168.62.48.238:9354. ---> System.ServiceModel.EndpointNotFoundException: Could not connect to
net.tcp://jeportal.servicebus.windows.net:9354/. The connection
attempt lasted for a time span of 00:00:14.9062482. TCP error code
10060: A connection attempt failed because the connected party did
not properly respond after a period of time, or established
connection failed because connected host has failed to respond
168.62.48.238:9354. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed
because connected host has failed to respond
168.62.48.238:9354
We have disabled the firewall and it still doesn't work. Any suggestions on troubleshooting?
If this is related to a firewall setting, you may want to try setting the connectivity mode to Http. More details at:
http://msdn.microsoft.com/en-us/library/windowsazure/microsoft.servicebus.connectivitysettings.mode.aspx
and:
http://msdn.microsoft.com/en-us/library/windowsazure/microsoft.servicebus.connectivitymode.aspx
Try to increase the timeouts on your bindings to 1 minute and add your server application as an exception in Windows Firewall manually.
So this ended up being a simple issue of our network firewall being restricted. We had told our SAs to open up port 9354 going to the Service Bus. They said they had opened it... but they hadn't. I walked through it with them and we discovered it wasn't open.
