DNS Packet (Header, Requests, Responses) How To Parse

What is the best way to parse a DNS response stream?
A DNS packet is split into three parts:
DNS Header, DNS Requests (Questions), DNS Responses (Answers)
Parsing each part is easy, but how do I know where the parts are split? There is no separator.
43bf8100000100010000000003777777076773746174696303636f6d0000010001c00c000100010000003c0004acd91663
43bf81000001000100000000 (Header)
03777777076773746174696303636f6d0000010001 (Requests)
c00c000100010000003c0004acd91663 (Responses)
See here: http://www.tcpipguide.com/free/t_DNSMessageHeaderandQuestionSectionFormat.htm
When I have multiple answers, or perhaps multiple questions, how do I know where the question (request) section ends and the answer (response) section begins?
Cheers
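For what it's worth, the trick is that there is no separator at all: the fixed 12-byte header carries QDCOUNT and ANCOUNT, which tell you how many question and answer records follow, and each record is self-delimiting (length-prefixed name labels ending in a zero byte or a compression pointer, fixed-size type/class/TTL fields, and an RDLENGTH for the answer data). Below is a minimal Python sketch of that walk, using the hex dump from the question; the function names are my own and error handling is omitted.

```python
import struct

def read_name(data, offset):
    """Read a DNS name (handling compression pointers); return (name, next offset)."""
    labels = []
    while True:
        length = data[offset]
        if length == 0:                          # 0x00 terminates the name
            return ".".join(labels), offset + 1
        if length & 0xC0 == 0xC0:                # 2-byte compression pointer
            pointer = struct.unpack_from("!H", data, offset)[0] & 0x3FFF
            suffix, _ = read_name(data, pointer)
            labels.append(suffix)
            return ".".join(labels), offset + 2
        offset += 1
        labels.append(data[offset:offset + length].decode("ascii"))
        offset += length

def parse(packet):
    # The header is always 12 bytes; its counts delimit the rest of the packet.
    ident, flags, qdcount, ancount, nscount, arcount = struct.unpack("!6H", packet[:12])
    offset = 12
    for _ in range(qdcount):                     # question ("request") section
        name, offset = read_name(packet, offset)
        qtype, qclass = struct.unpack_from("!2H", packet, offset)
        offset += 4
        print("question:", name, qtype, qclass)
    for _ in range(ancount):                     # answer ("response") section
        name, offset = read_name(packet, offset)
        rtype, rclass, ttl, rdlength = struct.unpack_from("!2HIH", packet, offset)
        offset += 10
        rdata = packet[offset:offset + rdlength]
        offset += rdlength
        print("answer:", name, rtype, "ttl", ttl, rdata.hex())

parse(bytes.fromhex(
    "43bf8100000100010000000003777777076773746174696303636f6d"
    "0000010001c00c000100010000003c0004acd91663"))
```

One caveat if you really are reading from a stream (TCP rather than UDP): DNS over TCP prefixes each message with a two-byte length field, which is how you know where one message ends and the next begins.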

Related

ICMP timestamp - firewall configured to drop timestamp request, but vulnerability scanner can send request and get a response

We use an external scanner (Qualys) to scan our external assets.
We have a firewall in front of the external assets, but it is configured to whitelist the scanner so that the external assets get scanned in-depth.
But the firewall is also configured to drop incoming ICMP timestamp requests, yet the scanner is still able to send a timestamp request and get a timestamp reply from the external assets behind the firewall.
We consulted the vendor; they conducted some scans and analysis on their end and replied that the target response cannot be controlled by the Qualys scan. The ICMP timestamp finding is an active QID, meaning it is only flagged based on the target's response.
And yes, I can see the timestamp responses, with the actual timestamp values, for the scanned assets.
So I am just wondering whether I am missing something here.
Apologies in advance if it seems to be a silly question to ask :)
Any guidance would be much appreciated.
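One way to sanity-check the firewall independently of Qualys is to send an ICMP timestamp request (type 13) yourself and see whether a timestamp reply (type 14) comes back. A rough sketch using Scapy (the target address is a placeholder, and it needs to run with raw-socket privileges):

```python
from scapy.all import IP, ICMP, sr1

# Placeholder target; replace with one of the scanned external assets.
target = "203.0.113.10"

# ICMP type 13 = timestamp request; a type 14 reply means the firewall let it through.
reply = sr1(IP(dst=target) / ICMP(type=13), timeout=2, verbose=False)

if reply is None:
    print("no reply - the timestamp request appears to be dropped")
elif reply.haslayer(ICMP) and reply[ICMP].type == 14:
    print("timestamp reply received:", reply[ICMP].ts_rx, reply[ICMP].ts_tx)
else:
    print("unexpected reply:", reply.summary())
```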

ISAPI Filter modifies 302 response - IIS drops request and puts into HTTPERR - IPv6 / HTTP2.0

Need some help to dig deeper into why IIS is behaving in a certain way. Edge/Chrome makes an HTTP2.0 request to IIS, using the IPv6 address in the header (https://[ipv6]/) which results in the server generating a 302 response. The ISAPI filter makes some mods to the 302 response and replaces the response buffer. IIS drops the request/response and logs in HTTPERR log:
<date> <time> fe80::a993:90bf:89ff:1a54%2 1106 fe80::bdbf:5254:27d2:33d8%2 443 HTTP/2.0 GET <url> 1 - 1 Connection_Dropped_List_Full <pool>
I suspect this is related to HTTP/2.0: when Fiddler is put in the middle, the connection isn't HTTP/2.0 anymore, it downgrades to HTTP/1.1, and it works.
When using an IPv4 address, it works. In either case the filter goes through the identical steps. There is no indication in the filter that anything went wrong.
Failed Request Tracing will not write buffers for incomplete/dropped requests that appear in HTTPERR log.
Is there a place where I can find out more detailed information about why IIS is dropping the request?
I did a network capture, and it looks like the browser is initiating the FIN teardown of the session.
Do you use any load balancer or reverse proxy before the request reaches IIS? This error indicates that the log cannot store any more dropped connections, so the underlying problem is that your connections are being dropped.
If you use a load balancer, the web application may be under heavy load, and because of this no threads are available to currently provide logging data to HTTP.sys. Check this.
Or the client closed the request before IIS sent its response, but IIS still sent the response. This is more likely to be a problem with the application itself, not IIS or HTTP.sys. Check this.
One thing I noticed is that if you change HTTP/2 to HTTP/1.1, it works. The main difference between HTTP/1.1 and HTTP/2 is performance.
HTTP/1.1 practically allows only one outstanding request per TCP connection (though HTTP pipelining allows more than one outstanding request, it still doesn’t solve the problem completely).
HTTP/2.0 allows the same TCP connection to be used for multiple parallel requests.
So it looks like when you use HTTP/2, one connection carries multiple requests and the application cannot handle those requests well, especially requests for images.
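To confirm that the protocol version really is the trigger (rather than something else Fiddler changes), it may be worth replaying the same request over HTTP/2 and HTTP/1.1 from a script instead of a browser. A rough sketch with Python's httpx library (the URL is a placeholder for the failing IPv6 URL, and the optional http2 extra must be installed):

```python
import httpx

url = "https://[2001:db8::1]/some/path"  # placeholder; use the IPv6 URL that fails

for use_http2 in (True, False):
    # verify=False only because the target is addressed by raw IP, not hostname
    with httpx.Client(http2=use_http2, verify=False, follow_redirects=False) as client:
        try:
            resp = client.get(url)
            print(resp.http_version, resp.status_code, dict(resp.headers))
        except httpx.HTTPError as exc:
            print("http/2" if use_http2 else "http/1.1", "failed:", exc)
```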
Another thing: Failed Request Tracing can capture all requests and responses, including those with status codes 200 and 302.

Monitoring outbound server http header information?

What tool or function can we use on our Linux server running CentOS to monitor the HTTP headers that are sent from our application to another application on a different server? I'm looking for HTTP header monitoring from server to server. My issue is that I have no idea how to capture the data sent from the server, meaning the HTTP headers sent via a POST. I have tried many methods and third-party tools like Fiddler2 and IEInspector, and the list goes on, but they only seem to capture the client headers and not what is being sent out from the server. I just need to capture the string being sent out via a POST and what is being returned. It seems simple, yet in this case I'm beyond lost and running out of time to resolve what should be a simple problem. Please advise, and thank you kindly.
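If you control the application making the outbound call, one option is to log the prepared request's headers from the HTTP client itself rather than trying to sniff them on the wire (otherwise a packet capture on the server is the usual route). A small illustrative sketch using Python's requests library with a placeholder endpoint; the same idea applies to whatever HTTP client your application actually uses:

```python
import requests

# Placeholder endpoint and payload - substitute the real outbound call.
url = "https://example.com/api/endpoint"
payload = {"key": "value"}

response = requests.post(url, data=payload, timeout=10)

# The PreparedRequest attached to the response shows exactly what went out.
print("request headers sent:", dict(response.request.headers))
print("request body sent:", response.request.body)
print("response headers received:", dict(response.headers))
```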

How to identify whether http request is from browser or from proxy server (or server)?

I have a server that takes HTTP requests and returns JSON data. How does my server know whether the HTTP request came from a client browser and not from a server, especially if traffic may be proxied from a client through another server before the call reaches mine?
I know I can check the HTTP headers for User-Agent, Remote-Addr, etc., but that is not secure; people can fake the request headers.
What other tricks I can do to identify the incoming request?
There is no way for you to know. "Anonymous proxies" will not have the X-Forwarded-For header. Some IRC servers will port-scan clients as they connect, looking for common proxy server ports like 8080, 3128, etc. You could hack up a tool like YAPH to look for proxies on people connecting to you, but it won't pick up phpproxy, or proxies running on unusual ports.
This is an uphill battle, and it's why attackers use them. If this is a problem, perhaps you should re-evaluate your business model or how your application functions.
If you're able to check for headers, you'll be able to see X-Forwarded-For, which will tell you the IP of the "real" client. Legitimate proxies add this header.
For browsers, the User-Agent header is what you'll be interested in. Popular browsers and crawlers populate this header.
That said, those headers can be faked or omitted. There is no single way to determine whether an incoming request is "real". It's best to combine as many headers, patterns, and behaviors as possible to judge the legitimacy of a request.
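As a concrete (and deliberately rough) illustration of the kind of header inspection described above, here is a sketch using Python's standard-library WSGI server; the header names are the real ones, but the decision logic is purely illustrative and, as noted, everything here can be forged by the client:

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # WSGI exposes request headers as HTTP_* keys; all of these can be forged.
    user_agent = environ.get("HTTP_USER_AGENT", "")
    forwarded_for = environ.get("HTTP_X_FORWARDED_FOR")
    via = environ.get("HTTP_VIA")
    remote_addr = environ.get("REMOTE_ADDR", "")

    hints = []
    if forwarded_for or via:
        hints.append(f"request passed through a well-behaved proxy (remote addr {remote_addr})")
    if not user_agent:
        hints.append("no User-Agent - unlikely to be a normal browser")

    body = "\n".join(hints or ["no proxy indicators found"]).encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

make_server("", 8000, app).serve_forever()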

Are HTTPS headers encrypted?

When sending data over HTTPS, I know the content is encrypted, however I hear mixed answers about whether the headers are encrypted, or how much of the header is encrypted.
How much of HTTPS headers are encrypted?
Including GET/POST request URLs, Cookies, etc.
All the HTTP headers are encrypted†.
That's why SSL on vhosts doesn't work too well - you need a dedicated IP address because the Host header is encrypted.
†The Server Name Indication (SNI) standard means that the hostname may not be encrypted if you're using TLS. Also, whether you're using SNI or not, the TCP and IP headers are never encrypted. (If they were, your packets would not be routable.)
The headers are entirely encrypted. The only information going over the network 'in the clear' is related to the SSL setup and D/H key exchange. This exchange is carefully designed not to yield any useful information to eavesdroppers, and once it has taken place, all data is encrypted.
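To see that boundary for yourself: once the handshake is done, everything written to the socket, request line and headers included, leaves the machine only as encrypted TLS records. A small sketch using Python's standard library (example.com is just an illustrative target):

```python
import socket, ssl

host = "example.com"  # illustrative target

ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as raw_sock:
    # The handshake happens here; the hostname below is sent in the clear via SNI.
    with ctx.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        # From this point on, everything written (request line, headers, body)
        # goes out only as encrypted TLS records.
        request = (
            f"GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n"
        )
        tls_sock.sendall(request.encode())
        print(tls_sock.recv(200).decode(errors="replace"))
```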
New answer to old question, sorry. I thought I'd add my $.02
The OP asked if the headers were encrypted.
They are: in transit.
They are NOT: when not in transit.
So, your browser's URL bar (and title, in some cases) can display the querystring (which usually contains the most sensitive details) and some details from the header; the browser knows some header information (content type, Unicode, etc.); and browser history, password management, favorites/bookmarks, and cached pages will all contain the querystring. Server logs on the remote end can also contain the querystring as well as some content details.
Also, the URL isn't always secure: the domain, protocol, and port are visible - otherwise routers don't know where to send your requests.
Also, if you've got an HTTP proxy, the proxy server knows the address, usually they don't know the full querystring.
So if the data is moving, it's generally protected. If it's not in transit, it's not encrypted.
Not to nitpick, but data at either end is also decrypted, and can be parsed, read, saved, forwarded, or discarded at will. And malware at either end can take snapshots of data entering (or exiting) the SSL protocol, such as (bad) JavaScript inside a page served over HTTPS, which can surreptitiously make HTTP (or HTTPS) calls to logging websites (since access to the local hard drive is often restricted and not useful).
Also, cookies are only protected by HTTPS while in transit; they are stored unencrypted on the client. Developers wanting to keep sensitive data in cookies (or anywhere else, for that matter) need to use their own encryption mechanism.
As to cache, most modern browsers won't cache HTTPS pages, but that fact is not defined by the HTTPS protocol, it is entirely dependent on the developer of a browser to be sure not to cache pages received through HTTPS.
So if you're worried about packet sniffing, you're probably okay. But if you're worried about malware or someone poking through your history, bookmarks, cookies, or cache, you are not out of the water yet.
HTTP version 1.1 added a special HTTP method, CONNECT - intended to create the SSL tunnel, including the necessary protocol handshake and cryptographic setup.
The regular requests thereafter all get sent wrapped in the SSL tunnel, headers and body inclusive.
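Roughly, this is what a client does when tunnelling HTTPS through a forward proxy: the CONNECT request itself travels in plaintext (so the proxy learns the target host and port), and only then does the TLS handshake, and every header after it, run inside the tunnel. A hedged sketch using Python's standard library, with placeholder proxy and target addresses:

```python
import socket, ssl

proxy_host, proxy_port = "proxy.example.internal", 3128   # placeholder proxy
target = "example.com"                                     # placeholder origin

sock = socket.create_connection((proxy_host, proxy_port))

# The CONNECT request is plaintext, so the proxy sees the target host:port (and nothing more).
sock.sendall(f"CONNECT {target}:443 HTTP/1.1\r\nHost: {target}:443\r\n\r\n".encode())
status = sock.recv(4096)
assert b" 200 " in status.split(b"\r\n", 1)[0], status     # expect "200 Connection established"

# Everything after this point - handshake, headers, body - runs inside the tunnel.
ctx = ssl.create_default_context()
tls = ctx.wrap_socket(sock, server_hostname=target)
tls.sendall(f"GET / HTTP/1.1\r\nHost: {target}\r\nConnection: close\r\n\r\n".encode())
print(tls.recv(200).decode(errors="replace"))
```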
HTTPS (HTTP over SSL) sends all HTTP content through an SSL tunnel, so the HTTP content and headers are encrypted as well.
With SSL the encryption is at the transport level, so it takes place before a request is sent.
So everything in the request is encrypted.
Yes, headers are encrypted. It's written here.
Everything in the HTTPS message is encrypted, including the headers, and the request/response load.
The URL is also encrypted; you really only have the IP address, the port, and (if SNI is used) the hostname that are unencrypted.
To understand what is encrypted and what is not, you need to know that SSL/TLS is the layer between the transport layer and the application layer.
In the case of HTTPS, HTTP is the application layer and TCP the transport layer. That means all headers below the SSL level are unencrypted. Also, SSL itself may expose data. The exposed data includes (for each layer's header; see the short sniffing sketch after the list):
NOTE: Additional data might be exposed too, but the following is almost certainly exposed.
MAC:
Source MAC address (current hop)
Destination MAC address (next hop)
IP (assuming IPv4):
Destination IP address
Source IP address
IP options (if set)
Type of Service (TOS)
Time To Live (TTL), which reveals roughly how many hops the packet has passed if the initial value (commonly 64) is known
TCP:
Source port
Destination port
TCP options
Theoretically, you could encrypt the TCP headers, but that is hard to implement.
SSL:
Hostname (if SNI is being used)
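As a quick illustration of that layering: sniffing HTTPS traffic still shows the IP addresses and TCP ports in the clear, while the TCP payload is nothing but opaque TLS record bytes. A sketch using Scapy (it needs capture privileges, and the filter is an assumption about what you want to watch):

```python
from scapy.all import sniff, IP, TCP, Raw

def show(pkt):
    if IP in pkt and TCP in pkt:
        payload = bytes(pkt[Raw].load)[:12].hex() if Raw in pkt else ""
        # Addresses and ports are readable; the payload is just TLS records.
        print(f"{pkt[IP].src}:{pkt[TCP].sport} -> {pkt[IP].dst}:{pkt[TCP].dport}  payload: {payload}")

sniff(filter="tcp port 443", prn=show, count=20)
```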
Usually, a browser won't just connect to the destination host by IP immediately using HTTPS; there are some earlier requests that might expose the following information (if your client is not a browser it might behave differently, but the DNS request is pretty common):
DNS:
This request is being sent to get the correct IP address of a server. It will include the hostname, and its result will include all IP addresses belonging to the server.
HTTP:
The first request to your server. A browser will only use SSL/TLS if instructed to; unencrypted HTTP is used first. Usually, this results in a redirect to the secure site. However, some headers might be included here already:
User-Agent (specification of the client)
Host (hostname)
Accept-Language (user language)
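To make that last point concrete, here is a sketch of that first plaintext request using Python's standard library; the hostname is a placeholder, and the headers shown are the ones listed above, all of which travel unencrypted because this request happens before any TLS:

```python
import http.client

host = "example.com"   # placeholder hostname

conn = http.client.HTTPConnection(host, 80, timeout=5)
# These headers are sent in cleartext because this first request is plain HTTP.
conn.request("GET", "/", headers={
    "Host": host,
    "User-Agent": "illustrative-client/1.0",
    "Accept-Language": "en-US",
})
resp = conn.getresponse()
# Many sites answer with a redirect to the https:// version at this point.
print(resp.status, resp.getheader("Location"))
```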
