From the 802.11 spec, the FCS field seems to be mandatory, but I do see this field missing in some Wi-Fi traffic.
What I'm trying to do is decode the 802.11 frames in my program.
If the FCS field is optional, how can I determine whether it's present, given that the length of the FrameBody part is variable?
[Update]
The screenshot is the parsing result of the capture mesh.pcap from the Wireshark SampleCaptures website.
You can see there is no FCS field in the parsing result.
Unfortunately, certain firmware strips the FCS while other firmware does not. You would think that the presence of this field would be indicated in the MPDU layer where it resides; however, it is indicated in the radiotap header, which means the radiotap parser has to share information with the MPDU parser. To get at it, you have to unpack the flags field of the radiotap header and AND it with 0x10.
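A minimal sketch of that check in C, assuming a little-endian host and a radiotap header as delivered by libpcap. It only handles the TSFT and Flags fields; a real parser has to walk every present field with its documented size and alignment (see radiotap.org).

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Returns 1 if the radiotap Flags field says the frame includes an FCS
 * (flag 0x10), 0 if it does not, -1 if the header can't tell us. */
int radiotap_has_fcs(const uint8_t *pkt, size_t len)
{
    uint32_t present, w;
    size_t off = 8;                      /* after version/pad/len/present */

    if (len < 8)
        return -1;
    memcpy(&present, pkt + 4, 4);        /* first it_present word (LE)   */

    w = present;                         /* skip chained present words   */
    while (w & 0x80000000u) {            /* bit 31 = another word follows */
        if (off + 4 > len)
            return -1;
        memcpy(&w, pkt + off, 4);
        off += 4;
    }

    if (!(present & (1u << 1)))          /* bit 1 = Flags field present  */
        return -1;
    if (present & (1u << 0)) {           /* bit 0 = TSFT: u64, 8-aligned */
        off = (off + 7) & ~(size_t)7;
        off += 8;
    }
    if (off >= len)
        return -1;
    return (pkt[off] & 0x10) ? 1 : 0;    /* 0x10 = "frame includes FCS"  */
}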
I have not found any document or note in the kernel that mandates passing 16/32-bit values in netlink messages to the kernel in network byte order. So my question is: do I have to use the htonl/htons functions when filling in a netlink message? Is there such a requirement at all?
According to this article, this can be controlled on a per-attribute basis:
There are two special flags which may be present in netlink attributes, though I have yet to encounter them in my work.
NLA_F_NESTED: specifies a nested attribute; used as a hint for parsing. Doesn't always appear to be used, even if nested attributes are present.
NLA_F_NET_BYTEORDER: attribute data is stored in network byte order (big endian) instead of host endianness.
Update: it looks like native (little-endian) byte order does not work in some cases: I get errno 4097 when trying to pass an IPSET CREATE timeout with it. Network byte order works fine.
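To make the flag concrete, here is a minimal sketch in C of filling a u32 attribute in network byte order and tagging it accordingly. put_u32_be is a hypothetical helper, not a libnl or kernel API.

#include <arpa/inet.h>        /* htonl */
#include <linux/netlink.h>    /* struct nlattr, NLA_* macros */
#include <stdint.h>
#include <string.h>

/* Append a u32 attribute in network byte order and mark it as such.
 * 'buf' is assumed to point at the next free, 4-byte-aligned position
 * in the message payload. Returns the padded number of bytes consumed. */
static size_t put_u32_be(uint8_t *buf, uint16_t type, uint32_t value)
{
    struct nlattr *nla = (struct nlattr *)buf;
    uint32_t be = htonl(value);                 /* to big endian       */

    nla->nla_type = type | NLA_F_NET_BYTEORDER; /* flag the byte order */
    nla->nla_len  = NLA_HDRLEN + sizeof(be);
    memcpy(buf + NLA_HDRLEN, &be, sizeof(be));
    return NLA_ALIGN(nla->nla_len);
}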
Are there any specifications in the Java Card API, RE or VM specs as to how the card must react to faulty ISO7816-4 APDUs (provided that such malformed APDUs are passed to the card at all)?
Are there different requirements for the APDU handling of applets?
If I were to send e.g. a (faulty) 3-byte long first interindustry APDU to the card/applet, who should detect/report this error?
Who would detect/report a first interindustry APDU containing a bad Lc length field?
No, there is no generic specification that defines how to handle malformed APDUs.
In general you should always return a status word that is in a valid ISO 7816-3/4 range; which one depends fully on the context. Generally, you should always try to throw an ISOException with a logical status word on error conditions. You should never let a 6F00 status word be returned, which happens if the Applet.process() method exits with an exception other than ISOException. The most common (though not all) ISO status words are defined in the ISO7816 interface.
Unfortunately, ISO 7816-4 only provides some hints as to which status words may be expected. On the other hand, unless the error is very specific (e.g. an incorrect PIN), there is not much a terminal can do when it receives an error status word for a syntactically incorrect APDU (it is unlikely to be able to fix an incorrect APDU command data field). Any specific status words should be defined by higher-level protocols; ISO 7816-4 itself can only be used as a (rotten) foundation for them. No clear rules for handling syntactic (wrong length) or semantic (wrong PIN) errors have been defined.
With regard to malformed APDUs: 3-byte APDUs won't be received by the Applet. APDUs with an incorrect Lc byte may be received. It would however be more logical if this influenced the transport layer in such a way that it either times out because it is expecting more data, or discards the spurious bytes. It cannot hurt to check and return a wrong-length error, but please use the values of APDU.getIncomingLength() or APDU.setIncomingAndReceive() as the final values for Nc if you decide to continue.
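For illustration, here is a host-side sketch in C (not Java Card code) of the kind of length check involved: classifying a short-form APDU and validating Lc against the actual byte count, i.e. where a 3-byte APDU or a bad Lc would be caught.

#include <stddef.h>

/* Returns the ISO 7816-4 case (1..4) or 0 for a malformed APDU. */
int apdu_case(const unsigned char *apdu, size_t len)
{
    size_t lc;

    if (len < 4)             /* e.g. the 3-byte APDU: malformed      */
        return 0;
    if (len == 4)            /* CLA INS P1 P2                        */
        return 1;
    if (len == 5)            /* CLA INS P1 P2 Le                     */
        return 2;

    lc = apdu[4];            /* short-form Lc (zero is encoded by    */
    if (lc == 0)             /* omitting the field, so 0 is invalid) */
        return 0;
    if (len == 5 + lc)       /* CLA INS P1 P2 Lc <data>              */
        return 3;
    if (len == 5 + lc + 1)   /* CLA INS P1 P2 Lc <data> Le           */
        return 4;
    return 0;                /* Lc disagrees with the actual length  */
}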
The system I am testing right now has a couple of virtual L2 devices chained together to add our own L2.5 headers between the Ethernet headers and the IP headers. Now when I use
tcpdump -xx -i vir_device_1
it actually shows the SLL header followed by the IP header. How do I capture the full packet that is actually going out of vir_device_1, i.e. after the ndo_start_xmit() device call?
Either by writing your own code to directly use a PF_PACKET/SOCK_RAW socket (you say "SLL header", so this is presumably Linux; see the sketch after this list), or by:
making sure you've assigned a special ARPHRD_ value for your virtual interface;
using one of the DLT_USERn values for your special set of headers, or asking tcpdump-workers@lists.tcpdump.org for an official DLT_ value to be assigned for them;
modifying libpcap to map that ARPHRD_ value to the DLT_ value you're using;
modifying tcpdump to handle that DLT_ value;
if necessary, modifying other programs that would capture on that interface or read capture files as written by tcpdump on that interface to handle that value as well.
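For the first option, a minimal sketch of such a capture program might look like this (binding to vir_device_1 from the question; requires root or CAP_NET_RAW):

#include <arpa/inet.h>          /* htons */
#include <linux/if_ether.h>     /* ETH_P_ALL */
#include <linux/if_packet.h>    /* struct sockaddr_ll */
#include <net/if.h>             /* if_nametoindex */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* SOCK_RAW (unlike SOCK_DGRAM) delivers the frame with its real
     * link-layer headers and no SLL pseudo-header. */
    int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    struct sockaddr_ll sll;
    unsigned char buf[2048];
    ssize_t n;

    if (fd < 0) { perror("socket"); return 1; }

    memset(&sll, 0, sizeof(sll));
    sll.sll_family   = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex  = if_nametoindex("vir_device_1");  /* your device */
    if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
        perror("bind"); close(fd); return 1;
    }

    n = recv(fd, buf, sizeof(buf), 0);   /* one raw frame, in or out */
    if (n > 0)
        printf("captured %zd bytes\n", n);
    close(fd);
    return 0;
}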
Note that the DLT_USERn values are specifically reserved for private use, and no official versions of libpcap, tcpdump, or Wireshark will ever assign them for their own use. In other words, if you use a DLT_USERn value, don't bother contributing patches to assign that value to your type of headers, as they won't be accepted; other people may already be using it for their own special headers, and that must continue to be supported. So you'll have to maintain the modified versions of libpcap, tcpdump, etc. yourself if you use one of those values rather than getting an official value assigned.
Thanks to Guy Harris for providing very helpful answers to my original question!
I am adding this as an answer/note to a follow-up question I asked in the comments.
Basically, my question was: in what state is the packet received by PF_PACKET/SOCK_RAW?
For a software device (no queue), dev_queue_xmit() calls dev_hard_start_xmit(skb, dev) to start transmitting the skb. That function calls dev_queue_xmit_nit() before it calls dev->netdev_ops->ndo_start_xmit(skb, dev), which means the packet PF_PACKET sees is in the state it had before any changes made by ndo_start_xmit().
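The call order, reduced to a sketch (not verbatim kernel source; the real dev_hard_start_xmit() also deals with GSO, tracing, and return codes):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Taps are fed before the driver ever sees the skb. */
static int hard_start_xmit_sketch(struct sk_buff *skb, struct net_device *dev)
{
    dev_queue_xmit_nit(skb, dev);   /* clone to PF_PACKET taps (tcpdump) */
    return dev->netdev_ops->ndo_start_xmit(skb, dev);  /* driver transmit */
}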
In the XMPP RFC, there are two MUST directives stating that the XML used for STARTTLS and SASL negotiation must not include any whitespace, for the sake of something the spec calls "security layer byte precision". What is that?
Relevant extracts from the RFC:
...
During STARTTLS negotiation, the entities MUST NOT send any whitespace as separators between XML elements (i.e., from the last character of the first-level element qualified by the 'urn:ietf:params:xml:ns:xmpp-tls' namespace as sent by the initiating entity, until the last character of the first-level element qualified by the 'urn:ietf:params:xml:ns:xmpp-tls' namespace as sent by the receiving entity). This prohibition helps to ensure proper security layer byte precision.
...
During SASL negotiation, the entities MUST NOT send any whitespace as separators between XML elements (i.e., from the last character of the first-level element qualified by the 'urn:ietf:params:xml:ns:xmpp-sasl' namespace as sent by the initiating entity, until the last character of the first-level element qualified by the 'urn:ietf:params:xml:ns:xmpp-sasl' namespace as sent by the receiving entity). This prohibition helps to ensure proper security layer byte precision.
This directive ensures proper handling of the byte stream. Imagine a client that sends a newline after its XML fragments; it might send a response like this:
<response ... /> [LF]
The server will parse the XML incrementally up to the final '>', at which point it will send a <success/> element back to the client. Now the client will send a new stream header, i.e. <stream:stream ... >, using the security layer. This should cause the security layer to break on the server side, since the server will treat the extra LF character as part of the security layer data when it is not.
You may say that the server should simply clear its receive buffer before issuing the <success/> packet, but this is not the proper way to treat a byte stream. After all, the underlying subsystem might have delayed the delivery of that LF character, and the server might receive it after sending the <success/> packet.
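As a toy illustration in C (with a hardcoded string standing in for the socket data): every byte after the final '>' must be handed to the security layer, so a trailing LF ends up corrupting it.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char wire[] =
        "<response xmlns='urn:ietf:params:xml:ns:xmpp-sasl'/>\n";
    const char *end = strchr(wire, '>');   /* last plaintext byte */

    /* Everything after the '>' belongs to the security layer. */
    printf("%zu stray byte(s) would be fed to the security layer\n",
           strlen(end + 1));               /* prints: 1 */
    return 0;
}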
The solution, of course, is for the client to NOT send such extra data. You can read more about this specific discussion here on the mailing list.
I am writing a driver for a device that is connected by a serial port. Unfortunately, the 9th data bit indicates whether the character should be interpreted as a command or as data.
Using the built-in parity check does not work for me, because an error is indicated by an additional character (NUL), and then I don't know whether I received two data bytes or one byte with a parity error.
Is there a way to get this parity bit elsewhere?
EDIT: Apparently, this problem also exists on Windows (see http://gilzu.com/?p=6); there it ended up requiring a rewrite of the serial driver. Is that also my only option on Linux?
As I see it, you should be able to use PARMRK as is, assuming that the \377 \0 pattern is unlikely to appear in your input. Otherwise, yes, you may modify your serial driver to prepend the parity status (or rather, whether the byte had a parity error) to each byte. I'd go with the former, though.
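A minimal sketch of the PARMRK approach in C: configure space parity (CMSPAR, Linux-specific) so that a set 9th bit arrives as a parity error, then decode the \377 \0 X marker sequence on read. Single-byte reads are for clarity only; a real reader would buffer.

#define _DEFAULT_SOURCE         /* for CMSPAR on glibc */
#include <termios.h>
#include <unistd.h>

static int setup_9bit(int fd)
{
    struct termios t;
    if (tcgetattr(fd, &t) < 0)
        return -1;
    t.c_cflag |= PARENB | CMSPAR;   /* space parity                  */
    t.c_cflag &= ~PARODD;
    t.c_iflag |= INPCK | PARMRK;    /* check parity, mark errors     */
    t.c_iflag &= ~(IGNPAR | ISTRIP);
    return tcsetattr(fd, TCSANOW, &t);
}

/* Read one character and its 9th bit; returns 1 on success, 0 on error. */
static int read_9bit(int fd, unsigned char *ch, int *ninth)
{
    unsigned char b;
    if (read(fd, &b, 1) != 1)
        return 0;
    if (b != 0377) {                /* plain byte, 9th bit clear      */
        *ch = b; *ninth = 0;
        return 1;
    }
    if (read(fd, &b, 1) != 1)
        return 0;
    if (b == 0377) {                /* escaped literal 0377 data byte */
        *ch = 0377; *ninth = 0;
        return 1;
    }
    /* b == 0: a parity-marked byte follows, i.e. its 9th bit was set
     * (a line break would also show up here as a marked NUL). */
    if (read(fd, ch, 1) != 1)
        return 0;
    *ninth = 1;
    return 1;
}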