In AUTOSAR E2E Profile1, can the counter max value be 0xF?

In AUTOSAR E2E Profile1, the counter max value is fixed at 0x0E. But what if we want to change it to 0xF? Are there any side effects? Is there a reason the original AUTOSAR standard skips 0xF?
I checked the AUTOSAR_SWS_E2ELibrary.pdf; it only says 0xF is an invalid value to be skipped. But why is it an invalid value? What are the consequences if we do not skip 0xF?
One of my customers insists on using 0xF unless we can show them the side effects.

The value 0x0F is meant as the "invalid value". It indicates that, even though the message was transmitted (by Com out of its message buffer, which is initialized at Com_Init()), the payload might still be default initial data because the SWC was not running and therefore not updating the data. The receiver can detect this situation precisely because the counter carries the invalid value.
Changing this and using 0x0F as a valid value means you are violating the standardized E2E Profile1. If E2E Profile1 is specified in the SystemDescription, every communication partner relies on E2E Profile1 as specified in AUTOSAR, not on your "we use 0xF as valid" implementation, and will fail.
I am not sure why your customer insists on 0x0F being a valid value for E2E Profile1. Tell them this violates AUTOSAR.
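To make the wrap-around concrete, here is a minimal sketch in plain Java (not the AUTOSAR E2E library, which is a C library; the class and method names below are made up for illustration) of a Profile 1 style alive counter that stays within 0x0..0xE on the sender side, with a receiver check that treats 0xF as "no valid data":

```java
/**
 * Illustration only: a Profile 1 style alive counter.
 * Hypothetical helper class, not the AUTOSAR E2E library API.
 */
public final class Profile1Counter {
    /** 0xF is reserved as the "invalid" marker and never used as a running value. */
    public static final int INVALID = 0x0F;
    public static final int MAX_VALID = 0x0E;

    private int counter = 0;

    /** Sender side: advance the counter, wrapping from 0xE back to 0x0. */
    public int next() {
        counter = (counter >= MAX_VALID) ? 0 : counter + 1;
        return counter;
    }

    /** Receiver side: 0xF means the sender never updated the data (e.g. default Com buffer). */
    public static boolean isInvalid(int received) {
        return received == INVALID;
    }

    public static void main(String[] args) {
        Profile1Counter tx = new Profile1Counter();
        for (int i = 0; i < 20; i++) {
            int c = tx.next();                 // cycles 1..14, 0, 1..14, ...
            if (c == INVALID) {
                throw new IllegalStateException("a conforming sender never emits 0xF");
            }
        }
        System.out.println(isInvalid(0x0F));   // true: receiver treats 0xF as "no valid data yet"
    }
}
```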

Related

AMF value for AUTS in case of SQN synchronization failure

In case of SQN synchronization failure, why doesn't the USIM use the AMF value received from the network to calculate AUTS via the f1* function? Instead, AMF is set to 0x0000 when calculating AUTS, for both UMTS and E-UTRAN.
ETSI TS 133 102 says:
The AMF used to calculate MAC-S assumes a dummy value of all zeros so that it does not need to be transmitted in the clear in the re-synch message.
However, what I don't fully understand is why that would matter. Wasn't the AMF previously sent in the clear in the signalling anyway?
LTE Security / Gunther Horn ... [et al.], 2nd edition, states:
AUTS is included in an Authentication Failure message from the UE to the MME. The MME forwards AUTS to the HSS requesting new AVs. The AMF used to calculate MAC-S is set to all zeros so that it does not need to be transmitted back to the HSS.
If AMF were not set to 0x0000, would the AuC have to inform the HSS? If so, why?

BLE: Clarifying the Read and Indicate Operations

I'm writing code for Pycom's Lopy4 board and have created a BLE service for environmental sensing which currently has one characteristic, temperature. I'm taking the temperature as a float and attempting to update the characteristic value every two seconds.
When I use a BLE scanner app, whenever I try to read, I read a value of "temperature10862," which is the characteristic's name and uuid. Yet when I press the indicate button, the value shows the correct temperature string, updating automatically every two seconds.
I'm a bit confused overall. Is this a problem with my code on the Pycom device, or am I simply misunderstanding what a BLE read is supposed to be? The temperature values are obviously being updated on the device, so why does the client (the app) only show them via an indication rather than a read?
I am sorry for any vagueness in the question, but any help or guidance would be appreciated.
(Screenshots: read attempt, indicate attempt.)
Returning "temperature10862" as a read response is obviously incorrect. Sending the temperature as a string is in this case also incorrect, since you use the Bluetooth SIG-defined characteristic https://www.bluetooth.com/xml-viewer/?src=https://www.bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/Xml/Characteristics/org.bluetooth.characteristic.temperature.xml. According to that the value should consist of a signed 16-bit integer in units of 0.01 degrees Celcius.
If you look at https://www.bluetooth.com/xml-viewer/?src=https://www.bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/Xml/Services/org.bluetooth.service.environmental_sensing.xml, you will see that it's mandatory to support Read and optional to support Notifications. Indications, however are not permitted. So you should change your indicate property to notify instead.
The value sent should be the same regardless if the value is sent as a notification or read response.
Be sure you read the Environmental Sensing specs and follow the rest of the GATT service structure.
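For illustration, here is a small sketch (plain Java, not Pycom/MicroPython code; the method names are my own) of how a reading could be packed as a little-endian signed 16-bit integer in units of 0.01 °C, which is what the SIG Temperature characteristic (0x2A6E) expects, instead of a human-readable string:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch of encoding a temperature reading for the SIG "Temperature"
// characteristic (0x2A6E): signed 16-bit integer, resolution 0.01 °C,
// transmitted little-endian (GATT multi-byte values are little-endian).
public class TemperatureEncoding {
    static byte[] encode(float degreesCelsius) {
        short raw = (short) Math.round(degreesCelsius * 100.0f); // e.g. 23.45 °C -> 2345
        return ByteBuffer.allocate(2)
                .order(ByteOrder.LITTLE_ENDIAN)
                .putShort(raw)
                .array();
    }

    public static void main(String[] args) {
        byte[] value = encode(23.45f);
        System.out.printf("0x%02X 0x%02X%n", value[0], value[1]); // 0x29 0x09
    }
}
```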

How does the SECURITY_MODE_COMMAND message request to start/stop ciphering?

I read that SECURITY_MODE_COMMAND is sent by the NW to start/stop enciphering of messages.
I could not find, in the SECURITY_MODE_COMMAND message structure, which fields I need to check in order to find out whether ciphering should begin or end.
Can I get some help with that?
I assume you are talking about the NAS Security_Mode_Command message, described in TS 33.401 section 7.2.4.4, and defined in TS 24.301 section 8.2.20.
From TS 24.301 section 8.2.20, we can see that Security_Mode_Command contains the information element "Selected NAS Security Algorithms", which is defined in section 9.9.3.23.
I think the answer to your question is that you should check this field.
If it contains a valid value for an algorithm, then ciphering should be switched on using that algorithm. But if ciphering is already on and the field contains
0 0 0 EPS encryption algorithm EEA0 (null ciphering algorithm)
then no ciphering should be applied, so you could interpret that as "switch off ciphering".
But I also note that the same spec says, in section 8.2.20 Security Mode Command, that
This message is sent by the network to the UE to establish NAS signalling security.
So, I'm not completely sure if it should be sent to switch ciphering off, as that's not specifically mentioned in the spec.
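As a rough sketch of what "check this field" could look like in code (plain Java; the bit layout below is my reading of TS 24.301 section 9.9.3.23, so verify it against the spec): the value octet carries the ciphering algorithm in bits 7..5 and the integrity algorithm in bits 3..1, with bits 8 and 4 spare, and 000 for the ciphering algorithm means EEA0, i.e. null ciphering.

```java
// Sketch of inspecting the "NAS security algorithms" value octet.
// Bit layout assumed from TS 24.301 §9.9.3.23: bits 7..5 = ciphering
// algorithm, bits 3..1 = integrity algorithm, bits 8 and 4 spare.
public class NasSecurityAlgorithms {
    static int cipheringAlgorithm(int valueOctet) {
        return (valueOctet >> 4) & 0x07;   // bits 7..5 -> 0 = EEA0 (null ciphering)
    }

    static int integrityAlgorithm(int valueOctet) {
        return valueOctet & 0x07;          // bits 3..1 -> 0 = EIA0 (null integrity)
    }

    public static void main(String[] args) {
        int octet = 0x12;                  // example: 128-EEA1 ciphering, 128-EIA2 integrity
        System.out.println("EEA" + cipheringAlgorithm(octet));  // EEA1
        System.out.println("EIA" + integrityAlgorithm(octet));  // EIA2
        System.out.println(cipheringAlgorithm(0x02) == 0);      // true: EEA0 -> no ciphering
    }
}
```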

Heartbleed: Payloads and padding

I am left with a few questions after reading RFC 6520 for Heartbeat:
https://www.rfc-editor.org/rfc/rfc6520
Specifically, I don't understand why a heartbeat needs to include arbitrary payloads or even padding for that matter. From what I can understand, the purpose of the heartbeat is to verify that the other party is still paying attention at the other end of the line.
What do these variable-length custom payloads provide that a fixed request and response do not?
E.g.
Alice: still alive?
Bob: still alive!
After all, FTP uses the NOOP command to keep connections alive, which seems to work fine.
There is, in fact, a reason for this payload/padding within RFC 6520.
From the document:
The user can use the new HeartbeatRequest message, which has to be answered by the peer with a HeartbeartResponse immediately. To perform PMTU discovery, HeartbeatRequest messages containing padding can be used as probe packets, as described in [RFC4821].
In particular, after a number of retransmissions without receiving a corresponding HeartbeatResponse message having the expected payload, the DTLS connection SHOULD be terminated.
When a HeartbeatRequest message is received and sending a HeartbeatResponse is not prohibited as described elsewhere in this document, the receiver MUST send a corresponding HeartbeatResponse message carrying an exact copy of the payload of the received HeartbeatRequest.
If a received HeartbeatResponse message does not contain the expected payload, the message MUST be discarded silently. If it does contain the expected payload, the retransmission timer MUST be stopped.
Credit to pwg at HackerNews. There is a good and relevant discussion there as well.
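To make the echo mechanism and the length check concrete, here is a toy sketch (plain Java, not TLS/DTLS code) of the RFC 6520 message layout and of the bounds check whose absence in OpenSSL caused Heartbleed:

```java
import java.util.Arrays;

// Toy illustration of the RFC 6520 heartbeat exchange:
// type (1 byte) | payload_length (2 bytes) | payload | padding (>= 16 bytes).
// The responder must echo exactly payload_length bytes of payload, and the
// requester compares the echoed payload against what it sent.
public class HeartbeatSketch {
    static final int HEADER_LEN = 3;
    static final int MIN_PADDING = 16;

    /** Responder side: echo the payload, but only if the claimed length fits in the record. */
    static byte[] respond(byte[] record) {
        int payloadLength = ((record[1] & 0xFF) << 8) | (record[2] & 0xFF);
        // The check that was missing in vulnerable OpenSSL versions:
        // the claimed payload must actually fit inside the received record.
        if (HEADER_LEN + payloadLength + MIN_PADDING > record.length) {
            return null;                        // silently discard per RFC 6520
        }
        return Arrays.copyOfRange(record, HEADER_LEN, HEADER_LEN + payloadLength);
    }

    public static void main(String[] args) {
        byte[] payload = "ping".getBytes();
        byte[] record = new byte[HEADER_LEN + payload.length + MIN_PADDING];
        record[0] = 1;                                          // heartbeat_request
        record[1] = 0;
        record[2] = (byte) payload.length;                      // honest payload_length
        System.arraycopy(payload, 0, record, HEADER_LEN, payload.length);

        System.out.println(Arrays.equals(respond(record), payload)); // true

        record[2] = (byte) 0xFF;                                // lie about the length
        System.out.println(respond(record));                    // null: request is discarded
    }
}
```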
(The following is not a direct answer, but is here to highlight related comments on another question about Heartbleed.)
There are arguments against the protocol design that allowed an arbitrary payload: either that there should have been no payload (or even no echo/heartbeat feature at all), or that a small, fixed-size payload would have been a better design.
From the comments on the accepted answer in Is the heartbleed bug a manifestation of the classic buffer overflow exploit in C?
(R..) In regards to the last question, I would say any large echo request is malicious. It's consuming server resources (bandwidth, which costs money) to do something completely useless. There's really no valid reason for the heartbeat operation to support any length but zero
(Eric Lippert) Had the designers of the API believed that then they would not have allowed a buffer to be passed at all, so clearly they did not believe that. There must be some by-design reason to support the echo feature; why it was not a fixed-size 4 byte buffer, which seems adequate to me, I do not know.
(R..) .. Nobody thinking from a security standpoint would think that supporting arbitrary echo requests is reasonable. Even if it weren't for the heartbleed overflow issue, there may be cryptographic weaknesses related to having such control over the content the peer sends; this seems unlikely, but in the absence of a strong reason to support a[n echo] feature, a cryptographic system should not support it. It should be as simple as possible.
While I don't know the exact motivation behind this decision, it may have been motivated by the ICMP echo request packets used by the ping utility. In an ICMP echo request, an arbitrary payload of data can be attached to the packet, and the destination server will return exactly that payload if it is reachable and responding to ping requests. This can be used to verify that data is being properly sent across the network and that payloads aren't being corrupted in transit.

Handling of faulty ISO7816 APDUs

Are there any specifications in the Java Card API, RE or VM specs as to how the card must react to faulty ISO7816-4 APDUs (provided that such malformed APDUs are passed to the card at all)?
Are there different requirements for the APDU handling of applets?
If I were to send e.g. a (faulty) 3-byte long first interindustry APDU to the card/applet, who should detect/report this error?
Who would detect/report a first interindustry APDU containing a bad Lc length field?
No, there is no generic specification that defines how to handle malformed APDUs.
In general you should always return a status word that is in a valid ISO 7816-3/4 range. Which one depends entirely on the context. On error conditions, always try to throw an ISOException with a logical status word. Try never to return the 6F00 status word, which is what you get if the Applet.process() method exits with an exception other than ISOException. The most common (though not all) ISO status words are defined in the ISO7816 interface.
Unfortunately, ISO 7816-4 only provides some hints regarding which status words may be expected. On the other hand, unless the error is very specific (e.g. incorrect PIN), there is not much a terminal can do when it receives an error status word for a syntactically incorrect APDU (it is unlikely to be able to fix an incorrect APDU command data field). Any specific status words should be defined by higher-level protocols; ISO 7816-4 itself can only be used as a (rotten) foundation for other protocols. No clear rules for handling syntactic (wrong length) or semantic (wrong PIN) errors have been defined.
With regard to malformed APDUs: 3-byte APDUs won't be received by the Applet. APDUs with an incorrect Lc byte may be received. It would, however, be more logical if this influenced the transport layer in such a way that it either times out because it is expecting more data, or discards the spurious bytes. It cannot hurt to check and return a wrong-length error, but please use the values of APDU.getIncomingLength() or APDU.setIncomingAndReceive() as the final values for Nc if you decide to continue.
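A minimal Java Card sketch along those lines (the instruction code and expected data length are made up for illustration):

```java
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;

// Sketch of a process() method that fails with a meaningful ISO 7816 status
// word via ISOException instead of letting an unexpected exception escape
// (which the JCRE turns into 6F00).
public class LengthCheckApplet extends Applet {
    private static final byte INS_STORE = (byte) 0x20;   // made-up instruction
    private static final short EXPECTED_LC = 8;          // made-up expected data length

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new LengthCheckApplet().register();
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return;
        }
        byte[] buf = apdu.getBuffer();
        if (buf[ISO7816.OFFSET_INS] != INS_STORE) {
            ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
        // Let the transport layer report how much command data really arrived,
        // rather than trusting the raw Lc byte in the header.
        short received = apdu.setIncomingAndReceive();
        if (received != EXPECTED_LC) {
            ISOException.throwIt(ISO7816.SW_WRONG_LENGTH);   // 6700
        }
        // ... handle buf[ISO7816.OFFSET_CDATA .. OFFSET_CDATA + received - 1] ...
    }
}
```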
