SCardTransmit always returning error 16 - apdu

I am trying to build an application on Windows that uses the Winscard library to communicate with a contactless smart card reader.
I am able to connect to the device, but when I try to send some data using SCardTransmit I get error 16. I have attached the piece of code that I am using below:
SCARD_IO_REQUEST pioSendPci = *SCARD_PCI_T1;
//SCARD_IO_REQUEST pioSendPci = *SCARD_PCI_RAW;
DWORD dwRecvLength;
BYTE pbRecvBuffer[258];
BYTE cmd1[260];
cmd1[0] = 0xA0;
cmd1[1] = 0x0D;
cmd1[2] = 0x01;
cmd1[3] = 0x00;
cmd1[4] = 0x01;
ULONG sendbuflen = 0x05;
dwRecvLength = sizeof(pbRecvBuffer);
rv2 = SCardTransmit(hCard, &pioSendPci, cmd1, sendbuflen, NULL, pbRecvBuffer, &dwRecvLength);

The command you are trying to send does not look like a valid APDU.
A valid APDU (see ISO/IEC 7816-4) has this form (except for extended length APDUs):
+--------+--------+--------+--------+--------+----------+--------+
| CLA | INS | P1 | P2 | [Lc] | [DATA] | [Le] |
+--------+--------+--------+--------+--------+----------+--------+
| 1 Byte | 1 Byte | 1 Byte | 1 Byte | 1 Byte | Lc Bytes | 1 Byte |
+--------+--------+--------+--------+--------+----------+--------+
Where Lc contains the number of command DATA bytes, or is absent (i.e. no Lc) if there are no DATA bytes. Le encodes the number of expected response data bytes; an absent Le indicates that no response data bytes are expected, and Le = 0x00 indicates 256 (or the maximum number of) expected response data bytes.
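For illustration, here is a minimal sketch of a well-formed case-2 APDU sent through SCardTransmit. The GET CHALLENGE command (00 84 00 00 08) is only an example - substitute a CLA/INS that your card actually supports - and hCard/activeProtocol are assumed to come from a successful SCardConnect:

/* Case-2 APDU: header only, no command data, Le = 0x08. */
BYTE apdu[] = {
    0x00,  /* CLA */
    0x84,  /* INS: GET CHALLENGE (example only) */
    0x00,  /* P1 */
    0x00,  /* P2 */
    0x08   /* Le: expect 8 response bytes */
};
BYTE recv[258];
DWORD recvLen = sizeof(recv);
LONG rv = SCardTransmit(hCard,
                        (activeProtocol == SCARD_PROTOCOL_T1) ? SCARD_PCI_T1 : SCARD_PCI_T0,
                        apdu, sizeof(apdu),
                        NULL, recv, &recvLen);
/* On success, the last two bytes of recv are the status word SW1 SW2. */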

Related

Correct WAVE_FORMAT_1S16 PCM dual-channel format?

I am trying to play 2 16-bit PCM streams on Windows 10 WinMM Audio,
each on a separate channel, using this WAVEFORMATEX :
const WAVEFORMATEX
_wex_ = // (WAVEFORMATEX)
{ .wFormatTag = WAVE_FORMAT_PCM
, .nChannels = 2
, .nSamplesPerSec = 8000
, .nAvgBytesPerSec= 16000
, .nBlockAlign = 4
, .wBitsPerSample = 16
, .cbSize = 0
};
When I play them with 2 processes, process A laying out its stream like:
Bit 31 ........ Bit 16 | Bit 15 ......... Bit 0
   <PCM_16_BITS>       |       00 ... 00
and process B laying out its stream like:
Bit 31 ........ Bit 16 | Bit 15 ......... Bit 0
       00 ... 00       |    <PCM_16_BITS>
then the streams play successfully on the left and right channels (they are mixed by the WAS mixer device - each stream carries only one channel, so it is heard on only the left or right speaker).
But if I write a single process which combines the two streams, so that
a single stream is laid out like:
Bit 31 ........ Bit 16 | Bit 15 ......... Bit 0
  <LEFT PCM_16_BITS>   |  <RIGHT PCM_16_BITS>
then the stream plays as garbled nonsense.
Please could anyone enlighten me as to the actual byte layout that Windows Audio expects the frames to have for 2-channel 16-bit PCM as configured by my WAVEFORMATEX?
I have written the code to invoke waveOutOpen, waveOutPrepareHeader, waveOutWrite, etc., and it works fine; it is just when I try to play a combined stream, with the two left|right 16-bit audio samples laid out in the high and low 16 bits of 32-bit words, that the output is garbled - I guess I just do not know what format Windows Audio is expecting here.
Or is my WAVEFORMATEX in error somehow?
Or could someone point to where this might be documented? The Microsoft docs go into excruciating detail on header file contents without actually explaining things like stream layouts at all.
Thanks in advance for any helpful advice / replies.
Aha! I finally found:
https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/ksmedia/ns-ksmedia-waveformatextensible
and
https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/ksmedia/ns-ksmedia-ksaudio_channel_config
which do answer the question - I guess if I am sending 2 channels in one frame I have to use 12 bytes per channel.
OK, the last answer wasn't really an answer - here is a better attempt -
To save anyone else the headaches I have been through the last few days,
here is the corrected 'pwfx' waveOutOpen parameter:
const WAVEFORMATEXTENSIBLE
_wex_ext_ =
{ .Format =
{ .wFormatTag = WAVE_FORMAT_EXTENSIBLE
, .nChannels = 2
, .nSamplesPerSec = 8000
, .nAvgBytesPerSec= 32000
, .nBlockAlign = 4
, .wBitsPerSample = 16
, .cbSize = sizeof(WAVEFORMATEXTENSIBLE) -
sizeof(WAVEFORMATEX)
}
, .Samples = {16}
, .dwChannelMask = (SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT)
, .SubFormat = DEFINE_WAVEFORMATEX_GUID(WAVE_FORMAT_PCM)
};
So if that is used as the structure to which 'pwfx' points :
switch
( r =
waveOutOpen
( &aoh // Audio Output Handle
, WAVE_MAPPER
, ((WAVEFORMATEX*)&_wex_ext_)
, ((DWORD_PTR)waveOutProc) // Callback function on WOM_DONE
, ((DWORD_PTR)0UL) // no user parameter
, WAVE_FORMAT_DIRECT | CALLBACK_FUNCTION // do not modify audio data
)
)
{case MMSYSERR_NOERROR:
break;
case MMSYSERR_ALLOCATED:
ok = false;
emsg = "Specified resource is already allocated.";
break;
case MMSYSERR_BADDEVICEID:
ok = false;
emsg = "Specified device identifier is out of range.";
break;
case MMSYSERR_NODRIVER:
ok = false;
emsg = "No device driver is present.";
break;
case MMSYSERR_NOMEM:
ok = false;
emsg = "Unable to allocate or lock memory.";
break;
case WAVERR_BADFORMAT:
ok = false;
emsg = "Attempted to open with an unsupported waveform-audio format.";
break;
case WAVERR_SYNC:
ok = false;
emsg = "The device is synchronous but waveOutOpen was called without using the WAVE_ALLOWSYNC flag.";
break;
default:
ok = false;
if ( 0 == waveOutGetErrorText(r, ((u8_t*) &pcm[0].l[0]), 1024 ))
{ u16_t len = 1024;
s8_t *str = ((s8_t*)&pcm[0].l[0]);
utf8str( ((u16_t*)&pcm[0].l[0]), wcslen((U16_t*)&pcm[0].l[0]), &str, &len);
emsg = ((const char*) &pcm[0].l[0]);
}
break;
}
Then either of the two SINGLE CHANNEL formats played by 2 processes:
Bit 31 ........ Bit 16 | Bit 15 ......... Bit 0
  <16-bit LE PCM L>    |    0 ... 0   # all zeros, OR:
Bit 31 ........ Bit 16 | Bit 15 ......... Bit 0
       0 ... 0         |  <16-bit LE PCM R>
works fine, and mixes so that "PCM L" plays on the LEFT channel and "PCM R" plays on the RIGHT channel.
But I have had NO success playing this with a single process:
Bit 31 ........ Bit 16 | Bit 15 ......... Bit 0
  <16-bit LE PCM L>    |  <16-bit LE PCM R>
The same occurs when playing the stream with ALSA (alsa-lib) on Linux - so it is not a Windows thing.
On Linux, with ALSA's S16_LE format,
I have tried playing the same streams, in one process, like:
Bit 31 ........ Bit 16 | Bit 15 ......... Bit 0
  <16-bit LE PCM L>    |    0 ... 0   # all zeros
       0 ... 0         |  <16-bit LE PCM R>
I have also tried:
Bit 31 ........ Bit 16 | Bit 15 ......... Bit 0
       0 ... 0         |  <16-bit LE PCM L>
       0 ... 0         |  <16-bit LE PCM R>
and:
Bit 31 ........ Bit 16 | Bit 15 ......... Bit 0
  <16-bit LE PCM L>    |    0 ... 0
  <16-bit LE PCM R>    |    0 ... 0
But these do not work either (the stream is full of clicks & glitches, and the signals seem to alternate between left & right and are barely audible). Of course, how could that work if I've specified the S16_LE format, which expects 16-bit frames, and I'm writing 32-bit frames?
So I'm really stuck as to what the exact interleaving byte format should be.
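For comparison, here is a minimal sketch of the ALSA side (the function name and buffer are illustrative, not from my code): with SND_PCM_FORMAT_S16_LE and 2 channels, snd_pcm_writei() expects interleaved frames of two consecutive int16_t samples, left then right, and its last argument counts frames, not samples or bytes:

#include <alsa/asoundlib.h>

int play_stereo(const int16_t *interleaved, snd_pcm_uframes_t frames)
{
    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return -1;
    /* S16_LE, interleaved access, 2 channels, 8000 Hz,
       allow resampling, 0.5 s latency. */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 8000, 1, 500000) < 0)
        return -1;
    snd_pcm_writei(pcm, interleaved, frames); /* one frame = one L,R pair */
    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}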
On Windows, the code doing the interleaving looks like:
typedef struct pcm_frame_s
{ U16_t l[320], r[320], lr[640];
} PCMFrame_t;
static
PCMFrame_t pcm[16] = {0};
...
// pcmA & pcmB are pointers to pcm[i].l[0] & pcm[i].r[0] :
if ( pcmA && pcmB )
{ register U8_t
np;
register U16_t
*ppcm =&pcm[i].lr[0];
for( np = 0
, pcmA=&pcm[i].l[0]
, pcmB=&pcm[i].r[0]
; np < 160
; ++np, ++ppcm, ++pcmA, ++pcmB
)
{ *ppcm = *pcmA ? *pcmA : pcm_slnc[np];
ppcm += 1;
*ppcm = *pcmB ? *pcmB : pcm_slnc[np];
} // pcm_slnc is a pre-computed "Comfort Noise" block
}else
{ register U8_t
np;
register U16_t
*ppcm = &pcm[i].lr[0]
, *pspcm = pcmA ? pcmA : pcmB;
if(left) // user has chosen to put 1st stream on left channel
{
for(np=0; np < 160; ++np, ++ppcm, ++pspcm)
{ *ppcm = *pspcm;
ppcm += 1;
*ppcm = pcm_slnc[np];
}
} else
{
for(np=0; np < 160; ++np, ++ppcm, ++pspcm)
{ *ppcm = pcm_slnc[np];
ppcm += 1;
*ppcm=*pspcm;
}
}
}
while ( (WHDR_PREPARED | WHDR_DONE)
!= (wh[i].dwFlags & (WHDR_PREPARED | WHDR_DONE))
)
{ if (!WaitOnAddress
( &(wh[i].dwFlags)
, &playingDwFlags
, sizeof(u32_t)
, INFINITE
)
)
{ say(FMT("WaitOnAddress failed in INFINITE mode."));
ok = false;
break;
}
}
wh[i].dwFlags &= ~WHDR_DONE;
switch
( r = waveOutWrite(aoh, &wh[i], sizeof(wh[i])) )
{case MMSYSERR_NOERROR:
...
I think I MUST specify 32 bits per channel, and use a 32-bit frame size?
So there is no way I can interleave two 16-bit left | right channel PCM values next to each other without using a 32-bit stream format?
I am on the verge of giving up and just using 2 processes and the mixer, which strangely DOES honor the "<left 16 bits>|0" (left process) and "0|<right 16 bits>" (right process) channel formats and sends them to the left / right speakers as expected.
But is there really no way to specify two 16-bit channels in a 32-bit word?
Strange.
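For reference, both WinMM and ALSA expect interleaved 16-bit stereo as frames of two consecutive little-endian int16 samples, channel 0 (left) first; viewed as a 32-bit little-endian word, that puts the left sample in the LOW 16 bits, the opposite of the layouts diagrammed above. A minimal sketch of that interleaving (the function name and buffers are illustrative):

#include <stdint.h>
#include <stddef.h>

/* Interleave two mono 16-bit buffers into the frame layout that
   2-channel 16-bit PCM expects: L0 R0 L1 R1 ..., each sample
   little-endian on x86. out must hold 2 * frames samples. */
void interleave_stereo(const int16_t *left, const int16_t *right,
                       int16_t *out, size_t frames)
{
    for (size_t i = 0; i < frames; ++i) {
        out[2 * i]     = left[i];   /* channel 0: front left  */
        out[2 * i + 1] = right[i];  /* channel 1: front right */
    }
}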

Sending a byte[] over a Java Card APDU

I send a byte[] from the host application to the Java Card applet. But when I try to retrieve it as a byte[] via the expression buffer[ISO7816.OFFSET_CDATA], I am told that I cannot convert byte to byte[]. How can I send a byte[] via a command APDU from the host application and retrieve it as a byte[] on the other end (the Java Card applet)? It appears buffer[ISO7816.OFFSET_CDATA] returns a byte. See my comments for where the error occurs.
My idea is as follows:
The host application sends a challenge as a byte[] to be signed by the Java Card applet. Note that the signature requires the challenge to be a byte[]. The Java Card applet signs as follows:
private void sign(APDU apdu) {
    if (!pin.isValidated()) ISOException.throwIt(SW_PIN_VERIFICATION_REQUIRED);
    else {
        byte[] buffer = apdu.getBuffer();
        byte numBytes = buffer[ISO7816.OFFSET_LC];
        byte byteRead = (byte) (apdu.setIncomingAndReceive());
        if ((numBytes != 20) || (byteRead != 20))
            ISOException.throwIt(ISO7816.SW_WRONG_LENGTH);
        byte[] challenge = buffer[ISO7816.OFFSET_CDATA]; // error point: cannot convert from byte to byte[]
        byte[] output = new byte[64];
        short length = 64;
        short x = 0;
        Signature signature = Signature.getInstance(Signature.ALG_RSA_SHA_PKCS1, false);
        signature.init(privKey, Signature.MODE_SIGN);
        short sigLength = signature.sign(challenge, offset, length, output, x); // challenge must be a byte[]
        // This sequence of three methods sends the data contained in
        // 'output' with offset '0' and length 'output.length'
        // to the host application.
        apdu.setOutgoing();
        apdu.setOutgoingLength((short) output.length);
        apdu.sendBytesLong(output, (short) 0, (short) output.length);
    }
}
The challenge is sent by the host application as shown below:
byte[] card_signature = null;
SecureRandom random = SecureRandom.getInstance("SHA1PRNG");
byte[] bytes = new byte[20];
random.nextBytes(bytes);
CommandAPDU challenge;
ResponseAPDU resp3;
challenge = new CommandAPDU(IDENTITY_CARD_CLA, SIGN_CHALLENGE, 0x00, 0x20, bytes);
resp3 = c.transmit(challenge);
if (resp3.getSW() == 0x9000) {
    card_signature = resp3.getData();
    String s = DatatypeConverter.printHexBinary(card_signature);
    System.out.println("signature: " + s);
} else System.out.println("Challenge signature error " + resp3.getSW());
Generally, you send bytes over the APDU interface. A Java or Java Card byte[] is a construct that can hold those bytes. This is where the APDU buffer comes in: it is the byte array that holds the bytes sent over the APDU interface - or at least a portion of them, after calling setIncomingAndReceive().
The challenge therefore is already within the APDU buffer; instead of calling:
short sigLength = signature.sign(challenge, offset,length, output, x);
you can therefore simply call:
short sigLength = signature.sign(buffer, apdu.getOffsetCdata(), CHALLENGE_SIZE, buffer, START);
where CHALLENGE_SIZE is 20 and START is simply zero.
Then you can use:
apdu.setOutgoingAndSend(START, sigLength);
to send back the signed challenge.
If you need to keep the challenge for a later stage, you should create a byte array in RAM using JCSystem.makeTransientByteArray() during construction of the applet and then use Util.arrayCopy() to move the byte values into that challenge buffer. However, since the challenge is generated by the off-card system, there doesn't seem to be any need for this; the off-card system should keep the challenge, not the card.
You should not use ISO7816.OFFSET_CDATA anymore; it will not return the correct result if you use larger key sizes that generate larger signatures and therefore require the use of extended length APDUs.
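Putting this together, a minimal sketch of the corrected sign() method (CHALLENGE_SIZE, START, SW_PIN_VERIFICATION_REQUIRED, and privKey are carried over from the snippets above; apdu.getOffsetCdata() requires Java Card 3.0 or later):

private static final short CHALLENGE_SIZE = 20;
private static final short START = 0;

private void sign(APDU apdu) {
    if (!pin.isValidated()) ISOException.throwIt(SW_PIN_VERIFICATION_REQUIRED);
    byte[] buffer = apdu.getBuffer();
    short bytesRead = apdu.setIncomingAndReceive();
    if (bytesRead != CHALLENGE_SIZE) ISOException.throwIt(ISO7816.SW_WRONG_LENGTH);
    // Ideally create the Signature instance once, in the applet constructor.
    Signature signature = Signature.getInstance(Signature.ALG_RSA_SHA_PKCS1, false);
    signature.init(privKey, Signature.MODE_SIGN);
    // Sign directly from the APDU buffer and write the signature back into it.
    short sigLength = signature.sign(buffer, apdu.getOffsetCdata(), CHALLENGE_SIZE, buffer, START);
    apdu.setOutgoingAndSend(START, sigLength);
}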

Unknown error (6C 15) with setOutgoingLength in Java Card 2.2.1

I wrote code for Java Card 2.2.1 and I am testing it with JCIDE.
I get an error in the method setOutgoingLength():
public void getoutput(APDU apdu)
{
    byte[] buffer = apdu.getBuffer();
    byte hello[] = {'H','E','L','L','O',' ','W','O','R','L','D',' ','J','A','V','A',' ','C','A','R','D'};
    short le = apdu.setOutgoing();
    short totalBytes = (short) hello.length;
    Util.arrayCopyNonAtomic(hello, (short) 0, buffer, (short) 0, totalBytes);
    apdu.setOutgoingLength(totalBytes);
    apdu.sendBytes((short) 0, (short) hello.length);
}
6CXX means that your Le is not equal to the correct length of the response data (XX is the length of the correct response data). 6C15 specifically means that the correct Le to use is 0x15.
What happened is that your Le field was 0x00 (which the card interprets as 256 in decimal), but you passed totalBytes, which has a value of 0x15 (21 in decimal - the length of "HELLO WORLD JAVA CARD"), to apdu.setOutgoingLength(), and that is not equal to 256.
The correct APDU to send is 00 40 00 00 15
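For illustration, a minimal host-side sketch using javax.smartcardio that sends that corrected APDU (the CLA/INS bytes 00 40 are taken from the APDU above; channel is assumed to be an open CardChannel):

import javax.smartcardio.CardChannel;
import javax.smartcardio.CommandAPDU;
import javax.smartcardio.ResponseAPDU;

// Case-2 APDU: no command data, Le = 0x15 (21 expected response bytes).
ResponseAPDU resp = channel.transmit(new CommandAPDU(0x00, 0x40, 0x00, 0x00, 0x15));
if (resp.getSW() == 0x9000) {
    System.out.println(new String(resp.getData())); // "HELLO WORLD JAVA CARD"
}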

What does this DTrace script output mean?

I am tracing DTrace probes in my restify.js application (restify is an HTTP server framework for node.js that provides DTrace support). I am using the sample DTrace script from the restify documentation:
#!/usr/sbin/dtrace -s
#pragma D option quiet
restify*:::route-start
{
track[arg2] = timestamp;
}
restify*:::handler-start
/track[arg3]/
{
h[arg3, copyinstr(arg2)] = timestamp;
}
restify*:::handler-done
/track[arg3] && h[arg3, copyinstr(arg2)]/
{
#[copyinstr(arg2)] = quantize((timestamp - h[arg3, copyinstr(arg2)]) / 1000000);
h[arg3, copyinstr(arg2)] = 0;
}
restify*:::route-done
/track[arg2]/
{
#[copyinstr(arg1)] = quantize((timestamp - track[arg2]) / 1000000);
track[arg2] = 0;
}
And the output is:
use_restifyRequestLogger
value ------------- Distribution ------------- count
-1 | 0
0 |######################################## 2
1 | 0
use_validate
value ------------- Distribution ------------- count
-1 | 0
0 |######################################## 2
1 | 0
pre
value ------------- Distribution ------------- count
0 | 0
1 |#################### 1
2 |#################### 1
4 | 0
handler
value ------------- Distribution ------------- count
128 | 0
256 |######################################## 2
512 | 0
route_user_read
value ------------- Distribution ------------- count
128 | 0
256 |######################################## 2
512 | 0
I was wondering what the value field is - what does it mean?
Why is there 128/256/512, for example? I guess it means the time/duration, but it is in a strange format - is it possible to show milliseconds, for example?
The output is a histogram. You are getting a histogram because you are using the quantize function in your D script. The DTrace documentation says the following on quantize:
A power-of-two frequency distribution of the values of the specified expressions. Increments the value in the highest power-of-two bucket that is less than the specified expression.
The 'value' column is the result of (timestamp - track[arg2]) / 1000000, where timestamp is the current time in nanoseconds, so the value shown is the duration in milliseconds.
Putting this all together, the route_user_read result graph is telling you that you had 2 requests that took between 256 and 512 milliseconds (the row labeled 256 counts values from 256 up to, but not including, 512).
This output is useful when you have a lot of requests and want to get a general sense of how your server is performing (you can quickly identify a bi-modal distribution for example). If you just want to see how long each request is taking, try using the printf function instead of quantize.
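For example, the route-done clause could be rewritten with printf to log each request's duration individually (a sketch that keeps the same probe and track array as the script above):

restify*:::route-done
/track[arg2]/
{
    printf("%s took %d ms\n", copyinstr(arg1),
        (timestamp - track[arg2]) / 1000000);
    track[arg2] = 0;
}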

Modbus: convert hex value to dword in nodejs

Specification given in the manual of a meter which uses Modbus as its communication protocol:
“dword” refers to 32-bit unsigned integer using two data addresses and 4 bytes
of memory with high word at the front and low word at the end, it varies from 0
to 4294967295. rx = high word *65536+low word
I have this hexValue: 00003ef5,
where the high word = 0000
and the low word = 3ef5,
so rx would be 3ef5, which converted to decimal gives 16,117.
For now I have used this:
var rx = hexValue.substr(0,4)*65536 + hexValue.substr(4,8);
console.log(parseInt(rx,16));
Is there a way to convert my hex value directly to a dword using the Node.js Buffer library, or any other method better than this?
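One approach, sketched under the assumption that hexValue is always an 8-character big-endian hex string as in the manual's description: parse it into a 4-byte Buffer and read it as one unsigned 32-bit big-endian integer. Note that the snippet above only works when the high word is 0000 - hexValue.substr(0,4)*65536 coerces the string to a number, but the following + then concatenates strings, so a non-zero high word produces garbage; parse both halves with parseInt(..., 16) before doing arithmetic.

const hexValue = '00003ef5';

// Buffer approach: high word first (big-endian), per the manual.
const rx = Buffer.from(hexValue, 'hex').readUInt32BE(0);
console.log(rx); // 16117

// Equivalent arithmetic, matching "rx = high word * 65536 + low word":
const high = parseInt(hexValue.substr(0, 4), 16);
const low = parseInt(hexValue.substr(4, 4), 16);
console.log(high * 65536 + low); // 16117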
