How to concatenate arrays in Java Card - javacard

Java has a lot of ways to concatenate arrays, but it seems Java Card has none of them. Is there a way?
For example, I want to concatenate these two arrays:
byte[] a = {(byte) 'P', (byte) 'K'};
byte[] b = {(byte) 'T', (byte) 'G'};
What I want:
byte[] c = {(byte) 'P', (byte) 'K', (byte) 'T', (byte) 'G'};
Is there any way?

In Java Card resources are scarce, so arrays are never concatenated for you. Array concatenation would create a new object, which means additional memory would have to be claimed.
Best practice is to only create objects, with the new operator (for persistent arrays in EEPROM/flash) or with JCSystem.makeTransientByteArray and friends (for transient memory, i.e. RAM), during installation / personalization and not during normal operation in the field.
In order to concatenate arrays you can use Util.arrayCopy() with an offset and length to copy data between already existing arrays, including the APDU buffer.
Similarly, almost all library calls working with buffers require an offset and length as well, so pre-existing arrays (buffers) can be reused - at the cost of boundary checking, which you will have to do yourself.
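For the arrays in the question, a minimal sketch might look like this (the pre-allocated destination array c and the concatenate() helper are made up for illustration):

import javacard.framework.Util;

// Sketch only: allocate the destination once, e.g. in the applet constructor,
// so the "concatenation" is just two copies into it. Names are illustrative.
private final byte[] a = { (byte) 'P', (byte) 'K' };
private final byte[] b = { (byte) 'T', (byte) 'G' };
private final byte[] c = new byte[4]; // a.length + b.length, persistent, created at install time

private void concatenate() {
    short off = 0;
    // arrayCopy returns the offset just past the last byte copied
    off = Util.arrayCopy(a, (short) 0, c, off, (short) a.length);
    off = Util.arrayCopy(b, (short) 0, c, off, (short) b.length);
    // c now holds { 'P', 'K', 'T', 'G' }
}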

No, there is no API available for this.

Related

Problems with SHA 2 Hashing and Java

I am working on following the SHA-2 cryptographic functions as stated in https://en.wikipedia.org/wiki/SHA-2.
I am examining the lines that say:
begin with the original message of length L bits
append a single '1' bit;
append K '0' bits, where K is the minimum number >= 0 such that L + 1 + K + 64 is a multiple of 512
append L as a 64-bit big-endian integer, making the total post-processed length a multiple of 512 bits.
I do not understand the last two lines. If my string is short, can its length after adding K '0' bits be 512? How should I implement this in Java code?
First of all, it should be made clear that the "string" that is talked about is not a Java String but a bit string. These algorithms are binary/bit based. The implementation will generally not handle bits but bytes. So there is a translation phase where you should see bytes instead of bits.
SHA-2 operates on blocks of 512 bits (SHA-224/256) or 1024 bits (SHA-384/512). So basically you have a 64 or 128 byte buffer that you are filling before operating on it. You could also directly cache the data in 32-bit int fields (SHA-224/256) or 64-bit long fields (SHA-384/512), as that is the word size that is operated on.
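For instance, the working state of such a streaming implementation might look like this (just a sketch, with illustrative field names rather than any particular library's):

// Assuming SHA-256: 64-byte (512-bit) blocks and 32-bit words.
private final byte[] block = new byte[64]; // the block currently being filled
private int blockOffset;                   // how many bytes of the block are filled
private final int[] h = new int[8];        // the eight 32-bit hash state words
private long byteCount;                    // total message bytes seen, needed for the final length field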
Now the padding is a relatively simple procedure, called bit padding. As it is used in big-endian mode (SHA-2 fortunately uses this instead of the little-endian mode used in SHA-3), the padding consists of a single bit set on the highest order bit of a byte, with the rest filled with zeros. That makes for a value of (byte) 0x80, which must be put in the buffer.
If you cannot add this padding byte because the buffer is full, then you will have to process that block first, and then set the first byte of the now-available buffer to (byte) 0x80. In newer Java versions you can also write (byte) 0b1_0000000 by the way, which is more explicit.
Now you simply add zeros until you have 8 or 16 bytes left, again depending on the block size used. If there isn't enough room left then fill to the end of the block, process it, and restart filling with zero bytes until you have 8 or 16 bytes left again.
Finally you have to encode the number of bits of the message in those 8 or 16 bytes you've left free. So multiply your input length by eight, and make sure you encode those bytes big-endian, the same way as you'd expect in Java, with the least significant bits as far to the right as possible. You might want to use https://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html#putLong-long- for this if you don't want to program it yourself. You can probably forget about anything over 2^56 bytes anyway, so if you use SHA-384/SHA-512 then simply set the first eight bytes of the 16-byte length field to zero.
And that's it, except that you still need to process that last block and then use as many bytes from the left as required for your particular output size.
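Putting the above together for SHA-256 (64-byte blocks, 8-byte length field), a minimal sketch of just the padding step could look like this; the pad method name is mine, and a real streaming implementation would pad the final block in place rather than copy the whole message:

import java.nio.ByteBuffer;
import java.util.Arrays;

// Sketch of SHA-224/256 style padding; returns the padded message.
static byte[] pad(byte[] message) {
    long bitLength = (long) message.length * 8;
    // one byte for 0x80 plus eight bytes for the length, rounded up to a multiple of 64
    int paddedLength = ((message.length + 1 + 8 + 63) / 64) * 64;
    byte[] padded = Arrays.copyOf(message, paddedLength); // the tail is already zero-filled
    padded[message.length] = (byte) 0x80;                 // the single '1' bit followed by zeros
    // the original length in bits as a 64-bit big-endian integer in the last 8 bytes
    ByteBuffer.wrap(padded, paddedLength - 8, 8).putLong(bitLength);
    return padded;
}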

How many actual bytes of memory does a Node.js Buffer use internally to store 1 logical byte of data?

Node.js documentation states that:
A Buffer is similar to an array of integers but corresponds to a raw memory allocation outside the V8 heap.
Am I right that all integers are represented as 64-bit floats internally in javascript?
Does it mean that storing 1 byte in Node.js Buffer actually takes 8 bytes of memory?
Thanks
A Buffer is simply an array of bytes, so the length of the buffer is essentially the number of bytes that the Buffer will occupy.
For instance, the new Buffer(size) constructor is documented as "Allocates a new buffer of size octets." Here octets clearly identifies the cells as single-byte values. Similarly buf[index] states "Get and set the octet at index. The values refer to individual bytes, so the legal range is between 0x00 and 0xFF hex or 0 and 255.".
While a buffer is absolutely an array of bytes, you may interact with it as integers or other types using the buf.read* class of functions available on the buffer object. Each of these operates on a specific number of bytes.
For more specifics on the internals, Node just passes the length through to smalloc which just uses malloc as you'd expect to allocate the specified number of bytes.

Fastest Way to Copy Buffer or C-String into a std::string

Let's say I have char buffer[64] and uint32_t length, and buffer might or might not be null terminated. If it is null terminated, the rest of the buffer will be filled with nulls. The length variable holds the length of buffer.
I would like to copy it into a std::string without extra nulls at the end of the string object.
Originally, I tried:
std::string s(buffer, length);
which copies the extra nulls when buffer is filled with nulls at the end.
I can think of:
char buffer2[128];
strncpy(buffer2, buffer, 128);
const std::string s(buffer2);
But it is kind of wasteful because it copies twice.
I wonder whether there is a faster way. I know I need to benchmark to tell exactly which way is faster...but I would like to look at some other solutions and then benchmark...
Thanks in advance.
1. If you can, I'd simply add a '\0' at the end of your buffer and then use the c-string version of the string constructor.
2. If you can't, you need to determine if there's a '\0' in your buffer, and while you're at it, you might as well count the number of characters you encounter before the '\0'. You can then use that count with the (buffer, length) form of the string constructor:
#include <string>    // std::string
#include <string.h>  // strnlen
//...
std::string s(buffer, strnlen(buffer, length));
3. If you can't do 1. and don't want to iterate over buffer twice (once to determine the length, once in the string constructor), you could do:
char last_char = buffer[length-1];
buffer[length-1] = '\0';
std::string s(buffer); //the c-string form, since we're sure there's a '\0' in the buffer now
if (last_char != '\0' && s.length() == (length-1)) {
    //with good buffer sizes, this might not need to cause reallocation of the string's internal buffer
    s.push_back(last_char);
}
I leave the benchmarking to you. It is possible that the c-string version of the constructor uses something like strlen internally anyway to avoid reallocations so there might not be much to gain from using the c-string version of the string constructor.
You can use all the canonical ways to do this.
A faster way is surely to implement a smart pointer yourself (or use something already available, such as std::shared_ptr). Each smart pointer points to the first char of the array. Each time you "copy" the array you don't do a true copy, you simply add a reference to that array.
So a "copy" takes O(1) instead of O(N).

vtkImageData from 3rd party structure

I have a volume stored as slices in C# memory. The slices may not be consecutive in memory. I want to import this data and create a vtkImageData object.
The first way I found is to use a vtkImageImporter, but this importer only accepts a single void pointer as data input it seems. Since my slices may not be consecutive in memory, I cannot hand a single pointer to my slice data.
A second option is to create the vtkImageData from scratch and use vtkImageData->GetScalarPointer() to get a pointer to its data. Then fill this using a loop. This is quite costly (although memcpy could speed things up a bit). I could also combine the copy approach with the vtkImageImport, of course.
Are these my only options, or is there a better way to get the data into a vtk object? I want to be sure there is no other option before I take the copy approach (performance heavy), or modify the low level storage of my slices so they become consecutive in memory.
I'm not too familiar with VTK for C# (ActiViz). In C++ a good and rather fast approach is to use vtkImageData->GetScalarPointer() and copy your slices manually. As you said, allocating all the memory first will increase your speed; perhaps you want to do it this more robust way (change the numbers):
vtkImageData * img = vtkImageData::New();
img->SetExtent(0, 255, 0, 255, 0, 9);
img->SetSpacing(sx , sy, sz);
img->SetOrigin(ox, oy, oz);
img->SetNumberOfScalarComponents(1);
img->SetScalarTypeToFloat();
img->AllocateScalars();
Then it is not too hard to do something like:
float * fp = static_cast<float *>(img->GetScalarPointer());
for (int i = 0; i < 256 * 256 * 10; i++) {
    fp[i] = mydata[i];
}
Another, fancier option is to create your own importer, basing its code on vtkImageImport.

To pad or not to pad - creating a communication protocol

I am creating a protocol to have two applications talk over a TCP/IP stream and am figuring out how to design a header for my messages. Using the TCP header as an initial guide, I am wondering if I will need padding. I understand that when we're dealing with a cache, we want to make sure that data being stored fits in a row of cache so that when it is retrieved it is done so efficiently. However, I do not understand how it makes sense to pad a header considering that an application will parse a stream of bytes and store it how it sees fit.
For example: I want to send over a message header consisting of a 3 byte field followed by a 1 byte padding field for 32 bit alignment. Then I will send over the message data.
In this case, the receiver will just take 3 bytes from the stream, throw away the padding byte, and then start reading message data. As I see it, he will now be storing the 3 bytes and the message data the way he wants. The whole point of byte alignment is so that it will be retrieved in an efficient manner. But if the retriever doesn't care about the padding, how will it be retrieved efficiently?
Without the padding, the retriever just takes the 3 header bytes from the stream and then takes the data bytes. Since the retriever stores these bytes however he wants, how does it matter whether or not the padding is done?
Maybe I'm missing the point of padding.
It's slightly hard to extract a question from this post, but with what I've said you guys can probably point out my misconceptions.
Please let me know what you guys think.
Thanks,
jbu
If word alignment of the message body is of some use, then by all means, pad the message to avoid other contortions. The padding will be of benefit if most of the message is processed as machine words with decent intensity.
If the message is a stream of bytes, for instance xml, then padding won't do you a whole heck of a lot of good.
As far as actually designing a wire protocol, you should probably consider using a plain text protocol with compression (including the header), which will probably use less bandwidth than any hand-designed binary protocol you could possibly invent.
I do not understand how it makes sense to pad a header considering that an application will parse a stream of bytes and store it how it sees fit.
If I'm a receiver, I might pass a buffer (i.e. an array of bytes) to the protocol driver (i.e. the TCP stack) and say, "give this back to me when there's data in it".
What I (the application) get back, then, is an array of bytes which contains the data. Using C-style tricks like "casting" and so on I can treat portions of this array as if it were words and double-words (not just bytes) ... provided that they're suitably aligned (which is where padding may be required).
Here's an example of a statement which reads a DWORD from an offset in a byte buffer:
DWORD getDword(const byte* buffer)
{
    //we want the DWORD which starts at byte-offset 8
    buffer += 8;
    //dereference as if it were pointing to a DWORD
    //(this would fail on some machines if the pointer
    //weren't pointing to a DWORD-aligned boundary)
    return *((DWORD*)buffer);
}
Here's the corresponding function in Intel assembly; note that it's a single opcode, i.e. quite an efficient way to access the data, more efficient than reading and accumulating separate bytes:
mov eax,DWORD PTR [esi+8]
One reason to consider padding is if you plan to extend your protocol over time. Some of the padding can be intentionally set aside for future assignment.
Another reason to consider padding is to save a couple of bits in length fields: if the length is always a multiple of 4 or 8, you can drop 2 or 3 bits from the length field.
One other good reason that TCP has padding (which probably does not apply to you) is it allows dedicated network processing hardware to easily separate the data from the header. As the data always starts on a 32 bit boundary, it's easier to separate the header from the data when the packet gets routed.
If you have a 3 byte header and align it to 4 bytes, then designate the unused byte as 'reserved for future use' and require the bits to be zero (rejecting messages where they are not, as malformed). That leaves you some extensibility. Or you might decide to use the byte as a version number - initially zero, and then incrementing it if (when) you make incompatible changes to the protocol. Don't let the value be 'undefined' and "don't care"; you'll never be able to use it if you start out that way.
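To illustrate that last suggestion with a small Java sketch (the field layout of a 2-byte length, 1-byte type and 1-byte version is invented for the example, not part of any real protocol):

import java.nio.ByteBuffer;

// Hypothetical 4-byte header: payload length, message type, and a version byte
// that doubles as the alignment padding (zero in the first protocol revision).
static byte[] encodeHeader(int payloadLength, byte messageType, byte version) {
    ByteBuffer header = ByteBuffer.allocate(4); // big-endian (network byte order) by default
    header.putShort((short) payloadLength);
    header.put(messageType);
    header.put(version);
    return header.array();
}

A receiver would then reject any message whose version/reserved byte holds an unexpected value instead of treating it as "don't care", exactly as described above.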
