I am implementing mutual authentication and I was wondering about random number generation:
RandomData rnd = RandomData.getInstance(RandomData.ALG_SECURE_RANDOM);
rnd.generateData(RP, (short) 0, (short) 16);
This works, of course, but according to my specification (I am again referring to Cipurse), a three-way challenge-and-response requires a random number generator on both the terminal and the PICC. This confuses me, since the call above is the only way I know of to create random data (and I would use it twice: once in the install method and once in the relevant process method).
The terminal, IFD, inspection system etc. all refer to the same thing: the system sending the commands to the card. So if you run a Java application with javax.smartcardio, you can use the Java Standard Edition SecureRandom class.
The Microsoft documentation for the WAVEOUTCAPS struct lists a number of formats that an audio device can support:
I do not see any 24-bit formats listed here, although I have confirmed that my sound card is capable of opening a 24-bit output with a call to waveOutOpen (and playing 24-bit audio files through that output).
I'm guessing that Microsoft defined additional constants somewhere for 18/20/24/32/48/64-bit output, but I can't find them. I tried searching the net and nothing came up, and I tried using Visual Studio to search for identifiers in my current namespace that start with "WAVE_FORMAT_", but did not find any additionally defined formats that way.
Is it possible to check for 18/20/24/32/48/64-bit output availability on Windows using the function waveOutGetDevCaps(), or any similar function? If so, how?
waveOutXxx is a legacy API that, generally speaking, should not be used nowadays. It is an emulation layer on top of the real audio API and does not have to support 24-bit scenarios, which did not exist back when waveOutXxx was designed. There are no new constants defined for newer formats: there are so many of them that there cannot be a separate bit for every one.
You can compose a WAVEFORMATEX structure that describes your high-bitness format (you would typically use the WAVEFORMATEXTENSIBLE variant of it) and check it against the waveOutOpen function.
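To make the approach concrete, here is a sketch of filling in such a structure. The struct definitions below are portable stand-ins mirroring the layout declared in mmreg.h (hence the trailing underscores), and the helper name make24In32Format is made up for illustration:

```cpp
#include <cstdint>

// Minimal stand-ins for the structures declared in <mmreg.h> on Windows;
// field names and layout mirror the real API, but this sketch is portable.
struct WAVEFORMATEX_ {
    uint16_t wFormatTag;       // WAVE_FORMAT_EXTENSIBLE = 0xFFFE
    uint16_t nChannels;
    uint32_t nSamplesPerSec;
    uint32_t nAvgBytesPerSec;
    uint16_t nBlockAlign;
    uint16_t wBitsPerSample;   // container size per sample
    uint16_t cbSize;           // extra bytes that follow (22 for EXTENSIBLE)
};

struct WAVEFORMATEXTENSIBLE_ {
    WAVEFORMATEX_ Format;
    uint16_t wValidBitsPerSample;  // e.g. 24 valid bits in a 32-bit container
    uint32_t dwChannelMask;
    uint8_t  SubFormat[16];        // KSDATAFORMAT_SUBTYPE_PCM GUID
};

// Describe stereo audio with 24 valid bits carried in 32-bit containers.
WAVEFORMATEXTENSIBLE_ make24In32Format(uint16_t channels, uint32_t rate) {
    WAVEFORMATEXTENSIBLE_ wfx{};
    wfx.Format.wFormatTag      = 0xFFFE;   // WAVE_FORMAT_EXTENSIBLE
    wfx.Format.nChannels       = channels;
    wfx.Format.nSamplesPerSec  = rate;
    wfx.Format.wBitsPerSample  = 32;       // container size
    wfx.Format.nBlockAlign     = channels * wfx.Format.wBitsPerSample / 8;
    wfx.Format.nAvgBytesPerSec = rate * wfx.Format.nBlockAlign;
    wfx.Format.cbSize          = 22;
    wfx.wValidBitsPerSample    = 24;       // the bits that carry audio
    wfx.dwChannelMask          = 0x3;      // front left | front right
    return wfx;
}
```

On Windows you would then pass the real structure to waveOutOpen with the WAVE_FORMAT_QUERY flag, which asks whether the device supports the format without actually opening it.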
Or, rather, use WASAPI and IAudioClient::Initialize, see Rendering a Stream for details and the way WAVEFORMATEX structure is used there.
Normally, when we write an applet containing a feature that our card does not support, the on-card verifier prevents its CAP file from being installed.
I want to know if there is any way to write an applet that can be installed on all cards, but returns a predefined error at run time on cards that do not support one of its features, at the moment that feature is invoked.
To be more clear, assume that we know all cards support the DES cryptography algorithm, and that some cards also support AES as a supplementary algorithm. Now I want to write an applet that encrypts 8 bytes of data with AES if that algorithm is available, or with DES if AES is not available. Can I do that?
The problem is that I think I can't install my applet on those cards that do not support AES.
I think you are mixing two problems:
1. Algorithm support
You can easily install your applet, which uses AES, on a card without AES. The absence of AES will cause a runtime exception the moment you try to create an instance of the cryptographic object:
Cipher.getInstance(Cipher.ALG_AES_BLOCK_128_CBC_NOPAD, false);
or
KeyBuilder.buildKey(KeyBuilder.TYPE_AES, KeyBuilder.LENGTH_AES_128);
and so on... Note that the exception is an instance of CryptoException with CryptoException.NO_SUCH_ALGORITHM as the reason code (output of getReason() method). That is how your applet can easily decide if the card supports AES. You can surround one of the lines above with try-catch during the installation and downgrade to the more basic algorithm if necessary:
Cipher cipher = null;
try {
    // trying AES
    cipher = Cipher.getInstance(Cipher.ALG_AES_BLOCK_128_CBC_NOPAD, false);
} catch (CryptoException e) {
    if (e.getReason() == CryptoException.NO_SUCH_ALGORITHM) {
        // AES missing, so trying DES instead
        cipher = Cipher.getInstance(Cipher.ALG_DES_CBC_NOPAD, false);
    }
}
You can use a similar approach for hash functions, signatures, etc.
2. Libraries
Another problem, which cannot be solved that easily, is library dependency. If your applet needs some proprietary class (for example, com.nxp.id.jcopx.UtilX, supported by NXP cards), you will not be able to install it on cards without that particular library. The only way out is to split the problematic code into two packages and decide which package to upload based on the packages already present on the card.
I have made a 24 x 15 LED matrix using an Arduino, shift registers and TLC5940s.
The Arduino Uno has a measly 2 KB of SRAM (and 32 KB of flash), so the graphics are not stored in arrays beforehand. Rather, I write algorithms that generate artistic animations from math equations.
Example code for a rainbow sine wave is:
for (int iterations = 0; iterations < times; iterations++)
{
    val += PI / 500;
    for (int col = 0; col < NUM_COLS; col++)
    {
        digitalWrite(layerLatchPin, LOW);
        shiftOut(layerDataPin, layerClockPin, MSBFIRST, colMasks[col] >> 16);
        shiftOut(layerDataPin, layerClockPin, MSBFIRST, colMasks[col] >> 8);
        shiftOut(layerDataPin, layerClockPin, MSBFIRST, colMasks[col]);
        digitalWrite(layerLatchPin, HIGH);

        Tlc.clear();
        int rainbow1 = 7 + 7 * sin(2 * PI * col / NUM_COLS_TOTAL + val);
        setRainbowSinkValue(rainbow1, k);
        Tlc.update();
    }
}
where setRainbowSinkValue() sets one of the 15 LEDs in a column to a certain colour, and val shifts the wave to the right on every iteration.
So I'm looking for simple graphics routines like this that produce cool animations without having to store everything in arrays, as 15 x 24 x RGB quickly uses up all 2 KB of RAM.
I will try to get an Arduino Mega, but let's assume that isn't an option for now.
How can I do it?
There are many effects you can get if you start to overlay simple functions like sin or cos. A classic example is the "plasma" effect, which I think is always a cool thing to watch :)
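As a sketch of what "overlaying simple functions" can look like, here is one minimal plasma-style recipe; the wave frequencies and scaling below are arbitrary tuning values I made up, not from any particular implementation:

```cpp
#include <cmath>
#include <cstdint>

// Sum a few sine waves of the pixel position and time, then map the
// result (range -3..3 here) to a 0..255 palette index. Feed the index
// into whatever colour ramp your LED driver uses.
uint8_t plasma(int x, int y, float t) {
    float v = std::sin(x * 0.25f + t)
            + std::sin(y * 0.35f + t * 0.5f)
            + std::sin((x + y) * 0.20f + t * 0.7f);
    return (uint8_t)((v + 3.0f) * (255.0f / 6.0f));  // scale -3..3 to 0..255
}
```

Each frame you advance t a little and recompute plasma(col, row, t) for every pixel, so nothing is stored between frames.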
Another way is to use noise functions to calculate the colour of your pixels. You get a lot of examples if you google for "Arduino Perlin noise" (depending on your Arduino model you might not get high frame rates, because Perlin noise requires some CPU power).
I've been working on similar graphics-style projects with the Arduino and have considered a variety of strategies to deal with the limited memory. Personally, I find algorithmic animations rather banal and generic unless they are combined with other things or directed in some way.
At any rate, the two approaches I have been working on:
- defining a custom format that packs the data as bits, then using bit-shifting to unpack it
- storing simple SVG graphics in PROGMEM and then using sprite techniques to move them around the screen (with screen wrap-around etc.). By using Boolean operations to merge multiple graphics together, it's possible to get animated layer effects and build up complexity/variety.
I only use single-colour LEDs, so things are simpler conceptually and data-wise.
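A minimal sketch of the bit-packed sprite idea for single-colour LEDs; the sprite data and helper name are made up, and on a real Arduino the array would carry the PROGMEM attribute and be read with pgm_read_byte():

```cpp
#include <cstdint>

// A 1-bit 8x8 sprite packs into just 8 bytes.
const uint8_t heart[8] = {0x66, 0xFF, 0xFF, 0xFF, 0x7E, 0x3C, 0x18, 0x00};

// Is pixel (x, y) of the sprite lit, after shifting the sprite right by
// dx columns with wrap-around (a simple horizontal scroll)?
bool spritePixel(const uint8_t* sprite, int x, int y, int dx) {
    int col = (x - dx) & 7;               // wrap within the 8-pixel row
    return (sprite[y] >> (7 - col)) & 1;  // MSB is the leftmost pixel
}
```

Layering then falls out of bitwise ops: OR the row bytes of two sprites to merge them, AND with a mask to clip, XOR to blink, all without any frame buffer in RAM.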
A good question but you're probably not going to find anything due to the nature of the platform.
You have the general idea to use algorithms to generate effects, so you should go ahead and write more crazy functions.
You could package your functions and make them available to everyone.
Also, if your setup allows it, use the serial port to communicate with a host that has more resources and can supply endless streams of patterns.
A wireless transmitter and receiver will also work for connecting to another computer.
I will answer related questions but not exactly the question you asked because I am not a graphics expert....
First of all, don't forget PROGMEM, which allows you to store data in flash memory. There is a lot more flash than SRAM, and the usual thing to do, in fact, is to store extra data in flash.
Secondly, there are compression techniques available that will reduce your memory consumption. And these "compression" techniques are natural to the kinds of tasks you are doing anyway, so the word "compression" is a bit misleading.
First of all, we observe that because human perception of light intensity is exponential (shameless link to my own answer on this topic), depending on how exactly you use the LED drivers, you need not store the exact intensity. It looks like you are using only 8 bits of intensity on the TLC5940, not the full 12. With 8 bits of LED driver intensity, you only have 8 or 9 perceptibly different intensity values (because the intensity you tell the LED driver to use is 2^perceptible_intensity). Eight different values can be stored in only three bits.

Storing three-bit chunks in bytes can be a bit of a pain, but you can still treat each "pixel" in your array as a uint16_t and store the entire colour information (3 x 3 = 9 bits). That reduces your memory consumption to 2/3 of what it was.

Furthermore, you can palettize your image: each pixel is a byte (uint8_t) that indexes a place in a palette, which could be three bytes per entry if you'd like. The palette need not be large, and, in fact, you don't have to have an explicit palette at all: your code can know how to transform a byte into a set of intensities. Then you generate the actual intensity values that the TLC5940 needs right before you shift them out.
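A sketch of the packing and the "palette in code" idea; the gamma table values below are illustrative doublings, not tuned for any real LEDs:

```cpp
#include <cstdint>

// Expand a 3-bit perceptual intensity (0..7) to the roughly exponential
// 8-bit value the LED driver wants: each step about doubles the output.
const uint8_t kGamma[8] = {0, 1, 3, 7, 15, 31, 63, 127};

// Pack three 3-bit channel intensities into one uint16_t per pixel
// (9 bits used), instead of three full bytes.
uint16_t packPixel(uint8_t r, uint8_t g, uint8_t b) {
    return (uint16_t)(((r & 7) << 6) | ((g & 7) << 3) | (b & 7));
}

// Unpack a channel and run it through the in-code "palette" right
// before shifting values out to the TLC5940.
uint8_t red(uint16_t p)   { return kGamma[(p >> 6) & 7]; }
uint8_t green(uint16_t p) { return kGamma[(p >> 3) & 7]; }
uint8_t blue(uint16_t p)  { return kGamma[p & 7]; }
```

So a full 24 x 15 frame costs 24 x 15 x 2 = 720 bytes instead of 1080, and with one palette byte per pixel it drops to 360.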
How do I disable entropy sources?
Here's a little background on what I'm trying to do. I'm building a little RNG device that talks to my PC via USB. I want it to be the only source of entropy used. I'll use rngd to add my device as a source of entropy.
Quick answer is "you don't".
Don't ever remove sources of entropy. The designers of the random number generator rigged it so any new random bits just get mixed in with the current state.
Having multiple sources of entropy can never weaken the random number generator's output, only strengthen it.
The only reason I can think of to remove a source of entropy is that it costs CPU or wall-clock time you cannot afford. I find this highly unlikely, but if it is the case, then your only option is kernel hacking. As far as hacking the kernel goes, this should be fairly simple: just comment out all the calls to the add_*_randomness() functions throughout the kernel source code (the functions themselves are found in drivers/char/random.c). You could instead comment out the bodies of those functions, but you are trying to save time in this case, and even the minuscule cost of the extra function call could be too much.
One solution is to run a separate Linux instance in a virtual machine.
An additional note, too big for a comment:
Depending on its settings, rngd can dominate the kernel's entropy pool by feeding it so much data, so often, that other sources of entropy are mostly ignored or lost. Do not do that unless you ultimately trust rngd's source of random data.
http://man.he.net/man8/rngd
I suspect you might want a fast random number generator. You could look at http://billauer.co.il/frandom.html, which implements one.
Edit: I should have read the question better.
Anyway, frandom comes with a complete tarball for the kernel module, so you might be able to learn from it how to build your own module around your USB device. Perhaps you can even have it replace /dev/urandom, so arbitrary applications would work with your device instead of /dev/urandom (of course, given enough permissions, you could just rename the device nodes and "fool" most applications).
Isn't /dev/urandom enough?
Discussions about the necessity of a faster kernel random number generator have risen and fallen since 1996 (that I know of). My opinion is that /dev/frandom is as necessary as /dev/zero, which merely creates a stream of zeroes. The common opposing opinion usually says: do it in user space.
What's the difference between /dev/frandom and /dev/erandom?
In the beginning I wrote /dev/frandom. Then it turned out that one of the advantages of this suite is that it saves kernel entropy. But /dev/frandom consumes 256 bytes of kernel random data (which may, in turn, eat some entropy) every time a device file is opened, in order to seed the random generator. So I made /dev/erandom, which uses an internal random generator for seeding. The "F" in frandom stands for "fast", and "E" for "economic": /dev/erandom uses no kernel entropy at all.
How fast is it?
Depends on your computer and kernel version. Tests consistently show 10-50 times faster than /dev/urandom.
Will it work on my kernel?
It most probably will, if your kernel is newer than 2.6.
Is it stable?
Since releasing the initial version in fall 2003, at least 100 people have tried it (probably many more) on i686 and x86_64 systems alike. Successful test reports have arrived, and zero complaints. So yes, it's very stable. As for randomness, there haven't been any complaints either.
How is random data generated?
frandom is based on the RC4 encryption algorithm, which was long considered secure and is used by several applications, including SSL (note that RC4 has since been broken and deprecated). Let's start with how RC4 works: it takes a key and generates a stream of pseudo-random bytes. The actual encryption is a XOR operation between this stream of bytes and the cleartext data stream.
Now to frandom: Every time /dev/frandom is opened, a distinct pseudo-random stream is initialized by using a 2048-bit key, which is picked by doing something equivalent to reading the key from /dev/urandom. The pseudo-random stream is what you read from /dev/frandom.
frandom is merely RC4 with a random key, just without the XOR in the end.
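For reference, the keystream generator described above can be sketched in a few lines. This is textbook RC4 (key scheduling followed by the keystream loop), not frandom's actual kernel code:

```cpp
#include <cstdint>
#include <cstddef>

// Textbook RC4: the key-scheduling algorithm (KSA) permutes S by the
// key; the pseudo-random generation algorithm (PRGA) then emits one
// keystream byte per call. frandom seeds this with /dev/urandom data.
struct Rc4 {
    uint8_t S[256];
    int i = 0, j = 0;

    Rc4(const uint8_t* key, size_t len) {
        for (int k = 0; k < 256; k++) S[k] = (uint8_t)k;
        int jj = 0;
        for (int k = 0; k < 256; k++) {   // KSA
            jj = (jj + S[k] + key[k % len]) & 255;
            uint8_t t = S[k]; S[k] = S[jj]; S[jj] = t;
        }
    }

    uint8_t next() {                      // PRGA: one keystream byte
        i = (i + 1) & 255;
        j = (j + S[i]) & 255;
        uint8_t t = S[i]; S[i] = S[j]; S[j] = t;
        return S[(S[i] + S[j]) & 255];
    }
};
```

Reading /dev/frandom is exactly this next() loop: the keystream itself, with no plaintext XORed in.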
Does frandom generate good random numbers?
Due to its origins, the random numbers can't be too bad. If they were, RC4 wouldn't be worth anything.
As for testing: Data directly "copied" from /dev/frandom was tested with the "Diehard" battery of tests, developed by George Marsaglia. All tests passed, which is considered to be a good indication.
Can frandom be used to create one-time pads (cryptology)?
frandom was never intended for crypto purposes, nor was any special thought given to attacks. But there is very little room for attacking the module, and since the module is based upon RC4, we have the following fact: Using data from /dev/frandom as a one-time pad is equivalent to using the RC4 algorithm with a 2048-bit key, read from /dev/urandom.
Bottom line: It's probably OK to use frandom for crypto purposes. But don't. It wasn't the intention.
I understand that time is an insecure seed for random number generation because it effectively reduces the size of the seed space.
But say I don't care about security. For example, say I'm doing a Monte Carlo simulation for a card game. I do, however, care about getting as close to true randomness as possible. Will time as a seed affect the randomness of my output? I would think the choice of PRNG matters more than the seed in this case.
For security purposes you obviously need a high entropy seed. And time alone cannot provide that.
For simulation purposes the quality of the seed doesn't matter much, as long as it's unique. As you noted the quality of the PRNG is more important here.
Even a PRNG in a game may need to be secure. For example, in multiplayer games a player might find out the internal state of the PRNG and use that to predict future random events, guess the opponent's cards, or get better loot...
One common pitfall of using time to seed a PRNG is that the time doesn't change very often. For example, on Windows most time-related functions only change their return value every few milliseconds, so all PRNGs created within that interval will return the same sequence.
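A minimal demonstration of the pitfall, using C++'s mt19937 rather than any particular platform API; two generators seeded in the same clock tick emit identical sequences:

```cpp
#include <random>
#include <ctime>

// Sketch: two PRNGs created "at the same time" get the same seed and
// therefore produce the same "random" sequence.
bool sameSequenceWhenSeededTogether() {
    std::time_t now = std::time(nullptr);   // 1-second resolution
    std::mt19937 a((unsigned)now);          // e.g. process 1
    std::mt19937 b((unsigned)now);          // e.g. process 2, same tick
    for (int k = 0; k < 10; k++)
        if (a() != b()) return false;       // never happens
    return true;
}
```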
Just for the sake of completeness: this paper by Matsumoto et al. nicely illustrates how important the initialization scheme (i.e. the way of choosing your seed(s)) is for simulation. It turns out that a bad initialization scheme may strongly bias the results, even though the RNG algorithm as such is rather good in principle.
If you are just running a single instance of your program, then there should not be too many problems.
However, I have seen people start multiple programs at the same time, with each program seeding from the time. In that case all the programs get the same sequence of random numbers. In particular, I have seen people seed an Apache process on each call to use a random number as a session ID, only to find that different people hitting the web server at the same time got exactly the same IDs.
Hence, if you expect to run multiple simultaneous instances of the program, using time is a very bad idea.
Suppose your program runs very fast and asks for the system time to use as a seed many times in quick succession. You could get the same time back each time, so it would end up generating the same random numbers. So even in a simulation, low entropy can be a problem.
Considering that it's not that hard to find other sources of entropy on your system, or that even your operating system can provide you with some almost-random numbers, you could use them to increase the entropy of your time-based seed.
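One way to sketch that mixing; the choice of sources here is illustrative, and std::seed_seq does the actual combining:

```cpp
#include <random>
#include <ctime>
#include <chrono>

// Fold several sources (wall time, a high-resolution counter, the OS
// entropy source) into the seed instead of using time alone, so two
// instances started in the same clock tick still diverge.
std::mt19937 makeSeededEngine() {
    std::random_device rd;                        // OS-provided entropy
    auto ticks = std::chrono::high_resolution_clock::now()
                     .time_since_epoch().count();
    std::seed_seq seq{(unsigned)std::time(nullptr),
                      (unsigned)ticks, rd()};
    return std::mt19937(seq);
}
```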