How can a system like Tesla’s AutoPilot keep up with constantly changing requests from multiple processes? [closed] - multithreading

As a software developer, I am trying to understand how a system such as Tesla's AutoPilot (a self-driving car system) can possibly work fast and efficiently enough, and operate consistently and flawlessly with such precision, given all the ongoing actions it needs to account for...
In a car driving at 65 MPH, if a deer runs out in front of the car, the system immediately makes adjustments to protect the vehicle from a crash - all while keeping up with every other sensor request constantly firing off at the same time for possible actions, on a sub-millisecond scale, without skipping a beat.
How is all of that kept in sync? And how does the processing report back so quickly that the system can respond almost instantaneously (without getting backed up with requests)?

I don't know anything about Tesla code, but I have read other real-time code and analysed time slips in it. One basic idea is that if you check something every millisecond, you will always respond to a change within a millisecond. The simplest possible real-time system has a "cyclic executive" built around a repeating schedule that tells it what to do when, worked out so that in all possible cases everything that has to be dealt with is dealt with within its deadline. Traditionally you are worrying about CPU time here, but not necessarily. The system I looked at was most affected by the schedule for a serial bus called a 1553 (https://en.wikipedia.org/wiki/MIL-STD-1553) - there almost wasn't enough time to get everything transmitted and received on time.
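To make the cyclic executive idea concrete, here is a minimal sketch (not Tesla's code, obviously - the task names and the 1 ms minor frame are invented for illustration):
// Minimal cyclic-executive sketch: a repeating 1 ms "minor frame".
// Task names and timings are invented for illustration only.
public class CyclicExecutive {
    static final long FRAME_NANOS = 1_000_000; // 1 ms minor frame

    public static void main(String[] args) throws InterruptedException {
        long next = System.nanoTime();
        int frame = 0;
        while (true) {
            readSensors();                          // every frame
            if (frame % 10 == 0) runControlLoop();  // every 10 ms
            if (frame % 100 == 0) logTelemetry();   // every 100 ms
            frame++;
            next += FRAME_NANOS;
            long sleep = next - System.nanoTime();
            if (sleep > 0) Thread.sleep(sleep / 1_000_000, (int) (sleep % 1_000_000));
            // If sleep <= 0 we overran the frame - in a real system that is
            // exactly the "missed deadline" case you must design out.
        }
    }

    static void readSensors()    { /* placeholder */ }
    static void runControlLoop() { /* placeholder */ }
    static void logTelemetry()   { /* placeholder */ }
}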
This is a bit too simple because it doesn't cope with rare events that have to be dealt with really quickly, such as responses to interrupts. Clever schemes for interrupt handling don't have as much of an advantage as you would expect, because there is often a rare worst case that makes the clever scheme underperform a cyclic executive, and real-time code has to work in the worst case. In practice, though, you do need something with interrupt handlers, high-priority processes that must be run on demand, and low-priority processes that can be ignored when other stuff needs to make deadlines but will be run otherwise. There are various schemes and methodologies for arguing that these more complex systems will always make their deadlines. One of the best known is https://en.wikipedia.org/wiki/Rate-monotonic_scheduling. See also https://en.wikipedia.org/wiki/Earliest_deadline_first_scheduling.
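For example, the classic Liu and Layland test for rate-monotonic scheduling guarantees that n periodic tasks all meet their deadlines if total CPU utilisation stays below n*(2^(1/n) - 1). A quick schedulability check (the task periods and execution times below are made up) might look like:
// Rate-monotonic schedulability check using the Liu & Layland bound.
// The task periods and worst-case execution times below are invented.
public class RmsCheck {
    public static void main(String[] args) {
        double[] period = { 5.0, 20.0, 100.0 };  // ms
        double[] wcet   = { 1.0,  4.0,  10.0 };  // worst-case execution time, ms

        double utilisation = 0;
        for (int i = 0; i < period.length; i++) utilisation += wcet[i] / period[i];

        int n = period.length;
        double bound = n * (Math.pow(2.0, 1.0 / n) - 1);  // ~0.78 for n = 3

        System.out.printf("U = %.3f, bound = %.3f -> %s%n", utilisation, bound,
                utilisation <= bound ? "guaranteed schedulable under RMS"
                                     : "not proven by this test (may still be schedulable)");
    }
}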
An open source real time operating system that has seen real life use is https://en.wikipedia.org/wiki/RTEMS.

Related

How to make sure not to miss an event [closed]

Say I have a program that does the following:
wait for key press
Once key pressed, do complex query (takes 10 seconds) and print the result
repeat
Now if my key presses are 10 seconds apart, this would not be a problem. How do I handle key presses really close together? Even worse, keys pressed at the exact same time?
Is this information bound to be lost?
I know threading might be an option, but what is the general way of doing this?
Is it possible to just store every key pressed even though other code is running and be able to tend to it later?
Interrupts. Universally, computers provide a mechanism for peripherals to request the attention of a CPU by asserting an Interrupt Request Signal. The CPU, when it is ready to accept interrupts, responds by saving minimal state somewhere, then executing code which can query the peripheral, possibly accepting data (keypress) from it.
If you are using an OS this is all hidden by the kernel, which typically exposes some mechanisms for you to choose how you want to deal with it:
Queue up the keypresses and process them later (see the sketch at the end of this answer). That way, if you want to run queries 1, 3, 5 in that order, you can press those keys in succession and go for a smoke while the long processing occurs.
Discard the lookahead keypresses, thus demanding that the user interact with a lousy UI. Search for "homer simpson work from home" to see how to work around this.
If you are using an OS, you might need to look up various ioctls to enable this behaviour, use a UI package similar to curses, or something else.
If you aren't using an OS, your job is both tougher and easier: you have to write the code to talk to the keyboard, but implementing the policy is a tenth of the work of figuring out some baroque UI library.
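To make option 1 concrete, here is a minimal sketch using a blocking queue: one thread stands in for the keyboard driver enqueueing events, another dequeues them and runs the slow query, so nothing is lost even when presses arrive faster than they can be processed. The key codes and the 10-second "query" are placeholders.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of option 1: buffer keypresses and process them later.
public class KeyQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> presses = new ArrayBlockingQueue<>(64);

        // Consumer: takes one keypress at a time and runs the slow query.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    int key = presses.take();   // blocks until a key is available
                    Thread.sleep(10_000);       // stand-in for the 10-second query
                    System.out.println("Result for key " + key);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        // Producer: stands in for the interrupt/driver side delivering keypresses.
        for (int key : new int[] {1, 3, 5}) {
            presses.put(key);                   // never lost, just queued
        }
        // (The worker keeps running; a real program would add a shutdown path.)
    }
}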

Is it possible to extract instruction specific energy consumption in a program? [closed]

What I mean is: given a source code file, is it possible to extract energy consumption levels for a particular code block, or even a single instruction, using a tool like perf?
Use jRAPL, which is a framework for profiling Java programs running on CPUs.
For example, the following code snippet attempts to measure the energy consumption of a code block; the consumed energy is the difference between the readings taken at the beginning and at the end:
double beginning = EnergyCheck.statCheck(); // energy reading before the block
doWork();                                   // the code block being measured
double end = EnergyCheck.statCheck();       // energy reading after the block
System.out.println(end - beginning);        // energy consumed by doWork()
The detailed paper on this framework, "Data-Oriented Characterization of Application-Level Energy Optimization", is at http://gustavopinto.org/lost+found/fase2015.pdf
There are tools for measuring power consumption (see #jww's comment for links), but they don't even try to attribute consumption to specific instructions the way perf record can statistically sample event -> instruction correlations.
You can get an idea by running a whole block of the same instruction, like you'd do when trying to microbenchmark the throughput or latency of an instruction. Divide energy consumed by number of instructions executed.
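As a rough sketch of that approach (the readEnergyJoules() helper below is a placeholder you would have to wire up to a real energy counter, e.g. the jRAPL call from the other answer or a RAPL sysfs reader): run a long block of identical cheap operations and divide the energy delta by the iteration count.
// Rough energy-per-operation estimate: measure a long block of identical
// cheap operations and divide by the iteration count. Note this includes
// loop overhead, not just the "interesting" operation.
public class EnergyPerOp {
    public static void main(String[] args) {
        final long iterations = 1_000_000_000L;

        double before = readEnergyJoules();
        long x = 1;
        for (long i = 0; i < iterations; i++) {
            x &= i;                        // stand-in for a cheap ALU operation
        }
        double after = readEnergyJoules();

        System.out.println(x);             // keep the loop from being optimised away
        System.out.printf("~%.3e J per iteration (loop overhead included)%n",
                (after - before) / iterations);
    }

    // Placeholder: plug in whatever cumulative energy counter you have
    // (e.g. jRAPL's EnergyCheck.statCheck() from the other answer, or an
    // RAPL MSR/sysfs reader). Assumed to return joules.
    static double readEnergyJoules() {
        throw new UnsupportedOperationException("wire this to a real energy counter");
    }
}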
But a significant fraction of CPU power consumption is outside the execution units, especially for out-of-order CPUs running relatively cheap instructions (like scalar ADD / AND), or with different memory subsystem behaviour triggered by different access patterns (like hardware prefetching).
Different patterns of data dependencies and latencies might matter. (Or maybe not, maybe out-of-order schedulers tend to be constant power regardless of how many instructions are waiting for their inputs to be ready, and setting up bypass forwarding vs. reading from the register file might not be significant.)
So a power or energy-per-instruction number is not directly meaningful, mostly only relative to a long block of dependent AND instructions or something. (Should be one of the lowest-power instructions, probably fewer transistors flipping inside the ALU than with ADD.) That's a good baseline for power microbenchmarks that run 1 instruction or uop per clock, but maybe not a good baseline for power microbenches where the front-end is doing more or less work.
You might want to investigate how dependent AND vs. independent NOP or AND instructions affect energy per time or energy per instruction. (i.e. how does power outside the execution units scale with instructions-per-clock and/or register read / write-back.)

How is machine language run? [closed]

This question does not really relate to any programming language specifically, it relates to, I think, EVERY programming language out there.
So, the developer enters code into an IDE or something of the sort. The IDE turns that, directly or indirectly (maybe there are many steps involved: A turns it into B, B into C, C into D, etc.), into machine language (which is just a bunch of numbers). How is machine language interpreted and run? I mean, doesn't code have to come down to some mechanical thing in the end, or how would it be run? If chips run the code, what runs the chips? And what runs that? And what runs that? On and on and on.
There is nothing really mechanical about it - the way a computer works is electrical.
This is not a complete description - that would take a book. But it is the basis of how it works.
The basis of the whole thing is the diode and the transistor. A diode or transistor is made from a piece of silicon with some impurities that can be made to conduct electricity sometimes. A diode only allows electricity to flow in one direction, and a transistor only allows electricity to flow in one direction in an amount proportional to the electricity provided at the "base". So a transistor acts like a switch, but it is turned on and off using electricity instead of something mechanical.
So when a computer loads a byte from memory, it does so by turning on individual wires for each bit of the address, and the memory chip turns on the wires for each data bit depending on the value stored in the location designated by those address wires.
When a computer loads the bytes containing an instruction, it then decodes the instruction by turning on individual wires that control other parts of the CPU:
If the instruction is arithmetic, then one wire may determine which registers are connected to the arithmetic logic unit (ALU), while other wires determine whether the ALU adds or subtracts, and another may determine whether or not it shifts left.
If the instruction is a store, then the wires that get turned on are the address lines, the wires that determine which register is attached to the data lines, and the line that tells the memory to store the value.
The way these individual wires are turned on and off is via this huge collection of diodes and transistors, but to make designing circuits manageable, these diodes and transistors are clumped into groups that are standardized components: logic gates like AND, OR and NOT gates. These logic gates have one or two wires coming in and one coming out, with a bunch of diodes and transistors inside. Here is an electrical schematic showing how the diodes and transistors can be wired up to make a gate (an exclusive-OR gate, in this case): http://www.interfacebus.com/exclusive-or-gate-internal-schematic.png
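To see how gates compose into something arithmetic, here is a small software simulation (Java, purely for illustration) of a half adder: an XOR gate gives the sum bit and an AND gate gives the carry bit when adding two one-bit numbers.
// Simulating logic gates in software to show how they compose into arithmetic.
// A half adder adds two one-bit numbers: XOR gives the sum bit, AND the carry bit.
public class HalfAdder {
    static boolean and(boolean a, boolean b) { return a && b; }
    static boolean or (boolean a, boolean b) { return a || b; }
    static boolean not(boolean a)            { return !a; }
    static boolean xor(boolean a, boolean b) { return or(and(a, not(b)), and(not(a), b)); }

    public static void main(String[] args) {
        boolean[] bits = { false, true };
        for (boolean a : bits) {
            for (boolean b : bits) {
                boolean sum   = xor(a, b);
                boolean carry = and(a, b);
                System.out.printf("%d + %d = carry %d, sum %d%n",
                        a ? 1 : 0, b ? 1 : 0, carry ? 1 : 0, sum ? 1 : 0);
            }
        }
    }
}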
Once you have the abstraction level of logic gates, designing a CPU becomes a much more manageable job. Here is an example of someone who built a CPU using just a bunch of logic gate chips: http://cpuville.com
Turns out there is already a book! I just found one (with an accompanying website, videos and course materials) on how to make a computer from scratch. Have a look at this: http://nand2tetris.org/
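If it helps to see the fetch/decode/execute idea in miniature, here is a toy interpreter (the four opcodes are invented; real instruction sets are far richer) showing that "running machine language" boils down to: read a number, switch on it, and touch registers or memory accordingly.
// Toy fetch-decode-execute loop for an invented 4-instruction machine code.
// Opcodes: 0 = LOAD reg,value   1 = ADD reg,reg   2 = PRINT reg   3 = HALT
public class ToyMachine {
    public static void main(String[] args) {
        int[] program = {
            0, 0, 7,   // LOAD r0, 7
            0, 1, 35,  // LOAD r1, 35
            1, 0, 1,   // ADD  r0, r1   (r0 = r0 + r1)
            2, 0, 0,   // PRINT r0
            3, 0, 0    // HALT
        };
        int[] reg = new int[4];
        int pc = 0;                                   // program counter
        while (true) {
            int op = program[pc], a = program[pc + 1], b = program[pc + 2];
            pc += 3;                                  // fetch done, advance
            switch (op) {                             // decode + execute
                case 0:  reg[a] = b; break;
                case 1:  reg[a] += reg[b]; break;
                case 2:  System.out.println("r" + a + " = " + reg[a]); break;
                case 3:  return;
                default: throw new IllegalStateException("bad opcode " + op);
            }
        }
    }
}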

How large is the average delay from key-presses

I am currently helping someone with a reaction-time experiment, in which reaction times on the keyboard are measured. It might be important to know how much error could be introduced by the delay between the key-press and its processing in the software.
Here are some factors that I found out using google already:
The USB bus is polled at 125 Hz at minimum and 1000 Hz at maximum (depending on settings, see this link), which already means a worst-case delay of 8 ms down to 1 ms just from polling.
There might be additional keyboard buffers in Windows that could delay the keypresses further, but I do not know the logic behind those.
Unfortunately it is not possible to control the low-level logic of the experiment. The experiment is written in E-Prime, software that is often used for this kind of experiment. However, the company that offers E-Prime also offers additional hardware that it advertises for precise reaction timing, so they seem to be aware of this effect (but do not say how large it is).
Unfortunately it is necessary to use a standard keyboard, so I need to find ways to reduce the latency.
Any latency from key presses can be attributed to the debounce routine (I usually use 30 ms to be safe) and not to the processing algorithms themselves (unless you are only evaluating the first press).
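For reference, a software debounce just means ignoring edges that arrive within some window of the last accepted press. A minimal sketch (the 30 ms window is the figure from the answer above; the timestamps are whatever your input layer supplies):
// Minimal software debounce: accept a press only if at least DEBOUNCE_MS
// have elapsed since the last accepted press. Timestamps are assumed to be
// millisecond values supplied by whatever input layer you are using.
public class Debouncer {
    private static final long DEBOUNCE_MS = 30;   // window quoted above
    private boolean seenAny = false;
    private long lastAccepted;

    /** Returns true if this edge should count as a real keypress. */
    public boolean accept(long pressTimeMs) {
        if (seenAny && pressTimeMs - lastAccepted < DEBOUNCE_MS) {
            return false;             // within the window: treated as contact bounce
        }
        seenAny = true;
        lastAccepted = pressTimeMs;
        return true;
    }
}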
If you are running an experiment where millisecond timing is important you may want to use http://www.blackboxtoolkit.com/ to find sources of error.
Your needs also depend on the nature of your study. I've run RT experiments in E-Prime with a keyboard. Since any error should be consistent on average across participants, for some designs it is not a big problem. If you need to sync up the data with something else (like eye tracking or EEG), or want to draw conclusions about RT where the specific magnitude is important, then E-Prime's serial response box (or another brand, though I have had compatibility issues in the past between other brands of boxes and E-Prime) is a must.

Looking for audio tapes/cassettes containing programs for the Sinclair ZX80 PC? [closed]

OK, so back before the ice age, I recall having a Sinclair ZX80 PC (with a TV as a display and a cassette tape player as the storage device).
Obviously, the programs on cassette tapes made a very distinct sound (er... noise) when playing the tape... I was wondering if someone still had those tapes?
The reason (and the reason this Q is programming related) is that IIRC different languages made somewhat different pitched noises, but I would like to run the tape and listen myself to confirm if that was really the case...
I have the tapes but they've been stored in the garage at my parents' house and the last thirty years hasn't been kind to them.
You can get images here though: http://www.zx81.nl/dload if that's any use. Perhaps there is a tool out there for converting from the bytes back to the audio ;)
Edit: Perhaps here: http://ldesoras.free.fr/prod.html#src_ay3hacking
On the ZX80, ZX81 and ZX Spectrum, tape output is achieved by the CPU toggling the output line level between a high state and a low state. Input is achieved by having the CPU watch an input line level. This very low level of operation was one of Sir Clive's cost-saving measures; rival machines like the BBC Micro had dedicated hardware for serialisation and deserialisation of data, so the CPU would just say "output 0xfe" and the hardware would make the relevant noises and raise an interrupt when it was ready for the next byte. The BBC Micro specifically implements the Kansas City Standard, whereas the Sinclair machines in every instance use whatever ad hoc format best fitted the constraints of the machine.
The effect of that is that while almost every other machine that uses tape has tape output that sounds much the same from one program to the next by necessity, programs on a Sinclair machine could choose to use whatever encoding they wanted, which is the principle around which a thousand speed loaders were written. It's therefore not impossible that different programs would output distinctively different sounds. Some even used the symmetry between the tape input and output to do crude digital sampling, editing and playback, though they were never more than novelties for obvious reasons.
That being said, the base units of the ZX80 and ZX81 contained just 1 KB of RAM, so it's quite likely that programmers would just use the ROM routines for reading and writing data, due to space constraints if nothing else. Then the sound differences would just be on account of the characteristic data, as suggested by slugster.
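If anyone wants to hear roughly what that kind of line-toggling sounds like, here is an illustrative sketch that turns bytes into a square wave and writes a WAV file. The pulse lengths are invented and this is not the real ZX80/ZX81 encoding - for faithful audio, use the emulator-to-tape tools mentioned in the other answers.
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;

// Illustrative only: encode bytes as a square wave (short pulses for 0 bits,
// long pulses for 1 bits) and write a WAV. The pulse lengths are invented --
// this is NOT the real ZX80/ZX81 tape format, just a demo of the idea of
// toggling an output line under software control.
public class BytesToSquareWave {
    static final float SAMPLE_RATE = 44_100f;

    public static void main(String[] args) throws IOException {
        byte[] data = "HELLO".getBytes();
        ByteArrayOutputStream pcm = new ByteArrayOutputStream();

        for (byte b : data) {
            for (int bit = 7; bit >= 0; bit--) {
                boolean one = ((b >> bit) & 1) != 0;
                int halfPeriodSamples = one ? 40 : 20;     // invented pulse lengths
                for (int pulse = 0; pulse < 4; pulse++) {  // a little burst per bit
                    writeLevel(pcm, (byte) 100, halfPeriodSamples);   // line high
                    writeLevel(pcm, (byte) -100, halfPeriodSamples);  // line low
                }
            }
        }

        AudioFormat fmt = new AudioFormat(SAMPLE_RATE, 8, 1, true, false);
        byte[] samples = pcm.toByteArray();
        AudioInputStream ais = new AudioInputStream(
                new ByteArrayInputStream(samples), fmt, samples.length);
        AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File("out.wav"));
    }

    static void writeLevel(ByteArrayOutputStream out, byte level, int samples) {
        for (int i = 0; i < samples; i++) out.write(level);
    }
}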
I know these come up on auction sites like eBay quite frequently - if you want to buy them yourself. If you get someone else who owns one to listen, then you are going to get their subjective opinion :)
In any case, the language used to save it would be the secondary cause of the pitch changes - it will be related to the data. IOW you could probably create a straight binary data file that sounded very similar to a BASIC program (the BASIC would have been saved as text, as it is interpreted).
I know the thread's old but... I was playing about with something similar last night and I've got a WAV of an old ZX81 game if you're still interested? PM me and I'll post it somewhere.
You can use something like http://www.wintzx.fr/ or pick something from http://www.worldofspectrum.org/utilities.html#tzxtools to convert an emulator file to an audio file and then you can just play it on your PC. Some tools also allow you to play the file directly. Emulator files can be found at http://www.zx81.nl/files.html and many other places.
