I have a laptop whose touchpad is very sensitive to spurious, light, grazing touches of anything other than the finger being used, causing unwanted gesture input, even with the sensitivity set to low in the control panel. I can (and probably will) learn over time to hold my wrists in a manner that minimizes the problem, but as someone interested in algorithms in things like signal processing, vision, etc., I thought it might be a fun project to try to write a more intelligent filtering algorithm for touch input.
I'm not scared by the math/algorithmic aspect--but what I have zero knowledge of is how the software stack for input devices works, on what level in the stack such code would need to run, and how privileged/close to the kernel I would need to get to have access to that (and whether such a level is even sufficiently documented and accessible to make this possible). Most of the stack presumably handles touch data at "mouse level" abstraction, i.e. as a pointer x/y pair, whereas filtering to eliminate spurious touches would presumably need to act on a sort of "pixel map" of the pad with the areas registering touch "bright", before some sort of "blob detection" on this computes the pointer coordinates.
Where is this transformation ("pad image" to "pointer") performed: in the driver for the touchpad, in the OS kernel, in some userspace code, etc.? Is it even performed at all, or does the capacitive sensing circuitry directly detect only the centroid of the points of contact to begin with? (I can't find a good description of how multi-touch with capacitive sensing works even at a fundamental, physics level.) Is this the sort of thing that's only possible to modify in something like Linux, where every line of code in the whole system is modifiable, or is there a good way to "hook" this process even in OSes that are otherwise proprietary?
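To make the filtering idea concrete, here is a rough sketch of what I have in mind, assuming such a "pad image" were available to software (which it may well not be; the array values, the threshold, and the minimum-area cutoff below are all made up for illustration): threshold the capacitance map, group touched cells into blobs, discard blobs too small to be a deliberate finger, and report the centroid of what remains.

```cpp
// Toy illustration only: a hypothetical pad "image" as a small 2D array of
// capacitance values. Real hardware/drivers may never expose such a map.
#include <cstdio>
#include <queue>
#include <utility>
#include <vector>

struct Blob { double cx = 0, cy = 0; int area = 0; };

// Threshold the pad image, label connected "bright" regions with a flood fill,
// and return each region's centroid and area (in cells).
std::vector<Blob> findBlobs(const std::vector<std::vector<float>>& pad, float threshold) {
    const int h = pad.size(), w = pad[0].size();
    std::vector<std::vector<bool>> seen(h, std::vector<bool>(w, false));
    std::vector<Blob> blobs;
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (seen[y][x] || pad[y][x] < threshold) continue;
            Blob b;
            std::queue<std::pair<int, int>> q;
            q.push({y, x});
            seen[y][x] = true;
            while (!q.empty()) {                      // 4-connected flood fill
                auto [cy, cx] = q.front(); q.pop();
                b.cx += cx; b.cy += cy; ++b.area;
                const int dy[] = {-1, 1, 0, 0}, dx[] = {0, 0, -1, 1};
                for (int k = 0; k < 4; ++k) {
                    int ny = cy + dy[k], nx = cx + dx[k];
                    if (ny >= 0 && ny < h && nx >= 0 && nx < w &&
                        !seen[ny][nx] && pad[ny][nx] >= threshold) {
                        seen[ny][nx] = true;
                        q.push({ny, nx});
                    }
                }
            }
            b.cx /= b.area; b.cy /= b.area;
            blobs.push_back(b);
        }
    }
    return blobs;
}

int main() {
    // A grazing contact (single weak cell) next to a deliberate finger press.
    std::vector<std::vector<float>> pad = {
        {0.0f, 0.1f, 0.0f, 0.0f, 0.0f},
        {0.6f, 0.0f, 0.0f, 0.9f, 0.8f},
        {0.0f, 0.0f, 0.0f, 0.9f, 1.0f},
    };
    const int minArea = 3;  // assumed cutoff: reject tiny contacts as spurious
    for (const Blob& b : findBlobs(pad, 0.5f))
        if (b.area >= minArea)
            std::printf("finger at (x=%.2f, y=%.2f), area %d cells\n", b.cx, b.cy, b.area);
    return 0;
}
```

Whether anything in the stack actually has access to data at this level is exactly what I'm asking about.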
Can anyone tell me why /dev/random is preferred for security while wiping data from a hard drive?
Simple answer: /dev/random is not preferred. Both are equally secure. Use /dev/zero for easier verification; it also uses less CPU and is possibly faster.
More complete answer: for modern hard drives, platter density is such that it's impossible to obtain the residual signals from incompletely overwritten sectors that people such as Gutmann wrote about many, many years ago. As far as modern hard drives are concerned (I'd place this as any hard drive whose capacity can be measured in gigabytes or better), if it's overwritten, it's gone. End of story. So it doesn't matter what you change the data to, just that you change the data.
To add onto this: even if you wipe a hard drive completely, there may still be data left in sectors that were remapped by the hard drive's firmware. But these are relatively rare, only a very small amount of data would be contained within, and you'd need very specialized equipment to retrieve it (you'd have to edit the G-List within the System Area of the drive to get at it). Not to mention that the reason those sectors were remapped in the first place is that they were failing.
So to sum up: DoD wipes are stupid, Gutmann wipes are stupider; use /dev/zero, it's good in nearly 100% of cases. And if yours is an edge case, then you need very specialized know-how both to get at the data and to remove it.
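For what it's worth, a single-pass zero fill is exactly what dd does for you; the sketch below (POSIX C++, and the device path is a placeholder, this is not something to run against a disk you care about) shows what that one pass amounts to: write zeros until the device reports it is full, then sync.

```cpp
// Illustration only: a single-pass zero fill of a block device.
// THE PATH BELOW IS A PLACEHOLDER: writing to the wrong device destroys it.
#include <cerrno>
#include <cstdio>
#include <vector>
#include <fcntl.h>
#include <unistd.h>

int main() {
    const char* device = "/dev/sdX";              // placeholder, not a real target
    int fd = open(device, O_WRONLY);
    if (fd < 0) { std::perror("open"); return 1; }

    std::vector<char> zeros(1 << 20, 0);          // 1 MiB of zeros per write
    long long total = 0;
    for (;;) {
        ssize_t n = write(fd, zeros.data(), zeros.size());
        if (n <= 0) {                             // end of device or error: stop
            if (n < 0 && errno != ENOSPC) std::perror("write");
            break;
        }
        total += n;
    }
    fsync(fd);                                    // make sure it actually hit the disk
    close(fd);
    std::printf("wrote %lld bytes of zeros to %s\n", total, device);
    return 0;
}
```

The "easier verification" point above is also why zeros are nice: reading the device back and checking that every byte is zero is trivial, whereas verifying a random fill requires keeping the random data around.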
"thanks! so, what about usb stick?"
A USB stick is a different animal altogether: you'd need to bypass the flash controller in order to clean it out, and even a Gutmann wipe won't completely remove the data because of wear-leveling algorithms. But just like a hard drive, if you overwrite the data once, it's gone; the trick is forcing the device to actually overwrite the data.
That being said, if you have a cheap USB stick whose controller does not do wear leveling, then a single-pass zero-fill should be sufficient to remove the data. Otherwise, you're looking at custom hardware and soldering work.
SSDs should be considered USB sticks with a controller that performs wear leveling. SSDs always do wear leveling; I do not know of any exceptions to this rule. Many USB sticks do not.
How do you tell if a USB stick does wear leveling? You need to take it apart and inspect the controller chip and look up a datasheet on it.
"Would you give a source for the statement that it is "impossible to obtain signals from incompletely overwritten sectors of the drive" ? I am not talking about tests from computer magazines concerning data recovery stores, I am talking of the worst case scenario: a well-equipped government laboratory. So I really would like to know how can you guarantee that statement, preferably a scientific paper."
I'll give some justification and information regarding the analog storage of digital data on magnetic media. The following is mostly things I was taught on the job at a data recovery company, and it may be partially inaccurate in places. If so, let me know and I will correct it. But this is my best understanding of the material.
After a hard drive is manufactured, the first thing that happens is it receives servo labels from a servo label writing machine. This is a separate machine whose sole job is to take a completely blank hard drive and bootstrap it. (This is why hard drives have holes in them covered with aluminum tape; that's where the servo labeling machine places its write heads.) If you've ever had a drive that just went "click click click" when you powered it on, that's because it could not read the servo labels. When a hard drive is powered on, the first thing it tries to do is fling its read heads somewhere onto the platter and acquire a track. Servo labels define tracks. If it can't see a servo label, it reaches the middle, makes a clack, pulls the arm back and tries again.
The reason I mention this is that it is pretty much the only instance where an external device reads and writes to the hard drive, and it describes approximately the limit of what hardware outside of the drive's own read heads can do with the data on a platter. If it were possible to make servo labels smaller and more space efficient, hard drive manufacturers would. Servo labels are comparatively space inefficient for two reasons.
It is absolutely critical that they do not fail. If a servo label fails, then every time the head goes over that particular servo label it will lose the track; pragmatically, this means that the entire track is unusable.
This gives you some idea of how much better hard drive hardware is at dealing with information on platters than external machinery.
A ring of servo labels defines a track. There are some things you must know about tracks.
They are not necessarily circular. They are imperfect and can contain warps. This is because the servo label machine is not accurate.
They are not necessarily concentric. They can and do cross. This means that certain sectors or whole tracks can be unusable just because the servo label machine is inaccurate.
After servo labels are written comes the low level format: an actual low level 1980s-style format of a drive, except more complicated. Because platters are circular but the rotation speed is constant, the amount of area passing under the read head varies with the distance to the middle of the platter. So, in an effort to squeeze every last drop of storage out of a platter, the density of the platter is variable and defined in zones. On a typical 3.5" hard drive there will be several dozen zones with different platter densities.
One of these zones is special and has an extra low density: the System Area. The System Area is where all of the firmware and configuration settings are stored on the drive. It has an extra low density because that information is more important: the lower the density, the less chance there is that something will randomly screw up. It happens all of the time of course, but less often than in the user area.
After the drive is low level formatted the firmware is written to the System Area. The firmware is different for every drive. In order to optimize the drive for the ridiculously fine requirements of the platters, each drive must be tuned. (This actually takes place before the low level format, of course, because you have to know how good the equipment is in order to decide how dense to make the platters.) This data is known as adaptives and is saved in the System Area. Information in the adaptives area is stuff like "how much voltage should I use to correct myself when the servo labels tell me I'm drifting off track", and other information required to make the hard drive actually work. If the adaptives are off slightly it might be impossible to access the user area. The system area is easier to access, so only very few adaptives are required to be stored on the PCB CMOS.
Takeaways from this paragraph:
Lower density means easier to read.
The higher the density the more likely it is for things to randomly screw up.
The user area has as high a density as the hard drive manufacturer can possibly make it.
If this seems slapdash and slipshod, that's because it really is. Hard drive manufacturers compete and win on price per GB. Hard drive design isn't really about making very carefully manufactured pieces of equipment and putting them together very carefully, because that simply isn't enough anymore. Sure, they still do that, but they also have to make the pieces work together with each other in software, because the hardware tolerances are too broad to be competitive anymore.
So. Because the user area has such a high density, it actually is very (very (very very)) likely to get screwed-up bits in the normal course of things. This can be caused by many, many factors, including very slight timing issues and platter degradation. A good percentage of the sectors of your hard drive actually contain screwed-up bits. (You can verify this yourself by issuing an ATA28 READ LONG command to your drive several times on many sectors and comparing the output; it's only valid for the first 127 GB or so, and there is no ATA48 equivalent, it was dropped! You'll find that it isn't a rare occurrence that certain bits will misbehave and act stuck on or off, or even flip randomly.) It's a fact of life. Which is why we have ECC.
ECC is a checksum stored after the 512 bytes of data (or 4096 in newer drives) that will correct that data if it has few enough incorrect bits. The exact number depends on the firmware and manufacturer, but all drives have it and all drives need it (and it's higher than you'd expect, something like 48-60 bytes that can detect and correct up to 6-8 error bytes; crazy math going on). This is because the density of the platters is too high even for the highly specialized and tuned internal hard drive equipment.
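The codes drives actually use are far stronger (Reed-Solomon/LDPC-class codes over whole sectors), but a toy Hamming(7,4) code is enough to show the basic detect-and-correct idea: store a few extra parity bits alongside the data, and when a single bit comes back flipped, the parity mismatches point directly at which bit to fix. This sketch is purely illustrative, not how any real drive does it.

```cpp
// Toy Hamming(7,4): 4 data bits -> 7-bit codeword, corrects any single flipped
// bit. Real drive ECC is far stronger, but the principle is the same.
#include <cstdio>

// Codeword bit positions 1..7; positions 1,2,4 are parity, 3,5,6,7 are data.
unsigned encode(unsigned d) {                 // d = 4 data bits (d3 d2 d1 d0)
    unsigned d0 = d & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
    unsigned p1 = d0 ^ d1 ^ d3;               // covers positions 3,5,7
    unsigned p2 = d0 ^ d2 ^ d3;               // covers positions 3,6,7
    unsigned p4 = d1 ^ d2 ^ d3;               // covers positions 5,6,7
    // bit layout: pos1=p1 pos2=p2 pos3=d0 pos4=p4 pos5=d1 pos6=d2 pos7=d3
    return p1 | (p2 << 1) | (d0 << 2) | (p4 << 3) | (d1 << 4) | (d2 << 5) | (d3 << 6);
}

unsigned decode(unsigned c) {                 // returns the corrected 4 data bits
    auto bit = [&](int pos) { return (c >> (pos - 1)) & 1u; };
    unsigned s1 = bit(1) ^ bit(3) ^ bit(5) ^ bit(7);
    unsigned s2 = bit(2) ^ bit(3) ^ bit(6) ^ bit(7);
    unsigned s4 = bit(4) ^ bit(5) ^ bit(6) ^ bit(7);
    unsigned syndrome = s1 | (s2 << 1) | (s4 << 2);    // = position of the bad bit
    if (syndrome) c ^= 1u << (syndrome - 1);           // flip it back
    return bit(3) | (bit(5) << 1) | (bit(6) << 2) | (bit(7) << 3);
}

int main() {
    unsigned data = 0b1011;
    unsigned code = encode(data);
    unsigned damaged = code ^ (1u << 4);      // flip bit at position 5 "on the platter"
    std::printf("stored %u, read back %u after correction\n", data, decode(damaged));
    return 0;
}
```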
Finally, I want to talk about the preamp chip. It's located on the arm of the hard drive and acts as a megaphone. Because the signals are generated by very small magnetic fields acting on very small heads, they have a very small potential. So you cannot use the hard drive head for the Gutmann method, because you cannot get an accurate enough reading from it to make Gutmann's technique worthwhile.
But let's posit that the NSA has a piece of magic equipment, and they can get a very accurate read (accurate enough to calculate the potential and derive the previously written data) of any particular bit in 1 ms. What do they need first?
First, they need the System Area, because that's where the Translator is stored. (The translator is the thing that turns an LBA address into a PCHS address, i.e. Physical Cylinder Head Sector, as opposed to the logical CHS address, which is fake and only around for legacy reasons.) The size of the System Area varies, and you can get it without resorting to magic tools; normally it's only around 50-100MB. The layout of the translator is firmware specific, so you have to reverse it (but it's been done before, no big deal).
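As a purely made-up illustration of what a translator does (real layouts are proprietary, firmware specific, and far more involved), here is a toy model: zones with different sectors-per-track, plus a small defect list of physical sectors that the translator silently skips, which is also why remapped or slipped sectors never show up at any LBA.

```cpp
// Toy translator: LBA -> physical (cylinder, head, sector). Everything here is
// invented for illustration; real translators are firmware specific.
#include <cstdio>
#include <set>
#include <vector>

struct Zone { int cylinders; int sectorsPerTrack; };   // outer zones are denser
struct PCHS { int cyl, head, sector; };

const int kHeads = 4;

// Walk the zones in order, skipping physical sectors listed as defective,
// until we have passed `lba` usable sectors.
PCHS translate(long long lba, const std::vector<Zone>& zones,
               const std::set<long long>& defects) {
    long long physical = 0;   // running physical sector index across the drive
    long long good = -1;      // running count of usable (non-defective) sectors
    int cylBase = 0;          // first cylinder of the current zone
    for (const Zone& z : zones) {
        long long zoneStart = physical;
        long long zoneSectors = (long long)z.cylinders * kHeads * z.sectorsPerTrack;
        for (long long i = 0; i < zoneSectors; ++i, ++physical) {
            if (defects.count(physical)) continue;     // defective: gets no LBA at all
            if (++good == lba) {
                long long inZone = physical - zoneStart;
                long long perCyl = (long long)kHeads * z.sectorsPerTrack;
                return { cylBase + (int)(inZone / perCyl),
                         (int)((inZone % perCyl) / z.sectorsPerTrack),
                         (int)(inZone % z.sectorsPerTrack) };
            }
        }
        cylBase += z.cylinders;
    }
    return {-1, -1, -1};                               // past the end of the drive
}

int main() {
    std::vector<Zone> zones = { {2, 12}, {2, 8} };     // tiny pretend drive, 2 zones
    std::set<long long> defects = {5};                 // physical sector 5 is bad
    PCHS p = translate(10, zones, defects);
    std::printf("LBA 10 -> cyl %d head %d sector %d\n", p.cyl, p.head, p.sector);
    return 0;
}
```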
So, first problem: signal to noise. As mentioned, platter density is tuned way higher than is strictly safe. Gutmann's method requires a very low variance in normal read/write activity to calculate previous states of the bits with any accuracy. If the variance in the signal is significant, it can screw over these attempts. And the variance is significant enough to completely screw you over (that's why ECC is so crazy in modern drives). An analogy would be trying to perfectly hear someone whispering to you while someone else is talking to you in the middle of a noisy room.
Second problem: time. Even if the electron microscope is very fast and accurate (1 ms per bit! That's lightning for an electron microscope. It's also slower than a 1200 baud modem), there is a LOT of data on a hard drive and a full image will take a very long time. (WA says 126 years for an entire 500GB hard drive, and that's NOT including ECC data, which you need. There's also lots of other metadata associated with hard drive sectors that I didn't mention, like ID fields and address markers, but these don't get overwritten, so perhaps you can come up with a faster way to image them normally? Doubtless there are ways to speed up this process, such as selectively imaging portions of the drive, but even that will take you months of around-the-clock work just to get the $MFT file on a standard hard drive (typically around 50-300MB on a drive with Windows installed).)
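The arithmetic behind that figure is easy to sanity-check: 500 GB is 4x10^12 bits, so at 1 ms per bit the raw imaging alone takes 4x10^9 seconds.

```cpp
// Back-of-the-envelope check of the "126 years" figure quoted above.
#include <cstdio>
int main() {
    double bits = 500e9 * 8.0;                   // 500 GB of user data, in bits
    double seconds = bits * 1e-3;                // 1 ms per bit
    double years = seconds / (365.25 * 24 * 3600);
    std::printf("%.1f years\n", years);          // ~126.8 years
    return 0;
}
```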
Third problem: admissibility. If the government is after you, they're after you for only a few reasons: they want to know something that you know, or they want to arrest you and put you in prison. There are easier ways to get the former (rubber-hose cryptography), and the latter will require regular evidence procedures. Going back to the analogy, if someone testified that someone told them something in a whisper while someone else was talking to them in the middle of a crowded and noisy room, there is a lot of room for doubt there. It would never be the sort of strong evidence that you would want to spend lots of time and money on.
You're asking the wrong question. Attempting to securely erase a drive by writing to user-visible blocks completely ignores the fact that there could be user data in sectors marked as bad (but which still contain readable sensitive data).
Of course it is possible to work around that by issuing ATA commands, but then a single ATA secure erase command will do everything you want in the first place. See https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase for details on how to use hdparm to issue the Secure Erase command with the --security-erase option.
I want to read the framebuffer of the video card at the lowest level possible for a security application I'm writing.
I want to be as sure as possible that what I'm reading is exactly what will finally be put on the bits of the hardware lighting the pixels of the screen, and that no software layer is in the middle (or at least that there is the lowest possible number of layers in the middle).
I've seen that it's pretty easy to use X to grab the screen at a precise moment, but that call still passes through the X server. I would like something really lower level, even if this means messing around with some ioctls on the video card.
I've seen that DRI and DRI2 exist, but they are very, very badly documented, especially the latter. I can't really understand how they work.
Do you have any idea, reference, or starting point for good research? Anything would be appreciated!
I'm not sure how much reading the framebuffer will help you (even disregarding the issue pointed out by timday in his comment, deciding whether what you read there is what you want it to be may not be very easy), but if you are doing this on Linux you could map the kernel framebuffer devices, possibly using DirectFB to help you. Alternatively, if you are on a non-Linux PC platform you could use VESA (take a look at the VESA code in X.Org and the X.Org VESA driver; the actual code is split between the two). Be aware that you will probably also have some fun with things like multi-monitor setups.
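If you do go the kernel framebuffer route, the classic pattern is to query the mode with the FBIOGET_VSCREENINFO / FBIOGET_FSCREENINFO ioctls and mmap the device. The sketch below assumes a device like /dev/fb0 exists and that the display is actually being scanned out from it, which on a modern desktop driven by a DRM/KMS driver is frequently not the case, so treat it as illustrative only.

```cpp
// Sketch: read pixels straight from the Linux framebuffer device.
// Assumes /dev/fb0 exists and is what the display is actually scanning out.
#include <cstdio>
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    int fd = open("/dev/fb0", O_RDONLY);
    if (fd < 0) { std::perror("open /dev/fb0"); return 1; }

    fb_var_screeninfo var{};
    fb_fix_screeninfo fix{};
    if (ioctl(fd, FBIOGET_VSCREENINFO, &var) || ioctl(fd, FBIOGET_FSCREENINFO, &fix)) {
        std::perror("ioctl");
        return 1;
    }
    std::printf("%ux%u, %u bpp, line length %u bytes\n",
                var.xres, var.yres, var.bits_per_pixel, fix.line_length);

    // Map the whole framebuffer and read one pixel from the top-left corner.
    unsigned char* fb = (unsigned char*)mmap(nullptr, fix.smem_len, PROT_READ,
                                             MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { std::perror("mmap"); return 1; }

    size_t offset = 0 * fix.line_length + 0 * (var.bits_per_pixel / 8);
    std::printf("first pixel bytes: %02x %02x %02x\n",
                fb[offset], fb[offset + 1], fb[offset + 2]);

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}
```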
Previously, when I've built tools, I've used D3D version 9, where the call to Present() can take a target window and rectangle, and you can thus draw from a single device into many different windows. This is great when using D3D to accelerate desktop applications, and/or building tools rather than games!
I've also built a game renderer with D3D11 before, which is also great, because the state management and threading interfaces are well designed, and you can even target D3D 9 level hardware that's still pretty common in the wild (as opposed to D3D 10, which can only target 10-and-better).
However, now I want to build a tool with D3D11. Unfortunately, the IDXGISwapChain that comes back from D3D11CreateDeviceAndSwapChain() seems to "remember" its HWND, and only wants to present to that window. This is highly inconvenient, because I may have a large number of windows that each need fairly simple graphics drawn to them, and only in response to a WM_PAINT (again, this is for a tool, not a game).
What I want to do is to save back buffer RAM. Specifically, I used to be able to create a single back buffer, the size of the desktop, that I knew could cover all rendering needs, and that would be the single copy allocated. Even if there are 10 overlapping windows, they all render through the same back buffer, so there's no waste of memory beyond the initial allocation. I can create textures that are not swap chains and use them as "render targets," but I can't find a good way of presenting to an arbitrary rectangle of an arbitrary client window without reading back the bitmap and copying it into a DIBSection, which would be really inefficient. Also, there is no way to create many swap chains and have them share the same back buffer.
The best I can do is to create one swap chain per window, and resize the back buffer of each swap chain to be really small, except when I render to the swap chain, at which point I resize it to match the window. However, this seems inefficient, because resizing the targets is not a "free" operation AFAICT. So, is there a better way?
The answer I ended up with was to create one back buffer per separate display area, rather than one desktop-sized buffer shared between them. I imagine that, in a world where desktop composition and transparency can happen to "anything" behind my back, that's probably helpful to the system.
Learn to love the VVM system, I guess :-) (VVM for Virtual Video Memory)
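For reference, the per-window route doesn't require D3D11CreateDeviceAndSwapChain at all: create the device once, then walk back up to the IDXGIFactory and create one swap chain per HWND. A rough sketch of that pattern (error handling omitted; the HWND, sizes, and format choices are whatever your tool needs, and whether you shrink the buffers between WM_PAINTs is a separate decision):

```cpp
// Sketch: one D3D11 device shared by several windows, one swap chain per HWND.
// Error handling omitted; hwnd/width/height come from your windowing code.
#include <d3d11.h>
#include <dxgi.h>

IDXGISwapChain* CreateSwapChainForWindow(ID3D11Device* device, HWND hwnd,
                                         UINT width, UINT height) {
    // Walk device -> adapter -> factory so the swap chain matches the device.
    IDXGIDevice* dxgiDevice = nullptr;
    device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
    IDXGIAdapter* adapter = nullptr;
    dxgiDevice->GetAdapter(&adapter);
    IDXGIFactory* factory = nullptr;
    adapter->GetParent(__uuidof(IDXGIFactory), (void**)&factory);

    DXGI_SWAP_CHAIN_DESC desc = {};
    desc.BufferDesc.Width = width;
    desc.BufferDesc.Height = height;
    desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount = 1;
    desc.OutputWindow = hwnd;
    desc.Windowed = TRUE;
    desc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;

    IDXGISwapChain* swapChain = nullptr;
    factory->CreateSwapChain(device, &desc, &swapChain);

    factory->Release();
    adapter->Release();
    dxgiDevice->Release();
    return swapChain;   // Present() on this chain goes to its own HWND
}
```

Each chain's back buffer can then be fetched with GetBuffer and wrapped in a render-target view only while that particular window is being painted.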
Like, say the instruction is something like 100010101 1010101 01010101 011101010101. Now how does this translate into an actual job of deleting something from memory? Memory consists of actual physical transistors that HOLD data. Is it some external signal that causes them to lose that data?
I want to know how that signal is generated. Like, how do some binary numbers change the state of a physical transistor? Is there a level beyond machine code that isn't explicitly visible to a programmer? I have heard of microcode that handles code at the hardware level, even below assembly language. But I still pretty much don't understand. Thanks!
I recommend reading the Petzold book "Code". It explains these things as best as possible without the physics/electronics knowledge.
Each bit in the memory, at a functional level, HOLDs either a zero or a one (let's not get into the exceptions; they're not relevant to the discussion). You cannot delete memory; you can only set it to zeros or ones or a combination. The arbitrary definition of deleted or erased is just that, a definition: the software that erases memory is simply telling the memory to HOLD the value for erased.
There are two basic types of RAM, static and dynamic, and they are as their names imply: so long as you don't remove power, static RAM will hold its value until changed. Dynamic memory is more like a rechargeable battery, and there is a lot of logic that you don't see with assembler or microcode or any software (usually) that keeps the charged batteries charged and the empty ones empty.

Think about a bunch of water glasses, each one being a bit. With static memory, the glasses hold the water until emptied: no evaporation, nothing. Glasses with water, let's say, are ones and ones without are zeros (an arbitrary definition). When your software wants to write a byte, there is a lot of logic that interprets that instruction and commands the memory to write; there is a little helper that fills up or empties the glasses when commanded, or reads the values in the glasses when commanded. In the case of dynamic memory, the glasses have little holes in the bottom and are constantly but slowly letting the water drain out. So glasses that are holding a one have to be filled back up, and the helper logic not only responds to the read and write commands but also walks down the row of glasses periodically and fills back up the ones.

Why would you bother with unreliable memory like that? It takes twice (four times?) as many transistors for SRAM as for DRAM. Twice the heat/power, twice the size, twice the price; even with the added refresh logic it is still cheaper all the way around to use DRAM for bulk memory. The bits in your processor that are used, say, for the registers and other things are SRAM based, static. Bulk memory, the gigabytes of system memory, is usually DRAM, dynamic.
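A toy way to picture the "leaky glasses plus a helper that tops them up" arrangement, purely as a software model with made-up leak rates and nothing to do with real DRAM timing:

```cpp
// Toy model of the "leaky glasses" description of DRAM: cells lose charge over
// time, and a refresh pass rewrites any cell that still reads as a one.
#include <cstdio>
#include <vector>

struct Cell { double charge = 0.0; };          // 1.0 = full "glass", 0.0 = empty

const double kLeakPerTick = 0.1;               // how fast the glasses drain (made up)
const double kReadThreshold = 0.5;             // above this, the cell reads as 1

int readBit(const Cell& c) { return c.charge > kReadThreshold ? 1 : 0; }
void writeBit(Cell& c, int v) { c.charge = v ? 1.0 : 0.0; }

int main() {
    std::vector<Cell> row(8);
    writeBit(row[3], 1);                       // store two one-bits in the row
    writeBit(row[6], 1);

    for (int tick = 0; tick < 20; ++tick) {
        for (Cell& c : row) c.charge -= kLeakPerTick * c.charge;   // slow leak

        if (tick % 4 == 0)                     // the helper walks by periodically
            for (Cell& c : row) writeBit(c, readBit(c));           // refresh

        std::printf("tick %2d:", tick);
        for (Cell& c : row) std::printf(" %d", readBit(c));
        std::printf("\n");
    }
    return 0;
}
```

Comment out the refresh pass and the ones eventually decay below the read threshold, which is the whole point of the helper logic.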
The bulk of the work done in the processor/computer is done by electronics that implement the instruction set (or the microcode, in the rare case of microcoding; the x86 families are/were microcoded, but when you look at all processor types, the microcontrollers that drive most of the everyday items you touch are generally not microcoded, so most processors are not microcoded). In the same way that you need some worker to help you turn C into assembler, and assembler into machine code, there is logic to turn that machine code into commands to the various parts of the chip and peripherals outside the chip. Download either the llvm or gcc source code to get an idea of how small the program being compiled is compared to the amount of software it takes to do that compiling. You will get an idea of how many transistors are needed to turn your 8 or 16 or 32 bit instruction into some sort of command to some hardware.
Again I recommend the Petzold book, he does an excellent job of teaching how computers work.
I also recommend writing an emulator. You have done assembler, so you understand the processor at that level; in the same assembler reference for the processor, the machine code is usually defined as well, so you can write a program that reads the bits and bytes of the machine code and actually performs the function. For an instruction mov r0,#11 you would have some variable in your emulator program for register 0, and when you see that instruction you put the value 11 in that variable and continue on. I would avoid x86; go with something simpler: PIC12, MSP430, 6502, HC11, or even the Thumb instruction set I used. My code isn't necessarily pretty in any way, closer to brute force (and still buggy no doubt).

If everyone reading this were to take the same instruction set definition and write an emulator, you would probably have as many different implementations as there are people writing emulators. Likewise for hardware: what you get depends on the team or individual implementing the design. So not only is there a lot of logic involved in parsing through and executing the machine code, that logic can and does vary from implementation to implementation. One x86 to the next might be similar to refactoring software. Or, for various reasons, the team may choose a do-over and start from scratch with a different implementation. Realistically it is somewhere in the middle: chunks of old logic reused, tied to new logic.
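To show how little is needed to get started, here is a toy fetch/decode/execute loop for a made-up 16-bit instruction set (not any real ISA; the encoding is invented for this example): the top 4 bits are the opcode, and the mov-immediate case does exactly the "put 11 in the variable for r0" step described above.

```cpp
// Toy emulator for a made-up 16-bit instruction set (NOT a real ISA):
//   0x1RNN  mov rR, #NN      0x2RS0  add rR, rS      0xF000  halt
#include <cstdint>
#include <cstdio>

int main() {
    uint16_t program[] = { 0x100B,   // mov r0, #11
                           0x1105,   // mov r1, #5
                           0x2010,   // add r0, r1
                           0xF000 }; // halt
    uint32_t regs[16] = {0};
    int pc = 0;

    for (;;) {
        uint16_t insn = program[pc++];            // fetch
        uint16_t op = insn >> 12;                 // decode: top nibble is the opcode
        if (op == 0x1) {                          // mov rR, #NN
            regs[(insn >> 8) & 0xF] = insn & 0xFF;
        } else if (op == 0x2) {                   // add rR, rS
            regs[(insn >> 8) & 0xF] += regs[(insn >> 4) & 0xF];
        } else if (op == 0xF) {                   // halt
            break;
        } else {
            std::printf("unknown instruction %04x\n", insn);
            break;
        }
    }
    std::printf("r0 = %u\n", regs[0]);            // prints 16 (11 + 5)
    return 0;
}
```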
Microcoding is like a hybrid car. Microcode is just another instruction set, machine code, and requires lots of logic to implement/execute. What it buys you in large processors is that the microcode can be modified in the field. It is not unlike a compiler, in that your C program may be fine but the compiler+computer as a whole may be buggy; by putting a fix in the compiler, which is soft, you don't have to replace the computer, the hardware. If a bug can be fixed in microcode, then they will patch it in such a way that the BIOS on boot will reprogram the microcode in the chip, and now your programs will run fine. No transistors were created or destroyed, nor wires added; just the programmable parts changed. Microcode is essentially an emulator, but an emulator that is a very, very good fit for the instruction set. Google Transmeta and the work that was going on there when Linus was working there; the microcode was a little more visible on that processor.
I think the best way to answer your question, short of explaining how transistors work, is to say: either look at the amount of software/source in a compiler that takes a relatively simple programming language and converts it to assembler, or look at an emulator like qemu and how much software it takes to implement a virtual machine capable of running your program. The amount of hardware in the chips and on the motherboard is on par with this; not counting the transistors in the memories, millions to many millions of transistors are needed to implement what is usually a few hundred different instructions or fewer. If you write a PIC12 emulator and get a feel for the task, then ponder what a 6502 would take, then a Z80, then a 486, then think about what a quad core Intel 64-bit might involve. The number of transistors for a processor/chip is often advertised/bragged about, so you can also get a feel from that as to how much is there that you cannot see from assembler.
It may help if you start with an understanding of electronics, and work up from there (rather than from complex code down).
Let's simplify this for a moment. Imagine an electric circuit with a power source, switch and a light bulb. If you complete the circuit by closing the switch, the bulb comes on. You can think of the state of the circuit as a 1 or a 0 depending on whether it is completed (closed) or not (open).
Greatly simplified, if you replace the switch with a transistor, you can now control the state of the bulb with an electric signal from a separate circuit. The transistor accepts a 1 or a 0 and will complete or open the first circuit. If you group these kinds of simple circuits together, you can begin to create gates and start to perform logic functions.
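As a software analogy of that "switch controlled by another circuit" idea (modelling only the logic, obviously not the electronics), you can treat NAND as the primitive and build the other gates, and then a half adder, out of it:

```cpp
// Model gates as functions of 0/1 signals; NAND alone is enough to build the rest.
#include <cstdio>

int NAND(int a, int b) { return !(a && b); }
int NOT(int a)         { return NAND(a, a); }
int AND(int a, int b)  { return NOT(NAND(a, b)); }
int OR(int a, int b)   { return NAND(NOT(a), NOT(b)); }
int XOR(int a, int b)  { return AND(OR(a, b), NAND(a, b)); }

int main() {
    // A half adder: adds two bits, producing a sum bit and a carry bit.
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            std::printf("%d + %d = carry %d, sum %d\n", a, b, AND(a, b), XOR(a, b));
    return 0;
}
```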
Memory is based on similar principles.
In essence, the power coming in the back of your computer is being broken into billions of tiny pieces by the components of the computer. The behavior and activity of such is directed by the designs and plans of the engineers who came up with the microprocessors and circuits, but ultimately it is all orchestrated by you, the programmer (or user).
Heh, good question! Kind of involved for SO though!
Actually, main memory consists of arrays of capacitors, not transistors, although cache memories may be implemented with transistor-based SRAM.
At the low level, the CPU implements one or more state machines that process the ISA, or the Instruction Set Architecture.
Look up the following circuits:
Flip-flop
Decoder
ALU
Logic gates
A series of FFs can hold the current instruction. A decoder can select a memory or register to modify, and the state machine can then generate signals (using the gates) that change the state of a FF at some address.
Now, modern memories use a decoder to select an entire line of capacitors, and then another decoder is used when reading to select one bit out of them. A write happens by using a state machine to change one of those bits, after which the entire line is written back.
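In code form, that read-modify-write of a whole line looks roughly like the toy below: a row decoder picks a line of "capacitors", a column decoder picks the bit, and the write puts the modified line back. The sizes and layout are invented purely for illustration.

```cpp
// Toy model of writing one bit in a memory organised as rows of cells:
// the row decoder selects a whole line, the column decoder selects one bit,
// and the write puts the (modified) line back.
#include <cstdio>

const int kRows = 4, kCols = 8;
int memory[kRows][kCols] = {};          // each element stands in for a capacitor

void writeBit(int address, int value) {
    int row = address / kCols;          // "row decoder"
    int col = address % kCols;          // "column decoder"

    int line[kCols];
    for (int i = 0; i < kCols; ++i) line[i] = memory[row][i];   // read the whole line
    line[col] = value;                                          // change one bit
    for (int i = 0; i < kCols; ++i) memory[row][i] = line[i];   // write the line back
}

int main() {
    writeBit(13, 1);                    // set bit 13: row 1, column 5
    for (int r = 0; r < kRows; ++r) {
        for (int c = 0; c < kCols; ++c) std::printf("%d", memory[r][c]);
        std::printf("\n");
    }
    return 0;
}
```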
It's possible to implement a CPU in a modern programmable logic device. If you start with simple circuits you can design and implement your own CPU for fun these days.
That's one big topic you are asking about :-) The topic is generally called "Computer Organization" or "Microarchitecture". You can follow this Wikipedia link to get started if you want to learn.
I don't have any knowledge beyond a very basic level about either electronics or computer science, but I have a simple theory that could answer your question; most probably the actual processes involved are very complex manifestations of this answer.
You could imagine the logic gates getting their electric signals from the keystrokes or mouse strokes you make.
A series or pattern of keys you may press may trigger particular voltage or current signals in these logic gates.
Now, which currents or voltages will be produced in which logic gates when you press a particular pattern of keys is determined by the very design of these gates and circuits.
For example, if you have a programming language in which the "print(var)" command prints "var", the sequence of keys "p-r-i-n-t" would trigger a particular set of signals in a particular set of logic gates that would result in displaying "var" on your screen.
Again, what all gates are activated by your keystrokes depends on their design.
Also, typing "print(var)" on your desktop, or anywhere else apart from the IDE, will not yield the same results, because the software behind that IDE activates a particular set of transistors or gates which respond in an appropriate way.
This is what I think happens at the fundamental level, and the rest is all built layer upon layer.
This isn't exactly a programming question (or is it?), but I was wondering:
How are graphics and sound processed from code and output by the PC?
My guess for graphics:
There is some reserved memory space somewhere that holds exactly enough room for a frame of graphics output for your monitor.
i.e., 800 x 600 in 24-bit color mode: 800 x 600 x 3 = ~1.4MB of memory space
Between each refresh, the program writes video data to this space. This action is completed before the monitor refresh.
Assume a simple 2D game: the graphics data is stored in machine code as many bytes representing color values. Depending on what the program(s) being run instruct the PC, the processor reads the appropriate data and writes it to the memory space.
When it is time for the monitor to refresh, it reads from each memory space byte-for-byte and activates hardware depending on those values for each color element of each pixel.
All of this of course happens crazy-fast, and repeats x times a second, x being the monitor's refresh rate. I've simplified my own likely-incorrect explanation by avoiding talk of double buffering, etc
Here are my questions:
a) How close is the above guess (the three steps)?
b) How could one incorporate graphics in pure C++ code? I assume the practical thing that everyone does is use a graphics library (SDL, OpenGL, etc.), but, for example, how do these libraries accomplish what they do? Would manual inclusion of graphics in pure C++ code (say, a 2D sprite) involve creating a two-dimensional array of bit values (or three-dimensional to include multiple RGB values per pixel)? Is this how it would be done waaay back in the day?
c) Also, continuing from above, do libraries such as SDL, etc., that use bitmaps actually just build the bitmap/etc. files into the machine code of the executable and use them as though they were built in the same manner mentioned in question b above?
d) In my hypothetical step 3 above, are there any registers involved? Like, could you write some byte value to some register to output a single color of one byte on the screen? Or is it purely dedicated memory space (=RAM) + hardware interaction?
e) Finally, how is all of this done for sound? (I have no idea :) )
a. Taking your three steps in order:
A long time ago, that was the case, but it hasn't been for quite a while. Most hardware will still support that type of configuration, but mostly as a fall-back -- it's not how they're really designed to work. Now most have a block of memory on the graphics card that's also mapped to be addressable by the CPU over the PCI/AGP/PCI-E bus. The size of that block is more or less independent of what's displayed on the screen though.
Again, at one time that's how it mostly worked, but it's mostly not the case anymore.
Mostly right.
b. OpenGL normally comes in a few parts -- a core library that's part of the OS, and a driver that's supplied by the graphics chipset (or possibly card) vendor. The exact distribution of labor between the CPU and GPU varies somewhat though (between vendors, over time within products from a single vendor, etc.) SDL is built around the general idea of a simple frame-buffer like you've described.
c. You usually build bitmaps, textures, etc., into separate files in formats specifically for the purpose.
d. There are quite a few registers involved, though the main graphics chipset vendors (ATI/AMD and nVidia) tend to keep their register-level documentation more or less secret (though this could have changed -- there's constant pressure from open source developers for documentation, not just closed-source drivers). Most hardware has capabilities like dedicated line drawing, where you can put (for example) line parameters into specified registers, and it'll draw the line you've specified. Exact details vary widely though...
e. Sorry, but this is getting long already, and sound covers a pretty large area...
For graphics, Jerry Coffin's got a pretty good answer.
Sound is actually handled similarly to your (the OP's) description of how graphics is handled. At a very basic level, you have a "buffer" (some memory, somewhere).
Your software writes the sound you want to play into that buffer. It is basically an encoding of the position of the speaker cone at a given instant in time.
For "CD quality" audio, you have 44100 values per second (a "sample rate" of 44.1 kHz).
A little bit behind the write position, the audio subsystem reads from a read position in the buffer.
The distance between the read position and the write position is known as the latency. A larger distance gives more of a delay, but also helps to avoid the case where the read position catches up to the write position, leaving the sound device with nothing to actually play!
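To make the read/write-cursor picture concrete, here is a toy model (mono, 44.1 kHz, with buffer and block sizes picked arbitrarily, not matching any real audio API): the "application" writes a sine tone a block at a time, the "audio subsystem" consumes a block at a time a little behind it, and the gap between the two cursors is the latency.

```cpp
// Toy model of an audio ring buffer: a write cursor (the app) stays a little
// ahead of a read cursor (the sound hardware); the gap between them is the latency.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int sampleRate = 44100;                 // "CD quality" sample rate
    const int bufferSize = 4096;                  // samples in the ring buffer
    const int blockSize = 512;                    // samples moved per iteration
    const double kPi = 3.14159265358979323846;
    std::vector<float> ring(bufferSize, 0.0f);

    long long writePos = 1024;                    // app primed the buffer ahead
    long long readPos = 0;                        // hardware starts at the front
    double phase = 0.0;

    for (int iter = 0; iter < 8; ++iter) {
        // Application: write the next block of a 440 Hz sine at the write cursor.
        for (int i = 0; i < blockSize; ++i) {
            ring[(writePos + i) % bufferSize] = (float)std::sin(phase);
            phase += 2.0 * kPi * 440.0 / sampleRate;
        }
        writePos += blockSize;

        // Audio subsystem: consume a block at the read cursor (discarded here).
        readPos += blockSize;

        double latencyMs = 1000.0 * (writePos - readPos) / sampleRate;
        std::printf("iter %d: latency %.1f ms\n", iter, latencyMs);
    }
    return 0;
}
```

With the cursors 1024 samples apart, the latency stays at roughly 23 ms; shrink that gap and you reduce the delay but increase the risk of the read cursor catching up and the device running dry.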