I am planning an advertisement display solution that might use a browser with multiple monitors. One of the questions is: is there a limit on the maximum screen size of a browser?
No, there isn't. You're generally safe deeming 2,000 pixels or so to be the 99.999%-of-users limit, but there's no hard technological limit preventing some nutter from having a 100,000 pixel browser window displaying on a Times Square-style billboard or something.
Is there any tradeoff between latency and reliability in an audio stream on Windows using WASAPI?
That is, if I'm programming a WASAPI application and I want the minimum latency, will the audio stream be subject to more pops and clicks and audio disturbances than if I use a higher latency? Is using the lowest latency possible a "free lunch", or something that comes with caveats?
Latency is a function of the buffer size (latency = bufferSize / sampleRate). By increasing the buffer size you reduce the chance that, say, a D/A converter will consume the current buffer of samples before the OS has given your application a chance to fill the next buffer. The downside is the increased latency: changes you make during playback (e.g. increasing the volume) won't be heard until the next buffer is played. At the extreme you could imagine a 1-sample buffer between your application and the sound card. You'd have near-zero latency, but there is no way the Windows scheduler would service you regularly enough to prevent dropouts.
There is no such thing as a free lunch, but as computers have continued to speed up, the chances of buffer underruns have decreased quite a bit.
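To put some example numbers on the bufferSize/sampleRate relationship, here is a small sketch (the buffer sizes are picked arbitrarily for illustration; real WASAPI buffer sizes are negotiated with IAudioClient at initialization):

#include <stdio.h>

/* Print the latency contributed by one audio buffer at various sizes,
   assuming a 44.1 kHz stream. */
int main(void)
{
    const double sampleRate = 44100.0;                 /* samples per second */
    const int bufferSizes[] = { 64, 256, 1024, 4096 }; /* in samples */
    const int n = sizeof(bufferSizes) / sizeof(bufferSizes[0]);

    for (int i = 0; i < n; i++) {
        double latencyMs = bufferSizes[i] / sampleRate * 1000.0;
        printf("%5d samples -> %6.2f ms of buffering\n",
               bufferSizes[i], latencyMs);
    }
    return 0;
}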
How do I limit the size of .xsession-errors to a certain size, say 10 MB, instead of deleting it every time or preventing its creation altogether?
There are solutions offered around the internet with strategies to delete it or prevent it, but I guess it would be wiser to keep it and only limit its size. Who knows, error logs could/should be useful after all.
I frequently see "screen size" and "desktop size" in graphics programming forums and some documents. Isn't screen size equal to desktop size? Take the following as an example; what's the difference?
For windowed mode, the size of the destination surface should be the size of the desktop. For full-screen mode, the size of the destination surface should be the screen size.
That is because the desktop size does NOT always fill the screen. Change your resolution to maybe 800x600 or (if possible) 640x480, and you will see that on many screens it will only occupy a small area on the display. Screen size = the full amount your screen can show. Desktop size is the box that I just described. In MOST cases they are the same, but not always.
It may also, depending on the context, describe what fits inside the current window, minus the title bar, toolbars, taskbar, etc.
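On Windows, for example, you can see the two values side by side: one common reading of "desktop size" is the work area, i.e. the screen minus the taskbar and other docked bars. A minimal Win32 sketch:

#include <windows.h>
#include <stdio.h>

/* Prints the full screen size versus the desktop "work area"
   (the screen minus the taskbar and any other docked bars). */
int main(void)
{
    int screenW = GetSystemMetrics(SM_CXSCREEN);
    int screenH = GetSystemMetrics(SM_CYSCREEN);

    RECT work;
    SystemParametersInfo(SPI_GETWORKAREA, 0, &work, 0);

    printf("Screen size: %dx%d\n", screenW, screenH);
    printf("Work area:   %ldx%ld\n",
           work.right - work.left, work.bottom - work.top);
    return 0;
}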
OK, I can find simulation designs for simple architectures (Edit: definitely not something like x86). For example: use an int as the program counter, use a byte array as the memory, and so on. But how can I simulate the functionality of a graphics card (the simplest graphics card imaginable)?
Like using an array to represent each pixel and "painting" each pixel one by one.
But when should it paint: synchronized with the CPU, or asynchronously? Who stores the graphics data in that array? Is there an instruction for storing a pixel and another for painting a pixel?
Please note that all the question marks ('?') don't mean "you are asking a lot of questions" but spell out the problem itself: how do you simulate a graphics card?
Edit: LINK to a basic implementation design for a CPU+memory simulation
Graphics cards typically carry a number of KBs or MBs of memory that stores the colors of the individual pixels that are then displayed on the screen. The card scans this memory a number of times per second, turning the numeric representation of the pixel colors into video signals (analog or digital) that the display understands and visualizes.
The CPU has access to this memory, and whenever it changes it, the card eventually translates the new color data into the appropriate video signals and the display shows the updated picture. The card does all the processing asynchronously and doesn't need much help from the CPU. From the CPU's point of view it's pretty much "write the new pixel color into the graphics card's memory at the location corresponding to the coordinates of the pixel and forget about it". It may be a little more complex in reality (due to poor-synchronization artifacts such as tearing, snow and the like), but that's the gist of it.
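A minimal sketch of that "write and forget" model, assuming a linear framebuffer with 3 bytes per pixel (the names FRAME_WIDTH, framebuffer and PutPixel are made up for illustration):

#define FRAME_WIDTH  1024
#define FRAME_HEIGHT 768

/* The simulated card's video memory: one R, G, B byte triple per pixel. */
static unsigned char framebuffer[FRAME_WIDTH * FRAME_HEIGHT * 3];

void PutPixel(int x, int y, unsigned char r, unsigned char g, unsigned char b)
{
    unsigned long offset = ((unsigned long)y * FRAME_WIDTH + x) * 3;
    framebuffer[offset + 0] = r;  /* the card's scan-out logic picks    */
    framebuffer[offset + 1] = g;  /* these bytes up on its own, with no */
    framebuffer[offset + 2] = b;  /* further CPU involvement            */
}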
When you simulate a graphics card, you need to somehow mirror the memory of the simulated card in the physical graphics card's memory. If the OS gives you direct access to the physical graphics card's memory, it's an easy task. Simply implement writing to your emulated computer's memory with something like this:
/* Mirror writes that land in the simulated card's video memory window
   into the physical card's memory; every write also goes to the
   emulated computer's RAM. */
void MemoryWrite(unsigned long Address, unsigned char Value)
{
    if ((Address >= SimulatedCardVideoMemoryStart) &&
        (Address - SimulatedCardVideoMemoryStart < SimulatedCardVideoMemorySize))
    {
        PhysicalCard[Address - SimulatedCardVideoMemoryStart] = Value;
    }
    EmulatedComputerMemory[Address] = Value;
}
The above, of course, assumes that the simulated card has exactly the same resolution (say, 1024x768) and pixel representation (say, 3 bytes per pixel, first byte for red, second for green and third for blue) as the physical card. In real life things can be slightly more complex, but again, that's the general idea.
You can access the physical card's memory directly in MSDOS or on a bare x86 PC without any OS if you make your code bootable by the PC BIOS and limit it to using only the BIOS service functions (interrupts) and direct hardware access for all the other PC devices.
Btw, it will probably be very easy to implement your emulator as a DOS program and run it either directly in Windows XP (Vista and 7 have extremely limited support for DOS apps in 32-bit editions and none in 64-bit editions; you may, however, install XP Mode, which is XP in a VM in 7) or better yet in something like DOSBox, which appears to be available for multiple OSes.
If you implement the thing as a Windows program, you will have to use either GDI or DirectX in order to draw something on the screen. Unless I'm mistaken, neither of these two options lets you access the physical card's memory directly such that changes in it would be automatically displayed.
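What GDI does let you do is keep the simulated video memory as an ordinary buffer and blit it to a window yourself. A minimal sketch, assuming a 32-bits-per-pixel top-down buffer (the names simVram, simW and simH are made up for illustration):

#include <windows.h>

/* Blits a 32bpp top-down memory framebuffer to a window using GDI. */
void BlitSimulatedVram(HWND hwnd, const void *simVram, int simW, int simH)
{
    BITMAPINFO bmi;
    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = simW;
    bmi.bmiHeader.biHeight      = -simH;  /* negative height = top-down rows */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC hdc = GetDC(hwnd);
    StretchDIBits(hdc,
                  0, 0, simW, simH,   /* destination rectangle */
                  0, 0, simW, simH,   /* source rectangle      */
                  simVram, &bmi, DIB_RGB_COLORS, SRCCOPY);
    ReleaseDC(hwnd, hdc);
}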
Drawing individual pixels on the screen using GDI or DirectX may be expensive if there's a lot of rendering. Redrawing all simulated card's pixels every time when one of them gets changed amounts to the same performance problem. The best solution is probably to update the screen 25-50 times a second and update only the parts that have changed since the last redraw. Subdivide the simulated card's buffer into smaller buffers representing rectangular areas of, say, 64x64 pixels, mark these buffers as "dirty" whenever the emulator writes to them and mark them as "clean" when they've been drawn on the screen. You may set up a periodic timer driving screen redraws and do them in a separate thread. You should be able to do something similar to this in Linux, but I don't know much about graphics programming there.
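Here is a sketch of that dirty-tile bookkeeping, using the 64x64 tiles and the 1024x768 card mentioned above; RedrawTile is a hypothetical stand-in for whatever GDI/DirectX drawing routine you end up using:

#define TILE_SIZE 64
#define TILES_X   (1024 / TILE_SIZE)
#define TILES_Y   (768  / TILE_SIZE)

static int tileDirty[TILES_Y][TILES_X];

void RedrawTile(int x, int y, int w, int h);  /* your actual drawing call */

/* Call this from the emulator whenever it writes pixel (x, y). */
void MarkDirty(int x, int y)
{
    tileDirty[y / TILE_SIZE][x / TILE_SIZE] = 1;
}

/* Call this 25-50 times a second from a timer or a separate thread. */
void RedrawDirtyTiles(void)
{
    for (int ty = 0; ty < TILES_Y; ty++)
        for (int tx = 0; tx < TILES_X; tx++)
            if (tileDirty[ty][tx]) {
                RedrawTile(tx * TILE_SIZE, ty * TILE_SIZE,
                           TILE_SIZE, TILE_SIZE);
                tileDirty[ty][tx] = 0;  /* mark clean once drawn */
            }
}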
Is the page size constant? To be more specific, getconf PAGE_SIZE gives 4096, fair enough. But can this change during a program's runtime, or is it constant for every process the OS spawns? I.e., is it possible for a process to have 1024 and 2048 and 4096 page sizes at the same time? Let's just talk virtual page sizes for now. But going further, is it possible for a virtual page to span a physical page of greater size?
It is possible for a process to use more than one page size. On newer kernels this may even happen without notice; see Andrea Arcangeli's transparent huge pages.
Other than that, you can request memory with a different (usually larger) page size via hugetlbfs.
The main reason for having big pages is performance: the TLB in the processor is very limited in size, and fewer but bigger pages mean more TLB hits.
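For example, on Linux you can read the base page size and explicitly ask for huge-page-backed memory with mmap. A minimal sketch; the mmap call fails unless the administrator has reserved huge pages (e.g. via /proc/sys/vm/nr_hugepages):

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    /* The base page size the process runs with; it does not change
       at runtime. */
    long base = sysconf(_SC_PAGESIZE);
    printf("base page size: %ld bytes\n", base);

    /* Request a 2 MB mapping backed by huge pages. */
    void *p = mmap(NULL, 2 * 1024 * 1024, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED)
        perror("mmap(MAP_HUGETLB)");
    else
        printf("got a huge-page-backed mapping at %p\n", p);
    return 0;
}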