Why can't emulation ROMs just be recompiled to x86? - emulation

I've wondered this for a while. Why can't you just take a ROM of, for example, a NES or N64 game and convert the executable parts from MIPS or 6502 to x86 assembly? I mean, all the opcodes are covered anyway.
I know there's a handful of ARM instructions that can't be converted over, but for consoles up until the Wii, couldn't one just convert the assembly code and intercept the I/O stuff like rendering and gamepad input?
It feels like that would speed up emulation considerably.
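To illustrate what I mean, something like this, a made-up two-instruction sketch in C where only the opcode bytes are real and the translator itself is purely hypothetical:

#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch only: map a single 6502 instruction to equivalent
 * x86 machine-code bytes.  The opcode bytes are real, but this toy
 * "translator" handles just two instructions and ignores flags. */
static size_t translate_insn(const uint8_t *rom, size_t pc,
                             uint8_t *out, size_t *out_len)
{
    switch (rom[pc]) {
    case 0xA9:                      /* 6502: LDA #imm            */
        out[(*out_len)++] = 0xB0;   /* x86:  mov al, imm8        */
        out[(*out_len)++] = rom[pc + 1];
        return pc + 2;
    case 0xEA:                      /* 6502: NOP                 */
        out[(*out_len)++] = 0x90;   /* x86:  nop                 */
        return pc + 1;
    default:                        /* everything else would need flag
                                       emulation, memory mapping and
                                       I/O interception              */
        return pc + 1;
    }
}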

Related

Hardware-accelerated alpha-enabled in-memory put-bitmap on Linux?

I would like to draw bitmap 1 onto bitmap 2, and bitmap 1's alpha channel must be used.
Both are in memory, and both are RGBA.
I need this operation to be as fast as possible, so hardware acceleration would be very helpful.
Actually, there will be thousands of small bitmaps drawn onto one big bitmap (similar to text rendering). I need to save the result to disk. What library/function could you recommend?
I was thinking about something like OpenGL + CreateTexture, but it's been a long time since I wrote my OpenGL "hello world"... And yes, it's C/C++.
If you're simply doing 2D blits, you should consider using OpenVG (instead of OpenGL).
You have mentioned Linux, but not the hardware platform. If it's an embedded processor, OpenVG support is usually pretty good. Can't say the same for desktop (x86), where you'd have to use OpenGL.
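For reference, the plain CPU version of the blit described above looks roughly like this (a minimal sketch assuming 8-bit, non-premultiplied RGBA and no clipping); any accelerated path via OpenVG or OpenGL essentially replaces this inner loop:

#include <stdint.h>

/* Minimal CPU sketch: blend src (with alpha) onto dst at (dx, dy).
 * Strides are in bytes; no bounds checking or clipping. */
void blend_rgba(uint8_t *dst, int dst_stride,
                const uint8_t *src, int src_stride,
                int w, int h, int dx, int dy)
{
    for (int y = 0; y < h; ++y) {
        const uint8_t *s = src + y * src_stride;
        uint8_t *d = dst + (dy + y) * dst_stride + dx * 4;
        for (int x = 0; x < w; ++x, s += 4, d += 4) {
            unsigned a = s[3];                       /* source alpha      */
            for (int c = 0; c < 3; ++c)              /* blend R, G, B     */
                d[c] = (uint8_t)((s[c] * a + d[c] * (255 - a)) / 255);
            /* destination alpha left unchanged; adjust if needed */
        }
    }
}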

Driving the sound card in Linux

On a basic embedded system's speaker with a single output line, wiggling the output between 0 and 1 for given periods produces sound.
I'd like to do something similar on a modern Linux desktop. A brief look at PortAudio, OpenAL, and ALSA suggests to me that most people do things at a considerably higher level. That's OK, but not what I'm looking for.
(I've never worked with sounds on Linux before, so if a tutorial exists, I'd love to see it).
Actually, it... kinda is. While you can generate the waveform yourself, you still need to use an API to queue it and send it to the audio hardware; there no longer even exists a sane way to twiddle the audio line directly. Plus you get cross-platform compatibility for free.
[...] embedded system's speaker with a single output line, wiggling the output between 0 and 1 for given periods produces sound.
Sounds a lot like the old PC speaker. You might still find code for it in the Linux kernel.
I'd like to do something similar on a modern Linux desktop.
Then AFAIK you need an ALSA driver; the ALSA project documentation has information on how to write one. Use PWM to produce the sound.
Since there are many different sound cards and audio interfaces produced by different companies, there is no uniform way to have a low level access to them. With most sound I/O APIs what you need to do is to generate the PCM data and send that to the driver. That's pretty much the lowest level you can go.
But PCM data is very similar to the 0-1 approach you describe. It's just that you have the in-between options too. 0-1 is 1-bit audio. 8-, 16-, 24-bit audio is what you'll find on a modern sound card. There are also 32- and 64-bit float formats. But they're still similar.
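To make that concrete, here is a minimal sketch using ALSA's convenience API that fills a buffer with a 1-bit-style square wave as 16-bit PCM and plays it on the default device (error handling omitted; the frequency and latency values are just examples):

#include <alsa/asoundlib.h>
#include <stdint.h>

int main(void)
{
    snd_pcm_t *pcm;
    const unsigned rate = 48000;
    static int16_t buf[48000];                      /* one second, mono */

    snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0);
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED,
                       1 /* channel */, rate,
                       1 /* allow resampling */, 500000 /* 0.5 s latency */);

    for (unsigned i = 0; i < rate; ++i)             /* "wiggle" at ~440 Hz */
        buf[i] = ((i * 440 * 2 / rate) % 2) ? 20000 : -20000;

    snd_pcm_writei(pcm, buf, rate);                 /* rate frames = 1 second */
    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}

Build with -lasound; the square wave here is exactly the 0/1 wiggle, just expressed as sample values.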

Running an audio synthesis/analysis language on an embedded device

What is the experience running programs written in an audio synthesis/analysis language such as ChucK, Pure Data, Csound, Supercollider, etc. in an embedded device such as an Arduino Mega, Beagle Board or a custom board with a microprocessor or DSP chip?
I would like to know which language and hardware you chose and why. What were the obstacles, etc.? My objective is to run programs that can be easily programmed by musicians/producers in a board that is not too expensive.
I received input from someone who is successfully running ChucK programs on a Beagle Board (Ubuntu Linux on a Beagle Board running ChucK), but his choice of language and hardware was made rather casually: his setup does not use the DSP on the Beagle Board, and it seems like overkill to run a whole Linux install just to process audio signals.
Any input is appreciated!
Update: I found Zengarden, a Pd runtime implementation (as a standalone C++ library) that runs well on ARM-based devices. For now, I'll go with the BeagleBoard and Zengarden, but in a later stage of the project I'll need to replace the BeagleBoard with something that costs less.
I'd love to hear more input from the community.
Thanks everyone for your comments and answers. For everybody else's reference, I ended up writing a JACK client in C++ that parses and interprets PureData patches and ran it on a BeagleBoard with Angstrom Linux and JACK server. Here's a video and a tutorial that I wrote: http://elsoftwarehamuerto.org/articulos/691/puredata-beagleboard/
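For anyone curious, the skeleton such a JACK client shares with any other is quite small; here is a minimal sketch (the client name and the sine "patch" below are placeholders, not the actual project code):

#include <jack/jack.h>
#include <math.h>
#include <unistd.h>

static jack_port_t *out_port;
static jack_nframes_t sample_rate;
static double phase;

/* JACK calls this once per audio block; a real client would evaluate the
 * parsed Pd patch here instead of the sine oscillator used for illustration. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    for (jack_nframes_t i = 0; i < nframes; ++i) {
        out[i] = 0.2f * (float)sin(phase);
        phase += 2.0 * 3.14159265 * 440.0 / sample_rate;
    }
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("pd-sketch", JackNullOption, NULL);
    sample_rate = jack_get_sample_rate(client);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    sleep(30);                      /* a real client would run until stopped */
    jack_client_close(client);
    return 0;
}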
First, I am not an audio programmer, so I'm not familiar with the actual demands of the signal processing necessary to achieve what you want to achieve.
But, it's difficult to contrast something like the Beagle Board and the Arduino Mega, since they're really in different leagues of base performance. The Beagle Board is a 1 GHz ARM vs the Arduino Mega's 16 MHz. That tells me that whatever processing you may be interested in doing may well be within the capabilities of the Beagle Board, but the Arduino Mega would have almost no chance without an attached DSP to do the actual work.
The next consideration, is whether any of the packages you were considering using actually target DSPs for their runtimes. At a glance they seem like high level sound processing languages. With the Beagle Board, you may well have the processing power to evaluate and compile the sound source code that these packages use and let them compile in to their targets, but on the Arduino Mega, that seems unlikely.
If all you're doing is working with a piece of hardware that will be running the artifacts created by the packages you mentioned, then the Arduino Mega may well be suitable, as the "development" is done on a more powerful machine. But if you want to work with these packages as is, and use them as a development tool, then running them on a Linux port to something like the Beagle Board may simply be a better option.
Again, after a casual look around, the Arduino Mega is roughly half the price of the Beagle Board, but the Beagle Board may well let you work at a much higher level (generic Linux). Whether either will be powerful enough for your final vision, I can't say. But I would imagine you could get a lot farther, a lot faster, using the more powerful system -- at least in the short term.

Programming graphics in assembler?

I've developed a running Super Mario sprite using Visual C++ 6.0 and DirectX. But this isn't very satisfying to me (abusing a 3D multimedia framework just to display a 2D sprite), so I would like to be able to program an animated sprite using C and assembler only.
Looking at old games (Wolfenstein, for example), it seems that most of the game is written in C, and whenever it comes to graphics output, assembler is used.
Unfortunately, when trying to use this old assembler code, I always get the error message "NTVDM.exe has found an invalid instruction", so these things don't seem to work nowadays.
Is there any tutorial on graphics programming in assembler that is still useful?
(I don't want to use any bloated frameworks or libraries, I just want to develop everything on my own. WinAPI would be OK for creating a full screen window and for catching user input, but not for graphics because I read GDI is too slow for fast graphics.)
I'm using Windows XP and MASM or A86.
I totally agree with samcl
The main reason for not using assembler anymore is that you cannot access the video memory directly anymore. Back in the early days (you mentioned Wolfenstein) there was a special video mode, 13h, where your graphics were just a block of memory (each pixel was one byte, a palette color ranging from 0 to 255). You were able to access this memory directly in that video mode; today, however, things are much more complicated.
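As an illustration (16-bit real-mode DOS C with far pointers, so it will not compile with a modern Windows toolchain), plotting a pixel in mode 13h was just a byte write:

/* 16-bit DOS only: mode 13h maps the 320x200 screen at segment 0xA000,
 * one byte per pixel (palette index 0-255). */
unsigned char far *vga = (unsigned char far *)0xA0000000L;

void putpixel(int x, int y, unsigned char color)
{
    vga[y * 320 + x] = color;   /* linear framebuffer: offset = y*320 + x */
}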
Today you have very fast video memory, and using your CPU to access it will just tear down performance, as your CPU is connected through PCI Express/AGP/PCI/VESA Local Bus/ISA (anyone remember those?).
Graphics programming often means a lot of read and write accesses (read a pixel, check whether it is transparent, multiply with alpha, write the pixel, etc.).
Modern memory interfaces are much slower than direct access inside the graphics card. That's why you really should use shaders, as Robert Gould suggests. That way you can write faster, easier-to-understand code that does not stall your graphics memory.
If you are more interested in graphics programming, you can whet your appetite with Shadertoy, a community dedicated to shader-based effects, complete with WebGL-based shader execution.
Also, your beginner assembler code will be pretty lame, in quality as well as in performance. Trust me, it takes a lot of time to optimize such low-level code, so compiled C/C++ code will easily outperform your handwritten asm.
If you are interested in assembler, try to code something like disk access. That is where you can gain a lot of performance.
It sounds like you only use assembler because you seem to think that it is necessary. This isn't the case. If you don't have any other reason for it (e.g. wanting to learn it), don't use assembler here, unless you know exactly what you're doing.
For your average graphics engine, Assembler programming is completely unnecessary. Especially when it comes to a Super Mario style 2D sprite engine. Even “slow” scripting languages like Python are fast enough for such things nowadays.
Adding to that, if you don't know very precisely what you're doing, Assembler will not be faster than C (in fact, chances are it will be slower because you'll re-implement existing C functions less efficiently).
I'm guessing if you are already using C with DirectX, speed is not the issue, and that this is more of a learning exercise. For 2D under Windows, C and DirectX is going to be very fast indeed, and as Konrad Rudolph points out, hand cranked assembler is unlikely to be faster than a highly optimized SDK.
From a purely educational standpoint, it is an interesting activity, but quite complex. Back in the early days of home computers and the first PCs, the graphics screen appeared pretty much as a block of memory, where bytes corresponded to one or more coloured pixels. By changing the values in screen memory you could plot points, and hence lines, and then sprites, etc. On modern PCs this tends not to be an option, in that you program a graphics card, usually via an SDK, to do the same job. The card then does the hard work, and you are provided with a much higher level of abstraction. If you really want to get a feel for what it was like back in the day, I would recommend an emulator. For a modern game, stick with your SDKs.
It is possible to program your own 2D engine in a recent version of DirectX, if you wish to investigate this avenue. You can create a screen-space-aligned polygon, with no perspective correction, which is texture mapped. You can then plot your sprites pixel by pixel onto this texture map.
As for mode 13h (Peter Parker), it brings back some memories!
__asm
{
    mov ax, 0x13    // select VGA mode 13h (320x200, 256 colors)
    int 0x10        // BIOS video interrupt -- 16-bit code only, not Windows
}
But of course this will fault in a 32-bit or 64-bit Windows program; 16-bit BIOS calls are not supported by the Windows kernel (which installs its own interrupt handling as part of booting and switching the CPU out of real mode).
I wouldn't touch assembler with a barge pole; it can be particularly difficult to debug and maintain. However, if you wish to explore this subject in more detail, I can recommend Michael Abrash's Graphics Programming Black Book. It's a bit old, but a good read and will give you some insight into graphics programming techniques before 3D hardware.
Assembler was used for graphics because, back then, most people lacked graphics cards with 3D support, so it had to be done on the CPU; that's no longer the case. Nowadays it's about shader programming. Shader languages let you get close to the bare metal, so if anything you should try to code your 2D graphics to be shader-based; that way the experience will have value as a career skill.
Try CUDA for a starter.
My recommendation is to experiment. Take your sprite code and write it in a number of forms, starting with C/GDI and C++/DirectDraw. Don't worry about assembler yet.
DirectX is your best bet for fast action graphics. Learn it, then figure out how to micro-optimize with assembler. In general, assembler isn't going to make your API calls faster. It is going to open up flexibility for faster computation for things like 3D rotation, texture mapping, shading, etc.
Start with DirectDraw. Here's a FAQ. Technically, DirectDraw is deprecated after DirectX 7, but you can still use it and learn from it. It'll allow you direct framebuffer modification, which is what you're probably looking for.
There are some helpful tutorials and forums at TripleBuffer Software.
Also consider upgrading your compiler to Visual C++ 2008 Express. VC++ 6 has a buggy compiler that can be problematic with trying to compile certain C++ libraries.

GPU-based video cards to accelerate your program calculations, How?

I read in this article that a company has created software capable of using multiple GPU-based video cards in parallel to perform hundreds of billions of fixed-point calculations per second.
The program seems to run in Windows. Is it possible from Windows to assign a thread to a GPU? Do they create their own driver and then interact with it? Any idea of how they do it?
I imagine that they are using a language like CUDA to program the critical sections of code on the GPUs to accelerate their computation.
The main function of the program (and its threads) would still run on the host CPU, but data is shipped off to the GPUs for processing by advanced algorithms. CUDA is an extension to C syntax, so it makes programming easier than having to learn older shader languages like Cg for general-purpose calculations on a GPU.
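As a rough illustration (not the code from the article), the division of labour in CUDA looks like this: the host CPU owns main() and copies data to and from the card, while the kernel runs on the GPU:

#include <cuda_runtime.h>
#include <cstdio>

// Kernel: one GPU thread per array element, integer (fixed-point-style) math.
__global__ void scale(int *data, int n, int factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    int *host = new int[n];
    for (int i = 0; i < n; ++i) host[i] = i;

    int *dev;
    cudaMalloc(&dev, n * sizeof(int));
    cudaMemcpy(dev, host, n * sizeof(int), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, n, 3);   // launch a grid of 256-thread blocks

    cudaMemcpy(host, dev, n * sizeof(int), cudaMemcpyDeviceToHost);
    std::printf("host[42] = %d\n", host[42]);     // expect 126

    cudaFree(dev);
    delete[] host;
    return 0;
}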
A good place to start - GPGPU
Also, for the record, I don't think there is such a thing as a non-GPU-based graphics card. GPU stands for graphics processing unit, which is by definition the heart of a graphics card.
