Starting FPGA Programming [closed]

I want to start FPGA programming. I don't have any knowledge at all about how FPGAs work and such. I would like to get a development board, not too expensive, but it should have at least 40 I/O pins. Anything up to $300 is OK.
I decided that I want to program in Verilog. I am not sure about the following:
How will my compiled 'program' be stored on the chip? I would guess the chip has some kind of EEPROM to save my program, but from what I have read, it is apparently stored in RAM. I want my program to remain on the chip (or to be loaded somehow) every time it powers up.
Can I buy a separate FPGA chip (not a whole development board) for production? And if yes, how can I upload my program to the separate chip? Does it in some way connect to the development board?

I'd recommend the Digilent Basys board as an introduction. It only exposes 16 external I/O pins, but it already has RAM, USB, switches, buttons, LEDs, 7-segment displays, a VGA connector, and a PS/2 connector onboard, and you're unlikely to find an FPGA chip with fewer than 40 I/O pins. If you want I/O for hooking up another project, use the Nexys instead: it has more peripherals than I care to list, plus a high-speed Hirose 43-pin connector if you have a project which specifically needs about 40 connections.
Also, consider how you want to interface with your PC. Is your goal to make an embedded system, or to interface with a computer through a PCI/Ethernet/USB connection?
Yes, you can buy separate FPGA chips for production; there's a dizzying array of options, though (Digikey lists about 5,300 at this time). You do need some way to program the FPGA, and an onboard non-volatile memory chip that configures the FPGA on startup is a popular option. However, you should start with a development board that's well supported and already has a programmer, toolchain, and simulator available before you get too far into designing your own board or worrying about how to save your program onto the chip. Those are good things to know, but they're not what you want to worry about right now. Good luck!

The whole point of using an FPGA is that your "program" is actually a circuit configuration, not software running out of RAM. Physical logic components are configured when you write the bitstream to the FPGA. This is why FPGAs can run so much faster for specialized applications: you are basically making custom hardware.
Xilinx is one of the main FPGA manufacturers. Try their website. Check out the Boards & Kits section.
Try reading more about the technology before you get ahead of yourself. You will need a strong understanding of how FPGAs work before you can program them effectively. Wikipedia is a great place to start.

In Xilinx FPGA terminology the "program" is called a bitstream. Some FPGAs have embedded flash to store the bitstream (e.g. the Spartan-3AN); most FPGAs require some external bitstream storage. Xilinx's configuration user guide explains how to configure an FPGA.
Yes, you can. There are multiple ways to do configuration; most of them require some external circuitry.

Check out Actel's new SmartFusion FPGA. It has an FPGA fabric, of course, plus a hard ARM MCU with a good analog front end (DAC, ADC, etc.).
The eval board is only $100:
http://www.actel.com/products/hardware/devkits_boards/smartfusion_eval.aspx
And all the software you need to get up and running is free.

Related

I2C/SPI interview questions [closed]

Can devices be added and removed while the system is running (hot swapping) in I2C and SPI?
Is it better to use I2C or SPI for data communication between a microprocessor and a DSP?
Is it better to use I2C or SPI for data communication from an ADC?
1) Of course, given no limitations.
SPI would be easiest, as each device would have an independent chip select. You'd just have to make sure that the SPI device didn't think its chip select was active at some point during the connect/disconnect process or it might assert its MISO line and cause contention. There would be many ways to sequence the power or reset of that SPI device to ensure that part worked.
I2C would be trickier, as a device connecting during a transfer might interpret the first bytes it sees in the ongoing transfer as a command and potentially do bad things. You could put a bidirectional level shifter or buffer with an enable (and perhaps a flop to latch the enable) in front of the hot-swapped I2C device, and strobe a secondary "bus_clear" message to enable any newly hot-swapped buffers during idle periods on the bus. There'd be lots of ways to accomplish the same thing.
2) and 3) have no general answer.
SPI is capable of going faster, so if you need speed that'd be the way to go, and a micro-to-DSP link hints that speed matters. However, just the fact that one is a micro and the other is a DSP doesn't tell you anything at all unless you start assuming things. What if the micro is just collecting small result packets from a whole set of DSPs? I2C would make that easier.
Similarly, ADCs might need speed, or they might not. If there are a lot of them and the required data rate is low, again I2C makes more sense.
SPI is almost always simpler to use, so that's a plus if there aren't too many devices. I2C uses fewer pins.
Many times, the choice between I2C and SPI is made based on what interface the parts you want to use provide. Another very important factor is what ELSE needs to happen in the system and which bus makes sense for as many of the parts as possible.
There's more: design reuse, weird firmware/RTOS limitations, etc, etc. There are many reasons you can't say one is "better" than the other in general. What's better for your design is just not that simple.
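To make the "simpler to use" point concrete, here is a minimal bit-banged SPI sketch in C (mode 0, MSB first). The GPIO functions are stubs standing in for whatever register writes your particular microcontroller needs; treat it as an illustration of the protocol, not a drop-in driver.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical GPIO hooks: replace with your micro's register writes.
       Stubbed here so the sketch compiles stand-alone. */
    static void set_sck(int level)  { (void)level; }
    static void set_mosi(int level) { (void)level; }
    static void set_cs(int level)   { (void)level; }
    static int  read_miso(void)     { return 1; }

    /* Bit-banged SPI mode 0 (CPOL=0, CPHA=0), most significant bit first. */
    static uint8_t spi_transfer_byte(uint8_t out)
    {
        uint8_t in = 0;
        for (int i = 7; i >= 0; i--) {
            set_mosi((out >> i) & 1);                /* present the next data bit    */
            set_sck(1);                              /* slave samples on rising edge */
            in = (uint8_t)((in << 1) | read_miso()); /* sample the slave's bit       */
            set_sck(0);                              /* slave shifts out its next bit */
        }
        return in;
    }

    int main(void)
    {
        /* Assert chip select only around a complete transfer, so a freshly
           hot-plugged device never sees a partial transaction. */
        set_cs(0);
        uint8_t status = spi_transfer_byte(0x00);    /* 0x00: hypothetical command */
        set_cs(1);
        printf("status = 0x%02x\n", status);
        return 0;
    }

The whole protocol is a shift register with a clock; that, plus the per-device chip select, is most of why SPI feels simpler than I2C's addressing, ACKs and clock stretching.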
I think these questions are "get you talking" questions...

How would I program analog to digital conversion using a microcontroller in C?

I'm doing a DSP project and I want to take an analog file and convert it to a digital output using a microcontroller attached to an ADC on a DSP board. How would I program this in C?
Pretty much it's as simple as that, at least I think.
This is what I need:
Input --------- Output
Analog --> Digital
Digital --> Analog
You really need to clarify your question. For example, what do you mean by an analog file? File systems are binary from a programming perspective; sure, the media is magnetic or some other technology and there is analog involved, but an ADC goes from analog to digital, so it takes an analog input, not a digital output.
An ADC (analog-to-digital converter) takes analog inputs to the device and converts them to digital values so you can use them inside the chip, in a program, save them to files, etc.
A DAC (digital-to-analog converter) takes digital values and converts them to analog outputs.
In both cases you need to look at the specific details for the chips and the board. From a programming perspective, if nothing else, you need to look into the details of the ADC and/or DAC. Microcontrollers having an ADC is not uncommon, but you need to read up on how to get the ADC on that microcontroller to initiate a sample, how to know when the sample has completed, and how to read the digital data once the sample has been taken. DACs are often external, sometimes serial, so you may have to bit-bang SPI or I2C, or look into what hardware the microcontroller might have for speaking SPI or I2C, or, if there is a DAC in the microcontroller, how to use it (what registers to write, etc.).
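As a rough sketch of that register-level flow (initiate a sample, wait for completion, read the result), the C tends to look something like the following. The register addresses and bit names here are invented for illustration; the real ones come from your microcontroller's datasheet.

    #include <stdint.h>

    /* Hypothetical memory-mapped ADC registers: the addresses and bit
       positions are made up for illustration only. */
    #define ADC_CTRL   (*(volatile uint32_t *)0x40010000u)
    #define ADC_STATUS (*(volatile uint32_t *)0x40010004u)
    #define ADC_DATA   (*(volatile uint32_t *)0x40010008u)

    #define ADC_CTRL_START  (1u << 0)   /* write 1 to start a conversion            */
    #define ADC_STATUS_DONE (1u << 0)   /* set by hardware when the sample is ready */

    /* Start one conversion, busy-wait until it finishes, return the raw value. */
    uint16_t adc_read_once(void)
    {
        ADC_CTRL = ADC_CTRL_START;                  /* initiate a sample */
        while ((ADC_STATUS & ADC_STATUS_DONE) == 0) {
            /* poll until the hardware reports the conversion is complete */
        }
        return (uint16_t)(ADC_DATA & 0x0FFFu);      /* e.g. a 12-bit result */
    }

A real driver would also select an input channel, configure the clock and voltage reference, and often use an interrupt or DMA instead of polling, which is exactly the sort of detail the datasheet spells out.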
If you have a specific publicly available microcontroller board, for example an eval board, then that makes it much easier for folks here or elsewhere to show you where to look for the schematics, data sheets, etc. Otherwise, even knowing exactly which microcontroller and what I/O pins are used, would be helpful when asking such a question. There are probably lots of example programs out there that could be borrowed from. And it could be as simple as a few lines of C to an existing library, or as complicated as many lines of C with interrupt service routines, and possibly some assembler.
This is extremely dependent on your hardware and there's no information in the question that would enable a real answer.
In general, you should see the documentation for your system, especially the AD/DA parts. There should be good examples. If there's a particular problem, post a more specific question.

Running an audio synthesis/analysis language on an embedded device

What is the experience running programs written in an audio synthesis/analysis language such as ChucK, Pure Data, Csound, Supercollider, etc. in an embedded device such as an Arduino Mega, Beagle Board or a custom board with a microprocessor or DSP chip?
I would like to know which language and hardware you chose and why. What were the obstacles, etc.? My objective is to run programs that can be easily written by musicians/producers on a board that is not too expensive.
I received input from someone who is successfully running ChucK programs on a Beagle Board (Ubuntu Linux on a Beagle Board running ChucK), but his choice of language and hardware was made very lightly; his setup is not using the DSP on the Beagle Board, and it seems like overkill to run a whole Linux install just to process audio signals.
Any input is appreciated!
Update: I found Zengarden, which is a Pd runtime implementation (a standalone C++ library) that runs well on ARM-based devices. For now I'll go with the BeagleBoard and Zengarden, but in a later stage of the project I'll need to replace the BeagleBoard with something that costs less.
I'd love to hear more input from the community.
Thanks everyone for your comments and answers. For everybody else's reference, I ended up writing a JACK client in C++ that parses and interprets PureData patches and ran it on a BeagleBoard with Angstrom Linux and JACK server. Here's a video and a tutorial that I wrote: http://elsoftwarehamuerto.org/articulos/691/puredata-beagleboard/
First, I am not an audio programmer, so I'm not familiar with the actual demands of the signal processing necessary to achieve what you want to achieve.
But, it's difficult to contrast something like the Beagle Board and the Arduino Mega, since they're really in different leagues of base performance. The Beagle Board is a 1 GHz ARM vs the Arduino Mega's 16 MHz. That tells me that whatever processing you may be interested in doing may well be within the capabilities of the Beagle Board, but the Arduino Mega would have almost no chance without an attached DSP to do the actual work.
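As a rough back-of-envelope illustration (assuming a 44.1 kHz audio sample rate): 16,000,000 / 44,100 leaves roughly 360 CPU cycles per sample on the Mega, while a processor in the 1 GHz class has over 20,000 cycles per sample to play with, before any attached DSP enters the picture.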
The next consideration is whether any of the packages you were considering actually target DSPs for their runtimes. At a glance they seem like high-level sound processing languages. With the Beagle Board, you may well have the processing power to evaluate and compile the sound source code that these packages use and let them compile to their targets, but on the Arduino Mega, that seems unlikely.
If all you're doing is working with a piece of hardware that will be running the artifacts created by the packages you mentioned, then the Arduino Mega may well be suitable, as the "development" is done on a more powerful machine. But if you want to work with these packages as-is and use them as a development tool, then running them on a Linux port on something like the Beagle Board may simply be a better option.
Again, after a casual look around, the Arduino Mega is roughly half the price of the Beagle Board, but the Beagle Board may well let you work at a much higher level (generic Linux). Whether either will be powerful enough for your final vision, I can't say. But I would imagine you could get a lot farther, a lot faster, using the more powerful system, at least in the short term.

When machine code is generated from a program, how does it translate to hardware-level operations? [closed]

Say the instruction is something like 100010101 1010101 01010101 011101010101. How does this translate into the actual job of deleting something from memory? Memory consists of actual physical transistors that HOLD data. What causes them to lose that data? Is it some external signal?
I want to know how that signal is generated, i.e. how some binary numbers change the state of a physical transistor. Is there a level below machine code that isn't explicitly visible to a programmer? I have heard of microcode that handles code at the hardware level, even below assembly language, but I still pretty much don't understand. Thanks!
I recommend reading the Petzold book "Code". It explains these things as best as possible without the physics/electronics knowledge.
Each bit in memory, at a functional level, HOLDs either a zero or a one (let's not get into the exceptions; they're not relevant to the discussion). You cannot delete memory; you can only set it to zeros or ones or some combination. The definition of deleted or erased is just that, an arbitrary definition: the software that erases memory is simply telling the memory to HOLD the value that means erased.
There are two basic types of RAM, static and dynamic, and they are as their names imply: so long as you don't remove power, static RAM will hold its value until changed. Dynamic memory is more like a rechargeable battery, and there is a lot of logic that you don't see from assembler or microcode or any software (usually) that keeps the charged batteries charged and the empty ones empty. Think about a bunch of water glasses, where each one is a bit. With static memory the glasses hold the water until emptied: no evaporation, nothing. Glasses with water, let's say, are ones and ones without are zeros (an arbitrary definition). When your software wants to write a byte there is a lot of logic that interprets that instruction and commands the memory to write; in this case there is a little helper that fills up or empties the glasses when commanded, or reads the values in the glasses when commanded.
In the case of dynamic memory, the glasses have little holes in the bottom and are constantly but slowly letting the water drain out, so glasses that are holding a one have to be refilled. The helper logic not only responds to the read and write commands but also walks down the row of glasses periodically and fills back up the ones. Why would you bother with unreliable memory like that? It takes twice (four times?) as many transistors for an SRAM bit as for a DRAM bit. Twice the heat/power, twice the size, twice the price; even with the added refresh logic it is still cheaper all the way around to use DRAM for bulk memory. The bits in your processor that are used, say, for the registers and other things are SRAM based, static. Bulk memory, the gigabytes of system memory, is usually DRAM, dynamic.
The bulk of the work done in the processor/computer is done by electronics that implement the instruction set, or the microcode in the rare case of microcoding (the x86 families are/were microcoded, but when you look at all processor types, the microcontrollers that drive most of the everyday items you touch are generally not microcoded, so most processors are not microcoded). In the same way that you need some worker to help you turn C into assembler, and assembler into machine code, there is logic to turn that machine code into commands to the various parts of the chip and to the peripherals outside the chip. Download either the llvm or gcc source code to get an idea of how the size of a program being compiled compares to the amount of software it takes to do that compiling. You will get an idea of how many transistors are needed to turn your 8 or 16 or 32 bit instruction into some sort of command to some hardware.
Again I recommend the Petzold book, he does an excellent job of teaching how computers work.
I also recommend writing an emulator. You have done assembler, so you understand the processor at that level; in the same assembler reference for the processor the machine code is usually defined as well, so you can write a program that reads the bits and bytes of the machine code and actually performs the function. For an instruction like mov r0,#11 you would have some variable in your emulator program for register 0, and when you see that instruction you put the value 11 in that variable and continue on. I would avoid x86; go with something simpler: PIC12, MSP430, 6502, HC11, or even the Thumb instruction set I used. My code isn't necessarily pretty in any way, closer to brute force (and still buggy, no doubt). If everyone reading this were to take the same instruction set definition and write an emulator, you would probably have as many different implementations as there are people writing emulators. Likewise for hardware: what you get depends on the team or individual implementing the design. So not only is there a lot of logic involved in parsing through and executing the machine code, that logic can and does vary from implementation to implementation. Going from one x86 to the next might be similar to refactoring software, or for various reasons the team may choose a do-over and start from scratch with a different implementation. Realistically it is somewhere in the middle: chunks of old logic reused, tied to new logic.
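To illustrate what that looks like in practice, here is a toy fetch-decode-execute loop in C. The 16-bit encoding is invented for this example (it is not Thumb, PIC, or any other real instruction set): the top nibble is the opcode, the next nibble the destination register, and the low byte an immediate value.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy instruction set, made up for illustration:
       [15:12] opcode, [11:8] destination register, [7:0] immediate. */
    enum { OP_MOVI = 0x1, OP_ADDI = 0x2, OP_HALT = 0xF };

    static uint32_t reg[16];                 /* the "registers" are just variables */

    static void run(const uint16_t *program)
    {
        unsigned pc = 0;                     /* program counter */
        for (;;) {
            uint16_t insn = program[pc++];   /* fetch */
            unsigned op  = (insn >> 12) & 0xF;   /* decode the fields */
            unsigned rd  = (insn >> 8)  & 0xF;
            unsigned imm =  insn        & 0xFF;
            switch (op) {                    /* execute */
            case OP_MOVI: reg[rd]  = imm; break;   /* like "mov r0,#11" */
            case OP_ADDI: reg[rd] += imm; break;
            case OP_HALT: return;
            default: printf("undefined instruction 0x%04x\n", insn); return;
            }
        }
    }

    int main(void)
    {
        /* movi r0,#11 ; addi r0,#5 ; halt */
        const uint16_t program[] = { 0x100B, 0x2005, 0xF000 };
        run(program);
        printf("r0 = %u\n", (unsigned)reg[0]);   /* prints r0 = 16 */
        return 0;
    }

A real emulator adds memory accesses, flags and branches, but the structure stays the same: fetch, decode the bit fields, and update some variables. The hardware's job is the same decode, just done with gates instead of a switch statement.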
Microcoding is like a hybrid car. Microcode is just another instruction set, another machine code, and requires lots of logic to implement/execute. What it buys you in large processors is that the microcode can be modified in the field. It's not unlike a compiler: your C program may be fine but the compiler+computer as a whole may be buggy, and by putting a fix in the compiler, which is soft, you don't have to replace the computer, the hardware. If a bug can be fixed in microcode then they will patch it in such a way that the BIOS on boot will reprogram the microcode in the chip, and now your programs will run fine. No transistors were created or destroyed, nor wires added; just the programmable parts changed. Microcode is essentially an emulator, but an emulator that is a very, very good fit for the instruction set. Google Transmeta and the work that was going on there when Linus was working there; the microcode was a little more visible on that processor.
I think the best way to answer your question, short of explaining how transistors work, is to say: either look at the amount of software/source in a compiler that takes a relatively simple programming language and converts it to assembler, or look at an emulator like qemu and how much software it takes to implement a virtual machine capable of running your program. The amount of hardware in the chips and on the motherboard is on par with this; not counting the transistors in the memories, millions to many millions of transistors are needed to implement what is usually a few hundred different instructions or fewer. If you write a PIC12 emulator and get a feel for the task, then ponder what a 6502 would take, then a Z80, then a 486, then think about what a quad-core Intel 64-bit might involve. The number of transistors in a processor/chip is often advertised/bragged about, so you can also get a feel from that for how much is there that you cannot see from assembler.
It may help if you start with an understanding of electronics, and work up from there (rather than from complex code down).
Let's simplify this for a moment. Imagine an electric circuit with a power source, switch and a light bulb. If you complete the circuit by closing the switch, the bulb comes on. You can think of the state of the circuit as a 1 or a 0 depending on whether it is completed (closed) or not (open).
Greatly simplified, if you replace the switch with a transistor, you can now control the state of the bulb with an electric signal from a separate circuit. The transistor accepts a 1 or a 0 and will complete or open the first circuit. If you group these kinds of simple circuits together, you can begin to create gates and start to perform logic functions.
Memory is based on similar principles.
In essence, the power coming in the back of your computer is being broken into billions of tiny pieces by the components of the computer. The behavior and activity of such is directed by the designs and plans of the engineers who came up with the microprocessors and circuits, but ultimately it is all orchestrated by you, the programmer (or user).
Heh, good question! Kind of involved for SO though!
Actually, main memory (DRAM) stores each bit in a tiny capacitor with an access transistor, while cache memories are typically implemented with transistor-based SRAM.
At the low level, the CPU implements one or more state machines that process the ISA, or the Instruction Set Architecture.
Look up the following circuits:
Flip-flop
Decoder
ALU
Logic gates
A series of FFs can hold the current instruction. A decoder can select a memory or register to modify, and the state machine can then generate signals (using the gates) that change the state of a FF at some address.
Now, modern memories use a decoder to select an entire line of capacitors, and then another decoder is used when reading to select one bit out of them, and the write happens by using a state machine to change one of those bits, then the entire line is written back.
It's possible to implement a CPU in a modern programmable logic device. If you start with simple circuits you can design and implement your own CPU for fun these days.
That's one big topic you are asking about :-) The topic is generally called "Computer Organization" or "Microarchitecture". The Wikipedia articles on those terms are a reasonable place to get started if you want to learn.
I don't have any knowledge beyond a very basic level about either electronics or computer science but I have a simple theory that could answer your question and most probably the actual processes involved might be very complex manifestations of my answer.
You could imagine the logic gates getting their electric signals from the keystrokes or mouse strokes you make.
A series or pattern of keys you may press may trigger particular voltage or current signals in these logic gates.
Now, which currents or voltages are produced in which logic gates when you press a particular pattern of keys is determined by the very design of those gates and circuits.
For example, if you have a programming language in which the "print(var)" command prints "var", the sequence of keys "p-r-i-n-t" would trigger a particular set of signals in a particular set of logic gates that would result in displaying "var" on your screen.
Again, what all gates are activated by your keystrokes depends on their design.
Also, typing "print(var)" on your desktop or anywhere else apart from the IDE will not yield the same results, because the software behind the IDE activates a particular set of transistors or gates which respond in the appropriate way.
This is what I think happens at the Fundamental level, and the rest is all built layer upon layer.

Video capture on Linux? [closed]

We need to capture live video and display it easily on Linux. We need a cheap card or USB device with a simple API. Anyone want to share some experience?
Use the video4linux library. I've used it with a C++ program and was able to capture webcam frames within about an hour. (Very easy to use and set up.)
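Under the hood this is the V4L2 kernel interface, driven through ioctl calls from C (or C++). A minimal sketch, assuming the webcam shows up as /dev/video0, that just opens the device and queries its capabilities:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);         /* assumes the cam is video0 */
        if (fd < 0) {
            perror("open /dev/video0");
            return 1;
        }

        struct v4l2_capability cap;
        memset(&cap, 0, sizeof(cap));
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {   /* ask the driver who it is */
            perror("VIDIOC_QUERYCAP");
            close(fd);
            return 1;
        }

        printf("device: %s, driver: %s\n",
               (const char *)cap.card, (const char *)cap.driver);
        if (cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)
            printf("supports video capture\n");

        close(fd);
        return 0;
    }

Actual frame grabbing continues from there with format selection and buffer streaming (VIDIOC_S_FMT, VIDIOC_REQBUFS, VIDIOC_QBUF/VIDIOC_DQBUF, VIDIOC_STREAMON), which the V4L2 documentation covers with full examples.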
If you need to program, you're best off using GStreamer, a multimedia framework under Linux.
Cheese, mentioned by jackbravo, is based on GStreamer, as is Flumotion, a streaming server I work on.
As mentioned, use dvgrab to capture from the camera's FireWire interface, then use tools such as ffmpeg (command line) or Kino (a simple GUI video editor) to process the video as needed. PCI-based FireWire cards are relatively inexpensive and easy to find.
Here are some examples:
continuous capture from firewire, autosplit every couple of minutes
dvgrab --size 500 --autosplit <filename>
watch the camera live
dvgrab - | mplayer -
Be aware that some recent distros (e.g. Fedora 8) are using new but half-baked FireWire drivers. However, Ubuntu works great.
There are "sealed" camera solutions out there with mini web servers and an Ethernet port on the back. Just plug one into the network, set its IP, and open up a browser, on Linux or wherever.
If you want to capture on Linux: I once had a cheap webcam capturing single frames from a Perl script, which could have been modified for real time, though that was about 10 years ago. Anyway, it's possible :-/
There's the Cheese GNOME application. Really simple to use. Not too many features, just video capture.
OpenCV will allow you to capture individual frames from a camera and save them to disk. If you then need to manipulate these to create a video, I would suggest netpbm, a pretty powerful set of command-line tools you can use with some shell scripting to make a video or do whatever it is you need.
Another option is to use FireWire (IEEE 1394) cameras, such as most common DV camcorders. They tend to work really well and give much better video than cheap webcams, and there is a plethora of tools on Linux for working with DV video, such as dvgrab.
If you use Java, v4l4j makes it very simple to capture frames from any V4L device. It also allows you to control the device from Java. I used it with a PTZ webcam (Logitech QuickCam Orbit), and I could control the usual things like brightness, saturation and auto white balance, but also the tilt and pan of the camera. Very handy!
