Can devices be added and removed while the system is running (hot swapping) in I2C and SPI?
Is it better to use I2C or SPI for data communication between a microprocessor and a DSP?
Is it better to use I2C or SPI for data communication from an ADC?
1) Of course, given no limitations.
SPI would be easiest, as each device would have an independent chip select. You'd just have to make sure that the SPI device didn't think its chip select was active at some point during the connect/disconnect process or it might assert its MISO line and cause contention. There would be many ways to sequence the power or reset of that SPI device to ensure that part worked.
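To make the per-device chip-select point concrete, here is a minimal sketch using the Linux spidev interface (as on a Raspberry Pi). The bus/CS numbers and the command byte are placeholders for the sketch, not anything tied to a real part:

# Sketch: each SPI device hangs off its own chip select, so a
# hot-swapped device is just another (bus, chip-select) pair to open.
# Assumes Linux spidev (e.g. a Raspberry Pi); numbers are placeholders.
import spidev

spi = spidev.SpiDev()
spi.open(0, 1)                 # bus 0, chip select 1: the newly attached device
spi.max_speed_hz = 500000
spi.mode = 0

# Chip select is asserted only for the duration of the transfer, so an
# idle (or absent) device never drives MISO and can't cause contention.
response = spi.xfer2([0x9F, 0x00, 0x00])   # 0x9F: hypothetical "read ID" command
print(response)
spi.close()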
I2C would be trickier, as a device connecting during a transfer might interpret the first bytes it sees in the ongoing transfer as a command and potentially do bad things. You could put a bidirectional level shifter or buffer with an enable (and perhaps a flop to latch the enable) in front of the hot-swapped I2C device, and strobe a secondary "bus_clear" message during idle periods on the bus to enable any newly hot-swapped buffers. There'd be lots of ways to accomplish the same thing.
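On the I2C side, one way to pick up a hot-swapped device during a known-idle period is to rescan the bus, in the spirit of i2cdetect. A sketch using the Python smbus2 package (the bus number is a placeholder, and note that blindly probing addresses can confuse some devices, just as i2cdetect warns):

# Sketch: probe the I2C bus for devices during an idle period,
# similar to i2cdetect. Assumes Linux /dev/i2c-1 via smbus2.
from smbus2 import SMBus

def scan_bus(bus_number=1):
    found = []
    with SMBus(bus_number) as bus:
        for addr in range(0x08, 0x78):   # valid 7-bit address range
            try:
                bus.read_byte(addr)      # a present device ACKs its address
                found.append(addr)
            except OSError:
                pass                     # no device at this address
    return found

print([hex(a) for a in scan_bus()])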
2) and 3) have no general answer.
SPI would be capable of going faster, so if you need speed, that'd be the way to go. The micro-to-DSP pairing implies that. However, just the fact that one is a micro and the other is a DSP doesn't tell you anything at all unless you start assuming things. What if the micro is just getting small result packets from a whole set of DSPs? I2C would make that easier.
Similarly, ADCs might need speed, or they might not. If there are a lot of them and the required data rate is low, again I2C makes more sense.
SPI is almost always simpler to use, so that's a plus if there aren't too many devices. I2C uses fewer pins.
Many times, the choice between I2C and SPI is made based on what interface the parts you want to use provide. Another very important factor is what ELSE needs to happen in the system and which bus makes sense for as many of the parts as possible.
There's more: design reuse, weird firmware/RTOS limitations, and so on. There are many reasons you can't say one is "better" than the other in general. What's better for your design is just not that simple.
I think these questions are "get you talking" questions...
Good day!
Problem definition:
Current implementations of Bluetooth do not make it simple to support both good-quality audio (earphones mode) and two-way audio transmission (headset mode).
Also, even if one manages to set this configuration up - which imposes huge limitations on the hardware/software used - there is no way to handle sound input from two different audio devices simultaneously.
So, technically, one cannot just play a game, communicate on Discord, and optionally listen to some music, unless bound to some USB-bundled earphones, which are usually really crappy, really expensive, or both.
Solution sketch:
So, I came up with the idea that one could actually build such a device using a Raspberry Pi, an Arduino, or even a bare-component-based stack.
The theoretical layout of the connections would look something like this:
The idea is to create two "simple" devices:
One, not so portable, that would handle several analog inputs and one analog output.
One, portable, that would handle a single analog input and output, and could be used with any analog earphones.
"Requirements" to such system would be quite simple:
This bundle have to handle Data Transition on some distance, preferably up to 10 meters, or more.
The "Inlet" device should be portable enough to keep it in the pocket, or in an arm band, or something
Sound Quality should be at the very least on the level of Bluetooth headphones profile, or if possible - even better
If possible - it would be nice to keep the price of the Solution under 500 Euros, but I'm so tired of current state of things that I might consider raising the budget...
Don't mind the yellow buttons on the Outlet device. Those are optional, and will depend on the implementation stack :)
Question:
Can anyone advise me which component base would be a better fit for making such a tool, and why?
And maybe someone actually knows of similar systems that already exist?
Personally, I would prefer anything but the bare-components-based solution, just because I'm really rusty in that area, and it requires quite an array of tools to handle properly.
Using pre-built modules, on the other hand, would save me from buying most of the hardware tools, minimizing the "hardware customization" part of this solution and leaving only the software part to handle (which is my main area of expertise).
But then again, if there are experts here who consider other stacks non-viable, I would really appreciate seeing their reasoning.
P.S. Just to be clear: if this project proves viable, I will implement it and share the implementation details with the communities. I am not the first one who needs such a system, and unfortunately it seems that hardware/software vendors are not really interested in designing similar solutions...
I happened to find a "temporary" solution.
I came across a wireless headset that simultaneously supports a wireless USB bundle connection and a Bluetooth connection to different devices, and provides a nice way of controlling sound input/output over both connections.
This was almost pure luck, as this "feature" was not described anywhere in the specs...
The actual headset name is:
JBL Quantum 800
This does not close the question per se, as I still plan to implement this "Summer Project" at some point, but I believe this information might be useful to those searching for similar solutions.
As a software developer, I am trying to understand how a system could possibly work fast and efficiently enough, and operate so consistently and flawlessly with such precision, for all the ongoing actions it needs to account for in a system such as Tesla's Autopilot (a self-driving car system)...
In a car driving 65 MPH, if a deer runs out in front of the car, it immediately makes adjustments to protect the vehicle from a crash - while having to keep up with all the other sensor requests constantly firing off at the same time for possible actions, on a millisecond scale, without skipping a beat.
How is all of that accomplished in sync? And how does the processing report back to it so quickly that it is able to respond almost instantaneously (without getting backed up with requests)?
I don't know anything about Tesla code, but I have read other real-time code and analysed time slips in it. One basic idea is that if you check something every millisecond, you will always respond to change within a millisecond. The simplest possible real-time system has a "cyclic executive" built around a repeating schedule that tells it what to do when, worked out so that in all possible cases everything that has to be dealt with is dealt with within its deadline. Traditionally you are worrying about CPU time here, but not necessarily: the system I looked at was most affected by the schedule for a serial bus called a 1553 (https://en.wikipedia.org/wiki/MIL-STD-1553) - there almost wasn't enough time to get everything transmitted and received on time.
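Here is a toy sketch of a cyclic executive in Python. The task names, periods, and the 1 ms tick are invented for illustration; a real one would be generated from a schedule that has been worked out, as above, to meet every deadline in the worst case:

# Toy cyclic executive: a fixed, repeating schedule of minor frames.
# Task names, periods, and the 1 ms tick are invented for illustration.
import time

def read_sensors():  pass    # placeholder tasks; a real system does work here
def run_control():   pass
def log_telemetry(): pass

TICK = 0.001                          # 1 ms minor frame
MAJOR_FRAME = 4                       # the schedule repeats every 4 ticks
SCHEDULE = {
    0: [read_sensors, run_control],   # every 4 ms: sense, then control
    2: [log_telemetry],               # offset so logging never collides
}

tick = 0
next_deadline = time.monotonic()
while True:
    for task in SCHEDULE.get(tick % MAJOR_FRAME, []):
        task()
    next_deadline += TICK
    slack = next_deadline - time.monotonic()
    if slack < 0:
        raise RuntimeError("deadline overrun")   # a real system must prove this can't happen
    time.sleep(slack)
    tick += 1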
This is a bit too simple, because it doesn't cope with rare events which have to be dealt with really quickly, such as response to interrupts. Clever schemes for interrupt handling don't have as much of an advantage as you would expect, because there is often a rare worst case that makes the clever scheme underperform a cyclic executive, and real-time code has to work in the worst case. In practice, though, you do need something with interrupt handlers, with high-priority processes that must be run on demand, and with low-priority processes that can be ignored when other stuff needs to make deadlines but will be run otherwise. There are various schemes and methodologies for arguing that these more complex systems will always make their deadlines. One of the best known is https://en.wikipedia.org/wiki/Rate-monotonic_scheduling. See also https://en.wikipedia.org/wiki/Earliest_deadline_first_scheduling.
An open source real time operating system that has seen real life use is https://en.wikipedia.org/wiki/RTEMS.
This question does not really relate to any programming language specifically; it relates to, I think, EVERY programming language out there.
So, the developer enters code into an IDE or something of the sort. The IDE turns that, directly or indirectly (maybe there are many steps involved: A turns it into B, B turns it into C, C turns it into D, etc.), into machine language (which is just a bunch of numbers). How is machine language interpreted and run? I mean, doesn't code have to come down to some mechanical thing in the end, or how would it be run? If chips run the code, what runs the chips? And what runs that? And what runs that? On and on and on.
There is nothing really mechanical about it - the way a computer works is electrical.
This is not a complete description - that would take a book - but it is the basis of how it works.
The basis of the whole thing is the diode and the transistor. A diode or transistor is made from a piece of silicon with some impurities added so that it can be made to conduct electricity under certain conditions. A diode only allows electricity to flow in one direction, and a transistor only allows electricity to flow in one direction in an amount proportional to the electricity provided at the "base". So a transistor acts like a switch, but it is turned on and off using electricity instead of something mechanical.
So when a computer loads a byte from memory, it does so by turning on individual wires for each bit of the address, and the memory chip turns on the wires for each data bit depending on the value stored in the location designated by those address wires.
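Here is a toy model of that wire-level view in Python, with lists of bits standing in for the address and data wires (the memory contents are invented):

# Toy model: 8 address wires select one byte of memory; the 8 data
# wires come back as individual bit values. Contents are invented.
MEMORY = [0] * 256
MEMORY[0x10] = 0b10110010

def read_byte(address_wires):            # 8 bits, most significant first
    address = 0
    for bit in address_wires:            # the address decoder
        address = (address << 1) | bit
    value = MEMORY[address]
    return [(value >> i) & 1 for i in range(7, -1, -1)]   # data wires

print(read_byte([0, 0, 0, 1, 0, 0, 0, 0]))   # reads location 0x10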
When a computer loads bytes containing an instruction, it then decodes the instruction by turning on individual wires that control other parts of the CPU:
If the instruction is arithmetic, then one wire may determine which registers are connected to the arithmetic logic unit (ALU), while other wires determine whether the ALU adds or subtracts, and another may determine whether or not it shifts left.
If the instruction is a store, then the wires that get turned on are the address lines, the wires that determine which register is attached to the data lines, and the line that tells the memory to store the value.
The way these individual wires are turned on and off is via this huge collection of diodes and transistors, but to make designing circuits manageable, these diodes and transistors are clumped into groups that are standardized components: logic gates like AND, OR and NOT gates. These logic gates have one or two wires coming in and one coming out, with a bunch of diodes and transistors inside. Here is an electrical schematic for how all the diodes and transistors can be wired up to make an exclusive-OR gate: http://www.interfacebus.com/exclusive-or-gate-internal-schematic.png
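You can mimic that layering in a few lines of code: treat each gate as a function, then compose gates into something useful, such as the 1-bit adder at the heart of an ALU. A toy sketch, with booleans standing in for voltages on wires:

# Toy model: booleans stand in for voltage levels on wires.
def NOT(a):    return not a
def AND(a, b): return a and b
def OR(a, b):  return a or b
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

# Compose gates into a half adder: one column of binary addition.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)          # (sum bit, carry bit)

# Chain half adders into a full adder, the building block of an ALU.
def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

print(full_adder(True, True, False))     # (False, True): 1 + 1 = binary 10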
Then when you have the abstraction level of logic gates it is a much more manageable job to design a CPU. Here is an example of someone who built a CPU using just a bunch of logic gate chips: http://cpuville.com
It turns out there is already a book! I just found a book (and an accompanying website with videos and course materials) on how to make a computer from scratch. Have a look at this: http://nand2tetris.org/
I want to start FPGA programming. I don't have any knowledge at all about how FPGAs work and such. I would like to get a development board, not too expensive, but it should have at least 40 I/O pins. Anything up to $300 is OK.
I decided that I want to program in Verilog. I am not sure about the following:
How will my compiled 'program' be stored on the chip? I would guess the chip has some kind of EEPROM to save my program, but from what I have read, it is apparently stored in RAM. I want my program to remain on the chip (or to be loaded somehow) every time it powers up.
Can I buy a separate FPGA chip (not a whole development board) for production? And if yes, how can I upload my program to the separate chip? Does it in some way connect to the development board?
I'd recommend the Digilent Basys board as an introduction. It only has 16 external I/O, but it already has RAM, USB, switches, buttons, LEDs, 7-segment displays, a VGA connector, and a PS/2 connector onboard - and you're unlikely to find an FPGA with fewer than 40 I/O pins anyway. If you want more I/O for another project, use the Nexys instead - it has more peripherals than I care to list, and also a high-speed Hirose 43-pin connector if you have a project which specifically needs about 40 connections.
Also, consider how you want to interface with your PC. Is your goal to make an embedded system, or to interface with a computer through a PCI/Ethernet/USB connection?
Yes, you can buy separate FPGA chips for production - there's a dizzying array of options, though; Digikey has 5,300 at this time. You do need some way to program the FPGA, and an onboard NVM chip that programs the FPGA on startup is a popular option. However, you should start with a development board that's well supported and already has a programmer, toolchain, and simulator available before you get too far into designing your own board or worrying about how to save your program onto the chip. Those are good things to know, but they're not what you want to worry about right now. Good luck!
The whole point of using an FPGA is that your "program" is actually a circuit, not RAM. There are physical logic components that are configured when you write the bitstream to the FPGA. This is why they can run so much faster for specialized applications - you are basically making custom hardware.
Xilinx is one of the main FPGA manufacturers. Try their website. Check out the Boards & Kits section.
Try reading more about the technology before you get ahead of yourself. You will need a strong understanding of how FPGAs work before you can program them effectively. Wikipedia is a great place to start.
In Xilinx FPGA terminology, the "program" is called a bitstream. Some FPGAs have embedded flash to store the bitstream (e.g. the Spartan 3AN). Most FPGAs require some external bitstream storage. Here is a configuration guide on how to configure an FPGA.
Yes you can. There are multiple ways to do configuration. Most of them require some external circuitry.
Check out Actel's new SmartFusion FPGA. It has an FPGA fabric, of course, plus a hard ARM MCU with a good analog front end (DAC, ADC, etc.).
The eval board is only $100:
http://www.actel.com/products/hardware/devkits_boards/smartfusion_eval.aspx
And all the software you need to get up and running is free.
I'm looking for a program that can read the weight from a USB scale, namely the Pelouze 10lb USB Portable Scale. I thought it would have a virtual COM port driver, but instead it uses HID drivers. I've been searching for a whole month for a program that can help me transfer the reading of the scale to Microsoft Excel. Can someone help me out or point me in the right direction? I am absolutely illiterate when it comes to programming. Much gratitude for any help given.
I can offer some helpful links.
The scales use the "HID point of sale" USB spec, a link to which can be found in the comments section of the blog post I mention below. I'd link directly, but the spam prevention mechanism prevents me.
There's a blog post on addressing the 25lb version of the scale from C#, here: http://nicholas.piasecki.name/blog/2008/11/reading-a-stamps-com-usb-scale-from-c-sharp/
The comments on that post are helpful - that's where you'll find the USB spec, and there's also a comment showing how to get at the data without using the full HID library.
Good luck with your project!
I wrote some code to read from a Dymo USB scale using Python. It requires libusb and PyUSB.
This code is pretty general; it should work with any HID device that has just one configuration:
import usb.core
import usb.util

VENDOR_ID = 0x0922
PRODUCT_ID = 0x8003

# find the USB device
device = usb.core.find(idVendor=VENDOR_ID,
                       idProduct=PRODUCT_ID)

# use the first/default configuration
device.set_configuration()
# first endpoint
endpoint = device[0][(0, 0)][0]

# read a data packet
attempts = 10
data = None
while data is None and attempts > 0:
    try:
        data = device.read(endpoint.bEndpointAddress,
                           endpoint.wMaxPacketSize)
    except usb.core.USBError as e:
        data = None
        if e.args == ('Operation timed out',):
            attempts -= 1
            continue
        raise  # anything other than a timeout is unexpected

print(data)
For the DYMO M10 scale I'm using, the data packet is a 6-element array like this: array('B', [3, 2, 11, 255, 0, 0]).
The last two elements are used to calculate the weight.
In kg mode, grams = data[4] + (256 * data[5]); in pounds/ounces mode, ounces = 0.1 * (data[4] + (256 * data[5])).
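Wrapped up as a small helper (note: the check on the mode byte data[2] is an assumption for this sketch; the packet description above only pins down the last two bytes):

# Decode the 6-byte packet described above. The data[2] mode check is
# an assumption for this sketch; only the little-endian weight in the
# last two bytes comes from the packet description above.
def parse_weight(packet):
    raw = packet[4] + (256 * packet[5])   # little-endian 16-bit weight
    if packet[2] == 2:                    # assumed flag value for kg mode
        return raw, "g"
    return 0.1 * raw, "oz"                # pounds/ounces mode

value, unit = parse_weight([3, 2, 11, 255, 0, 0])
print(value, unit)                        # 0.0 oz for an empty scale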
More info on my blog post.
The first question that you will need to answer is the exact HID usage. USB describes a wide family of related protocols. USB HID is a subset, but this still covers a large array of devices. The HID spec defines a 32-bit "usage" identifier supporting up to 4 billion different types of HID devices, although only a fraction of those values have been assigned so far.
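As an illustration, here is a short sketch that lists the usage page and usage ID of every attached HID device, using the Python hidapi bindings (the "hid" package); this is one quick way to see what the scale reports itself as (on Linux the usage fields may read as zero):

# Sketch: list HID devices and their usage-page/usage IDs, using the
# 'hid' package (hidapi bindings). Fields come from hid.enumerate().
import hid

for info in hid.enumerate():
    print("%04x:%04x" % (info["vendor_id"], info["product_id"]),
          info["product_string"],
          "usage_page=0x%04x" % info["usage_page"],
          "usage=0x%04x" % info["usage"])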
The Windows API you'll probably need is Raw Input. Unfortunately, this is not an interface for the programming illiterate.
For USB devices not supported directly by the OS, you either need to be a programmer or you need a program supplied by the device vendor. In this case, as it sounds like you're not a programmer, you should contact the manufacturer to see if they can provide any diagnostic or test programs that could be used to read and capture the scale data. If you're lucky, such a program can be "batched" and its output redirected, or something like that. Good luck.
Elane sells a line of USB HID scales and also has a Windows program to read the weight. Unfortunately, because of Stamps.com, that program is no longer free. At $15.50, it's not necessarily something I'd want to blindly try, but here it is:
http://www.elane.net/index.php?go=USB_pcsoftware
If you want to play detective, this guy's advice might help:
http://nicholas.piasecki.name/blog/2008/11/reading-a-stamps-com-usb-scale-from-c-sharp/
I just ordered a DYMO scale myself, so I will post my findings.
http://r.lagserv.net/scalereader.htm
This is great code to get started with for C# and a Dymo scale using HidLibrary.
Just change the vendor ID to match that of your scale.
You can be up and running in a few minutes.
I used it with a Dymo S400.
I wrote my own Node.js version and shared the code at https://github.com/PhantomRay/dymo-scale-nodejs.
It has only been tested on Windows 10. It also has a simple WebSocket server mode built into it, so that any web page can connect to it and read the data.
I would assume that output from a USB HID device like this should appear to the USB host exactly as if the input were typed on a keyboard; i.e., when the scale stabilises, the weight would be sent over USB. This is the way most USB barcode scanners work.
If this is the case then the scale should just fire across the weight and it would appear in whatever application was active; Excel if that is where you wanted the input to end up.
However, I've looked over the specs and manual for the scale you specified and, unless I'm looking at the wrong model, although it is powered by USB, I don't see any mention that it also communicates over USB. Do you have a URL or part number?