How can a WinCE 5 driver cycle the power on a PCMCIA socket under its control?
I have tried CardPowerOff() in pcmcia.dll, but it doesn't actually remove power. Any advice there would be helpful, but I'm not picky about the mechanism.
If it's possible at all, the mechanism to achieve it is hardware/OEM dependent. You're going to have to go to your device OEM and ask them if/how it might be done on their hardware. My guess is that if CardPowerOff isn't actually removing card power, the OEM didn't provide a way to fully shut it down (I've seen sockets hard-wired to power before).
In the old days you would look at finding the base address, DMA channel and IRQ to communicate with a device. I'm kinda looking for the equivalent.
I'm looking to communicate directly with an audio device, not through a driver, in Linux. Time isn't an issue, but I am struggling to find the information that I need, and I know there is a possibility of needing a lot of code - that's fine.
I was wondering if anyone could point me in the right direction to achieving this.
Thank you very much.
As far as I was aware, you couldn't use IRQs or DMA from a user-mode process on Linux, and this guide (heading 3) seems to confirm that. However, after checking, I managed to find a Linux driver (udmabuf) that lets you access DMA buffers from user space; maybe this is what you are looking for?
Otherwise I would maybe try to write a similar but more customized version of udmabuf, specific to your purpose.
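Either way, the user-space side of udmabuf is basically just open + mmap on the device node the module creates. Here is a minimal sketch, assuming the module was loaded so that it created /dev/udmabuf0 with a known buffer size (both the name and the size are set when the module is loaded, so adjust to your setup):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_SIZE (1024 * 1024)   /* must match the size given when loading udmabuf */

int main(void)
{
    /* /dev/udmabuf0 is created by the udmabuf module; adjust to your setup */
    int fd = open("/dev/udmabuf0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/udmabuf0");
        return 1;
    }

    /* Map the DMA buffer into this process's address space */
    void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* The buffer is now CPU-accessible; a DMA-capable device would be pointed
       at the buffer's physical address, which the driver exposes separately. */
    memset(buf, 0, BUF_SIZE);

    munmap(buf, BUF_SIZE);
    close(fd);
    return 0;
}
```

This only covers the CPU side; how you hand the physical address to the audio hardware is device-specific.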
I'm not sure entirely what you are planning on using it for, but the first thing I would look for is building a driver for what you wanted to do (here's how to get started for ALSA just as an example). At least to communicate at this level, unless you wanted to do some of your own OS development? (I think this would be the way in the end if you really couldn't use drivers for whatever purpose)
I feel bad answering this myself. WoodyDev, thank you for pointing me in the right direction.
You are right, a driver is the best way to go.
The best solution is to read the PCI configuration space; the first 64 bytes (the standard configuration header) contain all the data needed.
https://www.safaribooksonline.com/library/view/linux-device-drivers/0596000081/ch15.html
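In case it helps anyone later, here is a minimal sketch of reading that 64-byte configuration header from user space via sysfs. The device address 0000:00:1b.0 is only a placeholder; find your audio controller's address with lspci -D:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Placeholder device address - replace with the one shown by lspci -D */
    const char *path = "/sys/bus/pci/devices/0000:00:1b.0/config";
    uint8_t cfg[64];

    FILE *f = fopen(path, "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fread(cfg, 1, sizeof cfg, f) != sizeof cfg) {
        fprintf(stderr, "short read of PCI config header\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    /* Offsets per the PCI spec: 0x00 vendor ID, 0x02 device ID,
       0x10..0x27 BAR0-BAR5, 0x3c interrupt line */
    uint16_t vendor = cfg[0x00] | (cfg[0x01] << 8);
    uint16_t device = cfg[0x02] | (cfg[0x03] << 8);
    uint32_t bar0   = cfg[0x10] | (cfg[0x11] << 8) |
                      (cfg[0x12] << 16) | ((uint32_t)cfg[0x13] << 24);

    printf("vendor=%04x device=%04x BAR0=%08x IRQ line=%u\n",
           vendor, device, bar0, cfg[0x3c]);
    return 0;
}
```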
The title seems to be too general (I can't think of a good title). I'll try to be specific in the question description.
I was required to make an industrial control box that collects data periodically (maybe 10-20 bytes of data per 5 seconds). The operator will use a laptop or mobile phone to collect the data (without opening the box) via Bluetooth, weekly or monthly, or probably at even longer intervals.
I will be in charge of selecting the proper modules/chips, doing the PCB, and doing the embedded software too. Because the box is not produced in high volume, I have freedom (modules/chips to use, prices, capabilities, etc.) in designing the different components.
The whole application requires a USART port to read in data when available (maybe every 5-10 seconds), an SPI port for data storage (SD card reader/writer), and several GPIO pins for LED indicators or maybe buttons (whether we need buttons, and how many, is up to my design).
I am new to Bluetooth. I read through the wiki and some Google results. I know about pairing, about the class 1 and class 2 differences, and about the 2.1 and 4.0 differences.
But several points are still not clear enough for me to decide what Bluetooth module/chip to use.
A friend of mine mentioned the TI CC2540 to me. I checked, and it only supports 4.0 BLE mode. And from Google, BT4.0 has a payload of at most 20 bytes per packet. Is BT4.0 suitable for my application when bulk data will need to be collected every month or every several months? Or is it better to use BT2.1 with EDR for this application? BT4.0 BLE mode seems to have faster pairing but lower throughput?
I read through the CC2540 documentation and found that it is not a BT-only chip; it has several GPIO pins and UART pins (I am not sure about SPI). Can I say that the CC2540 itself is powerful enough to run the whole application, including Bluetooth, data receiving via UART, and SD card reading/writing?
My original design was to use an ARM Cortex-M/AVR32 MCU. The program is just a loop to serve each task/event in rounds (or I can even install Linux). There will be a Bluetooth module. The module will automatically take care of pairing; I will only need to send the module the data to transmit to the other end. Of course there might be some other control, such as putting the module into a low-power mode, because the Bluetooth will only be used once per month or so. But after some study of Bluetooth, I am not sure whether such a BT module exists or not. Is programming chips like the CC2540 a must?
In my understanding, my device will be a BT slave and the laptop/phone will be the master. My device will periodically probe (maybe with a longer period, to save power) for the existence of the master and pair with it. Once it's paired, it will start sending data. Is my understanding of the procedure correct? Is there any difference in pairing/data sending between 2.1 and 4.0?
How should authentication be designed? I of course want unlimited phones/laptops to pair with the device, but only if they can prove they are the operator.
It's a bit messy. I'd appreciate it if you have read through the questions above. The following is the summary:
2.1 or 4.0 to use?
Which one is the better choice? Meaning one that is suitable for the application and easy to develop.
ARM/avr32 + CC2540 (or the like)
CC2540 only or the like (if possible)
ARM/avr32 + some BT modules ( such as Bluegiga https://www.bluegiga.com/en-US/products/ )
Should Linux be used?
What should the pairing and data sending look like for power saving? Are buttons useful for switching between a sleep mode and an active pairing/data-sending mode to save power?
How should the authentication be done? Only operators are allowed, but they can use any laptop/phone.
Thanks for reading.
My two cents (I've done a fair amount of Bluetooth work, and I've designed consumer products currently in the field)... Please note that I'll be glossing over A LOT of design decisions to keep this 'concise'.
2.1 or 4.0 to use?
Looking over your estimates, it sounds like you're looking at about 2MB of data per week, maybe 8MB per month. The question of what technology to use here comes down to how long people are willing to wait to collect data.
For BLE (BT 4.0) - assume your data transfer is in the 2-4kB/s range. For 2.1, assume it's in the 15-30kB/s range. This depends on a ton of factors, but has generally been my experience between Android/iOS and BLE devices.
At 2MB, either one of those is going to take a long time to transfer. Is it possible to get data more frequently? Or maybe have a wifi-connected base station somewhere that collects data frequently? (I'm not sure of the application)
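To put rough numbers on it (using the throughput assumptions above, so ballpark only): a weekly pull of ~2MB is about 2000kB / 3kB/s ≈ 11 minutes over BLE, versus 2000kB / 20kB/s ≈ 100 seconds over 2.1. A monthly pull of ~8MB scales that to roughly 45 minutes vs about 7 minutes.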
Which one is the better choice? Meaning one that is suitable for the application and easy to develop.
ARM/avr32 + CC2540 (or the like)
CC2540 only or the like (if possible)
ARM/avr32 + some BT modules (such as Bluegiga https://www.bluegiga.com/en-US/products/)
Should Linux be used?
This is a tricky question to answer without knowing more about the complexity of the system, the sensors, the data storage capacity, BOM costs, etc...
In my experience, a Linux OS is HUGE overkill for a simple GPIO/UART/I2C based system (unless you're super comfortable with Linux). Chips that can run Linux, plus the additional RAM they need, are usually costly (e.g. a cheap ARM Cortex-M0 is like 50 cents in decent volume and sounds like all you need).
The question usually comes down to 'external MCU or not' - as in, do you try to get an all-in-one BT module that has application space to program in? Using one saves size and cost, but it adds risks and unknowns vs a braindead BT module + external MCU.
One thing I would like to clarify is that you mention the TI CC2540 a few times (actually, the CC2541 is the newer version). Anyway, that chip is an IC-level component. Unless you want to do the antenna design yourself and put the result through FCC intentional radiator certification (the certs usually run $1k-$10k - and I'm assuming you're in the US when I say FCC), it's not what you want.
What I think you're looking for is a ready-made, certified module. For example, the BLE113 from Bluegiga is an FCC certified module which contains the CC2541 internally (plus some bells and whistles). That one specifically also has an interpreted language called BGScript to speed up development. It's great for very simple applications - and has nice, baked in low-power modes.
So, the BLE113 is an example of a module that can be used with/without an external MCU depending on application complexity.
If you're willing to go through FCC intentional radiator certification, then the TI CC2541 is common, as well as the Nordic NRF51822 (this chip has a built in ARM core that you can program on as well, so you don't need an external MCU).
An example of a BLE module which requires an external MCU would be the Bobcats (AMS001) from AckMe. They have a BLE stack running on the chip which communicates to an external MCU using UART.
As with a comment above, if you need iOS compatibility, using Bluetooth 2.1 (BT Classic) is a huge pain due to the MFi program (which I have gone through - pure misery). So, unless iOS support is REALLY necessary, I would stick with BT Classic and Android/PC.
Some sample BT classic chips might be the Roving Networks RN42, the AmpedRF BT 33 (or 43 or 53). If you're interested, I did a throughput test on iOS devices with a Bluetooth classic device (https://stackoverflow.com/a/22441949/992509)
What should the pairing and data sending look like for power saving? Are buttons useful for switching between a sleep mode and an active pairing/data-sending mode to save power?
If the radio is only on every week or month when the operator is pulling data down, there's very little to do other than to put the BT module into reset mode to ensure there is no power used. BT Classic tends to use more power during transfer, but if you're streaming data constantly, the differences can be minimal if you choose the right modules (e.g. lower throughput on BLE for a longer period of time, vs higher throughput on BT 2.1 for less time - it works itself out in the wash).
The one thing I would do here is to have a button trigger the ability to pair to the BT modules, because that way they're not always on and advertising, and are just asleep or in reset.
How should the authentication be done? Only operators are allowed, but they can use any laptop/phone.
Again, not knowing the environment, this can be tricky. If it's in a secure environment, that should be enough (e.g. behind locked doors that you need to be inside to press a button to wake up the BT module).
Otherwise, at a bare minimum, have the standard BT pairing code enabled, rather than allowing anyone to pair. Additionally, you could bake extra authentication and security in (e.g. you can't download data without a passcode or something).
If you wanted to get really crazy, you could encrypt the data (this might involve using Linux though) and make sure only trusted people had decryption keys.
We use both protocols on different products.
Bluetooth 4.0 BLE aka Smart
low battery consumption
low data rates (I got up to 20 bytes every 40 ms; as I remember, Apple's minimum connection interval is 18 ms, and other handset makers adopted that interval)
you have to use Bluetooth's characteristics mechanism
you have to implement chaining yourself if your data payloads are longer than that (see the sketch after this list)
good range: 20-100 m
new technology with a lot of awful premature implementations. Getting better slowly.
we used a chip from Bluegiga that allowed a script language for programming. But many limitations and bugs are still built in.
we had a steeper learning curve implementing BLE than using 2.1
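Regarding the chaining point above, here is a minimal sketch of how the chunking usually looks on the sending side. ble_send_packet() is a placeholder for whatever your stack or module actually exposes, and the one-byte sequence header is just one possible framing:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLE_PACKET_MAX 20          /* typical usable payload per notification */
#define CHUNK_HDR      1           /* 1-byte sequence number per chunk */
#define CHUNK_DATA     (BLE_PACKET_MAX - CHUNK_HDR)

/* Hypothetical transport hook: send one <=20-byte packet over the BLE link.
   Replace with whatever your stack/module actually provides. */
extern int ble_send_packet(const uint8_t *pkt, size_t len);

/* Split an arbitrarily long buffer into numbered 20-byte chunks.
   The receiver reassembles them in sequence-number order. */
int ble_send_chained(const uint8_t *data, size_t len)
{
    uint8_t pkt[BLE_PACKET_MAX];
    uint8_t seq = 0;

    while (len > 0) {
        size_t n = len < CHUNK_DATA ? len : CHUNK_DATA;

        pkt[0] = seq++;                 /* wraps at 256; fine for short bursts */
        memcpy(&pkt[1], data, n);

        if (ble_send_packet(pkt, n + CHUNK_HDR) != 0)
            return -1;                  /* let the caller retry or abort */

        data += n;
        len  -= n;
    }
    return 0;
}
```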
Bluetooth 2.1
good for high data rates, depending on the baud rate used. The bottleneck here was the buffer in the controller.
shorter range: 2-10 m
it's much easier to stream data
Did not notice a big time difference in pairing and connecting with both technologies.
Here are two examples of devices which clearly demand either 2.1 or BLE. Maybe your use case is closer to one of those examples:
Humidity sensors attached to trees in a forest. Each week the ranger walks through the forest and collects the data.
Wireless stereo headsets
This question straddles the blurry line between software and hardware, so I hope the community feels it belongs on Stack Overflow; otherwise I would appreciate some advice as to where to move it.
My team is currently in the process of developing a robotic controller, and although most of our I/O requirements are pretty tame, we have one specific input that occasionally triggers at a 2 microsecond interval (either a digital rise or fall). We need to be able to accurately timestamp that input and record it. We do not need to act on it in any sort of immediate way. We feel as if we should be able to take advantage of a high-speed USB or PCI-E bus to facilitate this kind of high-speed input, but we lack expertise in this area and do not know what a solution to this problem would look like, or whether it can even be achieved without expensive/very involved proprietary solutions. In addition, there is a desire in our group to move away from Windows CE to a Windows Embedded (probably XP) x86-based platform; however, if this is something that could only be achieved on a CE system, that would be important to know.
In summary:
Is there a way to accurately read and timestamp a 2 microsecond (500 kHz) input under a Windows or Windows CE environment?
Bonus points for implementation ideas/visions/methods!
Thank you!
I wouldn't try to do it directly in Windows. What GPIO do you have on your Windows machine anyway? For around or under $20 there is a long list of solutions that are either serial or USB based (off-the-shelf microcontroller boards) which you could program so that their only job is to watch that I/O and timestamp it, and then casually send this data over USB or serial (or other) to the host.
I am thinking along the lines of the MSP430 LaunchPad, the Stellaris LaunchPad, the STM32F0 or STM32F4 Discovery, the family of Arduinos, mbed, the LeafLabs Maple Mini, and on and on.
To do it properly under Windows you would need to add some hardware anyway...
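To give a feel for how little firmware that takes, here is a rough, vendor-neutral sketch of the MCU side. timer_now_us() and uart_write() are placeholder names for whatever your board's HAL actually provides, and for 2 microsecond edges you would ideally let a timer's hardware input-capture channel latch the timestamp rather than doing it in the ISR:

```c
#include <stdint.h>

/* Hypothetical hardware hooks - every vendor's HAL names these differently,
   so treat them as placeholders for your board's actual API. */
extern uint32_t timer_now_us(void);               /* free-running microsecond counter */
extern void     uart_write(const void *p, int n); /* blocking UART/USB-serial transmit */

#define RING_SIZE 256                             /* power of two for cheap wrap-around */

static volatile uint32_t ring[RING_SIZE];
static volatile uint16_t head, tail;

/* Edge-triggered GPIO interrupt: grab a timestamp and get out fast.
   At 2us event spacing the ISR must do nothing else. */
void gpio_edge_isr(void)
{
    uint16_t next = (head + 1) & (RING_SIZE - 1);
    if (next != tail) {                           /* drop the sample if the buffer is full */
        ring[head] = timer_now_us();
        head = next;
    }
}

/* Main loop: casually drain raw 32-bit timestamps to the host. */
int main(void)
{
    for (;;) {
        while (tail != head) {
            uint32_t ts = ring[tail];
            tail = (tail + 1) & (RING_SIZE - 1);
            uart_write(&ts, sizeof ts);
        }
        /* ...other low-priority work... */
    }
}
```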
I've got a WinCE device powered over ethernet (PoE) and I want to prevent File-System corruption following a potential power loss, e.g. user pulling the plug.
As a side note, I'm already using TexFAT, which is supposed to prevent FS corruption. While the latter certainly does help reduce FS corruption (compared to plain old FAT), it doesn't entirely prevent it; some corruption still occurs from time to time... So I'm considering using a small rechargeable backup battery that would give WinCE enough time to cleanly shut down. Now, I can't find any info on the shutdown process: how to trigger it, how long it takes, and so on... MSDN is pretty quiet on this topic. Any idea?
The powerdown sequence is totally platform dependent.
The following answer is relevant to Windows CE 6. It may be different for previous versions of CE.
If you include the Power Manager component in your system, then the sequence is more or less this:
Send a 'go to D4' request to all the drivers that are power-manageable and that reported they support this state. Otherwise, the driver gets the lowest power state it supports.
XXX_PowerDown is called, but it is not commonly used in Windows CE 6.
In between, the registry is flushed if you have a hive-based registry and you enabled the registry flush thread. You should disable this in a fragile system such as yours.
OEMPowerOff
device down
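For reference, the first step of that sequence usually lands in the driver's IOCTL_POWER_SET handler. A rough sketch of its shape is below; the XXX_ prefix is the usual stream-driver placeholder, and HwEnterLowestPower() stands in for whatever OEM-specific code actually powers the hardware down:

```c
#include <windows.h>
#include <pm.h>   /* IOCTL_POWER_SET, CEDEVICE_POWER_STATE (D0..D4) */

/* OEM/board-specific hook - placeholder for whatever actually quiesces
   and powers down the peripheral. */
static void HwEnterLowestPower(void) { /* OEM-specific */ }

BOOL XXX_IOControl(DWORD hOpenContext, DWORD dwCode,
                   PBYTE pBufIn, DWORD dwLenIn,
                   PBYTE pBufOut, DWORD dwLenOut, PDWORD pdwActualOut)
{
    switch (dwCode) {
    case IOCTL_POWER_SET:
        if (pBufOut && dwLenOut >= sizeof(CEDEVICE_POWER_STATE)) {
            CEDEVICE_POWER_STATE reqState = *(CEDEVICE_POWER_STATE *)pBufOut;

            if (reqState == D4) {
                /* Power Manager is shutting the system down: quiesce the
                   hardware and stop touching storage from here on. */
                HwEnterLowestPower();
            }

            /* Report back the state the device actually entered */
            *(CEDEVICE_POWER_STATE *)pBufOut = reqState;
            if (pdwActualOut)
                *pdwActualOut = sizeof(CEDEVICE_POWER_STATE);
            return TRUE;
        }
        return FALSE;

    default:
        /* ...other IOCTLs... */
        return FALSE;
    }
}
```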
Just found a post by Bruce Eitman on what happens when Suspending. He puts it better than I do.
The Suspend sequence is what you'd do before losing power.
For a project I'm working on I have to talk to a multi-function chip via I2C. I can do this from linux user-space via the I2C /dev/i2c-1 interface.
However, it seems that a driver is talking to the same chip at the same time. This causes my I2C_SLAVE accesses to fail with an errno value of EBUSY. Well - I can override this via the ioctl I2C_SLAVE_FORCE. I tried it, and it works. My commands reach the chip.
Question: Is it safe to do this? I know for sure that the address ranges I write to are never accessed by any kernel driver. However, I am not sure whether forcing I2C communication that way may confuse some internal state machine or so (I'm not that into I2C, I just use it...).
For reference, the hardware facts:
OS: Linux
Architecture: TI OMAP3 3530
I2C-Chip: TWL4030 (does power, audio, usb and lots of other things..)
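In case it helps, this is roughly what my user-space access looks like (a stripped-down sketch; the slave address 0x48 and the register/value bytes are placeholders, not real TWL4030 registers):

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c-dev.h>   /* I2C_SLAVE, I2C_SLAVE_FORCE */

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) {
        perror("open /dev/i2c-1");
        return 1;
    }

    /* 0x48 is a placeholder 7-bit slave address - the real one depends on
       which functional block of the chip you are talking to. */
    if (ioctl(fd, I2C_SLAVE_FORCE, 0x48) < 0) {   /* plain I2C_SLAVE fails with EBUSY here */
        perror("I2C_SLAVE_FORCE");
        close(fd);
        return 1;
    }

    /* Typical register write: first byte is the register offset, then the value */
    uint8_t buf[2] = { 0x01 /* register (placeholder) */, 0x80 /* value (placeholder) */ };
    if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf)
        perror("write");

    close(fd);
    return 0;
}
```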
I don't know that particular chip, but often you have commands that require a sequence of writes: first to one address to set a certain mode, then you read or write another address -- where the function of the second address changes based on what you wrote to the first one. So if the driver is in the middle of one of those operations and you interrupt it (or vice versa), you have a race condition that will be difficult to debug. For a reliable solution, you'd better communicate through the chip's driver...
I mostly agree with Wim's answer above. But I would like to add that this can definitely cause irreversible problems, or destruction, depending on the device.
I know of a gyroscope (the L3GD20) that requires that you don't write to certain locations. The way the chip is set up, these locations contain the manufacturer's settings, which determine how the device functions and performs.
This may seem like an easy problem to avoid, but if you think about how I2C works, all of the bytes are passed one bit at a time. If you interrupt in the middle of the transmission of another byte, the results are not only truly unpredictable, but the risk of permanent damage also goes up dramatically. How the problem is handled is, of course, entirely up to the chip.
Since microcontrollers tend to operate at speeds much faster than the bus speeds allowed on I2C, and since the bus speeds themselves are dynamic based on the speeds at which devices process the information, the best bet is to insert pauses or loops between transmissions that wait for things to finish. If you have to, you can even insert a timeout. If these pauses aren't working, then something is wrong with the implementation.
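To make that concrete on the Linux user-space side of this question, here is a sketch of such a wait-and-retry loop with a timeout. The exact error codes you see depend on the bus driver, so treat this as an illustration rather than a recipe:

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>

/* Retry a user-space I2C write for up to timeout_ms, sleeping briefly
   between attempts so an in-flight transaction can complete.
   'fd' is a /dev/i2c-* descriptor already bound to a slave address. */
int i2c_write_retry(int fd, const uint8_t *buf, size_t len, int timeout_ms)
{
    struct timespec pause = { 0, 1 * 1000 * 1000 };   /* 1 ms between tries */
    int elapsed_ms = 0;

    for (;;) {
        ssize_t n = write(fd, buf, len);
        if (n == (ssize_t)len)
            return 0;                                 /* transfer completed */

        if (n < 0 && errno != EBUSY && errno != EAGAIN)
            return -1;                                /* real error, give up */

        if (elapsed_ms >= timeout_ms)
            return -1;                                /* timed out */

        nanosleep(&pause, NULL);
        elapsed_ms += 1;
    }
}
```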