Android: read images from a camera's SD card over USB OTG

I would like to read images from a camera's SD card (e.g. Canon or Nikon) into my Android app over a USB OTG cable.
I have studied the libaums example (https://github.com/magnusja/libaums), but it only supports USB mass storage devices, not digital cameras, which typically expose the PTP/MTP protocol instead.
I have no C or C++ experience, so I have no idea how to approach this.
Do you have any ideas that could help me?
Thanks!

Related

Flash NodeMCU v3 without tools

I've got a NodeMCU v3 board with an ESP8266 chip on it. I'd like to flash it with my firmware over USB without using any tool like esptool. How can I do this from Linux?
I've got several questions:
1) Can I just write, for example, to /dev/ttyUSB0? Will the board get this signal?
2) What should I send before sending the binary? How do I tell the board that I want to flash it?
It won't work to just write the file to a /dev device. That's why we have flashing tools: to deal with putting the board into its bootloader mode and properly transmitting the binary to it. To do what you're asking, you'd have to write your own flashing tool. What's the problem with just using the existing ones?
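For illustration, here is a minimal sketch of that recommended route: driving the existing flasher from your own code instead of reimplementing the bootloader protocol by hand (shown in Swift, but any language that can spawn a process works the same way). It assumes esptool is installed (for example via pip install esptool), that the board enumerates as /dev/ttyUSB0, and that the firmware image is built for flash offset 0x00000, the usual layout for NodeMCU builds:

    import Foundation

    // Drive the standard flasher instead of writing raw bytes to
    // /dev/ttyUSB0: esptool toggles DTR/RTS to reset the ESP8266 into
    // its ROM bootloader, then speaks the bootloader's framed serial
    // protocol to erase and write the flash.
    let flash = Process()
    flash.executableURL = URL(fileURLWithPath: "/usr/bin/env")
    flash.arguments = [
        "esptool.py",
        "--port", "/dev/ttyUSB0",   // adjust to your serial device
        "--baud", "115200",
        "write_flash", "0x00000", "firmware.bin",
    ]
    do {
        try flash.run()
        flash.waitUntilExit()
        print(flash.terminationStatus == 0 ? "Flashed OK" : "Flashing failed")
    } catch {
        print("Could not launch esptool: \(error)")
    }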

Do I need a motherboard for nRF51822 BLE UART?

I'm working on an iOS app to interact with Arduino boards. On the Arduino side I use the "transparent" serial implemented in the HM-10/11 firmware. I just wire the HM-10/11 RX/TX pins to the Arduino's, and it works perfectly: I write to a specific characteristic to send data, and subscribe to/read from another specific characteristic to read from the BLE module. There is no need for any SDK or BLE library in the Arduino sketch, and no need to modify the bootloader.
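For context, the iOS side of that transparent-serial setup can be as small as the sketch below. It assumes CoreBluetooth scanning, connection, and service/characteristic discovery have already completed, and that the module uses the stock HM-10 UUIDs (service FFE0, characteristic FFE1); verify those against your firmware version:

    import CoreBluetooth

    // Assumed stock HM-10 UART characteristic; other firmware may differ.
    let uartUUID = CBUUID(string: "FFE1")

    // Send a string over the module's transparent serial bridge.
    func send(_ text: String, to peripheral: CBPeripheral,
              over characteristic: CBCharacteristic) {
        guard let data = text.data(using: .utf8) else { return }
        // FFE1 on the HM-10 accepts write-without-response.
        peripheral.writeValue(data, for: characteristic, type: .withoutResponse)
    }

    // Subscribe to notifications; incoming serial bytes then arrive in
    // the delegate callback peripheral(_:didUpdateValueFor:error:).
    func listen(to peripheral: CBPeripheral,
                on characteristic: CBCharacteristic) {
        peripheral.setNotifyValue(true, for: characteristic)
    }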
Now I need to support nRF51822 BLE chips. Nordic has implemented a UART service in its firmware sources, but on my nRF51822 board (purchased on eBay) it isn't loaded onto the chip by default, as I can't see the 0001 service or the 0002 and 0003 characteristics. Do I need to purchase an nRF motherboard and compile and upload this firmware? Can I do it without purchasing that dev kit? And can I upload over USB only, or over BLE too?
If you are using a standalone nRF51822 module, then to update its firmware through the SWDIO/SWCLK pins you need a compatible SWD programmer/debugger (you can check the Segger website). You can use the Keil IDE to upload your updated firmware to the flash memory of the nRF51822.
Although you need to upload using a SWD programmer, you are not limited to Segger.
There are open-source alternatives like the Black Magic Probe, which you can flash onto $5 hardware (STM32F103 boards) from eBay.
You are also not limited to Keil or IAR for your compiler toolchain.
It's possible to compile using Eclipse and GCC (Nordic has a blog entry on how to set up an Eclipse-based dev environment).
You can even program these devices using the Arduino IDE.
It is possible to upload OTA, but you would need to flash an OTA bootloader onto the device first using SWD, and even then you would probably need to upload via a mobile phone, as I'm not aware of any PC BLE devices that support transfer via the DFU protocol (though some may exist).
These devices do not natively support USB.
Some boards using these devices have a separate processor that allows uploading as if the chip were a mass storage device, but I am not aware of a motherboard with that functionality into which you could plug this module.
By the way, the module you linked to is actually designed and manufactured by Waveshare.com. Take a look at their website; it has full documentation on the hardware, including schematics.
The nRF51 and nRF52 dev boards have an onboard Segger J-Link, so you can develop and debug on the nRF51822, and flash other nRF51822s as well.

Kinect for Xbox 360 freezes and disconnects from USB after running Processing SimpleOpenNi depth image example

Please help. I've been trying to set up a Kinect for Xbox 360 on Ubuntu in order to start developing an application to control a humanoid robot. For the past four days I've been searching, downloading, installing, and trying dozens of libraries and drivers to get the Kinect working on Ubuntu. In the beginning none of them worked, and I was only able to read the RGB camera with Camorama and guvcview, no matter which library or driver I tried. Finally, I installed a fresh copy of Ubuntu and installed the libfreenect libraries using Synaptic (I'm kind of a newbie), and I also installed the following package:
https://code.google.com/p/simple-openni/downloads/detail?name=OpenNI_NITE_Installer-Linux64-0.27.zip&can=4&q=
along with Processing 2.0 and SimpleOpenNi-0.27
I start Processing -> Examples -> OpenNI -> DepthImage and run it.
The Kinect starts for 3 to 10 seconds, producing the depth image below (sometimes along with the RGB image and sometimes without it); then the frame freezes, and when I list the USB devices (lsusb) there are no Kinect camera or audio devices listed, so the Kinect must be unplugged from the adapter and USB and then re-inserted. The problem still occurs every time I run the sketch.
Attempted solutions:
1- removing and blacklisting the gspca kernel module
2- disabling USB auto-suspend
but the problem still occurs...
I'm using a Kinect for Xbox 360 with a 12 V / 1.08 A USB AC power adapter:
http://www.walmart.com/ip/INSTEN-USB-AC-Power-Adapter-For-Microsoft-Xbox-360-Kinect-Sensor/28882271
My laptop is a Dell Inspiron 1525, Intel Core 2 Duo, 2 GB RAM,
running Ubuntu 14.04.2 LTS (codename: trusty).
Can anyone help me, please?
I had a similar problem, and after I used the Kinect with Windows, I found that the problem was with the Kinect itself.
The following tips will help you get started using your Kinect:
If a non-Microsoft driver for the Kinect is installed on your computer, the Kinect for Windows drivers might not install or function correctly. To fix this, uninstall the non-Microsoft drivers before installing the Kinect for Windows SDK.
Connect the power supply for the Kinect to an external power source; if the Kinect has only power from the USB connection, it will be minimally functional and light the LED, but it must be connected to an external power source to be fully functional.
No tools are required for calibration of audio and video.
Your Kinect should be the only device plugged into a USB hub on your computer. If you have more than one Kinect, connect them to different USB controllers. If 2 hubs are connected to the same controller, only 1 Kinect can work at a time.
The Kinect is protected from overheating by a fan. It is controlled by the sensor's firmware, which turns off the camera at 90 degrees Celsius. There is no software interface for applications to control the fan.
Reasonable lighting, neither extremely dark nor extremely bright, is important for capturing images with the RGB camera. Incandescent, fluorescent, and natural lighting provide no special obstacles, but do not point an intense or constant light source at the camera because this can blind the RGB sensor.
The depth sensor functions adequately in typical and reduced lighting, although in near darkness there is increased noise in the signal.
The depth sensor reads depth information from reflected light. Objects that are highly reflective (mirrors and shiny metal) or highly absorptive (fluffy and/or dark materials) may not be registered by the depth sensor as successfully as other objects.
For detailed instructions on setting up a Kinect sensor, please follow: https://msdn.microsoft.com/en-us/library/hh855356

Sending audio to a Bluetooth-enabled speaker, iOS

I want to add a function to my app where the user can choose to play the audio on a Bluetooth-enabled speaker. I have a Parrot Easydrive in my car, and this works for phone calls and, for example, the Dictafoon app, among others.
I understand that I should use the Core Audio framework. When a Bluetooth device is connected, it is said to be easy to stream the audio to that connection. I am now looking for Core Audio sample code (or a book) where connecting and streaming to a Bluetooth device with Core Audio is explained.
Can anyone shed some light on this? If there is another framework or sample code I can use, please mention it!
Many thanks in advance!
You don't write any specific Core Audio code; it is the same process as is used to play audio via AirPlay.
Basically, you put an MPVolumeView into your UI, and the underlying framework will redirect your audio output for you. Once you implement this, you will be able to use Bluetooth and any AirPlay-enabled device with your app.
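A minimal sketch of that approach, assuming a UIKit view controller (MPVolumeView's built-in route button was the standard picker at the time; on iOS 13 and later, AVRoutePickerView replaces it):

    import MediaPlayer
    import UIKit

    class PlayerViewController: UIViewController {
        override func viewDidLoad() {
            super.viewDidLoad()
            // MPVolumeView draws the system volume slider plus a route
            // button listing Bluetooth and AirPlay outputs; picking one
            // reroutes the app's audio output with no extra code.
            let volumeView = MPVolumeView(
                frame: CGRect(x: 20, y: 80, width: 280, height: 40))
            volumeView.showsRouteButton = true
            view.addSubview(volumeView)
        }
    }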

Build a Core Audio compliant ADC for USB or FireWire

I'm looking for documentation on how to build a Core Audio compliant ADC that connects to a Mac over USB or FireWire. All I've been finding is info on how to deal with Core Audio when programming the computer side.
I need info on how to make the audio hardware itself Core Audio compliant.
Can anyone point me in the right direction?
This is a nice solution; it does all the hard work for you. If you have even basic hardware engineering experience, this should get you on your way. This chip will work great, and very few external components are needed. (For USB, "Core Audio compliant" in practice means USB Audio Class compliant: macOS ships a class driver, so no custom driver is required.)
http://www.silabs.com/products/interface/usbtouart/Pages/usb-to-i2s-digital-audio-bridge.aspx
