How to programmatically use the mobile phone's IrDA to remote control a media player? - mobile-phones

Which API or library, on which mobile OS, should be used to write code that drives the phone's IrDA port to generate the impulses needed to remote control consumer electronics, e.g. an HDD media player?
Is a certain mobile OS perhaps better suited for this kind of application than others?

First you need to know that IrDA is not the best choice for remote control. It can be done, but IrDA is by design high-speed/low-range; you can emulate low speeds, but the range is (IMO) far from practical use (a Nokia E50 can trigger a digital camera shutter from 2-3 m... with very, very careful aiming). The amount of hacking needed to achieve this is shown here: you basically need to trick IrDA into sending the correct impulses at the correct carrier frequency.
The second thing is that consumer IR (CIR) remote control is not as simple as you might think. There are countless standards that differ in carrier frequency, modulation, wavelength, command codes and so on, so you need to know what you want to support. The LIRC site can be very helpful in determining that: http://lirc.sourceforge.net/remotes/. An approachable explanation of what it all means is available here: http://www.sbprojects.com/knowledge/ir/ir.htm
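To make "correct impulses with correct frequency" concrete, here is a rough sketch (plain Java; the class and method names are only for illustration, and timings are rounded to whole microseconds) of the mark/space timing list for one frame of the widely used NEC protocol described on the sbprojects page. Each "mark" means the ~38 kHz carrier is on for that many microseconds, each "space" means it is off; actually emitting this still requires the IrDA trickery mentioned above.

import java.util.ArrayList;
import java.util.List;

public class NecFrame {

    // One NEC frame: 9 ms leading mark, 4.5 ms space, then address, ~address,
    // command, ~command (each sent LSB first), and a trailing 562 us mark.
    public static List<Integer> encode(int address, int command) {
        List<Integer> timings = new ArrayList<Integer>();
        timings.add(9000);
        timings.add(4500);
        int[] payload = { address & 0xFF, ~address & 0xFF, command & 0xFF, ~command & 0xFF };
        for (int b : payload) {
            for (int bit = 0; bit < 8; bit++) {
                timings.add(562);                                 // bit mark (carrier on)
                timings.add(((b >> bit) & 1) == 1 ? 1687 : 562);  // long space = 1, short space = 0
            }
        }
        timings.add(562);  // trailing mark
        return timings;
    }
}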
As for ready-made libraries and platforms... I honestly don't know. I've seen it done on Pocket PC (Nevo, among others) and Symbian S60 (irRemote). I haven't seen a working J2ME app yet.
The last time I needed an IR remote, I hacked it together using an IR diode, an AVR ATtiny and a surprisingly short piece of assembly :)

Related

Stream audio from place 1 to place 2 over the internet

So I'm kinda stuck here.
I have a radio station, but we are mobile, so I have a studio on wheels. The problem is that we have an antenna, but we always have to place it really close to our studio. Now I want to make a device that streams the audio from the audio mixer over the internet, to be received by another device on another network that feeds the signal to the antenna (audio output).
To make this clear, I made a diagram with Raspberry Pis.
I want this to be plug and play, so I only have to plug the device into the modem (or whatever network we have) on both sides and the devices should find each other.
I don't know HOW I can do this, so I need to know a couple of things:
What hardware should I use?
What software should I use?
What is the best configuration to accomplish this?
Can I use two Raspberry Pis?
How can I let the devices find each other over the internet?
There are a few features the system needs:
The system needs to be able to buffer the audio for 5-10 seconds
It needs to be direct, so it's live and not a file that needs to be played
The system must be failsafe (apart from the fact that the internet connection can die).
Plug and play is a must; I don't want a really messy configuration to deal with (if possible, without any kind of port forwarding).
I would really appreciate help and a decent explanation.
regards,
Robin
Well, it depends on your capabilities as a programmer.
If you're really fixated on the RPi for its convenient form factor, there's a ton of community support, so I'd start with something like this project to kick-start you in the right direction. If you already know Python pretty well, modify away and have fun.
If you have no programming experience, you'll probably want to put a desktop in place of the RPi and launch some instances of VLC. It's not necessarily plug and play, but you can get close enough by getting a command-line VLC to launch at startup.
Either way, the more difficult problem here is the "over the internet" part. This would really need to be a server-client model, but which end is your server depends on who is more stationary (I'm guessing Location 2?), because the client will need to know the IP address of the server somehow. There are dozens of ways to make this happen, but at the end of the day, you'll want to use sockets to accomplish the
It needs to be direct, so it's live and not a file
... which unfortunately gets complicated. See this answer for confirmation. I'd love to help with some tips on implementation, but we need more information about your willingness to "dig into the code", the necessity of the RPi, and whether the stationary location has a static address.
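To illustrate the client/server split, here is a bare-bones Java sketch (not the linked Python project; the port number and the stdin/stdout plumbing are placeholders for the real capture and playback chain):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class AudioLink {

    public static void main(String[] args) throws Exception {
        if (args.length == 0) receive(9000);   // stationary side (server)
        else send(args[0], 9000);              // mobile studio (client): pass the server address
    }

    // Stationary location: wait for the studio to connect and write whatever
    // PCM bytes arrive to stdout, to be piped into a player / the transmitter.
    static void receive(int port) throws Exception {
        try (ServerSocket server = new ServerSocket(port);
             Socket studio = server.accept()) {
            copy(studio.getInputStream(), System.out);
        }
    }

    // Mobile studio: connect out to the server (so no port forwarding is needed
    // on the studio side) and push captured audio, here read from stdin.
    static void send(String serverHost, int port) throws Exception {
        try (Socket link = new Socket(serverHost, port)) {
            copy(System.in, link.getOutputStream());
        }
    }

    static void copy(InputStream in, OutputStream out) throws Exception {
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }
}

The 5-10 second buffer would sit between the read and the write, and in practice you would probably wrap the audio in a proper streaming format (Icecast/RTP/Ogg) rather than pushing raw bytes over bare TCP.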

Trying to make an application that can communicate with other phones nearby

I have been tirelessly trying to decide on the best option for getting nearby phones to talk to each other; I need something with the ability to broadcast and receive. It is kind of like NFC with more range: I'd like to be able to send messages 30 to 50 feet away using nothing but a phone.
Bluetooth still cannot broadcast to and receive from more than 8 devices; there might be changes to that in Apple's new OS, but Android and Windows are still going to be lacking, so Bluetooth is out of the question.
I was thinking of maybe trying to use Wi-Fi, but I have not found very many good resources on how I would go about doing that without setting up a virtual server, and I'd much rather not go that route if possible.
I could even use GPS, although given its power consumption and the fact that it would have to be an always-on feature, I would rather avoid it if I can.
The option I really want to use relies on sounds made and received by the phone. I have been playing around with a listener that converts different frequencies into 1's and 0's, but with all things sound it gets increasingly hard when lots of people are talking, when music is playing, when objects are in the way, with the Doppler effect, and more. Has someone out there already made a filter for this? Other open questions: what is the range of 20 kHz sound travelling through air? I also can't find much documentation on which devices have speakers that can produce sound above 20 kHz (it seems most can); the problem then is which microphones can hear sounds above 20 kHz.
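For what it's worth, the sending side of that tone-per-bit idea can be sketched roughly like this with Android's AudioTrack (the 18/19 kHz frequencies, the 50 ms bit length and the class name are placeholders); the listening/filtering side is the hard part, typically an AudioRecord loop feeding a Goertzel filter or an FFT bin around each frequency:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class ToneSender {

    private static final int SAMPLE_RATE = 44100;
    private static final double FREQ_ZERO = 18000.0;   // placeholder frequency for a 0 bit
    private static final double FREQ_ONE  = 19000.0;   // placeholder frequency for a 1 bit
    private static final int SAMPLES_PER_BIT = 2205;   // ~50 ms per bit at 44.1 kHz

    // Turn a bit string like "10110001" into one PCM buffer and play it.
    public static void send(String bits) {
        short[] pcm = new short[bits.length() * SAMPLES_PER_BIT];
        int i = 0;
        for (int b = 0; b < bits.length(); b++) {
            double f = (bits.charAt(b) == '1') ? FREQ_ONE : FREQ_ZERO;
            for (int s = 0; s < SAMPLES_PER_BIT; s++, i++) {
                pcm[i] = (short) (0.5 * Short.MAX_VALUE
                        * Math.sin(2 * Math.PI * f * s / SAMPLE_RATE));
            }
        }
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                pcm.length * 2, AudioTrack.MODE_STATIC);
        track.write(pcm, 0, pcm.length);  // in MODE_STATIC the whole buffer is written before play()
        track.play();
    }
}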
I would really love to use sound, as I think it is interesting, and it would make the app work without any internet or phone connection, which I think is pretty cool. This is a side project I am working on, and I really don't want to spend hours going down a path that will ultimately fail.
If anyone thinks it's possible to do this with sound across devices, I'd much rather do it that way; I think there are a lot of interesting things you could do with that technology. I just don't know how viable it is compared to Wi-Fi, Bluetooth or even GPS.
On iOS you have no control over low-level "things". You can read the SSID of the currently connected WLAN, but not all the WLAN IDs that the operating system can see.
I would first try the location services approach. Setting the accuracy to 1000 m will usually disable GPS but still enable cell-tower and WLAN positioning.
The WLAN positioning in particular gives an indirect hint that the people are near the same WLAN.

Record audio from various internal devices in Android (via undocumented API)

I was wondering whether it is possible to capture audio data from other sources like the system output, FM radio, Bluetooth headset, etc. I'm particularly interested in capturing audio from the FM radio and have already investigated all possibilities, including trying to sniff the raw Bluetooth communication between the phone and the radio device, with no luck. It's too bad Android only allows recording audio from the MIC.
I've looked at the Android source code and couldn't find a backdoor that would allow me to do this without rooting the device. Do you at least have any idea how to use other devices (maybe access /dev/audio somehow), say via the NDK or, even better, Java (maybe reflection?), to trick the system into capturing the audio stream from, say, the FM radio? (In my case I'm developing the app for the HTC Desire.)
PS. For those of you who are against using undocumented APIs, please don't post here. I'm writing an app for my personal use, and even if I ever publish it I will warn the user about possible incompatibilities.
I've spent quite some time deciphering the audio stack, and I think you could try to hijack libaudio. You'll have trouble speaking directly to the hardware (/dev/*) because many devices use proprietary audio drivers; there's no rule in this regard.
However, the audio hardware abstraction layer (HAL) provided by /system/lib/libaudio.so should expose the API described at http://source.android.com/porting/audio.html
The Android system, and especially audioflinger, uses this libaudio HAL to find available devices, deal with routing, and of course to read/write PCM data.
So, you could hijack the interaction between audioflinger and libaudio by renaming the latter and providing your own libaudio that decorates the real one. Doing so, you should be able to log what happens and very possibly intercept the FM radio output, provided that this is not handled directly by the hardware.
Of course, all this requires rooting. Please comment if you manage to do this; it interests me.
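Before (or alongside) the libaudio route, it may also be worth ruling out the documented capture sources. This is only a hedged sketch using the standard AudioRecord API (it needs the RECORD_AUDIO permission, and on most handsets every source other than MIC will simply fail to initialize or return silence):

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.util.Log;

public class SourceProbe {

    // Try to initialize an AudioRecord for each documented audio source and log
    // whether the framework accepts it. A cheap check before patching libaudio.
    public static void probe() {
        int[] sources = {
                MediaRecorder.AudioSource.MIC,
                MediaRecorder.AudioSource.VOICE_UPLINK,
                MediaRecorder.AudioSource.VOICE_DOWNLINK,
                MediaRecorder.AudioSource.VOICE_CALL,
                MediaRecorder.AudioSource.CAMCORDER,
        };
        int rate = 44100;
        int bufSize = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        for (int source : sources) {
            AudioRecord r = new AudioRecord(source, rate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
            boolean ok = (r.getState() == AudioRecord.STATE_INITIALIZED);
            Log.d("SourceProbe", "source " + source + " initialized: " + ok);
            r.release();
        }
    }
}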

Is there a way to enumerate the video devices on a Java ME phone?

I recently downloaded a barcode-reading application for my phone, an LG KU990i (AKA the Viewty). However, there's a problem that renders the application nearly useless: the Viewty has two cameras -- the main one, and a secondary camera located on the face of the unit -- and it is the secondary camera that is unfortunately set as the phone's default video capture device. As you can't point the secondary camera at anything and see what it's pointing at at the same time, it makes it a bit difficult to snap a barcode!
According to the JSR-135 spec, it is possible to specify a video capture device other than the default... if you know the device name. This does not appear to be documented anywhere on LG's Web site, nor does the JSR-135 spec describe any way of enumerating the devices on a phone... or does it? Failing that, are there any naming conventions for video devices commonly in use that LG might be using?
I've logged a ticket with LG, but as it's an old device, I don't imagine them breaking their backs in getting back to me... I should also point out that this is purely for my own curiosity so no-one here should feel obliged to break their backs either!
As far as I know, there is no way to get a list of all available capture:// URLs.
All the URLs I know of:
capture://image
capture://video
capture://devcam0
capture://devcam1
Source:
http://www.forum.nokia.com/info/sw.nokia.com/id/bc00e4ce-7df3-4527-962c-d39843a808d0/MIDP_Mobile_Media_API_Support_In_Nokia_Devices_v1_0_en.pdf.html
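A hedged sketch of probing those locators from Java ME (the candidate list is just the one above; the class name is arbitrary and behaviour varies by manufacturer):

import java.io.IOException;
import javax.microedition.media.Manager;
import javax.microedition.media.MediaException;
import javax.microedition.media.Player;

public class CaptureProbe {

    // Try each known locator and report which ones this handset accepts.
    public static String probe() {
        String[] locators = { "capture://image", "capture://video",
                              "capture://devcam0", "capture://devcam1" };
        StringBuffer report = new StringBuffer();
        for (int i = 0; i < locators.length; i++) {
            try {
                Player p = Manager.createPlayer(locators[i]);
                p.realize();  // throws if the locator is not usable on this handset
                report.append(locators[i]).append(": OK\n");
                p.close();
            } catch (MediaException e) {
                report.append(locators[i]).append(": not supported\n");
            } catch (IOException e) {
                report.append(locators[i]).append(": not reachable\n");
            }
        }
        return report.toString();
    }
}

The system properties "supports.video.capture" and "video.snapshot.encodings" will tell you whether capture is available at all, but as far as I know they don't enumerate devices.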
LG responded to my support ticket. Apparently, it's not possible to access the primary camera on the Viewty from Java, making it pretty much useless for barcode scanning. Answer reproduced here for search engines.
Your support ticket has been answered. Please visit the LG Mobile Developer Network and log in to check the answer at [My Page > My Tickets].
KU990i default video capture device is the secondary camera
Answer:
Hi,
The KU990i has two camera modules, handled differently: the main camera uses a Joran chipset and the sub (front) camera uses a Qualcomm chipset. The Joran chip does not support JSR-135, so we could not support JSR-135 for the main camera (it is a H/W limitation). The operator was informed of this already and, as we recall, confirmed it. So we only support the sub camera for JSR-135.
BR,

Best cross-platform audio library for synchronizing audio playback

I'm writing a cross-platform program that involves scrolling a waveform along with uncompressed WAV/AIFF audio playback. Low latency and accuracy are pretty important. What is the best cross-platform audio library for audio playback when synchronizing to an external clock? By that I mean that I would like to be able to write the playback code so it sends events to a listener many times per second that include the "hearing frame" at the moment of the notification.
That's all I need to do. No recording, no mixing, no 3d audio, nothing. Just playback with the best possible hearing frame notifications available.
Right now I am considering RtAudio and PortAudio, mostly the former since it uses ALSA.
The target platforms, in order of importance, are Mac OSX 10.5/6, Ubuntu 10.11, Windows XP/7.
C/C++ are both fine.
Thanks for your help!
The best-performing cross-platform library for this is JACK. Properly configured, JACK on Linux can easily outperform ASIO on Windows (in terms of low-latency processing without dropouts). But you cannot expect normal users to use JACK (the daemon has to be started by the user before the app is started, and it can be a bit tricky to set up). If you are making an app specifically for pro audio, I would highly recommend looking into JACK.
Edit:
PortAudio is not as high-performance, but it is much simpler for the user (no special configuration should be needed on their end, unlike JACK). Most open-source cross-platform audio programs that I have used use PortAudio (much more so than OpenAL), but unlike JACK I have not used it personally. It is callback-based and looks pretty straightforward, though.
OpenAL may be an option for you.
