Resources for determining how widely JSRs are implemented for J2ME

Does anyone know of any resources that can give me an idea of how widely various JSRs are implemented?
Links showing which phones support which specs would be ideal.

There are a few options, though none of them are complete :(
J2ME Polish's device database
Device Atlas
WURFL
Nokia, Sony Ericsson and other manufacturers' websites.
This information isn't really aggregated anywhere, but for JSR information I would say J2ME Polish has the best source, covering roughly 1,000 devices. Although WURFL and Device Atlas have more devices, they are mostly targeted at WAP and mobile-web developers.
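Device databases only go so far, so it is common to complement them with a runtime check. Here is a minimal sketch (not taken from any of the databases above, just a standard J2ME idiom) that probes whether a JSR's classes are present on the handset; the class names come from the public specs (JSR-179 Location API, JSR-82 Bluetooth):

    public class JsrProbe {
        // Probe for an optional JSR by trying to load one of its classes.
        public static boolean hasClass(String name) {
            try {
                Class.forName(name);
                return true;
            } catch (ClassNotFoundException e) {
                return false;
            } catch (Error e) { // some VMs throw NoClassDefFoundError instead
                return false;
            }
        }
    }

Calling hasClass("javax.microedition.location.Location") or hasClass("javax.bluetooth.LocalDevice") then tells you whether JSR-179 or JSR-82 is available on the device you are actually running on, whatever the databases claim.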

JBenchmark has quite a substantial database of API compatibility: www.jbenchmark.com

Related

Is Opus supported for VoLTE?

There are so many different codecs for phone calls, and many of them carry very high license fees, so it will take a long time before everyone can use normal telephony with wideband audio.
Is Opus supported for VoLTE?
The usual codecs for VoLTE are AMR, AMR-WB and EVS (see the links below for more info - thanks, @Mikael Dúi Bolinder).
As with most mainstream voice (and video) codecs, there is IPR and licensing associated with these. However, for end users the network providers and device manufacturers have included the licensing and the codecs in their rollouts, so a typical operator service will use these.
I'm not aware of any restrictions from 3GPP on using other codecs if the devices and the network support them, but the above are definitely the default and the most widely used.
If you want to create your own voice service, e.g. a VoIP service running over the data connection to the phone, then in theory you can use whatever codec you want. Be aware that for software-based codecs - which they will be unless they are tightly integrated into the device's hardware - efficiency matters: an inefficient implementation may hurt performance, battery life, etc.
For Opus in particular there are several open-source projects that provide Android libraries, for example. Opus is also supposed to be supported on devices from Android 5.0 onwards (https://developer.android.com/guide/topics/media/media-formats).
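If you want to verify that claim on a given handset rather than trust the documentation, here is a minimal sketch (assuming the standard Android framework, API 21+) that asks MediaCodecList whether an Opus decoder is present:

    import android.media.MediaCodecInfo;
    import android.media.MediaCodecList;

    public final class OpusSupportCheck {
        // Returns true if the device advertises a decoder for audio/opus.
        public static boolean hasOpusDecoder() {
            MediaCodecList list = new MediaCodecList(MediaCodecList.ALL_CODECS);
            for (MediaCodecInfo info : list.getCodecInfos()) {
                if (info.isEncoder()) continue;
                for (String type : info.getSupportedTypes()) {
                    if ("audio/opus".equalsIgnoreCase(type)) return true;
                }
            }
            return false;
        }
    }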
AMR licensing (and issues) on Wikipedia: https://en.wikipedia.org/wiki/Adaptive_Multi-Rate_audio_codec#Licensing_and_patent_issues
AMR-WB licensing on Wikipedia: https://en.wikipedia.org/wiki/Adaptive_Multi-Rate_Wideband#Licensing
MPEG LA developing a patent pool for EVS: http://www.mpegla.com/Lists/MPEG%20LA%20News%20List/Attachments/97/n-16-01-20.pdf

What is Kinect + Linux being used for?

An article on Hackaday piqued my curiosity, and I see Kinect + Linux questions being asked here (mostly about configuration), so I'll venture this question:
It is clear to me that the Kinect can be used together with Linux on a "regular PC" - but I can't help wondering why; that is, what might you actually use this for?
I don't suppose people really like the human/computer interface presented in movies such as "Minority Report" - surely nobody is actually doing text editing, coding, or business data processing by waving their hands. So besides games and exercise, what are examples of actual, real-world, useful (i.e. 'professional') applications of such a setup?
For instance, can it be used for 3D scanning of real-world objects to obtain digital models? What sort of accuracy would such a scan yield?
The Kinect can be used for a wide variety of useful applications. I'm not sure if you are asking specifically about Linux or if Windows ("regular PC") is acceptable, but I'll provide you with some examples that come to mind.
For Linux specifically, it is likely that applications use only the sensor's raw data, rather than the skeletal tracking feature. Many Kinect applications are on Windows because Microsoft's Kinect SDK is available only on Windows, and it provides the best skeletal tracking accuracy to date.
You are right that the Kinect is rarely used where a keyboard & mouse would be faster and more accurate, but note that it is potentially relevant for accessibility.
And yes, it can be used for 3D scanning of real-world objects. I'm not sure about the exact accuracy, but I think it is acceptable for many applications. The main benefits are its low cost and speed.
For examples of 3D scanning, check out:
KinectFusion, a Microsoft Research project
Occipital Structure sensor for 3D scanning. (This is not the Kinect sensor, but provides an example application for 3D scanning. The company has a Kinect-related history as well.)
Styku - 3D body scanning for clothes fitting
Aside from 3D scanning, here are some other examples of applications:
Atlas5D - at-home patient monitoring
GestSure - 'Minority Report' interface for surgical rooms
Jintronix - games, exercises, assessments for physical therapy
There are many depth sensors like the Kinect on the market. The latest notable application is the iPhone X's depth sensor and Face ID. Many companies in the space are now actively working on face identification, which would also be useful on Linux. Check out Microsoft's Windows Hello biometric facial ID system - from Microsoft's official website:
Manufacturing of the Kinect sensor and adapter has been discontinued,
but the Kinect technology continues to live on in products like the
HoloLens, Cortana voice assistant, the Windows Hello biometric facial
ID system, and a context-aware user interface.
Kinect has applications in the robotics community as well, though I don't know the specifics. I assume many in the robotics community use Linux when working with the Kinect. The depth and color cameras can be used to provide vision, and the microphone array for audio input.
Generally, the Kinect had a big impact when it was released not just because of its technology but also because of its low price point, even if it's not the most accurate for every application. As this technology improves, I hope many other applications will emerge and become mainstream.
EDIT: also, check out this Hacker News discussion: "Microsoft Has Stopped Manufacturing The Kinect"

Getting started with Bluetooth Low Energy (BLE) beacon development

I have a couple of questions concerning BLE beacons:
1) Are beacons based on the nRF51822 chip the best solution, or are there chips better than the nRF51822? I want to take up BLE beacon development and am struggling to find the right hardware. As a novice developer I want the beacon to be as cheap as possible, so as not to waste money in case of failure.
2) Is it possible to buy a pure Eddystone beacon (not iBeacon)? The reason for choosing Eddystone is that it can broadcast URLs, which is essential for me.
The second question stems from my failed attempts to find a pure Eddystone beacon on Chinese electronics sites like alibaba.com or aliexpress.com, where the only firmware available is iBeacon. iBeacon is not an option because it can't broadcast a URL the way Eddystone does.
Apart from the above questions, it would be great if someone wrote a quick guide to taking up BLE development with Eddystone, covering basic topics like which chip to use, beacon model, the best website to buy beacons from, etc.
Thanks in advance,
Pavel
1) I've worked with Estimote beacons and with cheap Chinese beacons from Amazon, and in my opinion they do not differ much in accuracy. Especially for prototyping, I'd buy the cheaper ones to test whether your use case can be satisfied with BLE beacons at all. If it is too inaccurate with Chinese beacons, chances are it won't work with more expensive ones either.
2) Why do you need the URL broadcast? If the app is going to use the URL, it has to be connected to the internet anyway, so you can just send the beacon's IDs to a web service and get a URL back. Personally, I think this is the better approach: you can reconfigure the web service from anywhere to change the URL for your beacons, whereas to change the URL of an Eddystone frame you have to go to the beacon and reconfigure it.
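For context on what the Eddystone-URL broadcast actually carries, here is a minimal sketch (plain Java, assuming you have already extracted the service-data bytes advertised under the Eddystone UUID 0xFEAA) of how the frame compresses a URL into a few bytes, following the public Eddystone spec:

    public final class EddystoneUrlDecoder {
        private static final String[] SCHEMES = {
            "http://www.", "https://www.", "http://", "https://"
        };
        private static final String[] EXPANSIONS = {
            ".com/", ".org/", ".edu/", ".net/", ".info/", ".biz/", ".gov/",
            ".com", ".org", ".edu", ".net", ".info", ".biz", ".gov"
        };

        // serviceData = bytes advertised under the Eddystone service UUID.
        public static String decode(byte[] serviceData) {
            if (serviceData == null || serviceData.length < 4
                    || serviceData[0] != 0x10) {   // 0x10 = URL frame type
                return null;
            }
            int scheme = serviceData[2] & 0xFF;    // byte 1 is TX power at 0 m
            if (scheme >= SCHEMES.length) return null;
            StringBuilder url = new StringBuilder(SCHEMES[scheme]);
            for (int i = 3; i < serviceData.length; i++) {
                int b = serviceData[i] & 0xFF;
                // Low byte values are shorthand for common URL fragments.
                url.append(b < EXPANSIONS.length
                        ? EXPANSIONS[b] : String.valueOf((char) b));
            }
            return url.toString();
        }
    }

The tight encoding is why Eddystone-URL only fits short URLs; a beacon-ID lookup against your own web service, as suggested above, has no such limit.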
The nRF51822 is a common choice: flexible, well understood and potentially very inexpensive. Be aware, though, that development costs, add-on circuitry for power and/or peripherals, and packaging can easily eclipse the cost of the Bluetooth chip itself once you get to production.
If you want to buy an off-the-shelf beacon, most models supporting Eddystone also support iBeacon, simply because supporting both adds no additional hardware cost. Newer Radius Networks and Estimote beacons all support both. And yes, cheaper generic Chinese suppliers often have bulk-manufactured inventory from before Eddystone existed that only supports iBeacon.

Develop applications to mobiles

I have a very easy question, but I simply have no idea of the answer.
I have developed a small mobile application in Java for my Nokia.
The problem is that when installed on my Samsung, the application simply crashed.
Then I tried it on my other Nokia, a different model, and I didn't get the normal behavior either.
So my question is: does anyone have any idea how companies that develop mobile applications/games test their software?
Do they have to own every model of every mobile phone??
Companies that target many phones in many countries usually only let you install the application on your phone if they recognise your handset's User-Agent in the HTTP headers of the request to download the .jad or .jar file.
There are multiple ways to test an application on many handsets across many mobile network operators: from simply buying the phones, to establishing commercial partnerships with handset manufacturers and mobile network operators, to having a DeviceAnywhere account.
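As an illustration of the User-Agent gating mentioned above, here is a minimal servlet sketch (the handset string and .jad path are placeholders, not anything a real distributor published):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class JadDownloadServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String ua = req.getHeader("User-Agent");
            // Only serve the descriptor to handsets we have a tested build for.
            if (ua == null || !ua.contains("SonyEricssonK800i")) {
                resp.sendError(HttpServletResponse.SC_FORBIDDEN,
                        "No build available for this handset");
                return;
            }
            resp.setContentType("text/vnd.sun.j2me.app-descriptor");
            req.getRequestDispatcher("/builds/k800i/game.jad").forward(req, resp);
        }
    }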
I don't know if you need all models of all phones, but you will definitely need separate tests (and probably different builds) for different phones regarding the following (see the probe sketch below):
MIDP version
Screen Size
Input Devices
Speed & Memory
Java, in this case, is WOTA (Write Once, Test Anywhere) instead of WORA (Write Once, Run Anywhere). :-)
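To make those differences concrete, here is a minimal MIDlet sketch (standard MIDP 2.0 APIs only; the property names come from the CLDC/MIDP specs) that reports what the handset it runs on actually provides:

    import javax.microedition.lcdui.Canvas;
    import javax.microedition.lcdui.Graphics;
    import javax.microedition.midlet.MIDlet;

    public class CapabilityProbe extends MIDlet {
        protected void startApp() {
            // Configuration and profile, e.g. "CLDC-1.1" and "MIDP-2.0".
            String cldc = System.getProperty("microedition.configuration");
            String midp = System.getProperty("microedition.profiles");
            // Usable full-screen canvas size differs per handset.
            Canvas c = new Canvas() {
                protected void paint(Graphics g) { /* nothing to draw */ }
            };
            c.setFullScreenMode(true);
            long heap = Runtime.getRuntime().totalMemory();
            System.out.println(cldc + " / " + midp + ", screen "
                    + c.getWidth() + "x" + c.getHeight() + ", heap " + heap);
            notifyDestroyed();
        }
        protected void pauseApp() {}
        protected void destroyApp(boolean unconditional) {}
    }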
Phone specs and Java implementations vary a lot, but within each manufacturer's range there will be groups of phones that share the same specs and implementation.
I used to work at a company making J2ME games. What we did there was test on every handset we released the game on, but we had two types of test: Complete and Compatibility.
We would adapt a version of the game for a specific phone, e.g. the Sony Ericsson K800i, and have it thoroughly tested according to the Complete test spec.
Once that had passed, we then used that build on a phone known to have similar specs and good previous compatibility with other games (we kept a database of specs and compatibility records), e.g. the Sony Ericsson W910i, and submitted it for a Compatibility test, which was a bit less thorough and a bit quicker.
Once you've been doing it a while you get to know the capabilities of phones and which phones you can use the same build on, but there is often a bit of guesswork involved :) Sometimes you get matches you wouldn't expect, and sometimes a match you would expect to work doesn't.
Edit: I was going to post this as a comment, but I can't (because I'm an SO noob :). Out of interest, which models are your Nokias and your Samsung?
I can't remember many specific handset names, but here is a quick rundown of compatibility across manufacturers:
Sony Ericssons are generally excellent - if it works on one, it will likely work on all SE handsets with the same resolution.
Nokias are generally good within certain smaller groups, e.g. N95 builds work well on most Nokias with the same resolution that were released after the N95, but some handsets are a bit of a pain.
Samsungs are pretty bad - the J2ME implementation on most is flawed (the hideNotify/showNotify methods not being called is one example), and the memory and speed are typically a bit crap.
Motorola phones are not great, but are generally quite compatible with one another. The same goes for LG, although their more recent models are much better.
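The hideNotify/showNotify flaw mentioned above is the kind of thing you end up coding around. Here is a minimal sketch (the two-second threshold is an arbitrary placeholder) of a game canvas that doesn't fully trust those callbacks:

    import javax.microedition.lcdui.Canvas;
    import javax.microedition.lcdui.Graphics;

    public class DefensiveCanvas extends Canvas {
        private volatile boolean shown = true;
        private volatile long lastPaint = System.currentTimeMillis();

        protected void showNotify() { shown = true; }
        protected void hideNotify() { shown = false; } // may never fire on some handsets

        protected void paint(Graphics g) {
            lastPaint = System.currentTimeMillis();
            // ... draw the current frame ...
        }

        // Called from the game loop, which repaints every frame: if no paint
        // has happened recently, treat the canvas as hidden and pause the
        // game even though hideNotify() never fired.
        public boolean probablyVisible() {
            return shown && (System.currentTimeMillis() - lastPaint < 2000);
        }
    }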
Testing is one of the most labour-intensive parts of mobile phone development. Typically a company might simply buy a lot of different phones to test on for real, or target a particular subset, such as only Series 40 Nokia phones.
But alternatives exist out there where you can remotely deploy your app to phones, such as Nokia's Remote Device Access Services.
One way that might limit the problems is to target J2ME MSA (Mobile Service Architecture) compliant phones, where MSA attempts to reduce variations in vendor implementations of J2ME.

Has anyone got a tutorial up on getting your own smartcard and getting pkcs#11 working on it?

Has anyone got a tutorial up on getting your own smartcard and getting pkcs#11 working on it? In Linux? (Windows would be fine too).
Most of the vendors seem to assume you'll be wanting enough for your whole company, not one or two.
This depends heavily on the driver and application you use. We use OpenSC/OpenCT for all non-enterprise smartcard uses. They have decent documentation.
Yes, check out what OpenSC supports.
Make sure that you know what you want - USB tokens or full-size smart cards. There are pros and cons to both. USB tokens require drivers, often from the manufacturer, on some platforms (e.g. Windows 7 or OS X can be troublesome), but they are easy to use once set up and sometimes offer better performance than ISO smart cards. Regular smart cards, on the other hand, also come with contactless interfaces and can be used with pinpad readers, which provide higher security than USB tokens.
If you're into fancier features and may want to extend your card infrastructure beyond PKCS#11 crypto, JavaCards might be useful (OpenSC cannot work with JavaCards directly, but certain applets are supported, like MUSCLE). Otherwise, look for a supported card operating system.
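Once OpenSC is installed, here is a minimal sketch (Java, JDK 9+; the module path and PIN are placeholders for whatever your distribution and card actually use) of talking PKCS#11 through the JDK's built-in SunPKCS11 provider:

    import java.security.KeyStore;
    import java.security.Provider;
    import java.security.Security;
    import java.util.Collections;

    public class Pkcs11List {
        public static void main(String[] args) throws Exception {
            // Inline SunPKCS11 config: the "--" prefix means the rest of the
            // string is the configuration itself, not a file path.
            String conf = "--name=OpenSC\nlibrary=/usr/lib/opensc-pkcs11.so";
            Provider p = Security.getProvider("SunPKCS11").configure(conf);
            Security.addProvider(p);
            KeyStore ks = KeyStore.getInstance("PKCS11", p);
            ks.load(null, "1234".toCharArray()); // card PIN (placeholder)
            for (String alias : Collections.list(ks.aliases())) {
                System.out.println(alias);       // keys/certs on the card
            }
        }
    }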
