Issues regarding frames per second in J2ME - java-me

I'm developing a racing game. The speed varies across devices. I'm moving the background using a counter to try to maintain a constant speed.
But the speed is still inconsistent. How can I maintain a constant speed on all devices (Nokia, Samsung, etc.)?
I am using the java.util.Timer API; the timer's tick duration varies between devices.
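One frame-rate-independent pattern is to scale every movement by the real elapsed wall-clock time instead of counting Timer ticks, so a slow device simply moves the background further on each frame. Below is a minimal sketch, assuming a GameCanvas-based game loop thread; the class name, speed constant, and drawing are illustrative placeholders, not your actual game code:

```java
import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.game.GameCanvas;

public class RaceCanvas extends GameCanvas implements Runnable {
    private static final int SPEED = 120; // scroll speed in pixels/second (assumed value)
    private int bgYMilli;                 // scroll offset in 1/1000-pixel units
    private volatile boolean running = true;

    public RaceCanvas() {
        super(true);                      // suppress key events, poll key states instead
        new Thread(this).start();
    }

    public void run() {
        long last = System.currentTimeMillis();
        while (running) {
            long now = System.currentTimeMillis();
            // Scale movement by real elapsed time, not by loop or Timer ticks,
            // so the on-screen speed is identical on slow and fast handsets.
            bgYMilli += SPEED * (int) (now - last);
            last = now;
            int bgY = bgYMilli / 1000;    // whole pixels to draw at

            Graphics g = getGraphics();
            // ... draw the background at offset bgY, then the cars ...
            flushGraphics();

            try { Thread.sleep(20); } catch (InterruptedException e) { }
        }
    }
}
```

The integer 1/1000-pixel accumulator avoids floating point (not guaranteed on CLDC 1.0) while keeping fractional movement from rounding away between frames.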

Related

Limit To BLE Devices?

Is there a limit to the number of BLE (Bluetooth Low Energy) devices that can transmit at the same time?
For example, if I plan to implement an IT solution that has to include several thousand BLE beacons / iBeacons, would it be a problem to monitor all of these beacons?
Would their transmissions interfere with each other?
Thanks!
BLE devices use multiple radio frequency channels for advertising and vary their specific packet transmission times in order to avoid transmission collisions with other BLE devices on the same channel. I have successfully tested such a scenario with several dozen beacons visible at the same time, but there are limits to the built-in collision avoidance approach.
If you expect to have many hundreds of devices visible within the same ~50 meter transmission radius, you may run into trouble. See this discussion for details.
Collisions of the transmissions will make it take longer to detect each beacon. CoreLocation on iOS and the Android Beacon Library provide a ranging update once per second for each device, but you may find that each of these updates includes only a fraction of the theoretically visible beacons, because collisions prevented many of their packets from being received within a one-second interval. Whether or not less frequent updates are acceptable depends on your application.
On both iOS and Android there is no problem monitoring such a large number of beacons as long as only a few dozen are in range at any given time. On iOS, however, you need to make sure that you use at most 20 ProximityUUIDs across all the beacons, as this is the maximum number of Beacon Regions you can monitor at the same time on that platform.
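For the Android side, here is a minimal ranging sketch using the open-source Android Beacon Library mentioned above, assuming a recent library version (2.19+, where startRangingBeacons handles service binding itself) and that Bluetooth/location permissions are already granted; the wildcard Region is just an illustration:

```java
import java.util.Collection;

import org.altbeacon.beacon.Beacon;
import org.altbeacon.beacon.BeaconManager;
import org.altbeacon.beacon.RangeNotifier;
import org.altbeacon.beacon.Region;

public class BeaconRanger {
    public void startRanging(android.content.Context context) {
        BeaconManager beaconManager = BeaconManager.getInstanceForApplication(context);
        // Null identifiers -> match every beacon regardless of ProximityUUID
        Region anyBeacon = new Region("all-beacons", null, null, null);

        beaconManager.addRangeNotifier(new RangeNotifier() {
            public void didRangeBeaconsInRegion(Collection<Beacon> beacons, Region region) {
                // Callbacks arrive roughly once per second; with heavy packet
                // collisions this count can be well below the number of
                // beacons physically in range.
                System.out.println("Beacons seen this cycle: " + beacons.size());
            }
        });
        beaconManager.startRangingBeacons(anyBeacon);
    }
}
```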

What exactly is a GPU binning pass

While reading the VideoCoreIV-AG100-R spec of the Broadcom VC4 chip, I came across a paragraph saying:
All rendering by the 3D system is in tiles, requiring separate binning and rendering passes to render a frame. In normal operation the host processor creates a control list in memory defining all the operations and supplying all the data for rendering for a complete frame.
It says that rendering a frame requires a binning pass and a rendering pass. Could anybody explain in detail what roles these two passes play in the graphics pipeline? Thanks a lot.
For a tile-based rendering architecture, the passes are:
Binning pass - generates a stream/map between frame tiles and the corresponding geometry that should be rendered into each particular tile.
Rendering pass - takes the map between tiles and geometry and renders the appropriate pixels per tile.
Mobile GPUs have many limitations compared to desktop GPUs (for example, memory bandwidth, because in mobile devices memory is shared between the GPU and CPU). To reduce overall memory-bandwidth consumption and make efficient use of the available resources, vendors split the work into small pieces, for example via tile-based rendering; a sketch of both passes is given below.
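Here is a minimal software sketch of the two passes, assuming triangles already transformed to screen space. The tile size and types are illustrative assumptions; real hardware such as the VC4 does this with fixed-function logic and compressed per-tile control lists rather than object lists:

```java
import java.util.ArrayList;
import java.util.List;

public class TileBasedRenderer {
    static final int TILE = 32; // tile edge in pixels (assumed; real hardware differs)

    static class Triangle {
        int minX, minY, maxX, maxY; // screen-space bounding box
        // ... vertex attributes would live here ...
    }

    // Binning pass: record each triangle in the bin of every tile its
    // bounding box overlaps; hardware emits this as per-tile control lists.
    static List<List<Triangle>> binningPass(List<Triangle> tris, int tilesX, int tilesY) {
        List<List<Triangle>> bins = new ArrayList<List<Triangle>>(tilesX * tilesY);
        for (int i = 0; i < tilesX * tilesY; i++) bins.add(new ArrayList<Triangle>());
        for (Triangle t : tris) {
            int x0 = Math.max(0, t.minX / TILE), x1 = Math.min(tilesX - 1, t.maxX / TILE);
            int y0 = Math.max(0, t.minY / TILE), y1 = Math.min(tilesY - 1, t.maxY / TILE);
            for (int ty = y0; ty <= y1; ty++)
                for (int tx = x0; tx <= x1; tx++)
                    bins.get(ty * tilesX + tx).add(t);
        }
        return bins;
    }

    // Rendering pass: rasterize each tile using only its own bin in fast
    // on-chip memory, then write the finished tile to the framebuffer once.
    static void renderingPass(List<List<Triangle>> bins, int tilesX, int tilesY) {
        for (int ty = 0; ty < tilesY; ty++)
            for (int tx = 0; tx < tilesX; tx++) {
                for (Triangle t : bins.get(ty * tilesX + tx)) {
                    // ... rasterize t into the on-chip tile buffer ...
                }
                // ... one burst write of the finished TILE x TILE block to memory ...
            }
    }
}
```

The bandwidth saving comes from the rendering pass touching external memory only once per finished tile instead of once per fragment.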
Details
The tile-based rendering approach is described on many GPU vendors' sites, such as:
A look at the PowerVR graphics architecture: Tile-based rendering
GPU Framebuffer Memory: Understanding Tiling

Classifying a program as compute intensive based on performance counters

I'm trying to classify a few parallel programs as compute-, memory-, or data-intensive. Can I classify them from values obtained from performance counters, for example via perf? This command reports a number of values, such as the number of page faults, which I think could be used to tell whether a program needs to access memory frequently.
Is this approach correct and feasible? If not, can someone guide me in classifying programs into the respective categories?
Cheers,
Kris
Yes, in theory you should be able to do that with perf. However, I don't think page-fault events are the ones to observe if you want to analyse memory activity. For this purpose, on Intel processors you should use the uncore events, which allow you to count memory traffic (reads and writes separately). On my Westmere-EP these counters are UNC_QMC_NORMAL_READS.ANY and UNC_QMC_WRITES_FULL.ANY.
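Before digging into uncore events, the generic counters below give a first cut (a hedged sketch, not a definitive recipe: exact event names vary by CPU model and kernel, so check `perf list` on your machine, and ./my_program is a placeholder for your workload):

```sh
# High instructions-per-cycle with few cache misses suggests compute-bound;
# a high ratio of cache misses to instructions suggests memory-bound.
perf stat -e cycles,instructions,cache-references,cache-misses ./my_program
```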
The following article deals exactly with your problem (on Intel processors):
http://spiral.ece.cmu.edu:8080/pub-spiral/pubfile/ispass-2013_177.pdf

Multiple CPUs' usage symmetrical

I have noticed this a number of times while doing computationally expensive tasks on my computer, anywhere from computing hashes to rendering videos.
In this specific situation I was rendering a video using all 4 of my cores under Linux, and when I opened my system monitor once again I noticed it.
Two or more of my cores were under symmetrical usage: when one went up, the other went down, completely symmetrically and in sync.
I have no idea why this is the case and would love to know!
System monitor picture

Thermal-aware scheduler in Linux

Currently I'm working on making a temperature-aware version of Linux for my university project. Right now I have to create a temperature-aware scheduler that takes processor temperature into account when scheduling. Is there any generalized way to get the temperature of the processor cores, or can I integrate the coretemp driver with the Linux kernel in some way? (I didn't find a way to do so on the internet.)
lm-sensors simply uses device files exported by the kernel for CPU temperature; you can read whatever backing variables those device files expose in the kernel to get the temperature information. As for the scheduler, I would not write one from scratch: I would start with the kernel's CFS implementation and, in your case, modify the load-balancer check to include temperature (currently it uses a metric that is the calculated cost of moving a task from one core to another in terms of cache effects, etc. I'm not sure whether you want to keep this or not).
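For a quick look at what those device files contain, here is a minimal userspace sketch; the sysfs path is an assumption that varies by machine (check /sys/class/thermal/ and /sys/class/hwmon/), and an in-kernel scheduler would instead query the thermal subsystem directly, e.g. via thermal_zone_get_temp():

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class CpuTemp {
    public static void main(String[] args) throws IOException {
        // Assumed path; thermal zone numbering and meaning differ per machine.
        BufferedReader r = new BufferedReader(
                new FileReader("/sys/class/thermal/thermal_zone0/temp"));
        try {
            // The file holds millidegrees Celsius, e.g. "47000" -> 47.0 C
            int milli = Integer.parseInt(r.readLine().trim());
            System.out.println("CPU temperature: " + (milli / 1000.0) + " C");
        } finally {
            r.close();
        }
    }
}
```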
Temperature control is very difficult. The difficulty is with thermal capacity and conductance. It is quite easy to read a temperature; how you control it will depend on the system model. A Kalman filter or some higher-order filter will be helpful. You don't know:
Sources of heat.
Distance from sensors.
Number of sensors.
Control elements, like a fan.
If you only measure at the CPU itself, the hard drive could have overheated 10 minutes ago, but the heat is only arriving at the CPU now. Throttling the CPU at that instant is not going to help. Only by getting a good thermal model of the system can you control the heat. Yet you say you don't really know anything about the system? I don't see how a scheduler by itself can do this.
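As a much simpler stand-in for the Kalman filter mentioned above, a first-order exponential smoothing of the raw samples at least damps sensor noise; the gain value is an arbitrary assumption and, unlike a proper system model, this does nothing about the transport lag described below:

```java
public class TempFilter {
    private double estimate;
    private boolean initialized;
    private static final double ALPHA = 0.1; // smoothing gain (assumed; tune per system)

    // Feed raw sensor readings in; get a smoothed estimate back.
    public double update(double rawSample) {
        if (!initialized) {
            estimate = rawSample;   // seed with the first reading
            initialized = true;
        } else {
            estimate += ALPHA * (rawSample - estimate);
        }
        return estimate;
    }
}
```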
I have worked on a mobile freezer application where operators would load pallets of ice cream, etc. from a freezer onto a truck. Very small distances between sensors and control elements can create havoc with a control system. Also, you want your ambient temperature to be read instantly if possible. There is a lot of lag in temperature control; a small distance could delay a reading by 5-15 minutes (i.e., it takes 5-15 minutes for heat to travel 1 cm).
I don't see the utility of what you are proposing. If you want this for a PC, then video cards, hard drives, power supplies, sound cards, etc. can create as much heat as the CPU. You cannot generically model a PC; maybe you could with an Apple product. I don't think you will have a lot of success, but you will learn a lot from trying!
