I need some direction on using a Gauge in J2ME that lets me configure its minimum and maximum values, labels, etc.
Currently, this is my gauge code:
levelGauge = new Gauge("Level", true, 12, valX - 16);
I am setting the maximum value to 12 (so the range becomes 0 to 12), but I need it to run from 16 to 28. The labels shown on movement stay between 0 and 12.
Note that I want to keep the look and feel of the gauge that currently ranges from 0 to 12, but the actual range should be 16 to 28. I do not want the current level of the gauge to go below 16 at any point.
A straightforward way to get a 16-to-28 gauge is to use an ItemStateListener.
For this, you'd create the gauge with a maximum value of 28 and attach an item state listener whose itemStateChanged code checks the value of the gauge and, if it is lower than 16, sets it back to 16.
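A minimal sketch of that, assuming it runs inside your MIDlet where form is the Form already on screen:

import javax.microedition.lcdui.*;

// Gauge covering 0..28, clamped so it never drops below 16.
final Gauge levelGauge = new Gauge("Level", true, 28, 16);
form.append(levelGauge);
form.setItemStateListener(new ItemStateListener() {
    public void itemStateChanged(Item item) {
        if (item == levelGauge && levelGauge.getValue() < 16) {
            levelGauge.setValue(16); // push the value back up to the lower bound
        }
    }
});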
If you want to avoid displaying values lower than 16 at all, consider other options, like a CustomItem for your own hand-made gauge, or third-party UI libraries like LWUIT or J2ME Polish.
I am currently working on a turbidity meter that returns a voltage from 0 to 5.0 (this number changes depending on how turbid the water is; a lower voltage reading indicates more turbid water).
What I am trying to do is take the voltage reading I get and convert it to a reading that expresses the turbidity of the water (so a voltage reading of 4.8 would equal 0, and a voltage reading of 1.2 would equal 4000).
I have written some code using MicroLogic, and I know it has a block that takes an incoming reading and scales the outgoing reading between a min and max that you specify (for example, in MicroLogic I get a 4-20 mA signal in and it scales the output to a water level in a tank, where 4 mA = 0 ft and 20 mA = 12 ft).
Is there scaling code for Python, or how do I go about doing this? Thanks!
You could do something like this:
def turbidity(voltage):
    return 4000 - (voltage - 1.2) / (4.8 - 1.2) * 4000

print(turbidity(1.2), turbidity(2.4), turbidity(4.8))
Which would print out:
4000.0 2666.6666666666665 0.0
If you want integers instead, add an int() call like so:
4000 - int((voltage-1.2)/(4.8-1.2)*4000)
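If you want a general-purpose version of that scaling (the same idea as the MicroLogic min/max block), a small linear interpolation helper covers both the turbidity conversion and the 4-20 mA tank-level example; scale here is just an illustrative name:

def scale(value, in_min, in_max, out_min, out_max):
    # Map value linearly from [in_min, in_max] to [out_min, out_max].
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

print(scale(4.8, 4.8, 1.2, 0, 4000))  # 0.0, clear water
print(scale(1.2, 4.8, 1.2, 0, 4000))  # 4000.0, very turbid
print(scale(12, 4, 20, 0, 12))        # 6.0 ft in the 4-20 mA tank example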
Good luck with your turbidity!
Following the code here: https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb
No matter the image inputted, there seems to be a hard limit of 20 objects detected. Example:
The problem is also seen in this post: TensorFlow Object Detection API Weird Behavior
Is there some configuration or parameter that can be changed to raise the number of objects detected?
EDIT: I can confirm that greater than 20 objects are detected, but there is a maximum of 20 that will be shown in the final output. Is there a way to increase this limit?
The max number of detections can be set in your config file. By default it's usually 300, so you should be fine.
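For reference, a rough sketch of the relevant block in a typical pipeline config (the exact values depend on which sample config your model started from):

post_processing {
  batch_non_max_suppression {
    score_threshold: 1e-8
    iou_threshold: 0.6
    max_detections_per_class: 100
    max_total_detections: 300
  }
}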
Your problem here is the number of displayed detections. Towards the end of your code you have a call to vis_util.visualize_boxes_and_labels_on_image_array. Just add max_boxes_to_draw=None to its arguments to display all the detections (or choose some bigger number if you want).
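For example, assuming the variable names used in the tutorial notebook (image_np, output_dict, category_index):

vis_util.visualize_boxes_and_labels_on_image_array(
    image_np,
    output_dict['detection_boxes'],
    output_dict['detection_classes'],
    output_dict['detection_scores'],
    category_index,
    instance_masks=output_dict.get('detection_masks'),
    use_normalized_coordinates=True,
    line_thickness=8,
    max_boxes_to_draw=None)  # None draws every detection instead of the default 20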
I am writing a custom control which inherits from FrameworkElement. I use a DrawingVisual to render to the screen, so I have to calculate the optimal size of the Visuals I draw myself. If HorizontalAlignment is set to Stretch, I would like to use all available space. But WPF doesn't tell me how much space is really available :-(
In MeasureOverride, the constraint.Width is Infinity. I need to return a desired size which is less than Infinity, otherwise I get the exception "InvalidOperationException: Layout measurement override of element 'xyz' should not return PositiveInfinity as its DesiredSize, even if Infinity is passed in as available size."
I tried to return 0 in the hope that Arrange would pass all the available space this time, since in MeasureOverride it said Infinity width is available. But ArrangeOverride says the arrangeBounds.Width is 0.
I tried to return an arbitrary width (well, actually I returned the monitor width from MeasureOverride), but then ArrangeOverride tells me I can use the complete monitor, which is, of course, not true :-(
So which width do I return from MeasureOverride to get the maximum space really available?
Update 23.1.23
My control is in a library and has no idea which container holds it. I inherit from that control to make many different controls, for example a line graph that can contain millions of measurements. So if there is unlimited space, the graph gets millions of pixels wide. If the space is limited, say to 1000 pixels, then the graph displays in one pixel the average of thousands of measurements.
After using WPF for nearly 20 years now, I think the proper solution is to raise an exception when the available width is Infinity and my control cannot deal with that. The exception forces the control to be placed in a container that gives its children a limited width.
For example, a ScrollViewer gives unlimited space to its children. But since my line graph, or any other control I design that cannot deal with unlimited space, would get unreasonably big, it is better to immediately alert the developer using the control that it needs to be hosted in a container like a Grid, which tells the child the exact size available. Of course, the Grid might also give unlimited space when the column width is set to Auto, but in that case my control should raise an exception so that the column width can be changed to a number of pixels or *.
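A minimal sketch of that guard inside the FrameworkElement-derived class, with an illustrative message text:

protected override Size MeasureOverride(Size availableSize)
{
    // This control cannot produce a sensible size from unlimited width,
    // so fail fast and tell the developer to host it in a width-constraining container.
    if (double.IsPositiveInfinity(availableSize.Width))
        throw new InvalidOperationException(
            "Place this control in a container that gives it a finite width, " +
            "e.g. a Grid column sized in pixels or *.");

    // Take all of the (finite) width that was offered; height would be handled analogously.
    return new Size(availableSize.Width,
                    double.IsPositiveInfinity(availableSize.Height) ? 0 : availableSize.Height);
}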
I have some serial code that I have started to parallelize using Intel's TBB. My first aim was to parallelize almost all the for loops in the code (I have even parallelized a for loop inside another for loop), and having done that I now get some speedup. I am looking for more places/ideas/options to parallelize. I know this might sound a bit vague without much reference to the problem, but I am looking for generic ideas here that I can explore in my code.
Overview of the algorithm (it is run over all levels of the image, starting with the smallest, and increasing the width and height by 2 each time until the actual height and width are reached):
For all image pairs, starting with the smallest pair
    For height = 2 to image_height - 2
        Create a 5 by image_width ROI of both left and right images
        For width = 2 to image_width - 2
            Create a 5 by 5 window of the left ROI centered around width and find the best match in the right ROI using NCC
            Create a 5 by 5 window of the right ROI centered around width and find the best match in the left ROI using NCC
            Disparity = current_width - best match
    The edge pixels that did not receive a disparity get the disparity of their neighbors
    For height = 0 to image_height
        For width = 0 to image_width
            Check smoothness, uniqueness and order constraints (parallelized separately)
    For height = 0 to image_height
        For width = 0 to image_width
            For disparity that failed the constraints, use the average disparity of neighbors that passed the constraints
    Normalize all disparity and output to screen
Just for some perspective, it may not always be worthwhile to parallelize something.
Just because you have a for loop where each iteration can be done independently of the others doesn't always mean you should parallelize it.
TBB has some overhead for starting those parallel_for loops, so unless you're looping a large number of times, you probably shouldn't parallelize it.
But if each iteration is extremely expensive (like in CirrusFlyer's example), then feel free to parallelize it.
More specifically, look for places where the overhead of the parallel computation is small relative to the cost of the work being parallelized.
Also, be careful about doing nested parallel_for loops, as this can get expensive. You may want to just stick with parallelizing the outer for loop.
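As a rough illustration, parallelizing only the outer (row) loop of your disparity pass with tbb::parallel_for might look like this; the function and variable names are placeholders for your own code:

#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>

void compute_disparities(int image_height, int image_width)
{
    // One task per chunk of rows; each row is processed serially,
    // so only the relatively cheap outer loop pays the parallel_for overhead.
    tbb::parallel_for(tbb::blocked_range<int>(2, image_height - 2),
        [&](const tbb::blocked_range<int>& rows) {
            for (int height = rows.begin(); height != rows.end(); ++height) {
                for (int width = 2; width < image_width - 2; ++width) {
                    // 5x5 NCC matching for this pixel would go here, e.g.
                    // disparity(height, width) = width - best_match;
                }
            }
        });
}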
The silly answer is anything that is time consuming or iterative. I use Microsoft's .NET 4.0 Task Parallel Library, and one of the interesting things about their setup is its "expressed parallelism", an interesting term for "attempted parallelism": your code may say "use the TPL here", but if the host platform doesn't have the necessary cores it will simply run the old-fashioned serial code in its place.
I have begun to use the TPL on all my projects, especially any place there are loops (this requires that I design my classes and methods so that there are no dependencies between the loop iterations). And any place that might previously have been plain old multithreaded code, I look at to see whether it's something I can now place on different cores.
My favorite so far has been an application I have that downloads ~7,800 different URLs to analyze the contents of the pages, and if it finds the information it's looking for, does some additional processing ... this used to take between 26 and 29 minutes to complete. My Dell T7500 workstation with dual quad-core Xeon 3 GHz processors, 24 GB of RAM, and Windows 7 Ultimate 64-bit now crunches the entire thing in about 5 minutes. A huge difference for me.
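As a rough illustration of that pattern (the URL source, the content test, and ProcessPage are placeholders, not the actual application code):

using System.Collections.Generic;
using System.Net;
using System.Threading.Tasks;

class Crawler
{
    static void Main()
    {
        List<string> urls = LoadUrls(); // placeholder: however you gather your ~7,800 URLs

        // Each URL is independent of the others, so the iterations can be spread across cores.
        Parallel.ForEach(urls, url =>
        {
            using (var client = new WebClient())
            {
                string page = client.DownloadString(url);
                if (page.Contains("something interesting")) // placeholder test
                    ProcessPage(url, page);                 // placeholder for the extra processing
            }
        });
    }

    static List<string> LoadUrls() { return new List<string>(); }
    static void ProcessPage(string url, string page) { }
}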
I also have a publish/subscribe communication engine that I have been refactoring to take advantage of the TPL (especially for "pushing" data from the server to clients ... you may have 10,000 client computers that have stated their interest in specific things, and once that event occurs, I need to push data to all of them). I don't have this done yet, but I'm REALLY LOOKING FORWARD to seeing the results on this one.
Food for thought ...
I've got a 24-bit bitmap, and I am writing an application in C++ with MFC.
I am using libjpeg to encode the bitmap into a 24-bit JPEG file.
The bitmap's width is M and its height is N.
How do I estimate the JPEG file size before saving it with a given quality factor (0-100)?
Is it possible to do this?
For example:
I want to implement a slider that represents saving the current bitmap with a certain quality factor.
A label beside it shows the approximate file size when the bitmap is encoded with that quality factor.
When the user moves the slider, they get an approximate preview of the file size of the JPEG to be saved.
In libjpeg, you can write a custom destination manager that doesn't actually call fwrite, but just counts the number of bytes written.
Start with the stdio destination manager in jdatadst.c, and have a look at the documentation in libjpeg.doc.
Your init_destination and term_destination methods will be very minimal (just alloc/dealloc), and your empty_output_buffer method will do the actual counting. Once you have completed the JPEG writing, you'll have to read the count value out of your custom structure. Make sure you do this before term_destination is called.
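A rough sketch of such a counting destination manager, following the pattern of the stdio manager in jdatadst.c (the struct and function names here are my own):

#include <stdio.h>
#include "jpeglib.h"

#define COUNT_BUF_SIZE 4096

typedef struct {
    struct jpeg_destination_mgr pub;   /* public fields required by libjpeg */
    JOCTET buffer[COUNT_BUF_SIZE];     /* scratch buffer, contents are thrown away */
    unsigned long total_bytes;         /* running count of emitted bytes */
} counting_dest;

static void init_destination(j_compress_ptr cinfo)
{
    counting_dest *dest = (counting_dest *)cinfo->dest;
    dest->total_bytes = 0;
    dest->pub.next_output_byte = dest->buffer;
    dest->pub.free_in_buffer = COUNT_BUF_SIZE;
}

static boolean empty_output_buffer(j_compress_ptr cinfo)
{
    /* Called when the buffer is full: count it and hand the buffer back. */
    counting_dest *dest = (counting_dest *)cinfo->dest;
    dest->total_bytes += COUNT_BUF_SIZE;
    dest->pub.next_output_byte = dest->buffer;
    dest->pub.free_in_buffer = COUNT_BUF_SIZE;
    return TRUE;
}

static void term_destination(j_compress_ptr cinfo)
{
    /* Count whatever is left in the buffer at the end of compression. */
    counting_dest *dest = (counting_dest *)cinfo->dest;
    dest->total_bytes += COUNT_BUF_SIZE - dest->pub.free_in_buffer;
}

/* Install on the jpeg_compress_struct instead of calling jpeg_stdio_dest(). */
void jpeg_counting_dest(j_compress_ptr cinfo, counting_dest *dest)
{
    cinfo->dest = &dest->pub;
    dest->pub.init_destination = init_destination;
    dest->pub.empty_output_buffer = empty_output_buffer;
    dest->pub.term_destination = term_destination;
}

After jpeg_finish_compress() returns, total_bytes holds the approximate size of the file that would have been written at that quality setting.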
It also depends on the compression you are using, and more specifically on how many bits per color pixel you are using.
The quality factor won't help you here, as a quality factor of 100 can range (in most cases) from 6 bits per color pixel to ~10 bits per color pixel, maybe even more (not sure).
So once you know that, it's really straightforward from there.
If you know the subsampling factor, this can be estimated. That information comes from the start-of-frame marker.
The bit depth is in the same marker, right before the width and height.
If you let
int subSampleFactorH = 2, subSampleFactorV = 1;
Then
int totalImageBytes = (Image.Width / subSampleFactorH) * (Image.Height / subSampleFactorV);
Then you can optionally add more bytes to account for container data as well.
int totalBytes = totalImageBytes + someConstantOverhead;