I would like to capture only a few pixels from the Google Glass camera at regular intervals, to obtain color data over time. To save battery life, is there a way to capture only a few pixels rather than taking a full image every time and post-processing it (which is far more processor- and battery-intensive)? Perhaps this is fixed at the hardware level, in which case I cannot do such a thing.
As an alternative, I was hoping the light sensor would give RGB data, but it appears to be a monochromatic light level that is provided in units of lux.
I'm reading/watching anything I can about color management/color science and something that's not making sense to me is the scene-referred and display-referred workflows. Isn't everything display-referred, because your monitor is converting everything you see into something it can display?
While reading this article, I came across this image:
So, if I understand this right, to follow a linear workflow I should apply an inverse power function to any imported jpg/png/etc. files that contain color data, so that their gamma becomes linear. I then work on the image, and when I'm ready to export, say to sRGB saved as a png, the export bakes the original transfer function back in.
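To make that concrete, here is how I picture the linearization step (a rough sketch with NumPy, using the exact piecewise sRGB curve rather than a plain power function; not taken from the article):

```python
import numpy as np

def srgb_to_linear(srgb_u8):
    """Decode an 8-bit sRGB-encoded image to linear-light floats in [0, 1]."""
    c = srgb_u8.astype(np.float32) / 255.0
    # The inverse sRGB transfer function is piecewise: a linear toe, then a ~2.4 power segment.
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(lin):
    """Re-encode linear-light values back to 8-bit sRGB for export."""
    c = np.clip(lin, 0.0, 1.0)
    enc = np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1.0 / 2.4) - 0.055)
    return np.round(enc * 255.0).astype(np.uint8)
```

The actual work (blending, filtering, rendering) would happen on the float linear data, and the encode step at export is what bakes the transfer function back in.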
But even while it's linear and I'm working on it, isn't my monitor converting everything I see into what it can display? Isn't it basically applying its own LUT? Isn't there already a gamma curve that the monitor itself is applying?
Also, from input to output, how many color space conversions take place, say if I'm working in the ACEScg color space? If I import a jpg texture, I linearize it and bring it into the ACEScg color space. I work on it, and when I render it out, the renderer applies a view transform to convert it from ACEScg to sRGB, and then what I'm seeing is my monitor converting that from sRGB to its own ICC profile, right (which is always happening, since everything I see goes through my monitor's ICC profile)?
Finally, if I add a tone-mapping S-curve, where does that conversion sit in this chain?
I'm not sure your question is about programming, and it does not have much to do with the title.
In any case:
Light (photons) behaves linearly: the intensity of two lights together is the sum of the intensities of each light. For this reason a lot of image manipulation is done in linear space. Note: camera sensors often have a near-linear response.
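A tiny illustration of why this matters (using a simple gamma of 2.2, not the exact sRGB curve): averaging two gamma-encoded pixel values gives a different, darker result than averaging the underlying light intensities.

```python
gamma = 2.2  # illustrative display gamma, not the exact sRGB curve

black, white = 0.0, 1.0          # gamma-encoded pixel values
naive_mix = (black + white) / 2  # average of the encoded values -> 0.5

# Average in linear light instead: decode, mix, re-encode.
linear_mix = ((black**gamma + white**gamma) / 2) ** (1 / gamma)

print(naive_mix)              # 0.5
print(round(linear_mix, 3))   # ~0.73, noticeably brighter and physically correct
```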
Eyes respond roughly as with a gamma exponent of 2, so gamma encoding is useful for compression (less visible noise with fewer bits of information). By coincidence the CRT phosphors had a similar response (otherwise the engineers would have found some other method: in the past such fields were driven by a lot of experiments and user feedback, across many settings).
Screens expect images with a standardized gamma correction (nowadays it depends on the port, the settings, and the image format). Some screens may be able to cope with many different colour spaces. Note: we no longer have CRTs, so the screen will convert data from the expected gamma to the monitor's own gamma (possibly with a different value for each channel), a sort of LUT (it may be done purely electronically, so without the T (table)). Screens are set up so that with a standard signal you get the expected light. (There are standards (test images and methods) to measure the expected behaviour, so there is some implicit gamma correction of the already gamma-corrected values. It was always so: on old electronic monitors/TVs, technicians had internal knobs to regulate individual colours, general settings, etc.)
Note: professionals outside computer graphics will often use the opto-electronic transfer function (OETF) for the camera (light to signal) and the inverse, the electro-optical transfer function (EOTF), when a signal (electrical) is converted to light, e.g. in the screen. I find that naming the "gamma" this way quickly shows what it really is: just a conversion between an analogue electrical signal and light intensity.
The input image has its own colour space. You assume a JPEG here, but often you have much more information (RAW or log, S-Log, ...). So you convert it to your working colour space (which may be linear, as in our example). If you display the working image directly, you will see distorted colours. But you may not even be able to display it, because you will probably use more than 8 bits per channel (colour); common is 16 or 32 bits, often as half-float or single-precision float.
And I lost part of my answer (after the last autosave). The rest was also complex, but the answer is already too long. In short: you can calibrate the monitor in two ways. The best way (if you have a monitor that can be "hardware calibrated") is to modify the tables inside the monitor itself, so it is nearly transparent (the internal gamma function is simply adapted to give better colours); you still get an ICC profile, but for other reasons. Or you use the easy calibration, where the bytes of an image are transformed on your computer to get better colours (in a program, or nowadays often by the operating system, either directly or by telling the video card to do it). You should carefully check that only one component is doing colour correction.
Note: in your program you normally save the image as sRGB (or AdobeRGB), i.e. with a standard ICC profile, and practically never with your screen's ICC profile, just for consistency with other images. It is then the OS, or the soft-proofing, etc. that converts it for your screen; if the image did carry your screen's ICC profile, the OS colour management would just see that the ICC-image to ICC-output conversion is trivial (it would simply copy the values).
So, take into account that at every step there is an expected colour space and gamma. All programs expect it, and later it may be changed. There may be some unnecessary calculation, but it makes things simpler: you do not have to track the expectations yourself.
And there are many more details. The ICC profile is also used to characterize your monitor (its achievable gamut), which can be used for some colour management tasks. The rendering intents are just the methods by which the colour correction is done when the image has out-of-gamut colours: either keep the nearest colour (you lose shades but gain accuracy), or scale all colours (and expect your eyes to adapt, which they do if you view just one image at a time). The devil is in such details.
I am looking for real-time image processing to measure a velocity and output it to another control system.
I have attached an image of a yellow stripe. It has markings on the surface that I would like to detect automatically and use for the calculation. In the first step the material moves in only one direction, here for example to the right. Only the horizontal part of the movement is of interest to me, so essentially only the velocity along the x-axis. But the material moves relatively fast: at maximum speed, within 28 ms the current mark (spike) reaches the position of the one in front of it.
The idea is to use a Raspberry Pi 4 with a camera at its maximum of 120 fps. So every 8.3 ms a picture is generated, which should make it possible to clearly detect the movement of the marks.
My questions are:
Is it possible to process the images and do the detection that fast, to get the velocity in near real time? And which algorithm should I use for this configuration? Ideally I would use two or three markers per image and average their velocities.
And I would like to use the velocity as an input signal for another system. What is the easiest and fastest way to send this information directly to another control system?
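As a starting point, I imagine the pipeline could look roughly like this (a sketch only: phase correlation between consecutive frames with OpenCV gives one global shift instead of tracking individual marks, and the result is pushed out over UDP; the frame rate, scale factor and address are placeholders I would still have to calibrate):

```python
import socket
import struct

import cv2
import numpy as np

FPS = 120.0                          # camera frame rate assumed above
MM_PER_PIXEL = 0.1                   # placeholder: calibrate with a known mark spacing
CTRL_ADDR = ("192.168.0.50", 5005)   # hypothetical address of the control system

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FPS, FPS)

ok, prev = cap.read()
prev = np.float32(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    # Phase correlation returns the dominant (dx, dy) shift between the two frames.
    (dx, dy), _response = cv2.phaseCorrelate(prev, gray)
    velocity_mm_s = dx * MM_PER_PIXEL * FPS   # horizontal component only
    # Send the latest estimate as a single little-endian float over UDP.
    sock.sendto(struct.pack("<f", velocity_mm_s), CTRL_ADDR)
    prev = gray
```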
I am involved in a side project that has a loop of LEDs around 1.5m in diameter with a rotor on the bottom which spins the loop. A raspberry pi controls the LEDs so that they create what appears to be a 3D globe of light. I am interested in a project that takes a microphone input and turns it into a column of pixels which is rendered on the loop in real time. The goal of this is to see if we can have it react to music in real-time. So far I've come up with this idea:
Using an FFT to quickly turn the input sound into a function that maps certain pixels to certain colors based on the amplitude of the resulting spectrum at each frequency, so the equator of the globe would respond to the strength of the lower-frequency sound, progressing upward toward the poles, which would respond to high-frequency sound.
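To make that concrete, this is roughly the mapping I have in mind (a sketch assuming mono samples from the microphone and a 32-row column of LEDs; all constants are placeholders):

```python
import numpy as np

SAMPLE_RATE = 44100   # Hz, placeholder for whatever the microphone delivers
CHUNK = 1024          # samples per analysis window (~23 ms at 44.1 kHz)
NUM_ROWS = 32         # LED rows from equator (low frequencies) to pole (high)

def chunk_to_levels(samples):
    """Map one audio chunk to one brightness level per LED row."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    # Split the spectrum into NUM_ROWS logarithmically spaced bands,
    # so the equator reacts to bass and the poles to treble.
    edges = np.logspace(np.log10(2), np.log10(len(spectrum) - 1),
                        NUM_ROWS + 1).astype(int)
    levels = np.array([spectrum[a:b].mean() if b > a else 0.0
                       for a, b in zip(edges[:-1], edges[1:])])
    # Normalize to [0, 1] so the brightest band drives a fully lit row.
    return np.clip(levels / (levels.max() + 1e-9), 0.0, 1.0)
```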
I can think of a few potential problems, including:
Performance on a Raspberry Pi. If the response lags too far behind the music, it wouldn't seem to the observer to be responding to the specific song they are also hearing.
Without detecting the beat or some overall characteristic of the music that people recognize, it might be difficult for observers to tell that the output is correlated with the music.
The rotor has different speeds, so the image is only stationary if the rate of spin is matched perfectly to the refresh rate of the LEDs. This is a problem, but also possibly helpful because I might be able to turn down both the refresh rate and the rotor speed to reduce the computational load on the raspberry pi.
With that backstory, I should probably now ask a question. In general, how would you go about doing this? I have some experience with parallel computing and numerical methods, but I am totally ignorant of music and tone and what-not. Part of my problem is that I know the Raspberry Pi is the newest model, but I am not sure what its parallel capabilities are. I need to find a few Linux-friendly tools or libraries that can do an FFT on an ARM processor and handle the post-processing in real time. I think a delay of ~0.25 s or so would be acceptable. I feel like I'm in over my head, so I thought I'd ask you guys for input.
Thanks!
I have been searching for infrared and ultraviolet images of plants, e.g. grape plants, but I cannot find reasonable images in the search results. I have also tried to buy a camera to take the pictures myself, but as you know those cameras are so expensive that I cannot afford one.
I am doing my thesis on image processing using different imaging techniques. Could you help me find infrared and ultraviolet images, or let me know where I can buy such pictures?
Or could you please tell me whether it is possible to convert a normal picture into an infrared or ultraviolet picture?
Thanks!
It's not possible to convert an ordinary photo into a physically correct infrared or ultraviolet photo, just as it's not possible to convert a picture taken with a red filter into a photo taken with a green filter. They use different frequencies of light, and detail carried by frequencies that were never captured is simply lost.
For example: flowers have very rich colouring in the ultraviolet range, but they are mostly monochromatic in the optical frequencies. This is very hard to fake. Also, the human body emits radiation in the infrared range, meaning it's perfectly visible even without any light sources (if you have the correct equipment). A human body does not emit in the visible range.
If that were possible, all the expensive real-infrared and real-ultraviolet cameras would use this process and thus be much cheaper. All the expensive X-ray machines in hospitals would be completely useless: just take a picture with an ordinary cell phone and process it.
However, you can fake the effect. It's no good for scientific measurements, but it is usable for artistic purposes. Open the picture in any decent image editor and play with the colors. Play with channel mixing (you have at least 12 sliders to play with). Play with selections: you can apply different color effects to different regions of the image, use the magic lasso to select specific objects, or feather the selection to create a gradual change. GIMP has a nice feature where you can paint into the selection mask rather than into the image itself, so you can select a region or tweak a selection with a paint brush too. Play with layers: you can have two layers with different color effects and blend between them by painting the top layer's alpha channel. Play with the built-in filters, with different layer mixing modes, and with color balance curves. Do whatever you need to get the artistic effect you want. You can also mix different photos of different aspects of the same object, or photos of the same object under different lighting conditions (natural sunlight, artificial lighting, taken through a pair of sunglasses ...), to get more color channels to mix.
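If you prefer to script the effect instead of clicking around an editor, the same channel-mixing idea is only a few lines of NumPy (a sketch; the file names and the mixing matrix below are arbitrary choices for a fake-IR look, not a physical model):

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("plant.jpg"), dtype=np.float32) / 255.0  # RGB image

# Each output channel is a weighted mix of the input R, G, B channels.
# These weights swap red and blue and boost green, purely an artistic choice.
mix = np.array([[0.0, 0.3, 1.0],   # output R
                [0.1, 1.2, 0.0],   # output G
                [1.0, 0.0, 0.1]])  # output B

fake_ir = np.clip(img @ mix.T, 0.0, 1.0)
Image.fromarray((fake_ir * 255).astype(np.uint8)).save("plant_fake_ir.jpg")
```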
You can get a rough estimate of what's really happening in the near-ultraviolet region as long as the camera is at least a little sensitive there. If you can't find a UV-pass filter, take one photo with a UV-blocking filter and one without it, then balance and subtract. Do not expect much, though. Still, it could give you some hints on how to achieve a semi-realistic color effect.
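A sketch of that subtraction (assuming the two shots are taken from a tripod so they are aligned; the file names are placeholders):

```python
import numpy as np
from PIL import Image

# Two shots from a tripod: one without any filter, one through a UV-blocking filter.
no_filter = np.asarray(Image.open("scene_no_filter.jpg"), dtype=np.float32)
uv_blocked = np.asarray(Image.open("scene_uv_blocked.jpg"), dtype=np.float32)

# Whatever the UV-blocking filter removed is (roughly) the near-UV contribution.
uv_estimate = np.clip(no_filter - uv_blocked, 0, 255)

# Stretch the (usually very faint) difference so it is visible at all.
scale = 255.0 / max(uv_estimate.max(), 1.0)
Image.fromarray((uv_estimate * scale).astype(np.uint8)).save("uv_estimate.jpg")
```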
I would like to calculate the distance between my camera and a recognized "object".
The recognized "object" is, for example, a black rectangular sticker on a white board. I know the real dimensions of the rectangle (x, y).
Is there a method that I can use to calculate the distance with the values of my original rectangle, and the values of the picture of the rectangle I took with the camera?
I searched the forum for answers, but none of them covered calculating the distance from these attributes.
I am working on a robot called Nao from Aldebaran Robotics, and I am planning to use OpenCV to recognize the black rectangle.
If you could compute the angle taken up by the image of the target, then the distance to the target should be proportional to the cot (i.e. 1/tan) of that angle. You should find that the number of pixels in the image corresponds roughly to that angle, but I doubt it is completely linear, especially up close.
The behaviour of your camera lens is likely to affect this measurement, so it will depend on your exact setup.
Why not measure the size of the target at several distances and plot a scatter graph? You could then fit a curve to the data to get a size-to-distance function for your particular system. If your camera is close to an "ideal" camera, then you should find this graph looks like cot, and you should be able to find values of a and b such that dist = a * cot(b * width).
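If you do try it, the curve fit itself is only a few lines with SciPy (a sketch; the measurements below are made-up placeholders just to show the shape of the code, substitute your own):

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: apparent target width in pixels vs. measured distance in cm.
widths_px = np.array([320.0, 160.0, 105.0, 80.0, 64.0])
dists_cm  = np.array([ 20.0,  41.0,  63.0, 82.0, 103.0])

def model(w, a, b):
    # dist = a * cot(b * width) = a / tan(b * width)
    return a / np.tan(b * w)

(a, b), _cov = curve_fit(model, widths_px, dists_cm, p0=(5.0, 1e-3))

print(f"a = {a:.3f}, b = {b:.2e}")
print("predicted distance for a 200 px wide target:", model(200.0, a, b))
```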
If you try this experiment, why not post the answers here, for others to benefit from?
[Edit: a note about 'ideal' cameras]
For a camera image to look "realistic" to us, the image should approximate a projection onto a plane held in front of the eye (because we view camera images by holding a flat picture in front of our eyes). Imagine holding a sheet of tracing paper up in front of your eye and sketching the object's silhouette on that paper. The second diagram on this page shows roughly what I mean. You might describe a camera which achieves this as an "ideal" camera.
Of course, in real life, cameras don't work via tracing paper, but with lenses. Very complicated lenses. Have a look at the lens diagram on this page. For various reasons which you could spend a lifetime studying, it is very tricky to create a lens which works exactly like the tracing paper example would work under all conditions. Start with this wiki page and read on if you want to know more.
So you are unlikely to be able to compute an exact relationship between pixel length and distance: you should measure it and fit a curve.
It is a big topic. If you want to proceed from a single image, take a look at this old paper by A. Criminisi. For an in-depth view, read his Ph.D. thesis. Then start playing with the OpenCV routines in the "projective geometry" section.
I have been working on image/object recognition as well. I just released a Python-programmed app (ported to Android) that recognizes objects, people, cars, books, logos, trees, flowers... anything :) It also shows its thought process as it "thinks" :)
I've put it out as a test for 99 cents on google play.
Here's the link if you're interested, there's also a video of it in action:
https://play.google.com/store/apps/details?id=com.davecote.androideyes
Enjoy!
:)