I'm writing a program to generate some wild visuals. So far I can paint each pixel with a random blue value:
for (y = 0; y < YMAX; y++) {
    for (x = 0; x < XMAX; x++) {
        b = rand() % 256; /* 0..255; rand() % 255 would never produce 255 */
        setPixelColor(x, y, r, g, b);
    }
}
I'd like to do more than just make blue noise, but I'm not sure where to start (Google isn't helping me much today), so it would be great if you could share anything you know on the subject or some links to related resources.
I used to do this kind of trick back in the day. Unfortunately, I don't have the code :-/
You'll be amazed at the effects that bitwise and integer arithmetic operators can produce:
FRAME_ITERATION++;
for (y = 0; y < YMAX; y++) {
    for (x = 0; x < XMAX; x++) {
        b = (x | y) % FRAME_ITERATION;
        setPixelColor(x, y, r, g, b);
    }
}
Sorry, but I don't remember the exact combinations, so b = (x | y) % FRAME_ITERATION; might not actually render anything beautiful. But you can try your own combos.
Anyway, with code like the above, you can produce weird patterns and even water-like effects.
Waves are usually done with trig functions (sin/cos) or tables that approximate them.
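For instance, here is a minimal sketch of a trig-based pattern in that spirit, reusing XMAX, YMAX, and setPixelColor from your snippet (the frequencies and the t parameter are arbitrary values I picked to play with):

#include <math.h>

/* t advances a little each frame to animate the pattern */
void renderPlasma(float t) {
    for (int y = 0; y < YMAX; y++) {
        for (int x = 0; x < XMAX; x++) {
            /* sum of three sines is in [-3, 3]; map it to [0, 255] */
            float v = sinf(x * 0.06f + t)
                    + sinf(y * 0.04f - t)
                    + sinf((x + y) * 0.05f + t * 0.5f);
            int b = (int)((v + 3.0f) * (255.0f / 6.0f));
            setPixelColor(x, y, 0, 0, b);
        }
    }
}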
You can also do some cool water ripples with some simple math. See here for code and an online demo.
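In case that link goes stale: the classic ripple trick keeps two height buffers, and each step replaces a cell with the average of its four neighbors minus the cell's old value, with a little damping. A rough sketch (the buffer layout and the damping shift are my own choices):

/* prev and next are two XMAX x YMAX height buffers, swapped every frame */
void rippleStep(int prev[YMAX][XMAX], int next[YMAX][XMAX]) {
    for (int y = 1; y < YMAX - 1; y++) {
        for (int x = 1; x < XMAX - 1; x++) {
            /* neighbor average minus the old value propagates the wave */
            int v = ((prev[y-1][x] + prev[y+1][x] +
                      prev[y][x-1] + prev[y][x+1]) >> 1) - next[y][x];
            next[y][x] = v - (v >> 5); /* damping so ripples fade out */
        }
    }
}

Poke a dent into prev wherever the user clicks, run rippleStep each frame, and map the heights to pixel brightness.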
I am having trouble configuring ICP in PCL 1.6 (on Android), and I believe that is the cause of an incorrect transformation matrix. What I've tried so far is this:
Downsample the point clouds I use in ICP with the following code:
pcl::VoxelGrid<pcl::PointXYZ> grid;
grid.setLeafSize(6, 6, 6);
grid.setInputCloud(output);
grid.filter(*tgt);
grid.setInputCloud(cloud_src);
grid.filter(*src);
Try to align the two point clouds with the following code:
pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
icp.setInputCloud(tgt);
icp.setInputTarget(src);
pcl::PointCloud<pcl::PointXYZ> Final;
pcl::PointCloud<pcl::PointXYZ> transformedCloud;
icp.setMaxCorrespondenceDistance(0.08);
icp.setMaximumIterations(500);
//icp.setTransformationEpsilon (0.000000000000000000001);
icp.setTransformationEpsilon(1e-12);
icp.setEuclideanFitnessEpsilon(1e-12);
icp.setRANSACIterations(2000);
icp.setRANSACOutlierRejectionThreshold(0.6);
I have tried all kinds of values in the options of ICP but with no success.
The code I use to create the point cloud:
int density = 1;
for (int y = 0; y < depthFrame.getHeight(); y = y + density) {
for (int x = 0; x < depthFrame.getWidth(); x = x + density) {
short z = pDepthRow[y * depthVideoMode.getResolutionX() + x];
if (z > 0 && z < depthStream.getMaxPixelValue()) {
cloud_src->points[counter].x = x;
cloud_src->points[counter].y = y;
cloud_src->points[counter].z = z;
}
}
}
Any chance somebody can help me out configuring ICP?
As far as I know, PCL uses meters as its unit of measurement, so is the 6 x 6 x 6 leaf size for downsampling really intended? That is huge!
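For indoor-scale data in meters, a leaf size in the centimeter range is more usual; a sketch (the 5 cm value is my guess, tune it to your clouds):

pcl::VoxelGrid<pcl::PointXYZ> grid;
grid.setLeafSize(0.05f, 0.05f, 0.05f); // 5 cm voxels, assuming the cloud is in meters
grid.setInputCloud(cloud_src);
grid.filter(*src);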
Try some lower values, and if that doesn't work either, try ICP with the simplest possible configuration:
pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
icp.setInputCloud(tgt);
icp.setInputTarget(src);
pcl::PointCloud<pcl::PointXYZ> Final;
icp.align(Final);
Once you've got this working (and it should work), start tuning the other parameters, and it shouldn't be that hard to do.
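To judge each tuning step, you can inspect the result after align(); these are standard accessors on pcl::IterativeClosestPoint:

icp.align(Final);
if (icp.hasConverged()) {
    std::cout << "fitness score: " << icp.getFitnessScore() << std::endl;
    std::cout << icp.getFinalTransformation() << std::endl; // the 4x4 transform
}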
Hope that helps, Cheers!
I'm having trouble finding a way to calculate a "cross field" inside an arbitrary polygon.
A cross field, as defined by one paper, is the smoothest field that is tangential to the domain boundary (in this case the polygon). I find it a lot in quad re-topology papers, but surprisingly I can't find a definition of a cross field even on Wikipedia.
I have images, but since I'm new here the system says I need at least 10 reputation points to upload them.
Any ideas?
I think it could be something along the lines of an interpolation: given an inner point, determine the distance to each edge and integrate, or weight-sum, the tangent and perpendicular vectors of every edge by that distance (or any other factor, in fact)?
But maybe other, simpler approaches exist?
Thanks in advance!
I've come up with something like this (for the 3D case); it's very raw, for educational purposes:
float distance2segment(Vector3D p, Vector3D p0, Vector3D p1) {
    Vector3D v = p1 - p0;
    Vector3D w = p - p0;
    float c1 = v.Dot(w);
    if (c1 <= 0)
        return (p - p0).Length(); // p projects before p0: closest to that end
    float c2 = v.Dot(v);
    if (c2 <= c1)
        return (p - p1).Length(); // p projects past p1: closest to that end
    float b = c1 / c2;
    Vector3D pb = p0 + b * v;     // projection of p onto the segment
    return (p - pb).Length();
}
void CrossFieldInterpolation(List<Vector3D>& Contour, List<Vector3D>& ContourN,
                             Vector3D p, Vector3D& crossU, Vector3D& crossV) {
    int N = Contour.Amount();
    crossU = crossV = Vector3D(0, 0, 0); // accumulators must start at zero
    for (int i = 0; i < N; i++) {
        Vector3D u = Contour[(i + 1) % N] - Contour[i];
        Vector3D n = 0.5 * (ContourN[(i + 1) % N] + ContourN[i]);
        Vector3D v = -Vector3D::Cross(u, n); // perpendicular vector
        u = Vector3D::Normalize(u);
        n = Vector3D::Normalize(n);
        v = Vector3D::Normalize(v);
        float dist = distance2segment(p, Contour[i], Contour[(i + 1) % N]);
        crossU += u / (1 + dist); // +1 avoids infinity at points on the segment
        crossV += v / (1 + dist);
    }
    crossU = Vector3D::Normalize(crossU);
    crossV = Vector3D::Normalize(crossV);
}
You can check the open-source Graphite software that I'm developing; it implements the "Periodic Global Parameterization" algorithm [1], which was developed in my research team. You may also be interested in the following research articles with algorithms that we developed more recently [2], [3].
Graphite website:
http://alice.loria.fr/software/graphite
How to use Periodic Global Parameterization:
http://alice.loria.fr/WIKI/index.php/Graphite/PGP
[1] http://alice.loria.fr/index.php/publications.html?Paper=TOG_pgp%402006
[2] http://alice.loria.fr/index.php/publications.html?Paper=DGF#2008
[3] http://alice.loria.fr/index.php/publications.html?redirect=0&Paper=DFD#2008&Author=vallet
I am trying to use the Project Tango C API, but the application crashes with no error if the point cloud holds more than ~6.5k points (found after some testing) with the following code:
int width = mImageSource->getDepthImageSize().x;
int height = mImageSource->getDepthImageSize().y;
double fx = mImageSource->calib.intrinsics_d.projectionParamsSimple.fx;
double fy = mImageSource->calib.intrinsics_d.projectionParamsSimple.fy;
double cx = mImageSource->calib.intrinsics_d.projectionParamsSimple.px;
double cy = mImageSource->calib.intrinsics_d.projectionParamsSimple.py;

memset(inputRawDepthImage->GetData(MEMORYDEVICE_CPU), -1, sizeof(short) * width * height);

for (int i = 0; i < XYZ_ij->xyz_count; i++) {
    float X = XYZ_ij->xyz[i*3][0];
    float Y = XYZ_ij->xyz[i*3][1];
    float Z = XYZ_ij->xyz[i*3][2];
    if (Z < EPSILON || (X < EPSILON && -X < EPSILON) || (Y < EPSILON && -Y < EPSILON) ||
        X != X || Y != Y || Z != Z)
        continue;
    int x_2d = (int)(fx * X / Z + cx);
    int y_2d = (int)(fy * Y / Z + cy);
    if (x_2d >= 0 && x_2d < width && y_2d >= 0 && y_2d < height && (x_2d != 0 || x_2d != 0)) {
        inputRawDepthImage->GetData(MEMORYDEVICE_CPU)[x_2d + y_2d * width] = (short)(Z * 1000);
    } else {
        continue;
    }
}
However, if I use for (int i = 0; i < XYZ_ij->xyz_count && i < 6500; i++), everything works fine. I am just wondering whether there is an upper bound on accessing the point cloud with the C API, or whether I did something wrong.
(width is 320, height is 180, and other intrinsics are loaded from Tango API)
In addition, Google mentions using a nearest-neighbor filter to get a dense depth map at the bottom of this page. Is there an interface in the Tango API for this? Or can anyone suggest an open-source implementation of it?
I am also wondering if there is any way to "pull" the colored image (1280x720) in onXYZijAvailable, because I need a dense, synchronized colored point cloud. Do I need to apply the extrinsic matrix to align the two coordinate frames, or do I only need to subsample the color image (assuming their coordinate systems are the same)?
Thank you for any advice!
In your code that looks up the depth sample coordinates...
for (int i = 0; i < XYZ_ij->xyz_count; i++) {
    float X = XYZ_ij->xyz[i*3][0];
    float Y = XYZ_ij->xyz[i*3][1];
    float Z = XYZ_ij->xyz[i*3][2];
...you should be using an index of i, not i*3. It is a 2D array so you don't have to manage the stride for the higher dimension yourself.
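That is (same names as your snippet):

float X = XYZ_ij->xyz[i][0];
float Y = XYZ_ij->xyz[i][1];
float Z = XYZ_ij->xyz[i][2];

It also explains the magic number: with i*3 you read past the end of the array as soon as i exceeds roughly a third of xyz_count, which matches the ~6.5k limit you observed.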
The SDK does not provide a call to fill in locations with no depth samples, probably because there are many approaches with different tradeoffs. The Wikipedia page on nearest neighbor search is a reasonable place to start. There is an interface to FLANN in OpenCV.
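For a concrete starting point, here is a rough sketch of nearest-neighbor hole filling with OpenCV's FLANN wrapper (this is my own example, not Tango API code; it assumes invalid depth pixels are <= 0):

#include <opencv2/core.hpp>
#include <opencv2/flann.hpp>

// Fill invalid pixels of a CV_16S depth map from their nearest valid neighbor.
void fillHolesNearestNeighbor(cv::Mat& depth) {
    // collect the coordinates of all valid depth samples
    std::vector<cv::Point2f> valid;
    for (int y = 0; y < depth.rows; y++)
        for (int x = 0; x < depth.cols; x++)
            if (depth.at<short>(y, x) > 0)
                valid.push_back(cv::Point2f((float)x, (float)y));
    if (valid.empty()) return;

    cv::Mat features = cv::Mat(valid).reshape(1); // N x 2, CV_32F
    cv::flann::Index index(features, cv::flann::KDTreeIndexParams(4));

    for (int y = 0; y < depth.rows; y++)
        for (int x = 0; x < depth.cols; x++) {
            if (depth.at<short>(y, x) > 0) continue;
            float q[2] = { (float)x, (float)y };
            cv::Mat query(1, 2, CV_32F, q);
            std::vector<int> idx(1);
            std::vector<float> dist(1);
            index.knnSearch(query, idx, dist, 1, cv::flann::SearchParams(32));
            const cv::Point2f& p = valid[idx[0]];
            depth.at<short>(y, x) = depth.at<short>((int)p.y, (int)p.x);
        }
}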
The SDK will only deliver the latest color image to you. If you want a prior image (e.g. one with a timestamp close to your depth samples) you will have to manage that yourself. Because you can never get a color image at exactly the same timestamp as your depth samples (the same camera is used in different modes for both), you theoretically should apply the extrinsic pose to align them. In practice, if the motion is small over the half frame time or less between the timestamps, I think most people are going to ignore it.
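If you do apply it, the alignment is just a rigid transform of each depth point into the color camera frame before projecting. A minimal sketch (the 4x4 matrix would come from the device's depth-to-color extrinsic calibration):

// Apply a 4x4 rigid transform (row-major) to a 3D point.
void transformPoint(const float T[16], const float in[3], float out[3]) {
    for (int r = 0; r < 3; r++) {
        out[r] = T[r*4 + 0] * in[0] + T[r*4 + 1] * in[1]
               + T[r*4 + 2] * in[2] + T[r*4 + 3];
    }
}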
I'm using OsmDroid with OpenStreetMap and can make markers and polylines, but I can't find any examples of how to make 161 m / 528 ft circles around a marker.
a) How do I make circles?
b) How do I make them 161m/528ft in size?
Thanks to MKer, I got an idea of how to solve the problem and made this piece of code, which works:
oPolygon = new org.osmdroid.bonuspack.overlays.Polygon(this);
final double radius = 161;
ArrayList<GeoPoint> circlePoints = new ArrayList<GeoPoint>();
for (float f = 0; f < 360; f += 1) {
    circlePoints.add(new GeoPoint(latitude, longitude).destinationPoint(radius, f));
}
oPolygon.setPoints(circlePoints);
oMap.getOverlays().add(oPolygon);
I know this can be optimized. I'm drawing 360 points, no matter what the zoom is!
If you want a "graphical" circle, then you can implement easily your own CircleOverlay, using the DirectedLocationOverlay as a very good starting point.
If you want a "geographical" circle (than will appear more or less as an ellipse), then you can use the OSMBonusPack Polygon, that you will define with this array of GeoPoints:
ArrayList<GeoPoint> circlePoints = new ArrayList<GeoPoint>();
double radiusDeg = radius / 111320.0; // rough conversion: meters to degrees of latitude
int iSteps = 60;                      // number of points along the circle
double fStepSize = 2 * Math.PI / iSteps;
for (double f = 0; f < 2 * Math.PI; f += fStepSize) {
    circlePoints.add(new GeoPoint(centerLat + radiusDeg * Math.sin(f),
                                  centerLon + radiusDeg * Math.cos(f)));
}
(Warning: I translated this from a piece of Nominatim PHP code, without testing.)
I need to process the first "Original" image to get something similar to the second "Enhanced" one. I applied some naive calculations, and the new image has more contrast and stronger colors, but in the regions with higher color values a hole appears. I have no idea about image processing; it would be great if you could suggest which concepts and/or algorithms I could apply to get the result without this problem.
Convert the image to the HSB (Hue, Saturation, Brightness) color space.
Multiply the saturation by some amount. Use a cutoff value if your platform requires it.
Example in Mathematica:
satMult = 4; (*saturation multiplier *)
imgHSB = ColorConvert[Import["http://i.imgur.com/8XkxR.jpg"], "HSB"];
cs = ColorSeparate[imgHSB]; (* separate in H, S and B*)
newSat = Image[ImageData[cs[[2]]] * satMult]; (* cs[[2]] is the saturation*)
ColorCombine[{cs[[1]], newSat, cs[[3]]}, "HSB"] (* rebuild the image *)
(A table of results with increasing saturation values was shown here.)
The "holes" that you see in the processed picture are the darker areas of the original picture, which went to negative values with your darkening algorithm. I suspect these out of range values are then written to the new image as positive numbers, so they end up in the higher part of the brightness scale. For example, let's say a pixel value is 10, and you are substracting 12 from all pixels to darken them a bit. This pixel will underflow and become -2. When you write it back to the file, -2 gets represented as 0xfe in hex, and this is 254 if you take it as an unsigned number.
You should use an algorithm that keeps the pixel values within the valid range, or at least you should "clamp" the values to the valid range. A typical clamp function defined as a C macro would be:
#define clamp(p) ((p) < 0 ? 0 : ((p) > 255 ? 255 : (p)))
If you add the above macro to your processing function it will take care of the "holes", but instead you will now have dark colors in those places.
If you are ready for something a bit more advanced, Wikipedia has the brightness and contrast formulas that are used by GIMP. These will do a pretty good job with your image if you choose the proper coefficients.
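The simplest member of that family is a linear gain/bias adjustment (a generic sketch of mine, not GIMP's exact formula; alpha > 1 raises contrast, beta shifts brightness):

/* out = clamp(alpha * in + beta), applied per channel */
unsigned char adjustPixel(unsigned char in, float alpha, float beta) {
    float v = alpha * (float)in + beta;
    if (v < 0.0f)   return 0;
    if (v > 255.0f) return 255;
    return (unsigned char)(v + 0.5f); /* round to nearest */
}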
This Wikipedia article does a good job of explaining histogram equalization for contrast enhancement.
Code for grayscale images:
#include <stdlib.h>
#include <math.h>

unsigned char* EnhanceContrast(unsigned char* data, int width, int height)
{
    /* build the histogram, then turn it into a cumulative distribution (CDF) */
    int* cdf = (int*) calloc(256, sizeof(int));
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            cdf[data[width*y + x]]++;
        }
    }

    /* cdf_min is the CDF value of the first non-empty bin */
    int cdf_min = 0;
    for (int i = 0; i < 256; i++) {
        if (i > 0)
            cdf[i] += cdf[i-1];
        if (cdf_min == 0 && cdf[i] > 0)
            cdf_min = cdf[i];
    }

    /* remap each pixel: round((cdf(v) - cdf_min) * 255 / (M*N - cdf_min)) */
    unsigned char* enhanced_data = (unsigned char*) malloc(width*height);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            enhanced_data[width*y + x] = (unsigned char) round(
                (cdf[data[width*y + x]] - cdf_min) * 255.0 / (width*height - cdf_min));
        }
    }

    free(cdf);
    return enhanced_data;
}
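Hypothetical usage, where pixels is a width x height grayscale buffer you already own:

unsigned char* out = EnhanceContrast(pixels, width, height);
/* ... display or save out ... */
free(out);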