I am using Panda3D 1.10 (with Python 3.6) and I am trying to generate terrain on the fly.
So far, I have been able to produce this:
Now I want to add textures to this terrain. I plan to use different textures for different kinds of ground and biomes, but when I try to add a texture, it is applied to the whole terrain.
I only want to apply a texture to certain parts of the mesh, so I can combine different textures (dirt, grass, sand, etc.) and build a better terrain.
This Panda3D documentation shows an example of how to build the terrain:
from panda3d.core import ShaderTerrainMesh, Shader, load_prc_file_data
# Required for matrix calculations
load_prc_file_data("", "gl-coordinate-system default")
# ...
terrain_node = ShaderTerrainMesh()
terrain_node.heightfield_filename = "heightfield.png" # Must be grayscale; you can also build the image in code with PNMImage().
terrain_node.target_triangle_width = 10.0
terrain_node.generate()
terrain_np = render.attach_new_node(terrain_node)
terrain_np.set_scale(1024, 1024, 60)
terrain_np.set_shader(Shader.load(Shader.SL_GLSL, "terrain.vert", "terrain.frag"))
In that link, there is also an example of both terrain.vert and terrain.frag.
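As an aside, here is roughly how the heightfield could be built in code with PNMImage instead of loading a file. This is only a sketch: the size is a placeholder, and some_noise is a hypothetical helper returning values in [0, 1].
from panda3d.core import PNMImage
img = PNMImage(1024, 1024, 1)  # single channel = grayscale
img.set_maxval(65535)  # 16-bit heights give smoother terrain
for x in range(img.get_x_size()):
    for y in range(img.get_y_size()):
        img.set_gray(x, y, some_noise(x, y))  # some_noise is a hypothetical noise function
img.write("heightfield.png")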
I then tried to apply a texturing guide from the manual, but it seems it doesn't work on ShaderTerrainMesh, as far as I can tell:
ts = TextureStage('ts')
myTexture = loader.loadTexture("textures/Grass.png")
terrain_np.setTexture(ts, myTexture)
terrain_np.setTexScale(ts, 10, 10)
terrain_np.setTexOffset(ts, -25, -25)
The output is the same: no matter how much I change the numbers in setTexScale and setTexOffset, the whole terrain is always covered in grass.
How can I only implement the texture on a certain part of the model?
Obviously, I could generate the image on the fly and do all the modifications with PNMImage(), but that would be slow and difficult, and I am fairly sure it must be possible without rebuilding the texture every time.
EDIT
I've discovered that I can do this to confine a texture to one place:
ts = TextureStage('ts')
myTexture = loader.loadTexture("textures/Grass.png")
myTexture.setWrapU(Texture.WM_border_color)
myTexture.setWrapV(Texture.WM_border_color)
myTexture.setBorderColor(VBase4(0, 0, 0, 0))
terrain_np.setTexture(ts, myTexture)
The problem is that I am not able to change the location of this texture, nor its size. Also, note that I don't want to shrink the texture itself: by making a texture smaller I mean "cutting" or "erasing" the parts that don't fit in the target area, not reducing the overall texture scale.
Sadly, these commands aren't working:
myTexture.setTexScale(ts, LVecBase2(0.5, 250))
myTexture.setTexOffset(ts, LVecBase2(0.15, 0.5))
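My current guess is that the custom GLSL shader loaded above simply ignores the fixed-function texture calls, so the way forward would be to hand the textures to the shader myself and blend them in terrain.frag using a mask ("splat") texture. A rough, untested sketch; the splatmap image and the uniform names are my own invention:
# Pass the textures to the custom shader as named inputs.
splatmap = loader.loadTexture("textures/splatmap.png")  # hypothetical mask image
grass = loader.loadTexture("textures/Grass.png")
terrain_np.set_shader_input("splatmap", splatmap)
terrain_np.set_shader_input("grass_tex", grass)
terrain.frag would then declare uniform sampler2D grass_tex; and uniform sampler2D splatmap; and mix the grass color in wherever the splatmap is white.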
Related
I'm new to the Open Inventor 3D graphics API and I just want to draw a line between two given 3-D coordinates. Let's say the first point is 0,0,0 and the second is 1,1,1. The documentation and examples for this API are really awful, and I can't get it right. I'm using Visual Studio.
If you just need to set the base color (what Open Inventor & OpenGL call diffuse color), which is usually the case for line geometry, then you can set it directly in the SoVertexProperty node.
For example, to make the line in the previous example 'red', add this line:
vprop->orderedRGBA = 0xff0000ff; // By default applies to all vertices
or, more conveniently:
vprop->orderedRGBA = SbColor(1,0,0).getPackedValue();
If you need more control over the appearance of the geometry, add an SoMaterial node to the scene graph before the geometry node.
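For instance, a minimal sketch (the color value is just an example):
SoMaterial* mat = new SoMaterial();
mat->diffuseColor.setValue(1.0f, 0.0f, 0.0f); // red
sceneGraph->addChild(mat); // add before the geometry node so it applies to it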
Assuming you're just asking about creating the line shape - just store the coordinates in an SoVertexProperty node, set that node in an SoLineSet node, then add the line set to your scene graph. Open Inventor will assume that you want to use all the coordinates given, so that's all you need to do.
For just two coordinates it may be easiest to use the set1Value method, but you can also set the coordinates from an array. You didn't say which language you're using, so I'll show the code in C++ (C# and Java would be essentially the same, apart from syntax differences):
SoVertexProperty* vprop = new SoVertexProperty();
vprop->vertex.set1Value( 0, 0,0,0 ); // Set first vertex to be 0,0,0
vprop->vertex.set1Value( 1, 1,1,1 ); // Set second vertex to be 1,1,1
SoLineSet* line = new SoLineSet();
line->vertexProperty = vprop;
sceneGraph->addChild( line );
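Equivalently, here is a sketch that sets both coordinates from an array using setValues:
SbVec3f pts[2] = { SbVec3f(0,0,0), SbVec3f(1,1,1) };
vprop->vertex.setValues(0, 2, pts); // start index, count, source array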
Line thickness is specified by creating an SoDrawStyle property node and adding it to the scene graph before/above the geometry node. Like this:
SoDrawStyle* style = new SoDrawStyle();
style->lineWidth = 3; // "pixels" but see OpenGL docs
parent->addChild( style );
I would like to use the divide blend mode in PIXI.js. How can I do that?
I've tried all the other blend modes available, but none were good enough for the effect I'm trying to make: http://i.stack.imgur.com/YlqVm.jpg
Using Phaser, which is based on pixi.js:
base = game.add.sprite(350, 350, 'base');
base.anchor.set(0.5);
base.tint = 0x7f4f2c; // tint expects a number, not a string
blur = game.add.sprite(350, 350, 'blur');
blur.anchor.set(0.5);
blur.blendMode = PIXI.blendModes.DIVIDE; // Of course it doesn't work because pixi doesn't have it
I tried inverting the image and using the COLOR_DODGE blend mode, which is available in pixi, and it worked. That was just luck: the effect is not 100% identical, but it is very close and good enough in this case.
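In code, the workaround looks roughly like this, where 'blurInverted' stands for a pre-inverted copy of the 'blur' asset (an assumption on my part):
blur = game.add.sprite(350, 350, 'blurInverted');
blur.anchor.set(0.5);
blur.blendMode = PIXI.blendModes.COLOR_DODGE; // dodge on the inverse approximates divide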
I have been trying to figure out how to get two nodes to sense when they are close to each other and then snap together but can't make it work correctly. Basically, I have an AnchorPane that I am dropping new Nodes onto. The new nodes are also anchor panes with several other components on them. When I drop the Node I save anchor points along the outer edge. Then, when I drag another Node next to it, the sides will light up indicating the other node is in range.
I am attempting to make a node that is being dragged next to another node snap to that node. I cannot seem to get the coordinates to translate correctly between each other and I am just ending up with random placement and edge detection.
Here is my code where I am saving the anchor points for the nodes:
double kromaDeviceWidth = kromaDevice.getBoundsInParent().getWidth();
double kromaDeviceHeight = kromaDevice.getBoundsInParent().getHeight();
//This x,y represents the top left corner of the node
double kromaDeviceX = kromaDevice.localToParent(0.0, 0.0).getX();
double kromaDeviceY = kromaDevice.localToParent(0.0, 0.0).getY();
kromaDevice.setTopAnchorPoint(new double[]{kromaDeviceX + kromaDeviceWidth / 2, kromaDeviceY});
kromaDevice.setRightAnchorPoint(new double[]{kromaDeviceX + kromaDeviceWidth, kromaDeviceY + kromaDeviceHeight / 2});
kromaDevice.setBottomAnchorPoint(new double[]{kromaDeviceX + kromaDeviceWidth / 2, kromaDeviceY + kromaDeviceHeight});
kromaDevice.setLeftAnchorPoint(new double[]{kromaDeviceX, kromaDeviceY + kromaDeviceHeight / 2});
The code is identical for when I initially drop a new node and when I am dragging the node. Then, I compare the two node's anchor positions to tell if they are within range:
if (Math.abs(bottomAnchorX - topAnchorX) <= ANCHOR_DISTANCE && Math.abs(bottomAnchorY - topAnchorY) <= ANCHOR_DISTANCE) {
....show correct edge highlight
}
I simplified the above if statement as I am using arrays to store and recall the anchor points.
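Spelled out with those arrays (index 0 holding x and index 1 holding y, matching the snap code further down), the check looks roughly like this:
double[] bottomAnchor = droppedKromaDevice.getBottomAnchorPoint();
double[] topAnchor = parentKromaDevice.getTopAnchorPoint();
if (Math.abs(bottomAnchor[0] - topAnchor[0]) <= ANCHOR_DISTANCE
        && Math.abs(bottomAnchor[1] - topAnchor[1]) <= ANCHOR_DISTANCE) {
    // ...show correct edge highlight
}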
Here is an image of what I am seeing:
You can see the slight yellow highlight when I drag one node over the other when it is offset. It should detect the other node when it is in the position in the second image. My next issue is trying to get them to snap to the right coordinates.
droppedKromaDevice.setLayoutX(parentKromaDevice.getLayoutX());
droppedKromaDevice.setLayoutY(parentKromaDevice.getLayoutY() - droppedKromaDevice.getBoundsInParent().getHeight());
I tried the above with both getLayoutX() and localToParent(0,0).getX(), and they produce the same result. If I place two nodes that are exactly the same size, it actually works, but if they are different sizes at all, they are placed offset from each other. Since I subtract the height from the y, the size shouldn't matter.
Please help. I have been trying to get this to work right for 3 days now and have tried everything I can think of.
Update:
I figured out my proximity issue. The layout for the new node was not being set right. I tried doing a Platform.runLater before I saved the anchor points of the new node but that had no impact. I fixed it by setting the anchor points for all of the nodes in the pane when I click on a node to drag it. That saved the anchor points correctly.
This however did not fix my issue of nodes of different sizes not laying out in the pane correctly. Here is a screenshot of two nodes of the same size snapping together correctly and two nodes of different sizes not snapping correctly. This makes no sense as the math should be the same.
Here is the code to set the layout for the dropped node relative to the other node:
droppedKromaDevice.setLayoutX(parentKromaDevice.getLayoutX());
droppedKromaDevice.setLayoutY(parentKromaDevice.getLayoutY() - droppedKromaDevice.getBoundsInParent().getHeight());
I found the solution to my two problems.
First, when I was creating new nodes and dropping them on the panel, the bounds were not being evaluated correctly, so my anchor points were off. I changed it so that when I click on a node to drag it around, I loop through all of the other nodes on the panel and build their anchors then, instead of when I first drop/create them.
Second, to get the snap positioning to work accurately, I had to base the layout on the delta between the opposite anchor points, not on the bounds of the node. Basically, I take the current x/y of the node I am dropping and then move it by the delta between the dropped node's anchor and the anchor of the node I need to snap it to. The code below is what I used; in the arrays, index 0 is the x coordinate and index 1 is the y coordinate.
droppedKromaDevice.setLayoutX(droppedKromaDevice.getLayoutX() - droppedKromaDevice.getBottomAnchorPoint()[0] + parentKromaDevice.getTopAnchorPoint()[0]);
droppedKromaDevice.setLayoutY(droppedKromaDevice.getLayoutY() - droppedKromaDevice.getBottomAnchorPoint()[1] + parentKromaDevice.getTopAnchorPoint()[1]);
I want to mask the moving objects in a video.
I found that OpenCV has some built-in BackgroundSubtractors, which could save me a lot of time. However, according to the official reference, the function:
void BackgroundSubtractorMOG2::operator()(InputArray image, OutputArray fgmask, double learningRate=-1)
should output a mask, fgmask, but it doesn't: after invoking the method, fgmask contains only the contours of the mask instead. That's weird. All I want is a simple closed region filled with white (for example) to represent the moving objects. How can I do that?
Any reply or recommendation would be much appreciated. Thanks a lot.
Here's my code:
#include <opencv2/opencv.hpp>

int main(int argc, char *argv[])
{
    cv::BackgroundSubtractorMOG2 bg(30, 16.0, false);
    cv::VideoCapture cap(0);
    cv::Mat frame, fmask;
    cv::namedWindow("mask", CV_WINDOW_AUTOSIZE);
    for(;;)
    {
        cap >> frame;
        bg(frame, fmask, -1); // update the model and fetch the foreground mask
        cv::imshow("mask", fmask);
        if(cv::waitKey(30) >= 0) break;
    }
    return 0;
}
A snapshot of the output video is:
p.s. My working environment is OpenCV2.4.3 on OSX 10.8 and XCode 4.5.2 with apple LLVM compiler 4.1.
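One idea I had, though I have not tested it, is to post-process fgmask by filling the outlines OpenCV finds, something like:
// Fill each detected outline so the mask becomes solid white blobs.
std::vector<std::vector<cv::Point> > contours;
cv::findContours(fmask.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // clone: findContours modifies its input
cv::drawContours(fmask, contours, -1, cv::Scalar(255), CV_FILLED);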
If you want to acquire whole objects filled with white pixels in the foreground, then let me first ask about your experience.
My question is: with the code you posted above, do you get more white pixels when you generate more motion in front of your camera?
If yes, then there are two parameters to learn about for your requirement.
The first is the history parameter, which you have configured as 30 in the constructor BackgroundSubtractorMOG2(30, 16.0, false). You can test this parameter by increasing it, say to 300. It maintains the motion history of the object in the foreground, so if you move completely away from your starting location within those 300 frames, you will get the whole object covered with white pixels, as you want. But it is erased gradually, so it cannot give you a 100% solution.
The second parameter is the learning rate. In the code you posted, bg(frame, fmask, -1), the -1 is the learning rate. You can set it between 0.0 and 1.0; the default is -1. When you set it to 0, you will get what you want for objects that were not part of the frame at the start of the video. You could call these "foreign objects"; they will stay covered with white pixels.
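For instance, both tweaks applied to your code would look roughly like this (a sketch, not tested on your setup):
cv::BackgroundSubtractorMOG2 bg(300, 16.0, false); // history raised from 30 to 300
// ... and inside the loop:
bg(frame, fmask, 0); // learning rate 0: the model stops adapting, so foreign objects stay white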
Experiment based on the information above and share your experience.
This question kind of starts where this question ends up. MATLAB has a powerful and flexible image display system which lets you use the imshow and plot commands to display complex images and then save the result. For example:
im = imread('image.tif');
f = figure, imshow(im, 'Border', 'tight');
rectangle('Position', [100, 100, 10, 10]);
print(f, '-r80', '-dtiff', 'image2.tif');
This works great.
The problem is that if you are doing a lot of image processing, it starts to be a real drag to show every image you create; you mostly want to just save them. I know I could write directly to an image and then save the result, but using plot/rectangle/imshow is so easy that I'm hoping there is a way to call plot, imshow, etc., suppress the display, and then save what would have been displayed. Does anyone know any quick solutions for this?
Alternatively, a quick way to put a spline onto a bitmap might work...
When you create the figure, set the Visible property to 'off':
f = figure('visible','off')
Which in your case would be
im = imread('image.tif');
f = figure('visible','off'), imshow(im, 'Border', 'tight');
rectangle('Position', [100, 100, 10, 10]);
print(f, '-r80', '-dtiff', 'image2.tif');
And if you want to view it again you can do
set(f,'visible','on')
The simple answer to your question is given by Bessi and Mr Fooz: set the 'Visible' setting for the figure to 'off'. Although it's very easy to use commands like IMSHOW and PRINT to generate figures, I'll summarize why I think it's not necessarily the best option:
As illustrated by Mr Fooz's answer, many other factors come into play when saving figures as images. The output you get depends on many figure and axes settings, which increases the likelihood that you will not get the output you want. This can be especially problematic if your figures are invisible, since you won't notice a discrepancy caused by a change in a default figure or axes setting. In short, your output becomes highly sensitive to a number of settings that you would then have to control explicitly in your code, as Mr Fooz's example shows.
Even if you're not viewing the figures as they are made, you're still making MATLAB do more work than is really necessary: graphics objects are still created even if they are never rendered. If speed is a concern, generating images from figures doesn't seem like the ideal solution.
My suggestion is to modify the image data directly and save it using IMWRITE. It may not be as easy as using IMSHOW and the other plotting tools, but it is more efficient and gives more robust, consistent results that are not sensitive to various plot settings. For the example you give, the alternative code for drawing a black rectangle could look something like this:
im = imread('image.tif');
[r,c,d] = size(im);
x0 = 100;
y0 = 100;
w = 10;
h = 10;
x = [x0:x0+w x0*ones(1,h+1) x0:x0+w (x0+w)*ones(1,h+1)];
y = [y0*ones(1,w+1) y0:y0+h (y0+h)*ones(1,w+1) y0:y0+h];
index = sub2ind([r c],y,x);
im(index) = 0;
im(index+r*c) = 0;
im(index+2*r*c) = 0;
imwrite(im,'image2.tif');
I'm expanding on Bessi's solution here a bit. I've found that it's very helpful to know how to have the image take up the whole figure and to be able to tightly control the output image size.
% prevent the figure window from appearing at all
f = figure('visible','off');
% alternative way of hiding an existing figure
set(f, 'visible','off'); % can use the GCF function instead
% If you start getting odd error messages or blank images,
% add in a DRAWNOW call. Sometimes it helps fix rendering
% bugs, especially in long-running scripts on Linux.
%drawnow;
% optional: have the axes take up the whole figure
subplot('position', [0 0 1 1]);
% show the image and rectangle
im = imread('peppers.png');
imshow(im, 'border','tight');
rectangle('Position', [100, 100, 10, 10]);
% Save the image, controlling exactly the output
% image size (in this case, making it equal to
% the input's).
[H,W,D] = size(im);
dpi = 100;
set(f, 'paperposition', [0 0 W/dpi H/dpi]);
set(f, 'papersize', [W/dpi H/dpi]);
print(f, sprintf('-r%d',dpi), '-dtiff', 'image2.tif');
If you'd like to render the figure to a matrix, type "help #avifile/addframe", then extract the subfunction called "getFrameForFigure". It's a Mathworks-supplied function that uses some (currently) undocumented ways of extracting data from figure.
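Alternatively, if all you need is the rendered pixels in a matrix, the documented GETFRAME may be enough. A sketch; note that some MATLAB releases require the figure to be visible for GETFRAME to capture it:
fr = getframe(f); % struct with the rendered pixels in fr.cdata
imwrite(fr.cdata, 'image2.tif');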
Here is a completely different answer:
If you want an image file out, why not just save the image instead of the entire figure?
im = magic(10)
imwrite(im/max(im(:)),'magic.jpg')
Then prove that it worked.
imshow('magic.jpg')
This works for indexed and RGB images, and for different output formats.
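For example, the indexed case just passes the colormap along (a sketch; the file names are arbitrary):
[X, map] = rgb2ind(imread('peppers.png'), 256); % quantize an RGB image to 256 colors
imwrite(X, map, 'indexed.png'); % the colormap is stored with the image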
You could start MATLAB with the -noFigureWindows flag to disable all figure windows.