I would like to use the divide blend mode in PIXI.js. How can I do it?
I've tried all the other available blendModes, but none were good enough for the effect I'm trying to make. http://i.stack.imgur.com/YlqVm.jpg
Using Phaser, which is based on pixi.js:
base = game.add.sprite(350, 350, 'base');
base.anchor.set(0.5);
base.tint = 0x7f4f2c;
blur = game.add.sprite(350, 350, 'blur');
blur.anchor.set(0.5);
blur.blendMode = PIXI.blendModes.DIVIDE; // Of course it doesn't work because pixi doesn't have it
I tried inverting the image and using the COLOR_DODGE blend mode, which is available in pixi, and it worked. It felt like a lucky find. The effect is not 100% identical, but it is very close and good enough in this case.
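For what it's worth, this is less luck than it looks: with colors normalized to [0, 1] and the standard blend-mode definitions, inverting the top layer turns color dodge into an exact divide. A sketch of the algebra (not from the original post):
divide(base, blend)          = base / blend
color_dodge(base, blend)     = base / (1 - blend)
color_dodge(base, 1 - blend) = base / (1 - (1 - blend)) = base / blend
The small remaining differences come from 8-bit rounding and clamping in the renderer, not from the formula.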
Related
I am using the Vuforia video playback demo with cloud recognition.
I have combined both projects and it is working properly. Currently the video dimensions depend on the detected object, but I need a fixed width and height when the video plays.
Can anyone help me?
Thanks in advance.
Well, apparently Vuforia fixes the width and height at the start of the game, no matter what the size of the object is. I could not find exactly when this happens, but it is done at the beginning of your game. Once you change the size of the ImageTarget at runtime, it is no longer fixed. Add these lines to the OnTrackingFound function of DefaultTrackableEventHandler.cs:
// isScaled is a bool field you declare in the class; it should start out false
if (this.name == "WhateverTheNameOfYourRelatedImageTarget" && !isScaled)
{
    // Increase the size however you want; here one unit is added to each dimension
    this.transform.localScale += Vector3.one;
    // Set isScaled so the target is not rescaled every time it is found
    isScaled = true;
}
Good luck!
What I usually do, instead of video playback, is play the video on a canvas object and hook that object to the DefaultTrackableEventHandler script. When the target is found, I call gameObject.SetActive(true), and gameObject.SetActive(false) when the target is lost. This way the size of the game object is fixed and it stays in a fixed location.
I just made an example you can get here (you have to import it into a project and open the scene Assets/VideoExample/Examples). There you can see a bit more clearly what ScreenSpace - Overlay does... it might be better to just switch to ScreenSpace - Camera in general.
I am using Panda3D 1.10 (with Python 3.6) and I am trying to generate terrain on the fly.
Currently, I am able to produce this:
Now, my idea is to add textures to this terrain. I plan to use different textures for different kinds of ground and biomes, but when I try to add a texture, it is applied to the whole terrain.
I only want to apply the texture to certain parts of the mesh, so I can combine different textures (dirt, grass, sand, etc.) and make a better terrain.
From this Panda3D documentation you can see an example of how to make the terrain:
from panda3d.core import ShaderTerrainMesh, Shader, load_prc_file_data
# Required for matrix calculations
load_prc_file_data("", "gl-coordinate-system default")
# ...
terrain_node = ShaderTerrainMesh()
terrain_node.heightfield_filename = "heightfield.png" # Must be grayscale; you can also generate the image in code with PNMImage()
terrain_node.target_triangle_width = 10.0
terrain_node.generate()
terrain_np = render.attach_new_node(terrain_node)
terrain_np.set_scale(1024, 1024, 60)
terrain_np.set_shader(Shader.load(Shader.SL_GLSL, "terrain.vert", "terrain.frag"))
In that link, there is also an example of both terrain.vert and terrain.frag.
I tried to apply this guide, but it seems it doesn't work on a ShaderTerrainMesh, or so I think:
ts = TextureStage('ts')
myTexture = loader.loadTexture("textures/Grass.png")
terrain_np.setTexture(ts, myTexture)
terrain_np.setTexScale(ts, 10, 10)
terrain_np.setTexOffset(ts, -25, -25)
The output is the same. No matter how much I change the numbers in setTexScale and setTexOffset, the output is always entirely covered with grass.
How can I only implement the texture on a certain part of the model?
Obviously, I could generate the image on the fly and do all the modifications with PNMImage(), but that would be slow and difficult, and I am fairly sure it must be possible without rebuilding the texture each time.
EDIT
I've discovered that I can do this in order to only put a texture in a place:
ts = TextureStage('ts')
myTexture = loader.loadTexture("textures/Grass.png")
myTexture.setWrapU(Texture.WM_border_color)
myTexture.setWrapV(Texture.WM_border_color)
myTexture.setBorderColor(VBase4(0, 0, 0, 0))
terrain_np.setTexture(ts, myTexture)
The problem is that I am not able to change the location of this texture, nor its size. Also, note that I don't want to reduce the scale of the texture: when I say make a texture smaller, I mean "cut" or "erase" the parts that don't fit in the target area, not shrink the overall texture.
Sadly these commands aren't working (setTexScale and setTexOffset are NodePath methods rather than Texture methods, and even called on the NodePath they don't give me what I want):
terrain_np.setTexScale(ts, LVecBase2(0.5, 250))
terrain_np.setTexOffset(ts, LVecBase2(0.15, 0.5))
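For reference, a common way to get per-region textures on a ShaderTerrainMesh is texture splatting: because the terrain is drawn by your own GLSL shader, you pass the tile textures plus a low-resolution "splat map" as shader inputs and blend them per fragment, instead of relying on fixed-function texture stages. A minimal Python-side sketch; the names grass_tex, dirt_tex, and splat_map are hypothetical and must match sampler uniforms you declare in terrain.frag:
# Hypothetical splat-map setup: each channel of splat_map.png holds the
# blend weight of one tile texture (R = grass, G = dirt, ...)
grass = loader.loadTexture("textures/Grass.png")
dirt = loader.loadTexture("textures/Dirt.png")
splat = loader.loadTexture("textures/splat_map.png")
terrain_np.set_shader_input("grass_tex", grass)
terrain_np.set_shader_input("dirt_tex", dirt)
terrain_np.set_shader_input("splat_map", splat)
# In terrain.frag, something along these lines:
#   uniform sampler2D grass_tex, dirt_tex, splat_map;
#   vec4 w = texture(splat_map, terrain_uv);
#   color = w.r * texture(grass_tex, terrain_uv * 10.0)
#         + w.g * texture(dirt_tex,  terrain_uv * 10.0);
This keeps the tile textures at full resolution, while the splat map only needs enough resolution to mark where each ground type goes.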
I want to mask the moving objects in a video.
I found that OpenCV has some built-in BackgroundSubtractors which could save me a lot of time. However, according to the official reference, the function:
void BackgroundSubtractorMOG2::operator()(InputArray image, OutputArray fgmask, double learningRate=-1)
should output a mask, fgmask, but it doesn't. Instead, after invoking the method above, the fgmask variable contains only the "contour of the mask". That's weird. All I want is a simple closed region filled with white (for example) to represent the moving objects. How can I do that?
Any reply or recommendation would be much appreciated. Thanks a lot.
Here's my code:
int main(int argc, char *argv[])
{
    // history = 30 frames, varThreshold = 16, shadow detection disabled
    cv::BackgroundSubtractorMOG2 bg(30, 16.0, false);
    cv::VideoCapture cap(0);
    cv::Mat frame, fmask;
    cv::namedWindow("mask", CV_WINDOW_AUTOSIZE);
    for(;;)
    {
        cap >> frame;
        if (frame.empty()) break;
        bg(frame, fmask, -1); // -1 = automatically chosen learning rate
        cv::imshow("mask", fmask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
A snapshot of the output video is:
P.S. My working environment is OpenCV 2.4.3 on OS X 10.8, with Xcode 4.5.2 and the Apple LLVM 4.1 compiler.
If you want to acquire whole objects filled with white pixels in the foreground, then I would first ask about your experience: with the code you posted above, do you get more white pixels when you generate more motion in front of your camera?
If yes, then there are two parameters to learn about for your requirement.
The first is the history parameter, which you have configured as 30 in the constructor BackgroundSubtractorMOG2(30, 16.0, false). You can test this parameter by increasing it, say to 300. It maintains the motion history of the object in the foreground, so if you move completely away from your starting location within those 300 frames, you will get the whole object covered with white pixels, as you want. But it is erased gradually, so it cannot give you a 100% solution.
The second parameter is the learning rate. In the code you posted, bg(frame, fmask, -1), the -1 is your learning rate. You can set it between 0.0 and 1.0; the default of -1 means it is chosen automatically. When you set it to 0, the background model is never updated, so you will get what you want for objects that were not part of the frame at the start of the video. You could call these "foreign objects"; they will stay covered with white pixels.
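To illustrate both parameters together, here is a minimal sketch using OpenCV's Python bindings (the 2.4-era API, to match the question; newer versions construct the subtractor with cv2.createBackgroundSubtractorMOG2 instead):
import cv2

# history = 300 frames, varThreshold = 16, shadow detection off
bg = cv2.BackgroundSubtractorMOG2(300, 16.0, False)
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # learningRate=0 freezes the background model, so "foreign" objects
    # that enter the scene stay covered with white pixels
    fgmask = bg.apply(frame, learningRate=0)
    cv2.imshow("mask", fgmask)
    if cv2.waitKey(30) >= 0:
        break
cap.release()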
Experiment with the information above and share your experience.
Can you suggest a good option for background subtraction using EmguCV? My project is real-time pedestrian detection.
Not sure if you still need this, but... in EmguCV, if you have two images of, say, type Image<Bgr, Byte> (or any other type) called img1 and img2, doing img1 - img2 does work! There is also a function called AbsDiff; I think it works like this: img1.AbsDiff(img2). You could look into that.
If you already have a picture of the background (img1) and the current frame (img2), you can do the above.
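The same frame-differencing idea, sketched with OpenCV's Python API for illustration (the file names are hypothetical):
import cv2

background = cv2.imread("background.png")  # a frame containing only the background
frame = cv2.imread("frame.png")            # the current frame

diff = cv2.absdiff(frame, background)          # per-pixel |frame - background|
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)  # white = changed pixels
A fixed threshold like 25 is a crude choice; for real pedestrian detection, adaptive subtractors such as MOG2 (discussed above) usually hold up better under lighting changes.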
This is quite possible; take a look at the "MotionDetection" example provided with EMGU. It should get you started.
The code that extracts the foreground is the "_forgroundDetector"; it is the "_motionHistory" that stores what movement has occurred.
The example has everything you need; if you have trouble running it, let me know.
Cheers,
Chris
See: Removing background from _capture.QueryFrame()
This question kind of starts where this question ends up. MATLAB has a powerful and flexible image display system which lets you use the imshow and plot commands to display complex images and then save the result. For example:
im = imread('image.tif');
f = figure, imshow(im, 'Border', 'tight');
rectangle('Position', [100, 100, 10, 10]);
print(f, '-r80', '-dtiff', 'image2.tif');
This works great.
The problem is that if you are doing a lot of image processing, it starts to be a real drag to show every image you create; you mostly want to just save them. I know I could write directly to an image and then save the result. But using plot/rectangle/imshow is so easy, so I'm hoping there is a command that lets me call plot, imshow, etc., suppress the display, and then save what would have been displayed. Does anyone know any quick solutions for this?
Alternatively, a quick way to put a spline onto a bitmap might work...
When you create the figure, set the Visible property to 'off':
f = figure('visible','off')
Which in your case would be
im = imread('image.tif');
f = figure('visible','off'), imshow(im, 'Border', 'tight');
rectangle('Position', [100, 100, 10, 10]);
print(f, '-r80', '-dtiff', 'image2.tif');
And if you want to view it again you can do
set(f,'visible','on')
The simple answer to your question is given by Bessi and Mr Fooz: set the 'Visible' setting for the figure to 'off'. Although it's very easy to use commands like IMSHOW and PRINT to generate figures, I'll summarize why I think it's not necessarily the best option:
As illustrated by Mr Fooz's answer, there are many other factors that come into play when saving figures as images. The output you get depends on many figure and axes settings, which increases the likelihood that you will not get the output you want. This can be especially problematic when your figures are invisible, since you won't notice a discrepancy caused by a change in a default figure or axes setting. In short, your output becomes highly sensitive to a number of settings that you would then have to control explicitly in your code, as Mr Fooz's example shows.
Even if you're not viewing the figures as they are made, you're still probably making MATLAB do more work than is really necessary. Graphics objects are still created, even if they are not rendered. If speed is a concern, generating images from figures doesn't seem like the ideal solution.
My suggestion is to actually modify the image data directly and save it using IMWRITE. It may not be as easy as using IMSHOW and other plotting solutions, but I think it is more efficient and gives more robust and consistent results that are not as sensitive to various plot settings. For the example you give, I believe the alternative code for creating a black rectangle would look something like this:
im = imread('image.tif');
[r,c,d] = size(im);
% Rectangle position and size, matching RECTANGLE's [x y w h]
x0 = 100;
y0 = 100;
w = 10;
h = 10;
% x/y coordinates of the pixels on the rectangle's perimeter
% (top edge, left edge, bottom edge, right edge)
x = [x0:x0+w x0*ones(1,h+1) x0:x0+w (x0+w)*ones(1,h+1)];
y = [y0*ones(1,w+1) y0:y0+h (y0+h)*ones(1,w+1) y0:y0+h];
% Convert to linear indices and zero the perimeter in all three color planes
index = sub2ind([r c],y,x);
im(index) = 0;
im(index+r*c) = 0;
im(index+2*r*c) = 0;
imwrite(im,'image2.tif');
I'm expanding on Bessi's solution here a bit. I've found that it's very helpful to know how to make the image take up the whole figure and how to tightly control the output image size.
% prevent the figure window from appearing at all
f = figure('visible','off');
% alternative way of hiding an existing figure
set(f, 'visible','off'); % can use the GCF function instead
% If you start getting odd error messages or blank images,
% add in a DRAWNOW call. Sometimes it helps fix rendering
% bugs, especially in long-running scripts on Linux.
%drawnow;
% optional: have the axes take up the whole figure
subplot('position', [0 0 1 1]);
% show the image and rectangle
im = imread('peppers.png');
imshow(im, 'border','tight');
rectangle('Position', [100, 100, 10, 10]);
% Save the image, controlling exactly the output
% image size (in this case, making it equal to
% the input's).
[H,W,D] = size(im);
dpi = 100;
set(f, 'paperposition', [0 0 W/dpi H/dpi]);
set(f, 'papersize', [W/dpi H/dpi]);
print(f, sprintf('-r%d',dpi), '-dtiff', 'image2.tif');
If you'd like to render the figure to a matrix, type "help @avifile/addframe" and extract the subfunction called "getFrameForFigure". It's a MathWorks-supplied function that uses some (currently) undocumented ways of extracting image data from a figure.
Here is a completely different answer:
If you want an image file out, why not just save the image instead of the entire figure?
im = magic(10)
imwrite(im/max(im(:)),'magic.jpg')
Then prove that it worked.
imshow('magic.jpg')
This can be done for indexed and RGB images, and for different output formats.
You could use the -noFigureWindows startup option to disable all figure windows.