I am using the Vuforia video playback demo with cloud recognition.
I have combined both projects and it is working properly. But currently the video dimensions follow the detected object, and I need a fixed width and height when the video plays.
Can anyone help me ?
Thanks in advance.
Well, apparently Vuforia fixes the width and height at the start of the game no matter what the size of the object is. I could not find exactly when this happens, but it is done at the beginning of your game. When you change the size of the ImageTarget at runtime it is no longer fixed. Add these lines to your OnTrackingFound function of DefaultTrackableEventHandler.cs:
// isScaled should be declared as a bool field of the handler, initialised to false
if (this.name == "WhateverTheNameOfYourRelatedImageTarget" && !isScaled)
{
    // Increase the size however you want; here one unit is added to each dimension
    this.transform.localScale += Vector3.one;
    // Set isScaled so the target is not rescaled every time it is found
    isScaled = true;
}
Good luck!
What I usually do, instead of video playback on the target, is play the video on a canvas object and hook that object to the DefaultTrackableEventHandler script. So when the target is found I call gameObject.SetActive(true), and gameObject.SetActive(false) when the target is lost. With this method the size of the game object is fixed and it stays in a fixed location.
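A rough sketch of that idea (my own illustration, not Vuforia sample code; videoCanvas is a made-up field name you would assign in the Inspector to the fixed-size canvas object that hosts the video), added to the OnTrackingFound/OnTrackingLost methods that DefaultTrackableEventHandler already has:
// Inside DefaultTrackableEventHandler.cs (sketch)
public GameObject videoCanvas;        // fixed-size canvas object that plays the video

private void OnTrackingFound()
{
    // ... existing code that enables the target's renderers and colliders ...
    videoCanvas.SetActive(true);      // show the video at its fixed size and position
}

private void OnTrackingLost()
{
    // ... existing code that disables the target's renderers and colliders ...
    videoCanvas.SetActive(false);     // hide the video when the target is lost
}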
I just made an example you can get here (you have to import it into any project and open the scene Assets/VideoExample/Examples). You can see a bit more clearly there what ScreenSpace - Overlay does ... it might be better to just switch to ScreenSpace - Camera in general.
I'm using Godot 4 beta. I want to skip to a specific frame in an AnimationPlayer, but I'm getting:
Invalid set index 'current_animation_position' (on base: 'AnimationPlayer') with value of type 'float'.
Here's the related documentation: https://docs.godotengine.org/en/latest/classes/class_animationplayer.html#class-animationplayer-property-current-animation-position
I currently have one AnimationPlayer in my scene, named 'animation', with an animation named 'Animation' that has "Autoplay on Load" enabled. The animation 'Animation' has a length of 4.x seconds.
Here's my code attached to the scene:
func _process(_delta):
    if Input.is_action_just_released("skip_intro"):
        if animation_player.current_animation_position < 1.3:
            animation_player.current_animation_position = 1.3
        else:
            skip_intro()
Update (2)
I know I can use animation_player.advance(), but that advances by a relative amount of time. I'm looking for a way to jump to an absolute position, not a relative one.
As you can read in the documentation you linked, current_animation_position only has a getter; it is a read-only property.
If you want to go to a specific time you can use the seek method.
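For example, reusing the animation_player reference from the question (a sketch; in Godot 4, passing true as the second argument makes seek() apply the new position immediately):
if animation_player.current_animation_position < 1.3:
    animation_player.seek(1.3, true)  # jump to the absolute time of 1.3 s and update right away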
I found that I can use play() before advance() to go to an absolute frame. But I'd appreciate any other way to do it as a one-liner.
animation_player.play("Animation")
animation_player.advance(1.3)
IMO it should be allowed to rewrite the current_animation_position property.
I'm programming in C++ using OpenCV in an object-oriented approach. Basically I have an array of objects called People[8]. For each element, I want to assign an image to it by taking a picture with a webcam. I did something like this:
for (int i = 0; i < 8; i++){
    cvWaitKey(0); // wait for input, then take picture
    Mat grabbed = cam1.CamCapture();
    People[i].setImage(grabbed);
    imshow("picture", grabbed);
    cvWaitKey(1);
}
I face 2 problems here:
1) imshow does not display the latest image captured; it displays the previously taken image, i.e. the one from iteration (i-1) instead of i.
2) When I display all the images together, 8 windows appear and all of them are displaying the last image captured on the camera.
I do not have any clue what is wrong. Could anyone please advise? Thank you in advance.
"all of them are displaying the last image captured on the camera."
The images you get from the capture point into driver memory, so the earlier image gets overwritten by the later one.
You need to store a clone() of the Mat you get, like:
People[i].setImage( grabbed.clone() );
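Applied to the loop from your question, a sketch (assuming cam1.CamCapture() returns a cv::Mat that reuses the driver's buffer) would be:
for (int i = 0; i < 8; i++){
    cvWaitKey(0);                          // wait for a key press, then take the picture
    Mat grabbed = cam1.CamCapture();       // may point at the driver's internal buffer
    People[i].setImage(grabbed.clone());   // deep copy so the next capture cannot overwrite it
    imshow("picture", grabbed);
    cvWaitKey(1);                          // give HighGUI a moment to actually draw the window
}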
I have not worked with OpenCV for a while, but I would move cvWaitKey(1) around, and I also would not have two calls to it; from what I remember it behaves similarly to glFlush(). I would also change the 1 to 10, as for some reason I remember a delay of 1 not working.
I want to mask out the moving objects in a video.
I found that OpenCV has some built-in BackgroundSubtractors, which could save me a lot of time. However, according to the official reference, the function:
void BackgroundSubtractorMOG2::operator()(InputArray image, OutputArray fgmask, double learningRate=-1)
should output a mask, fgmask, but it doesn't. After invoking the above method, the fgmask variable contains only the contours of the mask instead. That's weird. All I want is a simple closed region filled with white color (for example) to represent the moving objects. How could I do that?
Any reply or recommendation would be very much appreciated. Thanks a lot.
Here's my code:
#include <opencv2/opencv.hpp>

int main(int argc, char *argv[])
{
    cv::BackgroundSubtractorMOG2 bg(30, 16.0, false);
    cv::VideoCapture cap(0);
    cv::Mat frame, fmask;
    cv::namedWindow("mask", CV_WINDOW_AUTOSIZE);
    for(;;)
    {
        cap >> frame;
        bg(frame, fmask, -1);          // -1 = default learning rate
        cv::imshow("mask", fmask);     // show the foreground mask directly
        if(cv::waitKey(30) >= 0) break;
    }
    return 0;
}
A snapshot of the output video is:
p.s. My working environment is OpenCV2.4.3 on OSX 10.8 and XCode 4.5.2 with apple LLVM compiler 4.1.
If you want to get the whole objects in the foreground filled with white pixels, then I would first ask you to tell me something about your observations.
My question is: with the code you posted above, do you get more white pixels when you generate more motion in front of your camera?
If yes, then there are two parameters to learn about for your requirement.
The first is the history parameter, which you have configured as 30 in the constructor BackgroundSubtractorMOG2(30, 16.0, false). You can test this parameter by increasing it, say to 300. It controls how long the motion history of an object is kept in the foreground. So if you have moved completely away from your starting location within those 300 frames, you will get the whole object covered with white pixels, as you want, but it will be erased gradually, so it cannot give you a 100% solution.
The second parameter is the learning rate. In the code you posted, bg(frame, fmask, -1), the -1 is your learning rate; you can set it to a value from 0.0 to 1.0, and the default is -1. When you set it to 0, you will get what you want for objects that were not part of the frame at the start of the video. You could call these "foreign objects"; they will be covered with white pixels.
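Put together, a rough sketch of those two settings on OpenCV 2.4.x (reusing the cap capture and the "mask" window from your code; the numbers are just examples):
cv::BackgroundSubtractorMOG2 bg(300, 16.0, false);  // history = 300, varThreshold = 16, no shadow detection
cv::Mat frame, fmask;
for(;;)
{
    cap >> frame;
    bg(frame, fmask, 0.0);         // learningRate = 0: the background model is not updated,
                                   // so "foreign" objects stay filled with white pixels
    cv::imshow("mask", fmask);
    if(cv::waitKey(30) >= 0) break;
}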
Experiment based on the information above and share your results.
First let me say I'm coming from the iOS world and am trying to make my first OSX app. So apologies for the question if the answer is obvious. :)
I'm trying to set up an NSTextView that resizes according to the amount of text in it. I've been successful at getting the NSTextView to resize properly, but its superview (NSScrollView) won't resize.
This is what I have so far...
[self.messageBodyTextView setVerticallyResizable:YES];
[self.messageBodyTextView.layoutManager ensureLayoutForTextContainer:self.messageBodyTextView.textContainer];
[self.messageBodyTextView.layoutManager boundingRectForGlyphRange:NSMakeRange(0, [self.messageBodyTextView.layoutManager numberOfGlyphs]) inTextContainer:self.messageBodyTextView.textContainer];
NSRect rect = [self.messageBodyTextView.layoutManager usedRectForTextContainer:self.messageBodyTextView.textContainer];
[self.messageBodyTextView.textContainer setContainerSize:rect.size];
[self.messageBodyTextView setMaxSize:NSMakeSize(self.messageBodyTextView.bounds.size.width, rect.size.height)];
[self.messageBodyTextView.textContainer setHeightTracksTextView:YES];
[self.messageBodyScrollView.documentView setFrameSize:rect.size];
[self.messageBodyScrollView.documentView setFrame:rect];
[self.messageBodyScrollView setFrameSize:rect.size];
self.messageBodyTextView resizes just fine with all this code (I have a feeling there is a bunch of redundant code in there). But self.messageBodyScrollView either doesn't resize at all, or, if I try to use setBounds, it not only resizes messageBodyTextView to messageBodyScrollView's full size but also stretches out the text inside.
note: messageBodyTextView and messageBodyScrollView are both attached to my IB doc as IBOutlets.
My code used to be a lot shorter but this is where I've gotten to by adding in anything I can find to make these two views match up.
Any help would be very much appreciated!
So I load a color .png file that was taken with an iPhone, using cvLoadImage. And after it's been loaded, when I immediately display it in my X11 terminal, the image is definitely darker than the original .png file.
I currently use this to load the image:
IplImage *img3 = cvLoadImage( "bright.png", 1);
For the second parameter I have tried all of the following:
CV_LOAD_IMAGE_UNCHANGED
CV_LOAD_IMAGE_GRAYSCALE
CV_LOAD_IMAGE_COLOR
CV_LOAD_IMAGE_ANYDEPTH
CV_LOAD_IMAGE_ANYCOLOR
But none of these have worked. Grayscale definitely made the image grayscale. But as suggested by http://www.cognotics.com/opencv/docs/1.0/ref/opencvref_highgui.htm, even using CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR to load the image as truthfully as possible still resulted in a darker image being displayed in the terminal.
Does anyone have any ideas on how to get the original image to display properly?
Thanks a lot in advance.
Yes, OpenCV does not apply Gamma correction.
#include <QtGlobal>  // for qreal (Qt's floating-point typedef; double on desktop)
#include <cmath>     // for pow

// from: http://gegl.org/
// value: 0.0-1.0
static inline qreal
linear_to_gamma_2_2 (qreal value){
    if (value > 0.0030402477)
        return 1.055 * pow (value, (1.0/2.4)) - 0.055;
    return 12.92 * value;
}
// from: http://gegl.org/
static inline qreal
gamma_2_2_to_linear (qreal value){
    if (value > 0.03928)
        return pow ((value + 0.055) / 1.055, 2.4);
    return value / 12.92;
}
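For reference, a rough sketch (my addition, not part of the original answer) of applying the linear-to-gamma curve above to an 8-bit image loaded with OpenCV, treating qreal as a plain double:
// Sketch only: returns a brightened copy of a linearly-encoded 8-bit image,
// using the linear_to_gamma_2_2() helper defined above.
cv::Mat applyGamma22(const cv::Mat &src)
{
    cv::Mat dst = src.clone();
    for (int y = 0; y < dst.rows; ++y)
    {
        uchar *row = dst.ptr<uchar>(y);
        for (int x = 0; x < dst.cols * dst.channels(); ++x)
        {
            double v = row[x] / 255.0;                                   // normalise channel to 0.0-1.0
            row[x] = cv::saturate_cast<uchar>(linear_to_gamma_2_2(v) * 255.0);
        }
    }
    return dst;
}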
Does it only happen when you load it in OpenCV? Does opening it with any other viewer not show a difference?
I can't confirm this without a few tests, but I believe the iPhone display gamma is 1.8 (source: http://www.colorwiki.com/wiki/Color_on_iPhone#The_iPhone.27s_Display). Your X11 monitor is probably adjusted for 2.2 (like the rest of the world).
If this theory holds, yes, images are going to appear darker on X11 than on the iPhone. You may change your monitor calibration or do some image processing to account for the difference.
Edit:
I believe OpenCV really does not apply gamma correction. My reference to this is here:
http://permalink.gmane.org/gmane.comp.lib.opencv.devel/837
You might want to implement it yourself or "correct" it with ImageMagick. This page instructs you on how to do so:
http://www.4p8.com/eric.brasseur/gamma.html
I usually load an image with:
cvLoadImage("file.png", CV_LOAD_IMAGE_UNCHANGED);
One interesting test you could do to detect whether OpenCV is really messing with the image data is to simply create another image with cvCreateImage(), copy the data into the newly created image, and save it to another file with cvSaveImage().
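A rough sketch of that round-trip test with the old C API ("bright.png" and "copy.png" are just example file names); if copy.png looks identical to bright.png in another viewer, the loader is not altering the pixel data and the darkening happens in the display path:
IplImage *orig = cvLoadImage("bright.png", CV_LOAD_IMAGE_UNCHANGED);
IplImage *copy = cvCreateImage(cvGetSize(orig), orig->depth, orig->nChannels);
cvCopy(orig, copy);              // copy the pixel data into the newly created image
cvSaveImage("copy.png", copy);   // write it back out and compare with the original in another viewer
cvReleaseImage(&orig);
cvReleaseImage(&copy);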
Maybe it's just a display error. In any case, I would suggest you update to the most recent version of OpenCV.