When I play the sound file by itself (in interface) it works fine and plays all the way through. However, when I play it as part of the code (followed by an action) it only plays the first half second. I'm using sound:play-sound-and-wait, so I'm not sure why it isn't working.
extensions [sound] ; I have them in the same folder
to-report mouse-up?
  ifelse mouse-down?
    [ report false ]   ;; report actual booleans, not the strings "false"/"true"
    [ report true ]
end
to twirl
  if mouse-up?
    [ ask turtles with [shape = "ballerina"]      ;; shape names are strings
        [ set shape "ballerina-2"
          twirl ]
      ask turtles with [shape = "ballerina-2"]
        [ set shape "ballerina"
          twirl ] ]
end
These are two different ballerina shapes facing in different directions. When you switch between them, she looks like she's twirling, and she keeps doing that until you make her stop.
to ballet-box
  ask patches [ set pcolor 105 ]           ;gives the background this color (pcolor, not plabel-color, colors the patches)
  sound:play-sound-and-wait "love.aif"     ;this works perfectly fine in interface
  twirl                                    ;and then I want the ballerina to twirl until you make her stop
end
Any help would be super appreciated!
Sounds like http://github.com/NetLogo/Sound-Extension/issues/2 or http://github.com/NetLogo/Sound-Extension/issues/3 . I don't think anyone has yet investigated or attempted to fix these issues. I don't think we even know under exactly what circumstances they do or don't occur.
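One possible workaround (untested, and assuming it is the blocking call that truncates playback) is to start the clip with the non-blocking sound:play-sound primitive instead, so the procedure does not wait on the sound at all:
to ballet-box
  ask patches [ set pcolor 105 ]
  sound:play-sound "love.aif"   ;; starts playback and returns immediately
  twirl                         ;; the ballerina twirls while the clip keeps playing
end
Whether that sidesteps the truncation probably depends on which of the linked issues you are actually hitting.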
I was experimenting in VPython with my scene's camera and discovered that scene.camera.pos is always equal to <0, 0, 1.73205> and scene.camera.axis is always equal to <0, 0, -1.73205>. These values don't change even when the camera auto-adjusts or when I use the scene.camera.follow function. Why is that? I am able to change these values myself, but for scene.camera.pos, setting it below 1 seems to have no more effect than setting it to 1. This is really odd, and I hope someone can clear it up for me.
This has been addressed in the VPython forum:
https://groups.google.com/g/vpython-users
I am using the Vuforia video playback demo with cloud recognition.
I have combined both projects and it is working properly, but currently the video dimensions follow the detected object. I need a fixed width and height when the video plays.
Can anyone help me?
Thanks in advance.
Well, apparently Vuforia fixes the width and height at the start of the game no matter what the size of the object is. I could not find exactly when this happens, but it is done at the beginning of your game; if you change the size of the ImageTarget at runtime, it is no longer fixed. Add these lines to the OnTrackingFound function of DefaultTrackableEventHandler.cs:
if (this.name == "WhateverTheNameOfYourRelatedImageTarget" && !isScaled)
{
    // Increase the size however you want; here 1 is added to each dimension
    this.transform.localScale += Vector3.one;
    // Set isScaled so the target is not rescaled every time it is found (it should start out false)
    isScaled = true;
}
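For the snippet above to compile, isScaled also has to exist as a field of the handler; something like the following near the top of the class (the name is simply taken from the snippet) should do:
private bool isScaled = false; // set to true the first time the target is scaled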
Good luck!
What I usually do, instead of the video playback approach, is play the video on a canvas object and hook that object up to the DefaultTrackableEventHandler script: when the target is found, call gameObject.SetActive(true), and call gameObject.SetActive(false) when the target is lost. This way the size of the game object is fixed and it stays in a fixed location.
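A rough sketch of that idea (the class and field names here are illustrative, not part of the Vuforia samples; in the stock DefaultTrackableEventHandler you would call these methods from its OnTrackingFound and OnTrackingLost handlers):
using UnityEngine;

public class VideoCanvasToggler : MonoBehaviour
{
    // Drag the canvas object that hosts the video onto this field in the Inspector.
    public GameObject videoCanvas;

    // Call this from OnTrackingFound in DefaultTrackableEventHandler.cs
    public void ShowVideo()
    {
        videoCanvas.SetActive(true);
    }

    // Call this from OnTrackingLost in DefaultTrackableEventHandler.cs
    public void HideVideo()
    {
        videoCanvas.SetActive(false);
    }
}
Because the canvas does not sit on the ImageTarget itself, its size no longer depends on the size of the detected object.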
I just made an example you can get here (you have to import it into a project and open the scene Assets/VideoExample/Examples). There you can see a bit more clearly what ScreenSpace - Overlay does ... it might be better to just switch to ScreenSpace - Camera in general.
I am trying to export the values of a turtle variable, but only those values that are <= item 0 of a patch list. Those are the values I am interested in recording, but I am having trouble getting the code right for that.
I've tried the following:
file-print turtles with [turtlevariable <= item 0 patchlist]
I know that's not right, as I am getting the number of turtles and not the turtle variable values. I would also like to run this model 1000 times and am unsure how to write output to a file that will be manageable to manipulate in Excel.
I'm pretty sure there is a simple answer, but I just can't figure it out! Any help is greatly appreciated.
You have multiple questions here; please post each one separately. I will take on the following: how can you get a list of the values of turtlevariable, but only those values that are < item 0 patchlist?
globals [patchlist]
turtles-own [tvar]
patches-own [pvar]

to test
  ca
  ask patches [set pvar random-float 1.0]
  set patchlist [pvar] of patches
  let _p00 item 0 patchlist ;; compute only once
  crt 100
  ask turtles [set tvar random-float 1.0]
  let _tset (turtles with [tvar < _p00])
  let _tvals [tvar] of _tset
  print _tvals
end
You can always file-print anything you can print, so now you need to decide how exactly you want to format this list. This is a separate question. If you want to save as .csv, that is a separate question. (But you will find multiple questions on this site addressing that.) If you want to create one output file for multiple replicates, that is a separate question. (But see questions on this site about BehaviorSpace.) Hth.
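As one illustration of that formatting decision, here is a minimal sketch (assuming NetLogo 6 anonymous-reporter syntax; the procedure name and file name are made up) that appends the list as one comma-separated row per call:
to write-tvals [_tvals]
  file-open "tvals.csv"
  ;; file-print after file-open appends to the end of the file
  if not empty? _tvals
    [ file-print reduce [[a b] -> (word a "," b)] _tvals ]
  file-close
end
Calling this once per run would give you one row per replicate, which is easy to open in Excel.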
I know it's a silly question, but I am not a programmer and really need to know what language this is...
to look-for-food
  if food > 0
    [ set color orange + 1
      set food food - 1
      rt 180
      stop ]
  if (chemical >= 0.05) and (chemical < 2)
    [ uphill-chemical ]
end
Thanks
This language is called Logo. It's often used with turtle graphics -- drawing lines on the screen.
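For example, in classic Logo syntax (the exact commands vary a little by dialect), a procedure that makes the turtle draw a square looks like this:
to square
  repeat 4 [forward 50 right 90]
end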
Related Anecdote:
My first encounter with a computer was around 1985; in my classroom we had an Apple (an Apple ][, probably?). This was one of the main programs that was available. I had learned that you use to to define a subroutine. I tried it once, but I couldn't remember the syntax for filling in the body, and then when I tried to leave the to definition context, I couldn't remember how to do that either. The teacher could sense that I was confused and was coming over to help me. I figured I'd be in trouble for straying beyond the scope of the exercise I was supposed to be working on, and figured the teacher wouldn't know how to recover either. I panicked and powered off the computer. My first "did you try turning it off and on again?" experience! :)
Can you suggest a good option for background subtraction using EmguCV? My project is real-time pedestrian detection.
Not sure if you still need this, but... in EmguCV, if you have two images of, say, type Image<Bgr, Byte> (or any other type) called img1 and img2, doing img1 - img2 does work! There is also a function called AbsDiff; I think it works like this: img1.AbsDiff(img2). You could look into that.
If you already have a picture of the background (img1) and the current frame (img2), you can do the above.
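A minimal sketch of that idea (the file names, and loading frames from disk rather than from a camera, are just for illustration):
using Emgu.CV;
using Emgu.CV.Structure;

class BackgroundDiffDemo
{
    static void Main()
    {
        // img1: a frame of the empty background, img2: the current frame
        Image<Bgr, byte> img1 = new Image<Bgr, byte>("background.png");
        Image<Bgr, byte> img2 = new Image<Bgr, byte>("frame.png");

        // Per-pixel absolute difference; large values mark changed (foreground) pixels
        Image<Bgr, byte> diff = img1.AbsDiff(img2);

        // Plain subtraction also works, but negative differences saturate to zero
        Image<Bgr, byte> sub = img1 - img2;

        diff.Save("diff.png");
    }
}
Thresholding diff (typically after converting to grayscale) would then give a foreground mask to feed the pedestrian detector.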
This is quite possible; take a look at the "MotionDetection" example provided with EMGU, which should get you started.
The object that extracts the foreground is named "_forgroundDetector"; it is "_motionHistory" that stores what movement has occurred.
The example has everything you need. If you have trouble running it, let me know.
Cheers,
Chris
See: Removing background from _capture.QueryFrame()