I am new to the Vuforia SDK. I have an image which acts as a target, and I want to place this image onto the image marker. At run time the size of the image marker varies. Is there any method where I can get the width and height of the image marker so that the target image fits exactly on it?
Since you did not specify whether you are using the Unity or native APIs, I will assume you are using Unity.
This is how you would go about it using the Vuforia API; place this in a script attached to your ImageTarget GameObject:
IEnumerator Start()
{
    // Yield once so the behaviours have initialised before reading the size.
    yield return null;

    Vuforia.ImageTarget img = GetComponent<Vuforia.ImageTargetBehaviour>().ImageTarget;

    // The full vector is rounded off in the console display,
    // so the individual components are printed afterwards.
    Debug.Log(img.GetSize());
    Debug.Log(img.GetSize().x);
    Debug.Log(img.GetSize().y);
    Debug.Log(img.GetSize().z);
}
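If the end goal is to make an overlay fit the target exactly, a possible follow-up (a sketch; the "Overlay" child name and hierarchy assumptions are mine, not from the question) is to feed that size into the overlay's scale, for example by calling this from the Start coroutine above:

void FitOverlay()
{
    // Sketch: size a child Quad named "Overlay" (a Unity Quad is 1x1 units)
    // from the reported target size. How these numbers map to the scale depends
    // on your hierarchy, e.g. whether the overlay is parented to an ImageTarget
    // GameObject that Vuforia has already scaled.
    Vuforia.ImageTarget img = GetComponent<Vuforia.ImageTargetBehaviour>().ImageTarget;
    Vector3 size = img.GetSize();

    Transform overlay = transform.Find("Overlay"); // example child name
    overlay.localScale = new Vector3(size.x, size.y, 1f);
}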
Alternatively, you can use the Bounds of the Renderer directly:
void Start()
{
    Renderer r = GetComponent<Renderer>();
    Debug.Log(r.bounds.size.x);
    Debug.Log(r.bounds.size.y);
    Debug.Log(r.bounds.size.z);
}
Needless to say, this is just a quick solution; depending on the situation you might want to use this at runtime to dynamically create content.
Yes, you can.
Place the image on the image marker at the relative size you want it to be; when you run it, you'll see that the size of the image stays relative to the marker you've placed it on.
I have created a base scene that I intend to use for all human characters of my game. I am using an AnimatedSprite where I defined different animations for the different positions of the character, all using a texture that contains all the frames.
This works for a specific character, but now I would like to create other characters. Since I am using a character generator, all the sprite sheets are basically the same, but with different clothes, accessories, etc. I would like to avoid replicating the animation definitions for the other characters. I could achieve that by setting a different texture on each instance of the scene, but I can't find a way to do it.
If I edit the tscn file and set a different image, it does what I want.
I tried updating the atlas property of the animation frames, but doing that affects all instances of the scene:
func update_texture(value: Texture):
    for animation in $AnimatedSprite.frames.animations:
        for frame in animation.frames:
            frame.atlas = value
I also tried cloning a SpriteFrames instance, by calling duplicate(0), updating it with the above code, then setting $AnimatedSprite.frames, but this also updates all instances of the scene.
What is the proper way to change the texture of a specific instance of AnimatedSprite?
I found a solution. The problem was that the duplicate method does not perform a deep clone, so I still had references to the same frame resources.
Here's my updated version:
func update_texture(texture: Texture):
    var reference_frames: SpriteFrames = $AnimatedSprite.frames
    var updated_frames = SpriteFrames.new()
    for animation in reference_frames.get_animation_names():
        if animation != "default":
            updated_frames.add_animation(animation)
            updated_frames.set_animation_speed(animation, reference_frames.get_animation_speed(animation))
            updated_frames.set_animation_loop(animation, reference_frames.get_animation_loop(animation))
            for i in reference_frames.get_frame_count(animation):
                var updated_texture: AtlasTexture = reference_frames.get_frame(animation, i).duplicate()
                updated_texture.atlas = texture
                updated_frames.add_frame(animation, updated_texture)
    updated_frames.remove_animation("default")
    $AnimatedSprite.frames = updated_frames
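For reference, each instance can then be given its own sheet. A minimal usage sketch (the scene path, texture path, and node names are just placeholders):

const CharacterScene = preload("res://characters/base_character.tscn")

func spawn_variant() -> void:
    # Instance the shared base scene, then swap in a different sprite sheet
    # for this instance only.
    var character = CharacterScene.instance()
    add_child(character)
    character.update_texture(preload("res://characters/skins/villager.png"))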
I am trying to integrate PixiJS with an existing custom WebGL engine. The custom engine is the host and hands control to PixiJS every frame. It configures the WebGL state to an "almost" default state and then calls into PixiJS; after PixiJS is done, the custom engine does a full reset of the WebGL state.
In code:
onFrame() {
    resetWebGLStateToDefault(gl);
    gl.bindFramebuffer(...)
    gl.viewport(...)

    thenWeUsePixiJSToDoSomeAdvancedStuff();

    resetWebGLStateToDefault(gl);
}
My question
In thenWeUsePixiJSToDoSomeAdvancedStuff(), how can I tell PixiJS that the state is not what it used to be the previous time that it ran? Pretty much everything has been reset; PixiJS should assume that everything is default and I would also like to tell PixiJS what the current viewport and framebuffer are.
I tried Renderer.reset, StateSystem.reset, StateSystem.forceState but I guess that's not enough; PixiJS keeps assuming that some textures that it has set previously are still bound (they are not, the existing custom engine unbinds everything) and I get lots of [.WebGL-0x7fd105545e00]RENDER WARNING: there is no texture bound to the unit ?. Pretty much for all texture units, 1-15, except the first one.
Edit
It's probably worth mentioning that I am calling into the renderer directly; I think I need to because the existing custom engine owns the render loop. I am basically trying something like this, but I am getting the WebGL texture errors after the first frame.
renderer.reset();
renderer.render(sprite);
renderer.reset();
Edit
I tried the same thing with an autoStart: false application, and I get the same error.
pixiApp.renderer.reset();
pixiApp.render();
pixiApp.renderer.reset();
The issue appears to be that I was calling into PixiJS with a currently bound FBO; I fixed all my problems by creating a separate PIXI.RenderTexture, rendering there, and then compositing on top of my WebGL engine using a fullscreen quad.
// Create a render texture
const renderTexture = PIXI.RenderTexture.create(640, 360);
// Render with PixiJS
renderer.reset();
renderer.render(this.stage, renderTexture);
renderer.reset();
// Retrieve the raw WebGL texture
const texture = renderTexture.baseTexture._glTextures[renderer.texture.CONTEXT_UID].texture;
// Now composite on top of the other engine
gl.bindFramebuffer(gl.FRAMEBUFFER, theFramebufferWhereINeededPixiJSToRenderInTheFirstPlace);
gl.useProgram(quadProgram);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.uniform1i(u_Texture, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.vertexAttribPointer(0, 2, gl.BYTE, false, 2, 0);
gl.enableVertexAttribArray(0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
gl.useProgram(null);
You may need to resize() the renderer and/or the render texture, depending on your actual setup.
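For example (the sizes here are placeholders; match them to your host engine's drawing buffer):

// Hypothetical sizes; keep these in sync with your engine's canvas and FBO.
renderer.resize(canvasWidth, canvasHeight);
renderTexture.resize(640, 360);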
I have a method that generates an array of bytes, which can be represented as a two dimensional image. I would like to show this image as a background image in Batik. I do not want to save the array to disk (as an image) and then load it into Batik. Instead I would like to provide the array to Batik directly. I thought that using ParsedURLData could help me, but I cannot figure out how to make it work. Any suggestions?
I call ParsedURL.registerHandler(new MyProtocolHandler("myprotocol")); and MyProtocolHandler.parseURL returns MyParsedURLData.
I thought that returning my own stream would work, but it does not. In the example below I simply load an image from disk and try to display it.
class MyParsedURLData extends ParsedURLData {

    public MyParsedURLData() {
    }

    @Override
    public InputStream openStreamRaw(String arg0, Iterator arg1) throws IOException {
        return new File("some_image_here").toURI().toURL().openStream();
    }
}
If in the constructor for MyParsedURLData I set protocol = "file" and path="another_image", then another image will be loaded, no matter what stream is returned by openStreamRaw.
Any suggestions?
The simplest way is to save the image to a temp file, then just reference it via File.toURI().toURL(), and Batik will load it. You don't need a custom protocol handler or ParsedURL for this. Choose a standard image format supported by Batik such as PNG or JPEG for the temp file!
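A rough sketch of that approach, assuming your byte array has already been converted to a BufferedImage and that imageElement is the SVG <image> element you want to point at it:

// Sketch: write the generated image to a temp file and return a URL string
// that Batik can resolve. bufferedImage is assumed to come from your byte array.
static String writeTempBackground(java.awt.image.BufferedImage bufferedImage) throws java.io.IOException {
    java.io.File tmp = java.io.File.createTempFile("background", ".png");
    tmp.deleteOnExit();
    javax.imageio.ImageIO.write(bufferedImage, "png", tmp);
    return tmp.toURI().toURL().toString();
}

// Then reference it from the SVG document, e.g.:
// imageElement.setAttributeNS("http://www.w3.org/1999/xlink", "xlink:href",
//                             writeTempBackground(bufferedImage));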
You can avoid the temp file by implementing both a ParsedURLProtocolHandler and a RegistryEntry and registering them with Batik. There is a little bit of documentation on that on the web site, but it is still a pain to do. It took me some hours to figure out (and the code is not really ready for sharing), but I can now pass a BufferedImage directly to Batik without encoding it as a .png or similar file first.
I have a Bitmap that I want to enlarge programmatically to ~1.5x or 2x its original size. Is there an easy way to do that under .NET CF 2.0?
One "normal" way would be to create a new Bitmap of the desired size, create a Graphics for it and then draw the old image onto it with Graphics.DrawImage(Image, Rectangle). Are any of those calls not available on the Compact Framework?
EDIT: Here's a short but complete app which works on the desktop:
using System;
using System.Drawing;

class Test
{
    static void Main()
    {
        using (Image original = Image.FromFile("original.jpg"))
        using (Bitmap bigger = new Bitmap(original.Width * 2,
                                          original.Height * 2,
                                          original.PixelFormat))
        using (Graphics g = Graphics.FromImage(bigger))
        {
            g.DrawImage(original, new Rectangle(Point.Empty, bigger.Size));
            bigger.Save("bigger.jpg");
        }
    }
}
Even though this works, there may well be better ways of doing it in terms of interpolation etc. If it works on the Compact Framework, it would at least give you a starting point.
The CF has access to the standard Graphics and Bitmap objects like the full framework, so the same steps work (a short sketch follows the list):
Get the original image into a Bitmap
Create a new Bitmap of the desired size
Associate a Graphics object with the NEW Bitmap
Call g.DrawImage() with the old image and the overload to specify width/height
Dispose of things
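A minimal sketch of those steps (the method name and scale factor are just examples; it uses one of the rectangle-based DrawImage overloads that the Compact Framework supports):

using System.Drawing;

static Bitmap Enlarge(Bitmap original, float scale)
{
    // New bitmap at the desired size.
    Bitmap bigger = new Bitmap((int)(original.Width * scale),
                               (int)(original.Height * scale));

    // Draw the old image stretched over the whole new bitmap.
    using (Graphics g = Graphics.FromImage(bigger))
    {
        g.DrawImage(original,
                    new Rectangle(0, 0, bigger.Width, bigger.Height),
                    new Rectangle(0, 0, original.Width, original.Height),
                    GraphicsUnit.Pixel);
    }
    return bigger;
}
// Remember to Dispose of the original (and eventually the result) when done.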
Versions:
.NET Compact Framework
Supported in: 3.5, 2.0, 1.0
I'm writing a mobile phone game using j2me. In this game, I am using multiple Canvas objects.
For example, the game menu is a Canvas object, and the actual game is a Canvas object too.
I've noticed that, on some devices, when I switch from one Canvas to another, e.g from the main menu to the game, the screen momentarily "flickers". I'm using my own double buffered Canvas.
Is there any way to avoid this?
I would say that using multiple canvases is generally bad design. On some phones it will even crash. The best approach would be to use one canvas and track the state of the application. Then in the paint method you would have:
protected void paint(final Graphics g) {
    if (menu) {
        paintMenu(g);
    } else if (game) {
        paintGame(g);
    }
}
There are better ways to handle application state with screen objects that would make the design cleaner, but I think you get the idea :)
/JaanusSiim
Do you use double buffering? If the device itself does not support double buffering, you should define an off-screen buffer (Image), paint to it first, and then paint the end result to the real screen. Do this for each of your canvases. Here is an example:
public class MyScreen extends Canvas {
    private Image osb;
    private Graphics osg;
    //...

    public MyScreen()
    {
        // if the device is not double buffered,
        // use an Image as an off-screen buffer
        if (!isDoubleBuffered())
        {
            osb = Image.createImage(screenWidth, screenHeight);
            osg = osb.getGraphics();
            osg.setFont(defaultFont);
        }
    }

    protected void paint(Graphics graphics)
    {
        if (!isDoubleBuffered())
        {
            // do your painting on the off-screen buffer first
            renderWorld(osg);

            // once done, paint it as an image onto the real screen
            graphics.drawImage(osb, 0, 0, Tools.GRAPHICS_TOP_LEFT);
        }
        else
        {
            osg = graphics;
            renderWorld(graphics);
        }
    }
}
A possible fix is to synchronise the switch using Display.callSerially(). The flicker is probably caused by the app attempting to draw to the screen while the switch of the Canvas is still ongoing. callSerially() is supposed to wait for the repaint to finish before calling the Runnable's run().
But all this is entirely dependent on the phone since many devices do not implement callSerially(), never mind follow the implementation listed in the official documentation. The only devices I've known to work correctly with callSerially() were Siemens phones.
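A rough sketch of what that can look like (display and nextCanvas are assumed to be fields holding your MIDlet's Display and the screen you are switching to):

// Switch the displayable, then queue follow-up work behind the pending repaint.
display.setCurrent(nextCanvas);
display.callSerially(new Runnable() {
    public void run() {
        // Runs after any outstanding repaint has been served,
        // so drawing here should not race the Canvas switch.
        nextCanvas.repaint();
    }
});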
Another possible attempt would be to put a Thread.sleep() of something huge like 1000 ms, making sure that you've called your setCurrent() method beforehand. This way, the device might manage to make the change before the displayable attempts to draw.
The most likely problem is that it is a device issue, and the guaranteed fix to the flicker is simple: use one Canvas. Probably not what you wanted to hear though. :)
It might be a good idea to use the GameCanvas class if you are writing a game. It is much better suited to that purpose and, when used properly, it should solve your problem.
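For example, a minimal GameCanvas render loop could look like this (a sketch only; the class name, frame delay, and renderWorld call are placeholders):

import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.game.GameCanvas;

public class MyGameCanvas extends GameCanvas implements Runnable {

    public MyGameCanvas() {
        super(true); // suppress key events for the normal key handling; optional
    }

    public void run() {
        Graphics g = getGraphics(); // off-screen buffer managed by GameCanvas
        while (true) {
            // ... update game state and draw the frame into g ...
            flushGraphics();        // copies the buffer to the screen in one step
            try {
                Thread.sleep(50);   // arbitrary frame delay
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}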
Hypothetically, using one canvas with state machine code for your application is a good idea. However, the only device I have to test applications on (MOTO v3) crashes at resource-loading time simply because there's too much code to be loaded in one GameCanvas (I haven't tried with Canvas). It's as painful as it is real, and at the moment I haven't found a solution to the problem.
If you're lucky enough to have a good number of devices to test on, it is worth implementing both approaches and pretty much making versions of your game for each device.