ArcGIS - How to move a graphic or symbol on the map

I'm trying to create an overlay in ArcGIS that has moving graphics/symbols, updated with coordinates received from roving devices. I'm able to display a simple symbol initially, but cannot get it to move on the map. My test code is:
GraphicsOverlay machineOverlay = new GraphicsOverlay();
MainMapView.GraphicsOverlays.Add(machineOverlay);

MapPointBuilder rdLocation = new MapPointBuilder(150.864119200149, -32.3478640837185, SpatialReferences.Wgs84);

SimpleMarkerSymbol sRD1234 = new SimpleMarkerSymbol()
{
    Color = System.Drawing.Color.Red,
    Size = 10,
    Style = SimpleMarkerSymbolStyle.Circle
};

Graphic graphicWithSymbol = new Graphic(rdLocation.ToGeometry(), sRD1234);
machineOverlay.Graphics.Add(graphicWithSymbol);
// here the red circle is displayed correctly on the map

rdLocation.SetXY(150.887115, -32.357600);
rdLocation.ReplaceGeometry(rdLocation.ToGeometry());
// here I expect the red circle to move but it doesn't
// here I expect the red circle to move but it doesn't
Do I need to trigger an event to "re-render" or refresh the overlay, or what do I need to do to get the graphic to move on my map?
There was a similar question here and the answer was "just update the geometry" which is what I'm attempting to do, but with no success.
If there is an entirely different or better approach to moving markers on a map please suggest, I'm just getting started in the ArcGIS runtime.
Thanks

After a lot of searching I replaced one line of code, and it's now working:
//rdLocation.ReplaceGeometry(rdLocation.ToGeometry());
graphicWithSymbol.Geometry = rdLocation.ToGeometry();
It seems I misunderstood the function of ReplaceGeometry(). Any clarification on this would be helpful.
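The likely explanation: ReplaceGeometry() swaps the geometry held inside the MapPointBuilder, but the Graphic still references the geometry produced by the earlier ToGeometry() call, so nothing on the map changes. Assigning a new geometry to Graphic.Geometry is what notifies the overlay to redraw. A minimal update sketch, assuming positions arrive through a callback of your own (OnPositionReceived is a made-up name, not an ArcGIS API):

// Hypothetical position callback; only the Graphic.Geometry assignment
// is the ArcGIS-specific part.
private void OnPositionReceived(double lon, double lat)
{
    // Assigning a new MapPoint to the existing graphic triggers a re-render.
    graphicWithSymbol.Geometry = new MapPoint(lon, lat, SpatialReferences.Wgs84);
}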

Related

How to apply shaders and generate images only once?

I'm trying to apply a pixelation shader to my textures, and I need it applied only once; after that I can reuse the shader-generated images as textures over and over without recalculating them every single time.
So how do I take a few images -> apply a shader and render them only once every time the game loads -> and use them as my textures?
So far I've managed to find the shader to apply:
shader_type canvas_item;

uniform int amount = 40;

void fragment()
{
	// Snap the UV to an `amount` x `amount` grid and sample the
	// texture at the snapped position, producing the pixelated look.
	vec2 grid_uv = round(UV * float(amount)) / float(amount);
	vec4 text = texture(TEXTURE, grid_uv);
	COLOR = text;
}
But I have no idea how to render out the images using it.
Shaders reside in the GPU, and their output goes to the screen. To save the image, the CPU would have to see the GPU output, and that does not happen… Usually. And since it does not go through the CPU, the performance is good. Usually. Well, at least it is better than if the CPU was doing it all the time.
Also, are you sure you don't want to get a pixel art look by other means? Such as removing the filter from the texture, changing the stretch mode and working at a small resolution, and perhaps enabling pixel snap? No? Watch How to make a silky smooth camera for pixelart games in Godot. Still no? Ok...
Anyway, for what you want, you are going to need a Viewport.
Viewport setup
What you will need is to create a Viewport. Don't forget to set its size. You may also want to set render_target_v_flip to true, which flips the image vertically; if you find the output image upside down, toggle render_target_v_flip.
Then place what you want to render as a child of the Viewport.
Rendering
Next, you can read the texture from the Viewport, convert it to an image, and save it to a png. I'm doing this in a tool script attached to the Viewport, using an exported setter as a workaround to trigger the code from the inspector panel. My code looks like this:
tool
extends Viewport

export var save:bool setget do_save

func do_save(new_value) -> void:
	var image := get_texture().get_data()
	var error := image.save_png("res://output.png")
	if error != OK:
		push_error("failed to save output image.")
You can, of course, export a FILE path String to ease changing it in the inspector panel. Here I'm handling common edge cases:
tool
extends Viewport

export(String, FILE) var path:String
export var save:bool setget do_save

func do_save(_new_value) -> void:
	var target_path := path.strip_edges()
	var folder := target_path.get_base_dir()
	var file_name := target_path.get_file()
	var extension := target_path.get_extension()
	if file_name == "":
		push_error("empty file name.")
		return
	if not (Directory.new()).dir_exists(folder):
		push_error("output folder does not exist.")
		return
	if extension != "png":
		target_path += "png" if target_path.ends_with(".") else ".png"
	var image := get_texture().get_data()
	var error := image.save_png(target_path)
	if error != OK:
		push_error("failed to save output image.")
		return
	print("image saved to ", target_path)
Another option is to use ResourceSaver:
tool
extends Viewport

export var save:bool setget do_save

func do_save(new_value) -> void:
	var image := get_texture().get_data()
	var error := ResourceSaver.save("res://image.res", image)
	if error != OK:
		push_error("failed to save output image.")
This will only work from the Godot editor, and will only work for Godot, since you get a Godot resource file. Although I find the idea of using Godot to generate images interesting, I'm going to suggest going with ResourceSaver if you want to automate generating them for Godot.
About saving resources from tool scripts
In the examples above, I'm assuming you are saving to a resource path. This is because the intention is to use the output image as a resource in Godot. Using a resource path has a couple of implications:
This might not work in an exported game (since the goal is to improve the workflow, this is OK).
Godot would need to re-import the resource, but will not notice it changed.
We can deal with the second point from an EditorPlugin. If that is what you are doing, you can tell Godot to scan for changes:
get_editor_interface().get_resource_filesystem().scan()
And if you are not, you can cheat by creating an empty EditorPlugin. The idea is to do this:
var ep = EditorPlugin.new()
ep.get_editor_interface().get_resource_filesystem().scan()
ep.free()
By the way, you will want to cache the EditorPlugin instead of making a new one each time. Or better yet, cache the EditorFileSystem you get from get_resource_filesystem.
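A minimal sketch of that caching, assuming this runs in a tool script inside the editor (the EditorFileSystem belongs to the editor itself, so it should be safe to keep after freeing the throwaway EditorPlugin):

tool
extends Node

# Cached so we don't create and free an EditorPlugin on every save.
var _editor_fs: EditorFileSystem

func _scan_for_changes() -> void:
	if _editor_fs == null:
		var ep := EditorPlugin.new()
		_editor_fs = ep.get_editor_interface().get_resource_filesystem()
		ep.free()
	_editor_fs.scan()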
Automation
Now, I'm aware that it can be cumbersome to have to place things inside the Viewport. It might be OK for your workflow if you don't need to do it all the time.
But what about automating it? Well, regardless of the approach, you will need a tool script that creates a hidden Viewport, takes a Node, and checks if it has a shader; if it does, it temporarily moves the Node to the Viewport, gets the rendered texture (get_texture()), sets it as the texture of the Node, removes the shader, and returns the Node to its original position in the scene. Or, instead of looking for a shader on the Node, always apply a shader to whatever Node it is given, perhaps loaded as a resource instead of hard-coded. A sketch of this follows the note below.
Note: I believe you need to let an idle frame pass between adding the Node to the Viewport and getting the texture, so the texture updates. Or was it two idle frames? Well, if one does not work, try adding another one.
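Here is a minimal sketch of that automation, under a few assumptions: Godot 3.x, the target is a CanvasItem whose material already holds the pixelation shader, and the script runs as a tool script; bake_texture is a made-up name, and the two idle frames follow the note above rather than any documented guarantee. Because of the yields, the caller must treat it as a coroutine:

tool
extends Node

# Usage from another tool script (coroutine because of the yields):
#   var tex = yield(bake_texture($Sprite, Vector2(64, 64)), "completed")
func bake_texture(node, size):
	var viewport := Viewport.new()
	viewport.size = size
	viewport.render_target_v_flip = true
	add_child(viewport)
	# Temporarily reparent the node into the hidden Viewport.
	var original_parent = node.get_parent()
	original_parent.remove_child(node)
	viewport.add_child(node)
	# Let an idle frame or two pass so the render target updates.
	yield(get_tree(), "idle_frame")
	yield(get_tree(), "idle_frame")
	var image: Image = viewport.get_texture().get_data()
	# Put everything back and drop the shader; the baked image replaces it.
	viewport.remove_child(node)
	original_parent.add_child(node)
	node.material = null
	viewport.queue_free()
	var texture := ImageTexture.new()
	texture.create_from_image(image)
	return texture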
About making an EditorPlugin
As you know, you can create an addon from the project settings. This will create an EditorPlugin script for you. There you can either add an option to the tools menu (with add_tool_menu_item), or add it to the tool bar of the editor (with add_control_to_container), and have it act on the current selection in the edited scene (you can either use get_selection, or override the edit and handles methods). You may also want to make an undo entry for that; see get_undo_redo.
Or, alternatively, you can have it keep track of (or look for) the Nodes it has to act upon, and then work in the build virtual method, which runs when the project is about to run. I haven't worked with the build virtual method, so I don't know if it has any quirks or gotchas to be aware of.

Material using plain colours getting burnt when using THREE.ACESFilmicToneMapping

We are updating our three.js app setup so that it uses the THREE.ACESFilmicToneMapping (because our scene uses IBL from an EXR environment map).
In that process, materials using textures are now looking great (map colours used to be washed out before the change, as illustrated below).
[screenshot: with renderer.toneMapping = THREE.LinearToneMapping (default)]
[screenshot: with renderer.toneMapping = THREE.ACESFilmicToneMapping]
However, the problem is that plain colours (without any maps) are now looking burnt...
[screenshot: with renderer.toneMapping = THREE.LinearToneMapping (default)]
[screenshot: with renderer.toneMapping = THREE.ACESFilmicToneMapping]
It's now totally impossible to get a bright yellow or green, for example. Turning down renderer.toneMappingExposure or material.envMapIntensity can help, but materials with textures then get way too dark... i.e. for any given set of parameters, materials using plain colours are either too bright, or materials using textures are too dark.
I'm not sure if I am missing something, but this looks like there would be an issue in this setup. Would there be any other parameter that we are overlooking that is causing this result?
Otherwise, we are loading all our models using the GLTFLoader, and we have renderer.outputEncoding = THREE.sRGBEncoding; as per the documentation of the GLTFLoader.
Our environment map is an equirectangular EXR loaded with EXRLoader:
import { EXRLoader } from 'three/examples/jsm/loaders/EXRLoader';

const envMapLoader = new EXRLoader();
envMapLoader.load(
  environmentMapUrl,
  rawTexture => {
    const pmremGenerator = new THREE.PMREMGenerator(renderer);
    pmremGenerator.compileEquirectangularShader();
    const envMapTarget = pmremGenerator.fromEquirectangular(rawTexture);
    const { texture } = envMapTarget;
    return texture;
  },
  ...
)
The short answer is that this is expected behaviour and there will always be tradeoffs in lighting/colours. One thus has to select settings empirically depending on the specific setup/application and the desired results.
From Don McCurdy's comment directly on my question above:
You may need to go to the three.js forums for this question. There is no quick fix to add HDR lighting to colors that are already 100% saturated. Lighting is not a simple topic, and different tonemapping methods make different tradeoffs here.
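One practical compromise, as a sketch only (the traversal and the 0.8/0.6 values are illustrative, not something three.js prescribes): keep ACES for the whole scene, but back off the saturated base colours or the IBL contribution on the untextured materials.

renderer.toneMapping = THREE.ACESFilmicToneMapping;
renderer.toneMappingExposure = 1.0;
renderer.outputEncoding = THREE.sRGBEncoding;

// One-time pass after loading: damp only the plain-colour materials.
scene.traverse(obj => {
  if (obj.isMesh && obj.material && !obj.material.map) {
    obj.material.color.multiplyScalar(0.8);  // leave ACES some headroom
    obj.material.envMapIntensity = 0.6;      // reduce IBL on flat colours
  }
});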

How to PinJoint2D darts (KinematicBody2D) on a moving wall (StaticBody2D) via GDScript

I have shooting darts (KinematicBody2D) that need to stick to a moving wall (StaticBody2D).
I want to let the dart stick to the wall and change position according to how the wall moves (currently my wall is moved by updating its position).
However, the dart does not fully follow the moving path of the wall.
I ended up adding a PinJoint2D, but setting the node via GDScript only gives me an error:
Invalid set index 'node_b' (on base: 'PinJoint2D') with value of type 'StaticBody2D (StaticBody2DWall.gd)'.
My code in the dart node for setting up the PinJoint2D goes as below:
var slide_count = get_slide_count()
if slide_count:
	var collision = get_slide_collision(slide_count - 1)
	var collider = collision.collider
	if collider.is_in_group("wall"):
		$PinJoint2D.node_b = collider
Can anyone please help? Please let me know if there's a better practice.
The node_b member is a node path, not a node. Try the following:
$PinJoint2D.node_b = collider.get_path()
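For completeness, node_a is a NodePath as well; a minimal sketch of the corrected dart code, assuming the joint should pin the dart itself to the wall (node names as in the question):

var slide_count = get_slide_count()
if slide_count:
	var collision = get_slide_collision(slide_count - 1)
	var collider = collision.collider
	if collider.is_in_group("wall"):
		# Both joint endpoints are NodePaths, not node references.
		$PinJoint2D.node_a = get_path()
		$PinJoint2D.node_b = collider.get_path()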

How to Smooth the Tracking in CamShift

I'm doing a project on hand tracking using the OpenCV library. Using the CamShift() function I am able to track my hand, but the tracking isn't stable: even when I hold my hand still there is a little movement in the tracked position, so I can't perform a mouse click at the correct position. Could someone please help me figure this out?
void TrackingObjects::drawRectangle(CvRect objectLocation)
{
	CvPoint p1, p2, mou;
	CvRect crop;
	p1.x = objectLocation.x;
	p2.x = objectLocation.x + objectLocation.width;
	p1.y = objectLocation.y;
	p2.y = objectLocation.y + objectLocation.height;
	cvRectangle(image, p1, p2, CV_RGB(0, 255, 0), 1, CV_AA, 0);
	mou.x = (p2.x - p1.x) / 2;
	mou.x = p1.x + mou.x;
	mou.y = (p2.y - p1.y) / 2;
	mou.y = p1.y + mou.y;
	SetCursorPos(mou.x, mou.y);
}
In the above code I get the tracked object's location via the objectLocation parameter and draw a rectangle over the tracked region.
From its centre I drive the mouse movement.
When I close my palm to perform a MouseDown event, the position of the tracked object changes.
The answer is Kalman filters.
You can use this code. In the referenced figure, the filtered results (green line) ignore the tracker's sudden displacements, while cyan depicts the original tracking results.
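A minimal sketch of that idea with OpenCV's cv::KalmanFilter, smoothing the CamShift centre under a constant-velocity model; the noise covariances are illustrative starting points, not tuned values:

#include <opencv2/video/tracking.hpp>

// One filter instance kept across frames.
static cv::KalmanFilter kf(4, 2, 0);  // state [x, y, vx, vy], measurement [x, y]
static bool kfInitialised = false;

// Call once per frame with the raw CamShift centre; returns the smoothed centre.
cv::Point2f smoothCentre(float cx, float cy)
{
	if (!kfInitialised) {
		kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
			1, 0, 1, 0,
			0, 1, 0, 1,
			0, 0, 1, 0,
			0, 0, 0, 1);  // constant-velocity motion model
		cv::setIdentity(kf.measurementMatrix);
		cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-4));
		cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));
		cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1));
		kf.statePost = (cv::Mat_<float>(4, 1) << cx, cy, 0, 0);
		kfInitialised = true;
	}
	kf.predict();
	cv::Mat measurement = (cv::Mat_<float>(2, 1) << cx, cy);
	cv::Mat estimated = kf.correct(measurement);
	return cv::Point2f(estimated.at<float>(0), estimated.at<float>(1));
}

The drawRectangle code above would then call SetCursorPos with the filtered centre instead of the raw one.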

iPhone help with animations CGAffineTransform resetting?

Hi, I am totally confused with CGAffineTransform animations. All I want to do is move a sprite from a position on the right to a position on the left. When it has stopped I want to "reset" it, i.e. move it back to where it started. If the app exits (with multitasking) I want to reset the position again on start and repeat the animation.
This is what I am using to make the animation..
[UIImageView animateWithDuration:1.5
                            delay:0.0
                          options:(UIViewAnimationOptionAllowUserInteraction |
                                   UIViewAnimationOptionCurveLinear)
                       animations:^(void){
                           ufo.transform = CGAffineTransformTranslate(ufo.transform, -270, 100);
                       }
                       completion:^(BOOL finished){
                           if (finished) {
                               NSLog(@"ufo finished");
                               [self ufoAnimationDidStop];
                           }
                       }];
As I understand it, CGAffineTransform just visually makes the sprite look like it has moved but doesn't actually move it. Therefore when I try to "reset" the position using
ufo.center = CGPointMake(355, 70);
it doesn't do anything.
I do have something working: if I call
ufo.transform = CGAffineTransformTranslate(ufo.transform, 270, -100);
it resets. The problem is that if I exit the app half way through the animation, then when it restarts it doesn't necessarily start from the beginning and it doesn't go to the right place; it basically goes crazy!
Is there a way to just remove any transforms applied to it? I'm considering just using a timer, but this seems silly when this method should work. I've been struggling with this for some time, so any help would be much appreciated.
Thanks
Applying a transform to a view doesn't actually change the center or the bounds of the view; it just changes the way the view is shown on the screen. You want to set your transform back to CGAffineTransformIdentity to ensure that it looks "normal" again. You can set it to that before you start your animation, and then set the transform you want it to animate to.
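Following that advice, a minimal sketch of the reset-then-animate pattern, reusing the ufo view and offsets from the question:

// Reset any leftover transform first, e.g. when the view (re)appears.
ufo.transform = CGAffineTransformIdentity;

[UIView animateWithDuration:1.5
                      delay:0.0
                    options:(UIViewAnimationOptionAllowUserInteraction |
                             UIViewAnimationOptionCurveLinear)
                 animations:^{
                     // Animate from identity straight to the final offset.
                     ufo.transform = CGAffineTransformMakeTranslation(-270, 100);
                 }
                 completion:nil];

Because the animation always starts from CGAffineTransformIdentity, restarting it after the app is backgrounded and relaunched cannot accumulate stale translations.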
