I have a fabricjs object on the canvas in my project. In my UI tests, I'm using Playwright's built-in mouse methods to move this object.
For example, I have an object at (400, 400). Using the following sequence of commands, I expect the object to move 400 pixels to the right:
await page.mouse.move(400, 400);
await page.mouse.down();
await page.mouse.move(800, 400);
await page.mouse.up();
However, this only selects the object but does not move it. I can move the object with the keyboard (keyboard.down, keyboard.press('ArrowRight'), etc.) and select it with mouse.click(400, 400), but I am not able to drag it.
Finally, I found a solution. All you need is the "steps" option of the mouse.move method. I think this is because the browser needs some time to move objects on the canvas. With the steps option, the movement is not instantaneous but more realistic:
await page.mouse.move(400, 400);
await page.mouse.down();
await page.mouse.move(800, 400, { steps: 20 });
await page.mouse.up();
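What { steps: 20 } does is interpolate intermediate positions, so the page receives a stream of mousemove events instead of a single jump, and fabric.js's drag handler gets a chance to track the pointer. A rough sketch of that interpolation (illustrative names, not Playwright's actual source):

```javascript
// Sketch of what { steps: N } amounts to: N evenly spaced intermediate
// positions between the current pointer location and the target.
function interpolateSteps(from, to, steps) {
  const points = [];
  for (let i = 1; i <= steps; i++) {
    points.push({
      x: from.x + ((to.x - from.x) * i) / steps,
      y: from.y + ((to.y - from.y) * i) / steps,
    });
  }
  return points;
}

// Each of these points corresponds to one mousemove event during the drag.
const path = interpolateSteps({ x: 400, y: 400 }, { x: 800, y: 400 }, 20);
console.log(path.length);           // 20
console.log(path[path.length - 1]); // { x: 800, y: 400 }
```

Because fabric.js repositions the object in its own mouse:move handler, it is these intermediate events — not the final coordinates — that make the drag happen.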
I have set up a small Plunker demonstrating the problem I am working on. After a line is created in drawing mode, it cannot be selected with a point-click after exiting drawing mode, only with a group selection. I want objects (not just lines) to be unselectable during drawing, then selectable afterwards. I have tried creating all lines with selectable: false and then, after exiting drawing mode, running
canvas.forEachObject(function(o) {
  o.selectable = true;
});
canvas.renderAll();
but that does not work either. Thanks in advance.
You need to call the setCoords() function in order to make the line selectable. Update your mouse:up event like this:
canvas.on('mouse:up', function(o) {
  isDown = false;
  line.setCoords();
});
Please see when to use setCoords().
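The reason this works: fabric caches each object's corner coordinates and uses that cache, not the live geometry, for hit-testing, so an object whose cache was never computed (or is stale) cannot be point-clicked even though it renders fine. A toy model of that caching behavior — deliberately not fabric's real code, all names illustrative:

```javascript
// Toy model of fabric's coordinate cache (illustrative only).
const line = {
  left: 10, top: 10, width: 100, height: 0,
  cachedBox: null,
  // Analogous to fabric's setCoords(): recompute the hit-test cache
  // from the current geometry.
  setCoords() {
    this.cachedBox = {
      x1: this.left, y1: this.top,
      x2: this.left + this.width, y2: this.top + this.height,
    };
  },
  // Hit-testing reads the CACHE, not left/top/width directly.
  containsPoint(x, y) {
    const b = this.cachedBox;
    return !!b && x >= b.x1 && x <= b.x2 && y >= b.y1 && y <= b.y2;
  },
};

// A line built point-by-point in drawing mode never had its cache filled,
// so a point-click misses it even though it is visibly there:
console.log(line.containsPoint(50, 10)); // false — empty cache
line.setCoords();
console.log(line.containsPoint(50, 10)); // true — cache matches geometry
```

This is why the fix belongs in mouse:up: that is the moment the line's final geometry is known.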
I have two GoJS diagrams on the same page. One is the primary diagram: it gets its data on page load and displays the nodes, and it also has a search to find a path between nodes. The search takes a source node and a destination node; on the search action, the second diagram displays only the nodes involved in the search result and their links.
I am using the mouse wheel for zooming in and out:
mainDiagram.addDiagramListener("InitialLayoutCompleted", function (e) {
  e.diagram.toolManager.mouseWheelBehavior = go.ToolManager.WheelZoom;
});
The primary diagram has no issue, but the second diagram's zoom does not work. The following is the code for my second diagram:
var myPathDiagram = _go(go.Diagram, "policyPathDiv",
  {
    initialContentAlignment: go.Spot.Center,
    layout: _go(go.LayeredDigraphLayout, { direction: 0 }),
    autoScale: go.Diagram.UniformToFill,
    maxSelectionCount: 1,
    "undoManager.isEnabled": false,
    hasHorizontalScrollbar: false,
    hasVerticalScrollbar: false
  });
Then the tool manager setting:
myPathDiagram.addDiagramListener("InitialLayoutCompleted", function (e) {
  e.diagram.toolManager.mouseWheelBehavior = go.ToolManager.WheelZoom;
});
The second diagram gets its data on the search action with the following code:
myPathDiagram.model = new go.GraphLinksModel(nodes, links);
Everything works except zoom on the second diagram. The keyboard shortcuts Ctrl + and Ctrl - also don't work for it. I have tried many things, but so far I am not able to make the zoom work on the second diagram. Any suggestions, please?
Don't set Diagram.autoScale if you want the user to scale (a.k.a. zoom) the diagram.
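A sketch of the second diagram's setup with autoScale removed. This keeps everything else from the original code and assumes a one-time Diagram.zoomToFit() call gives the same initial fit the autoScale was providing, without permanently locking the scale:

```javascript
// Same setup as before, minus autoScale, so WheelZoom can change Diagram.scale.
var myPathDiagram = _go(go.Diagram, "policyPathDiv",
  {
    initialContentAlignment: go.Spot.Center,
    layout: _go(go.LayeredDigraphLayout, { direction: 0 }),
    // autoScale removed: UniformToFill re-applies itself on every change,
    // overriding any scale the wheel or Ctrl +/- tries to set.
    maxSelectionCount: 1,
    "undoManager.isEnabled": false,
    hasHorizontalScrollbar: false,
    hasVerticalScrollbar: false
  });

myPathDiagram.addDiagramListener("InitialLayoutCompleted", function (e) {
  // One-time fit instead of a permanent autoScale constraint:
  e.diagram.zoomToFit();
  e.diagram.toolManager.mouseWheelBehavior = go.ToolManager.WheelZoom;
});
```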
I have been playing with custom filters in Fabric.js, but I just don't know how to undo anything once it is done. The pixels seem to get overwritten by the process, which is fine, but how do I go back to the original? The starter code project is here:
https://github.com/pixolith/fabricjs-async-web-worker-filters
So, in the custom filter, the results are placed into the canvas as follows:
imageDataArray.forEach( function( data ) {
  cacheCtx.putImageData( data.data, 0, data.blocks );
} );
That shows the processed image in the render, but I don't understand how to get back to the original. I have tried this before the processing:
var obj = canvas.getActiveObject();
var originalSource = obj._originalElement.currentSrc; // restore the original since filters trash the canvas
obj.filters[index] = filter;
obj.applyFilters(canvas.renderAll.bind(canvas));
But it does not "get it back". I really don't wish to reload the image each time as they can be large at times. Any help appreciated.
As you noted, obj._originalElement is the ORIGINAL element of the image you first loaded. There is no reason to reload it at all; you have it right there, ready to be smashed on canvas. So just do obj.setElement(obj._originalElement) (fabric.Image's setter for its underlying element) and you are back to the original after a canvas.renderAll();
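If you also want to undo a raw putImageData pass (as in the worker example above), the same idea works at the pixel level: snapshot the buffer before filtering and write it back to restore. A minimal sketch with plain typed arrays — no fabric APIs, all names illustrative:

```javascript
// Pretend this is the image's pixel buffer (RGBA bytes).
const original = new Uint8ClampedArray([10, 20, 30, 255, 40, 50, 60, 255]);

// 1. Snapshot BEFORE filtering — slice() copies, so the filter can't trash it.
const backup = original.slice();

// 2. Destructive "filter": invert every channel in place.
const pixels = original;
for (let i = 0; i < pixels.length; i++) pixels[i] = 255 - pixels[i];

// 3. Undo: copy the snapshot back over the working buffer
//    (the equivalent of putImageData with the saved frame).
pixels.set(backup);

console.log(pixels[0]); // 10 — back to the original value
```

One snapshot per image is enough, and it avoids reloading large files from the network each time.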
I am running Windows 10 IoT on a Pi 2. I need to be able to take pictures that are in focus, but I cannot get focusing to work. The application is a background app, so I have no way of previewing the camera on a display. Is there any way of doing this? Currently I have
await _mediaCapture.StartPreviewAsync();
_mediaCapture.VideoDeviceController.FocusControl.Configure(new FocusSettings
{
Mode = FocusMode.Continuous,
WaitForFocus = true
});
await _mediaCapture.VideoDeviceController.FocusControl.FocusAsync();
await _mediaCapture.CapturePhotoToStreamAsync(ImageEncodingProperties.CreateJpeg(), stream);
await _mediaCapture.StopPreviewAsync();
but I am getting the error
WinRT information: Preview sink not set
when I try to focus. All of the examples I've seen online output the preview to a control, and I assume that wires up a sink automagically. Is there a way to do this manually through code, possibly without the preview?
I wonder if the code may work even without FocusControl.
I propose you follow the custom Media Sink implementation example and the use of the StartPreviewToCustomSinkIdAsync method described at http://www.codeproject.com/Tips/772038/Custom-Media-Sink-for-Use-with-Media-Foundation-To
I didn't find a way to do this. I ended up converting the background app to a UI app with a Page containing a CaptureElement control in order to preview and focus.
Instead of adding a UI, just create a CaptureElement and set its source to the _mediaCapture before calling await _mediaCapture.StartPreviewAsync();
Something like:
_captureElement = new CaptureElement { Stretch = Stretch.Uniform };
_mediaCapture = new MediaCapture();
await _mediaCapture.InitializeAsync(...);
_captureElement.Source = _mediaCapture;
await _mediaCapture.StartPreviewAsync();
I tried to use CreateJS to create a MovieClip and add an image inside it, like this:
rect2 = new Bitmap(_preloader.getResult("rect").result);
mv = new MovieClip("single", 0, false, []);
mv.addChild(rect2);
_stage.addChild(mv);
I expect to see rect2 on the stage, but it does not show up. If I add rect2 directly to the stage it shows up, so what am I missing here?
Is there any reason you are using MovieClip? It is mainly used to handle export from Flash Pro using Toolkit for CreateJS.
Instead, you might try using a Bitmap instance, which wraps an image, canvas, or video element.
One note for MovieClips: you need to call gotoAndStop/gotoAndPlay to set the initial state; it does not default to the first frame the way Flash does.
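Putting those two notes together, a sketch of the simpler Bitmap approach, with the MovieClip variant shown in comments (assumes an already-loaded image in _preloader; note that PreloadJS's getResult() returns the loaded element itself, so no .result is needed):

```javascript
// The answer's suggestion: for a static image, a Bitmap alone is enough.
var rect2 = new createjs.Bitmap(_preloader.getResult("rect"));
_stage.addChild(rect2);

// If a MovieClip really is needed, seed its first frame explicitly,
// since CreateJS does not default to frame 0 the way Flash does:
// var mv = new createjs.MovieClip("single", 0, false, {});
// mv.addChild(rect2);
// mv.gotoAndStop(0);
// _stage.addChild(mv);

_stage.update();
```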