SwiftCharts cutting off points (clips bounds)

I am tweaking the Areas Example project from SwiftCharts and I am running into the issue where the point indicators are getting cut off like so:
In the GitHub issues I found this and this other issue, where the solution was to set chartSettings.clipInnerFrame = false.
However, when I do this, the awkward shape of the area fill also becomes visible, like so:
How can I get the whole shape of the points without also revealing all the other stuff?

You have to pass clipViews: false to the ChartPointsViewsLayer with the points (see example). ChartSettings.clipInnerFrame has to stay true (i.e. the default). This way only the layer with the points is left unclipped.
Finally, if you want to clip the unclipped points layer (to limit the area outside of the chart where they are displayed), you can do it passing a custom rectangle, like here (this setting is admittedly not very intuitive).
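For reference, the setup might look roughly like this (a minimal sketch; clipViews comes from the answer above, but the remaining initializer parameters are assumptions and may differ between SwiftCharts versions):

let chartSettings = ChartSettings()   // clipInnerFrame keeps its default value, true

// Pass clipViews: false so the point views may draw outside the inner frame,
// while the area layer stays clipped by the chart's inner frame.
let pointsLayer = ChartPointsViewsLayer(
    xAxis: xAxisLayer.axis,
    yAxis: yAxisLayer.axis,
    chartPoints: chartPoints,
    viewGenerator: viewGenerator,
    clipViews: false
)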

Related

SVGPanZoom discards original viewBox

I am using SVGPanZoom to manage the zooming of an SVG image in my hybrid Android (for all intents and purposes the same behavior as in Chrome) app. While zooming works well I have found a strange issue. My original inline SVG element goes like this
<svg id='puzzle' viewBox='0 0 1600 770' preserveAspectRatio='none'
     width='100vw' height='85.5vh' fill-rule='evenodd' clip-rule='evenodd'
     stroke-linejoin='round' stroke-miterlimit='1.414'
     xmlns='http://www.w3.org/2000/svg'
     xmlns:xlink='http://www.w3.org/1999/xlink'>
Initially this SVG element is empty and gets populated programmatically from JavaScript at run time after which I initiate SVGPanZoom as follows
var panZoom = svgPanZoom('#puzzle', {
    panEnabled: false,
    controlIconsEnabled: false,
    zoomEnabled: true,
    dblClickZoomEnabled: true,
    onZoom: postZoom
});
panZoom.refreshRate = 10;
panZoom.zoomScaleSensitivity = 0.02;
The problem I have run into is this - I want my SVG image to fill the available area, 100vw x 85.5vh, completely. To do this I instruct it via the preserveAspectRatio='none' attribute above, along with the viewBox='0 0 1600 770' attribute. I have found that this works - so long as I don't use SVGPanZoom. As soon as I initiate panZoom, the viewBox attribute gets stripped out and I end up with an image that does not quite behave in terms of its default stretching/filling behavior.
SVGPanZoom is widely used, so I assume that this behavior is down to me not quite setting it up properly. Dipping into the code, I have found that SVGPanZoom creates a cacheViewBox and then proceeds to remove the original viewBox attribute.
Which would be fine if, after that, zooming worked and the original behavior of the application did not change - which is not what I find. What am I doing wrong here?
I've also run into this issue recently. From my research, this is just how the library works. I chose to live with this limitation for now but I found a couple other libraries that may work the way you intend (I haven't tried them yet):
jquery.panzoom is a jquery library that provides this functionality and also has some nice features. I know many people try to avoid jquery but it's pretty small and may do what you want. It handles SVG but I don't know what it does with the viewBox attribute.
react-svg-pan-zoom is a react component which may be useful if you are working in react.
I've also tried the PanZoom library but this also suffers the same viewBox limitation.
A note for anyone running into this thread. In the end I abandoned SVGPanZoom and decided to eschew using any pan/zoom library at all. At the same time I decided to completely stop using the SVG viewBox and to handle all zooming/panning entirely on my own through SVG transforms. The core steps involved:
Wrap the entire SVG contents in a group to make it easier to manage the transform. I use the id attribute gOuter for this group.
Set an initial scale for the SVG to occupy the desired client rectangle. In my case I had an original viewBox of 0 0 1600 770 intended to occupy 100% of screen width and 85% of screen height, so my scaling was scaleX = window.innerWidth/1600 and scaleY = (0.85*window.innerHeight)/770.
Apply this initial transform to the wrapping outer group: gOuter.setAttribute('transform', `scale(${scaleX},${scaleY})`).
Now, in order to zoom to an object whose virtual top-left coordinates in the original viewBox were Ox,Oy, you would use the transform
gOuter.setAttribute('transform',
    `scale(${scaleX},${scaleY}) translate(${Ox},${Oy}) scale(2,2) translate(${-Ox},${-Oy})`);
to zoom in by a factor of 2. The important things to understand here:
In SVG, transformations are applied right to left.
Here we are translating the zoom point to the top left, scaling, and then translating it back to its original location.
We also need to allow for the original level of zoom via the initial scaling, so we tack that on as one last (leftmost) transform.
This leaves you in complete control of the zooming process, and as a fringe benefit the operation becomes considerably smoother than when using a pan/zoom library.
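Putting the steps above together, a zoom-to-point helper might look like this (a sketch under the assumptions above: a gOuter wrapper group, a 1600 x 770 design space, and a 100vw x 85.5vh target area):

var DESIGN_W = 1600, DESIGN_H = 770;
var scaleX = window.innerWidth / DESIGN_W;
var scaleY = (0.85 * window.innerHeight) / DESIGN_H;
var gOuter = document.getElementById('gOuter');

// Zoom in by `factor` on the object whose top-left corner, in the original
// design coordinates, is (ox, oy). Transforms apply right to left: move the
// point to the origin, scale, move it back, then apply the initial fit scale.
function zoomTo(ox, oy, factor) {
    gOuter.setAttribute('transform',
        `scale(${scaleX},${scaleY}) translate(${ox},${oy}) ` +
        `scale(${factor},${factor}) translate(${-ox},${-oy})`);
}

zoomTo(400, 200, 2);   // zoom in x2 on the point (400, 200)
zoomTo(0, 0, 1);       // back to the initial fit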

fabricjs: Issue with multiple paths of SVG having the exact same gradient

For my web app, I am creating SVG elements in Illustrator and then using them in a library of elements that users can add to the fabric canvas.
Some elements are simple but some complex with multiple compound paths etc.
I have come across an unusual issue: if I create a path with a gradient fill, then copy that path, save the SVG and add it onto the canvas, only the first path has the gradient and the rest are flat colors.
Here is a screenshot of what I mean...
After experimenting and trying different things, I finally discovered that this is happening because the paths have the exact same gradient properties.
So if the gradient sliders (color stops, opacity, location, etc.) of two or more paths have the exact same properties in Illustrator, then the issue occurs.
So the workaround is to alter something, such as the location, to be 99.9% instead of 100% on the copied path; then the issue goes away. However, this quickly becomes a tedious and annoying way to fix it. Basically, each path with a gradient needs a unique gradient setup and cannot be identical to another path's gradient properties.
Here are more screenshots to better explain...
After making this change...
The first and second path's gradient's location are different.
The first, third, fourth and fifth paths have exact same gradient.
This is what it looks like when I add it to the canvas now...
Here is the code I am using to add the SVG to the canvas...
fabric.loadSVGFromURL(image, function(objects, options) {
    var oImg = fabric.util.groupSVGElements(objects, options);
    oImg.perPixelTargetFind = true;
    oImg.targetFindTolerance = 4;
    canvas.add(oImg);
    canvas.renderAll();
});
Can anyone tell me why this is happening and whether there is a way to fix it with code rather than in Illustrator? I have hundreds of elements to create that will have many paths with the same gradients, and it will be a real pain to have to worry about paths not having the exact same gradient.
http://jsfiddle.net/oc70xjsq/
Link to the SVG
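One code-side direction, as a sketch only: assuming Illustrator deduplicates the identical gradients into a single <linearGradient> that every copied path references via fill='url(#...)', you could clone that definition so each path gets its own before handing the markup to fabric (the helper name uniquifyGradients is hypothetical):

function uniquifyGradients(svgText) {
    var doc = new DOMParser().parseFromString(svgText, 'image/svg+xml');
    var seen = {};
    var consumers = doc.querySelectorAll('[fill^="url(#"]');
    Array.prototype.forEach.call(consumers, function (el, i) {
        var id = el.getAttribute('fill').slice(5, -1);  // strip "url(#" and ")"
        var grad = doc.getElementById(id);
        if (!grad) return;
        if (!seen[id]) { seen[id] = true; return; }     // first user keeps the original
        var copy = grad.cloneNode(true);                // later users get their own clone
        copy.setAttribute('id', id + '_copy' + i);
        grad.parentNode.appendChild(copy);
        el.setAttribute('fill', 'url(#' + id + '_copy' + i + ')');
    });
    return new XMLSerializer().serializeToString(doc);
}

The rewritten markup can then be loaded through fabric.loadSVGFromString (with the same callback as above) instead of fabric.loadSVGFromURL.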

Using vtkImageActor for adding mask to vtkImageViewer2

I am developing on an application based on VTK and GDCM for viewing medical (DICOM) images.
The application has three windows that respectively show the XY, YZ and XZ orientations (axial, coronal and sagittal). This is similar to the 2D views here. I use vtkImageViewer2 for this. The voxel values of the DICOM images are passed on to an instance of vtkImageData, which is then passed on to three instances of vtkImageViewer2 (let's call them imageViewerXY, imageViewerYZ and imageViewerXZ). The orientation of each instance of vtkImageViewer2 is then set using SetSliceOrientationToXY(), SetSliceOrientationToYZ() and SetSliceOrientationToXZ(). Without the mask, I can see the slices, couple the windows and scroll through the images perfectly fine.
To add the mask so that it is shown in the three views, I use vtkImageActor. For the XY view, which is the default view, this works fine. I update the instance of vtkImageActor, which I call maskActorXY based on the mouse events of XY window as follows:
int extent[6];
imageViewerXY->GetImageActor()->GetDisplayExtent(extent);
maskActorXY->SetDisplayExtent(extent);
maskActorXY->Update();
imageViewerXY->GetRenderer()->Render();
Now, when I do the same for the other two windows so that I can see the 3D mask in the other two orientations, for example for the YZ orientation,
imageViewerYZ->GetImageActor()->GetDisplayExtent(extent);
maskActorYZ->SetDisplayExtent(extent);
maskActorYZ->Update();
imageViewerYZ->GetRenderer()->Render();
I get an error message that traces to vtkImageData and accessing pixel values outside of the extent set for the mask actor.
I have limited familiarity with VTK, but looking at the source code of vtkImageViewer2 (see UpdateDisplayExtent() on line 341), I don't understand why pixel values outside of the specified display extent are requested from my instances of vtkImageActor that represent the mask.
I found a solution. Since I am not familiar with VTK, I may not be able to provide a clear explanation. All that I needed were the following two lines for each mask, to force its mapper to face the camera:
maskActorYZ->GetMapper()->SliceAtFocalPointOn();
maskActorYZ->GetMapper()->SliceFacesCameraOn();
(see the vtkImageMapper3D class.)
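Applied to the three-view setup from the question, that comes out as something like this sketch (assuming the mask actors are vtkImageActor instances, whose GetMapper() returns a vtkImageMapper3D):

#include <vtkImageActor.h>
#include <vtkImageMapper3D.h>

// Make each mask's mapper slice at its renderer's focal point and face that
// renderer's camera, so the YZ and XZ views stop requesting pixels outside
// the mask actor's display extent.
maskActorXY->GetMapper()->SliceAtFocalPointOn();
maskActorXY->GetMapper()->SliceFacesCameraOn();
maskActorYZ->GetMapper()->SliceAtFocalPointOn();
maskActorYZ->GetMapper()->SliceFacesCameraOn();
maskActorXZ->GetMapper()->SliceAtFocalPointOn();
maskActorXZ->GetMapper()->SliceFacesCameraOn();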

AVVideoComposition AVPlayerItem position of video layer

I used Apple's AVEdit-Demo, tweaked it a little, and was able to add CALayers with animations and images to the video composition. So far, this works fine.
It uses AVVideoComposition and AVPlayer/AVPlayerItem to merge videos (and show them - the export rendering is a little different).
I added a layer with a PNG with some transparent areas, sort of like a mask, that hides parts of the video. Now I need to move the video layer so I can adjust the hidden parts (a.k.a. the visible part). The mask covers the whole screen (in a CALayer), so moving the mask layer isn't an option.
I didn't find any properties or methods to adjust the position of the video layer...
Any ideas?
Found it...
I had to access the AVMutableCompositionTrack in the AVMutableVideoComposition and set the preferredTransform there (using CGAffineTransformTranslate).
However, the docs state that this should be possible in an AVMutableComposition as well (via AVAssetTrack's setPreferredTransform).
I couldn't get this to work, though.
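As a rough Swift sketch of that track-level transform (the surrounding setup and the translation values are placeholders, not the demo's code):

import AVFoundation
import CoreGraphics

let composition = AVMutableComposition()
// ... insert the source asset's video track into the composition here ...

// AVMutableCompositionTrack's preferredTransform is writable (unlike the
// read-only property on a plain AVAssetTrack), so the video layer can be
// shifted by assigning a translation here.
if let videoTrack = composition.tracks(withMediaType: .video).first {
    videoTrack.preferredTransform = CGAffineTransform(translationX: 100, y: 50)
}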

How to blend color of two sprites with constant alpha in DirectX?

Essentially, what I want to do (in DirectX) is to take two partially-transparent images and blend them together. This works fine with default blending, insofar as they both show up as overlapping, etc. However, the problem is that the opacity goes up markedly where the two intersect. This causes increasing problems as more sprites overlap. What I'd like to do is keep the blending the same, except keep a global opacity for all these sprites being blended, regardless of how they overlap.
Seems like there would be a render setting for this (all of these sprites are alone in their sprite batch, which keeps that part easy), but if so I don't know it. Right now I'm kind of shooting in the dark, and I've tried a lot of different things and none of them have looked right at all. I know I probably need some variant of D3DBLENDOP, but I just don't know what settings I really need (I have tried many things, but it is all guessing at this stage).
Here is a screenshot of what is actually happening with standard blending (the best I can get it): http://arcengames.com/share/FFActual.png Here is a screenshot with a mockup of how I would want the blending to turn out (the forcefields were added to the same layer in Photoshop, then given a shared alpha value): http://arcengames.com/share/FFMockup.png
This is how I did it in Photoshop:
1. Take the two images, and remove all transparency (completely transparent pixels excepted).
2. Combine them into one layer, which blends the color but which has no partial alpha at all.
3. Now set the global transparency for that layer to (say) 40%.
The result is something that looks kind of blended together color-wise, but which has no increase in opaqueness on the overlapped sections.
UPDATE: Okay, thanks very much to Goz below, who suggested using the Z-Buffer. That works! The blending, by and large, is perfect and just what I would want. The only remaining problem? Using that new method, there is a huge artifact around the edge of the force field image that is rendered last. See this: http://www.arcengames.com/share/FFZBuffer.png
UPDATE: Below is the final solution in C# (SlimDX)
Clearing the ZBuffer to black, transparent, or white once per frame all has the same effect (this is right before BeginScene is called)
Direct3DWrapper.ClearDevice( SlimDX.Direct3D9.ClearFlags.ZBuffer, Color.Transparent, 0 );
All other sprites are drawn at Z=1, with the ZBuffer disabled for them:
device.SetRenderState( RenderState.ZEnable, ZBufferType.DontUseZBuffer );
The force field sprites are drawn at Z=2, with the ZBuffer enabled and ZWrite enabled and ZFunc as Less:
device.SetRenderState( RenderState.ZEnable, ZBufferType.UseZBuffer );
device.SetRenderState( RenderState.ZWriteEnable, true );
device.SetRenderState( RenderState.ZFunc, Compare.Less );
The following flags are also set at this time, to prevent the black border artifact I encountered:
device.SetRenderState( RenderState.AlphaTestEnable, true );
device.SetRenderState( RenderState.AlphaFunc, Compare.GreaterEqual );
device.SetRenderState( RenderState.AlphaRef, 55 );
Note that AlphaRef is at 55 because of the alpha levels set in the specific source image I was using. If my source image had a higher alpha value, then the AlphaRef would also need to be higher.
Best I can tell, the forcefields are a whole object. Why not render them last, in front-to-back order, and with Z-buffering enabled? That will give you the effect you are after.
I.e., it's not your blending settings that are the problem at all.
Edit: Can you use render-to-texture, then? If so, you could easily do what you did in Photoshop: render them all together into the texture, then blend the texture back over the screen.
Edit2: How about
ALPHATESTENABLE = TRUE;
ALPHAFUNC = LESS;
ALPHABLENDENABLE = TRUE;
SRCBLEND = SRCALPHA;
DESTBLEND = INVSRCALPHA;
SEPARATEALPHABLENDENABLE = TRUE;
SRCBLENDALPHA = ONE;
DESTBLENDALPHA = ZERO;
You need to make sure the alpha is cleared to 0xff in the frame buffer each frame. You then do the standard alpha blend while passing the alpha value straight through to the backbuffer. This, though, is where the alpha test comes in. You test the final alpha value against the one in the back buffer. If it is less than what's in the backbuffer, then that pixel has not been blended yet and will be put into the frame buffer. If it is equal (or greater), then it HAS been blended already and the alpha value will be discarded.
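For reference, in SlimDX (which the final solution above uses) those states might be set roughly as follows - a sketch, with the enum names assumed from the SlimDX Direct3D9 bindings:
device.SetRenderState( RenderState.AlphaTestEnable, true );
device.SetRenderState( RenderState.AlphaFunc, Compare.Less );
device.SetRenderState( RenderState.AlphaBlendEnable, true );
device.SetRenderState( RenderState.SourceBlend, Blend.SourceAlpha );
device.SetRenderState( RenderState.DestinationBlend, Blend.InverseSourceAlpha );
device.SetRenderState( RenderState.SeparateAlphaBlendEnable, true );
device.SetRenderState( RenderState.SourceBlendAlpha, Blend.One );
device.SetRenderState( RenderState.DestinationBlendAlpha, Blend.Zero );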
That said ... using a Z-buffer would cost you a load of RAM, but would be faster overall, as it would be able to throw away the pixels far earlier in the pipeline. Seeing as all the shields would just need to be written to a given Z-plane, you wouldn't even need to go through the hell I suggested earlier. If the Z value it receives is less than what's there already, then it will render the pixel; if it is greater or equal, then it will discard it, fortunately before the blend calculation is ever performed.
That said ... you could also do it using the stencil buffer, which would require a Z-buffer anyway.
Anyway ... hope one of those methods is of some help.
Edit3: Do you render the forcefield with some form of feathering around the edge? Most likely that edge is caused by the fact that the alpha fades off slightly, and then the "slightly alpha" pixels are getting written to the Z-buffer, and hence any subsequent draw doesn't overwrite them.
Try the following settings
ALPHATESTENABLE = TRUE
ALPHAFUNC = GREATEREQUAL // if this doesn't work, try LESS - I may have the comparison backwards
ALPHAREF = 255
To fine-tune the feathering around the edge, adjust the ALPHAREF, but I'd suspect you need to keep it as above.
You can specify the D3DBLENDOP used when blending the two images together for the alpha channel. It sounds like you're using D3DBLENDOP_ADD currently - try switching this to D3DBLENDOP_MAX, as that will just use the opacity of the "most opaque" image.
It is hard to tell exactly what you are trying to accomplish from your mockup, since both forcefields are the same color; do you want to blend the colors and cap the alpha, or just take one of the colors?
Based on the above discussion, it isn't clear whether you are setting all the relevant render states:
D3DRS_ALPHABLENDENABLE = TRUE (default: FALSE)
D3DRS_BLENDOP = D3DBLENDOP_MAX (default: D3DBLENDOP_ADD)
D3DRS_SRCBLEND = D3DBLEND_ONE (default: D3DBLEND_ONE)
D3DRS_DESTBLEND = D3DBLEND_ONE (default: D3DBLEND_ZERO)
It sounds like you are setting the first two, but what about the last two?
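In SlimDX terms that combination would look roughly like this (a sketch; BlendOperation.Maximum is assumed to be the binding's name for D3DBLENDOP_MAX):
device.SetRenderState( RenderState.AlphaBlendEnable, true );
device.SetRenderState( RenderState.BlendOperation, BlendOperation.Maximum );
device.SetRenderState( RenderState.SourceBlend, Blend.One );
device.SetRenderState( RenderState.DestinationBlend, Blend.One );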
