I am using cytoscape.js and I have added layouts to my application (both continuous and discrete ones). For the discrete ones (like circle, random, concentric, etc.) animation works great. I know that continuous layouts are based on a real-time simulation. That works nicely for the springy layout, but (for the same data) the cose and spread layout simulations are not visible (or not working, though I suspect they are computed so fast that they finish before the page is fully loaded). Is there any possibility to slow them down so the simulation/animation becomes visible? I have read the documentation and tried changing settings (animationThreshold, maxSimulationTime, initialTemp, maxFruchtermanReingoldIterations, maxExpandIterations), but it is not helping.
Here are examples with config values.
for spread layout:
expandingFactor: -1.0
maxFruchtermanReingoldIterations: 20
maxExpandIterations: 20
animate: true
maxSimulationTime: 5000
randomize: false
and for cose:
animationThreshold: 2500
animate: true
nodeRepulsion: (node) ->
400000
gravity: 80
numIter: 1000
initialTemp: 200
coolingFactor: 0.99
minTemp: 1.0
randomize: false
maxSimulationTime: 50000
Thanks for the help,
Izabella
Force-directed / physics-simulation layouts run and animate in real time. Artificially slowing one down would spread it out, but it would also make it really choppy.
CoSE-Bilkent has animate: 'end' support. That feature, assuming that the end result of the layout is generated near instantly, animates from the start positions to the end positions with the Cytoscape.js animation APIs. This gives a similar effect to a discrete layout.
The bundled CoSE will have animate: 'end' support in 3.2. If you want to see the feature in other layout extensions, feel free to make a pull request (maybe based on CoSE-Bilkent's approach).
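For reference, a minimal sketch of running cose-bilkent with the end animation, assuming the cytoscape-cose-bilkent extension is registered and cy is your core instance (the animationDuration value is just illustrative):
cy.layout({
  name: 'cose-bilkent',
  animate: 'end',          // compute positions first, then animate nodes to them
  animationDuration: 1000, // length of the end animation in milliseconds
  randomize: false
});
In Cytoscape.js 3.x the layout object is returned rather than run immediately, so you would chain .run() onto the call.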
How can I adjust the zoom level at which an Azure indoor map shows its details?
As per the tutorial (https://learn.microsoft.com/en-us/azure/azure-maps/how-to-use-indoor-module), the zoom level on load is defined in the HTML as:
new atlas.Map("map-id", {
    // ...other map options
    zoom: 19,
});
The indoor map will only show at zoom level 19 or higher.
If I zoom out of the map or change the base zoom level value in the HTML, the building details are immediately no longer visible; only an outline remains.
Thanks for the help!
Currently it only shows details at zoom levels 19+. I don't believe there is an option to change this at the moment, but it is being looked into.
Curious about the need here. Does your building cover a very large area?
Do you want to be able to zoom in past level 22? If so, add maxZoom: 24 to the map options when loading.
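For example, a minimal sketch of the map options with that flag added (the authentication block is just a placeholder):
new atlas.Map("map-id", {
    zoom: 19,
    maxZoom: 24,  // allow zooming in past level 22
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<your-subscription-key>'
    }
});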
I have an Azure Map and I have implemented a custom "change style" button that calls map.setStyle() when pressed. When I call this function with 'satellite' the switch is very quick. Like this:
map.setStyle({ 'style': 'satellite' });
But when I change the style "back" to 'road' the switch is super slow. Like this:
map.setStyle({ 'style': 'road' });
Am I doing something wrong?
The road style has to load the styling layers for rendering the map, whereas the satellite style only renders image tiles with no road data. The team is working on the loading performance of the base map styles, which should help with this. I don't have an exact ETA yet, but likely in the next few months.
I am using SVGPanZoom to manage the zooming of an SVG image in my hybrid Android (for all intents and purposes the same behavior as in Chrome) app. While zooming works well I have found a strange issue. My original inline SVG element goes like this
<svg id='puzzle' viewBox='0 0 1600 770' preserveAspectRatio='none'
     width='100vw' height='85.5vh' fill-rule='evenodd' clip-rule='evenodd'
     stroke-linejoin='round' stroke-miterlimit='1.414'
     xmlns='http://www.w3.org/2000/svg'
     xmlns:xlink='http://www.w3.org/1999/xlink'>
Initially this SVG element is empty and gets populated programmatically from JavaScript at run time after which I initiate SVGPanZoom as follows
var panZoom = svgPanZoom('#puzzle',
{panEnabled:false,controlIconsEnabled:false,
zoomEnabled:true,dblClickZoomEnabled:true,onZoom:postZoom});
panZoom.refreshRate = 10;
panZoom.zoomScaleSensitivity = 0.02;
The problem I have run into is this: I want my SVG image to fill the available area, 100vw x 85.5vh, completely, which I instruct it to do via the preserveAspectRatio="none" attribute above along with the viewBox="0 0 1600 770" attribute. I have found that this works - so long as I don't use SVGPanZoom. As soon as I initiate panZoom, the viewBox attribute gets stripped out and I end up with an image that does not quite behave in terms of its default stretching/filling behavior.
SVGPanZoom is widely used, so I assume that this behavior is down to me not quite setting it up properly. Dipping into the code, I have found that SVGPanZoom creates a cacheViewBox and then proceeds to remove the original viewBox attribute.
Which would be fine if, after that, zooming worked and the original behavior of the application did not change - but that is not what I find. What am I doing wrong here?
I've also run into this issue recently. From my research, this is just how the library works. I chose to live with this limitation for now but I found a couple other libraries that may work the way you intend (I haven't tried them yet):
jquery.panzoom is a jquery library that provides this functionality and also has some nice features. I know many people try to avoid jquery but it's pretty small and may do what you want. It handles SVG but I don't know what it does with the viewBox attribute.
react-svg-pan-zoom is a react component which may be useful if you are working in react.
I've also tried the PanZoom library but this also suffers the same viewBox limitation.
A note for anyone running into this thread. In the end I abandoned SVGPanZoom and decided to eschew using any pan/zoom library at all. At the same time I decided to completely stop using the SVG viewBox and to handle all zooming/panning entirely on my own through SVG transforms. The core steps involved:
Wrap the entire SVG contents in a group to make it easier to manage the transform. I use the id attribute gOuter for this group
Set an initial scale for the SVG to occupy the desired client rectangle. In my case I had an original viewBox of 0 0 1600 770 intended to occupy 100% of screen width and 85% of screen height, so my scaling was scaleX = 1600/window.innerWidth and scaleY = 770/(0.85*window.innerHeight).
Apply this initial transform to the wrapping outer group: gOuter.setAttribute('transform', `scale(${scaleX},${scaleY})`)
Now, in order to zoom to an object whose virtual top left-hand coordinates in the original viewBox were Ox,Oy, you would use the transform
gOuter.setAttribute('transform',
  `scale(${scaleX},${scaleY}) translate(${-Ox},${-Oy}) scale(${2*scaleX},${2*scaleY}) translate(${Ox},${Oy})`);
to zoom in by a factor of 2. The important things to understand here:
In SVG, transformations are applied right to left.
Here we are translating the zoom point to the top left-hand corner, scaling, and then translating it back to its original location.
We also need to allow for the original level of zoom through the initial scaling, so we tag that on as one last transform.
This leaves you in complete control of the zooming process and, as a fringe benefit, the operation becomes considerably smoother than when using a pan/zoom library.
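As a minimal sketch of the same idea, assuming the wrapping group has id gOuter and the drawing coordinates span 1600 x 770 as above (the zoomAbout helper name is mine, and it uses the standard translate-scale-translate order for zooming about a fixed point):
var gOuter = document.getElementById('gOuter');
var scaleX = 1600 / window.innerWidth;           // initial scale, as described above
var scaleY = 770 / (0.85 * window.innerHeight);

function zoomAbout(Ox, Oy, k) {
  // rightmost operations apply first: move the zoom point to the origin,
  // scale by k, move it back, then re-apply the initial scale
  gOuter.setAttribute('transform',
    `scale(${scaleX},${scaleY}) translate(${Ox},${Oy}) scale(${k},${k}) translate(${-Ox},${-Oy})`);
}

zoomAbout(800, 385, 2); // zoom in x2 about the centre of the 1600 x 770 drawing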
I need to build a custom-designed bar chart that displays some simple data. Below are my requirements. Can anyone suggest the best web technology for my requirements?
high browser compatibility
ability to draw shapes
ability to fill shapes with gradients
ability to have onclick and onmouseover events for the different shapes (bars in the chart).
Thanks guys. I was thinking of using svg but looking for suggestions.
How about Raphaël - it's SVG/VML.
It says:
Browser compatibility:
Raphaël currently supports Firefox 3.0+, Safari 3.0+, Opera 9.5+ and Internet Explorer 6.0+.
Ability to draw shapes
circle, rect, ellipse, image, text, path
Ability to fill shapes with gradients
yes
Ability to have onclick and onmouseover events
yes:
... every graphical object you create is also a DOM object, so you can attach JavaScript event handlers or modify them later.
Everything is in the reference.
On top of that, there's a plugin called gRaphael which makes the creation of charts easier.
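To give a feel for it, here is a rough sketch of a single gradient-filled bar with click and mouseover handlers (the container id, sizes, and colours are made up; it assumes raphael.js is loaded and a <div id="chart"> exists):
var paper = Raphael('chart', 400, 300);                    // draw into <div id="chart">
var bar = paper.rect(50, 100, 40, 180);                    // x, y, width, height
bar.attr({ fill: '90-#3b6ea5-#9ecae1', stroke: 'none' });  // 90-degree linear gradient
bar.click(function () { alert('bar clicked'); });
bar.mouseover(function () { bar.attr({ opacity: 0.8 }); });
bar.mouseout(function () { bar.attr({ opacity: 1 }); });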
Simple data - Google Charts API or Google Visualization API may suit you.
Details for all features of image charts can be found on the Chart feature list
You may also take a look at the comparison of the Charts API and the Visualization API.
Another candidate of course is JQuery SVG - if you're already familiar with jquery you may prefer this one.
There's a comparison of JQuery SVG and Raphaël on SO:
jQuery SVG vs. Raphael
I recommend using Adobe Flex. Below is an example of how easy pie chart creation can be in Flex:
<mx:Panel title="Pie Chart">
<mx:PieChart id="myChart"
dataProvider="{expenses}"
showDataTips="true"
>
<mx:series>
<mx:PieSeries
field="Amount"
nameField="Expense"
labelPosition="callout"
/>
</mx:series>
</mx:PieChart>
<mx:Legend dataProvider="{myChart}"/>
</mx:Panel>
Based on your criteria:
high browser compatibility: Flex runs in any browser that has the Flash Player, which is installed on more than 95% of browsers, and it behaves the same in all of them. There is no more need to check whether your web app is running in IE, Firefox, Chrome, etc.
ability to draw shapes: Flash's greatest strength is the ability to draw. Charts are completely customizable and skinnable to achieve the look you need.
Ability to fill shapes with gradients - done easily by setting style attributes or a custom skin.
ability to have onclick and onmouseover events for the different shapes - see this link for some easy ways to create user interactions with charts.
Hi, I hope this link may help you; I found it while searching for a solution similar to what you're looking for:
http://www.artetics.com/Articles/using-various-javascript-libraries-to-create-pie-chart
I'm trying gRaphael, though I'm having difficulty finding documentation. You have to read the code and use the exploded source instead of the min.js.
I would like to share jquery.jqplot.js. It has lots of jQuery options, but it depends on other plugins such as SyntaxHighlighter, etc.
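As a rough sketch of a jqPlot bar chart, assuming jquery.jqplot.js plus its barRenderer and categoryAxisRenderer plugins are loaded and a <div id="chart1"> exists (the data and labels are made up):
$(function () {
  $.jqplot('chart1', [[12, 9, 14]], {
    seriesDefaults: {
      renderer: $.jqplot.BarRenderer            // draw the series as bars
    },
    axes: {
      xaxis: {
        renderer: $.jqplot.CategoryAxisRenderer,
        ticks: ['A', 'B', 'C']
      }
    }
  });
});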
Essentially, what I want to do (in DirectX) is to take two partially-transparent images and blend them together. This works fine with default blending, insofar as they both show up as overlapping, etc. However, the problem is that the opacity goes up markedly where the two intersect. This causes increasing problems as more sprites overlap. What I'd like to do is keep the blending the same, except keep a global opacity for all these sprites being blended, regardless of how they overlap.
Seems like there would be a render setting for this (all of these sprites are alone in their sprite batch, which keeps that part easy), but if so I don't know it. Right now I'm kind of shooting in the dark, and I've tried a lot of different things and none of them have looked right at all. I know I probably need some sort of variant of D3DBLENDOP, but I just don't know what sort of settings there I really need (I have tried many things, but it is all guessing at this stage).
Here is a screenshot of what is actually happening with standard blending (the best I can get it): http://arcengames.com/share/FFActual.png Here is a screenshot with a mockup of how I would want the blending to turn out (the forcefields were added to the same layer in Photoshop, then given a shared alpha value): http://arcengames.com/share/FFMockup.png
This is how I did it in Photoshop:
1. Take the two images, and remove all transparency (completely transparent pixels excepted).
2. Combine them into one layer, which blends the color but which has no partial alpha at all.
3. Now set the global transparency for that layer to (say) 40%.
The result is something that looks kind of blended together color-wise, but which has no increase in opaqueness on the overlapped sections.
UPDATE: Okay, thanks very much to Goz below, who suggested using the Z-Buffer. That works! The blending, by and large, is perfect and just what I would want. The only remaining problem? Using that new method, there is a huge artifact around the edge of the force field image that is rendered last. See this: http://www.arcengames.com/share/FFZBuffer.png
UPDATE: Below is the final solution in C# (SlimDX)
Clearing the ZBuffer to black, transparent, or white once per frame all has the same effect (this is right before BeginScene is called)
Direct3DWrapper.ClearDevice( SlimDX.Direct3D9.ClearFlags.ZBuffer, Color.Transparent, 0 );
All other sprites are drawn at Z=1, with the ZBuffer disabled for them:
device.SetRenderState( RenderState.ZEnable, ZBufferType.DontUseZBuffer );
The force field sprites are drawn at Z=2, with the ZBuffer enabled and ZWrite enabled and ZFunc as Less:
device.SetRenderState( RenderState.ZEnable, ZBufferType.UseZBuffer );
device.SetRenderState( RenderState.ZWriteEnable, true );
device.SetRenderState( RenderState.ZFunc, Compare.Less );
The following flags are also set at this time, to prevent the black border artifact I encountered:
device.SetRenderState( RenderState.AlphaTestEnable, true );
device.SetRenderState( RenderState.AlphaFunc, Compare.GreaterEqual );
device.SetRenderState( RenderState.AlphaRef, 55 );
Note that AlphaRef is at 55 because of the alpha levels set in the specific source image I was using. If my source image had a higher alpha value, then the AlphaRef would also need to be higher.
Best I can tell, the forcefields are a whole object. Why not render them last, in front-to-back order, and with Z-buffering enabled? That will give you the effect you are after.
I.e., it's not your blending settings that are the problem at all.
Edit: Can you use render-to-texture then? If so, you could easily do what you did in Photoshop. Render them all together into the texture and then blend the texture back over the screen.
Edit2: How about
ALPHATESTENABLE = TRUE;
ALPHAFUNC = LESS
ALPHABLENDENABLE = TRUE;
SRCBLEND = SRCALPHA;
DESTBLEND = INVSRCALPHA;
SEPARATEALPHABLENDENABLE = TRUE;
SRCBLENDALPHA = ONE;
DESTBLENDALPHA = ZERO;
You need to make sure the alpha is cleared to 0xff in the frame buffer each frame. You then do the standard alpha blend while passing the alpha value straight through to the backbuffer. This, though, is where the alpha test comes in. You test the final alpha value against the one in the back buffer. If it is less than what's in the backbuffer, then that pixel has not been blended yet and will be put into the frame buffer. If it is equal (or greater), then it HAS been blended already and the alpha value will be discarded.
That said ... using a Z-buffer would cost you a load of RAM, but it would be faster overall, as it would be able to throw away the pixels far earlier in the pipeline. Seeing as all the shields would just need to be written to a given Z-plane, you wouldn't even need to go through the hell I suggested earlier. If the Z value it receives is less than what's there already, it will render; if it is greater or equal, it will discard the pixel, fortunately before the blend calculation is ever performed.
That said ... you could also do it by using the stencil buffer which would require a Z-buffer anyway.
Anyway ... hope one of those methods is of some help.
Edit3: Do you render the forcefield with some form of feathering around the edge? Most likely that edge is caused by the fact that the alpha fades off slightly, and then the "slightly alpha" pixels are getting written to the Z-buffer, so any subsequent draw doesn't overwrite them.
Try the following settings:
ALPHATESTENABLE = TRUE
ALPHAFUNC = GREATEREQUAL // if this doesn't work, try LESS
ALPHAREF = 255
To fine-tune the feathering around the edge, adjust ALPHAREF, but I suspect you need to keep it as above.
You can specify the D3DBLENDOP used when blending the two images together for the alpha channel. It sounds like you're using D3DBLENDOP_ADD currently - try switching this to D3DBLENDOP_MAX, as that will just use the opacity of the "most opaque" image.
It is hard to tell exactly what you are trying to accomplish from your mock up since both forcefields are the same color; do you want to blend the colors and cap the alpha? Just take one of the colors?
Based on the above discussion, it isn't clear whether you are setting all the relevant render states:
D3DRS_ALPHABLENDENABLE = TRUE (default: FALSE)
D3DRS_BLENDOP = D3DBLENDOP_MAX (default: D3DBLENDOP_ADD)
D3DRS_SRCBLEND = D3DBLEND_ONE (default: D3DBLEND_ONE)
D3DRS_DESTBLEND = D3DBLEND_ONE (default: D3DBLEND_ZERO)
It sounds like you are setting the first two, but what about the last two?