Using three.js, I'm working on a web page to display a flip cube (a.k.a. magic cube; see e.g. the video on this page).
On a flip cube, there are typically images that are spread out across multiple pieces of the cube. For example, the boat image shown above is spread across the faces of four cubelets. In three.js terms, there are multiple meshes that need to use the same image for their material texture, but each at a different offset.
As far as I understand it, in three.js, offset is a property of a texture, not of a material or a mesh. Therefore, it would appear that you cannot have a single texture that is used at a different offset in two different places.
So does that mean that in order to have different parts of the boat image shown on four different faces, I have to create four separate textures, meaning that we load the boat image into memory four times? I'm hoping that's not the case.
Here's a relevant piece of the code:
// create an array with the textures
var textureArray = [];
var texNames = ['boat', 'camels', 'elephants', 'hippo',
'natpark', 'ostrich', 'coatofarms-w', 'kenyamap-w', 'nairobi-w'];
texNames.map(function(texName) {
textureArray.push(THREE.ImageUtils.loadTexture(
'images/256/' + texName + '.jpg' ));
});
// Create a material for each texture.
for (var x=0; x <= 1; x++) {
for (var y=0; y <= 1; y++) {
for (var z=0; z <= 1; z++) {
var materialArray = [];
textureArray.map(function(tex) {
// Learned: cannot set this offset for one material,
// without it affecting all materials that use this texture.
tex.offset.x = x * 0.2;
tex.offset.y = y * 0.2;
materialArray.push(new THREE.MeshBasicMaterial( { map: tex }));
});
var cubeMaterial = new THREE.MeshFaceMaterial(materialArray.slice(0, 6));
var cube = new THREE.Mesh( cubeGeom, cubeMaterial );
cube.position.set(x * 50 - 25, y * 50 - 25, z * 50 - 25);
scene.add(cube);
}
}
}
If you look at it on http://www.huttar.net/lars-kathy/tmp/flipcube.html, you'll see that all the texture images are displayed offset by the same amount on each cubelet face, even though they are set to different offsets on different cubelets. This seems to confirm that you can't have different uses of the same texture with different offsets.
How can I get different meshes to use the same texture at different offsets, so I don't have to load the same image multiple times into multiple textures?
What you say is true. Instead of adjusting the texture offsets, adjust the face vertex UVs of the geometry.
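For example, a minimal sketch of that approach (assuming the r58 Geometry API, where the per-face UVs live in geometry.faceVertexUvs[0]; the offset and scale parameters here are illustrative, not from your code):

function shiftUvs( geometry, uOffset, vOffset, uScale, vScale ) {
    geometry.faceVertexUvs[ 0 ].forEach( function ( faceUvs ) {
        faceUvs.forEach( function ( uv ) {
            // remap this geometry's UVs into a sub-region of the shared texture
            uv.x = uv.x * uScale + uOffset;
            uv.y = uv.y * vScale + vOffset;
        } );
    } );
    geometry.uvsNeedUpdate = true; // tell three.js the UVs changed
}

Note that each cubelet would then need its own copy of the geometry, so that shifting one cubelet's UVs does not affect the others.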
EDIT: There is another solution more in line with what you want to do. You can clone a texture like so:
var tex = texture.clone(); // clone an existing, already-loaded texture
Cloning a texture will result in the loaded image being reused, and the new texture can have its own offsets. Do not try to clone the texture until the image loads, however.
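For instance, a rough usage sketch (reusing the paths and offset values from the question's code, and the onLoad callback of THREE.ImageUtils.loadTexture so the clone only happens after the image has loaded):

THREE.ImageUtils.loadTexture( 'images/256/boat.jpg', undefined, function ( texture ) {
    var tex = texture.clone();   // reuses texture.image, so nothing is downloaded twice
    tex.needsUpdate = true;      // the clone must be flagged for upload
    tex.offset.set( 0.2, 0.2 );  // this clone's own offset
    var material = new THREE.MeshBasicMaterial( { map: tex } );
    // ... build the cubelet mesh with this material
} );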
With this alternate approach, you do not have to adjust UVs, and you do not have to load an image more than once.
three.js r.58
I am writing a spatial shader in Godot to pixelate an object.
Previously, I tried to write outside of an object; however, that is only possible in CanvasItem shaders, and now I am going back to 3D shaders due to rendering annoyances (I am unable to selectively hide items without using the culling mask, which, being limited to 20 layers, is not an extensible solution).
My naive approach:
Define a pixel "cell" resolution (i.e. 3x3 real pixels)
For each fragment:
If the entire "cell" of real pixels is within the model's draw bounds, color the current pixel as per the lower-left pixel (the pixel whose coordinates are a multiple of the cell resolution).
If any pixel of the current "cell" is out of the draw bounds, set alpha to 1 to erase the entire cell.
Pseudo-code, for people asking for code, of the likely non-existent functionality that I am seeking:
int cell_size = 3;
fragment {
    // find the lower-left corner of the cell this fragment belongs to
    vec2 cell_origin = FRAGCOORD.xy - mod(FRAGCOORD.xy, float(cell_size));
    int erase_pixel = 0;
    // check every real pixel within the cell to see if it is part of the object being drawn to
    for (int y = 0; y < cell_size; y++) {
        for (int x = 0; x < cell_size; x++) {
            if (uv_in_model(cell_origin + vec2(float(x), float(y))) == false) {
                erase_pixel = 1;
            }
        }
    }
    albedo.a = erase_pixel;
}
tl;dr, is it possible to know if any given point will be called by the fragment function?
On your object's material there should be a property called Next Pass. Add a new Spatial Material in this section, open up flags and check transparent and unshaded, and then right-click it to bring up the option to convert it to a Shader Material.
Now, open up the new Shader Material's Shader. The last process should have created a Shader formatted with a fragment() function containing the line vec4 albedo_tex = texture(texture_albedo, base_uv);
In this line, you can replace "texture_albedo" with "SCREEN_TEXTURE" and "base_uv" with "SCREEN_UV". This should make the new shader look like nothing has changed, because the next pass material is just sampling the screen from the last pass.
Above that, make a variable called something along the lines of "pixelated" and set it to the following expression:
vec2 pixelated = floor(SCREEN_UV * scale) / scale;
where scale is a float or vec2 containing the pixel size. Finally, replace SCREEN_UV in the albedo_tex definition with pixelated.
After this, you can have a float depth which samples DEPTH_TEXTURE with pixelated like this:
float depth = texture(DEPTH_TEXTURE, pixelated).r;
This depth value will be very large for pixels that are just trying to render the background onto your object. So, add a conditional statement:
if (depth > 100000.0f) { ALPHA = 0.0f; }
As long as the flags on this new next pass shader were set correctly (transparent and unshaded) you should have a quick-and-dirty pixelator. I say this because it has some minor artifacts around the edges, but you can make scale a uniform variable and set it from the editor and scripts, so I think it works nicely.
"Testing if a pixel is modifiable" in your case means testing if the object should be rendering it at all with that depth conditional.
Here's the full shader with my modifications from the comments
// NOTE: Shader automatically converted from Godot Engine 3.4.stable's SpatialMaterial.
shader_type spatial;
render_mode blend_mix,depth_draw_opaque,cull_back,unshaded;
//the size of pixelated blocks on the screen relative to pixels
uniform int scale;
void vertex() {
}
//vec2 representation of one used for calculation
const vec2 one = vec2(1.0f, 1.0f);
void fragment() {
//scale SCREEN_UV up to the size of the viewport over the pixelation scale
//assure scale is a multiple of 2 to avoid artefacts
vec2 pixel_scale = VIEWPORT_SIZE / float(scale * 2);
vec2 pixelated = SCREEN_UV * pixel_scale;
//truncate the decimal place from the pixelated uvs and then shift them over by half a pixel
pixelated = pixelated - mod(pixelated, one) + one / 2.0f;
//scale the pixelated uvs back down to the screen
pixelated /= pixel_scale;
vec4 albedo_tex = texture(SCREEN_TEXTURE,pixelated);
ALBEDO = albedo_tex.rgb;
ALPHA = 1.0f;
float depth = texture(DEPTH_TEXTURE, pixelated).r;
if (depth > 10000.0f)
{
ALPHA = 0.0f;
}
}
Please refer to this article.
I have implemented section 4.1 (Pre-processing).
The preprocessing step aims to enhance image features along a set of
chosen directions. First, image is grey-scaled and filtered with a
sharpening filter (we subtract from the image its local-mean filtered
version), thus eliminating the DC component.
We selected 12 not overlapping filters, to analyze 12 different
directions, rotated with respect to 15° each other.
GitHub Repository is here.
Since the given formula in the article is incorrect, I have tried two different sets of formulas.
The first set of formulas:
The second set of formulas:
The expected output should be:
Neither of them gives proper results.
Can anyone suggest a modification?
GitHub Repository is here.
The most relevant part of the source code is here:
public List<Bitmap> Apply(Bitmap bitmap)
{
Kernels = new List<KassWitkinKernel>();
double degrees = FilterAngle;
KassWitkinKernel kernel;
for (int i = 0; i < NoOfFilters; i++)
{
kernel = new KassWitkinKernel();
kernel.Width = KernelDimension;
kernel.Height = KernelDimension;
kernel.CenterX = (kernel.Width) / 2;
kernel.CenterY = (kernel.Height) / 2;
kernel.Du = 2;
kernel.Dv = 2;
kernel.ThetaInRadian = Tools.DegreeToRadian(degrees);
kernel.Compute();
//SleuthEye
kernel.Pad(kernel.Width, kernel.Height, WidthWithPadding, HeightWithPadding);
Kernels.Add(kernel);
degrees += degrees;
}
List<Bitmap> list = new List<Bitmap>();
Bitmap image = (Bitmap)bitmap.Clone();
//PictureBoxForm f = new PictureBoxForm(image);
//f.ShowDialog();
Complex[,] cImagePadded = ImageDataConverter.ToComplex(image);
Complex[,] fftImage = FourierTransform.ForwardFFT(cImagePadded);
foreach (KassWitkinKernel k in Kernels)
{
Complex[,] cKernelPadded = k.ToComplexPadded();
Complex[,] convolved = Convolution.ConvolveInFrequencyDomain(fftImage, cKernelPadded);
Bitmap temp = ImageDataConverter.ToBitmap(convolved);
list.Add(temp);
}
return list;
}
Perhaps the first thing that should be mentioned is that the filters should be generated with angles that increase in increments of FilterAngle (in your case, 15 degrees). This can be accomplished by modifying KassWitkinFilterBank.Apply as follows (see this commit):
public List<Bitmap> Apply(Bitmap bitmap)
{
// ...
// The generated template filter from the equations gives a line at 45 degrees.
// To get the filter to highlight lines starting with an angle of 90 degrees
// we should start with an additional 45 degrees offset.
double degrees = 45;
KassWitkinKernel kernel;
for (int i = 0; i < NoOfFilters; i++)
{
// ... setup filter (unchanged)
// Now increment the angle by FilterAngle
// (not "+= degrees" which doubles the value at each step)
degrees += FilterAngle;
}
This should give you the following result:
It is not quite the result from the paper and the differences between the images are still quite subtle, but you should be able to notice that the scratch line is most intense in the 8th figure (as would be expected since the scratch angle is approximately 100-105 degrees).
To improve the result, we should feed the filters with a pre-processed image in the same way as described in the paper:
First, image is grey-scaled and filtered with a sharpening filter (we subtract from the image its local-mean filtered version), thus eliminating the DC component
When you do so, you will get a matrix of values, some of which will be negative. As a result, this intermediate processing result is not suitable to be stored as a Bitmap. As a general rule when performing image processing, you should keep all intermediate results in double or Complex as appropriate, and only convert the final result back to a Bitmap for visualization.
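For illustration only (this is a sketch, not code from the repository; the method name and window size are assumptions), the sharpening step on a grey-scaled image held as double[,] could look something like this:

// Subtract the local-mean (box-filtered) version of the image from the image
// itself, which removes the DC component as described in the paper.
public static double[,] SharpenByLocalMean(double[,] image, int windowSize)
{
    int width = image.GetLength(0);
    int height = image.GetLength(1);
    int radius = windowSize / 2;
    double[,] result = new double[width, height];
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            double sum = 0.0;
            int count = 0;
            for (int dx = -radius; dx <= radius; dx++)
            {
                for (int dy = -radius; dy <= radius; dy++)
                {
                    int xx = x + dx, yy = y + dy;
                    if (xx >= 0 && xx < width && yy >= 0 && yy < height)
                    {
                        sum += image[xx, yy];
                        count++;
                    }
                }
            }
            result[x, y] = image[x, y] - sum / count; // image minus its local mean
        }
    }
    return result;
}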
Integrating your changes to add image sharpening from your GitHub repository, while keeping intermediate results as doubles, can be achieved by changing the input bitmap and temporary image variables to use the double[,] datatype instead of Bitmap in the KassWitkinFilterBank.Apply method (see this commit):
public List<Bitmap> Apply(double[,] bitmap)
{
// [...]
double[,] image = (double[,])bitmap.Clone();
// [...]
}
which should give you the following result:
Or to better highlight the difference, here is figure 1 (0 degrees) on the left, next to figure 8 (105 degrees) on the right:
Overview
In my app (which is a game), I make use of batching of items to reduce the number of draw calls. So, for example, I'll create a Java object called platforms, which holds all the platforms in the game. All the enemies are batched together, as are all collectible items, etc.
This works really well. At present I am able to size and position the individual items in a batch independently of each other; however, I've come to the point where I really need to change the opacity of individual items as well. Currently, I can only change the opacity of the entire batch.
Batching
I am uploading the vertices for all items within the batch that are to be displayed (I can turn individual items off if I don't want them to be drawn), and then once they are all done, I simply draw them in one call.
The following is an idea of what I'm doing - I realise this may not compile, it is just to give an idea for the purpose of the question.
public void draw(){
//Upload vertices
for (count = 0;count<numOfSpritesInBatch;count+=1){
vertices[x] = xLeft;
vertices[(x+1)] = yPTop;
vertices[(x+2)] = 0;
vertices[(x+3)] = textureLeft;
vertices[(x+4)] = 0;
vertices[(x+5)] = xPRight;
vertices[(x+6)] = yTop;
vertices[(x+7)] = 0;
vertices[(x+8)] = textureRight;
vertices[x+9] = 0;
vertices[x+10] = xLeft;
vertices[x+11] = yBottom;
vertices[x+12] = 0;
vertices[x+13] = textureLeft;
vertices[x+14] = 1;
vertices[x+15] = xRight;
vertices[x+16] = yTop;
vertices[x+17] = 0;
vertices[x+18] = textureRight;
vertices[x+19] = 0;
vertices[x+20] = xLeft;
vertices[x+21] = yBottom;
vertices[x+22] = 0;
vertices[x+23] = textureLeft;
vertices[x+24] = 1;
vertices[x+25] = xRight;
vertices[x+26] = yBottom;
vertices[x+27] = 0;
vertices[x+28] = textureRight;
vertices[x+29] = 1;
x+=30;
}
vertexBuf.rewind();
vertexBuf.put(vertices).position(0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texID);
GLES20.glUseProgram(iProgId);
Matrix.multiplyMM(mvpMatrix2, 0, mvpMatrix, 0, mRotationMatrix, 0);
mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix2, 0);
vertexBuf.position(0);
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
GLES20.glEnableVertexAttribArray(iPosition);
vertexBuf.position(3);
GLES20.glVertexAttribPointer(iTexCoords, 2, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
GLES20.glEnableVertexAttribArray(iTexCoords);
//Enable Alpha blending and set blending function
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
//Draw it
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6 * numOfSpritesInBatch);
//Disable Alpha blending
GLES20.glDisable(GLES20.GL_BLEND);
}
Shaders
String strVShader =
"uniform mat4 uMVPMatrix;" +
"attribute vec4 a_position;\n"+
"attribute vec2 a_texCoords;" +
"varying vec2 v_texCoords;" +
"void main()\n" +
"{\n" +
"gl_Position = uMVPMatrix * a_position;\n"+
"v_texCoords = a_texCoords;" +
"}";
String strFShader =
"precision mediump float;" +
"uniform float opValue;"+
"varying vec2 v_texCoords;" +
"uniform sampler2D u_baseMap;" +
"void main()" +
"{" +
"gl_FragColor = texture2D(u_baseMap, v_texCoords);" +
"gl_FragColor *= opValue;"+
"}";
Currently, I have a method in my Sprite class that allows me to change the opacity. For example, something like this:
spriteBatch.setOpacity(0.5f); //Half opacity
This works, but changes the whole batch - not what I'm after.
Application
I need this because I want to draw small indicators when the player destroys an enemy, showing the score obtained from that action (the type of thing that happens in many games). I want these little 'score indicators' to fade out once they appear. All the indicators would of course be created as a batch so they can all be drawn with one draw call.
The only other alternatives are:
Create 10 textures at varying levels of opacity and just switch between them to create the fading effect. Not really an option, as it is way too wasteful.
Create each of these objects separately and draw each with its own draw call. This would work, but with a max of 10 of these objects on-screen, I could potentially be using 10 draw calls just for these items, while the game as a whole currently only uses about 20 draw calls to draw hundreds of items.
I need to look at future uses of this too in particle systems etc.... so I would really like to try to figure out how to do this (be able to adjust each item's opacity separately). If I need to do this in the shader, I would be grateful if you could show how this works. Alternatively, is it possible to do this outside of the shader?
Surely this can be done in some way or another? All suggestions welcome....
The most direct way of achieving this is to use a vertex attribute for the opacity value, instead of a uniform. This will allow you to set the opacity per vertex, without increasing the number of draw calls.
To implement this, you can follow the pattern you already use for the texture coordinates. They are passed into the vertex shader as an attribute, and then handed off to the fragment shader as a varying variable.
So in the vertex shader, you add:
...
attribute float a_opValue;
varying float v_opValue;
...
v_opValue = a_opValue;
...
In the fragment shader, you remove the uniform declaration for opValue, and replace it with:
varying float v_opValue;
...
gl_FragColor *= v_opValue;
...
In the Java code, you extend the vertex data with an additional value for the opacity, to use 6 values per vertex (3 position, 2 texture coordinates, 1 opacity), and update the state setup accordingly:
vertexBuf.position(0);
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 6 * 4, vertexBuf);
GLES20.glEnableVertexAttribArray(iPosition);
vertexBuf.position(3);
GLES20.glVertexAttribPointer(iTexCoords, 2, GLES20.GL_FLOAT, false, 6 * 4, vertexBuf);
GLES20.glEnableVertexAttribArray(iTexCoords);
vertexBuf.position(5);
GLES20.glVertexAttribPointer(iOpValue, 1, GLES20.GL_FLOAT, false, 6 * 4, vertexBuf);
GLES20.glEnableVertexAttribArray(iOpValue);
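To tie this to the batching code from the question, here is a small sketch of the Java side (iProgId, vertices and x are the question's variables; a_opValue and iOpValue are the new attribute and handle introduced above, and opacity stands for a hypothetical per-sprite value you fade each frame):

// Once, after the program is linked:
int iOpValue = GLES20.glGetAttribLocation(iProgId, "a_opValue");

// In the upload loop, write 6 floats per vertex instead of 5,
// e.g. for the first corner of a sprite:
vertices[x]     = xLeft;        // position x
vertices[x + 1] = yTop;         // position y
vertices[x + 2] = 0;            // position z
vertices[x + 3] = textureLeft;  // texture u
vertices[x + 4] = 0;            // texture v
vertices[x + 5] = opacity;      // per-vertex opacity for this sprite
// ... repeat for the other five corners, then advance x += 36;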
I am developing a small game and I would like to draw a ground field (land) with a repeated texture. My problem is the rendered result: everything around my cube looks as if it were in a light shadow.
Is it possible to make the lighting uniform, or to remove the shadow effect, in my draw function?
Sorry for my bad English.
Here is a screenshot to better understand my problem.
Here is my draw function (model instancing with a vertex buffer):
// Draw Function (instancing model - vertexbuffer)
public void DrawModelHardwareInstancing(Model model,Texture2D texture, Matrix[] modelBones,
Matrix[] instances, Matrix view, Matrix projection)
{
if (instances.Length == 0)
return;
// If we have more instances than room in our vertex buffer, grow it to the necessary size.
if ((instanceVertexBuffer == null) ||
(instances.Length > instanceVertexBuffer.VertexCount))
{
if (instanceVertexBuffer != null)
instanceVertexBuffer.Dispose();
instanceVertexBuffer = new DynamicVertexBuffer(Game.GraphicsDevice, instanceVertexDeclaration,
instances.Length, BufferUsage.WriteOnly);
}
// Transfer the latest instance transform matrices into the instanceVertexBuffer.
instanceVertexBuffer.SetData(instances, 0, instances.Length, SetDataOptions.Discard);
foreach (ModelMesh mesh in model.Meshes)
{
foreach (ModelMeshPart meshPart in mesh.MeshParts)
{
// Tell the GPU to read from both the model vertex buffer plus our instanceVertexBuffer.
Game.GraphicsDevice.SetVertexBuffers(
new VertexBufferBinding(meshPart.VertexBuffer, meshPart.VertexOffset, 0),
new VertexBufferBinding(instanceVertexBuffer, 0, 1)
);
Game.GraphicsDevice.Indices = meshPart.IndexBuffer;
// Set up the instance rendering effect.
Effect effect = meshPart.Effect;
//effect.CurrentTechnique = effect.Techniques["HardwareInstancing"];
effect.Parameters["World"].SetValue(modelBones[mesh.ParentBone.Index]);
effect.Parameters["View"].SetValue(view);
effect.Parameters["Projection"].SetValue(projection);
effect.Parameters["Texture"].SetValue(texture);
// Draw all the instance copies in a single call.
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
pass.Apply();
Game.GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0,
meshPart.NumVertices, meshPart.StartIndex,
meshPart.PrimitiveCount, instances.Length);
}
}
}
}
// ### END FUNCTION DrawModelHardwareInstancing
The problem is the cube mesh you are using. The normals are averaged, but I guess you want them to be orthogonal to the faces of the cubes.
You will have to use a total of 24 vertices (4 for each side) instead of 8 vertices. Each corner will have 3 vertices with the same position but different normals, one for each adjacent face:
If the FBX exporter cannot be configured to correctly export the normals, simply create your own cube mesh:
var vertices = new VertexPositionNormalTexture[24];
// Initialize the vertices, set position and texture coordinates
// ...
// Set normals
// front face
vertices[0].Normal = new Vector3(1, 0, 0);
vertices[1].Normal = new Vector3(1, 0, 0);
vertices[2].Normal = new Vector3(1, 0, 0);
vertices[3].Normal = new Vector3(1, 0, 0);
// back face
vertices[4].Normal = new Vector3(-1, 0, 0);
vertices[5].Normal = new Vector3(-1, 0, 0);
vertices[6].Normal = new Vector3(-1, 0, 0);
vertices[7].Normal = new Vector3(-1, 0, 0);
// ...
It looks like you've got improperly calculated / no normals.
Look at this example, specifically part 3.
A normal is a vector that describes the direction a vertex/poly is facing, which determines how light reflects off it.
I like this picture as a demonstration: the blue lines are the normal direction at each particular point on the curve.
In XNA, you can calculate the normal of a polygon with vertices vert1,vert2,and vert3 like so:
Vector3 dir = Vector3.Cross(vert2 - vert1, vert3 - vert1);
Vector3 norm = Vector3.Normalize(dir);
In a lot of cases this is done automatically by modelling software so the calculation is unnecessary. You probably do need to perform that calculation if you're creating your cubes in code though.
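For example, if you build the 24-vertex cube from the first answer in code, a rough sketch (assuming 4 vertices per face, stored face by face with a consistent winding) of assigning one normal per face with that cross product could be:

// vertices is the VertexPositionNormalTexture[24] array from above
for (int face = 0; face < 6; face++)
{
    int i = face * 4;
    // two edges of the face, taken from its first three vertices
    Vector3 dir = Vector3.Cross(
        vertices[i + 1].Position - vertices[i].Position,
        vertices[i + 2].Position - vertices[i].Position);
    Vector3 norm = Vector3.Normalize(dir);
    // a flat face shares one normal across all four of its vertices
    for (int j = 0; j < 4; j++)
        vertices[i + j].Normal = norm;
}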
I created a Tree in D3.js based on Mike Bostock's Node-link Tree. The problem I have, which I also see in Mike's Tree, is that the text labels overlap/underlap the circle nodes when there isn't enough space, rather than the links extending to leave some space.
As a new user I'm not allowed to upload images, so here is a link to Mike's Tree where you can see the labels of the preceding nodes overlapping the following nodes.
I tried various things to fix the problem by detecting the pixel length of the text with:
d3.select('.nodeText').node().getComputedTextLength();
However, this only works after I have rendered the page, whereas I need the length of the longest text item before I render.
Getting the longest text item before I render with:
nodes = tree.nodes(root).reverse();
var longest = nodes.reduce(function (a, b) {
return a.label.length > b.label.length ? a : b;
});
node = vis.selectAll('g.node').data(nodes, function(d, i){
return d.id || (d.id = ++i);
});
nodes.forEach(function(d) {
d.y = (longest.label.length + 200);
});
only returns the string length, while using
d.y = (d.depth * 200);
makes every link a static length and doesn't resize as beautifully when new nodes get opened or closed.
Is there a way to avoid this overlapping? If so, what would be the best way to do this and to keep the dynamic structure of the tree?
There are 3 possible solutions that I can come up with, but they aren't that straightforward:
Detecting label length and using an ellipsis where it overruns child nodes (which would make the labels less readable).
Scaling the layout dynamically by detecting the label length and telling the links to adjust accordingly (which would be best, but seems really difficult).
Scaling the SVG element and using a scroll bar when the labels start to run over (not sure this is possible, as I have been working on the assumption that the SVG needs to have a set height and width).
So the following approach can give different levels of the layout different "heights". You have to take care that with a radial layout you risk not having enough spread for small circles to fan your text without overlaps, but let's ignore that for now.
The key is to realize that the tree layout simply maps things to an arbitrary space of width and height and that the diagonal projection maps width (x) to angle and height (y) to radius. Moreover the radius is a simple function of the depth of the tree.
So here is a way to reassign the depths based on the text lengths:
First of all, I use the following (jQuery-based) helper to compute the maximum text width and height for an array of strings:
var computeMaxTextSize = function(data, fontSize, fontName){
var maxH = 0, maxW = 0;
var div = document.createElement('div');
document.body.appendChild(div);
$(div).css({
position: 'absolute',
left: -1000,
top: -1000,
display: 'none',
margin:0,
padding:0
});
$(div).css("font", fontSize + 'px '+fontName);
data.forEach(function(d) {
$(div).html(d);
maxH = Math.max(maxH, $(div).outerHeight());
maxW = Math.max(maxW, $(div).outerWidth());
});
$(div).remove();
return {maxH: maxH, maxW: maxW};
}
Now I will recursively build an array with an array of strings per level:
var allStrings = [[]];
var childStrings = function(level, n) {
var a = allStrings[level];
a.push(n.name);
if(n.children && n.children.length > 0) {
if(!allStrings[level+1]) {
allStrings[level+1] = [];
}
n.children.forEach(function(d) {
childStrings(level + 1, d);
});
}
};
childStrings(0, root);
And then compute the maximum text length per level.
var maxLevelSizes = [];
allStrings.forEach(function(d, i) {
maxLevelSizes.push(computeMaxTextSize(allStrings[i], '10', 'sans-serif'));
});
Then I compute the total text width for all the levels (adding spacing for the little circle icons and some padding to make it look nice). This will be the radius of the final layout. Note that I will use this same padding amount again later on.
var padding = 25; // Width of the blue circle plus some spacing
var totalRadius = d3.sum(maxLevelSizes, function(d) { return d.maxW + padding});
var diameter = totalRadius * 2; // was 960;
var tree = d3.layout.tree()
.size([360, totalRadius])
.separation(function(a, b) { return (a.parent == b.parent ? 1 : 2) / a.depth; });
Now we can call the layout as usual. There is one last piece: to figure out the radius for the different levels we will need a cumulative sum of the radii of the previous levels. Once we have that we simply assign the new radii to the computed nodes.
// Compute cumulative sums - these will be the ring radii
var newDepths = maxLevelSizes.reduce(function(prev, curr, index) {
prev.push(prev[index] + curr.maxW + padding);
return prev;
},[0]);
var nodes = tree.nodes(root);
// Assign new radius based on depth
nodes.forEach(function(d) {
d.y = newDepths[d.depth];
});
Et voilà! This is maybe not the cleanest solution, and it perhaps does not address every concern, but it should get you started. Have fun!