I can't find a clear explanation of how to use context with the @include span-columns(); mixin, so I don't see the difference between @include span-columns(4, $tablet); and @include span-columns(4);. I think this is the cause of my issue: columns don't take up the full width of their parent container, like this:
In this example, the second column should span 8 columns instead of 4.
Am I on the right track? Do I need to use context properly?
Thanks.
Since Susy grids are relative, the width of the parent is used to calculate the width of the children. Put simply: 4/4 = 100%, 4/8 = 50%, 4/4 != 4/8. Think of the first number in the fraction as the columns you want to span, and the second number as the columns available (the width of the parent, or the context).
Looking at your demo linked from another question, and your screenshot, you are in a context of 8 for the tablet view and 12 for the desktop view. That means span-columns(2, 8) for the left bar on tablet becomes span-columns(4, 12) on desktop, and span-columns(6 omega, 8) for the main content becomes span-columns(8 omega, 12). The omega is important on the last item in a row.
Also, you need to remove the padding you set on both objects, or use box-sizing: border-box. By default, browsers add padding on top of the width, so your grid items end up too wide for the space. I recommend setting border-box globally.
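Here is a minimal SCSS sketch of both fixes together. The selectors and the $desktop-breakpoint variable are placeholders of mine, not from your demo; the span-columns numbers are the ones above.

* { box-sizing: border-box; } // padding now counts inside the width

.sidebar {
  @include span-columns(2, 8);    // tablet: span 2 of 8 available
  @media (min-width: $desktop-breakpoint) {
    @include span-columns(4, 12); // desktop: span 4 of 12
  }
}
.main {
  @include span-columns(6 omega, 8); // omega: last item in the row
  @media (min-width: $desktop-breakpoint) {
    @include span-columns(8 omega, 12);
  }
}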
TL;DR:
I can't draw an image exactly onto the full screen on wide-aspect (13:6) phones. If I observe the safe area, the error is (predictably) underscan. Using .edgesIgnoringSafeArea() goes (unexpectedly) too far in the other direction.
Update
Apple DTS have suggested this is a bug, refunded me one support incident, and invited me to submit a bug report. It is in the pipeline at https://feedbackassistant.apple.com/feedback/8192204
Caveat Lector
My presumptions about .scaledToFill might be wrong. I address that at the end.
Code
So elementary I can put it here and it won't even slow you down
struct ContentView: View {
    var body: some View {
        Image("testImage").resizable().scaledToFill()
        // .edgesIgnoringSafeArea(.all)
    }
}
Test Image
The Test Image is a landscape rectangle, proportioned at 13:6, like the wide phone. (E.g. the 812:375 proportion of the original iPhone X.) The gray periphery is not part of the image.
It has sub-frames marked that correspond to the narrower (older) phones (16:9) and to pads (4:3).
Runtime Results
The Xcode project settings are explicitly landscape-only, for both pads and phones.
For narrow phones and all pads, the code above, observing safe areas, renders the Test Image like I expect:
But on wide phones, I can't get the red rect to coincide with the screen edges.
Wide Phones
With no call to .edgesIgnoringSafeArea(), that is, observing the safe area, our image is naturally mapped to a subset of the full screen.
With the call to .edgesIgnoringSafeArea(), I expected the image to exactly fill the screen, but it overscans:
Here is the Xcode view-hierarchy debugger's perspective on the previous: the image is being mapped to a rect larger than the full screen. Why?
Order of Events
If I reverse the order of modifiers, and call .edgesIgnoringSafeArea() before .scaledToFill(), I get aspect ratio distortion, which .scaledToFill() is supposed to prevent. (See the circle become an ellipse in the screenshot.) An explanation of how these operations compose, and why they do not commute, might go a long way to answering my primary questions.
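For reference, "reversed" here means the safe-area modifier appears earlier in the chain, as in this sketch:

// Reversed order relative to the code above. As reported above,
// this distorts the aspect ratio, which .scaledToFill() should prevent.
Image("testImage")
    .resizable()
    .edgesIgnoringSafeArea(.all)
    .scaledToFill()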
Workaround
I think the above should work, and I don't see why it doesn't. What does work on wide phones is to eliminate the .scaledToFill modifier. Then you get this. But it only works because the test image already has the exact aspect ratio of the display, so it is not a very general solution.
Scale to Fill
In the restricted domain of landscape images and displays, I expect the operation of scale-to-fill on the 13:6 test image to be equivalent to (to have the semantics of):
1. Center the test image in the destination (container) rect, sized to fit entirely in the container. (I have been expecting that ignoring safe areas means the "destination" is the full screen, but that may be where I err.)
2. Expand the test image, maintaining proportion and center, until one pair of sides coincides with those of the container. For narrower displays, the left and right edges will meet first, and the top and bottom will still be inside the destination rect. But don't stop now; that would be scale to fit, or letterboxing.
3. Keep expanding until the top and bottom also coincide with those of the container. For narrower displays this means content will be cropped on both sides. For 13:6 displays, all four image edges will coincide with the display edges at the same time.
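In code, the semantics I describe in the steps above reduce to taking the larger of the two edge ratios. This is my assumption about what .scaledToFill means, not SwiftUI's documented implementation:

import CoreGraphics

// Scale-to-fill factor under the semantics described above: the larger
// edge ratio covers both container dimensions, and the overflow is cropped.
func scaleToFillFactor(image: CGSize, container: CGSize) -> CGFloat {
    max(container.width / image.width, container.height / image.height)
}

// For a 13:6 image on a 13:6 display both ratios are equal,
// so all four edges should coincide at once.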
I do not know why .edgesIgnoringSafeArea() does not work as it should, but here is a workaround that should help you:
GeometryReader { geo in
    Image("testImage")
        .resizable()
        .scaledToFill()
        .frame(width: geo.size.width, height: geo.size.height)
}
.edgesIgnoringSafeArea(.all)
Update:
Here is another way to do the same thing without GeometryReader:
Image("testImage")
.resizable()
.scaledToFill()
.frame(minWidth: 0, maxWidth: .infinity, minHeight: 0, maxHeight: .infinity)
.edgesIgnoringSafeArea(.all)
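One note on this variant: with .scaledToFill(), the image can paint outside its frame. If that happens in your layout, clipping may help; the .clipped() here is my addition, not something verified in the answer above:

Image("testImage")
    .resizable()
    .scaledToFill()
    .frame(minWidth: 0, maxWidth: .infinity, minHeight: 0, maxHeight: .infinity)
    .clipped() // crop the overflow to the frame (assumption: desired here)
    .edgesIgnoringSafeArea(.all)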
Is it possible to set up x-scrolling in react-virtualized? I have a table with a fixed width and more columns to display than I have space for in my table, so I need x-scrolling. In my tests, the table just shrank and displayed '...' for content once the table ran out of space.
Intro paragraph for react-virtualized Table docs (emphasis added):
Table component with fixed headers and windowed rows for improved
performance with large data sets. This component expects explicit
width and height parameters. Table content can scroll vertically but
it is not meant to scroll horizontally.
You might be able to hack it, but it isn't meant to support horizontal scrolling so it probably won't work. Consider using Grid or MultiGrid instead if this is a requirement for your app.
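For example, a minimal Grid sketch. The component name, sizes, and data here are illustrative, not from your app; Grid virtualizes both axes and scrolls horizontally once the content exceeds the width prop:

import React from 'react';
import { Grid } from 'react-virtualized';

// 30 columns * 150px = 4500px of content inside a 700px viewport,
// so Grid provides horizontal (and vertical) scrolling, fully virtualized.
export default function WideGrid() {
  return (
    <Grid
      width={700}
      height={400}
      columnCount={30}
      columnWidth={150}
      rowCount={1000}
      rowHeight={30}
      cellRenderer={({ columnIndex, rowIndex, key, style }) => (
        <div key={key} style={style}>{`r${rowIndex} c${columnIndex}`}</div>
      )}
    />
  );
}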
I was struggling with this myself for a while. I did it by setting the width of the table to width={Object.keys(rows[0]).length*150} and setting each column's minWidth to 150 (or whatever you choose; just make sure it is the same in your table).
Then wrap it in a Paper and give it a width and overflowX: 'auto'.
Something like this:
const VirtualizedTable = withStyles(styles)(MuiVirtualizedTable);

export default function DataPreview(props) {
  const rows = [{ One: 'one', Two: 'Two', Three: 'Three', Four: 'Four', Five: 'Five', Six: 'Six' }];
  return (
    <Paper style={{ height: 400, width: 700, overflowX: 'auto' }}>
      <VirtualizedTable
        width={Object.keys(rows[0]).length * 150}
        rowCount={rows.length}
        rowGetter={({ index }) => rows[index]}
        columns={Object.keys(rows[0]).map(head => ({
          minWidth: 150,
          label: head,
          dataKey: head,
        }))}
      />
    </Paper>
  );
}
Building on bvaughn's accepted answer, the hack for a horizontally scrollable Table could look something like the sketch after this list; however, beware of the following caveats that come with it:
1. Your overflowing Table columns will not be virtualized.
2. Scroll focus gets captured by the wrapper's x-axis scrolling, and you will need to click within the internal Table component to refocus and regain y-axis scrolling. This is incredibly frustrating to use, especially on mobile devices.
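A sketch of the hack, with illustrative names and sizes of mine: the outer div owns the x-scroll, and the Table is simply rendered wider than its wrapper.

import React from 'react';
import { Table, Column } from 'react-virtualized';

export default function ScrollableTable({ rows }) {
  const columnWidth = 150;
  const keys = Object.keys(rows[0]);
  return (
    <div style={{ width: 700, overflowX: 'auto' }}>
      <Table
        width={keys.length * columnWidth} // wider than the wrapper -> x-scroll
        height={400}
        headerHeight={30}
        rowHeight={30}
        rowCount={rows.length}
        rowGetter={({ index }) => rows[index]}
      >
        {keys.map(key => (
          <Column key={key} label={key} dataKey={key} width={columnWidth} />
        ))}
      </Table>
    </div>
  );
}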
I'm using Snap SVG to manipulate SVGs within a web app that I'm making. In this web app I have two rectangles that start out with one being inside of the other, call them rectInner and rectOuter. The aim is to allow the user to transform rectOuter (scale, rotate, translate) such that rectInner is always strictly inside of rectOuter. To be clear, rectInner will never move or be transformed.
My approach to this problem is to get the bounding box of both rectInner and rectOuter, and check to see if the first is strictly contained within the second. Snap SVG provides a function isBBoxIntersect(rectInner, rectOuter), but it only tells me if parts of the bounding boxes intersect, not if one is contained within the other.
Is there a simple way of doing this?
EDIT:
It seems now that I somewhat misunderstood the concept of bounding boxes, but the problem should be simpler. If I can find a way of calculating the four vertices of rectOuter after all of the transformations, then as long as the corners of rectInner are inside the path constructed from those vertices, the entire rectangle is. I think.
CoffeeScript:
el = Snap('rect#outer')
mat = el.attr('transform').totalMatrix
left = +el.attr('x')
top = +el.attr('y')
right = left + (+el.attr('width'))
bottom = top + (+el.attr('height'))
console.log(left, top, right, bottom)

# map each untransformed corner through the total matrix
points = {
  x:  mat.x(left, top)
  y:  mat.y(left, top)
  x2: mat.x(right, top)
  y2: mat.y(right, top)
  x3: mat.x(right, bottom)
  y3: mat.y(right, bottom)
  x4: mat.x(left, bottom)
  y4: mat.y(left, bottom)
}
Use the matrix! You can find more matrices in the mat variable; if totalMatrix doesn't work, try one of the others.
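With those four transformed corners in hand, the containment test from the question's edit can be a plain convex-quad check. A sketch in plain JavaScript; quad is the four corners above in order, and the helper name is mine:

// True if point p lies inside the convex quad (corners given in order).
function pointInConvexQuad(p, quad) {
  let sign = 0;
  for (let i = 0; i < 4; i++) {
    const a = quad[i];
    const b = quad[(i + 1) % 4];
    // Cross product: which side of edge a->b is p on?
    const cross = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    if (cross !== 0) {
      if (sign === 0) sign = Math.sign(cross);
      else if (sign !== Math.sign(cross)) return false; // mixed sides: outside
    }
  }
  return true;
}

// rectInner never moves, so test its four (untransformed) corners:
// const inside = innerCorners.every(c => pointInConvexQuad(c, quad));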
I need to set the width of an element that is used in various places in my fluid Susy layout. The parent element is not always the same width, but I want this element to always have the same width relative to the page width.
Example:
In a 12-column grid, a news article sometimes spans 12 columns, sometimes 6. Editors are able to add a <blockquote> in the news article text. I want a blockquote to always be 3 columns wide (relative to the full page), regardless of its context (12 or 6 columns).
Of course if this was a grid with fixed column widths it would be easier, but I'm looking for a fluid, percentage-based solution.
PS. I am willing to use Susy 2 alpha if that makes it easier to solve the problem.
You would do this the same way in Susy 1 or 2, though it's always more fun in 2. :)
The issue isn't really related to anything specific about Susy, it would be a problem in any fluid CSS situation. You can only solve it if you have a hook for knowing which context you are in. At that point, you can solve it from either end. Something like this:
blockquote {
  @include span-columns(3);
  .narrow & { @include span-columns(3, 6); }
}
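For reference, the same idea in Susy 2 syntax. This is a sketch assuming the default 12-column settings and the same .narrow hook, not something tested against your setup:

blockquote {
  @include span(3 of 12);              // parent spans the full 12 columns
  .narrow & { @include span(3 of 6); } // parent spans 6: 3/6 of it = 3/12 of the page
}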
There really isn't any way to do it without the hook. CSS doesn't have element-queries (and isn't likely to any time soon, for the reasons given in that article).
Essentially, what I want to do (in DirectX) is to take two partially-transparent images and blend them together. This works fine with default blending, insofar as they both show up as overlapping, etc. However, the problem is that the opacity goes up markedly where the two intersect. This causes increasing problems as more sprites overlap. What I'd like to do is keep the blending the same, except keep a global opacity for all these sprites being blended, regardless of how they overlap.
Seems like there would be a render setting for this (all of these sprites are alone in their sprite batch, which keeps that part easy), but if so I don't know it. Right now I'm kind of shooting in the dark, and I've tried a lot of different things and none of them have looked right at all. I know I probably need some sort of variant of D3DBLENDOP, but I just don't know what sort of settings there I really need (I have tried many things, but it is all guessing at this stage).
Here is a screenshot of what is actually happening with standard blending (the best I can get it): http://arcengames.com/share/FFActual.png Here is a screenshot with a mockup of how I would want the blending to turn out (the forcefields were added to the same layer in Photoshop, then given a shared alpha value): http://arcengames.com/share/FFMockup.png
This is how I did it in Photoshop:
1. Take the two images, and remove all transparency (completely transparent pixels excepted).
2. Combine them into one layer, which blends the color but which has no partial alpha at all.
3. Now set the global transparency for that layer to (say) 40%.
The result is something that looks kind of blended together color-wise, but which has no increase in opaqueness on the overlapped sections.
UPDATE: Okay, thanks very much to Goz below, who suggested using the Z-Buffer. That works! The blending, by and large, is perfect and just what I would want. The only remaining problem? Using that new method, there is a huge artifact around the edge of the force field image that is rendered last. See this: http://www.arcengames.com/share/FFZBuffer.png
UPDATE: Below is the final solution in C# (SlimDX)
Clearing the ZBuffer to black, transparent, or white once per frame has the same effect in all three cases (this is done right before BeginScene is called):
Direct3DWrapper.ClearDevice( SlimDX.Direct3D9.ClearFlags.ZBuffer, Color.Transparent, 0 );
All other sprites are drawn at Z=1, with the ZBuffer disabled for them:
device.SetRenderState( RenderState.ZEnable, ZBufferType.DontUseZBuffer );
The force field sprites are drawn at Z=2, with the ZBuffer enabled and ZWrite enabled and ZFunc as Less:
device.SetRenderState( RenderState.ZEnable, ZBufferType.UseZBuffer );
device.SetRenderState( RenderState.ZWriteEnable, true );
device.SetRenderState( RenderState.ZFunc, Compare.Less );
The following flags are also set at this time, to prevent the black border artifact I encountered:
device.SetRenderState( RenderState.AlphaTestEnable, true );
device.SetRenderState( RenderState.AlphaFunc, Compare.GreaterEqual );
device.SetRenderState( RenderState.AlphaRef, 55 );
Note that AlphaRef is at 55 because of the alpha levels set in the specific source image I was using. If my source image had a higher alpha value, then the AlphaRef would also need to be higher.
Best I can tell, the forcefields are a whole object. Why not render them last, in front-to-back order, with Z-buffering enabled? That will give you the effect you are after.
That is, it's not your blending settings that are the problem at all.
Edit: Can you use render-to-texture then? If so, you could easily do what you did in Photoshop: render them all together into the texture, then blend the texture back over the screen.
Edit2: How about:
ALPHATESTENABLE = TRUE;
ALPHAFUNC = LESS;
ALPHABLENDENABLE = TRUE;
SRCBLEND = SRCALPHA;
DESTBLEND = INVSRCALPHA;
SEPARATEALPHABLENDENABLE = TRUE;
SRCBLENDALPHA = ONE;
DESTBLENDALPHA = ZERO;
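In C#/SlimDX terms, like the snippets elsewhere in this thread, those states would look roughly like this; the mapping to RenderState names is my assumption, so verify it against your wrapper:

device.SetRenderState( RenderState.AlphaTestEnable, true );
device.SetRenderState( RenderState.AlphaFunc, Compare.Less );
device.SetRenderState( RenderState.AlphaBlendEnable, true );
device.SetRenderState( RenderState.SourceBlend, Blend.SourceAlpha );
device.SetRenderState( RenderState.DestinationBlend, Blend.InverseSourceAlpha );
device.SetRenderState( RenderState.SeparateAlphaBlendEnable, true );
device.SetRenderState( RenderState.SourceBlendAlpha, Blend.One );
device.SetRenderState( RenderState.DestinationBlendAlpha, Blend.Zero );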
You need to make sure the alpha in the frame buffer is cleared to 0xff each frame. You then do the standard alpha blend while passing the alpha value straight through to the backbuffer. This is where the alpha test comes in: you test the final alpha value against the one in the back buffer. If it is less than what's in the backbuffer, that pixel has not been blended yet and will be put into the frame buffer. If it is equal (or greater), it HAS been blended already and the alpha value will be discarded.
That said, using a Z-buffer would cost you a load of RAM but would be faster overall, as it can throw away pixels far earlier in the pipeline. Seeing as all the shields would just need to be written to a given Z-plane, you wouldn't even need to go through the hell I suggested earlier. If the Z value received is less than what's already there, the pixel is rendered; if it is greater or equal, it is discarded, fortunately before the blend calculation is ever performed.
That said, you could also do it using the stencil buffer, which would require a Z-buffer anyway.
Anyway, I hope one of those methods is of some help.
Edit3: Do you render the forcefield with some form of feathering around the edge? Most likely that edge artifact is caused by the alpha fading off slightly; the "slightly alpha" pixels then get written to the Z-buffer, and hence any subsequent draw doesn't overwrite them.
Try the following settings:
ALPHATESTENABLE = TRUE
ALPHAFUNC = GREATEREQUAL // if this doesn't work, try LESS; I may have it backwards
ALPHAREF = 255
To fine-tune the feathering around the edge, adjust the ALPHAREF, but I'd suspect you need to keep it as above.
You can specify the D3DBLENDOP used when blending the two images together for the alpha channel. It sounds like you're using D3DBLENDOP_ADD currently; try switching it to D3DBLENDOP_MAX, as that will just use the opacity of the "most opaque" image.
It is hard to tell exactly what you are trying to accomplish from your mockup, since both forcefields are the same color; do you want to blend the colors and cap the alpha, or just take one of the colors?
Based on the above discussion, it isn't clear whether you are setting all the relevant render states:
D3DRS_ALPHABLENDENABLE = TRUE (default: FALSE)
D3DRS_BLENDOP = D3DBLENDOP_MAX (default: D3DBLENDOP_ADD)
D3DRS_SRCBLEND = D3DBLEND_ONE (default: D3DBLEND_ONE)
D3DRS_DESTBLEND = D3DBLEND_ONE (default: D3DBLEND_ZERO)
It sounds like you are setting the first two, but what about the last two?
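In C#/SlimDX, matching the question's snippets, setting all four would look roughly like this; again, the enum mapping is my assumption:

device.SetRenderState( RenderState.AlphaBlendEnable, true );
device.SetRenderState( RenderState.BlendOperation, BlendOperation.Maximum );
device.SetRenderState( RenderState.SourceBlend, Blend.One );
device.SetRenderState( RenderState.DestinationBlend, Blend.One );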