I have an idea for recreating the Hogwarts of the fic "Harry Potter and the Methods of Rationality". It is never stated explicitly, but from the descriptions of overlapping spaces and the absence of any visible difference in direction or length between passages leading to different places, one can infer that Hogwarts sits in a pseudo-4D space created with magic. But I've hit a problem: I don't know how to make 3D objects in Unity interact within a 4D space.
The figure depicts 2D characters in a pseudo-3D space made of stacked 2D layers. When they walk into one of the passages they do not notice that the path changes height, because they see only in 2D. If a character looks from height 0.5 toward height 1 or height 0, he will see what is there, because the difference in height is small enough; but if he looks from height 0 toward height 1, he cannot see what is there (it is too high up). The same thing happens in the pseudo-4D space of Hogwarts, except the corridors change the w coordinate instead of the height, and since we are 3D creatures we would not notice the change.
You can see the image here:
The height in the image varies from 0 to 1 and is displayed as a colour from black to white.
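One way to get started in Unity (a minimal sketch of my own; the WPosition component, its VisibleRange threshold, and the filtering strategy are all assumptions, not anything from the fic or from Unity itself) is to store the extra w coordinate on each object and suppress interactions between objects whose w values differ too much, mirroring the height rule described above:

using UnityEngine;

// Hypothetical component: stores the extra w coordinate for the pseudo-4D space.
public class WPosition : MonoBehaviour
{
    public float W;                          // position along the invisible 4th axis
    public const float VisibleRange = 0.5f;  // max |dw| at which two objects interact

    // Two objects interact only if they are close enough along w,
    // just like the 2D characters who only notice small height differences.
    public bool CanInteractWith(WPosition other)
    {
        return Mathf.Abs(W - other.W) <= VisibleRange;
    }

    void OnCollisionEnter(Collision collision)
    {
        var other = collision.collider.GetComponent<WPosition>();
        // Stop colliding with objects that are "elsewhere" along w.
        if (other != null && !CanInteractWith(other))
            Physics.IgnoreCollision(GetComponent<Collider>(), collision.collider);
    }
}

Corridors would then gradually change the W field of whoever walks through them; nothing about the 3D rendering has to change, which is exactly the invisibility you want.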
I am trying my hand at writing a 3d graphics engine, but I am having some trouble with drawing the shapes in the correct order.
When I translate the points of triangles into window space, i.e. the 2-dimensional space that directly correlates to position on the screen, in addition to an x and y position of each point, I also assign them a depth variable that stores how far away from the viewer that point was in 3d space.
At the moment, the only shapes I am rendering are triangles. My current render order algorithm sorts the triangles by the average depth of their 3 points. I knew when I started it that it would not be perfect, but I wanted a placeholder for testing.
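For reference, that placeholder boils down to something like this (a sketch with stand-in types; the original code wasn't shown):

using System.Collections.Generic;

// Minimal stand-in types for the sketch.
record struct Vertex(double X, double Y, double Depth);
record struct Triangle(Vertex P0, Vertex P1, Vertex P2);

static class PainterSort
{
    // Painter's algorithm placeholder: sort back-to-front by average vertex depth.
    public static void SortByAverageDepth(List<Triangle> triangles)
    {
        triangles.Sort((a, b) =>
        {
            double da = (a.P0.Depth + a.P1.Depth + a.P2.Depth) / 3.0;
            double db = (b.P0.Depth + b.P1.Depth + b.P2.Depth) / 3.0;
            return db.CompareTo(da); // farthest triangles drawn first
        });
    }
}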
For testing purposes, I constructed a square box with an open top, each side being a different color and made from 2 triangles, as shown below:
As you can see from the image above, the algorithm I am using works most of the time. However, at certain angles and positions, the triangles will be rendered in the wrong order, as shown below:
As you can see, one of the cyan triangles on the bottom of the box is being drawn before one of the yellow triangles on the side. Clearly, sorting the triangles by the average depth of their points is not satisfactory.
Is there a better method of ordering shapes so that they are rendered in the correct order?
The standard method to draw 3D in correct depth order is to use a Z-buffer.
Basically, the idea is that for each pixel you set in the color buffer, you also store its interpolated depth in the z (depth) buffer. Whenever you're about to paint the next pixel, you first check the z-buffer to see whether the new pixel is in front of the pixel already painted.
On top of that you can add various sorts of optimizations, such as sorting triangles in order to minimize the number of times you actually paint the color buffer.
On the other hand, it's sometimes required to do the exact opposite in order to properly handle transparency or other "advanced" effects.
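Going back to the basic test, here is a minimal software-rasterizer sketch of the per-pixel check (buffer names and sizes are my own; real engines do this on the GPU):

// One depth value per pixel, initialized to "infinitely far away".
const int Width = 640, Height = 480;
float[,] zBuffer = new float[Width, Height];
uint[,] colorBuffer = new uint[Width, Height];

for (int x = 0; x < Width; x++)
    for (int y = 0; y < Height; y++)
        zBuffer[x, y] = float.PositiveInfinity;

// Called for every pixel a triangle covers, with that pixel's interpolated depth.
void PlotPixel(int x, int y, float depth, uint color)
{
    if (depth < zBuffer[x, y])   // new pixel is in front of what's painted
    {
        zBuffer[x, y] = depth;
        colorBuffer[x, y] = color;
    }
}

PlotPixel(100, 120, 0.5f, 0xFF00FFFF); // example: paint a cyan pixel at depth 0.5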
Although there are some standardized options for hinting the browser about anti-aliasing in SVG, none of them seems to work in my case, where I have rectangles with rounded corners and therefore can't afford to turn anti-aliasing off.
Although my rectangles are sized to leave no vertical space between them, a thin line shows between them due to anti-aliasing. E.g. in my SVG one rectangle ends at pixel 80 and the next one starts at 81, but a thin line of background still shows between them.
There's no way to force the latest browsers to avoid anti-aliasing for straight lines (crispEdges doesn't force that for my rounded rectangles).
I have read a bit about tweaking by adding 0.5 of a pixel to the y values, and about tweaking only even or only odd y values (I believed this was related to most contemporary LCD screens comprising two hardware vertical pixels per software-exposed pixel). I am unsure how precisely this mitigates the problem, and would like a definite account of why exactly it makes sense and what the most correct/solid tweaking approach is.
two hardware vertical pixels per software exposed pixel
No, that's wrong.
When you specify a coordinate like "81" in an SVG, that coordinate falls on the imaginary line between pixel 80 and 81. If your line has width "1", then the renderer will attempt to draw that by placing 50% of the colour on the 80 pixel and 50% on the 81 pixel. This is anti-aliasing. If you want the one pixel line to not be anti-aliased like that, give it coordinate 81.5. That way the whole line will fall within pixel 81.
However, if your line has width 2 (or any other even width) you should not use 81.5 but stay with 81, because the renderer will then put 50% (i.e. one full pixel) in pixel 80 and 50% in pixel 81, resulting in no anti-aliasing effect.
This applies for both horizontal and vertical lines. And applies whether you are on an LCD or old CRT.
Does this explanation make sense now?
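If the SVG is generated programmatically, the rule can be captured in a small helper (my own sketch; the name Align is made up):

using System;

static class CrispLines
{
    // Align a line's center coordinate so the stroke covers whole pixels:
    // odd stroke widths want a half-pixel offset, even widths a whole-pixel one.
    public static double Align(double coord, double strokeWidth)
    {
        return strokeWidth % 2 == 1
            ? Math.Floor(coord) + 0.5  // e.g. width 1: center at 81.5 fills pixel 81
            : Math.Round(coord);       // e.g. width 2: center at 81 fills pixels 80-81
    }
}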
I'm trying to create a spectral image with a constant grey-scale value for every row. I've written some fantastically slow code that basically tries 1000 different variations between black and white for a given hue and finds the one whose grey-scale value most closely approximates the target value, resulting in the following image:
On my laptop screen (HP) there is a very noticeable 'dip' near the blue peak, where blue pixels near the bottom of the image appear much brighter than the neighbouring purple and cyan pixels. On my second screen (Acer, which has far superior colour display) the dip is smaller, but still there.
I use the following function to compute the grey-scale approximation of a colour:
Math.Abs(targetGrey - (0.2989 * R + 0.5870 * G + 0.1140 * B))
When I convert the image to grey-scale using Paint.NET, I get a perfect black-to-white gradient, so that part of the code at least works.
So, question: Is this purely an artefact of the display qualities of my screens? Or can the above mentioned grey-scale algorithm be improved upon to give a visually more consistent result?
EDIT: The problem seems to be mostly monitor calibration. Not, I repeat not, a problem with the code.
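For reference, the brute-force search described at the top amounts to something like the sketch below (the blend scheme and names are my own stand-ins; the original code wasn't shown):

using System;

static class SpectralRow
{
    // For a pure hue (r, g, b in 0..1), try `steps` blends between black and
    // white and keep the one whose weighted grey value is closest to the target.
    public static (double R, double G, double B) MatchGrey(
        double r, double g, double b, double targetGrey, int steps = 1000)
    {
        var best = (R: 0.0, G: 0.0, B: 0.0);
        double bestError = double.MaxValue;

        for (int i = 0; i <= steps; i++)
        {
            double t = (double)i / steps;
            double scale = Math.Min(t * 2, 1.0);     // first half: blend up from black
            double lift = Math.Max(t * 2 - 1, 0.0);  // second half: blend on toward white
            double cr = r * scale + (1 - r * scale) * lift;
            double cg = g * scale + (1 - g * scale) * lift;
            double cb = b * scale + (1 - b * scale) * lift;

            double error = Math.Abs(targetGrey - (0.2989 * cr + 0.5870 * cg + 0.1140 * cb));
            if (error < bestError)
            {
                bestError = error;
                best = (cr, cg, cb);
            }
        }
        return best;
    }
}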
I'm wondering if it's more to do with the way our eyes interpret the colors, rather than screen artifacts.
That said... I am using a very high-quality screen (Dell UltraSharp, IPS) that has incredible color reproduction, and I'm not sure what you mean by the "dip" in the blue peak. So either I'm just not noticing it, or my screen doesn't show the same picture and is more color-accurate.
The output looks correct given the greyscale conversion you have used (which I believe is the standard one for sRGB colour spaces).
However - there are lots of tradeoffs in colour models and one of these is that you can get results which aren't visually quite what you want. In your case, the fact that there is a very low blue weight means that a greater amount of blue is needed to get any given greyscale value, hence the blue seems to start lower, at least in terms of how the human eye perceives it.
If your objective is to get a visually appealing spectral image, then I'd suggest altering your function to make the R,G,B weights more equal, and see if you like what you get.
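As a concrete illustration (my own sketch, not part of the answer), the two weightings side by side:

using System;

static class GreyWeights
{
    // Rec. 601-style luma, as used in the question.
    public static double Luma601(double r, double g, double b)
        => 0.2989 * r + 0.5870 * g + 0.1140 * b;

    // Flatter weighting: a larger blue weight means less blue is needed to
    // reach a given grey level, which should soften the dip near the blue peak.
    public static double LumaFlat(double r, double g, double b)
        => (r + g + b) / 3.0;
}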
I know that the NES had 4-color sprites (with 1 usually being transparent; Edit: according to zneak, 1 color is always transparent). How then did the original Final Fantasy have so many sprites with 4 colors + transparent? (Example sprite sheet; especially look at the large ones near the bottom.)
I understand that you can layer sprites to achieve additional colors (For example: Megaman's layering gives him 6 colors: body=3+trans, face=3+trans). It's odd that these FF ones are all exactly 4 colors + transparent. If FF used similar layering, why would they stop at 4+1 instead of taking advantage of 6+1?
Is there another method of displaying sprites that gives you an additional color?
Also interesting is the fact that the big sprites are 18x26. Sprites are 8x8 (and I think I read somewhere that they're sometimes 8x16), but both 18 and 26 are [multiple of 8] + 2. Very strange.
As far as I know, 1 isn't usually transparent: it always is.
As you noted, sprites are either 8x8 or 8x16 (this depends on bit 5 of the PPU control register, mapped to memory address 0x2000 in the CPU's address space). Character sizes not being a multiple of 8 simply means there are wasted pixels in one or more of the constituent sprites.
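In emulator terms that flag reads like this (a sketch of my own, not from the answer):

static class Ppu
{
    // PPUCTRL is mapped at 0x2000 in the CPU address space;
    // bit 5 selects the sprite size: 0 = 8x8, 1 = 8x16.
    public static int SpriteHeight(byte ppuCtrl)
        => (ppuCtrl & 0x20) != 0 ? 16 : 8;
}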
For the colors, I beg to differ: the last sprite at the bottom, with the sword raised, has these 8 colors:
(Image: the sword-raising sprite's 8 colors: black, brown, beige, sky blue, navy, dark turquoise, turquoise, cyan; http://img844.imageshack.us/img844/2334/spritecolors.png)
I believe this is more an artistic choice, because each 8x8 block is limited to 3 opaque colors; maybe it was just more consistent to use fewer colors.
I found the answer. I finally broke down and downloaded the ROM and extracted the bitmaps with NAPIT. (btw: staring at extracted ROM bitmaps is really bloody hard on your eyes!)
I matched a few bitmaps and end-results here.
Each character has a color that is mostly relegated to the top part of the sprite, so I chased that idea for a while. It turns out that's a red herring. Comparing the in-game sprites against the color masks, you can see that black and transparent use the same color mask. Therefore, IF a black outline is shown, then it must be on a separate layer. However, despite the black outlines on the sprite sheet, I can't find any real examples of black outlines in the game.
Here's a video on YouTube with lots of good examples. When you are on the blue background screen (at 0:27), the outlines and the black mage's face are the blue of the background (i.e. there is no black outline; it's transparent). In combat, the background is black. At 1:46 a spell is cast that makes the background flash grey, and all black areas, including outlines and black eyes, flash grey. Other spells with different colors of flashes are cast around this part of the video, with the same results.
The real answer is that the black outlines on the sprite sheet don't seem to exist in the game. Whoever made the sprite sheet took the screenshots with a black background and scrubbed the background away.
You might want to check out Game Development StackExchange instead of here.
I've just had a quick glance at the sprite sheet, but it looks to me as though sprites with more than 3 colors + 1 transparent either have weapons or use 3 colors plus a black outline. Also, it would help if you could show that sprite sheet with a grid separating the tiles...
Maybe the extra 2 colors were reserved for the weapons.
I'm working on a UI which needs to work in different aspect ratios: 16:9, 16:10, 4:3.
The idea is conceptually simple: everything is centered in a rough 4:3 area of the screen, and anything outside this portion of the screen is just basic artwork, so something like this:
(not drawn to scale)
Where the pink area represents where all the UI objects are positioned and the blue area is just background and effects.
The trick is in usability, if I pass in coordinates (0,0) in a 4:3 aspect ratio environment (0,0) would be the top left of the screen. However if I'm in a 16:9 environment (0,0) needs to get renormalized based on the new aspect ratio for it to be in the appropriate place. So my question is: How can I achieve this?
Edit: for clarification, this is basically for a UI system, and while I listed the ratios above as 4:3, 16:9, and 16:10, it should be able to dynamically adjust values for whatever aspect ratio it is set to.
Edit 2: just to add more detail to the situation: positions are passed in as a percentage of the screen's current width/height, so setting position x would be [pos x as a portion of the screen] * SCREEN_WIDTH, where SCREEN_WIDTH is the width of the current screen itself.
The obvious answer seems to be an offset. Since 4:3 is 12:9, a 16:9 screen should have 2:9 bands on the left and the right. Hence, the X offset should be (2/16) * width.
For 16:10 screens, the factor is slightly different: 4:3 is 13.33:10, so you have edges of width 1.33, and the X offset should be (1.33/16) * width = (1/12) * width.
So... can't you just come up with an abstraction layer that hides the differences? One idea would be to model a "border" around the active area that gets added. For 4:3 displays, set the border size to 0 so the active area covers the full screen.
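A minimal sketch of such a layer (names and the 4:3 reference ratio are my assumptions): express UI positions as fractions of a virtual 4:3 area and center that area on whatever screen is actually present.

using System;

static class UiLayout
{
    const double ReferenceAspect = 4.0 / 3.0; // the design-time aspect ratio

    // Map a position given as a fraction of the virtual 4:3 active area
    // to actual pixel coordinates, centering the area on wider screens.
    public static (double X, double Y) ToScreen(
        double fracX, double fracY, double screenWidth, double screenHeight)
    {
        double activeWidth = Math.Min(screenWidth, screenHeight * ReferenceAspect);
        double xOffset = (screenWidth - activeWidth) / 2.0; // side band width
        return (xOffset + fracX * activeWidth, fracY * screenHeight);
    }
}

For a 16:9 screen this yields an x offset of width/8, and for 16:10 an offset of width/12, matching the arithmetic above; (0,0) then always lands at the top left of the pink 4:3 area rather than of the physical screen.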