I was digging into the Amstrad CPC's hardware features lately and I'm confused by the color palette information.
This link:
http://www.exotica.org.uk/mirrors/gfxzone/articles/cpc_graphics-article_01.html
says
Not all those 27 possible colors can be used on the same screen, at
maximum 16 colors can be used simultaneously.
which makes sense to me since a 4-bit graphics mode is limited to 2^4 = 16 colors. But the Wikipedia link:
http://en.wikipedia.org/wiki/Amstrad_CPC
says
The redesigned video hardware allows for hardware sprites and soft
scrolling, with a colour palette extended from 17 out of 27 to 32 out
of 4096 colours
This information is in the Plus section, but when comparing the old hardware with the Plus it says "17 out of 27", not "16".
Another link:
http://cpcwiki.eu/index.php/Video_modes
The Amstrad Plus display the same modes, but 15 more colours can
naturally be displayed thanks to the Hardwired Sprites. This means
32 colours per screen with no tricks (including Border).
So 32-15=17 again.
I guess the 17th color might be the background color, but I'm not sure about this. Can anybody who has coded on the Amstrad CPC platform confirm this?
Also the first link above says
only 16 out of those 4096 colours can be used at the same time
for the Plus range, but the others say 32 instead of 16. Maybe that page didn't count the sprite colors and the background color. I just wanted to be sure.
I can only answer this one with fond memories rather than proper programming experience. It was the 464 that got me into programming, but my programs were very, very simple!
I do remember that when programming in BASIC, 16 colours was the maximum. I don't know if anything more hardcore managed to stretch out another one. Your third link mentions the border colour as the 17th; I think this is the most likely explanation for the apparent conflict in the specs.
Now you've got me feeling old.
FWIW, I just ran across that article and sentence today and felt the need to amend it to clarify its meaning, which is the same as everyone else's answers here.
Some more info: As hinted at by Matthew, the CPC's CRTC has a bit that determines whether or not it should write the dedicated border colour. It's set when the beam is outwith the actual graphics area. Said area is limited in size by the amount of RAM available to the CPC for the display. This defaults to 16 kB but can be altered through various forms of trickery, as seen in some games - and especially in many releases by the demoscene.
As an aside, something that I'm revisiting at the moment is how to implement the Spectrum-style scrolling border while tapes are being loaded. This is done by changing the colour of the border whenever the polarity of the bit being input changes, and this is accomplished via the Gate Array rather than the CRTC. Fast changes of the border in that way are, again, often used in demos to accomplish previously 'impossible' things, most often rasterised lines and suchlike.
What everyone said, actually: 16 main colors (try INK x,y in BASIC, where x=0 to 15 and y=0 to 26) plus one border color (type BORDER z in BASIC, where z=0 to 26; the border is the big area outside the main video RAM, often used for flashes in games).
For the CPC+ hardware sprites, the available colors are 15, because one entry is reserved for transparency between the sprite and the background gfx. This makes 32: 16 background colors + 1 border color + 15 sprite colors.
However, since you can change the palette at any time while the screen is being drawn - a typical effect on most old-school computers - you could have differently colored sprites and background where each line, or part of a line, uses a different palette (you need a lot of synchronization with the raster beam to do that; it's a bit easier to do per line on the CPC+ with its line IRQ interrupts). So technically the CPC can show all 27 colors at the same time, and the Plus all 4096 colors (check the screenshot here, http://www.cpc-power.com/index.php?page=detail&num=8308 , it's just a preview and it looks ugly, but it shows what's possible with the CPC Plus).
In addition to rgiot's answer, I would like to add this:
In mode 0 (160x200 pixels), the Amstrad CPC computers can display 16 different colors simultaneously (17 if you count the border color) out of a choice of 27 colors & shades.
In mode 1 (320x200) it goes down to 4 colors out of 27.
In mode 2 (640x200) it's 2 colors out of 27.
The Amstrad Plus computers, using enhanced hardware capabilities, can display:
In mode 0 (160x200) 16 colors out of 4096, plus 16 hardware sprites that have their own palette of 16 other colors (out of 4096).
In mode 1 (320x200) 4 colors out of 4096, plus 16 hardware sprites that have their own palette of 16 other colors (out of 4096).
In mode 2 (640x200) 2 colors out of 4096, plus 16 hardware sprites that have their own palette of 16 other colors (out of 4096).
On the 'Plus' range, the hardware sprites' resolution is independent of the main screen's resolution.
I had an Amstrad CPC 6128 back in the day. I can confirm that the 17th color was the border color.
The Amstrad CPC has 16 inks from 0 to 15, and the border is accessible in ink 16.
Each ink can be set with a color selected in a palette of 27 different colors.
The 27 colors are in fact encoded in a hardware list of 32 values, but some colors are present twice.
So, in theory, you can display a maximum of 17 colors on a standard screen (the configuration of the screen when the machine is switched on):
1 color for the border
16 colors, one for each of the 16 inks of the screen, when mode 0 is selected
Of course, in practice you can use all 27 colors on screen with raster tricks, by changing the color of the inks (see the sketch after this list):
- more than one time per VBL, for rasters
- one time per HBL, for raster bars
- more than one time per HBL, for split-rasters
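To make the counting concrete, here is a small sketch (plain C, not CPC code, and the per-line re-programming pattern is just made up) that only counts how many distinct colours reach the screen over a frame when one ink is re-programmed on some scanlines. It illustrates the idea behind the raster tricks above, not how you would do it on the hardware:

#include <stdio.h>

#define LINES   200
#define INKS    16
#define PALETTE 27   /* 27 possible hardware colours on a classic CPC */

int main(void) {
    int ink[INKS];
    int used[PALETTE] = {0};

    /* initial palette: inks 0..15 -> colours 0..15 */
    for (int i = 0; i < INKS; i++) ink[i] = i;

    for (int line = 0; line < LINES; line++) {
        /* pretend a split-raster routine re-programs ink 15 every 8 lines */
        if (line % 8 == 0) ink[15] = (16 + line / 8) % PALETTE;

        /* colours visible on this scanline */
        for (int i = 0; i < INKS; i++) used[ink[i]] = 1;
    }

    int total = 0;
    for (int c = 0; c < PALETTE; c++) total += used[c];
    printf("distinct colours over the frame: %d\n", total);   /* prints more than 16 */
    return 0;
}

On the real machine, the re-programming is done by racing the beam and writing new ink values to the Gate Array at the right moment.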
You can find more explanations here: http://www.grimware.org/doku.php/documentations/devices/gatearray
The Wikipedia article about raster bars, which mentions the Amstrad CPC, says:
The computers of the 8 and 16 bit era typically did not or could not display video memory across the entire screen, leaving a border around the regular display area. The graphics chip commonly used a fixed entry in the colour look-up table (CLUT) to colour this border area.
This isn't proof, but would certainly fit with 16 main colors plus one for the border.
Related
I have been scouring the internet for weeks trying to figure out exactly how text (such as what you are reading right now) is displayed to the screen.
Results have been shockingly sparse.
I've come across the concepts of rasterization, bitmaps, vector graphics, etc. What I don't understand is how the underlying implementation works so uniformly across all systems (Windows, Linux, etc.) in a way we as humans understand. Is there a specification defined somewhere? Is the implementation code open source and viewable by the general public?
My understanding as of right now, is this:
Create a font with an external drawing program, one character at a time
Add these characters into a font file that is understood by language-specific libraries
These characters are then read from the file as needed by the GPU and displayed to the screen in a linear fashion as defined by the parenting code.
Additionally, if characters are defined in a font file such as 'F, C, ..., Z', how are vector graphics (which rely on a set of coordinate points) supported? Without coordinate points, rasterization would seem the only option for size changes.
This is about as far as my assumptions/research goes.
If you are familiar with this topic and can provide a detailed answer that may be useful to myself and other readers, please answer at your discretion. I find it fascinating just how much code we take for granted that is remarkably complicated under the hood.
The following provides an overview (leaving out lots of gory details):
Two key components for display of text on the Internet are (i) character encoding and (ii) fonts.
Character encoding is a scheme by which characters, such as the Latin capital letter "A", are assigned a representation as a byte sequence. Many different character encoding schemes have been devised in the past. Today, what is almost ubiquitously used on the Internet is Unicode. Unicode assigns each character to a code point, which is an integer value; e.g., Unicode assigns LATIN CAPITAL LETTER A to the code point 65, or 41 in hexadecimal. By convention, Unicode code points are referred to using four to six hex digits with "U+" as a prefix. So, LATIN CAPITAL LETTER A is assigned to U+0041.
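As a tiny illustration of the code point idea (UTF-8 is just one common way of turning code points into bytes; the byte values below are standard, the program itself is only a demo):

#include <stdio.h>

int main(void) {
    /* In ASCII/UTF-8, LATIN CAPITAL LETTER A is stored as the single byte 0x41,
       which is exactly its Unicode code point U+0041 (decimal 65). */
    printf("'A' = U+%04X (decimal %d)\n", 'A', 'A');

    /* Code points above U+007F need more than one byte in UTF-8, e.g.
       U+00E9 (LATIN SMALL LETTER E WITH ACUTE) is encoded as 0xC3 0xA9. */
    const unsigned char e_acute[] = { 0xC3, 0xA9 };
    printf("U+00E9 in UTF-8: %02X %02X\n", e_acute[0], e_acute[1]);
    return 0;
}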
Fonts provide the graphical data used to display text. There have been various font formats created over the years. Today, what is ubiquitously used on the Internet are fonts that follow the OpenType spec (which is an extension of the TrueType font format created back around 1991).
What you see presented on the screen are glyphs. An OpenType font contains data for the glyphs, and also a table that maps Unicode code points to corresponding glyphs. More precisely, the character-to-glyph mapping (or 'cmap') table maps Unicode code points to glyph IDs. The code points are defined by Unicode; the glyph IDs are a font-internal implementation detail, and are used to look up the glyph descriptions and related data in other tables.
Glyphs in an OpenType font can be defined as bitmaps, or (far more common) as vector outlines (Bezier curves). There is an assumed coordinate grid for the glyph descriptions. The vector outlines, then, are defined as an ordered list of coordinate pairs for Bezier curve control points. When text is displayed, the vector outline is scaled onto a display grid, based on the requested text size (e.g., 10 point) and pixel sizing on the display. A rasterizer reads the control point data in the font, scales as required for the display grid, and generates a bitmap that is put onto the screen at an appropriate position.
One extra detail about displaying the rasterized bitmap: most operating systems or apps will use some kind of filtering to give glyphs a smoother and more legible appearance. For example, a grayscale anti-alias filter will set display pixels at the edges of glyphs to a gray level, rather than pure black or pure white, to make edges appear smoother when the scaled outline doesn't align exactly to the physical pixel boundaries—which is most of the time.
I mentioned "at an appropriate position". The font has metric (positioning) information for the font as a whole and for each glyph.
The font-wide metrics will include a recommended line-to-line distance for lines of text, and the placement of the baseline within each line. These metrics are expressed in the units of the font's glyph design grid; the baseline corresponds to y=0 within the grid. To start a line, the (0,0) design grid position is aligned to where the baseline meets the edge of a text container within the page layout, and the first glyph is positioned.
The font also has glyph metrics. One of the glyph metrics is an advance width for each given glyph. So, when the app is drawing a line of text, it has a starting "pen position" at the start of the line, as described above. It then places the first glyph on the line accordingly, and advances the pen position by the amount of that first glyph's advance width. It then places the second glyph using the new pen position, and advances again. And so on as glyphs are placed along the line.
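A rough sketch of that pen-advance loop, with a made-up glyph record and made-up metric values (a real renderer reads the advance widths from the font's metric tables and also applies kerning/GPOS adjustments):

#include <stdio.h>

/* Hypothetical, simplified glyph record: the glyph IDs and advance widths
   below are invented example numbers, in font design units. */
typedef struct {
    int glyph_id;
    int advance_width;
} Glyph;

int main(void) {
    Glyph line[] = { {36, 650}, {72, 520}, {80, 1020} };
    int n = sizeof line / sizeof line[0];

    int pen_x = 0, pen_y = 0;   /* start of the line, on the baseline */
    for (int i = 0; i < n; i++) {
        printf("place glyph %d at (%d, %d)\n", line[i].glyph_id, pen_x, pen_y);
        pen_x += line[i].advance_width;   /* advance the pen for the next glyph */
    }
    return 0;
}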
There are (naturally) more complexities in laying out lines of text. What I described above is sufficient for English text displayed in a basic text editor. More generally, display of a line of text can involve substitution of the default glyphs with certain alternate glyphs; this is needed, for example, when displaying Arabic text so that characters appear cursively connected. OpenType fonts contain a "glyph substitution" (or 'GSUB') table that provides details for glyph substitution actions. In addition, the positioning of glyphs can be adjusted for various reasons; for example, to position a diacritic glyph correctly over a letter. OpenType fonts contain a "glyph positioning" ('GPOS') table that provides the position adjustment data. Operating system platforms and browsers today support all of this functionality so that Unicode-encoded text for many different languages can be displayed using OpenType fonts.
Addendum on glyph scaling:
Within the font, a grid is set up with a certain number of units per em. This is set by the font designer. For example, the designer might specify 1000 units per em, or 2048 units per em. The glyphs in the font and all the metric values (glyph advance width, default line-to-line distance, etc.) are set in font design grid units.
How does the em relate to what content authors set? In a word processing app, you typically set text size in points. In the printing world, a point is a well defined unit for length, approximately but not quite 1/72 of an inch. In digital typography, points are defined as exactly 1/72 of an inch. Now, in a word processor, when you set text size to, say, 12 points, that really means 12 points per em.
So, for example, suppose a font is designed using 1000 design units per em. And suppose a particular glyph is exactly 1 em wide (e.g., an em dash); in terms of the design grid units, it will be exactly 1000 units wide. Now, suppose the text size is set to 36 points. That means 36 points per em, and 36 points = 1/2", so the glyph will print exactly 1/2" wide.
When the text is rasterized, it's done for a specific target device, that has a certain pixel density. A desktop display might have a pixel (or dot) density of 96 dpi; a printer might have a pixel density of 1200 dpi. Those are relative to inches, but from inches you can get to points, and for a given text size, you can get to ems. You end up with a certain number of pixels per em based on the device and the text size. So, the rasterizer takes the glyph outline defined in font design units per em, and scales it up or down for the given number of pixels per em.
For example, suppose a font is designed using 1000 units per em, and a printer is 1000 dpi. If text is set to 72 points, that's 1" per em, and the font design units will exactly match the printer dots. If the text is set to 12 points, then the rasterizer will scale down so that there are 6 font design units per printer dot.
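The scaling arithmetic from the last two paragraphs, as a small sketch (the helper name and the example values are just for illustration):

#include <stdio.h>

/* Scale from font design units to device pixels, per the text above:
   points are 1/72 inch, so pixels_per_em = point_size * dpi / 72,
   and the scale factor is pixels_per_em / units_per_em. */
static double design_to_pixels(double design_units, double units_per_em,
                               double point_size, double dpi) {
    double pixels_per_em = point_size * dpi / 72.0;
    return design_units * pixels_per_em / units_per_em;
}

int main(void) {
    /* The 1000 upem / 1000 dpi example: at 72 pt one design unit is one dot,
       at 12 pt there are about 6 design units per dot. */
    printf("72 pt: 1000 units -> %.1f dots\n",
           design_to_pixels(1000, 1000, 72, 1000));   /* 1000.0 */
    printf("12 pt: 1000 units -> %.1f dots\n",
           design_to_pixels(1000, 1000, 12, 1000));   /* about 166.7 */
    return 0;
}

At 12 points and 1000 dpi the em maps to about 166.7 dots, i.e. roughly 6 design units per dot, matching the example above.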
At that point, the details in the glyph outline might not align to whole units in the device grid. The rasterizer needs to decide which pixels/dots get ink and which do not. The font can include "hints" that affect the rasterizer behaviour. The hints might ensure that certain font details stay aligned, or the hints might be instructions to move a Bezier control point by a certain amount based on the current pixels-per-em.
For more details, see Digitizing Letterform Designs and Font Engine from Apple's TrueType Reference Manual, which goes into lots of detail.
I'm trying to design a Userform in Excel 2010, and I'm coming across a very annoying stumbling block as I try to move things around, resize them, and align them so the form looks appealing.
Unfortunately, different mechanisms are snapping to differently sized grids. For example, drawing a box onto the grid snaps it to multiples of 6, which is the default option found in Tools > Options > General > Grid units. Resizing these objects snaps them to a seemingly arbitrary grid size of approximately 7.2 units.
I need these units to match up so I'm not constantly fighting myself getting these grids to function. I don't care what the actual number ends up being, I just need them to be the same. While I'm able to change the grid size, it must be a whole number, which the arbitrary grid is not.
The problem is your units: points <-> pixels.
6 points * 4/3 pixels per point => 8 pixels: all good, a nice integer number of pixels.
Your approximately 7.2 was, I suspect, actually 7.3333333.
Now say you set 11 points. It gets converted as:
11 * 4/3 => 14.6666 px, rounded to 15 px
15 px * 3/4 points per pixel => 11.25 points, no longer the 11 you typed
Points being stored as Single and pixels as Integer may be the problem in the math.
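A quick sketch of that round trip (just to show where the drift comes from; the 4/3 factor assumes the usual 96 DPI):

#include <stdio.h>
#include <math.h>

int main(void) {
    double points  = 11.0;
    double pixels  = points * 4.0 / 3.0;          /* 14.666... px at 96 DPI */
    int    snapped = (int)floor(pixels + 0.5);    /* 15: stored as a whole number of pixels */
    double back    = snapped * 3.0 / 4.0;         /* 11.25 pt: no longer the 11 pt you typed */

    printf("%.2f pt -> %.4f px -> %d px -> %.2f pt\n", points, pixels, snapped, back);
    return 0;
}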
Do you know a program or script that converts from a letter to a matrix (consisting of 0 and 1) representing the letter?
For example, from the letter I to a matrix something like this (it's an LED panel showing the letter I):
Please let me know a way to create such a matrix other than typing it by hand.
Thanks.
The only solution is to use a font.
Well, for HW implementations I usually used the EGA/VGA 8x8 font,
extracted from the gfx card BIOS; you can do that easily in an MS-DOS environment.
Another way is to extract the font programmatically from an image:
Draw the entire font to a bitmap (in a line or in a matrix..., or use one already created, like mine). Use a fixed-pitch font at the size most suitable for your needs, and do not forget that almost no modern font supports fixed pitch, so use OEM_CHARSET and the System font from it. Set the colors properly (ideally a black background and a white font), then read the image pixel by pixel and store it as a table of numbers. A pixel that is not the background color is a set pixel.
Do not compare against the font color, because of anti-aliasing and filters. Now read all the characters and set/reset the corresponding bits inside the font table. First compute the start x,y of each character in the image (from its ASCII code and the image organization), then do 2 nested 8-step x,y for loops (in the order matching your font[] organization),
setting/resetting the corresponding font[] bits at addresses 8*ASCII to 8*ASCII+7.
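A rough sketch of that extraction loop in C. The get_pixel helper and the 16x16-characters image layout are assumptions; adapt them to your bitmap API and to how you drew the font:

#include <stdint.h>

typedef uint8_t BYTE;

BYTE font[8 * 256];                 /* 8 bytes (rows) per ASCII code */

/* Hypothetical helper: returns nonzero if the pixel at (x, y) in the source
   image is NOT the background colour (anti-aliased pixels count as set). */
extern int get_pixel(int x, int y);

/* Assume the source image holds the 256 characters in a 16 x 16 grid of
   8x8 cells, so character c starts at ((c % 16) * 8, (c / 16) * 8). */
void extract_font(void) {
    for (int c = 0; c < 256; c++) {
        int x0 = (c % 16) * 8;
        int y0 = (c / 16) * 8;
        for (int y = 0; y < 8; y++) {           /* one byte per character row */
            BYTE row = 0;
            for (int x = 0; x < 8; x++) {
                if (get_pixel(x0 + x, y0 + y))
                    row |= (BYTE)(0x80 >> x);   /* msb = leftmost pixel */
            }
            font[8 * c + y] = row;
        }
    }
}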
I assume you use an MCU to control the LED panel.
The font organization in memory is usually such that one 8-bit number represents a single row of a character. Of course, if your LED panel is meant to display an animated scroll, then a column-wise organization of the font (and of the HW implementation) will ease things up a lot. If you have a 16-bit MCU and IO access, then you can use a 16-pixel font size.
If you have more than 8 pixels and only an 8-bit MCU, you can still use 16-bit data, but the IO access will be done in two steps via two IO ports instead of one. I strongly recommend whole data-width IO access instead of setting/resetting individual IO lines; it's much quicker and can prevent flickering.
OK, here is my old 8x8 font I used back in the day... I think this one was extracted from an EGA/VGA BIOS, but I am not sure; it was too many years ago.
Now the fun part
const BYTE font[8*256]={ 0,0,0,0,0,0,0,0, ... }
Any character is represented as 8 numbers. If a bit is 0, it means paper (a background pixel); if a bit is 1, it means ink (a font pixel). Now, there are several possible organizations (left to right, top to bottom, and their combinations).
OK, a (row-wise | left to right | top to bottom) organization means:
the first number is the topmost row of the char
the msb is the leftmost pixel
the lsb is the rightmost pixel
So for example the char '1' in 8x8 will be something like this (b means a binary number; a small C loop that prints this glyph follows the rows below):
00000000b,
00001000b,
00011000b,
00101000b,
00001000b,
00001000b,
00111100b,
00000000b,
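If you want to sanity-check the organization, here is a tiny C program that prints that glyph as ASCII art using exactly the row-wise, msb-left convention described above (the hex values are the same rows written in binary above):

#include <stdio.h>

int main(void) {
    /* the '1' rows from above, one byte per row */
    const unsigned char one[8] = {
        0x00, 0x08, 0x18, 0x28, 0x08, 0x08, 0x3C, 0x00
    };
    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 8; x++)
            putchar((one[y] & (0x80 >> x)) ? '#' : '.');   /* msb = leftmost pixel */
        putchar('\n');
    }
    return 0;
}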
When you have extracted all the characters to the font table, save it as source code to a file which you will later include in your MCU code (it can be placed in EEPROM/program memory).
Now, the algorithm to print a char on the LED panel strongly depends on your HW implementation,
so please post a circuit diagram of the interconnection between the LED panel and the control system,
specify the target platform and language,
and specify the desired functionality.
I assume you want a left-moving scroll in pixel steps.
The best fit will be if your LED panel is driven by columns, not rows.
You can activate a single column of LEDs via a data IO port (all bits can be active at a time), and which column is active is selected by another select IO port (only a single bit can be active at a time). So in this case, compute the address of the column to display in the font table (a C sketch of this loop follows the steps below):
address = (8*ASCII + x_offset)
send font[8*ASCII + x_offset] to the data IO port
activate the select IO port with the correct bit set
wait a while (1-10 ms) so you can actually see the light; if the delay is too short there is no brightness, and if it is too long there is flickering, so you need to experiment a little (it depends on the number of select bits)
deactivate the select IO port
repeat with the next column
Here x_offset is the scrolling shift.
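Here is a rough C sketch of that loop. The write_data_port, write_select_port and delay_ms helpers are placeholders for whatever your MCU/toolchain provides; it assumes a column-organized font with 8 bytes per character and an 8-column panel, and it does not check for the end of the text:

#include <stdint.h>

/* Hypothetical IO helpers: replace them with your MCU's port-write and
   delay routines. */
extern void write_data_port(uint8_t column_bits);
extern void write_select_port(uint8_t select_bits);
extern void delay_ms(unsigned ms);

extern const uint8_t font[8 * 256];   /* column-wise font, 8 bytes per char */

#define PANEL_COLUMNS 8                /* number of column drivers on the panel */

/* Show one frame of the scrolling text; x_offset is the scroll shift in
   pixels from the start of the message. */
void show_frame(const char *text, unsigned x_offset) {
    for (uint8_t col = 0; col < PANEL_COLUMNS; col++) {
        unsigned src  = x_offset + col;          /* which font column lands here */
        uint8_t ascii = (uint8_t)text[src / 8];  /* 8 columns per character */
        uint8_t bits  = font[8 * ascii + (src % 8)];

        write_data_port(bits);                   /* drive the LEDs of this column */
        write_select_port((uint8_t)(1u << col)); /* only one select bit active */
        delay_ms(2);                             /* tune between ~1 and 10 ms */
        write_select_port(0);                    /* deactivate before the next column */
    }
}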
If your HW implementation does not fit this scheme, don't worry:
just use bit SHIFT, AND, OR operations to assemble the data words in memory and then send them in a similar manner.
Hope it helps a little.
You could try to find a font that looks the way you want (probably a monospaced font such as Courier), draw/rasterize it with a certain size (8pt?), without anti-aliasing, and convert the resulting image to your matrix format.
I know that the NES had 4-color sprites (with 1 usually being transparent. Edit: according to zneak, 1 color is always transparent). How then did the original Final Fantasy have so many sprites with 4 colors + transparent? (Example sprite sheet -- especially look at the large ones near the bottom.)
I understand that you can layer sprites to achieve additional colors (For example: Megaman's layering gives him 6 colors: body=3+trans, face=3+trans). It's odd that these FF ones are all exactly 4 colors + transparent. If FF used similar layering, why would they stop at 4+1 instead of taking advantage of 6+1?
Is there another method of displaying sprites that gives you an additional color?
Also interesting is the fact that the big sprites are 18x26. Sprites are 8x8 (and I think I read somewhere that they're sometimes 8x16), but both 18 and 26 are [a multiple of 8] + 2. Very strange.
As far as I know, 1 isn't usually transparent: it always is.
As you noted, sprites are either 8x8 or 8x16 (this depends on bit 6 of the PPU control register mapped to memory address 0x2000 in the CPU's address space). Character sizes not being a multiple of 8 simply means there are wasted pixels in one or more of the constituting sprites.
For the colors, I beg to differ: the last sprite at the bottom, with the sword raised, has these 8 colors:
Final Fantasy sprite 8 colors: black, brown, beige, sky blue, navy, dark turquoise, turquoise, cyan http://img844.imageshack.us/img844/2334/spritecolors.png
I believe this is more an artistic choice, because each 8x8 block is limited to 3 opaque colors; maybe it just was more consistent to use fewer colors.
I found the answer. I finally broke down and downloaded the ROM and extracted the bitmaps with NAPIT. (btw: staring at extracted ROM bitmaps is really bloody hard on your eyes!)
I matched a few bitmaps and end-results here.
Each character has a color that is mostly relegated to the top part of the sprite, so I chased that idea for a while. It turns out that's a red herring. Comparing the in-game sprites vs. the color masks, you can see that black and transparent use the same color mask. Therefore, IF a black outline is shown, then it must be on a separate layer. However, despite the black outlines on the sprite sheet, I can't find any real examples of black outlines in the game.
Here's a video on YouTube with lots of good examples. When you are on the blue background screen (at 0:27), the outlines and the black mage's face are the blue of the background (i.e. there is no black outline, it's transparent). In combat, the background is black. At 1:46 a spell is cast that makes the background flash grey. All black areas, including outlines and black eyes, flash grey. Other spells are also cast around this part of the video with different colors of flashes. The results are the same.
The real answer is that the black outlines on the sprite sheet don't seem to exist in the game. Whoever made the sprite sheet took the screenshots with a black background and scrubbed the background away.
You might want to check out Game Development StackExchange instead of here.
I've just had a quick glance at the sprite sheet, but it looks to me that sprites with more than 3 colors + 1 transparent either have weapons or use 3 colors + a black outline. Also, if you could show that sprite sheet with a grid separating tiles...
Maybe the extra 2 colors were reserved for the weapons.
Is there a recommended smallest button size under normal conditions?
By "recommended" I mean prescribed by some document like:
Apple HCI Guidelines
Windows UX Guidelines
or some ISO standard..
By "normal" conditions I mean:
desktop/office use
standard 96dpi monitor resolution
mouse/touchpad for pointing (no touchscreen)
non-disabled or visually impaired users
standard "theme" (no large fonts/icons)
Microsoft's UX Guide for Windows 7 and Vista recommends:
"Make click targets at least 16x16 pixels so that they can be easily clicked by any input device. For touch, the recommended minimum control size is 23x23 pixels (13x13 DLUs)." where"A dialog unit (DLU) is a device-independent metric where one horizontal dialog unit equals one-fourth of the average character width for the current font and one vertical dialog unit equals one-eighth of the character height for the current font. Because characters are roughly twice as high as they are wide, a horizontal DLU is roughly the same size as a vertical DLU, but it's important to realize that DLUs are not a square unit."
You may also want to look up Fitts' Law, which calculates the time necessary to complete an action as a function of the distance to the target and the target's size. That can help mathematically determine the trade-offs of different button sizes.
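For reference, the commonly used Shannon formulation of Fitts' Law is T = a + b * log2(D/W + 1). A quick sketch with placeholder constants (a and b have to be fitted from real measurements) shows how a larger target lowers the predicted time:

#include <stdio.h>
#include <math.h>

/* Fitts' Law, Shannon formulation: D is the distance to the target and W its
   width along the movement axis. The constants a and b are placeholders. */
static double fitts_time_ms(double distance, double width) {
    const double a = 100.0;   /* ms, placeholder intercept */
    const double b = 150.0;   /* ms/bit, placeholder slope */
    return a + b * log2(distance / width + 1.0);
}

int main(void) {
    /* Same travel distance, two button sizes: the bigger target is faster. */
    printf("16 px button: %.0f ms\n", fitts_time_ms(400, 16));
    printf("32 px button: %.0f ms\n", fitts_time_ms(400, 32));
    return 0;
}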
Well, I try to make important/common mouse targets as large as possible without looking bad, around 20 pixels (assuming 96 DPI) in height, and as much width as needed to accommodate the labels. If a button has no label, which is very rare, I found it's actually comfortable to have an aspect like 20w/50h (with the icon on top, not centered), since the mouse is easier to move horizontally. So it's also good to keep such buttons in the same row.
In addition to what MsLis suggested the UX Guide also suggests a minimum width of 75 pixels specifically for Command Buttons.
UX Guide - Recommended sizing and spacing