Do you know a program or script that converts a letter into a matrix (consisting of 0s and 1s) representing that letter?
For example, from the letter I to a matrix something like this (it's an LED panel showing the letter I):
Please let me know a way to create such a matrix other than typing it by hand.
Thanks.
The only solution is to use a font.
For HW implementations I usually used the EGA/VGA 8x8 font
extracted from the graphics card BIOS; you can do that easily in an MS-DOS environment.
Another way is to extract the font programmatically from an image:
Draw the entire font to a bitmap (in a line or in a matrix ..., or use one already created, like mine). Use fixed pitch and the font size most suitable for your needs, and do not forget that almost no modern font supports fixed pitch, so use OEM_CHARSET and the font named System from it. Set the colors properly (ideally a black background and white font), then read the image pixel by pixel and store it as a table of numbers. Any pixel that is not the background color is a set pixel.
Do not compare against the font color, because of anti-aliasing and filters. Now read all the characters and set/reset the corresponding bits inside the font table. First compute the starting x,y of the character in the image (from the ASCII code and the image organization), then do two nested 8-step x,y for loops (in the order matching your font[] organization)
and set/reset the corresponding font[] bits at addresses 8*ASCII through 8*ASCII+7.
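For illustration, here is a rough Win32 GDI sketch of that extraction. The 16x16 cell grid, the System face name, the 8x8 cell size and the row-wise bit layout are taken from the description above, and error handling is omitted, so treat it as a starting point rather than a drop-in solution:

#include <windows.h>

unsigned char font[8 * 256];                       // 8 bytes (rows) per character

void ExtractFont()
{
    HDC screen = GetDC(NULL);
    HDC dc = CreateCompatibleDC(screen);
    HBITMAP bmp = CreateCompatibleBitmap(screen, 16 * 8, 16 * 8);   // 16x16 grid of 8x8 cells
    SelectObject(dc, bmp);

    HFONT fnt = CreateFontA(8, 8, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                            OEM_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS,
                            NONANTIALIASED_QUALITY, FIXED_PITCH | FF_DONTCARE, "System");
    SelectObject(dc, fnt);
    SetBkColor(dc, RGB(0, 0, 0));                  // black paper ...
    SetTextColor(dc, RGB(255, 255, 255));          // ... white ink

    // draw all 256 characters into their cells
    for (int code = 0; code < 256; code++)
    {
        char c = (char)code;
        TextOutA(dc, (code % 16) * 8, (code / 16) * 8, &c, 1);
    }

    // read back pixel by pixel: anything that is not the background color is a set pixel
    for (int code = 0; code < 256; code++)
        for (int y = 0; y < 8; y++)
        {
            unsigned char row = 0;
            for (int x = 0; x < 8; x++)
                if (GetPixel(dc, (code % 16) * 8 + x, (code / 16) * 8 + y) != RGB(0, 0, 0))
                    row |= 0x80 >> x;              // MSB = left-most pixel
            font[8 * code + y] = row;
        }

    DeleteObject(fnt);
    DeleteObject(bmp);
    DeleteDC(dc);
    ReleaseDC(NULL, screen);
}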
I assume you use an MCU to control the LED panel.
The font organization in memory is usually such that one 8-bit number represents a single row of a character. Of course, if your LED panel is meant to display an animated scroll, then a column organization of the font (and of the HW implementation) will ease things up a lot. If you have a 16-bit MCU and IO access, then you can use a 16-bit (16-pixel) font size.
If you have more than 8 pixels and only an 8-bit MCU, you can still use 16-bit data, but the IO access will be done in two steps via two IO ports instead of one. I strongly recommend whole data-width IO access instead of setting/resetting individual IO lines; it is much quicker and can prevent flickering.
OK, here is my old 8x8 font I used back in the day ... I think this one was extracted from an EGA/VGA BIOS, but I am not sure ... it was too many years ago.
Now the fun part
const BYTE font[8*256] = { 0,0,0,0,0,0,0,0, ... };
Any character is represented as 8 numbers. If a bit is 0 it means paper (a background pixel); if a bit is 1 it means ink (a font pixel). Now, there are several possible orderings (left to right, up to down, and their combinations).
OK, a ( row-wise | left to right | up to down ) organization means:
the first number is the top-most row of the character,
the MSB is the left-most pixel,
the LSB is the right-most pixel.
So, for example, the char '1' in 8x8 will be something like this (b means a binary number):
00000000b,
00001000b,
00011000b,
00101000b,
00001000b,
00001000b,
00111100b,
00000000b,
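As a small sketch of what that layout means in code (using the font[] table above; the helper name is just for illustration):

extern const unsigned char font[8 * 256];   // the table above (BYTE == unsigned char)

// returns nonzero if pixel (x, y) of character `code` is ink, 0 if it is paper;
// x = 0 is the left-most column (the MSB), y = 0 is the top-most row
int font_pixel(unsigned char code, int x, int y)
{
    return (font[8 * code + y] >> (7 - x)) & 1;
}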
When you have extracted all the characters into the font table, save it as source code to a file which you will later include in your MCU code (it can be placed in EEPROM as program code).
Now, the algorithm to print a char on the LED panel depends strongly on your HW implementation,
so please post a circuit diagram of the interconnection between the LED panel and the control system,
specify the target platform and language,
and specify the desired functionality.
I assume you want a left-moving scroll with a one-pixel step;
the best fit will be if your LED panel is driven by columns, not rows.
You can activate a single column of LEDs through a data IO port (all bits can be active at a time), and selecting which column is active is driven by another select IO port (only a single bit can be active at a time). So in this case, compute the address of the column to display in the font table:
address = (8*ASCII + x_offset)
send font[8*ASCII + x_offset] to data IO port
activate select IO port with the correct bit active
wait a while (1-10 ms) so you can actually see the light; if the delay is too short there is no brightness, if it is too long there is flickering, so you need to experiment a little (it depends on the number of select bits)
deactivate select IO port
repeat with the next column
x_offset is the scrolling shift
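For illustration, a hedged sketch of one refresh pass. It assumes the column organization recommended above for scrolling (one byte per column, each bit driving one LED row), not the row-organized table shown earlier; the IO helpers and the panel width are placeholders for whatever your HW really uses:

extern const unsigned char font[8 * 256];        // column-organized: font[8*code + column]

void write_data_port(unsigned char rows);        // hypothetical: drives all 8 row LEDs at once
void write_select_port(unsigned char columns);   // hypothetical: one-hot column select
void delay_ms(unsigned ms);                      // hypothetical busy wait

// one refresh pass of the text, shifted left by x_offset pixels
void refresh_panel(const unsigned char *text, unsigned x_offset)
{
    const unsigned panel_columns = 8;                        // assumed panel width
    for (unsigned col = 0; col < panel_columns; col++)
    {
        unsigned pixel = col + x_offset;                     // absolute column within the text
        unsigned char code = text[pixel / 8];                // which character that column falls in
        write_data_port(font[8 * code + (pixel % 8)]);       // its column pattern
        write_select_port(1u << col);                        // enable just this panel column
        delay_ms(2);                                         // 1-10 ms: visible but not flickering
        write_select_port(0);                                // disable before moving on
    }
}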
If your HW implementation does not fit this scheme, don't worry;
just use bit SHIFT, AND, OR operations to create the data words in memory and then send them in a similar manner.
Hope it helps a little.
You could try to find a font that looks the way you want (probably a monospaced font such as Courier), draw/rasterize it with a certain size (8pt?), without anti-aliasing, and convert the resulting image to your matrix format.
Related
I have been scouring the internet for weeks trying to figure out exactly how text (such as what you are reading right now) is displayed to the screen.
Results have been shockingly sparse.
I've come across the concepts of rasterization, bitmaps, vector graphics, etc. What I don't understand is how the underlying implementation works so uniformly across all systems (Windows, Linux, etc.) in a way we as humans understand. Is there a specification defined somewhere? Is the implementation code open source and viewable by the general public?
My understanding as of right now, is this:
Create a font with an external drawing program, one character at a time
Add these characters into a font file that is understood by language-specific libraries
These characters are then read from the file as needed by the GPU and displayed to the screen in a linear fashion as defined by the parenting code.
Additionally, if characters are defined in a font file such as 'F, C, ..., Z', how are vector graphics (which rely on a set of coordinate points) supported? Without coordinate points, rasterization would seem the only option for size changes.
This is about as far as my assumptions/research goes.
If you are familiar with this topic and can provide a detailed answer that may be useful to myself and other readers, please answer at your discretion. I find it fascinating just how much code we take for granted that is remarkably complicated under the hood.
The following provides an overview (leaving out lots of gory details):
Two key components for display of text on the Internet are (i) character encoding and (ii) fonts.
Character encoding is a scheme by which characters, such as the Latin capital letter "A", are assigned a representation as a byte sequence. Many different character encoding schemes have been devised in the past. Today, what is almost ubiquitously used on the Internet is Unicode. Unicode assigns each character to a code point, which is an integer value; e.g., Unicode assigns LATIN CAPITAL LETTER A to the code point 65, or 41 in hexadecimal. By convention, Unicode code points are referred to using four to six hex digits with "U+" as a prefix. So, LATIN CAPITAL LETTER A is assigned to U+0041.
Fonts provide the graphical data used to display text. There have been various font formats created over the years. Today, what is ubiquitously used on the Internet are fonts that follow the OpenType spec (which is an extension of the TrueType font format created back around 1991).
What you see presented on the screen are glyphs. An OpenType font contains data for the glyphs, and also a table that maps Unicode code points to corresponding glyphs. More precisely, the character-to-glyph mapping (or 'cmap') table maps Unicode code points to glyph IDs. The code points are defined by Unicode; the glyph IDs are a font-internal implementation detail, and are used to look up the glyph descriptions and related data in other tables.
Glyphs in an OpenType font can be defined as bitmaps, or (far more common) as vector outlines (Bezier curves). There is an assumed coordinate grid for the glyph descriptions. The vector outlines, then, are defined as an ordered list of coordinate pairs for Bezier curve control points. When text is displayed, the vector outline is scaled onto a display grid, based on the requested text size (e.g., 10 point) and pixel sizing on the display. A rasterizer reads the control point data in the font, scales as required for the display grid, and generates a bitmap that is put onto the screen at an appropriate position.
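To make that concrete, here is a small sketch using FreeType, a widely used open-source rasterizer (so yes, such implementation code is publicly viewable); the font path and the size/resolution values are placeholders:

#include <ft2build.h>
#include FT_FREETYPE_H
#include <cstdio>

int main()
{
    FT_Library lib;
    FT_Face face;
    if (FT_Init_FreeType(&lib)) return 1;
    if (FT_New_Face(lib, "SomeFont.ttf", 0, &face)) return 1;   // hypothetical font file

    // 10 pt at 96 dpi; the char size is given in 1/64 point units
    FT_Set_Char_Size(face, 0, 10 * 64, 96, 96);

    // cmap lookup: Unicode code point U+0041 -> font-internal glyph ID
    FT_UInt glyphIndex = FT_Get_Char_Index(face, 0x0041);

    // load the outline and rasterize it into an anti-aliased bitmap
    FT_Load_Glyph(face, glyphIndex, FT_LOAD_DEFAULT);
    FT_Render_Glyph(face->glyph, FT_RENDER_MODE_NORMAL);

    FT_Bitmap &bmp = face->glyph->bitmap;
    std::printf("glyph %u rasterized to %ux%u pixels, advance %.2f px\n",
                glyphIndex, (unsigned)bmp.width, (unsigned)bmp.rows,
                face->glyph->advance.x / 64.0);                 // advance is in 1/64 pixel units
    return 0;
}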
One extra detail about displaying the rasterized bitmap: most operating systems or apps will use some kind of filtering to give glyphs a smoother and more legible appearance. For example, a grayscale anti-alias filter will set display pixels at the edges of glyphs to a gray level, rather than pure black or pure white, to make edges appear smoother when the scaled outline doesn't align exactly to the physical pixel boundaries—which is most of the time.
I mentioned "at an appropriate position". The font has metric (positioning) information for the font as a whole and for each glyph.
The font-wide metrics will include a recommended line-to-line distance for lines of text, and the placement of the baseline within each line. These metrics are expressed in the units of the font's glyph design grid; the baseline corresponds to y=0 within the grid. To start a line, the (0,0) design grid position is aligned to where the baseline meets the edge of a text container within the page layout, and the first glyph is positioned.
The font also has glyph metrics. One of the glyph metrics is an advance width for each given glyph. So, when the app is drawing a line of text, it has a starting "pen position" at the start of the line, as described above. It then places the first glyph on the line accordingly, and advances the pen position by the amount of that first glyph's advance width. It then places the second glyph using the new pen position, and advances again. And so on as glyphs are placed along the line.
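A minimal sketch of that pen-advance loop (Glyph, advanceWidth and drawGlyph are placeholders, not any particular platform's API):

#include <vector>

struct Glyph { int id; double advanceWidth; };                  // advance width already scaled to pixels
void drawGlyph(const Glyph &g, double penX, double baselineY);  // hypothetical rasterize-and-blit

void drawLine(const std::vector<Glyph> &glyphs, double startX, double baselineY)
{
    double penX = startX;                 // pen starts where the baseline meets the text container edge
    for (const Glyph &g : glyphs)
    {
        drawGlyph(g, penX, baselineY);    // place this glyph at the current pen position
        penX += g.advanceWidth;           // then advance the pen by the glyph's advance width
    }
}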
There are (naturally) more complexities in laying out lines of text. What I described above is sufficient for English text displayed in a basic text editor. More generally, display of a line of text can involve substitution of the default glyphs with certain alternate glyphs; this is needed, for example, when displaying Arabic text so that characters appear cursively connected. OpenType fonts contain a "glyph substitution" (or 'GSUB') table that provides details for glyph substitution actions. In addition, the positioning of glyphs can be adjusted for various reasons; for example, to position a diacritic glyph correctly over a letter. OpenType fonts contain a "glyph positioning" ('GPOS') table that provides the position adjustment data. Operating system platforms and browsers today support all of this functionality so that Unicode-encoded text for many different languages can be displayed using OpenType fonts.
Addendum on glyph scaling:
Within the font, a grid is set up with a certain number of units per em. This is set by the font designer. For example, the designer might specify 1000 units per em, or 2048 units per em. The glyphs in the font and all the metric values (glyph advance width, default line-to-line distance, etc.) are all set in font design grid units.
How does the em relate to what content authors set? In a word processing app, you typically set text size in points. In the printing world, a point is a well defined unit for length, approximately but not quite 1/72 of an inch. In digital typography, points are defined as exactly 1/72 of an inch. Now, in a word processor, when you set text size to, say, 12 points, that really means 12 points per em.
So, for example, suppose a font is designed using 1000 design units per em. And suppose a particular glyph is exactly 1 em wide (e.g., an em dash); in terms of the design grid units, it will be exactly 1000 units wide. Now, suppose the text size is set to 36 points. That means 36 points per em, and 36 points = 1/2", so the glyph will print exactly 1/2" wide.
When the text is rasterized, it's done for a specific target device, that has a certain pixel density. A desktop display might have a pixel (or dot) density of 96 dpi; a printer might have a pixel density of 1200 dpi. Those are relative to inches, but from inches you can get to points, and for a given text size, you can get to ems. You end up with a certain number of pixels per em based on the device and the text size. So, the rasterizer takes the glyph outline defined in font design units per em, and scales it up or down for the given number of pixels per em.
For example, suppose a font is designed using 1000 units per em, and a printer is 1000 dpi. If text is set to 72 points, that's 1" per em, and the font design units will exactly match the printer dots. If the text is set to 12 points, then the rasterizer will scale down so that there are 6 font design units per printer dot.
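The scaling itself is just arithmetic; a small sketch, with the numbers from the example above in the comments:

// 1 point = 1/72 inch, so:
double pixelsPerEm(double pointSize, double dpi)
{
    return pointSize * dpi / 72.0;
}

// scale a length given in font design units to device pixels
double designUnitsToPixels(double designUnits, double unitsPerEm, double ppem)
{
    return designUnits * ppem / unitsPerEm;
}

// Example from the text: a 1000 units/em font on a 1000 dpi printer at 12 pt:
//   pixelsPerEm(12, 1000) = 166.67 dots per em, i.e. 1000 / 166.67 = 6 design units per dot.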
At that point, the details in the glyph outline might not align to whole units in the device grid. The rasterizer needs to decide which pixels/dots get ink and which do not. The font can include "hints" that affect the rasterizer behaviour. The hints might ensure that certain font details stay aligned, or the hints might be instructions to move a Bezier control point by a certain amount based on the current pixels-per-em.
For more details, see Digitizing Letterform Designs and Font Engine from Apple's TrueType Reference Manual, which goes into lots of detail.
I have a text file of DNA sequences, more than 3 billion characters of four letters: A, T, C and G.
I'd like to have an image of this file, converting each character into the right pixel in the image. I would greatly appreciate your comments. Is there any software to do so?
Sorry, my main question was how to convert a text file like this:
ATCGAATTCCGGAAATACGATCGGCTCA...
to an image?
Of course there is a way. My answer at https://bioinformatics.stackexchange.com/questions/14184/how-does-deepvariant-construct-rgb-images-from-dna-sequences would help.
In RGB, each dimension is an NxN image. Since you have three dimensions, it's 3xNxN. The red dimension was used to encode the nucleotide bases, the green dimension was used to encode the quality scores, and finally the blue dimension was used to encode the strand information.
I don't know what those four letters mean, but:
1) Assign each of them a color. You have four colors for the four letters.
2) Obviously you would kill the PC if you read the whole file and stored it in RAM, so you should read it in chunks.
3) So, let's say you will display it on a 1024x768 monitor; then 3,000,000,000/1024 = 2,929,687.5, and that is the size of your chunks.
I would:
-1: read the first 2,929,688 letters of your file.
-2: create a global RGB variable, which could be an array storing 3 doubles.
-3: for each letter, split its color into its RGB components and add them to the global RGB components, for example:
//letterRGB={red:255,green:125,blue:255};
globalRGB["red"]+=letterRGB["red"]/255; //gives 1
globalRGB["green"]+=letterRGB["green"]/255; // gives 0.5
globalRGB["blue"]+=letterRGB["blue"]/255;//gives 1
-4: divide each component by the number of letters, and then multiply that by 255. This gives you the color of the chunk. For example:
globalRGB["red"]=Math.round((globalRGB["red"]/nPoints)*255);//nPoints=2,929,688
So here you are basically calculating the average color of the whole 2,929,688 letters, and that's the color of only 1 point (or pixel) on your screen, one of the 1024 points.
I would repeat the process with the next 2,929,688 letters until I get my 1024 chunks represented.
Let's suppose your user clicks on one point (or chunk) on the screen. Your system should zoom in, and the way to do that is to repeat this whole process, but only for the 2,929,688 letters in that point.
So those chunks would be made of only 2,929,688/1024 ≈ 2,861 letters each. And so on. I bet you already got the logic.
At some point, by zooming, the user would be able to see the individual letters one by one, each represented in its color, ordered in the sequence.
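A rough, hedged sketch of that chunk-averaging step in plain C++ (the file name, the letter-to-color palette and the PPM output are my own assumptions, just to make the idea concrete):

#include <cstdio>
#include <vector>

struct RGB { double r, g, b; };

// one arbitrary color per base; anything else (newlines, N, ...) is skipped
static bool letterColor(char c, RGB &out)
{
    switch (c) {
        case 'A': out = {255,   0,   0}; return true;
        case 'T': out = {  0, 255,   0}; return true;
        case 'C': out = {  0,   0, 255}; return true;
        case 'G': out = {255, 255,   0}; return true;
        default:  return false;
    }
}

int main()
{
    const long long pixels = 1024;                     // width of the output strip
    FILE *in = std::fopen("sequence.txt", "rb");       // hypothetical input file
    if (!in) return 1;

    // note: for files over 2 GB on some platforms you need a 64-bit ftell (ftello/_ftelli64)
    std::fseek(in, 0, SEEK_END);
    long long total = std::ftell(in);
    std::fseek(in, 0, SEEK_SET);
    long long chunk = (total + pixels - 1) / pixels;   // letters averaged into one pixel

    FILE *out = std::fopen("strip.ppm", "wb");         // 1024x1 portable pixmap
    std::fprintf(out, "P3\n%lld 1\n255\n", pixels);

    std::vector<char> buf((size_t)chunk);
    for (long long p = 0; p < pixels; p++) {
        size_t n = std::fread(buf.data(), 1, buf.size(), in);
        RGB sum = {0, 0, 0};
        long long counted = 0;
        for (size_t i = 0; i < n; i++) {
            RGB c;
            if (letterColor(buf[i], c)) { sum.r += c.r; sum.g += c.g; sum.b += c.b; counted++; }
        }
        if (counted == 0) counted = 1;                 // avoid dividing by zero on an empty tail
        std::fprintf(out, "%d %d %d\n",
                     (int)(sum.r / counted), (int)(sum.g / counted), (int)(sum.b / counted));
    }
    std::fclose(out);
    std::fclose(in);
    return 0;
}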
Let me know what you think about this, and good luck.
I'm trying to design a UserForm in Excel 2010, and I'm running into a very annoying stumbling block as I try to move things around, resize them and align them so the form looks appealing.
Unfortunately, different mechanisms are snapping to differently sized grids. For example, drawing a box onto the grid snaps it to multiples of 6, which is the default option found in Tools > Options > General > Grid units. Resizing these objects snaps them to a seemingly arbitrary grid size that appears to be approximately 7.2 units.
I need these units to match up so I'm not constantly fighting to get these grids to cooperate. I don't care what the actual number ends up being; I just need them to be the same. While I'm able to change the grid size, it must be a whole number, which the arbitrary grid is not.
The problem is your units: points <-> pixels.
6 points * 4/3 pixels per point => 8 pixels .. all good,
a nice integer number of pixels.
Your approx. 7.2 I suspect was actually 7.3333333.
Say you set 11 points; it gets changed as follows:
* 4/3 => 14.6666 px, rounded to 15 px,
back * 3/4 pt/px => 11.25 points.
Points stored as Single .. pixels as Integer may be the problem in the math.
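A tiny sketch of that round-trip at 96 DPI (just arithmetic; that sizes get snapped to whole pixels is the hypothesis above, not documented Excel behaviour):

#include <cmath>
#include <cstdio>

int main()
{
    double points  = 11.0;
    double pixels  = points * 96.0 / 72.0;         // 14.666... px at 96 DPI
    int    snapped = (int)std::lround(pixels);     // assume storage as whole pixels -> 15 px
    double back    = snapped * 72.0 / 96.0;        // 11.25 pt: no longer the value you typed
    std::printf("%.2f pt -> %d px -> %.2f pt\n", points, snapped, back);
    return 0;
}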
Does anyone know how I can change the brightness and contrast of a color image? I know about vtkImageMapToWindowLevel, but after setting the level or window of the image with this class, the color image becomes grayscale.
Thanks for any answers.
By definition, a color image is already color mapped, and you cannot change the brightness/contrast of the image without decomposition and recomposition.
First, define a pair of numbers called brightness and contrast in whatever way you want. Normally, I'd take brightness as the maximum value, and contrast as the ratio between minimum and maximum. Similarly, if you want to use Window/Level semantics, "level" is the minimum scalar value, and window is the difference between maximum and minimum.
Next, you find the scalar range - the minimum and maximum values in your desired output image, using the brightness and contrast. If you're applying brightness/contrast, the scalar range is:
Maximum = brightness
Minimum = Maximum / contrast
Assume a color lookup table (LUT) with a series of colors at different proportional values, say, in the range of 0 to 1. Now, since we know the brightness and contrast, we can set up the LUT with the lower value (range 0) mapping to "minimum" and the upper value (range 1) mapping to "maximum". When this is done, a suitable class like vtkImageMapToColors can take the single-component input and map it to a 3- or 4-component image.
Now, all this can happen only for a single-component image, as the color LUT classes (vtkScalarsToColors and related classes) are defined only on single-component images.
If you have access to the original one-component image, and you're using vtkImageMapToColors or some similar class, I'd suggest handling it at that stage.
If you don't, there is one way I can think of (a rough pipeline sketch follows these steps):
Extract the three channels as three different images using vtkImageExtractComponents (you'll need three instances, each with the original image as input).
Independently scale the 3 channels using vtkImageShiftScale (shift by brightness, scale by contrast)
Combine the channels back using vtkImageAppendComponents
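A hedged VTK/C++ sketch of that extract / shift-scale / append pipeline; the classes and methods are standard VTK API, but using the shift as "brightness" and the scale as "contrast" is just one possible convention:

#include <vtkSmartPointer.h>
#include <vtkImageData.h>
#include <vtkImageExtractComponents.h>
#include <vtkImageShiftScale.h>
#include <vtkImageAppendComponents.h>

vtkSmartPointer<vtkImageData> AdjustRGB(vtkImageData *input, double shift, double scale)
{
    auto append = vtkSmartPointer<vtkImageAppendComponents>::New();

    for (int c = 0; c < 3; ++c)                       // R, G, B
    {
        auto extract = vtkSmartPointer<vtkImageExtractComponents>::New();
        extract->SetInputData(input);
        extract->SetComponents(c);                    // pull out one channel

        auto shiftScale = vtkSmartPointer<vtkImageShiftScale>::New();
        shiftScale->SetInputConnection(extract->GetOutputPort());
        shiftScale->SetShift(shift);                  // "brightness"
        shiftScale->SetScale(scale);                  // "contrast"
        shiftScale->ClampOverflowOn();                // keep values inside the scalar type range
        shiftScale->SetOutputScalarTypeToUnsignedChar();

        append->AddInputConnection(shiftScale->GetOutputPort());
    }
    append->Update();

    auto result = vtkSmartPointer<vtkImageData>::New();
    result->DeepCopy(append->GetOutput());
    return result;
}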
Another possibility is to use vtkImageMagnitude, which will convert the image back to grayscale (by taking the magnitude of the three channels together), and then re-apply the color table using vtkImageMapToColors and any of the vtkScalarsToColors classes as your lookup table.
The first method is better if your image is a real photograph or something similar, where the colors are from some 3-component source, and the second would work better if your input image is already using false colors (say an image from an IR camera, or some procedurally generated fractal that's been image mapped).
Is there a recommended smallest button size under normal conditions?
By "recommended" I mean prescribed by some document like:
Apple HCI Guidelines
Windows UX Guidelines
or some ISO standard..
By "normal" conditions I mean:
desktop/office use
standard 96dpi monitor resolution
mouse/touchpad for pointing (no touchscreen)
non-disabled or visually impaired users
standard "theme" (no large fonts/icons)
Microsoft's UX Guide for Windows 7 and Vista recommends:
"Make click targets at least 16x16 pixels so that they can be easily clicked by any input device. For touch, the recommended minimum control size is 23x23 pixels (13x13 DLUs)." where"A dialog unit (DLU) is a device-independent metric where one horizontal dialog unit equals one-fourth of the average character width for the current font and one vertical dialog unit equals one-eighth of the character height for the current font. Because characters are roughly twice as high as they are wide, a horizontal DLU is roughly the same size as a vertical DLU, but it's important to realize that DLUs are not a square unit."
You may also want to look up Fitts' Law, which calculates the time necessary to complete an action as a function of the target size. That can help mathematically determine the trade-offs of different button sizes.
Well, I try to make important/common mouse targets as large as possible without looking bad, something around 20 pixels high (assuming 96 DPI), and as much width as needed to accommodate the label. If the button has no label, which is very rare, I found it's actually comfortable to have an aspect like 20w/50h (with the icon on top, not centered), since the mouse is easier to move horizontally. So it's also good to keep them in the same row.
In addition to what MsLis suggested, the UX Guide also suggests a minimum width of 75 pixels specifically for Command Buttons.
UX Guide - Recommended sizing and spacing