When you render text on an HTML5 canvas (using fillText(), for example), the text is rendered anti-aliased, meaning it looks smoother. The downside is that the smoothing becomes very noticeable when rendering small text or fonts that are specifically meant to be aliased (such as Terminal). Because of this, what I want to do is render text aliased rather than anti-aliased.
Is there any way to do so?
Unfortunately, there is no native way to turn off anti-aliasing for text.
The solution is to use the old-school approach of bitmap fonts: in the case of the HTML5 canvas, a sprite sheet from which you copy each bitmap letter onto the canvas. By using a sprite sheet with a transparent background you can easily change its color, apply gradients, and so on.
An example of such a bitmap:
For this to work you need to know which characters the bitmap contains (the "map"), the width and height of each character cell, and the width of the font bitmap.
Note: In most cases you'll probably end up with a mono-spaced font where all cells have the same size. You can use a proportional font, but in that case be aware that you need to map each character to an absolute position and include the width and height of its cell as well.
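For a proportional font, such a map could look something like the following rough sketch (the characters and pixel values are made up for illustration):
// Hypothetical cell map for a proportional bitmap font: absolute position
// and size of each character's cell inside the sprite sheet, in pixels.
const propMap = {
  "A": {x: 0,  y: 0, w: 11, h: 16},
  "B": {x: 11, y: 0, w: 10, h: 16},
  "i": {x: 21, y: 0, w: 4,  h: 16}
  // ...one entry per character in the sheet
};
// When drawing, look up the cell for each character and advance the x
// position by that cell's w instead of a fixed cell width.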
A full example of the mono-spaced approach, with comments:
const ctx = c.getContext("2d"), font = new Image;

font.onload = () => {
  // define some meta-data
  const charWidth = 12;  // character cell, in pixels
  const charHeight = 16;
  const sheetWidth = (font.width / charWidth)|0; // width, in characters, of the image itself

  // map so we can use the index of a char. to calc. its position in the bitmap
  const charMap = " !\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~§";

  // Draw some demo text
  const timeStart = performance.now();
  fillBitmapText(font, "Demo text using bitmap font!", 20, 20);
  fillBitmapText(font, "This is line 2...", 20, 45);
  const timeEnd = performance.now();
  console.log("Text above rendered in", timeEnd - timeStart, "ms");

  // main example function
  function fillBitmapText(font, text, x, y) {
    // always make sure x and y are integer positions
    x = x|0;
    y = y|0;

    // current x position
    let cx = x;

    // now, iterate over the text per char.
    for(let char of text) {
      // get index in map:
      const i = charMap.indexOf(char);
      if (i >= 0) { // valid char
        // use the index to calculate the position in the bitmap:
        const bx = (i % sheetWidth) * charWidth;
        const by = ((i / sheetWidth)|0) * charHeight;
        // draw in character on canvas
        ctx.drawImage(font,
          // position and size from font bitmap
          bx, by, charWidth, charHeight,
          // position on canvas, same size
          cx, y, charWidth, charHeight);
      }
      cx += charWidth; // increment current canvas x position
    }
  }
};
font.src = "//i.stack.imgur.com/GeawH.png";
body {background:#fff}
<canvas id=c width=640></canvas>
This should produce an output similar to this:
You can modify this to suit your needs. Notice that the bitmap used here is not transparent; I'll leave making it transparent to the OP.
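If you want to try that yourself, one way (a rough sketch, assuming the font bitmap has a plain near-white background) is to knock the background out on an offscreen canvas once the image has loaded, and use that canvas as the sprite sheet instead:
// Make a transparent copy of the font bitmap by clearing (near-)white pixels.
// Note: getImageData() requires the image to be same-origin or loaded with
// CORS enabled (img.crossOrigin = "anonymous").
function makeTransparent(img) {
  const off = document.createElement("canvas");
  off.width = img.width;
  off.height = img.height;
  const octx = off.getContext("2d");
  octx.drawImage(img, 0, 0);
  const idata = octx.getImageData(0, 0, off.width, off.height);
  const d = idata.data;
  for (let i = 0; i < d.length; i += 4) {
    // treat bright pixels as background; tweak the threshold for your bitmap
    if (d[i] > 240 && d[i + 1] > 240 && d[i + 2] > 240) d[i + 3] = 0;
  }
  octx.putImageData(idata, 0, 0);
  return off; // a canvas can be passed to drawImage() just like an image
}
You could then pass makeTransparent(font) to fillBitmapText() in place of font.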
I am using the p5.js Web Editor.
var sketch = function (p) {
with(p) {
p.setup = function() {
createCanvas(400, 400);
secCanvas = createGraphics(400, 400);
secCanvas.clear();
trans = 0;
drop_size = 10;
sun_size = 50;
radius = 10;
};
p.draw = function() {
background(3, 182, 252, 1);
image(secCanvas, 0, 0)
secCanvas.fill(255, 162, 0, 1)
secCanvas.ellipse(width/2, 0 + sun_size, sun_size)
fill(40, trans)
trans = random(255);
ellipse(random(mouseX + radius, mouseX - radius), random(mouseY + radius, mouseY - radius), drop_size)
drop_size = random(50)
};
}
};
let node = document.createElement('div');
window.document.getElementById('p5-container').appendChild(node);
new p5(sketch, node);
body {
background-color:#efefef;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.1.9/p5.js"></script>
<div id="p5-container"></div>
When I set a discrete alpha value in secCanvas.fill(), the drawn shape's opacity appears to increase gradually (and then stops), even though I gave no such instruction. Why is this happening? This only happens when I put background(3, 182, 252, 1); in the draw function, but not when I put it in the setup function.
Each frame is drawn on top of all previous frames, so when you draw a semi-transparent background, you can still see the previous frames underneath it.
Think of it as adding a very thin coat of paint on top of what you've already painted. Because the color you're adding is semi-transparent, you can still see what's underneath it. Then during the next frame, you add another layer of paint, and the previous frames become just a little more faint.
They stop becoming more faint because of the way the computer calculates the new color from the previous frames and the new semi-transparent background color. Long story short, the color you're drawing is almost 100% transparent, so it's not strong enough to completely hide the previous frames.
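To make that concrete, here is a minimal sketch (plain JavaScript, not p5-specific, with made-up pixel values) of 8-bit "source-over" blending; real browsers may round and premultiply slightly differently, but the idea is the same:
// result = round(dst + (src - dst) * alpha)
function blend8(dst, src, alpha) {
  return Math.round(dst + (src - dst) * alpha);
}

let pixel = 40;        // arbitrary dark value left by an earlier frame
const bg = 182;        // e.g. the green channel of background(3, 182, 252, 1)
const alpha = 1 / 255; // the background is drawn with alpha = 1 (out of 255)

for (let frame = 1; frame <= 30; frame++) {
  pixel = blend8(pixel, bg, alpha);
}
console.log(pixel); // logs 55 with these example numbers
// The value creeps toward the background for a dozen or so frames and then
// stops changing, because (bg - pixel) * alpha rounds to zero long before
// pixel actually reaches bg.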
The project in question, https://github.com/matutter/Pixel2, is a personal project to replace some out-of-date software at work. What it should do is: the user adds an image and it generates a color palette of the image. The color palette should have no duplicate colors. (That's the only important part.)
My question is: why do larger, hi-res, or complex images not work as well? (loss of color data)
Using dropzone.js, I have the user put a picture on the page. The picture is a thumbnail. Next I use jQuery to read the src out of the <img src="...">. I pass that src to a function that does this:
function generate(imgdata) {
var imageObj = new Image();
imageObj.src = imgdata;
convert(imageObj); //the function that traverses the image data pulling out RGB
}
the "convert" function pulls out the data fairly simply by
for(var i=0, n=data.length; i<n; i+=4, pixel++ ) {
r = data[i];
g = data[i+1];
b = data[i+2];
color = r + g + b; // format is a string of **r, g, b**
}
Finally, the last part of the main algorithm filters out duplicate colors; I want only one occurrence of each... here's the last part:
color = monoFilter(color); // the call
function monoFilter(s) {
var unique = [];
$.each(s, function(i, el){
if($.inArray(el, unique) === -1) unique.push(el);
});
unique.splice(0,1); //remove undefine
unique.unshift("0, 0, 0"); //make sure i have black
unique.push("255, 255, 255"); //and white
return unique;
}
I'm hoping someone can help me identify why there is such a loss of color data in big files.
If anyone is actually interested enough to look at the GitHub repo, the relevant files are js/pixel2.js, js/dropzone.js, and ../index.html.
This is probably the cause of the problem:
color = r + g + b; // format is a string of **r, g, b**
This simply adds the numbers together, and the more pixels you have, the higher the risk that two different colors produce the same number. For example, these colors generate the same result:
R G B
color = 90 + 0 + 0 = 90;
color = 0 + 90 + 0 = 90;
color = 0 + 0 + 90 = 90;
even though they are completely different colors.
To avoid this, you can do it like this if you want a string:
color = [r,g,b].join();
or you can pack them into a single integer value (which is faster to compare than a string):
color = (b << 16) + (g << 8) + r; /// LSB byte-order
Even a squared Euclidean length would be better:
color = r*r + g*g + b*b;
but with the latter you eventually risk the same collision scenario as the initial approach (though it is useful for nearest-color scenarios).
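For the de-duplication step itself, here is a minimal sketch of the packed-integer approach (a hypothetical helper, not taken from the project; data is assumed to be the array from getImageData().data):
// Collect unique colors using the packed-integer key.
function uniqueColors(data) {
  const seen = new Set();
  const palette = [];
  for (let i = 0; i < data.length; i += 4) {
    const key = (data[i + 2] << 16) + (data[i + 1] << 8) + data[i]; /// LSB byte-order
    if (!seen.has(key)) {
      seen.add(key);
      palette.push([data[i], data[i + 1], data[i + 2]]);
    }
  }
  return palette; // note: the alpha channel is ignored here (see the follow-up below)
}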
Anyways, hope this helps.
"The problem was that I wasn't accounting for alpha. So a palette from an image that uses alpha would have accidental duplicate records."
I figured this out after finding this question: Convert RGBA color to RGB
I am creating a bitmap in memory which combines an image and text. My code is:
HDC hdcWindow = GetDC();
HDC hdcMemDC = CreateCompatibleDC(hdcWindow);
HBITMAP hbmDrag = NULL;
if (!hdcMemDC) {
ReleaseDC(hdcWindow);
return NULL;
}
RECT clientRect = {0};
GetClientRect(&clientRect);
hbmDrag = CreateCompatibleBitmap(hdcWindow, 256, 256);
if(hbmDrag) {
SelectObject(hdcMemDC, hbmDrag);
FillRect(hdcMemDC, &clientRect, mSelectedBkgndBrush);
Graphics graphics(hdcMemDC);
// Draw the icon
graphics.DrawImage(mImage, 100, 100, 50, 50);
#if 1
CRect desktopLabelRect(0, y, clientRect.right, y);
HFONT desktopFont = mNameLabel.GetFont();
HGDIOBJ oldFont = SelectObject(hdcMemDC, desktopFont);
SetTextColor(hdcMemDC, RGB(255,0,0));
DrawText(hdcMemDC, mName, -1, desktopLabelRect, DT_CENTER | DT_END_ELLIPSIS | DT_CALCRECT);
#else
// Set font
Font font(hdcMemDC, mNameLabel.GetFont());
// Set RECT
int y = DEFAULT_ICON_HEIGHT + mMargin;
RectF layoutRect(0, y, clientRect.right, y);
// Set display format
StringFormat format;
format.SetAlignment(StringAlignmentCenter);
// Set brush
SolidBrush blackBrush(Color(255, 0, 0, 0));
// Draw the label
int labelWide = DEFAULT_ICON_WIDTH + mMargin;
CString labelName = GetLayOutLabelName(hdcMemDC, labelWide, mName);
graphics.DrawString(labelName, -1, &font, layoutRect, &format, &blackBrush);
#endif
}
DeleteDC(hdcMemDC);
ReleaseDC(hdcWindow);
return hbmDrag;
The image is output to the bitmap successfully.
For the text, if I use DrawText, it is not shown in the bitmap even though the return value is correct;
but Graphics::DrawString outputs the text successfully.
I don't know the reason. Can anybody please tell me?
Thanks a lot.
You are passing the DT_CALCRECT flag to DrawText(). This flag is documented as (emphasis mine):
Determines the width and height of the rectangle. If there are multiple lines of text, DrawText uses the width of the rectangle pointed to by the lpRect parameter and extends the base of the rectangle to bound the last line of text. If the largest word is wider than the rectangle, the width is expanded. If the text is less than the width of the rectangle, the width is reduced. If there is only one line of text, DrawText modifies the right side of the rectangle so that it bounds the last character in the line. In either case, DrawText returns the height of the formatted text but does not draw the text.
So with DT_CALCRECT set, your DrawText() call only measures the rectangle and never renders anything. Call DrawText() once with DT_CALCRECT to size the rectangle, then call it again without that flag (keeping the other formatting flags) to actually draw the string.
I'm learning DirectX, using the book "Sherrod A., Jones W. - Beginning DirectX 11 Game Programming - 2011". Now I'm exploring the 4th chapter, about drawing text.
Please help me fix the function that I'm using to draw a string on the screen. I've already loaded the font texture, and in the function I create some sprites with letters and define texture coordinates for them. This compiles correctly, but doesn't draw anything. What's wrong?
bool DirectXSpriteGame :: DrawString(char* StringToDraw, float StartX, float StartY)
{
//VAR
HRESULT D3DResult; //The result of D3D functions
int i; //Counters
const int IndexA = static_cast<char>('A'); //ASCII index of letter A
const int IndexZ = static_cast<char>('Z'); //ASCII index of letter Z
int StringLenth = strlen(StringToDraw); //Lenth of drawing string
float ScreenCharWidth = static_cast<float>(LETTER_WIDTH) / static_cast<float>(SCREEN_WIDTH); //Width of the single char on the screen(in %)
float ScreenCharHeight = static_cast<float>(LETTER_HEIGHT) / static_cast<float>(SCREEN_HEIGHT); //Height of the single char on the screen(in %)
float TexelCharWidth = 1.0f / static_cast<float>(LETTERS_NUM); //Width of the char texel(in the texture %)
float ThisStartX; //The start x of the current letter, drawingh
float ThisStartY; //The start y of the current letter, drawingh
float ThisEndX; //The end x of the current letter, drawing
float ThisEndY; //The end y of the current letter, drawing
int LetterNum; //Letter number in the loaded font
int ThisLetter; //The current letter
D3D11_MAPPED_SUBRESOURCE MapResource; //Map resource
VertexPos* ThisSprite; //Vertecies of the current sprite, drawing
//VAR
//Clamping string, if too long
if(StringLenth > LETTERS_NUM)
{
StringLenth = LETTERS_NUM;
}
//Mapping resource
D3DResult = _DeviceContext -> Map(_vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &MapResource);
if(FAILED(D3DResult))
{
throw("Failed to map resource");
}
ThisSprite = (VertexPos*)MapResource.pData;
for(i = 0; i < StringLenth; i++)
{
//Creating geometry for the letter sprite
ThisStartX = StartX + ScreenCharWidth * static_cast<float>(i);
ThisStartY = StartY;
ThisEndX = ThisStartX + ScreenCharWidth;
ThisEndY = StartY + ScreenCharHeight;
ThisSprite[0].Position = XMFLOAT3(ThisEndX, ThisEndY, 1.0f);
ThisSprite[1].Position = XMFLOAT3(ThisEndX, ThisStartY, 1.0f);
ThisSprite[2].Position = XMFLOAT3(ThisStartX, ThisStartY, 1.0f);
ThisSprite[3].Position = XMFLOAT3(ThisStartX, ThisStartY, 1.0f);
ThisSprite[4].Position = XMFLOAT3(ThisStartX, ThisEndY, 1.0f);
ThisSprite[5].Position = XMFLOAT3(ThisEndX, ThisEndY, 1.0f);
ThisLetter = static_cast<char>(StringToDraw[i]);
//Defining the letter place(number) in the font
if(ThisLetter < IndexA || ThisLetter > IndexZ)
{
//Invalid character, the last character in the font, loaded
LetterNum = IndexZ - IndexA + 1;
}
else
{
LetterNum = ThisLetter - IndexA;
}
//Unwraping texture on the geometry
ThisStartX = TexelCharWidth * static_cast<float>(LetterNum);
ThisStartY = 0.0f;
ThisEndY = 1.0f;
ThisEndX = ThisStartX + TexelCharWidth;
ThisSprite[0].TextureCoords = XMFLOAT2(ThisEndX, ThisEndY);
ThisSprite[1].TextureCoords = XMFLOAT2(ThisEndX, ThisStartY);
ThisSprite[2].TextureCoords = XMFLOAT2(ThisStartX, ThisStartY);
ThisSprite[3].TextureCoords = XMFLOAT2(ThisStartX, ThisStartY);
ThisSprite[4].TextureCoords = XMFLOAT2(ThisStartX, ThisEndY);
ThisSprite[5].TextureCoords = XMFLOAT2(ThisEndX, ThisEndY);
ThisSprite += VERTEX_IN_RECT_NUM;
}
for(i = 0; i < StringLenth; i++, ThisSprite -= VERTEX_IN_RECT_NUM);
_DeviceContext -> Unmap(_vertexBuffer, 0);
_DeviceContext -> Draw(VERTEX_IN_RECT_NUM * StringLenth, 0);
return true;
}
Although the piece of code constructing the vertex array seems correct to me at first glance, it looks like you are trying to draw your vertices with a shader that has not been set yet!
It is difficult to answer precisely without seeing the whole code, but I can guess that you will need to do something like this:
1) Create the Vertex and Pixel Shaders by compiling them first from their respective buffers.
2) Create the Input Layout description, which describes the input buffers that will be read by the Input Assembler stage. It has to match your VertexPos structure and your shader's input structure.
3) Set the shader parameters.
4) Only now can you set the shader rendering state: set the input layout, as well as the Vertex and Pixel Shaders that will be used to render your triangles, with something like:
_DeviceContext -> Unmap(_vertexBuffer, 0);
_DeviceContext->IASetInputLayout(myInputLayout);
_DeviceContext->VSSetShader(myVertexShader, NULL, 0); // Set Vertex shader
_DeviceContext->PSSetShader(myPixelShader, NULL, 0); // Set Pixel shader
_DeviceContext -> Draw(VERTEX_IN_RECT_NUM * StringLenth, 0);
This link should help you achieve what you want to do: http://www.rastertek.com/dx11tut12.html
Also, I recommend you set up an IndexBuffer and use the DrawIndexed method to render your triangles, for performance reasons: it allows the graphics adapter to store vertices in a vertex cache, so recently used vertices can be fetched from the cache instead of being read from the vertex buffer again.
More about this can be found on MSDN: http://msdn.microsoft.com/en-us/library/windows/desktop/bb147325(v=vs.85).aspx
Hope this helps!
P.S.: Also, don't forget to release the resources after using them by calling Release().
I created a tree in D3.js based on Mike Bostock's Node-link Tree. The problem I have, and that I also see in Mike's tree, is that the text labels overlap/underlap the circle nodes when there isn't enough space, rather than the links extending to leave some space.
As a new user I'm not allowed to upload images, so here is a link to Mike's Tree where you can see the labels of the preceding nodes overlapping the following nodes.
I tried various things to fix the problem by detecting the pixel length of the text with:
d3.select('.nodeText').node().getComputedTextLength();
However, this only works after I have rendered the page, whereas I need the length of the longest text item before I render.
Getting the longest text item before I render with:
nodes = tree.nodes(root).reverse();
var longest = nodes.reduce(function (a, b) {
return a.label.length > b.label.length ? a : b;
});
node = vis.selectAll('g.node').data(nodes, function(d, i){
return d.id || (d.id = ++i);
});
nodes.forEach(function(d) {
d.y = (longest.label.length + 200);
});
only returns the string length, while using
d.y = (d.depth * 200);
makes every link a static length and doesn't resize as beautifully when nodes are opened or closed.
Is there a way to avoid this overlapping? If so, what would be the best way to do this and to keep the dynamic structure of the tree?
There are three possible solutions I can come up with, but they aren't that straightforward:
Detecting the label length and using an ellipsis where it overruns child nodes (which would make the labels less readable).
Scaling the layout dynamically by detecting the label length and telling the links to adjust accordingly (which would be best, but seems really difficult).
Scaling the SVG element and using a scroll bar when the labels start to run over (not sure this is possible, as I have been working on the assumption that the SVG needs a set height and width).
So the following approach can give different levels of the layout different "heights". You have to take care that with a radial layout you risk not having enough spread for small circles to fan your text without overlaps, but let's ignore that for now.
The key is to realize that the tree layout simply maps things to an arbitrary space of width and height and that the diagonal projection maps width (x) to angle and height (y) to radius. Moreover the radius is a simple function of the depth of the tree.
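For reference, in the node-link tree this projection shows up in two places, roughly like this (paraphrased from Bostock's D3 v3 example, so treat it as a sketch rather than exact code):
// links: the radial diagonal treats d.y as the radius and d.x as the angle
var diagonal = d3.svg.diagonal.radial()
  .projection(function(d) { return [d.y, d.x / 180 * Math.PI]; });

// nodes: rotate by the angle (d.x), then translate outward by the radius (d.y)
node.attr("transform", function(d) {
  return "rotate(" + (d.x - 90) + ")translate(" + d.y + ")";
});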
So here is a way to reassign the depths based on the text lengths:
First of all, I use the following (jQuery-based) helper to compute the maximum text size for a list of strings:
var computeMaxTextSize = function(data, fontSize, fontName) {
  var maxH = 0, maxW = 0;
  var div = document.createElement('div');
  document.body.appendChild(div);
  $(div).css({
    position: 'absolute',
    left: -1000,
    top: -1000,
    display: 'none',
    margin: 0,
    padding: 0
  });
  $(div).css("font", fontSize + 'px ' + fontName);
  data.forEach(function(d) {
    $(div).html(d);
    maxH = Math.max(maxH, $(div).outerHeight());
    maxW = Math.max(maxW, $(div).outerWidth());
  });
  $(div).remove();
  return {maxH: maxH, maxW: maxW};
};
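For example (the label strings here are arbitrary):
var size = computeMaxTextSize(["root", "a rather long label"], 10, "sans-serif");
console.log(size.maxW, size.maxH); // widest and tallest label, in pixels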
Now I will recursively build an array with an array of strings per level:
var allStrings = [[]];
var childStrings = function(level, n) {
  var a = allStrings[level];
  a.push(n.name);
  if (n.children && n.children.length > 0) {
    if (!allStrings[level + 1]) {
      allStrings[level + 1] = [];
    }
    n.children.forEach(function(d) {
      childStrings(level + 1, d);
    });
  }
};
childStrings(0, root);
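At this point allStrings holds one array of label strings per tree level; with made-up data it might look like this:
console.log(allStrings);
// e.g. [["flare"], ["analytics", "animate"], ["cluster", "easing"]]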
And then compute the maximum text length per level.
var maxLevelSizes = [];
allStrings.forEach(function(d, i) {
  maxLevelSizes.push(computeMaxTextSize(allStrings[i], '10', 'sans-serif'));
});
Then I compute the total text width for all the levels (adding spacing for the little circle icons and some padding to make it look nice). This will be the radius of the final layout. Note that I will use this same padding amount again later on.
var padding = 25; // Width of the blue circle plus some spacing
var totalRadius = d3.sum(maxLevelSizes, function(d) { return d.maxW + padding; });
var diameter = totalRadius * 2; // was 960
var tree = d3.layout.tree()
  .size([360, totalRadius])
  .separation(function(a, b) { return (a.parent == b.parent ? 1 : 2) / a.depth; });
Now we can call the layout as usual. There is one last piece: to figure out the radius for the different levels we will need a cumulative sum of the radii of the previous levels. Once we have that we simply assign the new radii to the computed nodes.
// Compute cumulative sums - these will be the ring radii
var newDepths = maxLevelSizes.reduce(function(prev, curr, index) {
  prev.push(prev[index] + curr.maxW + padding);
  return prev;
}, [0]);

var nodes = tree.nodes(root);

// Assign new radius based on depth
nodes.forEach(function(d) {
  d.y = newDepths[d.depth];
});
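To see what that reduce produces, here is a small worked example with made-up level widths:
var examplePadding = 25;
var exampleLevels = [{maxW: 40}, {maxW: 60}, {maxW: 80}];
var exampleRadii = exampleLevels.reduce(function(prev, curr, index) {
  prev.push(prev[index] + curr.maxW + examplePadding);
  return prev;
}, [0]);
console.log(exampleRadii); // [0, 65, 150, 255]: ring radius for depths 0 to 3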
Et voilà! This is maybe not the cleanest solution, and it perhaps does not address every concern, but it should get you started. Have fun!