Where can one find references on implementing a "dirty rectangle" algorithm for minimizing frame-buffer updates? I am after a display model that permits arbitrary edits and computes the minimal set of "bit blit" operations required to update the display.
To build the smallest rectangle that contains all the areas that need to be repainted:
Start with a blank area (perhaps a rectangle set to 0,0,0,0 - something you can detect as 'no update required')
For each dirty area added:
Normalize the new area (i.e. ensure that left is less than right, top less than bottom)
If the dirty rectangle is currently empty, set it to the supplied area
Otherwise, set the left and top co-ordinates of the dirty rectangle to the smallest of {dirty,new}, and the right and bottom co-ordinates to the largest of {dirty,new}.
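For illustration, a minimal sketch of that accumulation in Python (my own, not from any particular library), with rectangles as (left, top, right, bottom) tuples and None standing in for 'no update required':
# Accumulate a single bounding "dirty rectangle".
def add_dirty(dirty, area):
    left, top, right, bottom = area
    # normalize: ensure left < right and top < bottom
    left, right = min(left, right), max(left, right)
    top, bottom = min(top, bottom), max(top, bottom)
    if dirty is None:
        # dirty rectangle was empty; use the supplied area
        return (left, top, right, bottom)
    # expand: smallest left/top, largest right/bottom of {dirty, new}
    return (min(dirty[0], left), min(dirty[1], top),
            max(dirty[2], right), max(dirty[3], bottom))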
Windows, at least, maintains an update region of the changes that it's been informed of, and any repainting that needs to be done due to the window being obscured and revealed. A region is an object that is made up of many possibly discontinuous rectangles, polygons and ellipses. You tell Windows about a part of the screen that needs to be repainted by calling InvalidateRect - there is also an InvalidateRgn function for more complicated areas. If you choose to do some painting before the next WM_PAINT message arrives, and you want to exclude that from the dirty area, there are ValidateRect and ValidateRgn functions.
When you start painting with BeginPaint, you supply a PAINTSTRUCT that Windows fills with information about what needs to be painted. One of the members is the smallest rectangle that contains the invalid region. You can get the region itself using GetUpdateRgn (you must call this before BeginPaint, because BeginPaint marks the whole window as valid) if you want to minimize drawing when there are multiple small invalid areas.
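From Python, for instance, you can reach these calls through ctypes (a hedged sketch, Windows-only; the window handle hwnd is assumed to come from elsewhere):
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32

def invalidate(hwnd, left, top, right, bottom):
    # Add a rectangle to the window's update region; Windows coalesces it
    # with whatever is already pending and delivers it with the next WM_PAINT.
    rect = wintypes.RECT(left, top, right, bottom)
    user32.InvalidateRect(hwnd, ctypes.byref(rect), True)

def validate(hwnd, left, top, right, bottom):
    # Remove a rectangle from the update region, e.g. after painting it
    # yourself ahead of the next WM_PAINT.
    rect = wintypes.RECT(left, top, right, bottom)
    user32.ValidateRect(hwnd, ctypes.byref(rect))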
I would assume that, as minimizing drawing was important on the Mac and on X when those environments were originally written, there are equivalent mechanisms for maintaining an update region.
Vexi is a reference implementation of this. The class is org.vexi.util.DirtyList (Apache License); it is used in production systems, i.e. thoroughly tested, and is well commented.
A caveat: the current class description is a bit inaccurate: "A general-purpose data structure for holding a list of rectangular regions that need to be repainted, with intelligent coalescing." Actually it does not currently do the coalescing. Therefore you can consider this a basic DirtyList implementation, in that it only intersects dirty() requests to make sure there are no overlapping dirty regions.
The one nuance to this implementation is that, instead of using Rect or a similar region object, the regions are stored in an array of ints, i.e. in blocks of 4 ints in a 1-dimensional array. This was done for runtime efficiency, although in retrospect I'm not sure whether there was much merit to it. (Yes, I implemented it.) It should be simple enough to substitute Rect for the array blocks in use.
The purpose of the class is to be fast. With Vexi, dirty may be called thousands of times per frame, so intersections of the dirty regions with the dirty request has to be as quick as possible. No more than 4 number comparisons are used to determine the relative position of two regions.
It is not entirely optimal due to the missing coalescing. Whilst it does ensure no overlaps between dirty/painted regions, you might end up with regions that line up and could be merged into a larger region, thereby reducing the number of paint calls (see the sketch below).
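For what it's worth, here is a sketch (in Python rather than Java, and not part of Vexi) of the kind of coalescing pass that is missing - merging rectangles that line up exactly along a shared edge:
# Merge rectangles (left, top, right, bottom) that share a full edge.
def coalesce(rects):
    rects = list(rects)
    merged = True
    while merged:
        merged = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                a, b = rects[i], rects[j]
                # same vertical extent, horizontally adjacent
                if a[1] == b[1] and a[3] == b[3] and (a[2] == b[0] or b[2] == a[0]):
                    rects[i] = (min(a[0], b[0]), a[1], max(a[2], b[2]), a[3])
                    del rects[j]
                    merged = True
                    break
                # same horizontal extent, vertically adjacent
                if a[0] == b[0] and a[2] == b[2] and (a[3] == b[1] or b[3] == a[1]):
                    rects[i] = (a[0], min(a[1], b[1]), a[2], max(a[3], b[3]))
                    del rects[j]
                    merged = True
                    break
            if merged:
                break
    return rects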
Code snippet. Full code online here.
public class DirtyList {
/** The dirty regions (each one is an int[4]). */
private int[] dirties = new int[10 * 4]; // gets grown dynamically
/** The number of dirty regions */
private int numdirties = 0;
...
/**
* Shorthand for running a new dirty() request against the entire dirties list.
* (x,y) is the top-left coordinate and (w,h) the bottom-right coordinate
*/
public final void dirty(int x, int y, int w, int h) { dirty(x, y, w, h, 0); }
/**
* Add a new rectangle to the dirty list; does nothing further if the
* region falls completely within an existing rectangle or set of
* rectangles (i.e. does not expand the dirty area)
*/
private void dirty(int x, int y, int w, int h, int ind) {
int _n;
if (w<x || h<y) {
return;
}
for (int i=ind; i<numdirties; i++) {
_n = 4*i;
// invalid dirties are marked with x=-1
if (dirties[_n]<0) {
continue;
}
int _x = dirties[_n];
int _y = dirties[_n+1];
int _w = dirties[_n+2];
int _h = dirties[_n+3];
if (x >= _w || y >= _h || w <= _x || h <= _y) {
// new region is outside of existing region
continue;
}
if (x < _x) {
// new region starts to the left of existing region
if (y < _y) {
// new region overlaps at least the top-left corner of existing region
if (w > _w) {
// new region overlaps entire width of existing region
if (h > _h) {
// new region contains existing region
dirties[_n] = -1;
continue;
}// else {
// new region contains top of existing region
dirties[_n+1] = h;
continue;
} else {
// new region overlaps to the left of existing region
if (h > _h) {
// new region contains left of existing region
dirties[_n] = w;
continue;
}// else {
// new region overlaps top-left corner of existing region
dirty(x, y, w, _y, i+1);
dirty(x, _y, _x, h, i+1);
return;
}
} else {
// new region starts within the vertical range of existing region
if (w > _w) {
// new region horizontally overlaps existing region
if (h > _h) {
// new region contains bottom of existing region
dirties[_n+3] = y;
continue;
}// else {
// new region overlaps to the left and right of existing region
dirty(x, y, _x, h, i+1);
dirty(_w, y, w, h, i+1);
return;
} else {
// new region ends within horizontal range of existing region
if (h > _h) {
// new region overlaps bottom-left corner of existing region
dirty(x, y, _x, h, i+1);
dirty(_x, _h, w, h, i+1);
return;
}// else {
// existing region contains right part of new region
w = _x;
continue;
}
}
} else {
// new region starts within the horizontal range of existing region
if (y < _y) {
// new region starts above existing region
if (w > _w) {
// new region overlaps at least top-right of existing region
if (h > _h) {
// new region contains the right of existing region
dirties[_n+2] = x;
continue;
}// else {
// new region overlaps top-right of existing region
dirty(x, y, w, _y, i+1);
dirty(_w, _y, w, h, i+1);
return;
} else {
// new region is horizontally contained within existing region
if (h > _h) {
// new region overlaps above and below the existing region
dirty(x, y, w, _y, i+1);
dirty(x, _h, w, h, i+1);
return;
}// else {
// existing region contains bottom part of new region
h = _y;
continue;
}
} else {
// new region starts within existing region
if (w > _w) {
// new region overlaps at least to the right of existing region
if (h > _h) {
// new region overlaps bottom-right corner of existing region
dirty(x, _h, w, h, i+1);
dirty(_w, y, w, _h, i+1);
return;
}// else {
// existing region contains left part of new region
x = _w;
continue;
} else {
// new region is horizontally contained within existing region
if (h > _h) {
// existing region contains top part of new region
y = _h;
continue;
}// else {
// new region is contained within existing region
return;
}
}
}
}
// region is valid; store it for rendering
_n = numdirties*4;
size(_n);
dirties[_n] = x;
dirties[_n+1] = y;
dirties[_n+2] = w;
dirties[_n+3] = h;
numdirties++;
}
...
}
It sounds like what you need is a bounding box for each shape that you're rendering to the screen. Remember that a bounding box of a polygon can be defined as a "lower left" (the minimum point) and an "upper right" (the maximum point). That is, the x-component of the minimum point is defined as the minimum of all the x-components of each point in a polygon. Use the same methodology for the y-component (in the case of 2D) and the maximal point of the bounding box.
If it's sufficient to have a bounding box (aka "dirty rectangle") per polygon, you're done. If you need an overall composite bounding box, the same algorithm applies, except you can just populate a single box with minimal and maximal points.
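A quick Python sketch of that min/max reduction (polygons here are plain lists of (x, y) tuples):
# Bounding box of a polygon: component-wise min gives the lower-left point,
# component-wise max gives the upper-right point.
def bounding_box(points):
    xs = [x for x, y in points]
    ys = [y for x, y in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

# Composite box over several polygons: the same reduction over all their points.
def composite_box(polygons):
    return bounding_box([p for poly in polygons for p in poly])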
Now, if you're doing all this in Java, you can get the bounding box of an Area (which you can construct from any Shape) directly by using the getBounds2D() method.
What language are you using? In Python, Pygame can do this for you. Use the RenderUpdates Group and some Sprite objects with image and rect attributes.
For example:
#!/usr/bin/env python
import pygame

class DirtyRectSprite(pygame.sprite.Sprite):
    """Sprite with image and rect attributes."""
    def __init__(self, image_file, *groups):
        pygame.sprite.Sprite.__init__(self, *groups)
        self.image = pygame.image.load(image_file).convert()
        self.rect = self.image.get_rect()

    def update(self):
        pass  # move the sprite here; RenderUpdates tracks the dirty rects

def main():
    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    background = pygame.image.load("some_bg_image.png").convert()
    render_group = pygame.sprite.RenderUpdates()
    dirty_rect_sprite = DirtyRectSprite("some_image.png")
    render_group.add(dirty_rect_sprite)
    while True:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                return
        dirty_rect_sprite.update()
        render_group.clear(screen, background)
        # draw() returns the dirty rects; update() repaints only those
        pygame.display.update(render_group.draw(screen))

if __name__ == "__main__":
    main()
If you're not using Python+Pygame, here's what I would do:
Make a Sprite class whose update(), move(), etc. methods set a "dirty" flag.
Keep a rect for each sprite
If your API supports updating a list of rects, use that on the list of rects whose sprites are dirty. In SDL, this is SDL_UpdateRects.
If your API doesn't support updating a list of rects (I've never had the chance to use anything besides SDL so I wouldn't know), test to see if it's quicker to call the blit function multiple times or once with a big rect. I doubt that any API would be faster using one big rect, but again, I haven't used anything besides SDL.
I just recently wrote a Delphi class to calculate the difference rectangles of two images and was quite surprised by how fast it ran - fast enough to run on a short timer, and after mouse/keyboard messages, for recording screen activity.
The step-by-step gist of how it works:
Subdivide the image into a logical grid of 12x12 sub-rectangles.
Loop through each pixel; if there's a difference, tell the sub-rectangle the pixel belongs to that one of its pixels differs, and where.
Each sub-rectangle remembers the co-ordinates of its own left-most, top-most, right-most and bottom-most difference.
Once all the differences have been found, loop through the sub-rectangles that have differences, merge neighbouring ones into bigger rectangles, and use the left-most, top-most, right-most and bottom-most differences of those sub-rectangles to make the actual difference rectangles used (sketched below).
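A rough Python sketch of the same idea (not my Delphi code; the 12x12 cell size and the row-wise merge are just illustrative):
# Difference rectangles of two equal-size images (2D lists of pixel values).
CELL = 12

def diff_rects(img_a, img_b):
    h, w = len(img_a), len(img_a[0])
    rows, cols = (h + CELL - 1) // CELL, (w + CELL - 1) // CELL
    # per-cell difference bounds [left, top, right, bottom]; None = unchanged
    cells = [[None] * cols for _ in range(rows)]
    for y in range(h):
        for x in range(w):
            if img_a[y][x] != img_b[y][x]:
                c = cells[y // CELL][x // CELL]
                if c is None:
                    cells[y // CELL][x // CELL] = [x, y, x, y]
                else:
                    c[0] = min(c[0], x); c[1] = min(c[1], y)
                    c[2] = max(c[2], x); c[3] = max(c[3], y)
    # merge runs of horizontally adjacent dirty cells into wider rectangles
    rects = []
    for row in cells:
        run = None
        for c in row + [None]:
            if c is not None:
                if run is None:
                    run = list(c)
                else:
                    run[0] = min(run[0], c[0]); run[1] = min(run[1], c[1])
                    run[2] = max(run[2], c[2]); run[3] = max(run[3], c[3])
            elif run is not None:
                rects.append(tuple(run))
                run = None
    return rects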
This seems to work quite well for me. If you haven't already implemented your own solution, let me know and I'll email you my code if you like. Also as of now, I'm a new user of StackOverflow so if you appreciate my answer then please vote it up. :)
Look into R-tree and quadtree data structures.
I am writing a spatial shader in Godot to pixelate an object.
Previously, I tried to write outside of an object, but that is only possible in CanvasItem shaders, and now I am going back to 3D shaders due to rendering annoyances (I am unable to selectively hide items without using the culling mask, which, being limited to 20 layers, is not an extensible solution).
My naive approach:
Define a pixel "cell" resolution (i.e. 3x3 real pixels)
For each fragment:
If the entire "cell" of real pixels is within the models draw bounds, color the current pixel as per the lower-left (where the pixel that has coordinates that are the multiple of the cell resolution).
If any pixel of the current "cell" is out of the draw bounds, set alpha to 1 to erase the entire cell.
Pseudo-code for people asking for code of the likely non-existent functionality that I am seeking:
int cell_size = 3;
fragment {
    // check each pixel of the cell to see if all of them fall inside the object
    int erase_pixel = 0;
    for (int y = 0; y < cell_size; y++) {
        for (int x = 0; x < cell_size; x++) {
            // snap FRAGCOORD to the cell's lower-left corner, then test each
            // pixel of the cell with the (likely non-existent) uv_in_model
            vec2 cell_origin = FRAGCOORD.xy - mod(FRAGCOORD.xy, float(cell_size));
            if (uv_in_model(cell_origin + vec2(float(x), float(y))) == false) {
                erase_pixel = 1;
            }
        }
    }
    albedo.a = float(erase_pixel);
}
tl;dr, is it possible to know if any given point will be called by the fragment function?
On your object's material there should be a property called Next Pass. Add a new Spatial Material in this section, open up flags and check transparent and unshaded, and then right-click it to bring up the option to convert it to a Shader Material.
Now, open up the new Shader Material's Shader. The last process should have created a Shader formatted with a fragment() function containing the line vec4 albedo_tex = texture(texture_albedo, base_uv);
In this line, you can replace "texture_albedo" with "SCREEN_TEXTURE" and "base_uv" with "SCREEN_UV". This should make the new shader look like nothing has changed, because the next pass material is just sampling the screen from the last pass.
Above that, make a variable called something along the lines of "pixelated" and set it to the following expression:
vec2 pixelated = floor(SCREEN_UV * scale) / scale; where scale is a float or vec2 containing the pixel size. Finally replace SCREEN_UV in the albedo_tex definition with pixelated.
After this, you can have a float depth which samples DEPTH_TEXTURE with pixelated like this:
float depth = texture(DEPTH_TEXTURE, pixelated).r;
This depth value will be very large for pixels that are just trying to render the background onto your object. So, add a conditional statement:
if (depth > 100000.0f) { ALPHA = 0.0f; }
As long as the flags on this new next pass shader were set correctly (transparent and unshaded) you should have a quick-and-dirty pixelator. I say this because it has some minor artifacts around the edges, but you can make scale a uniform variable and set it from the editor and scripts, so I think it works nicely.
"Testing if a pixel is modifiable" in your case means testing if the object should be rendering it at all with that depth conditional.
Here's the full shader with my modifications from the comments
// NOTE: Shader automatically converted from Godot Engine 3.4.stable's SpatialMaterial.
shader_type spatial;
render_mode blend_mix,depth_draw_opaque,cull_back,unshaded;
//the size of pixelated blocks on the screen relative to pixels
uniform int scale;
void vertex() {
}
//vec2 representation of one used for calculation
const vec2 one = vec2(1.0f, 1.0f);
void fragment() {
//scale SCREEN_UV up to the size of the viewport over the pixelation scale
//assure scale is a multiple of 2 to avoid artefacts
vec2 pixel_scale = VIEWPORT_SIZE / float(scale * 2);
vec2 pixelated = SCREEN_UV * pixel_scale;
//truncate the decimal place from the pixelated uvs and then shift them over by half a pixel
pixelated = pixelated - mod(pixelated, one) + one / 2.0f;
//scale the pixelated uvs back down to the screen
pixelated /= pixel_scale;
vec4 albedo_tex = texture(SCREEN_TEXTURE,pixelated);
ALBEDO = albedo_tex.rgb;
ALPHA = 1.0f;
float depth = texture(DEPTH_TEXTURE, pixelated).r;
if (depth > 10000.0f)
{
ALPHA = 0.0f;
}
}
I want to achieve selection based on the bounding rectangle but with a different approach.
Scenario: I draw objects on top of objects - first text, then a rectangle over it, then an ellipse, and then a triangle. I should be able to select the text, rectangle, or ellipse - or in the reverse order - regardless.
As I start hovering the triangle's bounding rect, the selection or active object should be the triangle, but as I move my mouse over the ellipse's bounding rect, the current object should be shown as the ellipse, and so on, irrespective of the order in which I added the objects to the canvas.
I tried perPixelTargetFind and the solution from "Fabricjs - selection only via border"; neither meets my requirement.
I am using FabricJS version 3.6.3
Thanks in advance.
First you need to set perPixelTargetFind: true and targetFindTolerance:5.
Now you will face an issue with selection.
Issue: if you mouse-down and drag on empty space, an object still gets selected.
Solution: I found a way to fix that. I debugged fabric's mechanism for collecting the objects at the current mouse-pointer location. There is a function _collectObjects which checks intersectsWithRect (does the object intersect the selection's bounding rect), isContainedWithinRect (does the object fall inside the bounding rect), and containsPoint (does the current mouse-pointer location fall inside the object). So you need to override the _collectObjects function and remove the containsPoint check. That will work.
Overridden function:
_collectObjects: function(e) {
var min = Math.min, max = Math.max, // module-level shortcuts in fabric's source
currentObject,
group = [],
x1 = this._groupSelector.ex,
y1 = this._groupSelector.ey,
x2 = x1 + this._groupSelector.left,
y2 = y1 + this._groupSelector.top,
selectionX1Y1 = new fabric.Point(min(x1, x2), min(y1, y2)),
selectionX2Y2 = new fabric.Point(max(x1, x2), max(y1, y2)),
allowIntersect = !this.selectionFullyContained,
isClick = x1 === x2 && y1 === y2;
// we iterate reverse order to collect top first in case of click.
for (var i = this._objects.length; i--; ) {
currentObject = this._objects[i];
if (!currentObject || !currentObject.selectable || !currentObject.visible) {
continue;
}
if ((allowIntersect && currentObject.intersectsWithRect(selectionX1Y1, selectionX2Y2)) ||
currentObject.isContainedWithinRect(selectionX1Y1, selectionX2Y2)) {
group.push(currentObject);
// only add one object if it's a click
if (isClick) {
break;
}
}
}
if (group.length > 1) {
group = group.filter(function(object) {
return !object.onSelect({ e: e });
});
}
return group;
}
I'm trying to draw some rotated text using the CGAffineTransform.MakeRotation method at specific locations. I also make use of TranslateCTM, but something must be wrong, as the rotated texts are not aligned and do not appear at the correct x, y positions. Here is the simple code I'm using; does anyone know where the problem is?
public override void Draw (RectangleF rect)
{
DrawTextRotated("Hello1",10,100,30);
DrawTextRotated("Hello2",50,100,60);
SetNeedsDisplay();
}
static public float DegreesToRadians(float x)
{
return (float) (Math.PI * x / 180.0);
}
public void DrawTextRotated(string text,int x, int y, int rotDegree)
{
CGContext c = UIGraphics.GetCurrentContext();
c.SaveState();
c.TextMatrix = CGAffineTransform.MakeRotation((float)DegreesToRadians((float)(-rotDegree)));
c.ConcatCTM(c.TextMatrix);
float xxx = ((float)Math.Sin(DegreesToRadians((float)rotDegree))*y);
float yyy = ((float)Math.Sin(DegreesToRadians((float)rotDegree))*x);
// Move the context back into the view
c.TranslateCTM(-xxx,yyy);
c.SetTextDrawingMode(CGTextDrawingMode.Fill);
c.SetShouldSmoothFonts(true);
MonoTouch.Foundation.NSString str = new MonoTouch.Foundation.NSString(text);
SizeF strSize = new SizeF();
strSize = str.StringSize(UIFont.SystemFontOfSize(12));
RectangleF tmpR = new RectangleF(x,y,strSize.Width,strSize.Height);
str.DrawString(tmpR,UIFont.SystemFontOfSize(12),UILineBreakMode.WordWrap,UITextAlignment.Right);
c.RestoreState();
}
Thanks !
Here's some code that will draw text rotated properly about the top-left corner of the text. For the moment, I'm disregarding your use of text alignment.
First, a utility method to draw a marker where we expect the text to show up:
public void DrawMarker(float x, float y)
{
float SZ = 20;
CGContext c = UIGraphics.GetCurrentContext();
c.BeginPath();
c.AddLines( new [] { new PointF(x-SZ,y), new PointF(x+SZ,y) });
c.AddLines( new [] { new PointF(x,y-SZ), new PointF(x,y+SZ) });
c.StrokePath();
}
And the code to draw the text (note I've replaced all int rotations with float, and you may want to negate your rotation):
public void DrawTextRotated(string text, float x, float y, float rotDegree)
{
CGContext c = UIGraphics.GetCurrentContext();
c.SaveState();
DrawMarker(x,y);
// Proper rotation about a point
var m = CGAffineTransform.MakeTranslation(-x,-y);
m.Multiply( CGAffineTransform.MakeRotation(DegreesToRadians(rotDegree)));
m.Multiply( CGAffineTransform.MakeTranslation(x,y));
c.ConcatCTM( m );
// Draws text UNDER the point
// "This point represents the top-left corner of the string’s bounding box."
//http://developer.apple.com/library/ios/#documentation/UIKit/Reference/NSString_UIKit_Additions/Reference/Reference.html
NSString ns = new NSString(text);
UIFont font = UIFont.SystemFontOfSize(12);
SizeF sz = ns.StringSize(font);
RectangleF rect = new RectangleF(x,y,sz.Width,sz.Height);
ns.DrawString( rect, font);
c.RestoreState();
}
Rotation about a point requires translation of the point to the origin, followed by rotation, followed by translation back to the original point. CGContext.TextMatrix has no effect on NSString.DrawString, so you can just use ConcatCTM.
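To illustrate the composition, a hedged Python/numpy sketch; note it uses column vectors, so the matrices compose in the reverse of the row-vector order Quartz uses:
import numpy as np

def rotate_about(x, y, theta):
    """3x3 homogeneous transform: translate to origin, rotate, translate back."""
    to_origin = np.array([[1, 0, -x], [0, 1, -y], [0, 0, 1]], dtype=float)
    rotate = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0,              0,             1]], dtype=float)
    back = np.array([[1, 0, x], [0, 1, y], [0, 0, 1]], dtype=float)
    return back @ rotate @ to_origin

# The fixed point maps to itself:
m = rotate_about(100.0, 50.0, np.pi / 6)
assert np.allclose(m @ np.array([100.0, 50.0, 1.0]), [100.0, 50.0, 1.0])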
The alignment and line break modes don't have any effect. Since you're using NSString.StringSize, the bounding rectangle fits the entirety of the text, snug up against the left and right edges. If you make the width of the bounding rectangle wider and use UITextAlignment.Right, you'll get proper right alignment, but the text will still rotate around the top left corner of the entire bounding rectangle. Which is not, I'm guessing, what you're expecting.
If you want the text to rotate around the top right corner, let me know and I'll adjust the code accordingly.
Here's the code I used in my test:
DrawTextRotated("Hello 0",100, 50, 0);
DrawTextRotated("Hello 30",100,100,30);
DrawTextRotated("Hello 60",100,150,60);
DrawTextRotated("Hello 90",100,200,90);
Cheers.
I have a heightmap. I want to efficiently compute which tiles in it are visible from an eye at any given location and height.
This paper suggests that heightmaps outperform turning the terrain into some kind of mesh, but they sample the grid using Bresenham's lines.
If I were to adopt that, I'd have to do a line-of-sight Bresenham's line for each and every tile on the map. It occurs to me that it ought to be possible to reuse most of the calculations and compute the heightmap in a single pass if you fill outwards away from the eye - a scanline fill kind of approach perhaps?
But the logic escapes me. What would the logic be?
Here is a heightmap with the visibility from a particular vantage point (green cube) painted over it ("viewshed", as in "watershed"?):
Here is the O(n) sweep that I came up with. It seems the same as that given in the paper in the answer below (How to compute the visible area based on a heightmap? - Franklin and Ray's method), only in this case I walk from the eye outwards instead of walking the perimeter doing a Bresenham's towards the centre. To my mind, my approach would have much better caching behaviour - i.e. be faster - and use less memory, since it doesn't have to track the vector for each tile, only remember a scanline's worth:
typedef std::vector<float> visbuf_t;
inline void map::_visibility_scan(const visbuf_t& in,visbuf_t& out,const vec_t& eye,int start_x,int stop_x,int y,int prev_y) {
const int xdir = (start_x < stop_x)? 1: -1;
for(int x=start_x; x!=stop_x; x+=xdir) {
const int x_diff = abs(eye.x-x), y_diff = abs(eye.z-y);
const bool horiz = (x_diff >= y_diff);
const int x_step = horiz? 1: x_diff/y_diff;
const int in_x = x-x_step*xdir; // where in the in buffer would we get the inner value?
const float outer_d = vec2_t(x,y).distance(vec2_t(eye.x,eye.z));
const float inner_d = vec2_t(in_x,horiz? y: prev_y).distance(vec2_t(eye.x,eye.z));
const float inner = (horiz? out: in).at(in_x)*(outer_d/inner_d); // get the inner value, scaling by distance
const float outer = height_at(x,y)-eye.y; // height we are at right now in the map, eye-relative
if(inner <= outer) {
out.at(x) = outer;
vis.at(y*width+x) = VISIBLE;
} else {
out.at(x) = inner;
vis.at(y*width+x) = NOT_VISIBLE;
}
}
}
void map::visibility_add(const vec_t& eye) {
const float BASE = -10000; // represents a downward vector that would always be visible
visbuf_t scan_0, scan_out, scan_in;
scan_0.resize(width);
vis[eye.z*width+eye.x-1] = vis[eye.z*width+eye.x] = vis[eye.z*width+eye.x+1] = VISIBLE;
scan_0.at(eye.x) = BASE;
scan_0.at(eye.x-1) = BASE;
scan_0.at(eye.x+1) = BASE;
_visibility_scan(scan_0,scan_0,eye,eye.x+2,width,eye.z,eye.z);
_visibility_scan(scan_0,scan_0,eye,eye.x-2,-1,eye.z,eye.z);
scan_out = scan_0;
for(int y=eye.z+1; y<height; y++) {
scan_in = scan_out;
_visibility_scan(scan_in,scan_out,eye,eye.x,-1,y,y-1);
_visibility_scan(scan_in,scan_out,eye,eye.x,width,y,y-1);
}
scan_out = scan_0;
for(int y=eye.z-1; y>=0; y--) {
scan_in = scan_out;
_visibility_scan(scan_in,scan_out,eye,eye.x,-1,y,y+1);
_visibility_scan(scan_in,scan_out,eye,eye.x,width,y,y+1);
}
}
Is it a valid approach?
It uses centre-points rather than looking at the slope between the 'inner' pixel and its neighbour on the side that the line of sight passes.
Could the trig used to scale the vectors be replaced by factor multiplication?
It could use an array of bytes, since the heights are themselves bytes.
It's not a radial sweep; it does a whole scanline at a time, moving away from the point, and it uses only a couple of scanlines' worth of additional memory, which is neat.
If it works, you could imagine distributing it nicely using a radial sweep of blocks: you have to compute the centre-most block first, but then you can distribute all immediately adjacent blocks from that (they just need to be given the edge-most intermediate values), and then in turn gain more and more parallelism.
So how to most efficiently calculate this viewshed?
What you want is called a sweep algorithm. Basically you cast rays (Bresenham's) to each of the perimeter cells, but keep track of the horizon as you go and mark any cells you pass on the way as being visible or invisible (and update the ray's horizon if visible). This gets you down from the O(n^3) of the naive approach (testing each cell of an nxn DEM individually) to O(n^2).
There is a more detailed description of the algorithm in section 5.1 of this paper (which you might also find interesting for other reasons if you aspire to work with really enormous heightmaps).
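A hedged Python sketch of the per-ray horizon test (the height grid, visible array, and eye parameters are assumed inputs; a real implementation would also share work between neighbouring rays rather than walking every ray independently):
def bresenham(x0, y0, x1, y1):
    """Yield the grid cells from (x0,y0) to (x1,y1), excluding the start cell."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
        yield x, y

def mark_ray(height, visible, eye_x, eye_y, eye_h, px, py):
    """Walk one ray from the eye to perimeter cell (px,py), tracking the horizon."""
    horizon = float("-inf")  # steepest slope seen so far along this ray
    for x, y in bresenham(eye_x, eye_y, px, py):
        dist = ((x - eye_x) ** 2 + (y - eye_y) ** 2) ** 0.5
        slope = (height[y][x] - eye_h) / dist
        if slope >= horizon:
            visible[y][x] = True   # cell pokes above the horizon
            horizon = slope        # and raises it for the cells behind
        else:
            visible[y][x] = False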
I'm using the ShowTextAtPoint method of CGContext to display text in a view, but it is displayed flipped (mirrored vertically). Does anyone know how to solve this problem?
Here is the code I use:
ctx.SelectFont("Arial", 16f, CGTextEncoding.MacRoman);
ctx.SetRGBFillColor(0f, 0f, 1f, 1f);
ctx.SetTextDrawingMode(CGTextDrawingMode.Fill);
ctx.ShowTextAtPoint(centerX, centerY, text);
You can manipulate the current transformation matrix on the graphics context to flip it using ScaleCTM and TranslateCTM.
According to the Quartz 2D Programming Guide - Text:
In iOS, you must apply a transform to the current graphics context in order for the text to be oriented as shown in Figure 16-1. This transform inverts the y-axis and translates the origin point to the bottom of the screen. Listing 16-2 shows you how to apply such transformations in the drawRect: method of an iOS view. This method then calls the same MyDrawText method from Listing 16-1 to achieve the same results.
The way this looks in MonoTouch:
public void DrawText(string text, float x, float y)
{
// the incoming coordinates have their origin at the top left
y = Bounds.Height-y;
// push context
CGContext c = UIGraphics.GetCurrentContext();
c.SaveState();
// This technique requires inversion of the screen coordinates
// for ShowTextAtPoint
c.TranslateCTM(0, Bounds.Height);
c.ScaleCTM(1,-1);
// for debug purposes, draw crosshairs at the proper location
DrawMarker(x,y);
// Set the font drawing parameters
c.SelectFont("Helvetica-Bold", 12.0f, CGTextEncoding.MacRoman);
c.SetTextDrawingMode(CGTextDrawingMode.Fill);
c.SetFillColor(1,1,1,1);
// Draw the text
c.ShowTextAtPoint( x, y, text );
// Restore context
c.RestoreState();
}
A small utility function to draw crosshairs at the desired point:
public void DrawMarker(float x, float y)
{
float SZ = 20;
CGContext c = UIGraphics.GetCurrentContext();
c.BeginPath();
c.AddLines( new [] { new PointF(x-SZ,y), new PointF(x+SZ,y) });
c.AddLines( new [] { new PointF(x,y-SZ), new PointF(x,y+SZ) });
c.StrokePath();
}