How to generate random points around the curves of characters using Processing?

I would like to generate random/noise points along each character of multi-line text. I've tried this with the Geomerative library, but unfortunately it does not support multi-line text. Any other solution?

You could find a library that exposes the path points of the text, or, if you simply want to add points, you could take a 2D snapshot of the rendered text (either using get() or a PGraphics) and work with its pixels. Here's a minimal example.
PImage snapshot;
int randomSize = 3;

void setup(){
  // render some text
  background(255);
  fill(0);
  textSize(40);
  text("Hello", 0, 50);
  // grab a snapshot
  snapshot = get();
}

void draw(){
  int rx = (int)random(snapshot.width);  // pick a random pixel location
  int ry = (int)random(snapshot.height); // you can pick only the areas that have text, or the whole image, but with a bit of hit-and-miss randomness
  // check if it's the same colour as the text; if so, pick a random neighbour and also paint it black
  if(snapshot.get(rx, ry) == color(0)){
    snapshot.set(rx + (int)random(-randomSize, randomSize), ry + (int)random(-randomSize, randomSize), color(0));
  }
  image(snapshot, 0, 0);
}
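If you want every point to land on a glyph rather than discarding misses, you can collect the text pixels up front. Below is a rough sketch of that idea (not part of the original answer; the string, sizes and brightness threshold are placeholder choices). Since text() understands "\n", it also handles multi-line strings without Geomerative.

// Illustrative sketch: collect glyph pixels once in setup(), then jitter points around them
PImage snapshot;
ArrayList<PVector> glyphPixels = new ArrayList<PVector>();
int jitter = 3;

void setup(){
  size(400, 200);
  background(255);
  fill(0);
  textSize(40);
  text("Hello\nworld", 10, 50); // multi-line text works out of the box
  snapshot = get();
  // remember every pixel that belongs to a glyph
  snapshot.loadPixels();
  for(int y = 0; y < snapshot.height; y++){
    for(int x = 0; x < snapshot.width; x++){
      if(brightness(snapshot.pixels[y * snapshot.width + x]) < 128){
        glyphPixels.add(new PVector(x, y));
      }
    }
  }
}

void draw(){
  // pick a known glyph pixel and draw a jittered point near it
  if(!glyphPixels.isEmpty()){
    PVector p = glyphPixels.get((int)random(glyphPixels.size()));
    point(p.x + random(-jitter, jitter), p.y + random(-jitter, jitter));
  }
}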

Related

LWJGL Fullscreen while keeping aspect ratio?

I want to have a fullscreen mode that keeps the aspect ratio by adding black bars on either side. I tried just creating a display mode, but I can't make it fullscreen unless it's a pre-approved resolution, and when I use a bigger display than the native resolution the pixels become messed up, and lines appear between all of the tiles in the game for some reason.
I think I need to use FBOs to render the scene to a texture instead of the window, and then just use a fullscreen-approved resolution and render the texture properly stretched out in the center of the screen, but I just don't understand how to render to a texture in order to do that, or how to stretch an image. Could someone please help me?
EDIT
I got fullscreen working, but it makes everything look broken: there are random lines on the edges of anything that's drawn to the window. There are no glitchy lines when it's in native resolution, though. Here's my code:
Display.setTitle("Mega Man");
try{
    Display.setDisplayMode(Display.getDesktopDisplayMode());
    Display.create();
}catch(LWJGLException e){
    e.printStackTrace();
}
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,WIDTH,HEIGHT,0,1,-1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
try{Display.setFullscreen(true);}catch(Exception e){}
int sh=Display.getHeight();
int sw=WIDTH*sh/HEIGHT;
GL11.glViewport(Display.getWidth()/2-sw/2, 0, sw, sh);
Screenshot of the glitchy fullscreen here: http://sta.sh/021fohgnmxwa
EDIT
Here is the texture rendering code that I use to draw everything:
public static void DrawQuadTex(Texture tex, int x, int y, float width, float height, float texWidth, float texHeight, float subx, float suby, float subd, String mirror){
    if (tex==null){return;}
    if (mirror==null){mirror = "";}
    // subx, suby, and subd are used to grab sprites from a sprite sheet. subd is both the width and the height of the sprite,
    // since only square images whose dimensions are powers of 2 are displayed properly.
    int xinner = 0;
    int xouter = (int) width;
    int yinner = 0;
    int youter = (int) height;
    if (mirror.indexOf("h")>-1){
        xinner = xouter;
        xouter = 0;
    }
    if (mirror.indexOf("v")>-1){
        yinner = youter;
        youter = 0;
    }
    tex.bind();
    glTranslatef(x,y,0);
    glBegin(GL_QUADS);
    glTexCoord2f(subx/texWidth,suby/texHeight);
    glVertex2f(xinner,yinner);
    glTexCoord2f((subx+subd)/texWidth,suby/texHeight);
    glVertex2f(xouter,yinner);
    glTexCoord2f((subx+subd)/texWidth,(suby+subd)/texHeight);
    glVertex2f(xouter,youter);
    glTexCoord2f(subx/texWidth,(suby+subd)/texHeight);
    glVertex2f(xinner,youter);
    glEnd();
    glLoadIdentity();
}
Just to keep things clean, I'll give you a real answer rather than just a comment.
The aspect-ratio problem can be solved with glViewport. This call lets you decide which area of the surface will be rendered to; the default viewport covers the whole surface.
Since the second problem, the corrupted rendering (also described at https://stackoverflow.com/questions/28846531/sprite-game-in-full-screen-aliasing-issue), appeared after changing the viewport, I will give my thoughts on it in this answer as well.
Without knowing exactly how the rendering code for the tile background looks, I would guess that the problem is caused by a difference in resolution between the glViewport and glOrtho calls.
Example: if the glOrtho resolution is half the viewport resolution, then each OpenGL unit is actually 2 pixels. If you then render a tile between x=0 and x=9 and the next one between x=10 and x=19, you will get an empty space between them.
To solve this you can change the resolutions so that they match, or you can render the tiles so that they overlap: the first one from x=0 to x=10, the second from x=10 to x=20, and so on.
Without seeing the tile rendering code I can't verify that this is the problem, though.
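For illustration, here is a rough sketch of a letterboxed viewport whose aspect ratio always matches the WIDTH x HEIGHT virtual resolution passed to glOrtho, so the mapping from OpenGL units to pixels stays uniform. It assumes the WIDTH and HEIGHT constants from the question and LWJGL 2's Display class; it is only a sketch, not the asker's code.

// Hypothetical helper (not the asker's code): centre a viewport with the game's aspect ratio inside the real display.
static void applyLetterboxViewport() {
    int dw = Display.getWidth();
    int dh = Display.getHeight();
    // scale the virtual WIDTH x HEIGHT resolution as large as it fits on the display
    float scale = Math.min(dw / (float) WIDTH, dh / (float) HEIGHT);
    int vw = Math.round(WIDTH * scale);
    int vh = Math.round(HEIGHT * scale);
    // centre the viewport; the uncovered area remains as black bars
    GL11.glViewport((dw - vw) / 2, (dh - vh) / 2, vw, vh);
}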

unity3d: Use main camera's depth buffer for rendering another camera view

After my main camera renders, I'd like to use (or copy) its depth buffer to a (disabled) camera's depth buffer.
My goal is to draw particles onto a smaller render target (using a separate camera) while using the depth buffer after opaque objects are drawn.
I can't do this in a single camera, since the goal is to use a smaller render target for the particles for performance reasons.
Replacement shaders in Unity aren't an option either: I want my particles to use their existing shaders - I just want the depth buffer of the particle camera to be overwritten with a subsampled version of the main camera's depth buffer before the particles are drawn.
I didn't get any reply to my earlier question; hence, the repost.
Here's the script attached to my main camera. It renders all the non-particle layers and I use OnRenderImage to invoke the particle camera.
public class MagicRenderer : MonoBehaviour {
    public Shader particleShader; // shader that uses the main camera's depth buffer to depth test particle Z
    public Material blendMat;     // material that uses a simple blend shader
    public int downSampleFactor = 1;

    private RenderTexture particleRT;
    private static GameObject pCam;

    void Awake () {
        // make the main camera's depth buffer available to the shaders via _CameraDepthTexture
        camera.depthTextureMode = DepthTextureMode.Depth;
    }

    // Update is called once per frame
    void Update () {
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest) {
        // create tmp RT
        particleRT = RenderTexture.GetTemporary (Screen.width / downSampleFactor, Screen.height / downSampleFactor, 0);
        particleRT.antiAliasing = 1;
        // create particle cam
        Camera pCam = GetPCam ();
        pCam.CopyFrom (camera);
        pCam.clearFlags = CameraClearFlags.SolidColor;
        pCam.backgroundColor = new Color (0.0f, 0.0f, 0.0f, 0.0f);
        pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
        pCam.useOcclusionCulling = false;
        pCam.targetTexture = particleRT;
        pCam.depth = 0;
        // Draw to particleRT's colorBuffer using mainCam's depth buffer
        // ?? - how do I transfer this camera's depth buffer to pCam?
        pCam.Render ();
        // pCam.RenderWithShader (particleShader, "Transparent"); // I don't want to replace the shaders my particles use, so shader replacement isn't an option.
        // blend mainCam's colorBuffer with particleRT's colorBuffer
        // Graphics.Blit(pCam.targetTexture, src, blendMat);
        // copy resulting buffer to destination
        Graphics.Blit (pCam.targetTexture, dest);
        // clean up
        RenderTexture.ReleaseTemporary(particleRT);
    }

    static public Camera GetPCam() {
        if (!pCam) {
            GameObject oldpcam = GameObject.Find("pCam");
            Debug.Log (oldpcam);
            if (oldpcam) Destroy(oldpcam);
            pCam = new GameObject("pCam");
            pCam.AddComponent<Camera>();
            pCam.camera.enabled = false;
            pCam.hideFlags = HideFlags.DontSave;
        }
        return pCam.camera;
    }
}
I have a few additional questions:
1) Why does camera.depthTextureMode = DepthTextureMode.Depth; end up drawing all the objects in the scene just to write to the Z-buffer? Using Intel GPA, I see two passes before OnRenderImage gets called:
(i) Z-PrePass, that only writes to the depth buffer
(ii) Color pass, that writes to both the color and depth buffer.
2) I re-rendered the opaque objects to pCam's RT using a replacement shader that writes (0,0,0,0) to the colorBuffer with ZWrite On (to overcome the depth-buffer transfer problem). After that, I reset the culling mask and clear flags as follows:
pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
pCam.clearFlags = CameraClearFlags.Nothing;
and rendered them using pCam.Render().
I thought this would render the particles using their existing shaders with the ZTest.
Unfortunately, what I notice is that the depth-stencil buffer is cleared before the particles are drawn (despite my not clearing anything...).
Why does this happen?
It's been 5 years, but I developed an almost complete solution for rendering particles in a smaller, separate render target. I'm writing this down for future visitors. A fair amount of background knowledge is still required.
Copying the depth
First, you have to get the scene depth in the resolution of your smaller render texture.
This can be done by creating a new render texture with the color format "depth".
To write the scene depth to the low resolution depth, create a shader that just outputs the depth:
struct fragOut {
    float depth : DEPTH;
};

sampler2D _LastCameraDepthTexture;

fragOut frag (v2f i){
    fragOut tOut;
    tOut.depth = tex2D(_LastCameraDepthTexture, i.uv).x;
    return tOut;
}
_LastCameraDepthTexture is automatically filled by Unity, but there is a downside.
It only comes for free if the main camera renders with deferred rendering.
For forward shading, Unity seems to render the scene again just for the depth texture.
Check the frame debugger.
Then, add a post processing effect to the main camera that executes the shader:
protected virtual void OnRenderImage(RenderTexture pFrom, RenderTexture pTo) {
    Graphics.Blit(pFrom, mSmallerSceneDepthTexture, mRenderToDepthMaterial);
    Graphics.Blit(pFrom, pTo);
}
You can probably do this without the second blit, but it was easier for me for testing.
Using the copied depth for rendering
To use the new depth texture for your second camera, call
mSecondCamera.SetTargetBuffers(mParticleRenderTexture.colorBuffer, mSmallerSceneDepthTexture.depthBuffer);
Keep targetTexture empty.
You then must ensure the second camera does not clear the depth, only the color.
For this, disable clearing on the second camera completely and clear manually like this:
Graphics.SetRenderTarget(mParticleRenderTexture);
GL.Clear(false, true, Color.clear);
I recommend also rendering the second camera by hand. Disable it and call
mSecondCamera.Render();
after clearing.
Merging
Now you have to merge the main view and the separate layer.
Depending on your rendering, you will probably end up with a render texture with so-called premultiplied alpha.
To mix this with the rest, use a post-processing step on the main camera with
fixed4 tBasis = tex2D(_MainTex, i.uv);
fixed4 tToInsert = tex2D(TransparentFX, i.uv);
// beware premultiplied alpha in the insert
tBasis.rgb = tBasis.rgb * (1.0f - tToInsert.a) + tToInsert.rgb;
return tBasis;
Additive materials work out of the box, but alpha-blended ones do not.
You have to create a shader with custom blending to get working alpha-blended materials. The blending is
Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha
This changes how the alpha channel is modified by each blend operation.
Results
(Result screenshots omitted: additive blending in front of alpha blending, and alpha blending in front of additive blending, each showing the fx layer's rgb and alpha separately.)
I have not yet tested whether the performance actually increases.
If anyone has a simpler solution, let me know please.
I managed to reuse the camera's Z-buffer "manually" in the shader used for rendering. See http://forum.unity3d.com/threads/reuse-depth-buffer-of-main-camera.280460/ for more.
Just alter the particle shader you already use for particle rendering.

Compositing 2 images in J2ME

I intend to fit an image centered horizontally and vertically inside a J2ME Form. However, I couldn't find useful markup elements to do so, so I intend to create one totally transparent image the size of the form element, superimpose my intended image on it centered, and place the resulting image in the form (without using a Canvas). I am looking for ways of doing this because my knowledge of J2ME is limited.
Any help, please?
public static Image CreateCompositeImage(Image oImage, int formWidth, int formHeight){
    final int imageWidth = oImage.getWidth();
    final int imageHeight = oImage.getHeight();
    int[] imge = new int[imageWidth * imageHeight];
    oImage.getRGB(imge, 0, imageWidth, 0, 0, imageWidth, imageHeight);
    final int topMargin = (formHeight - imageHeight) / 2;
    final int leftMargin = (formWidth - imageWidth) / 2;
    final int pixelTop = topMargin * formWidth;
    int[] c = new int[formWidth * formHeight];
    int p = 0, r = 0;
    // top margin: currently opaque black (use 0x00000000 for full transparency)
    for (int i = 0; i < pixelTop; i++){
        c[p++] = 0xff000000;
    }
    for (int j = 0; j < imageHeight; j++){
        // left margin: semi-transparent blue
        for (int i = 0; i < leftMargin; i++){
            c[p++] = 0x880000ff;
        }
        // the image row itself
        for (int i = 0; i < imageWidth; i++){
            c[p++] = imge[r++];
        }
        // right margin: semi-transparent green
        for (int i = 0; i < leftMargin; i++){
            c[p++] = 0x8800ff00;
        }
    }
    // bottom margin: opaque white
    int pixelBottom = formWidth * formHeight - p;
    for (int i = 0; i < pixelBottom; i++){
        c[p++] = 0xffffffff;
    }
    return Image.createRGBImage(c, formWidth, formHeight, true);
}
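If you do stay with the composite-image approach, the margins need a fully transparent pixel value (alpha 0) rather than the opaque or semi-transparent colours above. Here is a minimal sketch of that variant; the method name is made up, and it assumes, as above, that the image is no larger than the form area. Whether the Form background actually shows through the transparent pixels depends on the device's MIDP implementation.

// Illustrative sketch (not from the question): fully transparent margins around a centred image
public static Image createCenteredImage(Image src, int formWidth, int formHeight) {
    // int arrays are zero-initialised, and 0x00000000 is a fully transparent pixel
    int[] out = new int[formWidth * formHeight];
    int[] img = new int[src.getWidth() * src.getHeight()];
    src.getRGB(img, 0, src.getWidth(), 0, 0, src.getWidth(), src.getHeight());
    int left = (formWidth - src.getWidth()) / 2;
    int top = (formHeight - src.getHeight()) / 2;
    // copy the source image row by row into the centre of the output buffer
    for (int y = 0; y < src.getHeight(); y++) {
        System.arraycopy(img, y * src.getWidth(),
                         out, (top + y) * formWidth + left, src.getWidth());
    }
    return Image.createRGBImage(out, formWidth, formHeight, true); // true = keep alpha
}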
A better approach is to create a new class that inherits from CustomItem, or to use a Canvas instead of the Form.
In both cases you override the paint() method.
There you get a Graphics object, and you use this object to do your drawing.
In particular, it has a drawImage() method where you can just pass in the position, so you don't need any pixel-data manipulation.
Overriding CustomItem or Canvas is something you do often in Java ME programming, so it is worth learning.
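As a rough illustration of that suggestion (the class name is invented, not from the answer), a Canvas that centres an image with Graphics.drawImage() and anchor constants could look like this:

// Illustrative Canvas: centre an image using drawImage() anchors
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.Image;

class CenteredImageCanvas extends Canvas {
    private final Image image;

    CenteredImageCanvas(Image image) {
        this.image = image;
    }

    protected void paint(Graphics g) {
        // clear the background, then draw the image anchored at the centre of the screen
        g.setColor(0xFFFFFF);
        g.fillRect(0, 0, getWidth(), getHeight());
        g.drawImage(image, getWidth() / 2, getHeight() / 2,
                    Graphics.HCENTER | Graphics.VCENTER);
    }
}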

Convert String Data to Binary Image

I am using Qt and I am new to it. I am getting a stream of string data from a server on a particular port.
I am receiving 1s and 0s; each time, I receive one line like this:
1111110001111111111111111111100000000000011111111111
After receiving n lines I need to create a binary image file from the data: 1 for white and 0 for black.
How do I do this? I have already implemented receiving the data, but I have no idea how to convert it to an image.
Please help me find a solution to this problem.
You must know the dimensions of your image (for example NxM).
Based on those dimensions, you must parse the string you received (think about how to write the right loop to turn a 1D array of NxM elements into an NxM 2D array).
For holding your image data you can use the QImage class. Create a QImage object, passing the width and height to the constructor, and use its methods to fill the image. To set the colour of a pixel, you can use QImage's method setPixel(int x, int y, uint index_or_rgb).
That's all. Good luck!
You might try doing it this way:
QImage Image(500, 500, QImage::Format_Indexed8);
for(int i = 0; i < 500 /*image_width*/; i++)
{
    for(int j = 0; j < 500 /*image_height*/; j++)
    {
        QRgb value;
        if(data[j * 500 + i] == 0) /*the data array should contain all the information, stored row by row*/
        {
            value = qRgb(0, 0, 0);
            Image.setPixel(i, j, qGray(value));
        }
        else
        {
            value = qRgb(255, 255, 255);
            Image.setPixel(i, j, qGray(value));
        }
    }
}
From Qt docs:
"Because QImage is a QPaintDevice subclass, QPainter can be used to draw directly onto images."
So, you can create a QImage sized 500x500
QImage image = QImage(500, 500, QImage::Format_RGB32); // pick a format, e.g. RGB32
and then draw on this image
QPainter p(&image);
p.drawPoint(0,0);
p.drawPoint(0,1);
etc;
Another way is to save your bit stream into a uchar array and simply create a QImage with the format Format_Mono or Format_MonoLSB.
QImage image = QImage(bitData, 500, 500, QImage::Format_Mono);
Thanks for the help, I created the image. Here is my code:
QImage testClass::GetImage(QString rdata, int iw, int ih)
{
    QImage image(iw, ih, QImage::Format_ARGB32);
    for(int i = 0; i < ih; i++)
    {
        for(int j = 0; j < iw; j++)
        {
            if(rdata.at((i * iw) + j) == '0')
                image.setPixel(QPoint(j, i), qRgb(0, 0, 0));
            else
                image.setPixel(QPoint(j, i), qRgb(255, 255, 255));
        }
    }
    return image;
}

HLSL - Global variables are unchanged in Geometry Shader (DirectX11)

My program receives input consisting of line segments and expands the lines into cylinder-like objects (like the PipeGS project in the DX SDK sample browser).
I added an array of radius-scaling parameters for the pipes and modify them procedurally, but the radii of the pipes just don't change.
I'm pretty sure the scaling parameters are updated every frame, because I output them as the pixel colour: when I modify them, the pipes change colour while their radii remain unchanged.
So I am wondering whether there is some limitation on using global variables in a GS; I couldn't find anything about it on the internet (or maybe I just used the wrong keywords).
The shader code looks like this:
cbuffer {
    .....
    float scaleParam[10];
    .....
}

// Pass 1
VS_1 { // pass through }

// Tessellation stages
// Hull shader, domain shader and patch constant function

GS_1 {
    pipeRadius = MaxRadius * scaleParam[PipeID];
    ....
    // calculate pipe positions based on line segments and pipeRadius
    ....
    OutputStream.Append ( ... );
}

// Pixel shader is disabled in the first pass

// Pass 2
VS_2 { // pass through }

// Tessellation stages
// Hull shader, domain shader and patch constant function
// Transform the vertices and normals to world coordinates in DS
// No geometry shader in the second pass

PS_2
{
    return float4( scaleParam[0], scaleParam[1], scaleParam[2], 0.0f );
}
Edit:
I've narrowed down the problem.
There are two passes in my program: in the first pass I calculate the line-segment expansion in the geometry shader and stream the result out.
In the second pass, the program receives the pipe positions from the first pass, tessellates the pipes and applies displacement mapping to them so they can be more detailed.
I can change the surface tessellation factor and pixel colour, which are used in the second pass, and see the result on screen immediately.
When I modify scaleParam, the pipes change colour while their radii remain unchanged. That means I did change scaleParam and pass it into the shader correctly, but something is wrong in the first pass.
Second edit:
I modified the shader code above and am posting some code from the cpp file here.
In the cpp file:
void DrawScene()
{
    // Update view matrix, TessFactor, scaleParam etc.
    ....
    ....
    // Bind stream-output buffer
    ID3D11Buffer* bufferArray[1] = {mStreamOutBuffer};
    md3dImmediateContext->SOSetTargets(1, bufferArray, 0);
    // Two-pass rendering
    D3DX11_TECHNIQUE_DESC techDesc;
    mTech->GetDesc( &techDesc );
    for(UINT p = 0; p < techDesc.Passes; ++p)
    {
        mTech->GetPassByIndex(p)->Apply(0, md3dImmediateContext);
        // First pass
        if (p==0)
        {
            md3dImmediateContext->IASetVertexBuffers(0, 1, &mVertexBuffer, &stride, &offset);
            md3dImmediateContext->Draw(mVertexCount,0);
            // Unbind stream-output buffer
            bufferArray[0] = NULL;
            md3dImmediateContext->SOSetTargets( 1, bufferArray, 0 );
        }
        // Second pass
        else
        {
            md3dImmediateContext->IASetVertexBuffers(0, 1, &mStreamOutBuffer, &stride, &offset);
            md3dImmediateContext->DrawAuto();
        }
    }
    HR(mSwapChain->Present(0, 0));
}
Check whether you are using a float4 position: the w value of the vector acts as a scale on the final position in the scene. For example:
float4 pos0 = float4(5, 5, 5, 1);
// is equivalent to:
float4 pos1 = float4(10, 10, 10, 2);
To scale a position correctly you must change only the .xyz values of the position vector.
I solved this problem by rebuilding the vertex buffer and stream-output buffer every time after I modified the parameters, but I still don't know exactly what causes the problem.
