I have my window
WINDOW *win = newwin(40, 40, 3, 3);
When text is entered and spans multiple lines, what is the best way to preserve neat whitespace (padding) around the inner borders of the window? I cannot seem to find a way to give a window this kind of property in NCurses.
I guess one way to make padding is to create another window inside this one, but there must be a cleaner way.
William McBrine is absolutely right: the simplest way to retain a box around a window is to create the box in a window which surrounds it. That is because
changes to the inner window have no effect on the surrounding box, and
there is a function, box(), which draws a box along the edges of a given window.
Several of the ncurses test programs use this feature. For instance, one of the menu entries in the main test program (in ncurses.c) responds to a w by creating a window to hold the box, then a window to hold its contents; it draws a box in the former before continuing to accept input in the new inner window:
} else if (c == 'w') {
    int high = getmaxy(win) - 1 - first_y + 1;
    int wide = getmaxx(win) - first_x;
    int old_y, old_x;
    int new_y = first_y + getbegy(win);
    int new_x = first_x + getbegx(win);

    getyx(win, old_y, old_x);
    if (high > 2 && wide > 2) {
        WINDOW *wb = newwin(high, wide, new_y, new_x);
        WINDOW *wi = newwin(high - 2, wide - 2, new_y + 1, new_x + 1);

        box(wb, 0, 0);
        wrefresh(wb);
        wmove(wi, 0, 0);
        remember_boxes(level, wi, wb);
        wgetch_test(level + 1, wi, delay);
        delwin(wi);
        delwin(wb);

        wgetch_help(win, flags);
        wmove(win, old_y, old_x);
        touchwin(win);
        wrefresh(win);
        doupdate();
    }
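Applied to the window in the question, here is a minimal sketch of the same idea: an outer window owns the box, and an inner window (one cell smaller on every side) holds the text. The dimensions follow the newwin(40, 40, 3, 3) call above; error checking is omitted.

#include <curses.h>

int main(void)
{
    initscr();
    cbreak();
    noecho();

    /* The outer window only holds the box... */
    WINDOW *outer = newwin(40, 40, 3, 3);
    /* ...the inner window holds the text, one cell in from every edge. */
    WINDOW *inner = newwin(38, 38, 4, 4);

    box(outer, 0, 0);
    wrefresh(outer);

    /* Text written to the inner window wraps and scrolls without ever
       touching the box drawn in the outer window. */
    scrollok(inner, TRUE);
    wprintw(inner, "Text entered here stays inside the border.");
    wrefresh(inner);

    wgetch(inner);

    delwin(inner);
    delwin(outer);
    endwin();
    return 0;
}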
I am trying to create a window with custom coloring. I can see how to change the background color of the window when using something like FL_BORDER_BOX (how to change the background color of Fl_Window by pressing Fl_Button), but I cannot find out how to change the border color from black. Any help would be appreciated!
Thanks!
This is using C/C++ and FLTK btw.
Instead of using FL_BORDER_BOX, use FL_BORDER_FRAME. The foreground colour of the frame can be changed.
Fl_Box *changeling = new Fl_Box(10, 10, 100, 20);
changeling->box(FL_BORDER_FRAME);
changeling->color(FL_RED);
A list of the box types can be found at http://www.fltk.org/doc-1.1/common.html under Box Types.
EDIT
If you wish to have a different colour inside, then draw two boxes:
int x = 10, y = 10, w = 180, h = 100;
Fl_Box box(x, y, w, h);
box.box(FL_BORDER_FRAME);
box.color(FL_BLUE, FL_RED);
Fl_Box inner(x + 1, y + 1, w - 2, h - 2);
inner.box(FL_FLAT_BOX);
inner.color(FL_YELLOW);
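For completeness, here is a minimal, self-contained sketch of the two-box approach, placed inside an Fl_Window so it can be compiled and run as-is. The window size, title, and colours are arbitrary choices, not from the original snippet.

#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/Fl_Box.H>

int main(int argc, char **argv) {
    Fl_Window win(220, 140, "Border colour demo");

    int x = 10, y = 10, w = 180, h = 100;

    // Outer widget: FL_BORDER_FRAME draws only the frame, in the widget's colour.
    Fl_Box frame(x, y, w, h);
    frame.box(FL_BORDER_FRAME);
    frame.color(FL_RED);

    // Inner widget: a flat fill one pixel inside the frame.
    Fl_Box inner(x + 1, y + 1, w - 2, h - 2);
    inner.box(FL_FLAT_BOX);
    inner.color(FL_YELLOW);

    win.end();
    win.show(argc, argv);
    return Fl::run();
}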
I am trying to simulate a mouse click on the CView window in legacy code which, I must say, I don't fully understand. The idea is to search for a particular item in the CView, get its co-ordinates, and then simulate a right mouse click on it using SendInput. I want to check whether the basic steps I am following are correct before I dig further into the legacy code, which has a bunch of transformations happening across co-ordinate systems :( Here are the steps I follow:
Get the position co-ordinates of the item displayed in the CView. At this point the co-ordinates are in the internal co-ordinate system (let's call it CDPoint).
CDPoint gPosn = viewObj->m_point_a ;
Convert the co-ordinates to the client co-ordinate system, i.e. CDPoint to CPoint, using the existing transformations in the code.
CPoint newPosn = GetTransform().Scale(gPosn);
// Note: The reason I believe this is the correct transformation to use is that the mouse-click handler applies the exact reverse transform to convert CPoint to CDPoint:
void CDesignView::OnLButtonDown(UINT nFlags, CPoint p) {
    CDPoint np = GetTransform().DeScale(p);
}
Is my thinking right that the CPoint received in the OnLButtonDown() handler is always in client co-ordinates, and hence that the reverse transform should convert CDPoint (internal co-ordinates) to client co-ordinates (CPoint)?
Convert client co-ordinates to screen co-ordinates:
ClientToScreen(&newPosn);
Pass these values to the SendInput function after converting them to normalized absolute co-ordinates:
INPUT buffer[1];
MouseSetup(buffer);
MouseMoveAbsolute(buffer, newPosn.x, newPosn.y);
MouseClick(buffer);
The MouseXxx() functions are defined below, similar to the sample code in this post:
How to simulate a mouse movement.
#define SCREEN_WIDTH (::GetSystemMetrics( SM_CXSCREEN )-1)
#define SCREEN_HEIGHT (::GetSystemMetrics( SM_CYSCREEN )-1)
static void inline makeAbsXY(double &x, double &y) {
    x = (x * 0xFFFF) / SCREEN_WIDTH;
    y = (y * 0xFFFF) / SCREEN_HEIGHT;
}

static void inline MouseSetup(INPUT *buffer)
{
    buffer->type = INPUT_MOUSE;
    buffer->mi.dx = (0 * (0xFFFF / SCREEN_WIDTH));
    buffer->mi.dy = (0 * (0xFFFF / SCREEN_HEIGHT));
    buffer->mi.mouseData = 0;
    buffer->mi.dwFlags = MOUSEEVENTF_ABSOLUTE;
    buffer->mi.time = 0;
    buffer->mi.dwExtraInfo = 0;
}

static void inline MouseMoveAbsolute(INPUT *buffer, double x, double y)
{
    makeAbsXY(x, y);
    buffer->mi.dx = x;
    buffer->mi.dy = y;
    buffer->mi.dwFlags = (MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_MOVE);
    SendInput(1, buffer, sizeof(INPUT));
}

static void inline MouseClick(INPUT *buffer)
{
    buffer->mi.dwFlags = (MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_RIGHTDOWN);
    SendInput(1, buffer, sizeof(INPUT));
    Sleep(10);
    buffer->mi.dwFlags = (MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_RIGHTUP);
    SendInput(1, buffer, sizeof(INPUT));
}
Could anyone please provide pointers on what might be going wrong in these steps? The simulated mouse click always seems to be shifted left by some factor which keeps increasing as x becomes larger. I have verified that if gPosn points to (0, 0), it always simulates a mouse click at the top-right corner of the client screen.
Thanks for your time.
If you have x and y in client coordinates, you have to convert them to screen coordinates:
POINT point;
point.x = x;
point.y = y;
::ClientToScreen(m_hWnd, &point);
Here m_hWnd is the handle of the window which owns the objects; x and y are relative to the top-left corner of that window's client area.
With point.x and point.y in screen coordinates, the rest of the conversion for SendInput is correct. You can also create an INPUT array for SendInput; this sends the mouse messages without interruption.
INPUT input[3];
for (int i = 0; i < 3; i++)
{
    memset(&input[i], 0, sizeof(INPUT));
    input[i].type = INPUT_MOUSE;
}
input[0].mi.dx = (point.x * 0xFFFF) / (GetSystemMetrics(SM_CXSCREEN) - 1);
input[0].mi.dy = (point.y * 0xFFFF) / (GetSystemMetrics(SM_CYSCREEN) - 1);
input[0].mi.dwFlags = MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_MOVE;
input[1].mi.dwFlags = MOUSEEVENTF_RIGHTDOWN;
input[2].mi.dwFlags = MOUSEEVENTF_RIGHTUP;
SendInput(3, input, sizeof(INPUT));
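Putting the two steps together, a rough sketch of a helper that goes from client co-ordinates to a simulated right click might look like this. The function name SimulateRightClick and the hwnd parameter are illustrative, not from the original code, and the normalization assumes the target point lies on the primary monitor.

#include <windows.h>

// Sketch: right-click at a point given in hwnd's client co-ordinates.
void SimulateRightClick(HWND hwnd, int clientX, int clientY)
{
    POINT pt = { clientX, clientY };
    ::ClientToScreen(hwnd, &pt);   // client -> screen co-ordinates

    INPUT input[3] = {};
    for (int i = 0; i < 3; i++)
        input[i].type = INPUT_MOUSE;

    // Normalize screen pixels to the 0..65535 range used by MOUSEEVENTF_ABSOLUTE.
    input[0].mi.dx = (pt.x * 0xFFFF) / (::GetSystemMetrics(SM_CXSCREEN) - 1);
    input[0].mi.dy = (pt.y * 0xFFFF) / (::GetSystemMetrics(SM_CYSCREEN) - 1);
    input[0].mi.dwFlags = MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_MOVE;
    input[1].mi.dwFlags = MOUSEEVENTF_RIGHTDOWN;
    input[2].mi.dwFlags = MOUSEEVENTF_RIGHTUP;

    ::SendInput(3, input, sizeof(INPUT));
}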
Overview
In my app (which is a game), I make use of batching to reduce the number of draw calls. So, for example, I create a Java object called platforms which holds all the platforms in the game. All the enemies are batched together, as are all collectible items, and so on.
This works really well. At present I am able to size and position the individual items in a batch independently of each other; however, I've come to the point where I also really need to change the opacity of individual items. Currently, I can change only the opacity of the entire batch.
Batching
I am uploading the vertices for all items within the batch that are to be displayed (I can turn individual items off if I don't want them to be drawn), and then once they are all done, I simply draw them in one call.
The following gives an idea of what I'm doing. I realise it may not compile; it is just to illustrate the question.
public void draw(){
    // Upload vertices (5 floats per vertex: x, y, z, u, v; 6 vertices per sprite)
    for (count = 0; count < numOfSpritesInBatch; count += 1){
        // Triangle 1
        vertices[x]      = xLeft;
        vertices[x + 1]  = yTop;
        vertices[x + 2]  = 0;
        vertices[x + 3]  = textureLeft;
        vertices[x + 4]  = 0;

        vertices[x + 5]  = xRight;
        vertices[x + 6]  = yTop;
        vertices[x + 7]  = 0;
        vertices[x + 8]  = textureRight;
        vertices[x + 9]  = 0;

        vertices[x + 10] = xLeft;
        vertices[x + 11] = yBottom;
        vertices[x + 12] = 0;
        vertices[x + 13] = textureLeft;
        vertices[x + 14] = 1;

        // Triangle 2
        vertices[x + 15] = xRight;
        vertices[x + 16] = yTop;
        vertices[x + 17] = 0;
        vertices[x + 18] = textureRight;
        vertices[x + 19] = 0;

        vertices[x + 20] = xLeft;
        vertices[x + 21] = yBottom;
        vertices[x + 22] = 0;
        vertices[x + 23] = textureLeft;
        vertices[x + 24] = 1;

        vertices[x + 25] = xRight;
        vertices[x + 26] = yBottom;
        vertices[x + 27] = 0;
        vertices[x + 28] = textureRight;
        vertices[x + 29] = 1;

        x += 30;
    }
    vertexBuf.rewind();
    vertexBuf.put(vertices).position(0);

    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texID);
    GLES20.glUseProgram(iProgId);

    Matrix.multiplyMM(mvpMatrix2, 0, mvpMatrix, 0, mRotationMatrix, 0);
    mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix2, 0);

    vertexBuf.position(0);
    GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
    GLES20.glEnableVertexAttribArray(iPosition);

    vertexBuf.position(3);
    GLES20.glVertexAttribPointer(iTexCoords, 2, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
    GLES20.glEnableVertexAttribArray(iTexCoords);

    // Enable alpha blending and set the blending function
    GLES20.glEnable(GLES20.GL_BLEND);
    GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);

    // Draw it
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6 * numOfSpritesInBatch);

    // Disable alpha blending
    GLES20.glDisable(GLES20.GL_BLEND);
}
Shaders
String strVShader =
    "uniform mat4 uMVPMatrix;" +
    "attribute vec4 a_position;\n" +
    "attribute vec2 a_texCoords;" +
    "varying vec2 v_texCoords;" +
    "void main()\n" +
    "{\n" +
    "    gl_Position = uMVPMatrix * a_position;\n" +
    "    v_texCoords = a_texCoords;" +
    "}";

String strFShader =
    "precision mediump float;" +
    "uniform float opValue;" +
    "varying vec2 v_texCoords;" +
    "uniform sampler2D u_baseMap;" +
    "void main()" +
    "{" +
    "    gl_FragColor = texture2D(u_baseMap, v_texCoords);" +
    "    gl_FragColor *= opValue;" +
    "}";
Currently, I have a method in my Sprite class that allows me to change the opacity. For example, something like this:
spriteBatch.setOpacity(0.5f); //Half opacity
This works, but changes the whole batch - not what I'm after.
Application
I need this because, when the player destroys an enemy, I want to draw small indicators which show the score obtained from that action (the type of thing that happens in many games). I want these little 'score indicators' to fade out once they appear. All the indicators would of course be created as a batch so they can all be drawn with one draw call.
The only other alternatives are:
Create 10 textures at varying levels of opacity and just switch between them to create the fading effect. Not really an option, as it is far too wasteful.
Create each of these objects separately and draw each with its own draw call. This would work, but with a maximum of 10 of these objects on-screen, I could potentially be using 10 draw calls just for these items, while the game as a whole currently only uses about 20 draw calls to draw hundreds of items.
I also need to look at future uses of this in particle systems and so on, so I would really like to figure out how to adjust each item's opacity separately. If I need to do this in the shader, I would be grateful if you could show how that works. Alternatively, is it possible to do this outside of the shader?
Surely this can be done in some way or another? All suggestions welcome....
The most direct way of achieving this is to use a vertex attribute for the opacity value, instead of a uniform. This will allow you to set the opacity per vertex, without increasing the number of draw calls.
To implement this, you can follow the pattern you already use for the texture coordinates. They are passed into the vertex shader as an attribute, and then handed off to the fragment shader as a varying variable.
So in the vertex shader, you add:
...
attribute float a_opValue;
varying float v_opValue;
...
v_opValue = a_opValue;
...
In the fragment shader, you remove the uniform declaration for opValue, and replace it with:
varying float v_opValue;
...
gl_FragColor *= v_opValue;
...
In the Java code, you extend the vertex data with an additional value for the opacity, to use 6 values per vertex (3 position, 2 texture coordinates, 1 opacity), and update the state setup accordingly:
vertexBuf.position(0);
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 6 * 4, vertexBuf);
GLES20.glEnableVertexAttribArray(iPosition);
vertexBuf.position(3);
GLES20.glVertexAttribPointer(iTexCoords, 2, GLES20.GL_FLOAT, false, 6 * 4, vertexBuf);
GLES20.glEnableVertexAttribArray(iTexCoords);
vertexBuf.position(5);
GLES20.glVertexAttribPointer(iOpValue, 1, GLES20.GL_FLOAT, false, 6 * 4, vertexBuf);
GLES20.glEnableVertexAttribArray(iOpValue);
I can't test different Windows versions, but I suspect it's a Windows 8 issue (due to the corner and side hotspots).
I'm trying to move the cursor to specified coordinates using SendInput, SetCursorPos, mouse_event, and MouseMove from AutoHotkey and AutoIt. It works when moving the cursor on the same monitor, but not when crossing monitors.
When crossing monitors: if my mouse cursor is at (100, 100) on the secondary monitor (to the right) and I move it to (0, 0) (primary monitor), the cursor moves there and stays there. GetCursorPos tells me it's at (0, 0). But as soon as I move the mouse, the cursor starts from (0, 0) on the secondary monitor.
How do I move my cursor across monitors without having it jump back to the original monitor?
SendInput example C++:
int MouseMove(int x, int y) {
    int screenWidth = GetSystemMetrics(SM_CXVIRTUALSCREEN);
    int screenHeight = GetSystemMetrics(SM_CYVIRTUALSCREEN);
    INPUT input;
    input.type = INPUT_MOUSE;
    input.mi.dx = round((x * 65535) / (screenWidth - 1));
    input.mi.dy = round((y * 65535) / (screenHeight - 1));
    input.mi.dwFlags = MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_VIRTUALDESK | MOUSEEVENTF_MOVE;
    input.mi.mouseData = 0;
    input.mi.time = 0;
    input.mi.dwExtraInfo = 0;
    return SendInput(1, &input, sizeof(INPUT));
}
AutoHotkey example:
CoordMode, Mouse, Screen
MouseMove, 0, 0, 0
AutoIt example:
MouseMove(0, 0, 0)
I have no way of testing your issue but maybe I can point you in the right direction.
The only thing I can think of is using MouseGetPos to store your current mouse position, SysGet to grab the second monitor, and MouseMove to return to the original position after your SendInput.
Hope this helps.
This may be a bug in AutoIt or Windows.
Try doing MouseMove in a different way and play with $Window.
Local $WM_MOUSEMOVE = 0x0200
DllCall("user32.dll", "int", "SendMessage", _
"hwnd", WinGetHandle( $Window ), _
"int", $WM_MOUSEMOVE, _
"int", 0, _
"long", _MakeLong($X, $Y))
Are your monitors set to Extend mode?
I'm trying to create a loading bar for my game. I create a basic rectangle, add it to the stage, and calculate its size according to the number of files so I get a fixed width. Everything works, but every step (frame) creates another rectangle; how do I get only one object?
This is my code:
function test(file) {
    r_width = 500;
    r_height = 20;
    ratio = r_width / manifest.length;
    if (file == 1) {
        new_r_width = 0
        // Draw
        r = new createjs.Shape();
        r_x = (width / 2) - (r_width / 2);
        r_y = (height / 2) - (r_height / 2);
        new_r_width += ratio;
        r.graphics.beginFill("#222").drawRect(r_x, r_y, new_r_width, r_height);
        stage.addChild(r);
    } else {
        stage.clear();
        new_r_width += ratio;
        r.graphics.beginFill("#" + file * 100).drawRect(r_x, r_y + file * 20, new_r_width, r_height);
        stage.addChild(r);
    }
    stage.update();
}
https://space-clicker-c9-zoranf.c9.io/loading/
If you want to redraw the rectangle, you will have to clear the graphics first, and then ensure the stage is updated. In your code it looks like you are clearing the stage, which is automatically handled by the stage.update() unless you manually turn off updateOnTick.
There are some other approaches too. If you just use a rectangle, you can set the scaleX of the shape. Draw your rectangle at 100% of the size you want it at, and then scale it based on the progress (0-1).
r.scaleX = 0.5; // 50%
A newer approach, supported only in the NEXT version of EaselJS (newer than 0.7.1, on GitHub), is to save off the drawRect command and modify it later:
var r = new createjs.Shape();
r.graphics.beginFill("red");
var rectCommand = r.graphics.drawRect(0,0,100,10).command; // returns the command
// Later
rectCommand.w = 50; // Modify the width of the rectangle
Hope that helps!