I’m working in C on a project to capture data from a sensor and display it as part of a GUI application on the Raspberry Pi. I am using GTK 3.0, plus Cairo for graphing. I have built an application that works, but I want to make a modification to enable me to change the frequency of data capture.
Within my main code section I have a call like:
gdk_threads_add_timeout (250, data_capture, widgets);
This all works: the data capture routine is triggered every 250 ms. But I want to add functionality to the GUI to let the user change the speed. If I try to call this function from anywhere other than main, it fails.
I have looked for other ways to do it, but I can’t find any examples or explanations of how I can do it.
Ideally, what I would like is something like:
void update_speed(button, widgets)
// Button to change speed has been pressed
read speed from GUI
update frequency
return
int main()
...
setup GUI
set default speed
start main GTK loop
Does anyone have any idea how I could achieve this?
Edit: Additional Code Snippet
(This is not the whole program, but an extract of main)
int main(int argc, char** argv) {
GtkBuilder *builder;
GtkWidget *window;
GError *err = NULL; // holds any error that occurs within GTK
// instantiate structure, allocating memory for it
struct app_widgets *widgets = g_slice_new(struct app_widgets);
// initialise GTK library and pass it in command line parameters
gtk_init(&argc, &argv);
// build the gui
builder = gtk_builder_new();
gtk_builder_add_from_file (builder, "../Visual/gui/main_window.glade", &err);
window = GTK_WIDGET(gtk_builder_get_object(builder, "main_application_window"));
// build the structure of widget pointers
widgets->w_spn_dataspeed = GTK_SPIN_BUTTON(gtk_builder_get_object(builder, "spn_dataspeed"));
widgets->w_spn_refreshspeed = GTK_SPIN_BUTTON(gtk_builder_get_object(builder, "spn_refreshspeed"));
widgets->w_adj_dataspeed = GTK_ADJUSTMENT(gtk_builder_get_object(builder, "adj_dataspeed"));
widgets->w_adj_refreshspeed = GTK_ADJUSTMENT(gtk_builder_get_object(builder, "adj_refreshspeed"));
// connect the widgets to the signal handler
gtk_builder_connect_signals(builder, widgets); // note: second parameter points to widgets
g_object_unref(builder);
// Set a timeout running to refresh the screen
gdk_threads_add_timeout(SCREEN_REFRESH_TIMER, (GSourceFunc)screen_timer_exe, (gpointer)widgets);
gdk_threads_add_timeout(DATA_REFRESH_TIMER, (GSourceFunc)data_timer_exe, (gpointer)widgets);
gtk_widget_show(window);
gtk_main();
// free up memory used by widget structure, probably not necessary as OS will
// reclaim memory from application after it exits
g_slice_free(struct app_widgets, widgets);
return (EXIT_SUCCESS);
}
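One common way to achieve this (a sketch only, not taken from the program above: the data_timer_id field and the on_btn_setspeed_clicked handler are assumed names) is to keep the source ID that gdk_threads_add_timeout() returns, remove the old source with g_source_remove() when the button is pressed, and start a new timeout at the rate read from the spin button:
// Assumes struct app_widgets gains a "guint data_timer_id" member and that the
// Glade file connects this handler to the speed button's "clicked" signal.
void on_btn_setspeed_clicked(GtkButton *button, struct app_widgets *widgets)
{
    // read the requested interval (in ms) from the spin button
    guint interval = (guint)gtk_spin_button_get_value_as_int(widgets->w_spn_dataspeed);
    // stop the timeout that is currently running, if any
    if (widgets->data_timer_id != 0)
        g_source_remove(widgets->data_timer_id);
    // start a new timeout at the requested rate and remember its source ID
    widgets->data_timer_id = gdk_threads_add_timeout(interval,
            (GSourceFunc)data_timer_exe, (gpointer)widgets);
}
In main() the existing call would then be stored as widgets->data_timer_id = gdk_threads_add_timeout(DATA_REFRESH_TIMER, (GSourceFunc)data_timer_exe, (gpointer)widgets); so the handler always knows which source to cancel. (Alternatively, returning FALSE from the timeout callback removes the source, after which a new one can be added.)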
Related
The documentation for Fl_Tree in FLTK 1.3.4 says:
The callback() is invoked depending on the value of when()
FL_WHEN_RELEASE -- callback invoked when left mouse button is released on an item
FL_WHEN_CHANGED -- callback invoked when left mouse changes selection state
but I can't get the callback to fire only when the mouse button is released, and I can't see any difference between the two. Any ideas?
#include <stdio.h>
#include <FL/Fl.H>
#include <FL/Fl_Double_Window.H>
#include <FL/Fl_Tree.H>
static void cb_(Fl_Tree*, void*)
{
printf ("callback\n");
}
int main()
{
Fl_Double_Window* w = new Fl_Double_Window(325, 325);
Fl_Tree* o = new Fl_Tree(25, 25, 255, 245);
o->callback((Fl_Callback*)cb_);
o->when(FL_WHEN_RELEASE);
o->add("foo/bar");
o->add("foo/baz");
o->end();
w->show();
return Fl::run();
}
This snippet outputs "callback" on every selection change, even though FL_WHEN_RELEASE is set.
If you have downloaded the distribution, have a look at test/input.cxx and test/tree.cxx. Both have tests for the different when() selections.
WHEN_CHANGED only makes sense on edit boxes, browsers and tables - you can verify the data as it is typed in. This does not happen with WHEN_RELEASE. For all other widgets, there is virtually no difference.
Edit
In order for the release callback to fire every time, there are three options:
1. Modify the source Fl_Tree.cxx: look for Fl_Tree::select and change alreadySelected to false.
2. Build the library with FLTK_ABI_VERSION set to 10301. If you look at the source, further down in the same routine, it says
#if FLTK_ABI_VERSION >= 10301
With that set, it will call the reselect, but there is also a whole load of other stuff it will do when this #define is set, since it affects all widgets.
3. Comment out the #if FLTK_ABI_VERSION and the corresponding #endif in Fl_Tree::select.
I'm a little new to using MFC and VC++ as such, but I'm doing this as part of a course and I have to stick to VC++.
http://www.cprogramming.com/tutorial/game_programming/same_game_part1.html
This is the tutorial I have been following to make a simple SameGame. However, when I try to display the score, it gets displayed underneath or outside my application window, even though I display the score before calling UpdateWindow(). I've tried various methods but I am kind of lost here.
Here is the code I'm using to display the score:
void CSameGameView::updateScore()
{
CSameGameDoc* pDoc = GetDocument();
CRect rcClient, rcWindow;
GetClientRect(&rcClient);
GetParentFrame()->GetWindowRect(&rcWindow);
int nHeightDiff = rcWindow.Height() - rcClient.Height();
rcScore.top=rcWindow.top + pDoc->GetHeight() * pDoc->GetRows() + nHeightDiff;
rcScore.left=rcWindow.left + 50;
rcScore.right=rcWindow.left + pDoc->GetWidth() - 50;
rcScore.bottom=rcScore.top + 20;
CString str;
double points = Score::getScore();
str.Format(_T("Score: %0.2f"), points);
HDC hDC=CreateDC(TEXT("DISPLAY"),NULL,NULL,NULL);
COLORREF clr = pDoc->GetBoardSpace(-1, -1); // this returns the background colour
pDC->FillSolidRect(&rcScore, clr);
DrawText(hDC, (LPCTSTR) str, -1, (LPRECT) &rcScore, DT_CENTER);
}
Thank you for any help, and I'm sorry if the question doesn't make sense or is ambiguous.
There are several problems with your code:
1. The hDC you are creating is going to have coordinates relative to the desktop window. To paint text in your window, use CClientDC like this: CClientDC dc(this); (see http://msdn.microsoft.com/en-US/library/s8kx4w44%28v=vs.80%29.aspx)
2. The code you have will leak a DC every time the function is called. The method in #1 will fix that.
3. Your paint code should be done in CView::OnDraw. There you get a DC passed to you and you don't have to worry about creating one with CClientDC. Set the variables you want to draw (e.g. your points or score), store them as class members and draw them in CView::OnDraw (see the sketch below).
Don't do the drawing in your updateScore method.
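A minimal sketch of that approach (not the tutorial's code; m_points is an assumed double member of CSameGameView used to cache the score, and the rectangle simply mirrors the one computed in the question):
void CSameGameView::updateScore()
{
    m_points = Score::getScore();   // cache the value to draw
    Invalidate(FALSE);              // request a repaint; OnDraw() will be called
}
void CSameGameView::OnDraw(CDC* pDC)
{
    CSameGameDoc* pDoc = GetDocument();
    // ... draw the board here as before ...
    // place the score just below the board, in client coordinates
    CRect rcScore;
    rcScore.top    = pDoc->GetHeight() * pDoc->GetRows();
    rcScore.bottom = rcScore.top + 20;
    rcScore.left   = 50;
    rcScore.right  = pDoc->GetWidth() - 50;
    CString str;
    str.Format(_T("Score: %0.2f"), m_points);
    pDC->FillSolidRect(&rcScore, pDoc->GetBoardSpace(-1, -1)); // background colour
    pDC->SetBkMode(TRANSPARENT);
    pDC->DrawText(str, &rcScore, DT_CENTER | DT_SINGLELINE | DT_VCENTER);
}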
Make sense? Hang in there!
I had to switch to DirectShow for my eye-tracking software because of the difficulty of changing the camera resolution when using C++ and OpenCV.
DirectShow is new to me and it is kind of hard to understand everything, but I found this nice example that works perfectly for capturing and viewing the webcam:
http://www.codeproject.com/Articles/12869/Real-time-video-image-processing-frame-grabber-usi
I am using the version that does not require the DirectShow SDK. (But it is still DirectShow that is used in the example, right?)
#include <windows.h>
#include <dshow.h>
#pragma comment(lib,"Strmiids.lib")
#define DsHook(a,b,c) if (!c##_) { INT_PTR* p=b+*(INT_PTR**)a; VirtualProtect(&c##_,4,PAGE_EXECUTE_READWRITE,&no);\
*(INT_PTR*)&c##_=*p; VirtualProtect(p, 4,PAGE_EXECUTE_READWRITE,&no); *p=(INT_PTR)c; }
// Here you get the video image data in buf / len. Process it before calling Receive_ because the renderer deallocates it.
HRESULT ( __stdcall * Receive_ ) ( void* inst, IMediaSample *smp ) ;
HRESULT __stdcall Receive ( void* inst, IMediaSample *smp ) {
BYTE* buf; smp->GetPointer(&buf); DWORD len = smp->GetActualDataLength();
HRESULT ret = Receive_ ( inst, smp );
return ret;
}
int WINAPI WinMain(HINSTANCE inst,HINSTANCE prev,LPSTR cmd,int show){
HRESULT hr = CoInitialize(0); MSG msg={0}; DWORD no;
IGraphBuilder* graph= 0; hr = CoCreateInstance( CLSID_FilterGraph, 0, CLSCTX_INPROC,IID_IGraphBuilder, (void **)&graph );
IMediaControl* ctrl = 0; hr = graph->QueryInterface( IID_IMediaControl, (void **)&ctrl );
ICreateDevEnum* devs = 0; hr = CoCreateInstance (CLSID_SystemDeviceEnum, 0, CLSCTX_INPROC, IID_ICreateDevEnum, (void **) &devs);
IEnumMoniker* cams = 0; hr = devs?devs->CreateClassEnumerator (CLSID_VideoInputDeviceCategory, &cams, 0):0;
IMoniker* mon = 0; hr = cams->Next (1,&mon,0); // get first found capture device (webcam?)
IBaseFilter* cam = 0; hr = mon->BindToObject(0,0,IID_IBaseFilter, (void**)&cam);
hr = graph->AddFilter(cam, L"Capture Source"); // add web cam to graph as source
IEnumPins* pins = 0; hr = cam?cam->EnumPins(&pins):0; // we need output pin to autogenerate rest of the graph
IPin* pin = 0; hr = pins?pins->Next(1,&pin, 0):0; // via graph->Render
hr = graph->Render(pin); // graph builder now builds whole filter chain including MJPG decompression on some webcams
IEnumFilters* fil = 0; hr = graph->EnumFilters(&fil); // from all newly added filters
IBaseFilter* rnd = 0; hr = fil->Next(1,&rnd,0); // we find last one (renderer)
hr = rnd->EnumPins(&pins); // because the data we are interested in is pumped to the renderer's input pin
hr = pins->Next(1,&pin, 0); // via Receive member of IMemInputPin interface
IMemInputPin* mem = 0; hr = pin->QueryInterface(IID_IMemInputPin,(void**)&mem);
DsHook(mem,6,Receive); // so we redirect it to our own proc to grab image data
hr = ctrl->Run();
while ( GetMessage( &msg, 0, 0, 0 ) ) {
TranslateMessage( &msg );
DispatchMessage( &msg );
}
return (int)msg.wParam;
}
The method HRESULT Receive is called for every new frame from the cam. The comments say that buf contains the data. But I have 3 problems/questions.
I can't include the OpenCV lib. I create a new project in Visual Studio and add the same property sheets as I always include. The only difference from earlier projects is that I now create a totally empty project, whereas earlier I created a Win32 application.
How do I add OpenCV to the DirectShow project?
In the example above, buf is a pointer to the frame data. How do I get that into an IplImage/Mat for the OpenCV calculations?
Is there a way to not show the images from the webcam? (I only need to perform some algorithms on the frames; I guess removing the window with the results might leave more power for the analysis algorithms?!)
Thanks!
With DirectShow you typically create a pipeline, that is a graph and you add filters to it, like this:
Camera -> [possibly some extra stuff] -> Sample Grabber -> Null Renderer
Camera, Sample Grabber, and Null Renderer are all standard components shipped with a clean Windows installation. The Sample Grabber can be set to call you back via ISampleGrabberCB::SampleCB and hand you the data for every video frame captured. The Null Renderer terminates the pipeline without displaying video on the monitor (just video capture).
SampleCB is the keyword that will bring you the sample code you need. Having received the data in this call, you can convert/wrap it into an IPL/OpenCV class as suggested by praks411.
Done this way you don't need the DirectShow BaseClasses, and the code will be merely a regular ATL/MFC project. Make sure to use the CComPtr wrapper class when dealing with COM interfaces so that you don't lose references and leak objects. Some declarations might be missing in the very latest Windows SDK, so you need to either use Windows SDK 6.x or just copy the missing parts from there.
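A rough sketch of the callback side (assuming the ISampleGrabberCB declarations from the old qedit.h, or copies of them, are available; the class name is made up):
#include <dshow.h>
// ISampleGrabberCB / IID_ISampleGrabberCB come from the deprecated qedit.h,
// or from declarations copied into the project as mentioned above.
class FrameGrabberCB : public ISampleGrabberCB
{
public:
    // trivial COM plumbing for a statically allocated callback object
    STDMETHODIMP_(ULONG) AddRef()  { return 1; }
    STDMETHODIMP_(ULONG) Release() { return 1; }
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) { *ppv = this; return S_OK; }
        return E_NOINTERFACE;
    }
    // called by the Sample Grabber for every captured frame
    STDMETHODIMP SampleCB(double sampleTime, IMediaSample *pSample)
    {
        BYTE *pBuf = NULL;
        pSample->GetPointer(&pBuf);
        long len = pSample->GetActualDataLength();
        // hand pBuf / len to the OpenCV processing code here
        return S_OK;
    }
    // unused alternative callback
    STDMETHODIMP BufferCB(double sampleTime, BYTE *pBuffer, long bufferLen) { return E_NOTIMPL; }
};
The callback object is registered on the Sample Grabber with ISampleGrabber::SetCallback(&cb, 0), where 0 selects SampleCB() rather than BufferCB().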
See also:
How to capture frames using Delphi/DSPack without displaying it on TVideoWindow? (Delphi code, but good description and figures)
DirectShow: Examples for Using SampleGrabber for Grabbing a Frame and Building a VU Meter
SetLifeCamStudioResolutionSample - a small DirectShow project showing how to set up capture, including the camera resolution, and the Sample Grabber, as well as the missing SDK declarations; a related question is Can't make IAMStreamConfig.SetFormat() to work with LifeCam Studio
Building the Filter Graph on Sample Grabber and Null Renderer
I think you can include OpenCV in the existing project; I've done that for a console application. You will need to add the path to the OpenCV headers and the path to the OpenCV libraries in the property pages for your current project.
Go to the project properties:
1. To add headers:
C/C++ -----> Additional Include Directories ---> here add the OpenCV include directories (you may want to add multiple directories)
2. To add libs:
Linker -----> Additional Library Directories ----> here add the OpenCV lib directory.
To create an IplImage from buf, you can use the following once you have the height and width of the image:
IplImage *m_img_show;
CvSize cv_img_size = cvSize(m_mediaInfo.m_width, m_mediaInfo.m_height);
m_img_show = cvCreateImageHeader(cv_img_size, IPL_DEPTH_8U,3);
cvSetData(m_img_show, m_pBuffer, m_mediaInfo.m_width*3);
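If you prefer the C++ API, the same buffer can also be wrapped in a cv::Mat header without copying (a sketch reusing the member names above; note that DirectShow RGB frames are normally stored bottom-up, so a vertical flip may be needed):
#include <opencv2/core/core.hpp>
// Wrap the raw 24-bit RGB frame in a cv::Mat; the last argument is the row stride in bytes.
cv::Mat frame(m_mediaInfo.m_height, m_mediaInfo.m_width, CV_8UC3,
              m_pBuffer, m_mediaInfo.m_width * 3);
cv::Mat upright;
cv::flip(frame, upright, 0);   // flip around the x-axis if the source is bottom-up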
I think a preview of the image is quite helpful. It seems that your filter above takes its data from the renderer. If you do want to get rid of the preview, you may want to change your renderer and use it in windowless mode. Another option could be to use the Sample Grabber filter.
I have a dual-display graphics card on my system (RHEL 6.3).
I have developed a simple application using Qt Creator (Qt 4.8) which opens two different UIs.
When I execute it, both UIs start on only one display.
What I need is for one UI to run on the primary screen and the other on the secondary screen (i.e. :0.0 and :0.1).
How should I do this using Qt Creator?
xclock -display :0.0
xclock -display :0.1
works fine.
You can use QDesktopWidget to get screen information. It allows you to query the number of screens and the dimensions of each one with
int QDesktopWidget::screenCount () const;
const QRect QDesktopWidget::availableGeometry ( int screen = -1 ) const;
From there, you can move your widget to any given screen. For instance, the following code moves the widget to a given screen, or to the default one if the specified screen is not available:
QDesktopWidget* w = QApplication::desktop();
//some value
int mydesiredscreen = 1;
//fallback to default screen if none
if(mydesiredscreen >= w->screenCount()) mydesiredscreen = -1;
QRect rect1 = w->availableGeometry(mydesiredscreen);
mywindow->move(rect1.topLeft());
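After the move you show the window as usual; for one full-screen UI per monitor you would typically follow the move with something like:
mywindow->showFullScreen();   // or mywindow->show() for a normal window on that screen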
Tejas,
To display your second UI on the secondary monitor, you can set the screen as the parent of your second UI like this:
int screenNumber = 1; /* Desired screen number */
QWidget secondaryUI_widget; /* Secondary UI object which is to be displayed on the secondary monitor */
QDesktopWidget myDesktopWidget; /* Create an object of QDesktopWidget */
secondaryUI_widget.setParent(myDesktopWidget.screen(screenNumber));
The last line sets the desired screen, on which you would like to display your page, as the parent of your UI object.
Now you can call show() on your second UI anywhere in your program, and it will be displayed on the screen given by screenNumber.
I am trying to draw a GtkLayout using Cairo. The layout is huge and I need to get the part that is visible in the containing window and update only that part. With GTK 2 the expose event data was sufficient for this, but I have not been successful with GTK 3.
In the function to handle "draw" events, I did the following:
GdkWindow *gdkwin; // window to draw
cairo_region_t *cregion; // update regions
cairo_rectangle_int_t crect; // enclosing rectangle
gdkwin = gtk_layout_get_bin_window(GTK_LAYOUT(layout));
cregion = gdk_window_get_update_area(gdkwin);
cairo_region_get_extents(cregion,&crect);
expy1 = crect.y; // top of update area
expy2 = expy1 + crect.height; // bottom of update area
The problem is that cregion has garbage. Either gdk_window_get_update_area() is buggy or I am not using the right drawing window.
Passing the GtkLayout as follows also does not work (this is the function arg for g_signal_connect):
void draw_function(GtkWidget *layout, cairo_t *cr, void *userdata)
Whatever gets passed is not the GtkLayout from g_signal_connect, but something else.
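In GTK 3 the cairo context handed to the "draw" handler is already clipped to the area that needs repainting, so the dirty region can be read from the clip instead of gdk_window_get_update_area(). A minimal sketch (not the original program):
gboolean draw_function(GtkWidget *layout, cairo_t *cr, gpointer userdata)
{
    GdkRectangle clip;                      // visible/dirty area in widget coordinates
    if (gdk_cairo_get_clip_rectangle(cr, &clip)) {
        int expy1 = clip.y;                 // top of update area
        int expy2 = clip.y + clip.height;   // bottom of update area
        // ... redraw only the rows between expy1 and expy2 ...
    }
    return FALSE;                           // let GTK continue default handling
}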
================= UPDATE ====================
I found a way to do what I want without using GtkLayout. I am using a GtkDrawingArea inside a viewport. I can scroll to any window within the large graphic layout and update that window only. Works well once I figured out the cryptic docs.
scrwing = gtk_scrolled_window_new(0,0);
gtk_container_add(GTK_CONTAINER(vboxx),scrwing);
drwing = gtk_drawing_area_new();
gtk_scrolled_window_add_with_viewport(GTK_SCROLLED_WINDOW(scrwing),drwing);
gtk_scrolled_window_set_policy(GTK_SCROLLED_WINDOW(scrwing), GTK_POLICY_ALWAYS, GTK_POLICY_ALWAYS);
scrollbar = gtk_scrolled_window_get_vadjustment(GTK_SCROLLED_WINDOW(scrwing));