I want to retrieve wLength (which can be specified by a HID device). The obvious approach seems to be to send a control transfer to the device, so I tried it with the following code:
struct usbfs_ctrltransfer ctrl = {
.bmRequestType = LIBUSB_ENDPOINT_IN,
.bRequest = LIBUSB_REQUEST_GET_CONFIGURATION,
.wValue = 0,
.wIndex = 0,
.wLength = 1,
...
}
....
r = ioctl(fd, 0, &ctrl);
....
The result of this code was just an error value (I think it was -1!).
I reloaded the hid kernel module in debug mode (modprobe hid debug=100; don't panic at this large debug level!). With debugging enabled, hid prints out the true value of wLength:
/build/buildd/linux-3.13.0/drivers/hid/usbhid/hid-core.c: submitting ctrl urb: Get_Report wValue=0x0100 wIndex=0x0001 wLength=64
I tracked it in the Linux kernel source code and found that this information is printed in usb_get_intfdata.
In summary, I would like to know whether there is an equivalent function in userland or not.
The answer to this question is to use udev. By reading a special attribute called bmAttributes, you can find the actual I/O length.
Use the code below to read it:
....
udev_device_get_sysattr_value(dev, "bmAttributes")
....
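For reference, a fuller self-contained sketch of reading that attribute with libudev (this is not from the original answer; it simply walks the usb subsystem and prints bmAttributes for every device that exposes it, error handling kept to a minimum; compile with -ludev):
#include <stdio.h>
#include <libudev.h>

int main(void)
{
    struct udev *udev = udev_new();
    struct udev_enumerate *en;
    struct udev_list_entry *entry;

    if (!udev)
        return 1;

    en = udev_enumerate_new(udev);
    udev_enumerate_add_match_subsystem(en, "usb");
    udev_enumerate_scan_devices(en);

    udev_list_entry_foreach(entry, udev_enumerate_get_list_entry(en)) {
        const char *syspath = udev_list_entry_get_name(entry);
        struct udev_device *dev = udev_device_new_from_syspath(udev, syspath);
        const char *attr;

        if (!dev)
            continue;

        /* sysfs attribute exposed by the kernel for this device */
        attr = udev_device_get_sysattr_value(dev, "bmAttributes");
        if (attr)
            printf("%s: bmAttributes=%s\n", syspath, attr);

        udev_device_unref(dev);
    }

    udev_enumerate_unref(en);
    udev_unref(udev);
    return 0;
}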
Context
I am toggling the presence of the mouse cursor depending on whether or not a mouse is plugged into my Linux system. I have created a solution that works, although suboptimally. My current solution is the following:
Modified WM: I extended a program supplied with my WM, matchbox-remote.c (see original source here), to allow a remote command that changes the cursor in real time. My change is based on this SO answer to a similar question.
Rules for udev: I added two udev rules that call matchbox-remote when a mouse is plugged in or unplugged, with my toggle argument.
Systemd service: I added a oneshot systemd service that enables or disables the cursor on boot (depending on whether a mouse is plugged in or not). This addresses an edge case where X is not yet ready when the udev rules first run.
If you would like to see how I have accomplished each step, skip to the bottom, where I have numbered sections with the relevant files.
Problem
My solution works, but is suboptimal for two reasons:
The mouse cursor is present onscreen for a few seconds after the desktop loads if no mouse is plugged in, because the systemd service runs after multi-user.target.
The mouse cursor sometimes remains present onscreen after being disabled. It disappears as soon as I interact with a button or other UI element; however, this can be quite annoying.
Attempted fixes
I have tried to do the following to improve my solution:
Reduce latency on boot: I tried to adjust my service file to start earlier in the boot process. I specifically targeted it to start after xserver-nodm.service. However, it still fails to find the DISPLAY despite my setting it manually as an environment variable.
Hide the mouse pointer on event: I attempted to restart the WM, hoping this would make the cursor disappear more seamlessly when it was supposed to. Unfortunately, this is a regressive solution, as it makes the screen go black for several seconds and is more disruptive.
Conclusion
I would like help in solving the following:
How can I change the cursor in a timely fashion at boot time? (Do I need to modify the X server, or somehow queue events to send once it's reachable?)
How can I ensure a change to the cursor is reflected immediately (as opposed to sometimes waiting until I interact with a UI element)?
More info
Kernel: Linux 5.4.3
Xorg: Version 1.20.5
(1): matchbox-remote.c (in /usr/bin/)
#include <X11/cursorfont.h>
#include <X11/extensions/Xfixes.h>
...
Display *dpy;
...
void set_show_cursor (int show)
{
Window root;
Cursor cursor;
Pixmap bitmap;
XColor color;
static char data[8] = {0};
root = DefaultRootWindow(dpy);
if (!show) {
color.red = color.green = color.blue = 0;
bitmap = XCreateBitmapFromData(dpy, root, data, 8, 8);
cursor = XCreatePixmapCursor(dpy, bitmap, bitmap, &color, &color, 0, 0);
XDefineCursor(dpy, root, cursor);
XFreeCursor(dpy, cursor);
XFreePixmap(dpy, bitmap);
} else {
cursor = XCreateFontCursor(dpy, XC_left_ptr);
XDefineCursor(dpy, root, cursor);
XFreeCursor(dpy, cursor);
}
}
...
static void usage(char *progname) {
...
printf(" -show-cursor [1|0] Enable or disable the cursor\n");
...
}
...
int main (int argc, char* argv[])
{
...
for (i=1; argv[i]; i++) {
...
switch (arg[1])
{
....
case 's':
if (NULL != argv[i+1]) {
set_show_cursor(atoi(argv[i+1]));
}
break;
...
}
}
XSync(dpy, False);
XCloseDisplay(dpy);
}
(2): 98-cursor-toggle.rules (in /etc/udev/rules.d)
SUBSYSTEMS="usb", ACTION=="add", ENV{ID_INPUT_MOUSE}=="1", RUN+="/bin/sh -c 'DISPLAY=:0 /usr/bin/matchbox-remote -show-cursor 1'"
SUBSYSTEMS="usb", ACTION=="remove", ENV{ID_INPUT_MOUSE}=="1", RUN+="/bin/sh -c 'DISPLAY=:0 /usr/bin/matchbox-remote -show-cursor 0'"
(3): cursor-init.service (in /lib/systemd/system)
[Unit]
Description=X11 cursor initialisation
After=multi-user.target
Requires=multi-user.target
[Service]
Type=simple
ExecStart=/bin/sh -c 'DISPLAY=:0 /usr/bin/matchbox-remote -show-cursor $(ls -1 /dev/input/by-*/*-mouse 2>/dev/null | wc -l)'
RemainAfterExit=true
StandardOutput=journal
Restart=on-failure
RestartSec=500ms
[Install]
WantedBy=graphical.target
I’m working in C on a project to capture data from a sensor and display it as part of a GUI application on the Raspberry Pi. I am using GTK 3.0, plus Cairo for graphing. I have built an application that works, but I want to make a modification to enable me to change the frequency of data capture.
Within my main code section I have a call like:
gdk_threads_add_timeout (250, data_capture, widgets);
This all works; the data capture routine is triggered every 250 ms. But I want to add functionality to the GUI to let the user change the speed. If I try to call this function from anywhere other than main, it fails.
I have looked for other ways to do it, but I can’t find any examples or explanations of how I can do it.
Ideally what I would like is something like:
void update_speed(button, widgets)
// Button to change speed has been pressed
read speed from GUI
update frequency
return
int main()
...
setup GUI
set default speed
start main GTK loop
Does anyone have any idea how I could achieve this?
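(For illustration, a sketch of what update_speed might look like, assuming the timeout's source ID is stored in the app_widgets struct, for example in a new data_timer_id field filled in when the timeout is first added; that field name is made up here:)
// Illustrative sketch: cancel the running data timeout and re-arm it with the
// interval read from the spin button. Assumes struct app_widgets gains a
// guint data_timer_id member holding the ID returned by gdk_threads_add_timeout.
void update_speed(GtkButton *button, struct app_widgets *widgets)
{
    guint interval_ms = (guint)gtk_spin_button_get_value_as_int(widgets->w_spn_dataspeed);

    if (widgets->data_timer_id != 0)
        g_source_remove(widgets->data_timer_id);   // stop the old timer

    widgets->data_timer_id = gdk_threads_add_timeout(interval_ms,
                                                     (GSourceFunc)data_timer_exe,
                                                     (gpointer)widgets);
}
Alternatively, the timeout callback itself could return FALSE once the speed changes, which removes the source automatically, and the button handler would then simply add a new timeout.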
Edit: Additional Code Snippet
(This is not the whole program, but an extract of main)
int main(int argc, char** argv) {
GtkBuilder *builder;
GtkWidget *window;
GError *err = NULL; // holds any error that occurs within GTK
// instantiate structure, allocating memory for it
struct app_widgets *widgets = g_slice_new(struct app_widgets);
// initialise GTK library and pass it in command line parameters
gtk_init(&argc, &argv);
// build the gui
builder = gtk_builder_new();
gtk_builder_add_from_file (builder, "../Visual/gui/main_window.glade", &err);
window = GTK_WIDGET(gtk_builder_get_object(builder, "main_application_window"));
// build the structure of widget pointers
widgets->w_spn_dataspeed = GTK_SPIN_BUTTON(gtk_builder_get_object(builder, "spn_dataspeed"));
widgets->w_spn_refreshspeed = GTK_SPIN_BUTTON(gtk_builder_get_object(builder, "spn_refreshspeed"));
widgets->w_adj_dataspeed = GTK_ADJUSTMENT(gtk_builder_get_object(builder, "adj_dataspeed"));
widgets->w_adj_refreshspeed = GTK_ADJUSTMENT(gtk_builder_get_object(builder, "adj_refreshspeed"));
// connect the widgets to the signal handler
gtk_builder_connect_signals(builder, widgets); // note: second parameter points to widgets
g_object_unref(builder);
// Set a timeout running to refresh the screen
gdk_threads_add_timeout(SCREEN_REFRESH_TIMER, (GSourceFunc)screen_timer_exe, (gpointer)widgets);
gdk_threads_add_timeout(DATA_REFRESH_TIMER, (GSourceFunc)data_timer_exe, (gpointer)widgets);
gtk_widget_show(window);
gtk_main();
// free up memory used by widget structure, probably not necessary as OS will
// reclaim memory from application after it exits
g_slice_free(struct app_widgets, widgets);
return (EXIT_SUCCESS);
}
I use XCreateSimpleWindow to create an X11 window. xprop shows the following allowed actions for my window:
_NET_WM_ALLOWED_ACTIONS(ATOM) = _NET_WM_ACTION_MOVE, _NET_WM_ACTION_RESIZE, _NET_WM_ACTION_STICK, _NET_WM_ACTION_MINIMIZE, _NET_WM_ACTION_MAXIMIZE_HORZ, _NET_WM_ACTION_MAXIMIZE_VERT, _NET_WM_ACTION_FULLSCREEN, _NET_WM_ACTION_CLOSE, _NET_WM_ACTION_SHADE, _NET_WM_ACTION_CHANGE_DESKTOP, _NET_WM_ACTION_ABOVE, _NET_WM_ACTION_BELOW
What happens if I don't set them explicitly (like above)? Does a window have a default list containing all of them? How do I set them explicitly?
Edit1
Here is example code which sets only one allowed action:
Atom aa = XInternAtom(d, "_NET_WM_ALLOWED_ACTIONS", False);
Atom close = XInternAtom(d, "_NET_WM_ACTION_CLOSE", False);
XChangeProperty(d, w, aa, XA_ATOM, 32, PropModeReplace, (unsigned char*)&close, 1);
The window manager still lets me move and resize the window, so maybe I should send some client message? I want a window that allows only the close action.
1) No, by default a window does not have these properties, but window managers often set some default values. Try running your program without a WM to see the difference.
2) Use the ChangeProperty request. A window property is some data associated with a window plus a little metadata: a name (an atom) and a type (an atom). If the data is larger than a single item of the type, it is assumed to be an array of them. For example, an atom is just a 32-bit unsigned int, so if you see an 8-byte property of type ATOM, you interpret the content as two atoms. See the XChangeProperty documentation if you are using Xlib.
I had to change to DirectShow for my eye-tracking software because of the difficulty of changing the camera resolution when using C++ and OpenCV.
DirectShow is new to me and it is kind of hard to understand everything, but I found this nice example that works perfectly for capturing and viewing the webcam:
http://www.codeproject.com/Articles/12869/Real-time-video-image-processing-frame-grabber-usi
I am using the version that does not require the DirectShow SDK. (But it is still DirectShow that is used in the example, right?)
#include <windows.h>
#include <dshow.h>
#pragma comment(lib,"Strmiids.lib")
#define DsHook(a,b,c) if (!c##_) { INT_PTR* p=b+*(INT_PTR**)a; VirtualProtect(&c##_,4,PAGE_EXECUTE_READWRITE,&no);\
*(INT_PTR*)&c##_=*p; VirtualProtect(p, 4,PAGE_EXECUTE_READWRITE,&no); *p=(INT_PTR)c; }
// Here you get image video data in buf / len. Process it before calling Receive_ because the renderer deallocates it.
HRESULT ( __stdcall * Receive_ ) ( void* inst, IMediaSample *smp ) ;
HRESULT __stdcall Receive ( void* inst, IMediaSample *smp ) {
BYTE* buf; smp->GetPointer(&buf); DWORD len = smp->GetActualDataLength();
HRESULT ret = Receive_ ( inst, smp );
return ret;
}
int WINAPI WinMain(HINSTANCE inst,HINSTANCE prev,LPSTR cmd,int show){
HRESULT hr = CoInitialize(0); MSG msg={0}; DWORD no;
IGraphBuilder* graph= 0; hr = CoCreateInstance( CLSID_FilterGraph, 0, CLSCTX_INPROC,IID_IGraphBuilder, (void **)&graph );
IMediaControl* ctrl = 0; hr = graph->QueryInterface( IID_IMediaControl, (void **)&ctrl );
ICreateDevEnum* devs = 0; hr = CoCreateInstance (CLSID_SystemDeviceEnum, 0, CLSCTX_INPROC, IID_ICreateDevEnum, (void **) &devs);
IEnumMoniker* cams = 0; hr = devs?devs->CreateClassEnumerator (CLSID_VideoInputDeviceCategory, &cams, 0):0;
IMoniker* mon = 0; hr = cams->Next (1,&mon,0); // get first found capture device (webcam?)
IBaseFilter* cam = 0; hr = mon->BindToObject(0,0,IID_IBaseFilter, (void**)&cam);
hr = graph->AddFilter(cam, L"Capture Source"); // add web cam to graph as source
IEnumPins* pins = 0; hr = cam?cam->EnumPins(&pins):0; // we need output pin to autogenerate rest of the graph
IPin* pin = 0; hr = pins?pins->Next(1,&pin, 0):0; // via graph->Render
hr = graph->Render(pin); // graph builder now builds whole filter chain including MJPG decompression on some webcams
IEnumFilters* fil = 0; hr = graph->EnumFilters(&fil); // from all newly added filters
IBaseFilter* rnd = 0; hr = fil->Next(1,&rnd,0); // we find last one (renderer)
hr = rnd->EnumPins(&pins); // because data we are intersted in are pumped to renderers input pin
hr = pins->Next(1,&pin, 0); // via Receive member of IMemInputPin interface
IMemInputPin* mem = 0; hr = pin->QueryInterface(IID_IMemInputPin,(void**)&mem);
DsHook(mem,6,Receive); // so we redirect it to our own proc to grab image data
hr = ctrl->Run();
while ( GetMessage( &msg, 0, 0, 0 ) ) {
TranslateMessage( &msg );
DispatchMessage( &msg );
}
};
The method Receive is called for every new frame from the cam, and the comments say that buf contains the data. But I have 3 problems/questions.
I can't include the OpenCV lib. I create a new project in Visual Studio and add the same property sheets as I always do. The only difference from earlier projects is that I now create a totally empty project; earlier I created a Win32 application.
How do I add OpenCV to the DirectShow project?
In the example above, buf is a pointer to the data. How do I get it into an IplImage/Mat for the OpenCV calculations?
Is there a way to not show the images from the webcam? (I only need to run some algorithms on the frames; I guess removing the window with the results might give me more power for the analysis algorithms.)
Thanks!
With DirectShow you typically create a pipeline, that is, a graph, and you add filters to it, like this:
Camera -> [possibly some extra stuff] -> Sample Grabber -> Null Renderer
Camera, Sample Grabber and Null Renderer are all standard components shipped with a clean Windows installation. The Sample Grabber can be set to call you back via ISampleGrabberCB::SampleCB and give you the data for every captured video frame. The Null Renderer terminates the pipeline without displaying video on the monitor (just video capture).
SampleCB is the keyword that will lead you to the sample code you need. With the data received in this call, you can convert/wrap it into an IPL/OpenCV class as suggested by @praks411.
Done this way, you don't need the DirectShow BaseClasses, and the code will be just a regular ATL/MFC project. Make sure to use the CComPtr wrapper class for COM interfaces so you don't lose references and leak objects. Some declarations may be missing from the very latest Windows SDK, so you either need to use Windows SDK 6.x or copy the missing parts from there.
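For orientation only, a skeleton of such a callback could look like the code below. This is a sketch, not a complete project: it assumes the ISampleGrabberCB declaration is available (qedit.h from an older SDK, or copied in as mentioned above) and that the callback is registered with ISampleGrabber::SetCallback(&cb, 0), so that SampleCB is the method invoked.
#include <dshow.h>
#include <qedit.h>   // ISampleGrabberCB; missing from newer SDKs, see note above

class FrameGrabberCB : public ISampleGrabberCB
{
public:
    // Minimal IUnknown: the object is kept alive for the whole lifetime of the
    // graph, so real reference counting is not needed in this sketch.
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == __uuidof(ISampleGrabberCB)) { *ppv = this; return S_OK; }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }

    // Called by the Sample Grabber for every sample when SetCallback(&cb, 0) is used.
    STDMETHODIMP SampleCB(double SampleTime, IMediaSample *pSample)
    {
        BYTE *pData = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pData)))
        {
            long len = pSample->GetActualDataLength();
            // Process pData / len here (for example hand it to OpenCV) before returning.
        }
        return S_OK;
    }

    // Not used in SampleCB mode.
    STDMETHODIMP BufferCB(double SampleTime, BYTE *pBuffer, long BufferLen)
    {
        return S_OK;
    }
};
After building the graph you query the Sample Grabber filter for ISampleGrabber, set the media type you want (for example RGB24) with SetMediaType, and pass an instance of this class to SetCallback before running the graph.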
See also:
How to capture frames using Delphi/DSPack without displaying it on TVideoWindow? (Delphi code, but good description and figures)
DirectShow: Examples for Using SampleGrabber for Grabbing a Frame and Building a VU Meter
SetLifeCamStudioResolutionSample - A small DirectShow project showing how to set capture up, including resolution on camera, and Sample Grabber, and also missing SDK declarations; related Q is Can't make IAMStreamConfig.SetFormat() to work with LifeCam Studio
Building the Filter Graph on Sample Grabber and Null Renderer
I think you can include OpenCV in your existing project; I've done that for a console application. You will need to add the path to the OpenCV headers and the path to the OpenCV libs in the property pages of your current project.
Go to the project properties:
1. To add headers:
C/C++ -----> Additional Include Directories ---> add the OpenCV include directories here (you may want to include multiple directories)
2. To add libs:
Linker -----> Additional Library Directories ----> add the OpenCV lib directory here (and list the required .lib files under Linker -----> Input -----> Additional Dependencies)
To create an IplImage from buf, you can use the following once you have the height and width of the image:
IplImage *m_img_show;
CvSize cv_img_size = cvSize(m_mediaInfo.m_width, m_mediaInfo.m_height);
m_img_show = cvCreateImageHeader(cv_img_size, IPL_DEPTH_8U,3);
cvSetData(m_img_show, m_pBuffer, m_mediaInfo.m_width*3);
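If you prefer the C++ API over IplImage, a similar wrap might look like this (a sketch: it assumes a packed 24-bit RGB buffer whose stride is width*3, i.e. width*3 is already a multiple of 4, and that the frame is delivered bottom-up as DirectShow RGB24 frames normally are):
#include <opencv2/opencv.hpp>

// Wrap a raw RGB24 buffer from DirectShow in a cv::Mat without copying,
// then flip it because RGB24 samples are usually delivered bottom-up.
// width/height must come from the connected media type (VIDEOINFOHEADER).
cv::Mat wrapFrame(unsigned char *buf, int width, int height)
{
    cv::Mat bottomUp(height, width, CV_8UC3, buf, (size_t)width * 3);
    cv::Mat upright;
    cv::flip(bottomUp, upright, 0);   // copies the data the right way up
    return upright;
}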
I think a preview of the image is quite helpful. It seems that your hook above takes data from the renderer. If you want, you could change your renderer and use it in windowless mode. Another option would be to use the Sample Grabber filter.
I'm trying to create a DirectX device in full screen (up until now I've been doing windowed), but the device won't get created and I get an invalid-call HRESULT failure.
This is my code:
md3dPP.BackBufferWidth = 1280;
md3dPP.BackBufferHeight = 720;
md3dPP.BackBufferFormat = D3DFMT_UNKNOWN;
md3dPP.BackBufferCount = 1;
md3dPP.MultiSampleType = D3DMULTISAMPLE_NONE;
md3dPP.MultiSampleQuality = 0;
md3dPP.SwapEffect = D3DSWAPEFFECT_DISCARD;
md3dPP.hDeviceWindow = mhMainWnd;
md3dPP.Windowed = false;
md3dPP.EnableAutoDepthStencil = true;
md3dPP.AutoDepthStencilFormat = D3DFMT_D24S8;
md3dPP.Flags = 0;
md3dPP.FullScreen_RefreshRateInHz = D3DPRESENT_RATE_DEFAULT;
md3dPP.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;
HR(md3dObject->CreateDevice(
D3DADAPTER_DEFAULT, // primary adapter
mDevType, // device type
mhMainWnd, // window associated with device
devBehaviorFlags, // vertex processing
&md3dPP, // present parameters
&m_pd3dDevice)); // return created device
Notice md3dPP.Windowed = false; if that's true, the device is created in windowed mode.
I'm under the impression I've made a mistake in some of my default values but have no idea where to look. Is there a way to get a more detailed report on why the device creation failed, beyond D3DERR_INVALIDCALL?
You need to specify a different value for BackBufferFormat because only windowed apps allow the value D3DFMT_UNKNOWN. Pick one that is supported by your device (you can check by using CheckDeviceFormat()).
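For example (a sketch using the names from the question; D3DFMT_X8R8G8B8 is just one common choice):
// Sketch: pick an explicit back buffer format for fullscreen and verify the
// adapter supports it before calling CreateDevice. Names (md3dObject, mDevType,
// md3dPP) are taken from the question.
D3DFORMAT fmt = D3DFMT_X8R8G8B8;                 // common 32-bit display format
HRESULT hrFmt = md3dObject->CheckDeviceType(
    D3DADAPTER_DEFAULT,   // primary adapter
    mDevType,             // e.g. D3DDEVTYPE_HAL
    fmt,                  // adapter (display) format used in fullscreen
    fmt,                  // back buffer format
    FALSE);               // FALSE = fullscreen

if (SUCCEEDED(hrFmt))
    md3dPP.BackBufferFormat = fmt;
else
    // fall back to another format, or enumerate supported modes with
    // md3dObject->EnumAdapterModes() and pick one from there
    md3dPP.BackBufferFormat = D3DFMT_R5G6B5;
If you want more detail than D3DERR_INVALIDCALL, enabling the Direct3D 9 debug runtime in the DirectX Control Panel should make the runtime log the exact reason for the failed call to the debugger output.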