I use gnome-terminal (Ubuntu 10.10). I like it, though I'd be willing to switch to another for this feature.
Can anyone tell me how I can broadcast keystrokes to multiple windows? The closest thing I've found is the "Terminator" program, which allows for broadcasting to multiple tabs, but not to multiple windows. Apparently a similar feature was removed from v3 of Konsole when it went to v4 (no idea why). There are also similar capabilities in screen, but not between windows, as far as I can tell.
I've spent a number of hours looking, but no joy.
I'd also be open to a general solution (sending input to multiple windows of any kind) that I could adapt for use with terminal windows.
Thanks.
Try a program called keyboardcast: apt-get install keyboardcast
For the source: http://web.archive.org/web/20100130104001/http://desrt.mcmaster.ca/code/keyboardcast/
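If keyboardcast doesn't work out, another route is scripting xdotool, which can send synthetic key events to arbitrary X windows. Here is a rough sketch (not a tested solution) that broadcasts a command to every window matching a given class; the class name gnome-terminal and the trailing Return are assumptions you may need to adjust, and some terminals ignore synthetic events sent to unfocused windows:

#!/usr/bin/env python3
# Sketch: broadcast a line of text to every window whose class matches,
# using xdotool (must be installed). Some applications ignore synthetic
# events delivered to unfocused windows, so test with your terminal first.
import subprocess
import sys

def find_windows(wm_class="gnome-terminal"):
    # `xdotool search --class` prints one X window id per line.
    result = subprocess.run(["xdotool", "search", "--class", wm_class],
                            capture_output=True, text=True)
    return result.stdout.split()

def broadcast(text, wm_class="gnome-terminal"):
    for window_id in find_windows(wm_class):
        # Type the text into the window, then press Return to run it.
        subprocess.run(["xdotool", "type", "--window", window_id, text])
        subprocess.run(["xdotool", "key", "--window", window_id, "Return"])

if __name__ == "__main__":
    broadcast(" ".join(sys.argv[1:]) or "echo hello")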
I wanted to install the OV7251 camera driver to work with a module I've recently purchased, the Arducam OV7251 MIPI, as I need to run a SLAM-like system called Visual Inertial Navigation (VIN), and global shutter cameras are preferred for this. As far as my system goes, I'm using ROS Kinetic on an RPi 3B+ running Ubuntu 16.04. I chose this camera because it is near my price point (<$20) and goes through the RPi's CSI port, which sources say is easier and faster than USB.
I wanted to take this camera and publish its data to a topic so that the repository I'm using for VIN, OpenVINS, can track the camera's position. Now, the camera I'm using doesn't have much documentation beyond the manufacturer's GitHub page, whose code does not work on Ubuntu and cannot connect to ROS. I'm fairly inexperienced with RPis, ROS included, since I originally wanted to do this on an Arduino, but that turned out to be impossible, so I doubt I could write even a simple ROS node, let alone one that connects to the CSI port.
Currently, I am unable to find many libraries for this, and the help given to me so far has not amounted to much. The camera does not have drivers natively supported on the RPi, which is why I cannot find any /dev/video device, cheese turns up nothing, and the command vcgencmd get_camera reports no detected devices. Someone suggested kernel hacking in order to enable the module in menuconfig, using libraries like the ones here. While I do not know much about kernel hacking, he recommended that I follow this guide, and that after running the defconfig line I should search for "OV7251" in menuconfig and modularize the only entry that popped up. Despite flashing and repeating this process multiple times to make sure I did not choose the wrong branch (rpi-5.4.y) or the wrong model (RPi 3B+), I ended up stuck on the rainbow screen after every reboot. I know the rainbow screen means either low power, which it isn't because the board ran fine before, or a kernel error, which seems most likely.
Now, while I would definitely like to fix the rainbow screen error, I would also like to know: after installing the OV7251 driver, how do I get it working with ROS so it can send data to topics? Since I doubt I could write my own node, is there a library I could look for to do this, would libraries that previously failed due to the missing driver suddenly work now, or would I have to take an existing one and modify it? In any case, a fairly low-level tutorial for accomplishing this would be quite handy, seeing as I am new to this.
But in case this is not a software problem, and there is a good reason this camera is not supported, is there any other cheap global shutter camera I can work with? I couldn't seem to find many over my various searches, but maybe you all have better luck/experience in this field. I did manage to find another library by the same manufacturer which happens to support my camera model and even has a ROS node that works on Ubuntu. However, I believe that if that can be done, it should also be possible over just the CSI port rather than buying an additional $40 USB camera HAT for the Pi, and on top of that, I am starting to doubt the quality of this company's repositories.
Yet the fact that I am finding so little information about this camera on the CSI port of an RPi, given how well known this company is, makes me worry that it could be impossible; if it is, please do link me to some other good and hopefully well-documented cameras, which could very well be a lot to ask for. And if it is simply impossible to get the results I want with the parameters I have set, how badly would a rolling shutter camera affect VIN's performance, and is there any special dataset designed for rolling shutter that could minimize the drop in quality? This terrain is all too new to me.
OK, so I got a Raspberry Pi engineer to add a device tree overlay for the OV7251 to the Pi's firmware, and the most recent rpi-update includes the overlay in the kernel.
I ran sudo rpi-update to install the update, then added dtoverlay=ov7251 to /boot/config.txt (editing it with sudo nano /boot/config.txt) to enable the overlay. The repository only has one dependency, v4l-utils, which is installed easily enough with sudo apt-get install v4l-utils. Finally I ran sudo reboot to apply the changes.
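As a quick sanity check that the overlay actually loaded (this is just a sketch on my part, not part of the original procedure), you can look for a /dev/video node and ask v4l2-ctl about it; the device is usually /dev/video0 but may differ on your setup:

#!/usr/bin/env python3
# Rough check: is there a V4L2 device after enabling dtoverlay=ov7251?
# Assumes v4l-utils is installed (for the v4l2-ctl command).
import glob
import subprocess

devices = glob.glob("/dev/video*")
if not devices:
    print("No /dev/video* nodes found - the overlay may not have loaded.")
for dev in devices:
    print("Found", dev)
    # Dump driver name, supported formats and controls for this device.
    subprocess.run(["v4l2-ctl", "--device", dev, "--all"])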
To pull the images into ROS, I edited a V4L2 node called usb_cam to accept the pixel format the OV7251 uses (Y10). My fork can be found here. To install it (since the docs for the original repo say very little about installation), I ran:
cd ~/catkin_ws/src
git clone https://github.com/ai-are-better-than-humans/usb_cam.git
cd ..
catkin_make
and then after that all you have to do is roslaunch usb_cam usb_cam-test.launch to start the node. Mine started out dark, so I had to go into the launch file and play with the brightness for a bit. While you're there, make sure the pixel_format parameter is set to Y10.
You should get a sensor_msgs::Image message being published to a topic named "<camera_name>/image_raw"; you can run rqt_graph to visualize it. Big thanks to 6by9 over at the Raspberry Pi forums; I don't think I could have gotten it done without him, and he did a lot of work that I'm very thankful for. Thought I'd share the knowledge back here in case anyone finds it useful.
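If you want to check the images from your own code rather than the test launch file, a bare-bones rospy subscriber like the sketch below should do it. The topic name /usb_cam/image_raw is an assumption based on the default camera name; check rqt_graph for whatever your launch file actually publishes.

#!/usr/bin/env python
# Minimal sketch: subscribe to the camera topic and print basic frame info.
# Adjust the topic name to match what rqt_graph shows on your system.
import rospy
from sensor_msgs.msg import Image

def callback(msg):
    # The Y10 frames arrive as sensor_msgs/Image; just report size and encoding.
    rospy.loginfo("got frame: %dx%d, encoding=%s", msg.width, msg.height, msg.encoding)

if __name__ == "__main__":
    rospy.init_node("ov7251_listener")
    rospy.Subscriber("/usb_cam/image_raw", Image, callback)
    rospy.spin()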
EDIT: I hear you can also compile with catkin_make --pkg usb_cam -DCMAKE_BUILD_TYPE=Release instead of catkin_make if the node takes too much CPU. Also, if you see a ton of error messages while compiling, it's fine, it should still work; if you want to get rid of them you can refer to this answer from a ROS thread:
It looks like you need to install libavcodec. I don't know the exact command to install it off the top of my head, but the format will look like this:
sudo apt-get install libavcodec
The exact package name might not be libavcodec. It may look something like libavcodec-VERSION-NUMBER or libavcodec-dev. In these situations you can search for packages with a command like this:
apt-cache search libavcodec
This will find all packages whose text contains "libavcodec", which should find the correct package for you to install.
I have recently moved to Linux full time, and am enjoying the learning curve. However, one particular thing has me stumped big time: some of the Fn key combinations on my laptop are not working, specifically Volume up/down, Mute, etc. Combinations that are working include WLAN, Sleep, Video cycle, the numeric pad, etc. I can rule out a hardware fault, since the keys worked perfectly fine on Windows 7 (although only when the laptop maker's hotkey software was installed).
I have scoured the net for possible explanations, and have come across the concepts of scancode (HW dependent), keycode and keysym. I think I understand the basics, and have found that console and X have their own mappings, and need to be remapped separately. The console uses the kernel mapping of scancodes to keycodes, but X for some reason has its own mapping. For my part, I have tried:
Set the boot parameter atkbd.softraw=0
Switched to console mode by Ctrl + Alt + F1
Used showkey --scancodes. Unfortunately, the keys that I am trying to get working do not show any scancode output
Used dmesg to see if any "unknown key pressed" events had occurred, but found none.
In my desperation, tried acpi_listen to see if the keys were firing ACPI events instead; only the sleep and video cycle keys do, the others produce no output at all
At this point, I thought maybe I should try getting scancodes from the X environment itself, using xev, but no luck.
I have come here as a last resort only. I hope somebody has a good explanation as to why some of the function key combinations are not generating any output in the tools I have tried above. If it helps, I am using Linux Mint 17.3 Cinnamon, and the laptop is made by HCL. evtest shows the keyboard device to be AT Translated Set 2 keyboard. If more info is needed, I would be happy to oblige. Thanks.
EDIT: No relevant BIOS setting is available.
Confession: All my knowledge on this is based on what I have been reading up on Arch wiki, Ubuntu wiki, a whole lot of forum posts and other websites. So, if I am technically wrong about something, please bear with me, and correct me. I love learning this stuff :)
Yes, some keys on USB keyboards might not generate a scan code sent via the USB HID keyboard protocol but instead use a different USB protocol to communicate some user input. From what you've described, that's most likely what's happening here. You may be able to use programs from the evemu-tools package (that's the Debian name) or the older evtest program to find out more about what your particular device is doing for the keys that appear not to be sending keyboard scan codes.
(It also seems, from reading The Unix & Linux SE question "How to get all my keys to send keycodes" that there's something going on with keycodes above 255, but I'm not clear on what's going on there.)
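If you prefer poking at this from a script rather than reading raw evtest output, the python-evdev bindings can dump both the key events and the MSC_SCAN scan codes for the device evtest reported. This is only a sketch, assuming a python-evdev package (name varies by distro) is installed and that you run it with access to /dev/input (usually root); if the problem combinations produce nothing here either, the keyboard controller simply isn't emitting anything for the kernel to translate, which would match what you saw with showkey.

#!/usr/bin/env python3
# Sketch: dump key events and scan codes from the "AT Translated Set 2 keyboard"
# device using python-evdev. Run as root (or with read access to /dev/input).
import evdev

devices = [evdev.InputDevice(path) for path in evdev.list_devices()]
keyboard = next(d for d in devices if "AT Translated Set 2 keyboard" in d.name)
print("Reading from", keyboard.path, "- press the Fn combinations now (Ctrl-C to stop)")

for event in keyboard.read_loop():
    if event.type == evdev.ecodes.EV_MSC and event.code == evdev.ecodes.MSC_SCAN:
        print("scan code: 0x%x" % event.value)
    elif event.type == evdev.ecodes.EV_KEY:
        # categorize() renders the keycode name and key up/down state.
        print(evdev.categorize(event))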
There's also an error in your understanding of the layering:
The console uses the kernel mapping of scancodes to keycodes, but X for some reason has its own mapping.
This is not quite correct. The kernel maps scan codes to keycodes between 1 and 255; you can see this mapping with getkeycodes(8) and change it with setkeycodes(8) or udev. (The Arch Wiki page Map scancodes to keycodes has many details on this.) Not all scan codes have a mapping to a keycode; a scan code arriving with no translation entry is what you would have seen in dmesg, had there been any.
Only after the scan code is converted to a kernel keycode do the console and X11 have access to these; each has its own mechanism to translate keycodes to actions.
Note that the console program showkey -s does not show actual scan codes that have been received; it reads keycodes (as shown by showkey -k) and translates those back into scan codes using the kernel table shown by getkeycodes(8).
It might depend upon the X11 window manager. You should try using xev(1) to understand what is going on.
Maybe using some other desktop environment, such as Xfce, LXDE, GNOME, KDE, or IceWM, might help.
Maybe explicitly configuring your keyboard (e.g. in /etc/X11/xorg.conf...) might help.
I have been using MacVim for a while, always the non-terminal version. Recently I started using tmux and I would like to be able to use Vim inside a tmux session. Only when I started to tweak my settings did I realise that the terminal experience would not be as smooth as the standalone MacVim one.
I am not talking about speed issues; it is mostly things like key mappings behaving differently. I have already given up trying to get the Option (Alt) key working, but at least I would like to have things stable in general.
For example, a key mapping that works perfectly in the non-terminal Vim suddenly behaves as if CTRL were no longer a modifier. Discrepancies like that just make things extremely hard.
What are the most important configuration options that might improve MacVim's stability when running in the terminal?
In fact, the answer is to focus on the various aspects individually and find solutions for each. The first major issue is the way keys are interpreted by the terminal application. In most cases, the terminal emulator will not be able to distinguish between CTRL-F10 and F10. So in cases where F10 performs action_A, SHIFT-F10 performs action_B and CTRL-F10 performs action_C, there will be confusion between action_B and action_C if the CTRL key code is not interpreted correctly. I now know that iTerm2 is capable of sending specific ESC sequences to the running process. I will focus on those.
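To see exactly which byte sequence the terminal delivers for a given combination (and therefore whether CTRL-F10 and F10 are distinguishable at all before Vim ever sees them), a small raw-mode reader helps; this is just a generic sketch, not tied to iTerm2 or tmux:

#!/usr/bin/env python3
# Sketch: print the raw bytes the terminal sends for each keypress, so you can
# compare what F10, SHIFT-F10 and CTRL-F10 actually deliver. Press q to quit.
import sys
import termios
import tty

fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)
try:
    tty.setraw(fd)                         # raw mode: no line buffering, no echo
    while True:
        data = sys.stdin.buffer.read1(16)  # whatever bytes arrived for one keypress
        print(repr(data), end="\r\n")      # \r\n because we are in raw mode
        if data == b"q":
            break
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, saved)

If two combinations print identical sequences, no amount of Vim configuration can tell them apart; the terminal emulator (for example via iTerm2's key mappings) has to be made to send distinct sequences first.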
I'm running an up-to-date Gentoo on my Sager NP8298 (Clevo P177SM-A), and I am heartbreakingly close to having all of my hardware running beautifully. I found a nice open source driver to run my keyboard backlight at this GitHub repo, but it was made for a Clevo chassis that doesn't have the touchpad light that mine does. Kinda tacky, I know, but the default color for the touchpad light is blue, which can be kind of distracting when the keyboard is set to a different color.
I'd at least like to be able to turn the light off, if not control its color. I have a Windows install and am able to access the proprietary driver that came with the computer. I just don't quite know where to start on modifying this driver; if there are some Windows utilities I could use to see what the driver is doing and how to access the LED programmatically, it would be a huge help. Any ideas?
Other functionality that I'd like to add is Fn+Num pad 7 through 9 for toggling the left, center, and right part of the keyboard individually, and Fn+5 for a num pad light toggle, as the Windows driver does. I just need to know what signals need to be sent to the hardware and how to send them.
Whatever I end up with I'll be sure to fork the project and share the results with other users of this hardware.
You need the source code of the driver you want to change. With that and all the required bits and bobs (a.k.a. dependencies) you can change it to do whatever you want.
That said, there are quite a few things to consider. You need to know, at least at a reasonable level, the language the driver is written in and any platform dependencies.
I've done similar work on some network drivers about 15 years ago, and no, it's not a fun job.
I have seen several tricks to switch vim theme depending on the time of day, but I want to switch depending on the light in the room and thought that maybe I could use the webcam. Has anyone seen such a vim bundle?
If I were to take an image with the webcam and compute an average RGB value from it once every minute, I would not know how much the image had been brightened by the camera/drivers.
I would be using it with Arch Linux and GNOME on a ThinkPad; it would also be nice to use this for theming other applications as well.
Any ideas?
The accepted answer here claims to get usable results; they use downscaling of the image for averaging and then measure the maximum luminosity(?) of the thumbnail. There's no mention of any correction for adjustments done by the camera driver. You will probably have to do some calibration.
The linked solution uses Python and OpenCV, so it should work with most cameras on Linux. Also, you can pretty easily write Vim plugins in Python. It might not be a good idea to poll a script like this every minute, though, since Vim has no good support for asynchronous operations.
One cheap alternative would be to have the purely Python-based light measurement running in its own process and communicating with Vim by calling vim --servername foo --remote-send [command]. This only works as long as you have a single Vim instance, though.
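As a rough illustration of that approach (the threshold, polling interval, vim server name and colorscheme names below are all arbitrary placeholders, and nothing here corrects for the camera's auto exposure), something like this could run as the separate process:

#!/usr/bin/env python3
# Sketch: measure ambient brightness with OpenCV and switch the Vim colorscheme
# via --remote-send. Threshold, interval and scheme names are placeholders.
import subprocess
import time

import cv2

def measure_brightness():
    cap = cv2.VideoCapture(0)            # default webcam
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    small = cv2.resize(frame, (64, 48))  # downscale before averaging, as in the linked answer
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    return gray.mean()                   # 0 (dark) .. 255 (bright)

def set_colorscheme(name):
    subprocess.run(["vim", "--servername", "foo", "--remote-send",
                    "<Esc>:colorscheme %s<CR>" % name])

while True:
    level = measure_brightness()
    if level is not None:
        set_colorscheme("morning" if level > 100 else "desert")
    time.sleep(60)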