How to create a vim bundle to switch theme depending on webcam light measurement - vim

I have seen several tricks to switch vim theme depending on the time of day, but I want to switch depending on the light in the room and thought that maybe I could use the webcam. Has anyone seen such a vim bundle?
If I were to take an image with the webcam and compute an average RGB value from it once every minute, I would not know how much the image had been brightened by the camera/drivers.
I would be using it on Arch Linux with GNOME on a ThinkPad. It would also be nice to use this for theming other applications as well.
Any ideas?

The accepted answer here claims to get usable results; it downscales the image to a thumbnail for averaging and then measures the maximum luminosity(?) of that thumbnail. There's no mention of any correction for adjustments done by the camera driver, so you will probably have to do some calibration.
The linked solution uses Python and OpenCV, so it should work with most cameras on Linux. Also, you can pretty easily write vim plugins in Python. It might not be a good idea to poll a script like this every minute from within vim, though, since vim has no good support for asynchronous operations.
One cheap alternative would be to have the purely Python-based light measurement running in its own process and communicating with vim by calling vim --servername foo --remote-send [command]. This only works as long as you have a single vim instance, though.
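For illustration, here's a minimal sketch of that setup, assuming OpenCV's Python bindings (cv2) are available and vim was started as vim --servername FOO; the brightness threshold and polling interval are made-up values you would have to calibrate yourself:

```python
import subprocess
import time

import cv2

def measure_brightness(device=0):
    """Grab one frame, downscale it, and return mean brightness (0-255)."""
    cap = cv2.VideoCapture(device)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    # Downscale so the average is cheap and less sensitive to noise.
    thumb = cv2.resize(frame, (16, 16))
    gray = cv2.cvtColor(thumb, cv2.COLOR_BGR2GRAY)
    return gray.mean()

def set_vim_background(dark):
    """Tell the running vim server to switch its background."""
    value = "dark" if dark else "light"
    subprocess.call(["vim", "--servername", "FOO", "--remote-send",
                     "<Esc>:set background={0}<CR>".format(value)])

if __name__ == "__main__":
    while True:
        brightness = measure_brightness()
        if brightness is not None:
            set_vim_background(dark=brightness < 80)  # threshold needs calibration
        time.sleep(60)
```

Calibration could be as simple as logging readings for a day and picking a cut-off between your typical "lights on" and "lights off" values.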

Related

Gesture Recognition for my Grandma (Kinect) Linux

I'm looking into making a project with the Kinect to allow my Grandma to control her TV without being daunted by using the remote. So, I've been looking into basic gesture recognition. The aim will be to, say, turn the volume of the TV up by sending the right IR code to the TV when the program detects that the right hand is being "waved."
The problem is, no matter where I look, I can't seem to find a Linux-based tutorial which shows how to do something as a result of a gesture. One other thing to note is that I don't need any GUI apart from the debug window, as a GUI would slow my program down a fair bit.
Does anybody know of something, somewhere, that will let me constantly check for some hand gesture in a loop and control something when one is detected, without the need for any GUI at all, and on Linux? :/
I'm happy to go for any language but my experience revolves around Python and C.
Any help will be accepted with great appreciation.
Thanks in advance
Matt
In principle, this concept is great, but the number of features a remote offers is going to be hard to replicate with a set of gestures that an older person can memorize. They will probably be even less incentivized to do this (learning new things sucks) if they already have a solution (the remote), even though they really love you. I'm just warning you.
I recommend you use OpenNI and NITE. Note that the current version of OpenNI (2) does not have Kinect support. You need to use OpenNI 1.5.4 and look for the SensorKinect093 driver. There should be some gesture code that works for that (googling OpenNI Gesture yields a ton of results). If you're using something that expects OpenNI 2, be warned that you may have to write some glue code.
The basic control set would be Volume +/-, Channel +/-, Power on/off. But that will be frustrating if she wants to go from Channel 03 to 50.
I don't know how low-level you want to go, but a really, REALLY simple gesture recognizer could look for horizontal and vertical swipes of the right hand exceeding an (averaged) velocity threshold; see the sketch below. Be warned: detected skeletons can get really wonky when people are sitting (that's actually a bit of what my PhD is on).
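To illustrate just the velocity-threshold part (this is not OpenNI/NITE code; the hand positions are assumed to come from whatever tracker you end up using, and the numbers are placeholders):

```python
from collections import deque

class SwipeDetector:
    """Detect left/right/up/down swipes from a stream of (x, y) hand positions.

    Positions are assumed to arrive at a fixed frame rate; the velocity is
    averaged over a short window to smooth out tracker jitter.
    """

    def __init__(self, fps=30, window=5, threshold=1.0):
        self.dt = 1.0 / fps
        self.positions = deque(maxlen=window)
        self.threshold = threshold  # metres/second; needs tuning

    def update(self, x, y):
        self.positions.append((x, y))
        if len(self.positions) < self.positions.maxlen:
            return None
        x0, y0 = self.positions[0]
        x1, y1 = self.positions[-1]
        elapsed = self.dt * (len(self.positions) - 1)
        vx = (x1 - x0) / elapsed
        vy = (y1 - y0) / elapsed
        if abs(vx) > self.threshold and abs(vx) > abs(vy):
            return "right" if vx > 0 else "left"   # e.g. channel +/-
        if abs(vy) > self.threshold and abs(vy) > abs(vx):
            return "up" if vy > 0 else "down"      # assuming y grows upward
        return None

# Usage, per tracker frame:
#   action = detector.update(hand_x, hand_y)
```

Each detected swipe would then map to one IR code (volume up/down, channel up/down), which keeps the gesture vocabulary small enough to memorize.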

Shell formatting language

On Linux, console applications have the ability to format their output. They can set the font color, set the background color, and place characters anywhere on the console. Using that, it is possible, for example, to implement a Tetris game right in the console.
I'm wondering how one can do that. I think they use an output markup language or something similar. Can anyone tell me where I can learn more about this?
Thanks very much!
Most console applications involving a lot of motion or color are built using the ncurses library. Some very common examples would be irssi (IRC client), mc (Midnight Commander, the console file browser), and mutt (POP3/IMAP mail client).
It seems like you are already aware of the escape codes used to modify console colors. A good list of console color escape sequences (for Bash) can be found here.
You obviously need to get a hold of those ever-popular Unix video games: rogue, srogue, larn, hack, and/or nethack. They have a long and venerable history.
Notably, these all use the standard curses — or more recently, ncurses — library. Here’s a screen shot.
Since they have no joystick, motion is with vi commands. They are hands-down the very best way to hone your vi motion skills ever invented: no more two-finger typing for you! You stop thinking about motion; it just becomes a part of your fingers’ muscle memory. You really have to play them to get a feel for the awesome “Zen” state you can get into playing them:
After enough practice, it feels as though your fingers themselves remember how to play the piece. You don’t even watch them. They've a job to do, and once they’ve learned it, can go about that job remarkably free of direct supervision. The key to clearing the mind of the outside world, so that the program becomes the dominant reality, is what a musician would call “finger memory”. (You might have heard athletes or dancers refer to it as muscle memory, but when we’re talking about using the computer, it really is the fingers that count.)
[...] Of course, that's not really what’s going on; it only seems to be. Your fingers don’t really remember. But a part of your brain that controls them does, even though “you” don’t realize it. What’s happened is that you've so successfully assimilated the moves needed that conscious direction is no longer required. The little lighthouse keeper behind your forehead can worry about other things, assured that your fingers will do the job you’ve trained them to do. Your eyes are on the screen, the program in your head, and your head is in the program. Your fingers become an unnoticed extension of your will. [...]
[...] There’s no question that, for certain tasks, the keyboard is clearly the optimally efficient input device. Consider the game of rogue or one of its more recent incarnations. You wouldn’t want to use anything but a keyboard there. The command set is just too rich. Trying to play the game with a mouse‐and‐menu interface instead of a keyboard one would slow you down by at least two orders of magnitude.
The rogue family of video games are also notable for showing how to write a video game for a regular terminal like a vt100 or an xterm, which I believe is what you are looking for. I’d probably use a more modern language than C these days, but all the same principles still apply. Both Perl and Python have good interfaces to these standard libraries.
It's not so much a markup language as a series of escape sequences that tell the terminal to format output in a certain way.
You can send ANSI escape sequences before your output to indicate that the following output should have a certain color, weight, or background. You can also send sequences that jump the cursor to specific locations to continue writing output.
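For example (Python is used here just as a convenient way to print the sequences; the codes themselves are standard ANSI/VT100):

```python
# \033[ starts an ANSI escape sequence (the "CSI" prefix).
CSI = "\033["

print(CSI + "2J")             # clear the screen
print(CSI + "5;10H" + "@")    # move the cursor to row 5, column 10 and print '@'
print(CSI + "31;47m" + "red text on a white background" + CSI + "0m")
# 31 = red foreground, 47 = white background, 0 = reset all attributes
```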
If you are going to do a full-blown app, you should consider using a library such as ncurses, which makes these sequences manageable.
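If you go that route from Python, the standard library's curses module wraps ncurses; a minimal sketch:

```python
import curses

def main(stdscr):
    curses.curs_set(0)                # hide the cursor
    stdscr.clear()
    stdscr.addstr(5, 10, "Hello from curses!", curses.A_BOLD)
    stdscr.refresh()
    stdscr.getkey()                   # wait for a keypress before exiting

# wrapper() initializes the terminal and restores it on exit or error.
curses.wrapper(main)
```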

Google Chrome over Linux FrameBuffer

I am working on a project where I need to run Google Chromium over the Linux framebuffer. I need to run it without any windowing-system dependency (it should draw into a buffer we provide, which would make porting it to any embedded system very easy). I do not need its multi-tab GUI, I just need its renderer window in the buffer. Has anybody ever tried this? Any help on what approach I should use?
If you need to have some direct control of the window functions, or want to poke around in the DOM data, then the right way to solve this problem is probably to look at embedding WebKit directly. This will be much faster and cleaner than what I am about to suggest.
Now, let's suppose you don't need all that fancy control and that you are really lazy. An ancient, low-tech solution to your problem could be to create a virtual framebuffer and then read its contents directly. To do this, you can set up xvfb on your server:
http://www.x.org/releases/X11R7.6/doc/man/man1/Xvfb.1.xhtml
xvfb is an old Unix tool that lets you create a virtual X server with whatever type of configuration you want. More importantly, it can be configured to write the contents of its X server's screen directly to a memory-mapped file! You can also set it up to use shared memory, which is a bit faster, though also more complicated.
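A rough sketch of how that could be wired up, assuming Xvfb and a chromium binary are on your PATH; the display number and geometry are arbitrary, and the name of the mapped file (Xvfb_screen0 here) may differ between Xvfb versions, so check your fbdir:

```python
import os
import subprocess
import time

# Start a virtual X server whose screen is backed by a memory-mapped
# file in /tmp (see the -fbdir option in the Xvfb man page).
xvfb = subprocess.Popen(
    ["Xvfb", ":99", "-screen", "0", "1024x768x24", "-fbdir", "/tmp"])
time.sleep(2)  # crude: give the server time to come up

# Point Chromium at the virtual display.
env = dict(os.environ, DISPLAY=":99")
chromium = subprocess.Popen(["chromium", "http://example.com"], env=env)

# The screen contents can now be read from the mapped file (xwd format).
framebuffer_path = "/tmp/Xvfb_screen0"
```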
I guess you will have better luck with uzbl and GTK/DirectFB. Same engine, and it works with JavaScript. For the Facebook chat issue, I think you just have to change the user-agent string.
There is the Origyn Web Browser, which is supposed to be an embedded WebKit-based browser that looks portable and does not depend on "heavy" libraries (like GTK). Their web page is http://www.sand-labs.org/owb, but it looks like their database crashed, which is perhaps a little worrying.
Try porting the WebKit engine to NetSurf's framebuffer-based code.
HTH
You could buy one of the remaining 10 (or so) OGD1 boards.
http://en.wikipedia.org/wiki/Open_Graphics_Project
Then you can talk directly to hardware using libpci.
However, you will still need code that draws a picture into a memory buffer.
I realize this answer is more of a shameless plug, but people who are interested in your question might want such a board. I already have one, and it would help a lot if the project got more exposure.
This project:
http://code.google.com/p/wkhtmltopdf/
achieves that: it runs WebKit on a virtual display and captures the rendered output in the form of a PDF. You can customize it to do something else.
Or you can create a display with tightvnc, and set the DISPLAY variable so that Chrome renders into that display.
I suggest using the webkit2pdf package (which is available for many different Linux distributions). Then use fbgs which is a wrapper for the fbi frame buffer program, that displays PDF files right on the frame buffer.

Colour blindness simulator

Like any responsible developer, I'd like to make sure that the sites I produce are accessible to the widest possible audience, and that includes the significant fraction of the population with some form of colour blindness.
There are many websites which offer to filter a URL you feed it, either by rendering a picture or by filtering all content. However, both approaches seem to fail when rendering even moderately complex layouts, so I'd be interested in finding a client-side approach.
The ideal solution would be a system filter over the whole screen that can be used to test any program. The next best thing would be a browser plugin.
I came across Color Oracle and thought it might help. Here is the short description:
Color Oracle is a colorblindness simulator for Windows, Mac and Linux. It takes the guesswork out of designing for color blindness by showing you in real time what people with common color vision impairments will see.
Color Oracle is great, but another option is KMag, which is part of KDE in Linux. It's ostensibly a screen magnifier, but can simulate protanopia, deuteranopia, tritanopia and achromatopsia.
It differs from Color Oracle by requiring an additional window in which to display the re-coloured image, but an advantage is that one can modify the underlying image at the same time as previewing the simulation.
Here is a screenshot showing the original figure on the left, and the KMag window on the right, simulating protanopia.
Here's a link to a website that simulates various kinds of color blindness:
http://www.vischeck.com/
They let you check URLs and screenshots with three different types of color blindness (URL checking is a bit dated, though; the image check works better).
I'd encourage everyone to check their applications, btw. Seeing your own app through others' eyes may be an eye-opener (pun intended).
I know this is quite an old question, but I've recently found an interesting solution for transparently simulating color blindness.
When working with Linux, you can simulate color blindness using the Color Filter plugin for Compiz. It comes with profiles for deuteranopia and protanopia and changes the colors of the whole screen in real time.
It's very nice because it works transparently in all applications (even within YouTube videos), but it will only work where Compiz is available, i.e. only under Linux.
Here's an article that has some guidelines for optimizing UI for color blind users:
Particletree » Be Kind to the Color Blind
It contains a link to another article with the kind of tools you were asking for:
10 colour contrast checking tools to improve the accessibility of your design | 456 Berea Street
A great paper that explains a conversion that preserves color differences is:
Detail Preserving Reproduction of Color Images for Monochromats and Dichromats (PDF)
I haven't implemented the filter, but I plan to when I have some more free time.
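The monochromat half of that title is the easy part: it is essentially a luminance conversion, which Pillow already performs with the ITU-R 601 weights when converting to 'L' mode. A minimal sketch:

```python
from PIL import Image

# Simulate complete color blindness (achromatopsia / monochromacy)
# by reducing the image to its luminance channel.
img = Image.open("screenshot.png").convert("RGB")
mono = img.convert("L")  # L = R*299/1000 + G*587/1000 + B*114/1000
mono.save("screenshot_mono.png")
```

The dichromat cases (protanopia and the like) need the paper's more involved color-space projection, which is what tools like Vischeck and Color Oracle implement.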
I found Colour Simulations easy to use on Windows 10. This software can apply a color-blind filter to a part of the screen or the whole screen, and what's great is that it lets me interact with my PC normally even in fullscreen mode. It runs quite slowly on my 4K screen with an integrated graphics card, though.

Fast, Pixel Precision 2D Drawing API for Graphics App?

I would like to create a cross-platform drawing program. The one requirement for my app is that I have pixel-level precision over the canvas. For instance, I want to write my own line-drawing algorithm rather than rely on someone else's. I do not want any form of anti-aliasing (again, pixel-level control is required). I would like the user's interactions on the screen to be quick and responsive (pending my ability to write fast algorithms).
Ideally, I would like to write this in Python, or perhaps Java as a second choice. The ability to easily make the final app cross-platform is a must. I will submit to different APIs on different OSes if necessary, as long as I can write an abstraction layer around them. Any ideas?
Addendum: I need the ability to draw on-screen. Drawing out to a file I've got figured out.
I just this week put together some slides and demo code for doing 2D graphics using OpenGL from Python with the pyglet library. Here's a representative post: Pyglet week 2, better vertex throughput (or 3D stuff using the same basic ideas)
It is very fast (relatively speaking, for Python); I have managed to get around 1,000 independently positioned and oriented objects moving around the screen, each with about 50 vertices.
It is very portable; all the code I have written in this environment works on Windows, Linux, and Mac (and even obscure environments like PyPy) without me ever having to think about it.
Some of these posts are very old, with broken links between them. You should be able to find all the relevant posts using the 'graphics' tag.
The Pyglet library for Python might suit your needs. It lets you use OpenGL, a cross-platform graphics API. You can disable anti-aliasing and capture regions of the screen to a buffer or a file. In addition, you can use its event handling, resource loading, and image manipulation systems. You can probably also tie it into PIL (the Python Imaging Library), and definitely Cairo, a popular cross-platform vector graphics library.
I mention Pyglet instead of pure PyOpenGL because Pyglet handles a lot of ugly OpenGL stuff transparently with no effort on your part.
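As a flavor of what pixel-level drawing looks like there, here's a minimal sketch against the pyglet 1.x API (the coordinates and colors are arbitrary):

```python
import pyglet

window = pyglet.window.Window(320, 240)

# Each vertex is one pixel: two red pixels and one green one.
points = [(10, 15), (11, 15), (160, 120)]
colors = [(255, 0, 0), (255, 0, 0), (0, 255, 0)]

@window.event
def on_draw():
    window.clear()
    flat_verts = [c for p in points for c in p]
    flat_colors = [c for col in colors for c in col]
    # GL_POINTS renders unantialiased single pixels by default.
    pyglet.graphics.draw(len(points), pyglet.gl.GL_POINTS,
                         ('v2i', flat_verts),
                         ('c3B', flat_colors))

pyglet.app.run()
```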
A friend and I are currently working on a drawing program using Pyglet. There are a few quirks - for example, OpenGL is always double buffered on OS X, so we have to draw everything twice, once for the current frame and again for the other frame, since they are flipped whenever the display refreshes. You can look at our current progress in this subversion repository. (Splatterboard.py in trunk is the file you'll want to run.) If you're not up on using svn, I would be happy to email you a .zip of the latest source. Feel free to steal code if you look into it.
If language choice is open, a Flash file created with Haxe might have a place. Haxe is free, and a full, dynamic programming language. Then there's the related Neko, a virtual machine (like Java's, Ruby's, Parrot...) to run on Mac, Windows and Linux. Being in some ways a new improved form of Flash, naturally it can draw stuff. http://haxe.org/
Qt's Canvas and QPainter are very good for this job if you'd like to use C++, and they are cross-platform.
There is a Python binding for Qt, but I've never used it.
As for Java: with SWT, pixel-level manipulation of a canvas is somewhat difficult and slow, so I would not recommend it. On the other hand, Swing's Canvas is pretty good and responsive. I've never used the AWT option, but you probably don't want to go there.
I would recommend wxPython.
It's beautifully cross-platform, and you can get per-pixel control; if you change your mind about that, you can use it with libraries such as pyglet or agg.
You can find some useful examples of just what you are trying to do in the docs and demos download.
