Reviews for programmable, tiling window manager ion3 - keyboard

I find the concept of the programmable, tiling, keyboard-focused window manager ion3 very appealing, but I think it takes some time to customize it to your needs before you can really evaluate this totally different UI concept.
Therefore, I would like to read reviews from people who have used it for a longer time as an environment for programming (in particular with emacs/gcc).
(The ion3 author's policies concerning Linux distros are not easy for me to follow, but that should not be the point here...)

I use ion3 daily. It's a wonderful window manager. The tiling interface really lets you maximize screen real estate. Once you get it set up to your liking, it is much more efficient to navigate via the keyboard. Even moving applications between tiles isn't that hard once you get used to the tag/attach key sequence.
With ion3, Vimperator and the various shells I have open during the day -- I barely use the rodent.
The author's opinions aside -- a good resource for configuring/extending Ion to your liking can be found at:
Configuring and Extending Ion3 with Lua
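To give a flavour of what that Lua configuration looks like, here is a minimal sketch of some custom frame bindings as they might appear in ~/.ion3/cfg_bindings.lua. The names used (defbindings, kpress, META, WFrame.switch_next, WFrame.switch_prev, WFrame.attach_tagged) are taken from ion3's default configuration files as I remember them, so double-check them against your version before relying on this:

    -- Extra bindings for tabbed frames (ion3 Lua configuration).
    -- META is the modifier defined in cfg_ion.lua, e.g. META="Mod1+".
    defbindings("WFrame", {
        -- Cycle through the tabs of the current frame.
        kpress(META.."N", "WFrame.switch_next(_)"),
        kpress(META.."P", "WFrame.switch_prev(_)"),
        -- Pull previously tagged windows into this frame
        -- (the tag/attach sequence mentioned above).
        kpress(META.."A", "WFrame.attach_tagged(_)"),
    })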

I've been using Ion daily for nearly two years now. Good things:
Easy to use from the keyboard.
Handles multiple screens (Xinerama) very well (once you have the mod_xinerama plugin), especially as in my case the screens are different sizes.
Very predictable where windows will appear.
Splitting, resizing and moving windows is very easy.
Multiple, independent workspaces on each screen.
Very fast and reliable.
Bad things:
Too many different shortcuts. e.g. there are separate keys for moving to the next tab, next frame, next screen, and next workspace.
Applications that use lots of small windows together work really badly (e.g. the Gimp) because it maximises all of them on top of each other initially.
Sub-dialogs can cause trouble. Sometimes they open in a separate tab when you want them in the same tab, and sometimes they open in the same tab and take the focus when you want to continue interacting with the main window.
These things can probably be changed in the config files, but I haven't got around to it yet. Also, the actual C code is easy to read, and on the few occasions where I've wanted to fix something it has been very easy. I don't feel tempted to go back to a non-tiling WM, anyway.
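For what it's worth, the usual starting point for taming the GIMP behaviour is a winprop in the configuration. The sketch below assumes ion3's defwinprop with class and target fields as described in its documentation, and a frame that you have named "gimp" yourself, so treat it as a rough illustration rather than a drop-in fix:

    -- ion3 Lua configuration, e.g. in ~/.ion3/cfg_kludges.lua:
    -- send every GIMP window to a frame named "gimp" instead of
    -- letting each dialog maximise on top of the main window.
    defwinprop{
        class  = "Gimp",
        target = "gimp",
    }
    -- Individual dialogs can be matched more narrowly with the
    -- role or instance fields if you want them grouped differently.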

I've used it off and on for the last few years. I think it's a great window manager, but I keep crawling back to KDE 3 whatever I use.
It's difficult, however, to put into quantifiable terms why this happens, but it's right up there with the GNOME-vs-KDE battle. Neither side can understand the other.
I would also just love to have Kicker + ion3, but they don't gel terribly well.
Moving applications between tiles (something I do a lot) is also a bit inefficient (I'm too addicted to the mouse).
(Kicker + evilwm is a good combination, but evilwm just can't handle stacking in a user-friendly way.)

Related

Edit Windows 10 start menu programmatically

To begin with, I understand that Microsoft offers no way to programmatically alter the (modern) start menu - on purpose.
Nevertheless, I'm looking for a way to still do it. I might use it to make a tool to sync the start menu between devices - or to automatically place often used items into thematically sorted groups (office, games, tools). The reason is that I have multiple devices, and really suck at manually managing the start menu - so I just use search or the alphabetic list most of the time.
So, does anybody know how to programmatically add, remove, and edit tiles? I could imagine solutions including:
Using undocumented APIs (can you still call it an API if it is not documented?)
Directly editing the tile database (e.g. TileDataLayer) - downside is that it seems to be a binary format, which is not known, and you'd have to restart the shell for changes to take effect.
Hooking DLLs or poking around in memory - yikes - but not worse than what other "desktop modding" tools like WindowBlinds would do
Using accessibility APIs, or faking mouse/keyboard input - this would most probably work, but it would be a bit spooky seeing the cursor move around, and it seems even more frail than the others.
I searched a bit, and think there is probably no solution available right now, but you can see this as a challenge to come up with a solution :-)
As you say, there isn't a way to do this.
As an alternative, did you know that you can easily find apps to launch by pressing the windows key and then typing the name of the app you want to launch? This is how I launch anything that isn't pinned to my taskbar. The device I'm on and the order of items in a list or what's pinned where become irrelevant when working this way.

Inefficiency in Vim

I consider myself somewhat familiar with Vim,
hate the arrow keys (let alone the mouse),
regularly look up tips and plugins to get the most out of this tool,
use it daily to manage my cloud servers, etc.
However, I always find myself making the same mistakes, probably inherited from the GUI world:
too often switching to visual mode to see what piece of code I'm about to manipulate,
undoing changes to retrieve lost statements because I forget to utilize registers (and pasting code on temporary lines just to grab it after other edits),
relying on Ctrl-C & Ctrl-V when working with operating system's clipboard,
pressing the j key repeatedly to browse through lengthy files to find certain functions.
Probably my Hungarian keyboard layout prevents me from being faster as most of the special characters (/, [, etc.) are only available as a key combination (with Shift or Alt Gr).
Given this specific situation, what pieces of advice could you give me? Have you faced similar bad habits when you were a Vim-novice? I'd like to see my productivity skyrocket (who wouldn't?). Thanks in advance.
I've found a simple, effective strategy. Choose one action, one task, or one set of keys that you think is unnecessarily slow. Figure out a better way of doing it using the Vim manual, Googling, a plugin, etc. Force yourself to use this every time. Rinse and repeat. The path to efficiency is one-by-one elimination of the slow parts.
I'd also recommend just reading the vim manual from time to time - even if you don't remember everything, knowing something's out there is very helpful.
This probably applies well beyond vim, but something that worked for me was finding a specific feature that I knew would be more efficient and concentrating on using that for a week or two. Just one feature at a time, and possibly using it excessively. After a couple of weeks it becomes automatic and you can move on to the next thing.
I learn programming tricks the same way, e.g. I'll have a month of using lambda expressions for everything, then a month of mapping and filtering.
(not on production code though)
Probably my Hungarian keyboard layout prevents me from being faster as most of the special characters (/, [, etc.) are only available as a key combination (with Shift or Alt Gr).
I'm sitting in front of German keyboards all day long and know this problem very well. Some keyboard layouts are simply not very well suited for programming or for using vim. I think it's safe to assume that most programming languages and keyboard shortcuts were designed with the US layout in mind.
My advice: reset your keyboard layout to US English and practice touch-typing on that layout (typing without looking at the keys). It won't matter that the keyboard labels are wrong, and you'll be much more comfortable using vim hotkeys.
The only problem that still remains for me is producing language-specific characters (German umlauts such as ä, ö, ü), which I assume will also be a problem for Hungarian. For that I use a combination of vim digraphs, the Linux window manager's digraph key, and Windows layout-switching hotkeys.
Just keep using it. The more you use it, the better you become at it. Vim isn't too bad. The main thing is that you just have to remember that it isn't always in insert mode.

Is there a visual two-dimensional code editor?

Let me explain what I mean by "two-dimensional code editor": imagine using Inkscape or GIMP on a big (say, infinite) canvas. The "T - add text" tool is used to write the code. Additionally, all function definitions will be framed and links will connect the called functions.
In other words: you have a very large sheet of (virtual) paper where you can write.
It would be really useful. I don't want to write code as a long list of lines, especially now that big monitors are cheaper.
Is such a code editor out there?
What's your opinion? Would you use a 2d code editor?
I've written 3 or 4 visual editors, and my second one worked like this; it was for Java and C++ (never published, though I did use it for some published research work).
I still don't much like writing my code 'as a long list of lines'. My point is, after trying a system like this, I tried a windowed system (class outlines in windows, right click to open code editors), then a tree-based system...
In the long run (I wrote several apps using all of those), the tree-based system with non-overlapping windows felt at once the most scalable (to different monitor sizes) and, above all, the most productive, because dragging the text boxes, links, and/or windows around in the first version was necessary without adding much to the programming experience, so it felt wasteful.
If you want to try some of this stuff out, you can google 'antegram for java' (Java only), 'antegram for web' (JavaScript/PHP/ActionScript) and 'ee-ide' (on oogtech.org). I'm not sure if I could dig up the original C++/Java textbox-and-links editor (which could collapse graphs as well, and had an infinite canvas, so pretty close to what you describe).
I'm not working on this as much as I used to, as few programmers other than me ever seemed to like it, but if you like working the tree way, or feel like adding stuff for your own purposes, ee-ide would be the way to go, as it's nicely modular and easy to extend compared to the rest.
On the commercial side, you can configure Visual Studio to work with UML-like diagrams. I have a feeling it might be a little too heavy (although it's definitely more coding- than UML-oriented), but I'm not sure; I haven't really tried it yet.
This probably doesn't answer your question exactly, but anyway.
Have a look at the NodeBox beta. It is a visual programming environment mostly for creating generative graphics. You can program and edit the nodes with Python code, and connect and reuse them in multiple ways. (Windows and Mac OS)
Also worth mentioning (in terms of concept) is Field. It is for programming performances and arranges bits of code on a stage/timeline. Very interesting but also very confusing. (Mac OS only)
Third one is vvvv. It is used a lot by graphical artists to create realtime 3d visuals. Node based. (Windows only)
NodeBox and Field are open-source, so if you are looking to create something yourself you can see how it's done there.
Check this out. I came across it today and remembered this question.
Code Bubbles
Developers spend significant time reading and navigating code fragments spread across multiple locations. The file-based nature of contemporary IDEs makes it prohibitively difficult to create and maintain a simultaneous view of such fragments. We propose a novel user interface metaphor for code understanding and maintenance based on collections of lightweight, editable fragments called bubbles, which form concurrently visible working sets.
The essential goal of this project is to make it easier for developers to see many fragments of code (or other information) at once without having to navigate back and forth. Each of these fragments is shown in a bubble.
A bubble is a fully editable and interactive view of a fragment such as a method or collection of member variables. Bubbles, in contrast to windows, have minimal border decoration, avoid clipping their contents by using automatic code reflow and elision, and do not overlap but instead push each other out of the way. Bubbles exist in a large, pannable 2-D virtual space where a cluster of bubbles comprises a concurrently visible working set. Bubbles support a lightweight grouping mechanism, and further support connections between them.
A quantitative user study indicates that Code Bubbles increased performance significantly for two controlled code understanding tasks. A qualitative user study with 23 professional developers indicates substantial interest and enthusiasm for the approach, despite the radical departure from what developers are used to.
http://www.cs.brown.edu/people/acb/codebubbles_site.htm
At one point, LabVIEW had a programming mode like this: you connected program blocks together graphically.
It's been so long since I've used LabVIEW that I don't know whether it is still the same.
For me, the MVVM pattern means that there's no code behind the UI controls anyway. The logic is all in a class with properties.
The properties use WPF databinding to update the UI controls. For example, on the form or window or page, MySearchButton.IsEnabled is bound to the ViewModel.MySearchButtonIsEnabled property. So the app logic runs in the ViewModel class and just sets its own properties, and the UI updates automatically.
Although this is specific to MS WPF, the pattern actually stems from Smalltalk and is found across the development field as MVP. Without WPF, one would need to write the databinding or 'presenter' logic by hand, which is common.
This means the UI can be torn off and a new one pasted-in really quickly and with little code knowledge from the UI guy - who, in an ideal world, is a crack creative guy that drives a 70s Citroen.
So my point is that, although it sounds like a neat innovation, a 2D editor like this would be assisting a coding style that is no longer considered optimal.

What's the best tradeoff between text and icons on buttons? [closed]

In a discussion with co-workers today, I lamented that I can't ever remember what an icon means, and have to hover over them to see the tooltips, and thus to find the button I need.
On their side, they were saying that when the text needs to be translated, it might not fit (German vs English for example), and that every place where there is text, including tooltips, it needs a translation. So plain icons are easier.
What is the best tradeoff in usability for the extra work of text vs. the subset of users who are icon-challenged?
I personally prefer text and hate icon-only UIs. I know that other people feel the other way, equally strongly, either because of internationalization or because their brains work more rapidly with images than with text. If you choose one or the other exclusively for your UI, then part of your user base will be unhappy with your choice. (Sometimes this is the right choice, depending on how extensively the UI will be used.)
Internationalization is really not that difficult, except for finding a firm to do good translations of your text. The programmer portion of internationalization is pretty straightforward. However, I've known a number of programmers who prefer the all-icon method as it's less work. I've personally had to replace one all-icon-no-text UI that the users didn't like. The users said they could not remember what the icons meant.
I think more typically, many advanced users will prefer icons and many beginning users will prefer text. However, a number of advanced users prefer text. IMHO, any good UI will provide tooltips, so you need to translate your interface no matter what you do.
The most friendly solution is to offer both text and icons, possibly with a settings choice to disable one or the other.
I worked with people in a Human/Computer Interactions group and was raked over the coals for using icon-only. They had studies about comprehension, error rate, and speed of using UI's and a good icon/label combination won every time, all else being equal.
Localization should be a non-issue. You may have to localize the icons anyway and localizing a label (as long as it's stored as text and not as bits) is easy. In terms of size in the UI - that's another matter entirely. If you can't fit the text, I'd claim that your UI is too cluttered.
Really, they both have advantages and drawbacks. Text must always be translated into different languages, and sometimes a single word will not effectively describe the action of a button. For example, how would you describe the X button which closes a window in Windows? We know what it does, and most people I know call it the X button, but it doesn't describe what it does. It's a lot easier to put a button with an X symbol (or icon, if you will) than something like "close window".
That being said, icons also have drawbacks. As you alluded to, it may not always be clear what they do. The user has to be able to put the icon in a social context to understand what it does. This may not always be possible. Also, icons that make sense in one culture may not be understood in another, leading back to the translation problem. Icons can be advantageous in certain areas as they can take a complex concept and show it to the user in a small amount of space (like showing a camera to take a picture, or a trash can to delete something).
The tradeoff really has to be made on a case-by-case basis. If you have power users who really understand the application they are using and the surrounding subject, you are probably OK using icons. If you have people who use computers once a month and don't really care to learn, it is going to be confusing. It's the amount of information you can convey with a single symbol (icon, picture, letter) vs. the potential frustration of the users and the overall rejection of your program.
Make sure you have a way to get both. Screen readers have a horrible time with icons.
I hate icons, because I never know what they mean even if they're perfectly intuitive (like this world icon that means hyperlink above this box in which I'm typing). Several Unix terminal applications provide a choice between:
text
graphical icons
both
That's nice. I usually like the text on a prominent button, because the meaning of the button is much more clear and the mouse target can be a little bigger.
It's a cultural thing, as some symbols (icons) mean different things to people of different cultures, backgrounds, and experiences. There are global symbols that one could assume to be 'known' by the general population, e.g. the 'save' icon...
It is a fine line, but I think tooltips are a good way to help out those who don't understand the meaning of an icon. Perhaps a set of options to have the buttons render with text instead of the icon image? This could be a user preference in the application.
Perhaps a good reference would be some of the extensively used icons in Microsoft applications such as Word. I am generalising here, but Microsoft applications are almost everywhere, and they have done a lot of R&D into suitable/effective icons.
You don't mention if this is for a web application, but if it is then you have to provide the text at least as a backup if the user has disabled images, is using a screen reader, or other limited interface.
Two things, I guess:
the decision should be the result of usability research and properly quantified, rather than a dev's gut feel or whim.
an icon that doesn't carry an obvious meaning is a bad icon and should be changed.
All that said, IMO: icons with a tooltip/mouseover text equivalent, with the bonus that the tooltip can also carry a reminder of the keyboard shortcut.
(Note: I use "button" here to mean "the UI element on which the icon and/or text is located.")
I think in almost all cases it's important to include text either on the button itself, or at least on a hover-over tooltip on the button, so that in the event that the icon's meaning isn't intuitive to a particular user, the user can find out the meaning by reading the text. (Note that the translation work still needs to be performed in either case.)
A typical case for not including text directly on the button itself is when space is at a premium; when you want to fit a lot of buttons into a small area. Examples include the "toolbars" used in many desktop applications, and also in some web applications -- for example, the buttons that appear just above the StackOverflow answer text entry field!
A good case for including icons is when the button doesn't always appear in the same place, and the user would benefit from being able to quickly visually scan for where the button is located. For example, if I have a lot of programs open on Windows and I want to quickly find my instance of Firefox in the Windows taskbar, I'll look for the little orange icon, rather than reading the text on each taskbar button.

Why are there so few modal-editors that aren't vi*? [closed]

Pretty much every other editor that isn't a vi descendant (vim, cream, vi-emu) seems to use the emacs shortcuts (Ctrl+W to delete back a word, and so on).
Early software was often modal, but usability took a turn at some point, away from this style.
VI-based editors are total enigmas -- they're the only real surviving members of that order of software.
Modes are a no-no in usability and interaction design because we humans are fickle mammals who cannot be trusted to remember what mode the application is in.
If you think you are in one "mode" when you are actually in another, then all sorts of badness can ensue. What you believe to be a series of harmless keystrokes can (in the wrong mode) cause unlimited catastrophe. This is known as a "mode error".
To learn more, search for the term "modeless" (and "usability")
As mentioned in the comments below, a Modal interface in the hands of an experienced and non-fickle person can be extremely efficient.
Um... maybe there isn't much of a need for one, given that Vi/Vim is pretty much available everywhere and got the whole modal thing right? :)
I think that it's because vi (and its ilk) already occupies the ecological niche of modal editors.
The number of people who prefer modal editing and haven't yet been attracted to vi is probably close to zero, so a hypothetical vi competitor would have to be so great as to make a significant number of vi users switch. This isn't likely. The cost of switching editors is huge, and the vi family is probably already about as good as modal editors get. Well, maybe a significant breakthrough could improve upon them, but I find this unlikely.
#Leon: Great answer.
#dbr: Modal editing is something that takes a while to get used to. If you were to build a new editor that fits this paradigm, how would you improve on VI/VIM/Emacs? I think that is, in part, an answer to the question. Getting it "right" is hard enough; competing against the likes of VI/VIM/Emacs would be extremely tough -- most people who use these editors are "die hard" fans, and you'd have to give them a compelling reason to move to another editor. Those people who don't use them already are most likely going to stay with a non-modal editor. IMHO of course ;)
Modal editors have the huge advantage to touch typists that you can navigate around the screen without taking your hands off the home row. My wrists only hurt when I'm doing stuff that requires me to move my hand off the keyboard and onto the mouse or arrow keys and back constantly.
Remember that Notepad is a modal editor!
To see this, try typing E, D, I, T; now try typing Alt, E, D, I, T. In the second case the Alt key activates the "menu mode" so the results are different. :oP People seem to cope with that.
(Yes, this is a feature of Windows rather than specifically of Notepad. I think it's a bad feature because it is easy to hit Alt by mistake and I don't think you can turn it off.)
VIM and emacs make about as much user interface design sense as qwerty. We now have available modern computer optimized key layouts (see the colemak layout and the carpalx project); it's only a matter of time before someone does the same for text editors.
I believe Eclipse has Vi bindings and there is a Visual Studio plugin/extension, too (which is called Vi-Emu, or something).
It's worth noting that the vi input model's survival is in part due to its adoption in the POSIX standard, so investing time in learning it means you're guaranteed to be able to work on any system complying with those standards. So, like English, there's power in ubiquity.
As far as alternatives go, I doubt an alternative modal editor would survive a 30-day free trial period, so it's the same reason more people drive automatics than fly jets.
Since this is a question already at odds with the "no subjective issues" mantra, allow me to face that head on in kind.
Modal editing seeks to solve the problems that non-modal editing causes in the first place.
Simply put, with Modal editing I can do nearly everything without my hands leaving the keyboard, and without even tormenting my pinky with reaching for the control, or interrupting my finger placement by hunting for the arrow keys.
Reaching for the mouse completely interrupts the train of thought. I have hated the intense reliance upon it in IntelliJ IDEA and NetBeans for many years, even with vim-style addons.
Most of what you do has to do with fine-tuning in very small increments and changes within the same paragraph of code: move up, move over, change a character, etc. These flows are interrupted by control keys, arrows, and the mouse.
Though not really answering your question, there used to be a "modal-like" way to write Japanese on cell phones:
The first key you hit was a consonant, let's say K, and then the next key you hit would take the role of a vowel. (Having two consonants in a row is impossible in Japanese.)
Though it was mainstream a few years ago, today it's only used by people who really want to type fast.
I think the answer to the question is actually that there are quite a few modal text editors that aren't forks of vi/vim; however, they all use the vi key bindings. Vi users get the key bindings into their muscle memory, so relearning a different set would be really hard, and so no one creates a different set of key bindings.
But lots of different editors have re-implemented the vi key bindings from scratch. Just look at this question about IDEs with vi key bindings. At least half of the answers are editors built from scratch that implement vi key bindings, not versions of vi embedded.
I recently came across divascheme - an alternative set of key bindings for DrScheme. This is modal, and part of the justification is to do with RSI - specifically avoiding lots of wrist twisting to hit Ctrl-Alt-Shift-something. The coder has done an informal survey of fellow coders and found that emacs users suffered from more wrist pain than vi coders.
You can see him doing a short talk at LugRadio Live USA. (The video is a series of 5 minute talks and I can't remember how far through it is, sorry - if someone watches it and posts that here I'll edit this post to say when in the video it is).
Note I have not used divascheme.
The invention of the mouse took one mode and moved it to an input device, and context menus took another mode and moved it to a button. Ironically, the advent of touch devices has had the reverse effect, producing multi-modal interfaces:
aware multi-modal - touch and speech are aware of each other and intersect
unaware multi-modal - touch and speech are unaware of each other and conflict
The traditional WIMP interfaces have the basic premise that the information can flow in and out of the system through a single channel or an event stream. This event stream can be in the form of input (mouse, keyboard etc) where the user enters data to the system and expects feedback in the form of output (voice, vibration, visual, etc) when the system responds. But the channel maintains its singularity and can process information one source at a time. For example, in today’s interaction, the computer ignores typed information (through a keyboard) when a mouse button is depressed.
This is very much different from a multimodal interaction where the system has multiple event streams and channels and can process information coming through various input modes acting in parallel, such as those described above. For example, in an IVR system a user can either type or speak to navigate through the menu.
References
User Agent Accessibility Guidelines working group (UAWG): Keyboard Interface use cases
W3C Multimodal Standard Brings Web to More People, More Ways
Next steps for W3C work on Multimodal Standards
The Future of Interaction is Multimodal
Beyond Mouse and Keyboard: Expanding Design Considerations for Information Visualization Interactions - naturalinfovis_infovis2012.pdf
Setting the scope for light-weight Web-based applications
Jan. 26, 1983: Spreadsheet as Easy as 1-2-3
Multi-modal design: Gesture, Touch and Mobile devices...next big thing? | Experience Dynamics
