How to show suggestion chips vertically in Google Assistant - dialogflow-es

I have created a chatbot using Dialogflow fulfilment (actions-on-google Node.js library) and added 8 suggestion chips, but all the chips are shown horizontally in a single row.
Is there any way to show 4 chips in one line and the other 4 chips in a second line, so that the user doesn't have to scroll?

Eight suggestions are... a lot. Typically you wouldn't want more than about 5, and even that might be a lot in some cases.
To answer your direct question, however - you don't have control over the layout of suggestion chips. How many are visible depends on the screen size and orientation, and future versions of the Assistant may choose different ways to represent them.
While you may wish to use a List visual layout, this is mostly good for more dynamic responses (returning a list of titles) rather than a menu. It also requires a different kind of handling for the reply.

Related

Which resolution to work with in Photoshop for websites?

I just need to create a single-page website design in Photoshop; the display has to be optimized for PC, tablet, phone, etc. Which resolution should I work in to achieve this?
You will have to create several designs suited to the devices you expect to be typical. The decisions depend on the product you are selling and the personas of your typical clients.
Since you are doing a single-page design, the exact height is not as critical, but you still need to design to several base heights for the various devices and have the developer simply display more background to fill the remaining space.
Screen sizes for PCs keep increasing, but a good bet is usually to go for either 1280 or 1400 pixels wide. For tablets and phones there is an ever-increasing set of form factors. Note that many tablets can display higher resolutions than a typical PC; it's your call how high you want to support. Google for advice or read here.
Many designers therefore do the smallest form factor first to identify the key information and content, and then create the next width up, and so on.
I usually design for at least three screen sizes (as defined by research into my client's expected users) and set some guidelines on how the various elements degrade as the width increases or decreases, so the developer knows how to set up the CSS correctly and can hopefully support new devices that come to market without a redesign.

XML layout in Android that supports different screen sizes

How should we set up an XML layout in Android that supports different screen sizes?
I tried using wrap_content and match_parent, but it's not working properly. Please guide me on this.
Thanks in advance.
The comment about Supporting Multiple Screens is definitely a good starting place! By default your XML does support different screen sizes.
Although the system performs scaling and resizing to make your application work on different screens, you should make the effort to optimize your application for different screen sizes and densities. In doing so, you maximize the user experience for all devices and your users believe that your application was actually designed for their devices—rather than simply stretched to fit the screen on their devices.
However, like it says, you need to optimize it. This refers to providing alternative images or a completely different XML layout per screen size/orientation. Does this help any?
If you need something a little more specific to your situation you'll need to provide more information.
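For illustration, here is a minimal sketch (not part of the original answer) of an Activity that reads the current display metrics at runtime; the layout name R.layout.main and the log tag are placeholder assumptions, and in most cases the resource-qualifier mechanism (for example res/layout/ versus res/layout-sw600dp/) does this work for you:

import android.app.Activity;
import android.os.Bundle;
import android.util.DisplayMetrics;
import android.util.Log;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The same call resolves to different XML files if you provide
        // qualified layouts, e.g. res/layout/main.xml and res/layout-sw600dp/main.xml.
        setContentView(R.layout.main);

        // Reading the actual metrics can help in the rare cases where you must branch in code.
        DisplayMetrics metrics = getResources().getDisplayMetrics();
        float widthDp = metrics.widthPixels / metrics.density;
        Log.d("ScreenInfo", "width=" + widthDp + "dp, density=" + metrics.density);
    }
}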

J2ME app menu development

I'm developing a small J2ME game and I want to create a menu for this application. I imagine the menu as a vertical list of items with a cursor on the left or right side that I can move from item to item, something like this menu example but as a main menu.
What elements should I use to obtain such an effect? I only need advice or links; I will develop it myself.
Thanks in advance!
// Classes typically needed for a custom Canvas-based menu (CLDC/MIDP):
import java.util.Vector;
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Font;
import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.Image;
What you plan looks doable. I can't give many links because I don't recall any that would help with what you're doing. Actually, the most useful link for you will probably be the MIDP (JSR 118) API reference - your part is going to involve mostly the lcdui package, and especially the Graphics API.
As for advice, no problem. The first thing to note is that there will be more coding and more (much more) testing/debugging than in your prior experiment with an implicit List. If you can think of any deadline or timing requirements that may become a problem, keep that prior implicit-List design in mind as a fallback. It won't look as fancy, but it will work safely and correctly.
Another important thing is to decide what kind of devices you are going to target. For a menu like the one you are going to develop, it may be rather difficult to get a consistent look and feel both on a 160x200 basic phone with an ITU-T keypad and on a 400x600 touchscreen smartphone. Below I am going to assume you will try to target as wide a variety of devices as possible - note that the narrower you can make that range, the easier it will be to code and test.
When targeting many different devices it is helpful to use an emulator that can be configured to simulate various display sizes and resolutions, the presence or absence of touchscreen input, and so on. Keep in mind, though, that an emulator alone won't fully simulate a real device. To keep your feet on the ground, also consider regular smoke testing of your application on a real device, preferably using over-the-air (OTA) installation.
Here are some particular API tips that I can think of now.
Use Canvas.getGameAction to handle pressed key codes - that is likely the most reliable/portable way to figure out the up/down and select actions for the menu.
Use Canvas.hasPointerEvents to find out whether there is touch screen support. Users with touch screen devices may be disappointed if it turns out that your fancy menu can't react when they tap on the screen.
Use Font.getHeight and Font.stringWidth to figure out how much space is occupied by a menu item's text.
Use Image.getGraphics if you want to draw something over an image object.
As I mentioned, you will most likely do a lot of your work with the lcdui Graphics API. It's mostly rather simple, but you will probably need to understand the somewhat tricky clipping behaviour. Good luck.
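To make the tips above concrete, here is a minimal sketch of a Canvas-based vertical menu. It is only an illustration, not part of the original answer: the item labels, colours and spacing are placeholder assumptions, and the MIDlet/Display wiring and any reaction to the selected item are omitted.

import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Font;
import javax.microedition.lcdui.Graphics;

public class MenuCanvas extends Canvas {
    private final String[] items = { "New game", "Options", "Exit" }; // placeholder labels
    private int selected = 0;

    protected void paint(Graphics g) {
        g.setColor(0xFFFFFF);                       // clear the background
        g.fillRect(0, 0, getWidth(), getHeight());
        Font font = Font.getDefaultFont();
        g.setFont(font);
        int lineHeight = font.getHeight() + 4;
        for (int i = 0; i < items.length; i++) {
            int y = 10 + i * lineHeight;
            g.setColor(i == selected ? 0xFF0000 : 0x000000);
            if (i == selected) {
                g.drawString(">", 2, y, Graphics.TOP | Graphics.LEFT); // simple cursor
            }
            g.drawString(items[i], 14, y, Graphics.TOP | Graphics.LEFT);
        }
    }

    protected void keyPressed(int keyCode) {
        int action = getGameAction(keyCode);        // portable way to read UP/DOWN/FIRE
        if (action == UP) {
            selected = (selected + items.length - 1) % items.length;
        } else if (action == DOWN) {
            selected = (selected + 1) % items.length;
        } else if (action == FIRE) {
            // placeholder: react to the selected item here
        }
        repaint();
    }

    protected void pointerPressed(int x, int y) {   // only called on touch-capable devices
        int lineHeight = Font.getDefaultFont().getHeight() + 4;
        int index = (y - 10) / lineHeight;
        if (index >= 0 && index < items.length) {
            selected = index;
            repaint();
        }
    }
}

A MIDlet would then show it with Display.getDisplay(this).setCurrent(new MenuCanvas()).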

Colour blindness simulator

Like any responsible developer, I'd like to make sure that the sites I produce are accessible to the widest possible audience, and that includes the significant fraction of the population with some form of colour blindness.
There are many websites which offer to filter a URL you feed them, either by rendering a picture or by filtering all content. However, both approaches seem to fail when rendering even moderately complex layouts, so I'd be interested in finding a client-side approach.
The ideal solution would be a system filter over the whole screen that can be used to test any program. The next best thing would be a browser plugin.
I came across Color Oracle and thought it might help. Here is the short description:
Color Oracle is a colorblindness simulator for Windows, Mac and Linux. It takes the guesswork out of designing for color blindness by showing you in real time what people with common color vision impairments will see.
Color Oracle is great, but another option is KMag, which is part of KDE in Linux. It's ostensibly a screen magnifier, but can simulate protanopia, deuteranopia, tritanopia and achromatopsia.
It differs from Color Oracle by requiring an additional window in which to display the re-coloured image, but an advantage is that one can modify the underlying image at the same time as previewing the simulation.
Here is a screenshot showing the original figure on the left, and the KMag window on the right, simulating protanopia.
Here's a link to a website that simulates various kinds of color blindness:
http://www.vischeck.com/
They let you check URLs and screenshots against three different kinds of color blindness (URL checking is a bit dated, though; the image check works better).
By the way, I'd encourage everyone to check their applications. Seeing your own app through others' eyes may be an eye-opener (pun intended).
I know this is a quite old question, but I've recently found an interesting solution to transparently simulate color blindness.
When working with Linux, you can simulate color blindness using the Color Filter plugin for Compiz. It comes with profiles for deuteranopia and protanopia and changes the colors of the whole screen in real time.
It's very nice because it works transparently in all applications (even within YouTube videos), but it will only work where Compiz is available, i.e. only under Linux.
Here's an article that has some guidelines for optimizing UI for color blind users:
Particletree » Be Kind to the Color Blind
It contains a link to another article with the kind of tools you were asking for:
10 colour contrast checking tools to improve the accessibility of your design | 456 Berea Street
A great paper that explains a conversion that preserves color differences is:
Detail Preserving Reproduction of color images for Monochromats and Dichromats (PDF)
I haven't implemented the filter, but I plan to when I have some more free time.
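For context, the naive approach the paper improves on is to simply replace each pixel with its luminance, which simulates full monochromacy but loses colour differences. Here is a minimal Java sketch using the standard BT.601 luma weights; it is only an illustration and is not the paper's detail-preserving method:

import java.awt.image.BufferedImage;

public class GrayscaleSim {
    // Naive monochromacy simulation: each pixel becomes its BT.601 luminance.
    public static BufferedImage toLuminance(BufferedImage src) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                int lum = (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
                out.setRGB(x, y, (lum << 16) | (lum << 8) | lum);
            }
        }
        return out;
    }
}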
I found Colour Simulations easy to use on Windows 10. This software can apply a color-blind filter to a part of the screen or to the whole screen. What's great is that it still lets me interact with my PC normally, even when the filter covers the full screen. It runs quite slowly on my 4K screen with an integrated graphics card, though.

What's the best tradeoff between text and icons on buttons? [closed]

Closed 9 years ago. This question is off-topic and is not currently accepting answers.
In a discussion with co-workers today, I lamented that I can't ever remember what an icon means, and have to hover over them to see the tooltips, and thus to find the button I need.
On their side, they were saying that when the text needs to be translated, it might not fit (German vs. English, for example), and that every place where there is text, including tooltips, needs a translation. So plain icons are easier.
What is the best tradeoff in usability between the extra work of text and the subset of users who are icon-challenged?
I personally prefer text and hate icon-only UIs. I know that other people feel the other way, equally strongly, either because of internationalization or because their brain works more rapidly with images than with text. If you choose one or the other exclusively for your UI, then part of your user base will be unhappy with your choice. (Sometimes this is the right choice, depending on how extensively the UI will be used.)
Internationalization is really not that difficult, except for finding a firm to do good translations of your text. The programmer portion of internationalization is pretty straightforward. However, I've known a number of programmers who prefer the all-icon method as it's less work. I've personally had to replace one all-icon-no-text UI that the users didn't like. The users said they could not remember what the icons meant.
I think more typically, many advanced users will prefer icons and many beginning users will prefer text. However, a number of advanced users prefer text. IMHO, any good UI will provide tooltips, so you need to translate your interface no matter what you do.
The most friendly solution is to offer both text and icons, possibly with a settings choice to disable one or the other.
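As a rough illustration of the "both" approach, here is a minimal Swing sketch; the label, icon path and tooltip text are placeholder assumptions, and a real application would pull the strings from a translation bundle and the icon from its resources:

import javax.swing.ImageIcon;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class ButtonDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                // A button carrying both a label and an icon, plus a tooltip so hover
                // users and assistive technology still get text if the label is hidden.
                JButton saveButton = new JButton("Save", new ImageIcon("icons/save.png"));
                saveButton.setToolTipText("Save the current document (Ctrl+S)");
                // A user preference could switch presentation at runtime:
                // saveButton.setText(null);  // icon-only
                // saveButton.setIcon(null);  // text-only

                JFrame frame = new JFrame("Demo");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.getContentPane().add(saveButton);
                frame.pack();
                frame.setVisible(true);
            }
        });
    }
}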
I worked with people in a Human/Computer Interaction group and was raked over the coals for using icon-only. They had studies about comprehension, error rate, and speed of using UIs, and a good icon/label combination won every time, all else being equal.
Localization should be a non-issue. You may have to localize the icons anyway and localizing a label (as long as it's stored as text and not as bits) is easy. In terms of size in the UI - that's another matter entirely. If you can't fit the text, I'd claim that your UI is too cluttered.
Really, they both have advantages and drawbacks. Text must always be translated into different languages, and sometimes a single word will not be able to effectively describe the action of a button. For example, how would you describe the X button which closes a window in Windows? We know what it does, and most people I know call it the X button, but that name doesn't describe what it does. It's a lot easier to put a button with an X symbol (or icon, if you will) than something like "close window".
That being said, icons also have drawbacks. As you alluded to, it may not always be clear what they do. The user has to be able to put the icon in context to understand what it does, and this may not always be possible. Also, icons familiar in one culture may not be understood in another, leading back to the translation problem. Icons can be advantageous in certain areas because they can convey a complex concept to the user in a small amount of space (like showing a camera to take a picture, or a trash can to delete something).
The tradeoff really has to be decided case by case. If you have power users who really understand the application they are using and the surrounding subject, you are probably fine using icons. If you have people who use the computer once a month and don't really care to learn, it is going to be confusing. It's a balance between the amount of information you can convey with a single symbol (icon, picture, letter) and the potential frustration of the users and the overall rejection of your program.
Make sure you have a way to get both. Screen readers have a horrible time with icons.
I hate icons, because I never know what they mean even if they're perfectly intuitive (like this world icon that means hyperlink above this box in which I'm typing). Several Unix terminal applications provide a choice between:
text
graphical icons
both
That's nice. I usually like the text on a prominent button, because the meaning of the button is much more clear and the mouse target can be a little bigger.
It's a cultural thing, as some symbols (icons) mean different things to people of different cultures, backgrounds and experiences. There are global symbols that one could assume to be 'known' by the general population, e.g. the 'save' icon...
It is a fine line, but I think tooltips are a good way to help out those who don't understand the meaning of the icon. Perhaps a set of options to have the buttons render with text instead of the icon image? This could be a user preference in the application.
Perhaps a good reference would be some of the "extensively" used icons in Microsoft applications such as Word. I am generalising here, but Microsoft applications are almost everywhere and they have done a lot of R&D into suitable and effective icons.
You don't mention if this is for a web application, but if it is then you have to provide the text at least as a backup if the user has disabled images, is using a screen reader, or other limited interface.
Two things, I guess:
The decision should be the result of usability research and properly quantified, rather than a developer's gut feel or whim.
An icon that doesn't carry an obvious meaning is a bad icon and should be changed.
All that said, IMO: icons with a tooltip/mouseover text equivalent, with the bonus that the tooltip can carry a reminder of the keyboard shortcut.
(Note: I use "button" here to mean "the UI element on which the icon and/or text is located.")
I think in almost all cases it's important to include text either on the button itself, or at least on a hover-over tooltip on the button, so that in the event that the icon's meaning isn't intuitive to a particular user, the user can find out the meaning by reading the text. (Note that the translation work still needs to be performed in either case.)
A typical case for not including text directly on the button itself is when space is at a premium; when you want to fit a lot of buttons into a small area. Examples include the "toolbars" used in many desktop applications, and also in some web applications -- for example, the buttons that appear just above the StackOverflow answer text entry field!
A good case for including icons is when the button doesn't always appear in the same place, and the user would benefit from being able to quickly visually scan for where the button is located. For example, if I have a lot of programs open on Windows and I want to quickly find my instance of Firefox in the Windows taskbar, I'll look for the little orange icon, rather than reading the text on each taskbar button.
