Exploring: Gesture Design for Human-Computer Interaction on Mobile Devices

Publisher: 精品古钱斋 | Last updated: 2011-09-15
Introduction

Gestures are a buzzword in today's world of ubiquitous mobile devices, but what exactly are they? The hand is humanity's natural tool for creative activity, and people instinctively use hand movements to express themselves: a handshake conveys friendliness, and deaf people communicate through sign language instead of speech. These are all everyday applications of gestures. Gestures have thus formed a distinct language system since ancient times and play an important role in human communication. From an interaction standpoint, a gesture is simply an input mode. Human-computer interaction as we intuitively understand it today has evolved step by step: from the mouse, through physical hardware and touch screens, to long-range motion sensing.

In interaction design, however, the gestures under discussion differ from traditional keyboard and mouse operations. This article focuses on gesture operation on mobile devices: the problems gestures present, their application scenarios, and what to pay attention to when designing them.

1. Broadly defined gesture types

1.1 Use the mouse and cursor tracks to simulate gestures

The website www.kakarod.com makes heavy use of mouse-driven gesture simulation on screen, such as clicking and dragging, to lively, eye-catching effect.

1.2 Gestures on physical hardware

The Apple Magic Mouse and the MacBook touchpad both support single-finger and multi-finger swipe gestures.

1.3 Gestures on touch screens

The main gestures are long press, tap, swipe, drag, rotate, pinch-to-zoom, and shake.

1.4 Long-distance somatosensory

The camera and sensors can be used to capture the hand or even the whole body posture for control.

1.5 Future Gestures

Operating directly in space, or on a projection, via holographic projection and sensors has already been applied in some fields, and it should reach everyday life in the near future. PS: "projected gesture operation" technology appears in Apple's latest patent application; let's wait and see what revolutionary product Steve Jobs brings this time.

Of course, other gesture operations exist in daily life, but they are not elaborated here. This article focuses on touch-screen gestures on mobile devices, a category currently growing explosively, mainly on iOS and Android. A touch-screen gesture integrates a series of multi-touch events into a single event. Analyzing how gestures are used on touch screens today shows that, compared with the traditional mouse and keyboard, gesture interaction has some distinct characteristics. The figure below summarizes gestures along the two dimensions of time and space, as a reference for gesture design.

2. Usability issues

Many experts argue that the new gesture interfaces ignore established interaction design principles in many respects, overturning or violating interaction standards that the industry has already thoroughly tested and understood.

There are mainly the following issues:

2.1 Reduced Accuracy

Taking iOS as an example: compared with the 1-pixel accuracy of a cursor, gesture accuracy is far lower. A comfortable tap target needs to be about 44 x 44 px (on devices before the iPhone 4), and a touch can deviate by 0-20 px depending on how heavily the finger lands, so touch interfaces need larger control response areas. The screen densities of the iPhone 3GS, iPad, and iPhone 4 are 163 ppi, 132 ppi, and 326 ppi respectively. The 3GS and iPad therefore need similar response sizes, at least 44 px per side, while the iPhone 4's Retina screen needs double that (88 px) to cover the same physical area.
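The relationship between pixel targets and physical finger size is simple arithmetic: pixels divided by ppi gives inches, times 25.4 gives millimetres. A minimal sketch (the function name is ours, not from any SDK) shows why a 44 px target on the 163-ppi 3GS and an 88 px target on the 326-ppi iPhone 4 are the same physical size:

```python
def target_size_mm(pixels: int, ppi: float) -> float:
    """Convert a square touch target's side length from pixels to millimetres."""
    return pixels / ppi * 25.4  # px / ppi = inches; 1 inch = 25.4 mm

# 44 px on the 163-ppi iPhone 3GS screen:
print(round(target_size_mm(44, 163), 2))   # 6.86 (mm)
# The same physical size on the 326-ppi iPhone 4 requires twice the pixels:
print(round(target_size_mm(88, 326), 2))   # 6.86 (mm)
```

A fingertip contact patch is commonly cited as roughly 7 mm wide, which is why the 44 px convention works at these densities.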

2.2 Lack of visibility and consistency

Take the Pages app on iPad as an example. Suppose a document contains two objects and you want to make them the same size. There are two ways. First, you can resize one object with a two-finger drag and use the edge guides to match the other's size; this style of resizing is common across apps, so it is easy to think of. Alternatively, drag one object with one finger and touch the target object with another finger; when the size prompt appears, lift the first finger and then the second, and the two objects become exactly the same size. Neither method is explained anywhere in the app. Clearly, almost no one will stumble on the second gesture, and even those who do will not immediately know how to use it. Android's long-press operations suffer from the same problem.

The root cause is that gesture interfaces usually have no visual element representing the action: the gesture itself is the action. A common, natural gesture poses no problem, but a rare gesture combination is hard for users to discover and can cause usability issues.

2.3 Increased operating costs and misoperation

2.3.1 In terms of displacement

Gesture operations are certainly more vivid and interesting than dull mouse clicks, but some of them, such as pinch zooming and pull-to-scroll, raise the cost of operation. What a mouse scroll wheel does in one motion can take many finger drags up and down a touch screen.

2.3.2 In terms of strength

Gesture operation lacks the physical feedback of a mouse click, and because pressure is hard to control, poor design can make users believe their own operation is at fault, leading them to retry repeatedly.

2.3.3 In terms of sensitivity

iOS touch screens are very sensitive, and the boundary between a light tap and a long press is blurry. Moreover, apart from fixed buttons, many operations have large response areas not bounded by a button's size, so actions are often triggered accidentally, such as dialing a number from the call log or deleting a memo with a right swipe.
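The tap/long-press/swipe ambiguity comes down to the two dimensions mentioned earlier, time and space: how long the finger stays down, and how far it travels. A minimal classifier sketch (the threshold values here are hypothetical; real systems tune them per device):

```python
import math

LONG_PRESS_MS = 500   # assumed minimum press duration for a long press
MOVE_SLOP_PX = 10     # assumed maximum travel still treated as "stationary"

def classify(duration_ms, start, end):
    """Classify a single-finger touch as tap, long press, or swipe
    from its duration (time axis) and displacement (space axis)."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    travel = math.hypot(dx, dy)
    if travel > MOVE_SLOP_PX:
        return "swipe"
    return "long press" if duration_ms >= LONG_PRESS_MS else "tap"

print(classify(120, (100, 100), (102, 101)))   # tap
print(classify(700, (100, 100), (103, 100)))   # long press
print(classify(180, (100, 100), (180, 100)))   # swipe
```

Shrinking `MOVE_SLOP_PX` or `LONG_PRESS_MS` makes the boundary cases discussed above even easier to trigger accidentally, which is exactly the sensitivity trade-off the designer has to balance.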

2.4 Limited by physical factors

2.4.1 Physical buttons

Physical buttons provide real tactile feedback but also interrupt the flow of operation, so later phones gradually de-emphasized them, integrating gestures more tightly with the screen. Android triggers menus with a hardware button, which means you cannot predict which programs, under which circumstances, will offer menu options, because the hardware button is always there whether the program needs it or not.

From left to right in the picture above are the Palm Pre, Palm Pre 2, and Palm Pre 3; the back button integrates ever more closely with the screen.

2.4.2 Horizontal and vertical directions

Directly limited by physical buttons, whose positions are not uniform across Android devices, controls are hard to locate quickly when switching between portrait and landscape, which greatly disrupts the continuity of gesture operations. If an app supports landscape orientation, consider showing the back button and commonly used menus directly in the software interface.

2.4.3 Equipment size

Large-screen pads support more complex multi-finger gestures, while phones mostly rely on single-finger operation.
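The multi-finger gestures that large screens enable, pinch and rotate, are derived from just two touch points: the change in the distance between them gives the zoom scale, and the change in the angle of the line joining them gives the rotation. A minimal sketch (function name ours):

```python
import math

def pinch_params(p1_start, p2_start, p1_end, p2_end):
    """Derive the zoom scale and rotation angle (degrees) of a two-finger
    gesture from the start and end positions of both touch points."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])
    scale = dist(p1_end, p2_end) / dist(p1_start, p2_start)
    rotation = math.degrees(angle(p1_end, p2_end) - angle(p1_start, p2_start))
    return scale, rotation

# Fingers move apart from 100 px to 200 px along one axis: 2x zoom, no rotation.
scale, rot = pinch_params((0, 0), (100, 0), (-50, 0), (150, 0))
print(scale, rot)   # 2.0 0.0
```

On a phone held in one hand the second finger is awkward to place, which is one practical reason single-finger gestures dominate there.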

2.4.4 Control form

Pay attention to button sizing (converting sizes across different resolutions), feedback prompts while dragging, and the transition between slide-to-select and tap.

3. What to pay attention to in gesture design

Based on the above usability issues, we can summarize the following points that should be paid attention to when designing gesture operations:

3.1 Operation Guide

Guidance can be a detailed help screen or a metaphorical graphic (the metaphor must match the user's mental model): a dot indicator for paging, a page edge revealing a sliver of the next screen, an icon that hints it can be long-pressed, a page corner turned up, or even an animation. How prominent to make the prompts is up to you. For efficiency-oriented apps, make them clear and visible so users can act as soon as they see them; for immersive apps, leave room for exploration so users discover them on their own and enjoy the surprise, such as the swaying pull cord on the QQLive HD homepage. Note, however, that hidden and shortcut gestures must not affect the main operation flow; use them only as auxiliary gestures.

3.2 Operation Feedback

Gesture operation is fast and convenient, but it lacks the reassuring click of a physical mouse button and is heavily constrained by the sensitivity of the device's screen, so operation feedback is crucial. Take the response when an icon is pressed: apart from the hover effect, which has no touch equivalent, the other states should remain consistent with the PC, and none can be missing. We must also account for small targets being hidden under the finger: feedback must be obvious and presented within the visible area, as in the QQ contacts name-search operation. Beyond visuals, sound is an effective feedback channel: the iPhone's SMS-sent sound, Sina Weibo's feed refresh, Tweetbot, and others all use sound feedback cleverly.
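The "hidden under the finger" problem has a simple geometric fix: offset the feedback element above the touch point and clamp it to the screen. A minimal sketch of that placement logic (all names and the offset value are our assumptions for illustration):

```python
def feedback_origin(touch_x, touch_y, fb_w, fb_h, screen_w, screen_h, offset=60):
    """Place a feedback bubble's top-left corner above the touch point,
    so the finger does not occlude it, clamped to stay fully on screen."""
    x = touch_x - fb_w / 2               # centre the bubble over the finger
    y = touch_y - offset - fb_h          # lift it above the fingertip
    x = max(0, min(x, screen_w - fb_w))  # clamp horizontally
    y = max(0, min(y, screen_h - fb_h))  # clamp vertically
    return x, y

# An 80x40 bubble for a touch at (160, 300) on a 320x480 screen:
print(feedback_origin(160, 300, 80, 40, 320, 480))   # (120.0, 200)
# Near the left edge, the bubble is clamped instead of going off screen:
print(feedback_origin(10, 300, 80, 40, 320, 480))    # (0, 200)
```

The same idea is behind iOS's magnifier loupe and keyboard key popovers: feedback appears above, not under, the contact point.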

3.3 Misoperation

Gesture operation is more flexible than the mouse. If your program is complex, carries a lot of information, and most of its area responds to touch, the probability of misoperation rises sharply. It is therefore essential to let users undo operations promptly and always know what is happening, rather than merely warning them after the fact. For important or easily mistaken gestures such as delete, clear-all, and long press, a second confirmation is crucial.
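The "undo promptly" principle means recording each destructive action so it can be reversed, not just warned about. A minimal sketch of that model, using the accidental swipe-to-delete-a-memo case above (class and method names are ours):

```python
class UndoableList:
    """A list whose deletions are recorded so the user can reverse them."""

    def __init__(self, items):
        self.items = list(items)
        self._undo = []                      # stack of (index, item) removals

    def delete(self, index):
        self._undo.append((index, self.items.pop(index)))

    def undo(self):
        if self._undo:                       # restore the most recent deletion
            index, item = self._undo.pop()
            self.items.insert(index, item)

memos = UndoableList(["buy milk", "call Bob", "pay rent"])
memos.delete(1)                  # an accidental right-swipe delete
print(memos.items)               # ['buy milk', 'pay rent']
memos.undo()                     # the user recovers without data loss
print(memos.items)               # ['buy milk', 'call Bob', 'pay rent']
```

Undo and second confirmation are complementary: confirmation guards the entry to a destructive action, while undo guards the exit.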



Copyright © 2005-2024 EEWORLD.com.cn, Inc. All rights reserved.