INTERACTIVE SYSTEM
DESIGN
Objectives
Learn all aspects of the design and development of interactive systems, which are now an important part of our lives.
The design and usability of these systems have an effect on the quality of people's relationship with technology. Web applications, games, embedded devices, and so on are all part of this class of systems, which has become an integral part of our lives.
Until the 1980s almost all commercial computer systems were non-interactive. Computer operators would set up the machines to read in large volumes of data – say, customers' bank details and transactions – and the computer would then process each input and generate the appropriate output.
THE PAST
There are still lots of these systems in place but the
world is also now full of interactive computer
systems. These are systems that involve users in a
direct way. In interactive systems the user and
computer exchange information frequently and
dynamically. Norman's execution/evaluation model is a useful way of understanding the nature of interaction:
THE PRESENT
1. User has a goal (something to achieve)
2. User looks at system and attempts to work out how he would execute a
series of tasks to achieve the goal
3. User carries out some actions (providing input to the system by pressing
buttons, touching a screen, speaking words etc.)
4. System responds to the actions and presents results to the user. System
can use text, graphics, sounds, speech etc.
5. User looks at the results of his action and attempts to evaluate whether or
not the goals have been achieved
A good interactive system is one where:
• User can easily work out how to operate the system in an attempt to
achieve his goals
• User can easily evaluate the results of his action on the system
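To make the cycle concrete, here is a minimal Python sketch of Norman's execute/evaluate loop; all names are illustrative and not part of any real UI framework. The user forms a goal, plans and carries out actions, and evaluates the system's response until the goal is met.

```python
# Minimal sketch of Norman's execution/evaluation cycle.
# All names are illustrative; this is not a real UI framework.

def interaction_cycle(goal, system_state, plan_actions, evaluate):
    """Repeat execute/evaluate until the user judges the goal achieved."""
    while not evaluate(goal, system_state):              # gulf of evaluation
        for action in plan_actions(goal, system_state):  # gulf of execution
            system_state = action(system_state)          # system responds
    return system_state

# Example: the goal is to have the light on; the system is a simple dict.
goal = {"light": "on"}
state = {"light": "off"}

toggle = lambda s: {**s, "light": "on" if s["light"] == "off" else "off"}
plan = lambda g, s: [toggle] if s != g else []
achieved = lambda g, s: s == g

print(interaction_cycle(goal, state, plan, achieved))  # {'light': 'on'}
```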
In his book The Invisible Computer, Don Norman argues the case for 'information appliances'. He suggests that the PC is too cumbersome and unwieldy a tool: it has too many applications and features to be useful. He sees the future as one where we use specific 'appliances' for specific jobs. Norman envisions a world full of information appliances, a world populated by interactive computer systems:
THE FUTURE
The Invisible Computer by Don Norman
DIGITAL PICTURE FRAMES: Give this frame to a friend or relative. When you have taken a new picture you want them to share, simply 'email' the picture directly to the frame. The frame will be connected to the net wirelessly.
THE HOME MEDICAL ADVISOR:
sensors in the home will enable blood
pressure, temperature, weight, body
fluids and so on to be automatically
monitored. A computer could use these
readings to assist with medical advice or
to contact a human doctor.
The Invisible Computer by Don Norman
EMBEDDED SYSTEMS WITHIN OUR CLOTHES:
‘consider the value of eyeglass appliances. Many
of us already wear eye glasses … why not
supplant them with more power? Add a small
electronic display to the glasses … and we could
have all sorts of valuable information with us at
all times’ [Norman 99, pg 271-272]
THE WEATHER AND TRAFFIC DISPLAY:
at the moment, when we want the time
we simply look at a clock. Soon, perhaps,
when we want to know the weather or
traffic conditions we will look at a similar
device.
Many people believe we will soon enter an age of
ubiquitous computing – we will be as used
to interacting with computing systems as we are with
other people. This dream will only be fulfilled if
the businesses that produce these systems and services
clearly understand the needs of users so that
the systems can be useful and usable.
THE FUTURE
USABILITY ENGINEERING is a method used in the development of software and systems that involves users from the inception of the process and assures the effectiveness of the product through usability requirements and metrics. It thus refers to the usability aspects of the entire process of specifying, implementing, and testing hardware and software products; everything from requirements gathering to installation, marketing, and testing falls within this process.
Concept of Usability Engineering
Goals of Usability Engineering
1. Efficient to use
2. Error-free in use
3. Easy to use
4. Effective to use
5. Enjoyable to use
[Diagram: qualities of the user experience – functional, efficient, safe, friendly, delightful.]
Back Story
DotDash Bank PLC has launched a new telephone-based banking service. Customers will be able to check balances, order chequebooks and statements, and transfer money, all at the press of a button. Users are presented with lists of choices and select an option by pressing the appropriate touchtone key on their handset. The system development team is certain that the system is technically very good: the speech synthesis used to speak out instructions and options is state of the art, and the database access times are very fast.
The new banking system described is clearly a success from a system point of view: the designers have thought about the technical demands of the system, achieving, for example, high throughput of database queries. How, though, do users feel about the system?
The bank's customers have responded badly to the new system. Firstly, users want to know why the system does not allow them to hear details of their most recent transactions, pay bills, and perform other common functions. Worse still, they find the large number of key-presses needed to find a piece of information tedious and irritating. Often, users get lost in the lists of choices, unsure of where they are in the system and what to do next.
From a human perspective the system is a real failure. It fails because it is not as useful as it might be and has very serious HCI problems: the designers have not fully considered what would be useful and usable from the customers' point of view.
For an interactive system to be useful it should be goal
centered. When a person uses a computer they will have one
or more goals in mind – e.g., ‘work out my expenses for this
month’; ‘buy a book on motor mechanics’. A useful
interactive system is one that empowers users to achieve
their goals. When you build an interactive system you should
make sure you use a range of design and evaluation
methods to discover the goals and associated system
functionality that will make your system useful.
Usability
USABILITY COMPONENTS
• Effectiveness: the completeness with which users achieve their goals.
• Efficiency: the resources expended in effectively achieving those goals.
• Satisfaction: the comfort and acceptability of the system to its users.
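As a rough illustration, the three components can be quantified from usability-test data. The formulas below are common conventions, not prescribed by the text: effectiveness as task completion rate, efficiency as goals achieved per unit time, and satisfaction as a mean questionnaire score.

```python
# Hedged sketch: simple quantifications of the three usability components.
# The formulas are common conventions, not mandated by the section above.

def effectiveness(completed_tasks, total_tasks):
    """Completeness with which users achieve their goals (completion rate)."""
    return completed_tasks / total_tasks

def time_based_efficiency(completed_tasks, total_time_seconds):
    """Goals achieved per unit of time spent."""
    return completed_tasks / total_time_seconds

def satisfaction(questionnaire_scores):
    """Mean of post-test questionnaire ratings (e.g., 1-5 Likert items)."""
    return sum(questionnaire_scores) / len(questionnaire_scores)

# Example session: 8 of 10 tasks done in 600 s, ratings from five users.
print(effectiveness(8, 10))            # 0.8
print(time_based_efficiency(8, 600))   # ~0.013 tasks/second
print(satisfaction([4, 5, 3, 4, 4]))   # 4.0
```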
Usability Study
The methodical study of the interaction between people, products, and environments based on experimental assessment; examples include psychology, behavioural science, etc. A corkscrew is a tool for opening bottles sealed with a cork, and corkscrews are useful tools. However, if you are left-handed, most corkscrews are difficult to use, because they are designed for right-handed people. So, for a left-handed person the corkscrew has low usability (despite being useful). Usability is about building a system that takes account of the users' capabilities and limitations. A system that has good usability is likely to have the following qualities:
ROBUST
A system is robust if a user is given the
means to achieve their goals, to assess
their progress and to recover from any
errors made.
FLEXIBLE
Users should be able to interact with a
system in ways that best suit their needs.
The system should be flexible enough to
permit a range of preferences.
'Interfaces are something we do at the end of software development. We want to make the system look nice for the end user.'
Unfortunately, many analysts and programmers might agree with the above statement. They cannot see the point in spending time and money on seriously considering and involving users in design. Instead they consider that they know what is best for the user and can build effective interfaces without using extensive user-centered methods. However, experience has shown that badly designed interfaces can have serious consequences. If you build poor interfaces you might find:
• Your company loses money as its workforce is less productive than it could be
• The quality of life of the users who use your system is reduced
• Disastrous and possibly fatal errors happen in systems that are safety-critical
Usability Testing
The scientific evaluation of the stated usability parameters against the user's requirements, capabilities, expectations, safety, and satisfaction is known as usability testing. According to the Interaction Design Foundation, the main benefit and purpose of usability testing is to identify usability problems with a design as early as possible, so they can be fixed before the design is implemented or mass-produced. As such, usability testing is often conducted on prototypes rather than finished products, with different levels of fidelity (i.e., detail and finish) depending on the development phase.
Prototypes tend to be more primitive, low-fidelity versions (e.g.,
paper sketches) during early development, and then take the form of
more detailed, high-fidelity versions (e.g., interactive digital mock-
ups) closer to release. To run an effective usability test, you need to
develop a solid test plan, recruit participants, and then analyze and
report your findings.
Acceptance Testing
Acceptance testing, also known as User Acceptance Testing (UAT), is a testing procedure performed by the users as a final checkpoint before signing off with a vendor. Take the example of a handheld barcode scanner: suppose a supermarket has bought barcode scanners from a vendor. The supermarket gathers a team of counter employees and has them test the device in a mock store setting. Through this procedure, the users determine whether the product is acceptable for their needs. The user acceptance testing must 'pass' before they receive the final product from the vendor.
Software Tools
A software tool is a program used to create, maintain, or otherwise support other programs and applications. Some of the software tools commonly used in HCI are as follows:
SOFTWARE TOOLS
• Specification Methods: the methods used to specify the GUI. Even though these are lengthy and ambiguous methods, they are easy to understand.
• Grammars: written instructions or expressions that a program would understand. They provide confirmation of completeness and correctness.
• Transition Diagrams: sets of nodes and links that can be displayed as text, link frequency, state diagrams, etc. They are difficult to evaluate for usability, visibility, modularity, and synchronization.
• Statecharts: chart methods developed for simultaneous user activities and external actions. They provide link specification along with interface building tools.
• Interface Building Tools: design methods that help in designing command languages, data-entry structures, and widgets.
• Interface Mockup Tools: tools to develop a quick sketch of a GUI, e.g., Microsoft Visio, Visual Studio .NET, etc.
• Software Engineering Tools: extensive programming tools to provide a user interface management system.
• Evaluation Tools: tools to evaluate the correctness and completeness of programs.
HCI and Software Engineering
SOFTWARE ENGINEERING is the study of the design, development, and maintenance of software. It comes into contact with HCI in making the interaction between man and machine more vibrant and interactive. Let us look at the following model of software engineering for interactive design.
The Waterfall Method
The Waterfall model is the earliest SDLC approach used for software development. It illustrates the software development process as a linear sequential flow: a phase in the development process begins only when the previous phase is complete, and the phases do not overlap. The Waterfall approach was the first SDLC model to be widely used in software engineering to ensure the success of a project. In the Waterfall approach, the whole process of software development is divided into separate phases, and typically the outcome of one phase acts as the input for the next phase sequentially.
The Waterfall Method: Sequential Phases
• Requirement Gathering and Analysis: all possible requirements of the system to be developed are captured in this phase and documented in a requirement specification document.
• System Design: the requirement specifications from the first phase are studied and the system design is prepared. This design helps in specifying hardware and system requirements and in defining the overall system architecture.
• Implementation: with inputs from the system design, the system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality, which is referred to as Unit Testing.
• Integration and Testing: all the units developed in the implementation phase are integrated into a system after testing of each unit. Post-integration, the entire system is tested for any faults and failures.
• Deployment of System: once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
• Maintenance: some issues come up in the client environment, and patches are released to fix them; better versions are also released to enhance the product. Maintenance delivers these changes into the customer environment.
The unidirectional movement of the Waterfall model shows that every phase depends on the preceding phase and not vice versa. However, this model is not suitable for interactive system design, where every phase depends on the others to serve the purpose of design and product creation. Interactive system design is a continuous process, as there is so much to learn and users keep changing all the time. An interactive system designer should recognize this diversity.
Prototyping
Prototyping is another software engineering model; a prototype can have the complete range of functionality of the projected system. In HCI, a prototype is a trial, partial design that helps users test design ideas without executing a complete system. An example of a prototype is a sketch: sketches of an interactive design can later be developed into a graphical interface. A paper sketch can be considered a Low-Fidelity Prototype, as it uses manual procedures such as sketching on paper. A Medium-Fidelity Prototype involves some but not all procedures of the system, e.g., the first screen of a GUI. Finally, a High-Fidelity Prototype simulates all the functionalities of the system in a design; this kind of prototype requires time, money, and workforce.
User Centered Design (UCD)
The process of collecting feedback from users to
improve the design is known as user centered design
or UCD.
UCD Drawbacks:
• Passive user involvement
• User’s perception about the new interface may be
inappropriate
• Designers may ask incorrect questions to users
Interactive System Design Life Cycle (ISLC)
The stages in the ISLC are repeated until the solution is reached.
GUI Design & Aesthetics
The Graphical User Interface (GUI) is the interface through which a user can operate programs, applications, or devices in a computer system. It is where the icons, menus, widgets, and labels exist for users to access. It is significant that everything in the GUI is arranged in a way that is recognizable and pleasing to the eye, which reflects the aesthetic sense of the GUI designer. GUI aesthetics give a character and identity to any product.
INTERACTIVE
DEVICES
Objectives
• Learn about the several interactive devices used for human-computer interaction
• Understand what these known tools are, and how some have been developed recently or remain concepts to be developed in the future
• Discuss some new and old interactive devices
Overview of Interactive Devices
There are many different types of interaction devices in use and being conceived today. Some are familiar tools from the past and many are still distant concept dreams of the future. Some interactive devices have been developed recently, while others were invented earlier. This section describes some new and old interface devices.
Although users interact physically with a device, what they actually require is for it to execute a use case that accomplishes their need. Hence, users interact logically with the service. Software engineers define the service as a use case that is realized by a certain subsystem/component in the software, while the interface is considered a boundary class during analysis and the user interface during the design and implementation stages.
Keyboard
A keyboard can be considered a primitive device familiar to all of us today. A keyboard uses an arrangement of keys/buttons that serves as a mechanical input device for a computer, where each key corresponds to a single written symbol or character. It is the most effective and ancient interactive device between man and machine, one that has inspired the development of many more interactive devices and has seen advancements itself, such as soft-screen keyboards for computers and mobile phones.
Touch Screen
The touch screen concept was prophesied decades ago, but the platform was realized only recently. Today many devices use touch screens: gadgets like mobile phones, tablets, iPads, etc. use touch screen technology, which allows users to navigate the software installed on their devices with their fingertips.
Unlike earlier personal computer designs, touch screen technology does not need a separate input device such as a mouse or keyboard, as these are already built into the device. After careful selection of these devices, developers customize their touch screen experiences. The cheapest and relatively easy way of manufacturing touch screens is the one using electrodes and a voltage association.
Beyond hardware differences, software alone can bring major differences from one touch device to another, even when the same hardware is used. Along with innovative designs and new hardware and software, touch screens are likely to grow in a big way in the future. Further development can be made by creating a sync between touch and other devices. In HCI, the touch screen can be considered a new interactive device.
Gesture Recognition
Gesture recognition is a topic in language technology with the objective of understanding human movement via mathematical procedures. Hand gesture recognition is currently the field of focus. This technology is future-oriented: it promises an advanced association between human and computer in which no mechanical devices are used. This new interactive approach might displace old devices like keyboards and also bears on newer devices like touch screens. The general definition of gesture recognition is the ability of a computer to understand gestures and execute commands based on those gestures. Most consumers are familiar with the concept through Wii Fit, Xbox, and PlayStation games such as 'Just Dance' and 'Kinect Sports.'
How gesture recognition works
Gesture recognition is an alternative user interface for providing real-time data to a computer. Instead of typing with keys or tapping on a touch screen, a motion sensor perceives and interprets movements as the primary source of data input. This is what happens between the time a gesture is made and the computer reacting. For instance, Kinect looks at a range of human characteristics to provide the best command recognition based on natural human inputs. It provides skeletal and facial tracking in addition to gesture recognition, voice recognition, and in some cases the depth and color of the background scene. Kinect reconstructs all of this data into printable three-dimensional (3D) models. The latest Kinect developments include an adaptive user interface that can detect a user's height.
The pipeline works as follows:
1. A camera feeds image data into a sensing device that is connected to a computer. The sensing device typically uses an infrared sensor or projector to calculate depth.
2. Specially designed software identifies meaningful gestures from a predetermined gesture library, where each gesture is matched to a computer command.
3. The software correlates each registered real-time gesture, interprets it, and uses the library to identify meaningful gestures that match.
4. Once the gesture has been interpreted, the computer executes the command correlated to that specific gesture.
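The matching step of this pipeline can be sketched as a lookup from recognized gestures to commands. This toy Python example only shows the dispatch stage; the gesture names and commands are invented for illustration, and a real system would first classify raw sensor data.

```python
# Toy sketch of the gesture-to-command matching step described above.
# Gesture names and commands are made up for illustration.

GESTURE_LIBRARY = {
    "swipe_left":  "previous_page",
    "swipe_right": "next_page",
    "pinch_in":    "zoom_out",
    "pinch_out":   "zoom_in",
}

def execute(command):
    print(f"executing: {command}")

def handle_gesture(recognized_gesture):
    """Match a recognized gesture against the library and run its command."""
    command = GESTURE_LIBRARY.get(recognized_gesture)
    if command is None:
        print(f"unrecognized gesture: {recognized_gesture}")
    else:
        execute(command)

handle_gesture("pinch_out")   # executing: zoom_in
handle_gesture("wave")        # unrecognized gesture: wave
```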
Who makes gesture recognition software?
Microsoft is leading the charge with Kinect, a gesture recognition platform that allows humans to communicate with computers entirely through speaking and gesturing. Kinect gives computers 'eyes, ears, and a brain.' There are a few other players in the space, such as SoftKinetic, GestureTek, PointGrab, eyeSight, and PrimeSense, an Israeli company acquired by Apple. Emerging technologies from companies such as eyeSight go far beyond gaming to allow for a new level of fine motor precision and depth perception.
Gesture recognition examples beyond gaming
Gesture recognition has huge potential for creating interactive, engaging live experiences. Here are five gesture recognition examples that illustrate its potential to educate, simplify user experiences, and delight consumers.
Changing how we interact with traditional computers - A company named Leap Motion last year introduced the Leap Motion Controller, a gesture-based computer interaction system for PC and Mac. A USB device roughly the size of a Swiss army knife, the controller allows users to interact with traditional computers through gesture control. It is easy to see the live-experience applications of this technology.
In-store retail engagement - Gesture recognition has the power to deliver an exciting, seamless in-store experience. One example uses Kinect to create an engaging retail experience by immersing the shopper in relevant content, helping her try on products and offering a game that allows the shopper to earn a discount incentive.
Windshield wipers - Google and Ford
are also reportedly working on a system
that allows drivers to control features
such as air conditioning, windows and
windshield wipers with gesture controls.
The Cadillac CUE system recognizes
some gestures such as tap, flick, swipe
and spread to scroll lists and zoom in on
maps.
The operating room - Companies such as Microsoft and Siemens are working together to redefine the way everyone from motorists to surgeons accomplishes highly sensitive tasks. These companies have been focused on refining gesture recognition technology for fine motor manipulation of images, enabling a surgeon to virtually grasp and move an object on a monitor.
Sign language interpreter - There are several examples of using gesture recognition to bridge the gap between deaf and non-deaf people who may not know sign language. One example, from Dani Martinez Capilla, shows how Kinect can understand and translate sign language, exploring the notion of breaking down communication barriers using gesture recognition.
Mobile payments - Seeper, a London-based startup, has created a technology called Seemove that goes beyond image and gesture recognition to object recognition. Ultimately, Seeper believes that their system could allow people to manage personal media, such as photos or files, and even initiate online payments using gestures.
Speech Recognition
The technology of transcribing spoken phrases into written text is speech recognition. Such technology can be used for advanced control of many devices, such as switching electrical appliances on and off. Only certain commands need to be recognized, rather than a complete transcription; however, this approach is not beneficial for large vocabularies. This HCI modality helps the user operate hands-free and keeps instruction-based technology up to date with its users.
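A command-style recognizer of the kind described, one that matches a small fixed vocabulary rather than transcribing everything, can be sketched as below. The phrases and appliance names are hypothetical, and the input here is already-transcribed text; a real system would start from audio.

```python
# Sketch of small-vocabulary command recognition for appliance control.
# Input is already-transcribed text; a real system would start from audio.

COMMANDS = {
    ("turn", "on", "light"):  ("light", True),
    ("turn", "off", "light"): ("light", False),
    ("turn", "on", "fan"):    ("fan", True),
    ("turn", "off", "fan"):   ("fan", False),
}

def interpret(utterance):
    """Keep only known keywords, then look up the command tuple."""
    keywords = tuple(w for w in utterance.lower().split()
                     if w in {"turn", "on", "off", "light", "fan"})
    return COMMANDS.get(keywords)

print(interpret("Please turn on the light"))   # ('light', True)
print(interpret("turn the fan off"))           # None (word order differs)
```

The failed second lookup illustrates why even a small command vocabulary needs some tolerance for word order and phrasing.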
Application of Gesture Recognition
Kinect: a motion-sensing console launched by Microsoft as an extension of the Microsoft Xbox 360 game console. Its main function is to enable you to control the Xbox through voice or gestures rather than physically using the controller. Kinect is based on technologies developed by Microsoft and PrimeSense. It makes use of an infrared projector that can read your gestures, enabling complete hands-free control of the gadget or game you are playing. Microsoft has already sold more than 18 million Kinect units and plans to implement the same system and technology for its PC, releasing it in February this year.
EON Interactive Mirror: the EON Interactive Mirror enables customers to virtually try on clothes, dresses, handbags, and accessories using gesture-based interaction. Changing from one dress to another is just a 'swipe' away, offering endless possibilities for mixing designs and accessories in a fun, quick, and intuitive way. Customers can snap a picture of their current selections and share it on Facebook or other social media to get instant feedback from friends. The EON Interactive Mirror is growing in popularity in the amusement and retail industries, where it engages crowds through various interactive applications.
Response Time
Response time is the time taken by a device to respond to a request. The request can be anything from a database query to loading a web page. Response time is the sum of service time and wait time; transmission time becomes part of the response time when the response has to travel over a network. In modern HCI devices, several applications are installed and most of them function simultaneously or as per the user's usage, which makes response times longer. The increase in response time is caused by an increase in wait time, which is due to the requests currently running and the queue of requests behind them.
So it is important that the response time of a device be fast, which is why advanced processors are used in modern devices. According to Jakob Nielsen, the basic advice regarding response times has been about the same for thirty years [Miller 1968; Card et al. 1991]:
• 0.1 second: about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.
• 1.0 second: about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.
• 10 seconds: about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.
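These three limits map directly onto feedback decisions in an interface. A small Python sketch follows; the thresholds come from the guidance above, while the function names and feedback strings are hypothetical.

```python
import time

# Map Nielsen's response-time limits to feedback decisions.
# Thresholds follow the text above; names and messages are illustrative.

def feedback_for(delay_seconds):
    if delay_seconds <= 0.1:
        return "none needed: feels instantaneous"
    if delay_seconds <= 1.0:
        return "no special feedback, but feeling of direct manipulation is lost"
    if delay_seconds <= 10.0:
        return "show a busy indicator to keep attention on the dialogue"
    return "show progress and estimated completion time"

def timed(operation):
    """Run an operation and report the appropriate feedback level."""
    start = time.perf_counter()
    result = operation()
    elapsed = time.perf_counter() - start
    return result, elapsed, feedback_for(elapsed)

result, elapsed, advice = timed(lambda: sum(range(1_000_000)))
print(f"{elapsed:.4f}s -> {advice}")
```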
DESIGN PROCESS
AND TASK ANALYSIS
LESSON OBJECTIVES
1. Understand the different characteristics of engineering task models and methodologies
2. Understand the design process and task analysis, which play an important part in user requirements analysis
3. Learn about the basic activities of interaction design and the principles of the user-centered approach
4. Know the different methodologies being used in HCI design
HCI Design
HCI design is considered a problem-solving process with components such as planned usage, target area, resources, cost, and viability. It decides on the requirements of product similarities to balance trade-offs. The design process comprises four basic activities:
1. Identifying requirements
2. Building alternative designs
3. Developing interactive versions of the designs
4. Evaluating designs
Three principles for a user-centered approach:
1. Early focus on users and tasks
2. Empirical measurement
3. Iterative design
Design Methodologies
Various methodologies outlining the techniques for human-computer interaction have materialized since its inception. The following are a few design methodologies:
Activity Theory
This is an HCI method that describes
the framework where the human
computer interactions take place.
Activity theory provides reasoning,
analytical tools and interaction
designs.
User-Centered Design
It provides users the center-stage in
designing where they get the
opportunity to work with designers
and technical practitioners.
Principles of User Interface
Design
Tolerance, simplicity, visibility,
affordance, consistency, structure
and feedback are the seven principles
used in interface designing.
Value Sensitive Design
This method is used for developing
technology and includes three types
of studies − conceptual, empirical
and technical.
Value Sensitive Design
• Conceptual investigations work towards understanding the values of the stakeholders who use the technology.
• Empirical investigations are qualitative or quantitative design research studies that show the designer's understanding of the users' values.
• Technical investigations involve the use of the technologies and designs from the conceptual and empirical investigations.
Participatory Design
The participatory design process involves all stakeholders in the design process, so that the end result meets the needs they desire. This approach is used in various areas such as software design, architecture, landscape architecture, product design, sustainability, graphic design, planning, urban design, and even medicine. Participatory design is not a style; it focuses on the processes and procedures of designing. It is seen as a way of sharing design accountability and origination rather than leaving these with designers alone.
Task Analysis
Task analysis plays an important part in user requirements analysis. It is the procedure for learning about the users and abstract frameworks, the patterns used in workflows, and the chronological implementation of interaction with the GUI. It analyzes the ways in which users partition tasks and sequence them.
What is a TASK?
Human actions that contribute to a useful objective, aimed at the system, constitute a task. Task analysis defines the performance of users, not computers.
Purpose of Task Analysis
In their book User and Task Analysis for Interface Design, JoAnn Hackos and Janice Redish note that performing a task analysis helps you understand:
• What your users' goals are; what they are trying to achieve
• What users actually do to achieve those goals
• What experiences (personal, social, and cultural) users bring to the tasks
• How users are influenced by their physical environment
• How users' previous knowledge and experience influence how they think about their work and the workflow they follow to perform their tasks
When to Perform a Task Analysis
It's important to perform a task analysis early in your process, in particular prior to design work. Task analysis helps support several other aspects of the user-centered design process, including:
• Website requirements gathering
• Developing your content strategy and site structure
• Wireframing and prototyping
• Performing usability testing
Types of Task Analysis
There are several types of task analysis, but among the most common techniques used are:
• Hierarchical Task Analysis: focused on decomposing a high-level task into subtasks.
• Cognitive Task Analysis: focused on understanding tasks that require decision-making, problem-solving, memory, attention, and judgement.
How to Conduct a Task Analysis
Your task analysis may have several levels of inquiry, from general to very specific. In addition to market research, competitive analysis, and web metrics analysis, you can identify top tasks through various user research techniques. UXPA's Usability Body of Knowledge breaks down the process of decomposing a high-level task into the following steps:
1. Identify the task to be analyzed.
2. Break this high-level task down into 4 to 8 subtasks. The subtasks should be specified in terms of objectives and, between them, should cover the whole area of interest.
3. Draw a layered task diagram of each subtask, ensuring that it is complete.
4. Produce a written account as well as the decomposition diagram.
5. Present the analysis to someone else who has not been involved in the decomposition but who knows the tasks well enough to check for consistency.
It's important to note that you need to decide to what level of detail you are going to decompose subtasks, so that you can ensure you are consistent across the board.
Hierarchical Task Analysis
Hierarchical Task Analysis is the procedure of decomposing tasks into subtasks that can be analyzed in a logical sequence of execution. This helps in achieving the goal in the best possible way. 'A hierarchy is an organization of elements that, according to prerequisite relationships, describes the path of experiences a learner must take to achieve any single behavior that appears higher in the hierarchy' (Seels & Glasgow, 1990, p. 94).
Techniques for Analysis
• Task decomposition: splitting tasks into subtasks in sequence.
• Knowledge-based techniques: any instructions that users need to know. The 'user' is always the starting point for a task.
• Ethnography: observation of users' behaviour in the context of use.
• Protocol analysis: observation and documentation of the actions of the user. This is achieved by examining the user's thinking: the user is made to think aloud so that the user's mental logic can be understood.
Engineering Task Models
Unlike Hierarchical Task Analysis, engineering task models can be specified formally and are more useful.
Characteristics of Engineering Task Models
• Engineering task models have flexible notations that describe the possible activities clearly.
• They have organized approaches to support the requirements, analysis, and use of task models in design.
• They support the reuse of in-condition design solutions to problems that occur across applications.
• Finally, they make automatic tools available to support the different phases of the design cycle.
ConcurTaskTree (CTT)
CTT is an engineering methodology used for modeling a task; it consists of tasks and operators. Operators in CTT are used to portray the chronological associations between tasks. The key features of a CTT are:
• Focus on actions that users wish to accomplish
• Hierarchical structure
• Graphical syntax
• Rich set of sequential operators
ConcurTaskTree (CTT) Example
A person preparing an overhead projector for use would be seen to carry out the following actions:
1. Plug in to the mains and switch on the supply
2. Locate the on/off switch on the projector
3. Discover which way to press the switch
4. Press the switch for power
5. Put on the slide and orientate it correctly
6. Align the projector on the screen
7. Focus the slide
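The projector task can also be written down as a small hierarchy. Here is a Python sketch; the grouping into two subtasks is illustrative, chosen for this example rather than prescribed by the text.

```python
# The projector example as a task hierarchy (HTA-style decomposition).
# The grouping into subtasks is illustrative, not prescribed by the text.

TASK = ("prepare overhead projector", [
    ("switch projector on", [
        ("plug in to mains and switch on supply", []),
        ("locate on/off switch on projector", []),
        ("discover which way to press the switch", []),
        ("press the switch for power", []),
    ]),
    ("show the slide", [
        ("put on the slide and orientate correctly", []),
        ("align the projector on the screen", []),
        ("focus the slide", []),
    ]),
])

def print_plan(task, depth=0):
    """Print the hierarchy; leaves are executed in listed (sequential) order."""
    name, subtasks = task
    print("  " * depth + name)
    for sub in subtasks:
        print_plan(sub, depth + 1)

print_plan(TASK)
```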
In human-computer interaction, task analysis is the recording of the physical and perceptual actions of the user whilst executing a task. Consider another example of task analysis: at a bare minimum, to identify tasks you can simply ask users what overall tasks they are trying to accomplish, or how they currently accomplish a task.
What overall tasks are users trying to accomplish on our website? For example:
• Trying to find a nursing home near you for an elderly relative
• Trying to get information about options for treatment for skin cancer
• Trying to sign up to receive an email notice when a payment is due
How are users currently completing the task? People are completing that task by:
• Using a search engine
• Navigating through your site
• Using another site
• (Through some other means)
DIALOGUE DESIGN
INTRODUCTION
HUMAN COMPUTER INTERACTION
Lesson objectives
• Learn all the aspects of dialog levels and representation
• Introduce formalism techniques that we can use to represent dialogs
• Learn about visual materials used in the communication process
• Understand direct manipulation as a good form of interface design
• Know the sequence of item presentation
• Understand the proper use of menu layouts and form fill-in dialog boxes
Dialog Representation
To represent dialogs, we need formal techniques that serve two purposes:
• They help in understanding the proposed design in a better way.
• They help in analyzing dialogs to identify usability issues; e.g., questions such as 'does the design actually support undo?' can be answered.
Introduction to Formalism
There are many formalism techniques that we can use to represent dialogs. Here we will discuss three of them:
• State transition networks (STN)
• StateCharts
• Classical Petri nets
State Transition Network (STN)
STNs are the most intuitive of these techniques: they capture the idea that a dialog fundamentally denotes a progression from one state of the system to the next. The syntax of an STN consists of the following two entities:
• Circles: a circle refers to a state of the system, which is labeled by giving the state a name.
• Arcs: the circles are connected with arcs that refer to the action/event causing the transition from the state where the arc originates to the state where it ends.
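An STN maps naturally onto a transition table: states (circles) keyed to the events (arcs) leading out of them. A minimal Python sketch follows; the states and events are invented for illustration.

```python
# Minimal state transition network as a dict of dicts:
# STN[state][event] -> next state. States/events are invented examples.

STN = {
    "menu":    {"select_draw": "drawing", "select_text": "typing"},
    "drawing": {"mouse_up": "menu"},
    "typing":  {"escape": "menu"},
}

def run_dialog(start, events):
    """Follow arcs for each event; reject events with no outgoing arc."""
    state = start
    for event in events:
        if event not in STN[state]:
            raise ValueError(f"no arc for {event!r} in state {state!r}")
        state = STN[state][event]
        print(f"{event} -> {state}")
    return state

run_dialog("menu", ["select_draw", "mouse_up", "select_text", "escape"])
```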
StateCharts
StateCharts represent complex reactive systems; they extend Finite State Machines (FSM), handle concurrency, and add memory to the FSM. They also simplify the representation of complex systems. StateCharts have the following states:
• Active state: the present state of the underlying FSM.
• Basic states: individual states that are not composed of other states.
• Super states: states composed of other states. For each basic state b, the super state containing b is called the ancestor state. A super state is called an OR super state if exactly one of its sub-states is active whenever it is active.
Let us consider the StateChart construction of a machine that dispenses bottles on inserting coins.
[Figure: StateChart of a bottle-dispensing machine]
The diagram explains the entire procedure of the bottle-dispensing machine. On pressing the button after inserting a coin, the machine toggles between bottle-filling and dispensing modes. When a requested bottle is available, it dispenses the bottle. In the background, another procedure runs in which any stuck bottle is cleared. The 'H' symbol in step 4 indicates that a procedure is added to History for future access.
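The super-state/history idea can be sketched in Python. The class below is a loose approximation of the bottle machine with invented event names; the point is that the super state remembers its last active sub-state, which is what the 'H' (history) symbol denotes.

```python
# Loose sketch of a statechart super state with a history pseudo-state.
# Event and state names are invented; this approximates the bottle machine.

class BottleMachine:
    def __init__(self):
        self.state = "idle"
        self.history = "filling"   # default sub-state of the super state

    def insert_coin(self):
        # Entering the super state resumes the remembered sub-state (H).
        self.state = self.history

    def press_button(self):
        if self.state in ("filling", "dispensing"):
            self.state = "dispensing" if self.state == "filling" else "filling"
            self.history = self.state

    def bottle_stuck(self):
        self.state = "clearing"    # interrupting background activity

    def cleared(self):
        self.state = self.history  # history brings us back where we were

m = BottleMachine()
m.insert_coin(); m.press_button()   # now dispensing
m.bottle_stuck(); m.cleared()
print(m.state)                       # dispensing (restored via history)
```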
Petri Nets
A Petri Net is a simple model of active behavior, with four elements: places, transitions, arcs, and tokens. Petri Nets provide a graphical explanation for easy understanding.
• Place: used to symbolize the passive elements of the reactive system. A place is represented by a circle.
• Transition: used to symbolize the active elements of the reactive system. Transitions are represented by squares/rectangles.
• Arc: used to represent causal relations. An arc is represented by an arrow.
• Token: this element is subject to change. Tokens are represented by small filled circles.
Petri Nets were developed originally by Carl Adam Petri [Pet62] and were the subject of his dissertation in 1962. Since then, Petri Nets and their concepts have been extended, developed, and applied in a variety of areas: office automation, workflows, flexible manufacturing, programming languages, protocols and networks, hardware structures, real-time systems, performance evaluation, operations research, embedded systems, defense systems, telecommunications, the Internet, e-commerce and trading, railway networks, and biological systems. One example is a Petri Net model for the control of a metabolic pathway, built with the tool Visual Object Net++.
[Figure: Petri Net model for the control of a metabolic pathway (Visual Object Net++)]
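The four elements map directly onto a tiny simulator: places hold tokens, and a transition fires when all of its input places have tokens, consuming and producing tokens along the arcs. A hedged Python sketch follows; the example net (coin plus idle machine yields a delivered bottle) is invented.

```python
# Tiny Petri net simulator: places hold token counts; a transition fires
# when every input place has a token, consuming and producing along arcs.
# The example net (coin + idle machine -> bottle delivered) is invented.

places = {"coin_inserted": 1, "machine_idle": 1, "bottle_delivered": 0}

transitions = {
    "vend": {"inputs": ["coin_inserted", "machine_idle"],
             "outputs": ["bottle_delivered", "machine_idle"]},
}

def enabled(name):
    return all(places[p] > 0 for p in transitions[name]["inputs"])

def fire(name):
    """Consume one token per input arc, produce one per output arc."""
    if not enabled(name):
        raise RuntimeError(f"transition {name!r} is not enabled")
    for p in transitions[name]["inputs"]:
        places[p] -= 1
    for p in transitions[name]["outputs"]:
        places[p] += 1

fire("vend")
print(places)  # {'coin_inserted': 0, 'machine_idle': 1, 'bottle_delivered': 1}
```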
Visual Thinking
Visual materials have assisted in the communication process for ages, in the form of paintings, sketches, maps, diagrams, photographs, etc. In today's world, with the invention of technology and its further growth, new possibilities are offered for visual information, such as thinking and reasoning. As per studies, the power of visual thinking in human-computer interaction (HCI) design is still not fully explored. So, let us learn the theories that support visual thinking in sense-making activities in HCI design. An initial terminology for talking about visual thinking has been developed, including concepts such as visual immediacy, visual impetus, visual impedance, and visual metaphors, analogies, and associations, in the context of information design for the web.
Visual thinking is the use of imagery and other visual forms to make sense of the world and to create meaningful content. Digital imagery is a special form of visual thinking, one that is particularly salient for HCI and interaction design. Digital photographs qualify as digital imagery only when they are also visual thinking, that is, when they are instrumental in making sense or creating meaning. As such, this design process became well suited as a logical and collaborative method during design. Let us discuss the concepts individually.
Visual immediacy
A reasoning process that helps in understanding the information in a visual representation. The term is chosen to highlight its time-related quality, which also serves as an indicator of how well the design has facilitated the reasoning.
Visual Impetus
Visual impetus is defined as a
stimulus that aims at the
increase in engagement in the
contextual aspects of the
representation.
Visual
Impedance
It is perceived as the opposite of visual
immediacy as it is a hindrance in the
design of the representation. In relation
to reasoning, impedance can be
expressed as a slower cognition.
Visual Metaphors,
Association, Analogy,
Abduction and Blending
- When a visual demonstration is used to
understand an idea in terms of another
familiar idea it is called a visual metaphor.
- Visual analogy and conceptual blending
are similar to metaphors. Analogy can be
defined as an implication from one
particular to another. Conceptual blending
can be defined as combination of
elements and vital relations from varied
situations.
HCI design can benefit greatly from the use of the above-mentioned concepts. The concepts are pragmatic in supporting the use of visual procedures in HCI, as well as in the design processes.
Direct Manipulation
Programming
The action of using your fingertips to zoom in and out of an image is an example of a direct manipulation interaction. Another classic example is dragging a file from one folder to another in order to move it.
Definition: Direct manipulation (DM)
is an interaction style in which users
act on displayed objects of interest
using physical, incremental,
reversible actions whose effects are
immediately visible on the screen.
Ben Shneiderman first coined the
term “direct manipulation” in the
early 1980s, at a time when the
dominant interaction style was the
command line. In command-line
interfaces, the user must remember
the system label for a desired action,
and type it in together with the
names for the objects of the action.
Direct manipulation is one of the
central concepts of graphical user
interfaces (GUIs) and is sometimes
equated with “what you see is what you
get” (WYSIWYG). These interfaces
combine menu based interaction with
physical actions such as dragging and
dropping in order to help the user use
the interface with minimal learning.
Characteristics of Direct Manipulation
• Continuous representation of the object of interest
• Physical actions instead of complex syntax
• Continuous feedback and reversible, incremental actions
• Rapid learning
Continuous
representation of the
object of interest
Users can see visual representations of the objects that
they can interact with. As soon as they perform an action,
they can see its effects on the state of the system. For
example, when moving a file using drag-and-drop, users
can see the initial file displayed in the source folder, select
it, and, as soon as the action was completed, they can see
it disappear from the source and appear in the destination
— an immediate confirmation that their action had the
intended result. Thus, direct-manipulation UIs satisfy, by
definition, the first usability heuristic: the visibility of the
system status. In contrast, in a command line interface,
users usually must explicitly check that their actions had
indeed the intended result (for example, by listing the
content of the destination directory).
Physical actions instead
of complex syntax
Actions are invoked physically via clicks, button
presses, menu selections, and touch gestures. In the
move-file example, drag-and-drop has a direct analog
in the real world, so this implementation for the move
action has the right signifiers and can be easily learned
and remembered. In contrast, the command-line
interface requires users to recall not only the name of
the command (“mv”), but also the names of the objects
involved (files and paths to the source and destination
folders). Thus, unlike DM interfaces, command-line
interfaces are based on recall instead of recognition
and violate an important usability heuristic.
Continuous feedback and
reversible, incremental
actions
Because of the visibility of the system state, it’s easy to
validate that each action caused the right result. Thus, when
users make mistakes, they can see right away the cause of
the mistake and they should be able to easily undo it. In
contrast, with command-line interfaces, one single user
command may have multiple components that can cause the
error. For instance, in the example below, the name of the
destination folder contains a typo “Measuring Usablty”
instead of “Measuring Usability”. The system simply
assumed that the file name should be changed to “Measuring
Usablty”. If users check the destination folder, they will
discover that there was a problem, but will have no way of
knowing what caused it: did they use the wrong command,
the wrong source filename, or the wrong destination?
Rapid learning
Because the objects of interest and the
potential actions in the system are visually
represented, users can use recognition
instead of recall to see what they could do
and select an operation most likely to fulfill
their goal. They don’t have to learn and
remember complex syntax. Thus, although
direct-manipulation interfaces may require
some initial adjustment, the learning
required is likely to be less substantial.
Direct Manipulation vs. Skeuomorphism
When direct manipulation first appeared, it was based on the office-desk metaphor: the computer screen was an office desk, and different documents (or files) were placed in folders, moved around, or thrown to trash. This underlying metaphor indicates the skeuomorphic origin of the concept. The DM systems described originally by Shneiderman are also skeuomorphic, that is, based on resemblance to a physical object in the real world. Thus, he talks about software interfaces that copy Rolodexes and physical checkbooks to support tasks done (at the time) with these tools. As we all know, skeuomorphism saw a huge revival in the early iPhone days, and has now gone out of fashion.
While skeuomorphic interfaces are indeed
based on direct manipulation, not all direct
manipulation interfaces need to be
skeuomorphic. In fact, today’s flat interfaces
are a reaction to skeuomorphism and depart
from the real-world metaphors, yet they do
rely on direct manipulation.
Disadvantages of Direct Manipulation
Almost every DM characteristic has a directly corresponding disadvantage:
• Continuous representation of the objects? It means that you can only act on the small number of objects that can be seen at any given time. Objects that are out of sight, but not out of mind, can only be dealt with after the user has laboriously navigated to the place that holds them so that they can be made visible.
• Physical actions? One word: RSI (repetitive strain injury). It's a lot of work to move all those icons and sliders around the screen. Actually, two more words: accidental activation, which is particularly common on touchscreens but can also happen on mouse-driven systems.
• Continuous feedback? Only if you attempt an operation that the system feels like letting you do. If you want to do something that's not available, you can push and drag buttons and icons as much as you want with no effect whatsoever. No feedback, only frustration. (A good UI will show in-context help to explain why the desired action isn't available and how to enable it. Sadly, UIs this good are not very common.)
• Rapid learning? Yes, if the design is good, but in practice learnability depends on how well designed the interface is. We've all seen menus with poorly chosen labels, buttons that did not look clickable, or drop-down boxes with more options than the length of the screen.
And there are even more disadvantages:
• DM is slow. If the user needs to perform a large number of actions on many objects, direct manipulation takes a lot longer than a command-line UI. Have you encountered any software engineers who use DM to write their code? Sure, they might use DM elements in their software development interfaces, but the majority of the code will be typed in.
• Repetitive tasks are not well supported. DM interfaces are great for novices because they are easy to learn, but because they are slow, experts who have to perform the same set of tasks with high frequency usually rely on keyboard shortcuts, macros, and other command-language interactions to speed up the process. For example, when you need to send an email attachment to one recipient, it is easy to drag the desired file and drop it into the attachment section. However, if you needed to do this for 50 different recipients with customized subject lines, a macro or script would be faster and less tedious.
• Some gestures can be more error-prone than typing. Whereas in theory, because of the continuous feedback, DM minimizes the chance of certain errors, in practice there are situations when a gesture is harder to perform than typing equivalent information. For example, good luck trying to move the 50th column of a spreadsheet into the 2nd position using drag and drop. For this exact reason, Netflix offers three interaction techniques for reordering subscribers' DVD queues: dragging the movie to the desired position (easy for short moves), a one-button shortcut for moving into the #1 position (handy when you must watch a particular movie ASAP), and the indirect option of typing the number of the desired new position (useful in most other cases).
• Accessibility may suffer. DM UIs may fail visually impaired users or users with motor skill impairments, especially if they are heavily based on physical actions, as opposed to button presses and menu selections. (Workarounds exist, but they can be difficult to implement.)
That said, direct manipulation has been acclaimed as a good form of interface design and is well received by users. Such processes use many sources to get the input and finally convert them into the output desired by the user, using built-in tools and programs. 'Directness' has been considered a phenomenon that contributes majorly to manipulation programming. It has the following two aspects: distance and direct engagement.
Distance
Distance describes the gulfs between a user's goal and the level of description provided by the system with which the user deals. These are referred to as the Gulf of Execution and the Gulf of Evaluation.
The Gulf of Execution
The Gulf of Execution is the gap between a user's goal and the means the device provides to implement that goal. One of the principal objectives of usability is to diminish this gap by removing barriers and following steps that minimize the user's distraction from the intended task, which would otherwise prevent the flow of work.
The Gulf of Evaluation
The Gulf of Evaluation is the gap between the system's representation of its state and the user's expectations and interpretation of it. As per Donald Norman, 'The gulf is small when the system provides information about its state in a form that is easy to get, is easy to interpret, and matches the way the person thinks of the system.'
Direct Engagement
Direct engagement occurs when the design gives the user direct control of the objects presented, making the system less difficult to use. Scrutiny of the execution and evaluation process illuminates the effort involved in using a system; it also suggests ways to minimize the mental effort required to use a system.
Problems with Direct Manipulation
• Even though the immediacy of response and the conversion of objectives into actions has made some tasks easy, not all tasks should be done this way. For example, a repetitive operation is probably best done via a script and not through immediacy.
• Direct manipulation interfaces find it hard to manage variables, or the illustration of discrete elements from a class of elements.
• Direct manipulation interfaces may not be accurate, as the dependency is on the user rather than on the system.
• An important problem with direct manipulation interfaces is that they directly support only the techniques the user thinks of.
Item Presentation Sequence
In HCI, the presentation sequence can be planned according to the task or application requirements. The natural sequence of items in the menu should be taken care of. The main factors in presentation sequence are:
• Time
• Numeric ordering
• Physical properties
A designer must select one of the following options when there is no task-related arrangement:
• Alphabetic sequence of terms
• Grouping of related items
• Most frequently used items first
• Most important items first
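When there is no task-related ordering, these fallback orderings are simple sorts over the item list. A brief Python sketch follows; the items and their usage/importance values are invented for illustration.

```python
# Sketch of the fallback menu orderings listed above. Data is invented.

items = [
    {"label": "Print",   "uses": 120, "importance": 2},
    {"label": "Export",  "uses": 15,  "importance": 3},
    {"label": "Archive", "uses": 40,  "importance": 1},
]

alphabetic     = sorted(items, key=lambda i: i["label"].lower())
most_frequent  = sorted(items, key=lambda i: i["uses"], reverse=True)
most_important = sorted(items, key=lambda i: i["importance"])  # 1 = highest

print([i["label"] for i in alphabetic])      # ['Archive', 'Export', 'Print']
print([i["label"] for i in most_frequent])   # ['Print', 'Archive', 'Export']
print([i["label"] for i in most_important])  # ['Archive', 'Print', 'Export']
```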
Menu Layout
Helping users navigate should be a high priority for almost every website and application. After all, even the coolest feature or the most compelling content is useless if people can't find it. And even if you have a search function, you usually shouldn't rely on search as the only way to navigate. Most designers recognize this, and include some form of navigation menu in their designs.
Definition: Navigation menus are lists of content categories or features, typically presented as a set of links or icons grouped together with visual styling distinct from the rest of the design.
Navigation menus include, but are not limited to, navigation bars and hamburger
menus.
Menus are so important that you find them in virtually every website or piece of
software you encounter, but not all menus are created equal. Too often we
observe users struggling with menus that are confusing, difficult to manipulate, or
simply hard to find.
Avoid common mistakes by following these guidelines for usable navigation
menus:
A. Make It Visible
1. Don’t use tiny menus (or menu icons) on large screens. Menus shouldn’t be
hidden when you have plenty of space to display them.
2. Put menus in familiar locations. Users expect to find UI elements where
they’ve seen them before on other sites or apps (e.g., left rail, top of the
screen). Make these expectations work in your favor by placing your menus
where people expect to find them.
3. Make menu links look interactive. Users may not even realize that it’s a menu
if the options don’t look clickable (or tappable). Menus may seem to be just
decorative pictures or headings if you incorporate too many graphics, or
adhere too strictly to principles of flat design.
A. Make It Visible
4. Ensure that your menus have enough visual weight. In many cases menus
that are placed in familiar locations don’t require much surrounding white
space or color saturation in order to be noticeable. But if the design is
cluttered, menus that lack visual emphasis can easily be lost in a sea of
graphics, promotions, and headlines that compete for the viewer’s attention.
5. Use link text colors that contrast with the background color. It’s amazing how
many designers ignore contrast guidelines; navigating through digital space is
disorienting enough without having to squint at the screen just to read the
menu. (A contrast-checking sketch follows this section.)
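Guideline 5 can even be checked mechanically. The following TypeScript sketch computes the contrast ratio defined by WCAG 2.x from the relative luminance of two colors; the sample colors are arbitrary, and 4.5:1 is the WCAG threshold for normal body text.

```ts
// Check link/background contrast per the WCAG 2.x definition:
// ratio = (L_lighter + 0.05) / (L_darker + 0.05), where L is relative luminance.
function luminance(r: number, g: number, b: number): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const [l1, l2] = [luminance(...fg), luminance(...bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

// Light gray text on white fails; near-black on white passes (>= 4.5:1).
console.log(contrastRatio([170, 170, 170], [255, 255, 255]).toFixed(2)); // ~2.32
console.log(contrastRatio([33, 33, 33], [255, 255, 255]).toFixed(2));    // ~16.1
```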
B. Communicate the Current Location
6. Tell users ‘where’ the currently visible screen is located within the menu
options. “Where am I?” is one of the fundamental questions users need to
answer to successfully navigate. Users rely on visual cues from menus (and
other navigation elements such as breadcrumbs) to answer this critical
question. Failing to indicate the current location is probably the single most
common mistake we see on website menus. Ironically, these menus have the
greatest need to orient users, since visitors often don’t enter through the
homepage. (A sketch of marking the current location follows this section.)
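Guideline 6 is often implemented by marking the active link in the menu. Here is a small TypeScript sketch, assuming hypothetical markup with a nav element whose id is main-nav: it highlights the link that matches the current URL and exposes the state to assistive technology through the standard aria-current attribute.

```ts
// Mark "where am I?" in the navigation menu: highlight the link whose
// href matches the current page and expose it via aria-current.
// Assumes <nav id="main-nav"> containing <a> links (hypothetical markup).
const links = document.querySelectorAll<HTMLAnchorElement>("#main-nav a");

links.forEach((link) => {
  if (new URL(link.href).pathname === window.location.pathname) {
    link.setAttribute("aria-current", "page"); // screen readers announce it
    link.classList.add("is-current");          // hook for visual styling
  } else {
    link.removeAttribute("aria-current");
    link.classList.remove("is-current");
  }
});
```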
C. Coordinate Menus with User Tasks
7. Use understandable link labels. Figure out what users are looking for, and use
category labels that are familiar and relevant. Menus are not the place to get
cute with made-up words and internal jargon. Stick to terminology that clearly
describes your content and features.
8. Make link labels easy to scan. You can reduce the amount of time users need to
spend reading menus by left-justifying vertical menus and by front-loading key
terms.
9. For large websites, use menus to let users preview lower-level content. If typical
user journeys involve drilling down through several levels, mega-menus (or
traditional drop-downs) can save users time by letting them skip a level (or two).
C. Coordinate Menus with User Tasks
10. Provide local navigation menus for closely related content. If people frequently
want to compare related products or complete several tasks within a single
section, make those nearby pages visible with a local navigation menu, rather
than forcing people to ‘pogo stick’ up and down your hierarchy.
11. Leverage visual communication. Images, graphics, or colors that help users
understand the menu options can aid comprehension. But make sure the
images support user tasks (or at least don't make the tasks more difficult).
D. Make It Easy to Manipulate
12. Make menu links big enough to be easily tapped or clicked. Links that are too small or
too close together are a huge source of frustration for mobile users, and also make
large-screen designs unnecessarily difficult to use.
13. Ensure that drop-downs are not too small or too big. Hover-activated drop-downs that
are too short quickly become an exercise in frustration, because they tend to disappear
while you’re trying to mouse over them to click a link. On the other hand, vertical drop-
downs that are too long make it difficult to access links near the bottom of the list,
because they may be cut off below the edge of the screen and require scrolling. Finally,
hover-activated drop-downs that are too wide are easily mistaken for new pages,
creating user confusion about why the page has seemingly changed even though they
haven’t clicked anything. (A hover-delay sketch follows this section.)
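A common defense against the disappearing hover menu described in guideline 13 is a short close delay, so that briefly straying off the menu does not dismiss it. The sketch below assumes hypothetical element ids and a 300 ms grace period.

```ts
// Keep a hover-activated drop-down open through brief mouse excursions:
// close only after the pointer has been away for a short grace period.
// Assumes <li id="menu-root"> containing <ul id="menu-panel"> (hypothetical).
const root = document.getElementById("menu-root")!;
const panel = document.getElementById("menu-panel")!;
const GRACE_MS = 300; // assumed grace period
let closeTimer: number | undefined;

root.addEventListener("mouseenter", () => {
  window.clearTimeout(closeTimer);
  panel.style.display = "block";
});

root.addEventListener("mouseleave", () => {
  // Don't close immediately; give the user time to come back.
  closeTimer = window.setTimeout(() => {
    panel.style.display = "none";
  }, GRACE_MS);
});
```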
D. Make It Easy to Manipulate
14. Consider ‘sticky’ menus for long pages. Users who have reached the bottom of a
long page may face a lot of tedious scrolling before they can get back to the
menus at the top. Menus that remain visible at the top of the viewport even after
scrolling can solve that problem and are especially welcome on smaller screens
(a sticky-menu sketch follows this section).
15. Optimize for easy physical access to frequently used commands. For drop-down
menus, this means putting the most common items close to the link target that
launches the drop-down (so the user’s mouse or finger won’t have to travel as
far). Recently, some mobile apps have even begun reviving pie menus, which
keep all the menu options nearby by arranging them in a circle (or semicircle).
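Guideline 14 is mostly a one-line CSS affair in modern browsers. The hypothetical TypeScript sketch below applies position: sticky and, optionally, flags when the menu is actually stuck so it can be styled differently; the nav id is an assumption.

```ts
// 'Sticky' menu sketch: keep the navigation visible after scrolling.
// Assumes <nav id="top-nav"> at the top of a long page (hypothetical id).
const nav = document.getElementById("top-nav") as HTMLElement;

// Modern browsers: CSS alone does the work.
nav.style.position = "sticky";
nav.style.top = "0";

// Optional: add a class once the menu is actually "stuck", using a
// zero-height sentinel placed just above the nav.
const sentinel = document.createElement("div");
nav.before(sentinel);
new IntersectionObserver(([entry]) => {
  // When the sentinel scrolls out of view, the nav is pinned.
  nav.classList.toggle("stuck", !entry.isIntersecting);
}).observe(sentinel);
```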
A dialog box is a graphical user interface element that appears as a small
window, presenting information to the user and waiting for the user’s response
before acting on that input. Dialog boxes are also used to show a confirmation
message or notice with an “OK” button, so the user can confirm that the
message has been read.
Form fill-in is appropriate for entering data into multiple fields:
• Complete information should be visible to the user.
• The display should resemble familiar paper forms.
• Instructions should be given for the different types of entries.
Form Fill-in Dialog Boxes
Users must be familiar with:
• Keyboards
• Use of TAB key or mouse to move the cursor
• Error correction methods
• Field-label meanings
• Permissible field contents
• Use of the ENTER and/or RETURN key.
One of the reasons dialog boxes are so important is that they help users avoid
mistakes, as the dialog box shown in Figure 1 does: the user may be trying to
close the application while still working on a document that has not yet been
saved. (A sketch of such a dialog follows.)
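The unsaved-document situation can be handled with the browser’s built-in dialog element. A hypothetical TypeScript sketch follows; the markup, ids, and wording are invented for illustration.

```ts
// Confirmation dialog for the "unsaved document" mistake described above.
// Assumes hypothetical markup:
//   <dialog id="confirm-close">
//     <form method="dialog">
//       <p>You have unsaved changes. Close anyway?</p>
//       <button value="cancel">Keep editing</button>
//       <button value="discard">Close without saving</button>
//     </form>
//   </dialog>
const dialog = document.getElementById("confirm-close") as HTMLDialogElement;
let documentIsDirty = true; // assumed application state, for illustration

function actuallyClose(): void {
  console.log("closing the document view...");
}

function requestClose(): void {
  if (!documentIsDirty) {
    actuallyClose();
    return;
  }
  dialog.showModal(); // blocks the page until the user answers
}

// Buttons inside <form method="dialog"> close the dialog and set
// dialog.returnValue to the clicked button's value.
dialog.addEventListener("close", () => {
  if (dialog.returnValue === "discard") actuallyClose();
  // "cancel" (or pressing Esc) simply returns the user to the document.
});

// Hypothetical trigger, e.g. a close button in the app's chrome.
document.getElementById("close-btn")?.addEventListener("click", requestClose);
```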
Form Fill-in Design Guidelines:
• The title should be meaningful.
• Instructions should be comprehensible.
• Fields should be logically grouped and sequenced.
• The form should be visually appealing.
• Familiar field labels should be provided.
• Consistent terminology and abbreviations should be used.
• Convenient cursor movement should be available.
• Error correction should be available for individual characters and for entire fields.
• Errors should be prevented where possible.
• Error messages should be displayed for unacceptable values.
• Optional fields should be clearly marked.
• Explanatory messages for fields should be available.
• A completion signal should be shown (a sketch applying several of these guidelines follows this list).
Form Fill-in Dialog Boxes
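To close, here is a hypothetical TypeScript sketch applying three of the guidelines above: error prevention, an error message for an unacceptable value, and a completion signal. The form fields, ids, and validation rule are assumptions.

```ts
// Form fill-in sketch applying three guidelines from the list above.
// All ids, labels, and rules are hypothetical:
//   <form id="signup">
//     <label for="age">Age (18-120)</label>
//     <input id="age" name="age" required>
//     <span id="age-error" role="alert"></span>
//     <button>Submit</button>
//   </form>
const form = document.getElementById("signup") as HTMLFormElement;
const age = document.getElementById("age") as HTMLInputElement;
const error = document.getElementById("age-error") as HTMLElement;

// Error prevention: reject non-digit keystrokes before they appear.
age.addEventListener("beforeinput", (e: InputEvent) => {
  if (e.data && /\D/.test(e.data)) e.preventDefault();
});

form.addEventListener("submit", (e) => {
  e.preventDefault();
  const value = Number(age.value);

  // Error message for an unacceptable value, shown next to the field.
  if (!age.value || value < 18 || value > 120) {
    error.textContent = "Please enter an age between 18 and 120.";
    age.focus(); // convenient cursor movement back to the offending field
    return;
  }

  error.textContent = "";
  alert("Form submitted. Thank you!"); // completion signal
});
```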

Human Computer Interaction Miterm Lesson

  • 1.
  • 2.
    Objectives Learn all theaspects of design and development of interactive systems, which are now an important part of our lives. INTERACTIVE SYSTEM DESIGN The design and usability of these systems leaves an effect on the quality of people’s relationship to technology. Know about different web applications, games, embedded devices, etc., are all a part of this system, which has become an integral part of our lives.
  • 3.
    Until the 1980salmost all commercial computer systems were non-interactive. Computer operators would set-up the machines to read in large volumes of data – say customers bank details and transactions – and the computer would then process each input and generate appropriate output. THE PAST INTERACTIVE SYSTEM DESIGN
  • 4.
    There are stilllots of these systems in place but the world is also now full of interactive computer systems. These are systems that involve users in a direct way. In interactive systems the user and computer exchange information frequently and dynamically. Norman’s evaluation/execution model is a useful way of understanding the nature of interaction: THE PRESENT INTERACTIVE SYSTEM DESIGN
  • 5.
    THE PRESENT 1. Userhas a goal (something to achieve) 2. User looks at system and attempts to work out how he would execute a series of tasks to achieve the goal 3. User carries out some actions (providing input to the system by pressing buttons, touching a screen, speaking words etc.) 4. System responds to the actions and presents results to the user. System can use text, graphics, sounds, speech etc. INTERACTIVE SYSTEM DESIGN
  • 6.
    THE PRESENT 5. Userlooks at the results of his action and attempts to evaluate whether or not the goals have been achieved A good interactive system is one where: • User can easily work out how to operate the system in an attempt to achieve his goals • User can easily evaluate the results of his action on the system INTERACTIVE SYSTEM DESIGN
  • 7.
    In his book,‘The Invisible Computer’ Don Norman argues the case for ‘information appliances’. He suggests that the PC is too cumbersome and unwieldy a tool. It has too many applications and features to be useful. He sees the future as being one where we use specific ‘appliances’ for specific jobs. Norman envisions a world full of information appliances, a world populated by interactive computer systems: THE FUTURE INTERACTIVE SYSTEM DESIGN
  • 8.
    The Invisible Computerby Don Norman DIGITAL PICTURE FRAMES: give this frame to a friend or relative. When you have taken a new picture you want them to share, simply ‘email’ the picture direct to the frame. The frame will be connected to the net wirelessly THE HOME MEDICAL ADVISOR: sensors in the home will enable blood pressure, temperature, weight, body fluids and so on to be automatically monitored. A computer could use these readings to assist with medical advice or to contact a human doctor. INTERACTIVE SYSTEM DESIGN
  • 9.
    The Invisible Computerby Don Norman EMBEDDED SYSTEMS WITHIN OUR CLOTHES: ‘consider the value of eyeglass appliances. Many of us already wear eye glasses … why not supplant them with more power? Add a small electronic display to the glasses … and we could have all sorts of valuable information with us at all times’ [Norman 99, pg 271-272] THE WEATHER AND TRAFFIC DISPLAY: at the moment, when we want the time we simply look at a clock. Soon, perhaps, when we want to know the weather or traffic conditions we will look at a similar device. INTERACTIVE SYSTEM DESIGN
  • 10.
    Many people believewe will soon enter an age of ubiquitous computing – we will be as used to interacting with computing systems as we are with other people. This dream will only be fulfilled if the businesses that produce these systems and services clearly understand the needs of users so that the systems can be useful and usable. THE FUTURE INTERACTIVE SYSTEM DESIGN
  • 11.
    USABILITY ENGINEERING isa method in the progress of software and systems, which includes user contribution from the inception of the process and assures the effectiveness of the product through the use of a usability requirement and metrics. It thus refers to the Usability Function features of the entire process of abstracting, implementing & testing hardware and software products. Requirements gathering stage to installation, marketing and testing of products, all fall in this process. Concept of Usability Engineering
  • 12.
    Goals of UsabilityEngineering 1 Efficient to use 2 Error free in use 3 Easy to use 4 5 Effective to use Enjoyable in use INTERACTIVE SYSTEM DESIGN EFFICIENT FRIENDLY FUNCTIONAL SAFE DELIGHTFUL EXPERIENCE
  • 13.
    DotDash Bank PLChas launched a new telephone-based banking service. Customers will be able to check balances, order chequebooks and statements and transfer money all at the press of a button. Users are presented with lists of choices and they select an option by pressing the appropriate touchtone key on their handset. The system development team is certain that the system is technically very good – the speech synthesis used to speak out instructions/ options is the state-of-the-art and the database access times are very fast. Back Story INTERACTIVE SYSTEM DESIGN
  • 14.
    The new bankingsystem described is clearly a success from a system point of view: the designers have thought about the technical demands of the system to achieve, for example, high through-put of database queries. How, though, do users feel about the system? Back Story INTERACTIVE SYSTEM DESIGN
  • 15.
    The bank’s customershave responded badly to the new system. Firstly, users want to know why the system does not let them allow them to hear details of their most recent transactions, pay bills and do other common functions. Worse still, they find the large number of key-presses needed to find out a piece of information tedious and irritating. Often, users get lost in the list of choices, not sure of where they are in the system and what to do next Back Story INTERACTIVE SYSTEM DESIGN
  • 16.
    From a humanperspective the system is a real failure. It fails because it is not as useful as it might be and has very serious HCI problems – it fails because the designers have not fully considered what would be useful and usable from the customers’ point of view. Back Story INTERACTIVE SYSTEM DESIGN
  • 17.
    For an interactivesystem to be useful it should be goal centered. When a person uses a computer they will have one or more goals in mind – e.g., ‘work out my expenses for this month’; ‘buy a book on motor mechanics’. A useful interactive system is one that empowers users to achieve their goals. When you build an interactive system you should make sure you use a range of design and evaluation methods to discover the goals and associated system functionality that will make your system useful. Usability
  • 18.
    EFFECTIVENESS EFFICIENCY SATISFACTION USABILITYCOMPONENTS The completeness with which users achieve their goals The competence used in using the resources to effectively achieve the goals The ease of the work system to its users. INTERACTIVE SYSTEM DESIGN
  • 19.
    The methodical studyon the interaction between people, products, and environment based on experimental assessment. Example: Psychology, Behavioral Science, etc. A cork-screw is a tool for opening bottles sealed with a cork. They are useful tools. However, if you are a left-handed person most cork-screws are difficult to use. This is because they are designed for right-handed people. So, for a left- handed person the cork-screw has low usability (despite being useful). Usability is about building a system that takes account of the users' capabilities and limitations. A system that has good usability is likely to have the following qualities: Usability Study
  • 20.
    Usability Study ROBUST A systemis robust if a user is given the means to achieve their goals, to assess their progress and to recover from any errors made. FLEXIBLE Users should be able to interact with a system in ways that best suit their needs. The system should be flexible enough to permit a range of preferences. INTERACTIVE SYSTEM DESIGN
  • 21.
    ‘Interfaces are somethingwe do at the end of software development. We want to make the system look nice for the end user’. Unfortunately, many analysts and programmers might agree with the above statement. They cannot see the point in spending time and money on seriously considering and involving the users in design. Instead they consider they know what is best for the user and can build effective interfaces without using extensive user-centered methods. However, experience has shown that badly designed interfaces can lead to serious implications. If you build poor interfaces you might find: Usability Study
  • 22.
    USABILITY STUDY Your companyloses money as its workforce is less productive than it could be The quality of life of the users who use your system is reduced Disastrous and possibly fatal errors happen in systems that are safety- critical INTERACTIVE SYSTEM DESIGN
  • 23.
    The scientific evaluationof the stated usability parameters as per the user’s requirements, competences, prospects, safety and satisfaction is known as usability testing. According to Interaction Design Foundation, the main benefit and purpose of usability testing is to identify usability problems with a design as early as possible, so they can be fixed before the design is implemented or mass produced. As such, usability testing is often conducted on prototypes rather than finished products, with different levels of fidelity (i.e., detail and finish) depending on the development phase. Usability Testing
  • 24.
    Prototypes tend tobe more primitive, low-fidelity versions (e.g., paper sketches) during early development, and then take the form of more detailed, high-fidelity versions (e.g., interactive digital mock- ups) closer to release. To run an effective usability test, you need to develop a solid test plan, recruit participants, and then analyze and report your findings. Usability Testing
  • 25.
    Acceptance testing alsoknown as User Acceptance Testing (UAT), is a testing procedure that is performed by the users as a final checkpoint before signing off from a vendor. Let us take an example of the handheld barcode scanner. Let us assume that a supermarket has bought barcode scanners from a vendor. The supermarket gathers a team of counter employees and make them test the device in a mock store setting. By this procedure, the users would determine if the product is acceptable for their needs. It is required that the user acceptance testing "pass" before they receive the final product from the vendor. Acceptance Testing
  • 26.
    Software Tools A softwaretool is a programmatic software used to create, maintain, or otherwise support other programs and applications. Some of the commonly used software tools in HCI are as follows − INTERACTIVE SYSTEM DESIGN
  • 27.
    Specification Methods Grammars Transition Diagram SOFTWARE TOOLS The methodsused to specify the GUI. Even though these are lengthy and ambiguous methods, they are easy to understand Written Instructions or Expressions that a program would understand. They provide confirmations for completeness and correctness Set of nodes and links that can be displayed in text, link frequency, state diagram, etc. They are difficult in evaluating usability, visibility, modularity and synchronization INTERACTIVE SYSTEM DESIGN
  • 28.
    Statecharts Interface Building Tools Interface Mockup Tools SOFTWARETOOLS Chart methods developed for simultaneous user activities and external actions. They provide link-specification with interface building tools Design methods that help in designing command languages, data entry structures, and widgets Tools to develop a quick sketch of GUI. E.g., Microsoft Visio, Visual Studio, .Net, etc. INTERACTIVE SYSTEM DESIGN
  • 29.
    Software Engineering Tools Evaluation Tools SOFTWARE TOOLS Extensiveprogramming tools to provide user interface management system. Tools to evaluate the correctness and completeness of programs INTERACTIVE SYSTEM DESIGN
  • 30.
    SOFTWARE ENGINEERING isthe study of designing, development and preservation of software. It comes in contact with HCI to make the man and machine interaction more vibrant and interactive. Let us see the following model in software engineering for interactive designing. HCI and Software Engineering
  • 31.
    The Waterfall modelis the earliest SDLC approach that was used for software development. The waterfall Model illustrates the software development process in a linear sequential flow. This means that any phase in the development process begins only if the previous phase is complete. In this waterfall model, the phases do not overlap. Waterfall approach was first SDLC Model to be used widely in Software Engineering to ensure success of the project. In "The Waterfall" approach, the whole process of software development is divided into separate phases. In this Waterfall model, typically, the outcome of one phase acts as the input for the next phase sequentially. The Waterfall Method
  • 32.
  • 33.
    Requirement Gathering and analysis System Design Implementation TheWaterfall Method Sequential Phases All possible requirements of the system to be developed are captured in this phase and documented in a requirement specification document. The requirement specifications from first phase are studied in this phase and the system design is prepared. This system design helps in specifying hardware and system requirements and helps in defining the overall system architecture. With inputs from the system design, the system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality, which is referred to as Unit Testing. INTERACTIVE SYSTEM DESIGN
  • 34.
    Integration and Testing Deployment of systemMaintenance The Waterfall Method Sequential Phases All the units developed in the implementation phase are integrated into a system after testing of each unit. Post integration the entire system is tested for any faults and failures. Once the functional and non- functional testing is done; the product is deployed in the customer environment or released into the market. There are some issues which come up in the client environment. To fix those issues, patches are released. Also to enhance the product some better versions are released. Maintenance is done to deliver these changes in the customer environment. INTERACTIVE SYSTEM DESIGN
  • 35.
    The uni-directional movementof the waterfall model of Software Engineering shows that every phase depends on the preceding phase and not vice-versa. However, this model is not suitable for the interactive system design. The interactive system design shows that every phase depends on each other to serve the purpose of designing and product creation. It is a continuous process as there is so much to know and users keep changing all the time. An interactive system designer should recognize this diversity. INTERACTIVE SYSTEM DESIGN
  • 36.
  • 37.
    Prototyping is anothertype of software engineering models that can have a complete range of functionalities of the projected system. In HCI, prototyping is a trial and partial design that helps users in testing design ideas without executing a complete system. Example of a prototype can be Sketches. Sketches of interactive design can later be produced into graphical interface. See the following diagram. The following diagram can be considered as a Low Fidelity Prototype as it uses manual procedures like sketching in a paper. A Medium Fidelity Prototype involves some but not all procedures of the system. E.g., first screen of a GUI. Finally, a Hi Fidelity Prototype simulates all the functionalities of the system in a design. This prototype requires, time, money and work force. Prototyping
  • 38.
  • 39.
    User Centered Design(UCD) The process of collecting feedback from users to improve the design is known as user centered design or UCD. UCD Drawbacks: • Passive user involvement • User’s perception about the new interface may be inappropriate • Designers may ask incorrect questions to users INTERACTIVE SYSTEM DESIGN
  • 40.
    The stages inthe following diagram are repeated until the solution is reached Interactive System Design Life Cycle (ISLC)
  • 41.
    Graphic User Interface(GUI) is the interface from where a user can operate programs, applications or devices in a computer system. This is where the icons, menus, widgets, labels exist for the users to access. It is significant that everything in the GUI is arranged in a way that is recognizable and pleasing to the eye, which shows the aesthetic sense of the GUI designer. GUI aesthetics provides a character and identity to any product. GUI Design & Aesthetics
  • 42.
  • 43.
    Objectives Understand what are theseknown tools and how some are recently developed or are a concept to be developed in the future Learn about several interactive devices are used for the human computer interaction Discuss on some new and old interactive devices.
  • 44.
    Overview of InteractiveDevices There are many different types of interaction devices being used and conceived today. Some are familiar tools from the past and many are just distant concept dreams of the future. Some of interactive devices are recently developed and some of them are innovated earlier. This section describes about some new and old interface devices.
  • 45.
    As shown inthe figure, though users actually interact physically with a device, they actually require it to execute a use case to accomplish their need. Hence, users are interacting logically with the service. Software engineers define the service as a use case that is realized by a certain subsystem/component in the software, while the interface is considered as boundary class during analysis and as the user interface during the design and implementation stage.
  • 46.
    Keyboard A keyboard canbe considered as a primitive device known to all of us today. Keyboard uses an organization of keys/buttons that serves as a mechanical device for a computer. Each key in a keyboard corresponds to a single written symbol or character.
  • 47.
    Keyboard This is themost effective and ancient interactive device between man and machine that has given ideas to develop many more interactive devices as well as has made advancements in itself such as soft screen keyboards for computers and mobile phones.
  • 48.
    Touch Screen The touchscreen concept was prophesized decades ago, however the platform was acquired recently. Today there are many devices that use touch screen. Gadgets like mobile phone, tablet, ipad, etc uses touch screen technology which allows the users to navigate with the installed software on their devices with the use of their fingertips.
  • 49.
    Touch Screen Unlike earlierdesign of personal computers, touch screen technology doesn’t need an input device such as mouse and keyboard as these are already built-in to the device. After vigilant selection of these devices, developers customize their touch screen experiences. The cheapest and relatively easy way of manufacturing touch screens are the ones using electrodes and a voltage association.
  • 50.
    Touch Screen Other thanthe hardware differences, software alone can bring major differences from one touch device to another, even when the same hardware is used. Along with the innovative designs and new hardware and software, touch screens are likely to grow in a big way in the future. A further development can be made by making a sync between the touch and other devices. In HCI, touch screen can be considered as a new interactive device.
  • 51.
    Gesture Recognition Gesture recognitionis a subject in language technology that has the objective of understanding human movement via mathematical procedures. Hand gesture recognition is currently the field of focus. This technology is future based. This new technology, magnitudes an advanced association between human and computer where no mechanical devices are used.
  • 52.
    Gesture Recognition This newinteractive device might terminate the old devices like keyboards and is also heavy on new devices like touch screens. The general definition of gesture recognition is the ability of a computer to understand gestures and execute commands based on those gestures. Most consumers are familiar with the concept through Wii Fit, X-box and PlayStation games such as “Just Dance” and “Kinect Sports.”
  • 53.
    How gesture recognitionworks Gesture recognition is an alternative user interface for providing real-time data to a computer. Instead of typing with keys or tapping on a touch screen, a motion sensor perceives and interprets movements as the primary source of data input. This is what happens between the time a gesture is made and the computer reacts. For instance, Kinect looks at a range of human characteristics to provide the best command recognition based on natural human inputs. It provides both skeletal and facial tracking in addition to gesture recognition, voice recognition and in some cases the depth and color of the background scene. Kinect reconstructs all of this data into printable three- dimensional (3D) models. The latest Kinect developments include an adaptive user interface that can detect a user’s height
  • 54.
    How gesture recognitionworks Specially designed software identifies meaningful gestures from a predetermined gesture library where each gesture is matched to a computer command. A camera feeds image data into a sensing device that is connected to a computer. The sensing device typically uses an infrared sensor or projector for the purpose of calculating depth.
  • 55.
    How gesture recognitionworks Once the gesture has been interpreted, the computer executes the command correlated to that specific gesture. The software then correlates each registered real-time gesture, interprets the gesture and uses the library to identify meaningful gestures that match the library.
  • 56.
    Who makes gesturerecognition software? Microsoft is leading the charge with Kinect, a gesture recognition platform that allows humans to communicate with computers entirely through speaking and gesturing. Kinect gives computers, “eyes, ears, and a brain.” There are a few other players in the space such as SoftKinect, GestureTek, PointGrab, eyesight and PrimeSense, an Israeli company recently acquired by Apple. Emerging technologies from companies such as eyeSight go far beyond gaming to allow for a new level of small motor precision and depth perception.
  • 57.
    Gesture recognition examplesbeyond gaming Gesture recognition has huge potential in creating interactive, engaging live experiences. Here are five gesture recognition examples that illustrate the potential of gesture recognition to to educate, simplify user experiences and delight consumers.
  • 58.
    Gesture recognition examplesbeyond gaming Changing how we interact with traditional computers - A company named Leap Motion last year introduced the Leap Motion Controller, a gesture-based computer interaction system for PC and Mac. A USB device and roughly the size of a Swiss army knife, the controller allows users to interact with traditional computers with gesture control. It’s very easy to see the live experience applications of this technology. In-store retail engagement-Gesture recognition has the power to deliver an exciting, seamless in-store experience. This example uses Kinect to create an engaging retail experience by immersing the shopper in relevant content, helping her to try on products and offering a game that allows the shopper to earn a discount incentive.
  • 59.
    Gesture recognition examplesbeyond gaming Windshield wipers - Google and Ford are also reportedly working on a system that allows drivers to control features such as air conditioning, windows and windshield wipers with gesture controls. The Cadillac CUE system recognizes some gestures such as tap, flick, swipe and spread to scroll lists and zoom in on maps. The operating room - Companies such as Microsoft and Siemens are working together to redefine the way that everyone from motorists to surgeons accomplish highly sensitive tasks. These companies have been focused on refining gesture recognition technology to focus on fine motor manipulation of images and enable a surgeon to virtually grasp and move an object on a monitor.
  • 60.
    Gesture recognition examplesbeyond gaming Sign language interpreter-There are several examples of using gesture recognition to bridge the gap between the deaf and non-deaf who may not know sign language. This example showing how Kinect can understand and translate sign language from Dani Martinez Capilla explores the notion of breaking down communication barriers using gesture recognition. Mobile payments - Seeper, a London- based startup, has created a technology called Seemove that has gone beyond image and gesture recognition to object recognition. Ultimately, Seeper believes that their system could allow people to manage personal media, such as photos or files, and even initiate online payments using gestures.
  • 61.
    Speech Recognition The technologyof transcribing spoken phrases into written text is Speech Recognition. Such technologies can be used in advanced control of many devices such as switching on and off the electrical appliances. Only certain commands are required to be recognized for a complete transcription. However, this cannot be beneficial for big vocabularies. This HCI device help the user in hands free movement and keep the instruction based technology up to date with the users.
  • 62.
    Application of GestureRecognition Kinect: It is a motion sensing console launched by Microsoft as an extension of the Microsoft Xbox 360 game console. The main function is to enable you to control the Xbox through voice or gestures rather than physically using the controller. Kinect is based on technologies which are developed by Microsoft and PrimeSense. It basically makes use of an infrared projector which is able to read your gestures hence enabling you have a complete hands-free control of the gadget or game you are playing.
  • 63.
    Microsoft has alreadysold more than 18 million copies of Kinect and they plans to implement the same system and technology for its PC and release it in February this year.
  • 64.
    Application of GestureRecognition Eon Interactive Mirror: EON Interactive Mirror enables customers to virtually try-on clothes, dresses, handbags and accessories using gesture- based interaction. Changing from one dress to another is just a ‘swipe’ away and offers endless possibilities for mixing designs and accessories in a fun, quick and intuitive way. Customers can snap a picture of his/hers current selections and share it on Facebook or other Social Media to get instant feedback from friends.
  • 65.
    The EON InteractiveMirror is growing in popularity in the amusement and retail industry. It will be showcased on its ability to engage crowds through various interactive applications.
  • 66.
    Response Time Response timeis the time taken by a device to respond to a request. The request can be anything from a database query to loading a web page. The response time is the sum of the service time and wait time. Transmission time becomes a part of the response time when the response has to travel over a network. In modern HCI devices, there are several applications installed and most of them function simultaneously or as per the user’s usage. This makes a busier response time. All of that increase in the response time is caused by increase in the wait time. The wait time is due to the running of the requests and the queue of requests following it.
  • 67.
    Response Time So, itis significant that the response time of a device is faster for which advanced processors are used in modern devices. According to Jakob Nielsen, the basic advice regarding response times has been about the same for thirty years [Miller 1968; Card et al. 1991]:
  • 68.
    Response Time is aboutthe limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data. is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result. is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect 0.1 Second 1.0 Second 10 Seconds
  • 69.
  • 70.
    LESSON OBJECTIVES 01 Understand thedifferent characteristics of engineering task models and methodologies 03 Learn about the basic activities of interaction design and principles for user-centered approach 02 Understand the design process and task analysis that plays an important part in user requirement analysis 04 Know the different methodologies being used in HCI design
  • 71.
    HCI design isconsidered as a problem solving process that has components like planned usage, target area, resources, cost, and viability. It decides on the requirement of product similarities to balance trade-offs. HCI Design
  • 72.
    HCI Design 01 Evaluating designs 03 Identifyingrequirements 02 Developing interactive versions of the designs 04 Building alternative designs
  • 73.
    Three principles foruser-centered approach Empirical Measurement Early focus on users and tasks Iterative Design
  • 74.
    Various methodologies havematerialized since the inception that outline the techniques for human–computer interaction. Following are few design methodologies Design Methodologies
  • 75.
    Activity Theory This isan HCI method that describes the framework where the human computer interactions take place. Activity theory provides reasoning, analytical tools and interaction designs.
  • 76.
    User-Centered Design It providesusers the center-stage in designing where they get the opportunity to work with designers and technical practitioners.
  • 77.
    Principles of UserInterface Design Tolerance, simplicity, visibility, affordance, consistency, structure and feedback are the seven principles used in interface designing.
  • 78.
    Value Sensitive Design Thismethod is used for developing technology and includes three types of studies − conceptual, empirical and technical.
  • 79.
    Value Sensitive Design ConceptualEmpirical Technical Conceptual investigations work towards understanding the values of the investors who use technology. Empirical investigations are qualitative or quantitative design research studies that shows the designer’s understanding of the users’ values. Technical investigations contain the use of technologies and designs in the conceptual and empirical investigations.
  • 80.
    Participatory Design Participatory designprocess involves all stakeholders in the design process, so that the end result meets the needs they are desiring. This design is used in various areas such as software design, architecture, landscape architecture, product design, sustainability, graphic design, planning, urban design, and even medicine. Participatory design is not a style, but focus on processes and procedures of designing. It is seen as a way of removing design accountability and origination by designers.
  • 81.
    Task Analysis playsan important part in User Requirements Analysis. Task analysis is the procedure to learn the users and abstract frameworks, the patterns used in workflows, and the chronological implementation of interaction with the GUI. It analyzes the ways in which the user partitions the tasks and sequence them. Task Analysis
  • 82.
  • 83.
    Human actions thatcontributes to a useful objective, aiming at the system, is a task. Task analysis defines performance of users, not computers. What is a TASK?
  • 84.
    Purpose of TaskAnalysis What your users’ goals are; what they are trying to achieve What users actually do to achieve those goals What experiences (personal, social, and cultural) users bring to the tasks In their book User and Task Analysis for Interface Design, JoAnn Hackos and Janice Redish note that performing a task analysis helps you understand: How users are influenced by their physical environment How users’ previous knowledge and experience influence: How they think about their work The workflow they follow to perform their task
  • 85.
    When to Performa Task Analysis Website requirements gathering Developing your content strategy and site structure It’s important to perform a task analysis early in your process, in particular prior to design work. Task analysis helps support several other aspects of the user-centered design process, including: Wireframing and Prototyping Performing usability testing
  • 86.
    Types of TaskAnalysis Hierarchical Task Analysis - is focused on decomposing a high- level task subtasks.. Cognitive Task Analysis - is focused on understanding tasks that require decision-making, problem-solving, memory, attention and judgement. There are several types of task analysis but among the most common techniques used are:
  • 87.
    How to Conducta Task Analysis Your task analysis may have several levels of inquiry, from general to very specific. In addition to market research, competitive analysis, and web metrics analysis, you can identify top tasks through various user research techniques. UXPA’s Usability Body of Knowledge breaks down the process for decomposing a high-level task into the following steps:
  • 88.
    How to Conducta Task Analysis Identify the task to be analyzed Break this high-level task down into 4 to 8 subtasks. The subtask should be specified in terms of objectives and, between them, should cover the whole area of interest It’s important to note that you need to decide to what level of detail you are going to decompose subtasks so that you can ensure that you are consistent across the board. Draw a layered task diagram of each subtasks ensuring that it is complete Produce a written account as well as the decomposition diagram Present the analysis to someone else who has not been involved in the decomposition but who knows the tasks well enough to check for consistency
  • 89.
    Hierarchical Task Analysisis the procedure of disintegrating tasks into subtasks that could be analyzed using the logical sequence for execution. This would help in achieving the goal in the best possible way. "A hierarchy is an organization of elements that, according to prerequisite relationships, describes the path of experiences a learner must take to achieve any single behavior that appears higher in the hierarchy. (Seels & Glasgow, 1990, p. 94)" Hierarchical Task Analysis
  • 90.
    Techniques for Analysis Taskdecomposition Observation and documentation of actions of the user. This is achieved by authenticating the user’s thinking. The user is made to think aloud so that the user’s mental logic can be understood Ethnography Splitting tasks into sub-tasks and in sequence Knowledge-based techniques Observation of users’ behavior in the use context Protocol analysis Any instructions that users need to know. ‘User’ is always the beginning point for a task
  • 91.
    Unlike Hierarchical TaskAnalysis, Engineering Task Models can be specified formally and are more useful. Engineering Task Models
  • 92.
    Characteristics of EngineeringTask Models Engineering task models have flexible notations, which describes the possible activities clearly They have organized approaches to support the requirement, analysis, and use of task models in the design They support the recycle of in-condition design solutions to problems that happen throughout applications Finally, they let the automatic tools accessible to support the different phases of the design cycle
  • 93.
    CTT is anengineering methodology used for modeling a task and consists of tasks and operators. Operators in CTT are used to portray chronological associations between tasks. Following are the key features of a CTT: ConcurTaskTree (CTT)
  • 94.
    ConcurTaskTree (CTT) Focus onactions that users wish to accomplish Hierarchical structure Graphical syntax Rich set of sequential operators
  • 95.
    ConcurTaskTree (CTT) Examples Plugin to main and switch on supply Locate on/off switch on projector A person preparing an overhead projector for use would be seen to carry out the following actions: Discover which way to press the switch Press the switch for power Put on the slide and orientate correctly Align the projector on the screen Focus the slide
  • 96.
    In Human ComputerInteraction, task analysis is the recording of physical and perceptual actions of the user whilst executing the task. Take a look on another example of task analysis. At a bare minimum to identify tasks, you can simply ask users what overall tasks they are trying to accomplish or how they currently accomplish the task. ConcurTaskTree (CTT)
  • 97.
    ConcurTaskTree (CTT) Trying tofind a nursing home near you for an elderly relative Trying to get information about options for treatment for skin cancer What overall tasks are users trying to accomplish on our website? Trying to sign up to receive an email notice when a payment is due
  • 98.
    ConcurTaskTree (CTT) Using asearch engine Navigating through your site How are users currently completing the task? People are completing that task using: Using another site (Through some other means)
  • 99.
    THANK YOU! DO YOU HAVEANY QUESTIONS?
  • 100.
  • 101.
    Lesson objectives Learn allthe aspects of dialog levels and representation Introduce formalism techniques that we can use to signify dialogs Learn about visual materials being used in communication process Understand direct manipulation as a good form of interface design Know the sequence in item presentation Understand the use of proper use of menu layout and form fill-in dialog boxes
  • 102.
    Dialog Representation To represent dialogs,we need formal techniques that serves two purposes: It helps in understanding the proposed design in a better way. It helps in analyzing dialogs to identify usability issues. E.g., Questions such as “does the design actually support undo?” can be answered
  • 103.
    Introduction to Formalism There aremany formalism techniques that we can use to signify dialogs. In this chapter, we will discuss on three of these formalism techniques, which are − The state transition networks (STN) The state charts The classical Petri nets
  • 104.
    State Transition Network (STN) STNsare the most spontaneous, which knows that a dialog fundamentally denotes to a progression from one state of the system to the next. The syntax of an STN consists of the following two entities: Circles − A circle refers to a state of the system, which is branded by giving a name to the state. Arcs − The circles are connected with arcs that refers to the action/event resulting in the transition from the state where the arc initiates, to the state where it ends.
  • 105.
  • 106.
    StateCharts StateCharts represent complexreactive systems that extends Finite State Machines (FSM), handle concurrency, and adds memory to FSM. It also simplifies complex system representations. StateCharts has the following states: Active state − The present state of the underlying FSM Basic states − These are individual states and are not composed of other states. Super states − These states are composed of other states.
  • 107.
    For each basicstate b, the super state containing b is called the ancestor state. A super state is called OR super state if exactly one of its sub states is active, whenever it is active. Let us see the StateChart Construction of a machine that dispense bottles on inserting coins. illustration Illustration
  • 108.
  • 109.
    The diagram explainsthe entire procedure of a bottle dispensing machine. On pressing the button after inserting coin, the machine will toggle between bottle filling and dispensing modes. When a required request bottle is available, it dispense the bottle. In the background, another procedure runs where any stuck bottle will be cleared. The ‘H’ symbol in Step 4, indicates that a procedure is added to History for future access. illustration Illustration
  • 110.
    Petri Nets Petri Netis a simple model of active behavior, which has four behavior elements such as − places, transitions, arcs and tokens. Petri Nets provide a graphical explanation for easy understanding. Place − This element is used to symbolize passive elements of the reactive system. A place is represented by a circle Transition − This element is used to symbolize active elements of the reactive system. Transitions are represented by squares/rectangles
  • 111.
    Petri Nets Arc −This element is used to represent causal relations. Arc is represented by arrows Token − This element is subject to change. Tokens are represented by small filled circles
  • 112.
    Petri Nets weredeveloped originally by Carl Adam Petri [Pet62], and were the subject of his dissertation in 1962. Since then, Petri Nets and their concepts have been extended and developed, and applied in a variety of areas: Office automation, work-flows, flexible manufacturing, programming languages, protocols and networks, hardware structures, real-time systems, performance evaluation, operations research, embedded systems, defense systems, telecommunications, Internet, ecommerce and trading, railway networks, biological systems. Here is an example of a Petri Net model, one for the control of a metabolic pathway. Tool used: Visual Object Net++ Petri nets Petri Nets
  • 113.
  • 114.
    Visual materials haveassisted in the communication process since ages in form of paintings, sketches, maps, diagrams, photographs, etc. In today’s world, with the invention of technology and its further growth, new potentials are offered for visual information such as thinking and reasoning. As per studies, the command of visual thinking in human- computer interaction (HCI) design is still not discovered completely. So, let us learn the theories that support visual thinking in sense-making activities in HCI design. An initial terminology for talking about visual thinking was discovered that included concepts such as visual immediacy, visual impetus, visual impedance, and visual metaphors, analogies and associations, in the context of information design for the web. Visual thinking Visual thinking
  • 115.
    Visual thinking isthe use of imagery and other visual forms to make sense of the world and to create meaningful content. Digital imagery is a special form of visual thinking, one that is particularly salient for HCI and interaction design. Digital photographs qualify as digital imagery only when they are also visual thinking that is, when they are instrumental in making sense or creating meaning. As such, this design process became well suited as a logical and collaborative method during the design process. Let us discuss in brief the concepts individually. Visual thinking Visual thinking
  • 116.
    Visual immediacy It is areasoning process that helps in understanding of information in the visual representation. The term is chosen to highlight its time related quality, which also serves as an indicator of how well the reasoning has been facilitated by the design.
  • 117.
    Visual Impetus Visual impetusis defined as a stimulus that aims at the increase in engagement in the contextual aspects of the representation.
  • 118.
    Visual Impedance It is perceivedas the opposite of visual immediacy as it is a hindrance in the design of the representation. In relation to reasoning, impedance can be expressed as a slower cognition.
  • 119.
    Visual Metaphors, Association, Analogy, Abductionand Blending - When a visual demonstration is used to understand an idea in terms of another familiar idea it is called a visual metaphor. - Visual analogy and conceptual blending are similar to metaphors. Analogy can be defined as an implication from one particular to another. Conceptual blending can be defined as combination of elements and vital relations from varied situations.
  • 120.
    Visual Metaphors, Association, Analogy, Abductionand Blending The HCI design can be highly benefited with the use of above mentioned concepts. The concepts are pragmatic in supporting the use of visual procedures in HCI, as well as in the design processes.
  • 121.
    Direct Manipulation Programming The actionof using your fingertips to zoom in and out of the image is an example of a direct manipulation interaction. Another classic example is dragging a file from a folder to another one in order to move it
  • 122.
    Direct Manipulation Programming Definition: Directmanipulation (DM) is an interaction style in which users act on displayed objects of interest using physical, incremental, reversible actions whose effects are immediately visible on the screen.
  • 123.
    Direct Manipulation Programming Ben Shneidermanfirst coined the term “direct manipulation” in the early 1980s, at a time when the dominant interaction style was the command line. In command-line interfaces, the user must remember the system label for a desired action, and type it in together with the names for the objects of the action.
  • 124.
    Direct Manipulation Programming Direct manipulationis one of the central concepts of graphical user interfaces (GUIs) and is sometimes equated with “what you see is what you get” (WYSIWYG). These interfaces combine menu based interaction with physical actions such as dragging and dropping in order to help the user use the interface with minimal learning.
  • 125.
    Continuous representation of the objectof interest Physical actions instead of complex syntax Continuous feedback and reversible, incremental actions Rapid learning Characteristics of Direct Manipulation
  • 126.
    Continuous representation of the objectof interest Users can see visual representations of the objects that they can interact with. As soon as they perform an action, they can see its effects on the state of the system. For example, when moving a file using drag-and-drop, users can see the initial file displayed in the source folder, select it, and, as soon as the action was completed, they can see it disappear from the source and appear in the destination — an immediate confirmation that their action had the intended result. Thus, direct-manipulation UIs satisfy, by definition, the first usability heuristic: the visibility of the system status. In contrast, in a command line interface, users usually must explicitly check that their actions had indeed the intended result (for example, by listing the content of the destination directory).
  • 127.
    Physical actions instead ofcomplex syntax Actions are invoked physically via clicks, button presses, menu selections, and touch gestures. In the move-file example, drag-and-drop has a direct analog in the real world, so this implementation for the move action has the right signifiers and can be easily learned and remembered. In contrast, the command-line interface requires users to recall not only the name of the command (“mv”), but also the names of the objects involved (files and paths to the source and destination folders). Thus, unlike DM interfaces, command-line interfaces are based on recall instead of recognition and violate an important usability heuristic.
  • 128.
    Continuous feedback and reversible,incremental actions Because of the visibility of the system state, it’s easy to validate that each action caused the right result. Thus, when users make mistakes, they can see right away the cause of the mistake and they should be able to easily undo it. In contrast, with command-line interfaces, one single user command may have multiple components that can cause the error. For instance, in the example below, the name of the destination folder contains a typo “Measuring Usablty” instead of “Measuring Usability”. The system simply assumed that the file name should be changed to “Measuring Usablty”. If users check the destination folder, they will discover that there was a problem, but will have no way of knowing what caused it: did they use the wrong command, the wrong source filename, or the wrong destination?
  • 129.
    Rapid learning Because theobjects of interest and the potential actions in the system are visually represented, users can use recognition instead of recall to see what they could do and select an operation most likely to fulfill their goal. They don’t have to learn and remember complex syntax. Thus, although direct-manipulation interfaces may require some initial adjustment, the learning required is likely to be less substantial.
  • 130.
    When direct manipulationfirst appeared, it was based on the office-desk metaphor — the computer screen was an office desk, and different documents (or files) were placed in folders, moved around, or thrown to trash. This underlying metaphor indicates the skeuomorphic origin of the concept. The DM systems described originally by Shneiderman are also skeuomorphic — that is, they are based on resemblance with a physical object in the real world. Thus, he talks about software interfaces that copy Rolodexes and physical checkbooks to support tasks done (at the time) with these tools. As we all know, skeuomorphism saw a huge revival in the early iPhone days, and has now come out of fashion. Direct Manipulation vs. Skeuomorphism Direct Manipulation vs. Skeuomorphism
While skeuomorphic interfaces are indeed based on direct manipulation, not all direct-manipulation interfaces need to be skeuomorphic. In fact, today's flat interfaces are a reaction to skeuomorphism and depart from real-world metaphors, yet they still rely on direct manipulation.
Disadvantages of Direct Manipulation
Almost every DM characteristic has a directly corresponding disadvantage:
- Continuous representation of the objects? It means that you can act only on the small number of objects that can be seen at any given time. And objects that are out of sight, but not out of mind, can be dealt with only after the user has laboriously navigated to the place that holds them so that they can be made visible.
- Physical actions? One word: RSI (repetitive strain injury). It's a lot of work to move all those icons and sliders around the screen. Actually, two more words: accidental activation, which is particularly common on touchscreens but can also happen on mouse-driven systems.
- Continuous feedback? Only if you attempt an operation that the system feels like letting you do. If you want to do something that's not available, you can push and drag buttons and icons as much as you want with no effect whatsoever. No feedback, only frustration. (A good UI will show in-context help to explain why the desired action isn't available and how to enable it. Sadly, UIs this good are not very common.)
- Rapid learning? Yes, if the design is good, but in practice learnability depends on how well designed the interface is. We've all seen menus with poorly chosen labels, buttons that did not look clickable, or drop-down boxes with more options than the length of the screen.
And there are even more disadvantages:
- DM is slow. If the user needs to perform a large number of actions on many objects, direct manipulation takes a lot longer than a command-line UI. Have you encountered any software engineers who use DM to write their code? Sure, they might use DM elements in their development environments, but the majority of the code will be typed in.
- Repetitive tasks are not well supported. DM interfaces are great for novices because they are easy to learn, but because they are slow, experts who have to perform the same set of tasks with high frequency usually rely on keyboard shortcuts, macros, and other command-language interactions to speed up the process. For example, when you need to send an email attachment to one recipient, it is easy to drag the desired file and drop it into the attachment section. However, if you needed to do this for 50 different recipients with customized subject lines, a macro or script would be faster and less tedious, as sketched below.
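As a sketch of that last point, here is roughly what the scripted alternative might look like. Both sendEmail() and the recipient list are hypothetical stand-ins, not a real mail API.

declare function sendEmail(opts: {
  to: string;
  subject: string;
  attachmentPath: string;
}): Promise<void>; // hypothetical mail helper

const recipients: string[] = ["a@example.com", "b@example.com" /* ...48 more */];

async function sendToAll(): Promise<void> {
  for (const to of recipients) {
    // One short loop replaces fifty separate drag-and-drop gestures,
    // with a customized subject line for each recipient.
    await sendEmail({
      to,
      subject: "Quarterly report for " + to,
      attachmentPath: "./report.pdf",
    });
  }
}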
- Some gestures can be more error-prone than typing. Although in theory the continuous feedback of DM minimizes the chance of certain errors, in practice there are situations where a gesture is harder to perform than typing the equivalent information. For example, good luck trying to move the 50th column of a spreadsheet into the 2nd position using drag-and-drop. For this exact reason, Netflix offers three interaction techniques for reordering subscribers' DVD queues: dragging the movie to the desired position (easy for short moves), a one-button shortcut for moving it into the #1 position (handy when you must watch a particular movie ASAP), and the indirect option of typing the number of the desired new position (useful in most other cases).
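The indirect "type the new position" technique amounts to a one-step reorder. A small sketch, with illustrative names:

function moveToPosition<T>(items: T[], from: number, to: number): T[] {
  const copy = [...items];
  const [moved] = copy.splice(from, 1); // take the item out...
  copy.splice(to, 0, moved);            // ...and reinsert it at the target index
  return copy;
}

// Zero-based indices: move the 50th column into the 2nd slot in one action,
// instead of dragging it across dozens of positions.
// const reordered = moveToPosition(columns, 49, 1);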
- Accessibility may suffer. DM UIs may fail visually impaired users or users with motor-skill impairments, especially if they are heavily based on physical actions as opposed to button presses and menu selections. (Workarounds exist, but they can be difficult to implement.)
Direct manipulation has nonetheless been acclaimed as a good form of interface design and is well received by users. Such systems take input from many sources and convert it into the output the user desires, using built-in tools and programs. "Directness" is considered a phenomenon that contributes majorly to manipulation programming. It has two aspects: distance and direct engagement.
Distance
Distance describes the gulf between a user's goals and the level of description provided by the system with which the user deals. Two such gulfs are distinguished: the Gulf of Execution and the Gulf of Evaluation.
The Gulf of Execution
The Gulf of Execution is the gap between a user's goal and the actions the device provides to accomplish that goal. One of the principal objectives of usability is to diminish this gap by removing barriers and minimizing the distractions that would keep the user from the intended task and interrupt the flow of work.
The Gulf of Evaluation
The Gulf of Evaluation is the gap between the system's representation of its state and the user's expectations and interpretation of it. As Donald Norman puts it, the gulf is small when the system provides information about its state in a form that is easy to get, easy to interpret, and matches the way the person thinks about the system.
Direct Engagement
Direct engagement describes designs in which the user feels that he or she is directly controlling the objects presented by the system, which makes the system less difficult to use. Scrutiny of the execution and evaluation processes illuminates the effort involved in using a system and suggests ways to minimize the mental effort required.
Problems with Direct Manipulation
- Even though the immediacy of response and the direct conversion of intentions into actions make some tasks easy, not every task should be done this way. A repetitive operation, for example, is probably best done via a script rather than through direct manipulation.
- Direct-manipulation interfaces have difficulty handling variables, or representing a single discrete element as a stand-in for a whole class of elements.
- Direct-manipulation interfaces may be less accurate, since the outcome depends on the user's actions rather than on the system.
- An important problem with direct-manipulation interfaces is that they can directly support only the techniques the user already thinks of, which are not necessarily the most effective ones.
Item Presentation Sequence
In HCI, the presentation sequence of items can be planned according to the task or application requirements. The natural sequence of items in a menu should be respected. The main factors in presentation sequence are:
• Time
• Numeric ordering
• Physical properties
When there are no task-related orderings, the designer must choose one of the following:
• Alphabetic sequence of terms
• Grouping of related items
• Most frequently used items first
• Most important items first
Two of these task-free orderings are sketched in code below.
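A minimal sketch of the two most common task-free orderings. The MenuItem shape and its useCount field are assumptions made for illustration.

interface MenuItem {
  label: string;
  useCount: number; // how often the user has selected this item (assumed field)
}

// Alphabetic sequence of terms.
const alphabetical = (items: MenuItem[]): MenuItem[] =>
  [...items].sort((a, b) => a.label.localeCompare(b.label));

// Most frequently used items first.
const byFrequency = (items: MenuItem[]): MenuItem[] =>
  [...items].sort((a, b) => b.useCount - a.useCount);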
Menu Layout
Helping users navigate should be a high priority for almost every website and application. After all, even the coolest feature or the most compelling content is useless if people can't find it. And even if you have a search function, you usually shouldn't rely on search as the only way to navigate. Most designers recognize this and include some form of navigation menu in their designs. Definition: navigation menus are lists of content categories or features, typically presented as a set of links or icons grouped together with visual styling distinct from the rest of the design.
Navigation menus include, but are not limited to, navigation bars and hamburger menus. Menus are so important that you find them in virtually every website or piece of software you encounter, but not all menus are created equal. Too often we observe users struggling with menus that are confusing, difficult to manipulate, or simply hard to find. Avoid common mistakes by following these guidelines for usable navigation menus:
A. Make It Visible
1. Don't use tiny menus (or menu icons) on large screens. Menus shouldn't be hidden when you have plenty of space to display them.
2. Put menus in familiar locations. Users expect to find UI elements where they've seen them before on other sites or apps (e.g., left rail, top of the screen). Make these expectations work in your favor by placing your menus where people expect to find them.
3. Make menu links look interactive. Users may not even realize that it's a menu if the options don't look clickable (or tappable). Menus may seem to be just decorative pictures or headings if you incorporate too many graphics or adhere too strictly to the principles of flat design.
4. Ensure that your menus have enough visual weight. In many cases, menus placed in familiar locations don't require much surrounding white space or color saturation to be noticeable. But if the design is cluttered, menus that lack visual emphasis can easily be lost in a sea of graphics, promotions, and headlines competing for the viewer's attention.
5. Use link-text colors that contrast with the background color. It's amazing how many designers ignore contrast guidelines; navigating through digital space is disorienting enough without having to squint at the screen just to read the menu.
B. Communicate the Current Location
6. Tell users "where" the currently visible screen is located within the menu options. "Where am I?" is one of the fundamental questions users must answer to navigate successfully. Users rely on visual cues from menus (and other navigation elements such as breadcrumbs) to answer this critical question. Failing to indicate the current location is probably the single most common mistake we see on website menus. Ironically, these menus have the greatest need to orient users, since visitors often don't enter from the homepage.
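A minimal sketch of guideline 6: mark the menu link that matches the current page. The ".nav a" selector and the "current" class name are assumptions about the markup, not prescribed values.

document.querySelectorAll<HTMLAnchorElement>(".nav a").forEach((link) => {
  if (link.pathname === window.location.pathname) {
    link.classList.add("current");             // styled distinctly via CSS
    link.setAttribute("aria-current", "page"); // also announced by screen readers
  }
});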
C. Coordinate Menus with User Tasks
7. Use understandable link labels. Figure out what users are looking for, and use category labels that are familiar and relevant. Menus are not the place to get cute with made-up words and internal jargon. Stick to terminology that clearly describes your content and features.
8. Make link labels easy to scan. You can reduce the amount of time users spend reading menus by left-justifying vertical menus and by front-loading key terms.
9. For large websites, use menus to let users preview lower-level content. If typical user journeys involve drilling down through several levels, mega menus (or traditional drop-downs) can save users time by letting them skip a level (or two).
10. Provide local navigation menus for closely related content. If people frequently want to compare related products or complete several tasks within a single section, make those nearby pages visible with a local navigation menu rather than forcing people to "pogo-stick" up and down your hierarchy.
11. Leverage visual communication. Images, graphics, or colors that help users understand the menu options can aid comprehension. But make sure the images support user tasks (or at least don't make the tasks more difficult).
D. Make It Easy to Manipulate
12. Make menu links big enough to be easily tapped or clicked. Links that are too small or too close together are a huge source of frustration for mobile users, and they also make large-screen designs unnecessarily difficult to use.
13. Ensure that drop-downs are neither too small nor too big. Hover-activated drop-downs that are too short quickly become an exercise in frustration, because they tend to disappear while you're trying to mouse over them to click a link. On the other hand, vertical drop-downs that are too long make it difficult to access links near the bottom of the list, because they may be cut off below the edge of the screen and require scrolling. Finally, hover-activated drop-downs that are too wide are easily mistaken for new pages, creating user confusion about why the page has seemingly changed even though they haven't clicked anything.
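One common fix for the "disappearing drop-down" problem in guideline 13 is a short grace period before the menu closes. A sketch, with hypothetical element ids and an assumed 300 ms delay:

const trigger = document.getElementById("menu-trigger")!; // hypothetical id
const dropdown = document.getElementById("dropdown")!;    // hypothetical id
let closeTimer: number | undefined;

const open = (): void => {
  window.clearTimeout(closeTimer); // cancel any pending close
  dropdown.hidden = false;
};

const scheduleClose = (): void => {
  // Grace period: the menu stays open briefly after the pointer leaves,
  // so it doesn't vanish while the user mouses toward a link.
  closeTimer = window.setTimeout(() => { dropdown.hidden = true; }, 300);
};

for (const el of [trigger, dropdown]) {
  el.addEventListener("mouseenter", open);
  el.addEventListener("mouseleave", scheduleClose);
}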
14. Consider "sticky" menus for long pages. Users who have reached the bottom of a long page may face a lot of tedious scrolling before they can get back to the menus at the top. Menus that remain visible at the top of the viewport even after scrolling solve that problem and are especially welcome on smaller screens.
15. Optimize for easy physical access to frequently used commands. For drop-down menus, this means putting the most common items close to the link target that launches the drop-down (so the user's mouse or finger won't have to travel as far). Recently, some mobile apps have even begun reviving pie menus, which keep all the menu options nearby by arranging them in a circle (or semicircle).
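Guideline 14 needs little more than one CSS property; here it is applied from TypeScript for consistency with the other sketches. The "#main-menu" id is an assumption about the markup.

const menu = document.getElementById("main-menu");
if (menu) {
  menu.style.position = "sticky"; // the menu scrolls with the page...
  menu.style.top = "0";           // ...then pins to the top of the viewport
}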
Form Fill-in Dialog Boxes
A dialog box is a graphical user interface element that appears as a small window; it presents information to the user and waits for the user's response before acting on that input. Dialog boxes are also used to show a confirmation message or notice with an "OK" button, so the user can acknowledge that the message has been read. Form fill-in is appropriate for the entry of multiple data fields:
• Complete information should be visible to the user.
• The display should resemble familiar paper forms.
• Instructions should be given for the different types of entries.
Users must be familiar with:
• Keyboards
• Use of the TAB key or mouse to move the cursor
• Error-correction methods
• Field-label meanings
• Permissible field contents
• Use of the ENTER and/or RETURN key
One reason dialog boxes are so important is that they help users avoid mistakes such as the one shown in Figure 1: the user may be trying to close the application while working on a document that has not yet been saved.
[Figure 1: a dialog box warning the user about unsaved changes before closing]
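A minimal sketch of guarding against the Figure 1 mistake in a web application, using the browser's standard beforeunload event. The hasUnsavedChanges() check is a hypothetical stand-in for application state.

declare function hasUnsavedChanges(): boolean; // hypothetical application-state check

window.addEventListener("beforeunload", (e: BeforeUnloadEvent) => {
  if (hasUnsavedChanges()) {
    // Ask the browser to show its "leave site?" confirmation dialog
    // instead of silently discarding the user's unsaved work.
    e.preventDefault();
    e.returnValue = ""; // required by some older browsers
  }
});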
Form Fill-in Design Guidelines:
• The title should be meaningful.
• Instructions should be comprehensible.
• Fields should be logically grouped and sequenced.
• The form should be visually appealing.
• Familiar field labels should be provided.
• Consistent terminology and abbreviations should be used.
• Convenient cursor movement should be available.
• Error correction should be possible for individual characters and for entire fields.
• Errors should be prevented where possible.
• Error messages should be shown for unacceptable values.
• Optional fields should be clearly marked.
• Explanatory messages for fields should be available.
• A completion signal should be shown.
A few of these guidelines are sketched in code below.
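A minimal sketch showing how a few of these guidelines (error messages for unacceptable values, clearly marked optional fields, a completion signal) might look in code. All field names, rules, and messages are illustrative assumptions.

interface FieldRule {
  label: string;
  required: boolean;
  validate: (value: string) => string | null; // null means the value is acceptable
}

const rules: Record<string, FieldRule> = {
  email: {
    label: "Email address",
    required: true,
    validate: (v) =>
      /^\S+@\S+\.\S+$/.test(v) ? null : "must look like name@example.com",
  },
  age: {
    label: "Age",
    required: false, // optional field, clearly marked as such on the form
    validate: (v) => (v === "" || /^\d+$/.test(v) ? null : "must be a whole number"),
  },
};

function validateForm(values: Record<string, string>): string[] {
  const errors: string[] = [];
  for (const [name, rule] of Object.entries(rules)) {
    const value = values[name] ?? "";
    if (rule.required && value === "") {
      errors.push(rule.label + " is required.");      // message for a missing value
    } else {
      const problem = rule.validate(value);
      if (problem !== null) errors.push(rule.label + " " + problem + ".");
    }
  }
  if (errors.length === 0) {
    console.log("Form complete. Thank you!"); // completion signal
  }
  return errors;
}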