
Designing the World Wide Web for People with Disabilities: A User Centered Design Approach

Lila F. Laux, Ph.D. US West Communications Human Factors/Knowledge Base Engineering 1801 California, #1640 Denver, CO USA 80202 llaux@uswest.com

Peter R. McNally Sensory Disabilities Research Unit Division of Psychology University of Hertfordshire Hatfield, Herts AL10 9AB, United Kingdom p.mcnally@herts.ac.uk

Michael G. Paciello Digital Equipment Corporation Usability Expertise Center 110 Spit Brook Rd. Nashua, NH USA 03060 mpaciello@paciellogroup.com

Gregg C. Vanderheiden, Ph.D. University of Wisconsin-Madison Trace Research & Development Center on Communications Control and Computer Access for Handicapped Individuals 1500 Highland Avenue Madison, WI USA 53705 gv@trace.wisc.edu

ABSTRACT

The emergence of the World Wide Web has made it possible for individuals with appropriate computer and telecommunications equipment to interact as never before. An explosion of next-generation information systems is flooding the commercial market. This cyberspace convergence of data, computers, networks, and multimedia presents exciting challenges to interface designers. However, this "new technology frontier" has also created enormous roadblocks and barriers for people with disabilities. This panel will discuss specific issues, suggest potential solutions, and solicit contributions required to design an accessible Web interface that includes people with disabilities.

KEYWORDS:

Accessibility, blindness, deaf, disabilities, hypermedia, mobility, people with disabilities, special needs, software development, user interfaces, user requirements.

INTRODUCTION

Designing an information network as complicated as the World Wide Web increases the need for accessibility. Access will be the key issue for every Web user. This is particularly true when it comes to designing accessibility for people with disabilities. The panelists will discuss current issues and barriers and explore potential solutions. Lila Laux cites the complexity and inaccessibility of information to special user populations. Peter McNally cites the lack of software tools that automate the process of user interface design for people with disabilities. Michael Paciello argues for a pervasive accessible technology that places the responsibility of adaptation on the Web designer, not on the user with disabilities. Gregg Vanderheiden summarizes the subject by citing several barriers that the Web currently presents.

SPECIAL USER POPULATIONS - Lila Laux

Introduction

The Internet is an incredibly large and diverse body of information. Because of its complexity and lack of organization, the information is often difficult to access. In addition, accessibility is limited by expense - many Americans do not have access to telephone lines (in inner-city neighborhoods, less than 80% of households have telephones), and many more cannot afford a computer with a modem or the on-line fees for Internet service. Special user populations are those who have one or more characteristics which could make it more difficult for them to locate, extract, and/or use Internet information.

Although there are many user groups which might be considered special populations given this definition, this paper will focus on two: the disadvantaged and those with disabilities - for many reasons, these two groups overlap.

Who comprises the disadvantaged special user population? The elderly, those with low incomes, and rural and inner-city residents have the lowest exposure to the Internet. These are large groups: the fastest-growing part of the world population is adults over 55 years of age, and many rural and inner-city schools lack the resources to provide Internet access to students. Inner-city and rural residents often have less exposure to training and fewer opportunities to learn computer skills. The majority of people with disabilities are disadvantaged both economically and educationally - they are typically unemployed or underemployed, and undereducated or unskilled.

Who is disabled with respect to Internet use? There are 26+ million Americans with physical and sensory disabilities which could affect their ability to use the Internet. Another 23+ million have potentially handicapping cognitive or literacy disabilities. Age is also associated with increasing numbers of disabling conditions which can affect a person's ability to use a computer and to access information on the Internet.

The Internet grew up without consideration for the special needs of any of the many special populations which might potentially use it to great benefit. Now that these potential benefits are understood, we are concerned with how to fit the Internet to them or them to the Internet. But there has been very little work to date to determine whether or how the Internet can meet their needs. The main question that we need to ask ourselves at this point is whether the Internet can provide information to special populations that isn't more readily available to them in another format. And the answer should be "yes". With regard to disabilities, there are many potential advantages to making the WWW accessible. A scholar with severe mobility limitations can access information from his or her colleagues through the Internet. An elderly person who has experienced a fall can access the results of AARP's studies about the quality and cost of walkers, and where to purchase them at the best price, in a timely fashion. A person with low vision who needs a screen reader could contact ABLEDATA, who will do a computer-based search and send the information within a few days - but how much more efficient and empowering if he or she can get that information from the Internet directly. A person needing information on accommodating Repetitive Stress Injuries can find the most current information on the Web. In addition, of course, people with disabilities find the same types of information useful as their non-disabled counterparts.

For the disadvantaged, the Web offers fewer resources. Currently, most Web users are young (<55), white males with college educations and incomes above the median for the US. But many of the needs of the disadvantaged could be served by the WWW - since Web pages can "talk", problems of illiteracy and low vision could be surmounted to provide information to groups with these disabilities. Information on educational and work opportunities, government agencies, Medicare, etc., and on-line training programs are only some of the ways the needs of these groups could be served.

Does the Internet meet the information needs of most special populations? The answer is, "not yet, at least not completely"; but, that will be remedied by demand when those populations have access. Are people in special populations even aware of the Internet? Many are not, especially those in disadvantaged populations which do not have access to the Internet.

There has been a great deal of progress in adapting computers for people with disabilities, and there is work which indicates that disadvantaged users and users with disabilities can and do use computers and on-line services when they have access and training and when they believe that the effort involved in learning to use the system has a payoff for them. Our challenges are to adapt browsers to make the Web more accessible to special populations, to ensure that Web pages conform to the Standard Generalized Markup Language (SGML), and to provide information through the Web which is accessible, directly relevant to them, and meets their needs.
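The page-conformance challenge can be made concrete with a small example. The following sketch (Python; not part of the original panel text, and the class and function names are our own) flags images in a page that carry no text alternative - the kind of markup omission that leaves a screen-reader user with nothing to read.

```python
# Illustrative sketch only: one simple accessibility check an author, browser,
# or proxy could run -- finding <img> tags with no usable "alt" text.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of every <img> tag that lacks a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if not attributes.get("alt"):
                self.missing.append(attributes.get("src", "<unknown image>"))

def images_without_alt(html_text):
    checker = MissingAltChecker()
    checker.feed(html_text)
    return checker.missing

page = '<p>Seat map:</p><img src="plane.gif"><img src="logo.gif" alt="Airline logo">'
print(images_without_alt(page))   # ['plane.gif']
```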

LESSEN THE CHALLENGE TO DEVELOPERS: PROVIDE THEM WITH SUITABLE SOFTWARE TOOLS - Peter McNally

Introduction

The current state of software tools does not allow for the development of software, e.g., World Wide Web (WWW) browsers, for the disabled market in parallel with the "mainstream" market. Developers usually have to modify existing systems in an ad hoc way to catch up. Disabled users will be several steps behind on the "information super-highway" unless software developers can build web browsers for groups of customers with different abilities and preferences as easily as they can port a web browser from one windowing system to another using a cross-platform user interface development tool, e.g., products from XVT™ [5]. This is not a trivial task. Several fundamental tasks must be accomplished before this goal can be achieved.

Decide Upon Appropriate Interaction Techniques

For each end-user group, many factors, e.g., the appropriate I/O devices, must be decided upon. The ACCESS project is focusing on three user groups: speech-motor impaired, language-cognitive impaired, and blind users. Each group has different needs, abilities, and preferences which must be determined to develop usable systems. The task of gathering these requirements is best completed with user-centered design, by consulting several groups of people.

Transferring Requirements to Development

This knowledge must be made available to software developers, for example by disseminating it through standards and guidelines. Ideally, third-party software development tools should be available so that, given the profile of the user group, they can make decisions on which interaction techniques to use. These tools should allow developers to design and develop a system once and "port" the software to a different environment specific to the user group.

THE ACCESS APPROACH

The ACCESS project is developing software tools to automate the process of developing user interfaces for different user groups on different platforms. A small part of the "core" of the user interface will still be programmed [2]. The platform specific and user group specific aspects of the user interface will be managed by the ACCESS tools. These tools can be divided into two groups: user adaptability and user interface specification.

User Adaptability Tool

User adaptability is a fundamental part of the ACCESS project and is the key to enabling software developers to produce products for more than one user group from one development effort. A high-level user interface design assistant called USE-IT will support user adaptability at the lexical level by reasoning about possible adaptations of the user interface [1].

USE-IT examines three areas: (i) the user models; (ii) task-oriented design constraints; and (iii) device availability, i.e., which I/O devices are available to the user [1]. USE-IT will produce a file containing adaptability rules that will be input to the user interface specification tools.
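The panel does not give USE-IT's actual rule format, but the kind of reasoning it describes - combining a user model and the available devices to choose interaction techniques - can be sketched hypothetically. Every name in the sketch below is invented for illustration.

```python
# Hypothetical sketch only: selecting interaction techniques from a user model
# and the devices actually available, in the spirit of the adaptability rules
# that USE-IT is described as producing.
def select_interaction_techniques(user_model, available_devices):
    """Return a simple rule set mapping dialogue elements to interaction techniques."""
    rules = {}
    if "blind" in user_model.get("abilities", []):
        rules["text_output"] = ("synthetic_speech" if "speech_synthesizer" in available_devices
                                else "braille_display")
        rules["selection"] = "keyboard"
    else:
        rules["text_output"] = "screen_text"
        rules["selection"] = "touch_tablet" if "touch_tablet" in available_devices else "mouse"
    return rules

profile = {"abilities": ["blind"], "preferences": {"speech_rate": "fast"}}
devices = ["keyboard", "speech_synthesizer", "touch_tablet"]
print(select_interaction_techniques(profile, devices))
# {'text_output': 'synthetic_speech', 'selection': 'keyboard'}
```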

User Interface Specification Tools

Two tools will contribute to the specification of the user interface [2]:

• G-DISPEC will enable a designer to specify the user interface in a high-level 4th generation language. The tool will use the adaptability rules output from the USE-IT tool.

• I-GET will compile the high-level language produced from G-DISPEC and generate the user interface source code.

THE ACCESS HYPERMEDIA SYSTEMS

In order to validate the tools mentioned above, several demonstration systems will be developed. One demonstrator is a hypermedia system for both sighted and blind students. The materials to be authored are textbooks and other study materials. The hypermedia system runs on a stand-alone PC, so there will be no WWW capabilities. However, in the future, a web browser capability should be feasible if the system is designed with object-oriented techniques.

On the basis of a series of user requirements, an initial prototype for a hypermedia system for blind students has been designed and is currently being implemented. The hypermedia system for sighted students will contain the same functionality, and the design of the user interface will be started later in the project. The system for blind students will use synthetic speech, digitized speech, non-speech audio, and Braille for output, and a conventional keyboard, a joystick, voice recognition and a touch tablet for input. Developing a system which will allow blind users to interact effectively with hypermedia systems is clearly a challenging problem and several iterations of design and evaluation of prototype systems will be needed before a satisfactory solution to this problem is found.

Some of the features of the hypermedia system are described in [3].

These features will be implemented in the first prototype system, using a preliminary form of G-DISPEC to generate the interface. Many of these features would be transferable to other systems for blind users, such as WWW browsers.


CONCLUSIONS

With the popularity of the Internet and the WWW in the marketplace, software companies are under great pressure to meet deadlines and make profits. If companies are to actively support the concept of making the WWW and Internet products accessible and usable by all, we must make their tasks less challenging. One way to achieve this goal is to develop software tools that have user knowledge built in, which allow developers to re-use concepts proven usable for disabled users. An example of this philosophy is the current work in the ACCESS project: user interface development tools and a hypermedia system for blind and sighted students.

PERVASIVE ACCESSIBLE TECHNOLOGY: THE KEY TO WEB ACCESSIBILITY - Michael Paciello

Introduction

Why is it that so few products and interfaces include accessible design for people with disabilities? The problem lies not in the "why" but in the "when". For years usability engineers have fought to help traditional engineers appreciate the value of designing first and developing second - or, at a minimum, designing and developing concurrently. Yet, when it comes to designing for people with disabilities, the popular terms are "adaptive" and "assistive": quite simply, design that occurs after product inception and long after the product has been released to the public. Is this the essence of usability or user centered design?

User Centered Design for the User with Disabilities

Woodson defined human factors engineering as "...the practice of designing products so that users can perform required use, operation, service, and supportive tasks with a minimum of stress and maximum of efficiency" [6]. Additionally, commenting on user interface design, Brenda Laurel noted that "the goal of all our efforts is to empower the user" [7]. Laurel also noted that "As computer technology has become available to more and more people in a greater variety of devices and contexts, the need for accessibility...has grown more and more pronounced." This is, in effect, the very essence of user centered design (UCD): placing the needs and requirements of the user at the center of the design process.

To the person with a disability, however, the concept and practice of user centered design is generally perceived as a novelty, albeit a "Cave of Wonders". Assistive and adaptive technologies have become the acceptable norm, while accessible design has become the usability orphan.

Designing for people with disabilities is, in fact, just an extension of human factors engineering. Accessibility engineers employ the same usability methods and techniques used to design and test interfaces for able-bodied persons. My own personal experience has taught me that I simply need to be more "sensory-aware". I call this the "hear no evil, see no evil, speak no evil" design method. When an interface is designed and tested so that it achieves independent use by the deaf, the blind, and others with assorted disabilities, it is truly an accessible human interface.

PERVASIVE ACCESSIBLE TECHNOLOGY

In order to accomplish optimal user centered design that includes people with disabilities, Human Computer Interface groups should consider the Pervasive Accessible Technology (PAT) strategy. PAT is built upon two major components:

1. The Standard Human Interface (SHI)
2. An Accessible Information Technology Infrastructure (AITI)

Pervasive Accessible Technology, or PAT, is a technology I first heard proposed by the late David Stone at the 1991 World Congress on Technology for People with Disabilities. PAT is a technology in which persons with disabilities use standard human interface devices to communicate with an information technology infrastructure that knows how to adapt to the user. The IT interface employs an intelligent agent that adapts to the user rather than requiring that the user adapt to the infrastructure. This should be the very core of next-generation interfaces, particularly the often-discussed Global Information Infrastructure (GII).

The Standard Human Interface

Pervasive accessible technology acknowledges (as does the GII) a standard human interface. The standard human interface includes microphones, speakers, touch screens, glidepoint touchpads, kiosks, infrared devices, and video cameras (among other things). These capabilities, if supported by the right standards and an accessible infrastructure, have the potential of being accessible to the vast majority of people with disabilities.

For the standard human interface, we need to develop standard programming interfaces for all the capabilities including speech, video and touch, as well as for an accessible access port. We also need to build multimedia capabilities into the human interface model.
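What such standard programming interfaces would look like is not specified here. Purely as a hedged illustration, one possible shape is an abstract capability interface that every platform in the infrastructure would implement; the class and method names below are invented.

```python
# Illustrative sketch only: a hypothetical "standard human interface" that each
# platform would implement, so applications can address speech, visual, and
# microphone capabilities uniformly regardless of the device behind them.
from abc import ABC, abstractmethod

class StandardHumanInterface(ABC):
    @abstractmethod
    def speak(self, text: str) -> None:
        """Render text through a speech channel (speaker or synthesizer)."""

    @abstractmethod
    def show(self, text: str) -> None:
        """Render text visually (screen, kiosk display)."""

    @abstractmethod
    def listen(self) -> str:
        """Return user input captured from a microphone or other input device."""
```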

An Accessible Information Technology Infrastructure (AITI)

The last - and crucial - component is the accessible framework that wraps itself around the data and the applications. This is the accessible information infrastructure. This framework is the equivalent of a personal interpreter for the person who is using the system. Each person has his or her own framework: one that knows what language they speak, what alphabet they use, and what accessibility services they prefer. In this model, accessible technology is not only pervasive; its accessibility is non-discriminatory and totally transparent to users.

In the accessible information technology infrastructure (AITI), we need to develop a set of standard accessibility services that can be found on all platforms in the infrastructure. These standards must be built upon usability foundation stones, including efficiency, effectiveness, and flexibility. We also need a set of data and document architectures that allow us to maintain accessible data and documents - something much more than the HyperText Markup Language (HTML), and something more robust and flexible than the current Standard Generalized Markup Language (SGML) or its associated accessible standard, ICADD22 (International Committee for Accessible Document Design).

Conclusion

The least-cost, most-efficient way of providing high-quality accessibility is one that integrates accessibility directly into the infrastructure. Usability groups should develop a well-planned strategy that implements Pervasive Accessible Technology (PAT), a technology that implements a standard human interface and envelops an accessible information technology infrastructure.

Indeed, the challenge of designing next-generation interfaces that are accessible to all people, including people with disabilities, is to make provisions for integrating personal assistive products as front ends or access ports to technology, and to make information technology pervasively disability-friendly.

PRESENTATION-INDEPENDENT INFORMATION SERVING AND "EVERYONE INTERFACES"- Gregg Vanderheiden

Introduction

We are rapidly integrating the information technologies into our daily lives, much in the way that electricity is currently incorporated. At one time, electricity was available at a couple of points in only some homes, and electrical appliances were special devices. Today, it is illegal to build a house that does not have electricity in basically every livable room. In fact, most places insist that there be outlets on every significant piece of wall in a room. Electricity is also inherent in almost every device we use, from our computers to our telephones to our cars to our wristwatches. In the future, information technologies will also be similarly integrated. We won't think of there being specific isolated appliances for accessing and using information. Rather, it will be integrated into our environments and our lives.

As this happens, our environments and lives will not be accessible unless the information in them is accessible. Individuals who have worked out strategies for education, work, or daily living may find that they are no longer able to function adequately or independently. This includes not only people with disabilities, but also people who find the information technologies too complicated or who find that they demand too high a literacy or other skill level.

"Everyone Interfaces"

In order to address this issue and ensure the broadest possible access to the next-generation systems and appliances, there has been increased attention on the development of "everyone interfaces." Such interfaces allow people with low or no technology skills (or inclinations) as well as those with literacy and other language barriers to effectively access and use these systems. While much of the emphasis here is on creating very user-friendly yet accessible interfaces, there is only so far that one can go in creating an accessible interface if the source material itself is not accessible. Thus, it is important to look at all components of a system, from the source to the viewer, if systems are to be made accessible.

Four Components to an Accessible System

In looking at information systems, it is important to note that there are four major components which must be accessible for the system to be accessible:

1) The source information;
2) The pipeline;
3) The in-line transmission services; and
4) The viewer.

Source information: If the information sent from the source is not accessible, then it is often difficult or impossible to make it accessible to the user at the destination point or viewer. For example, if the only way that an airline presents the seats that are available on a given airplane is by presenting a picture of the airplane with the empty seats light colored and the reserved seats black, it is very difficult for a viewer to make this information available to someone who is blind. Similarly, if the only way that temperatures are provided is as color coding on a map, the information may be difficult or impossible to provide at the user's viewer (except by some specialized application specifically adapted to deal with that data type).

Pipeline: Care must be taken that shipping information over a pipeline does not cause it to lose its accessibility information. At one time, there was a data compression technique for movies which inadvertently stripped all captioning out. Movies arrived in pristine condition at the destination point, but all of the captions were lost in the pipeline. Of particular interest here are any data transmission standards which do not provide for the inclusion of captions, text descriptions, etc.

In-line transmission services: This is a new and very powerful development. The ability to have e-mail turned into voice mail, voice mail into e-mail, faxes into e-mail or voice mail, etc., can provide very powerful access tools for people with disabilities. As OCR, page layout recognition, and voice recognition technologies advance, the utility of these services will increase dramatically.

Viewers: This includes computer-based programs such as Mosaic, Netscape, and AOL browsers, etc., as well as kiosks, television set-top boxes, hand-held PDAs, and next-generation home information appliances. It also includes touch-tone phones and other audio-based systems which might be used by someone who is driving a car to access diverse information services.

PRESENTATION-INDEPENDENT INFORMATION

Key to making these information systems accessible across disabilities and across viewers is the ability to have information stored and served in a presentation-independent format. That is, information needs to be available in a form which can be easily rendered in visual, auditory, or electronic text format. The simplest form of presentation-independent information is ASCII text. ASCII text does not in fact have any natural presentation form. It can be rendered into visual form on either a display screen or printer. It can also be presented in auditory form using a speech synthesizer. For individuals who can neither see nor hear sufficiently well, the information can also be presented in tactile form; for example, in braille. The information is easily searchable and is probably the easiest type of information for machine intelligences to deal with.
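A minimal sketch of this idea follows, with placeholder functions standing in for a display driver, a speech synthesizer, and a refreshable braille device; the names are ours, not part of the original text.

```python
# Illustrative sketch only: the same presentation-independent (plain-text)
# content dispatched to whichever output modality the user has chosen.
def render_visual(text):
    print(text)                              # stand-in for drawing to a screen

def render_auditory(text):
    print(f"[speech synthesizer] {text}")    # stand-in for text-to-speech output

def render_tactile(text):
    print(f"[braille display] {text}")       # stand-in for a braille device driver

RENDERERS = {"visual": render_visual, "auditory": render_auditory, "tactile": render_tactile}

def present(text, modality):
    RENDERERS[modality](text)

present("Gate change: flight 210 now departs from gate B4.", "auditory")
```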

Not all information is presentation-independent in its natural state. A photograph, for example, is by its nature easiest to present in visual format. A symphony or forest sounds are easiest to present in auditory form. Some types of information we do not know how to effectively present in other forms. How do we present the Mona Lisa or Guernica in auditory form?

However, much information can either be presented in presentation-independent form, or be put into a package (with more than one form in the package) which can allow presentation using the different modalities. For example, an audio speech could be packaged along with a transcript of the speech; a movie could have captions embedded, etc., as shown in Table 1.

Table 1: Examples of Presentation-Independent Information Storage and Serving Packages

1. ASCII text file;
2. Audio track with accompanying text description (ideally available both separate from and time-synched with the audio track);
3. Graphic image files accompanied by (or including) a text description of the important information within the graphic (preferably the description would include both a functional and an aesthetic description);
4. Video/movie files including time-synched text translation/description of the audio component (e.g., captions) and time-synched description of the video content (preferably in both auditory and text format).
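One hypothetical way such a package might be represented, so that a client can request only the tracks it needs (a point taken up in the next subsection), is sketched below; the data layout is invented for illustration.

```python
# Illustrative sketch only: a single package holding several synchronized
# tracks, from which a client can request just the components it needs --
# for example, only the small caption track of a long movie.
package = {
    "title": "Presidential address, 12 March",
    "tracks": {
        "video":       {"size_mb": 480.0, "uri": "address.video"},
        "audio":       {"size_mb": 60.0,  "uri": "address.audio"},
        "captions":    {"size_mb": 0.2,   "uri": "address.captions"},
        "description": {"size_mb": 0.3,   "uri": "address.description"},  # text description of the video content
    },
}

def serve(pkg, wanted):
    """Return only the requested tracks of the package."""
    return {name: track for name, track in pkg["tracks"].items() if name in wanted}

# A reporter searching many speeches might first fetch only the caption tracks:
print(serve(package, {"captions"}))
```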

Selective Display and Partial Serving

Ideally, these data packages would allow the user to select which portions of the overall data package they wished to view. Someone who can both see and hear may wish to watch the movie in traditional format. If they have to watch it in a very noisy room, or if they have hearing problems, they could turn on the text description (captions) of the audio track. Individuals driving a car or engaged in some other activity which occupied their vision might want to turn on the video description component, as might an individual who was blind.

Users may also want to save time by only having selected portions of the material downloaded. A reporter interested in doing research on presidential speeches may wish to take the time only to download the transcripts or the caption tracks from audio or movie files of the president's speeches. These can be downloaded very quickly and searched for key words or phrases. The reporter might then download only the specific much larger auditory or movie files that are of interest. (Once downloaded, the reporter could again use the caption tracks in the movies to quickly jump to the particular portions of interest within the speeches.)

Individuals with disabilities may in fact opt to only download those components they are interested in. For example, a person who is blind may only download the audio tracks and save themselves time (and money) if they do not have anyone who will be viewing the material with them who needs the video track. Individuals who are deaf may wish to download the video track, the caption track, and/or perhaps even a second video track which had an American Sign Language interpreter signing the material on the first video track.

The ability to serve information in one or more formats also applies to information which is generated on demand rather than stored. For example, the airline reservation system does not store the information about available seats in picture format. Rather, it generates the picture of the airplane with the full or empty seats at the time of request. Individuals making the request should be able to request either the visual presentation or a non-visual presentation of the information, as best meets their current needs and circumstances. Again, we can think of the individual who is driving his car to the airport, who has missed one plane and is madly trying to find a seat on another.
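As a purely hypothetical sketch (not drawn from any real reservation system), the same stored seat data could be rendered either as a visual map or as a spoken summary at the moment of the request.

```python
# Illustrative sketch only: seat availability held as data, with the visual map
# or the non-visual summary generated on demand according to the request.
seats = {"12A": "open", "12B": "reserved", "12C": "open", "13A": "reserved"}

def visual_seat_map(seat_map):
    # Stand-in for drawing the aircraft: '.' marks an open seat, 'X' a reserved one.
    return "  ".join(f"{seat}{'.' if status == 'open' else 'X'}"
                     for seat, status in sorted(seat_map.items()))

def spoken_seat_summary(seat_map):
    open_seats = sorted(seat for seat, status in seat_map.items() if status == "open")
    return "Open seats: " + ", ".join(open_seats) if open_seats else "No seats are open."

def present_seats(seat_map, modality):
    return visual_seat_map(seat_map) if modality == "visual" else spoken_seat_summary(seat_map)

print(present_seats(seats, "non-visual"))   # Open seats: 12A, 12C
```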

Attached or Included?

This discussion raises the question of whether the different data should be attached as separate data files which can be resynched at the player or whether they should be handled as a single data file format. Although there is not a definitive answer at this time, it appears that the best mechanism would be a format which is seen as a single format but which will allow easy serving of the individual data streams. That is, rather than being thought of as a collection of different data types, it is better if the data storage formats allow the different data types to be stored together in one package. This facilitates copying and also leads to a more standardized package. It also increases the probability that information sources will be aware of and provide the information in the different data formats. Finally, when the data is mirrored or copied and transported, it is much more likely that all of the different components will be copied and carried forward.

At the same time, it should be possible to easily separate the different data tracks so that individual users can request only those components they need at any particular time. The two major reasons for this are to decrease the time needed to download the information and to minimize any unnecessary packet traffic on the Internet.

In-Transmission Filters

In addition to the in-transmission services discussed above (where different data streams might be combined to form an accessible version of a multimedia title), there are other in-line transmission services which might be used to broaden the usability of particular multimedia files. For example, the audio component might be filtered and shifted to help individuals with specific frequency hearing losses. Speech-to-text algorithms might be run to add a text transcription (or language translation) to a file with audio speech content. Special filters might be used to convert proprietary text files or image files of text into plain ASCII text or ICADD 22 (a form of SGML) to make it easier to access for users who need or prefer this format. Such translation, however, is not yet reliable and should not be considered a main approach for accessibility at this time.
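The filter-chain pattern itself is simple to sketch. The example below is illustrative only: two trivial placeholder filters (stripping markup and normalizing whitespace) stand in for the far more involved services - OCR, speech recognition, format conversion - that a real in-line service would perform.

```python
# Illustrative sketch only: content passed through a chain of in-line filters
# while "in transmission".  The placeholder filters reduce marked-up text to
# plain text; real services would do OCR, recognition, or format conversion.
import html
import re

def strip_markup(text):
    """Remove tags and decode entities, approximating plain ASCII text."""
    return html.unescape(re.sub(r"<[^>]+>", "", text))

def normalize_whitespace(text):
    return " ".join(text.split())

FILTERS = [strip_markup, normalize_whitespace]

def transmit(content, filters=FILTERS):
    for apply_filter in filters:
        content = apply_filter(content)
    return content

print(transmit("<p>Departure&nbsp;time:   <b>09:40</b></p>"))
# Departure time: 09:40
```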

Speech recognition, particularly in noisy or complex scenes, is marginal, and if the authorship of the speech is not apparent, many interactions may be incomprehensible. Similarly, complex graphic materials cannot be easily rendered into text, even if OCR did perfect recognition on the textual matter. Reliable cross-modal provision of information will therefore generally require the multi-modal serving discussed above.

"Everyone Interface" Browsers and Viewers

The browser/viewer represents the last link in the chain. If the browser or viewer is not accessible, the information will not be usable. Accessibility at the browser/viewer level means that the browser must:

a) Be able to present the information in the different formats required by different users. (For example, a movie viewer must be able to display captions if they are included in the data stream, as well as be able to present alternate audio tracks which might contain descriptive video narrations.)

b) Be usable by individuals with a wide range of skills and abilities. If the viewers are in public locations, this means that cross-disability access must be built directly into the browsers/viewers.

Today, there are touchscreen kiosks that demonstrate the feasibility of built-in cross-disability access in commercial products. Systems exist and are being placed commercially which allow individuals with low vision, blindness, physical disabilities, cognitive or literacy problems which make reading difficult, and individuals who are deaf-blind to use public kiosks. At the present time, browsers and Internet viewers do not exist which meet either criterion (a) or (b) above across disabilities. However, examples exist for specific access features (such as the text stream option in QuickTime). Hopefully, as the ease of building this cross-disability access into browsers becomes known, along with its benefits to all users, we will see increased incorporation of these strategies in the standard commercial browsers.

Conclusion

In order to provide access to next-generation information systems by people with diverse physical and sensory abilities, it is necessary to begin thinking of systems, data formats and viewers which are not presentation mode specific. This is important not only to allow access by people with disabilities, but also by individuals with different data requirements or who are operating under conditions (e.g., a very noisy room) where particular data types are of no particular value.

Providing information in this fashion also has other advantages aside from the presentation flexibility it affords. The primary advantages are on the data sourcing side: enhanced indexing and search capabilities, particularly for audiovisual materials. On the viewer end, the advantages include the ability to use these systems in a much wider range of environments (noisy malls, silent libraries) and in unusual or adverse conditions.

ACKNOWLEDGMENTS

This work in the ACCESS Project is funded by the TIDE Programme of the EU (DG XIII). The partners in the ACCESS Project are: Consiglio Nazionale delle Ricerche, Instituto Ricerca Onde Elettromagnetiche, Italy; Institute of Computer Science, Foundation of Research and Technology, Crete, Greece; Royal National Institute for the Blind, U.K.; Sensory Disabilities Research Unit, University of Hertfordshire, U.K.; Department of Informatics, University of Athens, Greece; Seleco SpA, Italy; MA Systems and Control Ltd., U.K.; Hereward College, U.K.; Research and Development Centre for Welfare and Health, Finland; Technical Research Centre, Finland; Pikomed, Finland.

In particular, the USE-IT, G-DISPEC, and I-GET tools are developed by the Institute of Computer Science, Foundation of Research and Technology, Crete, Greece.

We would also like to acknowledge the work of the recently deceased Yuri Rubinsky. Yuri was responsible for implementing much of the HTML and ICADD standards that allow people with print disabilities to have easier access to Web content, hypermedia, and print information.

REFERENCES

1. Akoumianakis, D., Stephanidis, C., Petrie, H. and Morley, S. (1995). Supporting user interface adaptability during the design and development process. Adjunct Proceedings of the HCI '95 Conference, People and Computers X, pp. 29-32. Cambridge: Cambridge University Press.

2. Stephanidis, C., Savidis, A. and Akoumianakis, D. (1995). Tools for user interfaces for all. In I. Placencia-Porreo and R. Puig de la Bellecasa (Eds.). The European Context for Assistive Technology. pp 167 - 170. Amsterdam: IOS Press.

3. Petrie, H., Morley, S., McNally, P. and Graziani, P. (1995). Accessing hypermedia systems for blind people. Proceedings of the ECART 3 Conference, pp. 311-313. Lisbon, Portugal: National Secretariat of Rehabilitation.

4. Salminen, A.-L. and Petrie, H. (1995). Human- human interaction in the design of rehabilitation technology. In I. Placencia-Porreo and R. Puig de la Bellecasa (Eds.). The European Context for Assistive Technology. pp 29 - 32. Amsterdam: IOS Press.

5. XVT Software Inc. [http://www.xvt.com/docs/products.html], November 1995.

6. Woodson, Wesley E. Human Factors Design Handbook: Information and Guidelines for the Design of Systems, Facilities, Equipment and Products for Human Use. New York: McGraw-Hill, 1981

7. Laurel, Brenda. The Art of Human-Computer Interface Design. Addison-Wesley Publishing Company, Inc., 1990.
