The Evolution of Alternative Communications Technologies for the Deaf, Hard of Hearing and Speech Impaired


Disclaimer:

This report was commissioned by the Canadian Radio-television and Telecommunications Commission ("the Commission" or "the CRTC") in November 2012 and was completed by CONNECTUS Consulting Inc. This report presents a review of alternative assistive communications technologies, currently on the market or in development, for persons with hearing or speech impairments; it does not cover video compression technology, such as that used for video-based services like Skype or video relay service (VRS), or gateways.

While the author has endeavored to ensure that the information is current and accurate at the time of writing, significant changes may be occurring or have occurred in some areas by the time of publication. This report reflects the research and views of the author, and should not be construed as representing any views of the Commission.


Final Report


Report Author: Richard Cavanagh PhD, M.A.

CONNECTUS Consulting Inc. (CONNECTUS) is pleased to present its Final Report to the Canadian Radio-television and Telecommunications Commission (CRTC, the Commission) on The Evolution of Alternative Communications Technologies for the Deaf, Hard of Hearing and Speech Impaired (the Report).

The Report was compiled through a scan and review of existing alternative communications technologies (which could also be termed 'assistive communications devices') designed for people with hearing or speech disabilities. The scan was supplemented through discussion with six international experts in the field of accessible technologies.

Overall, the Report focuses on ways, both current and in development, of improving telecommunications accessibility for those with hearing or speech disabilities outside of video compression techniques or gateways. The Report also touches on evolving approaches to the improvement of communication for those with hearing or speech disabilities, largely through the development of new software applications.1

The Report is organized as follows:

Part I provides a brief definition and description of the term 'assistive devices' from a communications technology perspective.

Part II is the core of the Report that presents a review of alternative communications technologies available or in development (in North America and other international jurisdictions). For each technology we include:

In addition, we present for each technology:

Where feasible, and where accessed in the public domain and/or permitted by rights holders, schematics, graphics and other technology elements are presented for illustrative purposes.

Part III of the Report presents a summary grid of the above review and scan, together with recommendations on those alternative communications technologies that would be useful to monitor going forward.

The Report also includes two Appendices.

Appendix A provides a list of sources used for the Report.

Appendix B presents a brief biography of the Report's author.

Executive Summary

This Report was compiled through a scan and review of existing alternative communications technologies (which could also be termed 'assistive communications devices', defined in the first part of the Report) designed for people with hearing or speech disabilities.

Overall, the Report focuses on ways of improving telecommunications accessibility for those with hearing or speech disabilities outside of video compression techniques or gateways. The Report also touches on evolving approaches to the improvement of communication for those with hearing or speech disabilities, largely through the development of new software applications but outside of the telecommunications system itself.

A review and scan of existing and developing alternative assistive communications technologies indicates that the following six categories can be identified, although there is overlap between them.

New captioning technologies in telecommunications examines advancements in telecommunications devices for the deaf (TDD) traditionally used for text communication via telephone lines, focusing primarily on captioned telephones.

Traditional TDD technology has fallen by the wayside due to (i) advances in TDD technology that uses operators, software or both to effectively create 'captioned telephony' and (ii) the widespread use of digital (or internet protocol) networks instead of analog networks for communication by those with hearing and speech disabilities.

Captioned telephones operate in a fashion similar to TDDs, but work as a regular telephone that provides voice and displays captions simultaneously.

While technical barriers to the adoption of captioned telephones in Canada appear limited, the feasibility of integrating this accessible technology into the Canadian system is in question, for reasons of public policy and the rapid evolution of other useful technologies.

Advancements in text relay looks primarily at Internet Protocol (IP) Relay which allows people with hearing or speech disabilities to communicate using a computer and the Internet – the computer effectively becomes the TTY. There are no additional costs to users of IP Relay beyond a computer or other Web-capable device and an Internet connection. IP Relay services were launched in Canada in early 2011.

Multiple types of computer programs can be used with IP Relay, including custom programs that run in a computer's web browser, as well as instant message-based services. It is multi-device and multi-platform, capable of functioning with tablets, smartphones, and computers.

While IP Relay is a fully feasible text-based service for Canadians with hearing and speech disabilities, the speech to text conversion software that is used to enhance the efficiency of operators may be limited in terms of (i) its accuracy in conveying correct text and/or (ii) the 'trainability' of the software itself.

Speech to text conversion technology utilizes software to convert vocal sounds to written words (i.e. speech recognition or more advanced voice recognition technologies). It is used for both CapTel and IP Relay, where a Communications Assistant (i.e. relay operator) repeats the words of anyone who is speaking into a computer microphone. The computer's speech to text software converts the spoken words into written ones.

In general, speech to text conversion is probably limited in its attractiveness to those with hearing and speech disabilities, because of its questionable accuracy. Moreover, it is viewed as secondary to video, secondary to Sign-to-speech/text conversion, and has been surpassed by mainstream SMS and instant messaging technology.

However, speech to text conversion technology has a number of applications, including IP Relay, closed captions for television broadcasting and mobile platforms.

Sign Language to speech/text conversion converts Sign Language to text or computer-generated spoken word in near-real time.

A discussion of Sign to Speech/Text conversion shifts the technology focus from one on telecommunications to one on communication apps. This is an important distinction, because this particular technology does not involve augmenting or otherwise altering the telecommunications system for purposes of better accessibility. Rather, it adds a software application to deliver a new type of communication between users.

Sign to speech or text conversion technology is once again a software-based system that converts Sign Language (for example, American Sign Language, Langue des signes québécoise, British Sign Language or the specific Sign Languages of other jurisdictions) to computer-generated spoken or written words, or converts text or spoken word to Sign Language in near-real time.

While there is an international groundswell of interest in this evolving technology, Sign to speech or text conversion is in its relatively early stages of development.

The speed at which Sign to speech/text conversion is proceeding appears more rapid (compared with, for example, speech to text conversion technology) because there is considerable momentum on an international scale behind this technology; multiple projects, similar in design and approach, are underway in a number of jurisdictions, presenting opportunities for information exchange.

But while indications are that this technology will deliver a useful addition to assistive applications, some current claims – such as real time conversion from Sign to text – should be treated with caution. Such caution is reasonable given the early promise of voice recognition software, which ultimately encountered significant barriers and limitations that have proven difficult to overcome.

Mainstream technologies such as Short Message Service (SMS) and instant messaging have been massively adapted for use by those with hearing or speech disabilities. There is little doubt that mainstream communications technologies such as SMS have simply and very quickly surpassed other assistive technologies for a number of reasons: text-based, easy to use, vibrating functionality of handsets, fast and potentially inexpensive.

The benefits for users are, in a word, enormous – dramatically changing the lives of millions.

It has been suggested that mainstream text messaging, SMS or other instant messaging are by far the most widely available and widely adapted technologies for people with hearing and speech disabilities. They are technologies that have unintentionally resulted in 'electronic curb cuts' of mass proportions.2 Considering that such technologies are developed by the world's largest software companies and most creative application developers also means that these technologies will keep evolving given market competition and massive consumer uptake.

Future developments and applications in alternative technologies include a portable device in which two or more users type messages to each other that can be displayed simultaneously in real time; a personalized text-to-speech synthesis system that synthesizes speech that is more intelligible and natural sounding to be incorporated in speech-generating devices; and automatic personalization of communication preferences, using a cloud-based preferences profile that cuts through the clutter of 'too much choice' for users.

As a concluding note on the current and developing state of alternative communications technologies, professionals in this field who were consulted for the scan identify two important emerging issues.

First, the "ecosystem for these technologies is option-rich." There are more choices than ever before for enhancing communication for people with hearing and speech disabilities – to the point of being both overwhelming and creating a digital divide in terms of tech-savvy users and those who are not.

Second, "accessibility features and services are massively underused, even when they are free. People lack awareness and confidence – they don't know what will work and they don't know how to get started." In other words, putting information into action can be a barrier to using the technologies that are there now – and resolving this issue is key.

Monitoring of three areas is recommended going forward:

Although a scan and review of video compression technologies that result in applications such as Skype, Google Hangouts or VRS was beyond the scope of this Report, video compression technologies should also be monitored given their importance to users and the more efficient use of bandwidth that future developments may represent.

Part I – A Definition of Assistive Devices

It should be noted from the outset that, while some alternative communications technologies specifically identify their utility for people with hearing and/or speech disabilities, a number of mainstream technologies have been adapted for widespread use by this community of people with disabilities. Such technologies include SMS (short message service) and instant messaging/texting of the type widely and currently available on many mobile devices such as smartphones.

From the perspective of accessibility, this is an important consideration in terms of defining an assistive device. As noted by the U.S.-based National Institute on Deafness and Other Communication Disorders (NIDCD),

"The terms assistive device or assistive technology can refer to any device that helps a person with hearing loss or a voice, speech, or language disorder to communicate. These terms often refer to devices that help a person to hear and understand what is being said more clearly or to express thoughts more easily".3

With respect to the adaptation of mainstream technologies, widely used by the able-bodied population, the NIDCD goes on to say,

"With the development of digital and wireless technologies, more and more devices are becoming available to help people with hearing, voice, speech, and language disorders communicate more meaningfully and participate more fully in their daily lives."4

Text messaging and the use of video technologies such as Skype have become ubiquitous within the community of those with hearing and speech disabilities, largely because of their easy adaptation for those who have typically used assistive devices to communicate. Smartphones and other mobile devices were not specifically manufactured for this purpose – but their functionality nonetheless extends to the disability community by the very nature of the method of communication used: text.

Despite the usefulness of mainstream communication technologies to users with hearing or speech disabilities – explored in greater detail below – there remains an important and expanding assortment of alternative assistive communications technologies, as set out in the next section of our Report.

Part II – A Scan and Review of Alternative Assistive Communications Technologies

A review and scan of existing and developing alternative assistive communications technologies indicates that the following six categories can be identified, although there is overlap between them:

1) New captioning technologies in telecommunications

To understand the role and importance of new captioning technologies in telecommunications, it is first important to understand the basics of how telecommunications devices for the deaf, or TDD, operate.

First developed in the 1960s, a telecommunications device for the deaf (TDD) is an electronic device that enables text communication over telephone lines, allowing those with hearing or speech disabilities to communicate one-on-one with each other, as well as with hearing people. A typical TDD – also called a teletypewriter (TTY), a textphone (in Europe) or a minicom (in the U.K.) – is about the size of a small laptop, with a standard QWERTY keyboard and a small LED or LCD screen that displays typed text electronically.

Photograph of an older model TDD

In Canada and other countries, there are different ways to communicate with a TDD. Between deaf parties, each possessing a compatible TDD, text is transmitted live via a telephone line.5

However, TDDs can also be used for communication between a deaf person and a hearing person, through the use of a human relay operator. These added features of TDDs are referred to as 'carry-over' services, enabling people who can hear but cannot speak ('hearing carry-over' or HCO) or people who can speak but not hear ('voice carry-over' or VCO) to use the telephone. Relay operators do just that: they relay conversation between parties, converting speech to text and text to speech.6

This traditional form of telecommunication is falling by the wayside for two key reasons: (i) advances in TDD technology that uses operators, software or both to effectively create 'captioned telephony' and (ii) the widespread use of digital (or internet protocol) networks instead of analog networks for communication by those with hearing and speech disabilities (which is discussed later in our Report).

Captioned Telephones – Summary Description

Captioned telephones operate in a fashion similar to TDDs, but work as a regular telephone that provides voice and displays captions simultaneously. An illustration from CapTel, the principal captioned telephone provider in the U.S., summarizes how the service operates:

"You dial the other person's number, exactly the same way as with any other telephone. While you dial, the CapTel phone automatically connects to the captioning service. When the other party answers, you hear everything they say, just like a traditional call. At the same time, the (TRS) captioning service transcribes everything they say into captions, which appear on the CapTel display window. You hear what you can, and read what you need to."7

Graphic illustration of how CapTel works

A few conditions of captioned telephony are worth noting.

First, while captions appear automatically in the telephone set display screen such as the one illustrated below, how one connects to the captioning service depends on the type of phone in use. For example, a one-line unit means the caller must first dial the captioning service, and route the call from there. A two-line unit means that calls are directly routed, just like a regular telephone call.

Second, there is the question of how calls are transcribed and converted into captions by the captioning service. In general, voice recognition software is used; this means that the words spoken by a caller are repeated into a computer by an operator, and then converted to captions at the other end of the call. This in turn can create practical issues in terms of captioning accuracy, discussed below in the section on 'barriers'.

Third, it is important to note that the captioned telephones in use in the U.S. are funded through the U.S. Telecommunications Relay Service as part of Title IV of the Americans with Disabilities Act (ADA). As a result, the service is funded by consumers through a small levy on their telephone bills. Alternate funding mechanisms were developed in Australia and the U.K., largely through government subsidies.8 (The service is not available in Canada.)

Captioned Telephones – Intended Purpose

The intended purpose or function of captioned telephony is to provide users – especially those with residual hearing who may have difficulty with regular telephony – with near-real time captions in one-to-one communications.

 

Photograph of a Captioned Telephone

Captioned Telephones – Product Development Cycle – Time to Market

A wide variety of captioned telephones are manufactured in the U.S., and their availability in the Canadian market would be rapid (for example, less than three months) presuming (i) Industry Canada approval of the captioned telephone sets and (ii) the establishment of contractual agreements between hardware vendors and telecommunications service providers (such as Bell Canada or Telus) or product retailers (such as Future Shop).

However, the integration of captioned telephones into the Canadian market would require the development and implementation of public policy to create a system similar to the U.S. CapTel system noted above and, as an outcome of that policy, a mechanism for funding the captioning operations system. (This is discussed in more detail with respect to 'barriers' below.)9

Captioned Telephones – Benefits for Users

It is important to note that the target user for captioned telephony is one who has experienced mild hearing loss, and consequently may have difficulty with voice-only telephone communication.

In general, the ability for end users to communicate in near-real time is viewed as a major benefit of captioned telephony. So too are the transparency of the system (i.e. the role of transcription) and the ease of interface between end users. The central limitation on accessibility resides with existing limitations on voice recognition software used by the captioning service (again, discussed in the section on 'barriers' to follow).

In addition, there is a potential cost benefit to using CapTel for users, vis-à-vis other platforms or technologies – largely because its availability in the U.S. is heavily subsidized, from handsets (which can cost $400 (U.S.) or more) to the use of the captioning service, which is free of charge. A cost comparison of various alternative technologies might prove useful to determine the financial benefit of captioned telephony for users.

Captioned Telephones – Barriers to Adoption

Presuming the seamless integration of the system with Canada's existing telecommunications system and setting aside for the moment concerns about feasibility (discussed below), a central barrier to adoption is the mechanism for delivering near-real time captions to end users: the accuracy and reliability of voice recognition software.

Unlike traditional relay service, an important part of the communications chain for captioned telephony has a Communications Assistant (or CA, who works for a captioning company) re-speaking the words of a caller into a computer. (The software works more efficiently when a limited number of voices is used; hence, the voice of the CA and not the caller is used to create the captions.) The computer's voice recognition software converts the CA's words into captions, which are then transmitted to the end user and appear on that user's captioned telephone set.

However, voice recognition software is not yet perfected, and the translation of speech to captions can become garbled (a caption stating, "Howard cue viewing?" instead of the spoken words, "How are you doing?", or a caller with an accent that the CA finds difficult to interpret). The spelling of certain words can slow down the speed of captions to the end user, and line interference can occasionally disrupt the flow of communication within the chain.10

As noted above, the software works more efficiently when limited voices are used to 'train' it; the system continues to work poorly with multiple voices, and multi-party situations such as conference calls would likely prove too complex for the software to manage (assuming there is no Communications Assistant re-speaking all callers).
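To make the captioning chain concrete, the following is a minimal sketch of the re-speaking path described above, written in Python. All function and class names are hypothetical placeholders; actual captioning services rely on proprietary systems, so this illustrates the flow of a call rather than any vendor's implementation.

```python
# Minimal sketch of the captioned-telephony chain described above.
# All names are hypothetical placeholders; real captioning services
# use proprietary systems, so this shows the flow only.

from dataclasses import dataclass

@dataclass
class Caption:
    text: str              # transcribed words shown on the CapTel display
    delay_seconds: float   # captions lag slightly behind the live voice

def respeak(caller_words: str) -> str:
    """The Communications Assistant (CA) repeats the caller's words.
    The recognition software is trained on the CA's voice, not the
    caller's, which is why this re-speaking step is required."""
    return caller_words    # placeholder: the CA simply re-voices what was heard

def recognize(ca_words: str) -> Caption:
    """Voice recognition converts the CA's re-spoken words into caption text.
    Accuracy depends on training; errors like the 'Howard cue viewing?'
    example above can occur at this step."""
    return Caption(text=ca_words, delay_seconds=2.0)   # illustrative lag only

def captioned_call(caller_words: str) -> Caption:
    # 1. The caller speaks; the CapTel user hears the voice directly.
    # 2. The CA re-speaks the words into the recognition engine.
    # 3. The resulting captions appear in the CapTel display window.
    return recognize(respeak(caller_words))

print(captioned_call("How are you doing?").text)
```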

It is generally accepted, however, that voice recognition software will continue to improve as its use becomes more widespread (in closed captioning for television, for example). To this end, the major barriers to the adoption of captioned telephony in Canada are more about feasibility than technical limitations.

A secondary barrier to the adoption of CapTel may be the unwieldy nature of 911 service when using a captioned telephone. In the case of the U.S., 911 operators are called directly by users of captioned telephones (i.e. calls are not routed through a captioning centre), but 911 call centres do not provide captions. Instead, the captioned telephone defaults to a Voice Carry Over phone; 911 operators must use a TTY to communicate with the captioned telephone.

However, since captioned telephones do not have keyboards, "The CapTel user can only use their voice to talk to 9-1-1 at all times."11

This, of course, could prove problematic if a caller is unable to speak, or to speak clearly enough to communicate with the 911 operator; no fallback scenario is provided in these instances.

Captioned Telephones – Stage of Development

Although improvements continue to be made with respect to handsets and software, the technology is fully developed in the U.S. It is also being trialed in Australia.

Captioned Telephones – Feasibility

While technical barriers to the adoption of captioned telephones in Canada appear limited, the feasibility of integrating this accessible technology into the Canadian system is in serious question, for reasons of public policy and the evolution of technology more generally.

First, Canada has no federal legislation similar to the U.S. Americans with Disabilities Act (ADA), which mandates the funding of captioned telephony in the U.S. With no similar technology-specific legislation, the provision of captioned telephony would require the development of new public policies or regulation requiring the funding of a captioned telephone system in Canada.

Such funding would be a fundamental necessity of such a system; although handsets, even dating back to TTYs, have not traditionally been subsidized on a national basis in Canada (only through the occasional provincial program), the captioning system itself would require funds to establish and maintain operations for a Telecommunications Relay Service captioning centre (equipment, staffing, potential integration with IP Relay call centres discussed below). For users themselves – potentially numbering in the thousands, a relatively small number – to fund the system would likely prove cost-prohibitive (even at $400 for a handset and perhaps $50 per month to fund the service), as many would be lower-income Canadians.12

Second, it is questionable as to whether there would be a sufficient market need for a captioned telephone system at this time, since in the seven years since the introduction of CapTel in the U.S., instant messaging systems have advanced considerably – bringing with them unintended 'electronic curb cuts' for people with hearing and speech disabilities.

Given the absence of a policy framework supporting captioned telephony and the rapid development of instant communication on digital platforms, market research would be required to determine the actual need for a Canadian captioned telephone system at this stage of technology evolution.

Captioned Telephones – Potential Enhancement or Integration with Other Existing Applications, Platforms or Technologies

As noted above, captioned telephone handsets would be easily integrated with the public switched telephone network (PSTN), once approved for the Canadian market by Industry Canada. However, captioned telephones can also be integrated with digital platforms through what is known in the U.S. as WebCapTel.13

In this instance, a regular telephone is connected to a computer or smart phone; calls are made on the regular handset, but captions of the call are viewed online via the Internet browser window of a computer or smart phone.

While WebCapTel can be used via any phone, requires no special equipment and is free of charge, international calls cannot be placed and certain service providers (such as CapTel) have restrictions on how many users can be registered for the service at once. Other service providers such as Sprint CapTel have no such restrictions. The reason for this is not known, but may have something to do with available network capacity or human resource limitations at captioning centres.

Captioned Telephones – Potential Impact on Users

Based on the U.S. model of captioned telephony, the impact on users can be assessed overall as positive – but largely because of a legislative and policy framework that restricts or altogether eliminates the cost burden on end users.

This means that, as noted above, an unsubsidized Canadian version of captioned telephony, which would require users to bear the costs of handsets and the associated operational costs, may not deliver sufficient benefits relative to those costs when compared with an alternative communication device and platform like SMS or instant messaging.

Our review and scan of captioned telephony did not reveal the precise number of users in the U.S., which were only generally estimated in "the thousands" four years ago. To situate the impact of captioned telephony on users vis-à-vis other devices and platforms, it would be useful to identify the number of users of WebCapTel, versus the number of users of traditional CapTel, versus the number of users who have abandoned CapTel in favour of SMS, instant messaging or other multi-platform service.14

This data would provide some insight into the relative importance or usefulness of captioned telephony against more recent developments in instant messaging-based one-on-one communications. As noted by an expert consulted for the scan, "SMS has overtaken (the need for) CapTel…TTYs are dead, and have been obsolete since the 1980's."

2) Advancements in Text Relay – IP Relay or Web-based Relay Services

Internet Protocol Relay, or IP Relay as it has become known in Canada, the U.S., the U.K. and a number of other jurisdictions, has become a key advancement in text-based relay services, available to large numbers of people with hearing and speech disabilities in a relatively simple and inexpensive fashion.

IP Relay – Summary Description

IP Relay allows people with hearing or speech disabilities to communicate through the telephone system with hearing persons. Rather than using a traditional TTY and telephone, with a relay operator conveying text and voice as required, IP Relay is accessed using a computer and the Internet – the computer effectively becomes the TTY.

In a traditional Telecommunications Relay Service (TRS), a TTY user would contact a TRS centre, and the Communications Assistant (CA) at the TRS centre would call the receiving party via voice telephone. In IP Relay, the first part of the call goes from the caller's computer (or other web-enabled device like a tablet or smartphone) to the IP Relay Centre via the Internet; the centre is typically accessed via a service provider webpage.

The next part of the call – a more traditional TRS element – is made by the CA to the receiving party via voice telephone through the PSTN.

The caller types out his/her end of the call, which is relayed by voice to the receiving party; the receiving party responds by voice, which is typed and relayed to the caller by the CA. It is essentially a TTY call, but a computer or other device stands in for the TTY.
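As a rough illustration of the two legs of an IP Relay call described above, the following Python sketch models the exchange. The class and method names are invented for illustration and do not reflect any provider's actual system.

```python
# Illustrative sketch of the IP Relay call flow described above.
# Class and method names are invented; actual provider systems differ.

class RelayOperator:
    """The Communications Assistant (CA) bridging the two legs of the call."""

    def voice_to_receiving_party(self, typed_text: str) -> None:
        # Second leg: the CA reads the caller's typed text aloud over the PSTN.
        print(f"CA (voice, via PSTN): {typed_text}")

    def type_to_caller(self, spoken_reply: str) -> str:
        # Return leg: the receiving party's spoken reply is typed back to the
        # caller over the Internet connection.
        return f"CA (text, via Internet): {spoken_reply}"

def ip_relay_exchange(caller_typed: str, spoken_reply: str) -> str:
    ca = RelayOperator()
    # First leg: the caller's web-enabled device sends typed text to the
    # IP Relay centre over the Internet (typically via a provider webpage).
    ca.voice_to_receiving_party(caller_typed)
    return ca.type_to_caller(spoken_reply)

print(ip_relay_exchange("Hi, I'd like to confirm my appointment.",
                        "Yes, you're booked for Tuesday at 2 p.m."))
```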

There are no additional costs to users for IP Relay beyond a computer or other Web-capable device and an Internet connection. In Canada,

"… the CRTC determined (previously) that all wireline (traditional), wireless, and Voice over IP (VoIP) service providers were responsible for giving their customers access to TTY relay service. Broadcasting and Telecom Regulatory Policy 2009-430 (Accessibility of telecommunications and broadcasting services, July 21, 2009) extends the message relay service requirements. One year from the date it was issued, all phone companies that are required to provide TTY relay service (i.e. local phone companies, wireless providers, VOIP phone providers) will be required to give customers access to IP relay service."15

As many Canadian service providers required additional time to develop and launch their respective IP relay services, extensions were granted by the CRTC. IP Relay services were launched in Canada in early 2011.

IP Relay – Intended Purpose

The purpose of IP Relay is to enable text-based communication on digital platforms for those with hearing and speech disabilities, effectively enabling computers and other web-enabled devices to take the place of TTYs that are more restrictive in their utility and rapidly declining in usage given advancements in other technologies, most notably instant messaging.

IP Relay – Product Development Cycle/Time to Market

IP Relay is now in use in a number of countries, including Canada, the U.S., the U.K., Australia and a range of European countries (where it is more commonly known as Web-based text relay services). While time to market for IP Relay varies, the Canadian experience was approximately 18 months from the announcement of a regulatory obligation to provide IP Relay to its launch in the marketplace – although some glitches in the system are still being addressed.

IP Relay – Benefits for Users/Limitations in Promoting Accessibility

As a text-based service for those with hearing and speech disabilities, IP Relay brings a number of benefits for users.

First, IP Relay can be used by "many deaf and hard of hearing people who don't use American Sign Language (ASL)" and "those without high-speed Internet access (who) are therefore unable to use Video Relay Service (VRS)."16

Second, multiple types of computer programs can be used with IP Relay, including custom programs that run in a computer's web browser, as well as instant message-based services.

Third, IP Relay is multi-device and multi-platform, capable of functioning with tablets, smartphones, and computers (so long as connectivity is made available by the IP Relay service provider). At some point in the near future, a television interface should also be possible.17

Fourth, IP Relay enables users to multi-task while carrying on a conversation (surf the Internet, for example) and further enables participation in conversations with multiple parties (such as conference calls).

IP Relay – Barriers to Adoption

There are few if any barriers to adopting IP Relay as a text-based service for people with hearing and speech disabilities, and certainly none that would serve as a disincentive to its introduction into the marketplace. However, difficulties with IP Relay have arisen on two fronts: access to 911 and misuse of the service through criminal activity such as fraud.

While 911 is accessible through IP Relay (a relay operator places the call), it is not possible to identify the exact location of callers. This means that callers need to be able to provide their exact address and other information about their location or the operator is unable to place the call. However, because IP Relay works well on mobile phones that may not support bandwidth requirements needed for VRS, it can provide users with access to 911 when no other option may be available (as in the case of car accidents, for example).

On the question of misuse of the service, instances of fraudulent usage of IP Relay have been reported, largely in the U.S. Fraudulent use of IP Relay involves (i) a registration for IP Relay by hearing individuals, often from foreign countries (temporary registrations were often granted pending verification of an individual as a legitimate user), (ii) use of IP Relay to contact businesses (which are required to accept relay calls under the Americans with Disabilities Act) and (iii) use of fake or stolen credit cards to make fraudulent purchases from those businesses.18

In other words, an individual fraudulently obtains a registration to use IP Relay; obtains fake or stolen credit cards; contacts a U.S.-based business knowing that businesses must by statute accept relay calls; and makes purchases under a 'double fraud' of fake registration and illicit payment methods.

To combat these activities, the FCC proposed a tighter system of registration for IP Relay users, or increasing the discretionary powers of Communication Assistants to terminate suspect calls. The U.S. National Association of the Deaf, for its part, rejected any measures that would potentially abrogate the privacy of users, and further resisted increasing the latitude of CAs to decide on whether or not a call was fraudulent.19

In order to combat the fraudulent use of IP Relay, the FCC elected to implement a registration system, requiring user pre-authorization prior to issuing a 10-digit access number and eliminating the practice of temporary authorizations for users.20

Consultations with experts in accessible telecommunications indicate that a further barrier to IP Relay is its reliance on voice recognition software. That is, the highest possible levels of accuracy in the conversion of speech to text are required (> 95% accuracy), but providers in Canada are reported to have a variable range of accuracy – suggesting variable levels of software and/or operator success.21

It should be noted that speed of conversation is also a factor in accuracy. In addition, text-based relay services are generally limited by other factors, including the typing speed of users, the typing speed of operators, hearing limitations of users and the voice-to-text software. (FCC service standards require operators to relay conversation at a minimum speed of 60 wpm.)

IP Relay – Stage of Development

IP Relay has reached completion and launch in the Canadian marketplace. However, because it is based on a system involving relay operators and voice recognition/ speech-to-text software, improvements in the service are constantly sought by providers, based in large part on available improvements in software (and training of that software in speech recognition).

IP Relay – Feasibility

IP Relay is a fully feasible text-based service for Canadians with hearing and speech disabilities. However, the speech to text conversion software that forms a core element of the service may be limited in terms of (i) its accuracy in conveying correct text and/or (ii) the 'trainability' of the software itself (see the discussion on the development cycle of speech to text conversion software below).

In other words, the feasibility of IP Relay is largely dependent on achieving a high level of accuracy in converting speech to text – at least as high as the 95 percent level that is claimed by some software providers. Data on accuracy of IP Relay is not available, but as noted above, experts in accessible communications technology indicate that accuracy varies across service providers.

IP Relay – Potential Enhancement or Integration with Other Existing Applications, Platforms or Technologies

As noted above, IP Relay works well with multiple devices, platforms and programs, because its text-based nature takes up little bandwidth.

IP Relay – Potential Impact on Users

IP Relay has been found to be an important addition to text-based communication for people with hearing and speech disabilities. It is entirely suited for the mobile platform as it uses little bandwidth; in the U.S., IP Relay can be used without a high-speed Internet connection and can therefore be even more cost effective for consumers. In Canada, a high-speed connection is required for IP Relay services provided by Canadian telecommunications providers.22

3) Speech to Text Conversion Technology

As noted by an expert consulted for the Study, speech to text (and the reverse, text to speech) conversion has clearly made advancements, but has nonetheless encountered limitations inherent in conversion software applications to date. However, because of its availability in the marketplace and continuing development, it is worthwhile including speech to text conversion in this scan and review.

Speech to Text Conversion Technology – Summary Description

In basic terms, speech to text conversion technology utilizes special software to convert vocal sounds into written text. It is also referred to as 'speech recognition' or, in the case of more advanced software, 'voice recognition' or 'speaker recognition' conversion technology. In the case of the latter – far from being available in the marketplace – the software recognizes the speech patterns, vocabulary and syntax of individual/unique voices and converts the sounds to text.

However, for purposes of this Report, the focus is on more generic speech to text conversion, in part because it is further along in development and in part because individual voice to text conversion may never fully develop.

A good example of speech to text conversion has already been discussed above, with respect to CapTel and IP Relay: in both cases, a Communications Assistant repeats the words of anyone who is speaking into a computer microphone. The computer's speech to text software converts the spoken words into written ones.

But it is important to note that the Communications Assistant must repeat the spoken words, as the software can only recognize a very limited number of voices. It would never be able to recognize and convert the voices of individual callers to text. Hence the 'relay' component of the communication chain must remain intact.

The speech recognition systems on the market generally rely on two models: an acoustic model, or the encoding of linguistic information in speech, and a language model, or estimates of the probability of word sequences. For large vocabularies with certain words pronounced in different ways, the system will include a pronunciation model as well.

But speech patterns vary widely, by individual and by language spoken – thus there is no such thing as a 'universal speech decoder' or recognizer.23
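The interplay of the acoustic and language models can be illustrated with a toy example: each candidate transcription is scored by how well it matches the sounds (acoustic model) and how plausible it is as a word sequence (language model), and the best combined score wins. The probabilities below are invented solely for illustration.

```python
import math

# Toy illustration of the two-model structure described above. Real recognizers
# search enormous hypothesis spaces; these numbers are invented.

candidates = {
    "how are you doing": {"acoustic": 0.6, "language": 0.30},
    "howard cue viewing": {"acoustic": 0.7, "language": 0.001},
}

def combined_log_score(scores: dict) -> float:
    # log P(audio | words) + log P(words): acoustic and language models combined
    return math.log(scores["acoustic"]) + math.log(scores["language"])

best = max(candidates, key=lambda words: combined_log_score(candidates[words]))
print(best)  # "how are you doing" – the language model outweighs the slightly
             # better acoustic match of the garbled alternative
```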

Speech to Text Conversion Technology – Intended Purpose

In essence, speech to text conversion is intended to provide another alternative text-based translation system to people with hearing and speech disabilities. Because of the technology's limitations, speech must be converted at source rather than anywhere/anytime; thus a Communication Assistant must re-speak a caller's words in the case of CapTel or IP Relay, as the software would be unable to convert the speech of individual callers to text.

Applications of speech to text include, as noted, CapTel and IP Relay; voice mail-to-text software for telephone systems (produced by Dragon, one of the major providers of speech to text software and other assistive software applications); and closed captions for television programming (where a captionist re-speaks the words of, for example, a news anchor and the speech is converted to captions for broadcast).

Speech to Text Conversion Technology – Product Development Cycle

The cycle of development for any software is typically linked to its complexity, the lines of code that need to be written and the number of developers involved in creating the work – all of which is converted to a measure of man-years in terms of timing. While individual software can vary in terms of its development cycle, two common models resemble something like this:

Two graphic illustrations of the product development cycle – cascading and circular

Activities, methodologies, supporting disciplines (like quality assurance) and tools can differ between software development projects. But whether the methodology of development is cascading (to the left above), circular (to the right above) or some other method, the core elements of the process are essentially always the same: requirements gathering, design, implementation, testing and quality assurance, release, and ongoing maintenance.

Speech to text software is highly complex to develop, and its development cycle is directly linked to the type and amount of speech the software can be programmed to recognize. It is difficult to determine the exact amount of time involved in developing speech to text conversion software, since it has been in a cycle of development and improvement for decades. In addition, it is a competitive marketplace, so proprietary content and trade secrets are common.

But one can safely suggest that product improvements might take anywhere from 20 to 30 man-years to reach the market, i.e. 20 to 30 developers working for one year to bring the software to the next level.

One further point on the development of speech to text conversion software should be made: accuracy is everything. As noted below, a lack of accuracy is a major and continuing issue for this type of alternative technology. This is because system operation is more complex than a simple matter of downloading and using the software: each piece of speech to text conversion software must be 'trained' to recognize the speaker and the manner in which that individual speaks. This can add months to getting the software to the operational stage.

"Generally speaking, if you speak standard American English and enunciate clearly while speaking using a quality headset microphone, over the course of three months of repeated use you can expect accuracy rates for your speech recognition software to be in the 90th percentile. By repeated use, this means nearly every day and always correcting errors using the suggested method by the software. If you have an accent of any kind, the amount of training required to achieve high accuracy rates with your speech recognition software could take between six months and one year."24

Speech to Text Conversion Technology – Benefits to Users

Assuming that the specific speech to text software has been programmed and trained for a high level of accuracy (>95%), then its usefulness to those with hearing and speech disabilities is undeniable, in particular when it is used for such services as IP Relay.

The attractiveness of speech to text conversion for those with hearing and speech disabilities is dependent on:

SMS and other text-based communication available on multiple devices and multiple platforms make speech to text a less important alternative technology.

Speech to Text Conversion Technology – Barriers to Adoption

As noted above, difficulty in achieving acceptable (very high) levels of accuracy is the major barrier to the adoption of speech to text conversion software, as is the list of the above five considerations noted by experts consulted for the scan. But barriers to adoption of this alternative technology may be best summarized by the questions that need to be asked when deciding on conversion software (as developed by the Inclusive Design Centre at the Ontario College of Art and Design)25:

This litany of considerations alone might serve as a barrier to individuals who are less comfortable with technology (less 'tech savvy') and who are contemplating the use of speech to text software.

Speech to Text Conversion Technology – Stage of Development

It is safe to say that speech to text conversion is in a continuous state of development and improvement with respect to its trainability and ultimate level of accuracy. At the present time, it is generally viewed that accuracy can reach 90 percent with a standard period of training – approximately three months – with improvements sought on a consistent basis. It is also generally agreed that an accuracy threshold of 95 percent is the goal, but difficult to surpass. There will therefore always be some inaccuracy in speech to text (which is also considered the case for closed captioning for television programming, in that some level of error will inevitably occur in the chain of events that bring captions to the screen).

Speech to Text Conversion Technology – Feasibility

Again, the feasibility of speech to text is called into question by two factors: (i) the level of accuracy that can be achieved and (ii) the sheer number of highly variable solutions available to users, each of which has "specific requirements in terms of latency, memory constraints, vocabulary size and adaptive features." At the same time, each solution must be categorized by users for specific usage, including "command and control, dialog system, text dictation, audio document transcription, etc."26

Moreover, the user's personal characteristics will also determine the relative feasibility of a speech to text conversion technology. For instance, a pure speech to text solution is less meaningful for a person who is deaf and uses sign language than a speech-to-Sign/Sign-to-speech solution. While there are instances where speech to text is "necessary", the technology is viewed by some in the deaf community as less useful than SMS or a "highly accurate" Sign Language conversion solution. For many sign language users, written language (i.e. English or French) is not their primary language. Because of this user group's lack of comfort with written language, a sign language conversion solution is more meaningful, as it bridges a more significant gap in communication than a speech to text solution would.

Speech to Text Conversion Technology – Potential Enhancement or Integration with Other Existing Applications, Platforms or Technologies

As noted above, speech to text conversion technology has a number of applications, including IP Relay, closed captions for television broadcasting (currently the dominant technology for French-language programming), and mobile platforms. A 'Voice Dictation' application is also available for $1.99 from iTunes that provides speech to text conversion for SMS, email and a wide range of social media.27

Speech to text conversion is thus among the most ubiquitous of alternative technologies for use by those with hearing and speech disabilities. The question is more about retaining its relative usefulness vis-à-vis developing technologies in Sign-to-text conversion given different target audiences (i.e. differing levels of disability).

 

Two photographs of an iPhone illustrating Apple's Voice Dictation to SMS application. Both photographs show a microphone on an iPhone screen.

Speech to Text Conversion Technology – Potential Impact on Users

An issue in accessible communications technologies identified for the scan was articulated in the following way:

There are now dozens of options for text, voice, video, and most of them are free (assuming you have some form of connectivity) and pre-installed in all the devices and service packages that are ubiquitously offered. The challenge is compatibility – can you communicate across the competing value chains? Not always easy or intuitive.

It's an endless Google search – whereas before we had a single non-optimal solution (e.g., TTY) that every deaf person used, now we have hundreds of competing mainstream solutions with slight variations in features, which all change quarterly, where sophisticated deaf users get better service than 'trailing edge' deaf people. It's also kind of like drug interactions regarding compatibility – I'm taking so many medicines under treatment by so many physicians that I'm bound to suffer some bad pharmaceutical collision. In both senses information is the missing element – not raw information, but deeply contextualized information that will make sense for me, doing what I do, at the school or workplace where I am.

But the upsides are so strong – we've really reached a point where a well-informed and self-actualized consumer can find what they're looking for and put together their own package of devices and services, usually at a reasonable cost. Highly customized and personalized, with just the features needed.

It was also noted that a "new digital divide" is being created by the plethora of technologies available: the divide between sophisticated and not-so-sophisticated deaf users as noted in the above quote. It may be that speech to text – many variations, many platforms – offers one partial alternative technology solution, one piece of a communications solution that is made up of many, constantly changing, pieces.

Speech to Text Conversion – Spin-offs and Mass Marketing

It should also be noted that speech conversion technology has had, and is having, considerable spin-off impact in sectors such as video gaming. While the software in a gaming system such as Xbox 360 Kinect does not convert speech to text, it does function as a system command – saying 'Xbox go home' will bring up the home screen, 'Xbox play disc' will play a disc in the drive, and other voice commands will enable disc rewind, fast forward and eject, among other options. (The Xbox 360 Kinect system also works with a type of motion control or motion capture technology designed for interactive play that is a more basic version of the Sign to Speech/Text technology discussed below.)28

Such a system of speech recognition could have a positive use for individuals with motion disabilities – i.e. those who are able to speak but have limited movement for the use of remote controls and other devices.
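A minimal sketch of speech used as a system command, along the lines of the Kinect example above, would map a small fixed vocabulary of recognized phrases to actions. The dispatch table below is purely illustrative and does not reflect the actual Xbox implementation.

```python
# Sketch of speech as a system command: a recognized phrase is matched against
# a small fixed vocabulary and mapped to an action. Purely illustrative.

def go_home() -> str:
    return "home screen displayed"

def play_disc() -> str:
    return "disc playback started"

COMMANDS = {
    "xbox go home": go_home,
    "xbox play disc": play_disc,
}

def handle_utterance(recognized_phrase: str) -> str:
    action = COMMANDS.get(recognized_phrase.lower().strip())
    return action() if action else "command not recognized"

print(handle_utterance("Xbox go home"))   # home screen displayed
print(handle_utterance("Xbox eject"))     # command not recognized (not in this table)
```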

As another example (in trial in Japan) of speech recognition for gaming and its intersection with education, Nintendo is developing a system for its DS gaming devices that is actually designed for classroom use. Students with hearing disabilities will be able to use the device to record what a teacher says in converted text format. Moreover, rather than relying on the storage capacity of the DS unit, the materials are stored in the NTT (Japan's national telecommunications carrier) cloud – uploaded via the NTT mobile network and available for later review by users. While still in trials, the system may eventually enable text sharing/interactivity among users – effectively augmenting the use of the DS as a person-to-person communications device.29

4) Sign to Speech/Text Conversion Technology

As noted above, Sign to Speech/Text conversion – as it is called by members of the deaf and hard of hearing community – is a technology of major interest to those with hearing and speech disabilities who are conversant in Sign Language communication.

There is currently a large gap in Canada for communications between people who are conversant in Sign Language and those who are not. Live interpreter services are available through various agencies offering interpretation in American Sign Language/Langue des signes québécoise (ASL/LSQ), but Video Relay Service does not currently exist in Canada. Thus an application that could help narrow the gap, such as Sign to speech/text conversion technology, is of major interest.

It is, however, important to note that there are some apps available that convert speech or text to Sign. However, they match individual words to individual Signs out of a Sign Language dictionary. Their effective speed and accuracy of translation make them a communication tool, but not an interpreter substitute.
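The word-for-word matching approach can be sketched as a simple dictionary lookup; the entries below are invented placeholders. Because the output follows English word order and drops anything without an entry, it cannot carry the syntax, facial expression or body language discussed next, which is why such apps remain communication tools rather than interpreter substitutes.

```python
# Toy sketch of word-to-Sign dictionary matching. Entries are invented; a real
# app would map words to stored sign videos or animations.

SIGN_DICTIONARY = {
    "where": "sign_clip_where.mp4",
    "is": None,          # many function words have no one-to-one sign
    "the": None,
    "library": "sign_clip_library.mp4",
}

def text_to_sign_sequence(sentence: str) -> list:
    clips = []
    for word in sentence.lower().rstrip("?.!").split():
        clip = SIGN_DICTIONARY.get(word)
        if clip:         # words without an entry are simply dropped
            clips.append(clip)
    return clips

print(text_to_sign_sequence("Where is the library?"))
# ['sign_clip_where.mp4', 'sign_clip_library.mp4'] – English word order, no Sign syntax
```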

In addition, Sign Language is a type of communication that uses gestures as well as facial expressions and body language to convey meaning. Current Sign-to-speech/text conversion technology cannot capture all of these sometimes subtle nuances. Syntax is also very important; that is, Sign Language is not directly translatable word-for-word to spoken language, and some degree of interpretation is typically required to understand context and meaning. While applications like the Portable Sign Language Translator (PSLT) have taken this into account, it will likely be some time before the fullness of Signing can be completely interpreted by a software application.

Thus a technology that substitutes for human Sign Language interpreters is viewed as an important development by users, and is the focus of this part of our scan.

It should be noted as well that a discussion of Sign to Speech/Text conversion shifts the technology focus from one on telecommunications to one on communication apps. This is an important distinction, because this particular technology does not involve augmenting or otherwise altering the telecommunications system for purposes of better accessibility. Rather, it adds a software application to deliver a new type of communication between users.

Sign to Speech/Text Conversion Technology – Summary Description

Sign to speech or text conversion technology is once again a software-based system that converts Sign Language (for example, American Sign Language, Langue des signes québécoise, British Sign Language or the specific Sign Languages of other jurisdictions) to computer-generated spoken or written words, or converts text or spoken word to Sign Language in near-real time (i.e. translation is a few seconds – or more – behind the Signer, depending on the complexity of Signing involved).

Sign to speech or text conversion is in its relatively early stages of development; at the present time, a computer program converts one form of communication (such as Sign Language) into another (such as text or a computer-generated voice). For example, a video camera connected to a computer records an individual who is communicating in Sign Language. The hand signs are imported into the conversion program, and the signs are converted to another form of communication, either text or a computer-generated voice.

Stated another way,

"The video stream (of a person signing) captured by the device camera is then software processed to recognise sequences of user gestures through a locally stored 'library' of core concepts or words. These are then assembled into sentences, which are outputted as text in real time."30

The key advantage, of course, is that this technology enables a person who is conversant in Sign Language (the person's first language, perhaps) to communicate with a person who cannot read signs. The claim of outputting text 'in real time' should be viewed cautiously however, given the level of complexity involved in Sign language syntax, facial expression and body language.
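The pipeline quoted above can be reduced to three steps: capture gestures, match them against a locally stored library of signs, and assemble the matches into text. The sketch below collapses the computer-vision step into a simple lookup on pre-labelled gestures; it is not the PSLT's actual implementation, only an illustration of the structure.

```python
from typing import List, Optional

# Illustrative sketch of the Sign-to-text pipeline described above. The vision
# step is replaced by a lookup on pre-labelled gestures; real systems perform
# gesture recognition and must also resolve Sign syntax, facial expression and
# body language.

SIGN_LIBRARY = {
    "gesture_042": "hello",
    "gesture_117": "thank you",
}

def recognize_sign(frame_label: str) -> Optional[str]:
    # Placeholder for the recognition step: map a captured gesture to a known sign.
    return SIGN_LIBRARY.get(frame_label)

def signs_to_text(frame_labels: List[str]) -> str:
    words = [recognize_sign(label) for label in frame_labels]
    # Naive assembly: recognized signs are joined in order of capture.
    return " ".join(word for word in words if word)

print(signs_to_text(["gesture_042", "gesture_999", "gesture_117"]))
# "hello thank you" – unrecognized gestures are dropped in this sketch
```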

The Sign to speech or text conversion program is still in development at a firm called Technabling, which is a spin-off company of the University of Aberdeen in Scotland. The conversion program is called the Portable Sign Language Translator (PSLT), and has generated considerable attention for its groundbreaking advancements in Sign to speech or text conversion.

There is also an international groundswell of interest in this evolving technology, with a number of other programs in development, including the Atlas program in Italy, the DePaul ASL Synthesizer Project in the U.S., SASL-MT in South Africa, and the 'SiSi' (Say It, Sign It) project in the U.K.

In the latter project, IBM has combined a number of computer technologies, including speech recognition, to convert the spoken word into British Sign Language, which is then signed by an animated digital character or avatar that pops up in the corner of a display screen (computer, laptop, television screen). The Open Sign database – an international compilation of Sign language projects and research currently underway – indicates that nine such Sign/avatar initiatives are currently underway in Europe and South Africa.31

SiSi - A photograph of a computer-generated avatar character that is using Sign Language32

 

The syntax of Sign language is highly complex, with combinations of symbols, facial expressions, body language and emotions used to communicate. The technology behind such projects as SiSi is equally complex, using combinations of software to produce a Signing avatar. These projects typically use linguistic processing – that is, teletext analysis and speech recognition software – to create sequences of motion-captured Signing data.

More recently, this approach has been enhanced with synthesized animation from HamNoSys (the internationally established phonetic transcription system for Sign languages), which is integrated with an avatar animation platform; this is necessary because HamNoSys does not transcribe facial expressions. The avatar platform combines skeletal animation with accurate facial gestures, and is becoming increasingly sophisticated in its ability to record and animate more complex gestures, body language and facial expressions.33
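As a rough illustration of why a separate facial-expression track must be integrated with the manual (hand) transcription before animation, the sketch below merges the two tracks into a single time-ordered keyframe list for an avatar. The field names are invented simplifications; they are not actual HamNoSys symbols or the data model of any particular avatar platform.

```python
# Illustrative sketch: merging a manual (hand) transcription track with a
# separate facial-expression track into one avatar keyframe timeline.
# Field names are hypothetical simplifications, not real HamNoSys notation.

from dataclasses import dataclass

@dataclass
class ManualSegment:
    time: float        # seconds from the start of the sign
    handshape: str
    location: str
    movement: str

@dataclass
class FacialSegment:
    time: float
    expression: str    # e.g. raised eyebrows marking a question

def build_keyframes(manual_track, facial_track):
    """Combine both tracks into a single time-ordered list of avatar keyframes."""
    keyframes = [("hands", s.time, (s.handshape, s.location, s.movement))
                 for s in manual_track]
    keyframes += [("face", s.time, s.expression) for s in facial_track]
    return sorted(keyframes, key=lambda k: k[1])

sign = build_keyframes(
    [ManualSegment(0.0, "flat_hand", "chest", "arc_forward")],
    [FacialSegment(0.0, "raised_eyebrows")],
)
print(sign)   # hands and face keyframes interleaved by time
```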

A slight variation on the above Sign to speech/text and speech/text to Sign conversion programs is Dicta-Sign, a project in development at the Athena Institute of Language and Speech Processing in Greece. The project enables Sign Language interaction with Web 2.0, so that updates and contributions can be made by a person who 'dictates' changes via Sign language, which an avatar then signs back to users.34

While cameras are an integral part of SiSi and other projects requiring motion capture, they are not the only devices capable of supporting Sign to text conversion. A group of Ukrainian students organized in a venture called Enable Talk has developed a set of gloves that, when connected to a smartphone via Bluetooth, automatically translate Sign language to text, and then into speech. The gloves use "flex sensors, touch sensors, gyroscopes and accelerometers (as well as solar cells to increase battery life)" to produce a relatively inexpensive ($75 U.S. per pair) system that can adapt to a range of international Sign languages.35

 

EnableTalk gloves - A photograph of a pair of Enable Talk Sign-to-text/speech gloves36
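The glove-based approach described above replaces camera frames with wearable sensor readings, but the recognition step is conceptually similar. The sketch below is a hedged illustration only: a packet of flex-sensor and accelerometer values is decoded, matched against stored gesture templates, and the resulting word handed to a text-to-speech step. The packet layout, template values and function names are assumptions made for illustration and do not describe Enable Talk's actual firmware or protocol.

```python
# Illustrative sketch of a glove-to-phone flow: sensor readings arrive over
# Bluetooth, are matched to a stored gesture, and the resulting word is
# handed to a text-to-speech step. Values and layout are hypothetical.

import struct

# Hypothetical packet: 5 flex-sensor values + 3 accelerometer axes (floats).
PACKET_FORMAT = "<8f"

GESTURE_TEMPLATES = {
    "hello":     (0.9, 0.9, 0.9, 0.9, 0.9, 0.0, 0.0, 1.0),
    "thank you": (0.1, 0.9, 0.9, 0.9, 0.9, 0.0, 1.0, 0.0),
}

def decode_packet(raw_bytes):
    """Unpack one Bluetooth packet into a tuple of sensor readings."""
    return struct.unpack(PACKET_FORMAT, raw_bytes)

def match_gesture(readings):
    """Return the template word closest to the observed sensor readings."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(GESTURE_TEMPLATES, key=lambda w: dist(GESTURE_TEMPLATES[w], readings))

def speak(text):
    """Placeholder for the phone's text-to-speech engine."""
    print(f"[TTS] {text}")

# Simulated packet from the glove:
packet = struct.pack(PACKET_FORMAT, 0.88, 0.91, 0.90, 0.87, 0.92, 0.01, 0.02, 0.98)
speak(match_gesture(decode_packet(packet)))   # -> "[TTS] hello"
```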

Sign to Speech/Text Conversion Technology – Intended Purpose

The intended purpose of Sign to speech/text conversion and vice-versa is to enable new opportunities for communication between those conversant in Sign Language and those who are not, and to enable the more ubiquitous use of Sign Language interpretation when live interpreters may not be available. On the latter point, the conversion of speech to Sign – for example, in a classroom setting – would allow the provision of Sign Language via an on-screen avatar when a live Sign Language interpreter is not available for a lecture.

On the objective of the PSLT project more specifically, its leading developer has noted, "The aim of the technology is to empower sign language users by enabling them to overcome the communication challenges they can experience, through portable technology."37

Sign to Speech/Text Conversion Technology – Product Development Cycle

The product development cycle for Sign to speech/text conversion technology largely follows the cascading or cyclical pattern noted above in our discussion of product development for speech to text conversion.

The speed at which Sign to speech/text conversion is proceeding appears more rapid (compared with, for example, speech to text conversion technology, which has been in development since the 1960s) because there is at present considerable momentum on an international scale behind this technology; multiple projects, similar in design and approach, are underway in a number of jurisdictions, presenting opportunities for information exchange.

This momentum includes considerable private sector and university-based funding as key drivers – pushing new conversion technologies such as the PSLT to a mass-marketed application that targets the end of 2013 for completion – just over 12 months from now.

But while indications are that this technology will deliver a useful and potentially valuable addition to assistive applications, we note once again that some current claims – such as a claim of real time conversion from symbol to text – should be treated with caution. The functionality of the first app also remains to be seen. Such caution is reasonable in this instance, given that the early promise of speech recognition technology slowed considerably once it encountered significant limitations that have proven difficult to overcome.

Sign to Speech/Text Conversion Technology – Benefits for Users

There is considerable excitement at present about the advent and continuing development of Sign to speech/text conversion systems, and the potential benefits for users appear substantial.

As noted below, the system is designed for portability and multi-device, multi-platform use, and, in the case of the PSLT system, can be customized for individual use, i.e.:

"This means that any signer can create her/his own set of signs and gestures (or adapt them from any general-purpose set of signs such as [British Sign Language]) and associate to them their own words and concepts. In this way, signers can bridge the current communication gap with the wider community around them, being able to use whatever jargon they need in whatever situation they may find themselves (e.g., in education, in training, at work, at home, on the go)."38

In this way, younger learners with speech disabilities can use the system to create libraries of customized hand gestures and signs that express the "domain-specific concepts" needed to discuss topics of study with teachers and others.

The customizable feature also enables those with more limited motion or other physical disabilities to create meaning-specific gestures (such as a flick of the wrist for "Must use the bathroom") tailored to physical capabilities.

Regional variations of culture and custom can also be integrated with the basic vocabulary of the system enabling one's language to be personalized for everyday situations. In other words, the flexibility of the system enables the development and integration of personal CSL – Customizable Sign Language.39
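To illustrate how a Customizable Sign Language of this kind might be organized in software, the sketch below layers a personal set of user-defined gestures over a general-purpose base vocabulary, with the personal entries taking priority. The class, gesture identifiers and example phrases are hypothetical and are not the PSLT's data model.

```python
# Illustrative sketch: a personal, customizable sign vocabulary layered on
# top of a general-purpose base set, as described above. Not the PSLT model.

class SignVocabulary:
    def __init__(self, base_vocabulary):
        self.base = dict(base_vocabulary)   # e.g. drawn from a BSL-style set
        self.custom = {}                    # the user's own signs and phrases

    def register_custom_sign(self, gesture_id, meaning):
        """Associate a user-defined gesture with a personal word or phrase."""
        self.custom[gesture_id] = meaning

    def lookup(self, gesture_id):
        """Personal signs take priority over the base vocabulary."""
        return self.custom.get(gesture_id, self.base.get(gesture_id))

vocab = SignVocabulary({"g_water": "water", "g_help": "help"})
vocab.register_custom_sign("g_wrist_flick", "Must use the bathroom")
print(vocab.lookup("g_wrist_flick"))   # -> "Must use the bathroom"
print(vocab.lookup("g_water"))         # -> "water"
```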

As noted by the founder of Technabling, which is developing the PSLT system,

"One of the most innovative and exciting aspects of the technology is that it allows sign language users to actually develop their own signs for concepts and terms they need to have in their vocabulary, but they may not have been able to express easily when using (British Sign Language)."40

Sign to Speech/Text Conversion Technology – Barriers to Adoption

At the present time, the central barriers to adoption are (i) the extent to which the software can read and convert specific signs, (ii) the accuracy of the voice recognition software that may be used to convert Sign Language to speech, and (iii) the speed at which the software and camera can capture signs.

The limitations of voice recognition software noted above also apply to Sign to speech conversion.

However, the prototype PSLT system developed by Technabling focuses more on Sign to text than Sign to speech, for the time being obviating reliance on potentially limited speech recognition software.

Sign to Speech/Text Conversion Technology – Stage of Development

The PSLT project is currently in its mid-stage of development; the basic system is in place but details are still being added, including the level of vocabulary and complexity of signing that the system can interpret. However, PSLT project developers are moving towards the development of an off-the-shelf application that will enable the conversion software to work on computers, laptops, netbooks and smartphones – and completion of the application is expected by the end of 2013.41

But while indications are that this technology will deliver a useful addition to assistive applications, some current claims – such as real time conversion from symbol to text – should be treated with caution.

Avatar-based Sign language systems vary in their stage of development. The U.K. SiSi project is well advanced, for example, while the South African Sign Language project (which is virtually identical to other avatar platform conversion projects) is still in mid-development. For more advanced projects, a 24- to 36-month window is expected before the conversion software is available on a mass market basis.

Sign to Speech/Text Conversion Technology – Feasibility

Because Sign to speech/text conversion technologies are centrally focused on the development of software applications – as opposed to augmenting telecommunication systems – their feasibility is very strong, for several reasons.

First, it is anticipated that the final version of the software will have a high degree of both accuracy and flexibility. That is, it is anticipated that the PSLT application will accurately interpret Signing on a consistent basis, and will have a strong level of adaptability in learning unique symbols and language (such as those exchanged within sub-cultures or youth cultures).

For example, it is important to note that the PSLT conversion software enables a 'complete' interpretation of signs. In other words, if the signs for 'I', 'drive' and 'car' are given, the text generated reads 'I drive the car'. If the sign for 'yesterday' is also given, the system converts the verb tense automatically: "Yesterday, I drove the car".
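The following sketch shows, under deliberately simple assumptions, how a time-marker sign such as 'yesterday' could trigger the verb-tense adjustment described above. The word lists and rules are invented for illustration and are far cruder than any real Sign language grammar engine.

```python
# Illustrative sketch of the tense-adjustment behaviour described above.
# Vocabulary and rules are invented; real Sign grammar is far richer.

PAST_FORMS = {"drive": "drove", "go": "went", "see": "saw"}
PAST_TIME_MARKERS = {"yesterday"}
NOUN_PHRASES = {"car": "the car"}   # crude article insertion for readability

def assemble_sentence(signs):
    """Turn a sequence of recognised signs into a simple English sentence."""
    markers = [s for s in signs if s in PAST_TIME_MARKERS]
    past = bool(markers)
    words = []
    for s in signs:
        if s in PAST_TIME_MARKERS:
            continue                       # the marker moves to the front below
        if past and s in PAST_FORMS:
            s = PAST_FORMS[s]              # adjust verb tense
        words.append(NOUN_PHRASES.get(s, s))
    body = " ".join(words) + "."
    return (markers[0].capitalize() + ", " + body) if past else body

print(assemble_sentence(["I", "drive", "car"]))                # I drive the car.
print(assemble_sentence(["yesterday", "I", "drive", "car"]))   # Yesterday, I drove the car.
```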

Second, the app-based nature of PSLT and the Sign/avatar-based projects lends itself to production on a large scale, with affordable pricing (an important consideration given the lower socio-economic status of many people with disabilities).

Third, the focus of the Sign-to-speech conversion is portability, in that the applications are designed to work across multiple devices, including mobile devices, thus making the application more widely accessible, and portable, to those with hearing and speech disabilities. However, to date, the technologies being developed are off-line applications and targeted towards in-person communication.

However, it should be cautioned that the speed of image capturing and sign conversion has yet to be tested for highly complex or variable elements of Sign language.

As with the other technologies reviewed in this scan, the meaningfulness of this technology depends on the user's characteristics. The meaningfulness of Sign to speech/text conversion technology is limited to those who use Sign language as their primary means of communication, and those who wish to communicate with them. For a person who is deaf, hard of hearing or speech impaired and does not use Sign language, the meaningfulness of this technology is largely diminished.

Sign to Speech/Text Conversion Technology – Enhancement or Integration with Other Existing Applications, Platforms or Technologies

PSLT developers note that the system software will be developed into a portable offline application for use on "Android smartphones and Tablet PCs, as well as on any netbooks, notebooks, laptops and desktops running Linux or Windows equipped with a standard webcam."42 This illustrates how technical advancement in mobile devices (phones and tablets), that is, their capacity to function effectively as mobile mini-computers, has enabled assistive technologies to become more pervasive and portable. (In other words, their portability is the advantage, not necessarily their telecommunications functionality. The off-line nature of the PSLT application indicates that it is intended to facilitate in-person communication.)

The developers will not be integrating the PSLT with voice and/or video communication tools such as Facetime or Skype upon its initial release; PSLT can be ported to iPhones and iPads, but only if demand warrants.

Developers also note that, although the sequence of signs from a camera can be displayed as text on the same device it has been detected from, the sequence can also be transmitted and viewed remotely. That is, the sequence can be sent "as an SMS message or as a Bluetooth command to control an appliance".43

Sign to Speech/Text Conversion Technology – Potential Impact on Users

The portability and flexibility of the PSLT system, together with multiplatform functionality and customizable software, carries a potentially positive impact for users in a number of face to face situations, such as school and employment.

The developers of PSLT have focused largely, if theoretically, on the positive impact for users with respect to finding employment and then communicating more effectively on the job. That is, using a device that accurately and rapidly translates signs to text would enable a user with a hearing or speech disability to expand the number of jobs he or she applies for, because the communications barrier between signers and non-signers is reduced.

In general, the most positive impact on users would appear to be the difference that the system can make in general face to face communication.

Sign to Speech/Text Conversion Technology – Spin-offs and Mass Marketing

As noted above, certain sign to speech/text conversion applications utilize existing motion capture technology as one element of creating a signing avatar. Motion capture is also used in video gaming systems such as Wii and Xbox 360 Kinect, so once again there may be advancements made to gaming systems as a result of Sign to speech/text conversion research.

While less developed, there are indications of interest in other spin-off products – some in use now, and some in development.

The Sign language converter necklace or pendant was announced more than three years ago, but has evidently not found its way to the mass market. The device – essentially a Sign to speech translator – can be worn around the neck to pick up Sign language and convert the symbols into speech.44

 

Graphic of a Sign language converter pendant, a small tube-shaped device worn on a chain around the neck. The image shows three versions of the pendant: left image, "Click the bottom to open the speaker"; centre image, "Adjust the volume"; and right image, "Turn off".

 

The original pendant did not appear to have a two-way communicator, for example software that would convert speech to text. An advancement in this direction has been made by the S.V.L.T., the Sign Voice Language Translator. This device, also worn as a pendant around the neck, uses a camera to capture and translate Sign language into speech, then converts the speech into text that appears on a small LCD screen next to the camera. This would enable an individual who is blind and one who is deaf to communicate with one another. (The current status of this device, its price point and/or mass market planning, are not known.)45

Research and development in the field of assistive technologies like speech to text and Sign to text/speech conversion has given rise to another, potentially major spin-off: a more universal translator that automatically converts text from one language to another language, or converts the spoken words of one language to another language.

For example, the Voice Translator app for the Android platform, distributed through Google Play, translates speech into either another language or text, supporting 50 different languages. This type of speech recognition app has also been developed by Apple for the iPhone and by Microsoft for laptops and mobile devices. It is generally acknowledged that advancements in automatic translation software will enable simplified communication when travelling, for those working in the hospitality business and other uses – but the research also acknowledges the limitations of the technology with respect to accuracy.46
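Functionally, translators of this kind chain three components: speech recognition in the source language, text-to-text translation, and (optionally) speech synthesis in the target language. The sketch below shows only that pipeline shape, with placeholder components and a toy word list; it is not the implementation of any of the apps named above.

```python
# Illustrative sketch of the speech-translation pipeline shape described
# above: speech recognition -> machine translation -> speech synthesis.
# The tiny dictionary stands in for a real translation engine.

TOY_EN_TO_FR = {"where": "où", "is": "est", "the": "la", "station": "gare"}

def recognise_speech(audio):
    """Placeholder speech recogniser: here the 'audio' is already text."""
    return audio.lower()

def translate(text, dictionary=TOY_EN_TO_FR):
    """Word-by-word stand-in for a statistical or neural translation engine."""
    return " ".join(dictionary.get(w, w) for w in text.split())

def synthesise_speech(text):
    """Placeholder text-to-speech step."""
    print(f"[speaking] {text}")

synthesise_speech(translate(recognise_speech("Where is the station")))
# -> "[speaking] où est la gare"
```

Even this toy version illustrates the accuracy limitation noted in the research: word-by-word substitution ignores word order and context, which real translation engines must model.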

5) Adapted Mainstream Technologies such as SMS

We have elected to approach this part of the Report as a basic narrative, as the discussion moves from an examination of alternative technologies to one of how mainstream technologies have been adapted by people with hearing and speech disabilities.

While there is a temptation to declare SMS, instant messaging and other text-based instantaneous message systems the 'assistive technology of choice' for those with hearing and speech disabilities, there is no conclusive data to support such a declaration.

But there is little doubt that mainstream communications technologies such as SMS have simply and very quickly surpassed other assistive technologies for a very strong set of reasons, nicely summarized as follows:

"It makes sense that so many Deaf people have adopted SMS as a preferred communications channel around the world. It is text-based, easy to use, affordable and is mobile. The vibrating function of the handset alerts the user about a message. Unlike other technology designed specifically for Deaf people, such as teletypewriters (TTY), it does not require each party to have bespoke equipment or rely on an expensive, time-intensive and intrusive intermediary to translate messages back and forth."47

SMS is the world's most popular data application, with more than three-quarters of the world's mobile phone users texting. Its widespread use by the Deaf, and those with other hearing and speech disabilities, was in fact predicted in 2004 in a research paper delivered to an academic conference in Australia.48 Since that time, applications for SMS and instant messaging have multiplied, making it widely available and relatively affordable for people with hearing and speech disabilities.

While figures are not available for Canada and the U.S., 98 percent of the deaf and hard of hearing population in the U.K. use SMS text messaging – a market penetration so complete that police services are establishing text messaging as a method of reporting crimes for the deaf community. (This should not be confused with text-to-911 emergency reporting, which does not exist in many countries.)49

While SMS is limited to 160-character messages and thus requires substantial use of short form abbreviations, instant messaging applications can provide alternatives for those with hearing and speech disabilities. The benefits for users are, in a word, enormous – "profoundly changing the lives of millions of non-verbal people".50

The ubiquitous nature of mobile devices has also delivered competitive pricing, ease of use, portability, and ease of international communication. With respect to price points, the deaf community had long objected to the necessity of paying for voice plans when no voice communication was ever needed. A number of carriers in the U.S. have responded with text-only plans for mobile customers with hearing disabilities.51

In Canada, data-only plans are typically not offered by wireless carriers for smartphones; instead, data services that support text messaging are available as add-ons, or for devices like sticks or tablets. Voice over Internet Protocol (VOIP) services like Google Voice further eliminate the need for voice subscriptions with such services as voice mail transcription at no charge to users.52

In addition, the science of instant messaging continues to develop through new applications of direct benefit to people with hearing and speech disabilities. One of the most striking applications – which demonstrates the versatility of the technology – is 'PocketSMS' developed by an engineering student for the Android platform, specifically for people who are deaf-blind. The application converts an SMS text into Morse code; as the phone displays the text one letter at a time, it vibrates the alphabet in equivalent Morse code dashes.53
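As a rough sketch of the kind of conversion PocketSMS is described as performing, the snippet below maps each character of an incoming message to Morse code and then to a sequence of vibration pulse durations. The timing values and function names are illustrative assumptions, not the app's actual parameters.

```python
# Illustrative sketch of text-to-Morse vibration, in the spirit of the
# PocketSMS behaviour described above. Durations are invented.

MORSE = {
    "a": ".-",   "b": "-...", "c": "-.-.", "d": "-..",  "e": ".",
    "f": "..-.", "g": "--.",  "h": "....", "i": "..",   "j": ".---",
    "k": "-.-",  "l": ".-..", "m": "--",   "n": "-.",   "o": "---",
    "p": ".--.", "q": "--.-", "r": ".-.",  "s": "...",  "t": "-",
    "u": "..-",  "v": "...-", "w": ".--",  "x": "-..-", "y": "-.--",
    "z": "--..",
}

DOT_MS, DASH_MS, GAP_MS = 120, 360, 360   # assumed vibration timings

def message_to_pulses(text):
    """Return (vibrate_ms, pause_ms) pairs for each letter of the message."""
    pulses = []
    for letter in text.lower():
        for symbol in MORSE.get(letter, ""):
            pulses.append((DOT_MS if symbol == "." else DASH_MS, GAP_MS))
    return pulses

print(message_to_pulses("hi"))
# 'h' (....) and 'i' (..) -> six short vibration pulses in total
```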

Anecdotal evidence suggests that, of all the technologies reviewed for this Report, mainstream text messaging, SMS or other instant messaging are by far the most widely available and widely adapted for people with hearing and speech disabilities. They are technologies that have resulted, completely unintentionally, in electronic curb cuts of mass proportions. The fact that such technologies are developed by the world's largest software companies and most creative application developers also means that these technologies will keep evolving given market competition and massive consumer uptake.

6) Future Developments and Applications

There is little doubt that the above noted Sign to text or speech conversion technology will be developing at a rapid pace over the next 12 to 36 months; an application is expected from Technabling later in 2013. As noted in the concluding section of our Report, it is a technology that bears monitoring and also bears consideration of user reaction to it in terms of its usefulness – and the role it plays in the large range of technologies available to people with hearing or speech disabilities.

A number of other technologies and applications are either being discussed for development, in the very early stages of development, or simply further away given a lack of funding, the absence of required technology, or limited interest in their potential uptake by consumers.

As a concluding note on the current and developing state of alternative communications technologies, professionals in this field who were consulted for the scan make two very important observations.

First, the "ecosystem for these technologies is option-rich." There are more choices than ever before for enhancing communication for people with hearing and speech disabilities – to the point of being overwhelming for some users. It is worth reiterating that this richness of options has created something of a digital divide in terms of tech-savvy users and those who are not.

Second, and equally as important, "accessibility features and services are massively underused, even when they are free. People lack awareness and confidence – they don't know what will work and they don't know how to get started." In other words, putting information into action can be a barrier to using the technologies that are there now – and resolving this issue is key.

One part of the solution involves a source for "one-stop shopping" – not so much for purchasing products, but for finding out "what's out there and what good it will do me." These include databases such as the Global Accessibility Reporting Initiative, the FCC Accessibility Clearinghouse and a European consortium of information providers seeking coordinated solutions. The Global Accessibility Reporting Initiative, a project designed to help consumers learn more about the accessibility features of mobile devices and identify the features most useful to them, includes the participation of the Canadian Wireless Telecommunications Association (CWTA).59

Part III – Recommendations for Monitoring Alternative Communications Technologies and Summary Grid of the Assistive Technologies Scan

We have three recommendations for the on-going monitoring of alternative communications technologies:

Although a scan and review of video compression technologies that result in applications such as Skype, Google Hangout or VRS were beyond the scope of this Report, video compression technologies should also be monitored given their importance to users and the more efficient use of bandwidth that future developments may represent.

A Summary Grid of our Report is set out on the following pages.

 

Summary Grid – Alternative Communications Technologies for the Deaf, Hard of Hearing and Speech Impaired

Technology: CapTel
Product Cycle and Stage of Development: Fully developed in the U.S.; not introduced in Canada.
Benefits for Users: Subsidized U.S. system makes it highly affordable; faster than TTYs, but essentially the same system.
Barriers to Adoption: No legislative mandate to generate funding support; voice recognition software has limitations.
Feasibility: Less relevant as a result of SMS, instant messaging and other mainstream applications.
Integration with other Apps and Platforms: Available in the U.S. through WebCapTel.
Impact on Users: Positive given limited costs to users, but surpassed by other options in Canada.

Technology: IP Relay
Product Cycle and Stage of Development: Launched in Canada in 2011.
Benefits for Users: Internet-based relay service with no additional charges incurred, but a high-speed connection is required.
Barriers to Adoption: Where used, voice recognition technology has limitations; 911 access can be complex; fraud has been a problem (U.S.).
Feasibility: Fully feasible, although reliance on the accuracy of voice recognition results in varying degrees of service.
Integration with other Apps and Platforms: Works well with multiple devices, platforms and programs; text-based nature takes up little bandwidth.
Impact on Users: Important addition to text-based communication, entirely suited to the mobile platform as it uses little bandwidth.

Technology: Speech to Text Conversion
Product Cycle and Stage of Development: Widely used for CapTel, IP Relay and other applications, but software is continually upgraded to new versions.
Benefits for Users: Assuming a high degree of accuracy, benefits are undeniable; but accuracy remains an issue.
Barriers to Adoption: Questionable accuracy; text is secondary to video and to Sign to text conversion.
Feasibility: The stronger and higher the rate of accuracy, the greater the feasibility of the application.
Integration with other Apps and Platforms: Multiple uses across platforms; allows for transcription services, enabling access to more mainstream applications.
Impact on Users: A trigger for a massive number of apps and uses that might prove overwhelming for users and create a digital divide.

Technology: Sign Language to Speech/Text Conversion
Product Cycle and Stage of Development: Early stages; signs of rapid development (but possible over-promotion should be treated with caution); first app expected to be available to the public by the end of 2013; extent of functionality to be determined, but holds considerable promise.
Benefits for Users: Extensive; enables Sign language communication with non-users; implications for education and jobs; customizable.
Barriers to Adoption: Some prototypes are looking at Sign to speech, which is viewed as more complex than Sign to text applications.
Feasibility: Marketed as a low-cost application with a high degree of accuracy and consistency; the customizable feature is key.
Integration with other Apps and Platforms: Will be available across most platforms, although not anticipated for iPhone or iPad right away. Possible integration with other apps (Skype, Google Hangout); however, the technology would require significant upgrades before achieving that level of functionality.
Impact on Users: Strong interest in this technology from the user community for its potential impact on everyday life.

Technology: Adapted Mainstream Technologies (e.g. SMS)
Product Cycle, Stage of Development and Benefits for Users: Massive uptake by people with hearing and speech disabilities; fast, affordable, ubiquitous.
Barriers to Adoption and Feasibility: Cost and the lack of a data-only plan when speech is not needed can be barriers; but fully feasible.
Impact on Users: Choice is enormous, even overwhelming; but the impact has been life-changing.

Appendix A

Resources

Age and Disability Resource Center, AbleData "Sign Voice Language Translator" http://bexar.tx.networkofcare.org/aging/assistive/assistive_devices.aspx?pageid=19327&top=15112&ksectionid=0&productid=199323&trail=22,13436&discontinued=0

Article Myriad, "How long does it take to train speech recognition programs like Dragon or Vista?" posted January 16, 2012 www.articlemyriad.com/long-train-speech-recognition-programs-dragon-vista/

BGR.com, March 12, 2012 www.bgr.com/2012/03/12/new-microsoft-software-can-translate-voices-into-foreign-languages/ and Google.com https://play.google.com/store/apps/details?id=com.smartmobilesoftware.voicetranslatorfree&hl=en

Bulk SMS.com www.bulksms.com/int/w/BulkSMS_SMS-improves-communications-for-the-Deaf.htm

CapTel, "How CapTel Works" www.captel.com/how-it-works.php

CapTel, "How voice recognition errors affect captions" www.captel.com/customer_service/kb/index.php/article/voice-recognition-errors

CapTel, "Responding to Captioned Telephone Calls, 911, PSAP" www.captel.com/911psaps.php

CRTC, "Relay services for people with hearing or speech disabilities" www.crtc.gc.ca/eng/info_sht/t1038.htm

Disabled World (2009), "Text Phones for the Deaf" www.disabled-world.com/assistivedevices/hearing/text-phones.php

European Assistive Technology Information Network www.eastin.eu/en-GB/searches/products/index

FCC Clearinghouse http://apps.fcc.gov/accessibilityclearinghouse/

FCC Consumer Advisory "Doing Business Using IP Relay" http://transition.fcc.gov/cgb/consumerfacts/iprelayfraud.pdf

Federal Communications Commission, Internet Protocol Relay Service www.fcc.gov/guides/internet-protocol-ip-relay-service;

Geek.com, 'Nintendo DS gets voice recognition and cloud storage for education', January 12, 2012 www.geek.com/articles/games/nintendo-ds-gets-voice-recognition-and-cloud-storage-for-teaching-20120131/

Geek.com, March 25, 2009, "Necklace turns sign language into speech" www.geek.com/articles/gadgets/necklace-turns-sign-language-into-speech-20090325/

Global Accessibility Reporting Initiative www.mobileaccessibility.info/,

Government of Canada (2010), Human Resources and Social Development Canada 2010 Federal Disability Report www.hrsdc.gc.ca/eng/disability_issues/reports/fdr/2010/page07.shtml

Hearing Loss Association of America, "FCC issues report and order to curb IP Relay fraud", July 3, 2012 www.hearingloss.org/content/fcc-issues-report-and-order-curb-ip-relay-fraud

High Speed Experts, July 20, 2012 "AT&T to end analog landline phone services?" www.highspeedexperts.com/att-ending-pots/

IBM, Extreme Blue and the SiSi Team www-03.ibm.com/press/us/en/pressrelease/22316.wss;

University of Hamburg (Germany), HamNoSys www.sign-lang.uni-hamburg.de/dgs-korpus/index.php/hamnosys-97.html

Innovate U.K. SBRI Programme, "Spotlight on a project" www.innovateuk.org/_assets/pdf/case%20studies/technabling.pdf

iTunes, "Voice Dictation to SMS", http://itunes.apple.com/us/app/voice-dictation-voice-to-sms/id492594590?mt=8

Microsoft Support, 'Xbox 360 + Kinect Voice Commands' for a menu of speech recognition options. http://support.xbox.com/en-US/kinect/voice/control-your-xbox-360-with-your-voice

Mobile Syrup: mobile news and reviews for Canadians August 30, 2011 "Why no data-only plans for smartphones?" http://mobilesyrup.com/forum/showthread.php?t=17454

National Association of the Deaf (2012) "NAD Comments on the Importance of IP Relay" www.nad.org/news/2012/3/nad-comments-importance-ip-relay

National Association of the Deaf, www.nad.org/news/2012/3/nad-comments-importance-ip-relay

National Institute on Deafness and Other Communication Disorders, www.nidcd.nih.gov/health/hearing/Pages/Assistive-Devices.aspx

Open Sign, www.opensign.org/index.php?option=com_zoo&view=category&Itemid=21

Power, D., M.R. Power, and L. Horstmanshof (2005), "Deaf people's use of SMS and other text-based communication: a brave new world" Paper presented to Communication at work: showcasing communication scholarship: Annual Meeting of the Australia New Zealand Communication Association, Christchurch, New Zealand, 4-7 July 2005.

Schindler, Christine (2011) "Text Messaging: more than just an add-on to cell phone plans" Adaptive Technology Center for New Jersey Colleges www.tcnj.edu/~technj/2003/testmessaging.htm

SNOW (2012), Inclusive Design Centre, Ontario College of Art and Design, "Questions to consider when choosing Voice Recognition Software" www.snow.idrc.ocad.ca/content/voice-recognition-speech-text-software

Squidoo.com (2012) 'Text-only Plans' www.squidoo.com/text-only-plans

Technabling, Portable Sign Language Translator website, www.pslt.org/info

TechCrunch, July 9, 2012, "Ukrainian students develop gloves that translate sign language to speech" http://techcrunch.com/2012/07/09/enable-talk-imagine-cup/

The Daily Telegraph March 12, 2012 quoting Dr. Ernesto Compatangelo www.telegraph.co.uk/science/science-news/9134827/Sign-language-program-converts-hand-movements-into-text.html

The Daily Times March 24, 2012 'Text messages provide deaf with new means of communication' www.daily-times.com/ci_20245436/text-messages-provide-deaf-new-means-communication

U.K. Council on Deafness, May 11, 2012 "A call to industry to engage on the next generation of relay services" http://deafcouncil.org.uk/news/2012/05/11/393/

Vocapia, Glossary www.vocapia.com/glossary.html#lm

Vocapia Solutions, "How it Works" www.vocapia.com/

Wirelessaccessibility.ca http://wirelessaccessibility.ca/

YouTube, 'SMS for those both Deaf and Blind' www.youtube.com/watch?v=_jisK0N7JF4

Appendix B

Report Author

This Report was researched and authored by Richard Cavanagh, Partner, CONNECTUS Consulting Inc. Dr. Cavanagh has over 20 years of experience in researching and analyzing Canada's communications industries, with a specialized focus on social policy and accessibility issues. He has recently completed research on the evolution of technology in the broadcasting and telecommunications industries.

Dr. Cavanagh holds a PhD in Social Sciences from Carleton University and an M.A. in Sociology from Queen's University.

CONNECTUS Consulting Inc.

251 Loretta Avenue South

Ottawa, Ontario

K1S 4P6

(613) 729-8892

Richard@connectusinc.ca


[1] Video compression techniques would include Video Relay Service and Skype. For purposes of the Report, the term 'technology' is a catch-all term, referring to devices, applications, software and other elements in a product chain that ultimately delivers accessibility to users.

[2] Curb cuts – sidewalks that slope to the street – were originally designed for wheelchair users, but have a number of 'unintended' benefits, e.g. strollers, toddlers, other wheeled devices, people with walkers, etc. An electronic curb cut is the term given to an unintended but beneficial spin-off of something like text messaging. Thus while text messaging was not specifically designed for people with disabilities, it has become widely used by those who are deaf, hard of hearing or have a speech disability.

[3] National Institute on Deafness and Other Communication Disorders (2011) Website, "What is an assistive device?" www.nidcd.nih.gov/health/hearing/Pages/Assistive-Devices.aspx

[4] Ibid

[5] See Disabled World (2009), "Text Phones for the Deaf" www.disabled-world.com/assistivedevices/hearing/text-phones.php which provides a thorough discussion of TDD features.

[6] As another way of explaining voice- and hearing-carry over: if one can speak clearly, but must use a TTY to read what the other person is saying, Voice Carry Over is requested from the service provider. This lets one party speak, while a relay service operator types what the person says to you.

If one can hear, but must use a TTY to type what she/he needs to say, Hearing Carry Over is requested from the service provider. This allows one to hear what the other party is saying while a relay service operator reads aloud what is typed to the other person.

[7] CapTel Captioned Telephone, "How CapTel Works" www.captel.com/how-it-works.php

[8] CapTel was terminated in the U.K. in 2008 due to a lack of uptake by consumers. The U.K. Council on Deafness recently called for the service to be reinstated. See http://deafcouncil.org.uk/news/2012/05/11/393/

[9] Discussions with a former Product Development Manager for Telus and accessibility design expert in the U.S.

[10] See CapTel, "How voice recognition errors affect captions" www.captel.com/customer_service/kb/index.php/article/voice-recognition-errors

[11] CapTel, "Responding to Captioned Telephone Calls, 911, PSAP" www.captel.com/911psaps.php

[12] See Government of Canada (2010), Human Resources and Social Development Canada 2010 Federal Disability Report www.hrsdc.gc.ca/eng/disability_issues/reports/fdr/2010/page07.shtml By way of example, the 2010 Disability Report states that people with disabilities aged 25 to 54 are more than twice as likely to be living below the after-tax low-income cutoff (LICO).

[13] Discussions are continuing in the U.S. with respect to the phasing out of the PSTN, or traditional voice services, in favour of full digital/IP services. AT&T has made a formal request to the FCC to end all analog landline phone services. See High Speed Experts, July 20, 2012 www.highspeedexperts.com/att-ending-pots/

[14] The only estimate of the number of captioned telephones in use as of 2008 is provided by CapTel; see CapTel, "Responding to Captioned Telephone Calls, 911, PSAP" www.captel.com/911psaps.php

[15] CRTC, "Relay services for people with hearing or speech disabilities" www.crtc.gc.ca/eng/info_sht/t1038.htm For additional descriptions of IP relay service, see Federal Communications Commission, Internet Protocol Relay Service www.fcc.gov/guides/internet-protocol-ip-relay-service;

[16] National Association of the Deaf (2012) "NAD Comments on the Importance of IP Relay" www.nad.org/news/2012/3/nad-comments-importance-ip-relay VRS requires a high speed connection with the Internet given the video compression technology in use.

[17] Discussion with a leading expert on accessible communications devices, U.S.

[18] FCC Consumer Advisory "Doing Business Using IP Relay" transition.fcc.gov/cgb/consumerfacts/iprelayfraud.pdf

[19] The FCC considers instances of using IP Relay to make calls from foreign countries to the U.S. in order to defraud businesses and individuals to be a serious problem; the problem was exacerbated as a result of temporary registrations granted to users before their eligibility to use the system was verified; see National Association of the Deaf, www.nad.org/news/2012/3/nad-comments-importance-ip-relay

[20] See Hearing Loss Association of America, "FCC issues report and order to curb IP Relay fraud", July 3, 2012 www.hearingloss.org/content/fcc-issues-report-and-order-curb-ip-relay-fraud

[21] IP Relay providers in Canada use voice recognition software to convert speech to text.

[22] Review of IP Relay web pages of Canadian service providers (Bell, Telus, MTS, Northwestel, Rogers, Cogeco, Shaw, Bell Aliant, Videotron)

[23] A developer of speech recognition software, Vocapia, has developed a glossary of useful terminology in speech to text conversion software. See www.vocapia.com/glossary.html#lm

[24] Article Myriad, "How long does it take to train speech recognition programs like Dragon or Vista?" posted January 16, 2012 www.articlemyriad.com/long-train-speech-recognition-programs-dragon-vista/

[25] SNOW (2012), Inclusive Design Centre, Ontario College of Art and Design, "Questions to consider when choosing Voice Recognition Software" www.snow.idrc.ocad.ca/content/voice-recognition-speech-text-software

[26] Vocapia Solutions, "How it Works" www.vocapia.com/

[27] iTunes, "Voice Dictation to SMS", http://itunes.apple.com/us/app/voice-dictation-voice-to-sms/id492594590?mt=8

[28] See for example, 'Xbox 360 + Kinect Voice Commands' for a menu of speech recognition options. http://support.xbox.com/en-US/kinect/voice/control-your-xbox-360-with-your-voice

[29] Geek.com, 'Nintendo DS gets voice recognition and cloud storage for education', January 12, 2012 www.geek.com/articles/games/nintendo-ds-gets-voice-recognition-and-cloud-storage-for-teaching-20120131/

[30] Innovate U.K. SBRI Programme, "Spotlight on a project" www.innovateuk.org/_assets/pdf/case%20studies/technabling.pdf

[31] Project summaries are available at Open Sign, www.opensign.org/index.php?option=com_zoo&view=category&Itemid=21

[32] Ibid

[33] IBM, Extreme Blue and the SiSi Team www-03.ibm.com/press/us/en/pressrelease/22316.wss; University of Hamburg (Germany), HamNoSys www.sign-lang.uni-hamburg.de/dgs-korpus/index.php/hamnosys-97.html

[34] Open Sign, op. cit.

[35] TechCrunch, July 9, 2012, "Ukrainian students develop gloves that translate sign language to speech" http://techcrunch.com/2012/07/09/enable-talk-imagine-cup/

[36] Ibid

[37] The Daily Telegraph March 12, 2012 quoting Dr. Ernesto Compatangelo www.telegraph.co.uk/science/science-news/9134827/Sign-language-program-converts-hand-movements-into-text.html

[38] Technabling, Portable Sign Language Translator website, www.pslt.org/info

[39] Ibid.

[40] The Daily Telegraph, March 12, 2012

[41] Technabling, Portable Sign Language Translator website, www.pslt.org/info

[42] Ibid

[43] Technabling, Portable Sign Language Translator website, www.pslt.org/info

[44] Geek.com, March 25, 2009, "Necklace turns sign language into speech" www.geek.com/articles/gadgets/necklace-turns-sign-language-into-speech-20090325/

[45] Age and Disability Resource Center, AbleData "Sign Voice Language Translator" http://bexar.tx.networkofcare.org/aging/assistive/assistive_devices.aspx?pageid=19327&top=15112&ksectionid=0&productid=199323&trail=22,13436&discontinued=0

[46] BGR.com, March 12, 2012 www.bgr.com/2012/03/12/new-microsoft-software-can-translate-voices-into-foreign-languages/ and Google.com https://play.google.com/store/apps/details?id=com.smartmobilesoftware.voicetranslatorfree&hl=en

[47] Bulk SMS.com www.bulksms.com/int/w/BulkSMS_SMS-improves-communications-for-the-Deaf.htm

[48] Power, D., M.R. Power, and L. Horstmanshof (2005), "Deaf people's use of SMS and other text-based communication: a brave new world" Paper presented to Communication at work: showcasing communication scholarship: Annual Meeting of the Australia New Zealand Communication Association, Christchurch, New Zealand, 4-7 July 2005.

[49] Schindler, Christine (2011) "Text Messaging: more than just an add-on to cell phone plans" Adaptive Technology Center for New Jersey Colleges www.tcnj.edu/~technj/2003/testmessaging.htm

[50] The Daily Times March 24, 2012 'Text messages provide deaf with new means of communication' www.daily-times.com/ci_20245436/text-messages-provide-deaf-new-means-communication

[51] Squidoo.com (2012) 'Text-only Plans' www.squidoo.com/text-only-plans

[52] Mobile Syrup: mobile news and reviews for Canadians August 30, 2011 "Why no data-only plans for smartphones?" http://mobilesyrup.com/forum/showthread.php?t=17454

[53] See YouTube, 'SMS for those both Deaf and Blind' www.youtube.com/watch?v=_jisK0N7JF4

[54] National Institute on Deafness and Other Communication Disorders, www.nidcd.nih.gov/health/hearing/Pages/Assistive-Devices.aspx

[55] Ibid

[56] Discussion with U.S.-based professional in accessibility infrastructure; see also the Global Public Inclusive Infrastructure, website http://gpii.net

[57] Discussion with U.S.-based professional in assistive device development; the reference to crowdsourcing by Amara can be found at www.universalsubtitles.org/en/; AssistMeLive can be found at http://beta.assistmelive.com/login

[58] National Institute on Deafness and Other Communication Disorders, www.nidcd.nih.gov/health/hearing/Pages/Assistive-Devices.aspx

[59] See Global Accessibility Reporting Initiative www.mobileaccessibility.info/, FCC Clearinghouse http://apps.fcc.gov/accessibilityclearinghouse/, and European Assistive Technology Information Network www.eastin.eu/en-GB/searches/products/index See also Wirelessaccessibility.ca http://wirelessaccessibility.ca/
