
Deborah A. Dahl (ed.) - Multimodal Interaction with W3C Standards: Toward Natural User Interfaces to Everything

  • Book:
    Multimodal Interaction with W3C Standards Toward Natural User Interfaces to Everything
  • Author:
    Deborah A. Dahl (ed.)
  • Publisher:
    Springer International Publishing : Imprint: Springer
  • Genre:
    Computer
  • Year:
    2017
  • City:
    Cham

Summary and description of Multimodal Interaction with W3C Standards: Toward Natural User Interfaces to Everything


Contents:

Part I: Standards
  1. Introduction to the Multimodal Architecture
  2. The role and importance of speech standards
  3. Extensible Multimodal Annotation for Intelligent Interactive Systems
  4. Emotion Markup Language
  5. Introduction to SCXML
  6. Dialogue Acts
  7. Six-Layered Model for Multi-Modal Interaction Systems
  8. WebRTC: handling media on the web

Part II: Implementations
  9. Developing portable context-aware multimodal applications for connected devices using the W3C multimodal architecture
  10. SCXML on Resource Constrained Devices
  11. A Standard Portal for Intelligent Services
  12. Assembling the jigsaw: how multiple W3C standards are synergistically combined in the HALEF multimodal dialog system

Part III: Applications
  13. Applications in Ambient Assisted Living
  14. Applications in Ambient Assisted Living

Part IV: Future Directions
  15. Finding and integrating components into dynamic systems
  16. Multimodal Interactivity in Foreign Language Testing
  17. Multi-device applications using the multimodal architecture
  18. Multimodal Interaction Description Language based on data modeling
  19. Multimodal Fusion and Fission within the W3C MMI Architectural Pattern

This book presents new standards for multimodal interaction published by the W3C and other standards bodies in straightforward and accessible language, while also illustrating the standards in operation through case studies and chapters on innovative implementations. The book illustrates how, as smart technology becomes ubiquitous and appears in more and more shapes and sizes, vendor-specific approaches to multimodal interaction become impractical, motivating the need for standards. The book covers standards for voice, emotion, natural language understanding, dialog, and multimodal architectures, and describes them in a practical manner, making them accessible to developers, students, and researchers.

  • A comprehensive resource that explains the W3C standards for multimodal interaction in a clear and straightforward way.
  • Includes case studies of the use of the standards on a wide variety of devices, including mobile devices, tablets, wearables, and robots, in applications such as assisted living, language learning, and health care.
  • Features illustrative examples of implementations that use the standards, to help spark innovative ideas for future applications.

Part I
Standards
© Springer International Publishing Switzerland 2017
Deborah A. Dahl (ed.), Multimodal Interaction with W3C Standards, DOI 10.1007/978-3-319-42816-1_1
1. Introduction to the Multimodal Architecture Specification
Jim Barnett
Architecture Team, Genesys, Daly City, CA, USA
Abstract
The W3C's Multimodal Architecture standard is a high-level design featuring loosely coupled components. Its goal is to encourage interoperability and re-use of components, without enforcing any particular approach to building multimodal applications. This paper offers an overview of the architecture, outlining its components and the events they use to communicate, as well as giving basic examples of how it can be applied in practice.
1.1 Overview
Many standards emerge in areas where the technology is stable and industry participants think that they understand the field well enough to be able to codify existing best practices. However, the consensus within the Multimodal Interaction Working Group of the W3C was that best practices for multimodal application development had not yet emerged. The group therefore took it as its task to support exploration, rather than trying to codify any particular approach to multimodal applications. The goal of the Multimodal Architecture and Interfaces standard [] is to encourage re-use and interoperability while being flexible enough to allow a wide variety of approaches to application development. The Working Group's hope is that this architecture will make it easier for application developers to assemble existing components to get a base multimodal system, thus freeing them up to concentrate on building their applications.
As part of the discussions that led to the Multimodal Architecture, the group considered existing multimodal languages, in particular SALT []. SALT was specifically designed as a multimodal language, and consisted of speech tags that could be inserted into HTML or similar languages. HTML5 in turn has multimodal capabilities, such as video, which were absent from earlier versions of HTML. One problem with this approach is that it is both language- and modality-specific. For example, neither SALT nor HTML5 supports haptic sensors, nor do they provide an extension point that would allow them to be integrated in a straightforward manner. Furthermore, in both cases overall control and coordination of the modalities is provided by HTML, which was not designed as a control language. Multimodal application developers using HTML5 are thus locked into a specific graphical language with limited control capabilities and no easy way to add new modalities. As a result of these limitations, HTML5 is not a good framework for multimodal experimentation.
The Multimodal Working Group's conclusion was that it was too early to commit to any modality-specific language. For example, VoiceXML [] has been highly successful as a language for speech applications, particularly over the phone. However, there is no guarantee that it will turn out to be the best language for speech-enabled multimodal applications. Therefore the Working Group decided to define a framework which would support a variety of languages, both for individual modalities and for overall coordination and control. The framework should rely on simple, high-level interfaces that would make it easy to incorporate existing languages such as VoiceXML as well as new languages that haven't been defined yet. The Working Group's goal was to make as few technology commitments as possible, while still allowing the development of sophisticated applications from a wide variety of re-usable components. Of necessity, the result of the Group's work is a high-level framework rather than the description of a specific system, but the goal of the abstraction is to let application developers decide how the details should be filled in.
We will first look at the components of the high-level architecture and then at the events that pass between them.
1.2 The Architecture
The basic design principles of the architecture are as follows:
  1. The architecture should make no assumptions about the internal structure of components.
  2. The architecture should allow components to be distributed or co-located.
  3. Overall control flow and modality coordination should be separated from user interaction.
  4. The various modalities should be independent of each other. In particular, adding a new modality should not require changes to any existing ones.
  5. The architecture should make no assumptions about how and when modalities will be combined.
The third and fourth principles motivate the most basic features of the design. In particular, item 3 requires that there be a separate control module that is responsible for coordination among the modalities. The individual modalities will of course need their own internal control flow. For example, a VoiceXML-based speech recognition component has its own internal logic to coordinate prompt playing, speech recognition, barge-in, and the collection of results. However, the speech recognition component should not be attempting to control what is happening in the graphics component. Similarly, the graphics component should be responsible for visual input and output, without concern for what is happening in the voice modality. The fourth point reinforces this separation of responsibilities. If the speech component is controlling speech input and output only, while the graphics component is concerned with the GUI only, then it should be possible to add a haptic component without modifying either of the existing components.
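To make this division of labor concrete, here is a minimal, illustrative VoiceXML fragment of the kind of internal dialog logic a speech component manages entirely on its own: prompting, grammar-based recognition, reprompting on silence, and returning the collected result. The grammar file name and the submit address are hypothetical placeholders, not something defined by this chapter or by the standards.

  <?xml version="1.0" encoding="UTF-8"?>
  <!-- Illustrative sketch only: a speech component's own dialog logic, independent of the other modalities -->
  <vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
    <form id="destination-form">
      <field name="destination">
        <!-- Prompt playing, with barge-in so the caller can interrupt -->
        <prompt bargein="true">Which city would you like to fly to?</prompt>
        <!-- Speech recognition against a grammar (placeholder file name) -->
        <grammar src="cities.grxml" type="application/srgs+xml"/>
        <!-- Collection of the result; the address below is a placeholder -->
        <filled>
          <submit next="http://example.com/im/result" namelist="destination"/>
        </filled>
        <!-- Handling of silence stays inside the speech component -->
        <noinput>
          <reprompt/>
        </noinput>
      </field>
    </form>
  </vxml>

Nothing in this fragment knows about the graphics component; coordination across modalities is left to a separate control module.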
The core idea of the architecture is thus to factor the system into an Interaction Manager (IM) and multiple Modality Components (MCs).
The Interaction Manager is responsible for control flow and coordination among the Modality Components. It does not interact with the user directly or handle media streams, but controls the user interaction by controlling the various MCs. If the user is using speech to fill in a graphical form, the IM would be responsible for starting the speech Modality Component and then taking the speech results from the speech MC and passing them to the graphics component. The IM is thus responsible for tracking the overall progress of the application, knowing what information has been gathered, and deciding what to do next, but it leaves the details of the interactions in the various modalities up to the MCs. A wide variety of languages can be used to implement Interaction Managers, but SCXML [] is well suited to this task and was defined with this architecture in mind.
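To illustrate what an SCXML-based Interaction Manager might look like, the following is a rough, hypothetical sketch of an IM that starts a voice Modality Component when a form is displayed and forwards the recognition result to the graphics component. The event names and target identifiers (form.displayed, voice.done, voiceMC, graphicsMC, and so on) are invented for this sketch, not taken from the specification.

  <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0"
         datamodel="ecmascript" initial="idle">
    <datamodel>
      <data id="lastResult" expr="null"/>
    </datamodel>

    <state id="idle">
      <!-- Wait for the graphics MC to report that the form is on screen -->
      <transition event="form.displayed" target="listening"/>
    </state>

    <state id="listening">
      <onentry>
        <!-- Ask the voice MC to start recognizing (illustrative event and target) -->
        <send event="voice.start" target="voiceMC"/>
      </onentry>
      <!-- When the voice MC returns a result, remember it and update the form -->
      <transition event="voice.done" target="updatingForm">
        <assign location="lastResult" expr="_event.data"/>
      </transition>
    </state>

    <state id="updatingForm">
      <onentry>
        <!-- Pass the recognized value to the graphics MC -->
        <send event="form.fill" target="graphicsMC">
          <param name="value" expr="lastResult"/>
        </send>
      </onentry>
      <transition event="form.filled" target="idle"/>
    </state>
  </scxml>

The IM here tracks only the overall flow of the interaction; the details of prompting, recognition, and rendering stay inside the respective Modality Components.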
The Multimodal Architecture also defines an application-level Data Component which is logically separate from the Interaction Manager. The Data Component is intended to store application-level data, and the Interaction Manager is able to access and update it. However, the architecture does not define the interface between the Data Component and the IM, so in practice the IM will provide its own built-in Data Component.
Modality Components are responsible for interacting with the user. There are few requirements placed on Modality Components beyond this. In particular, the specification does not define what a modality is. A Modality Component may handle input or output or both. In general, it is possible to have coarse-grained Modality Components that combine multiple media that could be treated as separate modalities. For example, a VoiceXML-based Modality Component would offer both ASR and TTS capabilities, but it would also be possible to have one MC for ASR and another for TTS. Many Modality Components will have a scripting interface to allow developers to customize their behavior. VoiceXML is again a good example of this. However, it is also possible to have hard-coded Modality Components whose behavior cannot be customized.
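The IM and the MCs communicate through the life-cycle events defined by the Multimodal Architecture specification, which are covered later in this chapter. As a rough sketch of the general shape of such an event, the following shows a StartRequest from an IM asking a VoiceXML Modality Component to begin an interaction. The addresses are placeholders, and the element and attribute spellings here are approximate and should be checked against the specification itself.

  <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
    <!-- Source, Target, Context and RequestID identify the sender, the receiver,
         the shared interaction context, and this particular request (placeholder values) -->
    <mmi:StartRequest Source="http://example.com/im"
                      Target="http://example.com/voice-mc"
                      Context="http://example.com/context/1"
                      RequestID="request-1">
      <!-- Pointer to the VoiceXML document the Modality Component should run -->
      <mmi:ContentURL href="http://example.com/dialogs/destination.vxml"/>
    </mmi:StartRequest>
  </mmi:mmi>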