
Hong Jiao (editor) - Application of Artificial Intelligence to Assessment (The MARCES Book Series)


Description


The general theme of this book is the application of artificial intelligence (AI) in test development. In particular, the book presents research and successful examples of using AI technology in automated item generation, automated test assembly, automated scoring, and computerized adaptive testing. By utilizing artificial intelligence, the efficiency of item development, test form construction, test delivery, and scoring can be dramatically increased. Chapters on automated item generation offer different perspectives on generating large numbers of items with controlled psychometric properties, including the latest developments in using machine learning methods. Automated scoring is illustrated for different types of assessments, such as speaking and writing, from both methodological and practical perspectives. Further, automated test assembly is elaborated for conventional linear tests from both classical test theory and item response theory perspectives. A chapter on item pool design and assembly for linear-on-the-fly tests elaborates on the added complications that arise in practice when test security is a major concern. Finally, several chapters focus on computerized adaptive testing (CAT) at either the item or the module level. CAT is further illustrated as an effective approach to increasing test-takers' engagement in testing. In summary, the book includes theoretical, methodological, and applied research and practices that serve as a foundation for future development. These chapters illustrate efforts to automate the process of test development. While some of these automated processes, such as automated test assembly, automated scoring, and computerized adaptive testing, have become common practice, others, such as automated item generation, call for more research and exploration.
As new AI methods emerge and evolve, it is expected that researchers will expand and improve the methods for automating different steps in test development to enhance its automation features, and that practitioners will adopt quality automation procedures to improve assessment practices.



______________________________________

Application of Artificial Intelligence to Assessment

______

A volume in
The MARCES Book Series
Hong Jiao, and Robert W. Lissitz, Series Editors

__________________________________________

Application of Artificial Intelligence to Assessment

______

edited by

Hong Jiao

University of Maryland

Robert W. Lissitz

University of Maryland


INFORMATION AGE PUBLISHING, INC.
Charlotte, NC www.infoagepub.com

Library of Congress Cataloging-in-Publication Data

A CIP record for this book is available from the Library of Congress

http://www.loc.gov

ISBN: 978-1-64113-951-9 (Paperback)

978-1-64113-952-6 (Hardcover)

978-1-64113-953-3 (E-Book)

Copyright © 2020 Information Age Publishing, Inc.

All rights reserved. No part of this publication may be reproduced, stored in a
retrieval system, or transmitted, in any form or by any means, electronic, mechanical,
photocopying, microfilming, recording or otherwise, without written permission
from the publisher.

Printed in the United States of America

Chapter 1

Augmented Intelligence and the Future of Item Development

Mark J. Gierl

University of Alberta

Hollis Lai

University of Alberta

Donna Matovinovic

ACT Inc.

Testing organizations require large numbers of diverse, high-quality, content-specific items to support their current test delivery and test design initiatives. But the demand for test items far exceeds the supply. Conventional item development is a manual process that is both time consuming and expensive because each item is written individually by a subject-matter expert (SME) and then reviewed, edited, and revised by groups of SMEs to ensure every item meets quality control standards. As a result, item development serves as a critical bottleneck in our current approach to content development for testing. One way to address this problem is to augment the conventional approach with computer algorithms to improve the efficiency and increase the scalability of the item development process. Automatic item generation (AIG) is the process of using models, in conjunction with computer technology, to produce test items. With AIG, a single model can be used to produce hundreds of new test items. The purpose of our chapter is to describe and illustrate how augmented intelligence in item development can be achieved with the use of AIG. The chapter contains three sections. In the first section, we describe the conventional approach to item development and explain why it cannot meet the growing demand for new test items. In the second section, we introduce augmented intelligence in item development and describe how AIG can support the human-machine interactions needed for efficient and scalable content production. In the third section, we provide a summary and highlight directions for future research.
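The template-and-slots idea behind AIG can be sketched in a few lines of Python. This is an illustrative toy, not the authors' method: the item template, the approved slot values, and the answer-key logic are all invented for this example.

```python
# Illustrative AIG sketch: one item model (a template plus approved slot
# values) generates every combination as a distinct test item. The
# template and values below are hypothetical examples.
from itertools import product

TEMPLATE = "A train travels {speed} km/h for {hours} hours. How far does it go?"

# Values an SME has (hypothetically) approved for substitution into each slot.
SLOTS = {
    "speed": [40, 60, 80, 100],
    "hours": [2, 3, 4],
}

def generate_items(template, slots):
    """Yield (stem, answer_key) pairs for every combination of slot values."""
    names = list(slots)
    for values in product(*(slots[n] for n in names)):
        bindings = dict(zip(names, values))
        stem = template.format(**bindings)
        key = bindings["speed"] * bindings["hours"]  # distance in km
        yield stem, key

items = list(generate_items(TEMPLATE, SLOTS))
# One model with 4 x 3 slot values yields 12 distinct items.
```

The point of the sketch is the leverage: a single reviewed model produces every combination of slot values, so adding one value to a slot multiplies, rather than adds to, the item yield.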

Contemporary Item Development and the Problem of Scalability

The conventional approach to item development is a manual process in which SMEs use their experience and expertise to produce new test items. It relies on a method where the SME creates each test item individually. Then, after each item is created, it is edited, reviewed, and revised until the item meets the required standards of quality (Haladyna & Rodriguez, 2013; Lane, Raymond, Haladyna, & Downing, 2016; Schmeiser & Welch, 2006). The SME is responsible for the entire process, which involves identifying, organizing, and evaluating the content required for creating new items. This approach relies on human judgment acquired through training and experience. As a result, item development has often been described as an art because it depends on the knowledge, experience, and insight of the SME (Haladyna & Rodriguez, 2013; Schmeiser & Welch, 2006). Conventional item development is also a standardized process that requires iterative refinement to address quality control (Lane, Raymond, Haladyna, & Downing, 2016; Schmeiser & Welch, 2006). The item development process is standardized through the use of guidelines, in which SMEs are provided with information to structure their task in a consistent manner that produces reliable and valid test items (Haladyna & Downing, 1998; Haladyna, Downing, & Rodriguez, 2002; Haladyna & Rodriguez, 2013). Standardization helps control for the potentially diverse outcomes that can be produced when different SMEs perform the same item development task. Guidelines provide a summary of best practices, common mistakes, and general expectations that help ensure the SMEs have a shared understanding of their tasks and responsibilities. Iterative refinement supports the practice of item development through the use of a structured and systematic item review. That is, once an item has been written, it is reviewed to evaluate whether it has met the important outcomes described in the guidelines. Typically, reviews are conducted by committees of SMEs.
Reviews can focus on a range of standards and objectives related to item content (e.g., does the item match the test specifications?), fairness (e.g., does the item elicit construct-irrelevant variance due to subgroup differences?), cognitive complexity (e.g., is the linguistic complexity of the item aligned with grade-level expectations?), and presentation (e.g., is the item grammatically correct?; Perie & Huff, 2016; Schmeiser & Welch, 2006). The review yields feedback on different standards of item quality that, in turn, can be used by the SME to revise and improve the original item.

The conventional approach has two noteworthy limitations. First, conventional item development is inefficient. It is both time consuming and expensive because it relies on the item as the unit of analysis (Drasgow, Luecht, & Bennett, 2006). That is, each item in the process is unique, and therefore each item must be individually written, edited, reviewed, and revised. Many different components of item quality can be identified; for example, as noted in the previous paragraph, item quality can be determined by the item's content, fairness, cognitive complexity, and presentation. Because each item is unique, each component of item quality must be reviewed and, if necessary, each item must be revised. Because writing and reviewing are conducted by highly qualified SMEs, the conventional approach is expensive.

Second, conventional item development is challenging to scale in an economical way. The scalability of the conventional approach is again linked to the item as the unit of analysis. When one item is required, one item is written and reviewed by the SME. When 100 items are required, 100 items must be written and reviewed by the SMEs. Hence, a large number of SMEs who can write and review items is needed to scale the process. Conventional item development can result in an increase in item production when large numbers of SMEs are available. But item writing and reviewing is a time-consuming and expensive process due to the human effort needed to create, review, edit, and revise large numbers of new items.

These two limitations highlight the importance of establishing an efficient and scalable approach to item development. They are also amplified in the modern era of educational assessment, where test delivery and design are rapidly evolving to support different forms of on-demand testing. Test delivery marks the most important shift. Researchers and practitioners now recognize that delivering educational tests in a paper-based format is neither feasible nor desirable. Printing, scoring, and reporting paper-based tests requires tremendous time, effort, and expense. Computer-based testing (CBT) provides a viable alternative that helps reduce delivery costs while providing important benefits for examinees. CBT permits testing on demand, allowing examinees to take the test at any time during instruction. Items on a CBT are scored immediately, providing examinees with instant feedback. CBT also allows for continuous administration, giving examinees more choice about when they write their tests.

