Qing Wang, Zhenyu Chen, Junjie Wang, and Yang Feng
Intelligent Crowdsourced Testing
Qing Wang
Institute of Software, Chinese Academy of Sciences, Beijing, China
Zhenyu Chen
Software Institute, Nanjing University, Nanjing, Jiangsu, China
Junjie Wang
Institute of Software, Chinese Academy of Sciences, Beijing, China
Yang Feng
Software Institute, Nanjing University, Nanjing, Jiangsu, China
ISBN 978-981-16-9642-8 e-ISBN 978-981-16-9643-5
https://doi.org/10.1007/978-981-16-9643-5
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Foreword
At every minute of every day there are probably hundreds of millions of computers that are idle. It is staggering to contemplate the enormity of the computing capacity sitting idle and unused across the totality of these devices. And it is intriguing to consider what might be accomplished if even a modest fraction of that capacity could be put to good use. Indeed, others have not only contemplated such usage but some have also tapped into it. In one such project, that computing power is used to analyze the electromagnetic spectrum for possible emissions from extraterrestrial civilizations. Another project attempts to use that computing power to evaluate approaches to predicting the structure of the proteins expressed by a gene, given only the DNA sequence of that gene.
In this fascinating book, the authors suggest how all of that unused computing power could be put to use by software engineers, using it to perform otherwise unachievably thorough testing of software through crowdsourced testing. The prospect of using the combined computational capacity of millions of idle computers all around the world to carry out unprecedentedly thorough testing, thereby improving the quality of the world's software, is exciting and intriguing. It is the subject of this book.
While the upside potential of crowdsourced testing is enormous, the difficulties in doing this are also enormous. They range from deciding how much capacity is available on which computers, to apportioning testing tasks among the available computers, to integrating all of the testing results that are returned, to knowing what to do when intentionally overlapping testing tasks return inconsistent results. This book addresses all of these problems, and more. In doing so, it makes for fascinating reading and contemplation, and also lays out a challenging and invigorating research agenda.
I congratulate the authors for this important and intrepid undertaking. Their vision is broad and exciting, and their research roadmap is challenging and stimulating. This book seems destined to become a seminal work in an area of boundless importance and promise.
Leon J. Osterweil
Orleans, MA, USA
15 October 2021
Preface
Software is everywhere today. It leads our every step. It is part of everything we do. Software makes our everyday work easier and simplifies our daily lives. We use software to work, study, and communicate with friends. It allows us to shop, make payments, travel, and do a lot more.
Yet when one programs, one makes mistakes. Every 1,000 lines of code can easily contain up to 16 errors, and a company's software has millions of lines, so that is a lot of errors. Once software errors reveal themselves, their consequences range from annoying to very severe. There has been a long list of software errors that have caused major disruptions: airport systems that can't function for a day, banking systems of entire countries shutting down, and spacecraft exploding, among other mishaps.
There is no way to prove that a piece of software is 100% bug free. Nevertheless, there are things we can do to improve software quality, among which software testing is the most important. Software testing is a method to check whether the actual software product matches expected requirements and to help ensure that the product is defect free. If there are any bugs or errors in the software, they can be identified early and fixed before the software product is delivered. A properly tested software product ensures reliability, security, and high performance, which in turn results in time savings, cost effectiveness, and customer satisfaction.
Software testing involves executing software/system components, using manual or automated tools, to evaluate one or more properties of interest. Traditionally, it was conducted by dedicated quality assurance teams with formally trained testers. Although these quality assurance teams are reliable, their high cost and delayed responses make them hard to scale and too inflexible for the rapid update cycles of today's software industry. Automated testing could be one solution, but its inability to create realistic user-behavior test cases makes it hard to rely on, given the variety of software products.
Crowdsourced testing is an emerging practice that enables testing with more flexibility and scalability than quality assurance teams. Crowdsourced testing, also known as crowdtesting, is a fresh approach to quality assurance. It combines human skills with technology to eliminate some of the problems involved in conventional testing. Instead of carrying out testing within an organization, crowdsourcing uses a dispersed, temporary workforce of multiple individual testers. This on-demand community of testers is able to test the software more quickly and effectively than an in-house team. Crowdsourced testing offers companies an opportunity to have their products tested by real users on real devices across the globe, ensuring a customer-centric emphasis.
Thanks to these advantages, crowdsourced testing has been adopted by a growing number of organizations, including, but not limited to, Google, Facebook, Walmart, PayPal, and Uber. In particular, Google has deployed crowdsourced testing across its 14 major software product lines. The crowdsourced testing schema has also spawned a number of crowdsourced testing platforms. For example, Applause, the pioneer of the global crowdsourced testing schema and the largest crowdsourced testing platform, provides usability, compatibility, security, functionality, accessibility, and other types of testing services. Synack, the world's largest secure crowdsourced testing platform, provides crowdsourced testing services to the U.S. Department of Defense, one-third of US banks, and leading credit card companies. The crowdsourced testing market is expected to register a compound annual growth rate of 10.7% over the forecast period 2021 to 2026.