Better without AI
How to avert an AI apocalypse... and create a future we would like
David Chapman
This book is a call to action. You can participate. This is for you.
Artificial intelligence might end the world. More likely, it will crush our ability to make sense of the world, and so will crush our ability to act in it.
AI will make critical decisions that we cannot understand. Governments will take radical actions that make no sense to their own leaders. Corporations, guided by artificial intelligence, will find their own strategies incomprehensible. University curricula will turn bizarre and irrelevant. Formerly-respected information sources will publish mysteriously persuasive nonsense. We will feel our loss of understanding as pervasive helplessness and meaninglessness. We may take up pitchforks and revolt against the machines, and in so doing, we may destroy the systems we depend on for survival.
Worries about AI risks have long been dismissed because AI itself sounds like science fiction. That is no longer possible. Fluent new text generators, such as ChatGPT, have suddenly shown the public that powerful AI is here now. Some are excited about future possibilities; others fear them.
We don't know how our AI systems work, we don't know what they can do, and we don't know what broader effects they will have. They do seem startlingly powerful, and a combination of their power with our ignorance is dangerous.
In our absence of technical understanding, those concerned with future AI risks have constructed scenarios: stories about what AI may do. We don't know whether any of them will come true. However, for now, anticipating possibilities is the best way to steer AI away from an apocalypse, and perhaps toward a remarkably likeable future.
So far, we've accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios. We've found zero that lead to good outcomes.
Most AI researchers think good outcomes are more likely. This seems to be just blind faith, though. A majority surveyed also acknowledge that utter catastrophe is quite possible.1
Unless we can find some specific beneficial path, and can gain some confidence in taking it, we should shut AI down.
I am not a Luddite. I have been wildly enthusiastic about science, technology, and intellectual and material progress since I was a kid. I have a PhD in artificial intelligence, and I find the current breakthroughs fascinating. I'd love to believe there's a way AI could improve our lives in the long run. If someone finds one, I will do an immediate 180, roll up my sleeves, and help build that better future.
Unless and until that happens, I oppose AI. I hope you will too. At minimum, I advise everyone involved to exercise enormously greater caution.
AI is extremely cool, and we can probably have a better future without it. Lets do that.
This book is about you. It's about what you can do to help avert apocalyptic outcomes. It's about your part in a future we would like.
I offer specific recommendations for the general public; for technology professionals; for AI professionals specifically; for organizations already concerned with AI risks; for science and public interest funders, including government agencies, philanthropic organizations, NGOs, and individual philanthropists; and for governments in their regulatory and legislative roles.
Since this book is for everyone, it requires no technical background. It is also not a beginner's introduction to artificial intelligence, nor an overview of the field, nor a survey of prior literature on AI safety. Instead, you will read about the AI risk scenarios I'm most concerned about, and what you can do about them.
Medium-sized apocalypses
This book considers scenarios that are less bad than human extinction, but which could get worse than run-of-the-mill disasters that kill only a few million people.
Previous discussions have mainly neglected such scenarios. Two fields have focused on comparatively smaller risks, and extreme ones, respectively. AI ethics concerns uses of current AI technology by states and powerful corporations to categorize individuals unfairly, particularly when that reproduces preexisting patterns of oppressive demographic discrimination. AI safety treats extreme scenarios involving hypothetical future technologies which could cause human extinction.2 It is easy to dismiss AI ethics concerns as insignificant, and AI safety concerns as improbable. I think both dismissals would be mistaken. We should take seriously both ends of the spectrum.
However, I intend to draw attention to a broad middle ground of dangers: more consequential than those considered by AI ethics, and more likely than those considered by AI safety. Current AI is already creating serious, often overlooked harms, and is potentially apocalyptic even without further technological development. Neither AI ethics nor AI safety has done much to propose plausibly effective interventions.
We should consider many such scenarios, devise countermeasures, and implement them.
A hero's journey
This book has five chapters. They are mostly independent; you can read any on its own. Together, however, they form a hero's journey path: through trials and tribulations to a brilliant future.
We are not used to reasoning about artificial intelligence. Even experts can't make much sense of what current AI systems do, and it's still more difficult to guess at the behavior of unknown future sorts. We are used to reasoning about powerful people, who may be helpful or hostile. It is natural to think about AI using that analogy. Most scenarios in science fiction, and in the AI safety field, assume the danger is autonomous mind-like AI.
However, the first chapter, What is the Scary kind of AI?, explains why that is probably misleading. Scenarios in which AIs act like tyrants are emotionally compelling, and may be possible, but they attract attention away from other risks. AI is dangerous when it creates new, large, unchecked pools of power. Those present the same risks whether the power is exploited by people or by AI systems themselves. (Here the hero, that's you, realizes that the world is scarier than it seemed.)
The second chapter, Apocalypse now, explores a largely neglected category of catastrophic risks of current and near-future AI systems. These scenarios feature AI systems that are not at all mind-like. However, they act on our own minds: coopting people to act on their behalf, altering our cultural and social systems for their benefit, amassing enormous power, and undermining governments and other critical institutions. They could cause societal collapse unintentionally. That may now sound as unlikely as the scenarios in which a tyrannical, self-aware AI deliberately takes over the world and enslaves or kills all humans. I hope reading the chapter will make this alternative terrifyingly plausible. (The hero gets thrown into increasingly perilous, unexpected, complicated scenarios. Is survival possible?)
Chapter three, How you can avert an AI apocalypse, describes seven practical approaches. These may be effective against both the mind-like AIs of the first chapter and the mindless ones of the second. For each approach, it suggests helpful actions that different sorts of people and institutions can take. The approaches are complementary, and none is guaranteed to work, so all are worth pursuing simultaneously. (The hero takes up magical arms against the enemy, and victory seems possible after all.)
The utopian case for AI is dramatic acceleration of scientific understanding, and therefore technological and material progress. Those are worthy goals, which I share fully. However, no one has explained how or why AI would accomplish them. Chapter four, Technological transformation without Scary AI, suggests that it wouldn't, but that such acceleration is within our reach. The pace of progress currently depends on dysfunctional social structures and incentives for research and development. We can take immediate, pragmatic actions to remove obstacles and speed progress, without involving risky AI. (The hero achieves an epiphany of the better world to come, and discovers that the key is of quite a different nature than expected.)