
Davey Gibian - Hacking Artificial Intelligence: A Leader's Guide from Deepfakes to Breaking Deep Learning

Here you can read the full text of Davey Gibian's Hacking Artificial Intelligence: A Leader's Guide from Deepfakes to Breaking Deep Learning online in English for free, or download it as a PDF or EPUB. Year: 2022; publisher: Rowman & Littlefield Publishers; genre: Politics. A description of the work, its preface, and reviews are available below.



  • Book:
    Hacking Artificial Intelligence: A Leader's Guide from Deepfakes to Breaking Deep Learning
  • Author:
    Davey Gibian
  • Publisher:
    Rowman & Littlefield Publishers
  • Genre:
    Politics
  • Year:
    2022
  • Rating:
    3 / 5

Hacking Artificial Intelligence: A Leader's Guide from Deepfakes to Breaking Deep Learning: summary, description and annotation

Here you can read an annotation, description, summary, or preface, depending on what the author of "Hacking Artificial Intelligence: A Leader's Guide from Deepfakes to Breaking Deep Learning" provided. If you haven't found the information you need about the book, write in the comments and we will try to find it.

Sheds light on the ability to hack AI and the technology industry's lack of effort to address these vulnerabilities.

We are accelerating toward an automated future, but this new future brings new risks. It is no surprise that, after years of development and recent breakthroughs, artificial intelligence is rapidly transforming businesses, consumer electronics, and the national security landscape. But like all digital technologies, AI can fail and be left vulnerable to hacking. Experts consider the ability to hack AI, and the technology industry's lack of effort to secure it, the biggest unaddressed technology issue of our time. Hacking Artificial Intelligence sheds light on these hacking risks, explaining them to those who can make a difference.

Today, very few people, including those in influential business and government positions, are aware of the new risks that accompany automated systems. While society hurtles ahead with AI, we are also rushing toward a security and safety nightmare. This book is the first-ever layman's guide to the new world of hacking AI, introducing the field to the many readers who should be aware of these risks. From a security perspective, AI today is where the internet was 30 years ago: wide open and exploitable. Readers, from leaders to AI enthusiasts and practitioners alike, are shown how AI hacking poses a real risk to their organizations and are given a framework for assessing such risks before problems arise.
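The "wide open" exposure the book describes can be made concrete with a toy sketch of the gradient-sign idea behind many adversarial-example (evasion) attacks. Everything below is invented for illustration (the model, its weights, and the inputs are not from the book): for a simple linear classifier, nudging each input feature slightly against the sign of the weight vector is enough to flip the prediction.

```python
import numpy as np

# A toy linear classifier: score = w.x + b, predict class 1 if score > 0.
# Weights and bias are made up for this illustration.
w = np.array([0.9, -0.6, 0.3])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A "clean" input the model confidently classifies as class 1.
x = np.array([0.5, 0.1, 0.2])

# FGSM-style evasion: perturb each feature by eps in the direction that
# lowers the score. For a linear model the gradient of the score with
# respect to x is just w, so the attack steps against sign(w).
eps = 0.4
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # 1 0 -- the small perturbation flips the label
```

Real attacks apply the same gradient-sign logic to deep networks, where the perturbation can be small enough to be invisible to a human while still changing the model's output.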


Read the complete text of Hacking Artificial Intelligence: A Leader's Guide from Deepfakes to Breaking Deep Learning online for free

Below is the text of the book, divided into pages. The system saves your place at the last page read, so you can conveniently continue reading "Hacking Artificial Intelligence: A Leader's Guide from Deepfakes to Breaking Deep Learning" online for free without having to search for where you left off each time. Set a bookmark, and you can return to the page where you stopped reading at any time.


Davey Gibian is a technologist and artificial intelligence practitioner. His career has spanned Wall Street, the White House, and active war zones as he has brought cutting-edge data science tools to solve hard problems. He has built two start-ups, Calypso AI and OMG, was a White House Presidential Innovation Fellow for Artificial Intelligence and Cybersecurity, and helped scale Palantir Technologies. He holds patents in machine learning and an undergraduate degree from Columbia University. Davey served in the U.S. Air Force and currently resides in New York City.

Ackerman, Evan. Three Small Stickers in Intersection Can Cause Tesla Autopilot to Swerve into Wrong Lane. IEEE Spectrum, June 24, 2021. https://spectrum.ieee.org/cars-that-think/transportation/self-driving/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane.

Adversarial Robustness Toolbox. Welcome to the Adversarial Robustness Toolbox. Adversarial Robustness Toolbox 1.7.2 documentation. Accessed September 9, 2021. https://adversarial-robustness-toolbox.readthedocs.io/en/latest/#:~:text=Adversarial%20Robustness%20Toolbox%20(ART)%20is,Poisoning%2C%20Extraction%2C%20and%20Inference.

Angwin, Julia, and Jeff Larson. Machine Bias. ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Barse, E. L., H. Kvarnstrom, and E. Jonsson. Synthesizing test data for fraud detection systems. In 19th Annual Computer Security Applications Conference, 2003, pp. 384–394.

Biggio, Battista, and Fabio Roli. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning. arXiv, July 19, 2018. https://arxiv.org/pdf/1712.03141.pdf.

Bischoff, Paul. Surveillance Camera Statistics: Which City Has the Most CCTV Cameras? Comparitech, June 8, 2021. https://www.comparitech.com/vpn-privacy/the-worlds-most-surveilled-cities/.

Brendel, Wieland, Jonas Rauber, and Matthias Bethge. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248 (2017).

Bubeck, Sébastien, Yin Tat Lee, Eric Price, and Ilya Razenshteyn. Adversarial examples from computational constraints. In International Conference on Machine Learning, pp. 831–840. PMLR, 2019.

Buolamwini, Joy, and Timnit Gebru. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 81 (February 23, 2018): 77–91. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.

Carlini, Nicholas. A Complete List of All (ArXiv) Adversarial Example Papers. June 15, 2019. https://nicholas.carlini.com/writing/2019/all-adversarial-example-papers.html.

Carlini, Nicholas, and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM workshop on artificial intelligence and security, pp. 3–14. 2017.

Carlini, Nicholas, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th USENIX Security Symposium (USENIX Security 19), pp. 267–284. 2019.

Chen, Pin-Yu, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM workshop on artificial intelligence and security, pp. 15–26. 2017.

Chiappa, Silvia. 2019. Path-Specific Counterfactual Fairness. Proceedings of the AAAI Conference on Artificial Intelligence 33 (01): 7801–7808. https://doi.org/10.1609/aaai.v33i01.33017801.

Chung, Simon P., and Aloysius K. Mok. Advanced allergy attacks: Does a corpus really help? In International Workshop on Recent Advances in Intrusion Detection, pp. 236–255. Springer, Berlin, Heidelberg, 2007.

Chung, Simon P., and Aloysius K. Mok. Allergy attack against automatic signature generation. In International Workshop on Recent Advances in Intrusion Detection, pp. 61–80. Springer, Berlin, Heidelberg, 2006.

Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books, 1993.

Dastin, Jeffrey. Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women. Thomson Reuters, October 10, 2018. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

Demontis, Ambra, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, and Fabio Roli. Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks. In 28th USENIX Security Symposium (USENIX Security 19), pp. 321–338. 2019.

Diaz, Jesus. Alexa Can Be Hacked by Chirping Birds. Fast Company, September 28, 2018. https://www.fastcompany.com/90240975/alexa-can-be-hacked-by-chirping-birds.

Enam, S. Zayd. Why Is Machine Learning Hard? Zayd's Blog, November 10, 2016. https://ai.stanford.edu/~zayd/why-is-machine-learning-hard.html.

Engstrom, Logan, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. Exploring the landscape of spatial robustness. In International Conference on Machine Learning, pp. 1802–1811. PMLR, 2019.

Erwin, Sandra. NGA Official: Artificial Intelligence Is Changing Everything, We Need a Different Mentality. SpaceNews, May 13, 2018. https://spacenews.com/nga-official-artificial-intelligence-is-changing-everything-we-need-a-different-mentality/.

Eykholt, Kevin, I. Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, T. Kohno, and D. Song. Robust Physical-World Attacks on Deep Learning Visual Classification. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018): 1625–1634.

Fast Company. High-Tech Redlining: AI Is Quietly Upgrading Institutional Racism. Fast Company, November 20, 2018. https://www.fastcompany.com/90269688/high-tech-redlining-ai-is-quietly-upgrading-institutional-racism.

Federal Reserve. Board of Governors of the Federal Reserve System. Supervisory Letter SR 11-7 on guidance on Model Risk Management, April 4, 2011. Accessed September 9, 2021. https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm.

Federal Reserve Board. Trading and Capital-Markets Activities Manual. February 1998. https://www.federalreserve.gov/boarddocs/supmanual/trading/trading.pdf.

Ford, Nic, Justin Gilmer, Nicolas Carlini, and Dogus Cubuk. Adversarial examples are a natural consequence of test error in noise. arXiv preprint arXiv:1901.10513 (2019).

Fredrikson, Matt, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, pp. 1322–1333. 2015.

Fredrikson, Matthew, Eric Lantz, Somesh Jha, Simon Lin, David Page, and Thomas Ristenpart. Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing. In 23rd USENIX Security Symposium (USENIX Security 14), pp. 17–32. 2014.

Freedberg, Sydney J. Joint Artificial Intelligence Center Created under DOD CIO. Breaking Defense, July 22, 2021. https://breakingdefense.com/2018/06/joint-artificial-intelligence-center-created-under-dod-cio/.

Gao, Yansong, Change Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, and Surya Nepal. Strip: A defence against Trojan attacks on deep neural networks. In Proceedings of the 35th Annual Computer Security Applications Conference, pp. 113–125. 2019.

Gartner, Inc. Anticipate Data Manipulation Security Risks to AI Pipelines. Gartner. Accessed September 9, 2021. https://www.gartner.com/en/documents/3899783/anticipate-data-manipulation-security-risks-to-ai-pipeli.

