Davey Gibian is a technologist and artificial intelligence practitioner. His career has spanned Wall Street, the White House, and active war zones, where he has brought cutting-edge data science tools to bear on hard problems. He has built two start-ups, Calypso AI and OMG; served as a White House Presidential Innovation Fellow for Artificial Intelligence and Cybersecurity; and helped scale Palantir Technologies. He holds patents in machine learning and an undergraduate degree from Columbia University. Davey served in the U.S. Air Force and currently resides in New York City.
Ackerman, Evan. "Three Small Stickers in Intersection Can Cause Tesla Autopilot to Swerve into Wrong Lane." IEEE Spectrum, June 24, 2021. https://spectrum.ieee.org/cars-that-think/transportation/self-driving/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane.
Adversarial Robustness Toolbox. "Welcome to the Adversarial Robustness Toolbox." Adversarial Robustness Toolbox 1.7.2 documentation. Accessed September 9, 2021. https://adversarial-robustness-toolbox.readthedocs.io/en/latest/.
Angwin, Julia, and Jeff Larson. "Machine Bias." ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Barse, E. L., H. Kvarnström, and E. Jonsson. "Synthesizing Test Data for Fraud Detection Systems." In Proceedings of the 19th Annual Computer Security Applications Conference, pp. 384–394. 2003.
Biggio, Battista, and Fabio Roli. "Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning." arXiv, July 19, 2018. https://arxiv.org/pdf/1712.03141.pdf.
Bischoff, Paul. "Surveillance Camera Statistics: Which City Has the Most CCTV Cameras?" Comparitech, June 8, 2021. https://www.comparitech.com/vpn-privacy/the-worlds-most-surveilled-cities/.
Brendel, Wieland, Jonas Rauber, and Matthias Bethge. "Decision-Based Adversarial Attacks: Reliable Attacks against Black-Box Machine Learning Models." arXiv preprint arXiv:1712.04248 (2017).
Bubeck, Sébastien, Yin Tat Lee, Eric Price, and Ilya Razenshteyn. "Adversarial Examples from Computational Constraints." In International Conference on Machine Learning, pp. 831–840. PMLR, 2019.
Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of Machine Learning Research 81 (February 23, 2018): 77–91. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.
Carlini, Nicholas. "A Complete List of All (arXiv) Adversarial Example Papers." June 15, 2019. https://nicholas.carlini.com/writing/2019/all-adversarial-example-papers.html.
Carlini, Nicholas, and David Wagner. "Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods." In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 3–14. 2017.
Carlini, Nicholas, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. "The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks." In 28th USENIX Security Symposium (USENIX Security 19), pp. 267–284. 2019.
Chen, Pin-Yu, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. "ZOO: Zeroth Order Optimization Based Black-Box Attacks to Deep Neural Networks without Training Substitute Models." In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 15–26. 2017.
Chiappa, Silvia. "Path-Specific Counterfactual Fairness." Proceedings of the AAAI Conference on Artificial Intelligence 33, no. 1 (2019): 7801–7808. https://doi.org/10.1609/aaai.v33i01.33017801.
Chung, Simon P., and Aloysius K. Mok. "Advanced Allergy Attacks: Does a Corpus Really Help?" In International Workshop on Recent Advances in Intrusion Detection, pp. 236–255. Springer, Berlin, Heidelberg, 2007.
Chung, Simon P., and Aloysius K. Mok. "Allergy Attack against Automatic Signature Generation." In International Workshop on Recent Advances in Intrusion Detection, pp. 61–80. Springer, Berlin, Heidelberg, 2006.
Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books, 1993.
Dastin, Jeffrey. "Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women." Reuters, October 10, 2018. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
Demontis, Ambra, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, and Fabio Roli. "Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks." In 28th USENIX Security Symposium (USENIX Security 19), pp. 321–338. 2019.
Diaz, Jesus. "Alexa Can Be Hacked by Chirping Birds." Fast Company, September 28, 2018. https://www.fastcompany.com/90240975/alexa-can-be-hacked-by-chirping-birds.
Enam, S. Zayd. "Why Is Machine Learning Hard?" Zayd's Blog, November 10, 2016. https://ai.stanford.edu/~zayd/why-is-machine-learning-hard.html.
Engstrom, Logan, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. "Exploring the Landscape of Spatial Robustness." In International Conference on Machine Learning, pp. 1802–1811. PMLR, 2019.
Erwin, Sandra. "NGA Official: Artificial Intelligence Is Changing Everything, We Need a Different Mentality." SpaceNews, May 13, 2018. https://spacenews.com/nga-official-artificial-intelligence-is-changing-everything-we-need-a-different-mentality/.
Eykholt, Kevin, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. "Robust Physical-World Attacks on Deep Learning Visual Classification." 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018): 1625–1634.
Fast Company. "High-Tech Redlining: AI Is Quietly Upgrading Institutional Racism." Fast Company, November 20, 2018. https://www.fastcompany.com/90269688/high-tech-redlining-ai-is-quietly-upgrading-institutional-racism.
Federal Reserve Board of Governors. Supervisory Letter SR 11-7 on Guidance on Model Risk Management, April 4, 2011. Accessed September 9, 2021. https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm.
Federal Reserve Board. Trading and Capital-Markets Activities Manual. February 1998. https://www.federalreserve.gov/boarddocs/supmanual/trading/trading.pdf.
Ford, Nic, Justin Gilmer, Nicolas Carlini, and Dogus Cubuk. "Adversarial Examples Are a Natural Consequence of Test Error in Noise." arXiv preprint arXiv:1901.10513 (2019).
Fredrikson, Matt, Somesh Jha, and Thomas Ristenpart. "Model Inversion Attacks That Exploit Confidence Information and Basic Countermeasures." In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322–1333. 2015.
Fredrikson, Matthew, Eric Lantz, Somesh Jha, Simon Lin, David Page, and Thomas Ristenpart. "Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing." In 23rd USENIX Security Symposium (USENIX Security 14), pp. 17–32. 2014.
Freedberg, Sydney J. "Joint Artificial Intelligence Center Created under DOD CIO." Breaking Defense, July 22, 2021. https://breakingdefense.com/2018/06/joint-artificial-intelligence-center-created-under-dod-cio/.
Gao, Yansong, Change Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, and Surya Nepal. "STRIP: A Defence against Trojan Attacks on Deep Neural Networks." In Proceedings of the 35th Annual Computer Security Applications Conference, pp. 113–125. 2019.
Gartner, Inc. "Anticipate Data Manipulation Security Risks to AI Pipelines." Gartner. Accessed September 9, 2021. https://www.gartner.com/en/documents/3899783/anticipate-data-manipulation-security-risks-to-ai-pipeli.