Plausible Slop: Generative AI and Open Source Cybersecurity
333 | Sat 02 Aug 5:30 p.m.–6:15 p.m.
Presented by
Dr. Kaylea Champion
https://kayleachampion.com
Dr. Kaylea Champion studies how people cooperate to build public goods like GNU/Linux and Wikipedia, including what gets built and maintained -- and what doesn't. She has a background in system administration and tech support. She received her PhD in Communication from the University of Washington in 2024. A Linux user since 1994, she enjoys tromping through the woods, smashing goblins, and cooking for a crowd.
Abstract
Despite speculation that the rise of consumer-grade generative AI tools would trigger the development of more advanced cybersecurity attacks, a more grounded view observes that these synthetic text generators are instead eroding the social model of open source cybersecurity through the low-effort extrusion of 'plausible slop': potentially significant and well-formed but ultimately erroneous and unwanted text. The presence of plausible slop in newcomer contributions, in the form of bug and security reports to open source software packages, demands substantial time from scarce experts. These experts are caught in a double bind: their role dictates that they sort out what is truly dangerous from what is nonsense, and they are charged with both welcoming problem reports from newcomers and setting strong norms against inauthentic reports. In this talk, I report on my efforts so far investigating plausible slop, connect this challenge to earlier historical challenges, suggest avenues toward solutions, and seek community feedback to shape next steps.