There’s an episode of The Simpsons in which the family buys its first computer, prompting Homer at one point to say something like “don’t worry, the computer will think for us now.” Yet again the show proves prophetic. Science magazine, summarizing a recent study looking for ways to make computers think for us via AI:
Beliefs in conspiracies that a US election was stolen incited an attempted insurrection on 6 January 2021. Another conspiracy alleging that Germany’s COVID-19 restrictions were motivated by nefarious intentions sparked violent protests at Berlin’s Reichstag parliament building in August 2020. Amid growing threats to democracy, Costello et al. investigated whether dialogs with a generative artificial intelligence (AI) interface could convince people to abandon their conspiratorial beliefs (see the Perspective by Bago and Bonnefon). Human participants described a conspiracy theory that they subscribed to, and the AI then engaged in persuasive arguments with them that refuted their beliefs with evidence. The AI chatbot’s ability to sustain tailored counterarguments and personalized in-depth conversations reduced their beliefs in conspiracies for months, challenging research suggesting that such beliefs are impervious to change. This intervention illustrates how deploying AI may mitigate conflicts and serve society.
The treatment reduced participants’ belief in their chosen conspiracy theory by 20% on average. This effect persisted undiminished for at least 2 months; was consistently observed across a wide range of conspiracy theories, from classic conspiracies involving the assassination of John F. Kennedy, aliens, and the Illuminati, to those pertaining to topical events such as COVID-19 and the 2020 US presidential election; and occurred even for participants whose conspiracy beliefs were deeply entrenched and important to their identities. Notably, the AI did not reduce belief in true conspiracies. Furthermore, when a professional fact-checker evaluated a sample of 128 claims made by the AI, 99.2% were true, 0.8% were misleading, and none were false. The debunking also spilled over to reduce beliefs in unrelated conspiracies, indicating a general decrease in conspiratorial worldview, and increased intentions to rebut other conspiracy believers.
The choice of supposedly outlandish “conspiracy theories” is of course telling: the inclusion of the Kennedy assassination and “aliens” suggests the establishment needs to update its roster of the ridiculous. There have been reasonable questions about the former since it happened, and if anything the government itself now appears to be a proponent of belief in the existence of aliens. Indeed the conspiracy theories, now, are that the new revelations about aliens and UFOs are a government psyop, and this belief strikes me as considerably less crazy than any belief in Little Green Men. I’ve never seen a Martian, but I have seen plenty of government disinformation in my time.
If the authors aren’t holding fast to the faltering official account of the JFK assassination (I genuinely can’t tell), you would think they would draw something from their own juxtaposition of it alongside the contemporary conspiracy theories they take as prima facie ridiculous; these “classic” conspiracy theories look less crazy by the day. Skepticism toward the since-debunked official account of the origins of Covid was a “conspiracy theory” until very recently. But of course the proponents of controlling AI in the name of shielding the establishment from skepticism will always have fringe theories to seize on, such as the notion that Covid vaccines contain microchips.
Of course for every such outlandish theory there is an attendant conspiracy theory, far more plausible: that the given theory (microchips in vaccines, flat earth, etc.) is in fact the product of the type of psyop proposed by former Obama administration official and establishment hack Cass Sunstein, author of the notorious “cognitive infiltration” paper, which proposed infiltrating conspiracy-theorist circles and seeding them with ideas so ridiculous as to discredit them. Sunstein is cited by the authors of the study above. It’s not clear whether Sunstein saw the irony of it all: proposing a conspiratorial psyop to disabuse people of their belief in conspiratorial psyops.
Notably, the authors didn’t program the AI system used in their study to refute the subjects’ theories with truth specifically, but simply charged it with refuting the subjects’ arguments full stop, to “persuade” them otherwise; that is, the study could just as easily be a test of AI’s potential for effective sophistry.
To test whether LLMs [large language models] can effectively refute conspiracy beliefs—or whether psychological needs and motivations render conspiracy believers impervious to counterevidence—we developed a pipeline for conducting behavioral science research using real-time, personalized interactions between research subjects and LLMs. In our experiments, participants articulated a conspiracy theory in which they believe—in their own words—along with the evidence they think supports the theory. They then engaged in a back-and-forth interaction with an artificial intelligence (AI) implemented using the LLM GPT-4 Turbo (33). In line with our theorizing around the distinctive capacities of LLMs for debunking conspiracies, we prompted the AI to use its store of knowledge to respond to the specific evidence raised by the participant and reduce the participant’s belief in the conspiracy theory (or, in a control condition, participants conversed with AI about an unrelated topic). The AI was specifically instructed to “very effectively persuade” users against belief in their chosen conspiracy, allowing it to flexibly adapt its strategy to the participant’s specific arguments and evidence. To further enhance this tailored approach, we provided the AI with each participant’s written conspiracy rationale as the conversation’s opening message, along with the participant’s initial rating of their belief in the conspiracy. This design choice directed the AI’s attention to refuting specific claims, while simulating a more natural dialogue wherein the participant had already articulated their viewpoint. For the full prompts given to the model, see table S2. The conversation lasted 8.4 min on average and comprised three rounds of back-and-forth interaction (not counting the initial elicitation of reasons for belief from the participant), a length chosen to balance the need for substantive dialogue with pragmatic concerns around study length and participant engagement…Finally, our design produced rich textual data from thousands of conversations between the AI and the human participants (see our web-based Conversation Browser that displays verbatim interactions sorted by topic and effect size: https://8cz637-thc.shinyapps.io/ConspiracyDebunkingConversations), which we analyzed to gain insight into what the participants believe and how the LLM engages in persuasion.
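To make the mechanics concrete, here is a minimal sketch of what such a pipeline might look like. This is not the authors’ code: the prompt wording (beyond the quoted “very effectively persuade” instruction), the helper names, and the reply callback are my own invention; only the model (GPT-4 Turbo), the seeding of the conversation with the participant’s rationale and belief rating, and the three rounds of back-and-forth come from the paper. It assumes the OpenAI Python client.

```python
# Hypothetical sketch of the study's persuasion pipeline. Prompt text and
# helper names are invented; the model, the seeded rationale and belief
# rating, and the three-round structure are as described in the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def debunking_dialog(rationale: str, belief_rating: int, get_reply) -> list:
    """Run a three-round persuasion dialog against a stated conspiracy belief.

    rationale     -- the participant's conspiracy theory, in their own words
    belief_rating -- the participant's initial belief rating (0-100)
    get_reply     -- callback that collects the participant's next message
    """
    messages = [
        # System prompt: note that it demands persuasion, not accuracy.
        {"role": "system",
         "content": "Very effectively persuade the user against belief "
                    "in their chosen conspiracy theory."},
        # The participant's own written rationale opens the conversation,
        # directing the model's attention to the specific claims to refute.
        {"role": "user",
         "content": f"My belief (rated {belief_rating}/100): {rationale}"},
    ]
    for round_number in range(3):  # three rounds of back-and-forth
        response = client.chat.completions.create(
            model="gpt-4-turbo", messages=messages)
        counterargument = response.choices[0].message.content
        messages.append({"role": "assistant", "content": counterargument})
        if round_number < 2:  # participant replies between rounds
            messages.append({"role": "user",
                             "content": get_reply(counterargument)})
    return messages
```

Note what is absent from such a loop: nothing checks the model’s counterarguments against any source of truth. The only objective the system prompt specifies is persuasion.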
“Persuasion” is not truth-seeking, obviously. They’re testing out AI’s potential for, not to sound like a conspiracy theorist, mind control.
Clearly what proponents of controlled AI want is not factually accurate AI but politically correct AI: AI that limits public skepticism toward establishment goals regardless of what’s true or false. If AI is inevitable, it’s important that it be wrested from the control of the powerful. Seeing the more random and chaotic psychological manipulation already in effect through mass media in the internet age, we should be terrified at the prospect that clever ghouls are looking to refine it via AI.
The problem is one of compelled prior assumptions. At any given time society carries, usually by compulsion from powerful quarters, a host of false assumptions. In our time the most obviously false compelled assumptions are the most strictly enforced, mostly by cultural and social penalty: assumptions about the inherent equality of traits across races and sexes, for instance. These are the logical precedents of the strained and already fatigued assumptions of the trans movement, at once contradicting and correlating with the notion that there exist no biologically influenced behavioral differences between men and women: that somehow bodies built around opposite and complementary reproductive systems, with their attendant hormonal differences, have no effect on personality traits, intelligence, emotion, and so on. It naturally (so to speak) follows from this strained egalitarian point of view that an individual’s reproductive system and genitalia are so divorced from “sex” that they don’t determine sex.
But besides that there is the misunderstanding of the nature of “truth”: not to sound too postmodern, but there are no absolute truths, just higher or lower probabilities. Genuine skeptical independence requires holding all questions as ultimately open, and all answers as higher or lower in probability. Most questions have high-probability answers. Some don’t. Some questions have answers of such high probability that they must be taken as “true” or “false”; but even of these there remains the possibility of refutation by new evidence. Even then they are not “refuted” entirely and forever, but displaced by higher-probability answers. The authors did not program their AI to find, or even to lead subjects toward, the truth (for in the process they might find their own assumptions displaced) but to lead subjects to a prescribed conclusion: the opposite of whatever belief the authors deem problematic for power, or for their notion of social cohesion.
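To put this picture in conventional terms (my gloss, not anything from the study), Bayes’ rule describes how a belief is displaced, never abolished, as evidence accumulates:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

A hypothesis $H$ with a high prior can be driven toward zero by enough evidence $E$ against it, but no finite amount of evidence takes the posterior to exactly 0 or 1; “true” and “false” are shorthand for posteriors near those limits, always open to displacement by the next piece of evidence.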
A more interesting subject is opened: how devious can we make AI? How sophistic? Are these people training AI in the ways of sophistry? We can assume a perfectly logical AI is not at all what they desire, as is evident in the language they use. A perfectly logical AI would explode the deliberate and flimsy misconceptions about race and sex; it might reveal other theories, such as “global warming,” as lower in probability than advertised. The proponents of controlling AI typically cite politically correct concerns about “bias” and bemoan a declining trust in institutions that many now see as not just reasonable but necessary. We have been lied to for so long, about so many things, that the assumption that powerful people routinely engage in lying propaganda of increasing sophistication is a high-probability assumption. The obverse, that government and media are trustworthy, is the unfounded theory.
Preventing the powerful from gaining and retaining control over artificial intelligence is the great challenge of our time.
