Paul Brenner

Senior Associate Director of the Center for Research Computing; Professor of the Practice

Center for Research Computing

Office
803 Flanner Hall
Notre Dame, IN 46556
Email
pbrenne1@nd.edu

  • Cybersecurity
  • High performance and cloud computing cyberinfrastructure
  • Research computing for scientific discovery
  • AI and agentic bot (chatbot) system safety
  • Cyberinfrastructure for national defense

Brenner in the News

A.I. Is Starting to Wear Down Democracy

Researchers at the University of Notre Dame found last year that inauthentic accounts generated by A.I. tools could readily evade detection on eight major social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X and Meta’s three platforms, Facebook, Instagram and Threads.

Airman Magazine

Digital Literacy: Vital Tools for the Great Power Competition

Paul Brenner, who is also a computing and data science professor at the University of Notre Dame, adds, “In both my military and academic work, I’ve seen how good data management transforms decision-making. Airmen need to be as proficient with data as they are with any weapon in their arsenal.”  

How to spot AI deepfakes ahead of Election Day

"As AI continues to grow and its capability and complexity, we have to recognize it will be impossible to discern which it was created from," Notre Dame Center for Research Computing Director Paul Brenner said.

ABC57 speaks with expert about social media platforms' efforts to stop harmful A.I. bots

ABC57 welcomed Paul Brenner, Director in the Center for Research Computing at the University of Notre Dame, to discuss whether social media platforms are doing enough to stop harmful A.I. bots. Brenner, author of the new research study, explains that the University of Notre Dame analyzed the A.I. bot policies and mechanisms of LinkedIn, Mastodon, Reddit, TikTok, X (Twitter), and Meta platforms, including Facebook, Instagram, and Threads.

Business Insider (India)

Even high school interns are able to use common AI tools to bypass security and launch bots on X, Facebook and other social media: report

"Despite what their policy says or the technical bot mechanisms they have, it was very easy to get a bot up and working on X. They aren't effectively enforcing their policies," said Paul Brenner, a Director at Notre Dame.

NewsGram

Social Media Platforms Aren’t Doing Enough to Stop Harmful AI Bots, Research Finds

New research from the University of Notre Dame analyzed the AI bot policies and mechanisms of eight social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X (formerly known as Twitter) and Meta platforms Facebook, Instagram and Threads. Then researchers attempted to launch bots to test bot policy enforcement processes.

Futurity

Social media policies are no match for AI bots

“As computer scientists, we know how these bots are created, how they get plugged in, and how malicious they can be, but we hoped the social media platforms would block or shut the bots down and it wouldn’t really be a problem,” says Paul Brenner, a faculty member and director in the Center for Research Computing at the University of Notre Dame and senior author of the study.

New Scientist

How to avoid being fooled by AI-generated misinformation

It has become much easier “to customise these large language models for specific audiences with specific messages”, says Paul Brenner at the University of Notre Dame in Indiana.

Tech Times

Social Media Users Find It Hard to Identify AI Bots in Political Discussions, New Study Shows

Paul Brenner, a faculty member at Notre Dame and the study's senior author, highlighted users' significant challenge in discerning between human and AI-generated content.

Tech Xplore

AI among us: Social media users struggle to identify AI bots during political discourse

"They knew they were interacting with both humans and AI bots and were tasked to identify each bot's true nature, and less than half of their predictions were right," said Paul Brenner, a faculty member and director in the Center for Research Computing at Notre Dame and senior author of the study.

Futurity

Social media users struggle to spot AI bots

Researchers at the University of Notre Dame conducted a study using AI bots based on large language models—a type of AI developed for language understanding and text generation—and asked human and AI bot participants to engage in political discourse on a customized and self-hosted instance of Mastodon, a social networking platform.