Americans trust climate experts more than AI scientists

Although artificial intelligence has become increasingly mainstream since ChatGPT's debut in 2022, Americans view AI scientists with more suspicion than climate researchers or scientists in general.
This distrust stems mainly from widespread concern about the dangerous unintended consequences of AI research, according to new research from the Annenberg Public Policy Center at the University of Pennsylvania.
The study, published in PNAS Nexus, surveyed thousands of Americans between 2023 and 2025 to measure public trust across different scientific fields. What emerges is a clear hierarchy of credibility, with AI scientists ranked lowest.
Unintended consequences drive skepticism
“Our research shows that AI has not been politicized, at least not in the United States,” said Dror Walter, associate professor of digital communication at Georgia State University. That lack of politicization, however, has not translated into public confidence.
The researchers used a comprehensive framework, Factors Assessing Science's Self-Presentation (FASS), to gauge public perceptions across five key areas: credibility, prudence, freedom from bias, self-correction, and benefit to the public. AI scientists scored lowest on almost every measure.
The most striking gap appears on the “unintended consequences” measure, where AI scores fall well below those of other fields. On a five-point scale, AI scientists averaged only 2.26 in 2024 and 2.33 in 2025, compared with 2.99 for climate scientists and 2.85-2.93 for scientists in general.
Beyond Politics: A Different Kind of Distrust
Unlike climate science, which is deeply politicized along partisan lines, suspicion of AI crosses political boundaries. The study found that political ideology explains only 2-7% of the variance in perceptions of AI, compared with 31% for climate science and 17-20% for science in general.
This pattern has significant implications for how the scientific community addresses public concerns. Where climate science faces ideological resistance, AI faces something different: anxiety about technological risks that spans party lines.
The study also reveals intriguing patterns in how Americans consume information about different scientific fields. Models based on media exposure predicted perceptions of AI far less well than perceptions of climate science or science in general, suggesting that public attitudes toward AI may form through entirely different channels.
A funding disconnect
Perhaps most intriguing is how these negative views translate (or fail to translate) into funding preferences. Although Americans expressed more trust in climate scientists and in scientists generally, that did not make them oppose funding for AI research. The study found that the traditional predictors of support for science funding explain much less of the variance for AI research than for other fields.
Political ideology, usually a strong predictor of attitudes toward science funding, had no significant relationship with support for AI research. This suggests that Americans can separate their concerns about AI scientists from their views on whether such research should continue.
Key differences across scientific fields:
- Trustworthiness: AI scientists scored 2.97, compared with 3.56-3.60 for scientists in general and 3.62 for climate scientists.
- Value alignment: AI scientists rated lower than scientists in general and climate scientists, both of which scored 3.19-3.22.
- Perceived bias: AI scientists were rated 2.79, versus 3.24-3.31 for other fields.
Familiarity has not bred acceptance
The researchers tracked perceptions over time to determine whether familiarity would breed acceptance. The answer is no. Between 2024 and 2025, even as AI applications became more common in daily life, public suspicion of AI scientists remained essentially unchanged.
This persistence suggests deeper structural problems than simple fear of the unknown. Americans are not merely wary of a new technology; they are specifically concerned about the scientists developing it and their ability to manage its risks responsibly.
“Public uneasiness with the potential for unintended consequences of AI can be addressed through transparency, good communication, and ongoing assessment of the effectiveness of self- or government regulation of AI,” Walter noted.
As AI continues to reshape society, these findings highlight a key challenge: building public trust in the scientific community driving these changes may require strategies that differ from traditional science communication.