The Crisis of Expertise and How Universities Can Fix It
Restoring the legitimacy of expertise in a distrustful age
The Trump administration’s crackdown on Harvard for failing to “live up to both the intellectual and civil rights conditions that justify federal investment” and similar measures directed at other universities have sparked fierce reactions. Critics view the actions as government overreach that threatens academic freedom and autonomy, part of an anti-intellectual crusade that risks hollowing out the nation’s expert class. Supporters counter that universities have forfeited their credibility by prioritizing political activism over true expertise and by flouting civil rights laws through race-based hiring and admissions. If our “expert class” has not justifiably earned its credibility, reform is essential to ensure proper use of taxpayer funds.
As in most polarized debates, both sides have a point. Universities may well have violated laws. Perhaps even more consequentially, their intellectual failures have undermined confidence in expertise itself. Yet it is important that whatever corrective actions are taken leave behind not a vacuum of expertise, but a restoration of public trust in intellectual authority where such authority is deserved.
COVID-19 and the Loss of Trust in Experts
No single event has done more to discredit so-called experts than the policy response to COVID-19. Heads of agencies and academics demanded compliance with measures that they promised would reduce deaths and infections, but that ultimately proved ineffective at best and, at worst, more harmful to the general public than COVID itself.
The extent and magnitude of what the anointed experts got wrong is staggering. To name just a few examples: lockdowns didn’t work, cloth masks were ineffective, N95 masks made no real difference either, the COVID-19 vaccines did not stop transmission of the virus, the vaccines were not harmless for all age cohorts, and school closures did not “protect” kids, who were largely unaffected by COVID. The school closures simultaneously imposed significant learning loss, with children in poverty suffering the most.
These decisions imposed by experts were not even grounded in science. As Hoover Institution Senior Fellow and former member of the White House Coronavirus Task Force Dr. Scott Atlas explained in an interview with the Stanford Review, these decisions were not derived from an idea of settled science: “Not only was the policy [of lockdowns] incorrect, but it was also contrary to best practices of pandemic management.”
So how could the expert decisionmakers have made such incorrect recommendations? A charitable interpretation is that they doubted the validity of science’s own conclusions, or perhaps they wished to help politicians show the public that the government was taking COVID seriously. A less charitable view is that some decisions might have been affected by conflicts of interest. For example, former NIAID head Anthony Fauci has come under scrutiny due to his ties to gain-of-function (GOF) research and his relationship with the Wuhan Institute of Virology, which at this point is the most likely place from which the pandemic originated. Despite President Obama’s pause of GOF research in 2014, Fauci continued to approve grants to EcoHealth Alliance to fund GOF research, including projects related to coronaviruses at the Wuhan Institute of Virology.1
This evidence is certainly suggestive as to why people like Dr. Atlas might have been ignored in the policy response deliberations. As AEI Senior Fellow Roger Pielke Jr. notes in a writeup on this whole saga, “Fauci [was] involved very early in the pandemic in helping to organize a small group of carefully-selected scientists to refute the possibility of a research-related origin of COVID-19.”
As if this were not enough of a blow to the slogan “trust the science,” the response from Washington, D.C. was, of all things, to pardon Dr. Fauci, and to do so preemptively, shielding him from prosecution for any potential crimes over the past decade. Compounding matters, President Biden signed this pardon and left the public with the outrageous comment that “our nation owes these public servants a debt of gratitude for their tireless commitment.”
Now place yourself in the position of a low-income parent whose child struggled with remote learning, or a parent who lost a child to suicide brought on by the isolation of lockdowns, or a business owner among the 430,000 Americans who lost their businesses due to the policy response. Why would you ever trust these people again?
It is inevitable that people will lose faith in “experts” when they see credentials being used to launder harmful political agendas. By behaving this way, the “experts” give themselves cover to say, “You must listen to me, as I am the expert.”
Dissenters were censored, decertified, and even dismissed as “anti-science”, pointing to something deeply wrong with the institutional incentive structures. The credentialed experts reached positions of prominence and then, insofar as they were publicly exposed for wrongdoing, protected themselves by vilifying the dissenters.
The scandals around COVID were so outrageous that they were ultimately exposed, even if the perpetrators have not yet paid any real price. But COVID is just one example, the tip of the iceberg. For example, we have previously covered here, here, and here the erosion of norms in economics research, and in public finance research in particular. This may seem niche relative to COVID, but we emphasize that this flawed research has supported a popular movement favoring socialism. These researchers insist that their policies are not only morally right but also justified on evidentiary grounds. Yet when they are caught playing fast and loose with the facts, they are still handsomely rewarded.
Why We Still Need (Real) Expertise
An uncomfortable truth is that many non-experts ended up capturing the risks of COVID more accurately than any of the public-facing epidemiologists at the CDC. This included comedian Joe Rogan, one of the world’s biggest podcasters.
Since COVID, Rogan has hosted various people to discuss other important questions of science, history, and current events. But, unlike during the COVID-19 pandemic, Rogan has increasingly relied on more suspect sources for alternative narratives on current events. Recently, he conditioned an appearance by journalist Douglas Murray to discuss the Israel-Palestine conflict (about which Murray has gained significant expertise) on allowing comedian Dave Smith to join him for a debate.
The absurdity of having to debate a comedian on the merits of the war, after spending over a year traveling to Israel and following the IDF during its operations in Gaza, did not escape Murray. Exasperated, he told Rogan at the start of the podcast that Rogan needed to have “more experts around” when discussing important questions. Smith indignantly responded that “the expert class hasn’t done a great job – this is ‘follow the science’ [all over again].”
The discussion was not only about the Israel-Palestine conflict but also about recent revisionist WWII “historians” Rogan has hosted, including Darryl Cooper, who has called Winston Churchill the “chief villain” of the war, claimed that the Allies rebuffed Adolf Hitler’s good-faith efforts for peace, and said that an image of Nazis in Paris was “infinitely preferable in virtually every way” to an image of drag queens showcased at the opening of the Summer Olympics in France.
Yes, many of our policymakers and intellectuals have been grossly wrong in self-serving ways. But that is no basis for exploring whether Hitler was misunderstood. And for all of Cooper’s revisionist history, he refused an invitation to debate the pre-eminent Churchill historian Andrew Roberts. So, despite Rogan’s contributions to exposing the COVID scandals, surely the entire concept of expertise cannot be thrown away. This is where it is important to make a few points.
First, while Joe Rogan was onto the absurdity early on and spoke about it at great personal risk, what made him valuable during the COVID era was not any scientific research of his own. Rather, Rogan was an effective aggregator of shunned doctors and researchers (i.e., other credentialed experts) who were being cancelled by the powers that be in their respective domains. Rogan gave these people a platform to explain where the current class of policymakers was going wrong. This was an imperfect process, but it ultimately helped Americans put coherent arguments to the absurdity of the policy response they witnessed around them.
Second, research from others who in fact had relevant expertise is what allowed people to piece together the policymakers’ missteps. Kristian Andersen, an evolutionary biologist and professor at Scripps Research, was among the first to identify qualities of the virus indicating that it may have been engineered in some way, and he became one of the early proponents of the “lab leak” theory. Additionally, Dr. Scott Atlas and Dr. Jay Bhattacharya exposed the recklessness of the policy response to the pandemic.
So, as much as we may loathe many of the appointed experts, this moment ironically also proves the importance of expertise. Absent the dissenters’ subject knowledge, no one would have had the necessary understanding of the research to fight back. Many who made crucial decisions had significant expertise about their respective fields, but they lacked the character befitting of someone in their positions of authority. Other experts were the ones who ultimately called them out.
No reading of Wikipedia can replace years of scholarship and true experience. This is precisely why it is so important that we restore confidence in the institutions that once conferred credibility onto those who walked their halls as faculty or learned from them.
How Universities Can Regain Public Trust as Certifiers of Expertise
So how can institutions that certify and assess expertise restore their credibility? This has become a much more complicated question with recent academic trends in mind, but it is the question university leadership should be asking.
In the summer of 2020, many academic institutions signed onto “Diversity, Equity, and Inclusion” (DEI) commitments. These commitments quickly became pervasive. For example, by August 2020, 90% of all law schools signed onto implementing new DEI commitments. This became a focus that manifested itself in the form of “diversity statements”, new courses and degrees, and broader school-wide agendas to “stamp out systemic racism.”
As we have come to learn, these initiatives forced both academics and students to make ideological concessions. For example, in one UC Berkeley hiring process, 76 percent of qualified applicants were excluded solely due to their diversity statements. To understand just how overbearing the criteria proved in recruiting new employees, consider that if you said you were open to all perspectives in your lab but failed to indicate that you would actively encourage diverse groups to participate, you would receive a score of 1–2 out of 5 on that aspect of the diversity statement. In other words, if you were insufficiently enthusiastic about these agendas, you were seen as unfit to work.
The great irony of all these efforts is that, if anything, they made the very problems proponents of DEI were allegedly trying to solve much worse. Recent research out of Rutgers took a closer look at these practices and ultimately concluded, “Across all groupings, instead of reducing bias, they amplif[ied] perceptions of prejudicial hostility where none was present, and punitive responses to the imaginary prejudice.” It should come as no surprise, then, that students are afraid to speak in such a suffocating environment. A December 2024 report from the Heterodox Academy showed that 96.2% of students “feared suffering at least one sanction if they were to discuss a controversial subject” at school.
Unfortunately, universities seem hell-bent on doubling down on these practices. Harvard and Princeton have made it clear that they will not address the concerns raised by the Trump administration. In terms of the credibility crisis, it is hard to imagine a worse way to proceed from an institutional standpoint. The public is rapidly losing regard for the credentials conferred by these institutions. Increasingly, students who successfully navigate this environment might be suspected of caring mainly about the ideological fealty needed for positive assessments, or, worse, of flagrant dishonesty. This is, of course, unfair, as many students may have taken courses that did not require such explicit duplicity; however, outside parties cannot definitively verify what the degrees imply.
Fundamentally, these institutions perform an elite-making function. Students leave these places with hopes of climbing the ranks of our most important universities, agencies, and private sector firms. So if leaders of elite universities are concerned about the degree to which the public has opted to ignore their minted experts in favor of alternative sources, we need to consider ways toward reform.
We have the following suggestions for major US academic institutions looking to be taken seriously by developing a new model that regains the trust of the public:
1. Eliminate DEI as a guiding principle of the university and replace it with a focus on merit, fairness, and equality of opportunity (see Abbot and Marinovic (2021))
2. Terminate all requirements for ideological litmus tests like “diversity statements” for admission to academic programs and for employment
3. End racial discrimination in admissions and employment decisions, in accordance with the Civil Rights Act and Supreme Court decisions
4. Offer greater transparency for admissions criteria and decisions to restore public trust
5. Sign the Chicago Trifecta, consisting of the Chicago Principles for Freedom of Expression, the Kalven Report on the importance of institutional neutrality, and the Shils Report on the importance of academic merit in faculty appointments
6. Insofar as the university prefers to reject these baseline conditions, accept a potential loss of federal funding
One can only imagine that a proactive approach to addressing the crisis of confidence would help institutions avoid a letter from the Department of Education landing in their mailboxes.
Conclusion
As long as political activism and narrative are prioritized above intellectual rigor and merit, the public’s loss of trust in experts will only grow worse. Trust will shift toward alternative sources of information in the form of self-trained journalists and podcasters. Such non-expert public figures will appear more transparent about their views and political aims than those who may have adopted political positions in pursuit of the title of “expert.”
The debate from last month between Murray and Smith will not be the last time we see the divide between those who desire to have an expert class and those who prefer alternative opinions from anyone willing to dissent from conventional orthodoxy. For those who side with Murray and worry about the complete collapse of the experts, the only way out is by reforming our institutions and their practices to regain the respect of the public. As John Adams once said, “Because power corrupts, society's demands for moral authority and character increase as the importance of the position increases.” Let’s hope our institutions recognize this before it is too late.