The vast majority of scientific knowledge is taught, shared, learned, and believed on some basis of authority. This passing on of scientific knowledge on authority occurs in many ways and in many places. It occurs within the formal domains of researchers and students of the sciences. It occurs within the informal domains of armchair science enthusiasts, promoters, and TED Talk lovers. Yet this epistemological convention is often overlooked, and sometimes ignored outright. Likely this is because the sciences have gained such a high reputation (to a great extent, deservedly) on the grounds that their knowledge is supposed to be supported by the observation and experience of at least someone, somewhere, at some time. And I’m certainly not suggesting that we should always mistrust the findings of research scientists who speak as authorities, especially if we determine that they are trustworthy through our own careful scrutiny and crosschecking. But we should notice, at the very least, how much scientific knowledge is shared and accepted on some basis of authority.
If this surprises you, just think about the ways a science student accumulates knowledge over the course of their studies: they’ll have the odd lab, the odd research assignment, maybe they’ll even go on to specialize and perform some new research of their own, accompanied by a thesis. But at every stage, the vast majority of knowledge that they gain in school doesn’t come through their own direct observation and experience, but through some other source—perhaps in the form of a textbook, an article, a professor, a peer, a supervisor, a literature review, and so on. Likewise, even a research scientist will have gained a very slim percentage of their total scientific knowledge through their own direct observation and experience over the course of their career.
In the natural sciences, as in many human endeavours, we depend on the work of others to an astounding degree. One reason we depend on others is that we can accomplish greater things together than we can separately. This is one of the great stories of human civilization. So, as in many of our communal endeavours, scientists strive to work together, building on the work of their predecessors and peers, in the shared hope that what they achieve will benefit humanity. Accordingly, researchers share their work through books, journals, lectures, conferences, conversations, and so on. When we (scientists and armchair enthusiasts included) unreservedly accept knowledge from sources like these, we do so under the impression that they are trustworthy sources, and thus trustworthy authorities.
Hopefully we take the time and effort to determine that authoritative sources of knowledge are worthy of our trust, by carefully crosschecking the knowledge they share against our own reason and experience, and against other authorities. But often we don’t, because it’s possible, even easier, not to bother with these sorts of rigours. And realistically, if we thoroughly assessed the sources of every single bit of knowledge we’ve embraced, we wouldn’t get much done; it would take multiple lifetimes to do this exhaustively. So, by necessity, we exercise measures of trust in many ways and places in life.
When we are presented with some scientific knowledge, we have a series of choices about who and what we will place our trust in, and how much trust we will exercise. Hopefully we do some honest digging and thinking and crosschecking before exercising our trust—but it’s surprisingly easy to skip over this, because it’s not like sloppy-learning-alarms will immediately begin to blare, giving us away, if we don’t. So, for instance, we might choose to trust the findings we’re presented with at face value, no questions asked. Or we might inquire about the researcher and then choose to trust in their impartiality, in their guiding assumptions and hypotheses, in their critical capacities, in their analysis of their findings. Or we might choose to trust them because of their academic credentials or because of the reputation of their institution (which are often enough to elicit the average person’s trust, these days). Or we might press further and then choose to trust in the reliability of empirical methods, and trust that such methods were applied in a careful, controlled manner. Or we might choose to trust the findings because we trust the people who participated in the peer-review process. Or we might choose to trust in all of the people, assumptions, methods, analysis, and selective publishing that’s involved, perhaps even imagining that such a finely-tuned-research-machine will guarantee that the knowledge it produces will be without error.
Many who like to think of themselves as ‘scientific’ cringe at words like ‘trust’ and ‘believe’ and ‘faith,’ usually because they assume that they themselves are beyond such silliness. But trust is, to varying extents, what we functionally exercise whenever we accept experience-based, empirically-tested knowledge that we did not personally experience and test ourselves. And an authority is not automatically bad, as some seem to assume. It is possible for an authority to be good or bad, trustworthy or untrustworthy. But it remains an authority either way. And this is what needs to be noticed more often and more readily.
At this point, some object. Some object because they believe that scientific knowledge is guaranteed to be free from errors and mistakes; that scientific knowledge is strictly hard facts with no unnecessary fat or filler. Because isn’t there a rigorous peer-review process performed within the scientific community that ensures that ‘scientific knowledge’ shared on authority is totally true and trustworthy? Yes, the peer-review process aims to reduce errors. It aims to do this by subjecting a project’s hypothesis, methods, findings, analysis, and so forth, to additional checks. And it is a good system of scrutiny. But we would be naively optimistic to think that our scientific assumptions, methods, knowledge, theories, and paradigms are guaranteed to be infallible, since, like it or not, they are the products of fallible human beings. What’s more, it would be sloppy of us not to see the amount of interpretation, imagination, conjecture, hypothesizing, metaphor, and even storytelling that goes into our supposedly rigorous scientific endeavours.
The bottom line is, we trust in the work and findings of others in many, many ways, even in the sciences. When we do this, the person or group or textbook or article or TED Talk video or whatever else becomes an authoritative source of knowledge for us. It is incredibly important to notice the pervasiveness of this convention of knowledge sharing, since ‘science’ is often treated with a deep kind of piety these days. Accordingly, scientists are often functionally revered as our current high priests, as holy people who possess holy knowledge that is wholly perfect and beyond questioning. It should be emphasized that honest, hard-working scientists do deserve a great deal of respect. The work they do is incredibly valuable and they deserve to be esteemed for it. But they should not be treated as infallible authorities, since they, like us all, are limited, fallible, complicated human beings. Yet somehow ‘science’ has often gained a powerful ethos that does not always match up with the way things actually are. What’s more, it is ironic when we treat scientists like a holy order, since scientific methods and sensibilities are supposed to guard against piously revering people in positions of prestige and power.
What’s needed is for science to be seen for what it is: the systematic study and accumulation of knowledge about nature undertaken by groups of fallible humans who greatly rely on the cumulative work performed by their predecessors and peers.