Warning: I do not specialize in digital literacy, and my understanding of such matters is limited. Still, I blog this for discussion and learning only.
Since 2020, libraries have made a commendable push to arm undergraduates with the tools to navigate our chaotic information landscape. We've actively taught them how to "fight fake news," moving from the checklist-style CRAAP method to the more dynamic, fact-checker-aligned SIFT framework. This has been a positive development, encouraging students to read laterally and try to verify claims before accepting them.
However, I have a nagging concern. I wonder whether, in our enthusiasm to teach these verification skills, we may be inadvertently fostering a kind of intellectual overconfidence. I’ve noticed a growing tendency among students, and even some librarians, to believe that any question or disagreement—even complex scientific issues with legitimate debate among experts—can be definitively resolved by simply “CRAAPing” and now “SIFTing” their way to a conclusion, all without significant domain expertise.
This is a failure of what has been called epistemic humility. The argument from Danah Boyd in 2017 (which predates SIFT) is that teaching media or digital literacy may embolden students to do what is now called “do your own research” — thinking they can resolve any issue by choosing among experts, or even by judging the evidence themselves, despite lacking deep subject expertise. (See my own reflections.)
I notice I am confused. Does using SIFT help reduce “do your own research” (DYOR) problems? DYOR and SIFT both celebrate autonomy, but in theory, using SIFT as a quick credibility check should usually lead you to the majority view. Yet I can also see it leading people to drift into DYOR territory if they hold strong views and are inclined to trust only certain sources.
I see this manifest in a couple of concerning ways. For instance, I have seen students attempt to settle a legitimate scholarly debate by comparing the h-indexes of the professors on opposing sides, assuming the higher number automatically denotes greater credibility.
While this is not a bad heuristic when one side is clearly not credible (e.g. one is a random blogger, the other a world-famous scientist), in general this is of course not how credibility is judged in academic circles.
See, for example, the condemnation from the academic community when Professor John Ioannidis published an article that dismissed an academic rival’s work (a meta-analysis on COVID-19 fatality rates that differed from his own) by attacking their publication record and academic status (one of the co-authors was a PhD student at the time).
More recently, I saw someone justify using an AI-generated answer as correct because two other LLMs produced a similar result. The idea? SIFT and lateral reading teach us that consensus across multiple sources is a sign of accuracy. This line of thinking dangerously overlooks the fact that these LLMs are not independent sources, as they are often trained on the same overlapping datasets.
This attitude—that any question can be answered without deep knowledge—is again a problem.
To be clear, this is not an indictment of SIFT itself. Mike Caulfield, the creator of SIFT, is certainly aware of when and when not to use SIFT and what it is meant for.
He often advises that the best response to a sensational claim is to simply wait for professional fact-checkers and journalists to do their work. His book, Verified, includes a nuanced discussion on the status of scientific issues, recognizing that they can range from settled consensus to legitimate debate between reputable experts. Discerning where a topic falls on that spectrum requires a bit of expertise.
Hilariously, he points out in his book that SIFT itself was once a minority position, back when everyone was using the checklist-based CRAAP, but librarians shifted over so quickly that by the time he was invited to speak to librarians at LOEX 2022, he was asked not to talk about SIFT because it was already too well known!
SIFT was never designed to adjudicate or settle scientific matters, particularly those in dispute. If you had applied it during Galileo's time, you would have concluded that the consensus was that the sun revolved around the Earth. And that’s okay. SIFT’s strength lies in quickly verifying discrete facts and identifying the general public consensus, not in settling scholarly or scientific frontiers.
This brings me to the crucial distinction that I believe we are failing to make in our library instruction: the difference between the two mindsets of "research."
The Information Consumer Mindset: This is for the undergraduate or layperson trying to understand a topic or verify an everyday claim. Often their goal is to identify the established, majority expert view or to dismiss a clearly fraudulent claim. For this task, SIFT is an excellent tool. A rigorously performed SIFT here helps one become a responsible, informed citizen.
The Academic Researcher Mindset: This is for the scholar, the graduate student, or even the advanced undergraduate doing genuine research. The goal here is not simply to accept the consensus but to question and assess arguments, build upon existing knowledge, and contribute new knowledge. In this mode, relying solely on consensus-finding tools is counterproductive. A rigorously done systematic review (which requires more expertise than many people think) might be considered the ultimate form of SIFT, but even its result is often a starting point for new inquiry, not the final word.
By not clearly delineating these two modes of thinking, we risk confusing our students. We equip them with a powerful tool for one context and watch as they misapply it in another, attempting to use verification techniques to solve problems that demand deep academic inquiry.
Our information literacy instruction needs to respect this distinction. We must continue to teach SIFT, but we must also teach its boundaries. We need to be explicit: "Here is the tool for being a smart information consumer. Now, for your academic research, let's switch gears. We will build on that foundation, but the mindset and the goals are different. Lastly, sometimes it is okay to say, “I don’t know” and wait for better evidence."
How can we refine our teaching to impart not just the how of verification, but the far more critical lessons of when, why, and—most importantly—when to stop and acknowledge the limits of our own knowledge?
This was written with the help of ChatGPT, converting my stream of thought + edits.