Two Is Better Than One: Lessons from Writing About Taylor Swift
Comparing a rapid, public-facing investigation to a slow, formal peer-reviewed study.
In Part 2 of my “Swift effect” reflections, I explore what I learned from tackling the same question in two very different arenas: popular media and peer-reviewed science. This is ultimately a piece exploring scientific communication, rather than one about statistics, which I covered in Part 1.
Taylor Swift!
There, I said it (again) — and with any luck, I’ve just lured in a few thousand extra readers who might otherwise scroll past my newsletter.
But in all seriousness, I did just publish a scientific study about Taylor Swift.
Today’s post isn’t about debating her relationship with Travis Kelce or rehashing the study’s findings; I did that in a recent post. Instead, it’s an insider’s look at how scientific analysis and communication work — and how the same question can take on a completely different life depending on where and how it’s published.
Here’s the question:
What happens when you ask the same thing twice — once in a fast-moving, public-facing arena, and once in the slow, methodical world of peer review?
That’s what I found out when I decided to examine the so-called “Swift effect” — the theory that Taylor Swift’s presence at Kansas City Chiefs games made tight end Travis Kelce perform better, and made the Chiefs more likely to win.
The Manuscripts
In October 2023, I saw plenty of basic stats being tossed around, confidently proclaiming the “Swift effect” was real. I wasn’t buying it. To me, it looked like random noise in the data dressed up as destiny — a few well-timed coincidences mistaken for cause-and-effect. So, I did what any self-respecting scientist (and mildly skeptical football fan) would do: I ran some statistics and wrote about it.
I ended up publishing two articles:
A RealClearScience (RCS) piece — written quickly, aimed at the general public, and published in November 2023, while the Swift/Kelce chatter was still dominating news feeds.
A PLOS ONE paper — a formal, peer-reviewed, open-access study that took about a year from submission to publication, finally appearing in September 2025.
Both reached the same conclusion: there’s no convincing evidence of a “Swift effect.” But the paths to get there couldn’t have been more different — almost like two versions of the same song. And that’s exactly what this post is about: how the same data can be analyzed and communicated in two entirely different ways, depending on the audience.
Speed vs. Slow Burn
When I saw the “Swift effect” narrative gaining steam — and even influencing sports betting lines — I knew timing mattered.
The RCS article took a few hours to research, write, and submit. It was published while people were still debating whether to bet money on the theory. If you were on the fence, you could read my piece during the NFL season and make an informed choice before kickoff — the perfect example of Speak Now science.
The PLOS ONE paper, in contrast, was a marathon. It first went to a couple of journals that rejected it without even sending it out for peer review. I suspect the reason was lack of fit — a statistical analysis of Taylor Swift’s effect on Travis Kelce, tinged with Easter eggs referencing her music, is not something most journals would be interested in, even if the methods are sound and the discussion is appropriate.
Finally, when I submitted it to PLOS ONE, the editors moved it forward. It then went through rounds of peer review, revision, and editorial delay (a standard part of the process). By the time it was published (September 19, 2025) — nearly two years after their relationship first made headlines — the Swift effect claims had cooled. That’s the reality of academic publishing: it’s slow. Really slow.
By the time the paper came out, another entire NFL season had passed.
Control Over the Narrative
In the RCS piece, I had complete creative control. I decided what to emphasize, how to explain the stats, and how many Easter eggs to hide. If my message was simply “no, despite media claims, there is no causal relationship,” that alone was sufficient. I could write it Fearless-ly.
In the PLOS ONE paper, I still had plenty of input — but peer reviewers helped shape the framing. It couldn’t just be “Does Swift affect Kelce’s performance?”; we had to show why this question merited space in the scientific literature. That meant reframing it as a case study in common research pitfalls — the academic equivalent of a Style remix.
The analysis, results, and conclusions were the same, but I needed to give the work a broader purpose beyond “the media made these claims, and I wanted to debunk them in the grandest way possible.” And to be fair, it is an excellent example to learn from — it’s accessible, it’s humorous, and I even provided the full dataset for download so educators or students can use it themselves.
Rigor & Trust
The RCS piece wasn’t peer-reviewed, but it was still objective, quantitative, and fact-checked. My editor expected accuracy and clarity, and I delivered. I trust RCS to publish high-quality articles, and this one was no exception.
The PLOS ONE paper was peer-reviewed — meaning other scientists actively tried to poke holes in it, and I addressed their critiques. Peer review doesn’t make a paper flawless (as I have written previously), but it does make the analysis more battle-tested. If you’re curious, I made the entire peer-review correspondence public — comments, critiques, and my responses — so you can see exactly how the sausage was made.
I’d estimate the PLOS ONE paper took at least 50 hours of work: coding, data analysis, writing, revisions, and formatting. Even after formal acceptance, I spent at least another hour on seemingly minor but mandatory style updates — for example, changing every mention of an “online appendix” to “Supporting Information” to match the journal’s format. And I spent at least one more hour correcting minor issues on the proofs: typos and grammatical errors I had missed until the last chance to fix them.
In this case, the two articles used different approaches but landed in the same place — likely because there really is no convincingly strong effect to find. (You Need to Calm Down if you think otherwise.)
Accessibility
The RCS piece was designed to be read in one sitting by anyone interested in science, sports, or pop culture — from casual readers to fellow scientists.
PLOS ONE is also open-access, meaning anyone can read it for free, but the statistical depth can intimidate non-specialists. Ironically, the “accessible” piece was free for both readers and me, while the “formal” piece was free for readers but cost me (or rather, my research budget) a hefty open-access fee.
Yes, that’s right — it cost money to publish the peer-reviewed paper. Making research available online for free, whether it’s about Taylor Swift or cancer treatments, isn’t costless. Someone has to pay the people who manage the journal and the infrastructure that hosts it. Even non-profits like PLOS have real expenses — open access just shifts the bill from the reader to the author. Enjoy that article on me — you’re welcome!
Why spend research funds to publish a paper about Taylor Swift? Because I thought it was valuable for the scientific record, and I like PLOS ONE’s philosophy: let the readers decide how impactful it is over time. (Long Live open science.)
Humor Factor
Humor flows naturally in popular writing — you can riff, wink, and still be taken seriously.
In peer review, humor is riskier — it may even have hurt the paper’s chances. How can something be scientifically rigorous if it has punchlines and Swift lyrics woven into it? But I wanted both rigor and a smile; humor was non-negotiable. Thankfully, the reviewers liked it, and I was able to keep it in. Still, I suspect it was one reason the paper was desk-rejected by other journals (though I can’t know for sure, since I received zero feedback from them).
Shared Outcome — and the Question of Bias
Both analyses concluded the same thing: Swift’s attendance didn’t significantly change Kelce’s performance or the Chiefs’ likelihood of winning.
Could my own bias — my belief that there was probably no effect — have influenced my methods? Possibly. Bias is human nature. But I believe the consistency here comes from the reality: there just wasn’t much to find.
Why Do Both?
Because each serves a different purpose.
Popular science reaches people when it matters. It can stop a bad idea from spreading — or at least slow it down — before it becomes established folklore.
Peer-reviewed research creates a permanent, citable record that other scientists can build on (or challenge). It’s slower, but it’s how knowledge gets formally archived.
And here’s the kicker:
By the time peer-reviewed rebuttals appear, public attention (and misinformation) may have already moved on.
That’s why scientists who care about public understanding need to play in both sandboxes. If we only communicate through peer review, we’ll be five steps behind.
Closing Thoughts
Studying the “Swift effect” may sound silly, but it turned into a rare opportunity to see the same question travel two entirely different publication pathways.
One was quick, free, conversational, and hit while the debate was hot. The other was slow, costly, statistical, and destined for academic archives.
Both have value. Together, they tell a fuller story — not just about football and pop culture, but about how we share knowledge, challenge bad ideas, and connect science to the public.
I’d love to hear your thoughts! How do you think scientists should balance rapid communication with rigorous peer-reviewed research?

That was an informative and accurate description of the two kinds of writing!
I've found since starting on Substack that I am burned out on the academic writing style, and I find the more casual and stream-of-consciousness Substack writing to be refreshing. (Though I'd probably miss academic writing, too, if I put it down for too long!)
When I do article critiques, I often get asked to publish a formal criticism (as a letter to the editor about the article, or even as a PubPeer post). Sometimes I do, but since it takes so much longer (even to do the initial rewrite when I already know the points I want to make), it's not always something I have capacity for! At the moment I'd rather spend the time on more Substack posts :)
We need both rapid and peer-reviewed publications (I almost wrote rabid!), written in different styles, but that is difficult because you'd need to switch mindsets. That said, there should be space for humorous writing in peer-reviewed articles.