Can ChatGPT pitch your op-ed?
I was intrigued by the idea of AI helping me. But while it does some things really well, it can’t handle the nuances of opinion writing.
Thanks for reading Pitches Get Stitches. I’m Jake. My company, Opinioned, helps leaders publish op-eds in top-tier media. I previously built and edited the opinion section at Fortune.
In this newsletter, I dissect reader-submitted op-ed pitches from the perspective of an op-ed editor. I try not only to point out flaws but also to highlight common themes I’ve noticed over my career evaluating pitches and helping clients construct their own.
I would love it if you submitted a pitch for me to review. It can be something you’re actively working on or purely hypothetical—all are welcome.
If you have a pitch, reply to this email or send it to jake@getopinioned.com. If I select it, I’ll omit any identifying information and evaluate it in this newsletter—at no cost.
TrapGPT
I’ll admit it.
When I first started using ChatGPT, I wondered if it might save me some time on my work. I love brainstorming with clients and evaluating their pitches, but drafting and pitching can be a drag. I hoped that AI could speed up the writing process, provided I gave it a comprehensive outline and clear instructions.
I tinkered with it for months, and I wasn’t impressed. At this point, ChatGPT is not even close to being able to write a good op-ed pitch.
I understand that I might look biased, given that my company uses a highly tailored, personalized approach for producing great opinion articles. But I was only looking for efficiencies, not a workaround. And I can tell you that when it comes to op-eds, AI isn’t even making you more efficient.
If you’re skeptical of what I’m saying, try it yourself. I’d be curious to hear from you if you’ve found success using AI to generate pitches.
For this week’s newsletter, I thought it would be interesting to have ChatGPT write the pitch so I could demonstrate what I mean. (Spoiler alert: It’s still no good.)
On the surface, ChatGPT’s pitches look like the real deal. But if you look underneath, you can find a lot of problems.
I’m not trying to be hyper-critical of generative AI, a technology still in its early stages. I don’t, and shouldn’t, expect ChatGPT to do everything. I’m just trying to point out that if you want it to produce personalized, nuanced writing, you’re going to be disappointed.
Why did the chicken cross the road? To get you to subscribe.
Pitch
Unpacking the Shift: Why the 2024 Election Isn't Just About the Economy
For decades, the economic performance has been viewed as the linchpin of electoral success in U.S. presidential races. Yet, as we edge closer to the 2024 election, it's becoming increasingly clear that the electorate's priorities are evolving, with economic indicators no longer serving as the sole barometer of voter sentiment. In my proposed op-ed, I aim to dissect the underlying causes behind this significant shift, arguing that a constellation of social, technological, and environmental changes are redefining what matters most to American voters.
The reasons for the economy's diminishing dominance are multifaceted:
Pandemic Aftermath: The COVID-19 pandemic has been a watershed moment, altering perceptions of government responsibility, healthcare, and community welfare. The crisis highlighted the limitations of economic metrics alone in ensuring public well-being, propelling issues like healthcare access and social safety nets to the forefront of voters' minds.
Climate Crisis Awareness: The escalating climate emergency has galvanized a broad segment of the electorate, especially younger voters, for whom climate action is non-negotiable. This demographic shift underscores a growing realization that economic prosperity cannot be sustained without addressing environmental sustainability, pushing green policies up the electoral agenda.
Technology's Double-Edged Sword: Rapid technological advancements have reshaped every aspect of life, including work, privacy, and democracy itself. The techlash against big tech companies over issues like data privacy, misinformation, and job automation has spotlighted the need for regulatory frameworks that transcend traditional economic concerns, urging a reevaluation of what constitutes voter priorities.
Social Justice Movements: The rise of movements like Black Lives Matter has amplified calls for racial and social justice, spotlighting systemic inequalities that economic metrics often obscure. This shift reflects a broader desire among voters for policies that address equity and inclusion, beyond mere economic growth.
Globalization's Backlash: The benefits and drawbacks of globalization have led to a reconsideration of issues such as trade, immigration, and national identity. Economic arguments for globalization are being weighed against concerns about job security, community integrity, and national sovereignty, leading voters to prioritize a wider range of issues.
In my op-ed, I will delve into these dynamics, offering insights into how they collectively signal a departure from the traditional electoral focus on the economy. By exploring these trends, the piece will not only shed light on the evolving landscape of American politics but also stimulate a broader discussion about the future direction of political discourse and policy-making.
This analysis is intended to challenge our preconceptions and spark a dialogue about the complex web of issues that will influence the 2024 U.S. presidential election. I believe this perspective will resonate with your readership, providing them with a deeper understanding of the forces shaping our political future.
Thank you for considering this pitch. I look forward to the opportunity to contribute to your publication's exploration of these critical issues.
Stitches
It doesn’t actually save time
From the get-go, ChatGPT isn’t really going to understand what you’re looking for in an op-ed pitch. It’s normal for AI software to need adjustments before it produces the outputs the user is looking for. But be prepared for that to take more time than you wanted.
For me, this pitch took about eight tries to get into shape for this newsletter. Half of those tries were me trying to calibrate the formality of its tone—and I eventually gave up.
The final version in this newsletter is too formal; I kept it because when I asked for a more informal tone, the pitch just sounded ridiculous. It reminded me of a ‘90s Nickelodeon commercial, with phrases like “Here’s the scoop” and “Fresh New Op-Ed Idea.”
As for the other adjustments, I had to ask ChatGPT a few times to give me more detail on the reasons why the economy was going to be comparatively less important to voters in the upcoming election. It seems that by default, it assumes a pitch should contain only a preview of the piece, without an explanation of the rationale behind the argument. If you’ve ever heard my spiel, you know how much I believe that a pitch should be more of an entrée than an appetizer.
Regressing toward the mean
When I read AI-produced content like the pitch in this newsletter, I often feel like I’m back in the days of hiding behind my mom at a Chuck E. Cheese’s, afraid to get too close to the life-size animatronic dolls on stage. There’s something weird and uncanny-valley about it that I don’t like.
Maybe the best way to explain it is that AI-written op-ed pitches are too normal. Because they don’t have any noticeable imperfections and are written in an overly formal style, they lack the soul and excitement of a good pitch written by a human.
Consider the following sentence:
By exploring these trends, the piece will not only shed light on the evolving landscape of American politics but also stimulate a broader discussion about the future direction of political discourse and policy-making.
I don’t need a detector to tell me this is written by AI.
Why? Because it sounds like the way someone in a high-stakes job interview talks as they begin to crack under pressure, exaggerating formality in an attempt to sound more qualified.
Stilted language like this isn’t a big problem for an internal memo or an instruction manual. But when you’re writing op-eds, it’s crucial that you bring out your humanity as much as possible. If you don’t touch an emotional nerve with your readers, you won’t succeed.
Editors spend a large portion of their time fielding and rejecting an unending stream of unoriginal and uninspired pitches. They see the same lines over and over—“I think this would be a good fit” and “This is relevant to X news story.” It’s exhausting.
One of the best ways to shake them out of their stupor is to deliver something offbeat, fun or surprising. Generative AI systems aren’t built to do that.
AI is out of touch
Doesn’t it seem a little strange that ChatGPT’s pitch, which is supposed to be about the 2024 U.S. presidential election, doesn’t mention Joe Biden or Donald Trump?
From what I understand, the version of ChatGPT that I use, GPT-4, is trained on information up to April 2023. That sounds fairly up-to-date, but the pitch it produced makes me wonder how capable it is of utilizing that newer information. The knowledge base of the model behind ChatGPT’s original public release, GPT-3.5, cut off in September 2021.
The meat of the pitch is the five issues that will be more important to voters than the economy in 2024: the pandemic, climate change, the backlash to big tech companies, social justice movements and the backlash to globalization. These issues are all prevalent today, but they were most prominent in the public consciousness in September 2021. This makes me think that when ChatGPT is synthesizing public information into its responses, it’s relying on out-of-date information.
I can’t prove this, and I’m not trying to state anything definitive. But I am saying that when you’re using AI to produce something media-relevant, you’d better double-check that it actually is.
Shaky logic
Does this pitch really explain why these other issues have superseded the economy in the minds of voters? I don’t think so.
On the surface, it appears that it does. Each of the five bullet points listing the “reasons for the economy’s diminishing dominance” mentions its issue in relation to the economy.
The globalization bullet does a good job explaining the link between the two issues:
The benefits and drawbacks of globalization have led to a reconsideration of issues such as trade, immigration, and national identity. Economic arguments for globalization are being weighed against concerns about job security, community integrity, and national sovereignty, leading voters to prioritize a wider range of issues.
Here, you can clearly see the interplay between the two issues. According to the author, opening up borders and markets has had real consequences for trade and migration that globalization’s benefits don’t justify. You may or may not agree with that point, but you can’t argue that there’s no cause-and-effect relationship.
Contrast that with this line from the first bullet, on the pandemic:
The crisis highlighted the limitations of economic metrics alone in ensuring public well-being, propelling issues like healthcare access and social safety nets to the forefront of voters' minds.
It’s not clear to me which economic metrics the pitch is referring to that have been, until now, “ensuring public well-being.” And I don’t understand how the “limitations of economic metrics” propelled “issues like healthcare access and social safety nets to the forefront of voters’ minds.”
Think about that for a second. What is the connection between these two concepts?
It looks to me like the pitch is stringing together a few public-health buzzwords to manufacture a connection to why voters care less about the economy now. It looks real, but the link isn’t actually there.
It can’t think for itself
The last issue highlights one of ChatGPT’s most dangerous shortcomings: It can’t push back.
I’ve found that when you tell generative AI to do something, it will almost always fall over itself to comply—regardless of whether the result is accurate or useful.
About a year ago, I asked Bing’s AI chatbot to find me information about New York City mayors’ views on a local issue. It returned a bunch of quotes from mayors dating back to Ed Koch with links to books where the quotes were located.
Nearly all of the quotes, and the books they were supposedly from, were made up.
The problem wasn’t so much that Bing was giving me wrong information; after all, every chatbot now warns you that its information might be unreliable. The real issue was how confidently and convincingly it presented that information. If I hadn’t checked the sources, I might have used it in a client project and looked like an idiot.
When using generative AI, it’s important to be aware of whether you’re trying to force an answer out of it that it’s not qualified to give. It takes a lot of playing with these systems to get a good feel for when they’re hallucinating.
At the moment, I don’t trust ChatGPT enough to rely on it for pitches. I don’t know if it’ll ever get there.