Deep research: What it is and how it can be a journalist’s AI secret weapon

When I was a reporter, I often took on topics I wasn’t very familiar with. There usually wasn’t someone on staff to ask about potential sources (if there was, they would’ve probably gotten the assignment), so my method for finding them was pretty simple: Google a bunch of articles on the topic, look at who was quoted and sounded like they knew what they were talking about, then track down those people myself.

These days, AI can do that Googling for you, finding several experts in just a few seconds, complete with biographical data and recent citations. However, there are issues with treating a chatbot like an “everything” search box. A simple query will produce an answer based mostly on surface-level results. It’ll likely be biased toward the easiest people to find and those who have been cited recently. And there’s always the risk of hallucination: the AI could invent people wholesale.

This is why I always turn to the deep research tools within AI services to perform searches for subject matter experts. If you’ve never used them, some basics: Deep research is a different mode of using AI, prioritizing thoroughness over speed. It’s meant for getting into the weeds on complicated topics, where the desired output is usually something that goes far beyond a simple, straightforward answer.

All of the major AI services offer a deep research mode (sometimes simply called “research”), and you can usually find it in the “+” icon or drop-down in your chat window. A paid account helps, but most services allow a few deep-research queries for free accounts. Just like models themselves, the quality of the tools can vary, but they all do more or less the same thing: take extra time (anywhere from 3–40 minutes) to do several searches to find the precise information you’re looking for (reading the entirety of relevant content, not just summaries), then assemble a document to your specification.

The first time you use AI to create a deep research report, you’re struck by how much time it saves you; a person doing the same task would take hours or even days to complete it. But then you realize any information you use needs to be verified (just look at what happens when you don’t), and you hit the human bottleneck: reading the report, marking it up and verifying facts all take time.

However, no one said you had to create a research report. That’s why deep research is often better used to create documents that are more skimmable, like spreadsheets, and that function as one part of a multi-step workflow. Building a list of experts sits dead center in that Venn diagram (a skimmable output that feeds a larger workflow), and it’s one of the top use cases I recommend to journalists in the AI courses I teach.

How to prompt deep research the right way

The key is getting your prompt right. Since deep research queries take extra time, and they’re typically rationed even when you do pay for them, you’ll want to spend some time working on a better prompt than “find experts plz.” Fortunately, the tools will ask you clarifying questions when you start with a vague query, but I find a better approach is to collaborate with AI in a regular chat to refine your prompt before you activate deep research.

Usually, you’ll start with something like, “I need to find experts in nuclear fusion for an article about startups in the field. I’m looking for people who can speak intelligently about both the technology and the funding of these companies.” From there, the AI can help guide you on the parameters that align with your priorities, which probably include things like credibility, topical focus and background diversity.

Even better, it can guide you on the output. With a list of experts, a spreadsheet makes the most sense, and you can get really detailed about what information you want to help you vet who will be the most useful to your story. This usually includes things like a short biography, their area of focus, recent citations, online presence, social media handles and public contact information.

An important caveat here: Public AI services have privacy safeguards built in, so they won’t surface emails or phone numbers that aren’t already posted publicly somewhere. However, most experts have some kind of public profile or at least a LinkedIn account where you can reach out. If the research comes up empty and you need very specific contact information, paid databases (like RocketReach, Muck Rack, Rolli and others) can help. And if the topic is scientific, SciLine is a good free resource.

AI often has trouble saying, “I don’t know,” so when finalizing your prompt, be specific about what the AI should do when it can’t find certain information. You might say, “For any cell in the spreadsheet where you can’t find information, return ‘NA.’” Although deep research tools, by nature of their thoroughness, are usually pretty good about not hallucinating, this can serve as a safeguard. 

Here’s an example prompt, using the nuclear fusion example, that you can try right now:

Act as an experienced investigative journalist compiling a credible source list on nuclear fusion and energy innovation.

Identify 10 subject-matter experts on nuclear fusion and related energy innovation who have had notable public activity from 2022 to the present.

Parameters:

- Expertise: Must come from academia, government, or private-sector startups.

- Activity: Should have published research, given interviews, or led projects within the past 3 years.

- Credibility: Exclude purely speculative thought leaders without institutional affiliation or publication record.

- Diversity: Aim for a range of voices across institutions, geographies, and disciplines (e.g., plasma physics, materials science, energy policy).

Provide results in a table with these fields:

- Full Name

- Title and Affiliation

- Sector (Academia, Government, Startup/Private)

- Notable Recent Work (e.g., papers, interviews, projects)

- Links to Source (media articles, LinkedIn, institution page)

- Contact Information (email, Twitter, LinkedIn if public)

For any field where data is unavailable, return “NA.”

Once your prompt is ready, pick your favorite AI chatbot, be sure you’re in deep research mode and fire away. Again, most services will still ask you a few questions to make sure they’re on the same page, and once you answer those, they’ll be off to the races. Gemini will actually give you a research “plan” (basically a more detailed prompt than the one you give it) that you can edit yourself for an extra layer of customization.

After several minutes, what it comes back with will be a thoroughly researched list of experts, with whatever notes you specified and ready for you to contact. What I like most about this kind of use case is that a spreadsheet or list is much more scannable than a full report, letting you immediately remove sources who aren’t a good fit and zero in on the ones that are. It’s also something you can easily share with a colleague or team.
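One way to skim that output even faster: export the table as a CSV and screen it with a few lines of code. This is a hypothetical sketch, not part of the workflow above; the column names and example rows are assumptions based on the prompt’s field list, and in practice you’d open the file you downloaded from the AI service.

```python
import csv
import io

# Stand-in for the exported deep-research table (invented rows).
data = """Full Name,Sector,Contact Information
Jane Doe,Academia,jane@example.edu
John Roe,Startup/Private,NA
"""

reader = csv.DictReader(io.StringIO(data))

# Keep only experts with some public contact info; "NA" means the
# research came up empty, per the prompt's instruction.
reachable = [row for row in reader if row["Contact Information"] != "NA"]

for row in reachable:
    print(row["Full Name"], "-", row["Contact Information"])
```

The same filter works for any column: swap in “Sector” to pull only academics, for example.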

Advanced deep research techniques

While the example I used involves the AI searching the web, deep research can also be targeted at your own data through connectors — integrations that connect the AI with services like Gmail or Dropbox. And that’s directly beneficial to the “finding experts” use case: Journalists, after all, are constantly receiving pitches for expert commentary, and their inbox can sometimes be a gold mine of untapped sources.

This level of access requires more consideration than your typical “log in with Google” button. If you decide to use connectors, I highly recommend turning off data sharing with the AI service (often called “improve the model for everyone” in Settings), which you can do even on a free account. Doing this before you connect your inbox ensures the AI company doesn’t use that data to train future models.

There are two ways you can approach the search: Run a separate deep research query that targets only your inbox (you can usually turn off web research when you’re “aiming” the query at a connector), then vet those results separately. Alternatively, you could search both places, the web and your inbox, in the same query, with specific prompting for each. I tend to favor the latter (it saves time and uses only one research query), adding an extra column to the spreadsheet that identifies whether each contact came from the web or your inbox.

The dual-query approach can be applied in a different way, too: To be extra thorough, you can take the same prompt and feed it into two different AI services (say, Perplexity and ChatGPT), then evaluate them to see where they cross over and where they diverge. That will give you some helpful insights:

  • It’ll give you a clue as to which service might be more thorough, which can guide future queries as well as which AI you might want to pay for.
  • The experts that appear on both lists will likely have the strongest visibility in AI search, which may be an indicator of credibility or at least media savviness.
  • The dual searches serve as an extra safeguard against hallucinations.
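The overlap check itself is simple enough to do without AI. A minimal sketch, assuming you’ve pulled the names out of each service’s spreadsheet (the names below are invented for illustration):

```python
# Names surfaced by each service (invented examples).
perplexity_names = {"Jane Doe", "John Roe", "Ana Silva"}
chatgpt_names = {"Jane Doe", "Ana Silva", "Ken Watanabe"}

# Experts both services found: strongest AI-search visibility.
overlap = perplexity_names & chatgpt_names

# Experts only one service found: worth a closer hallucination check.
unique = perplexity_names ^ chatgpt_names

print("Found by both:", sorted(overlap))
print("Found by only one:", sorted(unique))
```

Real exports will need light cleanup first (matching name spellings, titles like “Dr.”), which is exactly where handing both spreadsheets to an AI, as described below, earns its keep.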

Again, AI can be helpful with the comparison. Drop both spreadsheets into an AI and ask for its assessment. A grounded tool like NotebookLM (built specifically to answer questions based solely on the data you give it) can help ensure it doesn’t make anything up, though always remember to vet crucial results.

Going beyond the expert-list example, the ability to use deep research on a specific set of documents (say, in a folder on your Google Drive) can be a powerful tool for reporters. For a larger investigation, you could put all kinds of material — industry reports, court documents, financial filings, interview transcripts and even your own notes — into a single folder, then query the data with a deep-research query that extracts story ideas, hidden angles or even structured outlines for whatever content you want to create.

Deep research was one of the first AI tools to leverage thinking models in a targeted way, and it’s only gotten better as the models have improved. The fact that it’s generally underutilized is likely a symptom of its default output: a research report that takes forever to verify. But by targeting it at specific parts of a workflow and using techniques to create structured outputs and reduce hallucinations, journalists can use the tool’s thoroughness to complement their own.

AI transparency note: The lead art for this story was created by the Help Desk using AI tools on Canva.com.

Written by Pete Pachal

Pete Pachal is the founder of The Media Copilot, where he trains media professionals how to apply AI in their work. A former Mashable and CoinDesk editor, he writes about how AI is changing media for Fast Company and has appeared on CNN, Fox Business and The Today Show.