Anthropic has pushed its Claude AI chatbot into the library with a new Research feature reminiscent of the Deep Research tools offered by both ChatGPT and Google Gemini. Though Claude has stood out for its conversational and reasoning abilities, a full, long-form research report is a different beast.
Claude’s Research feature works by processing a prompt multiple times to expand its results. It can pull from the Internet and any linked internal documents. Once the data has been collected and curated, Claude assembles the final report and adds citations to the answer.
But with all three now offering some version of AI-powered information spelunking, I wanted to play around with what they can do. I’ve made so many reports with Gemini and ChatGPT that I decided to pull from that repository for the prompts this time. So here’s how they stacked up.
One caveat is that Claude’s Research feature requires a Max, Team, or Enterprise-level subscription. Max costs $100 a month, so you have to really want Research (or have a friend who does) to test it out.
Astronomical interest
My first prompt came from an earlier test of ChatGPT’s Deep Research. In this case, it was a guide for a hobby I’m pursuing. The prompt was, “Provide an overview of beginner-friendly astronomy, including necessary equipment, recommended resources for learning, and local astronomy clubs or events in the Nyack, New York area.”
Claude gave me a very complete answer in the form of a solid starter kit. The AI ran multiple searches on topics like the best beginner equipment, learning resources, and local astronomy opportunities. The final result included an executive summary, followed by a breakdown of different ways of engaging with the hobby.
It included binoculars and telescopes and pointed me to practical resources like the Stellarium app, Sky & Telescope, and even local star parties. It surprised me by naming specific people in astronomy clubs nearby and mentioning events at the Hudson River Museum. It finished up with a TL;DR, just in case I didn’t actually want a long-form report.
ChatGPT’s response at the time made for a really nice guide to amateur astronomy. It covered telescopes, binoculars, and naked-eye stargazing, along with equipment recommendations, locations to visit, and even websites and apps for planning nights out and groups to join.
Gemini wasn’t too different, though it had a somewhat more academic tone than ChatGPT.
Flavor country
The second prompt comes from testing Gemini’s recent Deep Research upgrade, with an eye toward the kitchen. I asked the three AI chatbots to “Explain flavor pairing and expand on the science and culture of it and how to do it at home successfully.”
As I noted at the time, Gemini had a notably verbose response, even for a Deep Research report. It filled a lot of the report with the science of it all before going on a global tour of cultural food pairings. ChatGPT came in hot with a chemistry lesson. I got a mini-dissertation on aroma compounds, synergy in taste receptors, and the neurobiology of deliciousness. It also touched on culture and gave me a really helpful list of dos and don’ts for beginners.
Claude delivered a thoughtful, slightly bookish overview. It started with the Maillard reaction and talked about flavor molecules as if they were dating profiles—some meant to match, others better off apart. Then it drifted into cultural waters: cheese and wine in France, chili and chocolate in Mexico, soy and citrus in Japan.
I loved that it emphasized experimentation, telling me to pair strawberries with balsamic vinegar or coffee with orange zest. Its lists for home experimentation were brisker, but the TL;DR wrapped things up in a concise, pointed way.
Game time
For my final test, I went with something I keep telling myself I’ll do but haven’t gotten around to yet. I asked the AIs to “Teach me how to play mahjong well enough to win and assume I know absolutely nothing about the game right now.” I know mahjong has tiles, and those tiles mean something, but I don’t know much beyond that. So I decided to see what the AIs would come up with.
ChatGPT and Gemini clearly have access to a lot of mahjong information. Both went deep, starting with the game’s history, complete with context running from Qing dynasty China to Western adaptations and the game’s place in American communities. They broke down the different tile types like an annotated field guide and mapped out how the scoring worked. There was a whole section of tips on spotting patterns and defending against aggressive players, and ChatGPT even linked me to a downloadable, printable cheat sheet.
Claude took a much more direct route. Using its multiple searches, it pulled together a series of bullet-point lists and numbered instructions explaining the basics of the game, how the tiles work, and what a typical turn looks like. There were practical tips and even exercises to use to improve over time. It felt like the AI thought I was about to enter a tournament and needed an in-depth guide that wasn’t too bogged down in wordy explanations.
Researching Research
All three tools work. Really well, in fact. Claude’s Research feature is better than I expected. It’s not really analogous to the Deep Research tools, though.
Even with the multiple searches, it was far faster than ChatGPT or Gemini, finishing in a couple of minutes instead of spending seven to ten minutes on its hunt for information. It was more in-depth than a standard Claude answer, but that depth came from stitching together multiple queries rather than from plumbing every possible data source for its response.
Still, none of these tools alone is reason enough to buy a subscription. That goes quintuple for Claude Max, which costs five times as much as the $20-a-month option that doesn’t yet include Research as a feature.
When it becomes more affordable, I see it appealing to those who want something more comprehensive than a standard AI answer but not quite a 25-page report from either Deep Research option.