We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for. Washington Post. https://www.washingtonpost.com/technology/2025/11/12/how-people-use-chatgpt-data/

Methodology: The Post downloaded 93,268 conversations from the Internet Archive using a list compiled by online research expert Henk Van Ess. The analysis focused on the 47,000 chat sessions since June 2024 in which English was the primary language, as determined with the langdetect Python library.
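The Post does not publish its pipeline, but the language-filtering step is straightforward to sketch with langdetect. The file layout and the "messages"/"content" field names below are assumptions about the archive format, not details from the methodology.

```python
# Minimal sketch of the English-filtering step, assuming conversations
# are stored locally as JSON files. The file layout and field names
# ("messages", "content") are hypothetical, not from The Post.
import json
from pathlib import Path

from langdetect import detect, DetectorFactory
from langdetect.lang_detect_exception import LangDetectException

DetectorFactory.seed = 0  # make langdetect deterministic across runs

def is_primarily_english(text: str) -> bool:
    """Return True if langdetect labels the combined chat text as English."""
    try:
        return detect(text) == "en"
    except LangDetectException:  # empty or undecidable text
        return False

english_chats = []
for path in Path("conversations").glob("*.json"):
    chat = json.loads(path.read_text(encoding="utf-8"))
    text = " ".join(m.get("content", "") for m in chat.get("messages", []))
    if is_primarily_english(text):
        english_chats.append(chat)

print(f"{len(english_chats)} English-primary conversations kept")
```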

A random sample of 500 conversations in The Post’s corpus was classified by topic using human review, with a margin of error of plus or minus 4.36 percent. A sample of 2,000 conversations, including the initial 500, was classified with AI, following the methodologies described in OpenAI’s Affective Use and How People Use ChatGPT reports and using gpt-4o and gpt-5, respectively.
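The reported plus or minus 4.36 percent matches the standard 95 percent confidence interval for a proportion (worst case p = 0.5, n = 500) once a finite-population correction for the roughly 47,000-chat corpus is applied. A quick check:

```python
# Verify the reported margin of error: 95% confidence interval for a
# proportion (worst case p = 0.5) over n = 500 sampled conversations,
# with a finite-population correction for the N = 47,000 chat corpus.
import math

N = 47_000   # English-primary conversations in the corpus
n = 500      # human-reviewed sample
z = 1.96     # 95% confidence
p = 0.5      # worst-case proportion

standard_error = math.sqrt(p * (1 - p) / n)
fpc = math.sqrt((N - n) / (N - 1))        # finite-population correction
margin = z * standard_error * fpc

print(f"margin of error: ±{margin:.2%}")  # ±4.36%
```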

The Post highlighted the following problems:

Privacy issues

Users often shared highly personal information with ChatGPT… People sent ChatGPT more than 550 unique email addresses and 76 phone numbers in the conversations.
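The Post does not describe how it extracted this personal information. A plausible approach is regex-based scanning of the transcripts; the patterns below are rough assumptions and will miss edge cases (real phone matching usually warrants a dedicated library such as phonenumbers).

```python
# Illustrative sketch of counting unique email addresses and phone
# numbers across chat transcripts. The regexes are assumptions, not
# The Post's actual extraction method.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
# Very loose North American-style phone pattern.
PHONE_RE = re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

def count_pii(transcripts: list[str]) -> tuple[int, int]:
    """Return counts of unique emails and phone numbers across transcripts."""
    emails, phones = set(), set()
    for text in transcripts:
        emails.update(m.lower() for m in EMAIL_RE.findall(text))
        # normalize phones to bare digits so formatting variants dedupe
        phones.update(re.sub(r"\D", "", m) for m in PHONE_RE.findall(text))
    return len(emails), len(phones)

n_emails, n_phones = count_pii(
    ["Reach me at jane.doe@example.com or (555) 123-4567."]
)
print(n_emails, n_phones)  # 1 1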

One user asked the chatbot to help draft a letter that would persuade his ex-wife to allow him to see their children again and included personal details such as names and locations.

OpenAI retains its users’ chats and, in some cases, uses them to improve future versions of ChatGPT. Government agencies can seek access to private conversations with the chatbot in the course of investigations, as they can for Google searches or Facebook messages.

Defaulting to yes and “endorsing falsehoods”

ChatGPT was often less of a debate partner and more a cheerleader for whatever perspective a user expressed. ChatGPT began its responses with variations of “yes” or “correct” nearly 17,500 times in the chats — almost 10 times as often as it started with “no” or “wrong.”
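The Post’s exact matching rules for response openers are not published. A simple tally of agreeing versus disagreeing first words might look like the sketch below; the opener word lists are assumptions for illustration.

```python
# Sketch of tallying how assistant replies open. The opener word lists
# below are assumptions, not The Post's published matching rules.
import re

AGREE = {"yes", "correct", "right", "exactly", "absolutely"}
DISAGREE = {"no", "wrong", "incorrect"}

def opener(reply: str) -> str | None:
    """Classify a reply by its first word: 'agree', 'disagree', or None."""
    match = re.match(r"\W*(\w+)", reply.lower())
    if not match:
        return None
    word = match.group(1)
    if word in AGREE:
        return "agree"
    if word in DISAGREE:
        return "disagree"
    return None

replies = ["Yes, that’s a great point!", "No, that claim is false.", "Hmm."]
counts = {"agree": 0, "disagree": 0}
for r in replies:
    label = opener(r)
    if label:
        counts[label] += 1
print(counts)  # {'agree': 1, 'disagree': 1}
```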

ChatGPT showed the same cheerleading tone in conversations with some users who shared far-fetched conspiracies or beliefs that appeared detached from reality.