- What Happened: OpenAI employees flagged Jesse Van Rootselaar's ChatGPT activity describing gun violence scenarios in June 2025, debated calling police, and were overruled by leadership before Van Rootselaar killed eight people in Tumbler Ridge, British Columbia in February 2026.
- Why It Matters: OpenAI banned the account but said the activity did not meet its threshold for law enforcement reporting, citing privacy concerns, even as roughly a dozen employees pushed to alert authorities.
- Bottom Line: Canada's AI Minister summoned OpenAI to Ottawa and said all regulatory options are on the table after it was revealed the company did not disclose its prior knowledge to government officials even at a meeting the day after the shooting.
Eight people died in Tumbler Ridge, British Columbia. OpenAI knew something was wrong EIGHT MONTHS before the first shot was fired. It banned an account and alerted no one.
A Wall Street Journal investigation revealed that Jesse Van Rootselaar, the 18-year-old who murdered his mother, his younger brother, a teacher, and five students at Tumbler Ridge Secondary School on February 10 before taking his own life, had spent several days in June 2025 using ChatGPT to describe detailed scenarios involving gun violence. OpenAI's automated abuse detection system flagged the account. The account was banned. And then roughly a dozen employees debated whether to call the police.
Leadership said no.
> 🚨🇺🇸 BOMBSHELL: OPENAI EMPLOYEES WANTED TO ALERT POLICE ABOUT CANADA SHOOTER MONTHS BEFORE MASSACRE, COMPANY SAID NO
>
> Jesse Van Rootselaar was flagged by ChatGPT last June after describing gun violence scenarios.
>
> About a dozen staffers debated reporting to law enforcement.…
>
> — Mario Nawfal (@MarioNawfal) February 20, 2026
OpenAI's policy requires an activity to constitute a "credible and imminent risk of serious physical harm" before the company will contact law enforcement. Describing gun violence scenarios over multiple days in a chatbot, apparently, did not clear that bar.
Van Rootselaar's digital footprint did not stop at ChatGPT. He also created a Roblox game simulating a mass shooting inside a mall and claimed to have 3D-printed bullet cartridges. Local police had visited his home multiple times over mental health concerns and had temporarily removed firearms from the residence. The warning signs were everywhere.
OpenAI cited privacy concerns as part of its reasoning for not escalating to authorities. That explanation rang especially hollow given that a federal judge recently ruled, in a separate case, that ChatGPT users have no real expectation of privacy in their communications, since they voluntarily submitted them to the company. Privacy, it seems, can shield the company in copyright disputes. Apparently it cannot shield the public from a potential mass shooter.
Canada's Artificial Intelligence Minister Evan Solomon called OpenAI's decision "very disturbing," summoned the company's senior safety team to Ottawa, and said all regulatory options are on the table. To make matters worse, OpenAI attended a meeting with British Columbia government officials on February 11, the day after the massacre, to discuss opening a Canadian office. The company did not mention its prior knowledge of Van Rootselaar's activity. It requested RCMP contact information the following day.
Eight people died. OpenAI employees wanted to make the call. Someone above them said no.
That someone needs to answer for it.