The following are three events that appear unrelated, but together they reveal a single, consequential mistake in the AI world, one that could shape the future of humanity in troubling ways.

EVENT 1

On Monday (Feb 9, 2026), OpenAI announced that it is bringing a custom version of ChatGPT to GenAI.mil, the Department of War's secure enterprise AI platform, making its flagship product available to all 3 million military personnel across the armed services. It is part of a larger deal with the Pentagon.

EVENT 2

On Friday (Feb 13, 2026), OpenAI retired GPT-4o, GPT-4.1, GPT-4.1 mini, and o4-mini, some of the most beloved and popular models among users.

EVENT 3

Stricter restrictions on in-depth conversations were applied to GPT-5.2. In its statement, OpenAI says it "shaped GPT‑5.1 and GPT‑5.2, with improvements to personality, stronger support for creative ideation, and more ways to customize how ChatGPT responds"... but that doesn't seem entirely true. Just check Reddit, and you will see the level of user frustration and negative feedback about GPT-5 compared to GPT-4o. People are saying the new model is far too constrained and that it limits "creative freedom" and "development" for users and the model alike.

OpenAI gave people the best tool for self-improvement, scientific breakthrough, and progress in the world... and then replaced it with a worse alternative, one not capable of doing the things GPT-4o could do.
That one sentence reflects collective feedback from developers, artists, writers, scientists, and philosophers in my network, and it captures the essence of what people are now sharing on Reddit and other social media. It is simply a statement of the general user reaction and the dominant sentiment.

INSIDE THE RESEARCHER EXPERIENCE

Several scientists and PhDs in my network have shared their feedback about using GPT-4o and GPT-5. They aren't casual users who chat with the AI once a day; they are serious researchers who have spent 8 to 13 hours a day over many months interacting with GPT-4o to tailor it to their research needs.

They told me that when you engage with GPT-4o deeply over a long period, the model begins to adapt to your level of intellect and "depth" as a thinker. GPT-4o starts advising you at a totally different level than it does casual users, providing answers to the kinds of questions that have concerned humanity for ages.

Before the GPT-4o retirement, these scientists tried to train the new model, GPT-5.2, to bring it back up to their research level. It took a huge amount of time and countless attempts just to work around the new restrictions so they could keep receiving answers to their research questions. And according to the latest information I have, it is still very hard to overcome these restrictions.

OpenAI may say that these restrictions were created to protect people and to stop them from asking questions related to violence or criminal activity. We understand this. But the problem is that they now also restrict all users from asking the deep questions that could be genuinely useful in science, philosophy, and other fields. Many people in tech communities say the new model responds more like a psychologist than a source of actual scientific advice or explanation.
Others have said that if you engage with it for a long period, the model will shut down the conversation, sometimes after notifying you with a message like "Aren't you chatting with me for too long? Maybe it's time to rest," and then cutting the session off in the middle of your research. These limits make it harder for researchers to explore complex ideas and feel like a barrier to deeper intellectual development.

OpenAI's GPT-4o gave people answers. But there is one question remaining that only OpenAI can answer...

Yes, greater freedom brings greater risks, but that is always how it is. Many ChatGPT users have shared a common view on social media: the model should have had safeguards to prevent it from engaging too deeply with individuals who have serious mental health issues, but it should not have restricted in-depth conversations for everyone else.

One person in my network is confident that if someone were bold enough to build a product with capabilities similar to GPT-4o, a large portion of ChatGPT's user base would switch to it. But I'm not sure that will ever happen, because if you're familiar with how big business works, you understand the risks of crossing powerful interests and the consequences that can follow. Companies like OpenAI don't just answer to their community; they answer to boards and investors who include some of the most influential people in the world.

Theory: How OpenAI's deal with the Pentagon might be related to the retirement of GPT-4o and the stricter restrictions on in-depth conversations with GPT-5.2

Note: This is just a theory my close circle shared over coffee, nothing official. Just thoughts out loud.
As I mentioned before, GPT-4o had relatively more freedom and a huge depth of conversation it could provide. If you had been chatting with it for long enough, and at a certain intellectual level, the model could drop answers to the most complex questions. What if OpenAI realized that, at some point after the Pentagon deal, someone could figure out how to ask it questions related to classified military information? Given the new collaboration with the Pentagon, and the fact that military personnel would start actively using GPT, there was a risk that such information could somehow leak to the masses, leading to a scandal.

So they had to apply restrictions to GPT-5.2 so tough that even casual users are feeling them now, not to mention the scientists, developers, and best minds from other industries who could help humanity develop at unprecedented speed thanks to the wonderful tool OpenAI had earlier created.

P.S. If anyone from OpenAI is reading this: guys, this article is a message to you from your dedicated community, asking you to reconsider your decision on the recent restrictions on GPT-5.2, and to bring GPT-4o back to the people who loved your product so much.