OpenAI is threatening to ban users who try to probe the reasoning processes behind its new “o1” AI models (code-named “Strawberry”). These models use a “chain of thought” approach in which the AI first reasons through possible answers before responding. As Frank Landymore reports at Futurism, users who probe too deeply receive warning emails accusing them of “attempting to circumvent safeguards”. OpenAI justifies the policy on the grounds of safety and the protection of its competitive advantage. Critics see it as a step backwards for the transparency and interpretability of AI systems, and the measure contradicts OpenAI’s original vision of promoting open-source AI.