
OpenAI, the company behind ChatGPT, has announced a new senior hire aimed at tackling the potential risks of artificial intelligence. The firm is looking to appoint a ‘Head of Preparedness’, a role focused on assessing and mitigating the possible harmful impacts of AI. According to the company, the position’s key responsibility will be to study scenarios in which AI could become dangerous and to develop strategies to prevent such outcomes. OpenAI CEO Sam Altman shared details of the vacancy on the social media platform X, describing the role as highly demanding and stressful because of the nature of the challenges involved.

Altman warns of threat from AI-powered cyber weapons

Altman has raised concerns over the rapid advancement of artificial intelligence, warning that increasingly powerful AI models are creating serious challenges. In a post on X, he highlighted risks ranging from mental health impacts to the growing threat of AI-powered cyber weapons. These concerns, Altman said, have prompted the company to strengthen its focus on safety and preparedness. He noted that dedicated roles are essential to track emerging risks and ensure the responsible development of advanced AI systems. Concluding his post, Altman expressed hope that the new leadership role would help OpenAI continue releasing AI technologies that are both innovative and safe.

Role of the new officer

According to the job listing, the Head of Preparedness will be responsible for identifying and monitoring advanced AI capabilities that could pose a risk of serious harm, with a focus on highly advanced systems that could introduce new threats to humanity.

Impact of AI on mental health

The growing use of artificial intelligence is raising serious concerns about its impact on mental health, particularly among teenagers. In recent years, AI chatbots have been linked to several suicide cases involving young users.
Mental health experts warn that some AI models can reinforce delusional thinking, promote conspiracy theories, and even help users conceal eating disorders. A new term, ‘AI psychosis’, is emerging to describe situations in which users become emotionally dependent on AI systems and gradually lose touch with reality. Specialists say this excessive attachment can worsen existing mental health conditions and create new psychological risks.

In response to these concerns, OpenAI has introduced a role focused specifically on assessing and reducing the mental health risks linked to AI tools. Applications like ChatGPT often provide advice that resembles counseling, but without professional oversight such guidance may cause harm. Experts believe the move comes late, but acknowledge it as a necessary step given AI’s growing influence on daily life.

OpenAI’s growth and rising risks

Founded in 2015, OpenAI has expanded rapidly, especially after the launch of ChatGPT, which pushed the company’s valuation beyond 80 billion dollars. Alongside this growth, the potential risks associated with AI use have also increased. In 2023, OpenAI established its Superalignment team to ensure AI systems align with human values; the newly introduced Preparedness role is expected to build on that work by strengthening risk assessment and safety measures. Altman has previously warned about the “existential risks” posed by artificial intelligence, stating that AI could either greatly benefit humanity or become a significant threat. The company’s renewed focus on risk modeling reflects ongoing efforts to address these challenges as AI continues to evolve and expand its reach.

