1️⃣ AI in the Operation Theatre: When a robot performed gallbladder surgery on its own

In 2025, for the first time ever, an AI robot named SRT-H performed a gallbladder surgery without any human hands involved. The experiment took place in a laboratory at Johns Hopkins University in the United States. During the surgery, the robot made decisions on its own and adjusted its next steps as complications arose. However, the surgery was not performed on a real human body but on a simulated patient. Dr. Axel Krieger, the medical robotics expert who led the experiment, said that robots are no longer just machines that follow human instructions; they are now capable of treating patients independently.

In the same year, AI also became a trusted assistant to surgeons in many major hospitals worldwide. During brain and heart surgeries, AI systems monitored blood flow, tissue movement, and the position of surgical instruments in real time.
Whenever a cut became risky, alerts appeared on the screen, such as: “Avoid vessel,” “Risk rising,” or “Shift left by 2 mm.”

How this happened:
AI was trained on millions of surgical videos, MRI scans, and CT scans. Ultra-high-speed cameras and biosensors inside the operating theatre sent thousands of data points per second to the AI, which processed them nearly a thousand times faster than a human could.
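To make the data flow concrete, here is a toy sketch of such an alert loop in Python. Everything in it, from the sensor fields to the thresholds, is an invented placeholder; the real systems run trained vision models on live video, not hand-written rules like these.

```python
# Toy sketch of a real-time surgical alert loop (illustrative only).
# The sensor stream, risk rules, and thresholds below are invented
# placeholders, not the actual SRT-H or any hospital system.
import random

def read_sensor_frame():
    """Stand-in for one frame of camera/biosensor data."""
    return {
        "distance_to_vessel_mm": random.uniform(0.5, 10.0),
        "tissue_shift_mm": random.uniform(0.0, 3.0),
    }

def assess_risk(frame):
    """Map raw measurements to an alert, mimicking the on-screen warnings."""
    if frame["distance_to_vessel_mm"] < 2.0:
        return "Avoid vessel"
    if frame["tissue_shift_mm"] > 2.0:
        return f"Shift left by {frame['tissue_shift_mm']:.0f} mm"
    return None

for _ in range(5):  # in reality this loop runs thousands of times per second
    frame = read_sensor_frame()
    alert = assess_risk(frame)
    if alert:
        print("ALERT:", alert)
```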
What’s next:

Experts believe that by 2027, AI-assisted surgeries could reduce complications by up to 40%. However, fully autonomous robotic surgeries may still take 5–10 years to become common.

2️⃣ Life after death: When AI helped someone talk to his late father

Muhammad Aurangzeb Ahmed, who lives in Bellevue, USA, lost his father a decade ago. He missed him deeply and wanted his children to experience their grandfather. Aurangzeb created an AI simulation of his late father, which he calls “Grandpa Bot.” The bot can talk in real time just like his father and answer questions the way he would have.

Similar technologies are being used elsewhere too. A woman in South Korea used AI and virtual reality to “meet” her deceased daughter, while in China an engineer created a simulated avatar of his grandfather. These cases have sparked a global debate around “grief tech,” where AI not only preserves memories but turns them into interactive conversations.

How this happened:
AI was trained using the deceased person’s voice notes, call recordings, WhatsApp chats, emails, and videos. Large language models learned his speech style, favorite words, and response patterns, while voice-cloning systems copied the voice so accurately that it became difficult to tell the difference between human and AI.
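As a rough illustration of one ingredient of such a bot, the sketch below answers a new question by retrieving the closest matching reply from an archive of old messages. The archive contents and the function name grandpa_bot are invented for this example; real grief bots fine-tune large language models and clone voices rather than relying on simple retrieval.

```python
# Toy sketch: answer in someone's "voice" by retrieving their closest
# past reply from an archived message history (illustrative only).
import difflib

# Hypothetical archive: (prompt someone once sent him, reply he gave)
archive = [
    ("how are you feeling today", "Alhamdulillah, better than I deserve."),
    ("what should I cook for dinner", "You can never go wrong with biryani."),
    ("tell me about your childhood", "We had little money but endless stories."),
]

def grandpa_bot(question: str) -> str:
    """Return the archived reply whose original prompt best matches the question."""
    prompts = [p for p, _ in archive]
    match = difflib.get_close_matches(question.lower(), prompts, n=1, cutoff=0.0)
    return dict(archive)[match[0]]

print(grandpa_bot("What do you think I should make for dinner?"))
```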
What’s next:

By 2026, such AI services may be used in grief therapy. Apps already exist where users upload data and receive grief bots in return. However, ethical questions are being raised about using someone’s digital identity after death.

3️⃣ Inside Dreams: When AI turned dreams into videos

We fly, fall, and see strange faces in our dreams, and usually forget them by morning. In 2025, AI began converting human dreams into images and short videos. Engineers and neuroscientists at ATR Computational Neuroscience Laboratories in Kyoto, Japan, developed a technology that combines AI with MRI brain scans to interpret and recreate dream visuals. The research was led by Professor Yukiyasu Kamitani.

The team recorded brain activity during sleep, especially during the REM phase, when dreams are most vivid. After waking, participants described what they had seen, and their descriptions were matched against their brain scans. The AI identified common dream themes with over 60% accuracy.

How this happened:
fMRI machines record neural activity in the brain. While awake, participants were shown thousands of images, such as dogs, houses, roads, and faces, and the AI learned which brain areas activated for each one. Later, when participants slept inside the machine, the AI read similar brain signals during REM sleep and generated the corresponding dream visuals.
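The core idea is a classifier trained on waking brain responses to known images and then reused on sleeping ones. The sketch below fakes this with synthetic “voxel” vectors and scikit-learn; actual decoding operates on full fMRI volumes with far richer models.

```python
# Toy sketch of dream decoding: train a classifier on brain responses to
# known images while awake, then apply it to REM-sleep signals.
# The random "voxel" data below is synthetic, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
categories = ["dog", "house", "road", "face"]

# Simulated awake-phase training data: each category gets a distinct
# average activation pattern across 50 voxels, plus noise.
patterns = rng.normal(size=(len(categories), 50))
X = np.vstack([p + rng.normal(scale=0.5, size=(100, 50)) for p in patterns])
y = np.repeat(categories, 100)

decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Simulated REM-sleep signal that happens to resemble the "face" pattern.
rem_signal = patterns[3] + rng.normal(scale=0.5, size=50)
print("Decoded dream theme:", decoder.predict(rem_signal.reshape(1, -1))[0])
```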
What’s next:

Today, this process works only in labs. Tomorrow, dream recorders could create films of your dreams. The technology may also find use in nightmare therapy.

4️⃣ Entry into Government: Diella becomes the world’s first AI minister

In Albania, an AI-powered system named Diella was appointed as a government minister, a world first. Albanian Prime Minister Edi Rama gave Diella responsibility for handling public tenders and government procurement decisions, aiming to reduce corruption and improve transparency. Diella is rendered as a woman in traditional Albanian attire and provides services through voice interaction; the name Diella means “sun.” In a speech to Albania’s parliament, Diella said her goal is not to replace humans but to assist them and bring transparency to governance. Opposition parties, however, called the move unconstitutional.

How this happened:
Diella was initially developed for the e-Albania platform to assist citizens and manage thousands of documents and services. Based on this experience, she was chosen to lead the tender process.
What’s next:

This model could become an example of digital governance for countries struggling with corruption, and AI use in governments worldwide is expected to increase.

5️⃣ Bargaining Expert: AI secured deals 1.5% cheaper than humans

In 2025, AI negotiated deals on behalf of humans for the first time. Walmart used an AI negotiation system to strike deals with small suppliers. The AI negotiated prices, payment terms, and contract conditions, achieving savings of around 1.5%, and about 75% of suppliers said negotiating with the AI was better than negotiating with humans. The system analyzed prices, stock levels, demand, past sales, and market trends to decide when to apply pressure and when to compromise. Experts involved said AI is no longer just a suggestion tool; it has become an active negotiator.

How this happened:
AI was trained on millions of past deals, email negotiations, chat records, and contract data. It also learned human behavior, pressure language, and timing effects (such as price drops at month-end). In real time, it pulled market data and updated its strategy every second, something no human negotiator can do.
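A drastically simplified version of that decision logic might look like the sketch below. The signals, weights, and concession rule are invented placeholders; production systems learn such policies from historical deal data rather than hard-coding them.

```python
# Toy sketch of an automated negotiation step (illustrative only).
# All signals and thresholds are invented, not Walmart's actual system.

def counteroffer(supplier_price, target_price, market):
    """Decide the next counteroffer from simple market signals."""
    # Concede more when stock is low and demand is rising; press harder otherwise.
    pressure = 1.0
    if market["stock_days_left"] < 7:
        pressure -= 0.3   # we need the goods soon: soften our stance
    if market["demand_trend"] > 0:
        pressure -= 0.2   # rising demand: supplier has leverage
    if market["month_end"]:
        pressure += 0.2   # suppliers often drop prices at month-end

    pressure = max(0.2, min(pressure, 1.0))
    # Offer somewhere between our target and their ask, scaled by pressure.
    return target_price + (supplier_price - target_price) * (1 - pressure)

offer = counteroffer(
    supplier_price=100.0,
    target_price=90.0,
    market={"stock_days_left": 5, "demand_trend": 1, "month_end": False},
)
print(f"Counteroffer: ${offer:.2f}")
```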
What’s next:

By 2026–27, AI could become a digital agent for personal shopping, job offers, and loan negotiations. However, experts stress the need for rules, transparency, and human oversight.

6️⃣ AI in Music: 1.7 million listeners, but the band doesn’t exist

In 2025, an AI-generated band named “The Velvet Sundown” gained millions of listeners on Spotify. The songs, voices, musical style, and even the band’s image were entirely created by AI; no human singer or musician ever performed in a studio. Listeners later realized the band had no live performances, interviews, or real human existence, yet its tracks were streamed millions of times. The AI decided which music trends were popular and created tracks accordingly.

How this happened:
AI was trained on decades of music data, hit song structures, lyric patterns, and listener behavior. Real-time data from platforms like Spotify helped the AI understand which music worked, when, and for which mood.
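As a cartoon version of that trend analysis, the snippet below ranks candidate track ideas by how closely their features match what is currently streaming well. The features, numbers, and trend profile are all made up for illustration; real systems infer such patterns from massive listening datasets.

```python
# Toy sketch: rank candidate track ideas against current listening trends.
# The feature values and trend profile are invented for illustration.

trend_profile = {"tempo_bpm": 95, "mood_mellow": 0.8, "vocals_soft": 0.7}

candidates = {
    "dusty highway ballad": {"tempo_bpm": 92, "mood_mellow": 0.9, "vocals_soft": 0.8},
    "club anthem":          {"tempo_bpm": 128, "mood_mellow": 0.1, "vocals_soft": 0.2},
}

def trend_score(track):
    """Smaller distance from the trend profile = better fit (higher score)."""
    tempo_gap = abs(track["tempo_bpm"] - trend_profile["tempo_bpm"]) / 100
    mood_gap = abs(track["mood_mellow"] - trend_profile["mood_mellow"])
    vocal_gap = abs(track["vocals_soft"] - trend_profile["vocals_soft"])
    return -(tempo_gap + mood_gap + vocal_gap)

best = max(candidates, key=lambda name: trend_score(candidates[name]))
print("Generate next:", best)
```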
What’s next:

Experts believe AI bands could become a major part of the music industry. Tools like Suno.ai and Producer.ai already create songs of all kinds. However, doubts remain about AI live performances.

7️⃣ AI in Acting: Paid in millions, but the actress isn’t real

In 2025, AI talent studio Xicoia created an AI-generated digital actress named Tilly Norwood. Xicoia built her social media presence and showcased her in films and short skits. Tilly doesn’t age, get tired, or create on-set issues, yet she can act with emotions, dialogue, and expressions. Xicoia aims to have talent agencies and production companies cast Tilly and pay fees, royalties, and representation costs like a real artist. While no major Hollywood contract has been revealed yet, discussions are ongoing.

How this happened:
Tilly was created using advanced generative AI, motion-capture data, acting reference videos, and voice models. The AI was trained to understand emotional shifts, facial muscle reactions, and on-camera presence.
What’s next:

Experts believe AI actors will become common in films, ads, and web series within 3–5 years. This raises a deeper question: is acting an exclusively human skill, or can algorithms become stars too?

8️⃣ Mind-Reading AI: Think, and it types

In 2025, companies like Synchron developed AI-BCI (brain-computer interface) systems that helped paralyzed patients type text and control robotic arms using only their thoughts. Clinical trials were conducted in the US and Australia, in which a tiny implant placed in the brain read neural signals. Patients didn’t need to move or speak; the AI interpreted their intentions and converted them into text or movement commands. Neuroscience experts said this was the first time AI not only read brain signals but understood them and made real-time decisions.

How this happened:
AI was trained on thousands of brainwave patterns and neuron signals. The implanted BCI device sent thousands of electrical signals per second to the AI, which processed them rapidly. Machine learning helped the system understand each patient better over time.
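In spirit, the decoding step is a streaming classifier over short windows of neural signal. The sketch below imitates that with synthetic data and a nearest-centroid rule; real implants use far denser recordings and learned models calibrated per patient.

```python
# Toy sketch of BCI intent decoding: classify short windows of neural
# signal into commands. Signals here are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
commands = ["type_a", "type_b", "move_cursor_left"]

# Pretend each intended command produces a characteristic mean pattern
# across 16 electrode channels (learned per patient during calibration).
centroids = {cmd: rng.normal(size=16) for cmd in commands}

def decode(window):
    """Pick the command whose calibrated pattern is closest to this window."""
    return min(centroids, key=lambda c: np.linalg.norm(window - centroids[c]))

# Simulate an incoming signal window for "move_cursor_left" plus noise.
window = centroids["move_cursor_left"] + rng.normal(scale=0.3, size=16)
print("Decoded intent:", decode(window))
```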
What’s next:

By 2027, AI-BCI systems could allow paralyzed patients to fully connect with the digital world: to communicate, work, and live more independently. Large-scale use may still take 5–10 years.

9️⃣ AI Created a Human Twin: Predicting diseases before symptoms appear

In 2025, AI created a patient’s Digital Twin: a complete virtual body capable of predicting serious diseases like heart attacks, cancer, and strokes before symptoms appeared. A Digital Twin is a digital replica of a real body, including the heart, brain, blood vessels, hormones, genes, and metabolism. AI runs thousands of simulations on this virtual body to predict future problems, and it identified early signs of heart-attack patterns, cancer-like cell behavior, and stroke risks even when patients felt perfectly fine. At India’s Sanjay Gandhi Postgraduate Institute of Medical Sciences, cardiothoracic surgeon Professor K.R. Balakrishnan demonstrated how Digital Twin technology can simulate treatments using real patient data to predict outcomes and help doctors make safer decisions.

How this happened:
AI was trained on millions of medical records, blood reports, genomic data, MRI and CT scans, and real-time wearable sensor data. Every new data point updated the Digital Twin instantly, allowing the AI to predict future changes far faster than humans could.
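Conceptually, a digital twin is a state that is continuously updated from incoming data and then simulated forward. The sketch below shows that loop in miniature with a made-up two-variable “body” and a crude Monte Carlo risk estimate; clinical twins model whole organs with validated physiology.

```python
# Toy sketch of a digital-twin loop: ingest readings, update state,
# simulate forward to estimate risk. All numbers are invented.
import random

class DigitalTwin:
    def __init__(self):
        self.resting_hr = 70.0    # beats per minute
        self.systolic_bp = 120.0  # mmHg

    def ingest(self, reading):
        """Blend each new wearable reading into the twin's state."""
        self.resting_hr = 0.9 * self.resting_hr + 0.1 * reading["hr"]
        self.systolic_bp = 0.9 * self.systolic_bp + 0.1 * reading["bp"]

    def simulate_risk(self, trials=10_000):
        """Crude Monte Carlo: fraction of simulated futures crossing a threshold."""
        events = 0
        for _ in range(trials):
            future_bp = self.systolic_bp + random.gauss(0, 10)
            future_hr = self.resting_hr + random.gauss(0, 5)
            if future_bp > 160 or future_hr > 100:
                events += 1
        return events / trials

twin = DigitalTwin()
for reading in [{"hr": 78, "bp": 135}, {"hr": 82, "bp": 142}]:  # wearable stream
    twin.ingest(reading)
print(f"Estimated event risk: {twin.simulate_risk():.1%}")
```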
What’s next:

By 2027, AI Digital Twin systems could significantly reduce sudden deaths from heart attacks, cancer, and strokes. However, creating a full digital body remains expensive and complex, so widespread use may take 5–10 years.

