Catch up on the latest artificial intelligence news

Deutsche Telekom: AI for everyone
Deutsche Telekom, a global leader in telecommunications, is introducing a range of AI-enabled devices under the banner “AI for everyone,” utilizing technology from Perplexity.
Since the launch of ChatGPT in 2022, artificial intelligence has gained significant attention and momentum. Companies across industries are now competing to roll out the newest AI models, intelligent devices, and conversational agents to capture the growing demand.
Perplexity is an AI-powered chatbot, comparable to competitors like OpenAI’s ChatGPT and Anthropic’s Claude.
Deutsche Telekom aims to “open the door to the world of artificial intelligence for its customers,” striving to make AI accessible beyond just tech enthusiasts.
Although companies like OpenAI have already helped democratize AI through intuitive interfaces and approachable chatbot designs, Deutsche Telekom is presenting its latest move as a major innovation.
In reality, the company is entering a space already explored by major players like Apple and Google, both of which have introduced AI-powered smartphones—albeit with mixed reception due to the often unnecessary nature of some features.
So, what sets Deutsche Telekom apart? The company is banking on affordability, launching its AI-enabled devices at a notably competitive starting price of €149.
Priced well below the typical €900–€1,000 ($1,000+) range of Apple and Google’s flagship devices, Deutsche Telekom’s AI phone presents a more budget-friendly alternative while still offering modern AI capabilities.

An AI Doctor Is on the Horizon — and That’s Great News
On June 30, Microsoft introduced its AI-driven medical platform, the Microsoft AI Diagnostic Orchestrator, which correctly diagnosed 85% of the complex case studies drawn from the New England Journal of Medicine that it was tested on. In head-to-head comparisons, the AI system's diagnostic accuracy was roughly four times that of human physicians.
As this technology continues to advance, it opens the door to a healthcare system that is not only more accurate but also more efficient and cost-effective. By streamlining diagnoses and improving reliability, AI could significantly lower healthcare expenses for American consumers while enhancing the standard of care for countless individuals.
While the human connection between doctors and patients has well-documented benefits, that interpersonal element of care is not something AI is poised to replace anytime soon. However, if the more analytical and data-driven responsibilities of physicians are supported or automated through a collaborative human-AI approach, the healthcare system could see improvements in diagnostic accuracy without sacrificing patient experience.
By incorporating AI into the diagnostic process, we can reduce the likelihood of human error and potentially eliminate issues like medical gaslighting — when patient concerns are ignored or dismissed by physicians. This issue disproportionately affects women and highlights the deeper problem of gender bias in medical care. By training AI systems on a broad range of real-world cases that reflect diverse backgrounds, medical histories, and patient experiences, we can build tools that offer more consistent, equitable care and help overcome long-standing disparities.

Cutting-Edge AI Supercomputer Supports Cancer Vaccine Development
A research team from the University of Oxford’s Nuffield Department of Medicine has secured access to one of the UK’s most powerful AI supercomputers to advance their work on cancer vaccines.
Through a government-backed program, the group will utilize the supercomputer—called Dawn—for 10,000 computing hours.
Their goal is to examine a vast collection of data from cancer patients, seeking patterns that could help inform the design of new vaccine-based cancer therapies.

AI Cameras Set for Trial at High-Risk Crash Areas
Artificial intelligence-powered cameras are set to be tested at a known accident-prone stretch of road in an effort to tackle reckless driving.
The trial will take place on the A361 Frome Bypass in Somerset, England—a location where six fatalities have occurred over the past five years, including four in just 2023 and 2024.
These AI-enabled cameras, already in use on several major roads across the country, are capable of identifying drivers using mobile phones illegally, neglecting seatbelt use, and engaging in other forms of dangerous driving.
Motorists found violating traffic laws may receive warnings, face fines, or be subject to legal action.

Taco Bell Re-Evaluates Its Use of AI for Drive-Thru Orders
Taco Bell is re-evaluating its use of artificial intelligence (AI) for drive-thru orders after a series of viral videos showcased the technology making humorous errors.
One video gained attention after a customer seemingly overwhelmed the system by ordering 18,000 cups of water. In another, a frustrated customer repeatedly heard the AI asking for more drinks to be added to his order.
Since 2023, the fast-food chain has deployed AI at more than 500 locations across the U.S. with the goal of reducing mistakes and improving order speed. However, the results have not been as expected, with the AI causing more confusion than efficiency.
Dane Mathews, Taco Bell’s Chief Digital and Technology Officer, admitted to The Wall Street Journal that the AI system had both its successes and shortcomings. "It surprises me sometimes, but it lets me down other times," he said.
Mathews acknowledged that the company was learning from these experiences and would be more cautious about where to implement AI in the future. He pointed out that in some cases, human employees may be better suited for handling orders, particularly during peak times.
"We’ll guide teams on when to use AI and when human intervention is necessary," Mathews added.
The growing frustration among customers has led many to share their experiences online, highlighting glitches with the AI system. One Instagram clip, which has been viewed over 21.5 million times, shows a man trying to order "a large Mountain Dew," but the AI repeatedly asks, "And what will you drink with that?"
Taco Bell is not alone in facing challenges with AI-driven ordering systems. Last year, McDonald’s pulled its AI tech from drive-thrus after similar misinterpretations of customer orders led to mistakes, like bacon being added to an ice cream order and customers receiving hundreds of dollars’ worth of chicken nuggets.
Despite the hiccups, Taco Bell reports that its AI has successfully processed over two million orders since its launch.

AI-Powered Stethoscope Diagnoses Three Major Heart Conditions in 15 Seconds
Researchers have developed a cutting-edge AI-powered stethoscope that can identify three major heart conditions in just 15 seconds.
The traditional stethoscope, invented in 1816, has been a core diagnostic tool for doctors for over two hundred years. Now, a team of scientists has created a high-tech version that integrates artificial intelligence, enabling it to rapidly diagnose heart failure, heart valve disease, and abnormal heart rhythms.
This innovative device, created by experts at Imperial College London and Imperial College Healthcare NHS Trust, uses AI to detect subtle variations in heartbeat and blood flow that are undetectable to the human ear. It can also capture an ECG simultaneously.
The breakthrough, which has the potential to enhance early diagnosis of heart-related conditions, was unveiled at the European Society of Cardiology's annual congress in Madrid, the largest heart conference in the world. Early detection of these conditions is crucial because it allows for timely intervention, preventing patients from reaching critical health stages.
In a trial involving around 12,000 patients across 200 GP practices in the UK, the AI stethoscope proved effective in diagnosing heart conditions in individuals showing symptoms such as breathlessness and fatigue. Those examined with the AI tool were more likely to receive accurate diagnoses. Specifically, they were twice as likely to be diagnosed with heart failure compared to those who weren’t tested with the new technology. Similarly, patients were three times more likely to be diagnosed with atrial fibrillation, a heart rhythm disorder that increases stroke risk, and nearly twice as likely to be diagnosed with heart valve disease.
Dr. Patrik Bächtiger from Imperial College London’s National Heart and Lung Institute commented: "The stethoscope’s design hasn’t changed in over 200 years. The fact that we can now conduct a 15-second examination and receive results indicating heart failure, atrial fibrillation, or heart valve disease is remarkable."
Manufactured by California-based company Eko Health, the AI stethoscope is the size of a playing card. It records an ECG and picks up the sound of blood flowing through the heart. This data is then sent to the cloud for AI algorithms to analyze, identifying heart issues that may be missed by the human ear. The results are sent to a smartphone, giving doctors a clear indication of whether the patient is at risk for any of the three heart conditions.
While the tool offers tremendous potential for early detection, researchers caution that it’s not designed for routine screening of healthy individuals. There is a risk of false positives, where a patient may be incorrectly diagnosed with one of the conditions. However, for patients already exhibiting symptoms of heart problems, it could significantly speed up diagnosis and treatment.
Dr. Mihir Kelshiker from Imperial College noted, “Many people with heart failure are only diagnosed when they are critically ill in A&E. This trial shows how AI-enabled stethoscopes could change that, giving GPs a rapid, effective way to spot issues early and ensure patients receive the right treatment.”
Dr. Sonya Babu-Narayan, clinical director at the British Heart Foundation, which partially funded the study, said: “An earlier diagnosis allows people to get the treatment they need, helping them live healthier, longer lives.”
Professor Mike Lewis, NIHR’s scientific director for innovation, added: “This tool could revolutionize patient care by empowering local clinicians to identify problems sooner, directly in the community, and address some of the most prevalent and deadly health conditions.”

NVIDIA is working to overcome AI’s language limitations
AI might seem like it’s everywhere these days, but in reality, it only supports a small number of the world’s 7,000 languages. That means millions of people are left out when it comes to voice-enabled technologies. NVIDIA wants to change that—especially in Europe.
They’ve just launched a powerful new set of free tools to help developers build advanced speech AI for 25 European languages. These tools don’t just support widely spoken languages—they also include smaller ones like Croatian, Estonian, and Maltese, which are often overlooked by big tech companies.
The idea is to make it easier to create tools like voice assistants, multilingual chatbots, real-time translators, and customer service bots that truly understand more people—no matter what language they speak.
At the core of this project is Granary, a massive collection of human speech—about a million hours of audio in total. It’s designed to train AI systems to better understand and translate spoken language.
To help developers put this data to work, NVIDIA is also releasing two new language models:
- Canary-1b-v2, built for highly accurate transcription and translation.
- Parakeet-tdt-0.6b-v3, designed for real-time use, where speed matters most.
If you’re interested in the research side of things, NVIDIA will present a paper on Granary at the upcoming Interspeech conference in the Netherlands. And for developers eager to get started, everything—the dataset and both models—is already available on Hugging Face.
What’s really impressive is how the data was prepared. Usually, creating training data for AI is time-consuming and expensive, because it involves a lot of manual labeling. To solve this, NVIDIA teamed up with researchers from Carnegie Mellon University and Fondazione Bruno Kessler. Together, they built an automated system using NVIDIA’s own NeMo toolkit to turn raw, unlabeled audio into high-quality training data—fast.
This isn’t just a tech breakthrough—it’s a big step forward for digital equality. Developers in places like Riga or Zagreb can now build speech-based apps and services in their own languages, without needing huge amounts of time or money. In fact, the team found that their Granary dataset is so effective, it only takes about half the amount of data to reach the same level of accuracy compared to other common datasets.
The two new models are a great example of this progress. Canary offers top-tier translation and transcription, competing with models much larger in size, but running up to ten times faster. Parakeet can handle full-length recordings—like a 24-minute meeting—and even identify what language is being spoken. Both models are smart enough to handle punctuation, capital letters, and give precise timestamps for each word, making them suitable for real-world applications.
By giving developers everywhere access to these tools and the technology behind them, NVIDIA isn’t just releasing software—they’re helping to build a future where AI truly speaks your language, no matter where you live.

How a Tiny Caribbean Island Struck Gold with Its Web Address
Back in the 1980s, when the internet was still emerging, countries and territories were assigned their own digital “calling cards” — unique two-letter web extensions. The United States received .us, the United Kingdom got .uk, and dozens of other regions were given similar identifiers.
Among them was the small Caribbean territory of Anguilla, which inherited .ai. At the time, no one suspected this unassuming domain would one day become a windfall.
Fast forward to today, and with artificial intelligence (AI) dominating global tech, Anguilla finds itself in an enviable position. Businesses and innovators around the world are snapping up .ai domains, transforming the extension into a lucrative source of revenue for the island of just 16,000 residents.
One striking example came earlier this year, when U.S. entrepreneur Dharmesh Shah reportedly paid $700,000 (£519,000) to acquire you.ai. He explained to the BBC that the purchase was inspired by an idea for an AI tool that could create personalized digital assistants.
The boom is unmistakable. Over the past five years, the number of registered .ai domains has increased more than tenfold, and in just the past year the figure has doubled, according to industry trackers.
For Anguilla, the challenge is turning this stroke of good fortune into a sustainable income stream. Traditionally, the island has relied heavily on tourism, particularly luxury travel from the United States. Last year, it welcomed a record 111,639 visitors, according to official statistics. But tourism remains vulnerable — hurricanes frequently threaten the Caribbean, and Anguilla lies directly in the North Atlantic storm belt.
That’s why domain revenue is proving so valuable. In 2024, the Anguillian government collected 105.5 million Eastern Caribbean dollars ($39m; £29m) from .ai registrations, making up nearly a quarter of the territory’s total income. Tourism accounted for about 37%. Officials expect revenues to grow to 132 million ECD in 2025 and 138 million in 2026, with more than 850,000 .ai domains now registered, compared with fewer than 50,000 in 2020.
As a British Overseas Territory, Anguilla remains under U.K. sovereignty but manages most of its domestic affairs. Britain retains responsibility for defense and offers support during crises — such as the £60m relief package provided after Hurricane Irma devastated the island in 2017. The U.K.’s Foreign, Commonwealth and Development Office has since praised Anguilla’s creative approach to economic growth.
To help manage this growing domain industry, Anguilla signed a five-year partnership in 2024 with U.S.-based firm Identity Digital, a company specializing in domain registries. Earlier this year, the firm shifted all .ai domains from local servers to its global network to protect against hurricane disruptions and other infrastructure risks.
Domain names aren’t cheap. Registration typically starts around $150–200, with renewals costing a similar amount every two years. Highly desirable names often go to auction, fetching six-figure sums. Anguilla’s government collects the majority of the proceeds, while Identity Digital takes a reported 10% cut.
Currently, the most expensive .ai sale remains Mr. Shah’s you.ai, though other high-profile deals are catching up. In July, cloud.ai sold for about $600,000, while law.ai was purchased for $350,000 in August.
Mr. Shah, who co-founded HubSpot and describes himself as an AI enthusiast, owns several .ai domains but admits you.ai is still dormant while he focuses on other projects. He occasionally resells unused domains, though he believes the appetite for .ai addresses means new price records will continue to be set.
Still, he adds a note of caution: in the long run, he expects .com domains to hold their value better than any new extension.
Anguilla is not the first small nation to benefit from the digital real estate boom. In fact, the Pacific island state of Tuvalu struck a similar deal back in 1998, when it licensed out its .tv domain. The contract gave exclusive rights to U.S.-based VeriSign, initially for about $2 million annually, later increasing to $5 million.
However, as the internet expanded rapidly, Tuvalu’s leaders began to feel the country had been shortchanged. In the late 2000s, Finance Minister Lotoala Metia remarked that VeriSign was paying only “peanuts” compared to the domain’s true worth. In 2021, Tuvalu moved on, signing a fresh agreement with GoDaddy to manage the extension.
Anguilla, by contrast, has pursued a revenue-sharing strategy instead of a fixed annual payment, allowing its earnings to rise alongside demand for .ai domains. This approach is seen as a way to better capitalize on the current AI-driven surge in registrations.
The government has also emphasized using this new income stream to fund long-term development. Plans include building a new international airport to expand tourism capacity, upgrading public services, and improving access to healthcare.
With registrations of .ai domains now approaching the one million mark, many Anguillians hope the profits will be carefully managed and invested to secure the island’s future prosperity.

Simpler models may beat deep learning in climate forecasting
A new study suggests that the natural fluctuations within climate data make it difficult for advanced AI systems to accurately predict local temperatures and rainfall.
These days, scientists love using giant AI systems to predict climate change and weather. But here’s the twist: a new study from MIT says bigger isn’t always better. In fact, in some cases, old-school, simpler models actually do a better job than the fanciest deep-learning algorithms.
The problem comes down to nature itself. Weather and climate data are full of random ups and downs — like El Niño, La Niña, and other natural swings. These variations can trick AI models, making them look more accurate than they really are.
To test this, the MIT team compared a traditional method called linear pattern scaling (LPS) with a modern deep-learning system. Their results?
- LPS was better at predicting temperatures.
- Deep learning did a bit better at rainfall.
So it turns out, sometimes the simple stuff wins.
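To make the comparison concrete, here is a minimal sketch of what linear pattern scaling involves, on synthetic data: a local climate variable is modeled as a simple linear function of global mean temperature. All numbers and names below are illustrative, not taken from the MIT study.

```python
import numpy as np

# Minimal sketch of linear pattern scaling (LPS) on synthetic data.
# LPS models a local climate variable as a linear function of global
# mean temperature; every number here is illustrative.

rng = np.random.default_rng(0)

years = 100
global_mean = np.linspace(0.0, 2.0, years)        # global warming signal, degC
true_slope = 1.5                                   # this region warms faster
local_temp = true_slope * global_mean + rng.normal(0.0, 0.3, years)  # + noise

# Fit the two-parameter linear model: local = a * global + b
a, b = np.polyfit(global_mean, local_temp, deg=1)

# Use the fit as a cheap emulator: local response to a 3 degC global rise
projected_local = a * 3.0 + b
print(round(a, 2), round(projected_local, 2))
```

The appeal of LPS is exactly this simplicity: one slope and intercept per location, fit once, then reused to project any global-temperature scenario almost instantly.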
The researchers then built a new way to “score” climate models more fairly, so these natural swings don’t throw off the results. Using this, they even upgraded a tool called a climate emulator — a kind of shortcut simulator that lets scientists (and policymakers) quickly test how pollution and greenhouse gases might shape the future climate.
Professor Noelle Selin, one of the study’s authors, summed it up nicely: It’s tempting to throw the biggest AI model at the problem, but sometimes it’s smarter to stick with the basics.
The takeaway? AI is powerful, but when it comes to predicting our climate, the smartest approach might be a mix of old and new tools — not just chasing the latest tech trend.

New Tech Aims to Quickly Check Brain Health for Soldiers
Researchers at Lincoln Laboratory are creating fast brain health screening tools designed for the military. These technologies build on years of research and could help spot issues more quickly in the field.
And it’s not just for soldiers — the same tools might one day be used at sports games, doctor’s offices, or other civilian settings to make brain checkups faster and easier.
Smart Tech for Brain Health in the Military
Cognitive readiness is basically how well someone can react and adapt to the world around them. It’s what lets you catch yourself when you trip, or make the right call in a tough situation based on past experience. For service members, this skill isn’t just useful — it’s critical for survival and mission success.
The problem? Brain injuries are common in the military. From 2000 to 2024, over half a million troops were diagnosed with traumatic brain injury (TBI), caused by anything from training accidents to battlefield blasts. While fatigue from lack of sleep can be fixed with rest, TBIs often require serious, long-term care.
The challenge is that current brain readiness tests aren’t sensitive enough to pick up on small but important changes. Subtle declines in focus, balance, or reaction time can go unnoticed — and untreated.
That’s where researchers at Lincoln Laboratory step in. They’re building portable tools that can quickly screen brain health almost in real time.
- READY App: A smartphone or tablet test that takes less than 90 seconds to check for changes in attention and brain performance.
- MINDSCAPE: A virtual reality (VR) system that runs deeper tests to spot conditions like TBI, PTSD, or sleep deprivation.
Together, these tools could give medics in the field the power to make fast, life-saving decisions.
How They Work
Both tools build on more than a decade of research into cognitive biomarkers — signals that reveal how “ready” the brain really is. According to scientist Thomas Quatieri, the most reliable ones are:
- Balance
- Eye movement
- Speech
For example, in the READY app, you might be asked to follow a moving dot with your eyes, stand still to measure balance, or hold a vowel sound steady. The app then crunches the numbers and shows whether your brain is staying sharp or starting to wobble.
If something looks off, the user can move on to MINDSCAPE, where VR goggles and advanced sensors (like EEGs and eye trackers) measure reaction time, memory, and stress responses more precisely.
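For a sense of what a follow-the-dot screening metric might compute, here is a hypothetical sketch. This is not the actual READY algorithm; the signal, metric, and threshold are all invented for illustration.

```python
import numpy as np

# Hypothetical follow-the-dot metric (NOT the actual READY algorithm):
# score a trial by the root-mean-square distance between the moving dot
# and the user's gaze samples.

def pursuit_rms_error(dot_xy, gaze_xy):
    """RMS distance (screen units) between target and gaze positions."""
    return float(np.sqrt(np.mean(np.sum((dot_xy - gaze_xy) ** 2, axis=1))))

# Simulated trial: the dot traces a circle; the gaze follows with a slight lag.
t = np.linspace(0.0, 2.0 * np.pi, 200)
dot = np.column_stack([np.cos(t), np.sin(t)])
gaze = np.column_stack([np.cos(t - 0.05), np.sin(t - 0.05)])  # small lag

error = pursuit_rms_error(dot, gaze)
attentive = error < 0.1   # illustrative threshold, not a clinical cutoff
print(round(error, 3), attentive)
```

A real screening tool would combine several such signals (balance sway, speech steadiness, reaction time) and compare them against a person's own baseline rather than a fixed cutoff.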
Why It Matters
These tools aren’t just about spotting problems — they’re about keeping people safe. A soldier showing early signs of brain stress can get the help they need before things get worse. And in the future, these same tools could be used at sports events, doctors’ offices, or anywhere brain health needs quick checking.
As Quatieri puts it: attention is the key to being “ready.” With new tech like READY and MINDSCAPE, the military may finally have a faster, smarter way to protect it.

MIT Creates Tool to Play With “Impossible” Objects
MIT researchers have built a new tool, nicknamed Meschers, that lets people see and even edit objects that shouldn’t exist in real life — think wild, Escher-style optical illusions.
The system works in 2.5 dimensions, making it possible to visualize and tweak shapes that seem to defy the laws of physics. Beyond being a mind-bending art experiment, the tool could actually help scientists and designers explore new ideas for structures and objects that push the limits of imagination.
MIT’s New Tool Brings “Impossible” Objects to Life
M.C. Escher’s mind-bending art has fascinated people for decades — staircases that loop endlessly, triangles that couldn’t possibly exist, and shapes that twist the rules of reality. Now, MIT scientists are taking these illusions a step further with a new computer graphics tool called Meschers.
Meschers can turn drawings or 3D models into 2.5-dimensional illusions, making it possible to visualize and even edit “physically impossible” shapes, like Escher-style windows, buildings, or even donuts. Unlike past tricks that relied on clever camera angles, this method actually lets researchers relight, smooth, and study the shapes while still preserving the illusion.
So what’s the point? Beyond just making cool art, the tool could help scientists explore geometry problems that were impossible to study before. For example, they can now measure distances across an “impossible surface” or simulate how heat spreads over it. MIT PhD student Ana Dodik, who leads the project, explains:
“With Meschers, we’ve unlocked a new class of shapes for artists and scientists to explore — even ones that can’t exist in real life.”
The researchers tested the tool on a series of quirky creations, including a warped bagel they called an “impossibagel.” With Meschers, they could calculate how long it might take an ant to crawl across the surface, as if the object were real.
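The "how long would an ant take" question is, at heart, a shortest-path computation over a surface. As a toy illustration, and not Meschers' actual method, distances on a mesh can be approximated by running Dijkstra's algorithm over its edge graph:

```python
import heapq

# Toy illustration: treat a mesh as a weighted graph and use Dijkstra's
# algorithm to measure travel distance across it. (Meschers relies on far
# more sophisticated geometry; this only conveys the flavor of the task.)

def dijkstra(adjacency, start):
    """Shortest distance from `start` to every reachable vertex."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adjacency[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# A tiny ring-shaped "mesh" with one longer chord across it.
mesh = {
    "a": [("b", 1.0), ("d", 1.0)],
    "b": [("a", 1.0), ("c", 1.0)],
    "c": [("b", 1.0), ("d", 1.0), ("a", 2.5)],
    "d": [("a", 1.0), ("c", 1.0)],
}

distances = dijkstra(mesh, "a")
print(distances["c"])  # shortest "crawl" from a to c
```

The novelty in the MIT work is that such measurements remain meaningful even when the shape the mesh depicts could never exist in three dimensions.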
Artists and designers might also use Meschers to play with lighting and shading — imagine relighting an impossible dog-on-a-skateboard scene at sunrise or sunset without breaking the illusion.
For now, Meschers is just the beginning. The MIT team is working on making it more user-friendly and teaming up with perception scientists to learn how people process the impossible. Their work will debut at the SIGGRAPH conference in August.
As researcher Justin Solomon puts it:
“Meschers shows that computer graphics don’t have to be bound by the rules of physical reality. It gives artists a chance to reason about shapes we’ll never actually find in the real world.”
In other words: MIT has found a way to make the impossible… possible.

How AI Companions Are Reshaping Human Relationships in the Digital Age
In today’s world, where technology permeates nearly every dimension of human existence, Artificial Intelligence (AI) is beginning to assume roles once thought to be uniquely human—acting as companions, confidants, and in some cases, even romantic partners. This evolving dynamic between people and AI prompts deep reflections on what companionship truly means, how fundamental the desire for connection is, and what implications may arise when digital entities begin replacing human interaction.
The Rise of AI Companions
Although the idea of AI-driven companionship has existed conceptually for some time, its practical implementation has only recently gained momentum. These systems are created to deliver emotional support, foster a sense of connection, and in certain contexts, replicate aspects of romantic or intimate human relationships.
A notable example is Replika, a widely recognized AI chatbot designed to provide comfort and empathetic interaction. Through ongoing text-based conversations, Replika adapts to individual users, refining its responses to simulate a more authentic and personalized emotional bond.
Another noteworthy case is Gatebox, which introduces a holographic AI partner. Aimed primarily at individuals who live alone, Gatebox’s digital avatar can send daily messages, greet its owner upon arrival, and even manage smart home devices—offering a sense of companionship and presence.
More controversially, Harmony by RealDoll merges AI with a humanoid robotic body to create a romantic and physical partner. Harmony can engage in conversations, remember user preferences, and display different personality traits, pushing the boundaries of what companionship with machines can look like.
The Psychology of AI Companionship
What drives people to seek connection with artificial intelligence? The motivations are diverse, but several recurring themes help explain the trend.
One significant factor is the global rise in loneliness. Social isolation is now widely recognized as a serious health concern, and in this context, AI companions provide a substitute for human connection. This issue is especially evident in places like Japan, where demographic and cultural shifts have contributed to more solitary lifestyles.
Another reason lies in the sense of control and convenience that digital partners offer. Unlike human relationships, AI companions are constantly available, free from personal emotional burdens, and can be paused or deactivated whenever the user wishes.
Finally, technological progress has made AI interactions increasingly lifelike. Modern systems can recall previous conversations, demonstrate empathy, and mimic natural social behavior. These qualities make them more convincing as partners, blurring the line between human and machine companionship.
Societal Implications of AI Companionship
The growing presence of AI companions carries significant consequences for both individuals and society at large, reshaping how we understand relationships. For many, these digital partners can support emotional well-being by offering a safe, judgment-free environment to share feelings—an especially valuable tool for people dealing with social anxiety or other mental health struggles. In some cases, interacting with AI can even act as practice, helping users build social confidence and interpersonal skills that may later be applied to human connections.
At the same time, concerns remain about the possible downsides. A key issue is the risk of increased isolation if individuals begin favoring the simplicity of AI interactions over the complexity of human relationships. Such reliance could ultimately worsen feelings of loneliness instead of resolving them.
Beyond the psychological effects, ethical questions also emerge. Forming attachments to digital entities that lack genuine consciousness or emotion challenges long-standing ideas of empathy, intimacy, and the essence of being human.
Implications for Dating and Marriage
As artificial intelligence companions grow in sophistication and popularity, their influence on traditional relationships—particularly dating and marriage—becomes increasingly evident. For many, these digital partners provide the support and companionship that has historically come from human interaction. In fact, some younger generations are already using AI as a low-pressure environment to practice communication skills, which may later enhance their ability to form meaningful human connections.
Yet, the possibility remains that certain individuals may begin favoring AI partners over human ones. The reliability, absence of criticism, and high degree of personalization offered by AI make them attractive alternatives, especially to those who have faced challenges or disappointments in past relationships. This shift is gradually reshaping how society defines companionship.
Ultimately, the rise of AI relationships forces us to reconsider long-held beliefs about love, intimacy, and connection. It prompts profound questions: what does it mean to give or receive love, and must those experiences be uniquely human?
Navigating a New Era of Companionship
As AI companions become an increasingly common presence, it is vital to approach this shift with responsibility and foresight. Striking a balance between turning to AI for emotional support and nurturing genuine human connections will be essential. Encouraging social interaction alongside AI use can help reduce the risk of deeper isolation.
Developers also carry an ethical responsibility in shaping this technology. Transparency about the limits of AI companionship is critical, as is designing systems that avoid exploiting vulnerable users. Continued research into the long-term impact of human–AI relationships will be necessary to guide ethical standards, policies, and best practices. Just as importantly, society must engage in open discussions about the benefits and risks these digital relationships present.
The emergence of AI as a source of companionship—whether in the form of friendship, comfort, or even love—marks a profound transformation in our social fabric. While these technologies can provide valuable emotional support, they also raise significant psychological, ethical, and cultural questions. Moving forward, careful reflection is needed to ensure that innovation does not erode what makes us distinctly human: our ability to forge authentic connections grounded in empathy and understanding.

OpenAI Plans to Launch In-House AI Chips with Broadcom by 2026
OpenAI is preparing to roll out its first custom-designed artificial intelligence chips in partnership with semiconductor leader Broadcom, with mass production expected to start in 2026. These processors will be built specifically for OpenAI’s internal operations rather than being sold commercially, signaling a strategic step toward reducing dependence on Nvidia and strengthening control over its own hardware ecosystem.
Talks between the two companies have been underway since last year, and the collaboration puts OpenAI in the same league as major tech firms such as Google, Amazon, and Meta — all of which have invested heavily in developing their own AI chips to handle growing computational demands.
Broadcom’s CEO, Hock Tan, recently disclosed that the company had secured AI-related infrastructure contracts exceeding $10 billion from a major, previously unnamed client, now widely believed to be OpenAI. Deliveries of these chips are projected to begin in 2026, a move that could give both companies a stronger foothold in the rapidly expanding AI hardware sector.

AI-enhanced LIGO May Uncover Elusive Black Hole Class
A cutting-edge artificial intelligence system developed by Google DeepMind is showing promise in boosting the Laser Interferometer Gravitational-Wave Observatory (LIGO)'s detection capabilities—raising the possibility of identifying a novel class of black holes that have so far remained undetected.
LIGO observes gravitational waves generated by black holes or other massive objects spiraling into one another. These waves stretch and squeeze the fabric of space-time by a staggeringly minute amount—a displacement roughly 10,000 times smaller than the width of an atomic nucleus. Over the past decade, the observatory has successfully registered nearly 100 such cosmic collision signals.
The AI algorithm improves the observatory's sensitivity by significantly reducing background noise—particularly the tiny vibrations (“wobbles”) of LIGO's mirrors—that can drown out these delicate signals. The technology is capable of cutting noise levels by a factor of up to 100 compared to conventional methods.
By refining signal clarity to this degree, LIGO could be equipped to detect different types of black hole mergers—such as those involving intermediate-mass black holes, which fall between the commonly observed stellar-mass and the enormous supermassive varieties. It might even pick up on more eccentric merger events that have until now slipped under the radar.

Banking’s Next Chapter: The Rise of Agentic AI
Agentic AI is coming of age. And with it comes new opportunities in the financial services sector. Banks are increasingly employing agentic AI to optimize processes, navigate complex systems, and sift through vast quantities of unstructured data to make decisions and take actions—with or without human involvement. “With the maturing of agentic AI, it is becoming a lot more technologically possible for large-scale process automation that was not possible with rules-based approaches like robotic process automation before,” says Sameer Gupta, Americas financial services AI leader at EY. “That moves the needle in terms of cost, efficiency, and customer experience impact.”
From responding to customer service requests, to automating loan approvals, adjusting bill payments to align with regular paychecks, or extracting key terms and conditions from financial agreements, agentic AI has the potential to transform the customer experience—and how financial institutions operate too.
Adopting emerging technologies such as agentic AI is becoming critical for businesses that want to stay competitive, according to Murli Buluswar, head of US personal banking analytics at Citi. “The ability of a company to embrace new technical capabilities and reimagine its operations will determine which organizations thrive and which fall behind,” he notes. “Employees and firms alike must understand that the way work is carried out will change in profound ways.”
The Shifting Landscape
The banking industry is already moving quickly to incorporate agentic AI. In a 2025 survey conducted by MIT Technology Review Insights, which polled 250 banking executives, 70% reported that their institutions are experimenting with or actively using this technology—16% through live applications and 52% via pilot programs.
Early evidence points to strong benefits across multiple functions. More than half of the executives surveyed said agentic AI is highly effective in areas like fraud detection (56%) and security (51%). Other top applications included driving efficiency and cost reduction (41%) as well as enhancing customer experience (41%).

A “Smart Trainer” That Helps LLMs Switch Between Code and Text
MIT researchers have introduced CodeSteer, a specialized assistant designed to help large language models (LLMs) decide when to rely on text-based reasoning and when to switch to code. This guidance dramatically improves LLM performance on complex computational and symbolic tasks.
While LLMs are strong at interpreting written context and producing logical answers, they frequently stumble on even simple math problems. Pure textual reasoning often isn’t the best method for algorithmic or numerical queries. Although many models can write code in languages like Python, they often fail to recognize when coding is the right tool, or they generate inefficient solutions when they do use it.
CodeSteer’s Role
CodeSteer—a smaller, fine-tuned LLM—acts like a coach, prompting a larger LLM to alternate between textual reasoning and code execution. After each attempt, it reviews the result, compares it with previous outputs, and provides corrective feedback until a satisfactory solution is reached.
In testing, this approach increased accuracy on symbolic tasks (e.g., multiplication, Sudoku solving, block manipulation) by over 30%. Surprisingly, models paired with CodeSteer outperformed stronger standalone systems, enabling smaller LLMs to rival or surpass advanced ones in reasoning-heavy problems.
Applications Beyond Textual Reasoning
This method could enhance LLMs in areas where logic and computation intertwine, such as robot path planning or logistics optimization.
Chuchu Fan, MIT associate professor of aeronautics and astronautics, explained the philosophy:
“Instead of building a model that can do everything, we focus on enabling LLMs to choose the right tools at the right time, making use of existing expertise.”
How It Works
- CodeSteer first decides whether a problem is best solved with text or code.
- It instructs the larger LLM accordingly.
- If the result is wrong, CodeSteer iteratively refines the strategy, sometimes requiring the LLM to use algorithms or more complex coding approaches.
- A built-in checker ensures the code isn’t oversimplified or inefficient, while another mechanism verifies the final output.
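The steps above amount to an orchestration loop: decide on a mode, let the larger model attempt the task, verify, and feed corrections back. The sketch below is a toy illustration of that loop, not MIT's implementation—`decide_mode`, `solver_llm`, and `checker` are hypothetical stand-ins for what are, in the real system, fine-tuned LLMs and task-specific verifiers.

```python
# Toy sketch of a CodeSteer-style orchestration loop (illustrative only;
# the real system uses fine-tuned LLMs, not these heuristic stand-ins).
# Problems here are toy strings of the form "expression = expected".

def decide_mode(problem: str) -> str:
    # Stand-in for CodeSteer's first decision: textual reasoning vs. code.
    return "code" if any(ch.isdigit() for ch in problem) else "text"

def solver_llm(problem: str, mode: str, feedback: str) -> str:
    # Hypothetical stand-in for the larger LLM being coached.
    if mode == "code":
        # Pretend the model wrote and executed code for the task.
        expr = problem.split("=")[0]
        return str(eval(expr))  # toy arithmetic problems only
    return "roughly nine thousand"  # vague, unverifiable textual guess

def checker(problem: str, answer: str) -> bool:
    # Stand-in for CodeSteer's output verifier.
    expected = problem.split("=")[1].strip()
    return answer.strip() == expected

def orchestrate(problem: str, max_rounds: int = 3) -> str:
    mode, feedback = decide_mode(problem), ""
    answer = ""
    for _ in range(max_rounds):
        answer = solver_llm(problem, mode, feedback)
        if checker(problem, answer):
            return answer
        # Corrective feedback: switch strategy and try again.
        mode = "code" if mode == "text" else "text"
        feedback = f"'{answer}' was wrong; try {mode} instead."
    return answer

print(orchestrate("1234 * 5678 = 7006652"))
```

The essential design point survives even in this toy form: the coach never solves the problem itself; it only routes the solver between strategies and keeps iterating until the verifier is satisfied.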
The researchers built their own dataset, SymBench, consisting of 37 symbolic reasoning tasks (math, spatial logic, optimization, ordering). Using this data, they fine-tuned CodeSteer, which boosted accuracy from 53.3% to 86.4% across multiple benchmarks.
Looking Ahead
Future work aims to streamline CodeSteer’s iterative prompting and potentially merge the guiding mechanism into a single, unified model.
External experts praised the work, calling it a “simple but powerful” way to boost LLMs without retraining massive models.
Source: https://news.mit.edu/2025/smart-coach-helps-llms-switch-between-text-and-code-0717

Switzerland Launches Fully Open AI Model
According to AI News, a consortium of Swiss institutions has introduced a new open large language model (LLM) named Apertus—derived from the Latin word for “open.” Developed by EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS), Apertus is designed as a public foundation for research and practical applications. Its defining feature: every stage of its design, training, and release is openly accessible.
Researchers, companies, and developers can employ Apertus to build tools such as chatbots, translation systems, or educational applications. The model is available for download on Hugging Face or via Swisscom, a key partner in the initiative. Two sizes are offered—an 8B-parameter model and a 70B-parameter version—both released under a permissive open-source license for use in academia, education, and commercial settings.
Commitment to openness
Unlike many AI systems that reveal only partial details, Apertus is fully transparent: its architecture, training datasets, and documentation are all publicly accessible.
Martin Jaggi, Professor of Machine Learning at EPFL and member of the Swiss AI Initiative’s steering committee, emphasized that the model provides a blueprint for building trustworthy and sovereign AI systems. He added that Apertus will be continuously updated by joint teams from EPFL, ETH Zurich, and CSCS.
Thomas Schulthess, CSCS Director and ETH Zurich Professor, described Apertus as more than a research product, calling it a long-term infrastructure for innovation and knowledge development in Switzerland’s AI ecosystem.
A truly multilingual model
Apertus was trained on 15 trillion tokens spanning over 1,000 languages, with around 40% of the dataset in non-English languages. Uniquely, it incorporates Swiss German, Romansh, and other low-resource languages often excluded from mainstream LLMs.
“This model is designed for the public good,” said Imanol Schlag, ETH Zurich Research Scientist and technical lead of the project. “It is among the very few LLMs of this scale to combine multilingualism, openness, and compliance as its foundation.”
Deployment and access
Swisscom has already integrated Apertus into its sovereign AI platform. Hackathon participants will also be able to test the model during Swiss {ai} Weeks (running until October 5, 2025) through a Swisscom-hosted interface. Business clients can begin using Apertus immediately, while international access will be provided via the Public AI Inference Utility.
Joshua Tan, lead maintainer of the utility, described Apertus as “a form of public infrastructure, comparable to roads, water systems, or electricity—built by public institutions for the public interest.”
Transparency and legal compliance
Every element—model weights, training data, and checkpoints—is released under a permissive open-source license. The training process adhered to Swiss data protection standards, copyright rules, and EU AI Act transparency requirements. Sensitive personal data was excluded, and ethical filters removed unwanted content before training.
Looking forward
“Apertus shows that generative AI can be both cutting-edge and fully open,” said Antoine Bosselut, EPFL Professor and co-lead of the Swiss AI Initiative. He emphasized that this release marks the beginning of a long-term commitment to building open and sovereign AI for global benefit.
Planned updates include broadening the Apertus family, improving efficiency, and creating specialized tools for fields such as law, healthcare, climate science, and education—all while preserving transparency as a guiding principle.
Source: https://www.artificialintelligence-news.com/news/switzerland-releases-its-own-fully-open-ai-model/

Brain-Inspired Computing Could Transform AI
The tiny worm Caenorhabditis elegans has a nervous system no thicker than a strand of hair, yet it manages to coordinate complex behaviors as it searches for food. Daniela Rus, a computer scientist at MIT, finds this level of biological efficiency remarkable. Fascinated by its simple yet powerful brain, she cofounded Liquid AI, a company building artificial intelligence systems modeled on the worm’s neural architecture.
Rus is part of a growing movement of researchers who believe that mimicking the structure and function of biological brains could lead to leaner, more adaptive, and ultimately smarter AI. “If we truly want to advance AI, we need to integrate insights from neuroscience,” says Kanaka Rajan, a computational neuroscientist at Harvard University.
This approach — known as neuromorphic computing — is unlikely to fully replace conventional deep-learning models or standard computer hardware, according to Mike Davies, who leads Intel’s Neuromorphic Computing Lab. Instead, he envisions a future where different types of computing systems operate side by side, each suited to different challenges.
The idea of drawing inspiration from biology is not new. In the 1950s, Frank Rosenblatt designed the perceptron, a primitive model of how neurons might communicate. Decades later, its core principles helped inspire deep learning, a technique in which artificial neurons are stacked in multiple layers to detect patterns. Deep learning has driven advances such as self-driving cars, but it remains data- and energy-intensive and struggles with rapid adaptation. “The current approach is brute force — just bigger and bigger — and extremely inefficient,” says Subutai Ahmad, chief technology officer at Numenta, a company exploring brain-inspired efficiency.
Governments continue to pour resources into traditional AI. For instance, in January the Trump administration announced Stargate, a $500 billion plan to build massive data centers to power large-scale models. At the same time, more efficient alternatives are emerging. The Chinese company DeepSeek recently introduced a model that performs at chatbot-level quality while using far less data and energy, suggesting that brute force is not the only viable strategy.
Artificial Neurons That Behave More Like Real Ones
In living brains, neurons accumulate electrical signals until a threshold is reached, at which point they fire and pass the information forward. Neuromorphic engineers have recreated this process through spiking neural networks (SNNs), which transmit discrete bursts of activity. These networks can be simulated in software or directly implemented in hardware.
Traditional deep learning, by contrast, activates nearly all artificial neurons simultaneously, creating inefficiencies. Biological and spiking systems save energy by activating selectively — only when signals surpass a threshold. Another key difference is that in brains, memory and computation occur in the same structures, while conventional computers separate them into processors (GPUs) and memory (RAM). Moving data back and forth wastes both time and power.
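The accumulate-until-threshold behavior described above is captured by the textbook leaky integrate-and-fire (LIF) model. The sketch below is a minimal illustration with arbitrary parameters, not the neuron model of any particular neuromorphic chip.

```python
# Leaky integrate-and-fire (LIF) neuron: a textbook spiking model.
# Membrane potential leaks toward rest, accumulates input current, and
# emits a discrete spike only when it crosses a threshold.

def simulate_lif(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Return the spike train (0/1 per timestep) for an input sequence."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current      # leak, then integrate the input
        if v >= threshold:          # threshold crossed: fire
            spikes.append(1)
            v = reset               # reset after the spike
        else:
            spikes.append(0)        # silent: nothing sent downstream
    return spikes

# Weak input never reaches threshold, so no spikes (and no downstream work);
# stronger input fires sparsely, every third step here.
print(simulate_lif([0.05] * 10))   # all zeros
print(simulate_lif([0.4] * 10))
```

The energy argument is visible in the output: for the weak signal the neuron stays silent the entire time, which in hardware means no events to transmit or compute on, in contrast to a dense layer that multiplies every weight on every step.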
The BrainScaleS-2 chip, created under the Human Brain Project, demonstrates the potential of neuromorphic hardware. With spiking neurons physically embedded in the system, it can both process and store information efficiently. Tests show that it consumes vastly less energy than standard hardware — up to a thousandfold reduction in some cases. For handwriting recognition, it required just one percent of the energy a GPU would use. Scaling such systems is the next major challenge.
Scaling Up Spiking Networks
Several major tech companies are now pushing to build larger neuromorphic systems. In 2023, IBM released NorthPole, a chip that combines memory and processing to improve energy efficiency. In 2024, Intel unveiled Hala Point, currently the world’s largest neuromorphic system, containing over a billion artificial neurons — about the same as an owl’s brain. The system is powered by more than 1,100 Loihi 2 chips, each of which uses sparsity and spiking mechanisms similar to biological brains.
These chips show impressive energy savings. For instance, Intel researchers tested Loihi 2 on video and audio processing. By preserving information from one frame to the next rather than treating each frame as entirely new, the system avoided unnecessary recalculations. In one experiment, it consumed only 1/150th the energy of a GPU running the same task. Researchers can also reconfigure the architecture to try new algorithms, some of which may not even exist yet.
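The frame-to-frame trick Intel describes—carry information forward and process only what changed—can be illustrated outside neuromorphic hardware with a simple delta filter. This is a hedged sketch of the general idea, not Loihi 2's actual pipeline.

```python
# Delta filtering between video frames: only pixels that changed by more
# than a tolerance are passed downstream, so a near-static scene costs
# almost nothing instead of a full recomputation per frame.

def changed_pixels(prev_frame, next_frame, tol=0.1):
    """Return indices of pixels whose change exceeds tol."""
    return [i for i, (a, b) in enumerate(zip(prev_frame, next_frame))
            if abs(a - b) > tol]

frame_a = [0.2, 0.5, 0.9, 0.1]
frame_b = [0.2, 0.8, 0.9, 0.1]   # only pixel 1 moved

work = changed_pixels(frame_a, frame_b)
print(f"recompute {len(work)} of {len(frame_b)} pixels: {work}")
```

In a spiking system the "changed pixels" would arrive as events, and the same sparsity that saves transmission here saves computation there.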
“The most exciting aspect is that this hardware could support entirely new approaches to learning and efficiency,” says James Aimone, a neuroscientist at Sandia National Laboratories.
Worm-Inspired Learning: Liquid Neural Networks
Brains are powerful not only because of their structure but also because of their adaptability. Even C. elegans, with just 302 neurons and roughly 7,000 connections, can learn continuously from its environment. Inspired by this, MIT’s Rus and researcher Ramin Hasani developed liquid neural networks. Unlike conventional deep learning models, which “freeze” after training, liquid networks remain flexible, adjusting their parameters in real time.
These networks are based on equations that approximate how worm neurons respond to changing information. Though solving such equations is computationally demanding, Hasani’s team found efficient ways to run them in practice. In 2023, they showed that even tiny liquid networks — with as few as 34 neurons — outperformed much larger deep-learning models in guiding drones through unfamiliar environments.
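A liquid network's defining property is that its dynamics are ordinary differential equations whose effective time constant depends on the input, so the cell keeps adapting after training. The sketch below is one hypothetical liquid-time-constant-style neuron integrated with forward-Euler steps; the actual Liquid AI equations, parameters, and solvers differ.

```python
import math

# One liquid-time-constant (LTC)-style neuron, integrated with Euler steps.
# An input-dependent gate modulates how strongly the state is pulled toward
# its target, so the response speed changes with the signal itself
# (illustrative parameters, not Liquid AI's).

def ltc_step(x, u, dt=0.05, tau=1.0, A=1.0, w=2.0, b=0.0):
    f = 1.0 / (1.0 + math.exp(-(w * u + b)))   # input-dependent gate
    dx = -x / tau + f * (A - x)                # ODE right-hand side
    return x + dt * dx                         # forward-Euler update

x = 0.0
for u in [0.0, 0.0, 1.0, 1.0, 1.0, 0.0]:       # a changing input signal
    x = ltc_step(x, u)
print(round(x, 4))
```

The computational cost Hasani's team had to tame is visible here: every timestep requires integrating the ODE, which is why closed-form approximations to these dynamics were an important part of making liquid networks practical.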
Their company, Liquid AI, has since scaled up the approach. In 2025, they announced LFM-7B, a seven-billion-parameter liquid network designed for language tasks, which rivals or surpasses traditional models of the same size. While the method is computationally intensive, Rajan describes it as a significant step toward AI that more closely resembles biological intelligence.
Borrowing from the Human Neocortex
Other researchers look beyond worms, drawing inspiration from the neocortex — the folded outer layer of the human brain responsible for reasoning and perception. This region is organized into vertical structures called cortical columns, each containing tens of thousands of neurons arranged in smaller minicolumns. Some theorists, like Jeff Hawkins of Numenta, argue that these minicolumns are key to intelligence, functioning as mapping units for our senses and thoughts.
Although not all neuroscientists agree with Hawkins’ view, his ideas have fueled new neuromorphic projects. In 2024, Numenta launched the Thousand Brains Project, which combines algorithms and architectures inspired by cortical columns. Early tests suggest that such structures could allow systems to learn and recognize complex objects in real time.
For now, Numenta runs its models on conventional hardware, but future designs may integrate spiking neurons into column-based physical systems. This co-design of algorithms, architecture, and hardware could significantly boost efficiency. “How the hardware is built shapes how the algorithms function,” notes Catherine Schuman, a computer scientist at the University of Tennessee.
A Future of Co-Designed Intelligence
History shows that innovations in AI often depend on the right mix of algorithms, hardware, and timing. Deep learning, for example, was first explored in the 1980s but only became practical when GPUs emerged in the 2010s. As Sara Hooker of Cohere points out, technological breakthroughs can hinge on “hardware lotteries” — the chance availability of the right tools.
Neuromorphic computing may be at a similar turning point. If researchers can align new algorithms with brain-inspired architectures and efficient hardware, AI could become more adaptable, less resource-hungry, and closer in spirit to the biological systems that inspired it. As Aimone puts it, this could pave the way for a future of computing that is “both more capable and far more energy-efficient.”
Source: https://www.sciencenews.org/article/brainlike-computers-ai-improvement
