Catch up on the latest Artificial Intelligence news

AI Chatbots Offer Mental Health Support, but They Are No Substitute for a Psychologist

Artificial intelligence is becoming part of everyday life, and one of its newest uses is in the mental health space. A growing number of people are turning to AI chatbots for emotional support, particularly young users facing long wait times or high therapy costs.

How These Tools Work

AI support bots rely on language models that predict words and phrases based on training data. Unlike human professionals, they don’t “understand” feelings or situations — they simply generate responses from patterns. Because of this, they can’t notice non-verbal signals like tone, body language, or subtle emotional shifts, which psychologists use to build tailored treatment plans.

Potential Benefits

  • Accessibility: Available around the clock, helpful for people who need immediate support or who live in areas with limited access to psychologists.

  • Affordability: Often free or much cheaper than traditional therapy, making them attractive to students and those under financial pressure.

  • Consistency: Bots provide responses without being influenced by external factors such as fatigue, mood, or workload.

What They Cannot Replace

Despite their convenience, chatbots have clear boundaries. They are not suitable for handling crisis situations, deep trauma, or complex psychiatric conditions. Over-reliance could be harmful if people mistake automated replies for expert care. Crucially, they cannot replicate the empathy, judgment, and nuanced understanding a trained professional brings to therapy.

A Supportive, Not Substitutive, Role

Experts see value in using AI alongside traditional care. For instance, bots can help people reflect between therapy sessions, practice coping techniques, or manage stress while waiting for appointments. They could also play a role in screening or post-treatment check-ins. However, they should never be positioned as a full replacement for human therapists.

Key Takeaway

AI chatbots are likely to remain part of the mental health toolkit, offering scalable, low-cost support. But their role is best thought of as complementary — useful for everyday stress management, but not a stand-in for professional help in complex or crisis situations. Human connection and clinical expertise remain irreplaceable.

Source: https://www.abc.net.au/news/2025-04-18/ai-chatbots-therapists-how-it-works-versus-seeing-a-psychologist/105139252

AI Learns to Read Us: New Research Lets Machines Spot Personality Through Writing

Recent advances show that AI can pick up on personality traits simply from how we write—no test or long interview needed. Whether from essays, social media posts, or everyday messages, language alone gives signals about who we are. New studies are also making these judgments more transparent, revealing how AI arrives at its conclusions. This could change how we use personality assessment in areas ranging from education to hiring to therapy.


How Machines Understand Personality Through Words

Researchers at the University of Barcelona carried out an investigation using some of the latest language models, including BERT and RoBERTa, to see how well they could infer personality traits. They trained the models on large collections of writing by people whose personality profiles were already known from questionnaires. These profiles came from two popular frameworks:

  • The Big Five traits: openness, conscientiousness, extraversion, agreeableness, and emotional stability.

  • The Myers-Briggs Type Indicator (MBTI), which assigns people to categories like introvert vs. extravert, intuitive vs. sensing, etc.

To ensure the AI’s decisions weren’t just blind guesses, the team used a technique called integrated gradients. This method helps map which specific words or phrases influence the AI’s predictions the most. For instance, a word like “hate” might seem negative at first, but when it's part of a larger phrase like “I hate to see others suffer,” it could actually reflect empathy — depending on the context.
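
As a concrete illustration, the sketch below computes integrated gradients for a toy text classifier; the model, token ids, and zero-embedding baseline are assumptions made for this example, not the Barcelona team's actual setup.

    # Toy integrated-gradients sketch (illustrative; not the study's actual models or code).
    import torch
    import torch.nn as nn

    class TinyTraitClassifier(nn.Module):
        """Stand-in for a BERT/RoBERTa-style trait model: embeddings -> mean pool -> linear head."""
        def __init__(self, vocab_size=1000, dim=16, n_traits=5):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, dim)
            self.head = nn.Linear(dim, n_traits)

        def forward_from_embeddings(self, emb):
            return self.head(emb.mean(dim=1))  # (batch, n_traits) trait scores

    def integrated_gradients(model, token_ids, trait_idx, steps=50):
        """(input - baseline) times the average gradient along the straight-line path."""
        emb = model.emb(token_ids).detach()     # (1, seq_len, dim) input embeddings
        baseline = torch.zeros_like(emb)        # all-zero baseline embedding
        total_grad = torch.zeros_like(emb)
        for alpha in torch.linspace(0.0, 1.0, steps):
            point = (baseline + alpha * (emb - baseline)).requires_grad_(True)
            score = model.forward_from_embeddings(point)[0, trait_idx]
            grad, = torch.autograd.grad(score, point)
            total_grad += grad
        attributions = (emb - baseline) * total_grad / steps
        return attributions.sum(dim=-1)         # one attribution score per token

    model = TinyTraitClassifier()
    tokens = torch.tensor([[12, 57, 3, 99]])    # pretend token ids for a short sentence
    print(integrated_gradients(model, tokens, trait_idx=0))

In words: the method interpolates between a neutral baseline and the real input, averages the gradients along that path, and weights them by how far each token embedding sits from the baseline, which is what surfaces the influential words.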


Big Five vs. MBTI: What the Data Show

  • The Big Five framework turned out to be more reliable. Its traits correlated more stably with linguistic patterns, yielding predictions consistent with what is reported in the psychological literature.

  • The MBTI-based approach was less consistent; models based on MBTI tended to latch on to superficial clues rather than deeper language use, which sometimes led to less meaningful or stable predictions.


Applications, Ethics, and the Road Ahead

Such AI-driven personality detection has many possible uses:

  • Psychology & therapy: tracking shifts in people’s mood or personality over time via their writing.

  • Hiring & education: learning style, personality fit or engagement could be inferred from writing to customise education or evaluate job applicants.

  • Digital assistants / chatbots: adjusting responses based on users’ personality traits to make them more natural or effective.

At the same time, the researchers emphasize ethics, fairness, and transparency. They argue that:

  • Any use of this technology should rely on scientifically solid models.

  • Explainable decision-making is critical — understanding why the AI made a particular judgment (which words influenced it) helps guard against bias or misuse.

  • These tools shouldn’t replace traditional personality assessments, but rather work alongside them for a fuller picture.

Looking ahead, the team intends to test how well their findings generalise: to other forms of writing, to other languages and cultures, and with other behavioral data (such as speech or online activity). They also want to explore how digital behavior beyond text might enrich the picture of personality.

Source: https://www.thebrighterside.news/post/artificial-intelligence-is-learning-to-understand-people-in-surprising-new-ways/

Facial Analysis With AI Linked to Predictions of Professional Success

A recent study from researchers at several universities found that AI models can use a single image of someone’s face to infer personality traits, which in turn can help predict their likelihood of success in education and careers.

The team examined photos from LinkedIn and the photo records of MBA programs in the U.S., gathering data on 96,000 graduates. From those, they inferred the “Big Five” personality traits, then checked how those traits related to outcomes like job compensation, career progression, and academic achievements.

One key finding was that traits like neuroticism—when detected from facial images—were generally associated with poorer career outcomes. On the other hand, traits such as conscientiousness were linked to better success in both academics and employment.

The study used computer vision and natural language processing methods to evaluate facial images. They also tested whether different images of the same person (for example, changing expressions or lighting) changed the trait inferences, and found that the AI’s judgments were relatively stable.

While the method shows promise for making personality assessment more scalable, the authors warn about ethical risks. These include potential misuse in hiring or admissions, risks of bias, and the challenge of preserving personal autonomy and fairness.

Overall, the researchers suggest that being able to infer personality from facial images might become as influential in predicting success as more traditional metrics—such as education, GPA, or test scores—though they stress they are not recommending it for widespread use without careful ethical and practical safeguards.

Source: Computerworld

AI Pushes Mental Health Support Into the Digital Age

Generative artificial intelligence is moving into mental health care, with researchers and startups working to create “virtual psychologists” capable of providing round-the-clock emotional support.

These AI-driven systems are designed to simulate therapeutic conversations, offering users coping strategies, reflection exercises, and even techniques based on cognitive-behavioural therapy. Developers say the aim is not to replace professionals, but to expand access to care.

Filling Gaps in the System

Mental health services worldwide are under pressure, with long waiting lists and rising demand. Virtual psychologists could help bridge that gap, particularly for young people or those in remote areas. Support is available instantly, at little or no cost, and without the stigma some feel when seeking face-to-face therapy.

Benefits and Risks

Advocates highlight affordability, consistency, and availability as major strengths of AI companions. Unlike human therapists, they can engage with thousands of users simultaneously and deliver a uniform quality of response.

However, experts caution that AI cannot fully understand human emotions. The systems generate text based on patterns rather than genuine empathy, and they are not suitable for people in crisis or with severe mental illness. Concerns about data privacy and ethical oversight also remain unresolved.

A Hybrid Future

Most specialists see the technology as a supplement rather than a substitute. Virtual psychologists may support patients between therapy sessions, provide stress-management tools, or help with early screening. Complex cases, however, will continue to require trained professionals.

As generative AI becomes more sophisticated, the debate is shifting from whether these tools should exist to how they can be safely integrated into mental health care.

Source: https://pub.towardsai.net/developing-a-virtual-psychologist-with-gen-ai-f2e87c7d7c28

AI Model Matches Dermatologist Expertise in Evaluating Skin Cancer Aggressiveness

A straightforward artificial intelligence (AI) model has demonstrated performance comparable to experienced dermatologists in assessing the aggressiveness of squamous cell carcinoma (SCC), a prevalent form of skin cancer. This research, led by the University of Gothenburg, highlights the potential of AI in enhancing dermatological evaluations.

In Sweden, over 10,000 individuals are diagnosed with SCC annually, making it the second most common skin cancer type after basal cell carcinoma. The incidence of SCC is rising, particularly on areas of the body with prolonged sun exposure, such as the head and neck. SCC often arises from mutations in keratinocytes, the predominant cell type in the skin's outer layer, and is strongly associated with cumulative ultraviolet (UV) exposure over time.

Diagnosing SCC is typically straightforward; however, determining a tumor's aggressiveness preoperatively poses challenges. Accurate assessment is crucial for planning appropriate surgical interventions. In Sweden, preoperative punch biopsies are not routinely performed for suspected SCC. Instead, surgery is conducted based on clinical suspicion, with the entire excised specimen sent for histopathological analysis. This approach underscores the need for alternative assessment methods that do not require tissue samples, such as AI-based image analysis.

In the study, researchers trained an AI system using 1,829 clinical close-up images of confirmed SCC cases. The AI's ability to classify tumor aggressiveness was then tested on 300 images and compared with assessments from seven independent experienced dermatologists. The results, published in the Journal of the American Academy of Dermatology International, revealed that the AI model's performance closely matched that of the dermatologists. In contrast, agreement among individual dermatologist assessments was only moderate, highlighting the complexity of the task.
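
The article does not describe the model architecture, but the general recipe behind this kind of system, fine-tuning a pretrained image backbone on labelled clinical photos, can be sketched roughly as follows; the backbone, class labels, and hyperparameters are illustrative assumptions rather than the study's actual choices.

    # Illustrative fine-tuning sketch (the Gothenburg study's actual model and labels are not shown).
    import torch
    import torch.nn as nn
    from torchvision import models

    n_classes = 3  # assumed labelling, e.g. low / intermediate / high aggressiveness
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, n_classes)   # swap the ImageNet head for our classes

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        """One gradient step on a batch of (3, 224, 224) image tensors with integer class labels."""
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Dummy batch standing in for labelled clinical close-ups like the 1,829 training images.
    print(train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 2, 1])))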

This study suggests that AI can serve as a reliable tool in evaluating SCC aggressiveness, potentially aiding in more accurate and efficient clinical decision-making.

Source: https://www.news-medical.net/news/20250913/Simple-AI-model-matches-dermatologist-expertise-in-assessing-squamous-cell-carcinoma.aspx

Will AI Replace Doctors? Understanding the Reality Behind the Hype

Artificial intelligence is increasingly integrated into healthcare, transforming how medical professionals work. However, despite its growing capabilities, AI is not expected to replace doctors. Instead, it acts as a powerful support tool, enhancing decision-making, improving patient care, and streamlining administrative tasks.

How AI Supports Medicine

  1. Early Detection of Diseases
    AI algorithms can analyze medical imaging — including X-rays, CT scans, and MRIs — with impressive precision. These systems can spot subtle signs of diseases, such as early-stage cancer, that might be overlooked by human eyes.

  2. Assisting Diagnosis
    By reviewing a patient’s symptoms, history, and test results, AI can suggest possible diagnoses and recommend additional tests. This is particularly valuable in complex cases or in regions with limited access to specialists.

  3. Personalized Treatment Recommendations
    AI can process large amounts of patient data to predict which treatments are likely to be most effective, enabling more individualized care and reducing side effects.

  4. Accelerating Drug Discovery
    AI can streamline drug development by identifying promising candidates, predicting their effectiveness, and optimizing clinical trials. This helps bring new treatments to patients faster and more efficiently.

Limitations of AI in Healthcare

  • Complex Decision-Making
    Medical decisions often require judgment, intuition, and experience. AI can analyze data, but it cannot fully replicate the nuanced reasoning doctors use when faced with incomplete or ambiguous information.

  • Human Interaction Matters
    Patients value empathy, reassurance, and trust in their doctors. Machines, no matter how advanced, cannot provide the emotional support essential to the doctor-patient relationship.

  • Flexibility and Innovation
    Healthcare is constantly evolving. Doctors must adapt to new research, technology, and unforeseen challenges. While AI can assist, it cannot replace human creativity and adaptability in problem-solving.

The Bottom Line

AI should be viewed as a complementary tool rather than a replacement for physicians. It can improve diagnostics, support personalized care, and accelerate research, but the ultimate responsibility for diagnosis and treatment must remain with trained medical professionals. AI-generated insights are valuable guides — not final decisions.

Source: https://www.continuouscare.io/blog/will-ai-replace-doctors-separating-hype-from-reality/

Saudi Arabia Introduces the World’s First AI-Driven Medical Clinic

In a groundbreaking development, Saudi Arabia has unveiled the world's first fully AI-powered medical clinic, marking a significant advancement in healthcare technology. Located in Al-Ahsa, Eastern Province, the clinic is a collaborative effort between Shanghai-based Synyi AI and Saudi Arabia's Almoosa Health Group.

The clinic features "Dr. Hua," an AI-driven virtual physician capable of autonomously diagnosing and prescribing treatments for patients. Patients interact with Dr. Hua via tablet devices, describing their symptoms and undergoing diagnostic tests such as cardiograms and X-rays, facilitated by human medical staff. While Dr. Hua formulates treatment plans, each plan is reviewed and approved by a licensed human doctor to ensure accuracy and safety.

Currently, the clinic focuses on diagnosing and treating respiratory illnesses, covering approximately 30 conditions such as asthma and pharyngitis. Plans are underway to expand services to include up to 50 conditions across respiratory, gastrointestinal, and dermatological categories within the next year.

This innovative initiative aligns with Saudi Arabia's Vision 2030, which aims to diversify the economy and promote technological advancements in various sectors, including healthcare. By integrating AI into medical practices, the clinic seeks to enhance diagnostic accuracy, reduce human error, and improve patient outcomes.

The AI clinic represents a significant step toward the future of healthcare, where artificial intelligence plays a pivotal role in patient diagnosis and treatment, complementing the expertise of human medical professionals.

Source: https://etedge-insights.com/industry/healthcare/can-an-ai-diagnose-you-better-worlds-first-ai-powered-clinic-with-ai-doctor-opens-in-saudi-arabia

Albania Appoints AI-Generated 'Minister' to Combat Corruption

In a groundbreaking move, Albania's Prime Minister Edi Rama has introduced an artificial intelligence (AI) system named Diella as a virtual member of his Cabinet. Diella, whose name means "sun" in Albanian, is tasked with overseeing public procurement processes to ensure transparency and eliminate corruption.

Developed in collaboration with Microsoft, Diella was initially launched earlier this year as a virtual assistant on the e-Albania public service platform. In this role, she assisted users by processing over one million digital inquiries and documents. Now, Diella has been elevated to a cabinet-level position, where she will be responsible for managing public tenders and ensuring they are free from corruption.

Prime Minister Rama emphasized that Diella's appointment aims to expedite government operations and enhance transparency. He stated that her AI capabilities will help the government work faster and with full transparency.

However, the appointment has sparked controversy. The opposition Democratic Party, led by former Prime Minister Sali Berisha, has criticized the move as unconstitutional and politically theatrical. Legal experts and President Bajram Begaj have also expressed reservations about the AI minister's legal status.

Despite the legal debates, Diella's appointment marks a significant step in integrating AI into government operations, potentially setting a precedent for other nations to follow.

Source: https://apnews.com/article/albania-new-cabinet-parliament-ai-minister-diella-corruption-5e53c5d5973ff0e4c8f009ab3f78f369

Nearly Half of Retail Investors Now Using AI Tools

A growing number of everyday investors in Australia are turning to artificial intelligence when making financial decisions. A recent survey shows that close to one in two retail investors with portfolios above $10,000 have begun using platforms like ChatGPT or Microsoft Copilot to support their choices.

Among those who use AI, more than 80% said they were at least somewhat happy with the insights provided. However, some remain hesitant: 43% expressed concerns about trust, while nearly half still prefer advice from traditional sources.

Who is adopting AI fastest?

  • Younger Australians are leading the charge. Roughly 78% of investors aged 18–29 are already using AI to guide their decisions.

  • Men appear slightly more reliant than women, with 15% of male investors saying they depend heavily on AI, compared with 9% of female investors.

Industry adoption rising quickly

The financial advice sector is also embracing AI. In 2024, fewer than half of practices reported using or planning to use these tools. That figure has since surged to 74% in 2025, while the number with no plans to adopt has fallen sharply to just 13%.

Firms are finding AI most useful for routine and time-consuming work, such as:

  • preparing file notes and meeting records (86%),

  • creating client newsletters and communications (53%),

  • marketing activities (48%), and

  • drafting advice documentation like Statements of Advice (46%).

Source: https://www.ifa.com.au/news/36224-almost-half-of-retail-investors-now-turning-to-ai

Alibaba Unveils Qwen3-ASR-Flash: A Leap Forward for AI Transcription

Alibaba’s AI research team has launched a new transcription model—Qwen3-ASR-Flash—which promises to significantly raise the bar in speech-to-text performance. Built on the Qwen3-Omni architecture and trained with tens of millions of hours of voice data, the model is designed for tough real-world situations.


What Makes Qwen3-ASR-Flash Stand Out

  • Exceptional accuracy in varied settings. Even in challenging acoustic conditions and with complex speech patterns, the model delivers strong results.

  • Top test scores. In August 2025 trials, the model recorded a character error rate of 3.97% for standard Chinese—lower than many competitors such as Gemini-2.5-Pro and GPT-4o-Transcribe.

  • Accent robustness. For Chinese accents (other than standard Mandarin), performance improves further, with error rates down to 3.48%. English tests also show competitive strength (~3.81%), beating rivals in that language.

  • Recognising lyrics & music. A particularly difficult domain, yet Qwen3-ASR-Flash achieves just 4.51% error on individual lyrics and 9.96% on full songs—vastly better than competing models that struggle in musical contexts.


Smart Features That Help

  • Flexible contextual biasing. Instead of needing structured keyword lists, users can supply nearly any relevant background text—keywords, full documents, or mixed content. The model can draw on that context to improve results, even when the provided text includes irrelevant material (see the sketch after this list).

  • Multilingual and dialect support. The model works across 11 languages, including multiple dialects and accents. In Chinese: Mandarin plus dialects like Cantonese, Minnan (Hokkien), Wu, and Sichuanese. In English: standard US, UK, and others. Also included are French, German, Spanish, Italian, Portuguese, Russian, Japanese, Korean, and Arabic.

  • Language detection & noise handling. Qwen3-ASR-Flash can identify which of the 11 supported languages is being spoken and can separate speech from background noise or silence. The result: cleaner, more reliable transcriptions.
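
To make the contextual-biasing idea concrete, here is a deliberately hypothetical request sketch; the endpoint URL, parameter names, and response shape are invented for illustration and do not document Alibaba's real Qwen3-ASR-Flash API.

    # Hypothetical sketch: the endpoint URL, parameter names, and response format below are
    # invented for illustration and are NOT Alibaba's documented Qwen3-ASR-Flash API.
    import requests

    def transcribe_with_context(audio_path, context_text, language_hint=None):
        """Send audio plus free-form biasing text to a (hypothetical) ASR endpoint."""
        with open(audio_path, "rb") as audio_file:
            data = {"context": context_text}        # keywords, documents, or mixed text
            if language_hint:
                data["language"] = language_hint    # otherwise the service would auto-detect
            response = requests.post(
                "https://example.invalid/asr/transcribe",   # placeholder URL
                files={"audio": audio_file},
                data=data,
            )
        response.raise_for_status()
        return response.json()["text"]

    # Example: bias recognition toward names that appear in meeting notes.
    # transcript = transcribe_with_context("meeting.wav", open("notes.txt").read(), "en")

The point of the design is that the biasing text can be messy: the model is described as drawing whatever is useful from it rather than requiring a curated keyword list.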


Why It Matters

Speech transcription tools are increasingly central to many applications—from meeting transcription and accessibility tools to media captioning and voice‐driven interfaces. Alibaba’s new model appears to push the envelope on what’s possible by reducing error rates, supporting dialects and non-speech noise, and making context adaptation more flexible. For businesses and developers aiming for high accuracy and flexibility across different languages or challenging audio—for example accents, music, or background noise—Qwen3-ASR-Flash could represent a meaningful advancement.

Source: AI News

How AI Is Changing Life for People Who Are Deaf or Hard of Hearing

In recent years, artificial intelligence (AI) has made massive strides in countless fields—healthcare, entertainment, you name it. But one of its most meaningful impacts has been in making the world more accessible for people who are deaf or hard of hearing. This goes far beyond subtitles and hearing aids—it’s about breaking down barriers in everyday communication.

The Communication Gap

For many who are deaf or have significant hearing loss, the real challenge isn’t just hearing—it’s understanding speech in live, dynamic settings. Think corporate meetings, classrooms, casual conversations. In these spaces, there often isn’t automatic captioning, and sign-language interpreters are rarely available. These gaps can deeply affect participation, learning, and inclusion.

Even when technology helps, there are other obstacles—people speaking without facing others, talking fast, mumbling, or having heavy background noise. All these make it harder for someone relying on lip reading and whatever audio input they get.


AI Tools Making a Real Difference

AI isn’t just theory—it’s increasingly part of day-to-day life for many hearing impaired people. Here are several tools and innovations:

  • Real-time captions in virtual platforms
    Platforms like Zoom, Microsoft Teams, Google Meet now offer live captioning. For many hearing impaired users, this means they can follow meetings more fully in real time—and contribute, instead of relying on after-the-fact summaries.

  • Live-transcription apps
    Applications such as Ava, Otter.ai, Google’s Live Transcribe convert speech to text as it happens. These are especially helpful in settings like doctor visits, lectures, or social situations. Even in noisy or crowded places, they can mean the difference between being lost in the conversation and staying engaged.

  • Machine translation to / from sign language
    For example, in Brazil, projects like Hand Talk use AI to translate speech (and written text) into Libras (Brazilian Sign Language), often via a digital avatar. This helps people who use sign language access content in public spaces, online, or in media.

  • Smarter hearing aids
    New hearing aids incorporate AI to adjust automatically to different sound environments—emphasising speech while reducing background noise, adapting settings via phone apps, etc. For many users, these features yield a big improvement in comfort and clarity, especially in challenging environments.

  • Virtual assistants that accept text
    Assistants like Siri, Alexa, Google Assistant have evolved: they’re increasingly usable via text, not just voice commands. That opens up control over devices, access to information, scheduling tasks, etc., without needing to hear or speak.


Challenges & What’s Still Needed

Even with all this progress, there are still barriers to overcome:

  • Connectivity & infrastructure
    Many AI tools depend on stable internet access. For people in rural or underserved areas, that can limit what’s possible.

  • Accuracy issues with dialects, accents, less common languages
    Automatic captions and speech recognition often struggle outside the standard, widely spoken languages or with strong accents. Regional variations can throw off transcription quality.

  • Inclusive design & representation
    It matters not just what the tools are, but who is involved in developing them. People who are deaf or hard of hearing need to be part of the process—so their real needs and experiences guide design.


Where This Is Taking Us

Overall, AI has become something of a game changer. For someone living with hearing impairment, tools that automatically caption, transcribe in real time, translate into sign language, or adapt hearing-aid settings can make communication more natural, more inclusive, and far less exhausting.

The future looks promising, provided these technologies keep evolving in ways that are accessible, accurate, and attuned to diverse users. When that happens, inclusion moves from being a goal to being genuinely lived.

Source: Telefónica

New Machine Learning Approach Offers Doctors a More Precise 3D View of Fetal Health

Researchers at MIT, Boston Children’s Hospital, and Harvard Medical School have developed a novel tool that generates detailed 3D models of fetuses from MRI data, improving upon traditional ultrasound and MRI methods. Dubbed Fetal SMPL, this method adapts a technique from computer graphics (originally developed for modelling adult body shapes and movements) to represent fetal poses and forms.

What Fetal SMPL Does

  • It was trained using over 20,000 MRI volumes to predict both the shape and position of an unborn baby.

  • The model incorporates an internal skeleton structure—23 “joints” connected in a kinematic tree—that allows it to simulate fetal motion and poses (a toy sketch of such a tree follows this list).

  • When tested on MRI frames it hadn’t seen before, its predictions of shape and alignment were, on average, just about 3.1 millimeters off.
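
To give a rough sense of what a kinematic tree is, the toy sketch below chains a handful of joints and computes their global positions from per-joint rotations; the joint count, offsets, and connectivity are placeholders, not the actual 23-joint Fetal SMPL skeleton.

    # Toy kinematic tree (placeholder joints and offsets, not the real 23-joint Fetal SMPL skeleton).
    import numpy as np

    parents = [-1, 0, 1, 2, 0]        # parents[i] is joint i's parent; -1 marks the root
    offsets = np.array([              # each joint's rest position relative to its parent (metres)
        [0.00, 0.00, 0.00],
        [0.00, 0.05, 0.00],
        [0.00, 0.05, 0.00],
        [0.00, 0.04, 0.00],
        [0.03, 0.00, 0.00],
    ])

    def forward_kinematics(parents, offsets, joint_rotations):
        """Walk the tree from root to leaves, composing each joint's rotation with its parent's."""
        n = len(parents)
        global_rot = [None] * n
        global_pos = np.zeros((n, 3))
        for i in range(n):                          # parents are listed before their children
            if parents[i] == -1:
                global_rot[i] = joint_rotations[i]
                global_pos[i] = offsets[i]
            else:
                p = parents[i]
                global_rot[i] = global_rot[p] @ joint_rotations[i]
                global_pos[i] = global_pos[p] + global_rot[p] @ offsets[i]
        return global_pos

    rest_pose = np.stack([np.eye(3)] * len(parents))        # identity rotations = rest pose
    print(forward_kinematics(parents, offsets, rest_pose))  # global joint positions

Fitting such a model to MRI data then amounts to searching for the joint rotations and shape parameters that best explain the observed fetal surface.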

Comparisons and Performance

  • Fetal SMPL was compared to “SMIL,” an existing model for infant growth. Because infants are larger than fetuses, SMIL-based models were scaled down by about 75% for this comparison.

  • The new model outperformed SMIL in accuracy when dealing with fetal MRI scans spanning gestational ages from ~24 to ~37 weeks.

  • It achieves reliable alignment in as few as three iterations of its pose-and-shape estimation process. Accuracy gains plateau after the fourth iteration.

Limitations & Future Directions

  • Currently, Fetal SMPL models the external surface of the fetus—skin and outer shape—but does not yet model internal anatomy such as organs.

  • The researchers are planning to extend the tool to create volumetric models that include internal structures like liver, lungs, and muscles.

  • More testing is needed across larger and more diverse populations, different gestational ages, and disease cases to fully validate clinical usefulness.

Significance

  • This tool may allow clinicians to get more precise measurements of fetal growth (e.g. head or abdomen size) and compare them to typical developmental benchmarks.

  • It has the potential to enhance physicians’ ability to detect abnormalities and monitor fetal development more reliably than existing imaging tools.

Source: news.mit.edu

CodeRabbit Nets $60M in Funding, Hits $550M Valuation After Just Two Years

CodeRabbit, an AI startup focused on automating code reviews, has raised $60 million in its Series B round, pushing its valuation to $550 million.


Where It All Began

Founder Harjot Gill first noticed something while running FluxNinja (which he co-founded after selling a previous startup): engineers were increasingly relying on tools like GitHub Copilot to generate code. But that surge in AI-assisted code writing was creating a bottleneck—code review became more time-consuming due to errors and oversights in the automatically generated bits.

Gill founded CodeRabbit in early 2023 to tackle this challenge head-on. The platform learns about a company’s existing codebase, identifies bugs, and gives feedback—essentially acting like an extra member of the dev team.


Business Momentum & Scale

  • CodeRabbit has been growing fast—about 20% month over month.

  • Annual recurring revenue (ARR) has surpassed $15 million.

  • Over 8,000 companies now use the tool. Clients include known names like Chegg, Groupon, and Mercury.


The New Round & Key Investors

  • Series B raised: $60 million, bringing total capital raised by the company to $88 million.

  • Lead investor: Scale Venture Partners. Others include NVentures (part of Nvidia), plus existing backers like CRV.


How CodeRabbit Fits Into the Ecosystem

The rise of AI code generation (from tools like Copilot) has created a secondary need: reviewing and correcting that output. CodeRabbit sits at that intersection and offers a standalone tool focused on depth and technical breadth.

Competitors include Graphite (which recently raised its own large round), Greptile, and bundled offerings from companies like Anthropic (Claude Code) and Cursor. But Gill believes many organizations will prefer specialized tools rather than “all-in-one” packages.


What to Watch

  • Whether CodeRabbit can keep scaling at its current growth rate.

  • How well it can out-compete bundled tools and maintain a value proposition strong enough to justify a separate product.

  • How the product evolves to handle more complex codebases and edge cases of AI-generated code (increasingly common).

Source: TechCrunch

Microsoft & Workday Team Up to Oversee AI “Worker” Agents

Microsoft and Workday have joined forces to help businesses better manage the growing presence of AI agents—sometimes thought of as AI “coworkers”—by giving these agents identities, permissions, and oversight similar to human employees.


How the New Setup Works

  • AI agents built through Microsoft Azure AI Foundry and Copilot Studio will be able to be registered in Workday’s Agent System of Record (ASOR).

  • Each agent will receive a Microsoft Entra Agent ID, giving it a verifiable identity, which in turn permits precise control over what the agent can do and which data or systems it can access (a conceptual sketch of this registration-and-permission idea follows this list).

  • The ASOR provides business context. This means agents can interact with other agents and users in a structured and secure way, following company-governance rules.
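
The snippet below is a purely conceptual toy, not the Entra or Workday API; every class and field name is invented to illustrate the underlying idea of giving an agent an identity, a permission set, and an audit trail.

    # Conceptual toy only: class and field names are invented; this is not the Entra or Workday API.
    from dataclasses import dataclass, field

    @dataclass
    class AgentRecord:
        agent_id: str                     # stands in for a verifiable, Entra-style identity
        owner: str                        # team or employee accountable for the agent
        allowed_actions: set = field(default_factory=set)
        audit_log: list = field(default_factory=list)

    class AgentSystemOfRecord:
        """Toy registry that mimics registering agents and enforcing their permissions."""
        def __init__(self):
            self.agents = {}

        def register(self, agent_id, owner, allowed_actions):
            self.agents[agent_id] = AgentRecord(agent_id, owner, set(allowed_actions))

        def perform(self, agent_id, action, payload):
            record = self.agents[agent_id]
            if action not in record.allowed_actions:
                raise PermissionError(f"{agent_id} is not permitted to {action}")
            record.audit_log.append((action, payload))   # every call is traceable for reporting
            return f"{action} executed"

    asor = AgentSystemOfRecord()
    asor.register("copilot-career-agent", owner="hr-team", allowed_actions={"update_career_goals"})
    print(asor.perform("copilot-career-agent", "update_career_goals", {"employee": "E123"}))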


Example of How It Works

  • For instance, an employee might ask a Microsoft Copilot-based agent to update their career goals. That agent can then communicate with a Workday agent to perform the necessary steps—all without the employee having to move between different applications.


Why This Matters

  • Enterprises that are using Microsoft’s tools (like Microsoft 365 Copilot and Azure AI Foundry) will gain the ability to treat AI agents as part of their workforce in terms of identity, access control, and governance.

  • Workday positions its ASOR as a unified system of record not only for human staff but for digital or AI agents, enabling management of both in the same framework.

  • Reporting and analytics are built in: the system tracks usage, who is using which agents, productivity impact, and whether agents stay within their permission levels. This helps ensure both safety and effectiveness.


Ecosystem & Strategy

  • Microsoft stresses that companies increasingly want AI agents (“digital coworkers”) but are concerned about safety, governance, and how to integrate them responsibly.

  • Workday also emphasizes that AI isn’t just one vendor’s domain, but part of a broader ecosystem requiring shared intelligence, governance, and open protocols.

  • The plan is for early-adopter customers to test the capability and give feedback before it is made more broadly available.


Potential Impact

  • This collaboration could reduce duplicate efforts in managing agentic AI, especially for organizations using both Microsoft and Workday products.

  • It may put pressure on other identity and governance providers in the market. Microsoft could strengthen its position vs. competitors (e.g. Google Cloud, AWS) in terms of enterprise AI tools.

Source: CIO

Intrinsic and DeepMind Break Ground with AI System for Coordinating Multiple Robots

On September 4, 2025, Intrinsic—a robotics software firm under Alphabet—and DeepMind revealed a major leap in AI capable of orchestrating groups of industrial robots working together in shared spaces, while avoiding collisions. This innovation stems from extensive research by DeepMind Robotics, now operating under the name Gemini Robotics, in partnership with Intrinsic and University College London.

The breakthrough is detailed in the paper “RoboBallet: Planning for Multi-Robot Reaching with Graph Neural Networks and Reinforcement Learning.” The researchers sought to solve one of the most difficult issues in industrial automation: how to make several robots operate concurrently in limited space without interfering with one another.

Their approach replaces older, manual programming techniques—where each robot’s motions are individually coded—with a more scalable, flexible AI method. The core architecture uses Graph Neural Networks (GNNs) combined with reinforcement learning, and was trained on millions of synthetic scenarios to enable motion planning that remains efficient and collision-free even in unseen environments.
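
The paper's exact architecture is not reproduced here, but the minimal sketch below shows the kind of graph message-passing step such a system builds on, with robots, tasks, and obstacles represented as graph nodes; the layer sizes and connectivity are illustrative assumptions.

    # Generic graph message-passing layer (illustrative; not the RoboBallet architecture itself).
    import torch
    import torch.nn as nn

    class MessagePassingLayer(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.edge_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
            self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

        def forward(self, node_feats, edges):
            # edges: (E, 2) tensor of (sender, receiver) node indices
            senders, receivers = edges[:, 0], edges[:, 1]
            messages = self.edge_mlp(torch.cat([node_feats[senders], node_feats[receivers]], dim=-1))
            agg = torch.zeros_like(node_feats).index_add_(0, receivers, messages)  # sum per receiver
            return self.node_mlp(torch.cat([node_feats, agg], dim=-1))

    # 8 robot nodes plus a few task/obstacle nodes, each with a 32-dim feature vector.
    feats = torch.randn(12, 32)
    edges = torch.tensor([[0, 1], [1, 0], [2, 3], [8, 0]])   # made-up connectivity
    updated = MessagePassingLayer(32)(feats, edges)
    print(updated.shape)   # a policy head trained with RL would map robot embeddings to actions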

According to Matthew Lai, a research engineer at DeepMind, their system can generate high-quality motion plans in seconds, even at industrial scale with up to eight collaborating robots. Intrinsic reports that in lab tests the system outperforms traditional and expert-designed methods by around 25%. Moreover, as more robots are involved (from four to eight), the average execution time drops by about 60%.

The AI takes in what’s called a “bundle of tasks” and autonomously produces motion sequences without needing human intervention, while automatically avoiding collisions. It also adapts to new scenarios without requiring extra training or hand annotations.

Potential applications are broad, especially in industries like automotive, aerospace, and electronics—where many robots commonly share close working spaces. Future development aims to allow real-time task replanning to handle dynamic changes such as shifted parts, sudden obstacles, or machinery malfunctions.

Intrinsic describes this research as “a critical step toward bringing truly adaptive, hyper-efficient multi-robot models to robotics and manufacturing at large.”

Source: Robotics & Automation News