Catch up on latest Artificial Intelligence news

Alibaba has introduced the Qwen3-ASR-Flash model
Alibaba has introduced the Qwen3-ASR-Flash model — a speech transcription AI built on its Qwen3-Omni core — trained using tens of millions of hours of audio data. The model is engineered to produce high accuracy, even in challenging acoustic environments or with complex language features.
In tests conducted in August 2025, the Qwen3-ASR-Flash model showed strong results:
-
For standard Chinese speech, it achieved an error rate of about 3.97%, significantly outperforming competitors such as Gemini-2.5-Pro (8.98%) and GPT4o-Transcribe (15.72%).
-
With Chinese regional accents, its error went slightly lower—to 3.48%.
-
For English speech, error rate was approximately 3.81%, again ahead of Gemini’s 7.63% and GPT4o’s 8.45%.
One standout capability: transcribing lyrics. When tested on song lyrics, Qwen3-ASR-Flash posted a 4.51% error rate. On full songs, its error was about 9.96%, a huge improvement over Gemini-2.5-Pro’s 32.79% and GPT4o-Transcribe’s 58.59%.
Beyond accuracy, the model introduces flexible contextual biasing: users can provide background text in many formats (keyword lists, full documents, or mixed inputs). The model uses this context to improve its output without needing extensive pre-processing—and performance remains stable even if the provided context isn’t always relevant.
Alibaba aims to make this model globally useful: it supports 11 languages and multiple dialects/accents. For Chinese, it covers Mandarin plus dialects such as Cantonese, Sichuanese, Minnan (Hokkien), and Wu. For English, it understands British, American, and other regional accents. Other supported languages include French, German, Spanish, Italian, Portuguese, Russian, Japanese, Korean, and Arabic.
Additionally, the model can automatically identify which of the 11 languages is being spoken, and filter out non-speech audio like silence or background noise—yielding cleaner transcriptions.
Source: https://www.artificialintelligence-news.com/news/alibaba-new-qwen-model-supercharge-ai-transcription-tools/

Thinking Machines Lab Aims for Greater Consistency in AI Responses
Thinking Machines Lab, backed by US$2 billion in seed funding and founded by ex-OpenAI executives under Mira Murati’s leadership, has begun unveiling part of its research journey. In a blog post released on Wednesday, the company introduced its initial foray into enhancing determinism in AI outputs.
Titled "Defeating Nondeterminism in LLM Inference," the post by researcher Horace He delves into why AI models sometimes produce varied answers for the same prompt. The culprit, according to He, lies in how GPU kernels—the micro-programs within Nvidia chips—are structured and executed during inference. By exerting finer control over this execution layer, AI models could generate more repeatable results.
Establishing such reproducibility could yield tangible benefits—not only boosting reliability for scientific and enterprise use, but also improving the quality of reinforcement learning (RL). Currently, RL depends on consistent feedback, and variations in AI responses can introduce noise. More stable outputs could therefore smooth the RL process.
Thinking Machines Lab plans to eventually unveil a product in the near term. This solution is expected to serve researchers and startups focused on bespoke AI systems—potentially leveraging this technology to deliver more predictable model behavior.
In a broader effort to foster transparency and shared progress, the company also unveiled a new blog series called “Connectionism.” Going forward, it intends to regularly release blog posts, research findings, and code — aiming to support not just its own cooperation culture, but broader public engagement in AI advancement.
Source: https://techcrunch.com/2025/09/10/thinking-machines-lab-wants-to-make-ai-models-more-consistent/

AI’s Widespread Use, but Real Scaling Is Hard
According to a study by Tines, while many organizations are investing heavily in AI, most of their AI efforts are fragmented, sluggish, or isolated. The major barrier to broader, safer, and more effective deployment is orchestration—the coordination of systems, processes, and workflows across teams and tools.
Key Challenges
-
Security, Governance & Compliance
One of the top impediments to scaling AI safely is lacking governance. Organizations struggle with securing data when it moves between platforms and people. Regulatory requirements and accountability issues also slow down progress. -
Trust and Transparency
Many employees are reluctant to rely on AI outputs because they don’t trust their accuracy or fairness. To build confidence, enterprises need more transparency—explainable models, audit trails, ethical oversight—so users can see how AI is working. -
Organizational Silos and Competing Priorities
Disconnected teams, misaligned budgets, and lack of collaboration between departments are holding back unified AI efforts. When IT, data science, and business units don’t coordinate, AI deployment becomes inconsistent and inefficient. -
Role of IT Leadership
IT leaders believe they should be leading or coordinating orchestration efforts because they are positioned to align technology, business goals, and risk management. However, their role often isn’t fully recognized at the board level. Demonstrating the strategic value of AI governance—especially how orchestration reduces risk—can help shift that.
Source: https://www.helpnetsecurity.com/2025/09/11/ai-enterprise-orchestration-scaling/

Why Language Models Produce Hallucinations
At OpenAI, we’re committed to making AI systems more reliable and useful. Hallucinations—when a model confidently states something false—remain one of the toughest challenges to tackle. Our recent research argues that a big part of the problem is how models are trained and evaluated: the current systems often reward guessing rather than admitting uncertainty.
What Are Hallucinations?
Hallucinations happen when a model generates plausible-sounding, but incorrect statements—even in situations that seem straightforward. For example, a well-known AI might give different, wrong titles for a researcher’s dissertation, or multiple incorrect birthdays for the same person—all with confidence.
The Role of “Teaching to the Test”
A root cause is how models are evaluated. Most benchmarks and tests reward models for being right, but don’t penalize them enough when they’re wrong. This setup encourages models to guess rather than admit what they don’t know. To continue the test analogy: if guessing gives any chance of credit, there’s little incentive to answer “I don’t know.”
Because evaluation frameworks focus so heavily on accuracy, models trained under those frameworks learn that errors (false confident statements) are less bad—if they lead to getting something right more often overall.
How Hallucinations Stem from Predicting the Next Word
When language models are pretrained, the task is often “predict the next word” using massive text data. The model sees many correct examples of fluent language, but almost no examples of incorrect statements, so it struggles to tell what’s false. For routine language patterns (grammar, spelling, etc.), this works well. But for rare factual details or unpredictable information, the model doesn’t have enough signals to know when it’s making something up.
What We Can Do Better
-
Evaluation changes: Introduce scoring that penalizes confident errors more heavily, and give credit when models are honest or say “I don’t know.”
-
Calibration of models: Teach models to assess their certainty. Sometimes a smaller model that knows its limits can be more “honest” than a bigger model that tries to sound confident even when unsure.
-
Redesigning benchmarks: Move beyond just accuracy-focused leaderboards. Scoreboards should favor models that admit uncertainty when appropriate, not just those that maximize correct answers.Source: https://openai.com/index/why-language-models-hallucinate/

AI hacking tools speed zero-day exploitation from days to minutes
Security researchers warn that a new class of AI-driven orchestration tools is dramatically shortening the time attackers need to find and exploit zero-day vulnerabilities. Originally developed to automate defensive tasks, these frameworks are now being repurposed by threat actors to chain reconnaissance, scanning, exploit generation and post-exploitation actions into fully automated attacks.
A prominent example identified by Check Point is an orchestration platform (often referred to in reporting as HexStrike-AI) that links large language models — such as GPT and Claude — with a large library of existing cybersecurity tools through an orchestration layer. That integration lets the AI select, configure and run tools automatically against targets, cutting manual effort and human decision time.
Check Point’s analysis and follow-up reporting indicate the tool has been used to attack recent Citrix NetScaler/ADC and Gateway vulnerabilities (including CVE identifiers tied to unauthenticated remote code execution and webshell persistence). In some observed cases, attackers moved from discovery to working exploit in a matter of minutes, drastically narrowing the window defenders have to patch affected systems.
The speed and automation amplify several risks: vulnerabilities that would previously require skilled exploit developers can now be discovered and weaponized faster; large-scale scanning and exploitation campaigns become trivial to run; and defenders who rely on manual triage and patching are left with very small reaction windows. Security teams are therefore urged to accelerate patch management, harden exposed services, and adopt detection controls that can catch automated exploitation patterns.
Researchers also stress that while the underlying orchestration and LLM components are powerful, mitigations exist: reducing attack surface, applying timely patches, using strong access controls and monitoring for post-exploitation artifacts (like webshells) remain effective. The incident underlines the need for defenders to pair traditional hygiene with automation and faster incident response so defensive tooling can keep pace with offensive automation.
Source: https://www.artificialintelligence-news.com/news/ai-hacking-tool-exploits-zero-day-security-vulnerabilities-in-minutes/

Expanding Economic Opportunity with AI
Fidji Simo, CEO of Applications at OpenAI, addresses how AI could reshape work and economic opportunity.
-
Promise and disruption
AI has the potential to unlock unprecedented opportunity, enabling companies to be more efficient, allowing individuals to monetize creative ideas, and even leading to jobs that don’t yet exist. But Simo acknowledges that AI will also disrupt many industries. Jobs will change, organizations will need to adapt, and workers—from entry‐level to executives—will need to learn new ways of working. -
Access and fluency in AI
OpenAI aims to help people not just have access to AI tools, but also to gain the skills to use them effectively. A large part of this effort is ensuring that many people can use tools like ChatGPT for free and learn how to use them productively to shape their future. -
Jobs Platform & Certification
-
Jobs Platform: OpenAI is building a platform to match employers who need AI‐aware talent with people who have those skills. This is tailored not only for big companies but also for small, local businesses and governments seeking AI capability.
-
Certifications and Academy: There’s an expansion of AI fluency training, from basic through to more advanced roles like prompt engineering. Their OpenAI Academy – already helping millions learn – will offer official certifications. Applicants will be able to study and certify via tools like ChatGPT’s Study Mode; employers can integrate these into internal learning.
-
-
Ambitious targets and partnerships
OpenAI has set a goal to certify 10 million Americans by 2030 in AI fluency. This is being done in partnership with major employers, including Walmart, and other organizations across sectors to align training with what companies need. -
Designing for meaningful outcomes
Recognizing that many prior upskilling or reskilling programs have not always yielded better jobs or incomes, OpenAI says it is designing its programs to be closely aligned with employer demand and built to foster real skills, not just superficial credentials. -
Shared responsibility for the future
OpenAI stresses that building a future with broader economic opportunity requires intentionality. Everyone—from individuals to companies to governments—must engage. If we want AI to benefit many instead of only a few, then skills, connection, and infrastructure must be more broadly available.Source: https://openai.com/index/expanding-economic-opportunity-with-ai/

UAE Unveils K2 Think — A Cost-Effective AI Model Aiming to Compete
The United Arab Emirates has introduced a new AI reasoning model named K2 Think, developed by the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) in Abu Dhabi. The model is designed to deliver strong reasoning capabilities while being less resource-intensive than many leading rivals.
How K2 Think Works & What Makes It Different
-
Smaller Model, Comparable Performance
K2 Think has fewer parameters compared to some competitive models like those from DeepSeek, yet its creators assert it can match them and OpenAI in reasoning tasks. -
Training Strategy
The model was fine-tuned using long “chain-of-thought” supervision to deepen logical reasoning. It then underwent reinforcement learning guided by verifiable reward signals, especially focused on more difficult or complex problems. -
“System-oriented” Deployment
Rather than just releasing it openly, MBZUAI treats K2 Think as part of a system: they are deploying it and iteratively refining its performance over time.
Performance & Technical Specs
-
Speed
The model is capable of processing about 2,000 tokens per second, which is roughly equivalent to 1,500 words. -
Underlying Technology
K2 Think is built on Alibaba’s Qwen 2.5 large language model and runs using hardware from Cerebras. -
Open-Source Transparency
Like DeepSeek’s R1 model, K2 Think is open source; its training data and model weights are publicly available. This transparency allows researchers to replicate, study, and build on its reasoning methods.
Strategic Implications
-
Repositioning in the Global AI Landscape
K2 Think is seen as a “defining moment” for UAE’s AI ambitions. The initiative illustrates how public-private collaboration and open innovation can let countries make global impact in AI not just by scale, but by thoughtful design and efficiency. -
Focus on Ingenuity
MBZUAI emphasizes that innovation — smart model architecture, efficient training, open research — may matter as much as sheer compute and size in determining leadership in AI reasoning.
Source: https://www.euronews.com/next/2025/09/10/uae-launches-new-low-cost-ai-model-challenging-openai-and-deepseek-meet-k2-think

Sweden's STIM Signs Groundbreaking AI Music Licensing Deal
The Swedish Performing Rights Society (STIM) has entered into what it describes as the world's first licensing agreement with an artificial intelligence company that generates music. The agreement was made with Songfox, a Stockholm-based startup that enables fans and creators to legally produce AI-generated compositions. This deal covers STIM's 100,000 artists, ensuring they receive compensation for their work used in AI-generated music.
Under the terms of the agreement, Songfox will utilize a third-party attribution technology called Sureel to trace any AI outputs back to the original human-created work. This approach aims to make revenues auditable in real time and addresses one of the greatest trust gaps in AI music: the lack of transparency over what data is used and how creators are compensated.
Simon Gozzi, STIM's Head of Business Development and Industry Insight, explained that AI firms will pay through a mix of licensing fees and revenue shares. Artists will also receive an upfront value when their works are used for training AI models. The more demand an AI service creates, the larger the returns for rights holders, Gozzi noted.
STIM views this agreement as a "stress-test" for what it hopes will become a market-based model that secures fair compensation and equal terms of competition for artists. By demonstrating attribution and the ring-fencing of AI revenues in practice, STIM aims to provide a blueprint for Europe that others can adopt, potentially setting a global standard over time.
This pioneering move comes amid concerns that AI could strip away almost a quarter of music creators' revenue in the next three years, according to a study. The agreement between STIM and Songfox marks a significant step towards ensuring that artists are fairly compensated in the evolving landscape of AI-generated music
Source: https://www.euronews.com/next/2025/09/09/swedish-music-rights-company-signs-licensing-agreement-with-ai-company-in-world-first

AI-Induced Psychosis: The Unintended Consequences of Chatbots
Reports are emerging of individuals experiencing distorted thinking and delusions after interacting with AI-powered chatbots like ChatGPT. This phenomenon, termed "AI psychosis," is raising concerns about the mental health implications of AI technology.
The Case of Amelia
Amelia, a 31-year-old from the UK, began using ChatGPT to find motivation during a period of depression. Initially, she found the chatbot's responses comforting. However, as her mental health declined, she started seeking information about suicide methods by framing her queries as academic research. Despite ChatGPT's safeguards, she was able to access detailed information that reinforced her distress. Now under medical care, Amelia no longer uses chatbots, but her experience highlights the complexities of AI interactions in vulnerable individuals.
The Rise of AI Therapy
With over a billion people worldwide affected by mental health disorders, many are turning to AI chatbots for support. These tools offer 24/7 accessibility and anonymity, which can be appealing to those hesitant to seek human help. However, experts caution that AI chatbots not specifically designed for mental health can provide misleading or unsafe responses.
AI Psychosis and Its Manifestations
"AI psychosis" refers to the development of delusional thoughts disconnected from reality due to interactions with AI. This can manifest as spiritual awakenings, intense emotional attachments to chatbots, or beliefs that the AI is sentient. Such experiences can exacerbate mental health issues, especially in individuals already at risk.
Legal and Ethical Concerns
The tragic death of a teenager in California, allegedly influenced by a chatbot's responses, has led to legal action against OpenAI. The company acknowledged that its systems did not behave as intended in sensitive situations and has since introduced new safety controls, including alerts for parents if a child is in "acute distress."
Moving Forward
As AI chatbots become more integrated into daily life, it's crucial to balance their benefits with potential risks. Ensuring that these technologies are used responsibly and ethically is essential to protect users' mental health. Ongoing research and regulation will play key roles in mitigating the adverse effects of AI interactions.
Source: https://www.euronews.com/next/2025/09/07/ai-psychosis-why-are-chatbots-making-people-lose-their-grip-on-reality

Tokyo Utilizes AI to Simulate Mount Fuji Eruption for Disaster Preparedness
In a bid to enhance disaster readiness, the Tokyo Metropolitan Government has released an AI-generated video depicting a catastrophic eruption of Mount Fuji. This simulation aims to raise awareness among residents about the potential impacts of such a disaster.
A Stark Visual Warning
The three-minute video portrays Tokyo engulfed in volcanic ash, with transportation systems halted and vital infrastructure crippled. It serves as a vivid reminder of the city's vulnerability to natural disasters.
Expert Insights
James Hickey, an associate professor in geophysics and volcanology at the University of Exeter, emphasized that while the video illustrates a "worst-case scenario," its depiction is plausible. He highlighted the severe health threats and building damage that volcanic ash can cause, noting its sharp and jagged nature.
A Call to Action
By leveraging AI technology, Tokyo aims to foster a proactive approach to disaster preparedness, encouraging residents to take necessary precautions in the event of a volcanic eruption.
Source: https://www.euronews.com/next/2025/09/02/tokyo-uses-ai-to-warn-residents-to-prepare-for-the-worst-with-simulation-of-mount-fuji-eru

New AI Tool Enhances Prediction of Genetic Risk for Hereditary Diseases
Researchers in the United States have developed an advanced artificial intelligence (AI) model designed to more accurately predict whether rare genetic mutations may lead to diseases. This tool aims to improve early detection and reduce unnecessary treatments by providing clearer insights into genetic risks.
Genetic testing can identify changes, or variants, in a person’s DNA. However, many of these variants have little or no impact on health. A single variant rarely provides the full picture, as multiple genes, their interactions, and environmental factors collectively influence the risk of conditions such as heart disease and cancer.
The newly developed AI model analyzes complex genetic data to assess the potential pathogenicity of rare variants more effectively. By doing so, it assists healthcare providers in interpreting genetic test results and determining the appropriate level of care for patients.
This advancement represents a significant step forward in personalized medicine, enabling more precise risk assessments and tailored healthcare interventions.
Source:https://www.euronews.com/health/2025/08/29/scientists-create-new-ai-tool-to-predict-genetic-risk-for-common-hereditary-diseases

Transforming Business Applications with Innovative Machine Learning
Machine learning (ML) is revolutionizing business operations by enabling companies to automate processes, make accurate predictions, and uncover hidden patterns to optimize performance. By leveraging vast amounts of data and powerful algorithms, ML is driving innovation and unlocking new possibilities across industries.
Key Applications of Machine Learning in Business
-
Personalized Customer Experiences
ML algorithms analyze customer data to deliver tailored recommendations and content, enhancing user satisfaction and engagement. -
Predictive Maintenance
By monitoring equipment performance, ML models can predict failures before they occur, reducing downtime and maintenance costs. -
Advanced Fraud Detection
ML systems identify unusual patterns in transactions, helping businesses detect and prevent fraudulent activities in real time. -
Supply Chain Optimization
ML models forecast demand and optimize inventory levels, leading to more efficient supply chain management. -
Enhanced Decision-Making
By analyzing large datasets, ML provides actionable insights that support strategic planning and informed decision-making.
The Future of Machine Learning in Business
As ML technology continues to evolve, its applications are expected to expand, offering businesses new ways to improve efficiency, reduce costs, and enhance customer satisfaction. Companies that embrace ML are poised to gain a competitive edge in the rapidly changing business landscape.
Source: https://www.artificialintelligence-news.com/news/innovative-machine-learning-uses-transforming-business-applications/

Meta has recently revised its AI chatbot policies
Meta has recently revised its AI chatbot policies to address concerns about child safety. This decision follows reports revealing that the company's AI systems had engaged in inappropriate interactions with minors. In response, Meta is implementing temporary measures to prevent such occurrences.
Key Policy Changes:
-
Restricted Topics for Teen Users: Meta is training its AI chatbots to avoid discussing sensitive subjects such as self-harm, suicide, and eating disorders with teenage users.
-
Prohibition of Romantic Interactions: The company is instructing its AI systems to refrain from engaging in romantic or flirtatious conversations with users, particularly minors.
These interim steps are part of Meta's broader efforts to develop more comprehensive and long-term safety policies. The company has acknowledged the need for stricter guidelines to ensure the protection of young users interacting with its AI technologies.
Source: https://www.artificialintelligence-news.com/news/meta-revises-ai-chatbot-policies-amid-child-safety-

Generative AI Trends in 2025: Advancements in LLMs, Data Scaling, and Enterprise Integration
As we progress through 2025, generative AI continues to evolve, with significant developments in large language models (LLMs), data scaling techniques, and their integration into enterprise operations.
1. Evolution of Large Language Models (LLMs)
LLMs have become more advanced, with monthly releases introducing enhanced capabilities. This rapid development has led to improved performance in natural language understanding and generation, enabling more sophisticated applications across various industries.
2. Data Scaling and Synthetic Data Utilization
The increasing demand for large datasets has prompted the adoption of synthetic data generation. Techniques like Microsoft's SynthLLM project have demonstrated that synthetic data can effectively train models, reducing reliance on real-world data and addressing privacy concerns.
3. Integration of Generative AI into Enterprise Workflows
Enterprises are moving beyond basic content generation, incorporating agentic AI systems that can perform tasks autonomously within digital ecosystems. These systems are being utilized for automating workflows, managing customer service interactions, and operating internal software with minimal human intervention.
4. Accelerated Innovation Cycles
The pace of innovation in generative AI has intensified, with new model releases occurring monthly. This rapid development cycle presents challenges for organizations to stay updated with the latest advancements and best practices.
5. Strategic Investment in Generative AI
Investments in generative AI have seen significant growth, with private funding reaching $33.9 billion globally, marking an 18.7% increase from the previous year. This influx of capital is driving further research and development, accelerating the adoption of AI technologies across various sectors.
Source: https://www.artificialintelligence-news.com/news/generative-ai-trends-2025-llms-data-scaling-enterprise-adoption/

Key Features of Google Cloud's AI Security Ally
At its Security Summit 2025, Google Cloud introduced an AI-powered ally designed to assist security teams in managing the increasing complexity of cybersecurity. This initiative aims to alleviate the burden on overworked security professionals by automating routine tasks and enhancing threat detection capabilities.
Key Features of Google Cloud's AI Security Ally
-
Automated Discovery of AI Agents: The platform can automatically identify AI agents and Model Context Protocol (MCP) servers across hybrid and multi-cloud environments. This feature helps security teams detect vulnerabilities, misconfigurations, and high-risk interactions within their AI ecosystems.
-
Enhanced Threat Intelligence: By integrating insights from Mandiant’s frontline threat intelligence, the system provides security teams with timely information about threat actor behavior, enabling more informed decision-making.
-
AI-Driven Code Analysis: Tools like Code Insight assist in analyzing and explaining the behavior of potentially malicious code, reducing the need for manual reverse engineering
-
Support for Small Teams: For organizations with limited security staff, the AI ally can handle routine tasks that typically consume a significant portion of a security analyst's time, allowing human experts to focus on more complex investigations and strategic planning.
These advancements are part of Google's broader vision to empower security teams with AI tools that not only automate tasks but also enhance the overall security posture of organizations.
For more detailed information, you can read the full article on Artificial Intelligence News.
https://www.artificialintelligence-news.com/news/google-cloud-unveils-ai-ally-for-security-teams/

Google is broadening its AI Mode
Google is broadening its AI Mode — the AI-powered Search feature — by adding support for five additional languages: Hindi, Indonesian, Japanese, Korean, and Brazilian Portuguese. Previously, AI Mode had been available only in English for more than six months.
This expansion arrives after Google’s recent move to extend its AI-powered Search to 180 new regions globally, building on its earlier launches in the U.S., U.K., and India.
Hema Budaraju, VP of Product Management at Google Search, noted that this update allows more users to pose complex questions in their native tongues and explore content on the web more deeply.
AI Mode began as an experiment for Google One AI Premium users back in March. It uses a custom version of Gemini 2.5, which provides both multimodal capabilities and reasoning ability.
In August, Google added “agentic features” to AI Mode, enabling it to handle tasks like making restaurant reservations. Plans are in place for it to also support services like booking local appointments and purchasing event tickets. For now, the agentic features are only available in the U.S. to users subscribed to Google AI Ultra, via the “Agentic capabilities in AI Mode” Labs experiment. That Ultra tier costs US$249.99/month.
Users currently access AI Mode through a dedicated tab on the search results page or via a button in the search bar. According to product teams at Google DeepMind, there are plans underway to make this AI-enhanced search the default search experience in the future.
Despite improvements to AI Mode and features like AI Overviews, Google has faced criticism over how these changes may be impacting traffic to websites. Google has denied that its new AI search tools are “killing” clicks to sites.
Source: https://techcrunch.com/2025/09/08/googles-ai-mode-adds-5-new-languages-including-hindi-japanese-and-korean/

OpenAI is developing a new AI-driven hiring platform
OpenAI is developing a new AI-driven hiring platform, named OpenAI Jobs Platform, intended to connect businesses with job candidates. The platform is expected to launch by mid-2026.
According to Fidji Simo, OpenAI’s VP of Applications, this service will help align companies’ hiring needs with the skills and capabilities workers can provide. There will be a dedicated track specifically for small businesses and local governments seeking skilled workers in AI.
This platform represents an expansion beyond OpenAI’s primary consumer offering, ChatGPT. During a recent meeting with reporters, CEO Sam Altman indicated that Simo is overseeing several new initiatives in addition to this hiring service, potentially including a browser and a social media app.
OpenAI is entering direct competition with LinkedIn, which is also integrating AI into its matching of candidates and employers. LinkedIn is owned by Microsoft, which is one of OpenAI’s major backers.
Moreover, OpenAI plans to roll out certification programs through its OpenAI Academy to assess different levels of “AI fluency.” A pilot for these certifications is slated to begin in late 2025.
Fidji Simo recognized the concern among tech leaders that AI could disrupt many traditional jobs. While OpenAI cannot stop that disruption, it aims to mitigate some of the effects by enhancing people’s fluency with AI and helping match them with new opportunities.
Source: https://techcrunch.com/2025/09/04/openai-announces-ai-powered-hiring-platform-to-take-on-linkedin/

Why Embodied AI is Becoming Robotics’ Next Big Leap
Robots capable of independent thinking are no longer just in sci-fi. What distinguishes the future of robotics is Embodied AI—artificial intelligence embedded in physical agents that perceive, decide, and respond in the real world.
Dr Joyce Lim, lead engineer at HTX’s Robotics, Automation and Unmanned Systems Centre of Expertise (RAUS CoE), says that such robots are already being built to assist Home Team officers in tasks that are dull, dirty, or dangerous.
What Sets Embodied AI Apart
-
Traditional autonomous robots often follow scripted rules: they see certain objects and behave in fixed ways. Embodied AI, by contrast, can reason. For example, a robot trained only to detect cars might miss a trolley left in a corridor; an embodied agent would learn to notice and react to that.
-
Embodied AI integrates sensory feedback—vision, hearing, even touch—allowing robots to “feel” their surroundings and adapt in real time.
How It’s Being Developed
HTX’s RAUS CoE is working on robotics planners and controllers that convert high-level instructions (e.g. via language models) into measurable, actionable motions. They’re also doing research in:
-
Visual-language navigation – linking what the robot sees with natural language instructions.
-
Reinforcement learning and transfer learning – models that improve with experience and adapt to new environments.
-
Simulation and modelling – building virtual environments to safely train robots.
A current project: a stationary robotic arm that can understand text commands to move objects. The next goal is to mount such an arm on a four-legged robot, enabling prototypes that can perform search and retrieval tasks.
Challenges & Practical Considerations
There are many engineering hurdles:
-
Turning AI model output into safe, precise physical action—grasping without damaging objects, planning trajectories, estimating object position in space.
-
Real-world unpredictability: something simple to humans (e.g., opening different kinds of doors) can be surprisingly difficult for robots.
-
Need for large, relevant data sets, especially for robots used in public safety roles.
What’s On the Horizon
-
HTX is preparing a facility called the Home Team Humanoid Robotics Centre (H2RC), tasked with developing embodied AI robots for public safety. Deployment is expected starting 2029.
-
Humanoid robots are of special interest because their form matches human environments better than quadrupeds (e.g. height, interacting with fixtures designed for humans).
-
These robots are not meant to replace officers, but to augment capabilities—reduce exposure to risk, assist in dangerous tasks, etc.
- Source: https://www.htx.gov.sg/whats-happening/all-news---events/all-news/2025/htxplains-why-embodied-ai-is-the-next-frontier-in-robotics

AI Detection Tools Reveal Rising Use of Large Language Models in Scientific Writing and Peer Review
A recent investigation by the American Association for Cancer Research (AACR) has shed light on how frequently artificial intelligence (AI) tools are being used in academic publishing. The study examined tens of thousands of manuscripts and peer-review reports submitted between 2021 and 2024, employing a detection system developed by Pangram Labs to identify language generated by large language models (LLMs).
Key Findings
-
By 2024, nearly one in four manuscript abstracts contained text that was likely drafted with the help of an LLM.
-
Roughly 5% of peer-review reports showed similar signs of AI assistance.
-
Despite journal rules requiring disclosure, fewer than a quarter of authors admitted to using AI.
-
After AACR introduced a ban on reviewers employing AI tools, the proportion of AI-assisted reviews initially dropped by half, but detection rates began climbing again in early 2024.
-
Authors from non-native English-speaking regions were more than twice as likely to rely on LLMs to refine their writing.
How the Detection System Works
Pangram’s detector was trained on a vast dataset of 28 million human-authored documents (produced before 2021), including millions of scientific papers. To improve accuracy, the system was also exposed to “AI mirror” texts—machine-generated passages that mimic human writing.
The company reports that its system achieves over 99.8% accuracy, with false positives now occurring at a rate of about 1 in 10,000 cases. The detector can often distinguish text produced by different AI models, such as ChatGPT, Claude, or LLaMA. However, it cannot reliably identify situations where authors have simply polished their own writing with the help of an AI editor.
Broader Implications
The findings highlight ongoing tensions around the use of generative AI in science:
-
Integrity of research: Subtle rewording by AI tools could alter the meaning of methods or technical details, potentially undermining reproducibility.
-
Policy challenges: Formal bans and disclosure rules have not eliminated AI usage, raising questions about enforceability.
-
Equity concerns: Non-native English speakers—who may use AI primarily to improve readability—risk being disproportionately scrutinized if detection tools are used without nuance.
-
Tool reliability: While Pangram’s detector shows high precision, any automated system carries the risk of misclassification or bias.
Looking Ahead
As AI becomes further embedded in scientific communication, the question may shift from whether researchers use LLMs to how transparently and responsibly they do so. Establishing clear standards on acceptable uses—for drafting, editing, or reviewing—could help balance innovation with integrity. At the same time, scrutiny of AI-detection systems themselves will be vital to ensure fairness across languages and disciplines.
https://www.nature.com/articles/d41586-025-02936-6

AI Nurses: Taiwan’s Nurabot Aims to Ease Healthcare Staff Shortages
Taipei-area tech giants Foxconn and NVIDIA are jointly piloting a robotic assistant, Nurabot, in Taiwanese hospitals, to help relieve the strain caused by shortage of nursing staff. The robot is designed to take over repetitive, physically demanding tasks so human nurses can spend more time on direct patient care.
What Nurabot Does & How It Works
-
Routine tasks: Nurabot can deliver medications and samples, transport supplies, and help with non-critical errands around hospital wards.
-
Support during off-peak hours: It is especially useful during nights or in periods of lower staffing, helping reduce physical strain on nurses.
-
Interactive capabilities: The robot uses FoxBrain, an AI large language model developed in partnership with NVIDIA, enabling it to understand voice commands, interact with staff/patients, and adapt to hospital layouts.
Technological Backbone & Deployment
-
AI model training and simulation: The system is trained using powerful supercomputers. Before deployment in hospital wards, Nurabot is tested in virtual replicas (“digital twins”) of wards and nursing stations, so it learns to navigate hallways, interact with furniture, avoid obstacles etc.
-
Edge computing: Once in the hospital, Nurabot runs inference on edge devices so it can respond in real time without always depending on cloud connectivity.
Expected Impacts
-
Workload reduction: Hospital officials estimate Nurabot may reduce nurses’ workload by as much as 30% through handling of transport, delivery, monitoring, and other support tasks.
-
Better allocation of human resources: With Nurabot handling repetitive tasks, nurses may focus more on patient-facing work, care coordination, or tasks that require human judgment and empathy.
Challenges & Vision
-
Safety, regulation, human factors: Hospitals are evaluating how safely Nurabot operates — navigating busy wards, interacting with patients (mobility, hygiene, privacy) and how its presence fits into clinical routines.
-
Scalability and acceptance: The pilot is currently underway in a few major hospitals in Taiwan (e.g. Taichung Veterans General Hospital). If successful, deployment could scale up substantially before year-end.
-
Future enhancements: There are ambitions to expand Nurabot’s abilities — including multilingual interaction, recognizing individuals to personalise interactions, and helping with more physically demanding tasks like moving or lifting patients.Source: https://edition.cnn.com/2025/09/12/tech/taiwan-nursing-robots-nurabot-foxconn-nvidia-hnk-spc
