Research
Zero-Input AI Predicts User Intent Without Commands
Zero-Input AI reads gaze and bio-signals to anticipate user needs, promising smoother interaction on edge devices.
Robotics Breakthrough: The 'Think, Act, Learn' Framework Powers Smarter Machines
A closed-loop system embeds large language models into robots, boosting autonomous learning and task mastery.
ReGAIN: AI-Driven Precision in Network Security
ReGAIN combines retrieval-augmented generation with large language models to analyze network traffic with 98.82% accuracy and clear, evidence-backed explanations.
Geo-Semantic Contextual Graph Beats ResNet and Llama 4 Scout in Object Classification
The GSCG framework scores 73.4% accuracy on COCO 2017, outclassing ResNet and Llama 4 Scout in object classification.
Youtu-LLM: A Breakthrough in Lightweight AI Language Models
Youtu-LLM raises the bar for sub-2-billion-parameter models with unmatched efficiency and agentic intelligence.
Hybrid-Code: Boosting Reliability and Privacy in AI Clinical Coding
Hybrid-Code merges neuro-symbolic AI with strict privacy controls to tackle reliability and data security in healthcare coding.
COMETH Framework Boosts AI’s Moral Judgment Accuracy
COMETH improves AI’s ethical decisions by teaching machines to read context like humans do.
Agentic Learning Ecosystem: Building Smarter, More Adaptive LLMs
The Agentic Learning Ecosystem (ALE) introduces a new infrastructure for agentic LLMs, anchored by the open-source ROME model.
AKG Kernel Agent Automates and Speeds Up AI Model Optimization
The AKG kernel agent automates kernel tuning, boosting AI model speed by 1.46× while supporting diverse hardware and languages.
LoongFlow Advances Self-Evolving AI with Cognitive Reasoning
LoongFlow integrates large language models to boost AI evolution efficiency while cutting computational costs.
HGMem: Redefining Memory for Smarter AI Reasoning
HGMem uses hypergraph memory to boost multi-step reasoning in large language models.
SecBERT Boosts Financial Reasoning but Still Trails Human Experts
Domain-specific training with SecBERT improves financial QA accuracy, yet human experts remain unmatched.
New Composite Reliability Score Sets a Higher Bar for LLM Evaluation
Researchers introduce the Composite Reliability Score to improve how large language models are judged in critical decision-making fields.
IMDD-1M: A Million-Image Dataset Transforming Defect Detection in Manufacturing
IMDD-1M’s vast scale lets models spot manufacturing defects with 95% less task-specific data, breaking free from rigid expert systems.