Latest News
Nvidia and Microsoft Bet Big on Anthropic to Challenge OpenAI
Nvidia and Microsoft are investing billions in Anthropic, building a powerful counterweight to OpenAI’s dominance in AI development.
RISE Reveals How Large Language Models Really Think
By replacing human labels with sparse autoencoders, researchers expose LLMs' hidden reasoning and show how to control it.
CASCADE Framework Advances AI-Driven Scientific Research
CASCADE, powered by GPT-5, achieves a 93.3% success rate on scientific tasks, signaling a major step for AI in research.
Instacart, Arm, and Stripe Eye IPOs to Recharge Tech Market
Three major US tech firms plan to go public by year-end, promising billions and a boost for Wall Street.
GRASP and StochGRASP Cut AI Fine-Tuning Costs for Edge Devices
Two new frameworks slash trainable parameters and boost robustness, enabling smarter AI on limited hardware.
Youtu-Agent Sets New Standards for LLM Agent Frameworks
Youtu-Agent cuts configuration costs and boosts adaptability with automated generation and continuous evolution.
BatteryAgent Advances Lithium-Ion Battery Fault Diagnosis with AI
BatteryAgent blends physical insights and large language models to deliver safer, clearer lithium-ion battery diagnostics.
CREST: Boosting LLM Accuracy and Efficiency Without Retraining
CREST improves large language model reasoning by steering cognitive attention heads, raising accuracy by up to 17.5% and cutting token use by 37.6%.
CogRec: Merging Cognitive Architecture and LLMs to Fix Recommendation Systems
CogRec combines Large Language Models with Soar architecture to deliver clearer, more accurate recommendations.
LongCat ZigZag Attention Boosts AI Efficiency with Sparse Models
LongCat ZigZag Attention (LoZA) enables AI models to process up to 1 million tokens efficiently, drastically cutting computational costs.
New System Cuts 3D Mesh Generation to Under One Second for Real-Time Robotics
A breakthrough speeds up 3D mesh creation from a single RGB-D image, enabling robots to perceive and plan in real time with better environmental context.
Recursive Language Models: Extending Context Windows Without the Cost
Recursive Language Models (RLMs) break long prompts into chunks, enabling large language models to process far more context efficiently and affordably.
SPARK: Advancing Personalized Search with Persona-Based LLM Agents
SPARK uses persona-driven agents and multi-agent coordination to deliver sharper, more personalized search results.
CEC-Zero Cuts Chinese Spelling Errors Without Supervision
CEC-Zero uses zero-supervision reinforcement learning to beat traditional Chinese spelling correction methods.