Artificial Intelligence Research
Exploring general-purpose intelligence through multimodal models, reasoning systems, and embodied agents.
Our Vision
Yazhvin’s AI research aims to build systems that can understand, reason, and act across diverse domains — from science and creativity to language and robotics. We believe in general-purpose models that are interpretable, robust, and aligned with human goals.
Our work spans multimodal learning, neuro-symbolic reasoning, agent-based architectures, and ethical alignment. We are building next-generation foundation models and researching how they can be safely deployed in the real world.
Research Areas
Multimodal Models
Vision-language-action models that learn across text, image, video, and 3D environments.
Neuro-symbolic Systems
Combining deep learning with logic and planning to enable structured reasoning.
Autonomous Agents
Training embodied systems that can plan, act, and learn in simulation and the real world.
Language Understanding
From chat agents to reasoning assistants — we build and align large-scale language models.
Creativity & Code
Tools for AI-assisted design, scientific discovery, and generative programming.
Robustness & Safety
Ensuring models behave reliably and interpretably, free from bias and manipulation.
Open Research. Shared Intelligence.
We publish papers, release models, and support collaborations across the global AI community.