Five research programs exploring how quantum computing can fundamentally transform artificial intelligence.
A quantum-enhanced transformer architecture where the attention mechanism, the core of modern language models, runs natively on quantum hardware.
By computing attention weights through quantum superposition and interference, we can evaluate all token relationships simultaneously rather than sequentially, enabling richer contextual understanding with fundamentally different computational properties.
Active Research
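As a concrete illustration, the sketch below shows one standard way to derive attention weights from interference: a SWAP test estimates the overlap between amplitude-encoded query and key states, and those overlaps become the scores for one attention row. This is a minimal PennyLane sketch, not our production circuit; the register size and helper names are illustrative assumptions.

```python
import numpy as np
import pennylane as qml

n_qubits = 2  # qubits per token register (illustrative size)
dev = qml.device("default.qubit", wires=1 + 2 * n_qubits)

@qml.qnode(dev)
def swap_test(query, key):
    # Amplitude-encode both token vectors, then interfere them through
    # an ancilla: P(ancilla = 0) = (1 + |<q|k>|^2) / 2.
    anc = 0
    q_wires = list(range(1, 1 + n_qubits))
    k_wires = list(range(1 + n_qubits, 1 + 2 * n_qubits))
    qml.AmplitudeEmbedding(query, wires=q_wires, normalize=True)
    qml.AmplitudeEmbedding(key, wires=k_wires, normalize=True)
    qml.Hadamard(wires=anc)
    for i, j in zip(q_wires, k_wires):
        qml.CSWAP(wires=[anc, i, j])
    qml.Hadamard(wires=anc)
    return qml.probs(wires=anc)

def attention_row(query, keys):
    # Overlap-derived scores for one query, normalized with a softmax.
    scores = np.array([2 * swap_test(query, k)[0] - 1 for k in keys])
    weights = np.exp(scores)
    return weights / weights.sum()

tokens = np.random.default_rng(0).normal(size=(4, 2 ** n_qubits))
print(attention_row(tokens[0], tokens))  # one row of the attention matrix
```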
Models that measure quantum states as they evolve during training and inference, mapping exactly how tokens move through Hilbert space and how the model's reasoning takes shape.
By tracking these quantum trajectories, we gain unprecedented visibility into the reasoning process: not just what a model decides, but the full path it takes to get there.
Active Research
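A simulator-side illustration of the idea (instrumentation on real hardware is more involved): the sketch below records the statevector after each layer of a small variational circuit and reports layer-to-layer fidelity, one simple measure of how fast the state moves through Hilbert space. The circuit, sizes, and names are illustrative assumptions.

```python
import numpy as np
import pennylane as qml

n_qubits, n_layers = 3, 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def state_after(params, depth):
    # Run only the first `depth` layers, then read out the statevector.
    for l in range(depth):
        for w in range(n_qubits):
            qml.RY(params[l, w], wires=w)
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])
    return qml.state()

rng = np.random.default_rng(1)
params = rng.uniform(0, np.pi, size=(n_layers, n_qubits))

# The "trajectory": a snapshot after each layer, plus the step-to-step
# fidelity |<psi_l | psi_{l+1}>|^2 between consecutive snapshots.
trajectory = [state_after(params, d) for d in range(n_layers + 1)]
for l in range(n_layers):
    fid = abs(np.vdot(trajectory[l], trajectory[l + 1])) ** 2
    print(f"layer {l} -> {l + 1}: fidelity {fid:.3f}")
```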
Reinforcement learning algorithms designed to discover optimal ansätze and quantum circuits for our models.
Instead of hand-designing quantum circuits, we let RL agents explore the vast space of possible architectures, finding circuit configurations that maximize performance while respecting hardware constraints and noise profiles.
Active Research
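The sketch below shows the shape of such a loop on a toy problem, assuming a tabular REINFORCE policy over a three-gate action space and fidelity with a Bell state as the reward; our actual agents, action spaces, and reward signals are richer. Here the architecture is scored at its best angles over a coarse grid, a cheap stand-in for training the continuous parameters before judging the discrete circuit choice.

```python
import itertools
import numpy as np
import pennylane as qml

n_qubits = 2
GATES = [qml.RX, qml.RY, qml.RZ]                # discrete action: one gate per wire
dev = qml.device("default.qubit", wires=n_qubits)
target = np.zeros(4)
target[0] = target[3] = 1 / np.sqrt(2)          # Bell-state target

@qml.qnode(dev)
def run_circuit(actions, angles):
    for w, (a, theta) in enumerate(zip(actions, angles)):
        GATES[a](theta, wires=w)
    qml.CNOT(wires=[0, 1])
    return qml.state()

ANGLE_GRID = np.linspace(0, 2 * np.pi, 8, endpoint=False)

def reward(actions):
    # Best fidelity with the target over the angle grid.
    return max(
        abs(np.vdot(target, run_circuit(actions, angles))) ** 2
        for angles in itertools.product(ANGLE_GRID, repeat=n_qubits)
    )

rng = np.random.default_rng(0)
logits = np.zeros((n_qubits, len(GATES)))       # tabular softmax policy per wire
for step in range(150):
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    actions = [rng.choice(len(GATES), p=probs[w]) for w in range(n_qubits)]
    r = reward(actions)
    for w, a in enumerate(actions):             # REINFORCE update (no baseline)
        grad = -probs[w]
        grad[a] += 1.0
        logits[w] += 0.3 * r * grad

# Wire 0 reliably converges to RY (it builds the superposition the Bell
# state needs); wire 1 stays near-uniform, since any gate can act as
# identity at angle 0.
print("chosen gates:", [GATES[a].__name__ for a in logits.argmax(axis=1)])
```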
Leveraging quantum computing for geometric tensor lifting on Grassmann manifolds.
Grassmann geometry provides a natural framework for representing subspaces. By performing these tensor operations on quantum hardware, we access computational shortcuts that classical methods cannot exploit, enabling more expressive model representations.
Early-Stage Research
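For readers unfamiliar with the geometry, the classical sketch below shows the basic objects: lifting a raw feature block to a point on the Grassmannian (the subspace it spans) via orthonormalization, and measuring distance between subspaces through principal angles. The quantum implementation of these operations is not shown; function names are illustrative.

```python
import numpy as np

def grassmann_point(X):
    # Lift an n x k feature block to a point on Gr(k, n): the subspace
    # it spans, represented by an orthonormal basis from a QR factorization.
    Q, _ = np.linalg.qr(X)
    return Q

def grassmann_distance(U, V):
    # Geodesic distance from principal angles: the singular values of
    # U^T V are the cosines of the angles between the two subspaces.
    s = np.clip(np.linalg.svd(U.T @ V, compute_uv=False), -1.0, 1.0)
    theta = np.arccos(s)
    return np.sqrt(np.sum(theta ** 2))

rng = np.random.default_rng(0)
A = grassmann_point(rng.normal(size=(6, 2)))
B = grassmann_point(rng.normal(size=(6, 2)))
print("d(A, B) =", grassmann_distance(A, B))
print("d(A, A) =", grassmann_distance(A, A))  # ~0: same subspace
```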
Transformer-based models that run fully on quantum hardware. Not just quantum-enhanced components, but end-to-end quantum architectures.
This is the ultimate goal: a complete transformer where embeddings, attention, feedforward layers, and output projections are all implemented as quantum circuits, unlocking the full potential of quantum computation for language understanding.
Active Research
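As a scaffold for intuition only, the sketch below labels the stages of one small variational circuit after the stages of a transformer block, so that every stage is a quantum subcircuit rather than a classical layer. The gate choices, parameter shapes, and names are illustrative assumptions, not the architecture itself.

```python
import numpy as np
import pennylane as qml

n_qubits = 4  # one wire per token in this toy block
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_transformer_block(tokens, params):
    # "Embedding": angle-encode one scalar feature per token wire.
    for w in range(n_qubits):
        qml.RY(tokens[w], wires=w)
    # "Attention": trainable pairwise mixing between all token wires.
    for i in range(n_qubits):
        for j in range(i + 1, n_qubits):
            qml.IsingZZ(params["attn"][i, j], wires=[i, j])
    # "Feedforward": independent trainable rotations per wire.
    for w in range(n_qubits):
        qml.Rot(*params["ff"][w], wires=w)
    # "Output projection": expectation values as the block's activations.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

rng = np.random.default_rng(0)
params = {
    "attn": rng.normal(size=(n_qubits, n_qubits)),
    "ff": rng.normal(size=(n_qubits, 3)),
}
print(quantum_transformer_block(rng.uniform(0, np.pi, n_qubits), params))
```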