Proof of Concept Complete

Variable Tracing

Benchmarking quantum-enhanced attention against classical transformers on the task of tracking variable values through sequences of code operations.

The Task

Variable tracing is a fundamental reasoning task that requires tracking the value of a variable as it changes through a sequence of operations. Given a code snippet with assignments, arithmetic operations, and variable interactions, the model must determine the final value of a queried variable.

This task tests a model's ability to maintain state across multiple steps and correctly handle temporal dependencies—a capability where we hypothesized quantum attention mechanisms would show measurable advantages.
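The ground-truth answer for any snippet can be computed by stepping through the operations in order. A minimal sketch of such a reference tracer (the function name and snippet format are illustrative, not taken from the benchmark code):

```python
def trace_variable(snippet: str, query: str) -> int:
    """Step through assignment lines in order and return the final
    value of the queried variable. Handles the operation forms the
    task uses: literals, variable references, and binary arithmetic."""
    env = {}
    for line in snippet.strip().splitlines():
        target, expr = (part.strip() for part in line.split("=", 1))
        # Evaluate the right-hand side against the current environment;
        # eval is acceptable here because the snippets are synthetic.
        env[target] = eval(expr, {}, env)
    return env[query]

# Reassignment case: y captures x's value before x changes.
print(trace_variable("x = 1\ny = x\nx = 9", "y"))  # → 1
```

Because assignments are applied strictly in sequence, the tracer naturally resolves the temporal dependencies that make reassignment cases hard for models.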

Why This Matters

Variable tracing mirrors real-world code understanding challenges: debugging, code review, and automated program analysis all require tracking state through execution flow. Superior performance on this task indicates potential for practical applications in AI-assisted software development.

Synthetic Dataset

We designed a synthetic dataset with controlled complexity levels to systematically evaluate model capabilities. The dataset includes simple assignments, arithmetic operations, variable references, and confounding reassignments across three difficulty tiers.

Level 1 — Simple

Direct assignments and basic arithmetic. The target variable's value can be determined from a single operation or direct reference.

x = 5
y = x + 3
→ y = 8

Level 2 — Interaction

Multiple variable interactions requiring tracking of intermediate values and dependencies between variables.

a = 2
b = a * 3
c = b + a
→ c = 8

Level 3 — Reassignment

Confounding reassignments that require precise temporal tracking. Variables are reassigned after being referenced by others.

x = 1
y = x
x = 9
→ y = 1
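A sketch of how tiered examples like these might be generated (the function name, templates, and value ranges are assumptions for illustration, not the benchmark's actual generator):

```python
import random

def make_example(level: int, rng: random.Random):
    """Generate one (snippet, query, answer) triple at the given
    difficulty level, mirroring the three tiers described above."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    if level == 1:
        # Tier 1: direct assignment plus one arithmetic step.
        return f"x = {a}\ny = x + {b}", "y", a + b
    if level == 2:
        # Tier 2: chained variable interactions.
        return f"a = {a}\nb = a * {b}\nc = b + a", "c", a * b + a
    # Tier 3: confounding reassignment after a reference.
    return f"x = {a}\ny = x\nx = {b}", "y", a

rng = random.Random(0)
snippet, query, answer = make_example(3, rng)
```

Seeding the generator makes the splits reproducible, and the level parameter gives direct control over the distribution of difficulty in each split.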

Experimental Setup

We compared our Quantum-Enhanced Transformer Multi-Head Attention (QETMHA) architecture against a classical transformer baseline with identical parameter counts. Both models were trained on the same dataset splits with equivalent training procedures to ensure a fair comparison.
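Per-level accuracy, the metric reported below, can be computed from per-example records. A small helper along these lines (the record format is an assumption for illustration):

```python
from collections import defaultdict

def per_level_accuracy(records):
    """records: iterable of (level, is_correct) pairs.
    Returns {level: accuracy} plus an 'overall' entry."""
    hits, totals = defaultdict(int), defaultdict(int)
    for level, correct in records:
        totals[level] += 1
        hits[level] += int(correct)
    acc = {lvl: hits[lvl] / totals[lvl] for lvl in totals}
    acc["overall"] = sum(hits.values()) / sum(totals.values())
    return acc
```

Breaking accuracy out by level is what surfaces the gap on reassignment problems, which an aggregate score alone would obscure.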

Results

QETMHA test accuracy: 95%
Classical test accuracy: 70%
Overall advantage: +25 percentage points (pp)
Level 3 advantage: +35pp

Test Accuracy Over Training

[Chart: test accuracy per epoch (1–5), quantum vs. classical]

Per-Level Accuracy (Final)

Level                    Quantum  Classical
Level 1 (Simple)           99%      79%
Level 2 (Interaction)      98%      60%
Level 3 (Reassignment)     90%      57%

Level 3 (Reassignment) Accuracy

[Chart: Level 3 accuracy per epoch (1–5), quantum vs. classical]

Quantum Advantage Over Training

[Chart: quantum minus classical accuracy per epoch (1–5), in percentage points]

The maximum quantum advantage, +35pp, occurs on the hardest task: Level 3 (Reassignment) problems.

Implications

These results validate our hypothesis that quantum attention mechanisms provide measurable advantages on tasks requiring precise state tracking across multiple reasoning steps. The pronounced advantage on Level 3 problems—where temporal order and reassignment must be carefully tracked—suggests significant potential for applications in code analysis, debugging assistance, and automated program verification.
