How Do LLMs Fail and Generalize in RTL Coding for Hardware Design?

About the Taxonomy

Our four-level error taxonomy classifies errors in LLM-generated RTL code by the successive stage of the EDA pipeline at which each error surfaces:

  • L1 Syntactic: The source string is rejected by the HDL parser. No AST can be constructed.
  • L2 Semantic: The source string parses into a valid AST but violates at least one static semantic constraint (e.g., detected during elaboration, linting, or synthesis).
  • L3S Functional-Solvable: The design compiles and synthesizes but fails functional verification against the specification; however, the model produces a correct solution in at least one other rollout, so the error is addressable via inference-time scaling (best-of-N sampling).
  • L3U Functional-Unsolvable: The design fails functional verification and the model cannot solve the problem in any rollout; fixing these errors requires fundamental model improvement.
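The taxonomy above can be sketched as a simple classifier over one rollout's EDA-pipeline outcomes. This is an illustrative sketch, not the evaluation harness; the function and argument names are hypothetical:

```python
from enum import Enum

class ErrorLevel(Enum):
    PASS = "pass"
    L1_SYNTACTIC = "L1"    # parser rejects the source; no AST
    L2_SEMANTIC = "L2"     # static semantic violation (elaboration/lint/synthesis)
    L3S_SOLVABLE = "L3S"   # functional failure, but some rollout solved it
    L3U_UNSOLVABLE = "L3U" # functional failure in every rollout

def classify_rollout(parses: bool, elaborates: bool, passes_testbench: bool,
                     any_rollout_passed: bool) -> ErrorLevel:
    """Map one rollout's outcome to a taxonomy level (hypothetical helper)."""
    if not parses:                 # L1: rejected by the HDL parser
        return ErrorLevel.L1_SYNTACTIC
    if not elaborates:             # L2: valid AST, static semantic violation
        return ErrorLevel.L2_SEMANTIC
    if passes_testbench:
        return ErrorLevel.PASS
    # Functional failure: solvable iff some other rollout of the same
    # problem passed (i.e., addressable by best-of-N sampling).
    return ErrorLevel.L3S_SOLVABLE if any_rollout_passed else ErrorLevel.L3U_UNSOLVABLE
```

Note that L3S vs. L3U is a property of the problem's full rollout set, not of a single rollout, which is why `any_rollout_passed` must be passed in.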

Benchmark

Model evaluation on the VerilogEval-Human V1 benchmark (156 problems, 10 rollouts each).

| Type | Model | Pass Rate (%) | L1 Syntactic (%) | L2 Semantic (%) | L3S Solvable (%) | L3U Unsolvable (%) | #Params (B) | Hub License |
|---|---|---|---|---|---|---|---|---|
| 🚀 Frontier | Claude Opus 4.6 | 90.8 | 2.4 | 0.3 | 2.2 | 4.2 | – | Proprietary |
| 🚀 Frontier | GPT-5.3 Codex | 89.0 | 2.7 | 0.6 | 2.7 | 5.0 | – | Proprietary |
| 🚀 Frontier | Gemini 3.1 Pro | 86.3 | 8.4 | 0.5 | 1.2 | 3.6 | – | Proprietary |
| 🚀 Frontier | GPT-5.4 | 81.7 | 1.2 | 0.6 | 6.6 | 10.0 | – | Proprietary |
| 🚀 Frontier | GPT-5.2 | 76.9 | 0.0 | 7.6 | 4.6 | 11.0 | – | Proprietary |
| 🚀 Frontier | Claude Sonnet 4.6 | 76.2 | 11.3 | 0.3 | 5.6 | 6.7 | – | Proprietary |
| 🔶 RTL Specialized | ScaleRTL-Qwen-32B | 75.0 | 1.5 | 1.5 | 12.0 | 10.0 | 32 | Open |
| 🟢 Open Source | K2-Think (RL) | 73.1 | 7.8 | 2.5 | 7.1 | 9.6 | – | Open |
| 🟢 Open Source | K2-Think-SFT (RL) | 71.8 | 7.4 | 4.2 | 6.4 | 10.2 | – | Open |
| 🔶 RTL Specialized | CodeV-R1-Qwen-7B | 69.7 | 1.1 | 2.1 | 11.5 | 15.6 | 7 | Open |
| 🚀 Frontier | GPT-OSS-120B | 69.0 | 12.2 | 3.3 | 8.8 | 6.6 | 120 | Proprietary |
| 🚀 Frontier | GPT-5.1 | 67.9 | 6.1 | 4.0 | 7.0 | 15.0 | – | Proprietary |
| 🟢 Open Source | K2-Think | 67.1 | 12.3 | 6.5 | 5.7 | 8.3 | – | Open |
| 🔶 RTL Specialized | CodeV-R1-Distill-7B | 66.3 | 2.5 | 2.7 | 11.8 | 16.7 | 7 | Open |
| 🟢 Open Source | K2-Think-SFT | 64.5 | 15.4 | 6.9 | 4.0 | 9.1 | – | Open |
| 🚀 Frontier | Gemini 3 Pro | 64.4 | 29.3 | 0.1 | 2.0 | 4.2 | – | Proprietary |
| 🟢 Open Source | DS-R1-Distill-32B | 49.0 | 24.7 | 11.0 | 5.3 | 10.0 | 32 | Open |
| 🟢 Open Source | Qwen2.5-Coder-32B | 15.4 | 56.5 | 4.6 | 2.6 | 20.9 | 32 | Open |
| 🟢 Open Source | Qwen2.5-Coder-7B | 11.9 | 57.4 | 6.5 | 4.4 | 19.7 | 7 | Open |
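The L3S/L3U percentages above are properties of each problem's full rollout set rather than of individual rollouts: a functional failure counts as L3S only if the same model solved that problem in at least one rollout. A minimal sketch of that aggregation, assuming a hypothetical per-rollout record format:

```python
from collections import Counter

def taxonomy_breakdown(results: dict[str, list[str]]) -> dict[str, float]:
    """results maps problem_id -> rollout outcomes, each one of
    {"pass", "L1", "L2", "fail_functional"} (hypothetical labels).
    Functional failures are split into L3S/L3U by whether the
    problem was ever solved; returns percentages over all rollouts."""
    counts: Counter[str] = Counter()
    total = 0
    for rollouts in results.values():
        solved_once = "pass" in rollouts
        for outcome in rollouts:
            total += 1
            if outcome == "fail_functional":
                counts["L3S" if solved_once else "L3U"] += 1
            else:
                counts[outcome] += 1
    return {level: 100.0 * n / total for level, n in counts.items()}
```

With 156 problems and 10 rollouts each, `total` would be 1560, and the five buckets (pass, L1, L2, L3S, L3U) sum to 100%.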

Evaluation Results

Evaluations on the VerilogEval-Human benchmark reveal a strict empirical ceiling: frontier models plateau at a 90.8% pass rate. The solvability taxonomy shows that L3U (Unsolvable) errors dominate the residual failures across model families, exposing persistent knowledge gaps that inference-time scaling cannot close. The analysis also reveals a striking surface-convergence gap: optimization drastically reduces syntax errors but concurrently increases functional testbench failures. Ultimately, register-transfer-level (RTL) coding capability rests largely on pretraining knowledge. Reinforcement learning during post-training (e.g., GRPO) amplifies existing competencies by teaching models to compile, while L3S errors (addressable via best-of-N sampling) coexist with L3U errors (which require fundamental model improvement).
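The gain available from best-of-N sampling can be quantified with the standard unbiased pass@k estimator of Chen et al. (2021): with n rollouts per problem, of which c pass, pass@k = 1 − C(n−c, k)/C(n, k). L3S errors are exactly those that vanish as k grows, while L3U errors keep pass@k pinned below 100%. A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n rollouts with c correct, passes."""
    if n - c < k:
        # Fewer than k incorrect rollouts exist, so any k-subset
        # must contain a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For an L3S problem with c ≥ 1, pass@n is 1.0 by construction; for an L3U problem, c = 0 and pass@k stays 0.0 for every k.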

Transition Matrices

The transition matrices below show how per-problem error categories evolve across the SFT and RL phases, illustrating the surface-convergence gap: optimization reduces syntax errors but increases functional testbench failures.
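One way such a matrix can be computed is to assign each problem a taxonomy level at the checkpoint before and after a training phase, then row-normalize the counts. This is a sketch under that assumption (one level per problem per checkpoint); the function and label names are hypothetical:

```python
LEVELS = ["pass", "L1", "L2", "L3S", "L3U"]

def transition_matrix(before: dict[str, str],
                      after: dict[str, str]) -> list[list[float]]:
    """before/after map problem_id -> taxonomy level at the checkpoints
    bracketing a training phase (e.g., pre-SFT -> post-SFT).
    Entry [i][j] is the fraction of problems at LEVELS[i] before the
    phase that ended at LEVELS[j] after it (rows sum to 1 or are all 0)."""
    idx = {lvl: i for i, lvl in enumerate(LEVELS)}
    m = [[0.0] * len(LEVELS) for _ in LEVELS]
    for pid, b in before.items():
        m[idx[b]][idx[after[pid]]] += 1.0
    for row in m:
        total = sum(row)
        if total:
            for j in range(len(row)):
                row[j] /= total
    return m
```

In this representation, the surface-convergence gap shows up as probability mass flowing out of the L1 row but into the L3S/L3U columns rather than the pass column.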