TY - JOUR
T1 - LLM4Netlist: LLM-enabled Step-based Netlist Generation from Natural Language Description
AU - Ye, Kailiang
AU - Yang, Qingyu
AU - Lu, Zheng
AU - Yu, Heng
AU - Cui, Tianxiang
AU - Bai, Ruibin
AU - Shen, Linlin
N1 - Publisher Copyright:
© 2011 IEEE.
PY - 2025/5
Y1 - 2025/5
N2 - Empowered by Large Language Models (LLMs), substantial progress has been made in enhancing the EDA design flow at the high-level synthesis stage, such as direct translation from a high-level language into an RTL description. In contrast, little research has addressed netlist generation at the logic synthesis stage. Directly applying LLMs to netlist generation presents additional challenges due to the scarcity of netlist-specific data and the need for tailored fine-tuning and effective generation methods. This work first presents a novel training set and two evaluation sets tailored to direct netlist generation with LLMs, together with an effective pipeline for constructing these datasets. It then proposes LLM4NETLIST, a novel step-based netlist generation framework built on a fine-tuned LLM. The framework consists of a step-based prompt construction module, a fine-tuned LLM, a code confidence estimator, and a feedback loop module, and generates netlist code directly from natural language functional descriptions. We evaluate the efficacy of our approach on our novel evaluation datasets. The experimental results demonstrate that, compared to the average score of the 10 commercial LLMs in our experiments, our method improves functional correctness by 183.41% on the NetlistEval dataset and by 91.07% on NGen.
AB - Empowered by Large Language Models (LLMs), substantial progress has been made in enhancing the EDA design flow at the high-level synthesis stage, such as direct translation from a high-level language into an RTL description. In contrast, little research has addressed netlist generation at the logic synthesis stage. Directly applying LLMs to netlist generation presents additional challenges due to the scarcity of netlist-specific data and the need for tailored fine-tuning and effective generation methods. This work first presents a novel training set and two evaluation sets tailored to direct netlist generation with LLMs, together with an effective pipeline for constructing these datasets. It then proposes LLM4NETLIST, a novel step-based netlist generation framework built on a fine-tuned LLM. The framework consists of a step-based prompt construction module, a fine-tuned LLM, a code confidence estimator, and a feedback loop module, and generates netlist code directly from natural language functional descriptions. We evaluate the efficacy of our approach on our novel evaluation datasets. The experimental results demonstrate that, compared to the average score of the 10 commercial LLMs in our experiments, our method improves functional correctness by 183.41% on the NetlistEval dataset and by 91.07% on NGen.
KW - circuit design
KW - electronic design automation
KW - large language model
KW - natural language processing
KW - netlist generation
UR - http://www.scopus.com/inward/record.url?scp=105005449297&partnerID=8YFLogxK
U2 - 10.1109/JETCAS.2025.3568548
DO - 10.1109/JETCAS.2025.3568548
M3 - Article
AN - SCOPUS:105005449297
SN - 2156-3357
JO - IEEE Journal on Emerging and Selected Topics in Circuits and Systems
JF - IEEE Journal on Emerging and Selected Topics in Circuits and Systems
ER -