TY - GEN
T1 - Comparative Analysis of Styles in LLM-Generated Code for LeetCode Problems
T2 - 49th IEEE Annual Computers, Software, and Applications Conference, COMPSAC 2025
AU - Zhang, Yifan
AU - Chen, Tsong Yueh
AU - Huang, Rubing
AU - Pike, Matthew
AU - Towey, Dave
AU - Ying, Zhihao
AU - Zhou, Zhi Quan
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Large language models (LLMs) have rapidly become a powerful tool in automated code generation, yet most research has focused on their correctness and efficiency rather than the stylistic patterns of their outputs. In this preliminary study, we analyze the code patterns generated by five popular LLMs (ChatGPT, Gemini, Claude, Grok, and DeepSeek) in their free versions, across three LeetCode problems: one top-ranked problem from each of the easy, medium, and hard categories. Our evaluation employs key metrics including inline comment density, naming conventions, and edge-case handling, highlighting both similarities and differences in verbosity, comprehensibility, and robustness among the code generated by the models. The findings of this study have important implications for software engineering and education, suggesting that LLM-generated code can serve as both a tool for rapid prototyping and an effective learning resource for beginners. Our future work will extend this analysis to a broader set of coding challenges and compare LLM outputs with human-written code to develop robust criteria for evaluating automated code generation.
AB - Large language models (LLMs) have rapidly become a powerful tool in automated code generation, yet most research has focused on their correctness and efficiency rather than the stylistic patterns of their outputs. In this preliminary study, we analyze the code patterns generated by five popular LLMs (ChatGPT, Gemini, Claude, Grok, and DeepSeek) in their free versions, across three LeetCode problems: one top-ranked problem from each of the easy, medium, and hard categories. Our evaluation employs key metrics including inline comment density, naming conventions, and edge-case handling, highlighting both similarities and differences in verbosity, comprehensibility, and robustness among the code generated by the models. The findings of this study have important implications for software engineering and education, suggesting that LLM-generated code can serve as both a tool for rapid prototyping and an effective learning resource for beginners. Our future work will extend this analysis to a broader set of coding challenges and compare LLM outputs with human-written code to develop robust criteria for evaluating automated code generation.
KW - Large language models
KW - LeetCode problem
KW - artificial intelligence
KW - code generation
KW - software engineering
UR - https://www.scopus.com/pages/publications/105016139958
U2 - 10.1109/COMPSAC65507.2025.00219
DO - 10.1109/COMPSAC65507.2025.00219
M3 - Conference contribution
AN - SCOPUS:105016139958
T3 - Proceedings - 2025 IEEE 49th Annual Computers, Software, and Applications Conference, COMPSAC 2025
SP - 1625
EP - 1630
BT - Proceedings - 2025 IEEE 49th Annual Computers, Software, and Applications Conference, COMPSAC 2025
A2 - Shahriar, Hossain
A2 - Alam, Kazi Shafiul
A2 - Ohsaki, Hiroyuki
A2 - Cimato, Stelvio
A2 - Capretz, Miriam
A2 - Ahmed, Shamem
A2 - Ahamed, Sheikh Iqbal
A2 - Majumder, AKM Jahangir Alam
A2 - Haque, Munirul
A2 - Yoshihisa, Tomoki
A2 - Cuzzocrea, Alfredo
A2 - Takemoto, Michiharu
A2 - Sakib, Nazmus
A2 - Elsayed, Marwa
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 8 July 2025 through 11 July 2025
ER -