A Comparative Study of LLM-Powered Database Interfaces versus Traditional SQL Systems for Inventory Management

Authors

Menglong Guo¹ and Yu Sun², ¹The Chinese University of Hong Kong, Hong Kong; ²California State Polytechnic University, USA

Abstract

This study presents a systematic comparison of LLM-powered database interfaces and traditional SQL systems for inventory management. Two parallel Flask backends were implemented: a SQLite-based system using the SQLAlchemy ORM, and an LLM-based system using DeepSeek to process natural language commands against JSON storage. Both expose identical REST API endpoints, enabling a controlled comparison [10]. Experimental results reveal significant trade-offs: the SQL backend achieved 12 ms mean latency and 100% operational accuracy, while the LLM backend averaged 1,850 ms latency (154x slower) with 88% accuracy, degrading to 72% on complex multi-step operations. These findings demonstrate that while LLM-powered databases offer unprecedented query flexibility and natural language accessibility, they currently incur substantial performance and reliability penalties. Traditional SQL systems remain superior for mission-critical applications requiring deterministic behavior and ACID compliance, whereas LLM approaches suit scenarios that prioritize user accessibility and dynamic query capability over guaranteed correctness and response speed.
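The two-backend design described above can be sketched in miniature. This is a hedged illustration, not the authors' code: the Flask/REST layer is omitted, the SQLAlchemy ORM is replaced by the stdlib `sqlite3` module, and the DeepSeek call is stubbed with a toy keyword parser (all class and method names below are illustrative assumptions).

```python
import sqlite3

class SQLBackend:
    """Deterministic path: structured queries against an in-memory SQLite DB."""
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE inventory (name TEXT PRIMARY KEY, qty INTEGER)"
        )

    def add_item(self, name, qty):
        # Upsert: insert a new row or increment the existing quantity.
        self.db.execute(
            "INSERT INTO inventory VALUES (?, ?) "
            "ON CONFLICT(name) DO UPDATE SET qty = qty + excluded.qty",
            (name, qty),
        )

    def get_qty(self, name):
        row = self.db.execute(
            "SELECT qty FROM inventory WHERE name = ?", (name,)
        ).fetchone()
        return row[0] if row else 0


class LLMBackend:
    """Natural-language path: free-form commands applied to dict storage
    (standing in for the JSON file used in the paper)."""
    def __init__(self):
        self.store = {}

    def _interpret(self, command):
        # Placeholder for the DeepSeek call: a real system would send the
        # command plus the storage schema to the model and parse its reply.
        words = command.lower().split()
        qty = next(int(w) for w in words if w.isdigit())
        return {"action": "add", "name": words[-1], "qty": qty}

    def command(self, text):
        op = self._interpret(text)
        if op["action"] == "add":
            self.store[op["name"]] = self.store.get(op["name"], 0) + op["qty"]

    def get_qty(self, name):
        return self.store.get(name, 0)
```

Because both classes expose the same read interface, a harness can issue an equivalent operation to each backend and compare answers and latencies, which is the controlled-comparison setup the abstract describes.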

Keywords

Large Language Models, SQL Databases, Natural Language Interfaces, Inventory Management.

Volume 16, Number 6