

Fine-Tuned Large Language Models as an Interface (LLMaaI) for Domain-Specific Human-Robot Collaboration

Kuru, Kaya ORCID: 0000-0002-4279-4166 (2026) Fine-Tuned Large Language Models as an Interface (LLMaaI) for Domain-Specific Human-Robot Collaboration. In: 3rd International Summit on Robotics, Artificial Intelligence & Machine Learning (ISRAI2026), April 20–22, 2026, Frankfurt, Germany.

PDF (Accepted Version), 253kB
Available under License Creative Commons Attribution No Derivatives.

Official URL: https://robotics2026.spectrumconferences.com/speak...

Abstract

Human–robot collaboration (HRC) is increasingly central to modern robotics applications, spanning autonomous vehicles, industrial automation, healthcare, disaster response, and humanitarian operations. Despite advances in robotic autonomy, effective collaboration remains constrained by the limitations of existing human–machine interfaces (HMIs), which often demand technical expertise, rigid command structures, or extensive training. HRC has traditionally relied on structured interfaces such as graphical user interfaces, predefined command languages, or teleoperation systems, which limit usability, adaptability, and scalability across domains. Recent advances in large language models (LLMs), particularly those built on transformer architectures, offer a transformative opportunity to redefine how humans interact with robotic systems. Their natural language understanding, reasoning, and task-abstraction capabilities position LLMs as a promising foundation for a new interaction paradigm in robotics: LLMs as an Interface (LLMaaI).

This work introduces LLMaaI, a paradigm in which well-tuned LLMs act as adaptive, domain-aware, and context-sensitive interfaces between human operators and robotic platforms. By integrating natural language understanding, reasoning, and task abstraction, LLMaaI enables intuitive natural-language interaction, dynamic task specification, and real-time decision support in domain-specific HRC. More specifically, this work presents the architectural design, tuning strategies, safety considerations, and application scenarios of LLMaaI across critical domains such as autonomous vehicles, humanitarian demining, industrial robotics, healthcare, and search-and-rescue. Additionally, it highlights how well-tuned LLMs can reduce cognitive load, enhance situational awareness, and improve operational efficiency while maintaining human-in-the-loop (HITL) control and accountability. Finally, it discusses open challenges and future research directions toward trustworthy and deployable LLM-driven robotic interfaces.
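The interaction pattern described above — an LLM translating free-form operator language into a constrained, validated robot task, with a human-in-the-loop gate on safety-critical actions — can be sketched in Python. This is a minimal illustration, not the paper's implementation: the model call is stubbed with canned responses, and all names (`RobotTask`, `ALLOWED_ACTIONS`, `llm_interpret`) are hypothetical.

```python
# Hypothetical LLMaaI-style pipeline: language in, validated structured task out,
# with HITL confirmation flagged for safety-critical actions. Illustrative only.
import json
from dataclasses import dataclass

# Domain-specific whitelist: the LLM cannot request actions outside this set.
ALLOWED_ACTIONS = {"move_to", "pick", "place", "halt"}

@dataclass
class RobotTask:
    action: str
    target: str
    requires_confirmation: bool  # HITL gate before dispatch to the robot

def llm_interpret(utterance: str) -> str:
    """Stand-in for a fine-tuned, domain-tuned LLM that emits JSON.

    A deployed system would call the model here; we return canned JSON
    so the sketch is self-contained and runnable.
    """
    canned = {
        "take the crate to bay 3": '{"action": "move_to", "target": "bay_3"}',
        "stop everything": '{"action": "halt", "target": "all"}',
    }
    # Unknown utterances fall back to a safe halt rather than a guess.
    return canned.get(utterance.lower(), '{"action": "halt", "target": "all"}')

def parse_task(raw: str) -> RobotTask:
    """Validate LLM output before it ever reaches the robot platform."""
    data = json.loads(raw)
    if data["action"] not in ALLOWED_ACTIONS:
        raise ValueError(f"action {data['action']!r} not permitted in this domain")
    # Safety-critical actions always require explicit operator confirmation.
    critical = data["action"] in {"halt"}
    return RobotTask(data["action"], data["target"], requires_confirmation=critical)

task = parse_task(llm_interpret("take the crate to bay 3"))
print(task.action, task.target, task.requires_confirmation)
```

The key design point mirrors the abstract's safety considerations: the LLM proposes, but a deterministic validation layer and a HITL confirmation flag decide what the robot may actually execute.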

