<div align="center">

🎙 Towards Joint Modeling of Dialogue Response and Speech Synthesis based on Large Language Model

Xinyu Zhou (周欣宇), Delong Chen (陈德龙), Yudong Chen (陈玉东)

ArXiv | Poster | Notebook | GitHub

</div>

This project explores the potential of building an AI spoken dialogue system that "thinks how to respond" and "thinks how to speak" simultaneously, which aligns more closely with the human speech production process than the current cascaded pipeline of an independent chatbot and a Text-to-Speech (TTS) module.

We hypothesize that Large Language Models (LLMs) with billions of parameters possess significant speech understanding capabilities and can jointly model dialogue responses and linguistic features. We investigate prosodic structure prediction (PSP), a typical TTS front-end task, to demonstrate the speech understanding ability of LLMs.
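As a rough illustration of what such joint modeling looks like in practice, the sketch below assumes the LLM emits its dialogue response with Mandarin-style prosodic boundary labels (#1 prosodic word, #2 prosodic phrase, #3 intonational phrase, #4 sentence end) inlined in the text, and shows how a TTS back-end could separate the plain response from the prosodic annotation. The prompt template, label placement, and helper function are hypothetical and not taken from the paper.

```python
import re

# Hypothetical prompt sketch: the LLM is asked to reply to the user AND insert
# prosodic boundary labels (#1/#2/#3/#4) into its reply, so the dialogue
# response and the TTS front-end annotation come from a single generation step.
PROMPT_TEMPLATE = (
    "You are a spoken dialogue assistant. Reply to the user and mark prosodic "
    "boundaries with #1, #2, #3, and #4.\n"
    "User: {utterance}\n"
    "Assistant:"
)

def split_response_and_prosody(annotated: str):
    """Separate plain response text from inlined prosodic boundary labels.

    Returns the clean text (for display) and a list of
    (character_offset, boundary_level) pairs a TTS back-end could consume.
    """
    boundaries, clean_parts, offset = [], [], 0
    for token in re.split(r"(#[1-4])", annotated):
        if re.fullmatch(r"#[1-4]", token):
            boundaries.append((offset, int(token[1])))
        else:
            clean_parts.append(token)
            offset += len(token)
    return "".join(clean_parts), boundaries

if __name__ == "__main__":
    # Illustrative annotated output (made up, not from the paper).
    annotated = "今天#1天气#2很好#3我们#1出去#1散步吧#4"
    text, prosody = split_response_and_prosody(annotated)
    print(text)     # 今天天气很好我们出去散步吧
    print(prosody)  # [(2, 1), (4, 2), (6, 3), ...]
```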