Audiopedia: Audio QA with Knowledge

1 Indian Institute of Technology Jodhpur   2 KTH Royal Institute of Technology, Sweden
*Equal Contribution

Abstract

In this paper, we introduce Audiopedia, a novel task called Audio Question Answering with Knowledge, which requires both audio comprehension and external knowledge reasoning. Unlike traditional Audio Question Answering (AQA) benchmarks that focus on simple queries answerable from audio alone, Audiopedia targets knowledge-intensive questions. We define three sub-tasks: (i) Single Audio Question Answering (s-AQA), where questions are answered based on a single audio sample, (ii) Multi-Audio Question Answering (m-AQA), which requires reasoning over multiple audio samples, and (iii) Retrieval-Augmented Audio Question Answering (r-AQA), which involves retrieving relevant audio to answer the question. We benchmark large audio language models (LALMs) on these sub-tasks and observe suboptimal performance. To address this, we propose a generic framework that can be adapted to any LALM, equipping them with knowledge reasoning capabilities. Our framework consists of two components: (i) Audio Entity Linking (AEL) and (ii) Knowledge-Augmented Audio Large Multimodal Model (KA2LM), which together improve performance on knowledge-intensive AQA tasks. To our knowledge, this is the first work to address advanced audio understanding through knowledge-intensive tasks like Audiopedia. Code will be publicly released for research.
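To make the three sub-task settings concrete, the sketch below shows one way their instances could be represented. The dataclass and field names are illustrative assumptions for exposition, not the benchmark's actual data schema.

from dataclasses import dataclass
from typing import List

# Illustrative instance formats for the three Audiopedia sub-tasks.
# All class and field names are assumptions, not the released schema.

@dataclass
class SingleAudioQA:        # s-AQA: answer from a single audio sample
    audio_path: str
    question: str
    answer: str

@dataclass
class MultiAudioQA:         # m-AQA: reason jointly over several audio samples
    audio_paths: List[str]
    question: str
    answer: str

@dataclass
class RetrievalAudioQA:     # r-AQA: first retrieve the relevant audio from a pool
    audio_pool: List[str]   # candidate audio samples to retrieve from
    question: str
    answer: str

example = SingleAudioQA(
    audio_path="clips/animal_call.wav",
    question="On which continents does the animal making this sound live in the wild?",
    answer="Africa and Asia",
)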

Framework

We propose a framework that combines Audio Entity Linking (AEL) with a Knowledge-Augmented Audio Large Multimodal Model (KA2LM) to equip any large audio language model with the knowledge reasoning needed for knowledge-intensive tasks such as Audiopedia.

Figure: Overview of our proposed framework.
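As a rough illustration of how the two components fit together, the following sketch wires AEL and knowledge-augmented answering into a single routine. The function names and the lalm.generate interface are hypothetical placeholders, not the paper's released API.

from typing import List

def link_audio_entities(audio_path: str) -> List[str]:
    # Audio Entity Linking (AEL): map the audio to named entities
    # (e.g. the species or sound source heard in the clip).
    raise NotImplementedError  # placeholder; an audio tagger plus entity linker in practice

def fetch_knowledge(entities: List[str]) -> str:
    # Retrieve textual knowledge about the linked entities from an external source.
    raise NotImplementedError  # placeholder; e.g. a knowledge-base lookup

def answer_with_knowledge(lalm, audio_path: str, question: str) -> str:
    # KA2LM-style answering: condition the LALM on both the audio and the
    # knowledge retrieved for the entities linked from that audio.
    entities = link_audio_entities(audio_path)
    knowledge = fetch_knowledge(entities)
    prompt = (
        f"Entities heard in the audio: {', '.join(entities)}\n"
        f"Relevant knowledge: {knowledge}\n"
        f"Question: {question}"
    )
    return lalm.generate(audio=audio_path, prompt=prompt)  # hypothetical LALM interface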




BibTeX

@inproceedings{penamakuri2024audiopedia,
  author    = {Abhirama Subramanyam Penamakuri and Kiran Chhate and Akshat Jain},
  title     = {Audiopedia: Audio QA with Knowledge},
  booktitle = {ICASSP},
  year      = {2025},
}