DeepSound-V1: Start to Think Step-by-Step in the Audio Generation from Videos







Abstract


High-quality, synchronized audio is currently synthesized from video and optional text inputs by a range of multi-modal joint-learning frameworks. However, the alignment between the visual domain and the generated audio remains far from satisfactory. One key factor is the lack of sufficient temporal and semantic alignment annotations in open-source video-audio and text-audio benchmarks. We therefore propose a framework for audio generation from videos that leverages the internal chain-of-thought (CoT) of a multi-modal large language model (MLLM) to enable step-by-step reasoning without requiring additional annotations. We also construct a corresponding multi-modal reasoning dataset to facilitate learning the initial reasoning steps of audio generation. Our experiments demonstrate that the proposed framework reduces misalignment (voice-over) in the generated audio and achieves performance competitive with various state-of-the-art models. The evaluation results show that the proposed method outperforms state-of-the-art approaches across multiple metrics: FD_PaSST is reduced by up to 10.07%, FD_PANNs by up to 11.62%, and FD_VGG by up to 38.61%; IS improves by up to 4.95%, IB-score increases by up to 6.39%, and DeSync is reduced by up to 0.89%.




Demos


Generate Directly

Generate Step-by-Step











Method


[Figure: overview_deepsound]
Figure 1: Overview of DeepSound. The model employs a step-by-step reasoning process to generate audio from video. In the first step, it generates coarse audio from the input video. The second step identifies voice-over components by analyzing both the coarse audio and the video. The third step removes the detected voice-over elements from the audio. Finally, the model determines whether the resulting audio is silent.
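
For concreteness, the pipeline in Figure 1 can be sketched as the Python pseudocode below. The object and method names (v2a_model.generate, mllm.detect_voice_over, editor.remove, mllm.is_silent) are hypothetical placeholders rather than the project's actual API, and the handling of a silent result is an assumption, since the figure only states that silence is checked.

```python
def generate_audio_step_by_step(video, v2a_model, mllm, editor):
    """Sketch of the four-step pipeline in Figure 1 (all names are placeholders)."""
    # Step 1: generate coarse audio directly from the input video.
    coarse_audio = v2a_model.generate(video)

    # Step 2: the MLLM analyzes the video together with the coarse audio
    # and identifies voice-over components (speech with no visible source).
    voice_over_segments = mllm.detect_voice_over(video, coarse_audio)
    if not voice_over_segments:
        return coarse_audio

    # Step 3: remove the detected voice-over elements from the audio.
    cleaned_audio = editor.remove(coarse_audio, voice_over_segments)

    # Step 4: check whether the remaining audio is silent. Figure 1 only
    # states that this check is made; falling back to the coarse audio
    # here is an assumption, not the paper's specified behavior.
    if mllm.is_silent(cleaned_audio):
        return coarse_audio
    return cleaned_audio
```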

[Figure: overview_mllm]
Figure 2: Overview of Dual Multi-Modal Reasoning Learning. CoT_structure represents the internal reasoning steps within the overall audio generation process. CoT_detail refers to the step-by-step procedure for identifying voice-over components from the coarse audio and video.
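
To make the two reasoning granularities concrete, the sketch below shows how CoT_structure and CoT_detail samples might look as training data. The field names, prompt wording, and example content are illustrative assumptions, not the actual format of the constructed reasoning dataset.

```python
# Illustrative (assumed) layout of the two reasoning granularities in Figure 2.

cot_structure_sample = {
    # High-level reasoning over the whole generation process:
    # which step of the pipeline should be executed next.
    "inputs": ["<video>", "<coarse_audio>"],
    "question": "Given the video and the generated audio, what should be done next?",
    "reasoning": (
        "The audio contains speech with no visible source in the video, "
        "so it is voice-over and should be removed before the audio is finalized."
    ),
    "answer": "remove voice-over",
}

cot_detail_sample = {
    # Fine-grained reasoning: step-by-step identification of the
    # voice-over components within the coarse audio.
    "inputs": ["<video>", "<coarse_audio>"],
    "question": "Does the audio contain voice-over, and where?",
    "reasoning": (
        "Step 1: the video shows a dog running in a park with no person speaking on screen. "
        "Step 2: the audio contains continuous human speech. "
        "Step 3: speech without an on-screen speaker is voice-over."
    ),
    "answer": "yes, the speech segments are voice-over",
}
```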




Main Results


[Table: main_res]
Table 1: Video-to-audio results on the VGGSound test set. Bold text highlights where our proposed method outperforms previous methods, while the green text in brackets gives the improvement rate for each metric.
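
The bracketed improvement rates are relative changes against the best previous method. A minimal sketch of one plausible computation is shown below, assuming lower is better for the FD and DeSync metrics and higher is better for IS and IB-score; the sign convention is an assumption, not taken from the paper.

```python
def improvement_rate(ours: float, best_prev: float, lower_is_better: bool) -> float:
    """Relative improvement of our score over the best previous score, in percent."""
    if lower_is_better:  # e.g., FD_PaSST, FD_PANNs, FD_VGG, DeSync
        return (best_prev - ours) / best_prev * 100.0
    # higher is better, e.g., IS, IB-score
    return (ours - best_prev) / best_prev * 100.0
```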

[Table: ablation]
Table 2: Ablation results on MMAudio-L-44k. The improvement between the baseline and our method is shown in green, demonstrating the effectiveness of the learned CoT reasoning in enhancing the final audio quality; the improvement between Ours-s3 and Ours-s4 is shown in blue.