Quantitative precipitation prediction is essential for managing water-related disasters, including floods, landslides, tsunamis, and droughts. Recent advances in data-driven approaches using deep learning have improved precipitation nowcasting performance. Moreover, it is well known that multi-modal information from various sources can improve deep learning performance. This study introduces RAIN-F+, a fusion dataset for rainfall prediction, and proposes benchmark models for precipitation prediction trained on it. RAIN-F+ is an integrated weather observation dataset comprising radar, surface-station, and satellite observations covering the land area of the Korean Peninsula. The benchmark model is developed based on the U-Net architecture with residual upsampling and downsampling blocks. We examine how performance changes with the number of observation datasets integrated during training. Overall, the results show that the fusion dataset outperforms the radar-only dataset over time. Moreover, the radar-only dataset shows clear limitations in predicting heavy rainfall above 10 mm/h. This suggests that complementary information from multiple modalities is crucial for deep-learning-based precipitation nowcasting.
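
To make the architectural description concrete, below is a minimal sketch (in PyTorch) of what residual downsampling and upsampling blocks in a U-Net-style model could look like. The channel widths, kernel sizes, normalization choices, and the four-channel toy input standing in for fused radar/station/satellite fields are illustrative assumptions, not the exact configuration used in this study.

```python
import torch
import torch.nn as nn


class ResidualDown(nn.Module):
    """Residual block that halves spatial resolution (strided-conv shortcut)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 strided conv so the shortcut matches the body's shape for the residual add
        self.skip = nn.Conv2d(in_ch, out_ch, 1, stride=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))


class ResidualUp(nn.Module):
    """Residual block that doubles spatial resolution and fuses the encoder skip."""
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, 2, stride=2)
        self.body = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 conv shortcut so the residual add matches after concatenation
        self.skip = nn.Conv2d(out_ch + skip_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, enc_feat):
        x = self.up(x)
        x = torch.cat([x, enc_feat], dim=1)  # concatenate the U-Net encoder skip connection
        return self.act(self.body(x) + self.skip(x))


if __name__ == "__main__":
    # Toy forward pass: 4 input channels stand in for fused radar/station/satellite fields.
    x = torch.randn(1, 4, 128, 128)
    down = ResidualDown(4, 32)
    up = ResidualUp(32, 4, 16)
    h = down(x)   # (1, 32, 64, 64)
    y = up(h, x)  # (1, 16, 128, 128)
    print(h.shape, y.shape)
```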