Linking pictures with text: tagging flood-relevant tweets for rapid flood inundation mapping
Recent years have seen social media grow rapidly in popularity, and with it the number of disaster studies built on social media data. During a flood event, volunteers may contribute useful information about the extent and severity of the flood in real time, greatly facilitating rapid inundation mapping. However, because on-topic (flood-related) posts make up only a small fraction of the entire social media stream, a robust extraction method is needed. Taking Twitter as the target platform, this study presents a visual-textual approach to automatically tagging flood-related tweets for real-time flood mapping. Two convolutional neural networks process the pictures and the text of each tweet separately; their outputs are then combined and fed to a visual-textual fused classifier. The results suggest that the additional visual information from pictures improves classification accuracy, and that the extracted tweets, as timely documentation of a flood event, can benefit a variety of flood mitigation efforts.
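The two-branch fusion architecture described above can be sketched in simplified form. The snippet below is a minimal illustration, not the authors' implementation: the two branch functions are hypothetical stand-ins for the image and text CNNs, and all dimensions and the logistic fusion head are assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def visual_branch(image):
    # Stand-in for the image CNN: maps a picture to a fixed-length
    # feature vector (128-d is an assumed dimension).
    return rng.standard_normal(128)

def textual_branch(text):
    # Stand-in for the text CNN: maps tweet text to a fixed-length
    # feature vector (64-d is an assumed dimension).
    return rng.standard_normal(64)

def fused_classifier(img_feat, txt_feat, W, b):
    # Late fusion: concatenate the two branch outputs, then apply a
    # linear layer with a sigmoid to estimate P(tweet is flood related).
    z = np.concatenate([img_feat, txt_feat])
    logit = float(W @ z + b)
    return 1.0 / (1.0 + np.exp(-logit))

# Untrained illustrative weights for the fused head.
W = rng.standard_normal(128 + 64) * 0.01
b = 0.0

p = fused_classifier(visual_branch(None), textual_branch("street underwater"), W, b)
print(0.0 < p < 1.0)  # the fused score is a probability in (0, 1)
```

In practice each branch would be a trained CNN (e.g. an image network and a word-embedding-based text network), and the fusion head would be trained jointly on labeled flood-related and unrelated tweets.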