Recent developments in spatial omics technologies have enabled the generation of high-dimensional molecular data, such as transcriptomes, proteomes, and epigenomes, within their spatial tissue context, either through co-profiling on the same tissue slice or through serial tissue sections. These datasets, which are often complemented by images, have given rise to multimodal frameworks that capture both the cellular and architectural complexity of tissues across multiple molecular layers. Integration of such multimodal data poses significant computational challenges due to differences in scale, resolution, and data modality. In this review, we present a comprehensive overview of computational methods developed to integrate multimodal spatial omics and imaging datasets. We highlight the key algorithmic principles underlying these methods, ranging from probabilistic models to recent deep learning approaches.