Passive acoustic monitoring can offer insights into the state of coral reef ecosystems at low cost and over extended periods. Comparison of whole-soundscape properties can rapidly deliver broad insights from acoustic data, in contrast to the detailed but time-consuming analysis of individual bioacoustic events. However, a lack of effective automated analysis for whole-soundscape data has impeded progress in this field. Here, we show that machine learning (ML) can be used to unlock greater insights from reef soundscapes. We showcase this on a diverse set of tasks using three biogeographically independent datasets, each containing classes for fish community (high or low), coral cover (high or low) or depth zone (shallow or mesophotic). We show that supervised learning can train models to identify ecological classes and individual sites from whole soundscapes. However, we report that unsupervised clustering achieves this whilst providing a more detailed understanding of ecological and site groupings within soundscape data. We also compare three approaches for extracting feature embeddings from soundscape recordings for input to ML algorithms: acoustic indices commonly used by soundscape ecologists, a pretrained convolutional neural network (P-CNN) trained on 5.2 million hours of YouTube audio, and CNNs trained on each individual task (T-CNN). Although the T-CNN performs marginally better across tasks, we reveal that the P-CNN offers a powerful tool for generating insights from marine soundscape data: it requires orders of magnitude less computation whilst achieving near-comparable performance to the T-CNN, with significant performance improvements over the acoustic indices. Our findings have implications for soundscape ecology in any habitat.
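To make the P-CNN pipeline concrete, the sketch below embeds each recording with a pretrained audio network and fits shallow supervised and unsupervised models on the resulting embeddings. It is a minimal illustration, not the study's exact method: VGGish via TF-Hub is assumed as a stand-in for the P-CNN (it was trained on millions of hours of YouTube audio), and the file paths and labels are hypothetical placeholders.

```python
# Minimal sketch of a P-CNN-style pipeline: pretrained embeddings + shallow ML.
# VGGish (TF-Hub) is an assumed stand-in for the paper's pretrained network;
# recording_paths and labels below are hypothetical, not the study's data.
import numpy as np
import librosa
import tensorflow_hub as hub
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.cluster import KMeans

vggish = hub.load("https://tfhub.dev/google/vggish/1")  # 128-d embedding per 0.96 s frame

def embed(path: str) -> np.ndarray:
    # VGGish expects a 16 kHz mono float32 waveform.
    wav, _ = librosa.load(path, sr=16000, mono=True)
    frames = vggish(wav).numpy()      # shape: (n_frames, 128)
    return frames.mean(axis=0)        # average-pool frames to one vector per file

# Hypothetical labelled recordings, e.g. 0 = low coral cover, 1 = high coral cover.
recording_paths = ["site_a_01.wav", "site_a_02.wav", "site_b_01.wav", "site_b_02.wav"]
labels = np.array([0, 0, 1, 1])

X = np.stack([embed(p) for p in recording_paths])

# Supervised: classify ecological state from whole-soundscape embeddings.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=2).mean())

# Unsupervised: cluster embeddings to look for ecological and site groupings.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments:", clusters)
```

Pooling frame-level embeddings to a single vector per recording keeps the downstream models cheap to train, which is the practical advantage the abstract attributes to the P-CNN route over training a task-specific CNN from scratch.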