Predicting the influence power of nodes in complex networks, particularly at large scale, is a fundamental and challenging problem in network analysis. Labeling nodes according to their influence power requires running computationally intensive simulations, such as the Susceptible-Infected-Recovered (SIR) model, which become prohibitively time-consuming in large networks and severely limit scalability. To address this limitation, this study proposes a multi-level knowledge distillation approach that enhances prediction accuracy while substantially reducing inference time, even when only a few labeled nodes are available. The approach employs a multi-level teacher-student architecture that transfers knowledge from richly labeled networks to networks with few labeled nodes. The student model is deliberately shallow, with few parameters, yielding a lightweight architecture that significantly reduces inference time. The transferred knowledge comprises both soft labels and an adversarial alignment mechanism between the teacher and student models. Experimental results on a range of real-world datasets demonstrate significant improvements in both predictive accuracy and computational efficiency.
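To make the distillation objective concrete, the following is a minimal, hypothetical PyTorch sketch of how a shallow student could be trained against a pre-trained teacher using both soft labels and an adversarial alignment term. The module names (StudentMLP, Discriminator, distillation_step), the dimensions, and the loss weight lam are illustrative assumptions, not the implementation described in this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch (assumed names/shapes): a shallow student distilled from a
# teacher via (i) soft-label matching and (ii) adversarial alignment of
# teacher/student embeddings through a small discriminator.

class StudentMLP(nn.Module):
    """Shallow, lightweight student that predicts a node influence score from node features."""
    def __init__(self, in_dim, hidden_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.head = nn.Linear(hidden_dim, 1)  # scalar influence score per node

    def forward(self, x):
        h = self.encoder(x)          # student embedding
        return self.head(h), h


class Discriminator(nn.Module):
    """Tries to distinguish teacher embeddings from student embeddings (adversarial alignment)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, h):
        return self.net(h)


def distillation_step(student, disc, x, teacher_scores, teacher_emb, opt_s, opt_d, lam=0.1):
    """One training step: soft-label distillation loss plus adversarial embedding alignment."""
    # Update the discriminator: teacher embeddings are "real", student embeddings are "fake".
    _, h_s = student(x)
    d_real = disc(teacher_emb)
    d_fake = disc(h_s.detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Update the student: match the teacher's soft scores and fool the discriminator.
    pred, h_s = student(x)
    loss_soft = F.mse_loss(pred.squeeze(-1), teacher_scores)  # soft labels from the teacher
    d_student = disc(h_s)
    loss_adv = F.binary_cross_entropy_with_logits(d_student, torch.ones_like(d_student))
    loss_s = loss_soft + lam * loss_adv
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()
    return loss_s.item(), loss_d.item()
```

In this sketch the adversarial term pushes the student's embeddings to be indistinguishable from the teacher's, so the transferred knowledge is not limited to matching output scores; the small student keeps inference cheap at test time because only StudentMLP is evaluated.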