Deep learning has demonstrated excellent performance in medical image segmentation and plays an important role in clinical diagnosis and treatment. With the growing demand for portable medical devices and real-time segmentation, model efficiency and compactness have become critical for practical clinical deployment. However, the pursuit of higher segmentation accuracy often drives researchers to build increasingly complex models, leading to significant growth in parameter counts and computational cost. Balancing segmentation performance against computational cost has therefore become a key challenge. To this end, we propose an efficient and lightweight medical image segmentation model, DNUNet. It combines large-kernel convolution, a dual-path multilevel structure, and a feature sparsification strategy to strengthen feature extraction and fusion while filtering out redundant information. As a result, DNUNet achieves high-precision segmentation while substantially reducing computational and memory overhead. Specifically, we design a dual-path multilevel interactive convolutional module that effectively increases network depth with fewer parameters, capturing both local and broader contextual information. In addition, we propose an adaptive norm sparse fusion module as an alternative to traditional skip connections. This module captures the intrinsic structure of the data and extracts useful information through a low-rank representation of the features, enabling more efficient and accurate feature fusion. Comprehensive experiments on multiple medical image datasets show that our method strikes a good balance between lightweight design and segmentation accuracy, outperforming various state-of-the-art (SOTA) methods. Its high efficiency and low resource consumption make it well suited for real-time deployment in clinical settings such as portable devices.
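To make the two proposed components more concrete, a minimal PyTorch sketch is given below. It is an illustrative interpretation of the abstract's description, not the authors' implementation: the module names (DualPathBlock, NormSparseFusion), kernel sizes, the channel split between the two paths, and the norm-based channel gating used here to stand in for the adaptive norm sparse fusion (the low-rank representation itself is not reproduced) are all assumptions.

```python
# Illustrative sketch only -- names, kernel sizes, and the sparsification rule
# are assumptions, not the DNUNet reference implementation.
import torch
import torch.nn as nn


class DualPathBlock(nn.Module):
    """Two parallel depthwise paths: a small kernel for local detail and a
    large kernel for broader context, fused by a pointwise convolution."""

    def __init__(self, channels: int, large_kernel: int = 7):
        super().__init__()
        half = channels // 2
        self.local_path = nn.Sequential(
            nn.Conv2d(half, half, kernel_size=3, padding=1, groups=half, bias=False),
            nn.BatchNorm2d(half),
            nn.GELU(),
        )
        self.context_path = nn.Sequential(
            nn.Conv2d(half, half, kernel_size=large_kernel,
                      padding=large_kernel // 2, groups=half, bias=False),
            nn.BatchNorm2d(half),
            nn.GELU(),
        )
        # Pointwise convolution lets the two paths exchange information.
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = torch.chunk(x, 2, dim=1)
        out = torch.cat([self.local_path(a), self.context_path(b)], dim=1)
        # Residual connection keeps stacked blocks cheap and easy to train.
        return self.fuse(out) + x


class NormSparseFusion(nn.Module):
    """Skip-connection replacement: encoder channels are gated by their
    (learnably scaled) L2 norms, so low-energy, redundant channels are
    suppressed before fusion with the decoder feature."""

    def __init__(self, channels: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, enc: torch.Tensor, dec: torch.Tensor) -> torch.Tensor:
        # Per-channel L2 norm acts as an importance score.
        norms = enc.pow(2).mean(dim=(2, 3), keepdim=True).sqrt()
        gate = torch.sigmoid(self.scale * norms)  # soft sparsification in [0, 1]
        enc_sparse = enc * gate
        return self.fuse(torch.cat([enc_sparse, dec], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)
    block = DualPathBlock(64)
    skip = NormSparseFusion(64)
    y = block(x)
    print(skip(x, y).shape)  # torch.Size([1, 64, 56, 56])
```

In this sketch the depthwise large-kernel path supplies the broader context mentioned in the abstract at low parameter cost, while the 1x1 fusion layer provides the cross-path interaction; the gating in NormSparseFusion is only a simple stand-in for the paper's adaptive norm sparse fusion module.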