Ultrasound is a point-of-care imaging modality that allows for real-time operation. While real-time capabilities are advantageous, one potential concern is the operator dependency of the modality, as settings selected by the operator can alter the image appearance. Improving the B-mode image so that fewer manual adjustments are needed can reduce this operator dependency. Here, we propose a supervised learning framework, Optimal Apodizations with Training Simulations (OATS), that devises new apodization weights for image quality improvement. Our framework uses a differentiable beamformer to iteratively optimize apodization weights by comparing simulated ground truth images against the corresponding post-beamformed images over a simulated training set of more than 200 images. We experimentally verified that these apodization weights produced higher-quality B-mode images on both simulated and real-world data for focused and unfocused imaging scenarios. In the focused imaging scenario, the OATS-apodized images demonstrated reduced sidelobe artifacts, improved lateral resolution (11%), and improved signal equalization across depth compared with a conventional Hanning apodization. In the unfocused imaging scenario, we observed reduced sidelobe artifacts and improved tissue-to-lesion contrast by up to 13 dB compared with fixed F-number beamforming. Additionally, the OATS apodization weights were physically interpretable: through the supervised learning procedure, they learned to emulate image formation parameters such as time-gain compensation, an F-number-limited aperture, and the transmit focus. Overall, the proposed framework successfully learned generalizable receive apodizations that improve image quality.
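
To make the described optimization loop concrete, the following is a minimal sketch, not the authors' implementation, of how receive apodization weights could be learned through a differentiable delay-and-sum beamformer by regressing beamformed output against simulated ground truth. All names, shapes, and hyperparameters here (`das_beamform`, a single per-channel weight vector, the MSE loss, the Adam learning rate, the toy synthetic data) are illustrative assumptions standing in for the paper's simulated training set.

```python
import torch

torch.manual_seed(0)

# Hypothetical differentiable delay-and-sum beamformer: applies per-channel
# receive apodization weights to already time-delayed channel data and sums
# across the aperture. Shapes and names are assumptions for this sketch.
def das_beamform(delayed_data, apod):
    # delayed_data: (batch, channels, pixels) after focusing delays
    # apod: (channels,) learnable receive apodization weights
    return torch.einsum('bcp,c->bp', delayed_data, apod)

n_channels, n_pixels = 128, 256

# Stand-in "training set": random delayed channel data paired with toy
# ground-truth images (in the paper, these come from simulation).
delayed = torch.randn(200, n_channels, n_pixels)
truth = das_beamform(delayed, torch.hann_window(n_channels))  # toy target

apod = torch.nn.Parameter(torch.ones(n_channels))  # uniform initialization
optimizer = torch.optim.Adam([apod], lr=1e-2)

for epoch in range(50):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(das_beamform(delayed, apod), truth)
    loss.backward()  # gradients flow through the beamformer to the weights
    optimizer.step()
```

In practice the learned weights would likely vary with depth or pixel position (which is how effects such as time-gain compensation or an F-number-limited aperture could be emulated); the single weight vector above is the simplest case of the same gradient-based procedure.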