Adversarial attacks probe the vulnerability of machine learning models to images corrupted by varying levels of perturbation. Typically, these adversarial images are visually indistinguishable from the originals and can be used to evaluate the robustness of a given model to noise. Adversarial images can also be included in the training set to improve model robustness. We examine adversarial attacks on classification models trained on pediatric hip ultrasound images and use them to improve model robustness in scan adequacy assessment. Three white-box adversarial attack methods, the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and the Basic Iterative Method (BIM), were applied to classification networks trained on 2D pediatric hip ultrasound images. We trained popular convolutional neural network (CNN) models (AlexNet, ResNet, DenseNet, Inception, and VGG) on two hip image datasets (DS) comprising 108 (DS1) and 200 (DS2) subjects. The effect of each adversarial attack was evaluated by the resulting reduction in accuracy, and the generated images were used for adversarial training to refine the CNN models. All deep learning (DL) models were sensitive to even mild perturbations (eps = 0.2) that are imperceptible to the human eye. The accuracy of the DL models dropped by 11-37%, with the largest drop observed for the DenseNet model. Upon validation on DS2, the accuracy of the DL models improved by 2-6% with adversarial training.

Clinical Relevance- This work applies adversarial attacks to deep learning models trained on B-mode hip ultrasound images. Initial results suggest that mild perturbations to the ultrasound image data can result in significant changes in the predictions of classification models, and that the robustness of these models improves when adversarial training is applied.
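The perturbation mechanics behind FGSM and BIM can be sketched independently of the paper's CNN setting. The snippet below is a minimal illustration on a toy logistic-regression "classifier", not the authors' models or code; the function names, the closed-form input gradient, and the parameter values (eps = 0.2, step size, iteration count) are illustrative assumptions. FGSM takes a single eps-sized step along the sign of the loss gradient with respect to the input; BIM repeats smaller steps while clipping the result back into an eps-ball around the original image.

```python
import numpy as np

def fgsm_attack(x, w, b, y, eps=0.2):
    """One-step FGSM on a toy logistic-regression model (illustrative only).

    For binary cross-entropy loss, the gradient of the loss with respect
    to the input x has the closed form (sigmoid(w.x + b) - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # model's predicted probability
    grad_x = (p - y) * w                      # dL/dx for BCE loss
    x_adv = x + eps * np.sign(grad_x)         # single signed-gradient step
    return np.clip(x_adv, 0.0, 1.0)           # keep valid pixel range

def bim_attack(x, w, b, y, eps=0.2, alpha=0.05, steps=4):
    """BIM: repeated small FGSM steps, projected onto the eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad_x = (p - y) * w
        x_adv = x_adv + alpha * np.sign(grad_x)    # small signed step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay within eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep valid pixel range
    return x_adv
```

PGD, the third method used in the paper, can be viewed as BIM with a random initialization inside the eps-ball before the iterative steps begin.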