Recent advances in machine learning, particularly in adversarial training, have underscored the need for models that are not only accurate but also robust to adversarial attacks. A critical challenge, however, lies in the unequal vulnerability of different classes within these models, which often leads to a 'weakest-link' phenomenon in which certain classes are markedly more susceptible to adversarial perturbations. Motivated by this observation, we propose Balanced Adversarial Training (BAT), a framework that improves robustness specifically for the classes identified as vulnerable. The core of our methodology is the generation of adversarial examples tailored to these vulnerable classes, a process we term class-specific perturbation; we integrate these examples into training to strengthen the model on the at-risk classes. Experimental evaluations across multiple datasets show that BAT improves the robustness of the vulnerable classes against adversarial attacks while maintaining overall model performance. Our work highlights the importance of moving beyond average robust accuracy, which is particularly relevant in safety-critical applications.
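To make the core idea concrete, the following is a minimal, hypothetical sketch of class-specific adversarial training on a toy problem. It is not the paper's exact BAT algorithm: the FGSM perturbation, the reweighting heuristic, and all names (`fgsm`, `robust_acc_per_class`) are our own illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a toy analogue of class-specific adversarial
# training with per-class rebalancing. All design choices here are
# assumptions for demonstration, not the paper's BAT algorithm.

rng = np.random.default_rng(0)

# Toy binary problem; class 1 has higher variance, making it the more
# adversarially vulnerable ("weakest-link") class.
n = 300
X = np.vstack([
    rng.normal([-2.0, 0.0], 1.0, size=(n, 2)),   # class 0: robust
    rng.normal([+0.7, 0.0], 1.5, size=(n, 2)),   # class 1: vulnerable
])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, b, X, y, eps):
    """One FGSM step, x + eps * sign(dL/dx), for logistic loss."""
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w          # dL/dx = (p - y) * w
    return X + eps * np.sign(grad_x)

def robust_acc_per_class(w, b, X, y, eps):
    Xa = fgsm(w, b, X, y, eps)
    pred = (sigmoid(Xa @ w + b) > 0.5).astype(float)
    return [np.mean(pred[y == c] == c) for c in (0.0, 1.0)]

# Adversarial training with per-class loss weights that upweight the
# class whose robust accuracy currently lags (a simple rebalancing
# heuristic standing in for BAT's class-specific treatment).
w, b, eps, lr = np.zeros(2), 0.0, 0.3, 0.5
for step in range(400):
    acc0, acc1 = robust_acc_per_class(w, b, X, y, eps)
    weights = np.where(y == 1, 1.0 + max(acc0 - acc1, 0.0),
                               1.0 + max(acc1 - acc0, 0.0))
    Xa = fgsm(w, b, X, y, eps)             # class-specific perturbations
    p = sigmoid(Xa @ w + b)
    err = weights * (p - y)                # weighted logistic-loss gradient
    w -= lr * (Xa.T @ err) / len(y)
    b -= lr * err.mean()

acc0, acc1 = robust_acc_per_class(w, b, X, y, eps)
```

The reweighting closes the gap between per-class robust accuracies rather than optimizing only the average, which is the behavior the abstract argues for.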