A/B testing is an experimental method used in human-centered design and learning engineering to compare two versions of a product, feature, or learning intervention and determine which performs better for users. By showing the different versions (A and B) to separate groups of users and analyzing their responses, learning engineering teams can make data-driven decisions to improve usability, engagement, and effectiveness. A/B testing is often conducted using trusted research methodologies, such as randomized controlled trials, and leverages data instrumentation (see knowledge area) and learning analytics (see knowledge area).
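
As a minimal sketch of the analysis step, the example below randomly assigns learners to version A or B and then compares module-completion rates between the two groups with a pooled two-proportion z-test. The group sizes, completion counts, and 0.05 significance threshold are illustrative assumptions, not values drawn from this section.

```python
# Illustrative A/B test on a hypothetical learning intervention, assuming each
# learner's outcome is a binary "completed the module" flag. All numbers here
# are made-up examples.
import math
import random


def assign_version(learner_id: str) -> str:
    """Randomly assign a learner to version A or B (simple randomization)."""
    return random.choice(["A", "B"])


def two_proportion_z_test(successes_a: int, n_a: int,
                          successes_b: int, n_b: int) -> tuple[float, float]:
    """Pooled two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))
    return z, p_value


if __name__ == "__main__":
    # Hypothetical results: 120 of 500 learners completed version A,
    # 150 of 500 completed version B.
    z, p = two_proportion_z_test(successes_a=120, n_a=500,
                                 successes_b=150, n_b=500)
    print(f"z = {z:.2f}, p = {p:.4f}")
    if p < 0.05:
        print("Difference is statistically significant at the 0.05 level.")
    else:
        print("No statistically significant difference detected.")
```

In practice, the statistical test would be chosen to match the outcome measure (for example, a t-test for continuous scores or a chi-square test for categorical outcomes), and the assignment and outcome data would come from the team's data instrumentation rather than being hard-coded.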