MIT scientists explored a critical flaw in AI language models called position bias, in which models favor information at the beginning and end of a text while ignoring the middle. Their research reveals that this bias is rooted not only in the training data, but also in the architecture of the models themselves.
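One way to get intuition for the architectural side of the claim is a toy simulation. The sketch below is not the MIT team's method; it simply assumes a transformer-style causal attention mask (each token can only attend to itself and earlier tokens) with uniform attention weights, and composes that attention across several layers. Even with no training data at all, the influence of input positions on the final token's representation concentrates at the start of the sequence:

```python
import numpy as np

n_tokens, n_layers = 16, 8

# Uniform causal attention: token i attends equally to tokens 0..i.
# This is a simplifying assumption, not a real model's learned weights.
A = np.tril(np.ones((n_tokens, n_tokens)))
A /= A.sum(axis=1, keepdims=True)

# Composing attention across layers approximates how much each input
# position can ultimately influence each output position.
M = np.linalg.matrix_power(A, n_layers)

# Influence of each input position on the final token's representation.
influence = M[-1]

# Early positions accumulate far more influence than middle or late ones,
# purely as a consequence of the causal mask.
print(influence)
```

In this simplified setting the bias runs only toward the beginning; in practice, effects such as positional encodings and recency patterns learned from data are thought to contribute the favoritism toward the end as well.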

