The Government Accountability Office recently sustained a protest on the basis that the Department of the Air Force failed to adequately trade off the strengths and weaknesses of the awardee's proposal against proposals that were technically lower rated but less expensive.
Typically, an agency can use one of two source selection methods when choosing a contract awardee. The first is the "lowest price technically acceptable" (LPTA) method. As the name suggests, the technically acceptable offeror with the lowest price will usually win the award. Unlike LPTA, the best-value tradeoff selection method—the method used in this case—allows an agency to determine whether a specific proposal's superiority under non-price factors is worth paying a higher price for. In other words, a best-value tradeoff can justify an award to a higher-priced but more advantageous offer.
In this case, the Air Force sought an indefinite quantity of training, operations, and administrative services. The Air Force received proposals from nine offerors. As relevant here, among these offerors were R&K Enterprise Solutions and Cyber Engineering and Technical Alliance, LLC (CETA), an SDVOSB Mentor-Protégé Joint Venture.
The agency assigned each technical criterion an individual score of zero, three, four, or five. It used a formula to assign each proposal an overall weighted total evaluation score. It then conducted an "integrated assessment" of the score plus price to determine the best value.
Using this method, the agency awarded the contract to CETA, which had a score of 443.33 and a price of $139.9 million. R&K, the protester, received a score of 382.5 with a price of $112 million—a savings of approximately $28 million.
The evaluation board's recommendation to give the award to CETA focused on the point difference. It said, "The additional 60.83 points in technical superiority . . . outweighs the $27M price difference[.]" (It's closer to $28 million, but who's counting?) The actual award decision also focused on points, stating that CETA's score of "443.33 is 10% higher than the next closest rated offeror." The decision failed to mention R&K's lower price.
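The arithmetic the agency relied on is simple enough to sketch. The figures below come from the decision; the comparison logic itself is a hypothetical illustration of what a purely mechanical tradeoff looks like—and why GAO found it insufficient:

```python
# Figures from the GAO decision; the "tradeoff" here is a deliberately
# naive sketch of a score-vs-price comparison, not the agency's actual model.
ceta_score, ceta_price = 443.33, 139.9e6  # awardee
rk_score, rk_price = 382.5, 112.0e6       # protester

score_diff = ceta_score - rk_score        # ~60.83 points
price_diff = ceta_price - rk_price        # ~$27.9 million

print(f"Score advantage: {score_diff:.2f} points")
print(f"Price premium:   ${price_diff / 1e6:.1f} million")
# A mechanical tradeoff stops at these two numbers. GAO's objection was that
# nothing here examines the strengths and weaknesses behind the points.
```

The sketch makes the gap visible: the numbers alone say nothing about whether 60.83 points of "technical superiority" are worth roughly $28 million more.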
In its protest of the award decision, R&K argued (1) the agency’s tradeoff decision was unreasonable because the decision authority considered only CETA’s proposal and did not compare the merits of CETA’s proposal to R&K’s proposal, and (2) the ultimate decision by the agency consisted of a mechanical comparison of point scores that did not consider the underlying bases for those scores.
The agency argued that it adopted the evaluation board's comparison of CETA's and R&K's proposals, but GAO said the comparison has to come from the actual award decision maker. GAO also found that the agency's tradeoff decision was unreasonable because it was based entirely on a comparison of scores, without the required consideration of the proposals' underlying strengths and weaknesses. It sustained the protest.
As point-based evaluations continue to be agency favorites (take CIO-SP4 and Polaris, for example), this case is a good reminder that agencies can't simply add up the points to see who won. Best-value tradeoff analyses must consider the qualitative merits underlying the scores, not just point totals and price. Unlike LPTAs, best-value tradeoff decisions are presumed to rest, in part, on a basis other than price or score alone.
Whether you are an offeror wanting to secure a best-value tradeoff award, or an agency seeking to insulate itself from a protest, best-value tradeoffs that implement point scores must include qualitative, comparative analyses that justify the agency’s tradeoff selection.
When Numbers Fall Short: Unreasonable Mechanical Scoring in Best-Value Tradeoffs was last modified January 16, 2023.