The Screening Partnership Program (SPP), managed by the Transportation Security Administration (TSA), contracts security screening services at commercial airports to qualified private companies. There are currently 21 SPP airports, most of them relatively small. The largest is San Francisco International, which handles 22.7 million passengers a year, according to the Department of Transportation. Kansas City International is second largest, handling almost 5 million passengers a year. Even with meager SPP participation, that is still a lot of travelers being screened by private companies.

I have wondered over the years why more airports, especially the large ones, are not a greater part of SPP. From an economics perspective, competition breeds excellence, as long as a program is well managed and uses measurable goals. Enhanced security is one good metric, and a goal of minimizing the ability to smuggle mock explosives or banned weapons (testing that is currently being done) seems reasonable, provided that individual airport results are not immediately made public.

Homeland Security Red Teams pose as passengers and try to beat the system to measure TSA's effectiveness, and results from Red Team tests last year showed TSA failed 95% of the time. So to the extent that qualified private companies can do a better job than TSA, objective metrics can and should be considered for airports applying to become part of SPP.

Potential benefits and metrics other than enhanced security, however, should also be considered. For example, for years TSA has asked that people not accept items from strangers (no kidding). However rare that activity may be, it can be prevented (even further) if the TSA screening line is moving quickly; logistically, it is easier to slip something unseen into the baggage of a slow-moving person. In short, a slow-moving security line presents an opportunity for a bad actor. But are these kinds of metrics really being considered? And if so, are they a factor when TSA weighs SPP applications? They should be.

Consider the additional implications of improved security performance, such as better customer service. Indeed, as the Heritage Foundation’s David Inserra writes, “Beyond just pure efficiency, SPP airports also report improved customer service from their private-sector screeners.”

What is more, the Government Accountability Office (GAO) states that for customer satisfaction, “the performance of contractor screeners at the program level appears to be on par with the performance of federal screeners.”

However, consider some data. For 2014-2015, TSA customer service data show that, for the time period and airports considered, SFO (an SPP airport) had the lowest complaint rate: 2.7 complaints per 100,000 passengers. The standard deviation was 0.75, meaning that, on a month-to-month basis, the complaint rate fell between 1.95 and 3.45 complaints per 100,000 passengers about two-thirds of the time. The ratio of the standard deviation to the average complaint rate (the coefficient of variation) was lower for several other airports, meaning their month-to-month complaint rates varied less relative to their averages (e.g., CLT 0.62/2.8 = 0.22 versus SFO 0.75/2.7 = 0.27). Meanwhile, the complaint rates at some of the nation's largest airports were considerably worse.
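The comparison above can be reproduced with a short calculation using the figures cited. The helper below is purely illustrative, assuming only the mean complaint rates and standard deviations given in the text; it is not TSA's own methodology:

```python
def complaint_stats(mean_rate, std_dev):
    """Return the one-sigma band and coefficient of variation for a
    monthly complaint rate (complaints per 100,000 passengers)."""
    # Band the monthly rate falls in roughly two-thirds of the time,
    # assuming roughly normal month-to-month variation.
    band = (mean_rate - std_dev, mean_rate + std_dev)
    # Coefficient of variation: variability relative to the average.
    cv = std_dev / mean_rate
    return band, cv

# Figures cited above (illustrative inputs, not official data).
sfo_band, sfo_cv = complaint_stats(2.7, 0.75)
clt_band, clt_cv = complaint_stats(2.8, 0.62)
print("SFO band:", sfo_band, "CV:", round(sfo_cv, 3))
print("CLT band:", clt_band, "CV:", round(clt_cv, 3))
```

A lower coefficient of variation, as with CLT here, means a more predictable screening experience month to month even when the average complaint rates are similar.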

There is no shortage of information that should weigh on SPP applications. Whether TSA actually weighs these metrics, however, is unclear. See my fellow Security Debrief contributor Justin Hienz’s post on just what TSA is doing behind the scenes.