Program Goals, Technology, and the Unintended Consequences of Automation
Digital gov and policymaking folks, take note of this story about military recruitment numbers tanking after the DoD automated the import of recruits’ health records into its system. Check out the Military Times’ coverage of this example of the unintended consequences of replacing human interactions with tech.
Why did their recruitment numbers suddenly nosedive? Well, lots of recruits have disqualifying medical conditions, but they sign up anyway. Recruiters used to (quietly) tell applicants not to disclose those conditions in their applications. But with health records now imported automatically, that workaround no longer works.
In other words, the program depends on inaccurate data. By making the data more accurate, the automation broke the outcomes.
Should the DoD have the most accurate info it can get? In theory, of course. But the recruiters interviewed in the article are clear: the program isn’t designed for that.
Was (is) the medical disqualification policy imperfect? Sure, and it probably always will be. But the operationalization of that policy used to be flexible enough to mitigate those imperfections, because there was a human in the middle.
Should recruiters who told recruits to omit information from their applications, and the recruits who followed that advice, be in legal jeopardy? Technically, yes. But the military has relied on this “tacit tradition” for decades.
Programs are more than just laws and regulations. When we automate away the flexibility in our systems, the resulting rigidity can make them less responsive to our goals, in exchange (theoretically) for efficiency and better data.
(Thanks to @ronfein for sending this article my way.)