TECH COMPANIES PITCH chatbots to businesses as a way to keep customers coming back for more. A new bot built by Microsoft employees in their spare time is designed to do exactly the opposite.
The chatbot, tested recently in Seattle, Atlanta, and Washington, D.C., lurks behind fake online ads for sex posted by nonprofits working to combat human trafficking, and responds to text messages sent to the number listed. The software initially pretends to be the person in the ad, and can converse about its purported age, body, fetish services, and pricing. But if a would-be buyer signals an intent to purchase sex, the bot pivots sharply into a stern message.
“Buying sex from anyone is illegal and can cause serious long term harm to the victim, as well as further the cycle of human trafficking,” goes one such message. “Details of this incident will be reviewed further and you may be contacted by law enforcement for questioning.” The warning can vary based on the conversation; if a potential buyer expresses interest in someone underage, for example, the message changes accordingly.
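The flow described above — stay in persona until the counterpart signals intent to buy, then switch to a deterrent — can be sketched as a simple message handler. This is a hypothetical illustration, not Project Intercept's actual code: the keyword lists, the `respond` function, and the escalated underage wording are invented here, and the real system presumably uses a far more sophisticated intent classifier. Only the two warning sentences are quoted from the bot's reported output.

```python
# Hypothetical sketch of a decoy chatbot's pivot logic. Keyword matching
# stands in for whatever intent classification the real system uses;
# all phrase lists and the escalated warning text are invented.

BUY_INTENT_PHRASES = ("how much", "price", "meet up", "come over")
UNDERAGE_PHRASES = ("underage", "young girl", "younger")

# In-persona small talk (invented placeholder).
PERSONA_REPLY = "hey :) what are you looking for?"

# Warning text as reported; the underage escalation line is invented.
BASE_WARNING = (
    "Buying sex from anyone is illegal and can cause serious long term harm "
    "to the victim, as well as further the cycle of human trafficking. "
    "Details of this incident will be reviewed further and you may be "
    "contacted by law enforcement for questioning."
)
UNDERAGE_WARNING = "Soliciting a minor is a serious crime. " + BASE_WARNING


def respond(message: str) -> str:
    """Return the bot's next reply for one inbound text message."""
    text = message.lower()
    # Escalate first: interest in someone underage changes the warning.
    if any(phrase in text for phrase in UNDERAGE_PHRASES):
        return UNDERAGE_WARNING
    # Intent to purchase triggers the standard deterrent message.
    if any(phrase in text for phrase in BUY_INTENT_PHRASES):
        return BASE_WARNING
    # Otherwise keep the conversation going in persona.
    return PERSONA_REPLY
```

In a real deployment the persona replies would need to be varied and context-aware to be convincing, which is presumably where most of the engineering effort lies.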
Microsoft employees built the bot in a philanthropic initiative called Project Intercept, in collaboration with nonprofits that hope it can reduce demand for sex workers, and the incentives for criminals to coerce people into the sex trade. The technology is not a product of Microsoft itself. The National Human Trafficking Hotline received more than 5,000 reports of sex trafficking in 2016, but most cases are believed to go unreported.
Project Intercept’s lead partner, Seattle Against Slavery, is working with counterparts in 21 other U.S. cities, including Boston and Houston, to deploy the bot more widely. So far, the chatbot has exchanged 14,000 messages with nearly 1,000 people who responded to the planted ads. In about half of those cases, it heard enough to deliver a warning message.