Artificial Conversations: Something Is Wrong

| Shannon E. Thomas

This week’s Monday morning conversation revolved around James Bridle’s article “Something is wrong on the internet.”

This article inspired a conversation that was part shock and part reflection. How is it that children are watching content ostensibly made for them that is completely inappropriate? How is it that the only humans involved in some of these interactions are the children doing the consuming?

Several Artificers chatted with friends and family with children to better understand how kids were interacting with technology. The article was forwarded. The shock was shared.

More important than the shock was our reflection. As designers of systems, how do we fix the internet?

Empathize with those who would exploit the system

Manjari brought up how things have scaled more quickly than anyone had anticipated. Ariane followed up wondering how we could have anticipated the bots. Hans countered: can we really blame the bots when humans are behind so much of the abhorrent content (aimed at both children and adults)?

Technology often has unintended consequences, and anticipating these consequences is not something that designers always take enough responsibility for. While none of us has fully fleshed-out evil personas, several of us have worked on projects where we explicitly considered what could go wrong, who might exploit it, and how. Being able to empathize with evil-doers is a designer’s responsibility.

Manjari brought up that it’s not only designers who should be seeking to protect users from harm. Everyone on the team, from the business stakeholders demanding features to the engineers maintaining a system’s security, needs to understand the risks to the user and take some responsibility for safeguarding against the worst intentions.

Take responsibility for the system

Ariane mentioned that her friends with kids had no idea what content existed, and that they were quick to trust that if something was labeled for kids, it would also be safe for kids. In many cases, a clear and obvious warning about the content could help companies and parents reach an understanding before children are shown potentially harmful material.

Transparency paired with control could go a long way. Natalia brought up parental controls that force some involvement from the parent, whether in selecting approved channels or specific types of content. Giving parents the control to approve specific channels would not only help moderate the market, it would also force bots to create content that appeals to parents and not just children.

Kamila suggested that we should consider keeping humans involved in moderation. Tech companies often blame the algorithm to avoid taking responsibility themselves. Taking that responsibility back might cost more money, both for moderators and for the inevitable PR battles that come with censorship claims, but it could also lead to a more aggressive attitude toward solving the inappropriate-content problem.

Hans had a more cynical approach: follow the money. If a content distributor’s income comes from advertisers, give advertisers more control over what their ads appear next to. This would also force content to appeal to more discerning minds than those of 3-year-olds.

Give the power (and the responsibility) back to users

Providing unfettered access to content creates problems, even when the content is high quality. Nathalie recalled that her mom used to make her clean her room before a show came on. Because shows came on at a set time, this created a sense of urgency to complete a chore. As a young child, she learned to deal with constraints and deadlines.

By eliminating the barriers to consumption, have we removed something more important? Hans brought up how internet porn has affected teens’ ideas of sex, and Manjari mentioned how relationships in Japan have suffered in recent years as men pursue digital relationships with virtual women.

We all have memories of waiting for a commercial break to run to the toilet or having to rewind a VHS to catch something we missed. All of these little inconveniences were opportunities for learning. How might we design these back into our experiences of content consumption?

Ariane mentioned a service that provided discounts and rewards to students who didn’t check Facebook during school hours. Perhaps if we cannot enforce constraints, we should instead reward discipline.


Did this conversation change anything for The Artificial? Aside from all of us being a little less naïve now, we now make sure to consider users with nefarious intentions alongside our other personas when designing experiences.