From allegations of Russian interference in the US election to natural disasters and national tragedies, Westerners especially turn to search engines and social media feeds for the most up-to-date and accurate information about the people involved and the status of the event. Before the internet, information often trickled out slowly through multiple channels, and while the pace was frustrating, the information was usually pretty accurate.

In a lightning-speed world, though, information is updated second by second, its accuracy can be low, and the sources are often of dubious integrity at best. Reports of missing people, criminals at large, and conspiracy theories circulate in the immediate aftermath of a major event. On the Media, a WNYC podcast on media literacy, published a handy list of best practices for what to believe, and what not to believe, in the hours after such an event.

Usually, a little googling will disprove the fishier claims. But what happens when the search platforms themselves fall prey to the internet's nonsense, now that artificially intelligent algorithms, not human editors, assemble the results?

After the tragic shooting in Las Vegas, users rushed to search engines and typed "Las Vegas Shooting" to stay up to date on the manhunt, the victims, and the motive of the crime. The algorithms surfaced some reliable sources, but they also surfaced a thread from 4chan, a website infamous for peddling wild conspiracies, trolling comment sections, and leading readers astray. Within a few hours, a spokesperson for one of the search platforms acknowledged the algorithm's error, and the entry was taken down.

Widely used social media platforms, too, surfaced articles from alt-right websites when users searched for information about the act of domestic terrorism. The Crisis Management Hub team at one of those platforms, responsible for curating up-to-date reports on major events, issued a statement saying it had removed the errant articles and would do a better job of vetting the articles its algorithm surfaced.

When it comes to artificial intelligence and computer algorithms, it's important to remember that they don't exist in a vacuum: they inherit the biases of the people who build them and of the text they learn from. In a recent demonstration, an online translation platform showed how it assigns gendered pronouns when translating from a language that lacks them. A presenter translated the sentence "She is a doctor" from English into Turkish, whose third-person pronoun "o" is gender neutral. When that sentence was translated back into English, the algorithm produced "He is a doctor."
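To see why the round trip flips the pronoun, here is a minimal, self-contained sketch in Python. It does not call any real translation service; the pivot vocabulary and the corpus counts are invented for illustration, and the pronoun-guessing rule (pick whichever pronoun co-occurs with the occupation most often in training text) is a deliberately simplified stand-in for what a statistical translation model does.

```python
# Toy model of a round trip through a gender-neutral pivot language.
# Nothing here is a real translation API; all data below is invented.

# Step 1: English -> pivot. Both pronouns collapse to the gender-neutral
# "o" (as in Turkish), so the gender information is destroyed at this step.
EN_TO_PIVOT = {"she": "o", "he": "o"}

# Step 2: pivot -> English. "o" is ambiguous, so the model must guess.
# These hypothetical co-occurrence counts stand in for a biased corpus.
PRONOUN_COUNTS = {
    "doctor": {"he": 2400, "she": 600},
    "nurse":  {"he": 300,  "she": 2700},
}

def round_trip(pronoun: str, occupation: str) -> str:
    pivot = EN_TO_PIVOT[pronoun]          # "she" -> "o": gender erased here
    assert pivot == "o"                   # the pivot no longer encodes gender
    counts = PRONOUN_COUNTS[occupation]
    guess = max(counts, key=counts.get)   # back-translation picks whichever
    return f"{guess.capitalize()} is a {occupation}."  # pronoun the corpus favors

print(round_trip("she", "doctor"))  # He is a doctor.  (the bias flips it)
print(round_trip("she", "nurse"))   # She is a nurse.
```

The point of the toy is that no step is malicious: the information loss is inherent to the pivot language, and the guess simply mirrors whatever imbalance exists in the training text.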

Algorithms are products of our world and reflect the society we've built, so it should be no surprise that they don't always behave the way we think they ought to. In the aftermath of a national disaster or tragedy, then, tread cautiously, and report any errors you spot so that the programs do better next time.