The false alarm should be a true alarm for UX designers.
So states Hanlon’s Razor:
“Never attribute to malice that which can be adequately explained by stupidity.”
We’d all do well to keep that truism in mind when considering the following scenario. This video was published on YouTube on October 21st, claiming that YouTube was altering user comments in real time.
The implications of this video are disturbing, to say the least. In the wake of this video, there was indeed a flurry of conspiracy theories, with the general gist that left-leaning tech megacorporations are experimenting with thought control techniques. It certainly didn’t help that Google has hired some rather shady characters (read: raving Marxists) in the past.
It isn’t hard to see how algorithmic comment editing could be used for evil. AI could detect a certain political position based on the content of a post, then either re-word it to tweak its meaning, or — even scarier — embed typos and grammatical errors to slowly discredit anyone who holds a specific view. Perhaps the incident in this video is just an early AI misstep.
Or perhaps it was all a misunderstanding. Which it was.
So what really happened?
As it turns out, the actual explanation for what happened in that video is a lot less sinister… but it should be just as interesting if you are a UX designer.
Perhaps you noticed when the guy said:
“it’s almost like the weird results you often get when you use Google Translate.”
That’s because it was Google Translate. Let me explain a few things.
First, the translation gaffe here most likely involved Russian. Because Russian uses the Cyrillic alphabet, and because English speakers probably don’t want to bother typing Cyrillic characters, Translate attempts to transliterate Latin approximations of Russian words into Cyrillic. For instance, “до свидания” (Russian for “goodbye”) is most accurately transliterated as “do svidaniya”; however, many people might misspell the first word as “da”. Translate will attempt to factor in that error and creatively interpret the text based on what it thinks the user meant.
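To make the mechanism concrete, here is a minimal sketch of Latin-to-Cyrillic transliteration of the kind Translate appears to perform. The mapping table and greedy matcher are illustrative inventions, not Google’s actual implementation (which also does the fuzzy spelling correction described above).

```python
# Toy Latin-to-Cyrillic transliteration table. Illustrative only;
# the real system is far larger and handles spelling variants.
TRANSLIT = {
    "shch": "щ", "ch": "ч", "sh": "ш", "zh": "ж", "kh": "х", "ts": "ц",
    "ya": "я", "yu": "ю", "yo": "ё",
    "a": "а", "b": "б", "v": "в", "g": "г", "d": "д", "e": "е",
    "z": "з", "i": "и", "y": "й", "k": "к", "l": "л", "m": "м",
    "n": "н", "o": "о", "p": "п", "r": "р", "s": "с", "t": "т",
    "u": "у", "f": "ф",
}

def transliterate(latin: str) -> str:
    """Greedily match the longest Latin digraph at each position."""
    keys = sorted(TRANSLIT, key=len, reverse=True)
    out, i = [], 0
    while i < len(latin):
        for key in keys:
            if latin.startswith(key, i):
                out.append(TRANSLIT[key])
                i += len(key)
                break
        else:
            out.append(latin[i])  # pass unknown characters through
            i += 1
    return "".join(out)

print(transliterate("do svidaniya"))  # → "до свидания"
```

Note that even this naive version happily transliterates the misspelled “da svidaniya”; a system that additionally guesses at intended words has even more room to reinterpret your text.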
Second, Google Translate can translate a whole page from a foreign language into whichever language you choose. That can be very useful, but only when the user is aware that it is happening. Unfortunately, Translate’s ability to figure out the source language of a page is quite bad. It also seems completely oblivious to the notion that a page may contain several languages. Google Translate decides which language is dominant on the page, then translates everything it sees that is not your native language.
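You can see why this guessing game is fragile with a toy version of dominant-language detection. This sketch simply counts characters by script, which is an assumption on my part; real language detection is far more sophisticated, but the “pick one winner for the whole page” failure mode is the same.

```python
# A toy dominant-script detector: count Cyrillic vs. Latin letters
# and declare a single winner for the entire page. Illustrative only.
def dominant_script(text: str) -> str:
    lowered = text.lower()
    cyrillic = sum(1 for c in lowered if "\u0430" <= c <= "\u044f" or c == "ё")
    latin = sum(1 for c in lowered if "a" <= c <= "z")
    return "Cyrillic" if cyrillic > latin else "Latin"

# A page that is mostly Russian drags every English comment on it
# into "translate from Russian" territory.
print(dominant_script("Привет, мир! Some English comment."))
```

A detector like this has no concept of “this page contains both Russian and English”; once the page as a whole tips toward Cyrillic, every comment on it gets treated as Russian.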
Thus, the sequence of events in the video likely went like this:
- Page loads
- Google Translate looks at the text on the page. Somehow it gets fooled into thinking the page is primarily in Russian.
- Google Translate enters “Russian to English” mode.
- Translate somehow mistakes the user’s comment for a Latin transliteration of Russian rather than English (possibly because of the user’s admitted typo), and accordingly attempts to translate it into Russian.
- Comment is thus translated from English to Russian, then back to English.
- Panic ensues.
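The sequence above amounts to a round trip: English → Russian → English. A round trip through any translator is lossy because translation is many-to-one, so composing it with its inverse is not the identity. The word tables below are toy stand-ins I made up, not real translation data, but they show how the original wording drifts:

```python
# Toy dictionaries: two English words collapse onto one Russian word,
# so the reverse trip cannot recover the original. Illustrative only.
EN_TO_RU = {"weird": "странный", "strange": "странный", "results": "результаты"}
RU_TO_EN = {"странный": "strange", "результаты": "results"}

def round_trip(comment: str) -> str:
    """Translate English to Russian, then back to English."""
    ru = [EN_TO_RU.get(word, word) for word in comment.split()]
    return " ".join(RU_TO_EN.get(word, word) for word in ru)

print(round_trip("weird results"))  # → "strange results"
```

The user’s comment came back reworded for exactly this reason: it was silently bounced through Russian and back.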
You can prove this is what happened by typing that guy’s exact comment into Google Translate in “Russian to English” mode.
Case closed. Sort of.
The whole misunderstanding was preventable
And now we come back to Hanlon’s Razor.
This particular incident did not come about as the result of sinister machinations. That doesn’t mean that it does not represent a significant problem, nor does it even mean that there isn’t a hint of the sinister lurking beneath the surface.
Let’s start off with the obvious: the tech industry needs a viral rumor about Orwellian censorship like it needs a hole in the head. Through their own doing, the big tech brands currently enjoy a reputation approaching that of Bank of America in 2009. Between Zuckerberg experimenting on his users, Twitter enlisting a rogue’s gallery of villainous organizations to form its “Trust and Safety” Council, Google demonetizing harmless channels, and the fact that every one of them collaborates with the Chinese government (which still maintains pictures of a certain clown-haired mass-murderer on its buildings), the tech giants are increasingly seen as the evil corporations from 1980s cyberpunk flicks.
Combine that with the fact that there is a general apprehension about the notion that we are in a “post-truth world” (we aren’t), and you have a powder keg awaiting some scandal to ignite it.
Your typical UX designer or product manager can’t do much about that whole issue, unfortunately. Nor can designers prevent programmers from building sloppy algorithms that lead to misleading errors. You’re just going to have to work around these realities by minimizing the chances that a design blunder will be misconstrued as a digital atrocity and tank your employer’s stock value, or even put them on the hit list of a regulatory agency.
Here are three takeaways from this story that you ignore at your peril.
Modes are a minefield
Dan Saffer, author of Microinteractions, refers to modes as “a fork in the rules”. When a particular interaction can behave in multiple ways depending on a certain variable, it has multiple modes. One ubiquitous mode is “edit mode”, found everywhere from the iPhone’s text message screen to Medium’s articles. In edit mode, you no longer merely consume the information on the screen, but you can alter it.
Misunderstanding of what mode you are in can cause all sorts of problems. If you don’t realize you are in “edit” mode, you might accidentally delete a text message thread. If you don’t realize your keyboard is in Caps Lock mode, you might send off a reply that makes you look UNHINGED.
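Saffer’s “fork in the rules” can be stated in a few lines of code. In this sketch (the names and actions are my own invention, not from any real app), the exact same gesture does something harmless in one mode and something destructive in the other:

```python
from enum import Enum

class Mode(Enum):
    VIEW = "view"
    EDIT = "edit"

def handle_tap(mode: Mode, item: str) -> str:
    """The same tap follows different rules depending on the mode."""
    if mode is Mode.EDIT:
        return f"deleted {item}"  # destructive in edit mode
    return f"opened {item}"       # harmless in view mode

print(handle_tap(Mode.VIEW, "thread"))  # → "opened thread"
print(handle_tap(Mode.EDIT, "thread"))  # → "deleted thread"
```

If the interface does not make the current value of `mode` unmistakably visible, the user is one tap away from an outcome they never intended.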
The case of the altered YouTube comments is a clear-cut example of mode confusion. The user thought he was in original text mode, when in fact he was in Russian-to-English mode. Simply making the active mode more obvious to the user could have prevented the entire fiasco.
Currently, this is how Chrome indicates that you are in translation mode:
That’s it. A single, tiny mystery meat button in the URL bar. What are the chances that the user will even notice it? Why not text that states “Translating Russian to English”?
The user is in control
What I gather from this particular incident is that the user was unaware that translation was even a possible explanation for what happened, which suggests that he did not ask for a translation of the page. I don’t know enough about the underlying software to know if he ever manually set the browser to always translate Russian to English, but if he did not do so, that is a big problem.
The more you take control away from the user, the less they will understand the way your system works, and the less they will trust it. In the short term, you may believe that you are taking a load off the user’s shoulders; over time, however, they will begin to question what is happening, especially when something goes wrong. And it will.
The product is not your personal soapbox
While political motivations played no role in this incident, users’ distrust of YouTube and Google has partly political origins. And it isn’t just about corporate-level skullduggery. Tech products are often designed in ways that transparently betray the creators’ personal values, and even impose them on the user.
One infamous example is Apple’s autocorrect. If you have ever wanted to throw your phone into a wall because it corrected you to “ducking” for the 300th time, you know what I’m talking about. Apple does not allow you to train the autocorrect and autocomplete to reflect your personal linguistic values. Is it such an outrageous stretch to go from the soft censorship of autocorrect to the hard censorship of public comment Bowdlerization?
Even though, in this case, the user was mistaken, Google got themselves into this mess. Lucky for them, the rumor was quickly squelched and that was that. They might not be so lucky next time.
Help me hit 5000 followers!
Did you like this article? Then please follow me, because my goal for 2018 is to reach 5000 followers. I can only do it with your help.
Want more Renegade UX?
I’ve got you covered.
- What do 1980s concept cars and 2000s cell phones have in common?
- Here’s a presidential alert for you: Your phone is your enemy.
- Doing Your Job For You: The iOS autocomplete