
The Human Factor in the Development of Translation Software

Balázs Kis - 27/04/2022



Humans and AI: Friends or Enemies? 

Not all translation is suitable for machine translation. In other words, the world will always need “premium translation” that cannot be produced without human involvement.

The end-users of translations are humans. So are the users of translation software.

In any technology setup, the voluntary agent is always human. No AI is currently capable of exercising will to make a choice. AI can, with varying degrees of success, mimic (or rather recycle) decisions that certain humans have made in the past. As a result, no technology should ever claim to replace human agents or to push the human agent out of the center.

Stanisław Lem writes in Summa Technologiae that “Every technology is actually an artificial extension of the innate tendency possessed by all living beings to gain mastery over their environment, or at least not to surrender to it in their struggle for survival.” He also suggests that humans use technology as another organ.

Building on this notion, we can think of any technology as an extension of the human body and/or the human mind. Technology gives its human users “superpowers.” It “augments” their capabilities so that, as in the case of translation, humans can translate more in the same amount of time while maintaining good quality, or even improve quality thanks to the computer’s unique ability to remember things precisely.

Humans are indeed not only the users of translations but also the users, designers, and developers of translation software. This understanding came in handy when I was teaching humanities students to use translation software and had to tackle the technophobia that sometimes surfaced among them. In my opinion, the prime source of technophobia is the idea that the computer is an alien or a futuristic robot, a contraption with a mind and will of its own. The use (and overuse) of the term ‘artificial intelligence’ does suggest this, but I usually ask my students to imagine a piece of software as yet another method of communication between humans: the designers and developers on one side, and the users on the other. In this context, the software is both the tool and the channel.

I don’t intend to diminish attempts to create a fully automatic “translation machine”. There have always been dreams of effortlessly bridging language gaps, especially since the Biblical heritage depicts the multitude of languages as punishment for humankind’s greed for power (cf. the Tower of Babel). The Bible also offers the image of Pentecost, where everyone hears the Apostles’ speech in their own tongue. Throughout the Middle Ages, there were countless efforts to find the perfect language, the one God allegedly spoke when creating the world (cf. Umberto Eco). Today we have concepts like the Babel fish and the Universal Translator, which would implement the ultimate, perfect machine translation.

I think these dreams are completely legitimate, and it is also worthwhile to try to create technology that makes translation fully automatic. What is not right, because it is not truthful, is to claim that it is ready: that any machine is ready to replace the human agents of translation, or that AI has reached “human parity”. There used to be a lot of talk about this, but by now it is obvious that those claims were exaggerated at best. In the scientific community, there are assertions and well-founded speculations (albeit without proof to date) that translation as a cognitive task is AI-complete. This means that before we achieve human-equivalent machine translation, we need to achieve singularity, that is, human-equivalent (or superior) AI. We know that right now we don’t have it, and we don’t know whether it can be created. We’re also not sure whether it should be.

Translation Software: The Human Factor 

Until we have the Universal Translator, a lot of translation (the field of so-called “premium translation”) remains hard work for humans. This means that the purpose of at least some translation software continues to be making this work quicker and easier, so that translators, editors, and project managers don’t simply trudge through their work but thrive in their profession. Let’s face it: a lot depends on the actual tools they use. If this is the case, translation software is not all “language technology” (as in “natural language processing”) but also data management, text manipulation, user interfaces, quality assurance workflows, and the list goes on.

Thus, for the foreseeable future, there will be translation software built around human users of extraordinary knowledge. The task of such software is to make their work as efficient and enjoyable as possible. As we like to put it, they should not simply trudge through their work but thrive in it, partly thanks to the technology they are using.

From the perspective of a software development organization, there are three ways to make this happen:  

  • Invent new functionality 
  • Interview power users and develop new functionality from their input 
  • Go analytical: work from usage data, automate what can be automated, and introduce shortcuts 

No matter what you do, put a human face on it. As I mentioned earlier, software is a means of communication between developer and user. In this conversation, it is usually the developer who is the more active agent: they are the ones who push ideas and implementations. It is therefore the developer’s responsibility to make this communication bidirectional, to listen to users, and to provide help when help is needed. A structured method of accepting feedback and high-quality human customer support are not nice-to-haves, ‘overhead’, or optional add-ons. They are integral to the business.

Disruption for the sake of disruption? 

I am also wary of trying to come up with ‘disruptive’ ideas. First, there is no agreement on what ‘disruptive’ means. In my opinion, a new method that saves a lot of time for the human user is not necessarily a disruptive feature at all. For a development to be disruptive, it needs to fundamentally change the way we get something done. For example, an electric car does not disrupt transportation (although, in large enough numbers, it may disrupt the fossil fuel industry); teleportation would.

Disruptive developments are also unpredictable, even to the developers themselves. Whether a development becomes disruptive can also depend on the users, and human users adopt new technology relatively slowly. Inventing disruptive technology also takes a lot of attempts and a lot of failures. To illustrate, watch the ‘Nothing Works’ speech by Jack Conte (founder of Patreon), whose main message is that someone may look successful, but you don’t know how many failures they went through. The point is: don’t try to be disruptive for the sake of disruption. It may not be the best way forward for your users.

Ethical technology 

So where does the responsibility of tech companies lie concerning ethical software development? What can you do as a company to put the human factor into your development process? Here are a few questions, with some examples (mostly from the world of AI), that you can ask yourself.

First and foremost, does your technology serve an honorable purpose? Even if you believe it does, who benefits from it (cui prodest)?

Is your technology what you say it is? 

The very term ‘AI’ is a fallacy because it suggests to an outsider that they’re dealing with an entity that has a mind and a will of its own, when it has neither. As Kate Crawford says in ‘Atlas of AI’, it is neither intelligent nor artificial. AI is not intelligent because it copies previous human behavior (and it needs that previous human behavior to copy), and it is not entirely artificial because a single AI model may require years of human work to collect and prepare data.

Does your technology implement hidden agendas? 

Are there hidden costs for the user? Are there hidden gains for the developer or the operator? If the technology implements collaboration, does it create or facilitate an unfair advantage for the more powerful stakeholder(s)?

Does your technology collect or use data in illicit or disingenuous ways? 

In your privacy policy and your data processing agreement, do you disclose every way you collect and use data? If you use automatic anonymization, do you tell the truth about it (namely, that it isn’t 100% precise)? Do you mix data from different customers or users so that you have enough to train and retrain your AI?

Is your technology climate-conscious—or do you throw deep-learning AI at every less-than-obvious problem? 

For example, it may be possible to create bilingually sensitive predictive typing using simple locally trained statistics, or you can create the same feature by training a neural network. The problem with the latter is that, “converting this energy consumption in approximate carbon emissions and electricity costs, the authors estimated that the carbon footprint of training a single big language model is equal to around 300,000 kg of carbon dioxide emissions. This is of the order of 125 round-trip flights between New York and Beijing” (Payal Dhar, 2020).
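To make the first option concrete, here is a minimal sketch of what “simple locally trained statistics” could mean in practice: a word-level bigram frequency model trained on the target side of a user’s own translation memory, suggesting completions as the translator types. All names here are illustrative, not any product’s actual API, and a genuinely bilingual version would also condition suggestions on the source segment; this shows only the monolingual core.

    from collections import defaultdict

    class PredictiveTyping:
        """A word-level bigram model trained locally on the user's own data,
        e.g. the target side of their translation memory (illustrative only)."""

        def __init__(self):
            # previous word -> {candidate next word -> frequency}
            self.bigrams = defaultdict(lambda: defaultdict(int))

        def train(self, segments):
            # Count adjacent word pairs in the local corpus;
            # no data ever leaves the user's machine.
            for segment in segments:
                words = segment.lower().split()
                for prev, nxt in zip(words, words[1:]):
                    self.bigrams[prev][nxt] += 1

        def suggest(self, prev_word, typed_prefix="", k=3):
            # Return up to k most frequent continuations of prev_word
            # that match what the user has typed so far.
            candidates = self.bigrams[prev_word.lower()]
            matches = [(w, c) for w, c in candidates.items()
                       if w.startswith(typed_prefix.lower())]
            matches.sort(key=lambda pair: pair[1], reverse=True)
            return [w for w, _ in matches[:k]]

    # Hypothetical usage: train on a few target-language segments
    model = PredictiveTyping()
    model.train([
        "The agreement enters into force on the date of signature",
        "The agreement shall remain in force for five years",
    ])
    print(model.suggest("into", "f"))   # ['force']

A model like this trains in milliseconds on an ordinary laptop and costs practically nothing in energy, which is exactly the contrast the quotation above draws with training a large neural network.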

Finally, there is human dignity: it also matters what kind of employer the developer is.

  

Sources: 

  • Kate Crawford: Atlas of AI 
  • Artificial Intelligence Has an Enormous Carbon Footprint 
  • Payal Dhar: The carbon impact of artificial intelligence (2020) 
  • The GDPR itself 
  • My own blog posts from the past 


Balázs Kis

Co-Founder & Co-CEO at memoQ