The Google Translate smartphone application allows users to translate text into a desired language by framing it with the camera. The text is translated in real time and the user, looking at the phone's display, sees a modified reality in which the words are replaced by their translation.

           This feature lets the application translate long texts quickly and intuitively, because the user never has to type a word on the smartphone. Furthermore, it lets people translate languages written in alphabets they cannot type on the smartphone keyboard.

          In the future this application might run on smart glasses or contact lenses. In this way people would see any writing translated into their mother tongue. This use of augmented reality would let everyone read anything without worrying about the language. It could be especially useful for people traveling around the world, who would not even notice that the language of the texts around them has changed.

          Furthermore, since it uses the camera, the application could become smart enough to understand the context of the text and produce a proper translation. This could be achieved by looking at the objects surrounding the writing and at the previous and following sentences.

          To make all of this possible, together with an improvement in AR technologies, we need fast responses from the application and a very low error rate. If translations are wrong, unpleasant misunderstandings can happen. For this reason, in my opinion, the application should tell the user how confident it is in each translation, for example by underlining words that might be wrong. Users themselves should be able to send feedback to the app, notifying the system when a translation is wrong and should be improved.
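
          A minimal sketch of this confidence-flagging idea, in Python, could look like the following. The TranslatedWord structure, the confidence values, and the 0.7 threshold are all assumptions made for illustration; they are not part of the real Google Translate application or API.

from dataclasses import dataclass

@dataclass
class TranslatedWord:
    text: str          # translated word to render in the AR overlay
    confidence: float   # assumed model confidence in [0, 1]

LOW_CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff for "might be wrong"

def words_to_underline(words: list[TranslatedWord]) -> list[str]:
    """Return the translated words that should be underlined in the overlay."""
    return [w.text for w in words if w.confidence < LOW_CONFIDENCE_THRESHOLD]

# Example: one word recognized with high confidence, one with low confidence.
sentence = [TranslatedWord("station", 0.95), TranslatedWord("track", 0.55)]
print(words_to_underline(sentence))  # ['track']

          Words returned by such a check would be underlined in the overlay, and tapping one of them could open the feedback form mentioned above.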

          The user must have full control over the behavior of the application and must be free to choose, for example, whether a given language should be translated or not. The user should also be able to turn off translation whenever they want. For example, while looking at famous paintings or monuments that may contain writing, overlaying the translation on reality can prevent people from enjoying the artworks.
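
          A small sketch of how these per-language controls might be modeled is shown below; the field names, the language codes, and the defaults are illustrative assumptions, not the app's real settings.

from dataclasses import dataclass, field

@dataclass
class OverlaySettings:
    overlay_enabled: bool = True                           # global on/off switch
    disabled_languages: set = field(default_factory=set)   # languages the user opts out of

    def should_translate(self, detected_language: str) -> bool:
        """Translate only if the overlay is on and the language is not excluded."""
        return self.overlay_enabled and detected_language not in self.disabled_languages

settings = OverlaySettings()
settings.disabled_languages.add("la")   # e.g. leave Latin inscriptions on monuments untouched
print(settings.should_translate("fr"))  # True
print(settings.should_translate("la"))  # False
settings.overlay_enabled = False        # turn the AR overlay off entirely
print(settings.should_translate("fr"))  # False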

          As we can see from the screenshot of the application, current translations are still too rough and need to be improved. The writing I used to test the translator is a bit tricky to process because it comes from a comic and is handwritten. For this reason some words were not recognized correctly by the app. However, the recognized words are translated well, even if sometimes I had to wait before the sentence was rendered correctly.

          On the other hand, the augmented reality itself works well, and features like the rendering of the font and the choice of its size are good. The synthesized text fits the space taken by the original words very well.

          Results are worse when the text to be translated is long and dense, as in a book. Under these conditions the application does not manage to produce a satisfying translation: the output usually keeps changing very quickly and is very difficult to read. The application is also unusable while moving, because by the time the device has translated the sentences the camera has already moved. This must absolutely be improved if we want to bring this technology to glasses and lenses.