ArtGuide is a web and mobile application that processes images with artistic and architectural content (e.g. a building facade) and reads information about the elements in the picture aloud, via a text-to-speech feature.
Let me get a little technical...
ArtGuide has two main disruptive features:
  • it performs image recognition and provides information about an art piece even when it is not recognized by Google's image API. In that case, the software identifies elements inside the picture and offers information about them. For example, if the user takes a picture of a Gothic building that is not famous, the application provides information about Gothic architecture in general.
  • the information provided to the user is tailored to both their age and their expertise in the art field. For example, if a child, wandering in the Louvre Museum, sends a picture of the Mona Lisa, ArtGuide will tell the kid that the famous French Emperor Napoleon kept the Mona Lisa hanging in his bedroom in the Tuileries Palace for about four years, and that he had a crush on her. Conversely, for the same painting, if the user is an art expert, ArtGuide will say that Leonardo's Mona Lisa is an oil on poplar panel measuring 77 x 53 cm, and that the painting was resized by sawing off its sides to eliminate two painted columns that closed off the background landscape.
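The adaptation idea can be sketched in a few lines. This is a simplified, hypothetical illustration (the real pipeline uses spaCy's readability tooling, not this hand-rolled heuristic): score candidate sentences with an approximate Flesch Reading Ease formula and pick the easiest one that still meets the reading level demanded by the user's profile.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Approximate Flesch Reading Ease (higher = easier to read)."""
    words = re.findall(r"[A-Za-z]+", text)
    n_sentences = max(1, len(re.findall(r"[.!?]+", text)))
    n_words = max(1, len(words))
    # Crude syllable heuristic: count groups of consecutive vowels.
    n_syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return (206.835
            - 1.015 * (n_words / n_sentences)
            - 84.6 * (n_syllables / n_words))

def pick_sentence(candidates: list[str], min_ease: float) -> str:
    """Return the easiest candidate meeting the profile's threshold,
    falling back to the easiest sentence overall."""
    ranked = sorted(candidates, key=flesch_reading_ease, reverse=True)
    for sentence in ranked:
        if flesch_reading_ease(sentence) >= min_ease:
            return sentence
    return ranked[0]

child_fact = "Napoleon kept the Mona Lisa in his bedroom for four years."
expert_fact = ("Leonardo's Mona Lisa is an oil on poplar panel measuring "
               "77 x 53 cm, later trimmed at the sides.")

# A child profile demands a high reading-ease score; the simpler
# Napoleon sentence wins over the technical description.
print(pick_sentence([child_fact, expert_fact], min_ease=70))
```

The threshold value (70 here) is a placeholder: in practice each user profile would map age and expertise to its own minimum readability score.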
I worked on the second feature (text adaptation) within a group of 5 people. The source code was written in Python, using: (1) spaCy for sentence extraction and readability measurement, (2) fastText embeddings for capturing the meaning of sentences in a multilingual environment, (3) Hugging Face pre-trained models for sentence embeddings (distil-RoBERTa, distil-BERT-multilingual), and (4) the NeuralCoref library for coreference resolution.
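Once sentences are embedded, selecting relevant content reduces to similarity ranking. The toy vectors below are stand-ins for real fastText or distil-RoBERTa embeddings (which are hundreds of dimensions); the sketch only shows the cosine-similarity ranking step, not the actual models:

```python
from math import sqrt

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_sentences(query_vec: list[float],
                   sentence_vecs: list[list[float]]) -> list[int]:
    """Return candidate indices ordered from most to least similar."""
    sims = [cosine(query_vec, v) for v in sentence_vecs]
    return sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)

# Toy 3-d vectors standing in for real multilingual sentence embeddings.
query = [1.0, 0.2, 0.0]
candidates = [
    [0.9, 0.1, 0.1],   # nearly parallel to the query
    [0.0, 1.0, 0.0],   # mostly unrelated
    [-1.0, 0.0, 0.2],  # pointing the opposite way
]
print(rank_sentences(query, candidates))  # → [0, 1, 2]
```

Because fastText and the multilingual distil-BERT model place semantically similar sentences from different languages close together in the same space, this ranking works even when the query and the candidate sentences are not in the same language.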