Building on advances in Google's Cloud Vision API OCR, which today supports more than 220 languages, the Visual Global Knowledge Graph will no longer submit candidate language hints alongside each image it sends to the Cloud Vision API. When Cloud Vision first debuted, the accuracy of multilingual OCR could be improved by supplying a list of the languages likely to appear in each image. The Visual Global Knowledge Graph constructed this list by combining the language of the article with the primary languages spoken at the main locations mentioned in the article, ensuring that, for example, a French-language news article showing an image from Egypt with Arabic signage and a French text overlay would be correctly transcribed. As Cloud Vision has advanced, it no longer requires these hints for maximal accuracy and in fact achieves better accuracy when left to perform automatic language detection.

Accordingly, the standalone "LangHints" field in the CSV files and the "LangHints" field in the "ImageProperties" block of the JSON record will transition to reporting the CLD2-computed human language name of the primary language of the article containing the image. This can yield apparent contradictions, such as the earlier example of an image with Arabic text appearing in a French-language news outlet, but it will provide greater OCR accuracy.
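For those calling Cloud Vision directly, the change amounts to dropping the language hints from the ImageContext of each OCR request. The sketch below is illustrative only, assuming the google-cloud-vision Python client and a hypothetical image URI; the French/Arabic hint pair simply mirrors the Egypt example above and is not how the production hint lists were built:

from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Hypothetical image URI, for illustration only.
image = vision.Image(source=vision.ImageSource(image_uri="gs://example-bucket/photo.jpg"))

# Old approach: pass candidate languages (the article's language plus the
# primary languages of the locations it mentions) as OCR hints.
hinted = client.text_detection(
    image=image,
    image_context={"language_hints": ["fr", "ar"]},
)

# New approach: omit the hints entirely and let Cloud Vision perform
# automatic language detection across its 220+ supported languages.
unhinted = client.text_detection(image=image)

print(unhinted.full_text_annotation.text)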