Advances in tasks like image captioning have mostly leveraged the visual-linguistic grounding of image-text pairs. This includes either generating constrained textual descriptions of the objects and entities present within an image, or generating detailed, paragraph-length descriptions of those entities. But there is still a long way to go toward generating text that is not only semantically richer but also grounded in real-world knowledge. This is a brief description of exploring image2tweet generation through the lens of existing image-captioning approaches.
PhD Student @ IIITD | Researcher @ Wipro AI Research