Recognizing Metaphors Computationally

Metaphors render intricate issues in comprehensible language. They take simple, imagistic formulations from a concrete topic (the source domain) and project them onto the topic to be described (the target domain): in “life is a journey,” for example, the journey is the source domain that lends its imagery to the target domain of life. “Metaphors can be more easily processed by computers by distinguishing source and target domains,” says Meghdut Sengupta, a research fellow with Subproject C04 at TRR 318. Together with project lead Professor Dr. Henning Wachsmuth and Milad Alshomary from the “Natural Language Processing” work group at Leibniz University Hannover, Sengupta investigated to what extent the source domains of metaphors in a text can be predicted computationally.

The computational linguists propose a contrastive learning approach: the machine learns what makes a given sentence semantically similar to its correct source domain and dissimilar to the other source domains, with the goal of ranking all known source domains by how likely each one underlies the metaphor in the sentence. “This approach has proven to be effective at detecting even rare source domains,” says project lead Professor Dr. Wachsmuth. “With this research, we are working towards achieving greater computational understanding of metaphorical language.”
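The article does not include the implementation, but the underlying idea can be sketched in a few lines. The toy PyTorch sketch below is illustrative throughout: the four candidate source domains, the example sentences, the MeanPoolEncoder, and the InfoNCE-style loss are stand-ins rather than the authors’ actual model, which would plausibly build on a pretrained language model. The sketch pulls each sentence’s embedding toward its true source domain, pushes it away from the others, and then ranks all known domains for a new sentence by cosine similarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical inventory of source domains and toy metaphorical sentences;
# the real labels and training data come from an annotated metaphor corpus.
DOMAINS = ["war", "journey", "building", "plants"]
SENTENCES = [
    ("She attacked every weak point in my argument.", "war"),
    ("Our relationship has reached a dead end.", "journey"),
    ("The theory rests on shaky foundations.", "building"),
    ("His ideas finally bore fruit.", "plants"),
]

def tokenize(text):
    return text.lower().replace(".", "").replace(",", "").split()

# Toy vocabulary shared by the sentences and the domain labels.
vocab = {w: i for i, w in enumerate(
    sorted({w for s, _ in SENTENCES for w in tokenize(s)} | set(DOMAINS)))}

def token_ids(text):
    return torch.tensor([vocab[w] for w in tokenize(text) if w in vocab])

class MeanPoolEncoder(nn.Module):
    """Embeds a token sequence and mean-pools it into a single vector."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, ids):
        return self.emb(ids).mean(dim=0)

encoder = MeanPoolEncoder(len(vocab))
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-2)

for epoch in range(200):
    optimizer.zero_grad()
    sent_vecs = torch.stack([encoder(token_ids(s)) for s, _ in SENTENCES])
    dom_vecs = torch.stack([encoder(token_ids(d)) for d in DOMAINS])
    # Contrastive (InfoNCE-style) objective: each sentence should be most
    # similar to its own source domain and dissimilar to all the others.
    sims = F.cosine_similarity(sent_vecs.unsqueeze(1), dom_vecs.unsqueeze(0), dim=-1)
    targets = torch.tensor([DOMAINS.index(d) for _, d in SENTENCES])
    loss = F.cross_entropy(sims / 0.1, targets)  # 0.1 acts as a temperature
    loss.backward()
    optimizer.step()

# Rank all known source domains for a new metaphorical sentence.
with torch.no_grad():
    query = encoder(token_ids("He attacked my argument"))
    dom_vecs = torch.stack([encoder(token_ids(d)) for d in DOMAINS])
    scores = F.cosine_similarity(query.unsqueeze(0), dom_vecs)
    print(sorted(zip(DOMAINS, scores.tolist()), key=lambda p: -p[1]))
```

Ranking domains by embedding similarity, rather than classifying with a fixed output layer, is plausibly what helps with rare source domains: any domain label that can be encoded can be ranked, even if it appeared only a handful of times during training.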

Sengupta, Alshomary and Wachsmuth published their findings in the article “Back to the Roots: Predicting the Source Domain of Metaphors using Contrastive Learning.” They first presented their research at “The Third Workshop on Figurative Language Processing” at the “Empirical Methods in Natural Language Processing” (EMNLP 2022) conference, which was held as a hybrid event from December 7th to 11th in Abu Dhabi. Further approaches to this problem are already in the works.


Meghdut Sengupta, researcher in project C04