Learning to Compute Graph Similarity Using LLM-Generated Code
Speaker: Sayan Ranu (Indian Institute of Technology Delhi)
Topic: Learning to Compute Graph Similarity Using LLM-Generated Code
Venue: DataFest Yerevan 2025, https://datafest.am/

Abstract: Data from a wide variety of domains are modeled as graphs. Examples include molecules, protein structures, and function-call graphs. Graph Edit Distance (GED) is a widely used metric for measuring the similarity between two graphs. Computing the optimal GED is NP-hard, which has led to the development of various neural and non-neural heuristics. While neural methods achieve better approximation quality than non-neural approaches, they face significant challenges: (1) they require large amounts of ground-truth data, which is itself NP-hard to compute; (2) they operate as black boxes, offering limited interpretability; and (3) they lack cross-domain generalization, necessitating expensive retraining for each new dataset. In this talk, we present GRAIL, which introduces a paradigm shift in this domain. Instead of training a neural model to predict GED, GRAIL employs a novel combination of large language models (LLMs) and automated prompt tuning to generate a program that computes GED. This shift from predicting GED to generating programs yields several advantages, including end-to-end interpretability and an autonomous, self-evolutionary learning mechanism that requires no ground-truth supervision. Extensive experiments on seven datasets confirm that GRAIL not only surpasses state-of-the-art GED approximation methods in prediction quality but also achieves robust cross-domain generalization across diverse graph distributions.
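To make the metric concrete: GED counts the minimum number of edit operations (node/edge insertions, deletions, relabelings) needed to turn one graph into another. The toy sketch below, which is illustrative only and not GRAIL's method, brute-forces GED over node permutations for two small unlabeled graphs with the same node count, considering edge edits only; the factorial search is exactly why exact GED is intractable at scale.

```python
from itertools import permutations

def toy_ged(edges1, edges2, n):
    """Brute-force edge-edit GED for two unlabeled graphs on n nodes.

    Tries every node mapping (permutation) of graph 1 onto graph 2 and
    counts the edge insertions/deletions needed under that mapping; the
    minimum over all mappings is the edit distance. O(n!) -- toy only.
    """
    e2 = {frozenset(e) for e in edges2}
    best = None
    for perm in permutations(range(n)):
        # Relabel graph 1's edges under this candidate node mapping.
        mapped = {frozenset((perm[u], perm[v])) for u, v in edges1}
        # Symmetric difference = edges to delete + edges to insert.
        cost = len(mapped ^ e2)
        if best is None or cost < best:
            best = cost
    return best

# A triangle differs from a 3-node path by one edge deletion.
triangle = [(0, 1), (1, 2), (0, 2)]
path = [(0, 1), (1, 2)]
print(toy_ged(triangle, path, 3))  # -> 1
```

The n! blow-up in the permutation loop is what motivates the heuristics the talk surveys: neural approximators and, in GRAIL's case, LLM-generated programs that approximate this quantity without exhaustive search.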