We propose TD-GEN, a graph generation framework based on tree decomposition, and derive a reduced upper bound on the maximum number of decisions needed to generate a graph. The framework includes a permutation-invariant tree generation model, which forms the backbone of graph generation. Tree nodes are supernodes, each representing a cluster of nodes in the graph. Graph nodes and edges are generated incrementally inside the clusters by traversing the tree's supernodes, respecting the structure of the tree decomposition and following node-sharing decisions between clusters. Finally, we discuss the shortcomings of standard evaluation criteria that rely on statistical properties of the generated graphs as performance measures, and instead propose comparing models by likelihood. Empirical results on a variety of standard graph generation datasets demonstrate the superior performance of our method.
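To make the tree-decomposition idea concrete, here is a minimal sketch of computing a tree decomposition of a small graph with networkx's min-degree heuristic. This is not the TD-GEN model itself, only an illustration of the structure the framework builds on: the decomposition's nodes are "bags" (supernodes), each a cluster of graph nodes, and adjacent bags share nodes. The use of `networkx.algorithms.approximation.treewidth_min_degree` is an assumption for illustration; the paper's own decomposition procedure may differ.

```python
# Illustration only: tree decomposition via networkx's min-degree heuristic,
# NOT the authors' TD-GEN model. Requires the third-party networkx package.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# A small example graph: a 5-cycle with one chord.
G = nx.cycle_graph(5)
G.add_edge(0, 2)

# Returns an upper bound on the treewidth and the decomposition tree,
# whose nodes are frozensets of graph nodes ("bags" / supernodes).
width, decomp = treewidth_min_degree(G)

for bag in decomp.nodes:
    print(sorted(bag))

# The defining properties of a tree decomposition:
assert nx.is_tree(decomp)                           # the decomposition is a tree
assert set().union(*decomp.nodes) == set(G)         # bags cover every graph node
assert all(any({u, v} <= bag for bag in decomp.nodes) for u, v in G.edges)
```

In TD-GEN's framing, graph generation then amounts to generating this tree of clusters first and filling in nodes and edges cluster by cluster, which is where the reduced decision bound comes from.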
BibTeX
@misc{shirzad2021tdgen,
  doi = {10.48550/ARXIV.2106.10656},
  url = {https://arxiv.org/abs/2106.10656},
  author = {Shirzad, Hamed and Hajimirsadeghi, Hossein and Abdi, Amir H. and Mori, Greg},
  keywords = {Machine Learning (cs.LG), Social and Information Networks (cs.SI), Machine Learning (stat.ML), FOS: Computer and information sciences},
  title = {TD-GEN: Graph Generation With Tree Decomposition},
  publisher = {arXiv},
  year = {2021},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
-
A Survey and Critique of Multiagent Deep Reinforcement Learning
P. Hernandez-Leal, B. Kartal, and M. E. Taylor. International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2019
Related Research
-
EBBS: An Ensemble with Bi-Level Beam Search for Zero-Shot Machine Translation
Y. Wen, B. Shayegh, C. Huang, Y. Cao, and L. Mou. Workshop at International Conference on Machine Learning (ICML)
Publications
-
Pre-training multi-billion parameter LLMs on a single GPU with Flora
Y. Hao, Y. Cao, and L. Mou.
Research
-
Forget Sharpness: Perturbed Forgetting of Model Biases Within SAM Dynamics
A. Vani, F. Tung, G. Oliveira, and H. Sharifi. International Conference on Machine Learning (ICML)
Publications