Image - AAAI_Image1.jpg

A scene from one of the busy poster sessions at AAAI 2019 in Honolulu, Hawaii.

Last month, we had the chance to attend the 2019 Association for the Advancement of Artificial Intelligence (AAAI) conference in Honolulu. As one of the longest-standing conferences in the field, AAAI distinguishes itself from most leading ML events by focusing on far more than just deep learning. This breadth of topics encourages vigorous interaction between the field's more classical methods and its modern ones; in turn, knowledge of these classical ideas can inform and improve ML approaches. Below, we've compiled a few of our favorite moments and stand-out papers, some of which clearly show the interplay between these two worlds.

Automating Density Estimation 

AutoML proved to be one of the hot topics at this year's conference, and one of the more intriguing papers on the subject was Automatic Bayesian Density Analysis by Antonio Vergari (Max Planck Institute for Intelligent Systems); Alejandro Molina (TU Darmstadt); Robert Peharz (University of Cambridge); Zoubin Ghahramani (University of Cambridge); Kristian Kersting (TU Darmstadt); and Isabel Valera (MPI-IS). Inspired by advances in automatic model selection for supervised learning, the paper proposed an automated framework for the unsupervised problem of density estimation. A challenge here is that density estimation typically requires domain expertise about the field the data comes from. Instead of going down this limiting path, the authors suggested modeling the data with a sum-product network (SPN), an architecture initially proposed by Hoifung Poon and Pedro Domingos at UAI 2011.

Image - AAAI_Image2.jpg

The sum and product nodes of the network allowed the authors to capture each feature's behavior as a mixture of heterogeneous distributions. In other words, rather than relying solely on a mixture of Gaussians for a particular feature, the same mixture could contain both discrete and continuous distributions, with different parameterizations drawn from a preset dictionary. Once learned, the structure of the SPN allowed for efficient inference, using Gibbs sampling to infer missing feature values for some of the data points in the dataset. The model also reports how well individual data points fit the learned mixtures, highlighting which points are likely to be outliers.
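To make the heterogeneous-mixture idea concrete, here is a minimal sketch of a tiny, hand-specified SPN in Python. Note that this is purely illustrative: in the actual framework the structure, the leaf types, and their parameters are all learned from the data, and the small leaf dictionary below is our own invention.

```python
import numpy as np
from scipy import stats

def leaf_logp(x, dist):
    """Log-density of a leaf; handles continuous and discrete scipy leaves."""
    continuous = isinstance(dist.dist, stats.rv_continuous)
    return dist.logpdf(x) if continuous else dist.logpmf(x)

def sum_node_logp(x, weights, leaves):
    """Sum node: log-density of a weighted mixture of heterogeneous leaves."""
    terms = [np.log(w) + leaf_logp(x, d) for w, d in zip(weights, leaves)]
    return np.logaddexp.reduce(terms, axis=0)

# Feature 0 (continuous): a Gaussian mixed with a Gamma.
f0 = ([0.6, 0.4], [stats.norm(0.0, 1.0), stats.gamma(2.0, scale=1.5)])
# Feature 1 (count-valued): a Poisson mixed with a Geometric.
f1 = ([0.7, 0.3], [stats.poisson(4.0), stats.geom(0.3)])

def spn_logp(x0, x1):
    """Product node: combines the per-feature sum nodes."""
    return sum_node_logp(x0, *f0) + sum_node_logp(x1, *f1)

# Log-densities of two data points: (0.5, 2) and (3.2, 10).
print(spn_logp(np.array([0.5, 3.2]), np.array([2, 10])))
```

Points that receive a very low log-density under the learned mixture are exactly the ones the model flags as likely outliers.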

The idea of automating selection and analysis is an increasingly prominent theme at ML and AI conferences, and seeing it applied to the age-old statistical problem of density estimation is a prime example of new approaches being used to tackle long-standing classical problems.

Debate Panel on “The Future of AI”

One of the most engaging panels at the conference took the form of a debate on the “Future of AI.” Here, one team, consisting of Peter Stone and Jennifer Neville, argued for the proposition that “The AI community today should continue to focus mostly on ML methods,” while the opposing team, Michael Littman and Oren Etzioni, argued against it.

The panel was hilarious, entertaining, and informative. However, the audience rightly pointed out that all the debaters were, in fact, ML experts, which made the contrarian position something of an argument for the sake of debate and suggested that a pro-ML bias in the discussion was perhaps inevitable. One good argument raised during the panel was that too strong an ML focus, like the current one, will produce a generation of students who, due to industry demands, haven't learned classical AI and non-ML topics as well as previous graduates did. The challenge will arise when the trend shifts from ML to the next big thing, at which point we might end up with too few experts in non-ML areas of AI.

Some of the key arguments:

  • Peter argued that we should focus on ML both because we've built good momentum and because it remains the area where our knowledge is weakest. He stressed that the AI community focused on symbolic AI for decades before recognizing how hard the perception problem is, a problem where ML has been very successful. He noted that we will ultimately need broader approaches, but suggested focusing on ML for now, automating its remaining manual work (such as neural network architecture design) over the next decade, and then returning to symbolic AI.
  • Michael hit back with the point that the ML community solves problems that are well-suited to ML, not the problems past AI communities were trying to solve (such as crossword puzzles), and that it focuses on easy cognition problems; he drew on the distinction between System 1 (fast, unconscious decision making) and System 2 (slow, controlled) thinking.
  • Oren followed up with the point that ML is a great paper-publishing machine, but that gradient descent and matrix multiplication are not enough. He mentioned that we are in the midst of a paradigm shift in all sciences, but that we need to go beyond incrementally better ML architectures.
  • Michael cautioned that the bigger the hype, the bigger the crash and for that reason among many, we must keep non-ML areas alive.

Post-debate voting showed that the audience still sided against the proposition that the community should focus mostly on ML research.

Graphs and Neural Networks

One dominant theme throughout the conference was the development and use of neural networks to address problems involving graphs. Central to this theme was William Hamilton and Jian Tang's tutorial on Graph Representation Learning, in which the presenters outlined current advances in Graph Neural Networks (GNNs) and presented models capable of generating graphs. Example application domains for these approaches included operations research and biology, where the data doesn't lend itself well to traditional architectures. Convolutional networks, for instance, were designed mainly for vision applications with the structure of images in mind.

Of particular note was a paper presented in the Search, Constraint Satisfaction and Optimization session, entitled Learning to Solve NP-Complete Problems – A Graph Neural Network for Decision TSP by Marcelo Prates (Federal University of Rio Grande do Sul); Pedro H C Avelar (Federal University of Rio Grande do Sul); Henrique Lemos (Federal University of Rio Grande do Sul); Luís Lamb (Federal University of Rio Grande do Sul); Moshe Vardi (Rice University). In the paper, the authors proposed using a GNN to tackle the “traveling salesman” problem. They introduced this approach as an extension of Learning a SAT Solver from Single-Bit Supervision by Selsam et al., whose authors used a message-passing NN approach to solve SAT problems. 

Unlike SAT, however, the traveling salesman problem (TSP) belongs to a class of NP-complete problems with weighted relations between the vertices of its graph. To handle this, the authors mapped the weighted graph to a GNN by also allowing embeddings on the edge weights, rather than just on the vertices. The model was then trained as a classification problem: given a constructed graph, it was asked the question, "Does a Hamiltonian route of cost X or less exist in this graph?"
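The core mechanism can be sketched in a few lines of numpy. Everything here is a stand-in: the actual model builds edge embeddings from (weight, target cost) pairs with trained MLPs and updates embeddings with recurrent units, whereas this sketch uses random projections and a tanh simply to show how messages flow between vertex and edge embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # embedding dimension (illustrative)

# Toy weighted graph as a list of (u, v, weight) edges.
edges = [(0, 1, 0.3), (1, 2, 0.5), (2, 0, 0.7)]
n = 3
target_cost = 1.2  # the "cost X" in the decision question

# Random projections standing in for the paper's learned components.
W_init = rng.normal(size=(2, d))  # lifts (weight, target cost) to an embedding
W_msg = rng.normal(size=(d, d))

E = np.array([[w, target_cost] for _, _, w in edges]) @ W_init  # edge embeddings
V = np.zeros((n, d))                                            # vertex embeddings

for _ in range(8):  # message-passing rounds
    # Vertices aggregate messages from their incident edges...
    V_new = np.zeros_like(V)
    for k, (u, v, _) in enumerate(edges):
        V_new[u] += E[k] @ W_msg
        V_new[v] += E[k] @ W_msg
    # ...and edges aggregate messages from their two endpoints.
    E_new = np.stack([(V[u] + V[v]) @ W_msg for u, v, _ in edges])
    # Residual-style updates; the paper retains state with recurrent units.
    V, E = np.tanh(V + V_new), np.tanh(E + E_new)

# The paper reads the YES/NO answer off the final edge embeddings via a
# trained voting network; here we only verify the shapes survive.
print(V.shape, E.shape)
```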

For each constructed graph, two training examples were fed to the model: one with target cost (1-dev)X* and the other with target cost (1+dev)X*, where X* is the known minimal cost and dev is the user-chosen deviation from this optimum. The correct labels for these two instances are "NO" and "YES", respectively. Beyond extending GNN usage past binary SAT problems, this approach is of particular importance because the TSP, like the problems on Karp's classic list of NP-complete problems, frequently reappears, via reductions, in everyday applied ML problems. Using GNNs – a modern approach – to solve NP-complete problems that arise from the classical computer science literature is another example of how old met new at AAAI.
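Constructing the paired training instances is simple enough to sketch directly (the function name and default deviation below are ours, not the paper's):

```python
def make_decision_pair(graph, optimal_cost, dev=0.02):
    """Given a weighted graph and its known optimal tour cost X*, build the
    two decision-TSP training instances: the same graph paired with a
    target cost just below and just above X*.
    """
    no_instance = (graph, (1 - dev) * optimal_cost, "NO")    # no tour this cheap
    yes_instance = (graph, (1 + dev) * optimal_cost, "YES")  # the optimum qualifies
    return no_instance, yes_instance

# E.g., with X* = 10.0 and dev = 0.02 the model is asked the question at
# target costs 9.8 (correct answer NO) and 10.2 (correct answer YES).
neg, pos = make_decision_pair(graph="toy-graph", optimal_cost=10.0)
```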

Summary

Overall, AAAI 2019 showed that while we are in the midst of a research explosion in DL and RL, classical topics (along with the faculty who have worked on them for decades!) are still alive and well, reincarnating themselves through these new channels. A welcome sight, and a sign of what's to come.