Towards Democratizing AI: Scaling and Learning (Fair) Graph Representations in an Implementation Agnostic Fashion


Recently there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to large graphs with millions of nodes, due to both computational complexity and memory requirements. In this talk, I will present an approach to redress this limitation by introducing the MultI-Level Embedding (MILE) framework – a generic methodology that allows contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique that maintains the backbone structure of the graph. It then applies an existing embedding method to the coarsest graph and refines the embeddings back to the original graph through a graph convolutional neural network that it learns. Time permitting, I will then describe one of several natural extensions to MILE: a distributed variant (DistMILE) that further improves the scalability of graph embedding, and mechanisms for learning fair graph representations (FairMILE).
The proposed MILE framework and its variants (DistMILE, FairMILE) are agnostic to the underlying graph embedding technique: they can be applied to many existing graph embedding methods without modifying them, regardless of implementation language. Experimental results on five large-scale datasets demonstrate that MILE significantly boosts the speed of graph embedding (by an order of magnitude) while generating embeddings of better quality for the task of node classification. MILE comfortably scales to a graph with 9 million nodes and 40 million edges, on which existing methods run out of memory or take too long to compute on a modern workstation. Our experiments demonstrate that DistMILE learns representations of similar quality to other baselines while reducing the time to learn embeddings even further (up to a 40× speedup over MILE). FairMILE similarly learns fair representations of the data while reducing the time to learn embeddings.
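The coarsen–embed–refine loop at the heart of MILE can be sketched roughly as follows. This is a toy illustration, not the authors' implementation: the greedy heavy-edge matching in `coarsen` stands in for MILE's hybrid matching, the random base embedding stands in for an arbitrary plugged-in embedding method, and the single neighborhood-averaging step stands in for the learned graph convolutional refinement.

```python
import numpy as np

def coarsen(adj):
    """One level of coarsening via greedy heavy-edge matching.

    Returns the coarser adjacency matrix and the projection matrix P
    mapping fine nodes to the coarse nodes they collapse into.
    """
    n = adj.shape[0]
    matched = [False] * n
    mapping = [0] * n
    coarse_id = 0
    for u in range(n):
        if matched[u]:
            continue
        # Collapse u with its heaviest still-unmatched neighbor, if any.
        best, best_w = -1, 0.0
        for v in range(n):
            if v != u and not matched[v] and adj[u, v] > best_w:
                best, best_w = v, adj[u, v]
        matched[u] = True
        mapping[u] = coarse_id
        if best >= 0:
            matched[best] = True
            mapping[best] = coarse_id
        coarse_id += 1
    P = np.zeros((n, coarse_id))
    for u in range(n):
        P[u, mapping[u]] = 1.0
    coarse_adj = P.T @ adj @ P
    np.fill_diagonal(coarse_adj, 0.0)  # drop self-loops created by merging
    return coarse_adj, P

def mile_sketch(adj, levels=2, dim=8, seed=0):
    """Coarsen `levels` times, embed the coarsest graph, refine back up."""
    rng = np.random.default_rng(seed)
    hierarchy = []
    cur = adj
    for _ in range(levels):
        coarse, P = coarsen(cur)
        hierarchy.append((cur, P))
        cur = coarse
    # Stand-in for running any base embedding method on the coarsest graph.
    emb = rng.standard_normal((cur.shape[0], dim))
    for fine_adj, P in reversed(hierarchy):
        emb = P @ emb  # copy each coarse embedding onto its fine nodes
        deg = fine_adj.sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0
        # One neighborhood-smoothing step, standing in for the learned GCN.
        emb = 0.5 * emb + 0.5 * (fine_adj @ emb) / deg
    return emb
```

Each coarsening level roughly halves the number of nodes, so the (expensive) base embedding method only ever sees a small graph, while the cheap projection-plus-smoothing pass recovers embeddings for every original node.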
Joint work with Jiongqian Liang (Google Brain), S. Gurukar (OSU), and Yuntian He (OSU).

Srinivasan Parthasarathy

Professor of Computer Science and Engineering, The Ohio State University
https://web.cse.ohio-state.edu/~parthasarathy.2/

Mining, Learning and Semantics for Personalized Health


In this talk I’ll present an overview of the challenges and opportunities for applying data mining and machine learning for tasks in personalized health, including the role of semantics. In particular, I’ll focus on the task of healthy recipe recommendation via the use of knowledge graphs, as well as generating summaries from personal health data, highlighting our work within the RPI-IBM Health Empowerment by Analytics, Learning, and Semantics (HEALS) project.

Mohammed J. Zaki is a Professor and Department Head of Computer Science at RPI. He received his Ph.D. degree in computer science from the University of Rochester in 1998. His research interests focus on novel data mining and machine learning techniques, particularly for learning from graph-structured and textual data, with applications in bioinformatics, personal health, and financial analytics. He has around 300 publications (and 6 patents), including the Data Mining and Machine Learning textbook (2nd Edition, Cambridge University Press, 2020). He founded the BIOKDD Workshop and recently served as PC chair for CIKM'22. He currently serves on the Board of Directors for ACM SIGKDD. He was a recipient of the NSF and DOE Career Awards. He is a Fellow of the IEEE, a Fellow of the ACM, and a Fellow of the AAAS.

http://www.cs.rpi.edu/~zaki/