Zichong Wang
Florida International University
Lecture Information
CASE 349
October 2, 2024, 1:00 PM
Abstract
Graph Neural Networks (GNNs) have excelled in diverse applications due
to their outstanding performance, demonstrating remarkable capabilities
in tasks ranging from node classification to graph generation. However,
despite their success, GNNs could inherit and exacerbate existing
societal biases in their decision-making processes, posing a significant
concern given their widespread deployment in critical systems.
Consequently, there has been a surge in efforts to tackle fairness
issues and ensure GNNs promote equitable outcomes. However, most of these
efforts rely on statistical fairness notions, which assume that biases
arise solely from sensitive attributes, neglecting the labeling bias
pervasive in real-world scenarios. In addition, existing approaches
usually focus on a single notion of fairness, ignoring the interactions
between different fairness goals. Moreover, fair classification tasks
have been the primary focus of research, while biases in the
increasingly prevalent graph generative models remain largely
unaddressed. To this end, this project aims to develop a new and
versatile fair graph learning framework that can: 1) accurately identify
the various biases present in the graph, 2) simultaneously address
multiple biases in both graph classification and graph generation tasks,
3) improve fairness by decomposing sensitive information in node
representations, while retaining task-related information, 4) generate
diverse underrepresented samples and establish fair link connections to
ensure consistent representation across various groups. The research
approach comprises three primary objectives: i) identify real
counterfactual instances directly from the dataset to guide the bias
mitigation process, ii) achieve individual and group fairness
simultaneously, and iii) develop a first-of-its-kind fair graph
generation methodology.
Biography
Zichong Wang is a third-year Ph.D. student at the Knight Foundation
School of Computing and Information Sciences, Florida International
University, under the supervision of Dr. Wenbin Zhang. His research
focuses on mitigating inadvertent disparities caused by the interaction
of algorithms, data, and human decisions in policy development. His work
has garnered significant recognition, including the Best Paper Award at
FAccT 2023 and a nomination for the Best Paper Award at ICDM 2023, with
over 10 publications in top-tier venues. He also serves as the
Web Chair for WSDM 2024 and actively contributes as a Program Committee
member and reviewer for conferences and journals such as AAAI, IJCAI, and
Machine Learning.
More information can be found at: https://lavinwong.github.io/