Yang Liu, an assistant professor of computer science and engineering at the University of California, Santa Cruz, has won a National Science Foundation (NSF) Early Career Development Award (CAREER) to fund his study on human-centered machine learning.
Machine learning models, artificial intelligence algorithms that improve through data and experience, are applied in a variety of industries where they have serious impacts on people’s lives, such as screening loan applications in financial services or Medicare applications in healthcare.
Liu’s research project will address issues of robustness, fairness, and dynamics that arise in this field from a data-centric perspective. His team will study how algorithms can become biased by replicating existing biases in the datasets used to train the models. More importantly, they will build models to understand and predict how humans behave when interacting with machine learning algorithms, and examine the data produced by that interaction.
“Part of the proposed research will focus on understanding the possibilities for identifying and mitigating the natural bias and noise that exist in human data,” Liu said. “But looking a little deeper, it’s not just about machine learning and how it performs on the data you already have; it’s about the data it’s going to generate in the future. I think that’s the missing part in most ongoing discussions. I care about the data that’s going to be generated after deploying a machine learning model and data collection pipeline, so that’s one of the main issues of this proposal.”
Liu and his team will also use NSF funding to conduct human subject studies to understand how people respond to various machine learning models in a wide range of applications, from financial services to recommender systems, and possibly school admissions. They will use these experiments to build theoretical frameworks and computational solutions to ensure that machine learning models are designed and deployed to serve humans without bias.
“Machine learning will no longer be a one-time or static problem,” Liu said. “Model accuracy is important, but it will become more about long-term well-being. What are the behaviors, what are the dynamics that the model will induce in people? It’s something I’m really going to focus on.”
Additionally, Liu wants to ensure that machine learning models provide people with opportunities for improvement.
Some machine learning models give results without offering any explanation, and even when they do offer explanations, they often lack constructive suggestions for the user. For example, a constructive suggestion in the context of financial services might be offering a means by which a customer who is not currently eligible for a loan could improve their financial profile to be approved in the future. Liu hopes that building this level of transparency could increase human trust in machine learning technology, allowing it to be more widely adopted.
“This work underscores the importance of centering people’s real experiences when interacting with machine learning technology, as this technology can have profound effects on their opportunities and well-being,” said Alexander Wolf, dean of the Baskin School of Engineering. “Liu’s ethics-centered approach aligns with our school’s mission to ensure that what we create has a positive impact on our society.”