At Arthur, research is core to who we are. Our team of researchers is always at the forefront of what’s happening in ML, and their research guides our approach to product development. From the lab to the boardroom, we partner with global data scientists, ML directors, and AI Center of Excellence leadership to launch real-world solutions worldwide.
On that note, we’re excited to introduce our two ML Research Fellows for 2023! We talked with them to learn some more about their research interests and to get some insight into what they’ll be working on throughout the next few months.
Namrata Mukhija
We’re really excited to be working with you at Arthur! Please tell us a little bit about your background and research interests.
I’m a recent Master’s in Computer Science graduate from New York University. I’ve worked primarily in Natural Language Processing for the last five years and have published my work at conferences such as ACL, ACM COMPASS, and a NeurIPS workshop. My work has spanned developing thought frameworks for technologists building language technology for social good, qualitative methods for including low-resource communities in the development of language technologies, and quantitative methods for studying the efficacy of synthetic data in low-resource scenarios. My current research interest is in developing and evaluating machine learning methods for social good. Before doing my Master’s, I was a Software Engineer at Microsoft for 2.5 years.
What interested you most about working at Arthur?
I’ve been following Arthur and the amazing work they’ve been doing ever since I started my Master’s, so the opportunity to work here is very exciting. There are so many brilliant people at Arthur that I look forward to working with. I also love the focus on fairness, explainability, and generalization, and it’s the coolest NLP startup to work for, in my opinion!
What are some areas of research you’re interested in pursuing this summer?
With the advent of large language models and their deployment in real-world settings, unfair predictions impact the lives of real people across the globe. I’m very keen to investigate what fairness looks like for these models and to develop methods to make large language models more equitable. This summer, I’m also interested in conducting user studies to assess the impact of language technologies on different population groups.
Angelina Wang
We’re really excited to be working with you at Arthur! Please tell us a little bit about your background and research interests.
I’m a PhD student in computer science at Princeton University researching machine learning fairness and algorithmic bias. A topic I have been working on recently is better understanding how we can measure the multi-faceted construct of fairness in a way that is well-grounded in our normative concerns.
What interested you most about working at Arthur?
I am really excited about the focus on measurement, and specifically measurement of fairness. Measurement is such a critical aspect of machine learning, and it’s great to be in a space where that is the primary focus. I’m looking forward to learning more about how different groups think about and prioritize various kinds of measurement.
What are some areas of research you’re interested in pursuing this summer?
In my work this summer, I’m interested in gaining a better understanding of how companies are currently thinking about fairness, in order to shape future research to be more practical and applicable. I am also interested in better understanding the limitations of LLMs in different kinds of applications as they become increasingly adopted across many domains.