We might be biased, but AI is the most exciting societal development in decades. So, why not treat it with the energy and enthusiasm it deserves?
AI Fest will have tons of panels and sessions where folks can learn about its implications, applications, and innovations—but at its core, AI Fest is a celebration of AI.
On Thursday, September 26th, we’ll share knowledge, debate some of AI’s most controversial topics, host hands-on workshops, and have a blast along the way.
After all, who said conferences can’t be fun?
Speakers
Dr. Avijit Ghosh
ML & Society Team
Renée Cummings
Priyanka Oberoi
Vik Scoggins
Bitun Banerjee
Adam Zhao
Michael Brent
Alyssa Lefaivre Škopac
Angelina Wang
Abhinav Raghunathan
Lily Xu
Gabe Weisz
Meinolf Sellmann
Anna Bethke
Daniel Chesley
Tiffany Luck
Bear Douglas
Nicholas Mattei
Victoria Vassileva
Donny Greenberg
Leah Morris
Var Shankar
Maria João Sousa
Matt Lynley
Raz Besaleli
Shruthi Velidi
Tucker Fross
Teresa Datta
Ian Eisenberg
Ben Schmidt
Marcus Sawyerr
Nakshathra Suresh
George Davis
Joyce Chen
Bryan Subijano
Gurpreet Kaur Khalsa
Vedant Nanda
Adam Wenchel
Lily Li
Pranjal Bajaj
Amit Singh
Alejandro Fernandez
Aaron Ogunro
Dylan Itzikowitz
Ben Feuer
Jayeeta Putatunda
Charlie Flanagan
John Dickerson
Tim Rich
Seth Levine
Find a Session
This talk will explore the psychological, social, ethical, and safety risks of integrating emerging technologies, including AI, into daily life. Nakshathra, a cyber criminologist and co-founder of eiris, will highlight the growing cyber safety challenges posed by innovators who overlook end-user safety. She will discuss non-technical risks such as harm, bias, and safety, advocate for human-centered design, and present case studies on the successes and failures of digital safety by design. The talk aims to inspire companies, particularly in tech and startups, to prioritize cyber safety and consider marginalized groups and minority communities in their innovation processes.
In recent years there has been an explosion of interest in topics that sit at the intersection of computing technology and societal issues. There has been significant work in the academic, industrial, and policy spaces to clarify and formalize best practices for deploying computational decision making (e.g., artificial intelligence and machine learning) at scale. Part of this work has been a newfound interest in many age-old conversations about the roles and limits of technology in society. In this talk I'll survey some of these topics with an eye towards concerns around bias and fairness in the application domains of my recent work, including building recommender systems that can handle multi-stakeholder fairness concerns and our work at the Tulane Center for Community Engaged AI.
Large Language Models (LLMs) have transformed natural language processing (NLP), but their evaluation poses challenges due to the lack of standardized benchmarks for diverse tasks. The opaque, black-box nature of LLMs complicates understanding their decision-making processes and identifying biases. Effective evaluation metrics are crucial, especially as LLM architectures rapidly evolve, requiring adaptive methodologies. The AI community is coming together to address this, facilitate benchmark development, and provide tools for consistent model assessment across domains. We will also review some of the open-source (OS) evaluation metrics and walk through code using a demo dataset.
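For a flavor of the kind of walkthrough described above, here is a minimal sketch using the open-source Hugging Face `evaluate` library on a tiny, made-up demo dataset; the library, metrics, and data are illustrative assumptions rather than the session's actual materials.

```python
# A minimal sketch of evaluating LLM outputs with open-source metrics,
# using the Hugging Face `evaluate` library (an illustrative assumption,
# not necessarily the tooling used in the session).
import evaluate

# Hypothetical demo dataset: model predictions paired with references.
predictions = [
    "Paris is the capital of France.",
    "The mitochondria is the powerhouse of the cell.",
]
references = [
    "The capital of France is Paris.",
    "Mitochondria are the powerhouse of the cell.",
]

rouge = evaluate.load("rouge")              # n-gram overlap
exact_match = evaluate.load("exact_match")  # strict string equality

print(rouge.compute(predictions=predictions, references=references))
print(exact_match.compute(predictions=predictions, references=references))
```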
In this panel, you'll hear from industry pioneers who are at the forefront of ethical AI development. This session will delve into the challenges and opportunities of implementing responsible AI practices, with insights from those who are setting the standard. Discover how these leaders are navigating complex ethical considerations, fostering transparency, and ensuring fairness in AI technologies.
Join industry leaders as they delve into the transformative power of ML and NLP in enhancing customer experiences. This panel will explore cutting-edge techniques for leveraging ML and NLP to create personalized, efficient, and engaging interactions. Discover how these technologies are being used to understand customer needs, predict behaviors, and drive satisfaction. The discussion will highlight real-world applications and success stories, offering insights into the future of customer-centric innovation.
Join us for a thought-provoking fireside chat with Renée Cummings, renowned AI ethicist and Data Science Professor of Practice at the University of Virginia, as we explore the critical intersection of ethics, equity, and empowerment in AI. In this session, moderated by Arthur's very own Victoria Vassileva, Renée will discuss how AI technologies can both challenge and advance social justice, and the responsibilities of developers and organizations to ensure equitable outcomes. Gain insights into the ethical implications of AI deployment and discover actionable strategies for building more inclusive and accountable AI systems.
The increased adoption of LLMs requires serving them efficiently to many users. In the first part of the talk, I will highlight key concepts that help us achieve high-throughput LLM serving, such as tensor parallelism, paged attention, and quantization. In the second part, I will talk about how to control decoding from LLMs using "control vectors". Conceptually, these are vectors in the activation space representing directions of a certain concept (e.g., humor) that can be amplified or suppressed at inference, giving users a more interpretable axis of control over LLM decoding.
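To make the control-vector idea concrete, here is a minimal sketch of activation steering with a PyTorch forward hook, assuming a small Hugging Face causal LM. The model, layer choice, and randomly initialized direction are all placeholders; real control vectors are usually derived from contrasting activations for a concept rather than sampled at random.

```python
# A minimal sketch of steering generation with a "control vector" via a
# PyTorch forward hook. The model (gpt2), the layer choice, and the random
# direction below are placeholders for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

control_vec = torch.randn(model.config.hidden_size)  # hypothetical concept direction
alpha = 4.0  # steering strength: positive amplifies the concept, negative suppresses it

def steer(module, inputs, output):
    # Decoder blocks return a tuple whose first element is the hidden states
    # of shape (batch, seq_len, hidden); add the scaled direction to every token.
    steered = output[0] + alpha * control_vec.to(output[0])
    return (steered,) + output[1:]

# Hook a middle transformer block; which layer(s) to steer is an empirical choice.
handle = model.transformer.h[6].register_forward_hook(steer)

ids = tok("Tell me about the weather today.", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=40, do_sample=True)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore unsteered decoding
```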
Arthur’s product team will host a special session showcasing some of the latest developments in the platform. We can’t say what they are just yet, but you won’t want to miss this one!
This panel will explore the complex legal landscape surrounding AI adoption in business. Experts will discuss regulatory compliance, data privacy, intellectual property, and ethical concerns, providing actionable insights for companies integrating AI into their operations. Attendees will gain a deeper understanding of the potential legal risks and how to navigate them effectively to ensure responsible and compliant AI use in the enterprise.
In this panel, experts will explore the powerful role AI plays in addressing today’s most pressing environmental issues. This session will highlight how AI-driven solutions are being used to combat climate change, enhance conservation efforts, and promote sustainable practices—and some of the ecological challenges that AI presents as well. Learn about the latest innovations at the intersection of technology and ecology, and discover how AI can be harnessed to build a more sustainable future.
As companies focus on operationalizing their AI development and infrastructure, especially those coming off a long period of GenAI exploration, it’s worth taking a moment to identify the evolution of the AI stack and define just what an “AI platform” is. Part of the confusion is that the notion of an “AI platform” has changed every ~3 years for the last decade. In this talk, we will walk through the history of what the cutting-edge AI platform has been, and why past approaches failed to evolve with team needs. Finally, we will address how modern AI development is significantly more diverse and sophisticated than ever before, with heterogeneous data types and compute requirements.
In a 2022 paper, Can There Be Art Without an Artist?, Dr. Avijit Ghosh and Genoveva Fossas discussed the work of human artists within training data for generative AI tools. In the appendix, they connect the practice of scraping training data without consent to its famous precedent in biology, citing the case of Henrietta Lacks. Because of You is a digital video work inspired by this connection, and subsequent conversations between Eryk Salvaggio and Dr. Avijit Ghosh, which began at a presentation on AI and art at SXSW in 2023.
Attention is not all you need—running inference on large language models and other modern neural network topologies would be too slow to be useful without specialized computing devices. In this talk, Gabe will discuss how model design, hardware design, and software interact, and provide a high-level overview of the accelerator space including GPUs, NPUs, and custom accelerators.
In this panel session, experts will explore the transformative journey from AI models to impactful business applications. Discover strategies for scaling AI across organizations, overcoming operational challenges, and driving measurable outcomes. Gain insights into real-world examples and learn how to unlock the full potential of AI to deliver tangible business value.
In this panel session, industry experts will delve into the transformative impact of artificial intelligence on the job market. Explore how AI is reshaping roles, creating new opportunities, and redefining the skills needed for tomorrow’s workforce. Join us to gain insights into how organizations and individuals can adapt to this rapidly evolving landscape, ensuring they remain competitive and resilient in the face of AI-driven change.
In this session, leading venture capitalists will explore the most promising trends and innovations shaping the future of artificial intelligence. Discover the key areas attracting investment, the challenges and opportunities within the AI landscape, and how these experts are positioning their portfolios to capture the next wave of AI-driven growth.
LLMs are increasing in capability and popularity, propelling their application in new domains—including as replacements for human participants in computational social science, user testing, annotation tasks, and more. Angelina will discuss a recent paper she authored that argues analytically that LLMs are likely to both misportray and flatten the representations of demographic groups, explaining why this is harmful for marginalized groups. At the same time, in cases where the goal is to supplement rather than replace human participants (e.g., pilot studies), the paper provides inference-time techniques that are empirically shown to reduce, but not remove, these harms.
Explore how AI investments are transforming into tangible business outcomes in this insightful session. Industry experts from top organizations will discuss strategies for maximizing returns on AI initiatives, highlighting real-world examples of AI-driven growth and efficiency. Gain actionable insights into measuring the success of AI projects, from initial investment to long-term impact.
Embedding models are a foundational part of all modern AI systems, and their representations of documents are of potentially great value to anyone with large uncategorized collections of text or images. But high-dimensional spaces are also intrinsically hard to understand, which makes providing useful interfaces to embedding spaces both important and difficult. This talk will cover the ways that GPU-accelerated visualization, interaction, and new filters make the web browser one of the most exciting places to be making AI models interpretable and accessible today.
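The talk focuses on browser-based, GPU-accelerated interfaces, but the step underneath most embedding visualizations is the same: embed documents, then project the high-dimensional vectors down to two dimensions for plotting. The sketch below illustrates that step in Python using sentence-transformers and scikit-learn, both of which are assumptions here rather than the tooling discussed in the talk.

```python
# A rough sketch of the preprocessing behind most embedding visualizations:
# embed documents, then project the high-dimensional vectors to 2D.
# sentence-transformers and scikit-learn are illustrative assumptions,
# not the browser-based tooling discussed in the talk.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

docs = [
    "Invoice for cloud compute, March",
    "Quarterly sales report, EMEA region",
    "Bug report: login page times out",
    "Recipe: lemon ricotta pancakes",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(docs)  # shape: (len(docs), 384)

coords = PCA(n_components=2).fit_transform(embeddings)  # (len(docs), 2)
for doc, (x, y) in zip(docs, coords):
    print(f"({x:+.2f}, {y:+.2f})  {doc}")
```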
The release of ChatGPT in November 2022 sparked an explosion of interest in LLM alignment with human values, preferences, and standards. Existing methods claim superiority by virtue of better correspondence with human pairwise preferences, often measured by LLM judges. But do LLM-judge preferences translate to progress on other, more concrete metrics for alignment, and if not, why not? Recent joint research with NYU, Columbia, and Arthur.AI shows that (1) LLM judgments do not correlate with concrete measures of safety, world knowledge, and instruction following; (2) LLM judges have powerful implicit biases, prioritizing style over factuality and safety; and (3) the supervised fine-tuning (SFT) stage of post-training, rather than RLHF, has the greatest impact on objective measures of alignment.