Artificial intelligence is rapidly transforming daily life and professional practice across research, medicine, industry, law, and computing. From automated decision-making in clinical trials to predictive modeling in drug development and data management, AI is becoming an integral tool in both academic and commercial research settings. For individuals unfamiliar with these technologies, however, the speed of adoption can breed confusion, concern, and mistrust.

For HRPP and IRB professionals, and for others working in research oversight across sectors, understanding how AI functions and how it may affect study design, participant protections, and consent processes is critical. Confidence in evaluating AI-integrated protocols and identifying potential risks strengthens both ethical review and participant engagement.

In this 90-minute session, we will explore how frameworks for building trust in science and technology can support the ethical and effective implementation of AI in research. We will examine how institutions and companies can proactively foster transparency, accountability, and participant-centered communication when using AI tools. Participants will leave with practical insights for navigating the intersection of innovation and public trust across academic, nonprofit, and industry-sponsored research environments.