AI-enabled systems that are responsible and human-centered will be powerful partners to humans. That partnership depends on building systems that people are willing to be responsible for and that are trustworthy to those who use them. Carol will share guidance, grounded in UX research, for operationalizing the work of making responsible, human-centered AI systems. She will present methods UX teams can use to identify bias, prevent harm, and support human-machine teaming by designing interactions that convey appropriate evidence of a system's capabilities and integrity. Once these dynamic systems are out in the world, critical oversight activities are needed for them to remain effective. This session will introduce each of these complex topics and provide references for further exploration of these important issues.