AI TRUST provides a collaborative, comprehensive assessment of AI solutions to ensure strong guardrails for short- and long-term success

Iliff Innovation Lab, the catalyst for integrity in technology, today announced AI TRUST, a review framework designed to help companies evaluate artificial intelligence (AI)-based technology and certify that the tools they adopt are productive, safe and equitable for users and other stakeholders. The Iliff Innovation Lab is part of the Iliff School of Theology and drew on the school’s years of experience at the intersection of technology and ethics to create AI TRUST and to explore new ideas and approaches to AI and other technologies.

AI TRUST works with organizational leaders via a collaborative review process to assess and score AI-based technology, empowering companies with the knowledge and tools necessary to navigate the complexities of AI. AI TRUST helps businesses anticipate and mitigate risks related to data collection and storage, privacy and bias in a quickly changing legal and regulatory environment, fostering a more responsible technological landscape.

Studies show only 3 in 10 enterprise executives have comprehensive AI policies and protocols in place. Rapidly advancing technologies such as AI and large language models lack established guidelines despite widespread use. Because they are built on large amounts of human-generated data, which carries human biases, AI technologies often reflect those biases. Iliff’s AI TRUST goes behind the scenes of AI solutions, unearthing existing issues and considerations for longer-term use, all from an independent and neutral perspective.

“The emergence of artificial intelligence is surfacing new types of ethical and legal implications, and many companies are flying blind, lacking effective strategies to evaluate their AI solutions for these. While AI can drastically improve businesses, it is far from a 'set it and forget it' tool,” said Dr. Michael Hemingway, director of design and data science at the Iliff School of Theology. “AI requires pressure-testing and ongoing human oversight to stave off bias and other inaccuracies. AI TRUST fills this gap — collaborating with teams, giving them knowledge to better understand how AI solutions will perform for their individual business, and making way for people to proactively adjust technologies to better set their company and employees up for success.”

For more than 125 years, the Iliff School of Theology has been at the forefront of theological education, recognized internationally for its emphasis on peace, justice and ethics. AI TRUST follows the success of Iliff’s DEI training, which helps companies create practical, mindful and embodied initiatives to build strong workplace teams that not only address racial differences but celebrate them.

The AI TRUST process can benefit an entire organization on multiple levels, building an ecosystem of trust and ensuring quality output for partners and customers alike by offering a step-by-step approach to inform and pressure-test planned software rollouts. Companies that complete the process earn an AI TRUST certification, establishing their technology as safe, responsible and reliable.

WellPower, the largest community mental health center in Colorado, used the AI TRUST review process to decide which AI tools to implement and to make sure its technology aligned with its values. AI TRUST helped WellPower understand the ethics of its business processes and recognize that the review can greatly impact not just the company and its clients, but the AI ecosystem as a whole.

“Dealing with highly sensitive medical information, we regularly use AI to transcribe sessions, allowing therapists to focus on patients instead of taking notes,” said Alires Almon, director of innovation at WellPower. “However, we know that tools using natural language processing can contain biases, and in our work, keeping trust with our clients is critical. The AI TRUST process allowed us to assess and manage bias in our AI systems, as well as make sure our AI applications meet professional and legal standards. Now we can confidently provide our customers with the best care possible, taking proactive steps to avoid any problems, either ethical or regulatory — our therapists can focus on being therapists.”

To learn more about the AI TRUST Review Process, click here.

About Iliff

The Iliff Innovation Lab is creating a more responsible technological landscape through its AI TRUST Framework, empowering companies to assess and evaluate AI-based technology in a collaborative, customized process. The Iliff Innovation Lab partners with organizations to implement AI-based technologies more effectively and holistically for short- and long-term success. Our process helps businesses anticipate and mitigate risks related to data collection and storage, privacy, bias, changing regulatory environments and other inherent AI challenges.

At the Iliff Innovation Lab, our mission is to facilitate the adoption of safe and equitable technology, protecting teams, customers and investors. By aiding organizations in evaluating AI tools, we advocate for a world of equitable innovation.

Olivia Venuta
104 West for Iliff Innovation Lab
olivia.venuta@104west.com