
Case Study: Using Our Ethical AI Evaluation Framework—Veriff Integration

Challenge

A nonprofit client needed a secure, efficient way to verify the identities of recipients of their Human Authored Certification.

In the first release of the Human Authored Certification, only the nonprofit's members were eligible to receive certifications. Because the client admitted members through a manual vetting process, this was sufficient to verify that users were indeed "human". However, the second release of the product was set to allow non-members to gain certification, which required a different mechanism to ensure "human-ness". The client wanted a solution that would respect privacy, reduce bias, be quick and effective for users, and align with their mission.

Our Framework in Action

We applied our Ethical AI Tool Evaluation Checklist to guide every step of the selection and integration process.

Purpose & Mission Fit

Veriff's solution aligned with the client's goal of streamlining onboarding while maintaining trust and compliance, up to the level required by Know Your Customer (KYC) regulations.

Transparency

We assessed Veriff's documentation and reporting features to ensure both staff and recipients could clearly understand how verification decisions were made. We reviewed their portal, which provides clear details on the data entered, the data reviewed, and the final decision, giving both staff and recipients full visibility into each outcome.
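That same visibility can be mirrored on the integration side by recording each decision alongside what was submitted and what was reviewed, so staff can answer "why was this approved or declined?" without opening the portal. The sketch below is a minimal, illustrative example: the webhook route, payload field names, and in-memory log are assumptions for this post, not Veriff's documented schema.

```typescript
import express from "express";

// Hypothetical shape of a decision notification; these field names are
// assumptions for illustration, not Veriff's documented payload.
interface DecisionWebhook {
  sessionId: string;           // verification session the recipient started
  status: "approved" | "declined" | "resubmission_requested";
  reason?: string;             // human-readable explanation, if provided
  checkedDocuments?: string[]; // which submitted items were reviewed
}

// Illustrative in-memory log; a real integration would persist this durably.
const transparencyLog: Array<DecisionWebhook & { receivedAt: string }> = [];

const app = express();
app.use(express.json());

// Hypothetical endpoint the vendor calls when a decision is made.
app.post("/webhooks/verification-decision", (req, res) => {
  const decision = req.body as DecisionWebhook;

  // Record what was submitted, what was reviewed, and the final outcome,
  // mirroring the detail available in the vendor portal.
  transparencyLog.push({ ...decision, receivedAt: new Date().toISOString() });

  res.sendStatus(200);
});

app.listen(3000);
```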

Privacy & Data Handling

We examined Veriff's data collection, storage, and retention policies, confirming they met strict privacy requirements and minimized the exposure of sensitive information. In today's digital landscape, convenience often comes at the price of privacy. While convenience is a critical element, we also confirmed that Veriff complies with data protection and privacy laws in the US and worldwide; their fraud guide reflects this commitment.
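One way to keep that minimization concrete in the integration is to store only the verification outcome on the client's side and leave sensitive material with the provider under its retention policy. The types and helper below are a sketch under that assumption; they do not describe Veriff's schema or the client's actual database.

```typescript
// Minimal data-minimization sketch: persist only the verification outcome,
// never the underlying identity documents or images. Names are illustrative.
interface StoredVerificationResult {
  recipientId: string; // internal ID of the certification applicant
  verified: boolean;   // final outcome only
  decidedAt: string;   // ISO timestamp of the decision
  providerRef: string; // opaque session reference for audit lookups
}

// Sensitive artifacts (document photos, ID numbers) stay with the provider
// and are never copied into the client's own records.
function toStoredResult(
  recipientId: string,
  providerRef: string,
  approved: boolean
): StoredVerificationResult {
  return {
    recipientId,
    verified: approved,
    decidedAt: new Date().toISOString(),
    providerRef,
  };
}
```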

Fairness & Bias

We explored Veriff's approach to bias mitigation. Their clear, detailed, performance-based approach was a key factor in our evaluation; in particular, their discussion of why bias in machine learning deserves even more scrutiny than bias in human decision-making was both needed and mission-aligned. While Coat Rack is not in a position to train models at the scale Veriff can, we want to see vendors approach this issue the way we would if we had their resources.

Human Oversight & Control

The integration includes clear protocols for reviewing, overriding, or appealing verification decisions: Veriff's decisions can be appealed, and client administrators can override them entirely. It will be critical to monitor if and when this happens, maintain a historical data trail, and review each case.
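To support that monitoring, each appeal or override can be captured as an append-only audit record, so every case has a reviewable history. This is a minimal sketch; the record fields and helper are hypothetical and not part of Veriff's product.

```typescript
// Hypothetical audit-trail entry for human oversight actions; field names
// and the helper below are illustrative, not the vendor's API.
type OversightAction = "appeal_filed" | "decision_overridden" | "decision_upheld";

interface OversightRecord {
  sessionId: string;        // verification session being reviewed
  action: OversightAction;
  actor: string;            // staff member or administrator who acted
  originalDecision: string; // what the automated system decided
  newDecision?: string;     // outcome after human review, if changed
  rationale: string;        // required justification for the case file
  timestamp: string;
}

const oversightTrail: OversightRecord[] = [];

// Append-only: records are never edited, so the history of each case is preserved.
function recordOversightAction(entry: Omit<OversightRecord, "timestamp">): OversightRecord {
  const record = { ...entry, timestamp: new Date().toISOString() };
  oversightTrail.push(record);
  return record;
}
```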

Vendor Ethics & Reputation

We evaluated Veriff's public ethics commitments and third-party security audits, which added confidence in their reliability. Our vendor evaluation drew on independent reviews, published lists of competitors, and guides updated for 2025, and each vendor was vetted for reputation, fairness, ethics, ease of use, and price.

Security & Compliance

Our checklist ensured Veriff's system met industry security standards and nonprofit compliance requirements. Specifically, Veriff holds and maintains ISO/IEC 27001:2022 certification, is compliant with SOC 2 Type II, GDPR, and the WCAG 2.0 accessibility guidelines, and has obtained Cyber Essentials certification.

User Feedback & Continuous Improvement

We incorporated feedback loops so both staff and service recipients can report issues or suggest improvements, supporting ongoing refinement. The client understands that as the product is used more widely, additional resources may be needed to handle client communications, resolve issues, and navigate the intricacies of Veriff's portal.
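A feedback loop of this kind can be as simple as a structured intake that ties each report back to the verification it concerns. The entry shape and helper below are assumptions for illustration, not a description of the client's actual tooling.

```typescript
// Illustrative feedback entry for the continuous-improvement loop;
// names are assumptions for this sketch.
interface FeedbackEntry {
  submittedBy: "staff" | "recipient";
  sessionId?: string; // optional link to the verification it concerns
  category: "issue" | "suggestion";
  message: string;
  submittedAt: string;
}

const feedbackQueue: FeedbackEntry[] = [];

// Collected entries can be reviewed on a regular cadence and fed back into
// the evaluation checklist when the integration is reassessed.
function submitFeedback(entry: Omit<FeedbackEntry, "submittedAt">): void {
  feedbackQueue.push({ ...entry, submittedAt: new Date().toISOString() });
}
```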

Outcome

The nonprofit now has a robust, mission-aligned identity verification process that is fast, fair, and transparent. Clear measures are in place to administer the product, review and override decisions if need be, and respond to customer feedback.

As this is a first release, it will be important to regularly survey users and staff, making adjustments as needed.

Want to see how our evaluation framework can help your organization make ethical, effective technology choices? Book a free discovery call or download our Ethical AI Tool Evaluation Checklist.