"From the CEO" Series: by Lisa Montague
In a world increasingly influenced by AI, how do we preserve and celebrate human thought?
Human Authored in an AI World
We're currently working on a project for one of our nonprofit clients called Human Authored. The goal is to distinguish and honor human creativity in an increasingly AI-driven world. The premise is simple: people don't want to be fooled into reading works auto-generated from a remix of previously published material, courtesy of our robot friends.
I'm all in on transparency, in every area of life, and it makes sense to me to offer a certification program that helps readers easily distinguish human creativity from AI-generated content. I'm genuinely excited we're able to collaborate on this project.
One thing that's been technically interesting: once my client decided to open the certification process to non-members, the question became, how do we verify that the author is actually human? Their existing vetting process handles this for members, but for anonymous non-members, something new was needed. Enter AI-powered Identity Verification. Is this ironic?
A Case Study: How AI Enhances (or Doesn't) My Day-to-Day Work
Here's how AI showed up - or didn't - as I worked on this feature:
Client Use Case and Needs
We spent time understanding my client's specific use case and needs. That means conversations, smart questions, and leveraging our long-standing relationship. I didn't use AI for this part. But if I were new to this client, I would use AI's research capabilities to more quickly and deeply understand my client and their business niche.
Surveying the Marketplace
I vetted many Identity Verification tools on the market and created a matrix to compare them. I used AI's research capabilities to help me frame the questions and gather data, but I found I needed to verify sources myself and, in many cases, schedule sales calls to gather deeper information. The annoying part is when organizations force me through the sales-and-marketing funnel of doom to get simple answers. I tried using AI to avoid this and couldn't: my AI would point me to the vendors' online pages but couldn't answer those questions on their behalf. Sigh!
Product Decision
After refining the matrix with client feedback, we decided on Veriff.com. When we were down to our final three choices, my client said, "Go with your gut, Lisa." That... well, AIs don't have guts, do they?
Signing Up
We signed up the client and started a subscription. It took some time, as Veriff is an international company and didn't like the corporate credit card. AI tools won't do such things for you; you have to work through setting up the account and subscription yourself.
UI/UX
We're integrating Veriff into multiple user onboarding flows. This requires UI/UX thought and business-requirements conversations about timing. Do we verify users before or after purchase? What if someone changes their name? Do we need court documentation, and do we re-verify? What balance should we strike between usability and security? How much fraud detection is enough? This required writing up options for my client and meeting to make decisions. My AI edited my drafts, but it wasn't my final editor. That was my business partner.
Technical Architecture
I think of this step as "directing my developers". It consists of deeply reading the vendor's documentation and planning the steps. Do I understand everything about this product and its API? Am I at the point where I can specify exactly how I want this implementation coded? I didn't think about incorporating AI into this process. But this is what I do for a living, and I'm super good at it. Perhaps a less experienced architect could use AI planning capabilities to help with this task.
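To make this concrete, here's a minimal sketch of what "specifying exactly how I want this coded" can look like for an identity-verification integration. The endpoint URL, header names, and payload shape below are illustrative assumptions in the style of HMAC-authenticated vendor APIs, not Veriff's documented API; the real implementation would follow the vendor's reference docs.

```python
# Hypothetical sketch: building an authenticated request to start a
# verification session. All names here are placeholders for illustration.
import hashlib
import hmac
import json

API_KEY = "demo-api-key"        # placeholder credential
SHARED_SECRET = b"demo-secret"  # placeholder credential

def build_session_request(first_name: str, last_name: str) -> dict:
    """Build the JSON body and auth headers for a new verification session."""
    body = json.dumps({
        "verification": {
            "person": {"firstName": first_name, "lastName": last_name},
        }
    }, sort_keys=True)
    # Sign the exact bytes of the body so the server can detect tampering.
    signature = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return {
        "url": "https://api.example-verifier.com/v1/sessions",  # placeholder
        "headers": {
            "X-AUTH-CLIENT": API_KEY,
            "X-HMAC-SIGNATURE": signature,
            "Content-Type": "application/json",
        },
        "body": body,
    }
```

Writing a spec at this level of detail, down to which bytes get signed, is what lets a developer implement the integration without guessing.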
Coding
The code-completion and suggestion AI built into the IDE is mostly helpful, sometimes annoying. The same goes for code linters. Though I wouldn't characterize them as AI, we're implementing custom code linters for our company to keep our coding standards consistent across projects.
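For the curious, a custom lint rule doesn't have to be fancy. Here's a minimal sketch of one, a plain-Python checker that flags over-long lines; our actual linters and standards are internal, so treat the rule and the limit here as illustrative assumptions.

```python
# Minimal sketch of a custom lint rule: flag lines over a length limit.
# The limit and rule are examples, not our actual coding standard.

MAX_LINE_LENGTH = 100  # assumed project standard

def check_line_lengths(source: str, limit: int = MAX_LINE_LENGTH):
    """Return (line_number, message) pairs for lines exceeding the limit."""
    problems = []
    for number, line in enumerate(source.splitlines(), start=1):
        if len(line) > limit:
            problems.append(
                (number, f"line {number} is {len(line)} chars (limit {limit})")
            )
    return problems

if __name__ == "__main__":
    sample = "short line\n" + "x" * 120 + "\nanother short line"
    for _, message in check_line_lengths(sample):
        print(message)
```

Rules like this run in CI, so style disagreements get settled by the tooling instead of in code review.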
I used Vibe Coding for a few days. I found the code wordy and the refactors odd, and it did poorly at simple tasks such as breaking up methods. That said, it's just one tool. I'll keep trying new tools as they arrive.
I also mentioned to my business partner how poor the results of my Vibe Coding experiment were. She asked an interesting question: do people fix AI bugs themselves, or do they ask their AIs to fix the bugs? Basically, how does the AI learn that its code sucked? I don't have an answer, and I wonder how people in my human network will respond to that question.
User Testing
We've got robust backend automation for testing, but for User Testing and Acceptance Testing of the UI, I rely on humans. I've experimented with front-end testing automation in the past and found it lacking, and expensive. Why? Because automation never does things, or makes mistakes, the way humans do. It doesn't get confused like humans do. One of the most valuable pieces of feedback I look for from a human is "this worked, but it was kind of weird?" That makes me sit up in my seat and immediately work to understand why.
I do think there's potential for AI to improve in this space. Imagine if AIs could model the natural human propensity for error and weirdness.
Deployment
We already have CI/CD set up, with agile sprints and releases managed by Jira.
Bug Fixing
We're perfect and have zero bugs. Haha... just making sure you're still reading! We are implementing more operational automation around bugs, support tickets, and production notifications. My AI has been really helpful in researching and implementing upgraded tools like Zapier AI.
Client and Development Documentation
AI can really help with code documentation. Plus, my AI writes good "How To" documentation and files it neatly in Google Drive for me. I love that. And I know I keep saying "my AI"; if anyone is curious, I've been using Sintra.
Final Thoughts and Takeaways
I always keep my client and their needs in mind. What my client wants is a product that celebrates human creativity while acknowledging that AI is here to stay. I feel like my workflow already reflects that balance. I use AI tools for research, automation ideas, and documentation support, and they've become embedded in many aspects of my job. The research capabilities are scarily fast and good. The automation suggestions from AI helpers alone could keep me busy for months. Specific implementation tools like Vibe Coding have been much less impressive.
Ultimately, I value sitting and thinking, reading and absorbing, communicating and refining direction. That's not fast. It can't be automated. It's intentional and truly, humanly, creative.
I'm curious to know what others think. Please reach out to me if you'd like. Have a great day!