The broken AI product experience
Have you ever tried a product that claimed to be AI-enabled, only to discover that just one small portion of it actually uses AI?
I like to call this the AI solutionist effect.
AI only makes this worse, because many companies treat AI as the solution to their product problems rather than as one tool among others for solving them.
As with any new technology, many businesses rush to release without planning for the entire customer journey. The result is a partial experience: some areas of the product deliver a wow factor, while the rest can be a letdown, or worse, a source of frustration.
A great start to a poor finish
Let me give an example. A product I frequently use and love, Airtable, lets customers use AI to set up custom databases very quickly. Simply describe in the prompt the type of data you want to store, add a few other specifications, and voilà, everything is created to near perfection. This is where the AI journey ends.
Any future customizations or queries are left entirely to the user to figure out. This can create a stronger sense of disappointment and frustration than existed before AI was implemented. When the product lacked AI integration, users expected to read the documentation and work out how to tie different data points together with formulas and through the complex interface.
With AI now in the product, my very first thought when running into a customization problem is “Why can’t the AI help with my inquiry?”
After all, if I go back and create a new project with those specifications spelled out for the AI, it builds the project almost perfectly. So it isn’t a lack of knowledge, which leaves the customer questioning why the company would not think to add an AI assistant for modifications.
Please note that I used Airtable as an example because I’m very familiar with the product; many other products follow similar implementation patterns.
We can’t expect companies to launch everything at once, polished to perfection; otherwise we might never see an update at all. However, little transparency about what is on the roadmap or when it might ship, paired with vague responses like “I will let our team know about your feedback,” leaves users further in the dark. That opens up a whole different topic on business relationship best practices, which we may cover in a later release.
How do we solve this without falling into the solutionist effect?
If releasing half a journey doesn’t work, but we also can’t expect businesses to release everything at once, what can be done?
Sometimes it makes more sense to work backward from the end of the customer journey rather than from the beginning. In the case of Airtable, if the AI were trained on formulas and table modifications first, the journey would begin with a smart chatbot answering technical questions.
Small, frequent releases would add new capabilities, eventually allowing the AI to complete the task for the user.
After some time, rolling out the ability to create a new project would make sense, because the scaffolding is already available to users. Customers wouldn’t have to wait for the perfect journey, but they also wouldn’t feel the process was incomplete.
It all comes down to how customers perceive the value being delivered. Rolling out in stages, where each micro-interaction is complete, and starting from existing pain points brings a strong sense of progress to the product.
This is where customer research comes into play: understanding the most critical areas of the user journey. With AI’s exponential growth, releasing token features, or seemingly big AI features that then leave users feeling abandoned, isn’t going to make your product go viral.