I vividly remember moderating my very first workshop. I was absolutely energized, pouring my passion into every detail of the preparation. How could I inspire participants to motivate each other, break out of habitual thought patterns, and stay energized? After it was over, I was exhausted but happy. It was a success for both the team and me. I learned so much from that first workshop!
Over the years, while moderating upper-management workshops and pure co-creation product workshops, one question has continued to drive me: How can I moderate true innovation? How can I achieve the Medici effect (the breakthrough thinking that emerges when ideas from different fields intersect) that collaborative work is supposed to bring? Moving beyond existing solutions and corporate limitations has been a constant challenge.
Out of the approximately 200 workshops I've moderated, only one had the potential for real innovation. In that workshop, the participants were not only open to bringing crazy ideas to the table but also confident in their professional roles. In product workshops, I often notice that participants try to take on the role of a UX designer instead of leveraging their own professional expertise.
Recently, I found new hope in a tool that is completely "unbiased," unaware of internal constraints, and able to draw from a vast pool of information: AI.
The Experiment
I know that large language models (LLMs) don’t possess real knowledge. However, during the ideation phase, where quantity is crucial, they are excellent tools. They can generate a wide range of ideas on which we can build.
Here's how I set it up:
I prepared two prompts for the team to generate ideas with ChatGPT. The prompts were framed around user needs to keep the ideation user-centered; an illustrative example follows below. In the ideation phase, one part of the team generated ideas using the traditional method, while the other part used AI.
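To give a sense of the structure, a prompt along these lines captures the approach (this is a reconstruction for illustration, not the exact wording we used, and the bracketed parts are placeholders): "Act as a [profession relevant to the product]. Our user is a [persona] who struggles with [pain point]. Generate 20 ideas, ranging from practical to deliberately unconventional, that would help this user achieve [goal]. For each idea, add one sentence on why it might work."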
What Went Well and What Didn’t
The team was able to generate many ideas, including some "crazy" ones. AI definitely has the advantage of bringing a large number of idea variations to the table quickly.
However, this advantage came with a downside. The team working without AI generated ideas from within, exchanged thoughts, iterated, and built a connection to their ideas. The AI-generated ideas, on the other hand, overwhelmed the team: they needed an immense amount of time to understand, internalize, and work with them. Iteration wasn't possible within the planned time, as understanding and filtering took too long. The passion and connection to the ideas were missing, as the team was focused solely on reading, which significantly disrupted the workshop flow. Diving deeper with the LLM and asking targeted questions wasn't feasible either, since deeper questions could only come after a thorough understanding. As a result, everything remained superficial.
Conclusion
LLMs can be a great help in generating a large quantity of ideas. However, in the future, I would allocate significantly more time for understanding and diving deeper into these ideas. I haven't yet found a conclusive method for balancing fast progress and decision-making in workshops with the depth required for truly innovative ideas.
New features, like Miro Assistants that take on roles and generate ideas, will face similar challenges. The workshop team still needs to make decisions and cannot analyze and understand everything within seconds. I'm also not entirely sure whether real innovation can emerge, or whether teams will struggle with "idea paralysis" due to excessive output and lengthy evaluation times.
I'm eager to see how these tools evolve and how they can enable teams to bring forth true innovation.