Such a useful breakdown. Small prompt mistakes really do cascade into larger errors.
Did I click because the title said butterflies, and I love butterflies? Yes. Did I stay because it talked about the smorgasbord of info that AI outputs, how messy it can get, and what to keep in mind about that? Also yes. Thanks for sharing!
This comment made me smile 😊
A couple of years ago I was using Stable Diffusion to generate images. I was impressed by how well it executed my prompts, but also by how it added amazing details I hadn’t thought to specify. Now, with even more advanced AI, the same prompts give mediocre results. As you suggest, I have to rethink how I write my prompts to get better results.
It's interesting how, in the case of images, those assumptions in your prompt led to the discovery of new ideas... I think there's definitely something to be said about that.
That said, in your example the ratio of input : output hasn't changed (prompt to image), versus entire apps being built from a simple prompt now, where the ratio of input (prompt) : output (functional app) has drastically changed... Making it much more expensive to then course-correct the output through prompt iteration sometimes.
One more thing to add: besides having clarity in your dinner order, it's key to know what kind of cuisine you're going to be dining on. You can ask for a cheeseburger or a sandwich, but if the restaurant is a vegan restaurant then neither of those orders will get you 🐔 🤣.
What do I mean by this?
I recently had to do a process that required accuracy and precision. I kept asking the LLM for things and it kept giving me wrong answers. It wasn’t funny, although it kept apologizing to me. It wasn’t until I grounded its answers with RAG (retrieval-augmented generation) that I got what I needed.
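For anyone curious, the grounding step was conceptually something like this (a minimal sketch; the toy retriever, the documents, and every name here are made up for illustration, not my actual setup):

```python
# Minimal RAG sketch: fetch trusted passages first, then tell the model
# to answer ONLY from them instead of guessing from memory.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    query_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )[:top_k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. If the context is "
        "insufficient, say so instead of guessing.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Invoice 1042 was issued on 2024-03-01 for $1,200.",
    "Invoice 1042 was paid in full on 2024-03-15.",
    "The refund policy allows returns within 30 days.",
]
question = "When was invoice 1042 paid?"
# This string is what you'd actually send to the model.
print(build_grounded_prompt(question, retrieve(question, docs)))
```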
And explaining this to my peers is so hard it feels like trying to be Jeff Goldblum in the movie Independence Day.
More and more I'm realizing how important and powerful metaphors are becoming... Especially as you get deeper into the weeds of topics that are unfamiliar!
Fascinating analysis of AI prompting as cascading complexity. The butterfly effect metaphor particularly resonates with current industry trends: the shift from 1:1 input-output to exponential multimedia workflows demands upfront precision. Would love to see a follow-up exploring how different AI models handle ambiguity differently, like whether GPT-4 vs Claude have distinct "reasonable assumption" patterns that users should understand. #butterfly
It's not just the model but also the tool that uses it. A lot of prompt chaining and AI systems underlie these products, which is where things really start to multiply... Sometimes in great ways... Other times in ways that need expensive course correction.
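To make the multiplication concrete, here's a toy sketch of a two-step chain (the `call_model` stub stands in for any real LLM API, and the canned outputs are invented for illustration):

```python
# Toy prompt chain: step 1's silent assumption becomes step 2's ground
# truth, so one ambiguity compounds through the whole pipeline.

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; returns canned text so
    the sketch runs without a real model."""
    if prompt.startswith("Write a one-paragraph spec"):
        # The model quietly resolves an ambiguity: dark mode only.
        return "A dark-mode-only note-taking web app."
    return "// hundreds of lines that hard-code dark mode everywhere"

def chained_pipeline(user_request: str) -> str:
    spec = call_model(f"Write a one-paragraph spec for: {user_request}")
    # Everything the first call assumed is now treated as fact here:
    # this is where a vague input multiplies into an expensive fix.
    return call_model(f"Implement this spec exactly:\n{spec}")

print(chained_pipeline("a note-taking app"))
```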
Great reminder of the stakes when using AI!
And it makes me think that when building products we also have the responsibility to intuitively guide the user towards the right prompt in the first place (without being annoying!).
A lot of angst could be prevented if the waiter just said "We have salmon or fish and chips." :)
Yes! This is the takeaway for product builders ✨
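In product terms, the waiter's move is just constraining the input space. A throwaway sketch of the idea (the menu and function names are purely illustrative):

```python
# "The waiter names the options": instead of a blank prompt box, the
# product surfaces what it can actually do, so the user never orders
# off-menu and the system never guesses at a substitute.

MENU = ("salmon", "fish and chips")

def take_order(request: str) -> str:
    if request.lower() in MENU:
        return f"Coming right up: {request}."
    # Guide the user instead of improvising a wrong answer.
    return f"We have {' or '.join(MENU)}."

print(take_order("cheeseburger"))  # -> We have salmon or fish and chips.
print(take_order("salmon"))        # -> Coming right up: salmon.
```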
Vague prompts -> bland, generic output that gets massively amplified into an entire bland project.
Specific prompts -> get exactly what I (think I) need. But they take time, and I may not really know what I need yet.
Two other strategies:
1. Start over from scratch and try different specific prompts a few times. This is the AI version of “build one to throw away”.
2. Iterate on each layer of a spec one at a time, then roll back when you encounter a problem. Spec Kit and similar projects can help do this at light speed, but I’ve also seen people implement similar hyperspeed waterfalls with bespoke Claude Code commands and markdown. (Rough sketch of the idea below.)
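Roughly, strategy 2 looks like this (every helper here is a placeholder for an LLM call or a review step, not Spec Kit's actual API; the checkpoints would be git commits or your tool's built-in checkpoints in practice):

```python
# Toy version of "iterate layer by layer, roll back on problems".
# generate_layer and looks_wrong are placeholders for an LLM call and
# a review step; checkpoints are plain snapshots of accepted work.

LAYERS = ["requirements", "architecture", "data model", "implementation"]
MAX_RETRIES = 3

def generate_layer(name: str, context: str) -> str:
    return f"{context}\n[{name} draft]"  # stand-in for an LLM call

def looks_wrong(draft: str) -> bool:
    return False  # stand-in for human review / automated checks

checkpoints = [""]  # last accepted state of the spec
for layer in LAYERS:
    for attempt in range(MAX_RETRIES):
        # Roll back on failure: always regenerate from the last good
        # checkpoint instead of patching a broken foundation.
        draft = generate_layer(layer, checkpoints[-1])
        if not looks_wrong(draft):
            checkpoints.append(draft)  # accept, move to the next layer
            break
    else:
        raise RuntimeError(f"Could not produce a good {layer} layer")

print(checkpoints[-1])
```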
Hah yeah I have a graveyard of dead ends that I learned from but that weren't worth trying to recover... The equivalent of the email draft never sent or the doc never shared 😅
And great tip on rolling back a change or going to a previous checkpoint if you've veered too far off course! (Reminds me of Zelda, which I'm currently in the middle of playing again with the family hah)