17 Comments
Justine Clarke

Did I click because the title said butterflies, and I love butterflies? Yes. Did I stay because it began to talk about the smorgasbord of info that AI outputs, how messy it can get, and things to keep in mind when it comes to that? Also yes. Thanks for sharing!

Jaclyn Konzelmann

This comment made me smile 😊

Justine Clarke

Glad to hear 😊 Keep sharing!

Always flying, always learning 🦋

Pietro Montaldo

Such a useful breakdown. Small prompt mistakes really do cascade into larger errors.

Petar Dimov

Vague prompts now spawn multimedia chaos where once there was just bland text. Specificity upfront is the new superpower.

Selcuk Arkun

Great reminder of the stakes when using AI!

And, it makes me think that when building products we also have the responsibility to intuitively guide the user towards the right prompt in the first place (without being annoying!).

A lot of angst could be prevented if the waiter just said "We have salmon or fish and chips." :)

Jaclyn Konzelmann

Yes! This is the takeaway for product builders ✨

American Perp Walk

A couple of years ago I was using Stable Diffusion to generate images. I was impressed by how well it executed my prompts, but also by how it added amazing details I hadn’t thought to specify. Now, with even more advanced AI, using the same prompts, the results are mediocre. As you suggest, I have to rethink how to write my prompts to get better results.

Jaclyn Konzelmann

It's interesting how, in the case of images, these assumptions in your prompt led to the discovery of new ideas... I think there's definitely something to be said about that.

That said, in your example the ratio of input : output hasn't changed (prompt to image), versus entire apps being built from a simple prompt now, where the ratio of input (prompt) : output (functional app) has drastically changed... making it much more expensive to then course correct the output through prompt iteration sometimes.

Shamim Rajani

I agree with the importance of good prompting, but how can we write good prompts?

What I do is this:

I’ve actually never relied on a prompt library. Over time, I’ve learned how AI “thinks,” where it tends to be strong, and where it needs more direction. Because of that, I naturally write more comprehensive prompts. I already know which details I must include to avoid weak or off-track outputs.

Another thing I’ve noticed is that use-case specialization matters a great deal. When you consistently use AI for a specific type of task, it becomes easier to guide it effectively. For example, when I’m creating a LinkedIn post, I know there should be a CTA—but often I want it to be very subtle. So I make sure to clearly specify the tone and intensity of that CTA in the prompt. That level of clarity makes a huge difference in the final result.

Soo Bin Yim

This put words to something I’ve been feeling but couldn’t articulate. Getting a result is easy, but undoing the wrong direction takes much longer. It’s a good reminder that clarity early on saves more time than any shortcut later.

Vinaya K

This made me rethink prompts as more than inputs. In non-linear systems, responsibility compounds... even when intent feels small

Abe Diaz

One more thing to add. Besides having clarity in your dinner order, it is key to know what kind of cuisine we are going to be dining on. You can ask for a cheeseburger or a sandwich, but if the restaurant is a vegan restaurant, then neither of those orders will get you 🐔 🤣.

What do I mean by this?

I recently had to do a process that required accuracy and precision. I kept asking the LLM for stuff, and it kept giving me wrong answers. It wasn’t funny, although it kept apologizing to me. It wasn’t until I was able to ground my answers with RAG that I got what I needed.

And explaining this to my peers feels so hard it’s like trying to be Jeff Goldblum in the movie Independence Day.

Jaclyn Konzelmann

More and more I'm realizing how important and powerful metaphors are becoming... especially as you get deeper into the weeds of unfamiliar topics!

Tim Ousley

Vague prompts -> bland, generic output that gets massively amplified into an entire bland project.

Specific prompts -> get exactly what I (think I) need. But they take time, and I may not really know what I need yet.

Two other strategies:

1. Start over from scratch and try different specific prompts a few times. This is the AI version of “build one to throw away”.

2. Iterate on each layer of a spec one at a time. Then roll back when you encounter a problem. Spec Kit and similar projects can help do this at light speed, but I’ve also seen people implement similar hyperspeed waterfalls with bespoke Claude Code commands and markdown.

Jaclyn Konzelmann

Hah yeah I have a graveyard of dead ends that I learned from but weren't worth trying to recover... The equivalent of the email draft never sent or the doc never shared 😅

And great tip on rolling back a change or going to a previous checkpoint if you've veered too far off course! (Reminds me of Zelda - which I'm currently in the middle of playing again with the family hah)

Comment removed
Jan 8
Jaclyn Konzelmann

It's both the model and the tool that uses it. A lot of prompt chaining and AI systems underlie these products, which is where things really start to multiply... Sometimes in great ways... Other times in expensive course-correcting ways.