Hi Mostapha, thank you for taking the time to respond. I'm sorry if my earlier comment sounded a little too harsh; I didn't mean to be rude, just wanted to get straight to the point. To give you some context, I've played around quite a bit with prompts myself over the past year, and I know for a fact that even "modern" LLMs can't consistently operate the way you want them to simply by using the approach you laid out ('consistently' being the key word here). There's nothing inherently wrong with the approach; it's simply not very effective. To demonstrate, here are some tests I tried out.
Due to a lack of time, my testing was pretty minimal and done without any proper tools, but I tested your example prompts on R1, Sonnet 3.5, and 4o. Here are some screenshots: https://postimg.cc/gallery/hqgzHy0
You can see that Claude was the ONLY one who did a decent job with those prompts.
I agree with you that this is not an ideal implementation. In fact, an ideal implementation would involve not using decorators *at all*, and instead using a natural language approach. If and when you use prompts like this in production, you'll understand why. In short, the failure rate for a simple decorator is just way too high to ever produce a consistent response. The *best* way, imho, is to use very precise natural language.
To demonstrate this, I went one step further and asked the LLMs to write a more specific prompt based on the original one. Note: for this step, reasoning mode was enabled for the models that support it, but I assume results would be similar without reasoning. Here are some screenshots to give you an idea: https://postimg.cc/gallery/fLcnrZM
I then fed the prompt that the LLMs generated to the same LLMs. Here are the results: https://postimg.cc/gallery/WgkvLCL
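In case it helps, the two-step workflow above can be sketched in a few lines of Python. Note that `call_llm` here is a placeholder I made up, not a real API; you'd swap in your provider's actual client (OpenAI, Anthropic, etc.), and the decorator prompt is just an invented example:

```python
def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would hit a chat-completion endpoint.
    # It just echoes here so the sketch stays runnable.
    return f"[model output for: {prompt[:40]}]"

# A hypothetical decorator-style prompt, for illustration only.
DECORATOR_PROMPT = "+++Reasoning +++Tone(formal) Explain how DNS resolution works."

# Step 1: ask the model to rewrite the decorator prompt
# as precise natural-language instructions.
rewrite_request = (
    "Rewrite the following prompt as precise natural-language "
    "instructions, replacing any decorator syntax with explicit "
    f"directives:\n\n{DECORATOR_PROMPT}"
)
natural_prompt = call_llm(rewrite_request)

# Step 2: feed the rewritten prompt back to the same model.
final_answer = call_llm(natural_prompt)
print(final_answer)
```

The point of the two calls is simply that the model is usually better at *interpreting* its own precise rewording than the terse decorator syntax itself.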
As you can see, all three did pretty OK this time.
Again, this is not to say that decorators aren't a good idea; all of this depends wholly on your context and the type of LLM you use. A model trained to recognise decorators would undoubtedly respond much better to them. Unfortunately, as far as I know, consumer-grade LLMs don't offer that. (If you noticed, in one of the screenshots, Claude even highlighted the fact that they aren't necessary. I found that a little amusing!)
Don't get me wrong, I'm not trying to discourage you from experimenting. It's the only way to learn how to "engineer" better prompts, but there are other methods that are leaps and bounds better. I'm grateful you were open to the critique. I appreciate that very much!
If you're interested, I've written quite a bit about iterating on prompt ideas on my blog. For example, there are numerous examples in the following post where I construct an entire system prompt for development work with the help of LLMs. Maybe this is right up your alley? https://aalapdavjekar.medium.com/all-the-wrong-and-right-ways-to-prompt-a-tiny-guide-5bd119d312b3
Happy prompting! :)