Aalap Davjekar
1 min read · 2 days ago


This is a misleading post. The “decorators” approach can be achieved through natural language as well. LLMs aren’t trained to distinguish these symbols from regular prompting and respond differently to them. The results are also wholly dependent on which LLM you use. I tested this on 4o and R1: 4o failed every test and responded as if it were a regular prompt, while R1 offered a slightly tailored response, though not in the way you described.
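To make the point concrete, here is a minimal sketch (the prompt strings and helper function are my own illustration, not from the post): a “decorator” like +++Reasoning is just text prepended to the prompt. The model receives a single string either way; there is no separate channel that gives these symbols special meaning.

```python
from typing import Optional


def build_prompt(instruction: str, decorator: Optional[str] = None) -> str:
    """Assemble the final prompt string. A 'decorator' is just a prefix."""
    if decorator:
        return f"{decorator}\n{instruction}"
    return instruction


# "Decorated" prompt and an equivalent natural-language prompt:
decorated = build_prompt("Summarize this article.", "+++Reasoning")
natural = build_prompt("Reason step by step, then summarize this article.")

# Both are ordinary strings sent through the same prompt field --
# nothing about the +++ syntax is parsed or handled specially.
print(decorated)
print(natural)
```

Whether the model does anything useful with the prefix depends entirely on its training, which is why behavior varies so much between models.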

+++Reasoning is not going to magically make a non-reasoning model perform reasoning.

+++Fact-checking will not work the way you think it does, and it makes me wonder whether you know how LLMs work at all. They’re statistical word generators, not tools that can critically evaluate their own output.

This post reminds me of the “prompt gurus” who were selling prompts a year ago, when all you had to do was be a little more precise in your wording.


Written by Aalap Davjekar

Technical writer and web developer based in Goa, India.
