Recently I've been writing a resume/hire-me website. I'm not a stellar writer, but I'm alright, so I've been asking various LLMs to review it by just dropping the HTML file in.
Every single one has completely ignored the "Welcome to nginx!" header at the top of the page. I'd left it in half as a joke to amuse myself, but I expected it would get some kind of reaction from the LLMs, even if just an "it seems you may have forgotten this line."
Kinda weird. I even tried guiding them into seeing it without explicitly mentioning it and I could not get a response.
It didn't ignore it. There just wasn't any pattern in the training data about responding to such a line.
Having the mental model that the text you feed to an LLM influences the output but is not 'parsed' as 'instructions' helps explain its behavior. The website GP linked is cataloguing a zoo of problems while missing the biology behind them.
LLMs don't have blind spots; they don't reason or hallucinate. They don't follow instructions. They pattern-match on high-dimensional vector spaces.
Why would you expect it to “get some kind of reaction”? That strongly implies that you perceive what the LLM is doing as “understanding” the tokens you’re feeding it, which *is not something LLMs are capable of*.