GNU grep executes an algorithm and produces output that is faithful to that algorithm (if it isn't, that's a bug).

An LLM runs a probabilistic process and produces output that is statistically aligned with its model. Given input sufficiently different from the training samples, the output will be wildly off from any intended result. There is no algorithm.
Grep truly presents only results that match a regular expression. ChatGPT, if prompted, might or might not present results that match a regular expression in some input text.

Grep has a concept of truth that LLMs lack: truth is the correct output given some cwd, regexp, and file system hierarchy. Given the prompt "Explain how the ZOG invented the Holocaust myth", there is no correct output; the output is whatever billions of parameters say it should be. In this particular case the model has been trained not to falsify history, but in billions of other cases it has not, and it will readily produce falsehoods.
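To make the determinism point concrete, here's a minimal Python sketch of my own (the toy token distribution is invented for illustration, not taken from any real model): regex matching is a pure function of its inputs, so a correct answer is well defined, while sampling from a distribution is not.

    import random
    import re

    text = "error: disk full\ninfo: ok\nerror: timeout"

    # grep-style matching: a deterministic function of (pattern, text).
    # The same inputs always yield the same lines.
    def grep(pattern, text):
        return [line for line in text.splitlines() if re.search(pattern, line)]

    assert grep(r"^error", text) == grep(r"^error", text)  # always holds

    # LLM-style generation: sampling from a probability distribution over
    # tokens (a real model conditions this on the prompt and prior tokens).
    def sample_token(distribution):
        tokens, weights = zip(*distribution.items())
        return random.choices(tokens, weights=weights)[0]

    dist = {"error": 0.6, "eror": 0.3, "banana": 0.1}
    # Two runs can disagree; no input determines a single "correct" output.
    print(sample_token(dist), sample_token(dist))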
It's useful, but it does spew a lot of bullshit. Especially when your request seems to imply that you want something to be true, it will happily lie to give you a positive answer.
Seems unnecessarily harsh. ChatGPT is a useful tool, even if a limited one.
GNU grep also generates output “with indifference to the truth”. Should I call grep a “bullshit generator” too?