It was never meant to be a pot-shot, and I have nothing against lisp. I can see why it reads that way; we added that part because we wanted to illustrate why people should care.
As for your claim about us being wrong: I don't have a problem with being wrong, and maybe we are. At the same time, I think there may be misunderstandings causing people to believe we aren't doing something new. Again, maybe we're not.
We're two 18-year-olds, fresh out of high school. It's a research project, but we're not graduate students.
A lot of these comments claim it's not new because reader macros exist. From my understanding, our tokenization system is unique because it can all be done at runtime without backtracking and without executing anything eagerly. That's possible because Cognition only ever makes use of text that has already been read in, and never depends on anything not yet read, which means you don't have to backtrack. I mean, you could backtrack, but it would be less elegant.
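To make the no-backtracking property concrete, here's a minimal sketch (my own illustration, not Cognition's actual code) of a tokenizer that consumes input strictly left to right: each character is read exactly once, decisions depend only on characters already consumed, and the delimiter set can be swapped at runtime:

```python
def tokenize(source, delims=" \t\n"):
    """Single-pass tokenizer: never peeks ahead, never rewinds.

    `delims` is an ordinary runtime value, so the set of token
    boundaries can be changed between calls (or mid-stream, in a
    fancier version) without re-lexing anything already emitted.
    """
    tokens = []
    current = []
    for ch in source:  # strictly left-to-right; each char seen once
        if ch in delims:
            if current:
                tokens.append("".join(current))
                current = []
        else:
            current.append(ch)
    if current:  # flush the final token, if any
        tokens.append("".join(current))
    return tokens

print(tokenize("dup swap drop"))       # → ['dup', 'swap', 'drop']
print(tokenize("a,b,,c", delims=","))  # → ['a', 'b', 'c']
```

This is of course far simpler than what the project actually does; it's only meant to show the shape of the claim, that a lexer which conditions solely on already-read input needs no lookahead buffer and no rewind machinery.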
If I'm wrong about this, then that's fine, but we still made something cool without even knowing it existed beforehand.
For a lot of this stuff, there is no "wrong" because it's a matter of taste and familiarity, rather like asking which of the human alphabets is "wrong". It's undoubtedly very clever, a reimagining of lexing from scratch.
On the other hand, I'm adding it to my list of examples of "left-handed scissors" languages, along with LISP and FORTH themselves: languages that a few percent of people regard as more intuitive, while most users do not and prefer ALGOL derivatives.