
Related: I love Rob Pike's talk "Lexical Scanning in Go" (2011).

Educational and elegant approach.

https://www.youtube.com/watch?v=HxaD_trXwRE



That talk is great, but I remember some discussion later about Go actually NOT using this technique because of goroutine scheduling overhead and/or inefficient memory allocation patterns? The best discussion I could find is [1].

Another great talk about making efficient lexers and parsers is Andrew Kelley's "Practical Data Oriented Design" [2]. Summary: it explains various strategies one can use to reduce the memory footprint of a program while also making it cache friendly, which increases throughput.
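To make that concrete, here's a rough sketch (my own, not from the talk) of one such strategy in Go: instead of one struct per token that copies its text, store compact parallel slices of (tag, start, length) indexing into the source buffer. The type and tag names here are illustrative.

```go
package main

import "fmt"

// Illustrative token tags; a real lexer would have many more.
type TokenTag uint8

const (
	TagIdent TokenTag = iota
	TagNumber
)

// Tokens uses a struct-of-arrays layout: each field is one contiguous
// slice, so scanning tags stays cache friendly, and tokens reference
// the source by offset instead of carrying string copies.
type Tokens struct {
	Tags   []TokenTag
	Starts []uint32
	Lens   []uint32
}

func (t *Tokens) Add(tag TokenTag, start, n uint32) {
	t.Tags = append(t.Tags, tag)
	t.Starts = append(t.Starts, start)
	t.Lens = append(t.Lens, n)
}

// Text recovers a token's text by slicing the original source.
func (t *Tokens) Text(src string, i int) string {
	return src[t.Starts[i] : t.Starts[i]+t.Lens[i]]
}

func main() {
	src := "foo 42"
	var toks Tokens
	toks.Add(TagIdent, 0, 3)
	toks.Add(TagNumber, 4, 2)
	for i := range toks.Tags {
		fmt.Printf("%d %q\n", toks.Tags[i], toks.Text(src, i))
	}
}
```

Eight bytes of slice data per token (plus one tag byte) versus a struct holding a Go string header and copied bytes adds up quickly on large inputs.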

--

1: https://news.ycombinator.com/item?id=31649617

2: https://www.youtube.com/watch?v=IroPQ150F6c


Yeah, I remember that too; this article mentions it:

Coroutines for Go - https://research.swtch.com/coro

The parallelism provided by the goroutines caused races and eventually led to abandoning the design in favor of the lexer storing state in an object, which was a more faithful simulation of a coroutine. Proper coroutines would have avoided the races and been more efficient than goroutines.


I feel like that talk has more to do with expressing concurrency, in problems where concurrency is a natural thing to think about, than it does with lexing.



