
There is also [Learn Haskell by building a blog generator](https://learn-haskell.blog/) - that might be interesting to you.


The Haskell Language Server (HLS, an LSP server) always needs help: https://github.com/haskell/haskell-language-server/issues?q=...

As for GHC compile times... hard to say. The compiler does a lot of things: type checking and inference for a complex type system, lots of optimization passes, etc. I don't think it's just some bug or inefficient implementation, because resources have been poured into performance work and still are. But there are certainly ways to improve speed. For specific issues, check the bug tracker: https://gitlab.haskell.org/ghc/ghc/-/issues/?label_name%5B%5...

For the big picture, maybe ask on the discourse[1] or the mailing list. If you want to contribute to the compiler, I can recommend that you ask for a GitLab account via the mailing list and introduce yourself and your interests. Start by picking easy tickets - GHC is a huge codebase and it takes a while to get familiar with it.

Other than that, I'd say some of the tooling could use some IDE integration (e.g., VS Code plugins).

[1] https://discourse.haskell.org/


thanks!


Gosh, I really wish GCC had more/better documentation. Especially big picture stuff. E.g., I would like to know what register allocation algorithms it uses (and how certain details are handled), but looking at that code I noped out...


It uses passes they call IRA (the integrated register allocator) and LRA (the local register allocator) on the RTL representation. But the bottom line is: it's graph colouring.

https://gcc.gnu.org/onlinedocs/gccint/RTL-passes.html

There's really quite a lot of documentation and published papers when you actually look:

https://github.com/gcc-mirror/gcc/blob/751f306688508b08842d0...
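
To make the graph-colouring idea concrete, here is a toy Chaitin/Briggs-style simplify-and-select allocator in Python. This is my own sketch, not GCC's code: the real IRA/LRA additionally deals with register classes, costs, coalescing, rematerialization and spill-code generation.

```python
from copy import deepcopy

def colour_registers(interference, k):
    """interference: dict var -> set of interfering vars (symmetric).
    k: number of physical registers. Returns (assignment, spilled):
    a register index per var, plus any vars that got no colour."""
    graph = {v: set(ns) for v, ns in interference.items()}
    work = deepcopy(graph)
    stack = []
    # Simplify phase: keep removing a node with fewer than k neighbours;
    # if none exists, optimistically push a high-degree node (Briggs).
    while work:
        v = next((u for u in work if len(work[u]) < k), None)
        if v is None:
            v = max(work, key=lambda u: len(work[u]))  # naive spill pick
        stack.append(v)
        for n in work[v]:
            work[n].discard(v)
        del work[v]
    # Select phase: pop nodes and give each the lowest free colour.
    assignment, spilled = {}, set()
    while stack:
        v = stack.pop()
        taken = {assignment[n] for n in graph[v] if n in assignment}
        free = [r for r in range(k) if r not in taken]
        if free:
            assignment[v] = free[0]
        else:
            spilled.add(v)  # would be rewritten with loads/stores
    return assignment, spilled

# Triangle of live ranges plus one extra node: colourable with 3 registers.
g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
regs, spills = colour_registers(g, 3)
print(regs, spills)
```

With k=2 the same graph forces a spill, which is exactly where the interesting heuristics (and most of IRA's complexity) live.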


LLVM: Lots of documentation for frontend authors, not so much for backend stuff.

GCC: Lots of documentation for backend authors, actively made difficult to use for frontends for many years (although things have improved dramatically)


How have things evolved for LLVM backend authors? Has it improved too?


You can read GCC Summit presentations for things like that. The register allocator is called IRA/LRA. It used to have a ball of mud called reload that isn’t worth understanding because it doesn’t make much sense.

GCC’s code style is strange because the original authors wanted to make it look like Lisp for some reason.


Err, I always thought it looked strange because it was "C with classes", before or after some holy war.

While gcc (and in general compiler) plugins are some of the most interesting tech enablers (be it for fuzzing, static analysis, or runtime-check injection), 'people competently maintaining gcc plugins' (a sect I'm no longer part of, thank dog) are amongst the most patient, devoted, unsung angels of this world.


It’s in C++ now. The weird spacing and functions ending in _p are Lispisms.

It’s also garbage collected, so it’s still not “normal” C++, but then neither is LLVM.

Not sure about the plugin API, but C++ is basically impossible to use with plugins because it’s so hard to keep ABI contracts, so it might not have changed.


Well, you basically have to compile the plugins against your gcc's headers anyway, and they're GPL by default (same as Wireshark dissectors, IIRC). No, the pain is all the churn in gcc internals and in the plugin APIs over the years. You basically become an ifdef monkey and end up testing myriads of gcc versions...


Isn’t the compiler the one that makes the ABI contract?

And I’ve seen _p and friends all over the place usually to differentiate between a pointer and, umm, not pointer. I thought it was a C++ism to be honest.


C++ has things like the fragile base class problem, meaning you can accidentally break ABI easily. There are also issues with throwing exceptions across library boundaries on some platforms (maybe just Windows?), but I forget the reason why.

p is short for predicate.


Fragile base class problem affects all languages that offer some kind of inheritance, not only C++.


You are of course correct in a broad sense.

However, when the discussion already mentions "ABI contracts", they're probably referring specifically to the "fragile binary interface problem" (especially regarding member field access), which does not affect all languages that offer inheritance.

As the Wikipedia article mentions, this more specific problem is (confusingly) often referred to just as the "fragile base class problem".

https://en.wikipedia.org/wiki/Fragile_binary_interface_probl...
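
A rough way to see the offset problem, using Python's ctypes struct layouts as a stand-in for the field offsets a C++ compiler bakes into client binaries (toy illustration, assumes 4-byte ints):

```python
import ctypes

class BaseV1(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int)]

class BaseV2(ctypes.Structure):  # library update adds a field
    _fields_ = [("x", ctypes.c_int), ("extra", ctypes.c_int)]

class DerivedV1(BaseV1):
    _fields_ = [("y", ctypes.c_int)]

class DerivedV2(BaseV2):
    _fields_ = [("y", ctypes.c_int)]

# A client compiled against V1 bakes in offsetof(Derived, y) == 4.
# After the base class grows, the real offset moves to 8, so an old
# binary reading at offset 4 would now hit BaseV2.extra instead of y.
print(DerivedV1.y.offset, DerivedV2.y.offset)  # 4 8
```

Objective-C's non-fragile ivars avoid this by resolving the offset at load time instead of compile time, which is the extra indirection mentioned above.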


It still affects all compiled languages that offer inheritance, and OOP ABIs like COM.


Maybe I'm still misunderstanding you, but (modern) Objective-C is an example of a compiled language without the fragile base class ABI problem.

(It pays for this with extra indirection at runtime, of course: ivar accesses must first look up their runtime-resolved offset.)


Maybe the Lispisms are Stallman's legacy. He is a great Lisp proponent, after all.


This might be less true now, but for a long time gcc's code was terrible and undocumented on purpose. rms wanted it that way, to make it harder for it to be forked or EEE'd by corporations.

Whether that was a good plan is up for debate, but there you go.


People say that dmd's backend is terrible and undocumented, but I don't know what they're talking about:

https://github.com/dlang/dmd/tree/master/src/dmd/backend


> I would like to know what register allocation algorithms it uses

I'm wondering why you'd like to know. If it is just for your curiosity, that's very good. If you want to participate in the compiler development effort, hat tip!

But if you are thinking about tuning your code to such an internal detail, please don't! Coding to an implementation, rather than an interface, is never a good idea.


This is very fundamental not only in maths but in theoretical computer science in general, even for proving how "hard" a problem is. E.g., look into (Turing/many-one) reductions [1].

Many problems can be reduced to SAT, and then you can employ an off-the-shelf SAT/SMT solver to solve them for you. Etc. etc.

But in general, being able to reduce problem A to problem B implies that A is not harder than B. For example, to show that a problem P is undecidable, you can reduce the Halting problem to P.

TSP is NP-hard and can be reduced to SAT, then solved with a SAT solver. But something like determining whether a player has a winning strategy in generalized chess (on an nxn board, for arbitrary n) is EXPTIME-complete, so it can't be polynomial-time reduced to an "easier" problem like SAT (unless NP = EXPTIME).
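
As a toy example of such a many-one reduction, here is graph 3-colouring encoded as CNF-SAT in Python. The names are my own, and the brute-force check stands in for a real off-the-shelf solver (MiniSat, Z3, ...):

```python
from itertools import product

def three_colouring_to_cnf(n_vertices, edges):
    """Encode 3-colourability as CNF clauses (lists of signed ints).
    Variable var(v, c) is true iff vertex v gets colour c."""
    var = lambda v, c: 3 * v + c + 1
    clauses = []
    for v in range(n_vertices):
        clauses.append([var(v, c) for c in range(3)])       # >= 1 colour
        for c1 in range(3):
            for c2 in range(c1 + 1, 3):
                clauses.append([-var(v, c1), -var(v, c2)])  # <= 1 colour
    for u, w in edges:
        for c in range(3):
            clauses.append([-var(u, c), -var(w, c)])        # endpoints differ
    return clauses

def brute_force_sat(clauses, n_vars):
    """Stand-in for a real SAT solver: try every assignment."""
    for bits in product([False, True], repeat=n_vars):
        value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        if all(any(value(l) for l in clause) for clause in clauses):
            return True
    return False

triangle = three_colouring_to_cnf(3, [(0, 1), (1, 2), (0, 2)])
print(brute_force_sat(triangle, 9))   # True: a triangle is 3-colourable
k4 = three_colouring_to_cnf(4, [(u, w) for u in range(4) for w in range(u + 1, 4)])
print(brute_force_sat(k4, 12))        # False: K4 needs 4 colours
```

The reduction direction is what carries the hardness argument: a fast 3-colouring oracle would give you a fast SAT procedure via the reverse reduction, not this one.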

Edit: Someone ITT also mentioned ILP (integer linear programming). Also a good example: many optimization problems can be mapped to ILP.

[1] https://en.wikipedia.org/wiki/Turing_reduction


Uff, I started working part-time pretty soon after starting my bachelor's and have studied and worked part-time for many years. It's terrible. Worst of both worlds. Now I've quit my job and am trying to finish my master's thesis. What surprises me is that he took so few classes. This seems to be different in every country (note: I'm in Europe). When I did a semester abroad, I got way more credits per class. Here all the classes are either 3 or 6 credits and you need 120 (30 for the thesis).

Anyway, I do like the academic side of things and got into so many topics that I would never have encountered otherwise (compilers, formal methods...), so I wouldn't want to miss it. Just wish I had finished sooner...


That's not really true. There was a huge outcry against the removal of (/=). There are still lots of warts in Prelude and base (head being partial; foldl being in the Prelude but not foldl'). So yeah, language evolution is still a hard problem.


You are right that it's not "really" true, but I do think that at least it's not wholly untrue. Foldable/Traversable got through, and so did Monad Of No Return, the Functor-Applicative-Monad Proposal and several more that I can't name off the top of my head. It does happen, even if we both would like progress to be quicker and more drastic :)

IMO, the existence of the Haskell Report and the inability of the community to update it in a reasonably timely manner is the main cause of the persistence of the biggest warts like partial head and foldl. I don't think anyone wants to keep those, but "The Haskell Report specifies that they are in the Prelude, with the exact implementation they have" tends to kill any discussion. Let's hope the HF makes some progress on that soon!



Related: Potree WebGL point cloud renderer https://potree.github.io/


Uff, φ is all sold out :smh:


Relevant: "Hillel Wayne is Designing Distributed Systems with TLA+" https://www.youtube.com/watch?v=qubS_wGgwO0

