Hacker News | egl2020's comments

I once had a 1.75-hour commute each way, 3.5 hours daily. I had to program during the commute---otherwise there weren't enough hours in the day to get my work done. There were periods of laptop-closed thinking, but no daydreaming or looking at the scenery. No Internet connectivity, so I had to plan carefully.

Can anyone provide some color around this: "I started porting esbuild's JSX & TypeScript transpiler from Go to Zig"? Hypothetical benefits include a single language for development, better interoperability with C and C++, no garbage collection, and better performance. Which of these turned out to be realized and relevant here? Please, no speculation or language flame wars.


Karpathy colorfully described RL as "sucking supervision bits through a straw".


The AlphaGo paper might be what you need. It requires some work to understand, but is clearly written. I read it when it came out and was confident enough to give a talk on it. (I don't have the slides any more; I did this when I was at a FAANG and left them behind.)


Three considerations come into play in deciding whether to use RL: 1) how informative is the loss on each example, 2) can you see how to adjust the model based on the loss signal, and 3) how complex is the feature space?

For the house value problem, you can quantify how far the prediction is from the true value, there are lots of regression models with proven methods of adjusting the model parameters (e.g. gradient descent), and the feature space comprises mostly monotone, weakly interacting features like quality of neighborhood schools and square footage. It's a "traditional" problem and can be solved as well as possible by the traditional methods we know and love. RL is unnecessary, might require more data than you have, and might produce an inferior result.
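That traditional approach can be sketched in a few lines: a single-feature linear model fit by gradient descent on squared loss, where each example gives an informative, directly usable signal. The data, learning rate, and step count below are invented for illustration.

```go
package main

import "fmt"

// fit runs plain gradient descent on mean squared error for y ≈ w*x + b.
// The per-example loss gradient tells us exactly how to adjust w and b —
// the property the comment contrasts with RL's sparse won/lost signal.
func fit(x, y []float64, lr float64, steps int) (w, b float64) {
	n := float64(len(x))
	for i := 0; i < steps; i++ {
		var dw, db float64
		for j := range x {
			pred := w*x[j] + b
			dw += 2 * (pred - y[j]) * x[j] / n
			db += 2 * (pred - y[j]) / n
		}
		w -= lr * dw
		b -= lr * db
	}
	return w, b
}

func main() {
	// Square footage (thousands) vs. price ($100k) — invented numbers.
	sqft := []float64{1.0, 1.5, 2.0, 2.5, 3.0}
	price := []float64{2.0, 2.9, 4.1, 5.0, 6.1}
	w, b := fit(sqft, price, 0.1, 5000)
	fmt.Printf("price ~ %.2f*sqft + %.2f\n", w, b)
}
```

A real house-value model would have many features (schools, square footage, ...), but the mechanics are the same, which is why RL buys nothing here.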

In contrast, for a sequential decision problem like playing go, the binary won-lost signal doesn't tell us much about how well or poorly the game was played, it's not clear how to improve the strategy, and there are a large number of moves at each turn with no evident ranking. In this setting RL is a difficult but possible approach.


Based on my experience, I would expect that the LLM was wrong about some of those details. Of course, your mileage (see what I did there?) may vary.


Of course it was. No wronger than I'd have been had I started looking into it without its help. On the other hand, reading code while driving is terribly dangerous, while hands-free ChatGPT is easy. Moreover, I prefer talking for some things.


I still have the wooden 10" Keuffel and Esser that I inherited from my father and that I used in college. These days I use my HP15C unless I want to provoke glee and amusement in my younger colleagues by sporting my Pickett slide rule in my shirt pocket.


K&E's are classic. What do you think was the most popular Pickett model?


I think MIT ended up with the K&E collection. I haven't had a chance to tour the MIT Museum in its new digs so I'm not sure what's on display.


The Microline series; antique stores are full of them. Every high-school or lower-undergrad boomer had one or a similar clone, and they show up in antique stores and on eBay all the time. The 80 and 120 are about the same size and sell for about $20, and I don't bother buying them anymore when I see them. The 80 puts the T scale on top and the 120 more usefully puts it in the slider, IIRC, so you can chain calculations.

Grad students or undergrad STEM students would have something like a 900 series; I have several, very nice. This is a desk rule; it will not fit in a pocket. Something like a 600 series is a short pocket model, anodized aluminum, very nice and desirable.

The Microline series was definitely made to a price point, and unless you find one in unusually good condition or it's your first collector rule, I would not bother picking it up. They stick very strongly, the cursor cracks after half a century, they are slippery in the hand, they warp more than most rules, and I don't think they're easy to read. They were cheap to make and cheap to buy.

Slide rules in the 2020s are an efficient market; something that barely works (the Walmart solar calculator of its generation) like a Microline sells for around $20 today, while a VERY desirable N600 series sells for like a hundred bucks, and I think it's a bargain at that price.

If you mean most popular as in most desired today, not most sold back in the day, that's probably the 600 series or specialty rules; for example, I have an N-16-ES with the electronics engineering scales. The latter sells for about as much as a working HP48 calculator, which is interesting. If you mean popular as in attractive, that is surely the Faber-Castell short 83N series; I think that's a 62/83N. I would like one of those LOL. Unleash 1960s German graphic artists on industrial design and tell them to make the coolest-looking slide rule possible under 60s industrial design rules, and you get the 83N series. A very, very cool way to spend $300 or so; it's the kind of thing you put in a lighted display case to admire.


Wow, thanks. This is an incredible deep dive and I obviously came to the right place for that question. This kind of detailed comment is why I still appreciate HN so much...


How does this compare to something that might be offered in a strong computer science, computer engineering, or electrical engineering program in the U.S. or Europe?


It’s not really the same scope, but Stanford had (has?) a course where you literally fab a simple computer chip yourself, from bare silicon to rudimentary packaging. It takes a team of 4 one quarter, working pretty much around the clock.

Edit: https://explorecourses.stanford.edu/search?view=catalog&filt...


From the title, I thought he was going to explain "eigenvalue".


I've been writing go professionally for about ten years, and with go I regularly find myself saying "this is pretty boring", followed by "but that's a good thing" because I'm pretty sure that I won't do anything in a go program that would cause the other team members much trouble if I were to get run over by a bus or die of boredom.

In contrast writing C++ feels like solving an endless series of puzzles, and there is a constant temptation to do Something Really Clever.


> I'm pretty sure that I won't do anything in a go program that would cause the other team members much trouble

Alas there are plenty of people who do[0] - for some reason Go takes architecture-astronaut brain and whacks it up to 11, and god help you if you have one or more of those on your team.

[0] flashbacks to the interface calling an interface calling an interface calling an interface I dealt with last year - NONE OF WHICH WERE NEEDED because it was a bloody hardcoded value in the end.


My cardinal rule in Go is just don't use interfaces unless you really, really need to and there's no other way. If you're using interfaces you're probably up to no good and writing Java-ish code in Go. (usually the right reason to use interfaces is exportability)

Yes, not even for testing. Use monkey-patching instead.


> My cardinal rule in Go is just don't use interfaces unless you really, really need to and there's no other way.

They do make some sense for swappable doodahs - like buffers / strings / filehandles you can write to - but those tend to be in the lower levels (libraries) rather than application code.


Go is okay. I don't hate it but I certainly don't love it.

The packaging story is better than C++'s or Python's, but that's not saying much; the way it handles private repos is a colossal pain, and the fact that originally you had to have everything under one particular blessed directory, with modules an afterthought, sure speaks volumes about the critical thinking (or lack thereof) that went into the design.

Also I miss being able to use exceptions.


When Go was new, having better package management than Python and C++ was saying a lot. I’m sure Go wasn’t the first, but there weren’t many mainstream languages that didn’t make you learn some imperative DSL just to add dependencies.


Sure, but all those languages didn't have the psychotic design that mandated that all your code live under $GOPATH for the first several versions.

I'm not saying it's awful, it's just a pretty mid language, is all.


I picked up Go precisely in 2012 because $GOPATH (as bad as it was) was infinitely better than CMake, Gradle, Autotools, pip, etc. It was dead simple to do basic dependency management and get an executable binary out. In any other mainstream language on offer at the time, you had to learn an entire programming language just to script your meta build system before you could even begin writing code, and that build system programming language was often more complex than Go.


That was a Plan9ism, I think. Java had something like it with CLASSPATH too, didn't it?


I never understood the GOPATH freakout; coming from Python it seemed really natural: it's a mandatory virtualenv.


The fact that virtualenv exists at all should be viewed by the python community as a source of profound shame.

The idea that it's natural and accepted that we just have Python 3.11, 3.12, 3.13, etc. all coexisting, each with its own incompatible package ecosystem, and in use on an ad-hoc, per-directory basis just seems fundamentally insane to me.


The language has changed a lot since then. Give it a fresh look sometime.


It's still pretty mid and still missing basic things like sets.

But mid is not all that bad and Go has a compelling developer experience that's hard to beat. They just made some unfortunate choices at the beginning that will always hold it back.
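For reference, the usual workaround for the missing set type is a map with empty-struct values (zero bytes per entry). This generic wrapper is just an illustrative sketch, not a standard API:

```go
package main

import "fmt"

// set is the idiomatic stand-in for Go's missing built-in set type:
// a map whose values carry no data.
type set[T comparable] map[T]struct{}

func (s set[T]) add(v T)      { s[v] = struct{}{} }
func (s set[T]) has(v T) bool { _, ok := s[v]; return ok }
func (s set[T]) remove(v T)   { delete(s, v) }

func main() {
	langs := set[string]{}
	langs.add("go")
	langs.add("go") // duplicates are free
	langs.add("python")
	langs.remove("python")
	fmt.Println(langs.has("go"), langs.has("python"), len(langs))
	// prints "true false 1"
}
```

It works, but every codebase reinvents it slightly differently, which is the kind of early omission the comment is complaining about.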

