smj-edison's comments | Hacker News

I'm taking CS in college right now, and when we do our projects we're required to have an editor plugin that records every change made. That way when they grade it, they see how the code evolved over time, and not just the final product. Copying and pasting has very distinct editor patterns, whereas organically developed code tends to morph over time.

Which editor plugin are you using?

I looked to see if BYU had made the source code available, but it doesn't look like they've published it. It's called code recorder, and before we do an assignment we have to enable recording. It generates a .json file that lists every single edit made in terms of a textual diff. They must have some sort of tool that reconstructs it when they grade. Sorry I don't know more!

Edit: I expect it wouldn't be super hard to create, though: you'd just have to hook into the editor's change event, compute the diff to make sure you don't lose anything, and then append it to the end of the JSON file.
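For the logging half, something like this is the shape I'd expect (a rough sketch with made-up names, using JSON lines for easy appending; Zig's std APIs shift between versions, so treat the details loosely):

    const std = @import("std");

    // One recorded edit: when and where it happened, and what
    // replaced what.
    const Edit = struct {
        timestamp_ms: i64,
        offset: usize, // byte offset of the change in the buffer
        deleted: []const u8, // text that was removed
        inserted: []const u8, // text that was added
    };

    // Append one edit as a JSON line at the end of the log, so the
    // whole session can be replayed in order at grading time.
    fn appendEdit(allocator: std.mem.Allocator, log: std.fs.File, edit: Edit) !void {
        const line = try std.json.stringifyAlloc(allocator, edit, .{});
        defer allocator.free(line);
        try log.seekFromEnd(0);
        try log.writeAll(line);
        try log.writeAll("\n");
    }

The real tool would be wired to the editor's change callback; the only interesting part is that the log is append-only, so the history can be replayed edit by edit.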


Very interesting, thanks for the insight into modern uni. It’s been a long time since I was there, and I struggle to imagine what it must be like now.

It does seem like they’re going the wrong way, repelling tech to keep things easy instead of embracing new tech by updating their teaching methods.

But I also think we’ve collectively fallen flat in figuring out what those methods are.


I think it's fair for the projects, since when you first write code you're learning to think like a computer. Their AI policy is that it's fine to ask it questions and have it explain concepts, but the project assignments need to be done without AI.

The one requirement I think is dumb, though, is that we're not allowed to use the language's documentation for the final project, which makes no sense. Especially since my Python is rusty.

Since you mentioned failure to figure out what better teaching methods are, I feel it's my sworn duty to put in a plug for https://dynamicland.org and https://folk.computer, if you haven't heard of them :)


> I am not sure we have the 'best way' to teach anything computer related.

Not saying this is the best way, but have you followed any of Bret Victor's work with dynamicland[1]?

[1] https://dynamicland.org/


Yeah, and I think it is amazing, but at the same time it will work for some and not for others.

The same way Scratch works for some, redstone for others, and https://strudel.cc/ for a third.

I think the truth is that we are more different than alike, and computers are quite strange.

I personally was coding professionally, writing hundreds of lines of code per day for years, and now I look at that code and I can see that I was not just bad; I literally did not know what programming was.

Human code is an expression of the mind that thinks it. Some languages allow us to see further into the author's mind: Forth and Lisp leak the most; C also leaks quite a lot (e.g. reading antirez's code, https://justine.lol/lambda/, phk, or even K&R); Go leaks the least, I think.

Anyway, my point is, programming is quite personal, and many people have to find their own way.

PS: what I call programming is very distant from "professional software development"


I think one thing I've seen missing from these discussions, though, is that each level of abstraction needs to be introspectable. LLMs get compared to compilers a lot, so I'd like to ask: what is the equivalent of dumping the tokens, AST, SSA, IR, optimization passes, and assembly?

That's where I find the analogy on thin ice, because somebody has to understand the layers and their transformations.
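To make that concrete: with a traditional toolchain, every one of those layers has a dump flag. For example, with Clang (illustrative; exact flags vary by compiler and version):

    clang -E main.c                               # after the preprocessor
    clang -fsyntax-only -Xclang -ast-dump main.c  # the AST
    clang -S -emit-llvm main.c                    # LLVM IR (in SSA form)
    clang -S main.c                               # the final assembly

Each of those layers was built so a human could look at it when something goes wrong. I don't know what the equivalent artifact trail for an LLM would even look like.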


“Needs to be” is a strong claim. The skill of debugging complex problems by stepping through disassembly to find a compiler bug is very specialized. Few can do it. Most applications don’t need that “introspection”. They need “encapsulation” and faith that the lower layers work well 99.9+% of the time, and they need to know who to call when it fails.

I’m not saying generative AI meets this standard, but it’s different from what you’re saying.


Sorry, I should clarify: it needs to be introspectable by somebody. Not every programmer needs to be able to introspect the lower layers, but that capability needs to exist.

Now I guess you can read the code an LLM generates, so maybe that layer does exist. But that's why I don't like the idea of making a programming language for LLMs, by LLMs, that's inscrutable to humans. A lot of the intermediate layers in compilers are designed for humans, with only assembly generation being made for the CPU.


This is a good point but may be moot. Our consumer-facing LLMs speak C, Python, and JavaScript.

'Decompilers' work in the machine-code direction for human consumption; they can be improved by LLMs.

Militarily, you will want systems capable of machine code and JS.

Machine-code capabilities cover both memory leaks and firmware dumps, and negate the requirement of "source" comprehension.

I wanted to +1 you but I don't think I have the karma required.


Also, smuggling a single binary out of a set of systems is likely far easier than targeting a source code repository or devbox directly.


Yeah. It's really important that you have a platform with your users, because that's how you introduce them to features (not always a bad thing). It becomes a touch point with your customers. Google and Apple have their phones, Meta is desperately trying to get their smart glasses working, Amazon is also desperately trying to get something together with Alexa, and Microsoft is... Throwing theirs away?


It reminds me of how Sussman talked about how someday we'd have computers so small and cheap that we'd mix dozens into our concrete and scatter them throughout our space.


Russia started mixing diodes into concrete a while ago: https://news.ycombinator.com/item?id=41933979


A Deepness in the Sky by Vinge has this as a minor plot point.


I've tried to use Claude Code with Sonnet 4.5 for implementing a new interpreter, and man is it bad with reference counting. Granted, I'm doing it in Zig, so there's not as much training data, but Claude will suggest the most stupid changes. All they do is make the rare case of incorrect reference counting rarer, without fixing the underlying problem. It kept heaping on more and more hacks, until I decided enough was enough and rolled up my sleeves. I still can't tell if it makes me faster, or if I'm faster on my own.

Even when refactoring, it would change all my comments, which is really annoying, as I put a lot of thought into my comments. Plus, the time it took to do each refactoring step was about how long it would have taken me, and when I do it myself I get the additional benefit of feeling when I'm repeating code too often.

So, I'm not using it for now, except for isolating bugs. It's addictive having it do the work for me, but I end up feeling disconnected, and then something inevitably goes wrong.


I'm also building a language in Zig!

Good luck!


Oh cool! I'd love to hear more. I'm implementing an existing language, Tcl, but I'm working on making it safe to share values between threads, since a project I contribute to[1] uses Tcl for all its scripting, but it has about a 30% overhead from serialization/deserialization between threads, and it doesn't allow sharing large values without significant overhead. I'm also doing some experiments with heap representation to reduce data indirection, so it's been fun getting to learn how to implement malloc and other low-level primitives I usually take for granted.
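The heart of the thread-safe sharing is just an atomic reference count on each value, roughly like this (a simplified sketch, not the real code; atomic orderings and std APIs vary by Zig version):

    const std = @import("std");

    // A Tcl-style value that can be shared across threads: the
    // payload is immutable, and the refcount is atomic, so no lock
    // is needed just to retain/release.
    const Value = struct {
        refs: std.atomic.Value(usize) = std.atomic.Value(usize).init(1),
        bytes: []const u8,

        fn retain(self: *Value) void {
            // Monotonic is enough for the increment: the caller
            // already holds a reference, so the value can't be
            // freed out from under us.
            _ = self.refs.fetchAdd(1, .monotonic);
        }

        fn release(self: *Value, allocator: std.mem.Allocator) void {
            // Release on the decrement, acquire before the free, so
            // every write to the value happens-before its destruction.
            if (self.refs.fetchSub(1, .release) == 1) {
                _ = self.refs.load(.acquire);
                allocator.free(self.bytes);
                allocator.destroy(self);
            }
        }
    };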

[1] folk.computer


It's important to distinguish between the Framework 13 and the Framework 16. The Framework 16 is by far the more ambitious of the two, and so it has had a lot more issues. I use a Framework 13, and I've loved it. It's light, has a solid frame, and runs Linux great. The battery life isn't great, and neither are the speakers, but I've been able to mitigate the latter with EasyEffects.


It's also really ergonomic with `errdefer` and `try`.
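For example, a freshly retained value gets released automatically on every error path, with no hand-written cleanup after each fallible step (hypothetical names, assuming some refcounted Value type with retain/release):

    const Pair = struct { first: *Value, second: *Value };

    // `errdefer` only fires if the function returns an error, so the
    // happy path keeps its references and every failure path cleans up.
    fn makePair(allocator: std.mem.Allocator, a: *Value, b: *Value) !*Pair {
        a.retain();
        errdefer a.release(allocator);

        b.retain();
        errdefer b.release(allocator);

        const pair = try allocator.create(Pair);
        pair.* = .{ .first = a, .second = b };
        return pair;
    }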


Gilad Bracha talks about how they're not mutually exclusive concepts, and I mostly agree (OOP can have tail-call recursion and first-class functions, for example). But the philosophies seem very different: functional programming is "standing above" the data, where you have visibility at all times and apply transformations to it. OOP is much more about encapsulation, where you "send a message" to an object and it does its own thing. So you could totally write OOP code where you provide some core data structures that you run operations on, but in practice encapsulation encourages hiding internal data.

Though on further thought, maybe this isn't FP vs OOP, because C has a similar "standing above" approach, and C is the hallmark imperative language.
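A toy contrast (made-up example): in the first style the data is plain and visible and functions transform it; in the second, the object owns its state and you just ask it to act.

    // "Standing above" the data: plain struct, free function.
    const Point = struct { x: f64, y: f64 };

    fn translated(p: Point, dx: f64, dy: f64) Point {
        return .{ .x = p.x + dx, .y = p.y + dy };
    }

    // "Sending a message": callers go through the method and aren't
    // meant to touch the state. (Zig only hides fields by convention,
    // but the intent is the same.)
    const Counter = struct {
        count: u32 = 0,

        pub fn increment(self: *Counter) void {
            self.count += 1;
        }
    };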


As long as you keep C pointers as pointers. The mutable aliasing rules can bite you, though.

