Not exactly. They don't have the final say, so if they disagree with something, they can (and will) be overruled. But they don't "stamp" things and aren't otherwise made to approve what they don't like.
> That being said. I have no idea how you'd actually go about teaching students CS these days, considering a lot of them will probably use ChatGPT or Claude regardless of what you do.
My son is in a CS school in France. They have finals with pen and paper, with no computer whatsoever during the exam; if they can't do that, they fail. And these aren't multiple-choice questions, but actual code that they have to write.
I had to do that too, in Norway. Writing C++ code with pen and paper and being told even trivial syntax errors like missing semicolons would be penalised was not fun.
This was 30 years ago, though - no idea what it is like now. It didn't feel very meaningful even then.
But there's a vast chasm between that and letting people use AI in an exam setting. Some middle ground would be nice.
I wrote assembler on pages of paper. Then I used tables, and a calculator for the two's-complement relative negative jumps, to manually translate it into hex code. Then I had software to type in such hex dumps and save them to audio cassette, from which I could then load them for execution.
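For anyone who never did this by hand: the offset arithmetic is just a signed byte counted from the address of the next instruction. A minimal C sketch, assuming a 6502-style two-byte branch encoding (the comment doesn't say which CPU it was):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: the 8-bit two's-complement offset for a branch,
       counted from the address of the instruction after the two-byte opcode. */
    static uint8_t rel_offset(uint16_t branch_addr, uint16_t target_addr)
    {
        int diff = (int)target_addr - (int)(branch_addr + 2);
        /* On paper you would also check that -128 <= diff <= 127. */
        return (uint8_t)(diff & 0xFF);   /* the byte that goes into the hex dump */
    }

    int main(void)
    {
        /* A branch at $C010 back to a loop head at $C000: -18 encodes as $EE. */
        printf("%02X\n", rel_offset(0xC010, 0xC000));
        return 0;
    }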
I did not have an assembler for my computer. I had a disassembler though- manually typed it in from a computer magazine hex dump, and saved it on an audio cassette. With the disassembler I could check if I had translated everything correctly into hex, including the relative jumps.
The planning required to write programs on sheets of paper was very helpful. I felt I got a lot dumber once I had a PC and actual programmer software (e.g. Borland C++). I found I was sitting in front of an empty code file without a plan more often than not, and wrote code moment to moment, immediately compiling and test running.
The AI coding may actually not be so bad if it encourages people to start with high-level planning instead of jumping into the IDE right away.
Now if only you had read to the end of my comment, to recognize that I was setting up for something, and also applied not just one but several HN guidelines (https://news.ycombinator.com/newsguidelines.html, under "comments")...
We live in a global world and this is super common nowadays. In my own family, 2 out of 3 siblings are married to someone who was born on a different continent, one in Asia, the other in Latin America.
And we both met them here in Europe.
People are so welcoming in Latin America that when you marry someone, you literally marry the whole extended family. After just a handful of years, it's not like my partner's aunts and cousins are strangers to me. I can contact them anytime for advice on a topic related to their work/career field, and they will do the same about mine.
Add to that some cousins and friends who moved overseas and I have many regular contacts that live more than 10000km away from me.
It's not really the "overseas" use case that is the sticking point for many businesses.
Does your business in Spain ever need to message Brits who are there on holiday? Does your business in Greece ever have customers who drive across the border from Albania?
I don't think that's really true. You still need to think to make what you want to make. You still have to design the program. You just do less typing.
In a sense, AI coding is like using a 3D printer. The machine outputs the final object, but you absolutely decide how it will look and how it will work.
I use LLMs for coding in the exact opposite way as described in the video. The video says that most people start big, then the LLM fails, then they reduce the scope more and more until they're actually doing most of the work while thinking it's all the machine's work.
I use AI in two ways. With Python I ask it to write micro functions and I do all of the general architecture. This saves a lot of time, but I could do without AI if need be.
But recently I also started making small C utilities that each do exactly one thing, and for those, the LLMs write most if not all of the code. I start very small with a tiny proof of concept and iterate over it, adding functionality here and there until I'm satisfied. I still inspect the code and suggest refactorings, or moving things into independent, reusable modules for static linking, etc.
But I'm not a C coder and I couldn't make any of these apps without AI.
Since the beginning of the year, I made four of them. The code is probably subpar, but they all work great, never crash, and I use them every day.
I wonder what sort of training data the AI was fed. It's possible that, if whatever it relied on most were put together into a reference cookbook, a human could do most of the work almost as fast with ordinary searches of that data, and in an overall more efficient way.
What about stack/buffer overflows, use after free and all of the nasty memory alloc/dealloc security pitfalls? These are what I would worry about with C programs.
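For the record, a couple of the pitfalls being asked about, as deliberately buggy C snippets (illustrative only, not taken from the utilities discussed above):

    #include <stdlib.h>
    #include <string.h>

    void overflow_example(const char *user_input)
    {
        char buf[8];
        strcpy(buf, user_input);   /* stack buffer overflow if input exceeds 7 chars */
    }

    const char *use_after_free_example(void)
    {
        char *p = malloc(16);
        if (!p) return NULL;
        strcpy(p, "temporary");
        free(p);
        return p;                  /* dangling pointer: any later use is undefined behaviour */
    }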
A "constitution" is what the governed allow or forbid the government to do. It is decided and granted by the governed, who are the rulers, TO the government, which is a servant ("civil servant").
Therefore, a constitution for a service cannot be written by the inventors, producers, owners of said service.
This is a play on words, and it feels very wrong from the start.
You're fixated on just one of the 3 definitions for the word "constitution": the one about government.
The more general definition of "constitution" is "that which constitutes" a thing. The composition of it.
If Claude has an ego, with values, ethics, and beliefs of an etymological origin, then it makes sense to write those all down as the "constitution" of the ego: the stuff that constitutes it.
They seem to not conceive of their creation as a service (software-as-a-service). In their minds, the creation(s) resemble(s) an entity, destined to become the mother ship of services (adjacent analogies: a state with capital s, a body politic,..). Notice how they've refrained from equating them to tools, prototypes or toys. Hence, constitution.
These are the first sentences of the abstract of a research paper co-authored in 2022 by some of the owners/inventors steering the lab business (whose experimentation we are subject to as end-users):
"As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as ‘Constitutional AI’." https://arxiv.org/pdf/2212.08073
I (and I suspect many others) usually think of a constitution as “the hard-to-edit meta-rules that govern the normal rules”. The idea that the stuff in this document can sort of “override” the system prompt and constrain the things that Claude can do would seem to make that a useful metaphor. And metaphors don’t have to be 100% on the nose to be useful.
How are there so many people in this thread, yourself included, that are so confidently wrong and so brazen about announcing how confidently wrong they are to everyone?
I don’t think it’s wrong to see it as Anthropic’s constitution that Claude has to follow. Claude governs over your data/property when you ask it to perform as an agent, similarly to how company directors govern the company which is the shareholders property. I think it’s just semantics.
Agree 100%; and the analogy with SEO is spot on! Those were everywhere 20 years ago. They're mostly gone, and so are their secret recipes and special tags and whatnot. AI gurus are the same! Not the same people but the same profile. It's so obvious.
"Comment NEAT to receive the link, and don't forget to connect so I can email you" -- this is the most infuriating line ever.