> I think the right mental model is pass-by-value for the first two. There is nothing different in the calling convention between sending a parameter of type int* vs a parameter of type int.
You're talking about parameters of type int; I'm talking about structs that are strictly larger than pointers: structs that may be nested, for which deep copies are necessary to avoid memory leaks and corruption. And here, the distinction between these "mental models" exhibits a massive gap in real performance.
Here's a deliberately pathological case in C++; I've seen this error countless times from programmers in languages that make a distinction between references/pointers and values:
#include <cstddef>
#include <vector>

using std::vector;

bool vector_compare(vector<int> vec, size_t i, size_t j) {
    // vec is taken by value: every call deep-copies the entire vector.
    return vec[i] < vec[j];
}

int vector_argmin(vector<int> vec) {
    if (vec.size()) {
        size_t arg = 0;
        for (size_t i = 1; i < vec.size(); i++) {
            if (vector_compare(vec, i, arg))  // copies vec again on every iteration
                arg = i;
        }
        return arg;
    } else return -1;
}
The vector_compare function makes a copy of the full vector before doing its thing; this ends up turning my linear-looking runtime into an accidentally-quadratic one. From the perspective of this solitary example, it would make sense to collapse reference/pointer into the same category and leave "value" on its own.
But actually these are three distinct concepts, with nuance and overlap, that should be taught to anybody with more than a passing interest in languages and compilers. I'm not here to weigh in on what constitutes a modern language, but the notion that we should just throw this crucial distinction away because some half-rate programmers don't understand it is patently offensive.
My point is the same for int as for vector<int>. There is zero difference in the C++ calling convention between passing a vector<int> and a vector<int>*: they both copy an object of the parameter type. Of course, copying a 1000-element vector is much slower than copying a single pointer, but the difference is strictly the size of the type. The copying occurs the same way regardless. This is also the reason foo(char) is less overhead than a foo(char*).
Everything (except reference types) is pass-by-value, but of course values can have wildly different sizes.
Also, the problem of accidentally copying large structs is not limited to arguments; the same considerations apply to assignments. Another reason why "pass-by-pointer" shouldn't be presented as some special thing: it's just passing a copy of a pointer.
Your vector<int*> is a red herring. The distinction I'm making is between passing a (vector<int>)* and a vector<int>, because those two objects have radically different sizes, and the distinction can and does create severe performance issues. And yet, pointers are still different from references: with a reference, you don't even need your object to have a memory address.
HN markup ate my *... Yes, I'm also talking about vector<int> and vector<int>*. They are indeed of radically different sizes, and the consequences of copying one are very different from the consequences of copying the other.
But this doesn't change the fact that they are both passed-by-value when you call a function of that parameter type.