What would be the use of closing over the value rather than the variable? That would rule out a lot of interesting uses of closed-over variables (like persisting values from call to call).
In fairness, while that would be a lot less convenient in reference-based languages, in Go you could just close over a pointer to the variable, making the relationship explicit.
That's how you'd do it using a [=] lambda in C++ or a move closure in Rust.
In the case where the variable lives in the function scope, the answer is unequivocally "it should be bound once and updated". If it only exists in the scope of the loop body, both "it is bound on each iteration" (and thus safe to close over without surprise) and "it is bound once and updated" are valid answers. I have a preference for the first, but many languages actually choose the second.
I would say that Python has scoping, yes, but it does not have lexical scoping in any sense of "lexical scoping" that I am aware of. If it did, the code below would not actually work, as the outside-the-loop print would be trying to access a variable not available in the lexical scope of "the function body", since it is defined and established within the lexical scope of "the loop".
So, at least in my book, no, Python has "global scope", "function scope" and probably one or two more scopes (I think there's a "class scope" as well).
Here's some code in Python, and some equivalent code in Go.
def foo(a_list):
    print(f"list is {len(a_list)} elements")
    for element in a_list:
        print(element)
    print(element)
And here's the equivalent Go code:
func foo(aList []int) { // Let's use ints...
    fmt.Printf("list is %d elements\n", len(aList))
    var element int // Notice this declaration! It ensures element is declared outside the lexical scope of the for loop
    for _, element = range aList {
        fmt.Println(element)
    }
    fmt.Println(element)
}
Which, now that I have done enough reading, I think crystallizes (for me) the confusing thing being highlighted here for others. Depending on preference, the lack of block scoping can be surprising. Which also explains my bias: I started with Python, which probably plays a large part in why I find function-level scoping without block scoping ergonomic.
In "Swedish 7-bit ASCII", the C code "a || b" would look like "a öö b". The same character mappings were used in Finland, and that's why IRC counts the characters {|}[\] as letters (they would typically have been displayed as "äöåÄÖÅ").
On the Compis II computer (a CP/M machine built on the 80186 CPU), there were places for {|}[\] in the character set, but they were in the top half of the 8-bit characters and not generally useful for programming.
Unless you anticipate wanting to check and re-check the contents. On a shelf right beside me (as opposed to the plethora of shelves in the room we call "The Library") is a copy of "Concrete Mathematics" by Graham/Knuth/Patashnik, because I am occasionally going through it again. Quite happy I have not decided to let it go.
There's also Strix[1], a self-guiding anti-armor mortar round. Not actually sure when it was taken into use, I recall hearing rumours about it being in development in the late 80s, and the Swedish model designation (m/94) indicates it was accepted in 1994.
But, 120 mm is a lot bigger than anything I'd be comfortable firing from a rifle.
If your question is "does Gödel Incompleteness have practical applications", my answer is "probably". Not necessarily directly, but indirectly it has inspired much other work.
One of the things it inspired was Turing's work on uncomputable numbers. It turns out that computability and incompleteness are intertwined, which at least I find interesting. And without the "uncomputable numbers" malarkey, I wonder if the Turing Machine formalism would exist (answer, "probably, but maybe looking different and with another name"). And, well, the Halting Problem is essentially Gödel Incompleteness (imagine handwaving here).
As for proof systems, again, the answer is "probably". Knowing that there are true, unprovable statements in a formalism informs how you approach it: you need to put a limit on how far to go before you say "I don't know", and that is in and of itself important.
The best (FSVO "best") on-call compensation I had was a fixed sum per standby week, simply for carrying the pager. Then, on top of that, overtime at the going rate (150%, 200%, or 300% of the hourly rate) depending on when the call-out happened, with a minimum of 3h compensation for any calendar day in which a call-out happened (it used to be 3h per incident, until someone had 12 call-outs that each took about 5 minutes to fix).
For partial on-call weeks, the standby comp was adjusted. For good or bad, it was a literal pager, so we could easily adjust things within the team and file paperwork afterwards, as the NOC just called the pager. The downside was that handing the pager over required being physically present.
Not sure if it was still doing it in 2001, but in the 1997-1998 time-frame Purify also ran on HP-UX. The company I was working for at the time used it and we ended up finding a two-byte (IIRC) leak in the HP gethostbyname() library call (well, at least I think it was gethostbyname, it's more than two decades ago).
That was one of the more annoying tickets to file. We could of course send them the binary, but it would not run without the Purify license file, and we weren't comfortable sending off the license file as well. But, in the end, they accepted the bug. Not sure if there was ever any fix, though.