Why do you think senior managers would replace themselves if they could? Given the choice between saving a company and giving a senior manager slightly more money or power, the senior manager will always win.
The ultimate authority in any public company is the board.
The majority of employees will never have to think about it, but everyone from the CEO down is ultimately hired at their discretion.
To see how things normally operate, read up on the OpenAI drama. If Sam Altman hadn't been as politically savvy as he is, the board would have fired him in a Friday announcement, and by Monday a new CEO would have been in charge.
And boards are largely made up of senior management of other companies, scratching each other's backs.
If it ever became clear that an LLM or other AI of some sort could genuinely replace senior management, I guarantee you'd see two things:
1) The price for access to that level of AI (assuming it wasn't possible with an open model on commodity hardware, which seems like a safe assumption for the foreseeable future) would quickly rise to be merely competitive with the cost of hiring senior management.
2) Boards would (nearly) all quickly agree that the human factor is very important, and regardless of what statistics you might have seen on the decisions made by these AIs in completely fabricated scenarios, or even a few _obviously_ rigged "real-life tests", there's no way these so-called "AIs" could ever really replace a human where it matters (ie, at the senior management level).
>And boards are largely made up of senior management of other companies, scratching each other's backs.
My experience with boards is that everyone on them would rather be doing something else. If they could offload 90% of their work to AI they would.
I know I did when I was secretary of (several) boards and used an STT (speech-to-text) model + Claude to write minutes and agendas for the meetings I was presiding over.
Everyone rather enjoyed having a full and (near) real-time overview of everything that had been said over the last three years.
I'm still hoping someone will come up with a good diarization model so we can have in-person board meetings with transcripts as good as the Zoom ones.
> Boards would (nearly) all quickly agree that the human factor is very important, and regardless of what statistics you might have seen on the decisions made by these AIs in completely fabricated scenarios, or even a few _obviously_ rigged "real-life tests", there's no way these so-called "AIs" could ever really replace a human where it matters (ie, at the senior management level).
Exactly how AI researchers have felt since AI started doing better than humans at the first tasks. Take the ridiculous stat that more chess champions have been disqualified for using AI than we have chess champions. Right now, in every chess tournament, everyone is "fighting" TWO chess opponents: their actual opponent ... and an AI chess player (and yes, it's an AI now; IBM and their "not-quite-AI" program are out). If they come even remotely close to beating the AI player on a move-by-move basis, they are disqualified and the match is halted.
Humans aren't going to play fair in competing with AIs.
Think roughly L7-L10 at Google, M2-D2 at Meta, 66-70 at Microsoft, etc. People that make "law partner" money but are not C-level executives mentioned in SEC filings.
Decisions would be made 1-2 levels up, so they would not be replacing themselves.
Doing `html('<div>Hello {name}</div>')` would be possible. I have a system that's based on it. Two issues:
- No tooling in Python will do anything with the string, so DX suffers.
- Evaluating the variable requires frame evaluation, which is...problematic.
You could do `html(f'<div>Hello {name}</div>')` and get f-string coding assistance. But you'd also get immediate evaluation. There's nothing the `html` function can do with the `{name}` part.
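For the curious, here's a rough sketch of what the frame-evaluation approach could look like (a hypothetical `html` helper; CPython-only, and exactly the kind of fragility that makes it problematic):

```python
import sys
from string import Formatter

def html(template: str) -> str:
    """Hypothetical sketch: fill {name}-style holes from the caller's scope.

    Relies on CPython frame introspection, which is the fragile part."""
    caller = sys._getframe(1)                        # the calling frame
    scope = {**caller.f_globals, **caller.f_locals}  # locals shadow globals
    parts = []
    for literal, field, _spec, _conv in Formatter().parse(template):
        parts.append(literal)
        if field is not None:
            # A real templating system would escape here; this sketch doesn't.
            parts.append(str(eval(field, scope)))
    return "".join(parts)

name = "World"
print(html('<div>Hello {name}</div>'))  # <div>Hello World</div>
```

Either way, the trade-off above stands: a plain string gets no tooling support, while the f-string gets tooling but is evaluated before `html` ever sees the template.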
If you do that, people outside the package can also do Username(x) conversions instead of calling NewUsername. Making the underlying value package-private means it can only be set from outside the package through the provided functions.
I don't think most people make a special trip just to get gas. It's usually a quick added stop along the way of another trip, which is less of a nuisance than having to make two small walks to move and plug/unplug your car.
The GP was responding to a point about the closure of gas stations accelerating the decline of ICE vehicles. If all the stations near where you normally travel close, then you are forced to make a special trip to fill up.
The last gas stations standing won't be in remote areas far from everything else. They'll be in heavily trafficked areas that everybody goes to regularly, particularly near grocery stores.
Even if like 80% of gas stations close, the remaining 20% will be situated along major transportation arteries. I expect there will be a gas station somewhere along someone's regular weekly driving route for all but the most remote dwellers.
And in those cases plugging in your EV amounts to no net gain or loss, but there absolutely are people who already have to make a dedicated trip to the gas station, because the town where they live and do their shopping simply doesn't have one.
No, we're not. We're talking about Norway as a whole, a country where, for the vast majority of people, "urban centers" are few and far between. The anecdote highlights that even in an urban setting there are areas where it's apparently easier to drive an EV than an ICE vehicle.
They show the DFA for it on the site; it has 3 states: a starting state for the first `.`, and two states that transition back and forth depending on whether `z` was the last character or not.
I think what's actually happening here is that they're doing the intersection on the DFAs and then producing a regex from the resulting DFA. The construction of a regex from a DFA is where things get ugly and weird.
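For readers who haven't seen it, this is a sketch of the standard product construction for intersecting two DFAs (the toy DFAs below are my own examples, not the ones from the site):

```python
def intersect(dfa1, dfa2, alphabet):
    """Product construction: the result accepts exactly the strings
    accepted by BOTH input DFAs. Each DFA is (start, accepting, delta)."""
    start1, accept1, delta1 = dfa1
    start2, accept2, delta2 = dfa2
    start = (start1, start2)
    delta, accept = {}, set()
    frontier, seen = [start], {start}
    while frontier:
        p, q = frontier.pop()
        if p in accept1 and q in accept2:
            accept.add((p, q))
        for c in alphabet:
            nxt = (delta1[p, c], delta2[q, c])
            delta[(p, q), c] = nxt
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return start, accept, delta

def accepts(dfa, s):
    start, accept, delta = dfa
    state = start
    for c in s:
        state = delta[state, c]
    return state in accept

# Toy example over {'a', 'z'}: "last character is z" intersected with "length is even".
ends_in_z = ('no', {'yes'}, {('no', 'a'): 'no', ('no', 'z'): 'yes',
                             ('yes', 'a'): 'no', ('yes', 'z'): 'yes'})
even_len  = (0, {0}, {(0, 'a'): 1, (0, 'z'): 1, (1, 'a'): 0, (1, 'z'): 0})

both = intersect(ends_in_z, even_len, 'az')
print(accepts(both, 'az'))  # True: even length and ends in z
print(accepts(both, 'z'))   # False: odd length
```

Producing a regex from the resulting product DFA (usually via state elimination) is the step that tends to blow up, which would explain the ugly output.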
> Edit: it is trivial to construct an algorithm with this property.
It's actually impossible to construct an algorithm whose runtime decreases as the input size grows. You can do this for finitely many examples, but we're talking about asymptotics here. With discrete runtimes and a lower bound of 1 operation, the runtime will have to stop decreasing at some point and just stay at the same constant value, with at most finitely many exceptions. This makes it a constant-time algorithm.
A constant run-time is a counter-example to that caption though, as the problem doesn't take longer as the input grows. An example would be checking if a number is divisible by 2. You only need to check 1 bit, so it doesn't take any longer as you add more bits that the algorithm doesn't touch.
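A concrete (if trivial) sketch of that divisibility example, assuming the number is handed to us as a list of bits so it's visible that the rest of the input is never touched:

```python
def divisible_by_2(bits):
    """bits: binary digits, most significant first, e.g. [1, 0, 1, 0] == 10.
    Only the final bit is inspected, so adding more leading bits adds no work."""
    return bits[-1] == 0

print(divisible_by_2([1, 0, 1, 0]))  # 10 -> True
print(divisible_by_2([1, 1, 1]))     # 7  -> False
```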
I guess even if you allow probabilistic algorithms and expected runtime, you still can't have an algorithm that runs faster as the input size grows. Because... it takes O(log n) time to calculate any random variable whose expected value has a denominator of size n? I'm not entirely sure but that seems right to me.
With probabilistic algorithms, you still have the fundamental limitation that there's a lower-bound to how fast an algorithm can run. You can't keep getting faster indefinitely.
Having a constant runtime means there's a constant bound on the runtime, not that the runtime is an exact constant value. 100 + 1/n would still be constant, as it's bounded above and below by constants.
To have sub-constant time it would have to go to 0 as n increases, as otherwise we could bound the time below by a constant. This isn't possible, as a computer will take at least 1 time unit on every input. So the expected time will always be at least 1.
Maybe not! With O(log N), such as binary search, you are assuming some sort of structure to the data. It is not necessary for an algorithm to scan the whole input (otherwise O(N) would be as good as it gets).
What if we devise a data structure that becomes more efficient as it grows, for a certain (perhaps useless, but that is OK!) operation?
For example, at N=0 we have a data structure with a million disconnected nodes, one of them being the target node.
As N increases, add a node with a pointer to the target node. The algorithm to find the target node is a random search through nodes until it finds one with a pointer, then halts. As N increases, the time on average decreases with complexity O(1 - N/(N-K)), where K = 1000000.
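A quick sketch of that construction (modeling probes as uniform with replacement, and halting on the first pointer-carrying node found; these are my own assumptions about the details). The empirical average tracks (K+N)/N, which keeps shrinking as N grows but can never drop below one probe:

```python
import random

K = 1_000_000  # the disconnected nodes present at N = 0

def average_probes(N, trials=200):
    """Average number of uniform random probes until hitting one of the
    N pointer-carrying nodes among the K + N nodes in the structure."""
    total = 0
    for _ in range(trials):
        probes = 0
        while True:
            probes += 1
            if random.randrange(K + N) < N:  # this probe found a pointer node
                break
        total += probes
    return total / trials

for N in (10_000, 1_000_000, 100_000_000):
    print(N, round(average_probes(N), 2), "expected ~", round((K + N) / N, 2))
```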
You can't have that because when N=K that's not a valid expression and when N>K you're putting a negative number in the big-O notation. You can't have negative runtime.
All algorithms have a lower bound on runtime of 1 operation. If a sequence is decreasing and bounded below, it converges to a value [1]. If the sequence is discrete (integer-valued), that means it eventually reaches exactly that limit and never deviates from it afterwards.
I'm not sure what your big-O expression is supposed to mean, but if a sequence is decreasing and bounded below (which runtime is) then it has a limit. Since the sequence is discrete, it can't get arbitrarily close to the limit without reaching it. So at some point the runtime will stop decreasing.
You can approximate the runtime with a function that decreases forever, but the actual runtime will be constant for large enough n.
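In symbols, the argument is just monotone convergence plus integrality (assuming, as above, that runtimes are whole numbers of operations, each at least 1):

```latex
\[
T(n) \in \{1, 2, 3, \dots\}, \quad T(n+1) \le T(n) \ \forall n
\;\;\Longrightarrow\;\; L := \lim_{n \to \infty} T(n) = \inf_n T(n) \ \text{exists.}
\]
\[
\text{Pick } n_0 \text{ with } T(n_0) < L + 1. \quad
\text{Then for all } n \ge n_0:\ L \le T(n) \le T(n_0) < L + 1,
\]
\[
\text{and since each } T(n) \text{ is an integer, } T(n) = L \ \text{for all } n \ge n_0,
\ \text{i.e. the runtime is eventually constant.}
\]
```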
“Tend to” is a mathematical term meaning “to get closer to”. Formally, for any e as small as you like, there exists an N beyond which the sequence stays within e of the value being approached.
This is possible because we are sampling a probability distribution. A die is discrete, but the probability of rolling a 1 tends to zero as the number of sides increases, even though there is a whole “1” on any given die.
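Applied to that die example (a fair n-sided die), the formal definition works out as:

```latex
\[
P(\text{roll} = 1) = \frac{1}{n} \;\longrightarrow\; 0,
\qquad \text{since } \Bigl|\tfrac{1}{n} - 0\Bigr| < \varepsilon
\ \text{ whenever } n > \tfrac{1}{\varepsilon}.
\]
```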
An algorithm can't have that property for its expected runtime, as it can only look at a sub-constant amount of the input if it runs in sub-constant time.
It's not possible for it to behave differently as n increases, because it doesn't have enough time to distinguish between different values of n once n is large enough.
You could use just the asymptote and call it "constant time".
But that's an extremely limited and misleading analysis, so you should not do that. If the time taken goes from a quadrillion to 7 as the problem size grows, don't call it constant.
"constant time" in complexity theory just means there's a constant bound on runtime. It doesn't have to actually have the exact same runtime down to the instruction for every input. Here, the bound would be a quadrillion.
Of course, showing a constant upper-bound doesn't tell us that it isn't even faster than constant as in the proposition I was responding to. That's why I focused on the constant lower-bound.
I know what it means, and I stand by it being limited to the point of misleading in a case like this.
Running any (halting) algorithm on a real computer is constant time, because you're bound by the number of states you can fit into some terabytes, but nobody should actually try to use that as a final analysis.
> Running any (halting) algorithm on a real computer is constant time, because you're bound by the number of states you can fit into some terabytes, but nobody should actually try to use that as a final analysis.
That's the sort of thing I would usually say to explain why we do things like use asymptotic analysis and ignore polynomial factors. If you have a different solution, that's great, but you're no longer talking about the kind of runtime the article is about.
It depends on how abstract you want to be. You shouldn't always shove the lever all the way to the most abstract single value.
My suggested solution is "don't delete all nuance". If the runtime doesn't behave nicely, give that a mention. Most things do behave nicely, so that's not much of a burden.
> the kind of runtime the article is about
The article spends plenty of time talking about computational feasibility, which is not about the limits as you approach infinity.
It even equates P with "trivial", which is... not good.
I think the key thing you're missing here is that they were talking about behavior as the number of flips goes to infinity. That long improbable upper end becomes more and more improbable over time. As time goes to infinity, the probability of being in it goes to 0. With a finite population, everyone is ruined.
I ran the same simulation as you. After 100 iterations, the richest player had $71. After another 1,000 iterations, the richest had $78 (of a total of $133). After another 10,000 iterations, the richest had 10^-165 dollars. They were still an extreme outlier holding 95% of the total wealth, but that was almost nothing.
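For anyone who wants to reproduce the flavor of that result, here's a minimal sketch. The parameters of the original simulation aren't stated above, so this assumes the +50%/-40% multiplicative coin flip commonly used in these discussions; the exact numbers will differ, but the shape (a vanishing outlier holding most of the wealth, typical wealth collapsing toward zero) is the same.

```python
import random

def simulate(players=1_000, flips=1_000, start=1.0, up=1.5, down=0.6):
    """Each player repeatedly stakes all wealth on a fair coin flip:
    heads multiplies wealth by `up`, tails by `down`.
    Per-flip expected factor: (up + down) / 2 = 1.05 > 1 (looks great).
    Per-flip typical factor: sqrt(up * down) ~= 0.95 < 1 (ruin over time)."""
    wealth = [start] * players
    for _ in range(flips):
        wealth = [w * (up if random.random() < 0.5 else down) for w in wealth]
    wealth.sort()
    return wealth

w = simulate()
print("richest:", w[-1])
print("median: ", w[len(w) // 2])
print("richest's share of total:", w[-1] / sum(w))
```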