Edit: Reading up on Social Darwinism on Wikipedia, Britannica and History.com, I guess not. Social Darwinism seems either ill-defined and/or to assume that the fitness functions natural evolution worked out need to be taken nearly as they are. I am not saying the latter.
I am saying there need to be more clearly defined fitness functions (call them performance criteria, KPIs, etc.), defined for modern times, which then need to be more consistently followed. This isn't much different from using a gradient descent algorithm, where weights are changed in accordance with their impact on the chosen loss function.
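To make the analogy concrete, here is a minimal numerical sketch of gradient descent on a hypothetical scalar loss (the loss function and numbers are made up for illustration):

```python
# Gradient descent on a hypothetical scalar loss L(w) = (w - 3)^2.
# The weight is adjusted in proportion to its impact on the loss
# (i.e. the gradient), until it settles at the optimum.
def grad(w):
    return 2 * (w - 3)  # dL/dw

w = 0.0   # starting point
lr = 0.1  # learning rate
for _ in range(100):
    w -= lr * grad(w)

print(round(w, 4))  # converges toward the minimum at w = 3
```

The point of the analogy: without an agreed-upon loss function, there is nothing well-defined to descend.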
Right question to ask, and I do not have good answers currently. But here are some thoughts:
First, a clarification on what I am not challenging about the status quo. In nature, some organisms are higher up in the food chain and freely kill others. We do NOT define 'fit' in a way where those who are better at killing other humans are favored for survival. We have already set this right by creating laws that punish homicide. This bends the optimum from favoring physical strength to favoring people who make good overall social contributions, which can be intellectual as well.
The value society should provide to an individual should (generally) be based on the value they provide to society. This is already largely the case. However, I challenge inheritances, where someone may simply be born with far more than others without having made those contributions to society. There are already debates online about this alone, and I cannot claim the social choice should be exactly this way or that way.
In a democracy, people (except children, etc.) are given equal right to vote. I do not find this optimal. People who understand social dynamics, policies and promises of various parties well (which does not include me) should have more influence on which party should get selected. I do not know how this could be implemented. Perhaps a quiz along with the vote?
I know these are not good, realistic examples. I'll need to think more. However, I do often feel that people who do good and think for society struggle more, while those who put themselves first at the cost of society often end up higher.
The entire issue is that the earth surrounding the tubes is acting as a giant buffer. Enough heat has been dumped into it over the years that it has permanently warmed up. Draw heat from it during the winter to warm up homes, and it'll be able to absorb more heat from the tunnel air during the summer.
And because it's permanently warmed up, the long-term consequence is that the line becomes a health hazard and has to be closed for increasingly long periods.
When the wet-bulb temperature exceeds body temperature, people start getting heat stroke, which leads to fainting and potentially death: a bad look for a public transport system.
The likely remedy is to install gigantic refrigeration units in the ventilation shafts and pump in cold air. This will be hugely expensive to build and run.
But the alternative is a tube line that can't be used. So there may not be much choice.
It won't be zero, so spreading it across enough people might already solve it. If that still leaves insufficient demand during the hottest weeks, idk, it's energy, surely there's something useful you can do? Store it for next week; pre-heat water for the nearest steam engine (e.g. gas power plants are steam engines running on methane, so they'd have to heat the water by fewer degrees; the problem will be finding a steam engine close to the heat source); supply it to an industrial process that needs temperatures above ambient (egg incubation for vaccine production? idk); generate electricity from the temperature differential between this system and the Thames water using the Seebeck effect.
I've surely got too naïve a view of economics, but if the goal were not to waste resources, there would be things you can do before dumping it into the hot summer air.
When using low-precision formats like float8 you usually have to upcast the activations to BF16 before normalising. So the normalisation layers use proportionally more compute as you go to lower precision. Replacing these layers would help reduce the compute cost significantly.
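A minimal sketch of the upcast-then-normalise pattern, using numpy. Note the assumptions: numpy has no float8 dtype, so float16 stands in for the low-precision activations here, and float32 stands in for BF16; the normalisation is an RMSNorm-style statistic, not any particular model's layer:

```python
import numpy as np

np.random.seed(0)

# Low-precision activations (float16 standing in for float8).
x = np.random.randn(1024).astype(np.float16)

# Upcast to a wider format before the reduction, as you would to BF16:
# the mean-of-squares accumulation is where low precision loses accuracy.
x32 = x.astype(np.float32)
rms = np.sqrt(np.mean(x32 * x32) + 1e-6)  # RMSNorm-style statistic

# Cast back down to the low-precision format afterwards.
y = (x32 / rms).astype(np.float16)
print(y.dtype, y.shape)
```

The upcast, the reduction, and the downcast are all overhead that scales with activation size, which is why these layers become a proportionally larger share of compute as the matmuls get cheaper.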
That's mostly because Julia questions get answered on its Discourse or Slack. The sharp decline is due to an automatic cross-post bot that stopped working.
No one bothered fixing it, in great part due to Discourse being the main place of discussion, as far as I know.
Even languages like Python and JavaScript, which are huge, show a decline after 2022, which suggests ChatGPT is probably responsible. It would be better to have some other measure imo.
It measures the proportion of questions for that language out of all languages. So if there is a general decline in Stack Overflow questions, it's already accounted for in the metric.
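A toy illustration of why a share-based metric cancels out a site-wide decline (all numbers are made up):

```python
# Hypothetical question counts: total Stack Overflow volume halves,
# but the language's question volume halves too.
total_2021, julia_2021 = 1_000_000, 5_000
total_2023, julia_2023 = 500_000, 2_500

share_2021 = julia_2021 / total_2021
share_2023 = julia_2023 / total_2023
print(share_2021, share_2023)  # both 0.005: the site-wide decline cancels out
```

Only a decline *relative to other languages* moves this metric, which is the point being made.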
Can you use this to launch an Intel VM on Apple Silicon and vice versa? I'm interested in doing this so I can compile C++ applications for different architectures on macOS. Do you know of any other "easy" methods?
You can do this without virtualization/emulation: pass '-arch x86_64' or '-arch arm64' to clang. Or both, for a universal binary. And on Apple Silicon, you can test both thanks to Rosetta.
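A quick sketch of what that looks like on the command line (macOS-only; assumes the Xcode command line tools are installed, and `hello.cpp` is a placeholder source file):

```shell
# Cross-compile the same C++ source for each architecture:
clang++ -arch x86_64 -o hello_x86 hello.cpp
clang++ -arch arm64  -o hello_arm hello.cpp

# Or pass both flags to produce a universal (fat) binary in one step:
clang++ -arch x86_64 -arch arm64 -o hello_universal hello.cpp

# On Apple Silicon, force the x86_64 slice to run under Rosetta:
arch -x86_64 ./hello_universal
```

`lipo -info hello_universal` will confirm which architecture slices the binary contains.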
Tensors used in deep learning are not the same as the definition used by physicists (blame the DL community for this :)). DL tensors are just N-dimensional arrays of data, with no concept of covariance and contravariance of the dimensions. You could think of DL tensors as Cartesian tensors; they don't need to conform to the same transformation laws that physics tensors do.
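To put it in code: a DL "tensor" is nothing more than a shape plus data, as a minimal numpy sketch shows (numpy here standing in for any DL framework's array type):

```python
import numpy as np

# A rank-3 "tensor" in the DL sense: just an N-dimensional array.
# There is no metric, no index variance, no transformation law attached;
# the axes are interchangeable containers of numbers.
t = np.zeros((2, 3, 4))
print(t.ndim, t.shape)  # rank and shape are all the structure it has
```

Reordering the axes with `t.transpose(2, 0, 1)` just shuffles memory layout; nothing about the operation distinguishes "upper" from "lower" indices.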