IF (a big if!) all your forecasts come true, I don't think what people will mourn most at that point is losing the fun part of coding/developing SW.
Like the laborers who went jobless after the waves of the industrial revolution, they should have been planning earlier for other jobs and skills, rather than focusing solely on the fulfillment of making goods.
But yes, like with the Waymo destruction, people booing AI at SXSW, and the recent study saying the UK will lose at least 8 million jobs to AI, the changes are hard to imagine.
Like watching Terminator where Sarah is running around searching for a public telephone.
Yep. I don't know what's left for coders to do when AI is doing the coding, but I wouldn't be interested.
A painter gets in it for the painting, not painting management or painting business strategy technical directorship or touching up photographs taken by the new machine.
PS: I feel this way and I'm way younger than you. ;p I don't think it's an age thing as much as a "we were sold the idea that we could code" thing.
This presentation is pretty inspiring to me, but at the same time there is just no obvious way to leverage the claim. How can any management allow subordinates to do things without any objectives to justify them?
Generally I buy the reasoning, but maybe that's just because I can't identify any fundamental flaws right now.
Surprised that nobody has mentioned reinforcement learning here.
I bought three books (in their traditional Chinese editions), whose original titles are:
* Reinforcement Learning: An Introduction, 2nd ed., Richard S. Sutton & Andrew G. Barto
* Deep Reinforcement Learning in Action, Alexander Zai & Brandon Brown
* AlphaZero 深層学習・強化学習・探索 人工知能プログラミング実践入門 (roughly, "AlphaZero: Deep Learning, Reinforcement Learning, and Search - A Hands-on Introduction to AI Programming"), 布留川英一
None of them teaches you how to apply RL libraries. The first is a textbook and says nothing about how to use frameworks at all. The last two are more practice-oriented, but their examples are too trivial compared to a full board game, even one whose rule set is simple for humans.
Since my goal is eventually to conquer a board game with an RL agent trained at home (hopefully), I would say the third book has been the most helpful one.
But my progress has been stuck for a while now, because obviously all I can do is keep trying hyperparameters and network architectures to find the best ones for the game. I kind of "went back" to supervised learning practice: I generated a lot of random play records and then let the NN model at least learn some patterns out of them. Still trying...
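For what it's worth, here's a minimal sketch of that supervised warm-up step, assuming PyTorch and a hypothetical 8x8 board encoding (the sizes and names are illustrative, not my actual setup): fit a small policy network on (state, move) pairs harvested from random play records, before any RL fine-tuning.

    import torch
    import torch.nn as nn

    BOARD_CELLS = 64   # assumption: an 8x8 board flattened into one float vector
    NUM_MOVES = 64     # assumption: one logit per cell

    class PolicyNet(nn.Module):
        """Tiny policy head: board features in, move logits out."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(BOARD_CELLS, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, NUM_MOVES),
            )

        def forward(self, x):
            return self.net(x)  # raw logits; softmax happens inside the loss

    def pretrain(model, states, moves, epochs=10, lr=1e-3):
        """Supervised warm-up: states (N, BOARD_CELLS) floats, moves (N,) ints."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(states), moves)
            loss.backward()
            opt.step()
        return model

    # Fake random-play records, just to show the expected shapes.
    states = torch.rand(1024, BOARD_CELLS)
    moves = torch.randint(0, NUM_MOVES, (1024,))
    model = pretrain(PolicyNet(), states, moves)

The point is only that the pretrained weights give the later RL stage something better than random to start from; the RL loop itself (self-play, reward shaping, etc.) is a separate problem.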
I believe it will add some spice to the model, but you shouldn't go too far in that direction. Any social system has a rule set, which has to be learnt and remembered, not inferred.
Two examples. (1) Grammar in natural languages. You can see how another commenter here uses "a local maxima", and then how people react to that. I didn't even notice, because English grammar has never been native to me. (2) Prepositions mostly don't map directly between two languages, no matter how close the languages are. The learner just has to remember them.
However, I would like to mention that sometimes we do think this way, as in "the will of the party", at least in some languages' contexts.
Fun fact: when I tried to find similar phrases like "the will of the Democratic/Republican Party", Google returned 5 results for the former, but they were followed by voters/members and thus not what I wanted; for the latter, there were no results at all. But when I searched "the will of the party", I found an abstract of a paper from my own field.
Maybe a party is too small a unit for this. It seems like "the will of the nation" is widely used.
Specification. For any real business, it takes huge effort for a group of people across many domains to consolidate what should be done. That's only the what part.
I'm not saying competitive programming contests are easy or anything, just pointing out that in a contest with a time constraint, the requirement realization phase cannot fit in.
When I was a kid in the 80s, I convinced my dad he needed a 286 for his construction company - so he could do his books. And he'd need a modem - it's the new fax.
(This was so I could play Populous, BBS to San Jose, and get grounded for a month for running the phone bill up to $926 in long-distance calls into PCLink...)
My dad yelled at me for playing video games "WHAT THAT EVER GUNNA DO"
--
Years later, I left my shift at Intel running the game lab... to meet my dad for dinner.
He apologized to me for telling me that games and computers would never do anything.
I was touched he remembered.
---
These dumb olympiads will never amount to much, I'd bet.
It's the same for me. My case was an internal transfer from an engineering division where I had a solid background to a newly created division in a different domain. Things turned out to be very different from what I had previously imagined.
At least now I know that, at the end of the day, burnout can only be fixed by other means.
I've read that it takes _at least_ 3 months of complete withdrawal from what was causing the burnout to fully recover from it. I imagine if you throw autism into the mix, it's probably longer than that haha.
Sometimes it feels like the only real solution is to go back to school and get a degree in something that's not related to tech at all. Or go live in the woods.
Sounds intuitive, but there is game research working on exactly that. Two related terms (learnt at the IEEE Conference on Games) come to mind:
1. Game refinement theory. The inventors of this theory see games as if they were evolving species, so it tries to describe how games became more interesting, more challenging, more "refined". Personally I don't buy the theory, because the series of papers covers only a limited number of examples and it is questionable how the related statistics were generated (especially the repeatedly occurring baselines, Go and Mahjong), but nonetheless there is theory on this.
2. Deep Player Behavior Modeling (DPBM): This is the more interesting one. Game developers want their games to be automatically testable, but the agents are often not ready, or not true enough to human play. Take AlphaZero for Go or AlphaStar for StarCraft II: they are impressive but super-human, so the agents' behavior gives us little insight into the quality of the game or how to improve it further. With DPBM, the signature of real human play can be captured and reproduced by agents, making auto-play testing possible. Balance, fairness, engagement, etc. can then be used as the indirect keys to reassemble "fun."
You are entirely mistaken. While many people are excited about potential future accelerators and such using RISC-V's extension mechanism, it is very much the case that the RISC-V stuff shipping in volume today is embedded / microcontroller stuff.
This is quite deliberate: RISC-V is on track to follow the same path ARM took, starting with the cost optimized lower capability parts then progressively moving up into performance optimized parts. This is really the only viable strategy, because no one is going to invest the billions and decades it takes to get a new high performance design into a totally unproven ISA.
Entropy works in the same Boltzmannian way - within a shell (a Gaussian surface) one takes a measure of the number of configurations of the enclosed elements which produce the same observables outside the shell.
That is, one takes a coarse-grained macrostate "a living human in a functioning spacesuit", a volume (say a cubic metre), and a fine-grained microstate (say a cubic millimetre); the more pairs of microstates one can swap without changing the macrostate, the higher the entropy. If you swap part of the living human's aorta with shards of helmet, toenail, or vacuum, you quickly get a non-living, therefore non-metabolizing, therefore observably cooling human, so the entropy is well below a maximum.
But if our macrostate is "a cubic metre of vacuum" you can swap any sized microstate around and still get all the same observables as the original "cubic metre of vacuum" -- entropy is therefore maximal for the shell around that volume. We can repeat this procedure for arbitrarily-sized Gaussian surfaces, and arbitrarily fine microstates.
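For reference, the counting procedure above is just Boltzmann's entropy formula, with Ω standing for the number of microstate arrangements (including all the allowed swaps) compatible with the macrostate as seen from outside the shell:

    S = k_B \ln \Omega

The vacuum case, where every swap is allowed, maximizes Ω and hence S; the living-human case, where almost no swaps are allowed, sits far below that maximum.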
We recover the second law of thermodynamics by observing that wherever in the entire cosmos you place your shell, you are more likely to enclose a high entropy region than a low entropy one. In an expanding and diluting universe like ours, where more and more high quality vacuum appears between galaxy clusters, there are more places where one's shell will enclose a very high entropy region than a very low one. We then can consider Boltzmann's view of the second law of thermodynamics as it being infinitely improbable to have a completely dynamically ordered state.
Let's contrast the Janus point with a Lemaître style regression cosmology.
In the latter we simply look at the expansion history in reverse, and extrapolate through ever denser and hotter and lower-entropy configurations and (following classical General Relativity) end up at an inevitable singularity. This has some problems, mainly that nobody knows how matter works at the much more extreme heats and densities than we can hope to produce in laboratory conditions on Earth, nobody is happy with a singularity because there is no way to predict that an actual singularity will decay into the fields of the Standard Model, and because the singularity contains everything, there is nothing outside the Gaussian surface to make observations of the macrostate. Returning to Boltzmann, the singularity itself must be completely dynamically ordered, because when it breaks down, it must be able to produce dynamical systems like galaxy clusters and cats.
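To make "looking at the expansion history in reverse" concrete, this is just the standard FRW bookkeeping (textbook concordance cosmology, nothing Janus-specific): the Friedmann equation plus how the densities scale with the scale factor a, which is why everything gets denser and hotter as a shrinks toward zero.

    H^2 \equiv \left(\frac{\dot{a}}{a}\right)^2
        = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3},
    \qquad
    \rho_{\mathrm{matter}} \propto a^{-3}, \quad
    \rho_{\mathrm{radiation}} \propto a^{-4}, \quad
    T_{\mathrm{radiation}} \propto a^{-1}.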
However, if we somehow prevent the singularity, we might be able to make that prediction in principle even if we cannot do so now. We replace the infinitely dense, zero-entropy singularity with something merely extremely dense and of extremely low entropy, and can at least in principle evade all of the problems in the previous paragraph. The Janus argument is that a singularity-free entropy minimum is plausible if it is shared between two regions which have much higher entropy everywhere else. A shell around the entropy minimum has much less empty space in it than a shell around anywhere else in the two regions, so we can show this relatively low entropy by doing the swapping procedure above.
In both cases we make the argument that we can take a values surface at the entropy minimum and use dynamical laws to predict how that values surface will evolve. In the Lemaître-style system, we get a universe like ours; but in a system with a non-singular entropy minimum, we must have more than one region (one containing a universe like ours, one containing something else that evolved out of the entropy minimum).
In neither approach do we have the means to determine what the initial values should be, so abolishing the singularity Janus-style doesn't seem to bring much practical calculational value. Moreover, if we start with some late-time values surface on this side of the Janus point and work backwards, our known time-reversible dynamical laws do not lead to a Janus point but rather to a Lemaître-style singularity.
Let's return to recovering thermodynamics from the second-law discussion above.
The third law again comes from a statistical mechanics view. There is a unique lowest-energy state, the perfect vacuum. In an expanding universe, we have regions containing that state after they have been evacuated of galaxy clusters, dust, and gas, and the cosmic microwave radiation has become so sparse and cold as to essentially vanish (a bit technically, we can put in a comoving observer with a Eulerian view of the cosmic microwave background such that the characteristic wavelength of the CMB photons is longer than the observer's Hubble length). Those regions are nowhere near the entropy minimum in either the Janus or the Lemaître configuration, but are nearly everywhere when sufficiently far in the future. (We somewhat circularly define the past as lowest entropy and the future as highest entropy.)
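Spelling out that parenthetical (still just standard expanding-universe scaling, not anything Barbour-specific): CMB wavelengths stretch with the scale factor, while in a Λ-dominated future the Hubble length tends to a constant, so eventually the characteristic CMB wavelength exceeds it.

    \lambda_{\mathrm{CMB}} \propto a, \qquad
    T_{\mathrm{CMB}} = \frac{T_0}{a}, \qquad
    L_H = \frac{c}{H} \rightarrow \frac{c}{H_\Lambda} \approx \mathrm{const},

so at late enough times λ_CMB > L_H for such a comoving observer (with a normalized to 1 today).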
At cosmological scales in an expanding universe there is no especially satisfying way to recover the first law of thermodynamics even though it is perfectly reasonable to treat the whole cosmos as the ultimate closed system. One can think of the expansion as an adiabatic and reversible process on the matter content, however, and that is part of the basis for the Lemaître and Janus models.
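The "adiabatic and reversible" statement does have a precise local form, though: the FRW fluid equation, which is just the first law dU = -p dV applied to a comoving volume V ∝ a^3 with U = ρ c^2 V and no heat flow:

    d\left(\rho c^2 a^3\right) = -p\, d\left(a^3\right)
    \quad\Longleftrightarrow\quad
    \dot{\rho} + 3\frac{\dot{a}}{a}\left(\rho + \frac{p}{c^2}\right) = 0.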
> I don't know how to make sense out of this
Well, me neither, frankly. Or rather, I can understand the goal of Barbour's thinking, but I think it misses the point. We still have an extremely improbable configuration somewhere when the universe was much smaller and denser, and we have no way to recover that surface using observations made here-and-now. Worse, with what we know of gravitation -- specifically, if we accept Raychaudhuri's focusing theorem (which is a deep and interesting result of General Relativity) -- missing the focusing into a caustic is much less probable than focusing into a caustic. Once you have a caustic, you have a singularity, unless you have some magic means of avoiding it through unknown quantum effects. The Janus point doesn't even seem to open up that option, or rather, it appears to require either insanely good luck or quantum effects modifying General Relativity at energy scales which are astrophysical at modern times.
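For anyone who wants the statement behind the name: for a congruence of timelike geodesics with expansion θ, shear σ_ab, and rotation ω_ab, Raychaudhuri's equation reads

    \frac{d\theta}{d\tau} = -\frac{1}{3}\theta^2 - \sigma_{ab}\sigma^{ab} + \omega_{ab}\omega^{ab} - R_{ab}u^a u^b,

and if the congruence is irrotational, the strong energy condition makes R_ab u^a u^b ≥ 0, and θ is negative anywhere, then θ is driven to -∞ (a caustic) within finite proper time. That is the sense in which "missing the caustic" is the improbable outcome.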
However, maybe just maybe what's on the other side of the Janus point has different physics that makes a Janus point (that produces our side with high probability) likely. And maybe just maybe in the far future when we can make truly enormous gravitational wave detectors we can spot gravitationally-lensed early-cosmos gravitational waves that might distinguish between a Janus-style early configuration and a singularity early configuration (or some other singularity-avoiding early configuration).
There are also some theoretical questions. The big one: what constrains the Janus point to having exactly two higher-entropy regions rooted to it? Also: are there false Janus points, i.e., is there a hierarchy of relatively low entropy configurations from which two+ higher-entropy regions sprout, but those regions still have enough entropy coupled with dynamical laws that (quoting the article) "The diameter will shrink to a minimum at some moment in time, then grow again"?
I think most working cosmologists would bet [a] a Janus configuration does not seem more probable than any other plausible early-universe configuration and [b] even if it were and thus abolished the singularity, it does not solve the vexing theoretical problems posed by the very early universe's extreme heat and density.
Finally, all of this has evaded Barbour's "timelessness" language, since that was largely missing from the Nautilus article, and my comments are instead rooted in the conventional concordance cosmology.