> Admittedly, no real effort is put into explaining how the time machine actually works, other than Gilliam’s signature steampunk wires and bellows.
> Science Score: 9
Primer
> As for the science, the basic idea is that Aaron and Abe are trying to build a device to counter the effects of gravity by creating a room-temperature superconductor (a hot topic in physics this year, albeit a controversial one) that exploits the Meissner effect to remove the magnetic field inside a plain gray box large enough to fit one person.
> The limitation that you can’t travel to times earlier than you turned the machine on is actually very realistic—closed timelike curves in relativity would have exactly that feature.
Primer is the only depiction of backwards time travel in media I have ever seen where, apart from a few small problems, I thought the fundamental idea actually made sense and could exist in some (not our) universe.
If you are a business and want to contribute to stabilising the grid (you don't need a Tesla Powerwall specifically; any device works!): I work at Leap (https://leap.energy) and we have virtual power plants in CA, TX, and NY.
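For anyone curious what the Meissner effect quoted above actually involves, the standard textbook statement (my addition, not from the review) is the London equation: an applied magnetic field decays exponentially inside a superconductor rather than filling the bulk,

```latex
\nabla^{2}\mathbf{B} \;=\; \frac{\mathbf{B}}{\lambda_{L}^{2}}
\qquad\Longrightarrow\qquad
B(x) \;=\; B(0)\, e^{-x/\lambda_{L}}
```

where the London penetration depth λ_L is typically on the order of 100 nm, so the field is effectively expelled from all but a thin surface layer.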
I think what a lot of people here are missing is that the priorities of E.A. are fluid by design.
Yes, the existential risk from general AI might be small, but the number of people working on it is smaller still: only about one hundred, worldwide. Similarly, the reason E.A. stepped off the climate change train is that it became a popular issue, which means the marginal benefit of one additional person contributing decreases.
When more people direct attention to AI safety, some other area will become the one where E.A. can contribute most, and the focus will shift.
Indeed, not only fluid, but also open to criticism. I've always been quite unsure of how to quantify AI risk. My conclusion at the moment is that it's a complex risk. I don't think a singularity, or FOOM, is plausible in any way.
Still, the changes to society are absolutely hard to grasp. Humans may face (economic) obsolescence in a few decades[1] (humans need not apply[2]). That puts us in a delicate situation: if humans are economically obsolete, a society that values only economic output (and not consciousness per se) will have every reason to slowly but surely get rid of humans (and animals).
We need a revolution of meaning and ethical understanding: an understanding of the fundamental value of consciousness, and a restructuring of our lives and all institutions around it (each individual needs a good grasp of such ethics and meaning so that society as a whole is robust). I think the formalization of ethics (which I'm a proponent of) and progress in metaphysics are going to contribute to this end as well. I think E.A. is simply part of this new understanding of ourselves and the Universe, a sort of coming out of a dark age, and there's the opportunity to create a lasting society that is amazing for everyone.
[1] My personal guesstimate is about 50 (-20/+40) years to complete obsolescence, although various researchers have wildly different estimates.
> Yes, the existential risk from general AI might be small, but the number of people working on it is smaller still: only about one hundred, worldwide.
You can make that argument, but I think the median view among people working on AI alignment is that the risk is quite high: upwards of a 20% chance of disaster within their lifetime.
I started scrum mastering a team in November, and the team definitely suffers from having too many things going on at the same time. I have to take a more proactive planning role, and I could definitely try out that on-call stream idea.
I am naturally quite a chaotic person, and I have trouble helping other people create order when I'm already struggling to keep an eye on all the streams that exist in the team. Any tips would be appreciated.
I wish people would stop bringing up the Pareto principle. "Some things are consistently bigger than other things." Can we talk about why things are bigger than other things instead of saying it's because of some law of nature?
> I wish people would stop bringing up the Pareto principle. "Some things are consistently bigger than other things." Can we talk about why things are bigger than other things instead of saying it's because of some law of nature?
I see what you're saying, but it sounds like you haven't really internalized the principle, because you're viewing it as a tautology rather than the useful heuristic it is. You can and should still have the probing discussions, but having 80/20 as a rule of thumb will actually help you know where to probe. So you might want to give it another look.
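To make "where to probe" concrete, here's a minimal sketch (my own, with entirely made-up bug counts): sort the candidate causes by size and walk down the list until you've covered 80% of the total.

```python
# Hypothetical bug counts per module (entirely made-up numbers).
bug_counts = {
    "parser": 120, "auth": 85, "ui": 30, "billing": 22,
    "search": 15, "export": 9, "admin": 6, "logging": 4,
    "email": 3, "docs": 1,
}

total = sum(bug_counts.values())
running = 0
hotspots = []
# Walk modules from most to least buggy until 80% of all bugs are covered.
for module, count in sorted(bug_counts.items(), key=lambda kv: -kv[1]):
    running += count
    hotspots.append(module)
    if running / total >= 0.8:
        break

print(f"{len(hotspots)}/{len(bug_counts)} modules account for "
      f"{running / total:.0%} of bugs: {hotspots}")
```

The split is rarely exactly 80/20 (here 4 of 10 modules cover 87%); the useful part is that skewed distributions make the head of the sorted list a good place to start probing.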
> I am naturally quite a chaotic person, and I have trouble helping other people create order when I'm already struggling to keep an eye on all the streams that exist in the team. Any tips would be appreciated.
There's so much to say about this topic that it's hard to give many pointed tips. But my advice would be: 1) Read far and wide about product management (blog posts like this one, books, research, etc.), and especially focus on first principles. 2) Experiment with your team! And 3) *do not* think that doing one of those weekend certification courses or seminars or whatever is going to help much. Points 1 and 2 will give you far, far more insight.
Re: the Pareto principle, I had never actually looked at what the principle means. "80% of the consequences come from 20% of the causes" is quite useful; as I understand it, it means "don't assume that the causes of consequences are evenly distributed".
I think I often see it misunderstood the way I misunderstood (misunderstand?) it: a sentence like "As with most things, there is the 80/20 rule (Pareto again); exceptions are sometimes warranted in important situations" just says "you should stick to the rule about 80% of the time".
Re: advice, thanks for taking the time! I hadn't considered research, but luckily I already agree with 3) :P
> I am naturally quite a chaotic person, and I have trouble helping other people create order when I'm already struggling to keep an eye on all the streams that exist in the team. Any tips would be appreciated.
For tracking my own work, David Allen's "Getting Things Done" (GTD) was a huge influence on me (though it can be a bit cult-like, and many people get too distracted optimizing their GTD system instead of actually getting things done).
As for managing a team's work, two important tenets of scrum that I think many teams have a hard time sticking with are limiting work in progress and holding retrospectives. Limiting the work in the sprint backlog and on each team member's plate during the sprint is critical for actually getting work across the finish line (see the sketch below). Retrospectives can be hard, especially when people are feeling pressure to keep moving forward, but retrospectives are how a team optimizes and gels, and that matters most at exactly those times.
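One standard way to see why limiting WIP matters (not from the comment above, just Little's law from queueing theory): average cycle time = average WIP / average throughput. A minimal sketch with made-up numbers:

```python
# Little's law (a standard queueing identity; the numbers below are made up):
#   average cycle time = average WIP / average throughput
def avg_cycle_time_weeks(wip_items: int, throughput_per_week: float) -> float:
    return wip_items / throughput_per_week

# Same team, same throughput: halving WIP halves how long each item lingers.
print(avg_cycle_time_weeks(20, 5))  # -> 4.0 weeks per item on average
print(avg_cycle_time_weeks(10, 5))  # -> 2.0 weeks per item on average
```

Throughput usually doesn't rise just because you start more things in parallel, so the practical lever for shortening cycle time is the numerator: WIP.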
apenwarr's (long) "An epic treatise on scheduling, bug tracking, and triage" is a great resource: https://apenwarr.ca/log/20171213
> Can we talk about why things are bigger than other things instead of saying it's because of some law of nature?
It's the opposite of a law of nature: it generally doesn't apply to nature, which is bounded by physical constraints.
It happens in artificial fields like economics, software engineering, etc. I'm not sure if there's a general principle, but I've heard an intuitive explanation for why it happens in software engineering: it's a recursive field. Software is built from smaller software, which is built from even smaller software, and so on. This allows almost unbounded complexity (as opposed to, e.g., houses, which are not built from tiny houses ad infinitum).
With unbounded complexity comes fat tails, and therefore the Pareto principle.
Edit: well, thinking about it, it does apply to some things in nature, like the sizes of bodies of water.
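A quick simulation of the fat-tail point above (my sketch, not the commenter's): draw "sizes" from a bounded distribution and from a heavy-tailed one, and compare how much of the total the largest 20% account for. `random.paretovariate` is in the Python standard library; shape 1.16 is the textbook value for which a Pareto distribution yields roughly an 80/20 split.

```python
import random

random.seed(0)  # reproducible run

def top20_share(samples):
    """Fraction of the total contributed by the largest 20% of samples."""
    ordered = sorted(samples, reverse=True)
    return sum(ordered[: len(ordered) // 5]) / sum(ordered)

n = 10_000
# Bounded, nature-like variation: uniform between 1 and 10.
bounded = [random.uniform(1, 10) for _ in range(n)]
# Heavy-tailed variation: Pareto with shape 1.16, the value for which
# the top 20% hold about 80% of the total.
heavy = [random.paretovariate(1.16) for _ in range(n)]

print(f"uniform: top 20% hold {top20_share(bounded):.0%} of the total")  # ~33%
print(f"pareto:  top 20% hold {top20_share(heavy):.0%} of the total")    # ~80%
```

With the bounded distribution the top 20% can't hold much more than about a third of the total; with the heavy tail they hold roughly 80%. The Pareto principle falls out of the tail behavior, not out of any law of nature.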