There are several different options. The most obvious is an independent line of authority delegated from the state's legislature or governor. This is where the police get THEIR authority. Other options include close supervision by the courts, private rights of action (i.e., citizen lawsuits), strong FOIA-like transparency rules, etc...
Certainly, there's a libertarian argument made against requiring seat belt use, including mathematical economics models that can prove it... provided a few unstated assumptions are met--like rational agents, complete information, perfect competition, etc... Of course, the problem is that those assumptions aren't met.
If someone is more seriously injured in a car crash because they weren't wearing a seat belt, the cost of that is going to be borne by society at large in a variety of ways. A more serious accident is going to consume more police and EMS resources. The increased hospital and other medical costs will be shared by millions of people through (even private) insurance companies. If the person dies, the investment the public made in the person's education is lost, etc...
The response to these points from libertarians is typically to propose a system where someone can choose not to be bound by this rule and pay the price in increased health care/disability premiums or something. But these systems ignore significant information and transaction cost problems. The person makes a statement about whether or not they want to be bound by a seat belt rule, but how is that information verified? (People lie...) If it's just a contract term, not a law, and someone is pulled over for speeding while not wearing a seat belt, do the police ticket the driver for violating the contract? If someone who says they want to follow the seat belt rule is unlucky enough to forget it the day they get into an accident, do we just let them die? If a crash victim comes into the hospital unconscious, do they verify seat belt contract compliance with the insurance company before spending money on life-saving treatment? Or do they treat the person only if they'll be able to pay out of pocket? The whole system is full of costs, complexities, and moral hazard--it's a mess.
It's SO much simpler and less costly just to have a seat belt law that applies to everyone. Sure, a few people will find it a little paternalistic and annoying, but the other way is a lot worse. We might be willing to deal with a more complex and expensive system if the rule is very intrusive or highly restrictive of personal liberty. Being able to drive without seat belts isn't worth it.
Playing Go at superhuman levels, or playing StarCraft above diamond level; not sure about Dota, but I wouldn't be surprised if RL outperformed hand-crafted AI there as well.
Isn't that the sort of thing you thought of as a kid when thinking of AI, rather than "here's another way to serve ads and make hopefully-not-racist financial decisions"?
Tokamak control has seen some RL experiments, but it is definitely not the only approach.
AlphaZero is… not quite a practical application.
I am a big believer in using AI to improve the world, but RL is just a direction that isn't yet sufficiently mature. I think it still needs its convnet/transformer moment.
Kids are dumb. The real value of machine learning is mundane business decisions that used to rely on poorly informed gut feelings or painstaking manual effort.
Hi! Sadly I do not, it was a bit too rough on my body. But I still work out, and a lot of what I learned at CrossFit is still useful today. But I'll never go back, ha.
Likely because bots and troll farms are totally out of control, and they don't have a more accurate way of fighting it. IMHO, this is also the reason behind some of the heavy-handed moderation on some subreddits. AFAICT, as bad as it is already, this is only going to get worse as (1) NLP AI gets better and (2) there remains no positive defense against sybil attacks.
Just curious... I'm assuming people have already thought of doing a pseudonymous web of trust--or is that somehow inherently impossible even ignoring the old spam-solution form checkbox for universal adoption?
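For what it's worth, the basic mechanic of a pseudonymous web of trust is easy enough to sketch; the hard parts are key distribution, bootstrapping, and (as noted) universal adoption. A toy model under assumed simplifications (all names and the decay rule are hypothetical; in a real system, nodes would be public-key fingerprints and edges signed trust statements):

```rust
use std::collections::HashMap;

// Toy web of trust: compute each pseudonym's trust score as seen from
// `root`, where trust decays multiplicatively along each edge and the
// best (highest-score) path wins.
fn trust_from(root: &str, edges: &HashMap<&str, Vec<(&str, f64)>>) -> HashMap<String, f64> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    scores.insert(root.to_string(), 1.0);
    let mut frontier = vec![(root.to_string(), 1.0)];
    while let Some((node, score)) = frontier.pop() {
        if let Some(outs) = edges.get(node.as_str()) {
            for (next, weight) in outs {
                let s = score * weight; // trust decays at each hop
                let entry = scores.entry(next.to_string()).or_insert(0.0);
                if s > *entry {
                    *entry = s;
                    frontier.push((next.to_string(), s));
                }
            }
        }
    }
    scores
}

fn main() {
    let mut edges = HashMap::new();
    edges.insert("alice", vec![("bob", 0.9)]);
    edges.insert("bob", vec![("carol", 0.8)]);
    let scores = trust_from("alice", &edges);
    println!("{:?}", scores.get("carol")); // roughly 0.72 (0.9 * 0.8)
}
```

The sybil problem shows up in the edge weights: an attacker can mint a million pseudonyms, but none of them get nonzero scores unless some real person signs a trust edge to them, which is the property that makes the idea attractive in theory.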
> AFAICT, as bad as it is already, this is only going to get worse as (1) NLP AI gets better and (2) there remains no positive defense against sybil attacks.
I strongly agree. The future of public discourse online looks dim. At that point, on one hand, platforms will certainly require more Personally Identifiable Information, which harms the privacy of regular users. Meanwhile, trolls and bots run by various malicious actors will be able to dominate and control many conversations on an unprecedented scale. One would have neither privacy nor security; the web envisioned by digital utopianists as a free, open platform will be completely dead.
100 years ago, the pioneers of aviation believed the invention of the airplane would be a force to end all wars, because it would break the power asymmetry on the battlefield. But we got massive aerial bombardments of civilians instead.
> It was an idea shared by many after the inception of flight. War would become practically impossible, the brothers thought, because the scouting done by aircraft would equalize opposing nations with information on each other's movements, preventing surprise attacks.
> "We thought governments would recognize the impossibility of winning by surprise attacks," Orville said in 1917, "and that no country would enter into war with another of equal size when it knew that it would have to win by simply wearing out its enemy."
> Two years before, he had declared: "The aeroplane will prevent war by making it too expensive, too slow, too difficult, too long drawn out."
> "Yes, we thought it might have military use - but in reverse," said the 76-year-old inventor, whose brother had died at age 45 in 1912. "Because the men who start wars aren't the ones who do the fighting, we hoped that the possibility of dropping bombs on capital cities would deter them."
100 years later, the pioneers of the web believed the invention of the Web would be a force for a free society, because it breaks the power asymmetry on information between individuals and the state. But we are now getting massive manipulation from all types of malicious actors.
> "Information," Gage [John Gage from Sun Microsystems] answers, matter-of-factly. "What stopped the Vietnam War was that we told the truth about what was happening. Today, the truth-telling mechanisms that we can put in people's hands are a million times more powerful," he says, wending through the ghetto fringe of East Palo Alto. "And when every person on the planet has access to that power - which is what I'm trying to do - then watch what happens."
Yes, many people now have access to that power, but some, like state actors (or just plain old advertisers), now have orders of magnitude more power.
If this is intended to make a §230 argument, consider that its raison d'être is the complete impracticality of pre-clearing user-generated content.
The practical considerations differ by orders of magnitude when we're talking about paid advertisements.
> the philosophy and sociology of science are a VERY different thing from "the humanities".
I'm not sure that they are.
But my point is simply that accusing scientists of not being self-reflective is silly, accusing them of ignorance of history is silly, and for this piece to be elevated beyond the level of polemic would require something, anything, in the way of persuasion that its assertions were true.
Variables go "out of scope" (in at least one sense) at their last use, but are not dropped (de-allocated, etc...) until the end of the function. The difference is important because of Rust's rule against simultaneous aliasing and mutability. Consider this example:
    fn main() {
        let mut a = 1;
        let b = &mut a;
        *b = 2;
        println!("{}", a); // prints "2"
        // *b = 4; // If this line is uncommented, compile-time error.
    }
Because b is a mutable reference to a, a cannot be accessed directly until b goes out of scope. In this sense, b goes out of scope the last time it's used. _However_, AFAIK, b isn't actually dropped until the end of the function.
Of course, it doesn't matter in this trivial case, because b is just some bytes in the current stack frame, so there's nothing to actually de-allocate. But if b were a complex type that _also_ had some memory to de-allocate, that wouldn't happen until the end of main(). And in that case, b's borrow of a would also last until the end of main, which is kind of like adding that commented-out last line back in...
This can be seen in the following example, where b has an explicit type:
    struct B<'a>(&'a mut i32);

    impl<'a> Drop for B<'a> {
        fn drop(&mut self) {
            // We'd still have a mutable reference to a here...
            // If B owned resources and needed to free them, this is
            // where that would happen.
        }
    }

    fn main() {
        let mut a = 1;
        let b = B(&mut a);
        *b.0 = 2;
        std::mem::drop(b); // Comment this line out, get a compile error
        println!("{}", a); // prints "2"
    }
In this example, without the std::mem::drop() call, B's Drop implementation (i.e., its destructor, B::drop) would be implicitly called at the end of the function. But in that case, B::drop() would still hold a mutable reference to a when println! runs, which makes the println call produce a "cannot borrow `a` as immutable because it is also borrowed as mutable" compile-time error.
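As an aside, an inner scope works just as well as std::mem::drop here, since b is dropped at the closing brace. A minimal variation of the same example (same B type as above):

```rust
struct B<'a>(&'a mut i32);

impl<'a> Drop for B<'a> {
    fn drop(&mut self) {
        // B still holds the &mut borrow of `a` when this runs.
    }
}

fn main() {
    let mut a = 1;
    {
        let b = B(&mut a);
        *b.0 = 2;
    } // b is dropped here, ending the mutable borrow of `a`
    println!("{}", a); // prints "2"
}
```

std::mem::drop is famously just `fn drop<T>(_x: T) {}`--moving the value into the function ends its scope early, which is all the inner block is doing too.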
In other words, this "going out of scope at last use" is really about Rust's lifetime system, not memory allocation.
IMHO... this is one of the rough edges in Rust's somewhat steep learning curve. Rust's lifetime rules make the language kind of complicated, though getting memory safety in a systems programming language is worth the trade-off. There's a lot of syntactic sugar that makes things a LOT easier and less verbose in most cases, but the learning-curve trade-off for _that_ is that, when you _do_ run into the more complex cases the compiler can't figure out for you, it's easy to get lost, because there are a few extra puzzle pieces to fit together. Still way better than the foot-gun that is C, though. At least for me... YMMV, obviously.
I'm guessing it's the shift from advocating for equality to advocating for diversity, e.g., Harvard admissions, where Asians and Whites have a handicap in an effort to increase diversity.
Diversity in, e.g., higher education admissions isn't just about what's fair to individual applicants; it's also about what's best for the university and its students.
Universities are naturally xenophilic--both in their desire to understand more about the world and in their desire to influence it. Exploration and openness to new experiences and ideas are also important intellectual values to develop in students. All of these are enhanced by a diverse student body.
As a more concrete example... $IVY_LEAGUE doesn't gain that much by having its average standard test score go from the 99th to the 99.5th percentile. From the university's perspective, it gains more by admitting students who spread the influence of the university's ideas, enrich and enlarge context in classroom discussions, etc...
> As a more concrete example... $IVY_LEAGUE doesn't gain that much by having its average standard test score go from the 99th to the 99.5th percentile.
Assuming accurate tests and a normal distribution of scores, shouldn't that be approximately equivalent to going from the 80th to 90th percentile?
Given your assumptions, the difference between the 80th and 90th percentiles is about twice as big as the difference between the 99th and 99.5th percentiles.