> I suggest that dataset bias is real but exaggerated by the tank story, giving a misleading indication of risks from deep learning
I don't see how this story gives a "misleading" view of deep learning. From my (admittedly limited) experience with self-driving RC cars, this type of mistake is quite easy for a neural net to make while being quite difficult to detect. In our case, after applying a visual back-prop method, we realized our car was steering by the overhead lights rather than the lane markings on the road.
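For readers unfamiliar with this kind of debugging: saliency-style methods (VisualBackProp is one of them) ask which input pixels most influence the model's output. The sketch below is not VisualBackProp itself, just a minimal finite-difference saliency check on a made-up toy linear "steering" model, whose weights are deliberately concentrated on the top row of the image to mimic a net fixating on overhead lights:

```python
import numpy as np

# Toy "steering" model: a single linear layer over a flattened 4x4 image.
# The weights are deliberately concentrated on the top row (the "overhead
# lights") with zero weight on the lower rows (the "lane markings").
rng = np.random.default_rng(0)
H, W = 4, 4
w = np.zeros((H, W))
w[0, :] = rng.normal(1.0, 0.1, W)

def steering(img):
    return float((w * img).sum())

def saliency(img, eps=1e-4):
    """Finite-difference |d steering / d pixel| map."""
    base = steering(img)
    grad = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            bumped = img.copy()
            bumped[i, j] += eps
            grad[i, j] = abs((steering(bumped) - base) / eps)
    return grad

img = rng.random((H, W))
s = saliency(img)
top_share = s[0].sum() / s.sum()
print(f"share of saliency in the top row: {top_share:.2f}")  # -> 1.00
```

On a real conv net you would backpropagate gradients rather than finite-difference every pixel, but the diagnostic question is the same: does the saliency mass sit on the lane markings, or somewhere else entirely?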
Now, you can refute this and say "well clearly your data wasn't extensive enough" or "your behavioral model is too simple for a complicated task like driving." However, as these tools become easier to use, more and more organizations will put them into practice with far less care than the researchers behind most of the current production efforts.
Contrary to this author's claims, and despite data augmentation and a fancy modern CNN, a neural network trained to identify whales hit a local optimum where it identified whales by patterns in the waves on the water rather than by the distinctive markings on their bodies.
I don't buy the "this isn't a problem in real world applications" argument being made in this article.
He says that his first attempt at whale recognition looked at waves instead of whales, but
> This naive approach yielded a validation score of just ~5.8 (logloss, lower the better) which was barely better than a random guess.
which is different from the tank story. For the tanks, the neural network appeared to perform well but was actually not looking at the tanks. Here, it never performed well, and when he debugged why, he found that it was not looking at the whales.
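The "barely better than a random guess" remark is easy to make concrete: a uniform guess over N classes scores exactly ln(N) in multiclass logloss. The class count below is an illustrative assumption, chosen so ln(N) lands near the quoted ~5.8; the post itself doesn't state the number:

```python
import math

def multiclass_logloss(probs, true_idx):
    """Average negative log-probability assigned to the true class."""
    return -sum(math.log(p[t]) for p, t in zip(probs, true_idx)) / len(probs)

# Assumed class count, chosen so ln(N) lands near the quoted ~5.8.
n_classes = 330
uniform = [1.0 / n_classes] * n_classes

# Every sample gets the same uniform prediction, whatever its true label.
probs = [uniform] * 100
labels = [i % n_classes for i in range(100)]

score = multiclass_logloss(probs, labels)
print(round(score, 3), round(math.log(n_classes), 3))  # -> 5.799 5.799
```

So a score of ~5.8 really does mean the model had learned essentially nothing discriminative yet.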
> I don't buy the "this isn't a problem in real world applications" argument being made in this article.
Me neither. Especially considering that this story was already circulating before the latest deep learning advances.
And even with a modern CNN approach, you would expect a model to learn a sunny/cloudy categorization far more easily than the nationality of a tank.
This story was repeated by professionals for ages because it is totally believable.
I assume you're referring to a simple lane-keeping CNN that predicts steering angle from a video recording plus human inputs. And yes, your dataset isn't extensive enough, and you'll never have enough data either. That's not due to some amusing bias in your CNN or to it taking shortcuts, but because driving is a reinforcement learning problem, not a classification problem: your RC CNN could learn a better model of the road that doesn't involve lights at all, and it wouldn't make any real difference. It would still be unable to correct for its errors or adapt to new situations, and it would crash.
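The "unable to correct for its errors" point can be sketched with a deliberately toy simulation: a cloned policy that matches the expert perfectly inside the demonstrated region but has no data outside it. All of the dynamics and numbers here are made up:

```python
# 1-D lane-keeping toy: position 0.0 is the lane center.
def disturbance(t):
    return 1.0 if t == 50 else 0.2   # constant drift plus one big gust

def expert(pos):
    return -pos                      # the expert always steers back to center

def cloned(pos):
    # Matches the expert inside the demonstrated region, but the training
    # data never showed large deviations, so out there it does nothing.
    return -pos if abs(pos) <= 0.5 else 0.0

def rollout(policy, steps=100):
    pos = 0.0
    for t in range(steps):
        pos += policy(pos) + disturbance(t)
    return abs(pos)

print(round(rollout(expert), 3))  # -> 0.2  (recovers right after the gust)
print(round(rollout(cloned), 3))  # -> 10.8 (one mistake compounds forever)
```

This is the classic covariate-shift failure of pure behavior cloning: one excursion outside the training distribution, and the policy never finds its way back.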
I did the human version of this when I was a newbie driver. I learned to predict traffic lights changing to red by watching the pedestrian signals as I approached an intersection. All the lights all over the city followed the same pattern. Then one day I happened upon one where the pattern was different, and I stopped for no reason at a green light, like an idiot.
> Page: I think they had a really hard time getting along [Levandowski and another employee], and yet they worked together for a long time. And it was a constant -- yeah, constant management headache to help them get through that.
> Questioner: Do you recall that Anthony Levandowski was put on a personal improvement plan before he left?
> Page: I don't recall that.
> Questioner: Do you recall that Mr. Levandowski wanted to be head of the Project Chauffeur team?
> Page: I mean, that does not surprise me.
> Questioner: Do you recall having conversations with him, where he said to you that he wanted to be head of the team?
> Page: I don't recall, but it wouldn't be surprising, you know. I think he clearly felt things could be done better.
We’re building a savings app for people who struggle to save money. How? We’re using a new form of investment called prize-linked savings (new to the US as of 2014). The simple explanation is that you trade part of your interest for the chance to win from a prize pool of everyone's interest.
As a software engineer at Long Game you’ll be joining a small team of engineers and will have full exposure to all aspects of our product development processes.
We’re looking for developers who enjoy building fun mobile UX and/or engineers with considerable finance experience.
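For anyone curious how the prize-linked mechanics described above shake out, here's an illustrative sketch (balances, rate, and pool share are all made-up numbers, not Long Game's actual terms). The key property: total interest paid out is unchanged; only its distribution across savers changes.

```python
import random

# Illustrative numbers only: 20 savers, hypothetical balances and rate.
random.seed(42)
balances = [1000.0] * 20
rate = 0.02            # hypothetical annual interest rate
pool_share = 0.5       # half of each saver's interest goes to the prize pool

interest = [b * rate for b in balances]
kept = [i * (1 - pool_share) for i in interest]
pool = sum(i * pool_share for i in interest)

# One saver wins the entire pool; everyone else keeps only their reduced share.
winner = random.randrange(len(balances))
payouts = [k + (pool if n == winner else 0.0) for n, k in enumerate(kept)]

# Total paid out equals total interest earned: the pool redistributes
# interest, it doesn't create or destroy any.
print(round(sum(payouts), 2), round(sum(interest), 2))  # -> 400.0 400.0
print(round(max(payouts), 2))                           # -> 210.0
```

Each saver trades a guaranteed $10 of interest for $10 plus a 1-in-20 shot at a $200 prize; the expected value is the same, the variance is what makes it feel like a lottery.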
I think I disagree. The hard part is understanding why we have a sense of "self", one that can independently perceive its place in the solar system and beyond.
The "colors, sounds, tastes, feels, etc" that you refer to can be seen as sensors that perceive narrow views of the various spectrums created by the energy bouncing around in the physical universe.
Examples:
- Hot vs. cold for a human is entirely different from hot vs. cold for a star.
- Color, from our perspective, is the spectrum of light that we need to utilize to effectively interact with our environment (different animals need and perceive different spectrums of light).
- Sound is a vibration that propagates as a [..] mechanical wave of pressure and displacement, through a transmission medium such as air or water. (taken from wikipedia.org/wiki/Sound)
I rode this last night. It has a flaw that I haven't seen mentioned elsewhere.
In the midtown area, the streetcar tracks are in the outermost lanes of the 6-lane street. This causes a problem at intersections because the Qline can easily be blocked by cars that are trying to turn right off of Woodward (the main road).
The problem gets worse when you consider that this rail line was built to incentivize more people to come downtown. More foot traffic will inevitably block more cars at crosswalks, which in turn slows down the streetcar.
I'm assuming they can attempt to fix this by limiting the number of legal right-hand turns or by timing the lights to stay green until the streetcar passes through, but I'm not sure there will ever be enough pressure to do so.
> Walmart sells the card for $1, and Green Dot charges the usual associated fees: $5 a month if your balance is less than $1,000; $2.50 for ATM withdrawals; etc.
This is absolutely not how one should encourage people to save. This is Walmart trying to steal business from local banks and/or provide banking services to people in rural locations. Personally, I think Walmart should offer this at zero cost to help their customers live with less financial vulnerability (and thus shop more at Walmart).
Full disclosure: I work at a startup that's using prize-linked savings to help encourage people to save money, but we don't charge a monthly fee and we actually give our users interest on top of what they win.
Ok, if the local banks are serving their communities well, then why would Walmart even have a market opportunity here? Clearly local banks aren't serving the community, or else Walmart's efforts would be redundant.
What I'm trying to say is that Walmart isn't actually providing much value over the local bank, as the consumer costs are astronomical compared to the average account balance. The whole notion that they're helping the person "save" while charging these fees is ludicrous, and yet that is what the article purports.
You are correct though, local banks aren't necessarily helping people save either, especially those who are living paycheck to paycheck.
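To put rough numbers on the fee complaint: the monthly and ATM fees below are from the quoted article, while the average balance and withdrawal count are illustrative assumptions.

```python
# Fees from the quoted article; balance and withdrawal count are assumptions.
monthly_fee = 5.00
atm_fee = 2.50
withdrawals_per_month = 2     # assumption
avg_balance = 500.00          # assumption: below the $1,000 fee-waiver threshold

annual_fees = 12 * (monthly_fee + atm_fee * withdrawals_per_month)
drag = annual_fees / avg_balance
print(f"${annual_fees:.2f}/year in fees, {drag:.0%} of a ${avg_balance:.0f} balance")
# -> $120.00/year in fees, 24% of a $500 balance
```

At those assumed numbers, the fees eat roughly a quarter of the balance per year, which no plausible interest rate comes close to offsetting.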