This is a dumb headline. They probably got a special use permit, as the article suggests in an easy-to-miss paragraph.
DJI has been in this game a long time. They're already on thin ice with US regulators. These places are instantly recognizable and they're the preeminent commercial drone company. There's 0 chance that they did this illegally.
Also, they may very well have contracted with a third-party company known for cinematics, which in turn got the right permits, since such a company would be accustomed to that.
It's preference. I think the cameras on the non-pro iPhones are so ugly -- especially the diagonal design. The pro cameras look OK to me. I can't unsee my old college stove when I look at them, but I don't think it's too bad.
I, too, am biased but prefer Pixel's camera layout. Visually, I like the symmetry of the camera bump on the back of the device. Functionally, the symmetrical bump means the device will not rock on a table and it's a nice place to rest your finger and support/handle the device. A design decision that's unique and has some (small) utility.
Tier list:
Good: Pixel line, any phone with no camera bump
Ok: iPhone Pro
Bad: Samsung's many iterations, iPhone two-camera vertical layout
Horrible: iPhone two-camera diagonal layout
The whole idea of needing a "camera bump" is sort of a ridiculous design choice; just use the extra two millimeters for more battery. It's almost as goofy as the "notch".
> Visually, I like the symmetry of the camera bump on the back of the device. Functionally, the symmetrical bump means the device will not rock on a table and it's a nice place to rest your finger and support/handle the device.
Is anyone actually using a smartphone without a cover? A cover pretty much negates whatever camera bump the phone has.
For folks looking for other options, I bought this cheap server raceway: https://www.amazon.com/dp/B00008VFAP. Super nice to run the cables into the slots and then let the mess live inside. I screwed one part into the desk and cut the cover into four pieces so I can remove just one piece if I need to move cords around or whatever.
I 3D printed a few pieces for power supplies and a power strip, but for the most part all cables go into the raceway. I had something similar to this original "underware" at some point, but it was a pain in the ass whenever I changed cables. So many extra holes in the bottom of my desk now.
This project seems to be building on top of Multiboard [0], which is a lattice that is used as a base to mount all those runners and equipment holders on.
So the idea is that you put a lattice up with "a few" screws and then you can change everything else underneath without having to screw around any further.
Do you have a link to openCore? There's a rather unfortunate name collision with a Hackintosh bootloader that's preventing me from finding any info on it no matter which search engine I use.
Which has a completely untenable license. I was interested in these cable holders, but I can’t reasonably print anything that would require me to agree to the multiboard license.
Bookmarks, history, generally historical reliability, and (biggest reason for me) password manager.
I rarely have to type/remember passwords anymore on Android or web and it "just works". I know there are password managers out there that ostensibly handle the password-saving thing and are browser-agnostic but when I tried it in the past I had issues on some sites and, when it did work, it felt clunkier.
> multiple projectors could be used to increase brightness.
Done in at least one early military large-screen display system, using six CRTs (RGB x 2) aimed at the screen. If one failed, overall brightness dropped, but you still had an image.
The USAF put a huge amount of effort into early large-screen displays.[1] Not very successfully.
The most successful projection display before LCDs was the Eidophor (1949-2000).[2] Big, expensive, complicated, high-maintenance, but reliable. It worked by distorting an oil film in a vacuum with an electron beam. Its biggest buyer was NASA's mission control center. In its final 1990s form, it was bright enough to be used in stadiums.[3]
Agree with your thoughts. To add - I've taken my Model Y on many roadtrips and have been to many superchargers. A solid number _are_ at restaurants/shops/etc. Some aren't -- but there are so many chargers that usually you can alter the planned route to include a longer charge wherever you want.
Not only will batteries get bigger, but chargers will get faster. Most of my stops now are 10-15 min, so often there's no real need for any side-questing. Tesla recently added a supercharger-specific leaderboard for their in-car Mario Kart clone, which is super cool. I think we'll see some growth in that kind of thing, but the market is obviously much smaller than for gas stations.
FSD is getting so good so fast. The difference between 1 year ago and now is night and day. It's a godsend for road trips and it amazes me with each passing month's improvements.
It's not perfect and people shouldn't expect that. But I don't understand how anyone experiences FSD and isn't amazed. It's not unsafe -- if anything, my interventions are because it's being _too_ safe/timid.
Weather forecasting isn't perfect. But it's pretty good! And it's getting better! Just because weather forecasting isn't perfect doesn't mean I won't use it and it doesn't mean we should stop improving it.
Is that it? Or is it that we read articles like this one, based on actual data from real-world testing and evaluation, and recognise that the risk of failure is probably beyond what most of us are willing to tolerate?
That is a weird conclusion based on no evidence other than my comment. How do you differentiate what I wrote from what a reasonable person would do in similar circumstances?
The problem is that, as it stands today, FSD requires constant human attention to catch its eventual mistakes. Humans are notoriously bad at that, and the promises of FSD make drivers even more oblivious to their surroundings, ultimately worsening this issue.
1 hour of video is a lot of bytes. But 1 hour of driving doesn't contain much of the total experiential set possible when driving. Generative AI can only really interpolate between things it has been trained on; it can't extrapolate. This feels like an infinitesimal amount of driving data.
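For a rough sense of scale (the bitrate here is my own assumption, not a number from the article; something dashcam-like around 8 Mbit/s):

    # Back-of-envelope: storage for one hour of video at an assumed
    # 8 Mbit/s (roughly 1080p dashcam quality; purely illustrative).
    bits_per_second = 8_000_000
    seconds_per_hour = 3600
    gigabytes = bits_per_second * seconds_per_hour / 8 / 1e9
    print(f"{gigabytes:.1f} GB per hour of video")  # ~3.6 GB

So an hour really is a lot of bytes, while still being a sliver of the space of driving situations.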
That said, a POC doesn't have to be production-worthy. In the hands of a major automobile company, perhaps this tech is profoundly powerful. Or perhaps it's a single model in an ensemble. Regardless, it's going to be interesting to see where this goes.
Generative AI can absolutely extrapolate; that's the whole reason it works.
The whole point of machine learning is to derive the underlying rules relating your input data. Extrapolation is just extending where you follow those "curves" beyond the bounds of known data points.
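A minimal sketch of that view, as a toy curve fit (the function, noise level, and polynomial degree are arbitrary illustrative choices, not anything from this thread):

    import numpy as np

    # Fit a cubic to noisy samples of sin(x) on [0, 2*pi], then follow
    # the learned "curve" past the edge of the training interval.
    rng = np.random.default_rng(0)
    x_train = rng.uniform(0, 2 * np.pi, 200)
    y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)

    model = np.poly1d(np.polyfit(x_train, y_train, 3))

    print(model(np.pi))      # inside the training range: roughly sin(pi) = 0
    print(model(4 * np.pi))  # outside it: the cubic shoots off, where sin would not

Whether that second evaluation counts as "extrapolation" or as "blindly extending an interpolated curve" is exactly the disagreement in this subthread.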
Oh, but it can't; with a sufficiently complex vector space it can merely seem like it can. What looks like extrapolation is interpolation in the semantic vector space, particularly in the transformer/attention model. This is a key difference between human intelligence and current AI: it's not able to "create" and see beyond what it's been trained on. Any approximation of that is simply indicative of a very complete training set, one powerful enough to fool people with its expectation-based inference. But when you dig into the details of cutting-edge work you're an expert in and ask conceptual questions that extend beyond the semantic corpus embedded in its vector space, it will hallucinate, or, if well fine-tuned, admit lack of knowledge, because the best it can do is interpolate within its own semantic vector space.
But listen, I'm a big buyer of generative AI; what it does is incredible. But it's useful not to ascribe more power to a tool than the math allows.
And there are very few machine learning algorithms that do extrapolation at all with any precision. Generally they project an expectation, often of some complex, high-dimensional, nonlinear system, which is amazing, but when they are confronted with a novel input pattern they are thrown off. The issue is that they're at their core probabilistic systems, and if the data undergoes a regime change that's unexpected, the model will misbehave and output garbage.
Enlighten us, then: how does a generative AI model behave when confronted with data outside its training space? Where in the model does it allow the vector space to extend dynamically, based on some other process, to adapt to regimes never seen before? Or does it necessarily construct its response by sampling the vector space and, in the case of transformers, applying attention/self-attention to boost or dampen dimensions based on the semantic context? Extrapolation means being able to extend your decision space into new areas through synthesis and creativity; interpolation means walking within the trained vector space of the model. Clearly, generative AI models as implemented today can't extrapolate and always interpolate.
I think the confusion comes from the idea that you can take a regression or expectation, extend it into the future, and call that extrapolation. It isn't; it's still interpolation. You're interpolating between a and a' using the same function. Extrapolation takes the new regime and data, plus your existing training, and adapts a new behavior. We don't really understand how humans do this, and we don't have any machine learning models that can.
To be clear, again, I'm not pooh-poohing ML or generative AI. I think it's the most powerful thing we've created with computers so far. But it's far from general intelligence, even if it's a necessary part.
>Enlighten us, then: how does a generative AI model behave when confronted with data outside its training space?
It behaves just fine.
>I think the confusion comes from the idea that you can take a regression or expectation, extend it into the future, and call that extrapolation.
Congratulations, you've just defined extrapolation. Someone is definitely confused here but it isn't me.
Of course you can make any claim about what something can or can't do when you make up your own definitions.
There are many, many clear examples of a language model extrapolating. Rather than accept this, you've opted to conjure up vague and meaningless definitions and distinctions on the fly.
This is so simple to see: untestable definitions are meaningless. Please give us a test of "extrapolation" that all humans can perform, and let's see how the language model does. You won't be able to, but by all means, give it a go.
>Extrapolation takes the new regime and data, plus your existing training, and adapts a new behavior.
By this metric, a large number of humans can't extrapolate either. In fact, if you imagine your first paragraph were written about humans, it lines up pretty well.
Except all humans can extrapolate, even if they don't. Current generative models fundamentally cannot, even if you want them to.
However, I would hold that I can prove you're wrong. Have you ever seen a human play make-believe when they're young? Draw? You're judging humanity by the post-indoctrination crushing of the soul for profit. But every human being, no matter how rigid and unthinking as an adult, was a creative genius at age four.
I wouldn't go this far, but I would say the "a lot of humans can't either" argument in LLM convos is a bit worn by now. Where it's true (hallucinating at the edges of certain knowledge, solving math and logical reasoning through approximation and "most likely" thinking) and where it's not, it's all been said many times already.
The key, though, is that in these things "most humans" isn't a very useful comment when the discussion is "all AI." The comment, even if true, acknowledges that some humans do exist who can; it doesn't refute that all AI don't, so it doesn't advance the discussion much. In a parallel comment I pointed out that all humans can extrapolate even if they don't appear to, and further asserted that all humans have even if they don't do so currently or consistently, so humans and generative AI exist as distinct classes in this space of thinking and reasoning.
Huh? I disagree with the premise, and explained why.
Reading over your comments for the last few days, you seem consistently aggressive. If you need to vent to someone about something, you can DM me. Happy to just listen.
If you mean when people who are being dicks make obvious glaring mistakes and can't handle when they're pointed out, I think the word you're looking for is "impatient". This community's standards are higher than the way you're participating. Have some dignity and bring your best side forward, not this petty sass.
You know they're talking about the fundamental differences in learning between advanced, specialized biological systems (humans) and relatively rudimentary digital ones (LLMs). You're not explaining your disagreements; you're just demonstrating your disdain for some implied lower-quality humans. That's called "being a dick".
What on earth are you on about? You seem to have unilaterally declared that I was being sassy, when in fact this exists nowhere other than your own head. Then you run around like a sheriff on a power trip ranting about protecting the community from us sassy brats.
Brass tacks: you need to stop what you’re doing, or you’re going to get yourself penalized by the mods. They have a duty to protect the intellectual curiosity of the site. Trust me when I say it’s no fun to be in the penalty box.
You can start by re-reading the guidelines and paying particular attention to "don’t cross examine," along with realizing that it’s not okay to be calling people a dick multiple times when they’re engaging in good faith.
Your call. Either way, I wash my hands of you and this conversation.
"a large number of humans can’t extrapolate either" demonstrates that you're operating under the belief that there are people whom you consider as less-than. What's good faith about that? You don't really have the high horse you think you do here.
You know, you're not the first one to invent and deploy this plausible-deniability, just-being-civil style of sass. Maybe I'm the first one to call you out on it as transparent, though.
It's ironic that your accusations are followed immediately by your playing mods' deputy. Don't worry, they already know me, and I know the limits past which they choose to intervene.
Again, I hope you can understand that self-awareness and good faith are pillars of fruitful conversation here. It helps no one to avoid acknowledging the kinds of rhetoric and tactics you engage in. Good faith entails understanding the fundaments of the belief systems you portray and propagate, or at least being humble when you don't.
At no point have you made a coherent reply to anything that I’ve said. You haven’t explained your position. You haven’t explained what you’re arguing against. And I have not one clue what you could possibly mean by “people whom you consider as less-than." And I don’t care to know, because you’re off in la la land fighting the good fight against demons that exist only in your dreams. I’m trying to snap you out of it. Far from being on a high horse or playing mods’ deputy, I was trying to look out for you as one community member to another.
You are speaking to someone who has been active in this community since day two of its public launch, back when it was called startup news. Normally I don’t appeal to authority, but I’m hoping that whatever delusion you’re under will be dispelled by the realization. When you say I’m not operating in good faith, not only is that mistaken, but it’s plainly mistaken. You can ask any of the hundreds of people I’ve engaged with over the years whether I have ever once done what you seem convinced I’m doing here.
Since self preservation doesn’t seem to rank highly on your priority list, I urge you to chill out before you get to know the mods a lot better than you currently do. Because my instincts are screaming “this person is on a collision course with the mods, and they’ll be busting out the paddle sooner or later." Just because they don’t notice you breaking the rules doesn’t mean diddly-squat. It just means they’re not omniscient, and if you keep rolling the dice like this you’re going to hit snake eyes sooner or later.
I’m going to sleep. I have a newborn to care for. I’ve tried to help you as much as I can. I genuinely wish you the best of luck, and I hope you’ll stop going around poisoning otherwise interesting conversations with baseless accusations. It’s a huge distraction, an emotional drain, and does a lot more harm than whatever you think I was doing above.
It’s fine to disagree with someone. Running around calling them a sassy dick three times in a row isn’t disagreeing. One tip to avoid such comments is to wait until you feel curious about something rather than fulminating. It works for me at least.
Thanks for inspiring me to use Sassy Dick as my porn name if I ever get into the industry though.
Something you appear to be confused about is that I'm not disagreeing with your opinion. I am calling your opinion morally abhorrent by disagreeing with its very premise.
You don't have to worry yourself about my preservation.
A more productive use of your time would be to focus more on actually understanding the details of the topic at hand, then maybe you'll comprehend a little better the anti-humanistic nature of your (evidently so implicit as to be invisible to you) bias. To repeat myself: self-awareness is mandated in good-faith discussions.
It can to a point, but it can’t to the extent that humans can with sufficiently complex problems.
Deep learning models like this can theoretically approximate pretty much any problem that can be expressed as a function.
It’s entirely possible that there just doesn’t exist a function from visual data (maybe even including LIDAR and RADAR etc) to correct driver decisions.
Humans can also intuit the behavior of other humans to an extent, even while driving (knowing that someone who is driving erratically is probably fucked up and will be dangerous to stay near). Kind of like a really shitty gossip protocol.
It can only approximate a function where it has seen data in the local feature spaces of that function. For anything it hasn't seen features for, it will do some maladapted interpolation through the feature space it has been trained on. It can't be creative or synthesize a novel technique based on more abstract reasoning over the new regime; it literally must fit its past observations to the new regime as best it can. Humans certainly do that too, but they are also able to step back and synthesize completely new behaviors given completely new data, not just adapt old behavior because some optimization function says that behavior is most appropriate in the new situation.
People are confused because interpolation is actually fairly powerful and is often entirely sufficient. Especially with GPT-4: the model is trained on such a large and varied corpus that it handles many things well, even unexpected things, and at times seems like it is extrapolating. But it still hallucinates, and hallucinations are the most obvious symptom of its inability to extrapolate. It's just fitting within its trained vector space as best it can.
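A one-dimensional caricature of that failure mode (np.interp is a pure interpolator, so treat this as an analogy, not a claim about how transformers work internally):

    import numpy as np

    # np.interp never leaves the span of its known points: queries past
    # the last sample are clamped to the nearest endpoint, a toy version
    # of a model snapping unfamiliar input onto the nearest region of
    # its training distribution.
    x_known = np.linspace(0, 2 * np.pi, 50)
    y_known = np.sin(x_known)

    print(np.interp(np.pi / 2, x_known, y_known))   # in range: ~1.0
    print(np.interp(10 * np.pi, x_known, y_known))  # far out of range: ~0, i.e. sin(2*pi)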
But ... all this goes for humans too! Is the argument that we should outlaw driving altogether, in all possible forms?
One famous example of that is how to react correctly when the car starts to slip due to speed, braking, or driving on water, mud, sand, snow, or ice. I think everyone knows people's reflex is to slam the brakes and start wildly turning the steering wheel, which only results in total loss of control over the vehicle. Is anyone demanding drivers learn to correctly handle cars or other vehicles under these circumstances? There are only very minimal efforts, because it is completely impractical to teach so many humans better driving practices. So we just accept the flaws ... and the constant stream of victims this generates.
Reality: Yes, a driving AI is not ready for all possible situations. It just isn't. It will never be. Is that a problem?
Also reality: Humans drive drunk. Humans drive while under the influence of drugs. Humans drive trucks near kids when they're so tired they can't keep their head lifted up. And roads are full of dead cats, squirrels, mice, ...
Also also reality: AI driving software can, after an accident, be taught to handle the situation that caused the accident, and the result of this learning process can then be uploaded to all instances of this software. Humans will keep making the same mistakes, with the same consequences, over and over and over again. Perhaps there is very slow improvement (mostly by modifying roads), but it takes decades.
Practical view: I have driven around in Mountain View next to self-driving cars. One thing's for sure: self-driving cars behave much better than human drivers, including me. It's very irritating how well they behave on the road. If the roads have many self-driving cars, I'm pretty damn sure it'll result in far fewer accidents and lower transit times. Never mind that self-driving cars of course solve the parking problem. I don't get why people hate them.
And I hate this goalpost-moving you see everywhere, where AIs are compared to multiple top-performing humans. Of course, there are now cases where AIs have actually beaten groups of top-performing humans (translation, chess, Go, robot control, ...).
Your arguments don't really fit what was previously said.
Nothing was said against driving AI in general, just that 4700 hours of video seems low.
I also get that humans are pretty bad drivers, but isn't that exactly why we shouldn't use them as the baseline for AIs to compare to?
We are now at a point where we can set high standards for AI and get the best possible result: while it isn't feasible to have everyone spend a couple of years learning to drive better, a good AI only has to be trained once and can then be used by many, so we have the time.
And sure, it can be updated, but should we really trust companies to keep innovating once they are already allowed to have the AI in use? The incentive to do so is far bigger if they have to do it before they've gotten any money out of it.
Interestingly there’s another thread I’m in about generative AI where someone asserts “this goes for humans too” in a sort of similar vein.
However, that's not the case. Humans have the ability to extrapolate from their training data and synthesize new thought and behavior in situations not seen before, in ways that are fundamentally insightful and adaptive. Generative AI, and all machine learning I'm aware of, are fundamentally expectation-driven probabilistic models that synthesize high-dimensional nonlinear functions from samples of those functions, which means they can't adapt to new situations dynamically, extrapolate their experiences into new experiences, and make decisions that are novel and intuitive in a new regime.
When these models encounter situations or regimes they've never experienced, they interpolate in their learned space to the most likely behavior; but if the learned space has nothing similar in it, the behavior can become seemingly random and highly maladapted.
A classic example of this is LLM hallucination at the edge of their knowledge, which an expert in a field can induce by asking questions beyond the horizon of the field. Where humans might not have answers, they can pose interesting theories; LLMs really can't. If they appear to, it's simply because their training set is so massive that they can interpolate into babbling that sounds good. You could assert that humans do this too, and it's true they do at times, but at other times they don't, and they have novel insights beyond their experience. The fact that humans can do this sometimes, while AI, due to its internal structure, mathematically can't at any time, is the difference.
Another example is Go-playing AI. It can do really well against expert players until someone plays a nonsense series of random moves, at which point it begins to play worse than an amateur. You can do this with LLMs too: give them enough random nonsense or repeated strings and they leap into some random spot in their vector space and rant about weirdness. Even GPT-4 does this.
The answer isn't to outlaw driving or to stop pursuing AI driving assistants. It's to build models with an enormous, well-labeled corpus that covers almost every possible situation, but also to build in fail-safes that make it easy for a human to be called to attention and take control when things confuse the AI.
1 hour of driving isn't a lot of driving, or a lot of anything really. There are heaps of individual things that happen maybe once or twice in a lifetime that we're already somewhat equipped to deal with based on our other lived experiences. For example, seeing a ball bouncing towards the road and a child or dog chasing after it, I'm already starting to brake before they even approach the edge of the sidewalk. Or seeing a fast-moving wheel bouncing down the road: I know from watching dashcam footage on YouTube that you'd want to keep WELL clear of a 30+ kg obstacle with a whole bunch of angular momentum, because it will fuck up anything in its path. Good luck to an AI figuring out what's going on, or what the path of a single fast-rolling wheel is going to look like.
1) How many hours does a pilot need on a 'type' to fly unsupervised?
2) It seems kind of a meaningless unit here, because nobody said it was real-world, real-time driving hours. And even if it is, whether the hours were logged with the intention of finding 'interesting' scenarios vs. just A-to-B motorway driving would make an enormous difference.
(Ignoring ultralights and paragliders here, where you sometimes don't need a licence in some jurisdictions.)
So that is the straightforward answer you are maybe looking for.
Then we might say that all pilots are required to be at least 17 years old to obtain a private pilot licence. So that is 148,920 hours of pre-training in perceiving objects and movement, and coordinating one's actions with perception.
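(For anyone checking that figure, ignoring leap days:)

    # 17 years of lived experience, in hours, ignoring leap days
    print(17 * 365 * 24)  # 148920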
Then one might also say that one is required to be a human, and that comes with hundreds of millions of years of pre-training in which all of our ancestors were evolutionarily selected to be good at perceiving and moving. (At least good enough to survive until they could propagate their genes.)
Now this answer might come off as flippant, and maybe it is. What I'm trying to say is that it is hard to compare "training hours" directly between computers and humans, and it is hard because of these two things humans have: "pretraining by lived experience" and "pretraining by evolution".
Maybe. Maybe they incorporate a much larger corpus of knowledge gathered outside driving into everyday and emergency driving, and maybe AI drivers should too.
I've never understood this bundling. I listen to a lot of podcasts and a lot of music. I always know what I want to listen to. I don't need or want podcasts cluttering up my music app when I'm trying to play or discover music. I don't need or want music cluttering up my podcasts when I am trying work through my infinitely long podcast queue. It's never the case that I'm looking for _either_ a podcast or music to fill the silence.
Different use cases. Different audio mediums. Different apps.
YTD sales growth:
- Lucid: 23.4%
- Tesla: -4.3%
- Rivian: -13.2%
Lucid is the big surprise there to me (though only ~8K sold).