This is essentially what's happened with airliners.
Planes can land themselves with zero human intervention in all kinds of weather conditions and operating environments. In fact, there was a documentary where the plane landed so precisely that you could hear the tires hitting the runway centerline lights as it landed and then taxied.
Yet we STILL have pilots as a "last line of defense" in case something goes wrong.
No - planes cannot "land themselves with zero human intervention" (...). A CAT III autoland on a commercial airliner requires a ton of manual system setup, plus specially certificated aircraft and runways, in order to "land itself" [0][1].
I'm not fully up to speed on the Autonomi / Garmin Autoland implementation found today on Cirrus and other aircraft -- but it's not for "everyday" use for landings.
Not only that but they are even less capable of taking off on their own (see the work done by Airbus' ATTOL project [0] on what some of the more recent successes are).
So I'm not sure what "planes can land on their own" gets us anyway even if autopilot on modern airliners can do an awful lot on their own (including following flight plans in ways that are more advanced than before).
The Garmin Autoland basically announces "my pilot is incapacitated and the plane is going to land itself at <insert a nearby runway>" without asking for landing clearance (which is very cool in and of itself but nowhere near what anyone would consider autonomous).
Taking off on their own is one thing. Being able to properly handle a high-speed abort is another, given that it is one of the most dangerous emergency procedures in aviation.
Having flown military jets . . . I'm thankful I only ever had to high-speed abort in the simulator. It's sporty, even with a tailhook and long-field arresting gear. The nightmare scenario was a dual high-speed abort during a formation takeoff. First one to the arresting gear loses, and has to pass it up for the one behind.
There's no other regime of flight where you're asking the aircraft to go from "I want to do this" to "I want to do the exact opposite of that" in a matter of seconds, and the physics is not in your favor.
How's that not autonomous?
The landing is fully automated.
The clearance/talking isn't, but we know that's about the easiest part to automate; it's just that the incentives aren't quite there.
It's not autonomous because it is rote automation.
It does not have logic to deal with unforeseen situations (with some exceptions, like handling collision avoidance advisories). Automating ATC, clearance, etc., is also not currently realistic (let alone "the easiest part") because ATC doesn't know an airliner's constraints in terms of fuel capacity, company procedures for the aircraft, etc., so it can't just remotely instruct it to "fly this route / hold for this long / etc."
Heck, even the current autolands need the pilot to control the aircraft when the speed drops low enough that the rudder is no longer effective because the nose gear is usually not autopilot-controllable (which is a TIL for me). So that means the aircraft can't vacate the runway, let alone taxi to the gate.
I think airliners and modern autopilot and flight computers are amazing systems but they are just not "autonomous" by any stretch.
Edit: oh, sorry, maybe you were only asking about the Garmin Autoland not being autonomous, not airliner autoland. Most of this still applies, though.
There's still a human in the loop with Garmin Autoland -- someone has to press the button. If you're flying solo and become incapacitated, the plane isn't going to land itself.
One difference there: the cost of the pilots is tiny compared to everything else that goes into a flight. But I would bet the doctor's fee is a much bigger % of the cost of getting an X-ray.
In almost 20 years of working in FinTech at various banks, hedge funds, startups, etc., a lot of this rings true.
e.g.
- Critical path/flow diagrams [0] are incredibly useful for both laying out what has to happen in serial vs what can be parallelized. That being said, I've almost NEVER seen them used and 90% of the time they are used it's b/c I made one
- SO many important processes are not documented so people can't even opine about how to fix them. I once documented a process and everyone agreed step 4 was wrong. What was amazing is no one agreed on what step 4 actually was.
- Most of the big arguments I've seen about projects are less "what should we do" and more "when do we want it", e.g. one party wants it next week but another wants more features so it will take longer. [1] I've often dealt with this by using the following metaphor:
"Oh, so you want to move house every two weeks?
If you give me six months I'll build you the world's most amazing Winnebago/RV with a hot tub, satellite TV, queen size bed and A/C.
If you want it tomorrow I'm going to give you a wheelbarrow, pillow and an iPad."
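The critical-path point in the first bullet is easy to demo in a few lines. A minimal sketch with hypothetical task names and durations: the longest path through the dependency DAG is the minimum project length, and anything off that path can run in parallel without delaying delivery.

```python
# Critical-path sketch over a task DAG (task names/durations are made up).
# earliest_finish(t) = duration(t) + max earliest_finish over t's dependencies
from functools import lru_cache

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
deps = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    return durations[task] + max(
        (earliest_finish(p) for p in deps[task]), default=0
    )

# Project length = longest (critical) path. Here A -> C -> D: 3 + 4 + 1 = 8.
# B runs in parallel with A and never constrains the schedule.
project_length = max(earliest_finish(t) for t in durations)
print(project_length)  # 8
```

Even a toy version like this makes the serial-vs-parallel argument concrete in a planning meeting.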
> Critical path/flow diagrams [0] are incredibly useful for both laying out what has to happen in serial vs what can be parallelized. That being said, I've almost NEVER seen them used...
Making technically good decisions is one of a distressing number of domains where making any attempt at all will put someone a long way ahead of the game vs most people who wield power. Several advanced techniques that nearly nobody seems to do:
"Do we have evidence that this is a good idea?"
"What if we assume that we achieve the most likely outcome of this action, based on past experience and checking what happened when other people tried doing it? Is it a good idea?" [0]
"Assuming we just keep doing what we're doing, where will we be in 12 months?"
[0] Please someone get this one into the mainstream US debate next time they're trying to start a war.
> [0] Please someone get this one into the mainstream US debate next time they're trying to start a war.
Speaking as someone with 20 years in uniform and as a War College grad (if only by correspondence) . . . the military ironically has this wired more than any other institution in the Federal government. The reason the military gets drawn into so much of US foreign policy is not because of a fetish for blowing things up. It's because it's the only institution where formalized planning is a thing, the only one where feats of large-scale logistics are par for the course, and the only one where "I'm not going there because I might get hurt" isn't a valid excuse.
As an example, one of the best ways for distributing aid after a natural disaster is an amphibious task force, because you can send the same Marines in to distribute aid that you would to take territory. And going into relatively unprepared areas and setting up infrastructure for follow-on forces is basically their bit either way.
The problem comes in because military force is never the complete solution in and of itself outside of something like what's currently happening in Ukraine. And when you involve all the other agencies plus scads of glory-seeking politicians, it's hard to keep things from becoming a shitshow.
Military force, as demonstrated by the US and its allies, worked out really well in Korea, Vietnam, Cambodia, Iraq, Libya, Afghanistan (Taliban 2.0 after 20 years), Syria.
Declaring victory and running away is the common theme of US foreign policy.
> outside of something like what's currently happening in Ukraine.
Typical slippery slope. "The problem comes in because military force is never the complete solution in and of itself outside of something like <insert any use of army>"
Point of order . . . "the Ukraine" is what Putin and his ilk call it. Ukraine is the name of the country. Its etymology does come from the Slavic word for "march" or "borderland," but that's the point. Ukraine has been its own sovereign nation since 1991. Calling it "the Ukraine" or "the borderlands" subtly legitimizes the Russian claim that it's nothing more than "Novorossiya" or "New Russia."
The Ukraine literally means The Borderlands and refers to the areas fought over repeatedly by all various European countries from Sweden to Poland to Austria to Hungary.
Novorossiya is the name for the southern mainland of The Ukraine. The name dates back to the late 18th century, there’s nothing really “new” about it. Do you consider the name America to be old? Novorossiya is as old.
I suggest you find out how these lands were merged into The Ukraine during the Soviet era.
Seems pretty simple unless you are secretly stanning for the orcs. Ownership of the land should revert to the status quo ante before Putin's initial incursion into Crimea. Those whose property has been destroyed or mined should be compensated with seized Russian assets. Kidnapped Ukrainian children should be returned to what's left of their families.
What alternatives would be more fair, from your perspective?
I read this as a logistical question rather than a moral one. What happens when two farmers can't agree on where their property boundaries sat prior to the war, the fences got ripped out after one side used the area as a staging ground, and any records have been blown up by glide bombs? What happens to the real, physical land that's covered in mines and needs to be either cleared or fenced? Where can Maria and her family stay tonight, next week, and where will they end up? These aren't (necessarily) problems for the military to solve.
Good points all. Like the other poster said, though, first the war must end. It's to our shame (meaning the West's) that Ukraine isn't well into the suing-Russia-for-reparations phase.
> Those whose property has been destroyed or mined should be compensated with seized Russian assets.
Unless you think the resources of the clearly guilty are limitless, this sounds like Versailles-type collective punishment that may be satisfying, and maybe even moral, but is counter-productive long-term.
Putin is no Hitler, though. I suspect that turning Russia into a failed state that the rest of the world will have to support is exactly his plan. He looks at Kim Jong Un with envy, not contempt.
You're being downvoted because that's not a slippery slope argument familiar to the mostly US-based readership and it's hard to tell if you're crazy or not.
I.e., the US identifies very strongly with three wars (Revolutionary, Civil, and WW2) where military force was a necessary but not sufficient condition. Meanwhile, the US lost in Vietnam and Afghanistan despite having a far more favorable balance of military forces because it could never find a political solution. If you were taught that wars are won by force alone, you were miseducated.
COVID wfh was a weird time. The company I worked for was remote before COVID. Oddly, COVID was a boost to basically every metric, including the ones not tracked for productivity like even number of Slack messages sent or raw number of PRs. I think everyone was just closed off from their out of house obligations, especially families with kids who went to a lot of activities that got cancelled. So they worked instead or were just more rested and less exhausted from everything else in their lives.
The problems started mostly with cohorts hired remote during COVID. Something about COVID wfh attracted a lot of remote candidates with not-so-great intent: the overemployed people getting multiple jobs, the side hustle bros who needed a paycheck and healthcare while they worked on their startup idea, the 4 Hour Workweek people who tried to travel the world and answer Slack once a day, or other people who generally just weren’t interested in actually doing work while remote. It started to add up over time. There were also the people who cancelled daycare and tried to watch kids while they worked, people who were never at their keyboard for some reason, a guy who was always catching COVID or going to a funeral whenever you needed to schedule a meeting. It really wore everyone down. I wished we could have stuck with the pre-COVID remote crew because for some reason everything changed when everyone started WFH.
not to mention normal folks not used to wfh, who were used to spending half their day chatting between cubes or getting coffee. i worked in a very strange office, the coding team of a regional grocery store that maintained our in-house COBOL applications. most of those folks had worked there for 20-30+ years, so it was a huge departure from anything they had ever known.
I think the point was that they are contradictory, yet "data" was shown to indicate they were each sound decisions, implying an inherent dishonesty and willingness to bend data to support an already drawn conclusion.
No, I'm stating that those aren't contradictory. Perhaps they were inaccurate paraphrases of statements that were contradictory, I have no idea. But taking what's written at face value, they are not hard to reconcile. E.g., being productive doesn't necessarily imply innovating.
This is also - disturbingly - how the human brain operates.
1. Some subconscious process makes a cynical decision about what course of action is most beneficial for you.
2. Another part, known as the "Press Secretary," comes up with a good-sounding motivation for why this is the morally right thing to do.
3. You now genuinely believe you're doing the right thing, and can execute your cynical plan, full of righteous zeal!
I'm as autism-brained as anyone, and would probably prefer brutal honesty in all communications, but I also think you have to accept that well functioning human organizations don't operate like that, and if you want to be part of such organizations it's best for everyone to accept how they work.
This is not entirely true. Daniel Kahneman wrote about it if you want a source. Humans generally have two ways of thinking: Type 1 and Type 2. Type 1 is fast, heuristic-based, and not always accurate. Arguably, that's why things like racism and bigotry take hold. Type 2 is slow, analytical, and what lets us do things like send men to the Moon or write large software applications.
The trouble is that you have to consciously shift to Type 2 thinking, it takes longer, and it's tiring. Dave Snowden of Cynefin fame has a great bit (paraphrasing here) about the purely Type 2 thinker in ancient days on the African savannah getting eaten by a lion. Because he or she sits there doing a complex analysis of "OK, felid, yellow fur, moving towards me, etc. etc." while the Type 1 thinker goes "Oh shit! Lion!" and runs away.
Type 1 thinking has a role. You just have to be mindful about when you're misusing it.
I’ve literally never seen that happen. It’s always: problem, initial hypothesis, request for data. Then the data is either unavailable or typically supports the hypothesis; occasionally it doesn't and you go back to the drawing board.
Have you worked on a data team? I've seen that bs a number of times, it's how I mentally grade different managers and PM/POs.
Re the unavailable data: Smart people ask before a big change, get told what devs are missing and need to instrument/record and then leverage those new metrics for the before/after comparison. Not-smart people yolo the changes, ask for the metrics after and go whoops it's too hard or impossible to check.
"Assuming we just keep doing what we're doing, where will we be in 12 months?"
I find this one interesting, a business youtuber I follow said he finally realized that his teams all got ~5% better every year if he just left them alone and changed nothing. He said he used to have all these ideas he wanted to implement, but that if they didn’t have a lot of potential upside, they weren’t worth the short term drops associated with reimplementing, retraining and the teams having to relearn and explore their new problem space.
> What if we assume that we achieve the most likely outcome of this action, based on past experience and checking what happened when other people tried doing it? Is it a good idea?" [0]
I’ve tried exactly this and it was shocking to me that even when faced with examples of themselves failing to do something, people would just willingly go on the record with this:
Lindsay: Well, did it work for [us last time]?
Tobias: No, it never does. I mean, these people somehow delude themselves into thinking it might, but ... But it might work [this time].
I was merged onto a team because we decided to put more wood behind fewer arrows. So I was the Outsider but with seniority and pull (people loved my project and thought we did great, we just couldn’t sell it.)
One of the first big problems I solved was almost by accident. We had a backend and a frontend team and we kept missing deadlines because they would work separately on features and the two wouldn’t mate up so we’d have to do another couple iterations to make everything work and “work” belonged in air quotes because things were hammered to fit.
The biggest of the problems encountered here was data dependencies in the inputs resulting in loops in the APIs where you couldn’t get one piece of data without another, and vice versa. So we started drawing data dependency diagrams during the planning meetings as an experiment, instead of diagramming the data structures, and the problem went away basically overnight.
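Those dependency loops can also be caught mechanically before anyone writes code. A minimal sketch using depth-first search over a field-dependency graph (the function and all field names here are hypothetical, just to illustrate the kind of loop described above):

```python
# Sketch: detect circular data dependencies between API payload fields.
# deps maps each field to the fields it needs before it can be produced.
def find_cycle(deps):
    """Return one dependency cycle as a list of nodes, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {n: WHITE for n in deps}
    stack = []

    def visit(n):
        color[n] = GRAY
        stack.append(n)
        for m in deps.get(n, []):
            if color.get(m, WHITE) == GRAY:          # back edge -> cycle
                return stack[stack.index(m):] + [m]
            if color.get(m, WHITE) == WHITE:
                color.setdefault(m, WHITE)
                found = visit(m)
                if found:
                    return found
        stack.pop()
        color[n] = BLACK
        return None

    for n in list(deps):
        if color[n] == WHITE:
            found = visit(n)
            if found:
                return found
    return None

# Hypothetical example: account_id needs user_profile, which needs session,
# which needs account_id again -- exactly the chicken-and-egg loop above.
deps = {
    "account_id": ["user_profile"],
    "user_profile": ["session"],
    "session": ["account_id"],
    "display_name": ["user_profile"],
}
print(find_cycle(deps))
```

Drawing the diagram by hand in the planning meeting does the same job; the point is that the loop is visible in the data dependencies, not in the data structures.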
I’ve seen this a lot, too, but only in specific company cultures. The common problem among all of them was that people in middle upper management thrived in chaos. They didn’t want the important things to be well documented or stable because that took away their opportunities to be the important person who held the secret knowledge to make things work. When something broke they wanted it to remain impenetrable for other teams so they could come in as the heroes.
Oddly enough, these same people would be the ones pushing for documentation and trying to stonewall other teams’ work for not being documented enough. It was like they knew the game they were playing but wanted to project an image of being the people against the issue, not the ones causing the problem. Also, forcing other teams to document their work makes it easy for you to heroically come in and save the day.
In the couple of instances I experienced this, the problem is that the system is like the proverbial elephant that a bunch of blind people are familiar with through touching the parts they are next to; but the complexity is in the relationships in-between.
There needs to be a person who will take charge and learn/document the whole system, except people who work on it are overloaded and too exhausted to take this on. And management doesn't necessarily have the insight or incentive to make this happen. It's an interesting phenomenon.
The degree to which people can disagree on what things are is very impressive. It once took me weeks to get a company to go from 12 definitions of user retention to 4...
I have had people have vicious arguments about what a user even was. I just set up a system where custom definitions are allowed, because if you want to argue, do it with someone who gives a damn.
On a similar note, you can embed Draw.io markup when exporting Draw.io diagrams, meaning the image contains the metadata required to open, modify, and generate a new image (which itself can also contain the new metadata).
Mermaid has its place, but Draw.io is so much more flexible.
I volunteer at a non-profit employment agency. I don't work with the clients directly. But I have observed that ChatGPT is very popular. Over the last year it has become ubiquitous. Like they use it for every email. And every resume is written with it. The counsellors have an internal portfolio of prompts they find effective.
Consider an early 20s grad looking to start their career. Time to polish the resume. It starts with using ChatGPT collaboratively with their career counsellor, and they continue to use it the entire time.
I had someone do this in my C# / .NET Core / SQL coding test interview as well, I didn't just end it right there as I wanted to see if they could solve the coding test in the time frame allowed.
They did not. I now state that you can search anything online but can't copy and paste from an LLM, so as not to waste my time.
What did your test involve? That's my occupational stack, and I am always curious how interviews are conducted these days. I haven't applied for a job in over 9 years, if that tells you anything.
I've hired someone that didn't solve a specific technical problem.
If they are able to walk through what they are doing and it shows the capability to do the expected tasks, why would you exclude them for failing to 'solve' some specific task? We are generally hiring for overall capabilities, not the ability to solve one specific problem.
Generally my methodology for working through these kinds of things during hiring nowadays focuses more on the code review side of things. I started doing that 5+ years ago at this point. That's actually fortuitous, given that reviewing code in the age of AI coding assistants has become so much more important.
Anyway, a sample size of 1 here refutes the assertion that someone's never been hired even when failing to solve a technical interview problem. FWIW, they turned out to be an absolute beast of a developer when they joined the team.
I recently started using python packages for some statistical work.
I then discovered that there are often bugs with many of the python stats packages. Many python numerical packages also have the reputation of changing how things work "under the hood" from version to version. This means that you can't trust your output after a version change.
Given all of the above, I can see why "serious" data scientists stick with R and this article is just another reason why.
A lot of the comment seem to ignore Hotelling's Law [0].
When applied to politics the game plan is:
- In a two party system, start by framing your message to the middle of your party
- This means you capture everyone from middle of your party to the political middle of the population (we'll call these the "closest to center") AND get a few folks farther out from your middle
- This will help you win the "primaries"
- After that, you want to slowly drift towards the middle of the population. This allows you to pull most of the "close to center" folks from your party AND people from the other side
NOTE: the other side should be doing the exact same but coming from the other direction.
Now, people may also ask "Why not start out in the population middle?" The reason is that you:
a. don't win any primaries this way
b. get "crowded out" by the winners of the two party primaries
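The drift-toward-the-middle dynamic falls out of even a toy median-voter model. A minimal sketch (voter grid and candidate positions are made up): voters sit on a [0, 1] spectrum and vote for whichever of two candidates is nearest, so moving toward the population middle after the primary picks up votes.

```python
# Toy Hotelling / median-voter sketch: voters on [0, 1] pick the
# nearest of two candidates (positions are hypothetical).
def votes_for_a(pos_a, pos_b, n_voters=101):
    voters = [i / (n_voters - 1) for i in range(n_voters)]
    return sum(1 for v in voters if abs(v - pos_a) < abs(v - pos_b))

# Candidate A at their party's middle (0.40) vs B at 0.75:
print(votes_for_a(0.40, 0.75))  # 58 of 101 voters
# A drifts toward the population middle after the primary:
print(votes_for_a(0.50, 0.75))  # 63 of 101 voters
```

With both sides playing the same game, the equilibrium is both candidates converging on the median voter, which is Hotelling's result.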
Alexa is trash. If you have to basically hold an agent's hand through something, or it either fails or does something catastrophic, nobody's going to use or trust it.
REST is actually a huge enabler for agents, for sure. I think agents are going to drive everyone to have at least an API, if not an MCP, because if I can't use your app via my agent and I have to manually screw around in your UI, while your competitor lets my agent do the work so I can just delegate via voice commands, who do you think is getting my business?
> Jobs reports just don't seem at all to square away with the vast anecdotal accounts from both employed and unemployed individuals across a swath of industries.
Agreed and on multiple fronts.
e.g. I can imagine that white collar workers may not claim unemployment due to a combination of embarrassment/"I don't need it as much as other folks" so the numbers are probably under-reported there.
I've also heard it's bad for recent grads but then a recent grad I know sent me the below:
"I’ve been hearing this a lot lately, and honestly, it’s pretty silly. Yes, tech majors are definitely over saturated, but in my opinion, you shouldn’t be able to go to school for four years, do the bare minimum the entire time, and get a great job after college. From my experience, everyone who worked really really hard and knew their stuff got to a place that they’re happy with"
The above could have been what I said back in 2002 right after the dotcom boom. In other words, it's unclear if companies are hiring fewer junior folks, junior folks were benefiting from ZIRP/boom market or a combination of both.
What you said is how the USA worked for my entire life until the H1B saturation finally reached a critical point! Young men and women would find jobs after college, it was NORMAL to have that be the case. What's going on right now is NOT normal. I got a job right out of college in 1995 with a CS degree--in fact, I had two offers, and I was nothing special. People with actual connections did really, really well back then. It was an optimistic time, and yes, Clinton was President, but he lucked out with the dot com era boom. It obviously imploded by 2001 or so.
In the book Fingerprints [0], they mention how much easier it was, prior to fingerprints, to just move to another town/county/state and start over or even pretend to be somebody else. This was because there was no way to establish your identity with near-100% certainty.
This had pros and cons depending on who you were. For example, thieves loved it, as you could drop your criminal record simply by moving somewhere no one recognized you. On the other hand, there were documented cases of mistaken identity and people being prosecuted just because they looked like someone else. Then there is the case of William West, which is best understood by looking at the pictures of the two men named William West [1].
Contrast that to today where it doesn't matter which town in the US you live in, there is always a credit record that is tied to you.
I was recently looking at a large timeseries dataset.
When doing a scatter plot of two of the variables, I noticed that there were several "lines" of dots.
This generally implies that subsets of the two variables may have correlations or there is a third variable to be added.
I did some additional research and found that it is possible for two variables with large N to show correlation for short bursts even if both variables are random.
I mention this for two reasons:
1. I was just doing the above and saw the OP article today
2. Despite taking multiple college level stats classes, I don't remember this ever being mentioned.