
Tower systems are the last place in the world you would want a 97% accurate large language model, and the very last place we would culturally tolerate this sort of thing. Innate conservatism is what happens when deviations from perfection lead to collisions, and for the most part "AI" success remains a stochastic matter: gigabyte- to terabyte-sized tensors that are not human-intelligible. A black box which cannot be readily, safely validated in the real world.

With that said - algorithmic, automated, and digital systems for collision avoidance have made, and could continue to make, ATC jobs significantly easier. The radio voice channel is a particularly low-fidelity, low-bandwidth way to mete out information and directives.



There will be a list of scenarios competing for that "last place".

Operating rooms, certain military/police situations and self driving cars come to mind. A shared characteristic here is that errors lead to fatal outcomes exacerbated by unclear accountability.


Yes, but, as with collision avoidance systems... Years ago a VA thoracic surgeon was trying to find someone to build a tool that would listen in an OR for surgeon commands and team member acks, and nudge if it heard a command without an ack. Context: surgeons are both team managers and individual specialists, and when heads-down as a specialist, the managerial role gets load-shed. So a dropped command may not be caught before bad things happen. The VA uses OR pickup teams, so there isn't the polished but idiosyncratic load sharing of long-standing teams, and the acks are more formal.

He was fine with a high false negative rate (catching anything is good) and a moderate false positive rate (nudges are low cost). That seems plausible now. Aside from tech maturity, the biggest challenge envisioned then was willingness to be recorded. Though perhaps the real fix is staffing the team management role, but that was above his pay grade.


The speculation on pilot YouTube is that the helicopter crew in this incident observed a light in the sky, one of several in the closely spaced train of landing jets on approach to National Airport.

The helicopter pilot asked multiple times for permission to assume liability for visually avoiding the plane in the approach path; the tower warned about the plane, and he confirmed he could see it. Several times he insisted he had it in his sights and that it was not on a collision course, and on that basis requested and was granted permission to continue through the flight path. And he did successfully avoid that dot in the sky.

He was looking at the dot in the sky that was about 60 seconds behind the plane that he ultimately collided with.

If that is the case, there is certainly a chance that an automated warning from an automated tracking network (not "you're within five miles of another aircraft on the map, watch out" but "your current 3D trajectory is within ten seconds of collision with another aircraft") might have averted this. That isn't AI; it's just having the aircraft keep a secondary track of ADS-B traffic inside the cockpit. And it sounds from a cursory search like it's already standard for commercial planes to have an ADS-B receiver and a Traffic Alert and Collision Avoidance System (TCAS), just maybe not 1980s military helicopters.
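The trajectory check described above is essentially a closest-point-of-approach calculation. Here is a minimal sketch, assuming constant-velocity extrapolation of ADS-B state vectors (positions in metres, velocities in m/s); the function names and thresholds are illustrative, not actual TCAS logic:

```python
import math

def time_to_closest_approach(p1, v1, p2, v2):
    """Time (s) at which two aircraft, each moving at constant
    velocity, are closest. Never negative (past approaches ignored)."""
    dp = [a - b for a, b in zip(p1, p2)]   # relative position
    dv = [a - b for a, b in zip(v1, v2)]   # relative velocity
    dv2 = sum(c * c for c in dv)
    if dv2 == 0:                           # no relative motion
        return 0.0
    return max(0.0, -sum(a * b for a, b in zip(dp, dv)) / dv2)

def collision_alert(p1, v1, p2, v2, horizon_s=10.0, sep_m=150.0):
    """True if, within horizon_s seconds, the straight-line
    trajectories pass within sep_m metres of each other."""
    t = min(time_to_closest_approach(p1, v1, p2, v2), horizon_s)
    miss = math.dist([p + v * t for p, v in zip(p1, v1)],
                     [p + v * t for p, v in zip(p2, v2)])
    return miss < sep_m
```

Two aircraft closing head-on at 80 m/s each from 1 km apart trigger the alert; two on parallel tracks 1 km apart do not. Real systems add altitude-band filtering, turn-rate prediction, and resolution advisories, which this sketch deliberately omits.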


Self-driving cars are already operating without problems.



