Why would it not be enforceable? If you own the copyright on your software anybody that wants to use it has to get a license from you. The traditional way is for you to sell those licenses for money, but you could also decide to give them away based on how much you like the buyer.
Or a hybrid: sell them, but refuse to sell to certain entities and discount up to 100% for others based on how much you like them.
It's a choice for the authors to make based on what type of free they believe in. I think MIT and GPL embody two different philosophies of what "free" means:
MIT: free for anyone, do whatever you want
GPL: free if you also make your software free
AGPL: GPL but SaaS can't circumvent the requirement to make your software free
I see why principled open source proponents would select GPL or AGPL. They don't just want their code to be used freely by others, they also believe more software should be free and using GPL helps with that.
GPL restrictions don't make software under the GPL not "free" as in freedom. Just a different philosophy.
I like the GPL and think its "virality" is both clever and a worthwhile social goal, but I think it's misleading to call it "free". It directly restricts possible usage of the software in question -- yes, in a way that's designed to increase another kind of freedom, but it restricts nonetheless.
FWIW I have the same quarrel with people who talk about a country being "free". To my mind, a truly free country would have no laws. It would be a horrible place, because the restrictions that laws place on us tend to make things better for everyone (we may disagree on this law or that law, but some laws, like "Don't kill someone without a very good reason", would have >99% popular support anywhere in the world).
"More free" does not necessarily imply "better"; it could be better or worse. I'd like to shift usage of the words "free" and "freedom" in this direction, but think it's probably a lost cause as the words are too emotionally charged with connotations of "good".
The table from the report shows that the tools do crack the window but don't break it. That's probably the main difference between old glass and the newer layered glass: if you crack an outer layer it is no longer usable, but you can't escape through it.
It's tricky to do for large public websites, because routing happens at the IP level while users want to input a domain name.
That domain could constantly resolve to different IPs, requiring updates to the routing rules, and those IPs could be shared with many other domain names that the user didn't list (for example Cloudflare IPs). So the mapping isn't clean and you're likely to miss some IPs some of the time or incorrectly intercept some traffic that the user didn't want to route through the VPN.
A proxy would not have this problem: it gets to inspect the request and hostname and then decide how to reach that host.
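To make the mismatch concrete, here's a minimal Python sketch (the domain is a placeholder): any IP-level rule set has to be built from a snapshot of DNS answers, which can change on the next query and can point at shared CDN addresses.

    import socket

    def resolve_all(hostname: str) -> set[str]:
        """Return the IPv4 addresses the hostname resolves to right now."""
        infos = socket.getaddrinfo(hostname, 443, family=socket.AF_INET,
                                   type=socket.SOCK_STREAM)
        return {info[4][0] for info in infos}

    # An IP-level routing table built from one snapshot of DNS answers...
    routes = resolve_all("example.com")
    print(routes)
    # ...goes stale as soon as the answers rotate, and may contain shared
    # CDN addresses that also serve domains the user never listed.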
A VPN app can still solve this by locally resolving each configured domain to a special local IP, which gets mapped back to a real IP on the VPN side. You'd need to encode the original DNS name into the protocol somehow, so that the remote side knows which real IP to access, but it is certainly doable.
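A minimal sketch of that fake-IP trick (everything here is illustrative, not taken from any real VPN product): hand each configured domain a stable address from a reserved range, answer local DNS queries with it, and keep a reverse map on the tunnel side.

    import ipaddress

    class FakeIPMapper:
        # 198.18.0.0/15 is reserved for benchmarking and is commonly
        # borrowed for fake-IP schemes like this one.
        def __init__(self, pool: str = "198.18.0.0/15"):
            self._free = ipaddress.ip_network(pool).hosts()
            self._by_name: dict[str, str] = {}
            self._by_ip: dict[str, str] = {}

        def fake_ip_for(self, hostname: str) -> str:
            """Local side: stable fake IP to put in the DNS answer."""
            if hostname not in self._by_name:
                ip = str(next(self._free))
                self._by_name[hostname] = ip
                self._by_ip[ip] = hostname
            return self._by_name[hostname]

        def hostname_for(self, fake_ip: str) -> str:
            """Tunnel side: recover which real host the packet was meant for."""
            return self._by_ip[fake_ip]

    mapper = FakeIPMapper()
    ip = mapper.fake_ip_for("internal.example.com")  # returned by local DNS
    print(ip, "->", mapper.hostname_for(ip))         # remote side finds the real host

In a real implementation the name-to-fake-IP mapping (or the name itself) would have to travel over the tunnel protocol, as noted above.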
Why would they do that? (Not a rhetorical question, just curious.) It would suffice to block UK IPs for compliance; if visitors use a VPN to circumvent that, Imgur would get more traffic and more ad revenue. No reason to put extra work into blocking those users.
It gives them proof they did their best to "protect minors" even when users circumvent the GeoIP rule: some percentage of those who try a VPN and realise the site still doesn't work will give up, thinking there is something smarter than plain GeoIP at play (which there is).
Could be for performance? Basically cache the geoip lookup result into a signed cookie that can be checked at the edge, rather than needing to do a geoip lookup for every request.
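A rough sketch of how that could look, assuming an HMAC secret shared with the edge (the cookie layout and all names are made up for illustration):

    import base64, binascii, hashlib, hmac, time

    SECRET = b"edge-shared-secret"  # placeholder

    def make_region_cookie(region: str, ttl: int = 3600) -> str:
        """Cache a geoip lookup result as a signed, expiring cookie value."""
        payload = f"{region}|{int(time.time()) + ttl}".encode()
        sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
        return (base64.urlsafe_b64encode(payload).decode() + "." +
                base64.urlsafe_b64encode(sig).decode())

    def verify_region_cookie(value: str) -> str | None:
        """Edge check: one HMAC verify instead of a geoip lookup per request."""
        try:
            payload_b64, sig_b64 = value.split(".")
            payload = base64.urlsafe_b64decode(payload_b64)
            sig = base64.urlsafe_b64decode(sig_b64)
        except (ValueError, binascii.Error):
            return None
        if not hmac.compare_digest(sig, hmac.new(SECRET, payload, hashlib.sha256).digest()):
            return None
        region, _, expiry = payload.decode().partition("|")
        return region if expiry.isdigit() and int(expiry) > time.time() else None

    cookie = make_region_cookie("UK")
    print(verify_region_cookie(cookie))  # "UK" until the TTL expires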
> Why would they be using flight computers from before 2002?
Because getting a new one certified is extremely expensive. And designing an aircraft with a new type certificate is unpopular with the airlines. Since pilots are locked into a single type at a time, a mixed fleet is less efficient.
Having a pilot switch type is very expensive, in the 50-100k per pilot range. And it comes with operational restrictions: you can't pair a newly trained (on type) captain with a newly trained first officer, so you need to manage all of this.
I think you're confusing a type certificate (certifying the airworthiness of the aircraft type) with a type rating, which certifies the pilot is qualified to operate that type.
Significant internal hardware changes might indeed require re-certification, but it generally wouldn't mean that pilots need to re-qualify or get a new type rating.
No, I meant designing a new aircraft with a new type certificate instead of creating the A320neo generation on the same type certificate. The parent comment wondered why Airbus would keep the old computers around; I tried to explain why they keep a lot of things the same and only incrementally add variants. Adding a variant allows the aircraft to be flown with the same type rating or with only "differences training" (that's what EASA calls it, not sure about the US term), which is much less costly.
Asking from ignorance: shouldn't the computer design be an implementation detail to the captain, as long as the interface used by whoever pilots the plane stays the same for that type of airplane? I understand that physical changes in the design need retraining, but the computer?
Ideally you would not change the computer at all so your type certificate doesn't change. If you have to (or for commercial reasons really want to) make a change you would try very hard to keep that the same type certificate or at most a variant of the same type certificate. If you can do that then it will be flown with the same type rating and you avoid all the crew training cost issues.
But to do that you'll still have to prove that the changes don't change any of the aircraft characteristics. And that's not just the normal handling but also any failure modes. Which is an expensive thing to do, so Airbus would normally not do this unless there is a strong reason to do it.
The crew is also trained on a lot of knowledge about the systems behind the interface, so they can figure out what might be wrong in case of problems. That doesn't include the software architecture itself, but it does include a lot of information on how redundancy between the systems works and what happens in case one system's output is invalid. For example, how the failover logic works in case of a flight control computer failure, or how it responds to losing certain inputs. And how that affects automation capabilities, like: no autoland when X fails, no autopilot and degradation to alternate control law when Y fails, further degradation if X and Z fail at the same time. Sometimes also per "side", since not all computers are connected to all sensors.
The computer change can't change any of that without requiring retraining.
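Illustratively, that kind of systems knowledge amounts to a table of failure-combination rules the crew has memorised. A toy Python version (the X/Y/Z placeholders come from the example above, not real A320 systems, and the effect labels are made up):

    DEGRADATION_RULES = [
        ({"X"}, {"no autoland"}),
        ({"Y"}, {"no autopilot", "alternate law"}),
        ({"X", "Z"}, {"further degradation"}),
    ]

    def capabilities_lost(failed: set[str]) -> set[str]:
        """Apply every rule whose failure combination is fully present."""
        lost: set[str] = set()
        for trigger, effects in DEGRADATION_RULES:
            if trigger <= failed:  # subset test: all triggering systems failed
                lost |= effects
        return lost

    print(capabilities_lost({"X"}))       # {'no autoland'}
    print(capabilities_lost({"X", "Z"}))  # combined failure adds more

If a computer change altered any of those rules, the memorised table would be wrong, which is exactly why retraining would be required.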
1. I don't think adding robustness necessarily requires changing how systems are presented to the flight crew.
2. Bigger changes than this are made all the time under the same type certificate. Many planes went from steam gauges to glass cockpits. The A320 added a new fuel tank with transfer valves, transfer logic, and new failure modes, and the control laws have changed completely over the life of the type. Etc.
Since the newer versions of the same ADIRU have EDAC, have been used on planes since 2002, and the EDAC variant has been fitted whenever an old unit was returned for repairs, I don't think this is the reason. I think the reason is that there were 3 ADIRUs, and even if one got wonky, the algorithm on the ELAC flight computer would have had to take the correct decision. It did not take the correct decision. The ELAC is the one being updated in this case.
It’s an oversimplification to position them as opposites. Airbus uses higher-level control: you command more of a flight path than a control surface movement. But pilots can revert to direct law and have full control authority when required. Boeing aims for a more traditional control feel, where you move control surfaces instead of commanding an outcome, but with layers of substantial augmentation on top, up to and including, for example, the 737 MAX MCAS.
In practice, both approaches blend automation and pilot authority rather than strict philosophical extremes. And the practical difference at the controls is also not as extreme as some people think it is.
There is no oversimplifying happening here. There is no documented procedure to switch to direct law in an Airbus.
In fact, the only way to get into direct law on a fully functional plane is to start pulling circuit breakers for the (redundant) flight computers and inertial reference units.
This is highly unusual, so there may be something more to it than only speculation about radiation. The emergency AD says:
> Before next flight after the effective date of this AD, replace or modify each affected ELAC with a serviceable ELAC in accordance with the instructions of the AOT.
>
> A ferry flight (up to 3 Flight Cycles, non-ETOPS, no passengers) is permitted to position the aeroplane to a location where the replacement or modification can be accomplished.
That's a very limiting AD. The "before next flight" part is unusual: ADs often set a deadline of the next inspection or X flight hours or similar, not immediately.
Easy enough to imagine. Something was improperly judged not to need consensus, running the calculation twice, or some other software mitigation. Revise the process to include the mitigation.
"just run the calculation twice. surely the darned solar radiation will take care not to hit random parts of the memory or any critical registers in the cpu"
Best explanation I've seen (and it claims to be from a published but not public report on it) is that their 3-way consensus didn't smooth over repeated wildly wrong outputs correctly. The consistency problem strikes again.
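For intuition, a toy sketch of 3-way voting (this is not the actual ELAC algorithm): median selection rejects a single wild sample easily; the failure mode described above lives in the time dimension, across repeated samples.

    import statistics

    def vote(a: float, b: float, c: float) -> float:
        """Median-select across three redundant channels: one wild
        value is outvoted on any individual sample."""
        return statistics.median([a, b, c])

    print(vote(5.1, 5.0, 250.0))  # 5.1 -- the spike loses the vote

    # The hard part is deciding when a channel that *keeps* emitting
    # wild values should be declared failed outright, without letting
    # any burst of them leak through the filtering in the meantime.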