Spoiler: He is not. But he is very good at faking it.
Anytime he tries to give a serious opinion on anything related to computers (SQL, compilers, languages, performance, etc.), it is laughably bad and out of touch.
He definitely has a scientific background, but definitely not "tech" as far as computers are concerned.
I don’t see how “tech” is limited to software. While your case might be made for software, according to many accounts Musk is a strong driver on the hardware side. For instance, I’ve read the Tesla and SpaceX books by Eric Berger, which are much more focused on technical things compared to the more mainstream books. And while Musk is not in the trenches with a screwdriver, he’s not faking it either.
To be honest, I’m actually interested in this hypothesis: is he legitimately skilled/knowledgeable, or is he indeed faking it? And for either side I would like to see evidence. This question is interesting to me because some of his companies have made substantial contributions to pushing the frontier of technology (reusable landing, high launch cadence, electric cars, energy).
If he is really faking it, that might even be good, because the success of his companies might be replicable and could continue without him. But what if he is not?
He has a public image of "geek/nerd hero" that is honestly inspiring.
And that benefits him a lot, because it brings people to trust his decisions. He has every interest in the world in maintaining this image.
> some of his companies have made substantial contributions to pushing the frontier of technology (reusable landing, high launch cadence, electric cars, energy).
People he hired for these companies made contributions.
Unlike the more common pattern, Elon doesn't hesitate to make straight up engineering decisions for his businesses, including ones that look unnecessarily high risk to a lot of his own engineers. Chopsticks catching spaceships made of stainless steel and self driving cars without lidar are well known examples. The success of those choices earns him legit nerd cred.
Disagree. The current limitations of Tesla self-driving are not around difficulties in judging distances, which lidar solves. They're around inference deficiencies that persist even with accurate geometry.
If the AI was good enough, vision-only self-driving would be at least as good as the best human.
The AI isn't good enough. I'm starting to suspect that current ML learning rates can't be good enough in reasonable wall-clock timeframes due to how long it takes between relevant examples for them to learn from.
It's fine to lean on other sensory modalities (including LIDAR, radar, ultrasound, whatever else you fancy) until the AI gets good enough.
It's safer than human drivers now. That's good enough. It will take more than that to convince the world, and it should. I applaud the well-earned skepticism. But I'm an old guy who has no problem qualifying for a driver's license, and if you replaced me with FSD 14.2, especially under non-ideal conditions like at night or in a storm, everyone would be safer.
I predict a cusp to be reached in the next few years when safety advocates flip from trying to slow down self driving to trying to mandate it.
I can't speak to your driving level, but everything I see about Tesla's FSD has unfortunately been giving me "this seems sus" vibes even back when I was extremely optimistic about them in particular and self driving cars more generally (so, last decade).
Unfortunately, the only stats about Tesla's FSD that I can find are crowd-sourced, and what they show is that despite recent improvements, they're still not particularly good.
Also unfortunately, the limited geo-fencing of the areas in which the robo-taxi service operates, and the fact that they initially* launched the service without the permits needed to forgo a human safety monitor, strongly suggests that it hasn't generalised to enough domains yet.
Lack of generality means that it's possible for you to be 100% right about Tesla's FSD on the roads you normally use, and yet if you took them a little bit outside that area you might find the AI shocking you by reliably disengaging for no human-apparent reason while at speed and leaving you upside down in a field.
* I'm not sure what has or hasn't changed since launch: all the news reporting on this was from sites with more space dedicated to ads than to copy, so IMO slop news regardless of whether it was written by an AI or not.
No reason we can't rely on other sensory modalities after the AI "gets good enough," either. Humans don't have LIDAR, but that doesn't mean that LIDAR is a "cheat" for self-driving cars, or something we should try to move past.
In principle, I agree; but remember that people like to save money, and that includes not spending on extra sensors when the minimum set will do.
What I think went wrong with Musk/Tesla/FSD is that he tried to cut sensor costs before doing so would actually save money.
I'm sorry, that is just not true. You can never achieve the kind of data quality you need with vision-only tech; it's too easy to confuse. You need lidar. Anybody who thinks they can achieve self-driving safety without that tech is lost.
> Elon was an enthusiastic reader of books, and had attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy.[11][28] At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual.[29] At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,579 in 2024).[30][31]
I think it's fair to say he at least was a nerd. He was a dweeb getting beaten up in school, burying himself in books and computers at home. His skills are doubtlessly outdated now, but does that really mean much? Woz's skills (which to be perfectly clear, outclassed Musk's by miles) are doubtlessly out of date now too, but nobody would say Woz isn't a nerd.
I think the part where he grew into an unstable dirtbag might be influencing the way people see him now. Saying that he is, or at least was, a genuine nerd shouldn't be seen as any sort of excuse for his scamming, lying, etc.
He definitely has talked about a lot of nerdy books. I don't know about his attention span, and I'm not sure how to square what he likes with his values. He brings up the Culture all the time, but I have my doubts that he's actually read them.
I don't know either; I haven't read the Culture books (yet), so I can't really evaluate that.
I do believe he read a lot of sci-fi in his youth, if only because that would fit the pattern of a young boy who doesn't get along well with their peers and turns towards solitary pursuits like computer programming. He seems exactly the sort to have read lots of Heinlein.
Almost everything about The Culture is immediately apparent in the stuff Musk talks about, but only about half of it looks like he's understood it.
The only real crimes are reading/writing someone's brain without permission (at which point others may call you names and stop inviting you to social events) or destroying a consciousness without backups (where you'll get permanent supervision to make sure you don't do it again). Most biological citizens have a full-brain computer interface for backups and general fun, called a "neural lace".
The AI Minds in charge of everything give themselves fanciful names, which Musk has used for his SpaceX drone ships.
For the reverse:
Almost every biological citizen is gender-fluid, can change physical gender by willing it, and there's a certain expectation that you try things both ways around so you know how to be a good lover. They dislike explosive population growth regardless of whether it's organic or machine reproduction, and as everyone can get pregnant if they want to (because everyone can be a woman if they want to and it all works), it's considered quite scandalous to have more than one child.
It's sufficiently post-scarcity that money is considered a sign of poverty. They mostly avoid colonising planets, instead living on ships, or on habitats so large that if one was located at any Earth-Sun Lagrange point (including the one on the far side of the sun), we could see it.
- In large structures, you inevitably end up with four teams doing the exact same thing in parallel and ignoring each other, because communication did not flow.
- Managers who do that tend to concentrate all communications through themselves. This is disastrous for multiple reasons:
- It creates a communication bottleneck and slows down the entire organization.
- "Filtered" information tends to have reduced technical quality, which leads to wrong technical decisions.
- Sooner or later, a dubious middle manager somewhere will leverage that to make his team follow *his* agenda and not the company's.
- In the long term, isolated teams inevitably lose touch with the current mission of the organization, precisely because they cannot see the big picture.
Most people I have seen following this bad practice are maniacal micro-managers who end up burning out after a few years, when they do not burn out their entire team first.
The initial 'problem' that silos try to solve is the fact that many-to-many communication in a large organization does not scale.
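To put rough numbers on that: with n people all talking to each other directly, there are n(n-1)/2 possible pairwise channels: 10 people have 45, and 1,000 people have nearly 500,000.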
And there is absolutely no need to create 'Silos' or similar nonsense to solve that.
Creating a structure where people can talk freely peer-to-peer, coupled with some broader communication nodes (all-hands, retros, etc.), is way more productive than any silo bullshit, and way less toxic as a work environment.
> So you could be certain that such a high-profile case was not done without the go-ahead of the executive. In that sense, it can be considered politically motivated.
Not really. It is more complex than that.
There are two systems within the French penal (judiciary) system:
- Le parquet, with a "procureur" who is indirectly under the influence of the executive power.
- The "juge d'instruction": independent judges, called in only for complex cases, who are in charge of gathering evidence and have more or less free hands.
The Sarkozy affairs landed in the second system.
Politicians tend to hate the second system, for obvious reasons.
It is worth noting that Sarkozy himself tried to reform the system and remove the "juge d'instruction" entirely, but ultimately failed.
Well yes. But no. And that's exactly why there is always a risk of "politically tainted" investigations.
The "juge d'instruction" is not an independent judge who will, of his own accord, start an investigation.
He can start an investigation when asked either by the "procureur", who is directly or indirectly under the influence of the executive power, or by private citizens acting as a "partie civile". The Sarkozy case was started by the former.
On top of that, the "juge d'instruction" is appointed by the Minister of Justice for a period of three years, which means the position is, once again, linked to the executive power.
It's also worth noting that members of the second system had his picture pinned on a wall called "the wall of the assholes"[1], among other politicians and public servants they did not like. They still claim they are totally independent and impartial when judging any of these figures.
> It's also worth noting that members of the second system
Nope. This picture was found in the office of a union related to "magistrats".
Magistrats is a broad term that includes Procureurs and Judges, but also some Lawyers.
The union is not specifically associated with the position of "juge d'instruction" by any means.
But yes, generally speaking, in France politicians do not like magistrats and magistrats do not like politicians.
And honestly, it is healthier that way.
> Magistrats is a broad term that includes Procureurs and Judges, but also some Lawyers.
The "also" is key: "juges d'instruction" absolutely are "magistrats", just like procureurs etc. are. Some of those "juges d'instruction" are part of this union that put a target on the backs of some politicians. How can they claim with a straight face that they are not biased?
Either they know it's bullshit and they are simply lying; or they really believe their claims and they are just delusional. I don't know which one I prefer.
Question: since when is a random union representative of the political opinions of an entire profession?
Spoiler: They never are.
Especially in France.
Even the CGT, the biggest union in the country, is currently a perfectly good example of that.
The CGT is loud. They are often extreme in their political opinions, regularly promoting far-left ideology; some groups historically even had close ties with the Communists... And they represent statistically nobody.
They represent less than 10% of workers in France, because that is currently the unionization rate in the country.
They represent the political opinions of the people who are affiliated with them. Once you get involved in organizations that have a clear and defined political agenda, your whole argument that "nothing you do would ever be politically oriented" and that you are "fully neutral in all situations" becomes incredibly weak.
I am sure some "juges d'instruction" try their very best to be as neutral as possible. Some ostensibly don't give a flying fuck. But both kinds repeat the same "we are non-political" line any time they get the chance, and when I hear this, I am unable to tell whether the person is of the first kind or the second. There seems to be zero internal effort to weed out the liars, which casts a shadow on the entire profession.
Trust is hard-earned, easily lost, and difficult to reestablish. This scandal touched the very essence of the French judicial system, yet had no major repercussions on the internal organization and processes of those "juges d'instruction". It's just business as usual. So until they come up with new systems to ensure a better attempt at neutrality, and until they remove the people who have obviously been plaguing the system for years, it's normal and healthy that any mention of "neutrality" is immediately met with heavy skepticism.
> And getting bribes from foreign dictator is, of course, not allowed.
Couldn't he set up some crypto fund instead? Or an investment in a ballroom? Or simply receive a present, let's say a plane, instead of money? Would that help him in this case?
> Couldn't he set up some crypto fund instead? Or an investment in a ballroom? Or simply receive a present, let's say a plane, instead of money? Would that help him in this case?
Another French politician, François Fillon, tried that with bribes as gifts, including some luxury suits, in addition to redirecting public money to his own family.
I really hate this kind of article, because they twist numbers to serve a narrative (on renewable energy) instead of showing the complete picture fairly.
> June 2025 was a milestone month: Solar became the EU’s single largest electricity source for the first time ever.
Yes, June was a record for solar power production thanks to amazing weather...
But it was a pure disaster for solar power profitability, with an all-time low.
The peak was too large for the grid to absorb, and the price went negative (or to zero) for the entire month during solar hours.
That should raise serious questions about the ROI of any future investment in solar capacity, and about Europe's electricity storage capacity.
> Some countries are already nearly 100% renewable. Denmark led with an impressive 94.7% share of renewables in net electricity generated
This is also misleading. Production does not mean consumption.
Denmark is very far from 94% renewable consumption. It relies heavily on imports from the German grid (coal- and gas-powered) almost every night, and this is a disaster in terms of CO2 emissions.
That leads to average emissions over ~140 gCO2/kWh, way above what other Scandinavian countries are able to do (e.g. Sweden: < 15 gCO2/kWh).
> In total, 15 EU countries saw their share of renewable generation rise year-over-year.
Yes, but that does not mean CO2 emissions are falling (which should be the only thing that matters).
Belgium has recently been closing perfectly working nuclear power plants that provide around 30% of the country's consumption.
That means the country's CO2 emissions are expected to increase significantly this year, and this is just plain stupid.
Spain might follow a similar track, and this is disastrous.
- Renewables are good, but what Europe needs is massive investment in energy storage through batteries and/or pumped hydro. And that is nowhere to be seen here. Blind praise of solar capacity is counterproductive.
- If we do not carefully control our current capacity of non-dispatchable renewables in Europe, we might doom the ROI of an entire industry for decades to come. And it is the taxpayer who will have to sponge up all this mess, financially speaking.
- What matters is CO2 emissions and CO2 reduction, not renewable capacity. This kind of article favors wrong political decisions by putting renewable capacity first and foremost as the only metric that matters. The Belgian nuclear situation is one of these terrible decisions.
To provide some numbers on the storage side of things, on European battery storage [1]:
* 2024 - 21.9 GWh installed.
* 2025 - 29.7 GWh predicted to be installed.
* 2029 - between 66.6 GWh and 183 GWh to be installed that year. Total capacity estimated to be 400 GWh.
The UK also recently received applications for 52.6 GW of storage under its Long Duration Energy Storage (LDES) cap-and-floor scheme [2]. LDES in this context is defined as 8 hours or greater. Seasonal storage is not included.
I don't know if this sufficiently plugs the gaps, but it does show a large increase in installed battery storage, which appears to be accelerating.
Solar capacity in Europe is over 400 GW now and projected to be over 700 GW in 2028.
So, considering that, the battery storage estimate you give is still one order of magnitude under what would be needed, even taking the optimistic numbers.
Apologies, the 2029 figure was the annual install amount. Total estimated installed amount is 400 GWh. Solar Power Europe says "780 GWh by 2030 to fully support the transition".
From the page [1]:
> By 2029, the report anticipates a sixfold increase to nearly 120 GWh, driving total capacity to 400 GWh (EU-27: 334 GWh). However, this remains far below the levels required to meet flexibility needs in a renewable-driven energy system. According to our Mission Solar 2040 study, EU-27 BESS capacity must reach 780 GWh by 2030 to fully support the transition.
This is also only up to 2029. Battery prices are dropping and the number of batteries being manufactured is increasing, so I don't agree that the continued installation of solar is a big problem.
> Apologies, the 2029 figure was the annual install amount. Total estimated installed amount is 400 GWh. Solar Power Europe says "780 GWh by 2030 to fully support the transition".
It is still nowhere near enough. It is barely the capacity to support a few hours of consumption of the European grid.
Most of the solar production will be wasted.
That means that the price of solar production will tank and go negative during most of the spring-summer period.
And that is terrible as far as ROI on the production systems is concerned.
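As a rough back-of-the-envelope check (assuming EU electricity consumption of about 2,500 TWh/year; the exact figure varies by year and source): 2,500 TWh / 8,760 h ≈ 285 GW of average load, so 400 GWh of batteries covers roughly 400 / 285 ≈ 1.4 hours of average consumption.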
> It is still nowhere near enough. It is barely the capacity to support a few hours of consumption of the European grid.
You just need to move the excess to times of high demand.
> Most of the solar production will be wasted.
Germany saw renewable curtailment (including wind) of 3.5% in 2024. I can only find reports that it will reach 10% by 2030 in Germany and 10% in the EU. I would define "most" as 50%+.
> That means that the price of solar production will tank and go negative during most of the spring-summer period. And that is terrible as far as ROI on the production systems is concerned.
This depends on the market. The UK guarantees a price for renewables that have a Contract for Difference (CfD), so they're unaffected. I don't know much about the other European markets, so this might happen there.
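To illustrate the CfD mechanics with made-up numbers: with a strike price of £50/MWh and a market price of -£5/MWh, the generator still nets £50/MWh, because the scheme tops up the £55 difference; negative prices don't touch its revenue.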
Any developer will account for this though, so money will flow out of renewables and into storage if there are serious issues around overcapacity - unless you have schemes like the UK's CfD.
Finally, I disagree with your prediction:
> we might doom the ROI of an entire industry for decades to come
You have plenty of price signals in energy markets, so I can't see a scenario where there's a complete misallocation of resources into renewables and not storage. In addition, investment predictions for renewables and storage are healthy, not those of an industry in distress.
> price went negative (or to zero) for the entire month during solar hours
Sounds like a great investment opportunity for storage providers?
Isn't this how it's going to work itself out just due to pure economics? Solar panels become so cheap to build and install that people keep doing it just to eke out more power during the more expensive duck-curve hours and cloudy days. This causes even more overproduction during the daylight hours, which makes storage more attractive to build.
My understanding is that most new solar being built today is being paired with batteries for this reason. Then they can sell the energy at night when the price is better.
Add to that the cost of electricity routinely rising in the EU. Practice shows that, with current technology, intermittent renewable generation above a certain threshold of the total generation mix results in sharply higher electricity costs for consumers once all the additional expenses are accounted for (storage, more robust grids, "smart" grid controls, etc.). And we got this with massive EU subsidies on top of dirt-cheap solar panels subsidized by the Chinese government.
Yeah, and you can even consider yourself lucky if it's just downvotes; sometimes your messages just get flagged, like when I called out renewables as a major reason for the Iberian blackout, with citations from the official report.
TBH, your first phrase is how every bad comment starts, so I can understand reflex downvotes. BUT your actual content after that is fantastic, and it took me a while to mentally get to "oh wait, they make sense here".
> If the Licensee Distributes or Communicates Derivative Works or copies thereof based upon both the Work and another work licensed under a Compatible Licence, this Distribution or Communication can be done under the terms of this Compatible Licence.
They do explicitly mention "Derivative Works", meaning you cannot just convert an EUPL software component to GPL and call it a day.
To my understanding: including an EUPL component inside a GPLv3 project is allowed, but the component itself stays under the EUPL.
(I would appreciate confirmation from a lawyer in the EU; I am not one.)
IANAL, but that could mean that if I take the code of EUPL project Work, fork it as Libre Work, and add a very minor and useless feature that uses another work licensed under some GPL license (not difficult, because I can pick the dependency by license), I have a derivative work that I can distribute under GPLv2.
However it seems strange that they didn't think about that. Maybe it's only a bad choice of words, which is equally strange.
> However it seems strange that they didn't think about that. Maybe it's only a bad choice of words, which is equally strange.
I think it is just a different intent.
To my understanding, the EUPL v1.2 is structured as a weak copyleft license in the spirit of the MPLv2, but with a major effort on license compatibility.
The intent seems never to have been a "strong license" that enforces strong copyleft everywhere, like the GPLv3 / AGPL.
It is more about providing a license under which you can create a project that blends a lot of different components under different licenses (GPL, MPL, and co.) without requiring an army of lawyers to check the compatibility of this mess.
That is currently immensely valuable in academic software and in large international collaborations.
It also clarifies the license contamination behavior of linking at the European level, which is very welcome because, frankly speaking, that is a mess with licenses like the LGPL.
The argument here is (in brief): "Package management is hell, package managers are evil. So let's handle the hell manually to feel the pain better."
And honestly speaking: it is plain stupid.
We can all agree that abusing package management with ~10,000 micro-packages everywhere, as npm/Python/Ruby do, is completely unproductive and brings its own considerable maintenance burden and complexity.
But ignoring the dependency resolution problem entirely by saying "You do not need dependencies" is even dumber.
Not everyone works in an environment where shipping a giant blob executable built out of vendored static dependencies is even possible. This is a privilege the gamedev industry has, and the author forgets a bit too easily that it is domain-specific.
Some of us work in environments where the final product is an agglomerate of >100 components developed by >20 teams around the world, versioned over ~50 git repositories, often mixed with proprietary libraries provided by third-party vendors. Gluing, assembling, and testing all of that is far beyond the "LOL, just stick to SDL" mindset proposed here.
Some of us develop libraries/frameworks that are embedded in >50 products alongside other libraries, with a hellish number of combinations of compilers / ABIs / platforms. This is not something you want to test or support without automation.
Some of us have to maintain cathedrals constructed over decades of domain-specific know-how (scientific simulators, solvers, oil prospecting tools, financial frameworks, ...) in multiple languages (Fortran, C, C++, Python, Lua, ...) that cannot just be rewritten in a few weeks because "I tell you: dependencies suck, bro".
Managing all of that manually is just insane, and it generally ends with a home-made, half-baked bunch of scripts that badly mimic the behavior of a proper package manager.
So no, there is no replacement for a proper package manager: Instead of hating the tool, just learn to use it.
Package managers are tools, and like every tool, they should be used wisely and not as a Maslow's hammer.
I am not sure how you got this conclusion from the article.
> So let's handle the hell manually to feel the pain better
This is far from my position. Literally the entire point is to make it clearer when you are heading into dependency hell, rather than to feel the pain better whilst you are there.
I am not against dependencies, but you should know their costs and the alternatives. Package managers hide the complexity, costs, trade-offs, and alternative approaches, thus making it easier to slip into dependency hell.
It is an alternative, just clearly not one you like. And it's not an oversimplification of the problem.
Again, what is wrong with saying you should know the costs of the dependencies you include AND the alternative approaches to not using them? E.g. using the standard library, writing it yourself, using another dependency you already have that might fit, etc.
> Some of us work in environments where the final product is an agglomerate of >100 components developed by >20 teams around the world, versioned over ~50 git repositories, often mixed with proprietary libraries provided by third-party vendors. Gluing, assembling, and testing all of that is far beyond the "LOL, just stick to SDL" mindset proposed here.
Does this somehow prevent you from vendoring everything?
> Does this somehow prevent you from vendoring everything?
Yes. Because in these environments, sooner or later you will be shipping libraries and not executables.
Shipping libraries means that your software will need to be integrated into other stacks, where you control neither the full dependency tree nor the versions in it.
Vendoring dependencies in this situation is a guarantee that you will make your customer's life miserable by throwing the diamond dependency problem right in their face: your library ships its own copy of (say) libfoo, another component in their stack needs a different version of libfoo, and the two copies collide in the same process.
You're making your customer's life miserable by having dependencies. You're a library, your customer is using you to solve a specific problem. Write the code to solve that and be done with it.
In the game development sphere, there are plenty of giant middleware packages for audio playback, physics engines, renderers, and other problems that are 1000x more complex and more useful than any given npm package, and yet I somehow don't have to "manage a dependency tree" and "resolve peer dependency conflicts" when using them.
When you're a library, your customer is another developer. By vendoring needlessly, you potentially cause unavoidable bloat in someone else's product. If you interoperate with standard interfaces, your downstream should be able to choose what's on the other end of that interface.
> You're making your customer's life miserable by having dependencies. You're a library, your customer is using you to solve a specific problem. Write the code to solve that and be done with it.
And you just don't know what you are talking about.
If I am providing (let's say) a library that provides some high-level features for a car ADAS system on top of a CAN network, with a proprietary library as driver and interface.
It is not up to me to fix or choose the library and driver version that the customer will use.
He will choose the certified version he wants to ship, test my software on it, and integrate it.
Vendoring dependencies for anything that is not a final product (product as in executable) is plain stupid.
It is a guarantee of pain and ABI madness for anybody having to deal with the integration of your blob later on.
If you want to vendor, do vendor, but stick to executables with well-defined IPC systems.
> If I am providing (let's say) a library that provides some high-level features for a car ADAS system on top of a CAN network, with a proprietary library as driver and interface.
If you're writing an ADAS system, and you have a "dependency tree" that needs to be "resolved" by a package manager, you should be fired immediately.
Any software that has lives riding on it, if it has dependencies, must be certified against a specific version of them, and that version should, 100% of the time and without exception, be vendored with the software.
> It is a guarantee of pain and ABI madness for anybody having to deal with the integration of your blob later on.
The exact opposite. Vendoring is the ONLY way to prevent the ABI madness of "v1.3.1 of libfoo exports libfoo_a but not libfoo_b, and v1.3.2 exports libfoo_b but not libfoo_c, and in 1.3.2 libfoo_b takes in a pointer to a struct that has a different layout."
If you MUST have libfoo (which you don't), you link your version of libfoo into your blob and you never expose any libfoo symbols in your library's blob.
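As a minimal sketch of what "never expose any libfoo symbols" can look like with a GNU toolchain (the names, paths, and build flags below illustrate one common setup, not the only way to do this):

    /* mylib.c -- our library, statically linking a vendored libfoo.
     *
     * Hypothetical build commands (GNU toolchain assumed):
     *   cc -c -fvisibility=hidden mylib.c
     *   cc -shared -o libmylib.so mylib.o vendored/libfoo.a -Wl,--exclude-libs,ALL
     *
     * -fvisibility=hidden makes every symbol private by default, and
     * --exclude-libs,ALL keeps the static libfoo's symbols out of our
     * dynamic export table, so the customer can load a different libfoo
     * version in the same process without symbol collisions. */

    int foo_compute(int x);  /* provided by the vendored libfoo.a (hypothetical API) */

    /* The single symbol we deliberately export. */
    __attribute__((visibility("default")))
    int mylib_process(int x) {
        return foo_compute(x) + 1;
    }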
And in addition: Yocto (or an equivalent) will also be what provides the traceability required to guarantee that what you ship is actually what you certified, and not some random garbage compiled in a user's home directory on a laptop.
Did Yocto ever clean up how they manage the sysroot?
It used to have a really bad design flaw. Example:
- building package X explicitly depends on A to be in the sysroot
- building package Y explicitly depends on B in the sysroot, but implicitly will use A if present (thanks autoconf!)
In such a situation, building X before Y will result in Y effectively using A&B — perhaps enabling unintended features. Building Y then X would produce a different Y.
Coupled with the parallel build environment, it's a recipe for highly non-deterministic binaries — without even considering reproducibility.
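To sketch the mechanism in code terms (HAVE_A and the header name are illustrative of the usual autoconf pattern, not taken from any specific package):

    /* Somewhere in package Y's sources. configure probes the sysroot at
     * build time: if A's header happens to be staged (e.g. because X was
     * built first), HAVE_A gets defined and Y silently gains both a
     * feature and a runtime dependency on A. Build order changes the
     * resulting binary. */
    #ifdef HAVE_A
    #include <a.h>  /* hypothetical optional library */
    static int feature_a(void) { return a_do_something(); }
    #else
    static int feature_a(void) { return 0; /* feature compiled out */ }
    #endif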
> Did Yocto ever clean up how they manage the sysroot?
It's better than before but you still need to sandbox manually if you want good reproducibility.
Honestly, for reproducibility alone, there are better options than Yocto nowadays. It is hard to beat Nix at this game; even Bazel-based build flows are somewhat better.
But in the embedded world, Yocto is pretty widespread and almost the de facto norm for embedded Linux.
> but implicitly will use A if present (thanks autoconf!)
When you want reproducibility, you need to specify what you want, not let the computer guess. Why can't you use Y/configure --without-A? In the extreme case you can also version config.status.
Things using autotools evolved to be "manual-user friendly", in the sense that application features are automatically enabled based on auto-detected libraries.
But for automated builds, all those smarts get in the way when the build environment is subject to variation.
In theory, the Yocto recipe will fully specify the application configuration regardless of how the environment varies…
Of course, in theory the most Byzantine build process will always function correctly too!
You're providing a library. That library has dependencies (although it shouldn't). You've written that library to work against a specific version of those dependencies. Vendoring these dependencies means shipping them with your library, and not relying on your user or, even worse, their package manager to provide said dependencies.
I don't know what industry you work in, who the regulatory body that certifies your code is, or what their procedures are, but if they're not certifying the "random library repos" that are part of your code, I pray I never have to interact with your code.
I have dabbled in enough of them to tame my hubris a bit and to learn that various fields have specific needs that end up reflected in their processes (and this includes gamedev as well). Highly recommended before commenting any further.
> I don't know what industry you work in, who the regulatory body that certifies your code is, or what their procedures are, [..], I pray I never have to interact with your code.
You illustrate perfectly the attitude problem of the average "gamedev" here.
You do not know shit about the realities and the development practices of an entire domain (here, the safety-critical domain).
But you still brag confidently about how "my dev practices are better" and assert without any shame that everybody in this field who disagrees is an idiot.
Just to let you know: in the safety-critical field, the responsibility for the final certification is on the integrator. That is why we do not want intermediate dependencies randomly vendoring and bundling crap we have no control over.
Additionally, it is common for the entire dependency tree (including proprietary third-party components like AUTOSAR) to be shipped as source-available and compiled / assembled from sources during integration.
That's why the usage of package managers like Yocto (or equivalents) is widespread in the domain: it allows you to precisely track and version what is used, and how, for analysis and traceability back to the requirements.
Again, when binary dependencies are the only solution available (as with QNX Neutrino and its associated compilers), any serious certification body (like the TÜV) will mandate having the exact checksum of each certified binary used in your application, plus a process to track them back to the certification documents.
This is not something you do by dumping random fu**ing blobs in a git repository as you are proposing. You do it, again, with a proper set of processes and generally a package manager like Yocto or similar.
Finally, your comment on "v1.3.1 of libfoo" is completely moronic. You seem to have no idea of the consequences of duplicated symbols across multiple static libraries with vendored dependencies you do not control, nor of what that can mean for functional safety.
It certainly gets in the way. The more dependencies, the more work it is to update them, especially when for some reason you're choosing _not_ to automate that process. And the larger the dependencies, the larger the repo.
Would you also try to build all of them on every CI run?
What about the non-source dependencies, check the binaries into git?
- Funding for startups in Europe is sparse. There is nothing similar to the USA's VC culture here.
- The market is naturally more fragmented due to language barriers.
- The talent pool is spread over multiple countries. We do not have any equivalent of the Valley here.
- There has been (up to now) zero protectionism that would give an EU company a competitive advantage on EU soil against an American giant.