What about blog spam written by human content writers?
The trouble is we'd already had a web flooded with "AI content" long before GPT was public. Plenty of young writers have been trained to churn out thoughtless streams of writing from prompts, writing that appears to come from an intelligent mind but is often filled with meaningless nonsense.
My industry-specific example is Towards Data Science: content created by fleshy AIs that often looks very insightful at first glance but, when read by an expert, turns out to be mostly incorrect gibberish.
The problem lies with carbon's essential role as part of the energy cycle that powers the majority of this planet.
H2O + CO2 + (solar) energy => useful hydrocarbons (everything from sugar to gasoline) + O2
O2 + hydrocarbons => useful energy + CO2 + H2O
When you burn a log you're really using a solar battery that potentially took decades to charge, and charging it took roughly 25x the energy you feel from the fire (photosynthesis is only about 4% efficient).
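A quick sanity check on that "25x" figure, assuming the ~4% photosynthetic efficiency quoted above (real-world efficiency varies by plant and conditions):

```python
# At ~4% efficiency, storing 1 unit of chemical energy in the wood
# required 1 / 0.04 = 25 units of incoming sunlight.
efficiency = 0.04
sunlight_per_unit_stored = 1 / efficiency
assert abs(sunlight_per_unit_stored - 25) < 1e-9
```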
The same process that feeds us powers the global economy, but at the cost of emitting CO2 in direct proportion to the energy we're using and benefiting from.
This is why "miracle" solutions are so unlikely: they require a major disruption of this process in a way that isn't clearly possible even in principle. Any way to massively remove CO2 from the atmosphere will fundamentally require energy, and because no process is perfectly efficient, it will always cost more energy to capture the CO2 than you gained by emitting it in the first place.
This is where things like nuclear fusion do provide the possibility of breaking this cycle, because they create a lot of energy outside of the solar-powered carbon cycle.
Strong agree on the remote training being entirely possible. My first dev job was remote (long before the pandemic) and onboarding was not a problem at all.
In fact, because you need people to get set up remotely, I find the documentation tends to be better at all-remote companies. In-office companies sort of assume you can just tap someone on the shoulder if you get stuck, so in my experience there are more often gaps in the documentation.
I find this a particularly strange claim, since open source projects were successfully onboarding new people remotely before there were even efficient ways to screen share, video chat, etc.
Not to mention the added hypocrisy that at nearly every company I've worked at, big and small, C-levels are almost never physically in the main office building. Sometimes they're traveling the globe to work on making deals, but sometimes they just want to be at home with their family, or take a semi-vacation.
If you can run a company on the go or at home, certainly I am capable of shipping quality code at home.
We had a “Company webinar” where they explained that it was impossible to do work when out of the office, so forcing everybody in was the only choice.
Of the five directors on the call, one was in the office and three were at home. The CEO’s division of support employees was also exempt from the requirement.
I was once at a startup where management decided that progress was inadequate. In truth progress was inadequate, but it was a hard science problem, so the amount of work done is not necessarily the main factor here. They insisted that everyone work until at least 7pm (the place had a 9am sharp start time, so that amounts to a long day with dangerous chemicals and sensitive equipment; it's the equivalent of staying till 9pm for software folks who amble in at ~10:30am). To avoid having to stay late themselves, management adopted a rotation whereby one manager would stay late one night a week while the rest took off on time (now early). Someone found a copy of this schedule in the printer, made a few extra copies, and posted them around the office, which scrapped that policy. Long story short, the company was never going to make it anyway.
Which is exactly what is happening. This is public science, so it's pretty safe for them, but the company survey showed 40% of employees planning to leave in the next two years (one of the directors said they had probably just misunderstood the question). Many have left already; they actively withdraw job offers when candidates say they want to work remotely, but still manage to find highly paid remote-only positions for their friends.
We’re asking for half a billion from the government, I wonder how closely they will look.
On a positive note I think you can get this without being C-suite (although I also think it shouldn’t be hard to get for anybody!)
I work at an HFT firm as a junior C++ dev. Finance loves the office; I go in 5 days a week.
The one person that doesn’t come in at all? My boss with 15 years of experience. He wrote most of the trade engine himself, the firm would fall apart if he left.
So he can set his terms and he is home 100% of the time.
>He wrote most of the trade engine himself, the firm would fall apart if he left.
I get that he's probably good at what he does, but to be fair, if you as an organization let the bus factor get that low, having the company's livelihood depend on one developer, you're dysfunctional at best and asking for trouble at worst.
Ideally good leadership should want that knowledge spread around or just not let it get to that point from the start.
> having the company's livelihood depend on one developer, you're dysfunctional at best
This is the sort of banal nonsense that seems obviously true on its face but bears no resemblance to reality. In fact, it illustrates the classic East Coast vs. Valley divide. The original SecDB team was half a dozen people. Each of them was invaluable, and their code managed literally a few trillion USD of the world economy. The person who wrote the proprietary graph language and compiler for that system was one single hotshot C++ guy on the standards committee, who also wrote a chapter in the Programming Pearls book. GS continues to use SecDB, and the firm is 150+ years old and counting. Meanwhile, the Valley startups I've worked for since had this idea that everybody must know everything, that all knowledge is diffuse, etc. Lots of time spent teaching backend programmers JavaScript and frontend guys system infra, all out of the goodness of their hearts. End result: neither the startups nor the programs we wrote lasted even a decade, even in the best case. In fact, median tenure in these places was under 2 years, whereas most East Coasters are lifers.
>In fact, it illustrates the classic East coast vs Valley divides.
I don't live in the Valley or on the East Coast; I'm from Europe, which is where my vantage point for this argument lies. Maybe Valley companies can afford to do that because they pay the highest salaries in the world, so they can fix any problem by throwing enough money at it.
>Lot of time taken in teaching backend programmers javascript, frontend guys system infra...all out of goodness of their heart.
That's equally dysfunctional. Reducing the bus factor doesn't mean the front desk lady must know your codebase; it just means there shouldn't be any master on the team who holds the keys to the knowledge kingdom and doesn't share it with the rest of the team.
> there shouldn't be any master in the team who holds the keys to the knowledge kingdom
That's exactly what I'm disagreeing with. It sounds like a good feature on paper; in reality, it seldom is. Most East Coast companies of significant size, if they do anything sufficiently complex, will have 1, sometimes 2-3 guys who hold the keys to the kingdom and know everything: the kind of domain knowledge that cannot be transferred in KT sessions. In all the IBs I've worked at, there was always the point person who knew everything about one thing, and if he got hit by a bus, man, you were in serious shit. It's just the cost of doing business in that domain; you can't really derisk it by writing everything down.
OTOH, most Valley firms do very generic shit, with young troops recruited every so often who stay just long enough to make the jump to the next FAANG. Even in the Valley, you have L8s and L9s whose disappearance can cause significant damage. It's simply not possible to transfer all knowledge to the rank and file; some things are just very hard, and code has a way of getting very convoluted very fast. At the end of the day, software engineering is a very young field. There are no rules like "multiple people must know your codebase." There are actual prop funds in Chicago running out of one big R file written by the cofounder. The whole fund runs out of a single R program! No joke. Big world out there.
I think the point you're missing here is documentation. There's no reason to accept that bus factor; it's just laziness in organizational structuring and the lack of a mandate to create quality infrastructure with good documentation, and you don't need to be a Fortune 500 to do this. Speaking as a catfish programmer who spent many years repairing the products of "rockstars" after they fucked off to wherever they went. Also, if your stuff is too "complicated" to write down, that's a smell: people have documented far more complex pieces of tech than any software-only shop has created; look at the medical field or any mechanically engineered system. People love to make excuses about not writing stuff down, but it turns out you only have yourself to blame once you pull the trigger on your foot gun. Also, your characterizations of the East and West Coasts are criminally juvenile. I've worked on both coasts and in Europe; sure, there are a few orgs like you describe all over the world, but those are the ones where the contract is not worth the headache of dealing with an org that can't tell its own asshole from a hole in the ground because "it's in Jimmy's head."
I worked at a similar company before. Management didn't care about that at all. Our cornerstone engineer quit after getting married and moving abroad; they just found another heavily specialized engineer and made him an obscene offer. It didn't take long for him to be on top of the systems either.
The one thing I'm still in awe of is that the new engineer had his PhD in chemistry and had never attended a CS class, but was nevertheless a world-class engineer and hacker. I miss working with him.
>They just found another heavily specialized engineer and made him an obscene offer
That's unfortunately not something that most smaller companies can afford to replicate and still be alive.
If you've got unlimited money, sure, everyone's replaceable; the problem boils down to having enough money to poach the next best replacement. But hiring the best of the best, with a guarantee they can take over the most complex codebases quickly without impacting operations, is an endeavor that can sink small companies if the bus factor catches up with their core.
> hypocrisy that at nearly every company I've worked at, big and small, C-levels are almost never physically in the main office building. Sometimes they're traveling the globe to work on making deals
That’s not hypocritical. Being effective in those roles requires meeting in person to make those deals, rather than Zooming into a call from home or a tropical island.
It is very reasonable to expect that different jobs/roles benefit from different operating modes.
> I don't understand how intelligent people can continue to believe this stuff is the future of finance.
My experience is that all of the really intelligent people interested in crypto did leave after the ~2012 wave of excitement. At that time when you saw people give talks on crypto they were almost entirely technical with very little focus (or interest) on becoming rich. That was when the people involved tended to be technical idealists. I didn't buy that crypto was the future then, but I wanted to be wrong.
Fast forward to ~2017 during the next crypto boom and the conversation was around the non-technical people at technical companies getting excited. People did believe crypto was going to become the currency of the world, so there was still some idealism, but it was mainly about getting in to get rich. By this wave a good chunk of the idealists I knew were entirely disillusioned.
Then came the 3rd wave, which just happened to correlate with a massive injection of money into the market by the Fed. At this point it was literally just get-rich-quick dreamers with more dollars in their hands than sense. Nobody I know who got in during this period even has a coherent vision of what the future looks like; they just have too much money and think crypto is the way to get insanely rich one day. It's also when people completely unrelated to tech started getting involved: people who don't even know how to use a wallet and rely 100% on third parties to manage all of it.
People are holding on to crypto for the same silly reasons that coworkers of mine keep all their vested stock in companies that have dropped 50%+ in value over the last year: they earnestly believe that the era of low-interest, free money is the norm. They believe the current macro environment is just a blip, and that if they just hodl a bit longer everything will go back to "normal".
For those curious what the big deal is here: PyTrees make it wildly easier to take derivatives with respect to parameters involving a complex structure. This makes it much easier to organize code for non-trivial models.
As an example: if you want to implement logistic regression in JAX, you need to optimize the weights. This is easy enough, since they can be modeled as a single value, a matrix of weights. If you want to model a 2-layer MLP, now you have to use two weight matrices (at least). You could treat these as two parameters to your function (which makes the derivative more complicated to manage), or you could concatenate the weights and split them up, etc. Annoying, but manageable.
When you get to something like a diffusion model, you need to manage parameters for a variety of different, quite complex models. It really helps if you can keep track of all these parameters in whatever data structure you like, but also trivially call "grad" with respect to them and get your model's derivative with respect to its parameters.
Pytrees make this incredibly simple, and they are a major quality-of-life improvement in automatic differentiation.
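A minimal sketch of the point above. The parameter names, layer sizes, and loss here are illustrative, not from any particular codebase; the key feature is that `jax.grad` accepts the whole nested dict and returns gradients in the same structure:

```python
import jax
import jax.numpy as jnp

# Parameters for a small 2-layer MLP stored as a nested dict (a pytree).
params = {
    "layer1": {"w": jnp.ones((3, 4)), "b": jnp.zeros(4)},
    "layer2": {"w": jnp.ones((4, 2)), "b": jnp.zeros(2)},
}

def mlp(params, x):
    # Forward pass indexes into the pytree however we like.
    h = jnp.tanh(x @ params["layer1"]["w"] + params["layer1"]["b"])
    return h @ params["layer2"]["w"] + params["layer2"]["b"]

def loss(params, x, y):
    return jnp.mean((mlp(params, x) - y) ** 2)

x = jnp.ones((5, 3))
y = jnp.zeros((5, 2))

# grad differentiates with respect to the entire structure at once;
# the result is a pytree of gradients mirroring params.
grads = jax.grad(loss)(params, x, y)
assert grads["layer1"]["w"].shape == (3, 4)
assert grads["layer2"]["b"].shape == (2,)
```

No flattening, concatenating, or bookkeeping of separate weight arguments is needed, which is exactly what becomes painful once the model grows past a single matrix.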
A good chunk of this comes directly from Richard Hamming's incredible The Art of Doing Science and Engineering (a video of the specific lecture on n-dimensional spaces can be found here[0]), and yet I see no mention of this talk anywhere in the article, which is unfortunate.
I highly recommend checking out Hamming's lectures if you find this enjoyable.
Problem is that everything needs X powered by clean energy, which means we need both a lot more clean energy and, if we want to slow/stop climate change, to massively reduce our use of fossil fuels... which requires a lot more clean energy.
We also have, so far, shown no evidence globally of replacing fossil fuels with green energy, only of supplementing them.
Direct carbon capture, electric vehicles, electric building heat, electric industrial process heat, and desalination are all massive new sectors that need to be powered that are barely on the radar of the existing electric grid.
And we need more power to let emerging economies rise out of poverty, where having electric pumps and washing machines is truly life-changing from a quality-of-life perspective.
So we need about 5x more total energy while also shifting from 80% fossil to 0% fossil.
This is why I advocate for nuclear while most people advocate for wind and solar. We need advocates for all clean energy. Nuclear can actually use its own direct low-carbon heat to help with buildings (district heat) and industry, and that heat can also help with desalination in some cases, though reverse osmosis (RO) is the preference these days and doesn't really need much heat input.
Tidal power is a thing. Tidal power for ocean desalination eliminates the need for transmission infrastructure. Desalination has so many obvious upsides that I'm increasingly suspicious of the people insisting it's too difficult to contemplate, none of whom contribute anything of value to the discussion.
I'm still not entirely convinced that pipes aren't an anti-pattern. Absolutely an improvement over nested function calls:
a(b(c(d))) vs d |> c |> b |> a
but I'm not convinced pipes are better than more verbose code that explains each step:
step1 = c(d)
step2 = b(step1)
result = a(step2)
I've written a lot of tidy R and do understand the specific use cases where the more verbose format really doesn't make sense, but generally, when I'm building complex mathematical models, I find the verbose method much easier to understand.
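The contrast above can be sketched in Python (which lacks a native `|>`, so a tiny `pipe` helper stands in for it; the function names `a`, `b`, `c`, `d` are placeholders, as in the original):

```python
from functools import reduce

# Placeholder steps standing in for the a/b/c/d of the example.
def c(x): return x + 1
def b(x): return x * 2
def a(x): return x - 3

d = 10

# Nested calls: read inside-out.
nested = a(b(c(d)))

# Verbose intermediate variables: each step named and inspectable.
step1 = c(d)
step2 = b(step1)
result = a(step2)

# A minimal pipe helper: reads left-to-right like d |> c |> b |> a.
def pipe(value, *fns):
    return reduce(lambda acc, f: f(acc), fns, value)

piped = pipe(d, c, b, a)
assert nested == result == piped == 19
```

All three spellings compute the same thing; the disagreement in the thread is purely about which one is easiest to read and debug.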
I think having intermediate variables is a sort of 'littering', and it requires extra naming work which might not be necessary. Also, with pipes you can take out any intermediate step just by commenting out or deleting a line; you cannot do this with your method above without going back and rewriting many different arguments. I also like piping because you can quickly increment and build a solution, quicker than naming intermediate steps anyway.
Naming intermediate steps requires some non-trivial effort. It can even distract from the main task of getting the results.
In programming, code will be read multiple times and good names help future readers. But in data science the calculation will most likely not be reused, so the effort spent naming things is wasted.
I suggest strictly binding output to a symbol only if it will be used in multiple places.
So when I read code and see some "intermediary" value bound, it tells me immediately "this thing will be used in several spots." Thereby, bindings actually start to convey extra information.
Anyway, it's just something that's worked for me. In all other scenarios I use threading/pipelines (maybe that's Clojure-specific). If steps are confusing/complex, make a local named lambda or, in the extreme case, add comments.
If nothing else, you can just pipe the code and then write comments explaining what's left after each step. But the verbose code can be substantially slower in cases where piping allows all the operations to be performed lazily.
It's open source in the sense of OSINT [0]. Clearly confusing on a site like Hacker News, but this has been standard usage of the term for that community for a long time now.
Thank you for the clarification. "Open-source" is definitely different from "Open Source."
Meaning Open-source (sourced from open sources) but claiming Open Source is disingenuous.
Wikipedia isn't helpful either, because it refers to OSS as Open-source Software [1].
The Open Source meaning may be clearer in comparison to Free Software. Stallman refers to "Open-source" (hyphenated) only once in this article, and only to call it confusing versus free software [2].
It's possible "OSINT as Open-source" has been in use for longer than Stallman's use of "open source," but definitely they are different.
It's strange a site would sell a feature on HN as "open-source content, in the OSINT sense" without being up-front about it. The default assumption would be "open source" as in code that is free to modify, etc.
The mental gymnastics would be
1. They claim it is "open source."
2. They are talking about _content._
3. It must be the OSINT kind of "open."
This could be a pattern, because they're always needing to add another comment, "Just kidding, we meant OSINT open; we're not sharing the code."
... documentation could be open source too, though, in the sense of "free to modify, etc." and not "sourced from freely available data."
Could it be both? Only if they accept contributions, I guess.