After programming with Elixir and Phoenix for a few years (with many prior years of Rails experience), I have a hard time seeing why one would choose Rails.
Elixir is more performant, has compiler safety guarantees that are only getting better as types are introduced, is actually designed from the ground up for web dev (being based on the Erlang VM), and... it's just way more fun (subjective I know). Elixir is what I always wished Ruby was, and I couldn't be more excited about the coming type inference.
Programming with Elixir makes me feel like Ruby is a previous-generation language, much like Ruby made me feel that way about COBOL or Fortran; it really is that stark.
> is actually designed from the ground up for web dev (being based on the Erlang VM)
Nit: this makes it sound like the BEAM was designed for web dev, which it was not. Erlang came out of Ericsson and was built for telecoms (OTP stands for Open Telecom Platform), which is where its unique set of trade-offs comes from. Many of those trade-offs make a ton of sense for web but that's not because it was designed for web, it's because there's overlap between the fields.
One way to see the difference between telecoms and web is to ask yourself when was the last time that you were working on a project with an availability SLA of 9 nines (about 30 milliseconds of downtime per year) or even 6 nines (about 32 seconds per year). Some web dev has that, but most doesn't come close, and if that's not you then Erlang wasn't built for you (though you may still find you like it for other reasons!).
Very true, it is actually designed for telecoms, but like you mentioned, the distinction is small enough that it's not really a stretch to say it was purpose-built with at least the general architecture of the web in mind.
In the grand scheme of things, if we're considering everything from web to bridge building, yeah, the distinction is small. But within the world of software engineering specifically it's not all that small and it's worth being precise when we're talking about it.
Whatsapp and telecoms have a lot in common, so no one questions that they benefited a ton from the BEAM.
Airbnb, though? The main similarity is that they both send large quantities of signal over wires.
Again, none of this is to stop you from liking the BEAM, but when we're talking about professional software engineering it pays to be explicit about what the design constraints were for the products that you're using so that you can make sure that your own design constraints are not in conflict with theirs.
no. in the modern web world you often have persistent client-server connections, which make it a distributed system out of the gate. the most inefficient way to deal with this is to go stateless, but without smart architecture to deal with unreliable connections, it's really your best choice (and, it's fine).
since BEAM gives you smart disconnection handling, web stuff built in elixir gives you the ability to build on client-server distributed without too much headache and with good defaults.
but look, if you want a concrete example of why this sucks: how much do you hate it when you push changes to your PR on github and the CI checks in your browser tab still aren't updated with the new CI run that was triggered? you've got to refresh first.
if they had built github in elixir instead of ruby, this sync issue would almost certainly be solved. in maybe two or three lines of code.
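(a rough sketch of what that looks like with LiveView and PubSub, to show the shape of it; all module, topic, and field names here are invented:)

    defmodule DemoWeb.CIStatusLive do
      use Phoenix.LiveView

      def mount(%{"pr" => pr}, _session, socket) do
        # every open tab subscribes to this PR's topic
        if connected?(socket), do: Phoenix.PubSub.subscribe(Demo.PubSub, "ci:#{pr}")
        {:ok, assign(socket, status: "pending")}
      end

      # whatever process ingests CI webhooks broadcasts the new status, e.g.
      #   Phoenix.PubSub.broadcast(Demo.PubSub, "ci:" <> pr, {:ci_status, "passed"})
      # and every subscribed tab re-renders. no refresh needed.
      def handle_info({:ci_status, status}, socket) do
        {:noreply, assign(socket, status: status)}
      end

      def render(assigns), do: ~H"<span>CI: <%= @status %></span>"
    end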
And if you need that kind of persistent immediately reactive connection and are willing to pay the price, go for it! If that's truly a requirement for you then you're in the subset of web that overlaps substantially with telecoms.
I'm not cautioning against making the calculated decision that realtime is a core requirement and choosing the BEAM accordingly. I'm cautioning against positioning the BEAM as being designed for web use cases in general, which it's not.
Many projects, including GitHub, do not need that kind of immediate reactivity and would not have benefited enough from the BEAM to be worth the trade-offs involved. A single example of a UX flow that could be made slightly better by rearchitecting for realtime isn't sufficient reason to justify an entirely different architecture. Engineering is about trade-offs, and too often in our field we fall for "when all you have is a hammer". Realtime architectures are one tool in a toolbox, and they aren't even the most frequently needed tool.
what price? learning a new language that is designed to be learned from the one you already know with fewer footguns? ok fine.
but you make it seem like going to elixir is some kind of heavy lift or requires a devops team or something. the lift is low: for example i run a bespoke elixir app in my home on my local network for co2 monitoring.
and for that purpose (maybe 300 lines of code?) yes, i do want reactivity. wrangling longpoll for that does not sound fun to me.
To name just a few costs that aren't worth it for many businesses:
* A much smaller ecosystem of libraries to draw from.
* Much weaker editor tooling than with more established languages.
* An entirely different paradigm for deployments, monitoring, and everything else that falls under "operations" that may be incompatible with the existing infrastructure in the organization.
* When something does go wrong, using a weird stack means you have less institutional knowledge to lean on and fewer resources from people who've been doing the same thing as you.
* A whole new set of foot guns to dodge and UX problems to solve related to what happens when someone's connection is poor. This has come up repeatedly in discussions of Phoenix LiveView—what you get in reactivity comes at the expense of having to work harder to engineer for spotty connections than you would with a request/response model.
* More difficulty hiring people, and an increased tendency when hiring for selecting people who are really just obsessed with a particular tool and unwilling to see when the situation calls for something else.
There are many more, these are just the ones I can think of without having a concrete application with concrete requirements to analyze. In the end for most apps reactivity is so much a "nice to have" that it's hardly worth sacrificing the stability and predictability of the established option for moderately better support for that one aspect of UX, especially given that you can always add reactivity later if you need to at a slightly higher cost than it would have come at with Erlang.
If reactivity is a core requirement, that's a different story. If it's polish, don't choose your architecture around it.
When José Valim “moved on” from Ruby development to work on Elixir, his fans followed.
It’s a bit like true believers switching faiths when their leader changes religions.
It’s hard to make sense of it from the outside looking in, but it’s definitely a thing that happened historically and occurs in small and big ways even today.
i had never heard of jose until i started working in elixir. i had about two years of grueling ruby experience[0] and boy was elixir an amazing breath of fresh air.
what is also a thing is stockholm syndrome and sunk cost fallacy.
[0] ok, now that i think about it, this was also when ruby was having serious 1.8-2.0 transition troubles and i had difficulty reinstalling ruby if i needed to redo the os because some python wheel broke everything, so i had other historical reasons to leave ruby with a bad taste in my mouth. i think my gripes about activerecord are real, though.
> in the modern web world you often have persistent client server connections
Is this actually true though? I’d be interested if you know any data backing that perspective. I only know what I’ve worked on and my anecdotal experience doesn’t match with this statement. But I know my sphere doesn’t represent the whole. In terms of state, by now there are many ways of dealing with persistence and reconnection. Not only are most of those problems solved with existing technologies and protocols but they’re everywhere in web dev. Maybe we’re talking past each other? Did I misunderstand your point?
I switched fully to elixir close to a decade ago now and library availability is still lagging. For pretty much any company I can be pretty sure there will be JS/Ruby/Python/C#/Java integrations/libraries and occasionally you'll find one for elixir maintained by someone that stopped responding to github issues 3 years ago.
It's definitely better, but I can still see why you'd choose Rails these days.
I agree with this sentiment, though in practice it doesn't seem to be much of an issue the vast majority of the time. Sometimes you do need that niche library though, and end up forking and updating for your needs.
Given how rarely this comes up it feels like a tolerable problem that will only diminish as Elixir adoption continues to increase; I am aware of many Rails shops that are slowly and quietly switching everything to Elixir, and it feels like that snowball continues to pick up pace as Elixir improves and those libraries are created.
It may come up rarely for you, but in my workflow I run into it at least once a month, and as a new user you'll run into it more frequently while you initially port stuff over. I'm not sure what the solution is, and it obviously hasn't been enough to keep me out of the ecosystem, but it is something that is noticeably worse.
I have very extensive experience with both Ruby on Rails and Elixir/Phoenix, and ended up building large full-stack apps on both frameworks.
In the beginning, when Ruby on Rails said hello to me, I instantly fell in love with it, its simplicity, and the natural semantics that flow with it. It was absurdly easy to write new features and ship them to production. As the codebase and the team grew, we started running into situations where APIs broke, and it became hard to trace the workflow of things: finding where methods came from, finding parent modules of modules and their parents, and untangling configuration. I also started to notice a general lack of IDE autocomplete and type safety.
Then after a few years I jumped ship to Elixir, and it felt like a breath of fresh air even though I had to learn FP. Everything was simple enough to understand. Performance knocks Node, Python, and any other interpreted stack out of the water. The Phoenix framework was, and is, thoughtfully designed, and although there was no first-class IDE support, we still had ElixirLS, which was great enough to provide realtime guidance, linting, and type checking at compile time.
I was able to ship a very large app into production and it was bulletproof.
The problem with Elixir was that our other engineers struggled to shift away from Node or whatever other stacks they already knew. They found the entire FP world weird. Hell, I found it weird too at times. Simple mutations of maps and arrays that would be trivial in Ruby ended up being surprisingly involved in Elixir. In the end it felt like my team was not on the same page.
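To make that concrete, here's the kind of thing that trips people up (data invented for illustration). In Ruby you'd mutate in place, something like order[:items][0][:qty] += 1, and be done. In Elixir nothing mutates, so you rebuild the nested structure along the path, usually with the Access helpers:

    order = %{items: [%{sku: "A1", qty: 1}]}

    # update_in/3 returns a new copy with the change applied; `order` itself is untouched
    order = update_in(order, [:items, Access.at(0), :qty], &(&1 + 1))
    # => %{items: [%{sku: "A1", qty: 2}]}

It's not hard once it clicks, but it is a genuinely different mental model.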
I guess Elixir would be great if you ran a 3-person team or something, but since we were not, we got back to Ruby.
In today's world though, I am largely looking at Go for a backend system. IDE support is up there with Java, and the ecosystem is old and mature enough to find any package that you look for. Performance approaches C, and the learning curve is gentle.
I was working with Go a lot as something complementary to Ruby/Rails. I have ended up with so much Ruby work. Either maintenance of large successful efforts from years ago, new development for those same companies, or new development from the people who have experienced great success with Ruby on Rails. I can't seem to get away from it, and that's just fine.
At this point, I'm putting together teams and getting new developers into Ruby on Rails. I'm also seeing companies move back to full-stack RoR after the luster of React has worn off. Also, modern RoR can get you so far now with a fraction of the dual-framework headaches of a RoR backend/JS frontend.
Great to hear. I agree - all the react/SPA bloat along with other layers like Vite/SSR/Webpack etc. is not needed for 95% of the apps today.
Any MVC framework with HTMX, jQuery (yes), Hotwire/Stimulus/Turbo, etc. puts the productivity and deployment speed of the front-end setups above to shame.
Rails has more baked in for the typical CRUD app. For example:
Try to create a way for people to upload documents like images and PDFs. Okay, easy enough on both platforms. Now I want you to generate a preview for each of those files so that people can easily find them. Now I want you to add pagination. Now I want you to add column sorting so that people can sort by file size, by name, or by upload date. Finally I want you to add a search field. Oh, and by the way, all of this state needs to live in the URL so that you can bookmark the different choices you've made.
This stuff is pretty trivial in Rails, but in Elixir you would have to bake it all yourself: very boring code that doesn't really matter. This is why I chose to build my startup's admin dashboard in Rails despite our main production API being in Elixir.
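For anyone wondering what baking it yourself looks like, the hand-rolled Ecto version of that list view is roughly this (Document, Repo, the name field, and the page size are made-up stand-ins; a real version should also check the sort column against an explicit allowlist):

    defmodule MyApp.Documents do
      import Ecto.Query
      alias MyApp.{Repo, Document}

      def list_documents(params) do
        page = String.to_integer(params["page"] || "1")
        # to_existing_atom avoids creating atoms from user input, but an
        # explicit allowlist of sortable columns is still the safer move
        sort = String.to_existing_atom(params["sort"] || "inserted_at")
        q = params["q"] || ""

        Document
        |> where([d], ilike(d.name, ^"%#{q}%"))
        |> order_by([d], asc: field(d, ^sort))
        |> limit(20)
        |> offset(^((page - 1) * 20))
        |> Repo.all()
      end
    end

Multiply that by every list view in an admin dashboard and the appeal of having it baked in becomes obvious.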
LiveView uploads are baked in, previews and all. Everything else you list is included in the Flop library, if you want something off the shelf. In Rails you are still including Kaminari or whatever other gems for all this too, so this is really no different.
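For reference, the Flop entry point looks roughly like this (from memory of its docs, so check the current API; Document is again a stand-in schema, which would need to derive Flop.Schema with its sortable/filterable fields):

    # parses page/sort/filter params straight from the URL and runs the query
    {:ok, {documents, meta}} = Flop.validate_and_run(Document, params, for: Document)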
Extremely risky, tbh. I would have an extremely hard time if I go off the beaten path or need to hire someone. It would be almost negligence to choose it, unfortunately.
It's the opposite, since it standardises everything as opposed to rolling your own.
If you need to hire someone you'd need to train them on your system no matter what, with a framework you can use their documentation to explain where things are and how they work.
Elixir is already a small fraction of a small and shrinking community (Rails). Ash is a tiny fraction of an already tiny fraction. I cannot imagine defending this choice to anyone unless I was literally the CEO of a company and answered only to myself.
Elixir really needs to lose the perception, if there is one, of it being a subset of the Ruby/Rails community. It's true that the initial influx of Elixir developers came from the Ruby world back when Elixir was new, but that was a long time ago. Tons of Elixir folk come into it nowadays without a Ruby background.
Elixir and Ruby really aren't that similar anyway. The syntax similarities are very superficial - Elixir's a functional language with a very different style and semantics from Ruby, and that's even before you get into the magic of OTP and the BEAM, for which Ruby has nothing comparable.
It doesn't really matter though. You have to train new staff on your systems/code base no matter what you use. So if they don't already know ash it's the exact same as if you didn't use it. Only now you can point them at the ash docs and buy them the ash book and they'll know where everything in your system goes.
I've been using Elixir for over 10 years, if it was ever a "small fraction of the Rails community" it was during its formative years only. Elixir is fully its own thing. We don't even really talk about Ruby? I really do think you've got a mixed up perception on that front
> Add in problems finding developers skilled in Elixir and Phoenix and the small available libraries.
Is this actually a problem you see? I'm going on 15 years in the industry and haven't seen any issues training people up on a new language in just a couple months.
If you need an expert in some library or language to make meaningful business progress I feel like that says more about whatever tool or language you're using, and I simply don't see that with phoenix or elixir in the years I've worked with it.
I feel like the sentiment of “we can train a competent dev in our language and stack” has given way to “we want a dev with proven experience in our language/stack” over the last few years. I suspect this has something to do with more non-technical staff being put in between candidates and the engineers they’ll be working with during the hiring process. These non-technical staff rely on “x years of experience in thing” to know if a person might be competent at that thing.
I think that this is one of the reasons networking is becoming more and more important, because it lets a candidate demonstrate their generally-applicable development skills to a fellow engineer who is capable of making qualitative engineering judgements.
> Is this actually a problem you see? I'm going on 15 years in the industry and haven't seen any issues training people up on a new language in just a couple months.
Some years ago the largest company using Elixir in the US, or at least on the west coast, abandoned Elixir because they couldn't find enough developers.
Yes. The adoption is poor despite the loud voices.
That's so disappointing to hear. I have an intern who hadn't touched Elixir 4 weeks ago who is already making meaningful PRs. She's done the PragProg courses and leans a bit on Copilot/Claude, but she's proving how quickly one can get up to speed on the language and contribute. To hear that a major company couldn't bring resources up to speed, to me, shows a failure of the organization, not the language or ecosystem.
The Ruby on Rails project I'm currently involved in doesn't struggle with training people, but rather with retaining them. There have been a few instances where we trained a junior developer and got them up to speed, only to lose them within a year. For small teams, this can be quite frustrating and disheartening.
This issue might be partly due to the project being in a somewhat niche and conservative industry, so there are no startup vibes. However, since they started looking for someone ready to make a longer commitment than a developer who has just started their career, things have improved. But this approach also limits the pool of available developers.
It's worth noting that we also use Elixir in this project (the chief architect is quite fanboyish about it), but we have never had any new developers come in with pre-existing knowledge of Elixir.
It's a matter of taste, but i found Ruby syntax to be annoyingly inconsistent, and do |..| ... end being something that isn't quite a lambda was a huge source of confusion.
also activerecord doing "trust me bro" things behind the scenes (like pluralization) drove me up the wall.
to be fair ecto does a small bit of this too, but at least it doesn't change spellings (so you can global search an identifier).
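for comparison, the greppability point is that ecto makes you spell the table name out instead of inflecting it for you (schema below is just an illustration):

    defmodule MyApp.Person do
      use Ecto.Schema

      # nothing pluralizes "person" behind your back; the string "people"
      # exists in the source, so a global search actually finds it
      schema "people" do
        field :name, :string
      end
    end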
If you're looking for something to invest in for the long term I think Rails wins by a mile. They have the funding, investment and strong companies dependent on it to keep it marching forward - both the framework and surrounding libraries
In terms of successors there’s maybe Julia, or otherwise you’d have to use Python or Matlab/Octave, with all that going to a scripting language entails. In any case it doesn’t really feel like there’s been a replacement.
Elixir is a great language, but it lacks a framework as polished and full-featured as Rails. Phoenix could have been far more popular if it had something like Active Record.
Many Rails developers try Phoenix at some point because they may need better performance. They’re so accustomed to the Rails structure that they assume Rails has done everything right. However, Ecto and ActiveRecord are two very different beasts. When Rails developers try out Ecto, they often feel there’s too much boilerplate and believe the Rails design is much more intuitive. This, I think, is one reason Phoenix struggles to attract Rails developers. If it can’t please Rails users, it will rarely appeal to others.
Ecto was literally the component I liked least in the whole Phoenix stack when I worked with it after a dozen years of Rails.
I did maybe 5 years of Phoenix for a customer of mine and went back to Rails for another customer. It's good enough and overall Rails is easier to deploy IMHO. Capistrano vs I don't remember what.
Oh man, this must just be subjective because I find Ecto to be beautiful compared to the absolute trainwreck of ActiveRecord. Having compile-time guarantees through Ecto is wonderful.
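One concrete example of the kind of compile-time check meant here (module name invented): because an Ecto schema defines a struct, a typo'd field name in a struct literal fails to compile instead of blowing up in production:

    defmodule MyApp.User do
      use Ecto.Schema

      schema "users" do
        field :email, :string
      end
    end

    # elsewhere, this is rejected at compile time:
    #   %MyApp.User{emial: "a@b.c"}
    #   => (CompileError) unknown key :emial for struct MyApp.User

Query fields get validated against the schema too, though that check happens when the query is planned rather than at compile time.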
Yes, that must be the case, because if my customers and I cared about compile-time guarantees we would not be working with Ruby.
In that years-long Phoenix project, one of the developers on the team added Dialyzer type annotations to the functions in the files he worked on. Everybody else did not bother. The project ended up with no type checking. The service ran and the company did well.
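For readers who haven't seen them, those annotations are @spec attributes. They do nothing at runtime and are only checked when somebody actually runs Dialyzer, which is why a half-annotated codebase quietly ends up unchecked (example invented):

    defmodule Prices do
      # verified by `mix dialyzer` (via the dialyxir package), ignored otherwise
      @spec total([number()]) :: number()
      def total(line_items), do: Enum.sum(line_items)
    end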
Overall using Phoenix was a good experience. I never used Elixir in any other project and never for my own programs. I use several other languages for my own little scripts, mainly bash, Ruby, Python and Lua. I think that I really like dynamic typing.
As someone who has used Elixir for startups and loves it, the benefit of Rails for a startup is that it's easier to pick up. html/live projects can create more confusing layouts, which can make it harder to learn if you're trying to get something running. Rails is a great framework to use if your primary product is not a website, but you need a website.
I thought I'd include other implementations from Ruby and Elixir. Rails has always done much more and sits on the heavier side of things. There are also many tests showing that simply switching the server to Iodine brings Ruby performance up to Elixir/Phoenix levels.
These benchmarks will be forever amusing. I've done two full migrations of a Rails app to Phoenix and the differences we've seen in our telemetry boards ranged anywhere from 4.5x all the way to 20x.
IMO TechEmpower lost all credibility a long time ago, after it was demonstrated that they do nothing against heavily gamed benchmarks where people literally do basic string matching against regexes instead of proper HTTP parsing. Some entries even relied on characters being at exact positions.
Add to this how slow they were to adopt normal production-style code changes to the Elixir apps, where it was proven that the author of the benchmark had no idea how to write an Elixir/Phoenix app, and yeah, it does not look good for TechEmpower.
All that being said, use what you like (some even expressed the confusing stance of "I like Ruby's syntax more", which, need I even comment how unprofessional that is?). But to claim Elixir is a mere 2x faster than Ruby is misguided. My real production experience from the last 10 years says this is bull.
What's unprofessional about liking one syntax more than another? Of course this shouldn't be the main reason to adopt a language, but having a preference is totally OK.
To expand on this, many states have a large white pine lumber industry. The white pine is highly susceptible to a type of fungus harbored by currants.
The fungus does not spread from white pine to white pine, only from currant to currant, or currant to white pine, so eliminating the nearby currants protects the white pine industry.
Apparently this is no longer much of an issue. Quoting [0]:
"The federal ban was lifted in 1966, though many states maintained their own bans. Research showed that blackcurrants could be safely grown some distance from white pines and this, together with the development of rust-immune varieties and new fungicides, led to most states lifting their bans by 2003. Blackcurrants are now grown commercially in the Northeastern United States and the Pacific Northwest. Because of the long period of restrictions, blackcurrants are not popular in the United States, and one researcher has estimated that only 0.1% of Americans have eaten one. [...] By 2003 restrictions on Ribes cultivation had been lifted across most of the states, though some bans remain, particularly on the blackcurrant. State laws are enforced with varying degrees of efficiency and enthusiasm; in some states, officials effectively ignore the ban."
they're also available at a local walmart as rootstock. I bought one. If i find a nursery that has it i will buy more, but i like growing "weird" plants that no one has heard of, like soapberries, kumquats, that sort of thing.
those grow wild all over the land here, i just found out what they were called last year; although i had heard they're not edible and that you should leave them for the birds. I'll ask the Ag Center if they're safe to eat.
Please do ask your AG center, but they’ll tell you they’re safe to eat. I make a jam of sorts with the berries. They’re not real sweet but are totally edible
One of the most powerful features of vscode is extensions. Until zed has a comparable extension set to rival all the things I use in vscode, I couldn't care less about 50ms vs 70ms lag on keystroke renders.
Response time matters, but man, there's just so much toxicity around from people who act like speed is all that matters, and act all high and mighty that things are so bad.
These modern systems do so much helpful stuff. The responsiveness out of the box is not bad, if not stellar, and then these plugins really layer in whole worlds of help. Like in so many places, this complexity & nuance can just get totally eaten to pieces by the hungry wolves happy to shred anything showing any signs of being insufficiently super l33t for their superb sensibility.
> such toxicity abound with people who act like speed is all that matters
Maybe it is a point of obsession to some.
But I've experienced that sensitivity to latency is very subjective.
Maybe there's an age component as well, with younger people being more sensitive.
When I use other people's computers and they have not optimized the latency (e.g. running a sluggish Windows with lots of background processes stealing the CPU, Alt+Tab taking several hundred milliseconds, etc.) I get angry, and I don't understand how anyone can live with a computer that is orders of magnitude slower than your own reactiveness.
Typescript takes away a significant amount of the pain for me; the only hold up after that was getting an environment set up to compile it. Deno supporting typescript without any configuration is incredible.
TypeScript is mostly additive, so all the footguns are still there if you aren't careful to avoid them. It also doesn't do anything about the extremely meager standard library that is inferior to what other mainstream PLs had 20+ years ago.
I've done freelancing before at 4 days a week and loved it, and have generally negotiated into 4 day workweeks at various startups. Give a shout-out to your company if they're hiring ;)
As someone currently growing more cucumbers than I can handle (gardener problems), I've been eating at least 2 a day of a few different varieties, including a somewhat bitter pickling variety.
That said, I've never experienced the burping side effect many report. Makes me very curious what's going on metabolically to cause some to burp but not others. Can gas produced by the microbiome of the upper intestinal tract make it back up into one's stomach?? All very fascinating regardless.
Because the FDA exists?? Because that would be fraud to claim otherwise, and fruit/veg sources are easily traceable back to their source, particularly at quantity?
Do you have evidence to back up this claim? I'm aware CA puts many warning stickers on various products... but isn't it possible that profit-seeking corporations are, in fact, using cancer-causing materials simply because they're cheaper?
The point isn't that the amount of cancer caused is literally zero. Just by chance, everything will have some (generally infinitesimal) effect on cancer, and often it will be positive. The question is whether "causes cancer" is being applied to products that cause amounts of cancer so small that it's not worth warning people about. That consumer products and businesses are covered in these warnings and few people take them seriously is prima facie evidence that this is the case, but you'd have to dig into the numbers to be sure.
For instance, Wikipedia:
> The requirements apply to amounts above what would present a 1-in-100,000 risk of cancer assuming lifetime exposure (for carcinogens)
Using the standard ~$5M statistical value of life, this means that you need to label a product if it is estimated to impose the equivalent of $50 in costs on someone who is regularly exposed to the chemical over an entire lifetime. I'm not sure what frequency of exposure is being assumed here, but naively that means that if I use the product once a week, it requires notification of about 2 cents' worth of harm per usage.
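Spelling out that arithmetic (the 50-year exposure window is my assumption; the regulation may assume something else):

    $5,000,000 x (1 / 100,000) = $50 of expected lifetime harm
    $50 / (50 years x 52 weeks/year) = ~$0.02 per weekly exposure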
You're not going to get more customers by labeling your product with "99% less cancerous than what the standard requires" next to the warning that it causes cancer.
They're not labeling things that are particularly known to be harmful. CA Prop 65 warnings are on all rice, coffee, and multi-tenant garages. When you begin labeling things that common and benign as "cancer-causing" people learn to tune it out. Pretty sure rice, coffee, and/or multi-tenant garages are found pretty much everywhere.
This article is not very convincing. I mean sure, the thing about coffee was over the top, but they also stopped requiring that one.
Meanwhile the other examples it uses are that you have to be warned when you're being exposed to things like diesel exhaust. Which, um, actually does cause cancer.
Regulations always have dumb results. If you put the warning on anything with lead in it, people make fun of the warning on a Tiffany lamp. If you don't, people are shocked that you don't have the label on a child's toy with lead paint.
Regulators are never going to thread the needle that well. They're not capable of it. This is a huge problem when they're banning stuff. When they're labeling stuff, eh. People will figure it out.
People don't seem to change their behaviour due to those warnings. Nobody's going into a coffee shop, seeing that warning sticker and thinking 'ah whoops better get out of here' are they? The warning 'cancer-causing' has no effect.
This works if you have 20 coffee shops and one of them has a "there's asbestos in this building" warning.
If you have warnings literally everywhere, for minor things that no one really cares about because the risks are minuscule, people will start ignoring even the dangerous but identical-looking signs. "This item causes cancer"... are we talking about asbestos, or are we talking about a roasted potato? If the labels are the same, people stop noticing them.
I agree with your sentiment, but fear of asbestos is also a danger that has been highly exaggerated. Asbestos is only dangerous if it is particularized and inhaled in high quantities over a period of time. Men who changed brakes that had asbestos in them, and thus breathed lots of asbestos dust, or men who installed asbestos pipes and were cutting them all the time, were the ones who got cancer (or their wives, who washed their dusty clothes). The fear of asbestos objects, or of buildings that have, say, asbestos insulation on pipes in the basement, is not reasonable, and is another example of overblown fear that probably cost the US hundreds of billions of dollars (wild guess) that could have been spent much more productively on something else.
Ultimately you're describing how asbestos is generally handled, apart from the rare exceptions of subsidies to preemptively replace it. But eventually, maintenance has to be performed on things made out of asbestos, which would then disperse it into the air and surrounding environment. So sure, asbestos is basically inert until it's disturbed, but once some part needs to be disturbed then it makes sense to do a full scale remediation rather than setting up expensive containment and only finishing part of the job.
I was going to suggest that asbestos was a bad example, because, in most cases, as long as it's left undisturbed, it's completely safe. The only risk from asbestos is from breathing it into one's lungs. If it's not in the air, it's not a problem.
But, then I thought: hmm... maybe this is a great example. People are terrible at assessing risks. The word 'asbestos' is likely to cause a greater reaction than is warranted. It's the opposite side of the coin from peoples' reactions to those prop 65 signs.
I almost bought olive oil, then noticed the California warning sticker that it contained lead, and didn’t buy it - I don’t see that on all olive oil. So it does make a difference sometimes
My social circle is in CA. None of us pay any attention whatsoever to prop65 labels. They're about as useful as any other type of product or business labeling: there's so much of it that it's just visual noise that's long-ago been brainfiltered out of existence.
Devil's advocate, it has an effect on some minority of people. Then the company loses sales and has the incentive to stop using the carcinogen if possible.
Your lifetime risk of getting cancer from that thing might have been one in a thousand, so you don't really care, but the company has ten million customers and getting them to change prevents 10,000 cancers.
This is a pretty good alternative to banning the thing. Because if there is a reasonable way to stop using the carcinogen, you don't want to be the company that has the cancer warning when your competitors don't. But if there isn't, maybe the risk is low enough that people make an informed choice to take the risk for the benefit of the thing with no better alternative, and that's fine too.
I absolutely pay attention to Prop 65 when I buy products and will find alternatives. I also try to find out _why_ there's a Prop 65 warning and then decide how much I care (e.g. if an SSD has it, I don't care because I'm handling it so little and it shouldn't be offgassing anything; whereas with food or things I'm always touching, I care very much).
The fact that every coffee shop in California is still open, despite people being warned for years that they sell products that cause cancer. The vast majority of people clearly do not care about the warning.
And what do you think is the benefit of putting unsupported warnings on things? Do you think it's actively beneficial? Do you think it's harmless? If it's beneficial or harmless we might as well go ahead and put a warning label on absolutely everything regardless. Then how do we react to this linked article? We'd ignore it.
If you're the one who wants warning labels on things that don't need warning about then you justify that position!
It's not clear to me if you're arguing just about warnings on coffee or if you're arguing that all warning labels are useless.
>The fact that every coffee shop in California is still open, despite people being warned for years that they sell products that cause cancer. The vast majority of people clearly do not care about the warning.
Or they care, but have balanced the risks vs. their enjoyment of coffee. But they may see a warning on, for example, olive oil which contains lead, and decide to buy another product.
>putting unsupported warnings on things?
What do you mean by unsupported here? As in, not supported by science? Or by the people? Because I'm pretty sure it's well supported by science that certain products are carcinogenic and that consuming them, unsurprisingly, isn't very good. We can argue about what thresholds constitute a tangible risk, for sure, but either way the fact that some things cause cancer is surely considered "supported".
>that don't need warning about
Same question -- just referring to coffee or all labels on everything? I agree with this if you're just referring to coffee, but there are certainly labels that I do pay attention to and consider a warning useful.
I think there's a happy middle ground here. If my favorite juice has lead, I want to know. If my favorite coffee shop has a 1 in 10,000,000 chance of causing cancer, I probably don't need the warning each day.
The problem I see with these labels is they lack specificity. A sticker on the visor in a new car says this vehicle contains chemicals that cause cancer and/or birth defects. I know the paint does, as do all the fluids.
What about the steering wheel and the arm rests?
My pen doesn't have a warning, is that because the manufacturer chooses to consider exposure through skin contact only, but chewing on it is actually a sizeable risk?
I think the reality is we have so many terrible chemicals all around us that it feels like an over reaction, when it's actually the exact opposite -- manufacturers have made many a deal with the devil.
In many cases it's natural risk, not the products at all.
Everything has some amount of lead back from the days when it was used recklessly. Everything has some amount of mercury that's still going up smokestacks. (Now we catch most of it--not all of it!) Plants pick up some arsenic from the soil--for medical reasons I eat a lot of rice and it's enough of an issue I make sure to buy rice grown in low-arsenic areas.
Lots of comments here opposed to Martin Fowler's advice. I'm curious if anyone who is anti-Fowler has software design books they _do_ recommend? (Asking because I'd love to read them)
Since the problem usually isn't with the software design advice, but "people read software design books and then try to force whatever they read into their projects", I guess what would make a book "good" for that is "the author isn't popular enough to cause something to be trendy". IMHO Fowlers writing is just fine as "these are things people have done and you might consider" (and what I've read mostly seemed to be written that way, not overly pushy), but its so popular that if he writes about a new thing, too many people then jump onto it as the next big thing, if it matches their problems or not, and that gets painful to work with. Although for Fowler to write about something, I think it already has some trendiness going on and he's not usually on the forefront of new ideas?
People like Martin Fowler or Robert Martin seem to be more famous for the books that they write about software instead of being famous for the software that they write. I don't think it should be possible for somebody to become an authority on software design without them showing their designs to the world. If they work exclusively on proprietary software, then they should provide some other kind of proof that the recommendations that they make are beneficial and demonstrate how helpful they are. E.g. anonymous metrics from their client pool, detailed analyses of various OSS projects, etc.
When e.g. John Carmack talks about a technical topic there's a very long and very public record of the kind of experience that he has. It's reasonable to trust that they are correct, although one should still verify before betting their project/company/career on that piece of advice.
The people that don't like him don't enjoy any books. They just enjoy "getting stuff done." That's really what I find when talking to people about why they don't like concepts that he introduces. They have no alternative, they just don't like people who come across as dogmatic.
Given that this is Google, my money is on them either abandoning it, or making a new version every 2-3 years (for no discernable reason) making no actual progress as a result.
I would LOVE to be proven wrong, but Google's track record with bringing products to market, and actually keeping them for longer than a media boost, is downright depressing.
At some point it feels like every Google side project is just a media buy for their real business: hiring engineers to drive more ad revenue, engineers who are enticed by shiny projects like these.
It’s not depressing, it’s wonderful. Google’s abject failure in almost every area is great. They have nearly unlimited resources and a fantastic (though but as great as they think) engineering pool. It would suck if they managed to successfully attack adjacencies.
I have a hard time calling immense wastes of resources "wonderful". Think of how many man-hours of at least better-than-average engineers have been wasted developing something like 5 separate chat apps... it's absurd.