Hacker News | whitefish's comments

Real estate agents (in the USA at least)


As someone who recently sold FSBO, this rings true to me. Real Estate agents provide an entirely 100% superfluous service, especially in relation to the absurd amount of commission they charge. The important aspects of selling a home are handled by inspectors, appraisers, and lawyers. Real estate agents are basically leeching middlemen who keep as many people out as possible to protect their perceived monopoly, and sadly most of us are forced to use them against our will: despite it being explicitly laid out as unethical in their own guidelines, real estate agents will only work with other agents and skip over homes that are FSBO. The only other equivalent I have found is financial advisors.

* A little caveat: I am a little bit bitter about the process and the "locking out" I faced. One real estate agent even told me this up front, to which I responded, "Dude, you know I could report you for saying this and you could lose your license, right?" But alas, I just shook my head and carried on.


> real estate agents will only work with other agents and skip over homes that are FSBO

Is it typical in the US for a buyer to have an estate agent too? If so, why? If not, I don't understand what this means.

Where I'm from, typically only the seller would have an estate agent.


I just purchased a home. Basically, as a buyer, the buyer's agent is paid out of the seller's end of the sale. The fee is really paid to the seller's agent, but the seller pays in the end.

So... I can choose not to have a buyer's agent and still pay for one anyway, or I can have an agent. The only advantage of having an agent is that getting showings is a little easier and you spend less time on the phone.

Houses have showings on Saturday/Sunday and offers are in on Monday in hot markets. There is no time to negotiate a lower price in exchange for not using a buyer's agent. If you have a complicated offer it might not get selected. People are waiving inspection and financing contingencies in some markets.

I think it more or less comes down to buyers and sellers being kind of stupid. Heck... just about everyone I dealt with throughout the process was kind of stupid. The lender (at least the one I finally ended up with) and home inspector were both competent. Everyone else demonstrated they had no idea how to do their job.

I walked into so many houses with rat poop and dead insects everywhere. Houses were priced and bid on with only a weak correlation between the location and quality of the house and what people offered. It's obvious that neither the buyers nor the sellers knew whether what they were looking at was any good, and major deficiencies were not being priced in. A house could be right next to an active commuter rail line (not a stop, just the track) and it wouldn't be reflected in the price!


"Real Estate agents provide an entirely 100% superfluous service".

While I definitely agree that the amount of money realtors get for selling a home is ridiculously high, the above statement is definitely NOT true (and I've sold several homes through a realtor).

Realty is often a riskier business than people think. People tend to focus only on the houses that sell quickly, not the ones that sit for a long time. A realtor can invest many hours in selling a home and get nothing if it doesn't sell. I had a home on the market for more than a year (it was a fairly new, nice home at a reasonable price; it was just a bad time to sell). We gave up, and if it had not been for a deal that came up a short while later, my realtor would have gotten nothing for that year's worth of effort. (I had negotiated commission with her originally, because she was selling the home I was originally interested in, and I re-negotiated commission on the second deal.)

Again, I'm not saying that the commissions aren't too high -- they are. And it is a lucrative business if you want to spend all of your weekends and evenings showing houses. Personally I think it's a good short-term career for young people who are single. Money isn't everything, and I would not even consider doing the job as a married person.

Also, not all realtors are created equal. There are a lot of them who put as little effort as possible into selling your home and just hope that realtor.com listings will find them a buyer. However, the best realtors do put work into selling your home, and their network and ability to find buyers is of value. I just don't think it is worth what they charge for it. But then again, given the hours/days they have to work, I guess I'd want to be paid very well if I were doing it.

In some ways it is a bit like doctors (in the U.S., that is): it takes a long time to get through med school, and many come out up to their necks in debt, so by the time they are done with residency they feel they have earned the high fees they get paid (and/or they simply need them to get out of debt and finally become profitable). Of course the medical profession is a complicated mess, so there's more to it than that.


I'm not saying that there isn't room in America's big economy for that, but the topic was "Which professions are paid too much given their value to society" -- real estate agent is an easy, immediate answer to that question. The essentials of the home buying process are provided by the lawyers, inspectors, and appraisers.


Two anecdotes:

Buying a house came with the biggest list of "fees" I've ever seen in my life. I felt nickel-and-dimed, with random $50-500 fees tacked onto every part of the transaction. My mom just sold a condo as well; same thing, with tons of superfluous random fees throughout the process on that side of the transaction.

A friend of mine (a really nice guy, so I don't hold this against him at all) was kind of floundering in his career until his early 30s. He started as a real estate agent out of the blue in the Seattle market a few years back and is now making 2-3x more than most of us. I'm happy to see him successful, but I also take it as an example of the job having a pretty low barrier to entry for making a lot of money.


Not just the USA, 99.99% of them are leeches. Head hunters are in the same category.


If you want to go with MVC use this: https://github.com/Rajeev-K/mvc-router


This will lower salaries in India for IT professionals. As a result outsourcing to India will become more attractive.


That ship has done sailed. American firms have learned what makes sense to offshore and what doesn't. Hence the carnage.


American firms learned that once they have thousands of resources in India it's more cost effective to just hire them as direct employees instead of going through an outsourcing company. In the long run this probably works out better for everyone, except the owners of the outsourcing companies.


>it's more cost effective to just hire them as direct employees

It also makes it easier to bring them to the US later (after 1 year) on an L1, without the limitations of the H1 like prevailing wage and the yearly cap; the spouse of an L1 holder can work, and the GC process seems to be easier (at least that seemed to be the case 10 years ago, when I last paid attention to immigration).


Well, German firms aren't there yet, in my opinion. They pitch offshoring as the opportunity to lower costs by 90% while preserving the same quality. In my company, the last "transformation" talk made it clear that no department will be left untouched in the next round of offshoring and nearshoring.


Which German company? That sounds stupid. I don't think any company has profited from that over the long term, ever. Management must be stupid to think 90% of costs can be cut... if anything, costs may go up.


All the German companies that belong to Fortune 500.


P.T. Barnum citing short-cycle natality statistics on line 2.


I am not that old, but I remember reading a headline like this on Slashdot once... The results have been less than stellar. When my non-tech enabled friends complain about tech support, it's usually with a fake Indian accent. I think the "IT professional" in India has hurt quite a few off-shoring companies.


> When my non-tech enabled friends complain about tech support, it's usually with a fake Indian accent.

Find some new, better friends.


Should hospitals such as the UK's NHS and other such organizations use dumb terminals (or Chromebooks) instead of Windows? That way data is centralized on servers, where it is easy to back up and harder for hackers to hold for ransom.


If they had used web applications instead of desktop Win32 applications, none of this would have happened.

Servers are much less vulnerable for a number of reasons:

1) The people managing and configuring them are more security-conscious than the vast majority of people. Come on, nobody downloads an email attachment or plugs a USB stick they found in the parking lot into a server.

2) It's much cheaper to keep a server updated than a thousand Windows clients.

3) Like whitefish pointed out, even in the worst case scenario you can restore a backup and keep on truckin'.


It'd be a good start if they just didn't use Windows.

But yeah, definitely. It's pretty damned unlikely that an OpenBSD backup server would get wormed, unless an ME exploit is involved.


Let's be clear on this. No matter how secure the operating system is initially, if it stays unpatched then over time it will become more and more vulnerable as uncovered exploits go unfixed.

The reason a machine might go unpatched is that it might support some critical hardware (e.g. medical) for which there are only one or two vendors and only a particular combination of HW and SW is supported (e.g. due to a specific custom hardware driver).

To lay the blame for this at a single vendor's feet is naive.


True, but I'm sure there are a lot of cases where the OS wasn't updated because of the necessary investment to jump to a new Windows version.


There are very few free/open-source operating systems that get security patches for as long as Windows does.

Major versions of OpenBSD are only supported for 5-6 years. Most Linux distributions only get 3-5 years. Red Hat promises 10 years of support, the same as Windows 7/8/10. None comes close to the 13 years that Windows XP was supported for.

So you're gonna have to update anyway, at roughly the same interval if not more often, as if you had used an enterprise edition of Windows.


> Major versions of OpenBSD are only supported for 5-6 years.

I thought that security updates are only made for -current, the current stable release, and the previous stable release. So, 1 year of support, not 5-6.

A cursory look at the errata seems to confirm this.


Most of the time, upgrading from one minor version to the next is painless. If you installed OpenBSD 5.0, you are expected to keep updating all the way to 5.9. (For some reason, OpenBSD always makes exactly 9 minor versions for each major version.)

Most Linux distros don't even make any fuss about minor versions, using them only as an opportunity to build fresh installation images. New minor versions are security patches for the major version and all previous minor versions.


> It'd be a good start if they just didn't use Windows.

I hear tell that, server-wise, NHS IT will also support OpenSUSE, and their record of keeping that patched is almost as good as their record for doing so with Windows.


Yes, they really should. Important facilities should not use Windows anymore, because it is too exposed to withstand an attack. Or the hospital's computers should not be connected to the internet; most of the time the computers within a hospital are just doing local tasks.


Maybe they should not have connected all of the computers across the country into a single network.


Maybe they should have kept their systems up to date instead of running XP.


This affected all versions of Windows, not just XP. You're right about the updates though.


It's not actually like that. They have a heavily restricted backbone and lots of little isolated networks hanging off it. This is lots of independent cases of idiocy causing infection.

Poor policy controls, poor patching and poor user education are the root causes of the NHS problems.


IMHO, the best approach is to use a (hypothetical) system where all apps are sandboxed by default.


Routers can and should be independent of view technology. Here's one that works well with React but is independent of React:

https://github.com/Rajeev-K/mvc-router

The MVC model is well-understood, and does not require Redux. MVC does not imply two-way data binding, btw.


Nice! Thanks for commenting. The reason I tied the two together for React is that it saves quite a bit of boilerplate to let the router pass on the application service, rather than having to manually add the service to each page individually.

However, the route and router are not necessary for the rest of the library to work, and the rest of the library can be used as a plugin with a different router.

In terms of DDD principles - if you have decided you might need CQRS for your particular case, you might not want to spend a large amount of time setting up the routing/CQRS framework and you might want to spend all of your time on your actual business logic instead.

I am currently using this in a few of my projects and it's proving quite neat, although I may still need to add some details. I hope others can find it useful - if not, then at least this was a worthwhile experiment.


Who here is old enough to remember Larry Ellison's Network Computer? Chromebook is the realization of Larry Ellison's vision.

Back in the mid 90's Larry Ellison said, "we need computers that do less, not more". He was widely ridiculed at that time. Now we know he was right.


Larry Ellison didn't invent this; terminals and "thin computing" have been around for a long time. It's just that browsers are now the best way to deliver rich interfaces.

The origins of VNC are an AT&T project where people would wear RFID tags, walk up to any workstation, and have their session follow them. This was long before Ellison was talking about Network Computers. There is also Sun's John Gage and his motto, "the network is the computer."


I don't think it's given that he's right or wrong. Apple has chosen a very different path (powerful devices with local apps) and they're also extremely successful.

The real lesson, I think, is that most people don't want to _manage_ devices. They don't want to deal with backups, they don't want to manage disks, filesystems and drivers, and if you can take all of that away they'll be happy to store all their documents in your cloud.


For home and personal devices, Apple's approach is fine. But for business and education, central management is essential.


>But for business and education, central management is essential.

For business, it depends.

I think my company is fairly typical in that we offer fully managed systems but don't require that people go that route. (But you're more or less on your own for support if you choose to self-manage.)


Those who think React + ReactRouter + Redux is too complex -- you're right. But there is an easier way to use React: MVC. React is the V in MVC.

https://github.com/Rajeev-K/mvc-router

Note that using MVC does not imply 2-way binding!
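
Roughly the shape of it (this is just a sketch of the pattern, not mvc-router's actual API; the controller and view names here are made up):

    // The controller owns the state and the logic; React only renders the view.
    class CounterController {
      constructor(mountPoint) {
        this.mountPoint = mountPoint;
        this.count = 0;
      }
      increment() {
        this.count++;
        this.render();
      }
      render() {
        ReactDOM.render(
          <CounterView count={this.count} onIncrement={() => this.increment()} />,
          this.mountPoint);
      }
    }

    // The view is a plain stateless component -- the V in MVC.
    const CounterView = ({ count, onIncrement }) =>
      <button onClick={onIncrement}>Clicked {count} times</button>;

    new CounterController(document.getElementById('root')).render();

A router's only remaining job is mapping URLs to controllers like this one; no store, no reducers, no two-way binding.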


React is very usable without Redux. I'm not sure how they became bound so tightly together.

I keep seeing this pattern of Redux as a page-level data store: on each page load you pull your data from REST APIs and put it in that page's part of the data store, to be modified by that page's reducers, triggered by that page's action creators. Then it's all hooked up to a single page-level connected component which passes all the state down to other components as props anyway, making it functionally identical to just using React state in that page component.

The justification for this is usually either "Now you can make your page components pure functional components!" (whooptee-do) or "Redux scales better" (citation needed). Pretty thin.


I agree that "only use functional components" is a heavily over-opinionated approach (I actually tweeted a counter-statement to that recently: https://twitter.com/acemarke/status/855192917727735808 ).

There _are_ definitely a number of benefits to using Redux in a React app. I actually co-authored an article that discusses some of them: https://www.fullstackreact.com/articles/redux-with-mark-erik... . TL;DR: time-travel debugging and better hot reloading for development, easier management of data that needs to be used in multiple places throughout the component tree, and all the niceties of having centralized state (logging, state persistence, issue reporting, etc).


I'm beginning to think there'd be a way to make Redux a helluva lot clearer by sticking everything in a class with a well-defined interface (but: ewwwww, classes, impure, burn the heretic!). Keep all the definitions for an actioncreator, event (action), and its reducer in one place. Hand the whole class to Redux and forget about it. Can compose actions/reducers inside those classes if you want/need to, so no loss there.
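
Something like this is what I have in mind (just a sketch of the idea, not anything Redux ships with; the class and its wiring are made up):

    const INCREMENT = 'counter/INCREMENT';

    // Action type, action creator and reducer for one concern, all in one place.
    class CounterDuck {
      increment(amount = 1) {                        // action creator
        return { type: INCREMENT, amount };
      }
      reduce(state = { count: 0 }, action) {         // reducer
        switch (action.type) {
          case INCREMENT:
            return { ...state, count: state.count + action.amount };
          default:
            return state;
        }
      }
    }

    // "Hand the whole class to Redux and forget about it":
    const counter = new CounterDuck();
    const store = Redux.createStore((state, action) => counter.reduce(state, action));
    store.dispatch(counter.increment(2));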

The pile o' reducers thing just isn't a very useful way to organize code, IMO, and having everything split over several files makes following what's happening a PITA (especially without something like TypeScript to let your tools give you the information you need, rather than having to keep it in your head or go look it up manually).

Redux feels... not over-engineered, exactly, but maybe mis-engineered.

... or we could just cut out the middleman and go all Actor model on this problem. Just sayin'. (yes, I know there are actor-model libs for React out there, but frankly the churn-related breakage and confusion in the most popular tools is so bad I'm afraid to step outside the mainstream, where it's probably even worse—plus we don't get to choose our own libs/patterns all of the time)


>I'm beginning to think there'd be a way to make Redux a helluva lot clearer by sticking everything in a class with a well-defined interface (but: ewwwww, classes, impure, burn the heretic!).

I thought the same thing. That's why I started using VueJS with Vuex. Vuex accomplishes the same things as redux in a much more manageable and centralized manner. Plus, you don't have to worry about connecting your components to the store through `react-redux` or whatever. With Vue and Vuex, you just pass the store object to the root view component and it's available in every single child component via `this.$store`. You can then `dispatch` actions which perform `commits` which call `mutations` which update the state. Then, in your component you create a computed property that uses a `getter` to return the piece of application state you want. It's a really simple top down data flow and everything is namespaced. It's so absurdly simple and easy that I don't understand why redux doesn't take a page from vuex's book.
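
The whole flow fits in a few lines (a bare-bones sketch; the store contents and names are made up, and it assumes Vuex is installed via Vue.use(Vuex) or the standalone build):

    const store = new Vuex.Store({
      state: { todos: [] },
      mutations: {
        ADD_TODO(state, todo) { state.todos.push(todo); }   // only mutations touch state
      },
      actions: {
        addTodo({ commit }, text) { commit('ADD_TODO', { text, done: false }); }
      },
      getters: {
        todoCount: state => state.todos.length
      }
    });

    new Vue({
      el: '#app',
      store,                       // every child component now sees this.$store
      computed: {
        todoCount() { return this.$store.getters.todoCount; }
      },
      methods: {
        add(text) { this.$store.dispatch('addTodo', text); }
      }
    });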

I really should write a Vue + Vuex tutorial for beginners as I'm super happy with the way it works.


You don't even need a class, but either way, this is essentially how Elm works.

There are very different semantics at work there, though. Elm is fractal, Redux is not. Each has tradeoffs. Some stuff gets simpler, some stuff gets harder (the 1:1:1 scenario, where you have a component, an action, and a reducer to achieve one thing, gets easier; the N:N:N scenario, where a reducer can handle things from all over the system, gets harder).

It's not mis-engineered, it's just optimized to make a different set of problems easier at the cost of making others harder.


There's already a whole bunch of "define Redux action creators and reducers as classes" libraries out there. I've added the ones I've seen to my Redux addons catalog, in the "Variations" category: https://github.com/markerikson/redux-ecosystem-links/blob/ma... (which is where I list stuff that builds on Redux, but takes it in a "non-idiomatic" direction).

There's definitely several different schools of thought about how to organize and structure Redux-based code. Dan is a big fan of the "small independent slice reducers responding separately to the same action" approach. Others prefer to see all possible state changes for a given action in one place. And yes, while the intended use of Redux is based on functional programming, there's also those who prefer putting OOP layers on top.

I'm actually working on a blog post that will try to clarify and discuss what actual technical limitations Redux requires of you, vs how you are _encouraged_ to use Redux, vs how it's _possible_ to use Redux. Been busy with other stuff, but hoping to make progress on that post this week. If you're interested, keep an eye on my blog at http://blog.isquaredsoftware.com .

You may also be interested in an issue I recently opened to discuss possible future improvements and "ease-of-use" layers that could be built on top of the Redux core: https://github.com/reactjs/redux/issues/2295 .

Finally, I'm always happy to chat about Redux (and React) over in the Reactiflux chat channels on Discord. The invite link is at https://www.reactiflux.com . Feel free to drop by and ping me.


> React is very usable without Redux. I'm not sure how they became bound so tightly together

Because that's what easily-excited developers see on HN and Twitter, so they think they have to use it.


Without ragging on developers who do this, I actually think it's a very serious problem. I frequent the #react IRC channel a lot, and almost every beginner comes in with the same sentence: "I really like the way ___ does ___, where do I start?" (or some variation of it).

If you're using words like "like" or "cool" to describe your latest dependency, that's a serious red flag. These libraries exist to solve problems; if you can't describe the problem you're trying to solve with one of them, then you shouldn't be using it.


I don't understand Redux, and I have almost 20 years of coding experience (I probably suck at it, though). I once asked a developer at an interview if he could explain the Redux code he had used in an assignment. He couldn't, and I really tried to understand all the boilerplate and inner design of the library, but I found it really hard to get into. MobX, though: 2 minutes and you get it, AND you get more efficient at building complex UIs.


You started off on the wrong foot. Learning from someone who cannot explain his own code is not the right context in which to learn any tech.

Redux is a modern implementation of CQRS/event sourcing, a design pattern that existed before you even started coding. Many developers, in different languages, have learned it and used it (if you want to implement an undo stack, for instance, there are not a lot of other solutions around). Trust yourself and your intelligence. Our devs learn it in about 3 hours with our training program and get a return on the investment in about a week.

MobX is nice, and if it fits you, go for it. If you are a hands-on kind of person, you may discover the limits of MobX by using it, and from the trenches finally understand why Redux exists and why it needs three objects (a store, a reducer, an action).


I'd be happy to answer any questions you might have about Redux if you'd like. The Reactiflux chat channels on Discord are a great place to hang out, ask questions, and learn. The invite link is at https://www.reactiflux.com . Feel free to drop by and ping me.

Also, I keep a big list of links to high-quality tutorials and articles on React, Redux, and related topics, at https://github.com/markerikson/react-redux-links . Specifically intended to be a great starting point for anyone trying to learn the ecosystem, as well as a solid source of good info on more advanced topics. It includes links for learning core Javascript (ES5), modern Javascript (ES6+), React, and much more. I also published an "Intro to React (and Redux)" presentation at http://blog.isquaredsoftware.com/2017/02/presentation-react-... , which is a good overview of the basic concepts for both React and Redux.


I learned the unidirectional data flow pattern with Flux before Redux, and Redux makes Flux look like War & Peace! :P

The no-frills way I like to think about Redux is that your rendered page is the result of a function operating on a state object. Changes to the state object trigger re-renders. There are obviously some subtleties in there and complex use cases, but that's the basic gist.
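
In code form, the gist is just this (a contrived sketch using the plain Redux store, no React involved):

    const { createStore } = Redux;

    // (state, action) => nextState
    function counter(state = { count: 0 }, action) {
      switch (action.type) {
        case 'INCREMENT': return { count: state.count + 1 };
        default:          return state;
      }
    }

    const store = createStore(counter);

    // The rendered page is a function of the state object...
    const render = () => {
      document.body.textContent = 'Count: ' + store.getState().count;
    };

    // ...and changes to the state object trigger re-renders.
    store.subscribe(render);
    render();
    store.dispatch({ type: 'INCREMENT' });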


It's two event/message dispatchers slapped together. One for "reducers" that take an event/message and apply it to state, one that automagically applies the resulting state-change event to do stuff to your views. You mostly don't need to worry about the second one.

As far as I can tell, that's it.

For bad reasons they've decided to stick with the terrible "action" name for their events/messages, which has made the whole thing super confusing (turning "actionCreator" into "eventCreator" immediately makes things much clearer, for instance).

There's also a ton of convention/process taught on top of it for some reason that's IMO not that great, and makes it really hard to see what's actually part of Redux and what's cruft on top of it that you can skip/modify. Redux-as-typically-presented is mostly you doing stuff to follow a (kinda painful) pattern, not the Redux library helping you do stuff.

[EDIT] I'd add that the communication pattern of the docs and various attempts to help people understand Redux seems to largely be "oh, you didn't get it? Let me say the same thing again but louder". Which is why there's SO MUCH documentation and chatter for something fairly simple, I think. Which just compounds the problem.


A lot of the naming of stuff goes back to Redux's Flux heritage. The Flux architecture labeled those objects as "actions", so Redux (since it was intended as a "Flux implementation") kept that naming.

You are right that the majority of usage is really at the user level, than the library level. I'm actually working on a blog post that will try to clarify and discuss what actual technical limitations Redux requires of you, vs how you are _encouraged_ to use Redux, vs how it's _possible_ to use Redux. Been busy with other stuff, but hoping to make progress on that post this week. If you're interested, keep an eye on my blog at http://blog.isquaredsoftware.com .

If you have concerns with the docs, I'd appreciate any specific suggestions or ideas you might have for improving them. Docs issues and PRs are absolutely welcome, from you or anyone else who wants to help improve them.

Finally, I recently opened up an issue to discuss possible future improvements and "ease-of-use" layers that could be built on top of the Redux core: https://github.com/reactjs/redux/issues/2295 . Would be happy for any feedback you could offer.

(edit: just noted I replied to you a couple different times in this thread, and repeated myself a bit. Offer of discussion absolutely still stands :) )


This is my experience as well. The biggest struggle I have with Redux is understanding where things belong. Where do AJAX calls go? Should I use redux-thunk? That feels like an anti-pattern to me. redux-saga is interesting, but it's going to be really tough for unfamiliar devs to understand generators with infinite loops. I also don't feel like writing an infinite loop every time I want to perform a simple fetch. With MobX, shared code goes in a class, it doesn't matter whether it's async or not, and observable properties get a decorator.
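
For comparison, the MobX class version looks roughly like this (a sketch only; it assumes decorator syntax is transpiled, and the endpoint is made up):

    import { observable, computed, action } from 'mobx';

    class TodoStore {
      @observable todos = [];
      @observable isLoading = false;

      @computed get todoCount() { return this.todos.length; }

      @action async fetchTodos() {          // async or not, it doesn't matter
        this.isLoading = true;
        const res = await fetch('/api/todos');
        this.setTodos(await res.json());
      }

      @action setTodos(todos) {             // state changes go through actions
        this.todos = todos;
        this.isLoading = false;
      }
    }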


Dan wrote a couple great answers on SO that discuss why async behavior in Redux is normally done via middleware such as `redux-thunk` and `redux-saga` - see http://stackoverflow.com/a/35415559/62937 and http://stackoverflow.com/a/34599594/62937 .

As a short version: you _can_ put your async logic right into components, but it's nicer to move that logic outside components for reusability. Middleware have access to `dispatch` and `getState`, so they act as a loophole that enables you to perform async work and then interact with the store.

My own take is that `redux-thunk` is sort of the "bare minimum" approach to async behavior, as it allows you to do stuff with promises and async functions, or complex synchronous logic. Sagas are useful for complex async workflows, and there's also some popular observable-based side effects addons as well. Ultimately, the approach you use is up to you.
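
For reference, the whole "thunk" trick is just an action creator that returns a function instead of an object (a sketch; the endpoint and action types are made up, and the store must have been created with applyMiddleware(thunk)):

    // The returned function gets dispatch and getState from the middleware.
    function fetchUser(id) {
      return async (dispatch, getState) => {
        dispatch({ type: 'USER_FETCH_STARTED', id });
        try {
          const res = await fetch(`/api/users/${id}`);
          dispatch({ type: 'USER_FETCH_SUCCEEDED', user: await res.json() });
        } catch (err) {
          dispatch({ type: 'USER_FETCH_FAILED', error: err.message });
        }
      };
    }

    store.dispatch(fetchUser(42));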

If you'd like more info on what good Redux code structure looks like, you may want to read through the "Redux Architecture" and "Redux Techniques" section of my React/Redux links list at https://github.com/markerikson/react-redux-links/blob/master... . Also happy to answer any questions you might have.


Or you could just cut out the middle steps and make the switch all the way over to Elm ;)


I recently completed two projects, one in Elm and one in React. Elm was a lot more fun. JSX in particular felt very unwieldy compared to the Html package in Elm. Being able to write everything in a neat functional language was a nice experience for me.


I'd like to see React support shadow DOM and web components. Not holding my breath, however, since Facebook considers web components to be a "competing technology".

Unlike real web components, React components are brittle, since React does not have the equivalent of shadow DOM.


Nothing stops you from using web components in React. There's even standalone (https://github.com/Wildhoney/Standalone) which allows you to transform React components into common web components.

But modelling React as web components makes about zero sense. The spec started many years ago and it is utterly outdated and useless. Web components aren't even components, they are pluggable templates. They expose all the problems and issues that React has already solved. Web components are dependent on vendor policies and specs yet do a fraction of what a React component does, while React is already out there serving apps to mobile, desktops, shell consoles, watches and so on -- because it is what web components should have been.


You can use web components in React just fine if that's your cup of tea.

https://facebook.github.io/react/docs/web-components.html

There's also more opinionated integrations like https://www.npmjs.com/package/skatejs-react-integration.

And of course you can do it the other way around too.


The problem when you use web components in JSX is that there is no way to wire up custom events fired by the web component. JSX will only wire up "known" events.


That's just when you use custom elements in _React_.

I've definitely wired up JSX createElement implementations that handle attributes and events properly. My preference is for Polymer-style naming: `on-*` for event handlers, `$` suffix for attributes and everything else is a property. Others who've done this use special `attributes` and `events` properties.
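
The factory for that is only a few lines (a sketch, not any particular library; it only handles string tags and uses the prop conventions described above):

    /** @jsx h */
    function h(tag, props, ...children) {
      const el = document.createElement(tag);
      Object.entries(props || {}).forEach(([key, value]) => {
        if (key.startsWith('on-')) {
          el.addEventListener(key.slice(3), value);   // <x-dialog on-close={...}>
        } else if (key.endsWith('$')) {
          el.setAttribute(key.slice(0, -1), value);   // <x-dialog label$="Hi">
        } else {
          el[key] = value;                            // everything else is a property
        }
      });
      children.flat().forEach(child =>
        el.append(child instanceof Node ? child : String(child)));
      return el;
    }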


As soon as Shadow DOM v1 is implemented and enabled in stable Chrome, Safari and Firefox, I am pretty sure it will start a new trend. And React might be perceived like jQuery, Ember, and Angular 1: a fad that has been superseded by new native browser capabilities. KISS.

https://en.wikipedia.org/wiki/Web_Components


It won't. Web components are simple encapsulation sugar. Apps still have no means for dynamic structure, while all the terrible templating pitfalls that frameworks like Angular brought are still present. React is a real solution. It isn't even a question or an "if" any longer. React has taken over, or rather, its principles have. They're clean and concise, they don't twist a single standard, and they do it all without breaking a sweat.

React has grown so powerful it isn't even just about the browser any longer. It runs everywhere. The browser has finally become a dumb pipe, something it should always have been. Web components are trying to reverse that, but you'd be ignoring innovation if you fell for it.


You can use JSX for web components too, see: https://github.com/wisercoder/uibuilder . There is no need for a complex framework like React. Web components + JSX is better than React. Why use a proprietary framework with patent issues when you could be using a standards-based solution instead?


React is not complex; it solves complex problems, elegantly. Web components make things worse than they already are. By catering to the DOM they fragment HTML even further. Now you rely on attributes for "if", "each", "of", "loop" and so on. It literally forces frameworks to ship template parsers and script engines that duplicate and fight against JavaScript, each with their own syntax. Without frameworks like Polymer you only get encapsulation, and that's utterly pointless.

The extensible web is about low-level access. Web components go against everything that stands for. They're again pushing a bad vision that was decided on almost 10 years ago by people sitting in a closed room trying to dictate how we write apps, resulting in a useless spec that adds complexity and weight. That spec, by the way, took years of pushback from Apple and other vendors that weren't even interested in web apps rivalling their native stores. They eventually gave it the green light because the spec is so tame it won't threaten a native app in a hundred years.

React on the other hand isn't much more than a simple idea:

    const Component = ({ text }) => <span>{text}</span>
    const Another = () => <Component text="hi there" />
We can express UI declaratively and functionally. It solves our problems and has pretty much set the web free. Now there are dozens of frameworks that follow these principles, and it has made it possible for JavaScript to move on to native space, other platforms and devices. We're closer to truly universal apps than ever. Nothing proprietary about it, just technology that has come through on its own merits and evolved into an actual, living standard.

The W3C has the worst kind of track record; don't expect anyone to fall for all this hand-waving about "standards", no one is that naive anymore. If a spec doesn't perform it gets discarded, and by this point we already know web components don't.


No. With 8 bit you had to execute multiple instructions to add two numbers. Same with 16 bit. This problem went away with 32 bit. Adding more bits beyond 32 does not bring proportional benefits because the numbers we deal with fit in 32 bit.


"the numbers we deal with fit in 32 bit"

Except when they don't. Everyone already forgot tweet number 2147483648? :) https://techcrunch.com/2009/06/12/all-hell-may-break-loose-o...


> No. With 8 bit you had to execute multiple instructions to add two numbers. Same with 16 bit.

Wrong (for x86-16 vs. x86-32). Just use an operand-size override prefix (0x66) with your 16-bit real-mode ALU instruction (in this case 'add') to make it a 32-bit ALU instruction. This works from the 80386 on, where the 32-bit registers were introduced.


I agree, most numbers we deal with fit in 32 bits, with the exception of double-precision floating point and indexes into really large data sets. As Moore's law seems to be ending, perhaps there is a sweet spot at 48 bits for both integer and FP.

The one thing I find absurd about RISC-V is the 128-bit variant. Most 64-bit processors today don't even support a full 64-bit virtual address space, do they?


Couldn't you say this about the software industry in general? In the 90's I used to have a Sun workstation on my desk. It ran the powerful Solaris operating system, but had just 16MB of RAM! Today you need 1GB of RAM to run an OS comfortably. My question: what does modern Linux do that Solaris from the 90's did not, that it requires 50x more memory?


I'm not sure if you're serious, but modern Linux does a lot of things that Solaris didn't do back then. A few more data points: my Linux system ran "OK" with 8MB of RAM in 1995, although it started to swap when I ran Emacs, X11, and g++ at the same time (this was ultimately fixed by maxing out the RAM at 32MB).

I have small Linux systems today that work comfortably with 128MB of RAM used.

What my modern Linux system does that Solaris didn't do in the 90s: it runs a browser that can render absurdly detailed scenes using OpenGL where the object model itself exceeds 1GB (it fits nicely on the graphics card, but the computer is also loading it from the net at gigabit speeds and storing another copy in RAM).

That said, I think it's true that modern systems just waste oodles of resources unnecessarily.


The SGI Indy came with just 16 MB RAM, and ran the IRIX graphical environment and also had OpenGL. IRIX had excellent system administration tools.

https://en.wikipedia.org/wiki/SGI_Indy

https://en.wikipedia.org/wiki/IRIX


We had Indys with 32MB of RAM in '97-98. They were basically useless: the OpenGL implementation wasn't that great, and the rest of the system was massively underprovisioned (constant slow disk IO). However, we also had a bunch of better SGIs with 64+MB of RAM and those worked great.


Back when EMACS stood for Eight Megs And Constantly Swapping.

It's salutary to consider that the amount of memory in a reasonably spec'd machine a little over 20 years ago is now a rounding error.

I remember having 80MiB of RAM at around that time: a) my machine flew, and b) plenty of people would ask why I needed so much.


The "exotic" machine at the time was a Digital Alpha (64-bit! In 1995) that had 64MB, IIRC. It did fly. The Solaris machines we used took 20 minutes to boot and another minute to open a shell.


If you set up your Linux distro from scratch you can easily stay under 200MB of RAM. I have an LFS system I set up which idles at 150MB or so. A LOT of what's in modern distributions is bloat.

I recommend everyone try building an LFS (Linux From Scratch) system at least once, or at least set up an Arch Linux box, to get a feel for how Linux actually works and where the bloat is.

As an example, the music player daemon (MPD) has 110 dependencies, among which is Wayland or X.org; I don't know why.

Also, using udev to load only the kernel modules you need helps a lot with memory usage.


Can you recommend any good tools for measuring and plotting overall system memory usage over time? I always thought it would be a fun project to try to strip down a Linux distribution and see how low I could get worst-case memory usage.


You can simply pipe top's output to a file using something like cron. There is a very good tool for boot times, though, integrated into systemd: call it with `systemd-analyze`. It has a lot of subcommands, which you can list with `systemd-analyze --help`.

Also, if you'd like to compare performance degradation over time, NixOS is a good choice to run because it can simply roll back to old configurations, so you can see what changed.


If you could give me a brief list of what features you expect from such a tool I'll be happy to take it up as a project and release it.


Don't reinvent collectd :)


Thanks for the recommendation. Exactly what the parent commenter was asking for.


Yup, thanks, collectd seems to be exactly what I was looking for.


Well, I don't know, could your Solaris (mostly) seamlessly connect and disconnect to wireless networks? I don't even think there were that many wireless networks in the world back then :)

Anyway, this is just 1 contrived example of something modern OSs do, and that OSs from the 90's didn't do.

Sure, there's some bloat, but a lot of it is the "Mozilla kind": "Mozilla is big not because it is full of useless crap. It is big because your needs are big".


> could your Solaris (mostly) seamlessly connect and disconnect to wireless networks? I don't even think there were that many wireless networks in the world back then :)

Let alone USB. During my training there were lots of desktops running Windows NT 4.0, which did not know about USB. By that time I had already become so used to copying data back and forth with a USB thumb drive (even though back then their capacity was still measured in megabytes) that it became fairly annoying to walk up to some computer, plug in the thumb drive, then realize the machine was running NT 4.0, not Windows 2000, curse, and look for another way to get some piece of data onto it: usually putting it on some other machine and copying it over the network.

Ah, those were the days... ;-)


> Anyway, this is just 1 contrived example of something modern OSs do, and that OSs from the 90's didn't do.

This doesn't add anything, because contrived or not, it doesn't answer the question that was posed: "What does modern Linux do that Solaris from the 90's did not, that it requires 50x more memory?"

It hits the first part of the question, but not the second part. We could add wifi to a 90s OS without severely inflating the memory requirements. My Nintendo DS (with 4MB RAM) "seamlessly connects and disconnects to wireless networks", after all.

You could easily answer the question with a sensible, non-contrived answer, and I'm not sure why you didn't.


Isn't a significant chunk of OS memory usage caused by the desktop UI? If we're talking about servers, I'm guessing it would be because of drivers and more built-in functionality, some of which is rarely used.


Yeah — the image buffers (screen resolutions) are bigger, we run more apps, more advanced desktop environments… also in the old days (before OS X, Compiz (fun with desktop cubes!) and Vista Aero) people used to run without compositing, which meant one common image for all apps to draw into.

Just booted my laptop (FreeBSD -current amd64, 1366x768 display) and started X: only 204M "Active" memory. Of the 204M: 60M is syncthing, 39M compton, 31M Xorg, 11M i3bar, 11M polkitd, 10M i3, 9M dunst, 5M wpa-supplicant… Not counting syncthing that's 144 megabytes. I think that looks reasonable. You can go lower and optimize for low memory (no polkit, no compton — 94M) or go higher and optimize for usability and fancy UI features (install gnome :D)


If it takes 1008MB to seamlessly connect to a wireless network, something is clearly wrong.


> 1 contrived example


I often wonder the same. In '90 I helped build a primitive web browser, including an interpretive language. Used it to instrument a couple nuclear power plants. Displays were graphically rich, and had to load in < 1 sec and display real-time plant data. I don't ever recall waiting for my IDE (emacs) to start, or even waiting very long for a full build to complete.

When we have less, we make due with less. When we have more, we consume more. It's sort of like money.


Forgive the nitpick, but it is "make do", if only because "make due" reads very awkwardly in British English, where "due" is pronounced as if it were spelled with a U and an E, not as if it were spelled with two O's.


I learned something grammatical today.


Oh noes. You need lots more to run OS X El Capitan, as I found out after acquiring a second-hand iMac. It was unusable at 1GB.

