> “There were pictures of huge mountains of ‘earth apples’,” she recalled, using the word Erdäpfel, an affectionate term for the potato sometimes used by Berliners
Fun fact: the Hebrew translation of "potato", תפוח אדמה, is a compound of "apple" (תפוח) and "earth" (אדמה).
If you should ever be so fortunate as to have too many potatoes, see if you can shred them with a food processor and combine with onion, egg, salt, and pepper to make potato kugel, which freezes exceptionally well.
Erdäpfel is used in many dialects and has plenty of variants.
Actually, the various words for "potato" and their distribution across Germany, Switzerland, and Austria are linguistically quite interesting (see this map [1]).
The legend is in German and roughly translates to (from top to bottom):
I suppose this "earth apple" formulation coming up in several languages is partly because potatoes are from the New World, and Old World languages won't have a "traditional" word for them. Whereas in English it's basically a loanword.
It also makes more sense when you realize that 1) pomme in older French meant fruit generally, not apples specifically, and 2) sweet potatoes were introduced to Europe well before white potatoes were. So "earth fruit" seems fitting.
A term falling out of use does not make it foreign. Even if no longer common, pommes frites is still a French term. The French Wikipedia page also gives no indication that the term is no longer used.
Potatoes originated in the Americas, so I suppose that word was created in the past 500 years. But even for naming modern things like computers, I would think old languages would just use amalgamations like that.
Wiktionary says it was in Old High German a thousand years ago, but defines that word as "pumpkin, squash, melon", which is strange since pumpkins are New World too.
AWS China is a completely separate partition under separate Chinese management, with no dependencies on us-east-1. It also greatly lags in feature deployments as a result.
US stock market index funds will crash when the US stock market crashes. That will require very large sums of capital to decide to move away from US capital markets. To give an idea of how much money would need to move - VTSAX alone is about $2.1 trillion, with hundreds of billions of dollars of shares of each Mag7 stock.
You basically need the world to decide that EMEA/Japan markets are collectively stronger than the US stock market, and to collectively move their capital outside the US to be deployed in EMEA/Japan. Moves away from the Mag7 to US value stocks will be captured by the US stock market index funds; moves into commodities will be seen as opportunities to buy the dip before a market rebound. You can view attempts by US private equity to purchase real estate as attempts to hedge against overvaluation in US markets, but if the US has another Great Depression, those real estate purchases won't be able to fetch high rents or prices anyway.
In short, just follow the normal advice, which is not to put all your money into US domestic stocks, but to also purchase foreign stock market index funds, which help to hedge against the risk of the entire US stock market crashing. In the long run, US index funds are still a good investment - US courts still are quite powerful to settle contract disputes, the US does not have capital flight controls, and American business culture is still one of the hardest working, greediest forces on the planet - a Great Depression v2 would not change that.
All assets are correlated. When the stock market inevitably crashes (and it will; we just don't know when), so will other world stock markets. And the cycle will repeat.
Capital is not going to "move away from US capital markets" because those markets tend to over-perform and will likely continue to over-perform. What companies are you investing in that are not Nvidia, Google, Amazon, Meta, Apple, OpenAI, Anthropic, etc.?
It's really hard to predict market crashes. I think it makes sense to be more cautious, but that's also what could have been said one, two, or five years ago, in which case you would have missed a lot of potential gains.
> US stock market index funds will crash when the US stock market crashes. That will require very large sums of capital to decide to move away from US capital markets. To give an idea of how much money would need to move - VTSAX alone is about $2.1 trillion, with hundreds of billions of dollars of shares of each Mag7 stock.
I'd like to make a technical note about markets because I see this mistake repeated in the comments. The money doesn't have to move out of the US markets to somewhere else for the stock market to crash. It only requires a destruction of confidence. For a hypothetical example, suppose the S&P 500 closes at 7000 on a Friday, and everyone loses confidence in the S&P 500 over the weekend (for whatever reason). The market can open on Monday at 3500 without a share traded before the open (no money was moved out of the market), and investor portfolio values are now cut in half. Since confidence is broken, nobody buys the dip, and the market closes Monday down at 3000.
It's an extreme example, but it's worth understanding the fundamental underpinnings. The markets are a confidence game. Sometimes we forget because we have good reason to be confident (e.g. in the S&P 500), and so it fades into the background that something like this could even happen, but it's not hard to find these sorts of events in history.
You are correct, but only insofar as paper value is destroyed. If investors hold a firesale because the market would prefer to realize whatever value can be rescued, even at a loss, but the proceeds of the sale stay inside the US, then that capital is more likely to be reinvested in the US once investor confidence returns. This is the underlying reason why most long investors should continue to hold their positions despite short-term losses.

The fact that NVDA has a $4.6T market cap, as the product of about 24 billion shares and roughly a $190/share price, does not mean that the market believes all 24 billion shares could be sold at that $190/share price. That is a convenient fiction that falls apart when investor confidence bursts, but it does not in and of itself truly represent value destruction (Nvidia employees will still wake up the next day and go to work), at least not until second-order effects kick in (e.g. Nvidia employees leave because their RSU packages are no longer competitive compensation).

People who stay long in the stock market can wait for investor confidence to return, at which point cash is reinjected into the stock market and the losses in diversified portfolios are never realized. If an S&P 500 investor takes a 50% hit in a crash, decides to hold, and the S&P 500 then rises by 140% over the next two years, that investor will still realize a nice return (rough numbers below).
The way that narrative does not happen is if the capital leaves entirely and gets locked up in other investments; or, in the context of index funds (which would anyway rebalance to rise with those other investments), if the capital leaves for other countries, into investments that are not covered by the index funds.
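To put rough numbers on the hold-through-a-crash arithmetic above (a quick sketch; the starting portfolio value is made up, the 50%, 140%, and NVDA figures are the ones mentioned above):

    # Back-of-envelope: holding through a crash (illustrative numbers only)
    initial = 100_000                          # hypothetical starting portfolio, in dollars
    after_crash = initial * (1 - 0.50)         # a 50% drawdown
    after_recovery = after_crash * (1 + 1.40)  # the index then rises 140% over two years

    print(f"start:           ${initial:,.0f}")
    print(f"after 50% crash: ${after_crash:,.0f}")
    print(f"after +140%:     ${after_recovery:,.0f}")   # $120,000, a 20% net gain for the holder

    # Sanity check on the market-cap figure: shares outstanding x price
    nvda_cap = 24e9 * 190
    print(f"NVDA market cap: ${nvda_cap / 1e12:.2f}T")  # ~$4.56T, matching the ~$4.6T above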
Correct. The price of the market is the price people are willing to pay. It is not directly related to the movement of capital. That said, prices are also a function of supply and demand; if there is no demand for US stocks, for example, then it is more likely prices will go down. If everyone wants to sell the US and buy Europe, e.g. because they think the European competitors to Apple, Google, Amazon, Nvidia and such will outperform, then presumably the prices at which those companies trade will trend down.
I think there are two kinds of software-producing organizations:
There's the small shops where you're running some kind of monolith generally open to the Internet, maybe you have a database hooked up to it. These shops do not need dedicated DevOps/SRE. Throw it into a container platform (e.g. AWS ECS/Fargate, GCP Cloud Run, fly.io, the market is broad enough that it's basically getting commoditized), hook up observability/alerting, maybe pay a consultant to review it and make sure you didn't do anything stupid. Then just pay the bill every month, and don't over-think it.
Then you have large shops: the ones where you're running at a scale where the cost premium of container platforms is higher than the salary of an engineer to move you off them; the ones where you have to figure out how to get the systems from different companies pre-M&A to talk to each other; where you have N development teams organizationally far away from the sales and legal teams signing SLAs, yet those teams need to be constrained by said SLAs; where you have some system that was architected to handle X scale, the business has now sold 100X, and you have to figure out what band-aids to throw at the failing system while telling the devs they need to re-architect; where you need to build your Alertmanager routing tree configuration dynamically because YAML is garbage and the routing rules change based on whether or not SRE decided to return the pager, plus devs need the ability to self-service new services, plus new alerts need progressive rollout across the organization, etc., so even Alertmanager config needs to be owned by an engineer.
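To make the "build the routing tree dynamically" point concrete, here's a rough sketch of the kind of generation I mean; the team names, receivers, and pager-ownership flag are all made up, and a real config carries far more than this:

    # Sketch: generate the Alertmanager routing tree from team data
    # instead of hand-maintaining YAML. Names/receivers are hypothetical.
    import yaml  # PyYAML

    # Hypothetical source of truth, e.g. pulled from a service catalog.
    teams = [
        {"name": "payments", "sre_owns_pager": True},
        {"name": "search",   "sre_owns_pager": False},
    ]

    def route_for(team):
        # Page SRE if they own the pager for this team, otherwise page the dev team.
        receiver = "sre-pagerduty" if team["sre_owns_pager"] else f"{team['name']}-pagerduty"
        return {"matchers": [f'team = "{team["name"]}"'], "receiver": receiver}

    config = {
        "route": {
            "receiver": "default-slack",   # catch-all receiver
            "routes": [route_for(t) for t in teams],
        },
        "receivers": [
            {"name": "default-slack"},
            {"name": "sre-pagerduty"},
            *({"name": f"{t['name']}-pagerduty"} for t in teams),
        ],
    }

    print(yaml.safe_dump(config, sort_keys=False))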
I really can't imagine LLMs replacing SREs in large shops. SREs debugging production outages to find a proximate "root" technical cause is a small fraction of the SRE function.
> SREs debugging production outages to find a proximate "root" technical cause is a small fraction of the SRE function.
According to the specified goals of SRE, this is actually not just a small fraction; it's something that shouldn't happen at all.
To be clear, I'm fully aware that this will always be necessary, but whenever it happens, it's because the site reliability engineer (SRE) overlooked something.
Hence, if that's considered a large part of the job, then you're just not an SRE as Google defined the role.
Very little connection to the blog post we're commenting on though - at least as far as I can tell.
At least I didn't find any focus on debugging. It put forward that the capability to produce reliable software is what will be the differentiator in the future, and I think this holds up and is in line with the official definition of SRE.
I don't think people really adhere to Google's definition; most companies don't even have nearly similar scale. Most SREs I've seen are running from one PagerDuty alert to the next and not really doing much of a deep dive into understanding the problem.
I think you've identified analogous functions, but I don't think your analogy holds as you've written it. A more faithful analogy to OP is that there is no better flight crash investigator than the aviation engineer who designed the plane, but having to investigate a crash at all is a failure of his primary duty of engineering safe planes.
Still not a great rendition of this thought, but closer.
Well, all kinds. Alerting is a really great way to track things that need to change, tell people about that thing along established channels, and also tell them when it's been addressed satisfactorily. Alertmanager will already be configured with credentials and network access to PagerDuty, Slack, Jira, email, etc., and you can use something like Karma to give people interfaces to the different Alertmanagers and manage silences.
If you're deploying alerts, then yeah you want a progressive rollout just like anything else, or you run the risk of alert fatigue from false positives, which is Really Bad because it undermines faith in the alerting system.
For example, say you want to start tracking, per team, how many code quality issues they have, and set thresholds above which they will get alerted. The alert will make a Jira ticket; getting code quality under control is the kind of work that can afford to wait to be scheduled into a sprint. You probably need different alert thresholds for different teams, and you want to test the waters before you start having Alertmanager make real Jira issues. So, yeah, progressive rollout.
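As a rough sketch of what those generated rules could look like (the metric name, thresholds, and rollout label are all made up):

    # Sketch: per-team code-quality alert rules with a progressive rollout flag.
    import yaml  # PyYAML

    teams = {
        # team name -> (threshold, rollout stage: "dryrun" routes to a log-only
        # receiver, "live" routes to the receiver that files Jira tickets)
        "payments": (50, "live"),
        "search":   (200, "dryrun"),
    }

    rules = []
    for team, (threshold, stage) in teams.items():
        rules.append({
            "alert": "CodeQualityIssuesHigh",
            "expr": f'code_quality_issues_total{{team="{team}"}} > {threshold}',
            "for": "24h",   # only alert on sustained breaches, not a single bad scan
            "labels": {"team": team, "severity": "ticket", "rollout": stage},
            "annotations": {
                "summary": f"{team} has more than {threshold} open code quality issues",
            },
        })

    print(yaml.safe_dump({"groups": [{"name": "code-quality", "rules": rules}]},
                         sort_keys=False))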
Having worked on Cloud Run/Cloud Functions, I think almost every company that isn't itself a cloud provider could be in category 1, with moderately more featureful implementations that actually competed with K8s.
Kubernetes is a huge problem, it's IMO a shitty prototype that industry ran away with (because Google tried to throw a wrench at Docker/AWS when Containers and Cloud were the hot new things, pretending Kubernetes is basically the same as Borg), then the community calcified around the prototype state and bought all this SAAS/structured their production environments around it, and now all these SAAS providers and Platform Engineers/Devops people who make a living off of milking money out of Kubernetes users are guarding their gold mines.
Part of the K8s marketing push was rebranding Infrastructure Engineering = building atop Kubernetes (vs operating at the layers at and beneath it), and K8s leaks abstractions/exposes an enormous configuration surface area, so you just get K8s But More Configuration/Leaks. Also, You Need A Platform, so do Platform Engineering too, for your totally unique use case of connecting git to CI to slackbot/email/2FA to our release scripts.
At my new company we're working on fixing this but it'll probably be 1-2 more years until we can open source it (mostly because it's not generalized enough yet and I don't want to make the same mistake as Kubernetes. But we will open source it). The problem is mostly multitenancy, better primitives, modeling the whole user story in the platform itself, and getting rid of false dichotomies/bad abstractions regarding scaling and state (including the entire control plane). Also, more official tooling, and you have to put on a dunce cap if YAML gets within 2 network hops of any zone.
In your example, I think
1. you shouldn't have to think about scaling and provisioning at this level of granularity, it should always be at the multitenant zonal level, this is one of the cardinal sins Kubernetes made that Borg handled much better
2. YAML is indeed garbage, but availability reporting and alerting need better official support; it doesn't make sense for every ecommerce shop and bank to be building this stuff
3. a huge number of alerts and configs could actually be expressed in business logic if cloud platforms exposed synchronous/real-time billing with the scaling speed of Cloud Run.
If you think about it, so so so many problems devops teams deal with are literally just
1. We need to be able to handle scaling events
2. We need to control costs
3. Sometimes these conflict and we struggle to translate between the two.
4. Nobody lets me set hard billing limits/enforcement at the platform level.
(I implemented enforcement for something close to this for Run/Appengine/Functions, it truly is a very difficult problem, but I do think it's possible. Real time usage->billing->balance debits was one of the first things we implemented on our platform).
5. For some reason scaling and provisioning are different things (partly because the cloud provider is slow, partly because Kubernetes is single-tenant)
6. Our ops team's job is to translate between business logic and resource logic, and half our alerts are basically asking a human to manually make some cost/scaling analysis or tradeoff, because we can't automate that, because the underlying resource model/platform makes it impossible.
Since you are developing in this domain: our challenge with both Lambdas and Cloud Run-type managed solutions is that they seem incompatible with our service mesh. Cloud Run and Lambdas cannot be incorporated with the GCP service mesh, but only if it is managed through GCP as well. Anything custom is out of the question. Since we require end-to-end mTLS in our setup, we cannot use Cloud Run.
To me this shows that Cloud Run is more of an end product than a building block, and that hinders adoption: we'd basically need to replicate most of Cloud Run ourselves just to add the small extra piece of running our sidecar.
> Cloud Run and Lambdas cannot be incorporated with the GCP service mesh, but only if it is managed through GCP as well
I'm not exactly sure what this means; a few different interpretations make sense to me. If this is purely a Run <-> other GCP product in a VPC problem, I'm not sure how much info about that is considered proprietary and which I could share, or even if my understanding of it is still accurate. If it's that Cloud Run can't run in your service mesh, then, well, these are both managed services. But yes, I do think it's possible to run into a situation/configuration that is impossible to express in Run even though it doesn't seem like it should be inexpressible.
This is why designing around multitenancy is important. I think with hierarchical namespacing and a transparent resource model you could offer better escape hatches for integrating managed services/products that don't know how to talk to each other. Even though your project may be a single "tenant", because these managed services are probably implemented in different ways under the hood and have opaque resource models (ie run doesn't fully expose all underlying primitives), they end up basically being multitenant relative to each other.
That being said, I don't see why you couldn't use mTLS to talk to Cloud Run instances, you just might have to implement it differently from how you're doing it elsewhere? This almost just sounds like a shortcoming of your service mesh implementation that it doesn't bundle something exposing run-like semantics by default (which is basically what we're doing), because why would it know how to talk to a proprietary third party managed service?
There are plenty of PaaS components that run on k8s if you want to use them. I'm not a fan, because I think giving developers direct access to k8s is the better pattern.
Managed k8s services like EKS have been super reliable the last few years.
YAML is fine, it's just a configuration language.
> you shouldn't have to think about scaling and provisioning at this level of granularity, it should always be at the multitenant zonal level, this is one of the cardinal sins Kubernetes made that Borg handled much better
I'm not sure what you mean here. Managed k8s services, and even k8s clusters you deploy yourself, can autoscale across AZs. This has been a feature for many years now. You just set a topology key on your pod template spec, your pods will spread across the AZs, easy.
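For example, here's a minimal sketch of that fragment of a pod template spec (the app label is made up; dumped from Python just for illustration):

    # Sketch of a topologySpreadConstraints entry that spreads replicas across zones.
    import yaml  # PyYAML

    pod_template_spec = {
        "topologySpreadConstraints": [{
            "maxSkew": 1,                                   # at most 1 pod of imbalance between zones
            "topologyKey": "topology.kubernetes.io/zone",   # the "topology key" in question
            "whenUnsatisfiable": "ScheduleAnyway",          # or "DoNotSchedule" to hard-enforce
            "labelSelector": {"matchLabels": {"app": "web"}},
        }],
    }

    print(yaml.safe_dump(pod_template_spec, sort_keys=False))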
For most tasks you would want to do to deploy an application, there's an out-of-the-box solution for k8s that already exists. There have been millions of labor-hours poured into k8s as a platform; unless you have some extremely niche use case, you are wasting your time building an alternative.
I will just say, based on recent experience: the lesson is not "Kubernetes bad"; it's that Kubernetes is not a product platform. It's a substrate, and most orgs actually want a platform.
We recently ripped out a barebones Kubernetes product (like Rancher, but not Rancher). It was hosting a lot of our software development apps like GitLab, Nexus, Keycloak, etc.
But in order to run those things, you have to build an entire platform and wire it all together. This is on premises, running on VxRail.
We ended up discovering that our company had an internal software development platform based on EKS-A and it comes with auto installers with all the apps and includes ArgoCD to maintain state and orchestrate new deployments.
The previous team did a shitty job DIY-ing the prior platform. So we switched to something more maintainable.
If someone made a product like that then I am sure a lot of people would buy it.
This is one of the things that excites me about TigerBeetle; the reason so much cloud provider billing is reported only at an hourly granularity at best is that the underlying systems are running batch jobs to calculate final billed sums. Having a billing database that is efficient enough to keep up in real time is a game-changer, and we've barely scratched the surface of what it makes possible.
Thanks for mentioning them; we're doing quite similar debit-credit stuff as https://docs.tigerbeetle.com/concepts/debit-credit/ but reading https://docs.tigerbeetle.com/concepts/performance/ they are definitely thinking about the problem differently from us. You need much more prescribed entities (e.g. resources and SKUs) on the modelling side and different choices on the performance side (for something like a usage pricing system) for a cloud platform.
This feels like a single-tenant, centralized ACH, but I think what you actually want for a multitenant, multizonal cloud platform is not ACH but something more capability-based. The problem is that cloud resources are billed as subscriptions/rates, and you can't centralize anything on the hot path (like this does), because it means an availability problem at that node causes a lack of availability for every zone that interacts with it. Also, the business logic for computing an actual final bill for a cloud customer's usage is quite involved because it's reliant on so many different kinds of things, including pricing models which can get very complex or bespoke, and it doesn't seem like TigerBeetle wants calculating prices to be part of their transactions (I think)
The way we're modelling this is with hierarchical sub-ledgers (eg per-zone, per-tenant, per-resourcegroup) and something which you could think of as a line of credit. In my opinion the pricing and resource modelling + integration with the billing tx are much more challenging because they need to be able to handle a lot of business logic. Anyway, if someone chooses to opt-in to invoice billing there's an escape hatch and way for us to handle things we can't express yet.
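As a toy illustration of the hierarchical sub-ledger idea (this is just the general shape, not our actual implementation):

    # Toy sketch: usage debits land on a leaf ledger and roll up through tenant
    # and zone parents, so a per-tenant credit limit can be checked locally.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Ledger:
        name: str
        credit_limit: Optional[float] = None   # None = no limit at this level
        balance: float = 0.0
        parent: Optional["Ledger"] = None

        def debit(self, amount: float) -> bool:
            # Refuse the debit if any ancestor would exceed its limit.
            node = self
            while node:
                if node.credit_limit is not None and node.balance + amount > node.credit_limit:
                    return False
                node = node.parent
            # Otherwise apply it at every level.
            node = self
            while node:
                node.balance += amount
                node = node.parent
            return True

    zone = Ledger("zone-a")
    tenant = Ledger("tenant-42", credit_limit=100.0, parent=zone)
    group = Ledger("resourcegroup-web", parent=tenant)

    print(group.debit(60.0))   # True
    print(group.debit(60.0))   # False: tenant-42 would exceed its 100.0 limit
    print(zone.balance)        # 60.0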
Every time I’ve pushed for cloud run at jobs that were on or leaning towards k8s I was looked at as a very unserious person. Like you can’t be a “real” engineer if you’re not battling yaml configs and argoCD all day (and all night).
It does have real tradeoffs/flaws/limitations, chief among them, Run isn't allowed to "become" Kubernetes, you're expected to "graduate". There's been an immense marketing push for Kubernetes and Platform Engineering and all the associated SAAS sending the same message (also, notice how much less praise you hear about it now that the marketing has died down?).
The incentives are just really messed up all around. Think about all the actual people working in devops who have their careers/job tied to Kubernetes, and how many developers get drawn in by the allure and marketing because it lets them work on more fun problems than their actual job, and all the provisioned instances and vendor software and certs and conferences, and all the money that represents.
Headline is wrong. I/O wasn't the bottleneck, syscalls were the bottleneck.
Stupid question: why can't we get a syscall to load an entire directory into an array of file descriptors (minus an array of paths to ignore), instead of calling open() on every individual file in that directory? Seems like the simplest solution, no?
One aspect of the question is that "permissions" are mostly regulated at the time of open, and user code should check for failures. This was a driving inspiration for the tiny 27-line C virtual machine in https://github.com/c-blake/batch that allows you to, e.g., synthesize a single call that mmaps a whole file https://github.com/c-blake/batch/blob/64a35b4b35efa8c52afb64... which seems like it would have also helped the article author.
It's not the syscalls. There were only 300,000 syscalls made. Entering and exiting the kernel takes 150 cycles on my (rather beefy) Ryzen machine, or about 50ns per call.
Even if you assume it takes 1µs per mode switch, which would be insane, you'd be looking at 0.3s out of the 17s for syscall overhead.
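Back-of-envelope, assuming a ~3 GHz clock:

    # Sanity check of the syscall-overhead estimate above.
    syscalls = 300_000
    cycles_per_switch = 150
    clock_hz = 3.0e9                     # assumed ~3 GHz clock

    per_call_s = cycles_per_switch / clock_hz
    print(f"~{per_call_s * 1e9:.0f} ns per kernel entry/exit")        # ~50 ns
    print(f"total at 50 ns: {syscalls * per_call_s:.3f} s")           # ~0.015 s
    print(f"total at 1 us:  {syscalls * 1e-6:.1f} s of the 17 s")     # 0.3 s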
It's not obvious to me where the overhead is, but random seeks are still expensive, even on SSDs.
You could use io_uring, but IMO that API is annoying and I remember hitting limitations. One thing you could do with io_uring is use openat (the op, not the syscall) with the dir fd (which you get from the syscall), so you can asynchronously open and read files; however, you couldn't open directories for some reason. There's a chance I may be remembering wrong.
io_uring supports submitting openat requests, which sounds like what you want. Open the dirfd, extract all the names via readdir and then submit openat SQEs all at once. Admittedly I have not used the io_uring API myself so I can't speak to edge cases in doing so, but it's "on the happy path" as it were.
You have a limit of 1k simultaneous open files per process - not sure what overhead exists in the kernel that made them impose this, but I guess it exists for a reason. You might run into trouble if you open too many files at once (either the kernel kills your process, or you run into some internal kernel bottleneck that makes the whole endeavor not so worthwhile).
That's mainly for historical reasons (select syscall can only handle fds<1024), modern programs can just set their soft limit to their hard limit and not worry about it anymore: https://0pointer.net/blog/file-descriptor-limits.html
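For example, in Python the bump looks like this (a minimal sketch):

    # Raise this process's file-descriptor soft limit up to the hard limit,
    # as the linked post recommends (safe as long as you don't use select()).
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    print(f"RLIMIT_NOFILE raised from {soft} to {hard}")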
>why can't we get a syscall to load an entire directory into an array of file descriptors (minus an array of paths to ignore), instead of calling open() on every individual file in that directory?
You mean like a range of file descriptors you could use if you want to save files in that directory?
What comes closest is scandir [1], which gives you an iterator of direntries, and can be used to avoid lstat syscalls for each file.
Otherwise you can open a dir and pass its fd to openat together with a relative path to a file, to reduce the kernel overhead of resolving absolute paths for each file.
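Python, for instance, exposes both of these via os.scandir() and the dir_fd parameter to os.open(); a rough sketch (the directory path is made up):

    # os.scandir() is the readdir/getdents-style interface: one pass yields names
    # plus cached type info, avoiding a separate lstat per file. The dir_fd
    # parameter to os.open() gives openat()-style resolution relative to the dir.
    import os

    dir_fd = os.open("some/dir", os.O_RDONLY | os.O_DIRECTORY)  # hypothetical path
    try:
        with os.scandir(dir_fd) as it:
            for entry in it:
                if not entry.is_file():      # usually answered from cached d_type
                    continue
                fd = os.open(entry.name, os.O_RDONLY, dir_fd=dir_fd)  # openat(2) under the hood
                try:
                    data = os.read(fd, 1 << 16)   # first 64 KiB
                finally:
                    os.close(fd)
    finally:
        os.close(dir_fd)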
Devs need to write the 1% of automated tests needed just to prove that what they wrote works in the ideal case. QA is valuable for writing the 99% of automated tests that prove that the software works in the edge cases, with DevOps occasionally dropping in to make sure that the test suite runs quickly.
The way you solve Product and QA being at odds is very simple: QA loses, until they don't. When trying to find product-market fit, it doesn't make sense to delay delivery to prove that an experiment works in exceptional circumstances. Eventually you do have product-market fit, and you want to harden the features you already shipped, which is where QA comes in: better that internal QA finds the bugs than your (future) customers. Eventually you start launching features to a massive audience on day 1, and you need QA to reduce reputational risk before you ship. The right time for QA to intercede and get a veto on delivery changes over the lifetime of the product, and part of whether or not QA is a net-add is whether your organization (leadership) is flexible enough to accept and implement that shift.
Is there anyone out there that has actually, in the real world, realized CUE's promise of bundling type safety + data/configuration + task running in such a way that does not require wrapping it in shell scripts? Can you set up your CI/CD pipelines so that it's literally just invoking some cue cmd, and have that cmd invocation be reasonably portable?
The problem is, once you have to wrap CUE, the loss of flexibility within a special-purpose language like CUE is enough for people to ask why not just write the scripts in a general-purpose language with better ecosystem support. And that's a hard sell in corporate environments, even ones that find benefit in type-safe languages in general, because they can just pick a general-purpose language with a static type checker.
Not sure if that’s what you mean, but we have apps where all you need to do to deploy them to Kubernetes is run “cue cmd deploy”.
> The problem is, once you have to wrap CUE, the loss of flexibility within a special-purpose language like CUE is enough for people to ask why not just write the scripts in a general-purpose language with better ecosystem support.
cue cmd is nice but it’s not the reason to use CUE. The data parts are. I would still use it if I had to use “cue export” to get the data out of it with a bit of shell.
So cue cmd also built the image, authenticated to a private registry, pushed the image, authenticated to the private Kubernetes cluster, and ran kubectl apply?
No, that’s why I said deploy. All it does is run kubectl apply and kubectl rollout status.
Only those are directly tied to the data in CUE. There’s not much advantage to running other commands with it. You can run arbitrary processes with cue cmd, though.
Yeah, but that's kinda my point. OK, you can write policy to control the Kubernetes configuration with CUE. What about policy to control the Dockerfile, let alone policy to control the cloud infrastructure? No? So the Security folk writing policy need to learn two languages - one for general-purpose policy, plus CUE specifically for Kubernetes manifests? Why not write the policy for Kubernetes manifests in the general-purpose language they're using for the rest of the policy? And so on and so forth, which makes CUE's value proposition dubious in the enterprise.
I can't speak for CUE, but I've worked with CI and "build orchestration tools" in the past. Most CI providers provide executor APIs that let you override the executor as a plugin. One example is https://buildkite.com/resources/plugins/buildkite-plugins/do... - you mark this as "this is using docker" and configure it in the environment, and then you provide the command. You need to be very careful about the design of the plugin, but I've done it a few times and it's viable.
I can't fully answer your question but I did once spend about a week porting plain internal configuration to cue, jsonnet, dhall and a few related tools (a few thousand lines).
I was initially most excited by cue but the novelty friction turned out to be too high. After presenting the various approaches the team agreed as well.
In the end we used jsonnet which turned out to be a safe choice. It's not been a source of bugs and no one complained so far that it's difficult to understand its behaviour.
Cue.js has a wasm port. I really like CUE for my spec-driven development tool Arbiter; it is great for structured specs because it acts like a superset of most configuration/programming languages.
Make Single Room Occupancy (SRO) housing legal again.
Having room for little more than a bed forces you to get out during the day. Stuff happens when your default for where to spend your time is not at "home". SRO halls also usually had more room for common spaces to meet and socialize with other people in a similar position in life, and of course, SRO is a very cheap housing option.
I lived on a kibbutz for nearly three years after university and had similar levels of personal space. While I definitely would not want to live on a kibbutz for my whole life, and there were very significant downsides (internal politics), I made some lifelong friends there and overall consider that experience to have been very positive, in particular for my social life.
That's not comparable to an SRO in the city, where you'd be sharing living space with far more diverse and vibrant characters. No one in their right mind would choose to live in one of those, unless they were on the brink of homelessness.
Oh we definitely had diverse and vibrant characters, and that was part of what made living there fun. I also find it strange that in a page about solutions to loneliness, you reject one that, by your own admission, would introduce someone lonely to a wide variety of new people.
But if you're trying to use it as a euphemism for drug addicts, I think you'll often find that they end up homeless, despite there being SROs, because they spend their SRO rent on drugs instead and get evicted. If you're trying to use it as a euphemism for sex workers, the successful SROs usually had strict rules around the Single Resident part.
Basically it's just like hotels, in the sense that there are both seedy, run-down, crummy hotels and there are upscale hotels. That there are some crummy hotels is not an indictment of hotels in general. If you make the category legal, you will find worse and better examples, and lonely people would have their choice of establishment that would help put them back into close proximity with others.
>I also find it strange that in a page about solutions to loneliness, you reject one that, by your own admission, would introduce someone lonely to a wide variety of new people.
I don't see how sharing the bathroom and the kitchen with alcoholics, drug addicts, ex-cons, and the mentally ill could possibly alleviate one's loneliness. And trust me, even a few of those per floor are enough to make living there an unpleasant experience.
You picture SRO as some kind of hippie commune thing. It's not. Again: no one in their right mind actively chooses to live in such inhumane conditions. It is utterly bizarre to me that someone would romanticize sharing a toilet with fifty other people.
Like I said, if SROs are legal, you will get better and worse examples. Certainly a lot of people lived in university dormitories which are not, of course, filled with the dregs of society. Is there a market for an SRO hall filled with young Congressional staffers in Washington DC? Or one on Wall Street for young entry-level folk working in investment banks pulling 16 hour days in their first couple of years? Almost certainly. You keep out the dregs of society the same as anywhere else: you charge more than the cheapest places and ask people to sign a strict behavioral code where violations result in quick eviction.
And you share a toilet with a hundred other people in your workplace. So what? SRO rent pays for cleaning staff for common areas.
Your competitors are not necessarily targeting the same users, and their internal strengths and weaknesses are different from yours. All comparisons to competitors are superficial and distract you from building what your users want and improving upon your internal strengths and weaknesses.
I'll agree with you that the author tried to put in a sound bite and it failed to clarify the author's point.
The author is trying to argue for hiring early engineers who have exhibited ownership values and who want to take ownership for their work. These are the people for whom you establish "extreme transparency" (see: late in the post), a Google Doc for them to help align with others on high-level plans, a kitchen for people to informally talk in, and then get out of their way. That kind of environment is indeed in and of itself quite motivating for a certain kind of engineer.
Of course, it doesn't scale to BigCorp-size. Eventually you have too many cooks in the kitchen. The truth is that the vast majority of engineers really do want someone to tell them exactly what to do, so that they can come in to a highly structured 9-5 job and earn a paycheck that pays their mortgage and feeds their family. Author's prescriptions do not apply to large companies or to most engineers, and Author makes it clear as such.