Hacker News | azurelogic's comments

Another vote in favor of disc brakes. There's a greater sense of braking power modulation with disc brakes. If your rims aren't true, you're going to have a bad time with rim brakes. Also, I remember the huge difference in cold weather performance, which mattered a lot to me during the cold Midwest winters during college.

The same thing has happened with cars and performance. Stuff gets heavier, more expensive, and more complicated, and yet we manage to squeeze greater performance and efficiency out of less displacement. And then there's the eye-watering power that comes from marrying this kind of tech to a big V8.


I found myself in a similar position a couple of years back. One of the big things that changed for me is that I had to widen my field of view on a project. I had already worked the "full stack" through previous jobs, but now I was responsible for designing how everything connected: APIs, CDNs, servers (or "less"), storage options, security, CI/CD, etc. It's not unlike how we grew from single lines of code to functions to modules to applications. Now, you move to assembling all of the mechanisms into complex systems. Based on your training, I imagine you already know a good amount about the various tools at your disposal, and it is time to apply all of that together.

Your days of learning new things are not over. You don't need to know the minutiae of everything, but you need to know what can and will work together, what may cause pain in integrating, and where your risks are. Oh, and you'll have less time to tinker your way through the new things.

There are things that you may not be used to doing though. You'll spend more time in meetings trying to figure out what you're actually supposed to be building. You'll spend time investigating new tools to use (replace libraries and packages with services in your mind). You'll probably be expected to do even more project planning and estimating.

And one of the most difficult for some people: you'll have to delegate. I still struggle with this one sometimes. You cannot possibly build it all, and now the people you have to rely on to build it will probably lack your experience, and on top of that, it's up to you to make sure that the requirements are crystal clear to them. And that's not their fault. We were all junior devs at one point. Crawl, walk, run. On the flip side, remember that some people WILL have experience in things you don't. Identify those strengths and leverage them!

Oh, and there's a decent chance you'll need to manage how much all of this crap costs... Brilliant solutions aren't so brilliant when they cost more to run than they generate in revenue.


As someone who has been working almost daily with Lambda and other AWS serverless services for close to 2 years, it astounds me to see so much FUD and misinformation in one of the smartest technical communities I know.

Serverless is about making as much computing power as you demand available on demand with a "pay only what you use" price model without demanding that you maintain the underlying systems. In addition, embracing this model tends to allow you to break out of the request driven model of needing to call APIs for everything. Instead, you enable a whole host of new event driven interactions between application components.
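To illustrate that event-driven model, here's a minimal sketch of a Lambda handler invoked by an S3 upload notification rather than by an API call. The event shape follows AWS's S3 notification format; the bucket and object names are hypothetical.

```python
# Sketch of an event-driven Lambda handler (Python runtime).
# Instead of being called through an API, this function is triggered
# whenever an object lands in a bucket; bucket/key values are hypothetical.

def handler(event, context):
    results = []
    # Each record describes one S3 object-created event.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real work (thumbnailing, indexing, etc.) would happen here.
        results.append(f"processed s3://{bucket}/{key}")
    return {"processed": len(results), "items": results}
```

The point is that no component ever polls or calls another component's API: the storage service pushes the event, and the function only runs (and bills) when there is something to do.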

Even this article misses one of the more interesting advantages: scalability under varying load. AWS Lambda may cost more than an equivalent EC2 instance if run constantly, but it scales up and down almost instantly without adding cost. You just pay for the compute time. So, burst traffic is no sweat (up to your account limits or function configured limits). If your application has highly irregular load, Lambda can keep it more responsive while saving you money too.
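To make the cost argument concrete, here is a back-of-envelope comparison. All prices below are assumed, rounded figures for illustration only, not quoted from AWS's price sheet, and Lambda's free tier and EC2 reserved pricing are ignored.

```python
# Illustrative comparison of Lambda vs. an always-on EC2 instance under
# bursty load. All prices are ASSUMED round numbers, not real AWS quotes.

LAMBDA_PER_REQUEST = 0.0000002    # assumed: $0.20 per 1M requests
LAMBDA_PER_GB_SECOND = 0.0000167  # assumed price per GB-second of compute
EC2_PER_HOUR = 0.05               # assumed on-demand price, small instance

def lambda_monthly_cost(requests, avg_ms, memory_gb):
    """Pay only for invocations: requests at avg_ms each, at memory_gb."""
    gb_seconds = requests * (avg_ms / 1000.0) * memory_gb
    return requests * LAMBDA_PER_REQUEST + gb_seconds * LAMBDA_PER_GB_SECOND

def ec2_monthly_cost(instances):
    """Fixed cost: instances bill 24/7 whether or not traffic arrives."""
    return instances * EC2_PER_HOUR * 24 * 30

# A bursty workload: 2M requests/month, 200 ms each, 512 MB functions.
print(lambda_monthly_cost(2_000_000, 200, 0.5))  # ~ $3.74/month
print(ec2_monthly_cost(1))                       # ~ $36/month
```

Flip the workload to sustained heavy traffic and the comparison reverses, which is exactly the "costs more than an equivalent EC2 instance if run constantly" caveat above.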

And it is nothing like paying for shared hosting or other single-unit plans, where you still have no horizontal scalability.


You're right on the first point. They aren't comparable. GitLab is a community-driven, open-core product that is far more powerful than GitHub. GitLab has an extremely robust CI system built in. In addition, you can even use it to create private Docker registries, which I find useful for creating my own private CI images. Plus, they have a better security model (5 tiers), wildcards for branch protection, and tag protection. I'm sure there are other places where they differ, but before you disparage a company, please be informed about who they are and how great their product actually is.


Why do people say "X and Y aren't comparable" and then proceed to compare them? :)

I'm not (only) trying to be pedantic here -- I'm just pointing out the loaded language. In both business models and product recommendations, people use the word "comparable" to justify their conclusions, rather than to explain their comparisons themselves.

It is all about framing a decision for a particular use case at a particular price point.

I guess you could say I'm the kind of person who can't help but notice that modern communication seems to be fraught with this pattern: let's state our conclusions without much justification and then choose our language to minimize the "rationality" of alternatives. I think we can do better, as a community.


Part of it is due to an alternate definition of comparable, which is "of equivalent quality; worthy of comparison", as opposed to "able to be likened to another; similar". It's being used as shorthand for "they aren't equivalent", and then they go to justify that assessment.

I get your point though. The form and structure of the language used, consciously or subconsciously, often conveys quite a bit more information than the words themselves impart. Sometimes this is meant to communicate with or subconsciously sway the reader; sometimes it's leakage of the writer's mental state.


Exactly. With this definition, the phrase "compare and contrast" makes much more sense. The point here was to leapfrog the fact that GitHub and GitLab have very comparable feature sets, and instead highlight the things that GitLab has that GitHub is totally missing.


I find the argument that GitLab is community driven a joke. They are not, and they never can be. The community edition can never be ready for more than git hosting.

We were recently evaluating it, and found that basic needs like code reviews are not covered. And their hiding of LDAP integration (luckily the extended community stepped in and duplicated their module) is a joke.


I don't get your point. GitLab has basic code reviews [1, 2] and LDAP support [3]. LDAP support is better in EE than in CE but that's expected. What is missing for you?

1: https://gitlab.com/help/user/project/merge_requests/merge_re...

2: https://docs.gitlab.com/ce/development/code_review.html

3: https://docs.gitlab.com/ee/administration/auth/ldap-ee.html


Exactly. (1) is Enterprise Edition, same as proper LDAP integration (3).

And I forgot about squashing :)

All I'm saying is: it's to be expected. GitLab is capable, just not in all editions.


If you want to self-host FaaS, there's OpenWhisk. The problem that I have seen with the whole concept of self-hosted FaaS though is that you lose some of the key benefits of "serverless": no maintenance of underlying systems and pay for exactly what you use. Self-hosted means you have to maintain the underlying systems and you have to pay to keep those servers running 24/7 with sufficient scale to support your usage model. It may make deployment easier once you adapt to it, but it's not really giving you the full benefit.


> pay for exactly what you use

Is that a pragmatic consideration, though, or just a conceptual one?

My main concern is with the word "exactly": cloud providers can charge a remarkable markup, which means that, though one is paying proportionally to one's use, that's not necessarily desirable if the alternative is, for example, to pay less than proportionally (e.g. via an economy of scale).

Is the FaaS markup significantly lower? Higher? Do the decision makers even care?

I'm somewhat familiar with the possibility of reducing costs at IaaS providers like AWS with things like dedicated instances and the marketplace. Is that available with FaaS? Does that not matter because it, essentially, removes the benefit of minimal support?


So far everyone I meet that uses it in production is using Lambda. It might be because they want to consume it as a SaaS. But maybe a self-managed solution would have to be compatible with Lambda to gain adoption.

Of course self-hosted is more work than SaaS, but I'm not sure that maintenance and scale are much worse. With Kubernetes you can autoscale things easily.


What would the goal of self-hosting a FaaS solution be?


On-premise deployment might be one goal.


Indeed! That's a good one.


This page was immediately taken over by ad redirects for free iPhones and crap. Can we get this replaced with a less shady source?


I've been wishing for something like this or a standalone unit for CarPlay. It would be amazing if I could just mount something in the CD slot in my car and plug in to the aux jack. I'm sure Apple would have a cow at the idea though.


There are aftermarket radios that support Android Auto/CarPlay. Assuming you don't have a weird in-dash radio, you should be able to fit something like this[1].

[1] http://www.kenwood-electronics.co.uk/car/nav_mm/carplay-mm/D...


All computers have the potential for slowdown as they age because 2 things naturally happen: cooling systems clog with dust and thermal paste degrades. Both of these eventually reduce the efficiency of the cooling system, resulting in higher CPU temps. Unchecked, this can result in CPU throttling by the hardware itself.


3 things. Add the hard drive/SSD: full SSDs naturally run slower, and hard drives with bad and remapped sectors can get dramatically slower.


I would assume that if you had no fan, then the only things that could “degrade” on a computer would be the battery, thermal paste, and the capacitors (and maybe the screen backlight in a laptop.)


Maybe the poster is talking about total sunk cost of phone and all of the batteries


Dust-clogged fans and degraded thermal paste can also be a factor, particularly after 3 years.

iFixit guide for removing your heatsink: https://www.ifixit.com/Guide/MacBook+Pro+13-Inch+Retina+Disp...

My favorite thermal paste: https://noctua.at/en/nt-h1.html

