Hacker News | faleidel's comments

If you pay X to get Y amount of work done and are happy with it, why would you care that the person is also doing Z amount of work somewhere else? Z could be another job, a hobby, or anything else, but it is none of your concern.


The issue is they want to maximise Y.

If you're working for someone else, that by definition means Y has plenty of room in it and can be done better.

They could replace the worker with anyone else that has the same skill as them, but who would put all effort into Y, and effectively double the rate of production for not much change.

It's even better than that, because they now have one big lever to tweak the workload. Some weeks it might be 1.5Y, other times it's 2.1Y, software engineering definitely has its good and bad periods, week-by-week.

If the person who works two jobs isn't replaced, not only do we have less work done than what's possible, but they can also burn out more easily.

And we can't change what their other work is telling them to do.

If an engineer is burning out or needs a break, there's only so much that can be done (other than stopping work altogether... but again, why? So another company can burn out our employees?) - there's a lot less wiggle room in 1Y than 2Y.

(Note: I said "we" from pov of manager but I am not one, I'm just an engineer, and I thought about the two job question for quite a while, but determined that it's definitely not worth my time. I get more fulfilment from doing my own things.)


> If you're working for someone else, that by definition means Y has plenty of room in it and can be done better.

And why would I care about that as an employee? It's the employer's problem to use my time effectively, not mine. It is ridiculous to assume I should have interest in helping you make me work harder for the same amount of pay.

> They could replace the worker with anyone else that has the same skill as them, but who would put all effort into Y, and effectively double the rate of production for not much change.

So there's this pool of magical workers who can do twice the amount of work for the same salary as the current employee and yet the company is not utilizing this opportunity at all? This scenario makes no sense, if it was possible to do then the company should already be doing it regardless of whether someone has a second job or not.


> And why would I care about that as an employee?

You don't, the company cares. And whether or not you're a part of that company affects how much you care.

> It is ridiculous to assume I should have interest in helping you make me work harder for the same amount of pay.

You are making yourself work harder. Not the company.

I guess we're having a miscommunication here because you're thinking in terms of "value of labour" and all that cruft.

> So there's this pool of magical workers who can do twice the amount of work for the same salary as the current employee ...

We have people on our team who don't know what CSV files are, and who spend 3 or 4 days on typing out things from spreadsheets, rather than using copy paste. Labour is not inherently valuable for the sake of labour.

The value from a software engineer isn't from hours worked. It's the quality of the result. Man is more than machine.

As long as you can produce twice as much quality software from two companies, then there's no issue with moonlighting a job.

But I'm saying that you're painting yourself into a corner, because now you have to negotiate and do all the "non-labour-related" BS twice as much, and I would argue that the quality of your output degrades, not improves.

It would be more effective to double your money with your current employer, as it gives both you and them more space: vacation, room to slow down and speed up when you need it, etc.


> The issue is they want to maximise Y.

Exactly - they want to maximize Y while keeping X constant.


I think most places will give you more money if it's a reliably true way to get more work done.

That's always been my experience anyway.

You can't blame people for wanting to conserve money, especially against unknowns.


I'm not aware of any salary arrangement where they are paying X to get Y amount of work done. You're describing per-project contract work, which is plenty common in the software industry. With a salary, they are instead paying X dollars to get Y hours of your time (usually 40 per week in the US).


Angular uses RxJS, a reactive programming library, but I think it was a mistake to do so.

The Angular project I am working on is now 5 years old, and the least understood parts of the application are the ones with the most RxJS in them. We even have custom RxJS operators that nobody understands anymore...

The way we do things now is to transform everything we can into promises, because they're easier to work with.

With promises you have a few functions with which you can do everything. With RxJS you have dozens of functions with specific use cases, and most of them look alike. It's too easy to pick the wrong one, and new people on the project need to learn a lot to understand the codebase.

I was interviewing some Angular devs and asked the question: what is the difference between a promise and an observable? 80% of the time the answer was "for an observable you need to subscribe to get the result". That shows a clear lack of understanding of RxJS.

Has anybody had a better experience with Angular and RxJS?


Yeah, totally the opposite. I really like RxJS and used it alongside my senior dev to good effect at my last perm for several years. Our junior did find it a bit tricky sometimes though (edit: I don't mean to imply you weren't "senior" enough, just an observation that it was a new set of concepts to learn).

We would literally never use promises because we'd become so comfortable with how we (and Angular) managed the lifecycle of observables. And we never ever used async/await after some nasty bugs in previous projects. We never started making our own operators. And I think we never really got too complex with it. RxJS marbles and learnrxjs were often consulted. But the code seemed nice and clean and reliable. Testing was more difficult and harder to understand, but we got there.

We switched to using NgRx for our state quite early in the project after a brief flirtation with observable services, so that probably pushed us further down the observable route. We kept NgRx up to date and found the helper functions really nice so there wasn’t too much boilerplate. Making new selectors and integrating them into components with the async pipe is just so damn easy with NgRx and RxJs. Effects would get mildly stupid in terms of complexity and I did have a habit of hilariously writing “these were your father’s parentheses” at the end of any particularly long set of closing parens... but yeah it seemed to just work and give us relatively few bugs and none that I recall were hard to track down. It was all very smooth.

The only thing I’d ever really complain about on that project other than our build times was MSAL, which I hated with a passion.


Yes and no.

We found that rx.js is bad for coordination. I think coordination is a big part of services in Angular UIs. You need to fetch some data, wait for it, ask for a different thing, maybe change some state. Promises and await are great for this, and the await syntax especially is readable (Go channels could be even better). In rx.js you have to nest multiple switchMaps for dependent queries, and for state you either produce state as a stream result or you put in some `tap` or `subscribe` with `takeUntil`.

But rx.js shines with more complex user interactions - drag and drops, brushes, interactive forms (Angular has a nice reactive forms API). We even put some state machines inside streams, so input signals produced events, and we had a `scan` operator that manipulated the machine.

My personal issue with rx.js is that BehaviourSubject is leaky - you can do anything to it and break its property of 'always has a value'. It is nice if your service is a reactive value that you can inject and transform, or render using the async pipe. But you need to be careful which operators you use on it.


> My personal issue with rx.js is that BehaviourSubject is leaky

RxJS best practice is that BehaviorSubject should be used infrequently to never, mostly as "internal plumbing" for things like building your own mini-operators, and that they should never escape an API boundary. If you need to pass a BehaviorSubject to a consumer you always .asObservable() it, and the consumer must treat it like a regular Observable (your API boundary is always Observable<T>, never Subject<T>). (BehaviorSubjects are an imperative "back door" that breaks building things the reactive way.) One of my biggest personal problems with the Angular core libraries is how many BehaviorSubjects leak out everywhere in their API design. EventEmitter is a big giant BehaviorSubject. The Routing APIs leak BehaviorSubjects. Angular's "Reactive" Forms leak imperative BehaviorSubjects all over the place. The bad use of BehaviorSubjects by the core libraries leaks into the rest of the Angular ecosystem, and there are so many "Angular best practices" that are "RxJS worst practices" purely because of these early API design decisions.

> We found that rx.js is bad for coordination.

RxJS is great at "coordination". It requires a different mindset. Angular is awful at helping you get into that mindset. Angular's HttpClient is especially "bad, heavy promises masquerading as Observables", which makes it hard to think of queries/API fetches as events that can return refreshed data over time (streams of fetch results), more similar to those user interactions you saw good results from. There are a lot of useful coordination operators in RxJS beyond `switchMap()`, like `mergeAll()` and `concatAll()`. If you think of a stream of request events flowing into a stream of response events flowing into a state machine (possibly with a very simple `scan`, similar to your user interaction model, to reduce your state over time, a little like the "redux" pattern [1]), RxJS can be brilliant for "coordination" as data arrives.

(Angular kind of sets it up to fail. With how HttpClient works. With how Async Pipe works and isn't the default.)

> or you put some `tap` or `subscribe` with `takeUntil`

This is also where Angular "best practices" and I diverge, and I think this also stems from RxJS "worst practices". RxJS best practice is to use `tap` as infrequently as possible; it's a last-resort escape hatch at best. RxJS best practice is also to `subscribe` as "late" as possible and as infrequently as possible, and to never have a `subscribe` without an `unsubscribe` to clean up resources, including memory. (Which can be very important if you are trying to do everything the reactive way. You can move all your setup into Observables, including the setup and teardown of vanilla JS components.) The overuse of `subscribe` in Angular components seems like one of the biggest obvious reasons why so much of the usage of Observables in the Angular ecosystem looks like bloated Promises. (Which is directly a bad example set by the core library's own HttpClient.) (I also think some of the Angular "best practice" uses of `takeUntil` aren't great either. I was taught that Observable completion should "mean something", as it's a key event in the stream, and shutting down Observables early means you miss later events.)

If Angular's template language took Observables directly, without needing an "async pipe", most of those manual subscribes would just vanish in an instant. Most of the needs for `ngOnInit` and `ngOnDestroy` "lifecycle events" disappear. (Observables already have lifecycle events in subscribe/unsubscribe setup/teardown.) Angular would not have needed Zone.js at all, nor its complex "Change Detector" apparatus. Angular could have done smart things in coordinating Observable observations by templates. (A lot of what React's last several major releases have been about, in building out its Concurrency, Suspense, and related systems, is among other things throttling "non-important" DOM updates together into things like requestAnimationFrame, and doing very complicated work under the hood to set all that up from VDOM changes. In an Observable world you can pipe a `mergeAll()` through `debounceTime(0, animationFrameScheduler)` and get things like that "really easy".)

I wound up trying to encode all of my personal best practices for writing powerful, reactive components in Angular into an opinionated reusable "component framework": https://www.npmjs.com/package/angular-pharkas

I haven't yet found a good way to encode my mindset for "reactive service classes" in Angular, in part because "it feels obvious" to me and isn't really a pattern so much as a mindset, which I know is exactly not obvious, or lots of people would be doing it and Angular's ecosystem would be less full of bad examples. Probably a key place to start, based on the above conversation, is that I almost always wrap any call to Angular's HttpClient in a "forever Observable". Whatever the input that triggers the HTTP API call, whether it is a refresh signal or some other input event (an Observable), hide the `switchMap()` in an Observable of results over time. Most "dependent" data streams are coordinated sometimes as simply as a `combineLatest()`, and others are `scan()` reductions (even full "state machines" in some of those reducers). Everything is returned as Observable<T> and never any Subject<T>. Everything flows into the components, and `subscribe`s are as late as possible (and these days hidden entirely away in "Pharkas" binds). Learn when to `share()` observables (or more often `shareReplay(x)`) and reuse existing Observables rather than build new ones.

I don't know how much that helps. I've been able to wring a lot of good reactive programming out of a massive Angular frontend, but I've been fighting Angular itself (and the terrible debugging/performance experience of the gross, unnecessary Zone.js), and the greater ecosystem of "Angular best practices" the whole way. Every Junior Developer with Angular experience that looks at the code for the first time generally thinks it is readable but that "it looks nothing like Angular I am used to". It's definitely not "Angular best practices" as people are learning them today.

Also, a useful related tip for removing most uses of `tap`: install `rxjs-spy`. It's great. It provides a very simple `tag('some-observable-name')` operator that is a no-op in Production builds, and in Development builds gives you a `window.spy` toolkit that lets you log events from tagged Observables (or a regex matching tagged Observable names) or even set debugger breakpoints at tags.

[1] Though I tend towards "lots of little observable streams" over combined ball of single state refiltered back into little observables like in the "proper" "redux" pattern or how tools like NgRx try to implement that in the Angular world.


Yes - totally the opposite. In fact, I think the issue with Angular is that they've not finished the work to make Angular RX-everywhere (for example, reactive components need to be manually plugged into Subjects at the moment).

With RxJS you should use it everywhere (observing state, observing component inputs, side effects etc). It's when you use it half-heartedly that you get problems with merging different programming paradigms.

The biggest issue with RxJS that we've found is that some devs have a super hard time getting to grips with the paradigm, and if your project is mostly those types of people, it will end up a disaster.


> With RxJS you should use it everywhere (observing state, observing component inputs, side effects etc). It's when you use it half-heartedly that you get problems with merging different programming paradigms

Yes, yes, yes! I have been working with Angular professionally for five years and fell in love with RxJS. If you manage to use it for everything, it really shines. Your entire codebase becomes declarative and it works beautifully. The only downside is that it takes some time to get started from the ground up, but once you do, making changes and adding features becomes trivial. Try making smallish pipes and commenting their purpose. Break them into modular pieces.


On the one hand, I absolutely love working with rxjs on a fairly large angular app as a side project. It affords me the chance to be clever with code, to reason about coordinating multiple asynchronous streams, and so on. Firestore gives me observables of the queries I run where the data update themselves, and the user generates events that turn into data mutations. It is all just super great.

On the other hand, I recognize that doing this well and correctly (understanding how to pipe together operators instead of creating lots of intermediate subscriptions manually, which I see a lot in example / stack overflow code) requires a high level of understanding.

It is a whole change in how you think about event pipelines and code structure. I don't think I would want to migrate my day job systems to it, because then you need everyone on the team to develop that understanding. I'm sure they _could_, but working with promises and trad event handlers is a lot simpler, as long as you keep the rest of the data / eventing pipelines simple.


I understand this paradigm within the context of stream processing, but it seems like a weird way of modeling most REST api-based web applications.

I feel like a promise-based model makes way more sense for most simple web applications. Where you have a http request and response, a promise model seems to suffice for most applications communicating via http-requests to an API layer. Modeling as a stream doesn't make sense to me, and seems like it would over-complicate an otherwise simple mental-model.

In most of the simple web-applications I have encountered there is one or more requests made to unique endpoints for data after the document response has been completed. No need for handling multiple events from the same endpoint, debouncing, multi-casting, unsubscribing, back-pressure, or whatever else. These operations seem to make way more sense in the context of stream processing.
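For that simple request/response shape, plain Promises really do suffice — a sketch with a stubbed `fetchJson` (a made-up stand-in for `fetch(url).then(r => r.json())`):

```javascript
const fetchJson = url =>
  Promise.resolve({ url, items: [1, 2, 3] }); // stand-in for a real HTTP call

async function loadPage() {
  // Independent one-shot endpoint calls run concurrently; no streams needed.
  const [user, posts] = await Promise.all([
    fetchJson('/api/user'),
    fetchJson('/api/posts'),
  ]);
  return { user: user.url, posts: posts.items.length };
}

loadPage().then(page => {
  console.log(page); // { user: '/api/user', posts: 3 }
});
```

Each endpoint is hit once per page load and settles once, which is exactly the Promise contract.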


Rx is a hot-shot, look what I can do, I am smarter than you, paradigm and not much more. Source: I did that to co-workers when I learned it.


I would like to see gzip compression added to the benchmark


    curl https://raw.githubusercontent.com/mortie/jcof/main/tests/corpus/meteorites.json | gzip -9 | wc -c

gives me 34569 bytes.
So the comparison is:

    JSON: 244920 bytes
    JCOF: 87028 bytes
    GZIP: 34569 bytes


To be fair you would gzip the JCOF encoding in this example too.

The author mentions gzip doesn't work for some use cases. For the use case mentioned I'd expect SQLite to be similar; at least that is the default thing I'd reach for.

If for some reason SQLite wasn't sufficient, a custom binary encoding controlled and updated via code instead of config would probably be next.


> To be fair you would gzip the JCOF encoding in this example too.

Just tested my 84M fake social media file - `jcof` gives 44M, `gzip` gives 19M, `jcof+gzip` gives 17M. In essence, you've gained 2M for two CPU intensive procedures instead of one. Doesn't seem all that worth it?


10% is a lot of your egress.


A fair point.

Prompted me to check if the higher zstd levels worked any better on my 84MB fake social graph - nope - and then if LZMA was any good - yes, `lzma` at 5 or higher on the raw JSON beats `jcof | lzma` by ~2M every time. `lzma -4` beats it by ~400k.

If I sort my object keys (a la `jq -S`), `lzma` beats `jcof|lzma` at every level (`gzip` never gets close, `zstd` gets closer.)


EDIT: Nope, I was wrong, I was doing `lzma` against `jcof|zstd`. `jcof|lzma` is still sneaking ~1M below `lzma` at all levels.


meteorites.json, re-encoded via both JSON.stringify() and jcof.stringify():

    method    json   jcof  jcof/json
    plain   244975  87083      0.355
    gzip -6  35829  33152      0.925
    gzip -9  34384  32875      0.956
    xz -9    27864  28696      1.030

And I imagine this is close to ideal for jcof. So unless that last few % really matters, gzipped JSON is probably much better in the general case.



Newer studies have cast doubt on olivine's effectiveness: https://www.technologyreview.com/2022/03/30/1048434/why-usin...


That was about putting olivine in seawater, not spreading it on soil.

My concern with all these approaches is the nickel content of olivine, which can be several tenths of a percent.


Yes, you would favor using the low-nickel supply on farmland.

I wonder if nickel in olivine would be harmful on beaches.


This is pretty much exactly the same idea: the paper referenced by the article mentions basalt specifically, which has olivine as a major constituent.


Here's a study that found that the olivine hypothesis might not work as expected: https://www.technologyreview.com/2022/03/30/1048434/why-usin...


Search for "monoid category" on Google. A monoid is a mathematical object in category theory.


While discoverability can be better with GUIs, I find that googling things is always better for CLI tools. Most of my CLI searches turn up one or two lines of bash I can simply copy-paste, while solutions for GUI programs often involve lots of screenshots of dropdowns to open, and if the look of the application changed or the elements in the dropdowns are not the same, it can be hard to follow.

Taking notes on how to do things, or creating shortcuts, will also always be better with CLIs, since manipulating text is easy and always supported.


Give cht.sh a try. You'll like it. It kind of does googling for you.

Doesn't work every time though.


If I recall right he was locked inside, not outside.


That was the most Kafkaesque aspect. How could he earn more money to pay the door if it locked him inside?

In another instance, the door also didn't want to let him in without paying.


There is ActivityPub, which is used by Mastodon (a Twitter clone). Some people are making Reddit clones that federate over ActivityPub, a bit in the same way that email federates.


Explaining your decisions is a skill in itself. When you are working in a team it is very valuable.


How is the screen cable? Will it break as easily as with the last model?

