arcatek's comments | Hacker News

> Huh? It queries whenever you stop typing.

That's relatively infrequent - TabNine offered me accurate completions while I was still writing my code, whereas with Copilot I not only have to wait for it to return an answer, I also have to hope it has one. If it doesn't, too bad - time lost for nothing.


The article doesn't go into how to integrate TypeScript into the monorepo for development - what we do on the Yarn repository is point all the package.json `main` fields to the TypeScript sources, then use `publishConfig.main` to swap them to the JS artifacts right before publishing.

This way, we can use babel-node or ts-node to transparently run our TS files, no transpilation needed.
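
Concretely, each workspace's manifest looks something like the sketch below (paths are illustrative; Yarn rewrites the fields listed under `publishConfig` into the packed tarball at publish time):

```json
{
  "name": "@my-scope/my-package",
  "main": "./sources/index.ts",
  "publishConfig": {
    "main": "./lib/index.js"
  }
}
```

During development everything resolves to `./sources/index.ts`, while consumers of the published package get `./lib/index.js`.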


By this are you saying your “app” project is the one that actually transpiles the TS from your shared packages?

Wouldn’t that mean the shared packages' tsconfigs aren’t respected if you changed something like the strict options? And also that a clean build of the whole monorepo is going to recompile each shared file for every app project, rather than just once?


Yeah, I would be interested to hear from others how they accomplish this. I played around with Nx and it uses TypeScript project references. It is a lot of boilerplate to set up every time you want to create a new app or library. Fortunately, their generators do this with one command.


In the past, I'd put a "typescript:main" field in package.json and configure my bundler to prefer that field. I gave up at some point - probably when I migrated to rollup.

Moving forward, I'm going to use wireit for these things. Pure modules get built with tsc. At the highest level (e.g. where it needs to be embedded in a page), I make a bundle with rollup.

wireit has two nice properties: incremental building and file-system-level dependencies. Within a repo, you can depend on ../package-b. However, if you have multiple monorepos that often get used together, you can also depend on ../../other-project/packages/package-b. No matter where in your tree you make changes, wireit knows when to make a new bundle.

I've just started with wireit (it was only launched recently), but it seems to be a nice solution to wrangling dependencies between related JS libraries.

[1] https://github.com/google/wireit
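
If memory serves, the wireit side of that looks roughly like this in a package's package.json (script names, paths, and globs below are made up for illustration):

```json
{
  "scripts": {
    "build": "wireit"
  },
  "wireit": {
    "build": {
      "command": "tsc --build",
      "dependencies": [
        "../package-b:build",
        "../../other-project/packages/package-b:build"
      ],
      "files": ["src/**/*.ts", "tsconfig.json"],
      "output": ["lib/**"]
    }
  }
}
```

The `files`/`output` globs are what give it incremental builds, and the `path:script` entries are the file-system-level dependencies mentioned above.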


We use pnpm and meta-updater to keep the TS project references in sync. An example of a project setup that way is pnpm's repo itself.

https://github.com/pnpm/pnpm/blob/main/.meta-updater/src/ind...


> If there are clearly better algorithms, why not refactor npm and add them in experimental flags to npm

While node_modules has many flaws, in the current ecosystem all modes have their own pros and cons, and there isn't a "clearly better" algorithm: node_modules has less friction, PnP is sounder, and pnpm's symlinks attempt to be kind of an in-between, offering half the benefits at half the "cost".

Like many things in computer science, it's tradeoffs all the way down. That's part of why Yarn implements all three.


pnpm also implements all three: https://pnpm.io/feature-comparison

But I think it is best to use Yarn for PnP and pnpm for the symlinked node_modules structure.
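
If I remember right, switching pnpm's strategy is a one-line .npmrc setting (a sketch based on pnpm's node-linker option):

```ini
# .npmrc
# isolated (default) = symlinked node_modules, hoisted = flat npm-style layout, pnp = Plug'n'Play
node-linker=pnp
```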


However, note that ESM in Node comes with drawbacks that prevent end-users from relying on it in various situations. Those will mostly be solved once loaders become stable, but until then it's still advised to ship packages as both CJS and ESM.


How do you do both?


Usually you set up a transpiler to target both. For example, my packages are bundled with rollup into both CJS & ESM:

https://github.com/arcanis/clipanion/blob/master/rollup.conf...
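
For readers who don't follow the link, a simplified sketch of such a dual-output rollup config (not the actual clipanion config; file names and the plugin choice are just an example):

```js
// rollup.config.js - simplified sketch of a dual CJS/ESM build
import typescript from '@rollup/plugin-typescript';

export default {
  input: 'sources/index.ts',
  plugins: [typescript()],
  output: [
    { file: 'lib/index.js', format: 'cjs' },  // consumed via require()
    { file: 'lib/index.mjs', format: 'es' },  // consumed via import
  ],
};
```

The package.json then points `main` at the CJS file and `module` (or an `exports` map with `require`/`import` conditions) at the ESM one.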


Helpful, thank you!


It's very easy to re-export ESM (import) as CJS (require) and vice-versa. The main issue is that ESM doesn't give you `require` by default.

For example, to use `require` inside ES modules you would do:

```mjs
import { createRequire } from 'module';

const require = createRequire(import.meta.url);
require('./whatever-in-cjs');
```

There is a reason this isn't the default, though: ESM doesn't "silently" interop with CJS, so as not to make writing universal code harder.


Out of curiosity, how do transaction logs handle things like created_at fields, or randomly generated UUIDs, which rely on contextual data? Is the server time/RNG seed faked for each replayed transaction?


The transaction log records everything that gets done, not how it is done, so it can be replayed reliably. How values that are random, time-sensitive, or otherwise arbitrary are derived during a transaction is not important. What is logged is the fact that the values x/y/z were recorded in row 123,456, which is in page 987,654,321¹, so when the log is replayed you end up with exactly the same state the original database was in at the point the log is run to.

[1] In fact it could just be logged at the page level; the granularity of the log structure varies between systems. If logged at the row level, the physical datafile after restore may not be byte-for-byte identical, but the data will still be there, “random” values & all.


In most cases, transaction logs/write-ahead logs will contain the return value of non-deterministic functions (the generated created_at timestamp or random UUID), instead of the function call.
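
A purely schematic sketch of that idea (field names are invented for illustration, not any real engine's log format):

```ts
// Purely schematic - not any particular database's WAL layout.
interface WalRecord {
  lsn: number;                              // log sequence number
  page: number;
  row: number;
  values: Record<string, string | number>;  // the already-computed column values
}

// The non-deterministic parts (uuid, clock) were evaluated when the transaction
// originally ran; the log only stores their results.
const record: WalRecord = {
  lsn: 42,
  page: 987_654_321,
  row: 123_456,
  values: {
    id: "0b7e6f2a-1d34-4c8e-9f1a-2e5d6c7b8a90", // uuid already generated
    created_at: "2021-08-11T09:15:23Z",         // clock already read
  },
};

// Replay just re-applies the stored values - it never re-runs uuid() or now(),
// so the restored state matches the original exactly.
function replay(
  r: WalRecord,
  applyRow: (page: number, row: number, values: WalRecord["values"]) => void,
): void {
  applyRow(r.page, r.row, r.values);
}
```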


To reiterate what sibling comments said: I'm the one who spawned the discussion and implementation of Corepack, and npm remained largely out of it; the push mostly came from pnpm and Yarn.

Additionally, unlike other approaches, Corepack ensures that package manager versions are pinned per project, and you don't need to blindly install the newest ones via `npm i -g npm` (which could potentially be hijacked via the type of vulnerability discussed here). It intends to make your projects more secure, not less.
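
For context, the per-project pin is the `packageManager` field in package.json; once Corepack is enabled (`corepack enable`), running `yarn` in that project transparently fetches and uses exactly that release. The version below is just an example:

```json
{
  "packageManager": "yarn@3.2.1"
}
```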


If anything this makes it worse.

- No security checks are present in the package manager download and installation process so there are still no guarantees.

- Existing installations of package managers are automatically overwritten when the user calls their binary. What if this was a custom compilation or other customisations were made?

- This solution does a lot more behind the scenes than just run that yarn command the user asked for but hadn't installed.

- Why not simply notify the user when their package manager isn't installed or only allow it with a forced flag? (As has been suggested uncountable times by numerous people anywhere this topic came up over the years.)

Disrespecting user autonomy, capacity to self-regulate, and ownership over their machine and code is not the way.

Edit: Formatting


If they start to play by different rules, one of the two branches remaining after the hard fork (or both) will be rejected by everyone else on the network and become worthless.

There's no telling whether it'd be the "hijacked" branch or the original one - assuming they control 50%+ of the mining power, there's a decent argument that the remaining miners would follow their lead if only to stay on the largest branch.


What miners do is irrelevant: the code known as bitcoin, with a 21M hard cap, will simply reject any block that creates more than the allowed number of bitcoins as invalid, the same way it rejects a random string of bytes as invalid.


Depending on your definitions this might or might not be true, but it also might be totally irrelevant.

There is a protocol and system specification. There are implementations of that specification. There is a distributed system running those implementations. And the distributed system has a state. Each of those can change, and each of those or a combination of them could arguably be called Bitcoin.

If everyone ran new implementations with a different coin cap, you could argue that it is no longer Bitcoin because Bitcoin is a very specific specification with a 21M coin cap, but this would have little bearing on the actual situation.


This is like saying because some people own the majority of printing presses, money is worthless.

Money has worth because people accept it in exchange for goods and services.

Bitcoin has worth because people accept it in exchange for goods and services.

It’s not the miners that create value, it’s the merchants. If miners start some fork they’ll leave the main blockchain, which will run fine without them. And they have absolutely no way of forcing anyone to use their fork. Only if the merchants start accepting coins from the forked blockchain will it become valuable. But that’s up to the merchants, not the miners.

There are problems with one miner controlling over 50% of the mining power. This is not such a problem.


Cryptoheads have really gone from "printed money is becoming worthless, we need a currency with enforced scarcity" to "it's all arbitrary anyway".

It's been quite educational watching the whole cryptocurrency community re-invent economics 101 and find out that the problem has never been technical - it has always been political.


The point is you need the economic majority to be on side in order to succeed with a hard fork. Just the top 50 miners switching on their own isn't enough.


Add 50 more - to reduce the hash rate even further. Announce the hard fork right after a difficulty adjustment. And then, suddenly, those who have not forked will be stalled for several weeks. Instead of 10 minutes per block, the time will be 30 minutes and possibly more. And to adjust the difficulty, those who have not forked would need... a hard fork?

Yes, these 100 miners are pools. But where will pool participants go then? Will the pools that have not forked keep pool participation fees low?

Etc.

The game here is not that simple. It is much more complex than it appears at first sight.


The difficulty adjusts automatically. Bitcoin wouldn’t have come this far if it were that fragile.


It doesn't adjust immediately though. If a large majority of miners suddenly disappears, then the block mining rate does indeed drop until the next difficulty adjustment.


So if 50% of the mining power left, the next block would take about 20 minutes on average. Whereas blocks already routinely take about that long, because that's just the way the statistics work. A non-issue.

Here’s a graph of the time blocks took over the last three years:

https://bitinfocharts.com/comparison/bitcoin-confirmationtim...

Look at the peaks and try to remember what issues they resulted in.


First, 20 minutes on average for exponentially distributed time-to-block would result in much, much higher peaks. Where your chart shows 25 minutes for a block, it would become 50 minutes.

Second, instead of two weeks to the next difficulty adjustment, it will take four weeks.

And if those staying with this slow Bitcoin decide to leave for more profitable currencies (not necessarily Bitcoin, there are other SHA256-based PoW schemes), that will push the difficulty adjustment even further into the future.
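
For reference, the arithmetic behind the "four weeks" figure (a simplified sketch that ignores variance and miners leaving mid-window):

```ts
// Bitcoin retargets difficulty every 2016 blocks, aiming for 10-minute blocks
// (roughly a two-week window).
const RETARGET_BLOCKS = 2016;
const TARGET_MINUTES = 10;

// If a fraction of the hash rate vanishes right after an adjustment (and nothing
// else changes), average block time stretches until the next retarget completes.
function daysUntilNextRetarget(hashRateRemaining: number): number {
  const avgBlockMinutes = TARGET_MINUTES / hashRateRemaining;
  return (RETARGET_BLOCKS * avgBlockMinutes) / (60 * 24);
}

console.log(daysUntilNextRetarget(1.0)); // 14 - the normal two-week window
console.log(daysUntilNextRetarget(0.5)); // 28 - 20-minute blocks, four weeks to adjust
```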


These 25-minute peaks were caused by a similar event - witness the similarly slow rates in the surrounding blocks. So that's approximately how bad it would get in this ‘disastrous’ scenario.

https://www.cnbc.com/2021/07/03/bitcoin-mining-difficulty-dr...

We survived.


In the sense that updating the code to do something else would make it “not Bitcoin”, sure. But I really don’t think this matters to much of anybody if the “OG Bitcoin” drops to zero and the hard fork with new behavior wins.


Developers of the core protocol would notice, add a flag and then miners/clients/exchanges who want to remain with the same dev team can signal they want to follow the "dev" chain instead of the "miner" chain.

Usually forks have checkpoints as well so things can't change willy-nilly.


If blocks in the "miner" chain break the rules of the "dev" chain, then no flag is required. You'll automatically stick with the chain that contains valid blocks, as determined by the implementation that you're running.


I was under the impression that the default temporary GITHUB_TOKEN for forked repos (which is what happens with PRs) was read-only. Isn't that the case?

https://docs.github.com/en/actions/reference/authentication-...


It is. As far as I'm aware issues like these are only problematic if you either manually run a workflow (it uses your credentials) or have a workflow with the "pull_request_target" trigger (uses a token with write access). The latter has a plethora of potential pitfalls and should be avoided if you can.


Indeed, pull_request_target should be avoided.

The better model to use here is "pull_request" to do the work of building/testing a PR, and then a separate workflow that triggers on "workflow_run" to collect the results and attach them to the PR.

It's really not a lot of fun to implement though :/
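
Roughly, the split looks like this - two separate workflow files, with names, paths, and action versions below being placeholders rather than a drop-in recipe:

```yaml
# ci.yml - runs on the untrusted PR code with a read-only token
name: CI
on: pull_request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci && npm test
      - uses: actions/upload-artifact@v3
        with:
          name: pr-results
          path: results/
---
# report.yml - a separate workflow running in the base repository's trusted context
name: Report
on:
  workflow_run:
    workflows: [CI]
    types: [completed]
jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      # Fetching the artifact out of the triggering run is the awkward part: it
      # generally means going through the REST API (e.g. with actions/github-script)
      # rather than a plain actions/download-artifact step.
      - uses: actions/github-script@v6
        with:
          script: |
            // list the triggering run's artifacts via the REST API, download
            // "pr-results", then post or update a comment on the PR (omitted here)
```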


GitHub badly needs to add an abstraction for passing an artifact between workflows. The official recommendation for how to use workflow_run is comically messy (20+ lines of javascript-in-yaml because actions/download-artifact doesn't support fetching artifacts from other workflow runs):

https://securitylab.github.com/research/github-actions-preve...

Kinda hard to expect average users to grok this; running a follow-up workflow in a secure context with some carried-over artifacts should be trivial to do declaratively.


I wonder if GH could/should make it a lot more convenient to implement with some additional abstractions, to encourage the secure approach by making it as easy as the insecure one.


That's the article I used when I implemented exceptions in an LLVM-based compiler, so it's applicable to more than just GCC.


Is your work public? If my article was useful, I'd love to have a look at what you did!


It's public, but I doubt it still compiles against recent LLVM versions! I started it 8 years ago to get a better understanding of how features like classes & operator overloading would work in a JS-like language. It was really fun!

https://github.com/castel/libcastel/blob/master/runtime/sour...

https://github.com/castel/libcastel/blob/master/runtime/sour...

I remember that at the time there were very few resources on personality functions, even in the LLVM docs - I had to do a lot of research before finding your articles, which were extremely helpful!

I got reminded of them yesterday after someone pinged me on a Stack Overflow answer I wrote at the time, asking for an updated link; once I found your long-form article I figured it would be a good topic for HN as well :)

https://stackoverflow.com/questions/16597350/what-is-an-exce...


ahhh, I see you got hit by some of the fallout of my migration from Wordpress to Blogspot. Sorry about that!

I tried to set up HTTP 301 redirects but gave up when I had to pay for it :)


I know you're joking, but I worked at a startup where a massive effort was made to completely rebuild the 1.x website - both backend and frontend. It required a lot of work from everyone, and I remember we even stayed late more than a few times to bring it over the finish line.

So it was a bit disheartening that the founders never bumped the version to 2.x once the rollout was achieved. It's perhaps a bit nitpicky, but it felt like the work wasn't properly appreciated.


Maybe I'm showing my age, but I always just use the date. If I build today, the version will just be 210811.1, and it updates automatically with each build.

I'm a one man shop though so I never branch for major/minor releases.


> I'm a one man shop though so I never branch for major/minor releases.

Honestly, for my one-man projects I use 0. to indicate "there are absolutely no backwards-compatibility guarantees because I'm still fucking around" and 1. to indicate "this is in prod and I'm confident about it" (with the attendant discipline on how semver majors/minors are supposed to be used).


Yes, and 2 is "I probably broke your favorite API."


Probably?


That's regular Semantic Versioning

https://semver.org/

> Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.

> Version 1.0.0 defines the public API. The way in which the version number is incremented after this release is dependent on this public API and how it changes.

> If your software is being used in production, it should probably already be 1.0.0.


Basically, nobody wants to deal with backcompat. Having worked somewhere where we supported stuff more than a decade old, it's not entirely surprising. It's really hard to know what crazy stuff customers are doing with your stuff, and fun new things often die off during the pitch.


Think of React Native being used in production for years while it's on 0.x.x.


Ask me how I can tell you haven't been using this strategy for more than a decade or two. ;)

I say that mostly jokingly, but stuff like this was really annoying around the turn of the century, in a death by a thousand cuts kind of way.

Please, just put the full year in. It's only two more digits, and will prevent the older people that see it from building a little bit of rage up every time they see it.


I've been using it for over 20 years...


Y2K100, aye!


Or even trying to determine which value represents year, month, or day. Internet date format is the only way to go, IMO.


This works well only if you're supporting a linear evolution of releases, no branching or LTS or back-porting.


There’s nothing that says you can’t have patches in calver, e.g. `YY.MM.patch`. So you can definitely have an LTS calver release, or even back-port new features. You just have to do it in the patch version, or adopt some other variation of the scheme, e.g. `YYYY.minor.patch` or `YY.MM.minor.patch`.


Yep. This is what Ubuntu does, roughly. Version 20.04 is LTS and will get security updates until April (04) 2030.


This is the exact strategy we use for our rollouts too. 2021-08-12.01 was just released.


Minor/major makes sense when a company plans to support a major version for a longer period of time; if development is continuous, the date sounds good.


I've come to the same conclusion for the main product, which always moves forward and currently has only a limited set of tightly coupled consumers. If the product gets lots of external consumers, then more deliberate versioning might make sense.

But for common libraries, SemVer feels like a good solution for not breaking the main products, and it helps make developers think about breaking changes etc.


What are you going to do when it is August 11th 2121? Reuse the build number?


Add a 1. in front. It earned it.


not a problem. it's physically impossible for any software to be maintained that long.

you can't prove me wrong


Sure I can. It will just take some time to do so.


If the project actually does last that long, I'm sure it wouldn't be hard to just extend the version by 2 digits to include the full year.


You mean that you're _sure_ that nothing will be dependent on the format of the version string of a >100yo project?

I'd bet everything I own against that


Is it a Mozilla project? Because that is some Mozilla-type version inflation.


Instruct the AI to move the project to a versioning system that supports full calendar year.


This is a solved problem.

[Y2K Programming solutions]: https://en.wikipedia.org/wiki/Year_2000_problem#Programming_...


This is what makes sense to me as well.

It is also much easier to reference when talking with other devs, users, etc.

We all know the calendar, and a date is much easier to remember.

Straight increases like 5, 6, 7 are also easier for users to reference.


That's what JetBrains does.


Unfortunately I've found there often needs to be separate internal and external version numbers.

An internal rewrite where all the "old bugs" are fixed but minimal new features are added may feel like a 2.0 for those who worked on it, but for external customers it's the same tool, with the same functionality, that just maybe looks a little different.

A 2.0 is often heralded with marketing fanfare, so it needs justification.

I'm not saying it's right, or that one rule fits all; I've seen it first hand and feel your pain.


And on the flip side, sometimes internal version 11.3 is a boring release with only small changes since 11.2, but if one of those changes was a feature that marketing cares about then bumping to The New Exciting 12 may be in order.


An opposite(?) example is Windows NT 5.1 / 6.1 (i.e. XP and Windows 7).


I have seen the complete opposite at the previous startup I worked for. The product wasn't even production-ready, yet they would call it version 1.0, version 2.0, and now version 3.0.

Versions were bumped now and then, without real major changes.

They used this as a marketing gimmick to create buzz that something major was being released, but actually it was the same old stuff, just not ready for primetime.


It is so weird how many people seem to have a stake in the version number. Personally I'm all for date or build numbers, and removing version numbers entirely.


Yeah, after 1.x, the 2.0 is one last savior. Beyond that, there is no hope.

