techsystems's comments (Hacker News)

Floorp is a fork by a Japanese developer, though his English is not perfect. It removes all backwards compatibility to speed things up, competing with Thorium to be the best-performing browser, if anyone's interested.


Any article you recommend on this?


Very nice collection, thanks for the share


So just reword it to 'check'


The reason Nvidia is buying now has nothing to do with Arc or GPU competition. There are two main reasons.

1) This year, Intel, TSMC, and Samsung announced their latest factories' yields. Intel was the earliest, with 18A, while Samsung was the most recent. TSMC yielded above 60%, Intel below 60%, and Samsung around 50% (though Samsung's tech is basically a generation ahead and technically more precise), and Samsung has the most room to improve its yields because of how it set up its processes, with 70% as the target. Until last year, Samsung was in second place; with Intel catching up so fast and taking Samsung's position at least for this year, Nvidia bought Intel's stock, which has been getting cheaper since COVID.

2) It's just generally good to diversify into your competitors. Every company does this, especially when the price is cheap.


I am curious where you get your information about Samsung being more “precise”.

I was recently looking into 2nm myself, and based on the Wikipedia article on 2nm, TSMC 2nm is about 50% more dense than the Samsung and Intel equivalents. They aren't remotely the same thing. Samsung 2nm and Intel 18A are about as dense as TSMC 3nm, which has been in production for years.


This information is a bit dated but ...

Since "nm" is meaningless these days, the transistor count/mm2 is below.

As reference: TSMC 3nm is ~290 million transistors/mm2 (MTr/mm2).

             IBM      TSMC   Intel   Samsung
  22nm                       16.50  
  16nm/14nm          28.88   44.67   33.32
  10nm               52.51  100.76   51.82
  7nm                91.20  237.18   95.08
  5nm               171.30    
  3nm               292.21    
  2nm        333.33
https://news.ycombinator.com/item?id=27063034

https://www.techradar.com/news/ibm-unveils-worlds-first-2nm-...


I think the Intel 7nm figure is unrealistic. If it were true, Intel wouldn't be "behind".

According to Wikipedia, Intel 7nm density is ~62 MTr/mm2. I cannot find the source wikichip page mentioned in your reference post.

FWIW, I am not in the semi industry and all my info is from Wikipedia: https://en.m.wikipedia.org/wiki/7_nm_process https://en.m.wikipedia.org/wiki/2_nm_process


> I was recently looking into 2nm myself, and based on wikipedia article on 2nm, TSMC 2nm is about 50% more dense than the samsung and intel equivalent.

I did the math on TSMC N2 vs Intel 18A, and the former is 30% denser according to TSMC.


>2) It's just generally good to diversify into your competitors. Every company does this, especially when the price is cheap.

This definitely isn't a thing that every company does (or even close to every company).


Not every company, but the largest ones do.

Microsoft once owned a decent amount of Apple & Facebook for example.


While #2 is true, there are a myriad of ways to do that without a press release.


> but Samsung's tech is basically a generation ahead and technically more precise

Are you comparing Samsung against Intel here specifically, or also TSMC?


Didn't Japanese camera makers do this and it pushed Olympus' board members to force the selloff of that division?


How do you know Intel 18A yield if this is one of the biggest secrets?


    ndarray = "0.16.1"
    rand = "0.9.0"
    rand_distr = "0.5.0"

Looking good!


I was slightly curious:

    cargo tree
    llm v0.1.0 (RustGPT)
    ├── ndarray v0.16.1
    │   ├── matrixmultiply v0.3.9
    │   │   └── rawpointer v0.2.1
    │   │       [build-dependencies]
    │   │       └── autocfg v1.4.0
    │   ├── num-complex v0.4.6
    │   │   └── num-traits v0.2.19
    │   │       └── libm v0.2.15
    │   │           [build-dependencies]
    │   │           └── autocfg v1.4.0
    │   ├── num-integer v0.1.46
    │   │   └── num-traits v0.2.19 ()
    │   ├── num-traits v0.2.19 ()
    │   └── rawpointer v0.2.1
    ├── rand v0.9.0
    │   ├── rand_chacha v0.9.0
    │   │   ├── ppv-lite86 v0.2.20
    │   │   │   └── zerocopy v0.7.35
    │   │   │       ├── byteorder v1.5.0
    │   │   │       └── zerocopy-derive v0.7.35 (proc-macro)
    │   │   │           ├── proc-macro2 v1.0.94
    │   │   │           │   └── unicode-ident v1.0.18
    │   │   │           ├── quote v1.0.39
    │   │   │           │   └── proc-macro2 v1.0.94 ()
    │   │   │           └── syn v2.0.99
    │   │   │               ├── proc-macro2 v1.0.94 ()
    │   │   │               ├── quote v1.0.39 ()
    │   │   │               └── unicode-ident v1.0.18
    │   │   └── rand_core v0.9.3
    │   │       └── getrandom v0.3.1
    │   │           ├── cfg-if v1.0.0
    │   │           └── libc v0.2.170
    │   ├── rand_core v0.9.3 ()
    │   └── zerocopy v0.8.23
    └── rand_distr v0.5.1
        ├── num-traits v0.2.19 ()
        └── rand v0.9.0 ()

yep, still looks relatively good.


    cargo tree llm v0.1.0 (RustGPT)
    ├── ndarray v0.16.1
    │   ├── matrixmultiply v0.3.9
    │   │   └── rawpointer v0.2.1
    │   │       [build-dependencies]
    │   │       └── autocfg v1.4.0
    │   ├── num-complex v0.4.6
    │   │   └── num-traits v0.2.19
    │   │       └── libm v0.2.15
    │   │           [build-dependencies]
    │   │           └── autocfg v1.4.0
    │   ├── num-integer v0.1.46
    │   │   └── num-traits v0.2.19 ()
    │   ├── num-traits v0.2.19 ()
    │   └── rawpointer v0.2.1
    ├── rand v0.9.0
    │   ├── rand_chacha v0.9.0
    │   │   ├── ppv-lite86 v0.2.20
    │   │   │   └── zerocopy v0.7.35
    │   │   │       ├── byteorder v1.5.0
    │   │   │       └── zerocopy-derive v0.7.35 (proc-macro)
    │   │   │           ├── proc-macro2 v1.0.94
    │   │   │           │   └── unicode-ident v1.0.18
    │   │   │           ├── quote v1.0.39
    │   │   │           │   └── proc-macro2 v1.0.94 ()
    │   │   │           └── syn v2.0.99
    │   │   │               ├── proc-macro2 v1.0.94 ()
    │   │   │               ├── quote v1.0.39 ()
    │   │   │               └── unicode-ident v1.0.18
    │   │   └── rand_core v0.9.3
    │   │       └── getrandom v0.3.1
    │   │           ├── cfg-if v1.0.0
    │   │           └── libc v0.2.170
    │   ├── rand_core v0.9.3 ()
    │   └── zerocopy v0.8.23
    └── rand_distr v0.5.1
        ├── num-traits v0.2.19 ()
        └── rand v0.9.0 ()


linking both rand-core 0.9.0 and rand-core 0.9.3 which the project could maybe avoid by just specifying 0.9 for its own dep on it


It doesn't link two versions of `rand-core`. That's not even possible in Rust (you can only link two semver-incompatible versions of the same crate). And dependency specifications in Rust don't work like that - unless you explicitly override it, all dependencies are semver constraints, so "0.9.0" will happily match "0.9.3".


So there's no difference at all between "0", "0.9" and "0.9.3" in cargo.toml (Since semver says only major version numbers are breaking)? As a decently experienced Rust developer, that's deeply surprising to me.

What if devs don't do a good job of versioning and there is a real incompatibility between 0.9.3 and 0.9.4? Surely there's some way to actually require an exact version?


They are different:

    "0":     ">=0.0.0, <1.0.0"
    "0.9":   ">=0.9.0, <0.10.0"
    "0.9.3": ">=0.9.3, <0.10.0"
Notice how the minimum bound changes each time, and how for the latter two the upper bound excludes the next minor version, since pre-1.0 minor bumps are treated as breaking.

The reason for this is that unless otherwise specified, the ^ operator is used, so "0.9" is actually "^0.9", which then gets translated into the kind of range specifier I showed above.

There are other operators you can use, these are the common ones:

    (default) ^ Semver compatible, as described above
    >= Inclusive lower bound only
    < Exclusive upper bound only
    = Exact bound
Note that while an exact bound will force that exact version to be used, it still doesn't allow two semver-compatible versions of a crate to exist together: if cargo can't find a single version that satisfies all constraints, it will just error.

For this reason, if you are writing a library, you should in almost all cases stick to regular semver-compatible dependency specifications.

For binaries, it is more common to want exact control over versions and you don't have downstream consumers for whom your exact constraints would be a nightmare.
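As a sketch, here's what those requirement styles might look like in a Cargo.toml (crate names and version numbers here are purely illustrative, not recommendations):

    [dependencies]
    rand = "0.9"                # default: ^0.9, i.e. >=0.9.0, <0.10.0
    ndarray = ">=0.15, <0.17"   # explicit range, spanning two 0.x minors
    serde = "=1.0.200"          # exact pin; sometimes fine in binaries, avoid in libraries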


Note that in the output, there is rand 0.9.0, and two instances of rand_core 0.9.3. You may have thought it selected two versions because you missed the _core there.

> So there's no difference at all between "0", "0.9" and "0.9.3" in cargo.toml

No, there is a difference, in particular, they all specify different minimum bounds.

The trick is that these are using the ^ operator to match, which means that the version "0.9.3" will satisfy all of those constraints, and so Cargo will select 0.9.3 (the latest version at the time I write this comment) as the one version to satisfy all of them.

Cargo will only select multiple versions when it's not compatible, that is, if you had something like "1.0.0" and "0.9.0".

> Surely there's some way to actually require an exact version?

Yes, you'd have to use `=`, like `=0.9.3`. This is heavily discouraged because it would lead to a proliferation of duplicate dependency versions, and it isn't necessary unless you are trying to avoid some specific broken release. This is sometimes done in applications, but should basically never be done in libraries.


Sorry, I don't understand the "^ operator" in this context. Do I understand correctly that cargo will basically select the latest release that matches within a major version, so if I have two crates that specify "0.8" and "0.7.1" as dependencies then the compiler will use "0.8.n" for both? And then if I add a new dependency that specifies "0.9.5", all three crates would use "0.9.5"? Assuming I have that right, I'm quite surprised that it works in practice.


It’s all good. Let me break it down.

Semver specifies versions. These are the x.y.z (plus other optional stuff) triples you see. Nothing should be complicated there.

Tools that use semver to select versions also define syntax for defining which versions are acceptable. npm calls these “ranges”, cargo calls them “version requirements”, I forget what other tools call them. These are what you actually write in your Cargo.toml or equivalent. These are not defined by the semver specification, but instead, by the tools. They are mostly identical across tools, but not always. Anyway, they often use operators to define the ranges (that’s the name I’m going to use in this post because I think it makes the most sense.) So for example, “>=3.0.0” means “any version where x >= 3.” “=3.0.0” means “any version where x is 3, y is 0, and z is 0”, which 99% of the time means only one version.

When you write “0.9.3” in a Cargo.toml, you’re writing a range, not a version. When you do not specify an operator, Cargo treats that as if you used the ^ operator. So “0.9.3” is equivalent to “^0.9.3”. What does ^ do? It means two things, one if x is 0 and one if x is nonzero. Since “^0.9.3” has an x of zero, this range means “any version where x is 0, y is 9, and z is >= 3.” Likewise, “0.9” is equivalent to “^0.9.0”, which is “any version where x is 0, y is 9, and z is >= 0.”

Putting these two together:

  0.9.0 satisfies the latter, but not the former
  0.9.1 satisfies the latter, but not the former
  0.9.2 satisfies the latter, but not the former
  0.9.3 satisfies both
Given that 0.9.3 is a version that has been released, if one package depends on “0.9” and another depends on “0.9.3”, version 0.9.3 satisfies both constraints, and so is selected.

If we had “0.8” and “0.7.1”, no version could satisfy both simultaneously, as “y must be 8” and “y must be 7” would conflict. Cargo would give you both versions in this case, whichever y=8 and y=7 versions have the highest z at the time.
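The ^ rule described above can be sketched in a few lines of Rust. This is a toy matcher, not Cargo's real resolver (for one, it ignores the ^0.0.z special case, where the patch version is the breaking component):

```rust
// Parse "x.y.z" into a (major, minor, patch) triple.
fn parse(v: &str) -> (u64, u64, u64) {
    let mut it = v.split('.').map(|p| p.parse::<u64>().unwrap());
    (it.next().unwrap(), it.next().unwrap(), it.next().unwrap())
}

// Does `version` satisfy the caret requirement `req` (written as "x.y.z")?
fn caret_matches(req: &str, version: &str) -> bool {
    let (rx, ry, rz) = parse(req);
    let (vx, vy, vz) = parse(version);
    if rx > 0 {
        // ^x.y.z with nonzero x: same major, (minor, patch) at least (ry, rz)
        vx == rx && (vy, vz) >= (ry, rz)
    } else {
        // ^0.y.z: the minor version acts as the breaking component
        vx == 0 && vy == ry && vz >= rz
    }
}

fn main() {
    // The table from the comment above: "0.9" (= ^0.9.0) vs "0.9.3" (= ^0.9.3)
    for v in ["0.9.0", "0.9.1", "0.9.2", "0.9.3"] {
        println!(
            "{v}: ^0.9.0 -> {}, ^0.9.3 -> {}",
            caret_matches("0.9.0", v),
            caret_matches("0.9.3", v)
        );
    }
}
```

Running this reproduces the table: only 0.9.3 satisfies both requirements, which is why Cargo can unify the two dependents onto a single version.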


Awesome. Thanks for taking the time. Glad to understand all of this better. I feel a bit silly now meticulously going through and changing all of my "0.9.3"s to "0.9" in the past, but at least now I know better.


You're welcome!

It is true that if your code works on z < 3, specifying "0.9" expands the possible set of versions a bit, so it's not useless; one could argue that you should only raise the minimum z when there's a bug you want to make sure you've moved past, and otherwise not restrict yourself, but it's not a big deal either way :)


This doesn't sound right. If A depends on B and C - B and C can each bring their own versions of D, I thought?


Within a crate graph, for any given major version of a crate (eg. D v1) only a single minor version can exist. So if B depends on D v1.x, and C depends on D v2.x, then two versions of D will exist. If B depends on Dv1.2 and C depends on Dv1.3, then only Dv1.3 will exist.

I'm over-simplifying a few things here:

1. Semver has special treatment of 0.x versions. For these crates the minor version behaves like the major version and the patch version behaves like the minor version. So technically you could have v0.1 and v0.2 of a crate in the same crate graph.

2. I'm assuming all dependencies are specified "the default way", ie. as just a number. When a dependency looks like "1.3", cargo actually treats this as "^1.3", ie. the version must be at least 1.3, but can be any semver compatible version (eg. 1.4). When you specify an exact dependency like "=1.3" instead, the rules above still apply (you still can't have 1.3 and 1.4 in the same crate graph) but cargo will error if no version can be found that satisfies all constraints, instead of just picking a version that's compatible with all dependents.


can does not mean must. Cargo attempts to unify (aka deduplicate) dependencies where possible, and in this case, it can find a singular version that satisfies the entire thing.


This doesn't mean anything. A project can implement things from scratch inefficiently but there might be other libraries the project can use instead of reimplementing.


Is this satire, or is there some context behind this comment that I need to know???


These are a few well-chosen dependencies for a serious project.

Rust projects can really go bananas on dependencies, partly because it's so easy to include them


The project only has 3 dependencies, which I interpret as a sign of quality.


I don't know if OP intended satire, but either way it is an absurd comment. Think about how "from scratch" this really is.


How does the context length scaling at 256K tokens compare to Llama's 1M in terms of performance? How are the contexts treated differently?


Unlimited plans are generally grandfathered. I don't know that I've come across any company that didn't honour and grandfather the unlimited plan.


Not answering your question or building anything, but what you're making sounds like what I'm looking for!


I feel this is the only answer needed. It's also very average-person friendly.

