
That's the kind of project I love to see. Actually socially useful :) Keep it up!


Thanks a lot! So far the project hasn't paid off much, but that is not the most important goal. It is about making something really useful for people: in this case, nice cities and neighbourhoods.


President: ok, hit this military base with 100 soldiers to let them know we’re serious

Million-inhabitants city nearby: burns to the ground due to garbage guidance

President: …dig out the plan for world war 3, I guess.


An ICBM is likely to deliver a nuke, which will probably take out a circular area with a 20 km radius. It is not going to be used to target a base with 100 soldiers.


It is if the goal is to deter without launching a city roast fest. That’s one possible step to e.g. answer a first limited strike without starting a disastrous exchange. And US nukes can be set to low yield.


If NATO ever has any intention of deterrence through detonating a nuke, it will be done in exactly the way it's been specifically planned and openly explained: France has a specific nuclear policy, one of the only ones, that allows it to use nukes before an adversary does. It has a specific air-launched nuclear missile that would be used as essentially a giant warning shot.

Nobody would fire a warning shot using an ICBM, because if things are hot enough to require a warning shot, that leaves a whole lot of interpretation up to your adversary who is quite likely to respond with their own ICBMs, before finding out yours were loaded with conventional munitions to make a point.

There are, however, shorter-range ballistic missiles. Russia has a lot of them (the US mostly gave up mid-range ballistic missiles in a nuclear treaty) and has even launched a few with concrete "nuclear warhead simulation masses" at Ukraine.


Or a dozen nukes.


What about the App Store? I never used a Symbian phone but I don’t remember a device that would have allowed as much as the iPhone in the way of custom apps.


I used the hell out of Symbian. There were stores, and telcos would sell apps too. Games were far better on Symbian because it wasn't a touch screen; it was practically a joystick. We had chat apps and dating apps. I was a thirsty college guy who spent a lot of data on these apps but eventually settled for SMS because data was too expensive. Web experiences sucked; my first job was converting websites to jQuery Mobile, which later became obsolete thanks to responsive websites.

But you can't build a business like Uber on Symbian. The iPhone brought about all these marketplace apps. Phones stopped being toys and became more like mini computers.


Arguably all you need to threaten is Search and ads. Remove those, and what happens to the rest?


I don't use my phone when reality is better. Consider that some people's lives suck most of the time.


Believe me, I've been there. But staying on the phone doesn't help fix the root cause of the suck.


(base) sounds like Anaconda. Try typing conda deactivate?


Why is pathlib bad?

Edit: I'm asking because pathlib is as good as a Python lib could be for me. Path manipulations are extremely clear and always safe. What more do you need?
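For reference, a minimal sketch of the kind of manipulation meant here (the file names are made up):

  from pathlib import Path

  # Hypothetical path, purely to illustrate the style of manipulation.
  config = Path.home() / ".config" / "myapp" / "settings.toml"
  print(config.name)    # settings.toml
  print(config.suffix)  # .toml
  print(config.parent)  # e.g. /home/you/.config/myapp
  backup = config.with_suffix(".bak")  # .../settings.bak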


It's broken just as os.path is. Python doesn't work well with file names in principle: it wants everything to be Unicode. That works for many, but if you want reliable code... you just have to throw all of that away.

Also, in the case of pathlib, it adds no value on top of os.path, of which it is a wrapper. Instead, it made the library it wraps worse, because now os.path also needs to know about pathlib to be able to handle path fragments represented as pathlib instances.

All in all, it offers very little utility (a handful of shortcuts) vs increasing the size and memory footprint of "standard" library, complicating dispatch and therefore debugging... it's a bad trade.

Just so you don't get confused: it's not an awful trade. It's not like the sky will fall down on you if you use it. It's just mostly worthless, with negligible downsides.


> Python doesn't work well with file names in principle: it wants everything to be Unicode. That works for many, but if you want reliable code... you just have to throw all of that away.

Windows' APIs use UTF-16 and most file name encodings on Linux are UTF-8. How should Python handle this better?
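For what it's worth, CPython round-trips non-Unicode bytes in file names via the surrogateescape error handler (PEP 383); a small sketch, assuming a typical Linux setup with a UTF-8 locale:

  import os

  # A file name that is not valid UTF-8 (e.g. created by a Latin-1 program).
  raw = b"caf\xe9.txt"
  name = os.fsdecode(raw)           # str containing a lone surrogate: 'caf\udce9.txt'
  assert os.fsencode(name) == raw   # the original bytes survive the round trip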

> Also, in case of pathlib, it adds no value on top of os.path of which it is a wrapper.

Completely disagree. os.path is annoying to use. Treating paths as objects with methods and joining them with / makes my life much easier.

> increasing the size and memory footprint of "standard" library

By a ridiculous amount. pathlib is just another pure Python module with a bunch of simple functions and classes. [1]

> complicating dispatch and therefore debugging

You can simply declare and accept `Union[str, os.PathLike]` and convert the paths to whatever you want at the entrypoints, then use that in your own project. Where is the complexity? I've never seen this make debugging harder, it's just an additional type.

[1] https://github.com/python/cpython/blob/d9fc15222e96942e30ea8...
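A minimal sketch of the entrypoint pattern described above (the function name is made up):

  import os
  from pathlib import Path
  from typing import Union

  def load_config(path: Union[str, os.PathLike]) -> str:
      # Normalize once at the boundary, then work with Path internally.
      p = Path(path)
      return p.read_text(encoding="utf-8")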


> most file name encodings

You just repeated what I said. Just read it again. You already know the answer, you just need to understand it.

> Completely disagree.

And make no worthwhile argument.

> You can simply declare and accept `Union[str, os.PathLike]`

What does this have to do with debugging? Do you know what "debugging" means?


I'm guessing verbosity? It also reads like they don't know why pathlib exists and assume they were created at the same time.

os.path came first, often works by poking the filesystem directly even when it doesn't seem like it needs to (vague memory, not completely certain), and I believe has os-specific quirks (so code won't necessarily work without changes).

pathlib was created later as a pure-Python implementation that allows for manipulating paths without touching the filesystem except when you explicitly tell it to. Because it's also pure Python I don't think it has any OS-specific quirks either, but I haven't explored it in depth. Code should work across operating systems without changes.
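A small illustration of that point: pure paths never touch the disk, so you can even manipulate paths for another operating system.

  from pathlib import PurePosixPath, PureWindowsPath

  # No I/O happens here; these are string-level manipulations only.
  p = PureWindowsPath(r"C:\Users\alice") / "notes.txt"
  print(p.parts)               # ('C:\\', 'Users', 'alice', 'notes.txt')
  q = PurePosixPath("/home/alice/notes.txt")
  print(q.with_suffix(".md"))  # /home/alice/notes.md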

I think I also remember at one point people talking about completely replacing os.path with pathlib, or at least gutting os.path to the essentials that wouldn't work as part of pathlib.


In VSCode (which uses PyRight/PyLance) and Python 3.10:

  JSONObject = None | str | int | bool | list["JSONObject"] | dict[str, "JSONObject"]
  # this type checks
  a: JSONObject = {"a": [1, 2, "7", True, {"false": None}]}
  # this doesn't type check
  b: JSONObject = {"a": [1, 2, "7", True, {"false": object()}]}


Cool, guess they finally fixed this. Must've been in the last ~1 year, give or take. Of course, it relies on quoting your types, which is... a matter of taste, I suppose.


You can also use from __future__ import annotations so the quotes become unnecessary. https://peps.python.org/pep-0563/
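A small sketch of where that helps, e.g. a self-referential class whose annotation would otherwise need quotes:

  from __future__ import annotations

  from dataclasses import dataclass

  @dataclass
  class Node:
      value: int
      # No quotes needed: with PEP 563 the annotation is not evaluated at class
      # definition time, so the forward reference to Node is fine.
      children: list[Node] | None = None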


Does that work with recursive types? I have had mixed results with `from __future__ import annotations` personally, but I haven't written much Python in ~a year or so.


It's worked for many years, but you won't often see it used outside of class definitions because all of the other tools struggle with it (Pylint, Flake8, Pylance, etc. spit out some variation of an undefined variable error).


I don't think it has worked for years. I had used that exact same `annotations` future import, and my recollection was that recursive types still did not work.


It's worked for years. I've been using it the whole time.


Quoting types that are defined later is a wart but it's not very bad. VSCode's UI will happily handle it as if the quotes weren't there.


For the tuple example:

  from typing import TypeVar
  
  T, U, V, W = TypeVar('T'), TypeVar('U'), TypeVar('V'), TypeVar('W')
  
  def concatenate(a: tuple[T, U], b: tuple[V, W]) -> tuple[T, U, V, W]:
      return a + b
For the generic type transformation example, I'm not sure what you mean:

  from typing import Any, Callable
  
  Transformer = Callable[[dict[str, Any]], dict[Callable, Any]]
This seems to match your question but it's really weird.


Your tuple example only works if both tuples have two elements. I specifically mentioned arbitrary fixed-size tuples (as in, tuples with an arbitrary non-variable length).

Your generic type transformation example also doesn't come close to what Typescript does. The resulting dict will not have known keys based on the keys of the input dict. In Typescript I can write a function that takes an object with known keys, and it returns an object with those same keys having their values mapped to a different type, with the keys still known. I threw together a quick example - just look at the type of resultA/resultB by hovering over the variables[0].

But both are great examples - they are probably the closest you can get in Python, and they are so far removed from the thing I want to represent that they are completely useless.

[0]: https://www.typescriptlang.org/play?#code/MYewdgzgLgBAhjAvDA...


"Arbitrary fixed-size tuples" probably don't have a widely accepted meaning :) I read it as "size known when compiling the function".

The second example is cool. But I can't find a good practical use case for either example.

If you have a collection that's both heterogeneous and whose [size/key set] is statically known, when would it make sense to apply such generic transformations to them? This sounds like you have tuples or dataclasses where all elements have different meanings (since their types are fixed and different) _and_ you want to treat them like a generic collection _and_ you need the type checker to infer the result type.

The main use of tuples or dataclasses or `NamedTuples` is to pass or return values to/from functions without dealing with long lists of arguments. The elements aren't in the same category, it doesn't make sense to process them as one big collection, they mean different things.

(Also I think you made a mistake in your previous message, you wrote "an object with each key turned into a function" but it's the values that change types here.)


> "Arbitrary fixed-size tuples" probably don't have a widely accepted meaning :) I read it as "size known when compiling the function".

That is exactly what I'm talking about - the size of the tuples is known statically.

> The second example is cool. But I can't find a good practical use case for either example.

There are many interesting use cases in libraries, especially for some of the more esoteric features. Everything can be used for additional type safety.

> If you have a collection that's both heterogeneous and whose [size/key set] is statically known, when would it make sense to apply such generic transformations to them? This sounds like you have tuples or dataclasses where all elements have different meanings (since their types are fixed and different) _and_ you want to treat them like a generic collection _and_ you need the type checker to infer the result type.

Simple example - I have functions that return a Rust-like Result type, and I want to transform that into a different tuple-based format using a decorator. The transformation itself is static, but I can't write one function that handles it all, because Python's type system is simply not developed enough. Something that would be incredibly easy in Typescript.
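A purely hypothetical sketch of the kind of adapter meant here (the real Result class and legacy format aren't shown in the thread); a fixed (value, error) shape like this is typeable with ParamSpec, and the pain described above only starts once the output is a tuple whose element types have to be concatenated:

  from typing import Callable, Generic, ParamSpec, TypeVar

  P = ParamSpec("P")
  T = TypeVar("T")
  E = TypeVar("E")

  class Result(Generic[T, E]):  # hypothetical Rust-like Result
      def __init__(self, value: T | None = None, error: E | None = None) -> None:
          self.value, self.error = value, error

  def as_tuple(fn: Callable[P, Result[T, E]]) -> Callable[P, tuple[T | None, E | None]]:
      # Adapt a Result-returning function to a legacy (value, error) convention.
      def wrapper(*args: P.args, **kwargs: P.kwargs) -> tuple[T | None, E | None]:
          r = fn(*args, **kwargs)
          return r.value, r.error
      return wrapper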

> The main use of tuples or dataclasses or `NamedTuples` is to pass or return values to/from functions without dealing with long lists of arguments. The elements aren't in the same category, it doesn't make sense to process them as one big collection, they mean different things.

But I have a use case for exactly this feature. Why should the language limit me? Why should I implement x functions that take different tuple lengths, with me having to choose the correct one for each use case, when I could write one function that does all?

> (Also I think you made a mistake in your previous message, you wrote "an object with each key turned into a function" but it's the values that change types here.)

Sure, though I could also literally write a Map that turns all object keys into functions.


> But I have a use case for exactly this feature. Why should the language limit me? Why should I implement x functions that take different tuple lengths, with me having to choose the correct one for each use case, when I could write one function that does all?

If the PSF ran a poll for the most-wanted type checking features, I don't think this would come close to first. This sounds very niche. The people working on typing seemed very busy in the last few versions.

> Simple example - I have functions that return a Rust-like Result type, and I want to transform that into a different tuple-based format using a decorator. The transformation itself is static, but I can't write one function that handles it all, because Pythons type system is simply not developed enough. Something that would be incredibly easy in Typescript.

Returning errors as values isn't really how you're supposed to use Python though. And why is there a second tuple-based format that does the same thing?

It also smells slightly off that the rest of the code takes an object that's exactly similar to the first function's result, but with a transformation uniformly applied over the values. Shouldn't the first part's output and the second part's inputs both be clearly declared independently of each other? And then wouldn't it be an extremely niche case that both types are identical except for one transformation applied to all values? Is it worth the language complexity and a dedicated function?


> If the PSF ran a poll for the most-wanted type checking features, I don't think this would come close to first. This sounds very niche. The people working on typing seemed very busy in the last few versions.

Sure, but while it's not possible to type basic functions like ones that concatenate tuples, I can say that Python's typing system is not superior to Typescript's.

> Returning errors as values isn't really how you're supposed to use Python though.

Okay, so how am I supposed to handle non-exceptional errors? Because using exceptions for that kind of thing absolutely isn't good practice.

> And why is there a second tuple-based format that does the same thing?

Legacy. Typescript allows me to do refactoring of things like these step by step and very easily. Python doesn't, because it's inflexible.

> It also smells slightly off that the rest of the code takes an object that's exactly similar to the first function's result, but with a transformation uniformly applied over the values. Shouldn't the first part's output and the second part's inputs both be clearly declared independently of each other? And then wouldn't it be an extremely niche case that both types are identical except for one transformation applied to all values? Is it worth the language complexity and a dedicated function?

Are we taking apart my code now or what? I can't stop the world and focus only on refactoring things to be neat and tidy for months on end. But I can improve individual parts, bit by bit - and in a language with a better typing system, I can do so way easier.


> Sure, but while it's not possible to type basic functions like ones that concatenate tuples, I can say that Pythons typing system is not superior to Typescripts.

That's not really fair. The uses that make sense when considering Python's convention ("Pythonic" code, nebulous but usually well-understood) are supported.

I think what confused me about these examples is that they imply multiple values that have completely different meanings, but all get processed as equals anyway. That was before you talked about refactoring old code though.

> Okay, so how am I supposed to handle non-exceptional errors? Because using exceptions for that kind of thing absolutely isn't good practice.

If an operation on a homogeneous collection can say "nope" for some values and process others, the values would be typed "T | None". If the data isn't really a collection but a structured mapping, in general, attributes would be made optional on a case-by-case basis. If all attributes happen to be optional but the mapping itself is non-optional, that sounds more like an accident of this specific case than something we should complicate a language over. If this happens over a whole codebase, I guess I feel for you. Maybe that's when it makes sense to give up a bit of static typing and treat these values a bit more like data and a bit less like separate arguments, no matter what kind of complicated typing the language can do.

> Legacy. Typescript allows me to do refactoring of things like these step by step and very easily. Python doesn't, because it's inflexible.

> Are we taking apart my code now or what?

Honestly yes, this example seems so unusual that it doesn't make sense to debate it without knowing concretely what's happening in your code that this needs to be supported.

I guess that this specific example would get easier. But I would hardly call one type system superior based on that.

Edit: my final opinion on this is "this is something that's technically possible if you follow language and conventions to the letter, but with experience you see that it's a bad idea that won't fit well with the language and you should change the design to avoid it". It happens in all languages IMO.


> That's not really fair. The uses that make sense when considering Python's convention ("Pythonic" code, nebulous but usually well-understood) are supported.

You're focussing on a very specific part of what I wrote. Just because my specific example consists of a Result type being transformed into a Tuple, it doesn't mean that the basic use case of "concatenate two tuple types" is so far out there.

> If an operation on a homogeneous collection can say "nope" for some values and process others, the values would be typed "T | None". If the data isn't really a collection but a structured mapping, in general, attributes would be made optional on a case-by-case basis. If all attributes happen to be optional but the mapping itself is non-optional, that sounds more like an accident of this specific case than something we should complicate a language over. If this happens over a whole codebase, I guess I feel for you. Maybe that's when it makes sense to give up a bit of static typing and treat these values a bit more like data and a bit less like separate arguments, no matter what kind of complicated typing the language can do.

But the static typing is extremely helpful, it prevents many kinds of errors. Not being able to use it for these kinds of things makes Python a worse language, no matter how you cut it (in the sense that it would be a better language if you could).

> Honestly yes, this example seems so unusual that it doesn't make sense to debate it without knowing concretely what's happening in your code that this needs to be supported.

Again, my specific example doesn't matter. The use case is "function takes in two tuple types and returns a concatenated version". That's something a type system should be able to handle.

> I guess that this specific example would get easier. But I would hardly call one type system superior over that.

On what metric besides expressiveness would you rate type systems?


> I specifically mentioned arbitrary fixed-size tuples (as in, tuples with an arbitrary non-variable length).

This is wrong. Again, arbitrary fixed-size tuples are equivalent to structs with an arbitrary number of properties. Languages shouldn't do this; it destroys the nature of what a TUPLE is, which is essentially just a struct with no names.

The concept you are going for is isomorphically encapsulated by ANOTHER type:

    List[Any]
You should be using the above type to encode what you want conceptually.

That being said, if JavaScript has variadic tuples then it's not a very good type system IMO. It encodes redundant concepts. Why have a tuple with variadic arguments when I have arrays that do the exact same thing?


> This is wrong. Again, Arbitrary fixed-size tuples are equivalent to structs with an arbitrary amount of properties. Languages shouldn't do this, it destroys the nature of what a TUPLE is which is essentially just a struct with no names.

Okay, that might be your personal feelings on the topic. But do you understand the concept of "generic functions"? Sometimes you have to apply generic transforms to data. Being able to correctly express your transformations in a type system isn't "wrong", it's useful.

> List[Any]

Sorry, but I really think you don't understand what I'm talking about. If I write a function that handles tuples of arbitrary length and that function returns a transformed version of that tuple, I keep the information about individual tuple elements. This is thrown away in a list.

> That being said if javascript has variadic tuples then it's not a very good type system imo. It encodes redundant concepts. Why have a tuple with Variadic arguments when I have Arrays that do the exact same thing?

Arrays don't do the same thing, so they are not redundant concepts. Tuples have elements in specified positions with specified types. Arrays have one type (possibly a union type) over many elements.


>Okay, that might be your personal feelings on the topic. But do you understand the concept of "generic functions"? Sometimes you have to apply generic transforms to data. Being able to correctly express your transformations in a type system isn't "wrong", it's useful.

There's nothing like this in any type system I've seen. A struct with a generic amount of properties? Nonexistent. This isn't personal. This is the definition of a tuple. A tuple is a struct with no names. It is not a personal opinion.

You can have generic functions that operate on generic types but there's no such thing as a struct with generic amount of properties. Closest thing is a list.

>Sorry, but I really think you don't understand what I'm talking about. If I write a function that handles tuples of arbitrary length and that function returns a transformed version of that tuple, I keep the information about individual tuple elements. This is thrown away in a list.

This isn't an opinion. There's no such thing as tuple types of arbitrary length unless the implementer decides to get hand wavy with the definition of what a tuple is.

What you're talking about is only possible with dependent types. Very very few languages support this but the risk of doing this is it makes type checking undecidable. It's also extremely challenging to program this way.

Does TypeScript support dependent types? Probably, but that's outside the realm of normal programming; it most likely exists as obscure tricks. You're getting into Idris, proof checkers and such.

Imagine this:

    func(x: Array[N], y: Array[M]) -> Array[M + N]
where M and N are the sizes of the arrays. It's called dependent types because types are getting mixed with program-level terms.

This is essentially what you need, but you want this level of type checking with structs/tuples. It's not just a "generic variable"; it is much more than that:

    func(x: Tuple[*args1], y: Tuple[*args2]) -> Tuple[*(args1 + args2)]
There's nothing wrong with this, but once you get into this it's beyond traditional type systems. Practical programming rarely ventures too far into this world since it's really hard to fully prove even trivial things. You'll see it's bringing the execution of programs into the type checking level.

Maybe typescript has some shortcut that makes this level of type checking available for tuples, maybe that's what you're getting at. Unlikely that dependent types are supported generically.

If TS does support dependent types, this is definitely something I did not know about. It does change the equation, but I suspect it's very much outside normal usage of the language.

>Arrays don't do the same thing, so they are not redundant concepts. Tuples have elements in specified positions with specified types. Arrays have one type (possibly a union type) over many elements.

Arrays are the thing you want for variadic containers. For memory-optimized languages like Rust or C++, arrays are defined with a size. Array[5] is a different type than Array[3].

You can define an "interface" that accepts generic arguments to arrays:

    func(a: Array[], b: Array[]) 
but you can't define the function above where a function creates a new type that's dependent on the internal types of a and b.


> There's nothing like this in any type system I've seen. A struct with a generic amount of properties? Nonexistent. This isn't personal. This is the definition of a tuple. A tuple is a struct with no names. It is not a personal opinion.

I have literally shown you two type systems that have this feature. Why do you ignore them?

> You can have generic functions that operate on generic types but there's no such thing as a struct with generic amount of properties. Closest thing is a list.

Why are you acting like TypeVarTuples don't exist?

> This isn't an opinion. There's no such thing as tuple types of arbitrary length unless the implementer decides to get hand wavy with the definition of what a tuple is.

Again, why are you acting like TypeVarTuples don't exist?

> Arrays are the thing you want for variadic containers. For memory-optimized languages like Rust or C++, arrays are defined with a size. Array[5] is a different type than Array[3]

No, they are not, and I don't understand how you still don't get that. Python arrays don't carry any information about their length in their type, Python tuples do. They are not the same.

> but you can't define the function above where a function creates a new type that's dependent on the internal types of a and b.

Python literally already supports typing a function that creates a type that's dependent on the internal types of a and b. Why do you keep claiming it doesn't?


Tuples are already fixed size by nature, so adding a redundant "fixed-size" in that description was confusing. I also thought you meant a predefined size like always 2-tuples or always 3-tuples.


> Tuples are already fixed size by nature, so adding a redundant "fixed-size" in that description was confusing.

No, you can type variable-length tuples in Python. A variable int tuple, for example, can be typed as Tuple[int, ...].

You can't concatenate two variable-length tuples, which makes sense - where would the cutoff be? But you should absolutely be able to concatenate two fixed-size tuples, and it's very limiting that you can't.
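To make the distinction concrete:

  # Matches int tuples of any length: (), (1,), (1, 2, 3), ...
  xs: tuple[int, ...] = (1, 2, 3)

  # Fixed size: exactly one int followed by one str.
  pair: tuple[int, str] = (1, "a")

  # The missing piece: a concatenate() typed over two such fixed-size tuples
  # of arbitrary (but statically known) lengths.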


> A variable int tuple, for example, can be typed as Tuple[int, ...].

That's a type that matches tuples of any length, not a variable-length tuple. The size of a tuple can't be changed. A variable-length tuple doesn't even really make sense, what you'd want there is a list.

> You can't concatenate two variable-length tuples, which makes sense - where would the cutoff be? But you should absolutely be able to concatenate two fixed-size tuples, and it's very limiting that you can't.

This whole statement doesn't make sense. I'm assuming you're still talking about type definitions and not actually tuples.


> That's a type that matches tuples of any length, not a variable-length tuple.

A tuple with a type that matches variable lengths of tuples is a variable-length tuple for that piece of code. You're free to show me some official definitions that proves this wording false, but until then it's useless nitpicking. Though you should probably take that up with Guido, who also calls them variable-length tuples: https://github.com/python/typing/issues/30

> This whole statement doesn't make sense. I'm assuming you're still talking about type definitions and not actually tuples.

The statement makes perfect sense, thank you. If you have trouble understanding my messages without me repeating the whole definition every time, maybe just skip them.


I think they've interpreted "variable length tuple" as "a tuple whose length can change", not "a tuple whose length could be one of multiple options".

The former is of course not possible with tuples being immutable, which is why they're talking about lists.


Yeah, that seems likely. I'm not sure how to express the concept aside from the name I've seen used in the community (and my many subsequent explanations), considering I was explicitly talking about typing.


I don't know either. Since the alternative interpretation is a contradiction in terms, you'd think this name would cover the intended meaning.

Maybe something like "unknown length tuple"? ChatGPT suggested "arbitrary-length" or "undetermined-length" as synonyms; that could be a more easily understood expression perhaps.


Those are good suggestions, thanks!


You mean that the tuples' size is statically known at the calling site, while your message could be interpreted as the size being statically known in the callee.

I think this is clearer. The statement "arbitrary fixed-size tuples" sounded a bit like "an immutable mutable variable". It doesn't really say what's arbitrary about the tuples and in what context the size is fixed.


Considering the opposite is called a "variable-length tuple", and I wanted to express that I'm talking about arbitrary tuples with non-variable length, what wording would have made this clearer?


What's missing is in which context the tuple's length is variable, and in which context it is fixed. You can have a tuple size fixed everywhere (because the callee sets it) or a tuple size fixed at each call (and propagated to the callee statically).

"Arbitrary" doesn't really help because it could refer to the elements' values, to their types, or to the tuple's length. Also "arbitrary" and "variable-length" sound like synonyms to me.

Guido might use some expressions in the context of Python steering discussions but that doesn't make them less obscure for the rest of us who read C++ docs every day instead.


> What's missing is in which context the tuple's length is variable, and in which context it is fixed.

Simple example: a function has a parameter whose type is "variable-length tuple of int". You can pass any tuple in that is known to have 0..n elements, all of type int. What would you have me call that, other than the name I've seen used in discussions on this feature?

> "Arbitrary" doesn't really help because it could refer to the elements' values, to their types, or to the tuple's length.

Read it as (arbitrary (fixed-size tuples)). It was meant to forgo answers describing functions with known tuple sizes.


> Simple example: a function has a parameter whose type is "variable-length tuple of int". You can pass any tuple in that is known to have 0..n elements, all of type int.

And n is fixed at the calling site, right? I wonder if something like "TypeVar, but for a list of type arguments" could solve your problem.

What's funny is that this is already kind of implemented in `typing.Concatenate`, but only for function parameters [1], not for type hint parameters.

Anyway, I would have written "a well-typed function that concatenates two arbitrary tuples whose size is statically known at the call site". Can't really remove "at the call site" or "statically known" without being ambiguous.

Edit: just found out about `TypeVarTuple`. So really we're only missing `concatenate`.

[1] https://docs.python.org/3/library/typing.html#typing.Concate...


> And n is fixed at the calling site, right? I wonder if something like "TypeVar, but for a list of type arguments" could solve your problem.

Yep, and TypeVarTuple should - all the syntax etc. is in place, there is an Unpack operator for TypeVarTuples, allowing you to e.g. append or prepend individual types to a TypeVarTuple. But you can't unpack more than one TypeVarTuple in an expression, it's specifically disallowed - so I can't properly type my function.
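A small sketch of what does work (Python 3.11, or typing_extensions on 3.10): prepending a single type is fine, while unpacking two TypeVarTuples in one expression is rejected per PEP 646.

  from typing import TypeVarTuple, Unpack

  Ts = TypeVarTuple("Ts")

  def prepend(head: int, rest: tuple[Unpack[Ts]]) -> tuple[int, Unpack[Ts]]:
      return (head, *rest)

  # prepend(0, (1.5, "a")) is inferred as tuple[int, float, str].
  # Not allowed: tuple[Unpack[Ts1], Unpack[Ts2]] -- at most one unpacked
  # TypeVarTuple may appear in a single type expression.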


I'd call that function: polymorphic over tuple length.


That kind of works. I guess I keep thinking about it from the perspective of the type itself instead of the function that uses it.


It can also get confusing, because in some languages there is no general notion of being a tuple or n-tuple if you wish. Even if they are casually all called tuples.

For example in Haskell 2-tuple and 3-tuple are simply distinct types, as distinct as Int is from String. You can't speak to the type system about "all n-tuples".


The tuple thing requires variadic generics from my understanding.

I don't think variadic generics are supported in most statically typed languages. The only one I can think of right now that supports this is C++.


Typescript supports it too (quick example[0]) :) and Python actually as well, but currently you can't unpack two TypeVarTuples in the same type expression: https://peps.python.org/pep-0646/

[0] https://www.typescriptlang.org/play?#code/C4TwDgpgBAglC8UDaA...


Wow this is very cool thanks for sharing!


You're welcome! That's why I'm very excited about Typescript, the system is very powerful :)


The Chrome versions of the first few years were so nice to use. It was the _lightest_ major browser for a time. It's insane how it has drifted since then.


Has it drifted?

I don't see it. I think all the other browsers just had to become light and fast too. Even Microsoft was forced to say goodbye to IE, and instead based Edge on Chromium. And tech people were eventually able to switch back to Firefox because it got much faster too.

Google wanted a world where all browsers were light and fast in order to efficiently run complex webapps -- and they achieved that. Kudos.


Chrome is often criticized for overusing RAM. Personally I stopped using it a couple years ago, but when I stopped, it was very far from light; I remember it freezing for a few seconds for lack of RAM in a way other browsers (Firefox with multiprocessing, Edge before it got rebuilt over Chromium) didn't.

The original Chrome just felt like a barebones window to the Internet. Though I agree that Firefox et al. became much less sluggish over time. (Is that only their performance improvements or did hardware get better faster than they grew?)

Also maybe "light" and "fast" shouldn't be lumped together. Chrome can definitely be fast when it has enough resources. That and sandboxing seem to make it much _heavier_ in RAM.


As a web developer I also have to tell you that my industry has gotten more cavalier about using resources. Unless your benchmark is browsing sites that you know have not changed in 15 years, the heaviness you feel could be from development teams using shiny new frameworks.


Chrome isn't using all that RAM.

The web pages you visit are.


Did it really?

Because browsers got good, the web got orders of magnitude more complex. If you try loading a modern web page in an old version of Chrome, you'll see just how much faster Chrome has gotten.

Or alternatively, try viewing an old webpage in new Chrome. It's still super light and zippy.


Yes, feature creep has happened in a really big way because there is an obvious profit incentive to Google if every last bit of computing happens in-browser. Glossing over the thorny topics like “my browser shouldn’t care what hardware I run it on”, the Web* set of standards hasn’t stopped ballooning since the release of chrome. WebRTC made sense. But WebUSB? WebGPU? WebAssembly? etc. etc. Each can have interesting use cases individually, but in aggregate they have become a whole second operating system filled with compromise and bloat.


It also looked much nicer. These thick curved tab decorations, unnecessary ovals everywhere, yuck.

