Hacker News | beders's comments

Re: HTTP Flows.

As we know networks never fail, read timeouts aren't a thing and error handling is not necessary as networks never fail and read timeouts aren't a thing and error handling is not necessary as networks never fail and read timeouts aren't a thing...


Yea! Just like how the electricity grid of an entire country is guaranteed to be operating every day. And if it weren't, we'd be quick to find out what the cause is.

    #[table(name = person, public)]
    pub struct Person {
        name: String,
    }
No, just say no. Do not conflate your relational database schema with however you treat that data in your app. You want explicit control, not annotation magic.


This sounds like a fundamental disagreement with using ORM. Are you advocating for always writing queries (not excluding the possibility of a query builder) over treating database rows as application-side entities that can have CRUD operations performed on them / persisted?


yeah, I wouldn't recommend trying to do this with pure Java but you could pass around method handles for that purpose.

You certainly would want to use an `interface` and that means you need an object. It could be an object that has no fields though and receives all data through its methods.

But it does go against the spirit of objects: You want to make use of `this` because you are in the land of nouns.


The issue is names.

Every darn little thing in Java needs a name.

If there is no good name, that's a hint that maybe you don't need a new type.

Obligatory Clojure example:

    (defn full-name [{:keys [first-name last-name]}] 
          (str first-name " " last-name))
This defines a function named `full-name`. The stuff between [] is the argument list. There's a single argument, and it has no name. Instead, destructuring is used to access the keys `:first-name` and `:last-name` of the map passed in (so the type of the unnamed argument is just a map).

This function works for anything that has the keys `:first-name` and `:last-name`.

There's no need to declare a type ObjectWithFirstNameAndLastName. It would be quite silly.


> Every darn little thing in Java needs a name.

Don't all types need names, regardless of what language you use?

Look at typescript, and how it supports structural typing. They don't seem to have a problem with names. Why do you think Java has that problem when nominal type systems simplify the problem?

> There's no need to declare a type ObjectWithFirstNameAndLastName. It would be quite silly.

Naming things is hard, but don't criticize typing for a problem caused by your lack of imagination. A basic fallback strategy to name specialized types is to add adjectives. Instead of ObjectWithFirstNameAndLastName you could have NamedObject. You don't need to overthink it, with namespaces and local contexts making sure conflicts don't happen.

There are two mindsets: try to work around problems to reach your goals, and try to come up with any problem to find excuses to not reach your goals. Complaining about naming sounds a lot like the second.


> Don't all types need names, regardless of what language you use?

No.

- In very dynamic languages (like javascript), most types arguably don't have names at all. For example, I can make a function to add 2d vectors together. Even though I can use 2d vectors in my program, there doesn't have to be a 2d vector type. (Eg, const vecAdd = (a, b) => ({x: a.x+b.x, y: a.y+b.y}) ).

- Most modern languages have tuples. And tuples are usually anonymous. For example, in rust I could pass around 2d vectors by simply using tuples of (f64, f64). I can even give my implicit vector type functions via the trait system.

- In typescript you can have whole struct definitions be anonymous if you want to. Eg: const MyComponent = (props: {x: number, y: string}) => {...}.

- There's also lots of types in languages like typescript and rust which are unfortunately impossible to name. For example, if I have this code:

    #[derive(Eq, PartialEq)]
    enum Color { Red, Green, Blue }

    fn foo(c: Color) {
        if c == Color::Red { return; }

        // What is the type of 'c' here?
    }
Arguably, c is a Color object. But actually, c must be either Color::Green or Color::Blue. The compiler understands this and uses it in lots of little ways. But unfortunately we can't actually name the restricted type in the program.

Rust can do the same thing with integers - even though (weirdly) it has no way to name an integer in a restricted range. For example, in this code the compiler knows that y must be less than 256 - so the if statement is always false, and it skips the if statement entirely:

https://rust.godbolt.org/z/3nTrabnYz

But - it's impossible to write a function that takes as input an integer that must be within some arbitrary range.


> Arguably, c is a Color object. But actually, c must be either Color::Green or Color::Blue. The compiler understands this and uses it in lots of little ways. But unfortunately we can't actually name the restricted type in the program.

I think that's less a question of whether you can, and more of whether you should design it that way... (I'll use TypeScript here for the simpler syntax)

It'd be perfectly fine to do something like this:

  type ColorR = 'red';
  type ColorG = 'green';
  type ColorB = 'blue';
  
  type ColorRGB = ColorR | ColorG | ColorB;
But which constraint should your new type ColorGB (your variable c) adhere to?

  // constraint A
  type ColorGB = ColorG | ColorB;

  // constraint B
  type ColorGB = Exclude<ColorRGB, ColorR>;
I'd argue if the type ColorGB is only needed in the derived form from ColorRGB within a single scope, then just let the compiler do its control flow analysis, yes - it'll infer the type as constraint B.

But if you really need to reuse the type ColorGB (probably some categorization other than all the colors), then you'd need to pay close attention to your designed constraint.


I really wish I could do in rust what you're doing here in typescript.

Say I have some enum like ColorRGB here. In some contexts, only a limited subset of variants are valid - say, ColorGB. There's a few ways to code this - but they're all - in different ways - horrible:

1. Use ColorRGB in all contexts. Use asserts or something to verify that the value is one of the expected variants. This fails at expressing what I want in the type system - and the code is longer, slower and more error prone as a result.

2. Have two enums, ColorRGB and ColorGB. ColorRGB is defined as enum ColorRGB { Red, Restricted(ColorGB) }. This lets me encode the constraint - since I can use ColorGB explicitly. But it makes ColorRGB harder to use - since I need to match out the inner value all over the place.

3. Have two enums, ColorRGB and ColorGB which both have variants for Green and Blue. Implement conversion methods (impl From) between the two types. Now I have two types instead of one. I have conversions between them. And I'll probably end up with duplicate methods & trait impls for ColorRGB and ColorGB.

Luckily this doesn't come up that often. But - as your typescript example shows - it can just be expressed directly in the type system. And LLVM already tracks which variants are possible throughout a function for optimisations' sake. I wish rust had a way to express something like Exclude<ColorRGB, ColorR>.


The named integer range thing is interesting. I guess it depends on what you goal is. Could you use asserts? Could you wrap the integer in an object and embed the restriction logic there?


Doable in TypeScript[0], but I'd question whether that's really necessary.

[0]: https://www.typescriptlang.org/play/?#code/C4TwDgpgBAogdgVwL... from StackOverflow[1]

[1]: https://stackoverflow.com/questions/39494689/is-it-possible-...


> but I'd question whether that's really necessary.

I think it's more useful in a language like rust, because the compiler can use that information to better optimize the emitted assembly.


Even in compiled languages I think the benefits won't outweigh the difficulties in compiler optimization.

It starts with the question of how I would like to design my sequences:

- Is it the range (0, 10] in steps of 1?

- Is it the range (0, 2^10) in steps by the power of 2?

- Is it the range (0.0, 2PI] in steps of EPSILON?

How would the compiler engineer generalize this optimization?

And then there's the question of whether I'd really be able to define them that precisely before runtime. Most applications are just dealing with lightweight structures, mostly serialized JSON nowadays, and even there, there are enough fuck-ups[0] where such an optimization wouldn't help at all.

I can imagine the places where they really matter are some heavy memory intensive data structures like deep neural networks, graphics or video and the like - for the time being they're just dealing with tensors of data type floatX, and that seems to be fine AFAIK.

I mean, it'd be really nice if the smaller memory footprint came out of the box during compilation. But all the CLI tools written in rust certainly don't have the use case to justify putting this complication on the shoulders of compiler research.

[0]: https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...


You could do it with asserts or as a wrapped type, but both of those approaches are only checked at runtime. That means you get runtime errors instead of compile-time errors. It also limits the compiler's ability to take advantage of the constraint to optimise. Asserts also need to be added everywhere - by both you and the compiler. And wrapped types are annoying to use, since you lose access to all the standard arithmetic operators.

It would be better if the type system could encode types like this directly. Better for ergonomics and better for optimisation.


"Most modern languages have tuples. And tuples are usually anonymous." Sure! And I LOVE ending up with var names that are always Item1 and Item2... Very descriptive!


Nobody’s saying anonymous tuples are necessarily a good idea. Just that lots of languages provide them.


All types don't need names, exactly like the Clojure example shows. There is no "type" for the argument; it's likely just a map under the hood. Maps with keywords are used broadly across Clojure projects as a way of passing groups of data around, and no, you don't have to name that arbitrary collection of data. Rich Hickey has an amazing presentation on the silliness of a commonly used Java web request library, where the vast majority of object types required could have just been little nested maps instead of a ton of bespoke types that make it difficult to assemble and manipulate all the bits. The interface he roasts could be completely discarded with nearly no benefit lost in terms of readability or usability. Hickey is also famous for saying he would rather have a few data structures and 100 little algorithms than 10 data structures and 10 algorithms per data structure.


> Hickey is also famous for saying he would rather a few data structures and a 100 little algorithms instead of 10 data structures and 10 algorithms per data structure

This is a quote from Alan J. Perlis, not Rich Hickey, and it's certainly not what Rich is famous for.


I think Java culture had something to do with the ridiculously verbose names, but even more so the prevalence of Factory and Singleton paradigms in Java created these issues. Maybe because it was the first OO language that a lot of procedural coders came to in the 90s. Those patterns became sort of escape hatches to avoid reasoning about ownership, inheritance and scope. They're still common patterns for emergencies in a lot of ECMA-whatever languages resembling Java: You need a static generator that doesn't own the thing it instantiated, and sometimes you only need one static object itself... but one reaches for those tools only as a last resort, after thinking through what other ways you could structure your code. The super-long-name thing in Java always felt to me like people trying to write GOTO/GOSUB type procedural programs in an OO language.


For whatever reason Java gets the blame for what was already common in Smalltalk, C++, Clipper 5, Object Pascal, Actor, Eiffel, Objective-C, ... before the Oak idea turned into Java.

To the point many think the famous patterns book used Java, when it is all about Smalltalk and C++ patterns.


> For whatever reason Java gets the blame for what was already common (...)

Java's problem is that it's hugely successful, and for some it's the only language they ever experience in their formative years. Thus, because a poor workman always blames his tools, Java becomes the root of all evil.


That and the historical illiteracy the industry suffers from.


> I think Java culture had something to do with the ridiculously verbose names

People complain about this all the time but I'd rather take a verbose name than not knowing what is going on. Sometimes these naming conventions help.

Digging into older poorly named, structured and documented codebases is not fun.


Definitely. If you have functions that only do one thing and are only called a few other places in the code, then by all means, name them explicitly. And I'd rather see that than a bunch of the same inlined anonymous function. I'm not really knocking the naming patterns as much as the data/logic patterns that required them. Factories are not a good thing. They usually imply that globals need to hold references. Obviously, so do Singletons. Global lifecycles in general are just easy to create and really hard to manage on your own.

Good comments are better than good function names.


> Factories are not a good thing. They usually imply that globals need to hold references.

No. Your statements are fundamentally wrong at many levels. Factories are tools to instantiate objects of a specific type a certain way. They are constructors that don't suffer from the limitations of constructors. A factory does not impose any requirement on object life cycle.


How you use a factory matters. A factory that sits as a static function inside the same class it's creating is one thing. It may just be a nicer interface to instantiate the class than the constructor itself.

On the other hand, it may do something with the object it's instantiating before passing it back. Such as putting it into global scope, or tying it to something outside its chain. Factories that sit outside the class file are often prone to all kinds of grotesque later mucking-with that causes the instantiated object to bind to other scope, and never be properly garbage collected. That's what I'm talking about.

Build better constructors and you won't usually need factories. If you need factories as an API, build them next to the constructor.


correct. i chuckle when someone talks so authoritatively on a topic they didn’t try to fully understand in the first place.


> I think Java culture had something to do with the ridiculously verbose names, but even more so the prevalence of Factory and Singleton paradigms in Java created these issues.

It sounds like you're confusing things. Factories are a way to instantiate objects of a specific type a certain way. A singleton is just a restriction on how many instances of a type there can be.

These are not Java concepts, nor are they relevant to the topic of declaring and naming types.


They are not Java concepts, but abusing them by using in contexts where they are a bad answer is something Java tends to take to the limit. (I fight the same abuse in C++, and it always seems that Java "best practices" is where the idea came from)


tbf those are features of early Java frameworks (particularly the awful java enterprise crap)


C# has anonymous types, for example. Kind of like tuples but you can name the fields.


The types of closures are unnamable in C++ and Rust; each closure has a unique type that can't be written out. Function types (that is, "function item" types) in Rust are also unnamable.


You kind of answered your question, didn't you?

Because the types in typescript don't need names. And the type "object with firstName and lastName" is one such type that doesn't need a name.

So:

> They don't seem to have a problem with names.

Yes. The problem is much smaller there, and mostly caused by programmer cultures that insist on naming everything.


I frequently use anonymous types in my unit tests in Go. I create a custom type describing the inputs, behaviors and expected outputs of a test case. I create a collection of these cases, and use Go's standard library testing package to run the test cases concurrently scaling to the CPU's available threads.

Here's a simple example: https://github.com/dharmab/skyeye/blob/main/pkg/bearings/bea...


Anonymous types are a thing in some languages. You also have adjacent concepts like anonymous namespaces in which you can dump types that require a name so that the names don’t leak out of the local context.

Sufficiently flexible namespaces do solve most of these problems. Java is kind of perverse though.


Java:

Map m = new HashMap() {{ System.out.println("I am a unique subclass of HashMap with a single instance!");}};


Now when you get to fully functional languages, most will allow you to do

    fullName = map (\(firstName, lastName) -> firstName + " " + lastName) list

and type the lambda as `(String, String) -> String`.

I have worked on large scale systems in both typed and untyped languages and I cannot emphasize strongly enough how important types are.


The only thing with anonymous functions is, when the boss says "please include every user's middle initial", you need to go find every instance of an inline function that resembles this. Consolidating that function in a getter in a class object called Person or User or Customer is a lot nicer.


This is more a question about architecture.

But one thing is certain: When you have that one function that is used 165 times throughout the code base, having a type checker is certainly going to help you when you add in the user's middle initial.


> This is more a question about architecture.

In an ideal world :)

In the real world the customer doesn't know what they want and you can't fully guess what they want or need ahead of time no matter how many diagrams you draw.

Incidentally, one of the few good things that came out of the "agile" religion.


You are completely right!

And the exact point I tried to communicate.

When you decide to have loose architectural structures, you might allow just writing these types of functions adhoc.

The further you go in the project, the more you strengthen the architecture, where needed.


In Python you're describing a Protocol. It's actually super reasonable to have an ObjectWithFirstNameAndLastName noun like this. You don't ever need to construct one, but you can use it in the type slot and objects you pass in will be checked to conform. You see all kinds of weird specific types floating around the standard lib like this for type hinting.

Duck typing is great; what's even better is documenting when they need to quack or waddle.


I think protocols have two major drawbacks regarding readability and safety. When I have a protocol, I cannot easily find its concrete implementations, so it becomes harder to see what the code is actually doing. As for safety, protocols have no way of distinguishing between

  class Command:
    def execute(self) -> None:
      # some implementation
and

  class Prisoner:
    def execute(self) -> None:
      # some other implementation
The implementor of the Prisoner class might not want the Prisoner class to be able to be slotted in where the Command class can be slotted in. Your type checker will be of no help here. If you use abstract base classes, your type checker can prevent such mistakes.

So when it comes to your own code, the drawbacks of structural Protocols in comparison to nominal ABCs are pretty big. The pros seem non-existent. The pro, I guess, is that you don't have to type the handful of characters "(Baseclass)" with every concrete implementation.

But they do have one major advantage: if you have third party code that you have no control over, and some part of that codebase that you want to replace with your own, and there's no convenient way to do something like the adapter pattern because it's somewhat deeply nested, then a Protocol is a great solution.


Really depends on your intent. Ideally code has meaning that reflects your problem domain and not just what happens to work at the moment.

Code that just works right now never scales.


Did you forget to include `middle-name`?

There's no way to tell.


If we're philosophizing here:

1. This (or maybe a less trivial form of this) will bite you in the ass when you end up using other people's unnamed types. Or even when you use your own unnamed types that come from code you haven't touched in three years.

2. That's what interfaces are for in Java. Or at least modern Java.


I’ve first learned Java in introduction to programming in 2001 and that’s what interfaces were for back then already.


Interface are more fundamental to Java than classes.

Sadly, at the beginning, many people came to Java from C/C++, and they did the thing we used to call "writing C/C++ code in Java".


I'm late to the Java party, I first did anything serious in it in 2009-10 and it was Android not "traditional" Java. So no idea about early Java's history.

Interfaces are the only thing I loved from it.


One day I will write a Java project that won't contain a single "class" keyword (it will have interface, anonymous classes, and static methods), as a piece of art.


This sounds like passing JS objects around and having dependencies between caller and callee on their content being undefined and assumed. I can't think of much worse than that for anything other than a trivial codebase.

At least in Javascript you have JSDoc.


Not just names, but a separate file and a package to fit in. I need a small data object, sorry, you have to put it in a separate file and then think of the package it goes in and so on and so forth. Not to mention in Spring you then need to annotate it with something. That is why I say Java development is a pain.


This is my main obstacle when wanting to create a simple data-class for 6 variables that I pass as arguments in a couple of places. I already have ItemReference, ItemStatus, ItemVersion and ItemMetadata, what do I call that new type which has an item's path, version, status, uid and hash?


This is easy and idiomatic in Golang with its combination of Interfaces and Duck Typing.

Why is it that dynamically typed languages usually develop static typing extensions (including Clojure)? Perhaps people don’t enjoy hunting down tedious spelling issues such as last-name vs family-name?


you are just describing a function. Not a type.

And as others replied, the issue of naming a type exists in all languages.


Meh. OOP has lots of nouns, FP has lots of verbs.


I found it quite disappointing to find a marketing piece from Nikki.

It is full of general statements that are only true for a subset of solutions. Enterprise solutions in particular are vastly more complex and can't be magically made simple by a syncing database. (no solution comes even close to "99% business code". Not unless you re-define what business code is)

It is astounding how many senior software engineers or architects don't understand that their stack contains multiple data models and even in a greenfield project you'll end up with 3 or more. Reducing this to one is possible for simple cases - it won't scale up. (Rama's attempt is interesting and I hope it proves me wrong)

From: "yeah, now you don't need to think about the network too much" to "humbug, who even needs SQL"

I've seen much bigger projects fail because they fell for one or both of these ideas.

While I appreciate some magic on the front-end/back-end gap, being explicit (calling endpoints, receiving server-side-events) is much easier to reason about. If we have calls failing, we know exactly where and why. Sprinkle enough magic over this gap and you'll end up in debugging hell.

Make this a laser focused library and I might still be interested because it might remove actual boilerplate. Turn it into a full-stack and your addressable market will be tiny.


> Whatever framework you choose will be obsolete in 5 years.

Reagent was born in 2013 and still works just the same in 2025. And it adapted to React changes quite nicely. You won't have to refactor Reagent code from 10 years ago.

Technology and programming language choice matters.


I'm missing something on the discussion of correctness for Linear types:

     let file: File := openFile("test.txt");
     writeString(file, "Hello, world!");
     g(file); 
If I have any other function 'g' that takes a File and returns Unit, wouldn't the compiler be ok with that? Now I have a dangling file pointer.


AFAICT the compiler would forbid using `file` after the `writeString(file)` line, because it has been consumed.

You can get something like affine types out of linear types constructed this way, by returning a new one-time value whenever operations on an object can continue:

  let file_1: File := openFile("test.txt");
  let file_2 := writeString(file_1, "Hello, world!");
  g(file_2);


Linear types are "use exactly once". In this case you consume `file` when you pass it into writeString, and then it is (at compile time) unavailable to be used with g afterwards.


Looks like Zig and Golang had a child. But I would expect something like file.writeString().


yes, I mistyped. I wanted to take the File object from writeString

    foo = writeString(file, "Hello world!");
    g(foo);


`g` still has to make `foo`, the new handle for `file`, disappear.

I guess the question is: if the File library can implement `fn CloseFile(f: File) -> ()`, why can't I implement `fn g(f: File) -> ()`?

At least I should be able to do so using `File::CloseFile` underneath, but what guarantees that only the module defining a type can define sinks for it?

I guess this is an implicit restriction that isn't talked about in depth. It should be fine to add sources and sinks to a linear type, but only if they are implemented through the module's own sources and sinks.


A function like `closeFile` will consume the linear value but not produce a new linear value - it returns `Unit`. It does this via a destructuring assignment (For example, File would be a record type with a field named handle).

    function closeFile (file : File) : Unit is
        let { handle } := file;    (* This consumes the file *)
        fclose(handle);
        return nil;
    end;
A module will usually provide opaque types, with the actual definitions encapsulated by an implementation file, similar to how C and C++ do it. Users of this library don't know what is in the type `File`, so they're unable to use the destructuring assignment themselves - so their only option is to call `closeFile`, or the compiler will complain that the linear value is not consumed.

You can of course, call `closeFile` from `g`, and then it's fine to make `g` return Unit.

    function g (file : File) : Unit is
        let file0 := (* do something with file *);
        return closeFile(file0);
    end;
The compiler actually enforces this. If g takes a `file` argument and returns Unit, then it MUST call `closeFile`. Failure to do so won't compile.

For a real example, check how the RootCapability type is defined in `builtin/Pervasive`[1]. It's declared as an opaque type in the .aui file, along with `surrenderRoot`. This is all the user of the type knows about it.

    type RootCapability : Linear;

    function surrenderRoot(cap: RootCapability): Unit;
In the .aum file both the type and surrenderRoot are actually defined.

    record RootCapability: Linear is
        value: Unit;
    end;

    function surrenderRoot(cap: RootCapability): Unit is
        let { value: Unit } := cap;
        return nil;
    end;

[1]: https://github.com/austral/austral/tree/master/lib/builtin


g also has to use file exactly once


And here I am flinging hiccup and Reagent for years now without any major changes. Reagent has been around since 2013. They even survived the transition to functional React components without major changes. The hiccup I wrote 5 years ago is still perfectly fine.

The only issue we have is JS UI libraries we depend on who have a funny understanding of backwards compatibility.


I wish I had stuck with clojurescript. I found clojure in 2013 or so but could never justify putting it in production. But that would have been a good 12 years of frontend development bliss. The clojure team has the right mindset.


I can only assume this comes from an observation that once your product matures, the static types become more apparent and you have a better idea how flexible your data modeling should be.

i.e. we are gradually adding more runtime type-checks to our Clojure codebase. (Runtime checks are even more powerful than dependent types.)


That's probably the most incorrect analogy. If I have objects, I can change their state in the middle of any bookkeeping.

There should be a different name for functional programming with mutable objects.


I think the analogy works. It's all about the tradeoffs.

Double-entry is harder to grasp, but it has certain properties, like being able to sum along rows or columns. With that property, you can then make assertions like "if anything is off by a cent, then there has been a mistake and it needs to be looked at again."

On the other hand, single-entry is much simpler, you can just record a figure for a date with a reason, and be done with it. It widens the pool of employable candidates, it's easier to onboard new employees, and you don't have any elites screaming at you for doing accounting the wrong way.

If you take a hybrid approach and mix the two, then on average you only have to fill in 1.5 entries per transaction, so it's easier and faster than double-entry, but you can still express some transactions with two entries if it's more elegant, on a case-by-case basis.

