
As someone who dropped C++ almost entirely a decade ago because I couldn't deal with both the immense complexity and the legacy baggage of C++, this is pretty exciting honestly.

What I find a bit strange is that he explains that he's been working on this since 2015 (with "most of the 'syntax 2' design work in 2015-16") and he doesn't want to give documentation because "that would imply this project is intended for others to use — if it someday becomes ready for that, I'll post more docs."

Why this reticence to opening the project and have others play with it and, potentially, contribute to it? I can't imagine a new language becoming successful with this mindset.



I think it's more about losing control. As soon as the documentation is open, there is a lot more pressure to explain the decisions made in an accurate or accepted fashion. Add some good old bike-shedding or contributions that miss the point and a project that was once fun becomes tedious to work for.


It also adds a lot of drag to iteration to have to go back and rework documentation.

On this kind of project, it's more than syntax-deep. Documentation would have a lot of reasoning and justification in it, which can take deep work to keep up to date.


Sutter isn't saying there isn't documentation, but that he's not releasing it.


Documenting something for yourself and documentation that is ready for public release are two very different animals.


Yeah, I couldn't agree more. This is exactly what's happening in ISO C++, and Herb should be well aware of its problems, since many of his good proposals failed to proceed thanks to a WG21 rife with bureaucracy and nitpicking. This kind of skunkworks needs to be fully driven by a competent individual or small group until it can demonstrate a good value proposition and the project itself builds trust.


And a common cause of getting stuck in local maxima (specifically, in committee/consensus-driven projects) is that each individual decomposed feature doesn't obviously deliver value on its own merits, even though the sum of a group of such features can reach a higher maximum.

Aka, why argue about feature A, when the goal and long game is how A+B+C+D works?


A lot of decisions made, BTW, have been explained in proposals for the C++ committee that came out of this work.


> Why this reticence to opening the project and have others play with it

I heard this recently: open projects are like Good Will Hunting but in reverse. You start as a respected genius, and end up as a janitor getting into fights.


> As someone who dropped C++ almost entirely a decade ago because I couldn't deal with both the immense complexity and the legacy baggage of C++, this is pretty exciting honestly.

This seems to be just a syntax frontend for C++. The underlying semantics stay the same.

BTW, if you dropped C++ a decade ago, you should now look into the modern improvements (C++20).


> Important disclaimer: This isn't about 'just a pretty syntax,' it's about fixing semantics


> This seems to be just a syntax frontend for C++. The underlying semantics stay the same.

That is very much not true. In fact, the syntax simplification is of less interest to me than the clarification/simplification of semantics. Most of the dangerous/confusing parts of C++ come from the necessary C compatibility.

So for example, you don’t have to worry about the bug-inducing C integer promotion rules.
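
For anyone who hasn't been bitten by those rules, a minimal standalone example of what today's C++ inherits from C (plain standard C++, made-up values):

    #include <cstdint>
    #include <iostream>

    int main() {
        std::uint8_t a = 200, b = 100;
        auto c = a + b;              // both operands are promoted to int
        std::cout << c << '\n';      // prints 300 -- c is int, not uint8_t
        std::uint8_t d = a + b;      // silently narrowed: d == 44 (300 % 256)
        std::cout << int(d) << '\n';
    }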


> if you dropped C++ a decade ago …

Fair. Still, the more recent C++ improvements tend to roll out slowly across the ecosystem and work environments.


What does C++20 change in day-to-day work? I know they added "stuff" (as always), but is there anything that really benefits >80% of all C++ programmers?


Compared to a decade ago, a bunch of stuff, in no particular order:

1) malloc/new & free/delete are now solidly legacy territory of the "unless it's a placement `new`, you're likely doing it wrong". make_unique && make_shared all day long.

2) templates that are understandable by mortals thanks to `if constexpr` instead of SFINAE nightmares.

3) static_asserts

4) lambdas (which are going to get way more useful with https://en.cppreference.com/w/cpp/utility/functional/move_on... )

5) std::format

6) attributes like [[nodiscard]] being standard

7) std::move making passing std::vector & similar containers around not terrifying (this is also what really helps make #1 possible)

I'm sure I missed some stuff, but I reach for all of those regularly.
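
A hedged sketch of how several of those fit together in practice (made-up names; needs a C++20 compiler whose standard library ships <format>):

    #include <format>
    #include <iostream>
    #include <memory>
    #include <vector>

    struct Widget {
        int id;
        explicit Widget(int i) : id(i) {}
    };

    [[nodiscard]] std::unique_ptr<Widget> makeWidget(int id) {
        return std::make_unique<Widget>(id);   // no raw new/delete (1, 6)
    }

    template <class T>
    void describe(const T& t) {
        if constexpr (requires { t.id; })      // no SFINAE contortions (2)
            std::cout << std::format("widget {}\n", t.id);   // (5)
        else
            std::cout << "not a widget\n";
    }

    int main() {
        std::vector<std::unique_ptr<Widget>> widgets;
        widgets.push_back(makeWidget(1));      // moved, cheap to pass around (7)
        describe(*widgets.front());
    }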


1) I'm always wary of something that is mature that has documented and extensive historical issues being handwaved away with "it's all fixed in this or the next release"

2) in light of that your comment is unintentionally hilarious. C++ became a syntax swamp 15 years ago and it is getting worse every release. I anticipate it getting worse as rust, a hilarious syntax soup in its own right, continues to march forward.


> 1) malloc/new & free/delete are now solidly legacy territory of the "unless it's a placement `new`, you're likely doing it wrong". make_unique && make_shared all day long.

This is awfully wrong. unique_ptr and shared_ptr may be convenient ways to express ownership and manage object lifetimes, but they are far from the only way to express ownership in C++.

Take for instance Qt, which relies heavily on new-ing up objects still up to this day, as it has its own ownership and object lifetime management system.


Swapping out unique_ptr/shared_ptr for some other smart pointer container doesn't negate what I said.

New/delete are still basically deprecated territory. Qt isn't any different here, other than it seems they are behind the curve on make() variants of their pointer containers. So you'd want to write your own, and then you're back in the world of "new/delete are deprecated".


You can always create your own “algorithms”. Our codebase has one that creates a new QObject that is a child of an existing one, returning a (raw) pointer to the new child object. That’s a case of not “no raw `new`”, but the next-best thing: isolating raw `new` to the one algorithm that does just that, and letting all other code depend on that.
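
A sketch of that kind of helper, assuming Qt (the name makeChild and the exact shape are made up):

    #include <QObject>
    #include <utility>

    // The one "algorithm" where raw `new` is allowed to live.
    template <class T, class... Args>
    T* makeChild(QObject* parent, Args&&... args) {
        T* child = new T(std::forward<Args>(args)...);
        child->setParent(parent);   // the parent now owns the child's lifetime
        return child;
    }

    // Usage (hypothetical): auto* timer = makeChild<QTimer>(this);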


What makes move_only_functions useful?


std::function is literally an object that can be invoked like a function. As an object, it can contain multiple values.

Move-only values are commonly used for unique resources that should not be duplicated.

But, what if you want to enclose a move-only value in a std::function? Are you simply out of luck and must give up on your dreams? A move-only function lets you get that work done.


Most of my lambda usage is for work queues (think Java executors). With the standard library alone, the easiest way to build one is a vector of std::function. But then your lambdas can't capture unique_ptrs, even though lambdas themselves have supported move captures for a long time now. You can do this with std::packaged_task, but that has internal heap allocations for the future. If you don't need the future, that's just overhead, and not cheap overhead at that.
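
A sketch of that queue with C++23's std::move_only_function (assuming a compiler/library recent enough to ship it):

    #include <functional>   // std::move_only_function (C++23)
    #include <memory>
    #include <vector>

    int main() {
        std::vector<std::move_only_function<void()>> queue;

        auto resource = std::make_unique<int>(42);
        // std::function would reject this closure: capturing the
        // unique_ptr by move makes the lambda itself move-only.
        queue.emplace_back([r = std::move(resource)] { /* use *r */ });

        for (auto& task : queue)
            task();   // run everything -- no future, so no allocation for one
    }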


auto


To me, concepts and coroutines really reduce the amount of boilerplate needed (a short sketch after this list pulls a few of these together):

- just the ability to write

     if constexpr(requires { T::some_member; }) { ... }
to check if a type has a member variable / function / whatever makes code infinitely clearer than the previous mess requiring overloads.

- coroutines finally make it possible to properly abstract the data structures used by a type's implementation; e.g., you no longer have to expose to client code that your type stores stuff in std::vector or std::array or boost::small_vector or std::list, and they simplify async code very well. For instance with Qt: https://qcoro.dvratil.cz/reference/core/qprocess/#examples

- three-way comparison and automatic generation of comparison/equality functions are really great for removing needless boilerplate

- void f(auto arg) { ... } instead of template<typename T> void f(T arg) { ... }

- foo{.designated = 123, .init = "456" }; (although it was already more or less supported on every relevant compiler for years)
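
A small sketch of a few of these together (made-up types, C++20):

    #include <compare>
    #include <iostream>
    #include <string>

    struct Point {
        int x = 0, y = 0;
        auto operator<=>(const Point&) const = default;   // comparisons for free
    };

    // abbreviated function template: auto parameter, no template<typename T>
    void log(const auto& value) { std::cout << value << '\n'; }

    struct Config { std::string name; int retries = 3; };

    int main() {
        Point a{1, 2}, b{1, 3};
        if (a < b) log("a comes first");            // synthesized from operator<=>
        Config c{.name = "demo", .retries = 5};     // designated initializers
        log(c.retries);
    }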


Sadly `if constexpr (requires {})` was not implemented in MSVC until VS2022, which was not even released when I tried using it (I backed out because it wouldn't build on MSVC), and which is still newer and less adopted than 2019: https://developercommunity.visualstudio.com/t/requires-claus...


Designated initializers are a trap: you can't require that certain fields are set. This has been a big pain point for my team. Guess we should have used the builder pattern or something.


> you can't require certain fields are set

There is a way, but it's cursed:

https://godbolt.org/z/fzc6WEz5e
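
In case the link rots: one shape such a trick can take (an illustrative guess, not necessarily what the godbolt shows) is a wrapper whose default constructor is deleted, so omitting the field fails to compile:

    #include <string>
    #include <utility>

    template <class T>
    struct Required {
        T value;
        Required() = delete;              // field cannot be left unset
        Required(T v) : value(std::move(v)) {}
    };

    struct Config {
        Required<int> id;                 // mandatory
        std::string name = "default";     // optional
    };

    int main() {
        Config ok{.id = 42};              // compiles
        // Config bad{.name = "x"};       // error: Required<int>'s default ctor is deleted
        (void)ok;
    }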


Clever. Wrap it in a concept, and it might have legs.


I would say constructors are what should be used for this kind of compound initialization.

C# is going the route of allowing required/optional fields in designated initializers, and from my point of view it is just a mess for what should be a constructor call.


In principle Modules would be huge, but in practice you can't use them as the compiler you have doesn't implement them yet.
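
For reference, a minimal named module is tiny; the hard part is compiler and build-system support (file extensions vary by toolchain):

    // math.cppm (or .ixx for MSVC) -- the module interface
    export module math;
    export int add(int a, int b) { return a + b; }

    // main.cpp -- a consumer
    import math;
    int main() { return add(1, 2); }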

C++20 gets a format feature that's basically a weaker {fmt} library as part of the standard library: a string-formatting facility with reasonable performance and safety, as you might be used to from other modern languages.

Concepts are nice; they're basically a way to express duck typing. They're often described as if you're getting more than duck typing, but that's all you actually get. Still, duck typing is useful, and Concepts should give decent error messages, whereas doing the equivalent with SFINAE is a recipe for awful diagnostic output.
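
Concretely, the duck typing looks like this (hypothetical names):

    #include <iostream>

    // "If it has a draw() you can call, it's Drawable" -- purely structural
    template <class T>
    concept Drawable = requires(T& t) { t.draw(); };

    struct Circle { void draw() { std::cout << "circle\n"; } };

    void render(Drawable auto& shape) { shape.draw(); }

    int main() {
        Circle c;
        render(c);   // OK; a type without draw() fails with a readable error
    }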


Please do not post falsehoods seeking to mislead readers. You have been corrected on this point several times before.

In fact, C++ Concepts can be used for early enforcement of compile-time duck typing, just with better error messages. In the standard library they are commonly used that way, for backward compatibility.

But Concepts can also implement named-property matching. It is entirely up to the designer of a library how much of each to present.


> You have been corrected on this point several times before.

You have repeatedly (across numerous HN threads) insisted that Concepts aren't just duck typing but that doesn't make it so.

> But Concepts can also implement named-property matching.

That's still just duck typing (and it's awkward to do properly). Contrast with the (eventually abandoned) C++0x Concepts, which actually had semantic weight to them. Concept Maps gave C++0x Concepts some sort of chance in a language already in widespread use, because you could adapt an existing type to a Concept using a Map.

But C++20 Concepts offers none of that; after about a decade, it's just duck typing.


Templates are duck typing.

Concepts are just the equivalent of if instanceof.


Concepts can be used for instanceof. But they can be used in other ways, too.
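
I.e., a concept can gate a compile-time branch much the way instanceof gates a runtime one (hypothetical names, C++20):

    #include <iostream>
    #include <string>

    template <class T>
    concept HasSize = requires(const T& t) { t.size(); };

    template <class T>
    void report(const T& t) {
        if constexpr (HasSize<T>)           // compile-time "instanceof"
            std::cout << t.size() << '\n';
        else
            std::cout << "no size\n";
    }

    int main() {
        report(std::string("hi"));   // prints 2
        report(42);                  // prints "no size"
    }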


Not really a C++ expert, but two things I saw in the documentation that came in useful were the "contains" method of maps and the std::span class for passing around contiguous sequences of objects without worrying about creating copies of vectors.
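
Both in one small example (C++20; names made up):

    #include <iostream>
    #include <map>
    #include <span>
    #include <string>
    #include <vector>

    // std::span is a non-owning view: no copy of the vector is made.
    int sum(std::span<const int> xs) {
        int total = 0;
        for (int x : xs) total += x;
        return total;
    }

    int main() {
        std::map<std::string, int> m{{"a", 1}};
        std::cout << m.contains("a") << '\n';   // C++20: clearer than find() != end()

        std::vector<int> v{1, 2, 3};
        std::cout << sum(v) << '\n';            // vector converts to span implicitly
    }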


Because this isn't a new language which is attempting to become successful. This is a playground to experiment with ways to evolve C++ into a better language.


All the more reason to have people play with it, no?


Other people playing with and using it decreases his ability to make breaking changes freely.


It seems like other people should come up with their own "playground" for experimenting?


Becoming a successful language does not seem to be a goal of the project:

"Cppfront is a personal experimental compiler from an experimental C++ 'syntax 2' to today's 'syntax 1,' to learn some things, prove out some concepts, and share some ideas."


And this work has resulted in things that were discussed by the committee, or even made it into subsequent C++ standards.



