CUDA Tile is an open-source MLIR dialect, so it wouldn't take much to write MLIR transforms that map Tile IR to TOSA, or to gpu + vector plus some amdgpu or other specialty dialects.
The Tile dialect is pretty much independent of the Nvidia ecosystem, so one good set of MLIR transform passes is all it takes to lift anything in the CUDA stack that compiles to Tile out of the Nvidia ecosystem prison.
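To be concrete, such a lowering would look like any other MLIR dialect-conversion pass. Here's a minimal sketch under that assumption: the tile dialect class and the pattern-population helper are hypothetical placeholders (commented out), while the surrounding framework calls are ordinary MLIR C++ APIs.

```cpp
// Sketch of a conversion pass lowering a hypothetical "tile" dialect to
// gpu + vector. The tile-specific names are placeholders, not real cuTile APIs.
#include "mlir/Dialect/GPU/IR/GPUDialect.h"
#include "mlir/Dialect/Vector/IR/VectorOps.h"
#include "mlir/Pass/Pass.h"
#include "mlir/Transforms/DialectConversion.h"

namespace {
struct TileToGPUPass
    : public mlir::PassWrapper<TileToGPUPass,
                               mlir::OperationPass<mlir::ModuleOp>> {
  MLIR_DEFINE_EXPLICIT_INTERNAL_INLINE_TYPE_ID(TileToGPUPass)

  void runOnOperation() override {
    mlir::MLIRContext &ctx = getContext();

    // gpu/vector ops are legal results; the tile ops would be marked illegal
    // so the conversion has to rewrite them away.
    mlir::ConversionTarget target(ctx);
    target.addLegalDialect<mlir::gpu::GPUDialect, mlir::vector::VectorDialect>();
    // target.addIllegalDialect<tile::TileDialect>();       // hypothetical

    mlir::RewritePatternSet patterns(&ctx);
    // populateTileToGPUConversionPatterns(patterns);        // patterns you'd write

    if (mlir::failed(mlir::applyPartialConversion(getOperation(), target,
                                                  std::move(patterns))))
      signalPassFailure();
  }
};
} // namespace
```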
So if anything this is actually a massive opportunity to escape vendor lock-in if it catches on in the CUDA ecosystem.
That's not really the point. The point is that Nvidia is updating a lot of their higher level CUDA tooling to integrate with and compile to Tile IR. So this gives an escape hatch for tools built on top of CUDA to deploy outside the ecosystem.
I'm super appreciative to see posts like this. I love NixOS and Nix in general, but getting cross-compilation to "just work" became a lot less trivial in a post-flake world and I feel like every time I have to do it I need to relearn the entire process.
Having a clear guide like this to keep as a handy reference is massively appreciated.
There's a lot of things that really jam up recycling.
One of them is plastic grocery bags. They just cause a lot of problems in the mechanisation of recycling so it's very non-trivial to work around them.
Oils and biowaste are of course another issue, especially for corrugated fiberboard (commonly: cardboard) and the like.
And then also it's hard for machines or lineworkers to easily differentiate plastics without sufficient market or regulatory pressure. If consumers are already generally sorting by broad category then they take most of the legwork out (leaving the facility to check their work) and those consumers also apply market pressure on manufacturers to make it obvious how their product is expected to be recycled.
And of course there's also a general component of everyone doing a little at a time: keeping things organised from the start makes the entire process an order of magnitude easier and more efficient for everyone downstream.
I'd suppose this really depends on how you're developing your codebase, but most code should probably be using a trailing return type, or an auto (or templated) return type with a concept/requires constraint on it.
For any seriously templated or metaprogrammed code nowadays, a concept/requires constraint is going to make it a lot more obvious what your code is doing and give you genuinely useful errors in the event someone misuses your code.
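For instance, a minimal sketch of the constrained-return style (names made up):

```cpp
#include <concepts>
#include <ranges>
#include <vector>

// The constrained placeholder return type documents what callers get back,
// and if the body ever returned something non-integral the compiler would
// report the unsatisfied std::integral constraint instead of dumping a deep
// instantiation trace.
auto count_positive(std::ranges::input_range auto&& values) -> std::integral auto {
    long long n = 0;
    for (auto&& v : values)
        if (v > 0) ++n;
    return n;
}

// Usage: count_positive(std::vector{1, -2, 3}) == 2
```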
Generally, you don't. I'm not sure why the parent suggested you should normally do this. However, there are occasional specific situations in which it's helpful, and that's when you use it.
1. Consistency across the board (with the places where it's required: metaprogramming, lambdas, etc.). And as a nicety it keeps function/method names aligned instead of having a variable-width return type in front of them. IMHO it makes skimming code easier.
2. It's required for certain metaprogramming situations and it makes other situations an order of magnitude nicer. Nowadays you can just say `auto foo()` but if you can constrain the type either in that trailing return or in a requires clause, it makes reading code a lot easier.
3. The big one for everyday users is that a trailing return type is looked up with extra names in scope. For example, if the function is a member function/method, the class scope is automatically included, so you can just write `auto Foo::Bar() -> Baz {}` instead of `Foo::Baz Foo::Bar() {}` (sketched below).
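A minimal sketch of that third point (type names made up):

```cpp
#include <cstddef>

struct Parser {
    struct Result { std::size_t consumed; bool ok; };
    Result parse_classic(const char* text);
    auto parse_trailing(const char* text) -> Result;
};

// Classic syntax: the return type appears before the compiler knows we're in
// Parser's scope, so the nested type has to be qualified.
Parser::Result Parser::parse_classic(const char* text) {
    return {0, text != nullptr};
}

// Trailing syntax: lookup of the return type happens after "Parser::", so the
// unqualified nested name Result resolves on its own.
auto Parser::parse_trailing(const char* text) -> Result {
    return {0, text != nullptr};
}
```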
1. You're simply not going to achieve consistency across the board, because even if you dictate this by fiat, your dependencies won't be like this. The issue of the function name being hard to spot is easier to fix with tooling (just tell your editor to color them or make them bold or something). OTOH, it's not so nice to be unable to tell at a glance if the function return type is deduced or not, or what it even is in the first place.
2. It's incredibly rare for it to be required. It's not like 10% of the time, it's more like < 0.1% of the time. Just look at how many functions are in your code and how many of them actually can't be written without a trailing return type. You don't change habits to fit the tiny minority of your code.
3. This is probably the best reason to use it and the most subjective, but still not a particularly compelling argument for doing this everywhere, given how much it diverges from existing practice. And the downside is the scope also includes function parameters, which means people will refer to parameters in the return type much more than warranted, which is decidedly not always a good thing.
1) consistency, 2) scoping is different, and that can make a significant difference.
I have been programming in C++ for 25 years, so I'm so used to the original syntax that I don't default to auto ... ->, but I will definitely use it when it helps simplify some complex signatures.
Most of the relevance of this is limited to C++ library authors doing metaprogramming.
Most of the "ugly" of these examples only really matters for library authors and even then most of the time you'd be hard pressed to put yourself in these situations. Otherwise it "just works".
Basically any adherence to a modicum of best practices avoids the bulk of the warts that come with type deduction or at worst reduces them to a compile error.
I see this argument often. It is valid right up until you get your first multipage error message from code that uses the STL (which is all C++ code, because it is impossible to use C++ without the standard library).
Those aren't the ugly part of C++ and to be entirely honest reading those messages is not actually hard, it's just a lot of information.
Those errors are essentially the compiler telling you in order:
1. I tried to do this thing and I could not make it work.
2. Here's everything I tried in order and how each attempt failed.
If you read error messages from the top they make way more sense and if reading just the top line error doesn't tell you what's wrong, then reading through the list of resolution/type substitution failures will be insightful. In most cases the first few things it attempted will give you a pretty good idea of what the compiler was trying to do and why it failed.
If the resolution failures are a particularly long list, just ctrl-f/grep to the thing you expected to resolve/type-substitute and the compiler will tell you exactly why the thing you wanted it to use didn't work.
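As a concrete, made-up example of the kind of code that produces such a candidate list (deliberately ill-formed, so it will not compile):

```cpp
#include <algorithm>
#include <vector>

struct Widget { int id; };  // no operator< on purpose

int main() {
    std::vector<Widget> widgets(3);
    // The first error line says roughly "no match for 'operator<' (operand
    // types are 'Widget' and 'Widget')". Everything after that is the compiler
    // walking through each operator< candidate it considered (for pairs,
    // tuples, iterators, ...) and why substitution failed for each one.
    std::sort(widgets.begin(), widgets.end());
}
```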
They aren't perfect error messages and the debugging experience of C++ metaprogramming leaves a lot to be desired but it is an order of magnitude better than it ever has been in the past and I'd still take C++ wall-o-error over the extremely over-reductive and limited errors that a lot of compilers in other languages emit (looking at you Java).
But an "I tried to do this thing" error is completely useless in helping to find the reason why the compiler didn't do the thing it was expected to do and chose to ignore it instead.
Say you hit an ambiguous overload resolution and have no idea what actually caused it. Or, conversely, an implicit conversion gets hidden and it helpfully prints all 999 operator<< overloads. Or there is a bug in a consteval bool type predicate, the requires clause fails, and the compiler helpfully dumps a list of functions that have different arguments.
How do you debug consteval, if you cannot put printf in it?
Not everyone can use clang or even latest gcc in their project, or work in a familiar codebase.
> How do you debug consteval, if you cannot put printf in it?
This will hopefully be massively improved in the next standard release, since compile-time reflection looks like it's finally shipping.
And of course there exist special constexpr debuggers but more generally you really should be validating consteval code with good test suites ahead of time and developing your code such that useful information is exposed via a public (even if internal) interface.
And of course, in the worst case, a throw functions as a decent consteval printf even if the UX isn't great.
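For example, the throw trick looks roughly like this (a minimal sketch): a throw can't be constant-evaluated, so reaching it at compile time forces the compiler to print the message and the offending call in its diagnostic.

```cpp
// "throw as consteval printf": the throw never runs at runtime, but hitting
// it during constant evaluation is ill-formed, so the compiler quotes the
// string (and the call chain) in its error output.
consteval int checked_shift(int value, int amount) {
    if (amount < 0 || amount >= 31)
        throw "checked_shift: shift amount out of range";
    return value << amount;
}

constexpr int ok = checked_shift(1, 4);      // fine
// constexpr int bad = checked_shift(1, 40); // error: diagnostic points at the
//                                           // throw-expression and its message
```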
But of course there's more work on improving this in the future. There's a technique I saw demoed at one of the conferences this year that lets you expose full traces in failure modes for both consteval and non-const async code and it seems like it'll be a boon for debugging.
----
> Not everyone can use clang or even latest gcc in their project, or work in a familiar codebase.
Sure, not everyone is that lucky, but it's increasingly common. AFAIK TI moved entirely to clang-based toolchains a few years back and Microchip has done the same. The old guard are finally picking up and moving to modular toolchain architectures to reduce maintenance load, and that comes with upstream std support.
And I see this argument often. People make too much fuss about the massive error messages. Just ignore everything but the first 10 lines and 99.9% of the time the issue is obvious. People really exaggerate the amount of time and effort you spend dealing with these error messages. They look dramatic so they're very memeable, but it's really not a big deal. The percentage of hours I've spent deciphering difficult C++ error messages in my career is a rounding error.
Do you also consider knowing type deduction unnecessary for fixing those errors, unless you are writing a library? Because that is not my experience (a C++ "career" can involve such wildly different codebases that it's hard to imagine what others must be dealing with).
There are actually multiple standard library implementations for embedded applications, and a lot of the standard library from C++11 on was designed with embedded in mind in particular.
And with each std release the particularly nasty parts of std get decoupled from the rest of the library, so it's at the point nowadays where you can use all the commonly used parts of std in an embedded environment. That means you get all your containers, iterators, ranges, views, smart/RAII pointers, and smart/RAII concurrency primitives, and on the bleeding edge you can even get coroutines, generators, green threads, etc. in an embedded environment with "pay for what you use" overhead. Intel has been pushing embedded stdlib really hard over the past few years, and both they and Nvidia have been spearheading the senders and receivers concurrency effort. Intel uses S&R for handling concurrency in the embedded environments internal to their CPUs and elsewhere in their hardware.
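As a small sketch of what that looks like in practice (assuming a toolchain that ships the header-only, allocation-free parts of the library), nothing here allocates, throws, or needs RTTI:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <span>

// "Pay for what you use": fixed-size containers, views, and algorithms with
// no heap, no exceptions, no RTTI.
constexpr std::array<std::uint16_t, 4> kAdcSamples{512, 498, 530, 501};

std::uint16_t peak(std::span<const std::uint16_t> samples) {
    return *std::max_element(samples.begin(), samples.end());
}

int main() { return peak(kAdcSamples) == 530 ? 0 : 1; }
```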
(Also fun side note but STL doesn't "really" stand for "standard template library". Some projects have retroactively decided to refer to it as that but that's not where the term STL comes from. STL stands for the Adobe "Software Technology Lab" where Stepanov's STL project was formed and the project prior to being proposed to committee was named after the lab.)
AFAIK Stepanov only joined Adobe much later. I think he was at HP during the development of the STL, but moved to SGI shortly after (possibly during standardization).
The other apocryphal derivation of STL I have heard is "STepanov and Lee".
AFAIK he started work on the STL at HP and then SGI, based on his and Musser's work on generic programming, but it didn't get the name STL until he was at the lab.
This of course comes from Sean Parent, who was and AFAIK still is quite close with Stepanov. Sean Parent is of course famous in his own right, but he's also notable for bringing Stepanov to Adobe and for being the one to push Stepanov to propose the STL to WG21 (the C++ ISO standard committee).
Freestanding requires almost all of the std library. Please note that -fno-rtti and -fno-exceptions are non-conformant; the C++ standard does not permit either.
Also, std:: members such as initializer_list, type_info, etc. are directly baked into the compiler, and the stuff in the headers must exactly match the compiler's internals, making the std library part of the compiler implementation.
Have you actually read the page you linked to? None of the standard containers is there, nor <iostream> or <algorithm>. <string> is there but marked as partial.
If anything, I would expect more headers like <algorithm>, <span>, <array> etc to be there as they mostly do not require any heap allocation nor exceptions for most of their functionality. And in fact they are available with GCC.
The only bit I'm surprised by is that <coroutine> is there, as coroutines normally allocate, but I guess there's full support for custom allocators, so it can be made to work on freestanding.
> Please note that -fno-rtti and -fno-exceptions are non-conformant, c++ standard does not permit either.
I did not know that.
My understanding was that C does not require standard library functions to be present in freestanding. The Linux kernel famously does not build in freestanding mode, since then GCC can't reason about the standard library functions, which they want it to. This means that they need to implement stuff like memcpy and pass -fno-builtin.
Does that mean that freestanding C++ requires the C++ standard library, but not the C standard library? How does that work?
Honestly? No idea how the committee is thinking. When, say, gamedev people write a proposal, ask for a feature, explain that it is important and something they depend on and so on, it gets shot down on a technicality. Then the committee turns around and produces some insane feature that rips everything east to west (like modules), and suddenly the vote goes positive.
The "abstract machine" C++ assumes in the standard is itself a deeply puzzling construct. Luckily, compiler authors seem much more pragmatic and reasonable; I do not fear -fno-exceptions disappearing suddenly, or code that accesses mmapped data becoming invalid because it didn't use start_lifetime_as.
One of the required headers in freestanding, <cstdlib>, is labelled "C standard library", but it is not <stdlib.h>.
Something similar goes for the other <csomething> headers.
This kinda implies the C library is required, if I read it correctly, but maybe someone else can correct me:
https://eel.is/c++draft/library.c
> The ISO C standard defines (in clause 4) two classes of conforming implementation. A "conforming hosted implementation" supports the whole standard including all the library facilities; a "conforming freestanding implementation" is only required to provide certain library facilities: those in '<float.h>', '<limits.h>', '<stdarg.h>', and '<stddef.h>'; since AMD1, also those in '<iso646.h>'; since C99, also those in '<stdbool.h>' and '<stdint.h>'; and since C11, also those in '<stdalign.h>' and '<stdnoreturn.h>'. In addition, complex types, added in C99, are not required for freestanding implementations.
> The standard also defines two environments for programs, a "freestanding environment", required of all implementations and which may not have library facilities beyond those required of freestanding implementations, where the handling of program startup and termination are implementation-defined; and a "hosted environment", which is not required, in which all the library facilities are provided and startup is through a function 'int main (void)' or 'int main (int, char *[])'. An OS kernel is an example of a program running in a freestanding environment; a program using the facilities of an operating system is an example of a program running in a hosted environment.
> GCC aims towards being usable as a conforming freestanding implementation, or as the compiler for a conforming hosted implementation. By default, it acts as the compiler for a hosted implementation, defining '__STDC_HOSTED__' as '1' and presuming that when the names of ISO C functions are used, they have the semantics defined in the standard. To make it act as a conforming freestanding implementation for a freestanding environment, use the option '-ffreestanding'; it then defines '__STDC_HOSTED__' to '0' and does not make assumptions about the meanings of function names from the standard library, with exceptions noted below. To build an OS kernel, you may well still need to make your own arrangements for linking and startup.
> GCC does not provide the library facilities required only of hosted implementations, nor yet all the facilities required by C99 of freestanding implementations on all platforms. To use the facilities of a hosted environment, you need to find them elsewhere (for example, in the GNU C library).
> Most of the compiler support routines used by GCC are present in 'libgcc', but there are a few exceptions. GCC requires the freestanding environment provide 'memcpy', 'memmove', 'memset' and 'memcmp'. Finally, if '__builtin_trap' is used, and the target does not implement the 'trap' pattern, then GCC emits a call to 'abort'.
So the last paragraph means that my remark about the Linux kernel might be wrong.
So the required headers are all about basic constants for types, the types themselves (bool), and basic language features like stdarg, iso646 or stdalign. Sounds sensible to me. Not sure what C++ does with that.
This also actually matches the links provided by you. In https://eel.is/c++draft/cstdlib.syn you see that not all declarations are actually marked for freestanding implementations.
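As a concrete illustration of that last quoted requirement, a freestanding build typically has to supply those routines itself; a minimal sketch (illustrative only, not the Linux kernel's actual code):

```cpp
#include <cstddef>

// GCC may emit calls to memcpy/memmove/memset/memcmp even with -ffreestanding,
// so the environment has to provide them. Naive byte-by-byte versions:
extern "C" void* memcpy(void* dst, const void* src, std::size_t n) {
    auto* d = static_cast<unsigned char*>(dst);
    auto* s = static_cast<const unsigned char*>(src);
    while (n--) *d++ = *s++;
    return dst;
}

extern "C" void* memset(void* dst, int value, std::size_t n) {
    auto* d = static_cast<unsigned char*>(dst);
    while (n--) *d++ = static_cast<unsigned char>(value);
    return dst;
}
```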
Depends on the type of LIDAR. LIDAR rated for vehicle use is at a wavelength opaque to the eye, so it gets absorbed by the surface and fluid of your eye rather than going through to your cones and rods.
It isn't, however, opaque to optical glass (the LIDAR has to shine through optical glass in the first place), so it hits your camera lens, goes straight through, and slams into the sensor.
You seem to be implying that all automotive lidar are 1550 nm, but that's not true. While there are lots of 1550 nm automotive lidars (Luminar on Volvo, Seyond on NIO), there are also plenty of 850 nm to 940 nm lidars used in cars (Hesai, Robosense, etc). Those can pass through water and get focused onto your retina, but they are also a lot lower power, so they do not damage cameras.
Also, although energy at wavelengths longer than 1400 nm is generally absorbed by the cornea and lens, it is still energy, and the eye is not a hard bandpass filter per se. Safety is relative at higher wattages.
NGL I thought sub 1550nm LIDAR had been banned for use in new automotive applications already? I clearly must be mistaken but I had thought that was the case.
Not banned. In addition to the Chinese lidars I mentioned, the Valeo Scala on Audi cars is 905 nm, and then there are also Ouster (865 nm), Innoviz (905 nm), Livox (905 nm) etc. The large spinning lidar on top of the Waymo Jaguar I-Pace is also purportedly 905 nm, although in the past they also had a swivelling 1550 nm lidar in the dome of the Chrysler Pacifica cars (situated just underneath a smaller spinning 905 nm one).
The eye safety threshold for 850/905 nm is a lot lower than 1550 nm, so they output way less power, but the much better sensitivity of silicon sensors makes up for it partially. You can also squeeze out more range using clever signal processing and a large optical aperture (which allows you to output more light, but since the light is spread out across the aperture, the intensity doesn't exceed the threshold). Typically, the range of 850/905 nm lidars is less than that of 1550 nm lidars though.
On the bright side, due to the lower power, there haven't been any instances (to my knowledge) of 850 nm and 905 nm lidars damaging cameras, whereas at least two different 1550 nm lidars have been known to destroy cameras (Luminar and AEye).
On the Luminar lidar website [1] they proudly advertise "1,000,000x pulse energy of 905nm".
It's not that nobody told the director. It's that the director knew nobody cared what was actually in the show as long as the end product moved units on the shelves.
It's part of the reason the names are so wild. He was actively pushing the envelope with outrageous names during pitches to see how far he could go before producers would stop nodding along without paying attention.
Those names include "A Baoa Qu", "Gelgoog", and a variety of insane character names that sometimes sound cultureless yet futuristic like Bannagher Links and sometimes are just "M'Quve" or "Full Frontal".
I outlined it over in another comment[1] so I'm not gonna copy it all over but the point isn't to eliminate all trust. The point of trustless architectures (of which blockchain and smart contracts are one) is that you are eliminating implicit trust.
You are taking all the implicit trust, lowering it into explicit trust assumptions, and formalising who is allowed to make what decisions when, what happens when they do, and how the other parties are permitted to respond.
You are moving all of those implicit assumptions about how a contract, interaction, or relationship works and formalising them into something explicit and upfront so that all participants can evaluate their risk tolerance and trust levels prior to agreeing to a given contract or interaction.
And of course you are also sprinkling in a heavy dose of automation to smooth out the complexities of these explicit, mechanised contracts such that the happy paths are buttery smooth and the unhappy paths are at the least bearable and correspond to the contract you signed on to at the beginning of your interaction.
Clicked the link but ctrl+f doesn't find any posts by you.
> The point of trustless architectures (of which blockchain and smart contracts are one) is that you are eliminating implicit trust.
That is also the point of laws and contracts as we have them today. How does, explicitly, blockchain improve on that?
> You are moving all of those implicit assumptions about how a contract, interaction, or relationship works and formalising them into something explicit and upfront so that all participants can evaluate their risk tolerance and trust levels prior to agreeing to a given contract or interaction.
What implicit assumptions aren't removed by laws and contracts as we have them today that are removed by blockchain and smart contracts?
> And of course you are also sprinkling in a heavy dose of automation to smooth out the complexities of these explicit, mechanised contracts such that the happy paths are buttery smooth and the unhappy paths are at the least bearable and correspond to the contract you signed on to at the beginning of your interaction.
Without any examples of what is being automated, how and what it is that is made buttery smooth... you really aren't saying anything here. Can you expound on any of those claims?
TLDR: By what you said the only thing that blockchains and smart contracts bring is a new medium to write contracts on.
> That is also the point of laws and contracts as we have them today. How does, explicitly, blockchain improve on that?
It's essentially automated tooling. The happy path (i.e. buyer and seller are in agreement) "just works" but when there's a disagreement you can rely on the contract to walk through all of the conflict resolution paths with whatever level of complexity the contract builds in for consensus from multiple third parties, etc.
i.e. It's tooling that replaces manual bureaucratic arbitration with state machines and consensus algorithms.
For two-party smart contracts this means there's no third party, but there's an inherent risk of exploitation by one party or the other by the design of the contract. That's inherent to two-party contracts relying on any physical exchange, but if you trust the party the contract is weighted in favor of, it cuts out any opportunity for arbitration and the complexity that comes with it. Now the only trust assumption is the two parties' trust in each other.
For contracts with some arbitration process, however, things get more complicated. Who all is involved in arbitration? Who does the buyer trust? Who does the seller trust? What's the reputation of one of these arbiters? This reputation can be loosely represented as a set of markets for the arbiter with demand from sellers and demand from buyers. If those two markets are out of sync with each other, that suggests an impartial arbiter, and both parties can reason about that.
> What implicit assumptions aren't removed by laws and contracts as we have them today that are removed by blockchain and smart contracts?
Well. Part of it is that laws are an inherently fuzzy thing and how they are upheld is entirely dependent on a long running and constantly evolving chain of interpretations from past court decisions. And of course how they are upheld in a specific case comes down to how well lawyers are able to convince a judge or a collection of jurors who were more or less selected at random with anyone semi-literate about the law thrown out ahead of time. So it boils down to "who is best able to sway the opinions of this random collection of people who are as illiterate about the law as the lawyers could manage to get them". Which mostly just boils down to feelings.
Of course contracts often go to arbitration instead of to court proper so it's a different case there but arbiters are single authorities that almost universally side with the bigger entity (i.e. whoever is paying them to handle arbitration). So unless you are two large orgs, arbitration is inherently biased.
So an alternative is a largely automated system where multiple third parties who are selected ahead of time by the buyer and seller can be relied upon for arbitration and where their decision is for all intents and purpose final. The buyer and the seller have equal decision making power in the selection of these third parties and they can evaluate the reputations of these third parties prior to entering the contract.
i.e. you are moving away from trust in a large system with a thousand moving parts all performed by fallible people swayed by emotions and an endless process of appeals, OR a single arbiter almost always paid by the larger party who will always rule in their favor. Instead you're putting your trust into a strict set of automated rules with a formal analysis of outcomes backing it + some optional assortment of selected third parties + a consensus mechanism for those third parties.
> TLDR: By what you said the only thing that blockchains and smart contracts bring is a new medium to write contracts on.
Yes. It is exactly that. A new medium to write contracts on. Manual bureaucratic systems and thousands upon thousands of people working in a complex legal system are replaced by a machine. Humans are still in the loop of course but only for making specific decisions at specific times in the process.
And at the time of agreeing to the contract the relevant parties can ideally rely on tooling to explicitly outline at what points each party is taking on a degree of risk, the likelihood of that risk, and the process for moving forward in those cases.
An extremely reductive TLDR is that the goal is to take a system that relies on an army of lawyers and legal analysts and reduce it down into something digestible and navigable by a single lawyer (or even a well educated layperson) with all the existing complexity abstracted away by formal methods tooling.
The contract can hold the money in escrow such that it can only be sent to the seller or returned to the buyer.
The seller and buyer can then both walk the contract through a state machine on agreement (i.e. confirm shipping, confirm delivery, potentially also confirmation for a return process), and when the buyer and seller come to a disagreement (ex: the seller attests they've shipped the product and it should be delivered, but the buyer asserts they haven't / the tracking on shipping is invalid) or one of the participants is non-responsive for a certain amount of time, then the contract moves into arbitration.
In arbitration one or more third parties then step in to serve as arbiters/oracles that decide in the favor of one party or the other and commit those decisions to the contract and the contract then derives consensus from those decisions and proceeds to the corresponding state/action of the contract (i.e. refund vs close).
Now your arbiters/oracles/third parties have reputations and you can reason about how trustworthy they are before you enter into the contract.
This means all parties can evaluate their risk tolerance and trust levels before entering the contract/on agreement.
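A minimal sketch of that flow as a plain state machine (ordinary C++ for clarity rather than an actual contract language; the states, inputs, and vote struct are illustrative):

```cpp
#include <cstdint>

// Illustrative escrow state machine matching the flow described above.
enum class EscrowState : std::uint8_t {
    Funded,       // buyer's money locked in the contract
    Shipped,      // seller attests shipment
    Delivered,    // buyer confirms receipt -> funds released to seller
    Disputed,     // parties disagree or one goes silent past a deadline
    Arbitration,  // pre-selected third parties vote
    Refunded,     // consensus favored the buyer
    Closed        // consensus favored the seller
};

struct ArbiterVotes { int forBuyer = 0; int forSeller = 0; };

// Only the transitions spelled out in the contract are possible; anything
// else is rejected, which is the "explicit trust assumptions" part.
EscrowState step(EscrowState s, bool buyerConfirms, bool sellerConfirms,
                 bool deadlinePassed, const ArbiterVotes& votes) {
    switch (s) {
    case EscrowState::Funded:
        return sellerConfirms ? EscrowState::Shipped : s;
    case EscrowState::Shipped:
        if (buyerConfirms) return EscrowState::Delivered;   // happy path
        if (deadlinePassed) return EscrowState::Disputed;   // silence/conflict
        return s;
    case EscrowState::Disputed:
        return EscrowState::Arbitration;
    case EscrowState::Arbitration:
        if (votes.forBuyer > votes.forSeller) return EscrowState::Refunded;
        if (votes.forSeller > votes.forBuyer) return EscrowState::Closed;
        return s;                                           // no consensus yet
    default:
        return s;  // Delivered/Refunded/Closed are terminal
    }
}
```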
-------
TLDR: Trust is inherent to any system reliant on the physical world. The point of smart contracts, etc is to formally encode those trust assumptions and the procedures of the contract in as trustless of a way as possible and to allow distribution of that trust across parties with most of the coordination overhead being automated/abstracted away.
And importantly smart contracts provide an extremely low friction happy path. In the happy path where all parties are satisfied, it's extremely efficient and responsive. But in every other path, the conflicts, incentives, and resolution procedures are clearly defined for all parties involved.
Read "Irrationality, Extortion, or Trusted Third-parties: Why it is Impossible to Buy and Sell Physical Goods Securely on the Blockchain". Or just read the title, it has the main point.
Did you read the paper? The paper is arguing the exact same point I was arguing. To quote the paper:
> Finally, assuming that the parties are rational agents and the smart contract language is Turing complete, we argue that it is impossible to implement the basic sales escrow as a smart contract without trusted third-parties or vulnerability to extortion. In other words, any escrow smart contract has one of the following three demerits:
> – Assuming irrational agents who are willing to punish the other side, even if it is not in their own interest; or
> – Relying on a third-party; or
> – Enabling at least one of the two parties to extort the other.
> In summary, we illustrate that the smart contract and Dapp community is wrong in assuming that the current implementations of two-party escrows have a well-designed mechanism that incentivizes rational actors to be truthful. More shockingly, we show that the smart contracts on programmable blockchains have inherent limitations that make it impossible to implement such a contract. In a sense, this can be considered the first incontractability result on programmable blockchains.
----------
This is exactly what I was arguing.
I never claimed that two-party escrow is ideal. I was explicitly saying that two-party escrow is an intractable problem and that you must formalise your trust assumptions instead, and either accept some level of trusted third parties OR, without third parties, accept some level of risk of exploitation by one party or the other. Even with third parties there is still risk of exploitation, but depending on how it is implemented that risk is lesser.
Again this is a matter of formalising trust assumptions and explicitly outlining who you are trusting, what you are trusting them to do, and how much you trust them to do it. And in doing so up front both parties can evaluate their risk tolerance based on the agreed upon contract before progressing.