I think it's more complicated than that. The projects that are getting the funding are usually the hard, technical ones, but that funding also supports better docs + more time for API design. This doesn't apply to bleeding edge stuff, but look back through the core SciML libraries and there's no shortage of effort directed towards "dull" stuff like docs + improving compile times. Likewise for the core language: a lot of recent work is bread and butter engineering like (again) improving compile times, filing rough edges off of APIs and (gradually) tackling the deployment story.

Now, one area where this dull problem work isn't as noticeable is in the "core" deep learning libraries (Flux and Zygote). AFAICT those two haven't received any significant funding for a couple of years, and there is at most one full-time, active contributor across both of them. Compare with JAX or even higher-level wrapper libraries like Flax, Haiku or PyTorch Lightning, which have 5-10+ full-time core devs. Given this, is it surprising that progress on anything (including docs + interface design) is slow?


You're in luck, because (assuming the scans are in a compatible format) this is exactly what 3D Slicer was designed for.


Who is doing more than a couple rounds of interviews outside of (pardon the scare quotes) "tech companies"? Has anyone run into, say, a bank pulling 4+ rounds of interviews, or is this limited to FAANG(M), SV companies and startups that seek to emulate them?


Sorry, you are right. Non-tech companies totally slipped off my radar. I never interviewed with one, so I have no idea how it works there. I don't know what is more depressing: an interview with a tech company, or working on a legacy Java enterprise system at a bank.


Banks are so much more than legacy Java systems. Name a cutting-edge research domain or a 'cool' programming language and banks will be at the forefront of work on it.


Bronson and Pigott were a nice read, but it's unclear where Föll is getting most of his conjecture from because they certainly don't talk about it. Frankly, the whole piece smacks of the same Diamond-esque "they made fireworks, we made guns" trope that has been thoroughly torn apart since. Now to his credit, he does claim ignorance at the top of the page, but that seems to be quickly forgotten given how much hyperbole is spewed later on.

[1] is a better lay overview of medieval-era steel-making. And for a great breakdown of the forces behind European success in the modern era, see Bret Devereaux's series on EU4 [2].

[1] https://www.youtube.com/watch?v=5djVkOgu8vs

[2] https://acoup.blog/2021/05/28/collections-teaching-paradox-e...


It reads like Foell was mostly citing Wagner, not B or P. Did you read those?


Trying to interpret individual radicals of a character as standalone components with their original meaning is enticing, but more often than not incorrect. For example, the character for maternal aunt uses the same radical. Phonetic-semantic compound characters are very, very common. The standalone pronunciation of 夷 doesn't appear to have Turkic/steppe origins either [1].

Moreover, we know Mongolian writing (because of the geopolitics of the time and its status as a younger written tradition) borrowed quite liberally from its southern neighbours, including, but not limited to, China [2]. So while Wagner's point about the proliferation of ironmaking techniques from outside the (nominal) Chinese state of the time makes sense, the whole phonetic angle doesn't.

As for the points about centralization and family-name elitism, the first lasted less than 200 years, by which time many formerly aristocratic family names had become _so_ diluted as to be almost meaningless. One of the main conceits of a major character in Romance of the Three Kingdoms is that he's an average Joe who only gets a modicum of respect for having the same surname as the dynastic family. It also completely ignores the existence of profession-based surnames like 匠 ("artisan", notably 1/2 of 铁匠/blacksmith).

[1] https://en.wikipedia.org/wiki/Dongyi#Yi [2] https://en.wikipedia.org/wiki/Mongolian_writing_systems


Notably? I would say “artisan” is too broad and “carpenter” is the usual meaning. The profession-based surnames are far from dominant either, compared to Smith, which almost always means blacksmith in the West.

While surnames are diluted, this kind of “joke” about surnames still exists today, so there is at least some meaning left in them:

https://en.wikipedia.org/wiki/Zhao_family_(Internet_slang)

FWIW, according to the Baidu wiki, the character 夷 (yi) itself has a nomadic origin.

https://baike.baidu.com/item/夷/678050


It does not, and any discussion of whether certain public health measures should've been implemented should take that into consideration. Toronto-area hospitals were literally sending ICU patients to smaller cities because their own wards were overflowing. Moreover, attrition rates among clinicians (nurses especially) have been atrocious over the past year or so. People are only willing to put up with so much shit for so long, and most provincial systems have zero slack at the moment.

That said, measures like GP described were/are in play in many cities. Seniors' hours were a fixture in the first few months of the pandemic, especially in smaller areas that did not experience a large caseload.

That's another point too: I think a lot of HN commenters are unaware of just how fragmented and regional the Canadian healthcare system is. No two provinces implemented the same restrictions or policies at the same time, and only a couple put in strict stay-at-home style lockdowns. Note how the article mentions large increases in both Ontario (lax policies, then sudden strict lockdowns) and Alberta (very few restrictions). Even in Ontario, walking outside the biggest few cities would reveal an immediate drop-off in most of the strict measures present in, say, the GTA. I know it's hard to capture this nuance discussing with strangers on some random online forum, but it's essential if we are to properly discuss cause and effect.


Canadaland did a great series on the pathology of Vancouver real estate recently [1]. TL;DL: there is no consensus on the root cause, but the usual suspects of bureaucracy, NIMBYism and foreign investment all make an appearance.

[1] https://www.canadaland.com/podcast/real-estate-3-terminal-ci...


Vancouver may have "no industry" relative to SV, but it is a veritable black hole for tech on the west coast of Canada. The same rat race of high-skilled, well-paying jobs only being available in HCoL cities is just as much of an issue north of the border. The even smaller gap between compensation and CoL in Vancouver, Toronto, etc. just serves to make things more miserable.


> Do you think AMD should solve every problem CUDA solves for their customers too?

They had no choice. Getting a bunch of HPC people to completely rewrite their code for a different API is a tough pill to swallow when you're trying to win supercomputer contracts. Would they have preferred to spend development resources elsewhere? Probably; they've even got their own standards and SDKs from days past.

> everyone else using GPU's is running fast as they can towards Vulkan

I'm not qualified to comment on the entirety of it, but I can say that basically no claim in this statement is true:

1. Not everyone doing compute is using GPUs. Companies are increasingly designing and releasing their own custom hardware (TPUs, IPUs, NPUs, etc.)

2. Not everyone using GPUs cares about Vulkan. Certainly many folks doing graphics stuff don't, and DirectX is as healthy as ever. There have been bits and pieces of work around Vulkan compute for mobile ML model deployment, but it's a tiny niche and doesn't involve discrete GPUs at all.

> Is it just too soon too early in the adoption curve

Yes. Vulkan compute is still missing many of the niceties of more developed compute APIs. Tooling is one big part of that: writing shaders in GLSL is a pretty big step down from using whatever language you were using before (C++, Fortran, Python, etc.). The sketch below shows the kind of single-source convenience you'd be giving up.
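
To make that concrete, here's a minimal CUDA saxpy; treat it as a hedged sketch, not code from any real project. Kernel and host code sit in one C++ file compiled by nvcc, while the Vulkan route would need a separate GLSL compute shader plus buffer, descriptor set, and pipeline boilerplate on the host before you could dispatch anything:

      // Minimal CUDA saxpy sketch: y = a*x + y, one thread per element.
      // Plain C++ compiled by nvcc; no separate shader language needed.
      #include <cstdio>

      __global__ void saxpy(int n, float a, const float* x, float* y) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) y[i] = a * x[i] + y[i];
      }

      int main() {
          const int n = 1 << 20;
          float *x, *y;
          // Unified memory keeps the sketch short; explicit copies work too.
          cudaMallocManaged(&x, n * sizeof(float));
          cudaMallocManaged(&y, n * sizeof(float));
          for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

          saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
          cudaDeviceSynchronize();

          printf("y[0] = %f\n", y[0]); // expect 4.0
          cudaFree(x);
          cudaFree(y);
      }

And that's just the kernel side: the Vulkan host code to create the instance, device, buffers, descriptor sets and pipeline typically runs to a few hundred lines before the first dispatch.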

> do ya'll think there are more serious obstructions long term to building a more Vulkan centric AI/ML toolkit

You could probably write a whole page about this, but TL;DR yes. It would take at least as much effort as AMD and Intel put into their respective compute stacks to get Vulkan ML anywhere near ready for prime time. You need to have inference, training, cross-device communication, headless GPU usage, reasonably wide compatibility, not garbage performance, framework integration, passable tooling and more.

Sure, these are all feasible, but who has the incentive to put in the time to do it? The big 3 vendors have their supercomputer contracts already, so all they need to do is keep maintaining their 1st-party compute stacks. Interop also requires going through Khronos, which is its own political quagmire when it comes to standardization. Nvidia already managed to obstruct OpenCL into obscurity; why would they do anything different here? Downstream libraries have also poured untold millions into existing compute stacks, or rely on the vendors to implement that functionality for them. This is before we even get into custom hardware like TPUs that don't behave like a GPU at all.

So in short, there is little inevitable about this at all. The reason people may have been frustrated by your comment is because Vulkan compute comes up all the time as some silver bullet that will save us from the walled gardens of CUDA and co (especially for ML, arguably the most complex and expensive subdomain of them all). We'd all like it to come true, but until all of the aforementioned points are addressed this will remain primarily in pipe dream territory.


The paradox I see in your comments is between where you start & where you end. The start is that AMD's only choice was to re-embark on & redo years & years of hard work, just to catch up.

The end is decrying how impossible & hard it is to imagine anyone ever reproducing anything like CUDA in Vulkan:

> Sure these are all feasible, but who has the incentive to put in the time to do it?

To talk to the first though: what choice do we have? Why would AMD try to compete by doing it all again as a second party? It seems like, with Nvidia so dominant, AMD and literally everyone else should realize their incentive is to compete, as a group, against the current unquestioned champion. There needs to be some common ground that the humble opposition can work from. And, from what I see, Vulkan is that ground, and nothing else is remotely competitive or interesting.

I really appreciate your challenges, thank you for writing them out. It is real hard; there are a lot of difficulties starting afresh with a much harder-to-use toolkit than spiced-up C++ (CUDA) as a starting point. At the same time, I continue to think there will be a sea change, it will happen enormously fast, & it will take far less real work than the prevailing pessimist's view could ever have begun to encompass. Some good strategic wins to set the stage & make some common use cases viable, good-enough techniques to set a mold, and I think the participatory nature will snowball, quickly, and we'll wonder why we hadn't begun years ago.


Saying all the underdog competitors should team up is a nice idea, but as anyone who has seen how the standards sausage is made (or, indeed, has tried something similar) will tell you, it is often more difficult than everyone going their own way. It might be unintuitive, but coordination is hard even when you're not jockeying for position with your collaborators. This is why I mentioned the silver bullet part: a surface-level analysis leads one to believe collaboration is the optimal path, but that starts to show cracks real quickly once one starts actually digging into the details.

To end things on a somewhat brighter note, there will be no sea change unless people put in the time and effort to get stuff like Vulkan compute working. As-is, most ML people (somewhat rightfully) expect accelerator support to be handed to them on a silver platter. That's fine, but I'd argue by doing so we lose the right to complain about big libraries and hardware vendors doing what's best for their own interests instead of for the ecosystem as a whole.


I've mentioned this on other forums, but it would help to have some kind of easily visible, public tracker for this progress. Even a text file, set of GitHub issues or project board would do.

Why? Because as-is, most people still believe support for gfx1000 cards is non-existent in any ROCm library. Of course that's not the case as you've pointed out here, but without any good sign of forward progress, your average user is going to assume close to zero support. Vague comments like https://github.com/RadeonOpenCompute/ROCm/issues/1542 are better than nothing, but don't inspire that much confidence without some more detail.

