
It's frustrating that there's no pricing information. The tech looks cool and all, but without knowing how much it's going to cost there's no way to really evaluate it.


It's really surprising that they don't have a page with information ready to go live when the announcement happens. E.g. the Nova AI models they announced look neat, but the Bedrock page doesn't mention them at all and the page for them has links to non-existent documentation.


The National Science Foundation has been doing this for decades, starting with the supercomputing centers in the 80s. Long before anyone talked about cloud credits, NSF has had a bunch of different programs to allocate time on supercomputers to researchers at no cost, these days mostly run out of the Office of Advanced Cyberinfrastructure. (The office name is from the early 00s) - https://new.nsf.gov/cise/oac

(To connect universities to the different supercomputing centers, the NSF funded the NSFnet network in the 80s, which was basically the backbone of the Internet in the 80s and early 90s. The supercomputing funding has really, really paid off for the USA)


> NSF has had a bunch of different programs to allocate time on supercomputers to researchers at no cost, these days mostly run out of the Office of Advanced Cyberinfrastructure

This would be the logical place to put such a programme.


The DoE has also been a fairly active purchaser of GPUs for almost two decades now, thanks to the Exascale Computing Project [0] and its predecessor projects.

The DoE helped subsidize the development of Kepler, Maxwell, Pascal, etc., along with the underlying stack (NVLink, NGC, CUDA, and so on), either via purchases or by allowing grant-funded work to be commercialized by Nvidia. They also played matchmaker by helping connect private-sector research partners with Nvidia.

The DoE also did the same thing for AMD and Intel.

[0] - https://www.exascaleproject.org/


The DoE subsidized the development of GPUs, but so did Bitcoin.

But before that, it was video games, like Quake. Nvidia wouldn't be viable if not for games.

But before that, graphics research was subsidized by the DoD, back when visualizing things in 3D cost serious money.

It's funny how technology advances.


It was really Ethereum and other altcoins, not Bitcoin, that caused the GPU demand in 2021.

Bitcoin moved to FPGAs/ASICs very quickly because dedicated hardware was vastly more efficient; GPUs were only viable for mining from around Oct 2010. By 2013, when ASICs came online, GPUs only made sense if someone else was paying for both the hardware and the electricity.


As you've rightly pointed out, we have the mechanism, now let's fund it properly!

I'm in Canada, and our science funding has likewise fallen year after year as a proportion of our GDP. I'm still benefiting from A100 clusters funded by taxpayer dollars, but think of the advantage we'd have over industry if we didn't have to fight over resources.


Where do you get access to those as a member of the general public?


In Australia at least, anyone who is enrolled at or works at a university can use the taxpayer-subsidised "Gadi" HPC, which is part of the National Computational Infrastructure (https://nci.org.au/our-systems/hpc-systems). And I do mean anyone: I have an undergraduate student using it right now (for free) to fine-tune several LLMs.

It also says commercial orgs can get access via negotiation, and I expect a random member of the public could go that route as well, though there would be some hurdles to cross - it isn't really common for random members of the public to be doing the kinds of research Gadi was created to support. I expect it is much the same in the Canadian case. I suppose the argument is that without any gatekeeping at all, you might end up with all kinds of unsuitable stuff on the cluster, e.g. crypto miners and such.

Possibly another way for a truly random person to get access would be to get some kind of 0-hour academic affiliation via someone willing to back you up, or to enrol in a random AI course or something and then talk to the lecturer in charge.

In reality, the (also taxpayer-subsidised) university pays some fee for access, but it doesn't come from any of our budgets.


Australia's peak HPC has a total of: "2 nodes of the NVIDIA DGX A100 system, with 8 A100 GPUs per node".

It's pretty meagre pickings!


Well, one, it has:

> 160 nodes each containing four Nvidia V100 GPUs

and two, well, it's a CPU-based supercomputer.


I get my resources through a combination of servers my lab bought using a government grant and the Digital Research Alliance of Canada's (née Compute Canada) cluster.

These resources aren't available to the public, but if I were king for a day we'd increase science funding such that we'd have compute resources available to high-school students and the general public (possibly following training on how to use it).

Making sure folks didn't use it to mine bitcoin would be important, though ;)


I'm going to guess it's Compute Canada, which I don't think we non-academics have access to.


That's correct (they go by the Digital Research Alliance of Canada now... how boring).

I wish that wasn't the case though!


Yeah, the specific AI/ML-focused program is NAIRR.

https://nairrpilot.org/

Terrible name unless they low-key plan to make AI researchers' hair fall out.


The US already pays for 2+ AWS regions for the CIA/DoD. Why not pay for a region that is only available to researchers?


About two weeks ago I came across this tweet, from a PhD candidate just finishing up:

https://x.com/CharityWoodrum/status/1808313627864440930

"For Woody and Jayson Thomas. From the local universe to the first galaxies, the brightest moments in space and time occurred during our brief epoch together. That light is unquenchable."

She had gone back to school as an adult to study physics and was just finishing up her undergrad when her husband and child were swept away by a wave while walking on the beach.

She kept on with school and is about to finish her Ph.D. I just can't comprehend how. https://www.tucsonweekly.com/tucson/ua-doctoral-candidate-in...


For those like myself wondering how a regular wave could do this, the article says it was something colloquially termed a "sneaker wave." Like a rogue wave, but on the shoreline. It also sounds like they all got hit by the wave, and only Charity survived.

Edit: National Weather Service article on sneaker waves: https://www.weather.gov/safety/sneaker-waves

Apparently the cold water and other complications make things worse.


Thank you, never heard of this. It sounds terrifying.

> Sneaker waves appear suddenly on a coastline and without warning; generally, it is not obvious that they are larger than other waves until they break and suddenly surge up a beach. A sneaker wave can occur following a period of 10 to 20 minutes of gentle, lapping waves. Upon arriving, a sneaker wave can surge more than 150 feet (50 m) beyond the foam line, rushing up a beach with great force.

> The force of a sneaker wave's surge and the large volume of water rushing far up a beach is enough to suddenly submerge people thigh- or waist-deep, knock them off their feet, and drag them into the ocean

https://en.wikipedia.org/wiki/Sneaker_wave


The second one in this video [1].

[1] https://www.youtube.com/watch?v=84EQv6_91dU


My god, just 2 or 3 seconds later and the person filming this would've been swept away for good.


Woah. That helps visualize it, thanks.


Very common on the west coast… I recommend having any kids playing on an ocean beach wear a life jacket, and adults too, unless they are, say, very experienced at reading waves and swimming long distances in the ocean.


Someone is cutting onions


It's a terrible day for rain.


Are there APIs to get the iCloud sync into my own app? I'm all for iCloud syncing to my devices; I just want a way to also get a backup in a file, so if Apple decides to delete my account on a whim, I don't lose everything.


What account are you worried about Apple deleting? If they nuke your iCloud account your info is still on the device.


The fun thing about BlenderBIM is that it's IFC-native. (IFC is the 'Industry Foundation Classes' - a data model/structure for modeling buildings and their components, systems, and intangibles like construction schedules.)

BlenderBIM internally manages everything with the IfcOpenShell library - all of the data uses the Python interfaces of IfcOpenShell (which internally has a lot of C++) to keep the model state. Blender is more of a rendering backend and a nice UI for manipulating the state of the IFC model with IfcOpenShell - but basically anything you can do with the Blender GUI, you can pop open a shell and do the same thing by typing Python.
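
To give a flavor, here's a minimal sketch of the kind of thing you can type in that shell (assuming a reasonably recent ifcopenshell; the file names are made up):

    import ifcopenshell

    # Open an existing IFC model (the path is just an example)
    model = ifcopenshell.open("office_building.ifc")

    # Walk the same data BlenderBIM edits: every wall in the model
    for wall in model.by_type("IfcWall"):
        print(wall.GlobalId, wall.Name)

    # Write the model back out as a plain IFC file
    model.write("office_building_copy.ifc")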

This means you'll occasionally see some Blender operations that don't do what you expect to the model you're editing - there are ways to make state modifications in Blender that don't all get translated to the IFC data underneath, so things like selections or modifiers can sometimes surprise Blender users. (I think over time the list of things like this has gotten a lot smaller, and BlenderBIM is now pretty good about keeping the state of what's displayed in Blender in sync with what the underlying IFC model is storing.)

The main commercial player in this space is Autodesk Revit. There is a lot of thinking that perhaps Revit has reached a point as a platform where Autodesk can't keep building on it (i.e. it has so much tech debt that it's getting hopeless) - see https://letters-to-autodesk.com/

Autodesk has a number of other 3D modeling software packages, and I sometimes think that for their next generation of Revit they should consider the BlenderBIM approach and maybe build on top of Maya or one of their other offerings.


Has anything actually been moving in this space? From what I recall Autodesk had the US market bottled up, and IFC was really only being adopted in the EU.


I'd even be willing to pay the $99 a year; I just want the signature to last longer than a week, ideally forever. In years when I don't feel like updating the app, I won't pay the developer membership fee.


Uh, don't the $99/year certs give you signatures that last like a year?


I think the deploy-straight-from-Xcode-to-your-device builds have a pretty short lifetime (I remember it being a month around 2014 or 2015), not sure now. If you archive and build using a certificate from developer.apple.com, they last a year.


As someone with dozens of sideloaded iOS apps, all of my apps are working just fine, well over 6 months later. I always build and deploy without archiving.


Yeah okay - maybe things have changed then but I remember building an app for my boss in 2014 and it telling me it would last 30 days.


If you pay the $99, the cert doesn't expire for a year. It is only the free account that has the 7-day limitation.


Oh, awesome. I'm glad to discover I was wrong about that!


It's not so much that OPML is the interesting part here, it's that it's a file. A few weeks back Andrej Karpathy had a twitter thread[1] about blogging software and shared this link on 'File over App' - https://stephango.com/file-over-app - and files really are great for ecosystem interoperability. I can download the file using whatever tool is appropriate, store it however I want, and then upload it somewhere else using whatever tool is appropriate. I have the OPML export I took of my subscriptions from the day Google Reader shut down, and there's still a fighting chance that other services could actually import that file.

It's also worth noting that OPML is only the container format here. Agreeing on a container format is obviously important - we won't get very far on interop if we can't even agree on the container - but OPML is meant to be a generic 'outline' (tree) format, and conveniently RSS subscriptions (and folders) look like a tree.

I sorta expected that there would be a second standard that says "here's how you use this generic OPML container format to represent RSS feed subscriptions", but oddly that's actually included right in the OPML spec[2]. In fact, RSS subscriptions are the only application format defined in OPML - there's a 'type' attribute defined for the <outline> element, and if type is set to 'rss' then there's also a required xmlUrl for the feed and optional things like the html link for the blog and the version of RSS used. This is the data, and the part of the spec, that makes the actual subscription-list exchange work.
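
As an illustration of how little is needed on the consuming side, here's a sketch (Python standard library only; the feed URLs are made up) of pulling the xmlUrl out of each type="rss" outline:

    import xml.etree.ElementTree as ET

    # A tiny OPML 2.0 subscription list (feed URLs are made up)
    opml = """<opml version="2.0">
      <head><title>Subscriptions</title></head>
      <body>
        <outline text="Tech">
          <outline type="rss" text="Example Blog"
                   xmlUrl="https://example.com/feed.xml"
                   htmlUrl="https://example.com/"/>
        </outline>
      </body>
    </opml>"""

    # Folders are just nested <outline> elements; the feeds are the
    # ones with type="rss" and an xmlUrl attribute
    root = ET.fromstring(opml)
    for outline in root.iter("outline"):
        if outline.get("type") == "rss":
            print(outline.get("text"), outline.get("xmlUrl"))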

But again the only entry for 'type' defined in the OPML spec is 'rss'. If you want to use OPML as a container for something else, like Youtube subscriptions or Twitter followers, you of course can but you gotta find some way to get everyone to agree on how to interpret the 'type' you set for that <outline> element. And as far as I know, no one's done anything like that for any other domain.

So it'd be awesome if more domains defined 'type' fields and set out some specs so I can export my video streaming subscriptions or Amazon wishlist or whatever but without defining more 'type' fields OPML is really not any more interesting than a CSV of URLs.

[1] https://twitter.com/karpathy/status/1751379269769695601 [2] http://opml.org/spec2.opml


If someone wanted to extend OPML into another domain, even if they got others to agree on their proposed type value and the new attributes added to support that type, there's nothing to stop a collision with somebody else choosing the same attribute names.

There also is nothing to stop the author of the OPML specification from opposing the new type.

It would be far easier to create a new XML format.


This sentence is doing a lot of work: "Hypothetical S2 does a bit more to simplify the layers above – it makes leadership above the log convenient with leases and fenced writes."

It'd be awesome to have a bit more transactional help from S3. You could go a long way with 'only update this object if the ETags on these other objects are still the same'. I know AWS doesn't want to turn S3 into a full database, but there are some updates you just can't do without a whole second service running alongside to keep track of the state of your updates.
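
For the single-object case you can at least do an optimistic-concurrency loop today. A rough sketch (this assumes a boto3/botocore recent enough to expose S3's conditional-write parameters; the bucket and key names are made up):

    import boto3

    s3 = boto3.client("s3")
    bucket, key = "my-bucket", "state/manifest.json"  # made-up names

    # Read the current object and remember the ETag we saw
    obj = s3.get_object(Bucket=bucket, Key=key)
    etag = obj["ETag"]
    new_body = obj["Body"].read() + b"\n# appended entry"

    # Overwrite only if nobody has changed the object since we read it;
    # a concurrent writer makes this fail with 412 Precondition Failed
    # instead of silently losing an update.
    s3.put_object(Bucket=bucket, Key=key, Body=new_body, IfMatch=etag)

Checking the ETags of other objects as part of the same write is the part that still needs a coordinator on the side.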


Agreed, both Google Cloud Storage and Azure Blob Storage support preconditions. Azure even has leases. S3 is for better or worse the common denominator for systems layering on top of object storage.
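
For comparison, a minimal sketch of a guarded overwrite with GCS generation preconditions (assuming the google-cloud-storage client; bucket and object names are made up):

    from google.cloud import storage

    client = storage.Client()
    blob = client.bucket("my-bucket").blob("state/manifest.json")  # made-up names

    # Fetch metadata (including the current generation) and the contents
    blob.reload()
    seen_generation = blob.generation
    data = blob.download_as_bytes()

    # The upload only succeeds if the object is still at that generation;
    # otherwise the precondition fails and we can re-read and retry.
    blob.upload_from_string(data + b"\n# appended entry",
                            if_generation_match=seen_generation)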


One of the long-standing challenges in the federated identity space has been that most of the solutions are built around domain names, which are a pain for most users to register on their own. There's a sense that people would prefer email addresses as their identifiers, but without some server help that's hard to do. The WebFinger protocol works well for translating email addresses into something that could be used for federated data servers, but alas, most of the big email providers (à la Gmail) don't participate in WebFinger.
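
For context, a WebFinger lookup is just an HTTPS GET against the user's domain. A sketch using the requests library (the address and domain are made up):

    import requests

    # RFC 7033: resolve an email-style identifier to a JRD document
    # describing that account
    resp = requests.get(
        "https://example.org/.well-known/webfinger",
        params={"resource": "acct:alice@example.org"},
    )
    jrd = resp.json()

    # The "links" array is where services advertise profile pages,
    # ActivityPub actors, storage endpoints, and so on
    for link in jrd.get("links", []):
        print(link.get("rel"), link.get("href"))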

A while back Brett Slatkin and Brad Fitzpatrick built out a protocol called WebFist that could bring WebFinger to people whose email providers don't support it. It was a clever hack with DKIM - you emailed their webfist.org server with what you wanted your WebFinger info to be, and because Gmail signed the message with DKIM, anyone could verify it. The webfist.org server just proxied WebFinger requests into lookups for those signed emails. Even better, because it's just a signed email, you can treat it as a blob and have a pool of different resolvers do the proxying, kinda like a blockchain. I think there was a post from Brad somewhere estimating that the total data needed if everyone in the world used WebFist to store a blob was in the low 100s of GB, which is pretty manageable for a wider community to keep online.

I need to read up on DIDs, but it feels like DIDs mostly standardize the message format that would come back from something like WebFinger/WebFist; if WebFist were actually up and running, it could make WebFinger more widely available. (Alas, I think the webfist.org server has been shut down, but maybe they could flip it back on!)

https://www.onebigfluke.com/2013/06/bootstrapping-webfinger-...


I wish there were a good nonprofit 'infrastructure cooperative' that could provide some of these core services but with a governance structure that could be trusted. The place I most want it for is a domain name registrar, but DNS and mail servers would be good additional services.


You know, if you started one, I bet there would be a bunch of people here who would use it. I would be one of them (but I don't have the time to start such an endeavor).

Put up an Ask HN and see what happens.

