cmsparks's comments

Frankly, it's pretty difficult. Though I've found that the actor model maps really well onto building agents: an instance of an actor = an instance of an agent, and agent-to-agent communication is just tool calling (via MCP or some other RPC).

I use Cloudflare's Durable Objects (disclaimer: I'm biased, I work on MCP + Agent things @ Cloudflare). However, I figure building agents probably maps similarly well onto any actor style framework.
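As a rough sketch of the shape (plain Durable Objects API; the binding, agent, and tool names here are made up for illustration, not a real SDK):

    // One Durable Object instance == one agent instance. The runtime gives each
    // instance serialized execution and its own persisted state, which is what
    // makes the actor model such a nice fit.
    interface Env {
      // Assumed binding to another agent's Durable Object namespace (configured
      // in wrangler.toml in a real Worker).
      BILLING_AGENT: DurableObjectNamespace;
    }

    export class AgentActor {
      constructor(private state: DurableObjectState, private env: Env) {}

      async fetch(request: Request): Promise<Response> {
        const { message } = (await request.json()) as { message: string };

        // Each agent keeps its own conversation history in durable storage.
        const history = (await this.state.storage.get<string[]>("history")) ?? [];
        history.push(message);
        await this.state.storage.put("history", history);

        // Agent-to-agent communication is just an RPC to another actor instance,
        // addressed by name -- effectively a tool call.
        const peer = this.env.BILLING_AGENT.get(
          this.env.BILLING_AGENT.idFromName("billing-agent"),
        );
        const reply = await peer.fetch("https://agent/tool-call", {
          method: "POST",
          body: JSON.stringify({ tool: "lookup_invoice", args: { id: 42 } }),
        });

        return new Response(await reply.text());
      }
    }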


Should the people developing AI agent protocols be exploring decentralised architectures, using technologies like blockchain and peer-to-peer networks to distribute models and data? What are the trade-offs of relying on centralised orchestration platforms owned by large companies like Amazon, Cloudflare or NVIDIA? Thanks


That's more of a hobbyist thing I'd say. Corporations developing these things will of course want to use some centralized system that they trust. It's more efficient, they have more control over it, it's easier for average people to use, etc.

A decentralized thing would be more for individuals who want more control and transparency. A decentralized public ledger would make it possible to verify that your agent, the agents it interacts with, and the contents of their interactions have not been altered or compromised in any way, whereas a corporate-owned framework could not provide the same level of assurance.

But technically, there's no advantage I can think of for using a public distributed ledger to manage interactions. Agent tasks are pretty ephemeral, so unlike digital currency, there's not really a need to maintain a complete historical log of every action forever. And as far as providing tools for dealing with race conditions, blockchain would be about the least efficient way of creating a mutex imaginable. So technically, just like with non-AI apps, centralized architecture is always going to be a lot more efficient.


Good points. I agree that for most companies using centralised systems offers more advantages because of efficiency, control and user experience, but I wasn't arguing that decentralisation is better technically, just wondering if it might be necessary in the long run.

If agents become more autonomous and start coordinating across platforms owned by different companies, it might make sense to have some kind of shared, trustless layer (maybe not blockchain but something distributed, auditable and neutral).

I agree that agent tasks are ephemeral, but what about long-lived multi-agent workflows or contracts between agents that execute over time? In those cases transparency and integrity might matter more.

I don't think it's one or the other. Centralised systems will dominate in the short term, no doubt about that, but if we're serious about agent ecosystems at scale, we might need more open coordination models too.


My hunch would still be no; human agents are able to cooperate without needing to do everything in a global shared record, so I'd expect AI agents would as well. If you (or any other AI agent) feel the need to check that the AI agent did some task, you just verify it "manually", like add a verification step in the workflow so that your AI agent checks your bank account to verify that the other AI agent actually transferred the sum that they said, just like human-to-human interaction (and just like a non-AI automated workflow would do).
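Concretely, that verification step is nothing exotic, just another call in the workflow, something like this (all the names here are hypothetical):

    // Hypothetical post-hoc check: instead of trusting a shared ledger, the
    // workflow just asks the real system of record whether the money arrived.
    interface BankClient {
      getBalance(account: string): Promise<number>;
    }

    async function verifyClaimedTransfer(
      bank: BankClient,
      account: string,
      balanceBefore: number,
      claimedAmount: number,
    ): Promise<boolean> {
      const balanceAfter = await bank.getBalance(account);
      // The other agent claimed it sent `claimedAmount`; confirm it landed.
      return balanceAfter - balanceBefore >= claimedAmount;
    }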

But, that's just a guess. Maybe the combination of AI and automation adds something special to the mix where a global public ledger becomes more valuable (beyond the hobbyist community) and I'm just not seeing it.


MCP isn't static. It explicitly includes support for dynamically modifying tools, resources, etc. via its client notifications[0]. Sure, context is usually opaque to the server itself (unless you use the sampling feature[1]), but there's nothing preventing MCP clients/hosts from adjusting or filtering tools on their own.

[0] https://modelcontextprotocol.io/specification/2025-03-26/ser...

[1] https://modelcontextprotocol.io/specification/2025-03-26/cli...


Server notifications are a bad way of implementing semantic retrieval on tools. Then when would one update the tools? You can’t “trigger” an event which causes a tool change without some hacky workarounds.


>Then when would one update the tools? You can’t “trigger” an event which causes a tool change without some hacky workarounds

Tool calls can trigger tool changes. Consider an MCP server that exposes a list of accounts and tools to manage resources on those accounts:

1. MCP session starts; the only tools exposed to the client are `select_account` and `list_accounts`

2. MCP Client selects an account with `select_account` tool

3. MCP Server updates tools for the session to include `account_tool_a`. This automatically dispatches a listChanged notification

4. MCP Client receives notification and updates tools accordingly

IME this is pretty ergonomic and works well with the spec. But that’s assuming your client is well behaved, which many aren’t
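With the TypeScript MCP SDK the flow above looks roughly like this (assuming the tool()/enable()/disable() handles available in recent SDK versions; exact method names may differ between versions):

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    const server = new McpServer({ name: "accounts", version: "1.0.0" });

    // Step 1: at session start only the account-selection tools are visible.
    server.tool("list_accounts", async () => ({
      content: [{ type: "text", text: JSON.stringify(["acct_a", "acct_b"]) }],
    }));

    // Registered up front but hidden from tools/list until an account is selected.
    const accountToolA = server.tool(
      "account_tool_a",
      { action: z.string() },
      async ({ action }) => ({
        content: [{ type: "text", text: `ran ${action} on the selected account` }],
      }),
    );
    accountToolA.disable();

    // Steps 2-3: selecting an account enables the per-account tool, and the SDK
    // dispatches notifications/tools/list_changed to the client for you.
    server.tool("select_account", { account: z.string() }, async ({ account }) => {
      accountToolA.enable();
      return { content: [{ type: "text", text: `selected ${account}` }] };
    });

    await server.connect(new StdioServerTransport());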


A tool change can send a notification to the client, but the client chooses when to actually update the tools. This could take time and the LLM may not be aware of the new tool. I don’t think there is a concept of a “well-behaved” client since MCP is about giving flexibility of implementation.

I wouldn’t call this ergonomic. Alternatively, you could just notify the server when a user message is sent, and allow the server to adjust the tools and resources prior to execution of the agent (this is clearly different from the MCP spec).

On a separate note, what client are you using that supports notifications? I haven’t seen one yet.


Oof not a fun incident, this is my nightmare as someone who works on this type of stuff.

As an aside, GitHub’s security model for apps/integrations is extremely puzzling to reason about and enables a lot of foot guns. Add in the fact that it’s very obtuse to audit integrations (especially within an organization), and they become pretty scary to use sometimes.


I think this project runs on the QuickJS engine


If you want to try out Cloudflare Pages again, you aren't required to have GitHub integration. If you want to just upload and host some static assets you can use direct upload. You can basically just drag and drop/manually upload a zip file containing some HTML, JS, or other static assets and create a static site that way. (See: https://developers.cloudflare.com/pages/get-started/direct-u...)

(Disclaimer: I'm an engineer on Cloudflare Pages)


> "in the absence of a law that compels us to write software, which is unconstitutional btw"

This isn't really settled. Writing software is not always constitutionally protected speech, and Apple being compelled to write software would probably not constitute a violation of the First Amendment. Federal wiretap law can compel companies to make it easy for the government to get data via a warrant (which necessarily entails writing code to produce that data) and has been upheld in the past. Also companies are often liable for the code they write. Both of those are examples of when code is not considered speech.


The government compelling Apple to cause people's phones to search themselves (with no probable cause that the subjects of the searches committed a crime) would be facially unconstitutional under the fourth amendment, not the first.

Compelled speech would be an interesting argument against compelled writing of software, but is definitely the weaker one here.

Edit: Oh, I see in one of the other replies GP raised the first amendment. Just take this as a reply to the idea in general...


The government need not care how Apple complies with the law, which could merely state that cloud storage providers are liable for illegal material stored there by their customers, regardless of the cloud provider's knowledge. This would be catastrophic to cloud storage in general, of course, but given that strict liability is a thing, I don't see how such a law could be ruled unconstitutional.


Trying to work around the 4th like that might manage to make the law facially constitutional, but I'd be surprised if it made the searches conducted as a result of it valid.

By my understanding you have to avoid a "warrantless search by a government agent" to avoid violating the constitution. The "warrantless search" part is really beyond dispute, so it's the "government agent" part that is in question. In general "government agent" is a term of art that means "acting at the behest of the government", but I don't know exactly where the boundary lies. I'd be fairly surprised if a law that excused accidentally storing CSAM after a failed search, but didn't excuse accidentally storing it without having searched at all, didn't make the party doing the search a government agent. If you make the former illegal, cloud storage at all (scanning or not) is an impossible business to be in.


You already have the FBI partnering with computer repair shops to do dragnet searches of customer's hard drives for CSAM when they bring their computers in for repairs.


I'd argue the Fifth Amendment should apply to mobile phone handsets, but law enforcement would pitch a fit to lose those.


Section I of the Thirteenth Amendment reads: “Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.”


You heard it here first: all regulations are slavery!


There is a difference between regulations and compelled action. The government can make your ability to do X conditional on you also doing Y, but it generally can't just make you do Y.

The exceptions are actually quite few outside of conscription, eminent domain sales, and wartime powers.


That sounds right to me. The government has tremendous powers. Forcing people to write computer programs isn't one of them. (They could have saved a lot of money on healthcare.gov if they had that power!)


How does conscription fit into that picture?


It fits in if 5 out of 9 supreme court justices want there to be a draft.


> This isn't really settled.

I agree with that. Especially in recent years, it's nearly impossible to tell what is and isn't settled case law.

It would be an expensive battle for the taxpayers to compel Apple to write software, I think. They've tried and failed. It will just be more expensive next time.


> Federal wiretap law can compel companies to make it easy for the government to get data via a warrant

Not the same thing. The wiretap law cannot compel any company; only licensed telcos. These companies, in exchange for the license to provide such services to the general public, get some advantages, but have to agree to a set of limitations.

I have my doubts that Apple would be a holder of such a license that would make it subject to such regulations.


One note about the aero research rules in F1 which makes this post particularly interesting: there are limitations on the amount of CFD simulation time teams can use.


In 2017, Ars Technica did a deep dive into computation in Formula 1 (https://arstechnica.com/cars/2017/04/formula-1-technology/).

Some relevant quotes:

> For example, each Formula 1 team is only allowed to use 25 teraflops (trillions of floating point operations per second) of double precision (64-bit) computing power for simulating car aerodynamics.

> Oddly, the F1 regulations also stipulate that only CPUs can be used, not GPUs, and that teams must explicitly prove whether they're using AVX instructions or not. Without AVX, the FIA rates a single Sandy Bridge or Ivy Bridge CPU core at 4 flops; with AVX, each core is rated at 8 flops. Every team has to submit the exact specifications of their compute cluster to the FIA at the start of the season, and then a logfile after every eight weeks of ongoing testing.

> Everest says that every team has its own on-premises hardware setup and that no one has yet moved to the cloud. There's no technical reason why the cloud can't be used for car aerodynamics simulations—and F1 teams are investigating such a possibility—but the aforementioned stringent CPU stipulations currently make it impossible. The result is that most F1 teams use a somewhat hybridised setup, with a local Linux cluster outputting aerodynamics data that informs the manufacturing of physical components, the details of which are kept in the cloud.

> Wind tunnel usage is similarly restricted: F1 teams are only allowed 25 hours of "wind on" time per week to test new chassis designs. 10 years ago, in 2007, it was very different, says Everest: "There was no restriction on teraflops, no restriction on wind tunnel hours," continues Everest. "We had three shifts running the wind tunnel 24/7. It got to the point where a lot of teams were talking about building a second wind tunnel; Williams built a second tunnel."

With the new cost cap in F1 (https://www.autoweek.com/racing/formula-1/a35293542/f1-budge...) (which notably excludes driver salaries), it would be interesting to know how much these on-prem clusters cost to operate.
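Taking the AVX rating in that quote literally (i.e. flops per clock cycle per core), a back-of-the-envelope of how a cluster stacks up against the 25 teraflop cap (the cluster numbers below are made up):

    // Rated capacity under the quoted FIA scheme: cores x clock x flops/cycle
    // (4 without AVX, 8 with). The example cluster is invented for illustration.
    function ratedTeraflops(cores: number, clockGHz: number, usesAVX: boolean): number {
      const flopsPerCycle = usesAVX ? 8 : 4;
      return (cores * clockGHz * 1e9 * flopsPerCycle) / 1e12;
    }

    // A hypothetical 2,000-core cluster at 2.6 GHz with AVX enabled:
    console.log(ratedTeraflops(2000, 2.6, true));  // ~41.6 TFLOPS, over the 25 TFLOPS cap
    // The same cluster with AVX disabled (and declared as such to the FIA):
    console.log(ratedTeraflops(2000, 2.6, false)); // ~20.8 TFLOPS, under the cap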


To level the field even more, I think the FIA should require teams to release the design of their computer hardware after X time. That way, investments by one team on improving the system architecture spread to teams with lower budgets after a while.

Also, I didn’t find it in the article, but I guess they have programmers who can work for months to speed up their software by a few percent.


>> to level the field even more, I think the FIA should require teams to release the design of their computer hardware after X time.

But that is Not what Formula One is about... it is Not a Spec series where the cars are equal to each other. It is a competition where each team builds their own race car to compete against the other iterations of race cars built by the opposing teams. It is Not meant to be fair or equitable. We have Indycar and NASCAR for that.

Ditto with the drivers: Is Max or Lewis comparable to say a Mazepin or even a Hulkenberg? No they are Not.

It's a Spectacle, it's a Circus... that is what F1 is about. And I tell you, as a racer there is nothing else that is its equal in terms of pure audacity both from a standpoint of driving talent and car performance.


F1 is definitely trying to make the teams and their engineering more similar than different; why do you think the whole regulation part exists even? [1] If they were allowed to build whatever they want, F1 would look very different than it does today.

F1 (FIA really) has been using regulation to improve the sport's safety, but lately they have also used regulation to control how much each team spends on engineering, both money-wise and time-wise. This is to make things more equal between the teams.

- [1] - https://en.wikipedia.org/wiki/Formula_One_regulations#Techni...


>> F1 is definitely trying to make the teams and their engineering more similar than different, why do you think the whole regulation part exists even?

I agree, especially under the new owners. And for sure the cars are built according to each team's interpretation of the rules (which are of course subject to scrutineering). But that still leaves massive room for innovation.

Lewis is sitting in the same cockpit as Bottas is... their results are frequently vastly different due to their individual interpretation of events.


>each Formula 1 team is only allowed to use 25 teraflops

That ... doesn't make much sense, honestly.


The problem is the sloppy use of technical terms. MIPS means "millions of instructions per second" so the "p" is "per" and the "s" is "second". So it is natural for people to use FLOPS in the exact same way, but it is more correct for this to be "FLoating point OPerationS" where the "s" is used to indicate a plural.

That makes MIPS the equivalent of power (Watts) and FLOPS the equivalent of energy (Joules). In that case, limiting each Formula 1 team to a maximum of 25 teraflops of computation does make sense.

If instead you use teraflops as the equivalent of "trillion floating point operations per second" as many people do then it indeed makes less sense.


My thoughts exactly.

But then, the article explicitly states that they are talking about "trillion floating point operations per second".

It's probably just a mistake from the journalist.


Strategy:

Leak CAD model, get other people to do simulations for free!


Unfortunately doing CFD is not a question of running ./cfd carmodel and waiting for a day. You need to ensure you are correlated with real life i.e. the wind tunnel.


"Accidentally" leak the wndtnl-cor-current (1).csv file as well :).


Seems you mean an attempt at open-source F1 car development...

Plausible? With insufficient contributors, pace of development would be too slow.

With many, the big problem is sorting/selecting the most useful insights from the lot, but then that is probably not unlike any large org enterprise.


You can be certain that Williams' competitors will have a very close look at this leak. The regulations were written with such loopholes in mind.


Why would you study one of the worst cars on the grid? For a laugh at the Christmas party?


If Williams outsourced their CFD with that leak, they might gain an undue advantage over teams like Haas or Alfa. These teams compete over several million in prize money.


Even a bad car might have a lesson or two in it


It's probably still a lot faster than a Volkswagen Golf. More like a wild spin on New Year's...


It's not a leaked model, it was a 3D model grabbed from some mobile game...


Ironically, the allowance for CFD time is scaled based on the reversed world championship points order, and Williams was dead last, so they should have the most CFD time of any of the F1 teams.


Interesting point, which somewhat implies that Williams should have the best aerodynamics by now. But clearly that is still not the case. So I am quite skeptical whether CFD time actually has the effect it aims to have. We all know that knowledge you build up over time as a developer pays off in the long term. So even though teams like Mercedes may have less time for CFD, they have the knowledge base built up over time, which they use heavily.


Also, keep in mind that this year the rule is only in a demo mode. The first team has 90% and the last team has 112.5% of the dedicated time. From next year this will change to 70% and 115%. (The time for the teams in between is defined in steps of 2.5% this year, and 5% next year.)

https://www.formula1.com/en/latest/article.how-f1s-new-slidi...
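In other words, the scale is linear in championship position. Roughly (assuming 10 teams and the percentages above):

    // Sliding-scale aero testing allowance per finishing position (1 = champion).
    // This year: 90%..112.5% in 2.5% steps; next year: 70%..115% in 5% steps.
    function allowancePercent(position: number, firstPct: number, stepPct: number): number {
      return firstPct + (position - 1) * stepPct;
    }

    console.log(allowancePercent(1, 90, 2.5));  // 90    (this year, P1)
    console.log(allowancePercent(10, 90, 2.5)); // 112.5 (this year, P10, e.g. Williams)
    console.log(allowancePercent(1, 70, 5));    // 70    (next year, P1)
    console.log(allowancePercent(10, 70, 5));   // 115   (next year, P10)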


Exactly. CFD time does not necessarily correlate with effectiveness. The saying "all models are wrong but some models are useful" definitely applies, and if you make incorrect assumptions your CFD will not be that useful.


That rule was only implemented this year, so there is no data on that yet.


The limit is in FLOPS but CFD is memory bound. To get round the limit, AMD made custom CPUs with restricted floating point performance to allow quadrupling the number of cores.

https://www.reddit.com/r/formula1/comments/g4sboe/custom_cpu...


That was a while back. The rules have since been updated to be more flexible. AMD now has a multi-year partnership with Mercedes F1.


And now there are some CFD solvers that can use the GPU effectively


Actually, the limit is in TFLOPs, and the use of CFD is audited.


How do they actually check the total hours of CPU usage?


Each team has to submit an audit of CFD runs at 8 week intervals throughout the year. An FIA inspector can also turn up on premise to review simulations run. The full details are in Appendix 8 of the sporting regulations [0].

[0] https://www.fia.com/sites/default/files/2021_formula_1_sport...


Is that limitation to reduce the cost of competing for those who don't have big GPU farms?


Everyone has tons of resources / big GPU farms / etc in F1. It's one of many limitations to stop the cost of being competitive from going to infinity.

Typical contender teams spend $300m/year.


As of this year, the budget cap for the teams is set at $145m/year.


Yes, but that comes with a pretty heavy asterisk - driver salaries are not included in the cap for example (which will be extremely significant for the top teams), nor is marketing. Still, I think it's a step in the right direction for the sport - having a cost cap in place now lays the groundwork for tighter restrictions in the future.


What is a typical team? Biggest teams spend $450m, smallest spend $130m. The resource difference is very real, and the mean hides that.


As another comment mentions above, there are now budget caps, which affect every decision a team makes. For instance, one of the Mercedes drivers had a really nasty crash recently and the cost to build him a new car is putting budget pressure on Merc.


This reminds me of the invented sport Paced Badminton. It’s badminton, and also the players have pacemakers and are only allowed a fixed number of heartbeats per match.


IIRC it was mostly to reduce overall expenditure on aerodynamics. Before FIA restricted resources, teams would use wind tunnels 24/7, up to 70 days per year. This was crazy expensive so the restrictions were implemented (and they happened to include CFD limits too).


How much did wind tunnels cost to build and run? I'm surprised the teams (or at least the parent company) didn't have their own dedicated wind tunnel.


Most teams have their own dedicated wind tunnels and they are fairly expensive installations. As an example, the artificial lake you see in front of the McLaren Technology Center at Woking [0] is the cooling liquid reservoir for their wind tunnel.

Check out this 8 part YouTube series from Sauber about their wind tunnel installation: https://www.youtube.com/watch?v=RBVgwpYUp18

These are big industrial installations which need highly specialized staff to run them. Even if the facility exists, keeping the "wind on" costs a lot of money per hour.

Ironically, as a response to the newly introduced cost cap measure, the teams are building new facilities like crazy right now, with many teams building a bigger wind tunnel so they can have it in the books before the accounting for the cost cap starts.

[0] https://en.wikipedia.org/wiki/McLaren_Technology_Centre [1] https://www.youtube.com/watch?v=RBVgwpYUp18


A car-sized wind tunnel that can do 250 mph will be moving 400 kg of air each second and will require a 2.5 megawatt fan. That costs ~$500k in electricity annually if you run it for half of each working day. That motor from China costs $30k. Imagine it's the same again for the fan, the same again for wiring and control systems, and double that for fiberglass moulded side panels, foundations etc. We're talking £150k, and that's for a very budget wind tunnel.
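Sanity-checking that electricity figure (my assumptions: the fan runs 4 hours a day, 5 days a week, 52 weeks a year):

    // Rough check of the "~$500k in electricity annually" figure.
    const fanPowerKW = 2500;                     // 2.5 MW fan
    const hoursPerYear = 4 * 5 * 52;             // "half a working day", year round = 1,040 h
    const annualKWh = fanPowerKW * hoursPerYear; // 2,600,000 kWh
    // At industrial rates of roughly $0.15-0.19/kWh that's about $390k-$495k,
    // so ~$500k/year is the right ballpark.
    console.log(annualKWh);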


Working back from the $500k number gets me to 19 cents per kilowatt hour which seems a trifle high for industrial usage with a predictable 2.5MW load.

https://www.gov.uk/government/statistical-data-sets/gas-and-...

It looks like 14-17 cents per kilowatt hour is about the going rate as of Q3 - Q4 2020.

I’m a little surprised this doesn’t lead to more such facilities being strategically located based on availability of cheap power, e.g. by building near to hydro, and where power costs can be ~2c/kWh. See: https://www.seattletimes.com/business/technology/sunday-buzz...

I’m forced to conclude that the power costs are a marginal expenditure versus the other costs involved in running such a facility, and that there are benefits from having R&D, testing and validation, and production all on the same campus?


What? How is that even enforced?


The onus is on the teams to prove it. If you cheat in Formula 1 and they feel like punishing you, a fine of one hundred million dollars is not unprecedented.


And nobody uses parallel construction?


Guessing what you meant here if it wasn’t obvious:

A designer could have their own off-the-books compute cluster where they test out new ideas. Then only run the good ones on the official system.


The difficulty of policing parallel construction was one of the key arguments against the cost-cap, IIRC.


Honestly, even without outright cheating, a manufacturer team has so many advantages it's kind of pointless.

Imagine a scenario where a brake duct needs to be redesigned to account for some change in regulation or a performance tweak. At someone like Mercedes the conversation would probably start with ok, let's dig out all of the CFD we did for this when we designed this in 2007, let's also grab the data on the changes we made on the GP2 last year, also weren't the LeMans team doing some work on this last month? From there they would be in a much better place to identify the points where they have to concentrate their efforts without expending a minute of new CFD time.

At a small new entrant team none of this data is available to them.


I think Mercedes even offload some engineering work to the real (non-British) Mercedes too - that's simply not possible for a team like Haas


Seems like a lot of this data is sourced from OpenElections[0][1]? Not sure what makes this better than OpenElections' data, especially considering OpenElections seems to have done a large portion of the difficult wrangling work...

There should definitely be better attribution for this data.

[0] https://twitter.com/derekwillis/status/1361508657154961408

[1] https://github.com/openelections


A portion of it was sourced from openelections, yes. These were attributed in the PRs, but not in the README for the repository. We've corrected this oversight in attribution and issued an apology.


"A lot" of the data wasn't sourced from Open Elections. Derek's claim is not true. As far as I know one user used Open Elections data for a portion of their total contributions (and attributed it in their PR).

"Many" of the contributors (myself included) used primary sources for their data.

The one contributor that used OE data cited it and it was a small portion of their overall contributions.


I have a similar issue with setting up my eGPU with Bootcamp. Apparently the macOS bootloader messes with the setup of a ton of devices so what I have to do is plug in USB C devices before boot, then plug in my thunderbolt devices in the bottom left port at a specific point during boot, then plug in USB-A devices after login. Anything else crashes the machine.

Admittedly my use-case is niche, but I wouldn’t be surprised about similar things happening with other devices.


There’s a lot of distrust from many southeast Asian nations due to the conflicting territory claims in the South China Sea as well.

