A new open source UI toolkit to facilitate LLM-generated UIs. A2UI is built on the A2A protocol and allows an A2A agent to send interactive components instead of just text, using a high-level, framework-agnostic format that can be rendered natively on any surface (starting with examples for web and mobile).
- Based on A2A
- Streamable JSON Lines (JSONL)
- Truly Framework-Agnostic
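To make the "Streamable JSON Lines" point concrete, here is a minimal sketch of how a renderer could consume such a stream. The component names and fields below are hypothetical illustrations, not A2UI's actual schema:

```python
import json

# Hypothetical JSONL stream an agent might send: each line is one
# self-contained JSON message describing a UI component.
# (Field names here are illustrative, not A2UI's real format.)
stream = """\
{"component": "text", "props": {"value": "Choose a seat:"}}
{"component": "button", "props": {"label": "Window", "action": "pick_window"}}
{"component": "button", "props": {"label": "Aisle", "action": "pick_aisle"}}
"""

# Because every line is valid JSON on its own, a renderer can act on
# each component as it arrives instead of waiting for the full payload.
components = [json.loads(line) for line in stream.splitlines()]
for c in components:
    print(c["component"], c["props"])
```

This incremental property is what makes JSONL a good fit for streaming LLM output: the UI can start rendering before generation finishes.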
Would be nice to have better resolution for the sample video on YouTube (https://www.youtube.com/watch?v=FtOQddvp_U4), 64 million times sharper, but the video quality is only 240p :)

You're probably being snarky, but that completely disregards compression. The artifacts on the video are atrocious. An uncompressed 480p stream is worlds different from a compressed 480p stream.
> How can 14 million pixels be 64 million times more pixels?
It’s voxels, not pixels. Voxels are 3D objects, so to be clear, this particular advancement isn’t 64 million times better than the best that came before (64 million ≈ 400 in each linear dimension, cubed). Think 400 × 400 = 160,000 times more pixels per slice, and 400 times as many slices.
Also, the comparison is to a “typical clinical MRI for humans”: human brains not only have billions of neurons, but clinical scans also tend to have relatively few slices. There’s little point in having people spend hours in a machine if a faster scan with fewer slices is good enough.
To be clear I used the term "pixels" in describing the characteristics of the final images produced, which are a series of stacked 2D images comprised of pixels.
But, I don't think it matters what we label the units we're counting, as long as we count them accurately.
The paper you've referenced states:
> field-of-view = 12 × 12 × 24 mm and matrix size = 200 × 200 × 400 giving an image with (60μm)[cubed] isotropic voxels
The "64 million times better" article states a resolution of 5μm voxels (for one aspect of the imaging). The paper the article references, by contrast, states 15μm when specifically comparing to MRI, and claims 1,000x improvement.
Disambiguating what is meant by "resolution" is a common problem, as it can refer either to the length along each axis or to the overall count obtained by multiplying the axes together.
Conservatively, I'm going with 12x better resolution with the new technique when comparing apples to apples, or, to hype it reasonably: 5μm cubed vs 60μm cubed = a difference of 1,728 times more voxels in the same volume.
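The apples-to-apples comparison above works out as follows:

```python
# Comparing 5 um isotropic voxels to 60 um isotropic voxels
# over the same volume.
linear_ratio = 60 / 5             # 12x finer along each axis
voxel_ratio = linear_ratio ** 3   # 12^3 = 1728x more voxels in the same volume
print(linear_ratio, voxel_ratio)  # 12.0 1728.0
```

A 12x per-axis improvement and a 1,728x voxel-count improvement are the same fact stated two ways, which is exactly why "resolution" claims need the disambiguation noted above.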
Another approach is to use Database Lab (https://gitlab.com/postgres-ai/database-lab). Our tool allows deploying disposable Postgres databases in seconds using a REST API, CLI, or GUI. The difference is that we provide clones with snapshots of full-sized data. For example, you may have dozens of clones of your production databases (with masked sensitive data) to use as staging servers, for database migration verification, and for tests, all of it provisioned in a couple of seconds.
This sounds like a different problem, not a different approach: "in seconds" is only good for integration tests, and the topic is about (sub-second) unit tests.
Another approach is to use thin clones of Postgres to create disposable databases for fast iterations.
We've built the Database Lab tool on top of CoW file systems (ZFS, LVM); it can provision multi-terabyte Postgres instances in seconds. The goal is to use such clones to verify database migrations and to optimize SQL queries against production-size data. See our repo: https://gitlab.com/postgres-ai/database-lab.
Joe running in Slack is indeed a way to simplify and speed up the SQL optimization workflow for developers, as it takes seconds to get initial query execution plans and optimization recommendations.
>... With a Slack interface, so you don't need to bother with connection, provisioning, credentials and decommissioning.
It's worth noting that Joe is one use case of Database Lab (https://gitlab.com/postgres-ai/database-lab). You can still use Database Lab features, such as thin-clone provisioning of production-sized databases and fast data-state resets, through the Database Lab client CLI (https://postgres.ai/docs/database-lab/6_cli_reference) for SQL optimization without the Slack integration. If a user has sufficient access to the data, they can provision a thin clone with the client CLI and use psql to work with the clone.
Also, we plan to add support for recommendations and statistics to the CLI and REST API, in addition to supporting various messaging platforms in the future.