It depends on the layer; some layers may be able to take advantage of how the data is persisted. For example, if you use Avro or Protobuf, the decoder will handle it for you. If that's not the case, you have to implement the migration yourself. There is a paper[1] on this subject called "Online, Asynchronous Schema Change in F1", which explains how to implement it.
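The "decoder handles it" case works roughly like Avro's schema resolution: the reader's schema supplies defaults for fields the old writer never knew about. A toy sketch of the idea (the schema structure and `migrate` function here are illustrative, not the real Avro library API):

```python
# Toy sketch of reader-schema migration, in the spirit of Avro's
# schema-resolution rules. Names and structure are invented for
# illustration; real Avro/Protobuf decoders do this for you.

NEW_SCHEMA = {
    "fields": [
        {"name": "id", "default": None},
        {"name": "email", "default": ""},     # added in schema v2
        {"name": "plan", "default": "free"},  # added in schema v3
    ]
}

def migrate(record, schema):
    """Fill in fields the old writer didn't know about, using defaults."""
    return {
        f["name"]: record.get(f["name"], f["default"])
        for f in schema["fields"]
    }

old_record = {"id": 42}                 # persisted under the v1 schema
print(migrate(old_record, NEW_SCHEMA))  # {'id': 42, 'email': '', 'plan': 'free'}
```

When the persistence layer can't do this (e.g. raw JSON blobs with no schema registry), you end up writing this kind of migration logic by hand, which is where the F1 paper's online approach comes in.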
I like the process that goes into these "imagine the architecture of AGI" articles. It's all hypothetical, but it's really fun.
But it's a missed opportunity if you don't embed LLMs in some of the core modules -- and highlight where they excel. LLMs aren't identical to any part of the human brain, but they do a remarkable job of emulating elements of human cognition: language, obviously, but also many types of reasoning and idea exploration.
Where LLMs fail is in lookup, memory, and learning. But we've all seen how easy it is to extend them with RAG architectures.
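The RAG extension is conceptually simple: retrieve relevant text, prepend it to the prompt. A minimal sketch, where the "embedding" is just a toy bag-of-words overlap score standing in for a real vector index and embedding model:

```python
# Minimal sketch of the RAG idea: retrieve the most relevant document,
# then splice it into the prompt. The scoring here is a deliberately
# dumb word-overlap stand-in for a learned embedding + vector search.

def score(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def retrieve(query, docs, k=1):
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "The warehouse inventory system runs nightly reconciliation.",
    "Employee onboarding requires a signed NDA.",
]
query = "When does the inventory system reconcile?"
context = retrieve(query, docs)[0]
prompt = f"Context: {context}\n\nQuestion: {query}"
print(prompt)
```

The LLM never "learns" the fact; the lookup happens outside the model and the result rides along in the context window.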
My personal, non-scientific predictions for the basic modules of AGI are:
- LLMs to do basic reasoning
- a scheduling system that runs planning and execution tasks
- sensory events that can kick off reasoning, but with clever filters and shortcuts
- short term memory to augment and improve reasoning
- tools (calculators etc.) for common tasks
- a flexible and well _designed_ memory system -- much iteration is required to get this right, and I don't see a lot of work being done on it, which is interesting
- finally, a truly general intelligence would have the capability to mutate many of the above elements based on learning (LLM weights, scheduling parameters, sensory filters, and memory configurations). But not everything needs to be mutable; many elements of human cognition are probably immutable as well.
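The modules above could compose into something like a sense-filter-reason loop. A toy sketch, with every component stubbed out (the filter threshold, the tool list, and the `reason` template are all made up; a real system would put an LLM call where the template is):

```python
# Hypothetical composition of the modules listed above: sensory events
# pass a cheap filter, reasoning draws on short-term memory and tools,
# and results feed back into memory. All components are stubs.

from collections import deque

short_term_memory = deque(maxlen=5)   # recent context for reasoning

def sensory_filter(event):
    return event["salience"] > 0.5    # clever-filter placeholder: drop noise

def reason(event, memory, tools):
    # stand-in for an LLM call; here just a template string
    return f"plan for {event['kind']} given {list(memory)} using {tools}"

def step(event, tools=("calculator",)):
    if not sensory_filter(event):
        return None                   # shortcut: no reasoning triggered
    plan = reason(event, short_term_memory, tools)
    short_term_memory.append(event["kind"])
    return plan

print(step({"kind": "alarm", "salience": 0.9}))
print(step({"kind": "background hum", "salience": 0.1}))  # filtered out
```

The last bullet (mutability) would amount to letting learning rewrite the filter threshold, the tool list, or `maxlen` itself.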
I like to think we could quickly create a next-level AI (maybe AGI?) if we simply model it on the Pixar movie "Inside Out". The little characters inside the girl's brain are different LLMs with different biases. They follow a kind of script that adapts to the current environment. They converse with each other and suggest to the girl what she should do or say.
This sounds a lot like the mixture-of-experts architecture, and the current best-performing language models (GPT-4, mixtral-8x7b) already use this architecture.
That's not really how MoEs work. The experts never directly interact with each other. A router (gating network) takes the input, directs each token's inference to one or more expert sub-networks, mixes their outputs, and continues. The analogy would be closer to a "swarm of agents". (There are a handful of names for this approach; I think "swarm" is catching on the most.)
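The routing can be sketched in a few lines. This is a toy top-k gating layer (simplified from how real MoE transformers do it per-layer, per-token): the router scores the experts, only the top scorers run, and their outputs are mixed by renormalized gate weights -- note the experts never see each other's outputs.

```python
# Toy top-k MoE gating: a router scores experts for an input vector,
# only the top-k experts execute, and their outputs are blended by
# renormalized softmax weights. Experts are independent functions.

import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_layer(x, experts, router_weights, k=2):
    # router: a linear score per expert
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in router_weights]
    gates = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: gates[i], reverse=True)[:k]
    z = sum(gates[i] for i in top)    # renormalize over selected experts
    return sum(gates[i] / z * experts[i](x) for i in top)

experts = [lambda x: sum(x), lambda x: max(x), lambda x: min(x)]
router_weights = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
print(moe_layer([2.0, 1.0], experts, router_weights))
```

A "swarm of agents" setup would instead feed one agent's full text output into another agent's prompt, which is the interaction MoE deliberately doesn't have.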
One important thing you left out: the ability to reproduce and thus "evolve" naturally, and at scale, to essentially keep improving its own brain to the point where it outpaces current human researchers in self-improvement. If not reproduce, maybe reincarnate itself as version 2.0, 3.0, etc...
Yeah, I guess I was heading in that direction with the last point. Earth organisms have a separation between lifetime learning (brain modification) and genetic evolution, but, for AGI, these could be combined into one, or further separated into three or more methods of goal-directed modification.
Delfina | $65,000 - $170,000 | Full time | SF or Remote | Senior Software Engineer
We’re looking for an experienced frontend software engineer to help us build the apps and systems that run Delfina Care, our intelligent pregnancy care platform. Delfina is a fast moving startup, and we have an incredible team of engineers building an integrated, ML-driven platform to make every pregnancy safer.
In this role, you will be the technical lead for our frontend development team. You will contribute directly to our codebase as a developer. And you will provide technical guidance, mentorship, and feedback to the rest of the frontend team.
A successful candidate will be able to jump immediately into writing code for either our React or Flutter codebase, with the expectation that they can mentor engineers in either frontend technology.
Would anyone knowledgeable about the field update their priors about whether we’ll see commercial fusion in the next 30 years, after seeing these results? If not, is there a big milestone we’re waiting for? Or will fusion advancement be a slow grind with many small improvements over decades?
I'm not an expert but I've been following the field for a while. It's telling that negligible venture capital is pursuing this route to commercial fusion, and the only cheerleading for it comes from DOE lab press releases. That's because the NIF is a thermonuclear bomb simulator developed by a lab tasked with both thermonuclear bomb development and also developing a portfolio of civilian applications for its technologies. Even if the NIF were to break even on the entire power plant package in theory, harvesting energy from fast fusion neutrons is hard enough in magnetic confinement designs without them pulsing like a bomb as they do in ignition designs.
Meanwhile the VC money is quietly piling into tokamak and stellarator magnetic confinement designs, driven by high expectations from real breakthroughs in ReBCO tape manufacturing technology. These superconducting tapes can be manufactured like semiconductors and can develop magnetic fields that were previously impossible, which is a key manufacturability enabler in a design whose path to commercialization is far better de-risked overall. There are still concerns with the durability of equipment needed to capture the neutrons in these designs too, but ReBCO tapes were the real prior changer.
Funding is starting to kick in for private laser fusion attempts. Over the past couple decades, lasers have advanced even more dramatically than superconductors.
Currently, about $3B per year is invested in fusion, while about $6,000B is spent on oil subsidies. That's just to show how little we spend on fusion. Any decent increase in spending would really help speed up the process. I think that's something we should all be promoting!
I don't see this as dealing with the considerable obstacles to inertial fusion. In particular: cost of lasers, size of the system with survivable final optics, cost of manufacturing the targets, and targeting of moving targets with sufficient accuracy.
The big milestone is construction materials that can withstand the neutron flux for long enough -- it's about two orders of magnitude higher than in fission reactors.
This is the last unknown in the equation. All the others are already known from the achievements of the last few years.
Materials research is one of the primary targets of ITER.
If good-enough materials aren't found fast enough, we'll need to use cleaner reactions like proton-boron fusion, which require temperatures an order of magnitude higher, so a practical device will be a few times larger (because of x-ray losses, proportional to the surface area of the plasma configuration).
ITER will only operate for a few weeks total at full power. It's not intended for materials development. For that, a Fusion Nuclear Science Facility (FNSF) would be needed.
You don't understand. ITER will RESEARCH how existing materials hold up in a real fusion reactor, and gather the parameters of a real fusion reactor, so other science facilities will have benchmarks.
ITER is fundamentally unable to replicate the conditions that materials will be subjected to in an actual commercial fusion reactor. It cannot achieve the same cumulative neutrons dose that a real reactor can experience. It will not be able to answer the questions that need to be answered to prove out the materials for first walls or blankets, and it will not be able to establish reliability metrics for these structures.
For this reason, there has long been a call for a FNSF. This facility is likely to be needed to establish designs for components that would go into the putative successor to ITER (DEMO).
ITER fails in at least two ways. First, the intensity of neutron radiation at the first wall is far too low for a viable commercial reactor. It cannot simulate the heat load a commercially viable breeding module would encounter. Second, ITER cannot operate for more than a few weeks, so it cannot simulate the integrated radiation load a commercial first wall would have to be able to withstand. It also cannot operate with enough blanket modules, for long enough, to move the designs down experience curves for reliability growth to occur so they are sufficiently robust for a commercial reactor (this is a huge looming problem, as they will be very difficult to repair.)
Abdou at UCLA has been beating the drum for a FNSF to actually address these issues. He's been beating this drum for DECADES.
> the intensity of neutron radiation at the first wall is far too low for a viable commercial reactor
Source? Proof? Sorry, to me this looks like just your opinion.
> Second, ITER cannot operate for more than a few weeks
This just isn't important at all right now. That's what I meant when I said you don't understand the physics.
- NOWHERE on Earth is it possible to recreate the exact radiation environment of Jupiter's orbit for YEARS, yet radiation-hardened computing environments for space probes need to be tested.
What is actually done? After the first probes measured the parameters of the environment, test benches were built on Earth, consisting of a few throttleable sources that produce an approximate spectrum very much like the one near Jupiter, but that can deliver a year's dose in a few hours and can easily be switched off to manipulate the samples under test.
So now I even know people who have handled exposed chips and run real-world software on them, and the real computers on Jupiter/Mars missions have worked much longer than the missions required (BTW, the first samples tested on Earth were not reliable).
Delfina [https://delfina.com/] | Software engineer (junior or senior, back or front or fullstack) | Full time | Remote (Anywhere)
We are building intelligent pregnancy care for healthier moms and babies. Our motivating challenge is the fact that in the United States, healthcare outcomes for pregnancy are significantly worse than in other similarly developed countries. We believe our technology-enabled solutions will help doctors scale, make better use of data, and, in the end, deliver better care to their patients. We are looking for people who believe in our mission to join our team!
We are looking for both junior and senior level software engineers, and we're open to backend, frontend, or fullstack positions. It will help if you have experience in one or more of:
- Flutter for iOS and Android or React
- Python (FastAPI)
- PostgreSQL, GraphQL
- GCP, GitHub Actions
Experience in healthtech, pregnancy care, machine learning, cloud operations, and design are all valuable as well!
Delfina offers competitive benefits, including health, dental, and vision coverage, unlimited PTO, and 10 weeks fully paid leave for parents of all genders and any parental event including adoption.
I love the direction Supabase is taking; finding the building blocks of modern applications (database, auth, functions, presence, realtime subscriptions), making them easy to use, and then sharing the source code. I’ve learned a ton just from cruising around supabase GitHub.
Can you say which of these new components will be open sourced? There are some other features (e.g. function hooks) that are also closed-source at the moment. Is Supabase heading for an “open core” model?
> finding the building blocks of modern applications (database, auth, functions, presence, realtime subscriptions), making them easy to use, and then sharing the source code.
Great observation!
> I’ve learned a ton just from cruising around supabase GitHub.
Glad to hear it!
> Can you say which of these new components will be open sourced?
All of these components are open source and licensed under Apache License v2.0.
> There are some other features (e.g. function hooks) that are also closed-source at the moment.
I worked on the initial implementation of function hooks. We've actually already open sourced both the client (see: https://github.com/supabase/supabase/tree/88bcef911669595428...) and the pg_net extension it requires (see: https://github.com/supabase/pg_net). I think we've yet to open source the SQL commands needed to create the schema, functions, etc. I'll talk to my team and we'll open source those too.
> Is Supabase heading for an “open core” model?
I don't think so. We want to continue to open source our projects under either MIT (client libs) or Apache License v2.0 (server libs).
These are evocative images. I love a bunch of them! Knowing that this model was trained on a huge corpus of existing images makes them feel a bit like the output of a visual search engine -- finding relevant pieces and stitching them together. But it's more than that, because the stitching happens at different levels. They are often thematically and aesthetically cohesive in a way that feels intelligent.
Maybe we're just search engines of a similar kind.
An additional aspect of human art is that it (usually) takes time to make. The artist might spend many hours creating and reflecting and creating some more. The artist's engagement with the work makes its way into the final product, and that makes human art richer. Could a future DALL-E version create sketches and iterations of a work, or is there a limit to this mimicry?
Human artists also do a whole lot of mimicry. One could look at art produced by many artists and say that it is just things stitched together from pre-existing art.
For example the “enterprise vector people” graphics you see on every corporate website. Most human art is extremely repetitive.
AI art seems to be coming from the opposite direction to human artists - from a starting position of maximum creativity and weirdness (e.g. early AI art such as Deep Dream looked like an acid trip) and advancements in the field come from toning it down to be less weird but more recognizable as the human concept of “art”.
And DALL-E is impressive exactly because it has traded some of that creativity/weirdness away. But it’s still pretty damn weird.
Amyris | Roles in Data Engineering, DevOps, IT | Remote | Amyris has developed an industry-leading platform for designing and building synthetic organisms. Our technology is being used today to make clean beauty products, bio-based renewable chemicals, and even vaccine ingredients. We are hiring engineers to support these data- and software-driven technologies.
Among the tools we have developed are a CAD/CAM system for genetic engineering: a compiler toolchain whose target architecture is life itself. This stack physically integrates high level genetic modules into microbial hosts. We also derive novel strains through random mutagenesis and directed evolution. Using our custom control platform, we then subject these experimental organisms to high throughput performance screening in our state-of-the-art robot labs.
We are also opening a DevOps position and other tech positions in the next couple of weeks. If you're interested, you can reach out to me at king@amyris.com.
Agreed! A couple of things seem to be holding this back:
- the academic publishing model incentivizes groups to build their own tools
- the small-ish market for something like this has kept commercial software from taking off (Genomatica started by building a tool like this in the early 2000s, before pivoting to bioprocess development)
- it's really hard to specify pathways in concrete physical terms. Even a chemical like glucose is actually a collection of pseudo-isomers (alpha & beta D-glucose). And try firmly defining a "gene" in your database!
The database would definitely need to define some boundaries and limitations, but I still think there is much opportunity in coalescing well-defined metabolic and genetic data and empowering folks to generate feasible genetic constructs.
You might also want some pathways to be pre-validated to work together in certain cellular contexts, like they have been doing with the BioBricks project
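To make the "well-defined data" point concrete, here is a toy sketch of what such entries might look like, with pseudo-isomers and gene boundaries made explicit. Every field name, coordinate, and value here is invented for illustration -- this is not the schema of any real database.

```python
# Hypothetical records for a metabolic/genetic database: a metabolite
# that explicitly lists its pseudo-isomers, and a gene record defined
# by coordinates and validated contexts rather than a bare name.
# All values below are made up for the example.

from dataclasses import dataclass, field

@dataclass
class Metabolite:
    name: str
    formula: str
    pseudo_isomers: list = field(default_factory=list)  # e.g. anomers

@dataclass
class GeneRecord:
    locus: str
    start: int          # explicit sequence boundaries, not just a name
    end: int
    strand: str
    validated_contexts: list = field(default_factory=list)

glucose = Metabolite("D-glucose", "C6H12O6",
                     pseudo_isomers=["alpha-D-glucose", "beta-D-glucose"])
gene = GeneRecord("pgi", start=1000, end=2650, strand="+",
                  validated_contexts=["host strain A"])
print(glucose.name, len(glucose.pseudo_isomers), gene.locus)
```

The `validated_contexts` field is one way to encode the BioBricks-style idea above: a pathway entry carries the cellular contexts in which it has actually been shown to work.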