Thank you and other commenters for the great rundowns here. I'm interested in a related question and I wonder if you or others could point me in the right direction: why was the mainstream consensus around solar power (and/or batteries) apparently so wrong for so long? More specifically -- and maybe a better question -- why didn't progress in solar and batteries happen sooner?
I'm less interested in blame than in a systems analysis of how, over the last half century, powerful players seem to have missed the opportunity to start investing in solar and battery technology earlier. Solar and batteries are unique in energy infrastructure, as even a casual observer knows by now, and are certain to change many aspects of politics, industry and culture. It seems an inevitability that energy infrastructure will evolve from large, complex components towards small, simple components, and I'm interested in engaging with the history of why "now" is the moment, rather than decades ago.
> why didn't progress in solar and batteries happen sooner?
The rate of progress in cost reduction has been astonishing. It's unlike anything except Moore's Law. This catches people out.
As well as the usual suspects: cheap fossil fuels, failure to take global warming seriously, the belief that nuclear power would see a similar exponential cost reduction rather than the opposite, and of course anti-green politics.
But if 95% cost reduction is the result of not taking it seriously, would taking it seriously earlier have been even better? Hard to say.
Right! Good points for optimism here, and acknowledging broken mental models.
We had silicon solar modules in the 1950s and Moore's law in the 1960s. Another take on the question, then: today we use Moore's law to describe progress in solar modules; to what extent was that realization possible in the 1960s from the fundamentals, or "first principles"?
If it was clear, why did we not see rapid prioritization of solar and energy storage technology research? Or did we and I don't know the actual history? Or what influences am I undervaluing or not recognizing?
If it wasn't clear, why not? Gaming out many positive impacts of solar technology feels easy today in a way it appears was not easy in the past. Why wasn't it clear in the past?
Battery progress was in some ways slowed but also accelerated by oil companies who kept buying up patents on solar and battery stuff that looked promising, and then sat on the patents, refusing to license it.
One oil company bought Cobasys, which owned all the NiMH patents. Thereafter, Cobasys refused to license NiMH batteries to anyone making a vehicle, except large ones like transit buses. Several early EVs used NiMH batteries until Cobasys was acquired and the restrictions were put in place.
This really lit a fire under researchers and the battery industry to try to improve lithium-ion, which had hit the market in the early '90s. Once the price of lithium-ion started falling, the market very quickly forgot about NiMH batteries. In about ten years prices have fallen to one fifth of what they were. That fall has slowed, but prices are still dropping.
It's a false assumption that technological progress happens automatically or even that it's based upon the passage of time.
Progress happens as a result of many choices made by individuals to invest time and energy solving problems. Why is solar rapidly improving now? Because way more people are invested in making it better.
Nascent technologies almost always face an uphill battle because they compete against extremely optimized legacy technologies while themselves having no optimization at first. We only get to the current rapid period of growth because enough people pushed us through the early part of the S curve.
Sure, that makes sense. This is where I'm coming from with my interest in history:
I heard an interesting argument somewhere that solar cells are an ideal manufactured good. Whether you are building a module for a calculator or a GW scale plant, the modules are the same. This is fundamentally different for steam turbines. On the "concrete-internal combustion engine" spectrum of complexity, solar modules are closer to concrete and turbines are closer to ICEs.
Shouldn't this have led to a special interest in advancing solar module research? Or a widespread understanding that the unique set of attributes that define a solar module would eventually lead to its takeover of a significant portion of global energy generation? Shouldn't that have been apparent from the earliest days of photovoltaic research, as a sort of philosophical truth, before the advances in materials science, extraction and manufacturing of the last fifty years?
I think another important part is that solar has low minimum useful quantities and customization. Lots of the problem with nuclear power is that you only need ~100 to power the US, and each one takes years to build, so getting scale is basically impossible. With a 50-100 year lifespan per plant, that means you only get to build 1-2 a year, and you can't learn much from the 5 you've most recently started since they're still under construction.
Solar and batteries got cheaper when we scaled up and built a lot. You have to pay current prices to get the next price drop, because it's all learning by doing.
If we had pushed harder in the 80s, 90s, and 2000s, solar might have gotten cheaper sooner. Solar fit in at the edges of the market as it grew: remote locations for power, or small-scale settings where running a wire is inconvenient or impractical. The really big push that put solar over the edge was Germany's Energiewende, a public policy that encouraged deploying a ton of solar in a country with exceptionally poor solar resources; with that promise of a market, massive scale-up was guaranteed.
It's in many ways a collective action problem. Even in this thread, in 2025, you will see people wondering when we will have effective battery technology, because they have been told for so long that batteries are ineffective that they don't see the evidence even in the linked article.
Also, most people do not understand technology learning curves, and how exponential growth changes things. Even in Silicon Valley, where the religion of the singularity is prevalent and where everyone is familiar with Moore's law, the propaganda against solar and batteries has been so strong that many do not realize the tech curves that solar and batteries enjoy.
A lot of this comes down to who has the money to spend on public influence, too, which is largely the fossil fuel industry, which spends massive amounts both on politicians and on setting up a favorable information environment in the media. Solar and batteries are finally getting significant revenues, but those companies have been focused more on execution than on buying politics and media. They have benefited from environmental advocates who want to decarbonize, without a doubt, but that doesn't have the same effect as a very targeted media propaganda campaign that produces zealots who, whenever they see an article about climate change, call up their local paper and scream at the management. Much of the media is very afraid of right-wing nuts on the matter, and it puts a huge tilt on mass-media coverage in favor of fossil fuels and against climate science.
Indeed. You widen the conversation here, and remind me of the idea that moneyed influence is underrepresented in analysis and understanding of the world. Maybe the most appropriate way to understand big questions is who is funding the various players.
I like to think about "learn by doing". While I have of course lived it, I try to think of counterpoints. It seems clear that solar owes its growth to Germany and California policies which subsidized the global solar industry with taxes on their economies, most disproportionately placed on individual ratepayers. But why couldn't solar research have been long-term funded based on its fundamental value? Talk about national security, or geopolitical stability -- especially post 1970s! Skip the intermediate and expensive buildouts of the 2000s and the heavily subsidized failed companies, and fund research instead, to hopefully bring the late 2010s forward in time?
What's a good model here, or a concrete example? We see the same side of the history in electric vehicles. I think Tesla and Rivian, to pick two, both lost money on every sale in their early years. Why not skip that expensive step in company history and develop better products to sell at a profit from the beginning of mass manufacturing? Are there industries or technologies where this expensive, slow process went the other way?
> It seems clear that solar owes its growth to Germany and California policies which subsidized the global solar industry with taxes on their economies, most disproportionately placed on individual ratepayers. But why couldn't solar research have been long-term funded based on its fundamental value?
I think this is a really important distinction, that between research in the lab versus research on the factory floor. Tesla in particular has talked about how much they value engineers that get down into the production process versus those working in the lab. That's the "doing" that needs to happen. As well as shaking out parts of the upstream supply chains and making all that cheaper.
We can theorize about what's going to work in practice, but the price drops are the combination of 1% savings here, 0.75% savings there, 0.5% there, and until you have the full factory going you won't be able to fully estimate your actual numbers, much less come up with all the sequential small improvements that build on each other. And all that comes together in the design of the next factory that's the next magnitude up in size.
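As a toy illustration of how those small savings compound (the numbers here are made up, purely for illustration):

// Toy model: every pass over the production line finds the same three small savings.
const savingsPerPass = [0.01, 0.0075, 0.005]; // 1%, 0.75%, 0.5%
let cost = 1.0;
for (let pass = 0; pass < 20; pass++) {
  for (const s of savingsPerPass) cost *= 1 - s;
}
console.log(cost.toFixed(2)); // ~0.64 -- twenty passes of tiny fixes cut cost by about a third

None of the individual improvements matter much, but stacked across factory generations they dominate any single design decision.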
I hear that, it seems a common observation. Maybe a fundamental truth of enterprise.
> until you have the full factory going you won't be able to fully estimate your actual numbers, much less come up with all the sequential small improvements that build on each other.
Why not? Is there a theory or school of management or industry that establishes this foundational principle that seems so commonly invoked? It feels true, but I don't really know why it might be true. There must also be great counterexamples to this!
Maybe it goes back to learn by doing: it's a common refrain in outdoor recreation that safety rules are written in blood; that many of our guidelines directly follow from bad things that happened. But certainly we can also design safety rules by thinking critically about our activities. Learn by doing vs theory.
> We find that productivity improves when multiple generations of the firm’s primary product family are produced concurrently, reflecting the firm’s ability to augment and transfer knowledge from older to newer product generations.
Could you expand on what you see as exciting developments? I’ll have to check out the op post link as well as yours and others in the thread.
It’s been a few years since I seriously looked at options for my personal use, but I remember being quite disappointed in what I found. Zotero and org-noter seemed to be two of the best pieces of software I could find for reading or organizing pdfs (though in completely different ways). I trialed OneNote for a year and liked it in the moment, but zero support for navigation, discovery or review of information makes it untenable for building a knowledge base or doing literature review.
I imagine that software for reading and connecting document information (in any form: pdf, html, video or other) could be so much better than what I use daily.
1) The post-roam research note-taking apps (Obsidian, logseq) have shown the usefulness of creating notes with links, back-links and databases.
2) Document editor apps (Notion, Craft) have popularized the concept of documents as a set of text and non-text blocks. They're useful and provide rich building blocks for documents.
3) Some design engineers are exploring multi-modal text editors. Text, audio and video in the same document, integrated with CRDTs for collaboration.
One would think that digital text editing had already reached the state of the art, but the work above shows that there's plenty yet to discover. I'd love to hear your take on what you think could be much better.
Off topic, but I’m wondering if anyone attracted to this topic could help me understand why JavaScript doesn’t have macros.
I’m aware of much conversation around dismissing macros, often in the context of bad dev experience — but this sounds like a shallow dismissal to me.
At the end of the day, we have some of the results of macros in the JavaScript ecosystem, but rather than being supported by the language they are kicked out to transpilers and compilers.
Can anyone point me to authoritative sources discussing macros in JavaScript? I have a hard time finding deep and earnest discussion around macros by searching myself.
But more importantly, do you really want script tags on webpages defining macros that globally affect how other files are parsed/interpreted? What if the macro references an identifier that's not global? What if I define a macro in a script that loads after some other JavaScript has already run? Do macros affect eval() and the output of Function.prototype.toString?
Sure, you could scope macros to one script/module to prevent code from blowing up left and right, but now you need to repeat your macro definitions in every file you create. You could avoid that by bundling your js into one file, but now you're back to using a compiler, which makes the whole thing moot.
It turns out there might actually be a benefit to the compilation step that has been introduced now that everyone uses TypeScript... It would be really interesting to see macros get added, though I suspect it's too far from TypeScript's mandate of adding as few new features on top of JavaScript as possible.
Macros don't really make sense in the JS runtime spec, since you can mostly already achieve macro-level features using eval or new Function, though it's not very efficient. Macros make the most sense at build time, and there have been a few attempts at generalized build-time macros with various bundler/transpiler plugins. I think the space needs more time to mature. I'm optimistic that we'll eventually see some sort of (un)official macro spec emerge.
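To make the eval/new Function point concrete, here's a rough sketch (mine, not a proposal) of faking an "unless" macro at runtime by generating code as a string:

// A hypothetical "unless" macro, expanded at runtime via the Function constructor.
// A real macro system would do this expansion at parse/build time instead.
function unlessMacro(condExpr, bodyStmt) {
  return new Function('x', `if (!(${condExpr})) { ${bodyStmt} }`);
}
const warnIfSmall = unlessMacro('x > 10', 'console.log("x is small:", x)');
warnIfSmall(3);  // logs "x is small: 3"
warnIfSmall(42); // logs nothing

It works, but it's string-pasting with none of the hygiene or tooling support of a language-level macro, and the compile cost is paid at runtime.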
A great resource that I should have found on my own. Thank you. I’ll look through this later. Giving it a quick glance now, I see some of the same language I see in other places; here, that macros are “too far.”
I don’t know why macros are approached with apprehension. As I briefly get at in my first comment, I’m aware of a lot of dismissals of macros as a tool, but those dismissals don’t make sense to me in context. I’m missing some backstory or critical mind-share tipping points in the history of the concept.
What could be a good set of sources to understand the background perspective with which TC39 members approach the concept of macros?
I picked this up at a used bookstore a while back for a dollar or two, and enjoy flipping through it from time to time. There’s something deeply satisfying about the quantity and density of the graphics in the book, and the visual simplicity of the prints.
Why do you post it here? What do you think about the book?
This is a great project in a space that I've been playing around in for a little while -- fun to see it here!
I'm interested in hearing what you think are some of the more difficult problems or bugs you've come across during development. Did you hit any stumbling blocks around handling user code or integrating babel or the terminal? Do you have any insights about preventing errors or crashes in how you parse and eval user code? (My typical test of a while(true) loop crashes this system, but you're still in good company; it crashes replit, browser dev tools, observable and just about every other clientside execution tool I've come across. The most popular solution appears to be the loop timeout transformation.)
I think the examples pages do a better than usual job of demonstrating your system, in particular the ubiquity of one liners and your connections between them. Do you have ideas or responses about the classic "mess of wires" critique that graphical coding systems inevitably receive?
This is such a fun domain to think about, thanks for sharing your work!
The expression is only transformed with Babel if you enable the JSX React transform. Otherwise, it's just straight eval-ed by your browser. It's by no means battle-tested (eg while (true)). I haven't tested circular deps and am leaving that as a surprise for myself in a little bit. One thing that I do is run the canvas in an iframe on a different domain for security reasons.
Parsing is something I'm trying to avoid as much as possible, but it's likely I'll add it. Referencing things as inputs[2] doesn't feel stable. It may help with implicit deps and avoiding wires (see observablehq.com).
As for mess of wires, I'm still forming my opinion! I want to learn more about nodes-and-wire programming and why it isn't mainstream. The hunch I'm getting is that visual programming feels better to create than consume. The space is great for exploration but looking at someone else's canvas can be chaotic. Maybe there are features that can alleviate this (multiple views, autolayout). Look at this haha https://twitter.com/_paulshen/status/1321872376234082305
The future of coding link in the parent has a large list of similarly spirited projects. I have scattered lists of similar projects but none handy or packaged well. I'll point you to the Ink&Switch article on end user programming. https://www.inkandswitch.com/end-user-programming.html
And I encourage you to check out the personal sites of the people involved.
- The Lively Kernel is a programming kit project that's been around in various incarnations for a long time: https://lively-next.org/
- The history of Eve (also linked by that future of coding page) is rich and full of references to other projects: http://witheve.com/
- VPRI is similarly a gateway to lots of history on personal computing: http://www.vpri.org/ Of particular interest to me there are the graphical language Nile and the meta-compiler Ohm.
- Bret Victor's site is another gateway you may have heard of: http://worrydream.com/ The researchers at Dynamicland are also well worth exploring.
- More future-of-coding resources: https://github.com/d-cook/SomethingNew
Do you, or the broader community, have any ideas about solving infinite loops? I'm on mobile so I can't test this at the moment, but I imagine that while(1) crashes the tab.
What would an MVP of operating-system-style ctrl-C functionality look like for execution environments in the browser?
Codepen uses a system that measures loop duration, and it's a giant pain; I've done some pens that do ray tracing and image transforms, which can have long-running loops. Given the variable execution time of JS, it can be quite random: it just exits the loop without warning, causing weird failures in your code.
Two theoretical solutions (with significant overhead) are:
1) Run the code in a VM (maybe QuickJS compiled to WASM would work) that suspends code execution periodically if it exceeds a certain duration. This has the advantage that long-running code in general won't block rendering, not just loops.
2) Transform the AST to use an async generator that yields once per loop. This would allow the loop to be suspended and resumed, but it would require a lot of modification to the AST, making effectively the entire call tree async.
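A hand-written sketch of what the transformed output might look like (a real implementation would rewrite the AST mechanically; the names here are made up):

const tick = () => new Promise((resolve) => setTimeout(resolve, 0));

// What transformed user code could look like: an async generator
// that yields control back to the event loop once per iteration.
async function* userCode() {
  let i = 0;
  while (true) { // would have frozen the page as plain synchronous code
    i++;
    yield tick(); // injected: hop through the event loop each iteration
  }
}

// The host drives the generator and can interrupt between iterations.
async function run(gen, signal) {
  for await (const _ of gen()) {
    if (signal.aborted) throw new Error('Interrupted');
  }
}

const controller = new AbortController();
run(userCode, controller.signal).catch((e) => console.log(e.message));
setTimeout(() => controller.abort(), 1000); // the host's "ctrl-C"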
I've done this and it works surprisingly well. I made a timesliced JS scripting system this way. It looked imperative with tight loops, but it was all asynchronous. It felt like a threaded app.
To add to this comment: Stopify is a JS-to-JS compiler that instruments sync JS code to make it interruptible at set points. The paper [0] can explain it better than I ever could.
I work on an experimental Pyret [1] runtime that uses Stopify to instrument compiled Pyret code (plain old JS) so that we can run Pyret code on the main page thread without hanging it. Main-thread execution is important for quick/easy DOM access. In terms of performance cost, we haven't measured too extensively, but so far, on average, we're seeing a 2x slowdown compared to un-Stopified programs.
Do you see any other solutions in the same domain as Stopify? Another method that might provide a way to keep UI unblocked but still have user executable code?
If you need the user code to execute on the main thread, then unfortunately I am aware of none besides building your own tailored system.
Pyret used to use its own runtime system [0] but Stopify was created in part to replace it due to the maintenance burden and complexity of "vanilla" JS interoperability.
We started with a blacklist to match against while(1), while(true), for(;;), etc, but we eventually found an eslint plugin (goedel.js) that nicely tells you if the code contains an infinite loop or recursion.
I am aware of multiple hacky solutions, such as loop detection and adding timeouts. These fail in most non-trivial creative coding applications due to long-running code. I'm interested in what it would take to come up with a general escape hatch like any shell user has.
On Starboard[0] I approached this by sandboxing the notebook code in an iframe on a different origin. This sandboxing has to be done anyway to prevent XSS.
If you type while(true){} in a notebook, only the iframe will break (and usually your browser will prompt you after a while to kill it). When you do, only the iframe is no longer functional.
I don't think there's an elegant way to solve it any differently in the browser.
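For the curious, the host side can be as simple as this sketch (sandbox.example.com is a stand-in for whatever second origin serves the runner page; this is not Starboard's actual code):

const frame = document.createElement('iframe');
frame.setAttribute('sandbox', 'allow-scripts'); // no allow-same-origin: the frame can't touch the host
frame.src = 'https://sandbox.example.com/runner.html'; // hypothetical runner page that evals posted code
document.body.appendChild(frame);

frame.addEventListener('load', () => {
  frame.contentWindow.postMessage({ code: 'console.log(1 + 1)' }, '*');
});

// If the posted code is while(true){}, only the iframe's event loop jams; with
// cross-origin process isolation the host stays responsive and can simply drop
// the frame: document.body.removeChild(frame);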
Hi, yes! I like your project. I chatted about similar things on your launch post here. You also address this explicitly on your product which I appreciate.
What I'm getting at is that these browser notebooks try to capture the desire and feeling of rapid exploration and iteration. Losing context to a crashing logic error is a massive blow to that ideal.
I'm not saying that your or anyone else's product is only for "rapid prototyping", but it's still true that larger projects could be bitten by the same errors. When I crash my native code, I hit C-c and I'm back in an instant. When I crash browser notebook code, I lose a bit of time and unsaved code. I crash browsers often in creative coding, where I write many loops and don't always get them right.
It may also be that my Chrome and Firefox experience on Linux is worse than standard, I don't know. But I have crashed my entire browser in Chrome when using Observable, and I thought that wasn't supposed to be possible.
That's clever. Do you know if something is possible using web workers? Maybe running the "sandboxed code" in the worker? I don't really know how they work and whether it's possible to interrupt them from the main thread.
I think so, but a worker won't have access to the DOM and a bunch of other APIs, so the code would be fairly limited in what it can do. Which may be fine for some usecases!
Wow, that seems hard to do. One would need to take a lot of things into account, including recursive calls, asynchronous functions/calls and, indeed, even long strings of instructions that are not necessarily part of a loop or recursive call.
Would a transform that adds the check between every JS instruction, wherever possible, theoretically solve the problem? Is there a solution that does not slow down the code too much and interrupts the code within an acceptable margin?
Yeah! The general case of this is the halting problem...
The best solution I know of is stopify, which the other comments have talked about. I just wonder if there's another take on the situation, something akin to OS task management.
Well, you can easily detect trivial examples like "while(1)" and "for(i=0;true;i++)". But otherwise, how would you know something is an infinite loop?
Put a bit more simply: to work out whether a problem is unsolvable (infinitely looping), you need to evaluate the problem... by trying to solve it. Check out the halting problem for more details.
"Solving" infinite loops doesn't necessarily mean accurately predicting a priori whether a piece of code will terminate. It can just mean ensuring that if the code does try to run indefinitely, it doesn't have unfortunate effects such as blocking the UI thread without the possibility of being interrupted.
> It can just mean ensuring that if the code does try to run indefinitely, it doesn't have unfortunate effects such as blocking the UI thread without the possibility of being interrupted.
Well, that can be achieved by executing the code in a background worker thread, which doesn't affect the UI thread in browsers. I'm not sure how it's managed, but I think you could terminate it after a certain amount of time too.
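A minimal sketch of that watchdog idea (the Blob URL just avoids needing a separate worker file):

// Run untrusted code off the main thread, with a hard kill if it runs too long.
const workerSrc = `
  self.onmessage = (e) => {
    const result = eval(e.data); // hangs here on while(true){} -- but off the UI thread
    self.postMessage(result);
  };
`;
const blobUrl = URL.createObjectURL(new Blob([workerSrc], { type: 'text/javascript' }));
const worker = new Worker(blobUrl);

const watchdog = setTimeout(() => worker.terminate(), 2000); // "ctrl-C" after two seconds
worker.onmessage = (e) => {
  clearTimeout(watchdog);
  console.log('result:', e.data);
};

worker.postMessage('1 + 1');            // posts back 2
// worker.postMessage('while(true){}'); // would be terminated by the watchdog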
I have made something similar (https://easylang.online/ide/). It is a language of its own, which is compiled and interpreted by WASM. The problem with hanging in endless loops is solved by running the interpreter in a "web worker" that can be killed and restarted at any time.
You could modify the AST using something like jscodeshift to add a function that is called in every loop (and maybe every function), and there you can "break" the loop in a pretty clean way.
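For illustration, the injected check could look something like this (names invented; a real version would be inserted by the codemod, not written by hand):

const BUDGET_MS = 2000;
const deadline = Date.now() + BUDGET_MS;

function __loopGuard() {
  if (Date.now() > deadline) {
    throw new Error('Loop exceeded its time budget and was interrupted');
  }
}

// Before the transform:  while (true) { i++; }
// After the transform:
let i = 0;
while (true) {
  __loopGuard(); // injected as the first statement of every loop body
  i++;
}

The throw unwinds to whatever try/catch the host wraps around the user code, so the page survives even though the loop never terminates on its own.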
I'll second Computers for Cynics. I don't think any of that series is about blockchain, but he does have other videos about blockchain.
For those who don't know, Computers for Cynics is his series about questioning the origins of the status quo and considering alternative futures with different foundations.
It's the seventh in the series: https://youtu.be/3CMucDjJQ4E (published in Sep 2014, six years ago, and still a must-watch for any outsider even in 2020). Enjoy it!
It's been a little while since I've heard this one, but one thing I remember hitting me is that the same language used by the interviewer is used today about privacy, end-user programming and any other more powerful technology, programming language or paradigm. It seems that as a culture we're always able to go so far, but not all the way. We see the path between start and end, and no genius is needed for the last push, but after so much progress we reduce our ambition towards the end goal and instead develop arguments against continuing.
At some point we just didn't think people needed help with paper-based tasks -- "look around you, it's how everything is done" -- yet here we are with the PC 40 years later. And people look around and think there is no chance everyone could be a programmer: "look around you, they're all consumers, they couldn't understand how to make the computer do what they like". In 40 years there's no doubt this viewpoint will be wrong, but the popular opinion on the matter can't see that future.
See Bret Victor's history of computing. The biggest adversaries we have to overcome on the way to progress are the mainstream experts of our own field.
We have apps which feel like starting from scratch every time, which can't have abilities known by all because they aren't prepackaged by the devs ahead of time. Every app reinvents a minimal subset of sorting and search. If you have a better idea or a different connection you want to make, it's just not possible in the app.
Stop pretending that debilitating users is actually good for them in the silly word games we play. Give users power.
I feel like we don't give tools like Microsoft Excel, Game Maker, Photoshop, or the web enough credit for how much they empower users of computers. Without any formal training or education, people are able to use computers to their own ends.
Give Excel to an uncontacted tribe and see how well they go with it.
That we assume reading, numeracy and fine motor skills -- which until five centuries ago were the preserve of less than 1% of the population in the West -- are not part of formal education or training should tell you all you need to know about how much cultural knowledge we assume people need just to function in daily life at the level of a 10 year old.
That four years to learn to read is considered normal, but reading a 100-page manual is considered unreasonable, shows how much popular culture is lagging behind our tools. Given how technocentric our culture is, this is a situation as ridiculous as Mongols complaining that they need to learn to ride horses.
Good for the interviewer for making counterarguments at the time. Some of the things he said are still true: we still have to come up with categories in our heads. This is true regardless of advancements in ML.
Good point! I might give it another listen. I remember it being an interesting respectful conversation with no ground given on either side. Hence the title, I suppose.
Reading and writing is intellectually challenging. Everyone can do it now, because we make sure to teach them to do it, and because it's necessary to live in modern society.
Are you seriously trying to claim that programming, of any useful sort, is so hard that only really smart people, presumably like yourself, can do it? Christ. I really hope the software development industry gets its ego kicked in really hard in the near future so everything can stop sucking because of elitist asshats.
Well on one hand, it could be my massive ego, and on the other it could be the truth.
You're the one arguing on the basis of value. I don't. Let me just ask you this: do you think that being a good enough finance trader requires a high IQ? I'm pretty sure of it, and I'm also of the opinion that what they're applying their smarts to is a net negative for society as a whole.
Another question, have you read Steven Pinker's The Blank Slate?
I think the other two comments make good points. There is every chance that everyone will learn to use a small set of fundamentally composable digital tools in the future. That's programming.
I think "intellectually challenging" just means poorly explained or resulting from poor access. Anyone can program, it's just artificially hard to do it today.
> There is every chance that everyone will learn to use a small set of fundamentally composable digital tools in the future. That's programming
That's not, unless you stretch the meaning of the word into meaninglessness. But if you insist on doing so, then yes, most people should be able to "compose digital tools" for a small enough number of digital tools and a wide enough meaning of "compose." Although, on second thought, so many people had issues with "programming a VCR" back in the day, and that wasn't anywhere close to my meaning of "programming."
So let me rephrase it, "there is no chance everyone or even majority could become a minimally proficient user of a minimally useful programming language for novel tasks beyond a sequential list of actions."
Is being able to read at the minimum level comparable to programming?
Do you seriously expect 99% of the population to be able to understand something as simple as the demonstration of Euclid's theorem? And yet any programming is more complicated than that, and more analogous.
Outside of the data science context, I would argue that notebooks like these are how information in a world of compute should be presented. It's ironic that we have codeblocks upon codeblocks of content served on the web on technical sites and blogs that aren't immediately executable in the context of the post. The most popular software in the world is the JavaScript VM, yet we talk about and teach JavaScript code on websites, in a JavaScript VM that can't run the code on the screen.
Haha, sure, sure. But that's not really the point. We have machine code right? Who needs an assembler? Tools and workflows are worth thinking about and improving. Right?
Depends a lot. Debugging is awesome when you have source maps and good devtools in a modern browser, but the code itself is not editable. That said, it's possible to open a plain html file from disk with devtools and save changes made in it back to disk, making devtools essentially a development environment.
I wonder how much value there is in this particular use case, because it lives somewhere between devtools and an online IDE. Also, arguably, testing out a library in a codepen is more comfortable than an inline editor you potentially can't save.
Very good points. Makes me think about a persistent web: what if every piece of JavaScript visible on your page could be edited and persisted between loads or visits, or you could send a blog post link to a friend with your edits/comments/changed environment? Not arguing for that immediately, it's just interesting to think about.
Then what do you do when someone is demonstrating how to reset a page, whether that be document.body.innerHTML = "" or document.body.removeChild(main)? They could be run in sandboxes, like starboard here in iframes. Or a powerful history tool could be attached to the page. Every program takes an environment as a parameter; maybe we can make shuffling environments around as easy as playing with cards.
To really simplify it down, I'm exploring the space of digital workspace + shell. What do you get with a compute environment in OneNote or Figma or Sketchup? What do you get if you can position things in a freeform space rather than a filesystem and a text-based browser? Can I build a Sketchpad with code editing, and take advantage of viewport macros to build a space of my project and explorations?
Makespace.fun + jsfiddle/observable seems like an environment that I'd like to work in with myself, my collaborators and arbitrary content. To play with things like history and alternatives, and associations of material and ideas.
I have a good example of the success of programmatic control of content and auto layouts in my tiling window manager. Being able to explicitly describe compositions and to demonstrate computational content seem to be targets not yet hit by the pop culture of personal computing.
I might be just blowing hot air with all this, but it is fun to play around with.
I've been working on something similar on and off for a while, and I've found the space to be fascinating and somewhat puzzling. Most of the first questions I try to ask about these in-browser notebooks don't have clean answers. In no particular order:
- Do you have any ideas about solving logic errors? As it stands right now a while(true) loop crashes the page and consecutive re-openings of the page for a short while (in chrome). What about things like document.body.innerHTML = ""? (You actually seem to handle this pretty gracefully; I'll see if I can poke more holes in this another time.) For what it's worth, none of the online notebooks I've seen have a satisfactory solution to infinite loops; loop timeouts are too blunt an instrument, and crashing the tab can take the browser minutes to recover from.
- Is there a particular reason you use eval rather than the new Function constructor? From what I've read, using new Function is much more performant and, scoping differences aside, a better choice than eval. Can't find the link at the moment, but it was wrapped up in the mdn[1] design docs for their codebox examples.
- I see you're using the first codemirror 6[2] beta release, how are you liking it? I really enjoy the interface so far.
- Do you have any favorite resources or inspirations about why you went about building a hackable offline local first notebook environment? I particularly like the experiments at Ink and Switch[3]. (many related hn submissions). As well as webstrates[4].
My attempt at this game is to break out of the notebook-style single-column layout and embrace an art-board-style canvas. That is a rather radical idea, in that it is not obvious what it should look like or how basic interactions like hierarchies or execution order should work, but it's fun to explore. I also desperately want to prevent crashing as a result of logic errors and workflow footguns (like deleting DOM elements or overwriting storage); to that end I keep a separate store of scripts to rebuild a "safe boot" interface, but there is more thinking to be done here.
I look forward to poking around your code some more soon. Thanks for posting.
Edited out a question about your motivations. I reread your comment and the about page and I realized my motivations are similar to yours. While observable doesn't have a few of your key points, it is a fantastic product. The reason I don't settle on it is that I'm interested in experimenting with the environment outside of a notebook-with-cells interface.
> As it stands right now a while(true) loop crashes the page
> [...] I also desperately want to prevent crashing as a result of logic errors
I could be wrong here, but I think the new Site Isolation stuff in Chromium[0] may mean that you can sandbox an iframe to a different origin, so that the main threads of the top frame and the iframe aren't synced. So the iframe where the code is running would freeze, but the main frame would still be responsive. Again, I may have misinterpreted this because I only read about it in passing, and it was a while ago.
A brief skim through that doc looks like it is about a kind of security that's not relevant to my project. I'm trying to protect the user from themselves while simultaneously giving the user full power over the document.
Web workers are interesting and relevant to this use case, however the almost total isolation doesn't lend itself to manipulating the DOM without the user explicitly writing a custom API to interpret the worker's messages. So, back to the same problem.
I’ve been thinking a little about doing something in this space, but as part of a larger project involving a new programming language, so it’s perhaps not applicable to your situation.
I also want to leave the notebook style, focusing more on a graph approach. Think higher-order spreadsheet without the grid.
One somewhat tangential thought I’ve had to handle the infinite loop issue, though, is to fragment the language somewhat like they suggest in the Out of the Tar Pit paper: avoid infinite loops by construction, with unbounded loops only used at the highest orchestration layer to schedule execution blocks, where the runtime can keep track of things going out of control.
Not sure what it would take to extract a “total” subset of JavaScript though. But one idea would be to insert a trampoline that breaks out of infinite loops often enough for error handling.
To answer some of your questions (let me know if I didn't answer all of them!)
- I think logic errors can't be solved or prevented when exposing vanilla HTML, CSS and JS. You can completely break the notebook if you want to. As the sibling comment stated: the notebook is run in an iframe on a different origin, so you (should!) not be able to break out of that iframe. But of course if you introduce some endless loop that crashes the browser (or maybe just the iframe, depending on the browser), I can't stop that.
- No particular reason, but it was always a bit iffy getting the return value correctly. To support top-level await, your code actually gets wrapped into an async function (a sketch of what that wrapping might look like is at the end of this comment); I don't know how nicely that plays with new Function.
In the future I want to support ES modules with import instead. Perhaps no magic will be required at all anymore and performance will probably be even better (https://2ality.com/2019/10/eval-via-import.html)
- Codemirror is very good and doesn't have a huge bundle size, and I decided I wanted to support touchscreens. The Monaco editor completely fails on my phone (if I type a space it usually copies the word in front of it), whereas codemirror just works. The autocomplete and language support is much worse though; on a desktop I think I will always choose the Monaco editor.
- Probably not the most exciting answer, but for me it was just wanting to use Jupyter for things it was really bad at. There is Project Iodide, which is very similar, but they made some different design decisions that move away from the code-output-code-output structure.
By putting the editor inside the sandbox things become much more straightforward and less "special" or "magic", that's another big difference from Project Iodide.
I think more and more fully client-side editors are becoming possible now due to dynamic import and webassembly, do share your project with me if you get the chance to work on it!
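Since the async-function wrapping came up above, here is a sketch of roughly how it can be done (my guess at it, not Starboard's actual implementation):

// AsyncFunction isn't a global; recover the constructor from an async function.
const AsyncFunction = Object.getPrototypeOf(async function () {}).constructor;

// Wrap cell source so `await` is legal at the cell's top level.
function runCell(source) {
  const cell = new AsyncFunction(source); // the body may contain await
  return cell(); // a promise; the cell needs an explicit `return` to produce a value
}

runCell('const r = await fetch("/data.json"); console.log(await r.json());'); // illustrative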
Ah, okay, this is interesting. The entire notebook is one iframe, which lets you do more of the metaprogramming and sharing of variables. Both printing (via console) and non-printing infinite loops crash the iframe until you close the page. And the reason my document.body.innerHTML = "" test was recoverable is that your notebook-top-button-container is not in the iframe (and it has that refresh button!).
I'll have to look more into async stuff. Great eval article there. I've found these four lines to work for importing content[1]. I dove into realms[2] and the realms shim [3] a while ago after reading about Figma's engineering behind plugins [4] but I never figured out how I might use that for my system.
You might take a look at how the codemirror author built his book [5] and editor [6] in this repo [7]. He implements the loop detection I derided in my first comment. I shouldn't be so harsh; it's a good and simple idea, just not an answer -- more of a hack. My interests align pretty well with the p5js editor audience [8], so some of the early issues there convey why the simple loop detection isn't ideal.
The only viable way to avoid an endless while loop or overly heavy operations is to have a worker do the eval(), though that would defeat the metaprogramming aspect and isolate execution, making it pretty much useless. It's still possible to proxy local objects via comlink (or implement your own worker proxy), but that would limit communication to POJOs, so no dynamic module imports and no access to non-serializable objects.
Right, where you could send the worker a terminate signal if some conditions were met like time or no response or too many responses...
And that drawback obsoletes the strategy it seems, unfortunately.
It's an interesting problem, how to provide full access and control as well as non-corruptible recovery systems.
One of the biggest problems I consistently run into is that printing to the screen is a slow operation (everyone learns this on day one of any graphics work), but things like console.log or Node.appendChild are the primary way to destroy someone's computer for a short period of time, because orders of magnitude more operations are queued up before they can complete. I wonder if painting operations could be limited to some max number per second to prevent that orders-of-magnitude queue problem. And, more importantly, how do you code that limit?
> I wonder if painting operations could be limited to some max number per second to prevent that orders-of-magnitude queue problem. And, more importantly, how do you code that limit?
For Node.appendChild you have the whole virtual DOM thing leveraged by modern JS frameworks. If you hit issues, try using any framework that boasts having a virtual DOM.
The same strategy could be done for the console to some extent.
The built-in console object is overwritable, so you can add a layer that batches console calls if you make many in a short time (a sketch of such a layer is at the end of this comment). Note that this will break the call location shown on the right-hand side of the console. Also, IE11 does strange things with the console object depending on whether devtools are open or not.
The solution above is doable, but it's a hack. If you really have to call console a lot of times in a short window, adding a custom log method would be better:
globalThis.__debugLog = [];
// Replay everything collected so far through the real console.
globalThis.__printLog = () => globalThis.__debugLog.forEach(args => console.log(...args));
// Collect entries cheaply instead of printing them immediately.
const log = (...args) => globalThis.__debugLog.push(args);
// then in code:
log(whatever);
// and once you want to check it, call __printLog in the console:
__printLog();
Again, with this solution you don't know where the call happened, but it's a starting point. I recently started using the pattern above to debug performance and memoization issues I had in heavily used code; the console wasn't readable anymore without it. Console.table is helpful for printing out bigger sets of debug data, btw.
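And, for completeness, a sketch of the console-batching layer mentioned at the top: buffer console.log calls and flush at most once per animation frame.

const realLog = console.log.bind(console);
let buffer = [];
let flushScheduled = false;

console.log = (...args) => {
  buffer.push(args);
  if (!flushScheduled) {
    flushScheduled = true;
    requestAnimationFrame(() => {
      const batch = buffer;
      buffer = [];
      flushScheduled = false;
      batch.forEach((a) => realLog(...a));
    });
  }
};

Caveat: inside a tight synchronous loop, requestAnimationFrame never gets a chance to fire, so this tames the flood of paints but the buffer still grows; it doesn't fix the runaway loop itself.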
I meant to dig more into requestAnimationFrame, thanks for the prompt. I should clarify: the problem is that you can crash a web page if you do any painting inside a runaway loop, especially printing with console. I suppose this indirection might help you realize that before it happens though, and that's probably worth doing. Thanks for the code.
You're welcome. I avoid runaway loops like fire because the crashes tend to be problematic to debug. Consider the timeout thing from SO. Generally working with events in JS is easier than central run loops.
Note that it does not mention promises and async/await, but they are covered by the same model. Promise callbacks are scheduled on the same event loop (via the microtask queue rather than setTimeout), invoking your success and failure handlers. This also means that promise code executes concurrently but on one thread and one heap, so each promise will fight with all the others for priority. That is generally fine for async stuff like fetching resources, but it would completely destroy performance if it did anything heavy, like, say, unzipping zips with jszip.
Historically, the event-driven model of JS shaped Node.js's callback-driven API. This led to what is called "callback hell" and eventually to a proper introduction of promises and async/await to the language.
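You can see where promise callbacks land relative to timers with three lines in any browser console:

console.log('sync');
setTimeout(() => console.log('timer (macrotask)'), 0);
Promise.resolve().then(() => console.log('promise (microtask)'));
// prints: sync, then promise (microtask), then timer (macrotask)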
A good read, mdn is far and away the best resource I have found for web technologies. I'm excitedly looking into some p2p tech and the webrtc articles are useful.
From the article:
> "Run to completion"...
> A downside of this model is that if a message takes too long to complete, the web application is unable to process user interactions like click or scroll. The browser mitigates this with the "a script is taking too long to run" dialog. A good practice to follow is to make message processing short and if possible cut down one message into several messages.
Good advice. And this is what happens when you `while(1) {console.log("loop")}`. Though I've never seen a "taking too long to run" dialog; I'm just eventually able to kill the process myself, and sometimes it takes a few minutes while my fans spin.