
> The fires in HSS in Germany were determined using web crawling for the year 2023 because no other data was available. All other probabilities were calculated using researched data. The results show a significantly lower probability of an HSS fire compared to a general house fire.

Seems legit.


yep

also:

> the findings indicate that the probability of an HSS fire is very low (0.0049 %) and is 50 times lower than for a general house fire.

So for every 50 "normal" house fires, one extra will be due to HSS.

The bigger question is, what are the consequences of such fires? Judging by burning electric cars, which can't easily be put out, having burning batteries in an apartment building's basement doesn't seem like much fun.


Battery fires are highly toxic. You’ll need to evacuate the area, and cleaning up afterwards might be fun.


> covid

I don't understand this part. The people who left X after Musk's takeover seem to be mostly people who were on the winning side of Covid, i.e. the side that used the state and media apparatus to coerce people to participate in a medical experiment.


So I've read the title and the abstract, and I still have no idea what a "hypermedia control" is.


Academic papers are generally written to be read by peers in the field. If you're not a researcher in the hypermedia field, you shouldn't expect to immediately understand a paper without doing some work. The abstract is a paper distilled to its essence, and is not the place to expect to learn the necessary background.

In this case, the abstract says: "This functional mechanic is used to create first an informal and then formal definition of the term "hypermedia control". Using this formal definition we then derive a generalization of the concept, referring contextually to the World Wide Web." Therefore, we should expect to look in the paper for the definition.

We find the "functional definition" at Definition 4.1: "A hypermedia control is an element that responds to an event trigger by issuing a type of request to a URL and placing the response at some position within the user agents viewport." The "formal definition" then follows immediately after, but can't be quoted here because it requires LaTeX notation and a full page of text.

A possible non-academic summary is that a hypermedia control can be defined as:

An interactive element within a hypermedia system (like a web page) that allows users to trigger actions, typically resulting in new content being loaded or displayed. Common examples include: links (clicking loads a new page); buttons (clicking triggers an action like submitting a form); form inputs (entering data and submitting retrieves results); images with embedded links (clicking loads new content).

The key aspects of a hypermedia control are that it responds to a user action (click, submit, etc.); it causes some kind of network request (usually to fetch new content); it specifies where that new content should go (replace the whole page, update a specific area, etc.)
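To make that functional reading concrete, here is a minimal sketch of the mechanic in TypeScript. The element id, URL and target id are invented for illustration, not taken from the paper:

    // Sketch of the "hypermedia control" mechanic from Definition 4.1:
    // event trigger -> HTTP request to a URL -> response placed in the viewport.
    const control = document.getElementById("load-news");        // the control element
    control?.addEventListener("click", async () => {             // (1) event trigger
      const response = await fetch("/news");                     // (2) + (3) request to a URL
      const html = await response.text();
      const target = document.getElementById("content");
      if (target) target.innerHTML = html;                       // (4) place response in the viewport
    });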


One important clarification I should have mentioned: buttons and form inputs only meet the definition of hypermedia controls when htmx or something similar is used. Under normal HTML, only links, images, complete forms, and iframes qualify, since (for instance) form inputs and buttons can't on their own make a request or specify where new content would go.


You can put the actual definition on page 4, or you can make it easy for the reader and just say "hypermedia controls (like links and buttons)" the first time you mention the term in the abstract.


Strictly speaking, buttons aren't typically hypermedia controls. Haven't read the paper yet but the only controls in HTML are links and forms. Buttons can be nested inside those as actuators but aren't controls unto themselves since they don't meet that functional requirement of sending a request to a URL and inserting response content into the viewport.

EDIT: It seems the authors also consider images and iframes to be controls since they also make requests and add content to the viewport.


"four types of hypermedia controls found in HTML: links (anchors), forms, image tags and iframes"

"Examining the hypermedia control mechanic of links, forms, image tags and iframes in HTML, we see the following common functional pattern emerge: For each of these controls: (1) An event trigger (click, submit or load) (2) Causes a particular type of HTTP request issuance (3) That request is made to a resource location specified by a URL (4) The content of the HTTP response is then placed in the user agent viewport Within the context of HTML, this gives us the following functional definition of hypermedia controls as an element that incorporates this hypermedia control mechanic: Definition 4.1 (Hypermedia Control). A hypermedia control is an element that responds to an event trigger by issuing a type of request to a URL and placing the response at some position within the user agents viewport."

They mention the limitations of these existing hypermedia controls within HTML. In general, existing HTML controls don't allow the developer to choose any type of HTTP request to be sent from any type of HTML element and target any part of the page.
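In htmx that generalization is exposed as attributes such as hx-get/hx-post, hx-trigger and hx-target. Purely to illustrate the idea (this is not htmx's implementation, and the data-* attribute names below are invented), wiring up "any element, any verb, any target" could be sketched in TypeScript like this:

    // Illustration only: each element declares its own trigger, HTTP method,
    // URL and target, and one loop wires up the generalized control mechanic.
    document.querySelectorAll<HTMLElement>("[data-url]").forEach(el => {
      const trigger = el.dataset.trigger ?? "click";
      el.addEventListener(trigger, async () => {
        const response = await fetch(el.dataset.url!, { method: el.dataset.method ?? "GET" });
        const target = document.querySelector(el.dataset.target ?? "body");
        if (target) target.innerHTML = await response.text();   // chosen placement in the page
      });
    });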

"in the absence of an extension mechanism hypermedia clients and servers must agree in advance on the types of hypermedia controls they will support. HTML includes a very powerful extension mechanism: JavaScript [47 ]. By taking advantage of this extension mechanism we are able to extend HTML and provide the generalized hypermedia controls outlined above to HTML authors."

"CONCLUSION In this paper we proposed an informal and then definition of the term "hypermedia control", derived from the implementations of four common such controls found in HTML: anchors, forms, im- ages and iframes. From this formal definition we then derived a generalization of the concept of hypermedia controls within the context of HTML. We then introduced htmx, a JavaScript library that implements these generalizations for HTML authors. We demonstrated two interactive patterns that can be implemented by HTML authors using generalized hypermedia controls. Finally, we introduced Hyperview, a mobile hypermedia that also implements the generalization we proposed. We demonstrated an interactive pattern achievable in Hyperview using this generalization, thus demonstrating that this generalization applies to hypermedia systems beyond HTML and the WWW."


The paper is structured in the following manner:

- background

- examination of existing hypermedia controls in HTML

- formalization & definition

- generalization

- application (htmx & hyperview)

The TLDR is:

> So a hypermedia control (again, from a functional standpoint) is an element found in a hypermedia document that initiates a hypermedia exchange by a hypermedia client in response to some event, thereby updating the client state in some manner (i.e. with some disposition)


Maybe address any of the issues raised in the post?


Actually, I built this because I don't really understand async/await. Now that I understand generators (somewhat), I might be able to grok async/await ;-)
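For what it's worth, the usual bridge between the two (roughly what async/await desugars to) is a small runner that drives a generator yielding promises. A hedged TypeScript sketch, with invented names:

    // Sketch of the generator-runner trick underlying async/await.
    function run<T>(genFn: () => Generator<Promise<unknown>, T, unknown>): Promise<T> {
      const it = genFn();
      function step(result: IteratorResult<Promise<unknown>, T>): Promise<T> {
        if (result.done) return Promise.resolve(result.value);
        return Promise.resolve(result.value).then(
          value => step(it.next(value)),     // resume the generator with the awaited value
          error => step(it.throw(error)),    // or throw the rejection back into it
        );
      }
      return step(it.next());
    }

    // `yield` plays the role of `await`:
    run(function* () {
      const a = (yield Promise.resolve(1)) as number;
      const b = (yield Promise.resolve(2)) as number;
      return a + b;
    }).then(console.log);                    // logs 3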


This appears to be an artefact of the "async call stacks" feature.

To turn it off in Firefox, go to about:config and set "javascript.options.asyncstack" to false.

In Chrome, in Devtools enter ctrl+shift+p and search for "async stack traces".


IOW, there isn't any real stack growth, it just appears that way when running in the devtools with the "async call stacks" feature enabled.


Overall I agree: yes, with fexprs you lose some code introspection ability compared to macros. I haven't found it to be a big deal in my fexpr-based hobby Lisp so far.

Re your two points:

You could have "symbol fexprs", analogous to symbol macros, I guess.

For places, I think the first-class solution employed by T and others is better, and it would work fine with fexprs: (set (name-of person-1) "sam") simply stands for ((setter name-of) person-1 "sam").

IOW, name-of is expected to be a reader function. Every reader function has a writer function attached to it, which we extract with (setter name-of). Then we call that writer function with the rest of the original arguments.
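A rough TypeScript analogue of that setter idea (all names invented for illustration): the reader carries its writer, and a generic set just extracts and applies it.

    // Sketch: a reader function with its writer attached, plus a generic `set`.
    type Reader<O, V> = ((obj: O) => V) & { setter: (obj: O, value: V) => void };

    function makeReader<O, V>(get: (o: O) => V, put: (o: O, v: V) => void): Reader<O, V> {
      const reader = ((o: O) => get(o)) as Reader<O, V>;
      reader.setter = put;
      return reader;
    }

    // (set (name-of person-1) "sam")  ~  ((setter name-of) person-1 "sam")
    function set<O, V>(reader: Reader<O, V>, obj: O, value: V): void {
      reader.setter(obj, value);
    }

    interface Person { name: string }
    const nameOf = makeReader<Person, string>(p => p.name, (p, v) => { p.name = v; });

    const person1: Person = { name: "pat" };
    set(nameOf, person1, "sam");             // person1.name is now "sam"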



I find the claim about image-based development being required for macros somewhat unfounded.

The most trivial counterexample is an interpreter - it can simply evaluate the macros just like ordinary functions.

A step up in complexity is a compiler that, during compilation, compiles macro definitions by emitting code and dynamically loading it. (Goo does this: http://people.csail.mit.edu/jrb/goo/goo.htm , and I have also put together a toy implementation of this using dlopen; there are probably many other implementations that do the same.)


Yeah, that part made little sense to me.

Clojure is one of the weakest lisps when it comes to macros, but you can still define and use them at run-time, and it has no image-based development.


> This is all documented.

Things have changed. Fexprs are coming back in a big way.

You need to check out John Shutt's Kernel language https://web.cs.wpi.edu/~jshutt/kernel.html

Yes, older Lisps messed fexprs up. Kernel fixes this. The vau calculus used by Kernel is simply a lambda calculus that, unlike CBN and CBV, doesn't implicitly evaluate arguments. The rest of the calculus is the same.

What this means is you get powerful hygienic metaprogramming (arguably as powerful or even more powerful than Scheme's most advanced macro systems) at a low low price and with very elegant theoretical properties. In Kernel, hygiene is achieved simply with the usual lexical scope that's already in lambda calculus.

So vau calculus is simpler than the CBV lambda calculus used by Lisps: because it doesn't implicitly evaluate arguments, it does less than those calculi. And by doing less it gains the great power of being able to do hygienic metaprogramming in the same calculus, without second-class contraptions like macros.
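To make the "operands are not implicitly evaluated" point concrete, here's a toy evaluator sketch in TypeScript (an invented mini-Lisp, not Kernel's actual API): operatives get their operands as data plus the caller's environment, so something like $if is just a first-class value, with no macro machinery needed.

    // Toy sketch: "applicatives" get evaluated arguments, "operatives" (fexpr-like)
    // get raw operand expressions plus the dynamic environment of the call.
    type Expr = string | number | Expr[];
    type Value =
      | number
      | { kind: "applicative"; fn: (args: Value[]) => Value }
      | { kind: "operative"; fn: (operands: Expr[], env: Env) => Value };
    type Env = Map<string, Value>;

    function evaluate(expr: Expr, env: Env): Value {
      if (typeof expr === "number") return expr;
      if (typeof expr === "string") {
        const v = env.get(expr);
        if (v === undefined) throw new Error(`unbound symbol: ${expr}`);
        return v;
      }
      const [opExpr, ...operands] = expr;
      const op = evaluate(opExpr, env);
      if (typeof op === "number") throw new Error("not a combiner");
      return op.kind === "operative"
        ? op.fn(operands, env)                           // operands passed unevaluated
        : op.fn(operands.map(o => evaluate(o, env)));    // ordinary call: evaluate first
    }

    // $if as an operative: it only evaluates the branch it needs.
    const env: Env = new Map<string, Value>([
      ["+", { kind: "applicative", fn: args => (args as number[]).reduce((a, b) => a + b, 0) }],
      ["$if", { kind: "operative", fn: (ops, e) =>
          evaluate(ops[0], e) !== 0 ? evaluate(ops[1], e) : evaluate(ops[2], e) }],
      ["x", 1],
    ]);

    console.log(evaluate(["$if", "x", ["+", 1, 2], 999], env));  // 3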


> Things have changed. Fexprs are coming back in a big way.

I agree that Lisp dropped the ball on fexprs. It seems to have been a combination of dynamic scoping, well-founded performance concerns, and some conflation of syntax and semantics. The "special forms must appear under their own names" restriction is a related performance hack.

Lisp no longer has performance concerns. SBCL is excellent. Speculative JIT techniques would roll right over the "is this a function or a fexpr?" branch in apply with no trouble. I'm convinced there's no inherent overhead there.

I don't see much evidence that fexprs are coming back though. Kernel is superb, though I'm unconvinced by its handling of cycles, but it doesn't seem to be in use. Qi/Shen is a similar sort of do-things-right effort with limited uptake. What do you have in mind?

My working theory is that Lisp is niche enough already, and a better Lisp (which I contend Kernel is - replacing the layers of macros with fexprs guts a lot of complexity from the language, and that was meant to be the driving motive behind Scheme) hits a subset of the first niche.


Those things take time. I am convinced that the next big Lisp dialect will be a Kernel.


I last looked at that maybe eight years ago? The web page has undergone some updates, but the PDF paper is still from 2009.

Older Lisps didn't mess up fexprs. The developers wanted (ahead of time) compiling and moved on.

Using lexical scope in the context of fexprs is only a minor (and obvious) improvement. If fexprs made a comeback in, say, Common Lisp, it is painfully obvious they would be functions, whose parameters and locals are lexical by default.

Under a single dynamic scope, what it means is that when a fexpr is evaluating the argument code, its own local variables are possibly visible to that code. If the fexpr binds (let ((X 42)) ...) and inside there it calls EVAL on an argument which contains X, that argument's X resolves to 42.

That could be fixed by using two dynamic scopes: the fexpr having one implicit dynamic scope for its own execution (perhaps newly created for each call to the fexpr), and using an explicit dynamic scope for evaluating the argument material (that scope coming in as an argument).

Under a single dynamic scope, if the number of fexprs in the system is small, they can stick to some namespace for their own variables, and all non-FEXPR routines stay out of that namespace. fexprs have to then be careful when they pass pieces of their own code to other FEXPRS.

In a program with large numbers of fexprs, symbol packages would solve the problem: there would be multiple modules providing FEXPRS, which would use identifiers in their own package. Then only fexprs in the same package could clash when they use each other, which is resolved by inspection locally in the module. (E.g. use unique symbols across all the FEXPRS.)

I don't suspect hygiene was a problem in practice; during the heyday of fexprs, there wouldn't have been programs with huge numbers of FEXPRS (let alone programs with churning third-party libraries containing fexprs). Everything would have been done locally in one site, by a small number of authors working in their own fork of Lisp as such.

Thus, I don't think this was the main problem identified with fexprs; it was really the impediment to compiling. Shutt's paper doesn't seem to attack this problem at all.

Hygiene is a non-problem; we are still cheerfully using Lisps without hygienic macros in 2023, whereas fexprs not being compilable was clearly a pain point in 1960-something already.


Lack of compilation for fexprs is:

A) An exciting research problem! Shutt himself says that he doesn't see any fundamental obstacles to compiling them. It's just that nobody has done it yet.

B) Actually not a big deal for many applications. Take PicoLisp, which has been cheerfully used in customer-facing applications for decades. It's an ultra-simple interpreter (its GC is 200 LOC: https://github.com/picolisp/picolisp/blob/dev/src/gc.c ). The same architecture can be used for Kernel implementations.


We have learned a few things about compiling in the last fifty years, though. The assumption that fexprs thwart compilation is not well-founded.


Enjoying the discussion. Here’s a related paper: https://docplayer.net/137462966-Towards-jit-compilation-for-...


I think this is the same paper; docplayer is not looking great on mobile. Will read it, thanks! https://static.csg.ci.i.u-tokyo.ac.jp/papers/10/zakirov-jsss...

Covers the right sort of thing. It makes a guess at what a function call will be, inlines that guess, and if the guess turns out to be wrong, restarts from the start of the function in the interpreter. It doesn't explicitly call out the function calling convention, but guarded inlining has much the same effect.

Maybe worth noting that inlining a fexpr is semantically very close to expanding a macro. Identical if symbol renaming is done the same way for each.


I don't find this convincing.

> What do you return for an index into the array?

An option/maybe type would solve this much better.

> Yes, I know, it can be clumsy to trace it back to its source

An exception would be much better, alerting you to the exact spot where the problem occurred.


> An option/maybe type would solve this much better.

NaNs are already an option type, although implemented in hardware. The checking comes for free.

> An exception would be much better

You can configure the FPU to cause an Invalid Operation Exception, but I personally don't find that attractive.


The missing bit is language tooling. The regular floating-point APIs exposed by most languages don't force handling of NaNs.

The benefit of the option type is not necessarily just the extra value, but the fact that the API forces you to handle the None value. It's the difference between null and Option.

Even if the API were better, I think there's value in expressing it as Option<FloatGuaranteedToNotBeNaN>, which compiles down to using NaNs for the extra value, to keep it similar to other Option specialisations and avoid having to remember that this special primitive type has an option built in.
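A hedged TypeScript sketch of that shape (the NonNaN brand and toNonNaN helper are invented here, not a standard library type): the Option-style check is a single isNaN test at the boundary, and at runtime the value is still just a plain double.

    // Sketch: a compile-time "not NaN" brand; Option<NonNaN> handling costs one check.
    type NonNaN = number & { readonly __brand: "NonNaN" };

    function toNonNaN(x: number): NonNaN | undefined {
      return Number.isNaN(x) ? undefined : (x as NonNaN);
    }

    const bad = toNonNaN(Number("not a number"));  // undefined: caller must handle it
    const ok = toNonNaN(1.5);                      // NonNaN | undefined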


Yeah. You should be very explicit about it. Certainly not treat it like, “ooh, here are some free bits that I can use to tag things in ad hoc ways (like -1 for missing index)”.

https://internals.rust-lang.org/t/pre-rfc-nonnan-type/8418


> NaN's are already an option type, although implemented in hardware

The compromise with this is that it makes it impossible to represent a non-optional float, which leads to the same issues as null pointers in C++/Java/etc.

The impacts of NaN are almost certainly not as bad (in aggregate) as `null`, but it'd still be nice if more languages had ways to guarantee that certain numbers aren't NaN (e.g. with a richer set of number types).


> The impacts of NaN are almost certainly not as bad (in aggregate) as `null`, but it'd still be nice if more languages had ways to guarantee that certain numbers aren't NaN (e.g. with a richer set of number types).

The problem with that is that to guarantee arithmetic does not result in a NaN, you need to guarantee that 0 and infinity are not valid values, and those values can still arise from underflow/overflow of regular computation. Basically, there's no subset of floating-point numbers that forms a closed set under +, -, *, or / that doesn't include NaN. So you can define FiniteF32 (e.g.), but you can't really do anything with it without the result becoming a full-on float.
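A concrete illustration (JavaScript/TypeScript numbers are IEEE-754 doubles): two finite operands can overflow to infinity, and the infinities then produce NaN, so no "nice" subset stays closed under arithmetic.

    // Finite inputs escape any no-NaN subset through overflow.
    const big = Number.MAX_VALUE;   // largest finite double
    const inf = big * 2;            // Infinity (overflow from two finite operands)
    console.log(inf - inf);         // NaN
    console.log(0 / 0);             // NaN (quiet in JS: no exception raised)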


As far as I'm aware, there's no equivalent to a stack trace with NaN, so finding the origin of a NaN can be extremely tedious.


I've never found it to be particularly difficult.

The extremely difficult problems to find are uninitialized data and threading bugs, mainly because they appear and disappear.


Good points!


Exceptions are actually part of floats; they're called "signalling NaNs".

So technically Python is correct when it decided that 0.0/0.0 should raise an exception instead of just quietly returning NaN. Raising an exception is a standards-conforming option.

https://stackoverflow.com/questions/18118408/what-is-the-dif...


In practice, I've found signalling NaNs to be completely unworkable and gave up on them. The trouble is they eagerly convert to quiet NaNs, too eagerly.


I am firmly of the belief that sNaNs were a mistake in IEEE 754, and all they really serve to do is create hard trivia questions for compiler writers.


Technically, I guess it should return an sNaN (so the app can check for it if it wants to handle it differently) and raise an exception if the sNaN is used in a (non-comparison) operation.


> > What do you return for an index into the array?

> An option/maybe type would solve this much better.

Only if optional<float> is the same size as float.

