Hacker News | tgbugs's comments

This is incorrect. You can write a parser for org. See for example https://github.com/tgbugs/laundry. Work toward standardization has been stalled because I (among others) have not had time to circle back to work on it. In part this is because the lack of a standard has not blocked most use cases since emacs is open source and can run almost anywhere.

> You can write a parser for org. See for example https://github.com/tgbugs/laundry.

Oh, there are a lot of incomplete parsers. This one is no exception:

> Status

> Laundry can parse most of Org syntax, though there are still issues with the correctness of the parse in a number of cases.

> In particular there are a number of edge cases in the interaction between the syntax for various Org objects that have not been resolved.

I have my own parser written as a pest grammar. It covers just the basic features. Laundry seems to implement more of org-mode, but I don't really care anymore, because I believe org-mode will never be fully reimplemented.

> In part this is because the lack of a standard has not blocked most use cases since emacs is open source and can run almost anywhere.

I have some inexplicable aversion to the idea of starting an elisp interpreter just because my program needs an org-mode parser. But even if I could integrate elisp into my program as easily as I do Lua, I probably wouldn't, because a parser written in Lisp doesn't really solve the problem. It simplifies it a bit (I don't need to deal with the grammar) but shifts it to another level: I need to learn how org-mode is represented as a Lisp object. I would have to reverse engineer the formal definition of that recursive object to deal with it, or turn on defensive programming and expect anything.

The only realistic way of dealing with org-mode is to write code for emacs. There are exceptions to this rule, like pandoc, but I don't trust them.


> Work toward standardization has been stalled because I (among others) have not had time to circle back to work on it.

I tried not to react to this, but, I'm sorry, I'm too much of a troll to leave it without comment.

Of course you have no time to write a formal definition. No one has time for that, and no one ever will, because at this stage it is practically impossible. The parser was written as a bunch of regexps intermixed with Lisp code. All the edge cases were baked into org-mode because those regexps are the definition of org-mode. To write a formal grammar you need to catch all those edge cases and reproduce the behavior of the existing parser.

In retrospect, the parser should have been replaced with a formal grammar definition at a much earlier stage, when it was still possible to swap in another parser that was similar but subtly incompatible because it handled edge cases in different ways. Once that window was missed, those edge cases became legacy you cannot fix.


A great related article on mammalian megafauna and plants. https://www.americanforests.org/article/the-trees-that-miss-...

I wonder if there are existing data sources that could be used to implement an optimal pothole-patching priority list at scale.

Identify pothole locations. Combine with traffic metrics for those locations. Then use a combination of a pothole nuisance metric (size, depth, location in lane, number of cars that could hit it per unit time based on the traffic metrics), a cost-to-repair metric for a given repair type (which should include traffic-disruption cost estimates), and an estimate of the future degradation if it is not repaired, with that cost applied at a few time points .... I'm sure there are plenty of implementations of various versions of the algorithm, but I wonder whether there are open data sources ....
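A minimal sketch of what such a scoring function might look like. All field names, weights, and the exact score formula here are made up for illustration; a real system would calibrate these against actual repair and traffic data.

```python
from dataclasses import dataclass

@dataclass
class Pothole:
    size_m2: float         # surface area
    depth_cm: float
    cars_per_hour: float   # from traffic metrics at the location
    repair_cost: float     # includes traffic-disruption estimate
    future_cost: float     # discounted cost of degradation if left unrepaired

def priority(p: Pothole) -> float:
    # Nuisance grows with size, depth, and exposure to traffic.
    nuisance = p.size_m2 * p.depth_cm * p.cars_per_hour
    # Patch first where nuisance plus avoided degradation most exceeds cost.
    return (nuisance + p.future_cost) / p.repair_cost

holes = [
    Pothole(size_m2=0.5, depth_cm=8, cars_per_hour=400,
            repair_cost=300, future_cost=900),
    Pothole(size_m2=0.2, depth_cm=3, cars_per_hour=50,
            repair_cost=150, future_cost=100),
]
for h in sorted(holes, key=priority, reverse=True):
    print(round(priority(h), 2))
```

The interesting part is not the formula but the inputs: the nuisance and future-degradation terms are exactly where the open data sources would have to plug in.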

A quick search suggests that most approaches are municipality-based crowdsourcing efforts. A stream from the radars of various vehicles could provide something up-to-date enough to avoid false positives from potholes that had already been fixed .... Things like Street View and various aerial photography datasets probably update too slowly ... though I know of some potholes that have persisted through multiple recaptures.

0. https://par.nsf.gov/servlets/purl/10636488
1. https://doi.org/10.1016/j.jag.2023.103335

I guess the days of citizens grabbing their shovels and going to fix the roads are becoming a thing of the past. Which is a shame because the total cost of asphalt needed to fix most potholes is less than the cost of a single tire repair.


One way to work around the heat dissipation issues in space (and also on earth) is to move to computing systems that operate entirely at cryogenic temperatures to take advantage of superconducting circuitry.

I've heard stories that over a decade ago teams inside hyperscalers had calculated that running completely cryogenically cooled data centers would be vastly cheaper than what we do now, due to savings on resistive losses and the cost of eliminating waste heat. You don't have to get rid of heat that you don't generate in the first place.

The issue is that at the moment there are very few IC components and processes that have been engineered to run at cryogenic temperatures. Replicating the entirety of the existing data center stack for cryogenic temps is nowhere near reality.

That said, once you have cryogenic superconducting integrated circuits you could colocate your data centers and your propellant/oxidizer depots. Not exactly "data centers off in deep space", since prop/ox depots tend to be in the highest-traffic areas.


by my calculations, the heat dissipation isn't that big a deal

take an h100 for example. it will need something like 1kW to operate. that's less than 4 square meters of solar panel

at 70C, a reasonable temp for H100, a 4 square meter radiator can emit north of 2kW of energy into deep space
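That 2 kW figure can be sanity-checked with the Stefan-Boltzmann law. This is an idealized sketch: it assumes a single-sided 4 m² surface with an emissivity of 0.9, a roughly 0 K background, and no solar loading.

```python
# Stefan-Boltzmann estimate of power radiated into deep space.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(area_m2: float, temp_c: float, emissivity: float = 0.9) -> float:
    """Watts radiated by a surface at temp_c into a ~0 K background."""
    t_k = temp_c + 273.15
    return emissivity * SIGMA * area_m2 * t_k ** 4

# 4 m^2 at 70 C comes out to roughly 2.8 kW under these assumptions,
# consistent with the "north of 2 kW" claim.
print(radiated_power(4, 70))
```

The idealizations all cut in the optimistic direction (real radiators see the sun, the earth, and their own structure), so the margin over 2 kW is thinner in practice.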

seems to me like a 2x2x2 cube could house an H100 in space

perhaps I'm missing something?


Heat travels when there is a thermal gradient. What thermally superconducting material are you going to make your cube out of, such that the surface temperature is exactly the same as the core temperature? If you don't have one, then to keep the h100 at 70C the radiators have to be colder. How much more radiator area do you need then?
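The area penalty from a cooler radiating surface can be sketched from the Stefan-Boltzmann T^4 dependence. This assumes an idealized black-body-like radiator into a ~0 K background; the 40 C figure is an arbitrary illustration, not a real design point.

```python
# Required radiator area scales as (T_chip / T_radiator)^4 for the
# same emitted power, by the Stefan-Boltzmann law (idealized).
def area_ratio(chip_c: float, radiator_c: float) -> float:
    return ((chip_c + 273.15) / (radiator_c + 273.15)) ** 4

# A radiating surface at 40 C instead of 70 C needs ~1.44x the area.
print(area_ratio(70, 40))
```

A 30 C drop from chip to surface already costs over 40% more area, which is why the gradient question is not a nitpick.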

Have you considered the effects of insolation? Sunlight heats things too.

How efficient is your power supply, and how much waste heat is generated delivering 1kW to your h100?

How do you move data between the ground and your satellite? How much power does that take?

If it's in LEO, how many thermal cycles can your h100 survive? If it's not in LEO, go back to the previous question and add an order of magnitude.

I could go on, but honestly those details, while individually solvable, don't matter, because there is no world where you would not be better off taking the exact same h100 and installing it somewhere on the ground instead.


h100 can operate at 80-90C continuously, so 70C seems conservative

I'm not advocating for space GPUs as a logical next step. so many unsolved problems remain

point is that launch costs per kg are a more realistic blocker than cooling


The typical GPU cloud machine will have 8 H100s in a box. I didn't check your math, but if a single machine needs a 32-square-meter radiator, 200 machines would probably be comparable in size to the ISS.

How much does it cost to launch just the mass of something that big?

Do you see how unrealistic this is?

Given that budget, I could bundle in an SMR nuclear reactor and still have change left.


my point is that cooling is not the problem, launch cost per kg is


It is [0]. All the pages are generated from org sources. If you scroll to the bottom of any page there is a link. Also tec used to have an overlay that would show you the org source directly, but ultimately it was decided that it should not be enabled because it required javascript and could be confusing.

0. https://git.sr.ht/~bzg/orgweb


This points to a potential answer to a long standing question I've had about why some hairs stop growing at certain lengths. If the force is being generated by cellular migration then control over when to stop growing can be mediated by a signal that tells the cells to stop migrating, and that could be based on time or vibration amplitude or something else that correlates with hair length. For hair that grows continually you just ... never turn off cell migration.


I don't believe any hair on the human body truly grows continuously. Even head hair has a lifespan of ~7 years and whatever you can grow in that time is the max. I was a big metalhead in high school and grew my hair out. Indeed, after about 7-8 years it stopped getting longer, right at about waist level, and was stuck there until I finally got sick of it and cut it off.


I think they call this the "terminal length". As long as it will grow before it falls out.


Oh my god I have always asked why eyebrows stop growing and NOBODY ever thinks about it.


> why eyebrows stop growing and NOBODY ever thinks about it

If you clip your eyebrows, will they grow back to their original length? Or is there a process that generates an eyebrow hair and then stops after a pre-determined length of time (with periodic shedding)?


Every hair grows to a maximal length before stopping or falling out. It varies wildly based on genetics, location on the body, and body chemistry.

Animals evolved specialized hairs for different uses: protection, warmth, display; your armpit hairs wick sweat and keep your skin from rubbing. It's beneficial to have a system that keeps the specialized hairs in their optimal(ish) configuration and replaces hairs as they become worn and damaged.


Wait until you get older, and you either start trimming them or considering some kind of Yellowbeard style braids


I like to compare these to CENTENNIA [0,1], which was the first program like this that I ever encountered (back in 6th grade). My test is to see whether the program records the Napoleonic Wars. This one does not.

0. https://historicalatlas.com/download/
1. https://youtu.be/WFYKrNptzXw?t=64


A fully psychometric version of this that explores more than just the fovea could be created by varying the scale parameter (if you crank it up high enough you can see the movement in the periphery). The additional component you would need is trials where the subject has to report whether a particular region (it could even be cued with a red circle; I don't think it needs to be random) is actually moving or not while fixated on the center. There are clearly cells that detect this kind of motion in the periphery, but they need larger visual input, possibly because the receptive fields of the cells that feed in are larger out there.


One science museum that is not like that is the Deutsches Technikmuseum Berlin, at least when I was there (shudder) about a decade ago.

It was a museum that was designed for parents to explain to children. The written material for any given piece in an exhibit went into sufficient detail and successive sections of writing would build on each other without necessarily requiring that the previous section had been read.

Back then the museum had an exhibition on the longitude problem and time keeping, precision, drift, etc. that walked you through the development of increasingly accurate chronometers, the practical reasons why, etc. It was an absolute masterwork exhibit, and it expected the adults to be actively engaged with helping digest the material with the kids.


I think about this every single time I drive by a stretch of road that has these. You can't have public goods when the value of those goods in private hands is greater than the risk of, ahrm, converting those public goods into private goods.

When a society fails to provide sufficient opportunity for all its members then those who have been left behind can simply make up the difference by retrieving their share of the common wealth by other means.

The cost of trying to police this (ignoring entirely the moral and ethical implications of such policing) at the scale of e.g. all roads with guardrails is more than the cost of replacing the rails, and likely substantially more than just providing the missing opportunity and removing the sources of wealth inequality that make wealth redistribution in the form of guardrails an inevitability.


But the police are paid for with the taxation of normal people, not the ultra wealthy class. Which is who would need to be taxed to redistribute wealth and opportunity. Our politicians have zero interest in properly taxing themselves and their friends. So easier to just keep taxing the middle and over funding policing.


you got so close...


The degree to which the line between the state and moneyed individuals and interests blurs as they converge near the top confuses a lot of people.

