This is not true for business books like mine. It's vital to write a proposal first in that world; publishers want to influence the content (as in the OP article).
I think the same is true for tech books but I don't know as I haven't written one.
A novel or other fiction is the opposite; there you do have to write the whole thing first.
As I commented in another thread, there's no a priori reason to believe that the "average" glutamate receptor level is the "right" one. Isn't it possible that there are:
1. "Normal" people with a level of glutamate receptors at 10, say, on a scale I'm inventing for this example
2. "Autistic" (according to the DSM) people with a level of, say, 5, who are hindered by the effects of being at this level
3. "A little bit autistic" people at a level of, say, 8, who aren't hindered and don't meet the DSM criteria, but in fact actually benefit from the effects of being at this level
Some "normals" might then want to inhibit their glutamate receptors somewhat to get the benefits of being at an 8 or a 9 on my made-up scale.
Perhaps. But remember that this is a very complex 3D structure with varying receptor densities; it's not "The Glutamate Level", it's some neural-network areas with higher or lower excitability connected to other neural networks.
Just like with ADHD, it's likely that medication will at best have limited effectiveness and many side effects.
Certainly, we're at the "bash it with a hammer" stage, not ready for anything nuanced. I just wouldn't want to assume that the right outcome is "less autism"; I suspect most people could do with at least a little more!
It seems you are assuming that because the majority of people have a certain quantity of glutamate receptors, they are the healthy ones and that we should be trying to bring autistic people up to that level. Is that right?
Why not consider the opposite, that the most beneficial quantity of glutamate receptors could be somewhere below the typical amount? If that were true, then we could try to help others reduce their glutamate receptor level to become healthier and more successful (and a little more autistic).
If we found, say, an association between a lower level of neurological characteristic X and concert-level piano skill, then those who aspire to play that instrument at an elite level might try to decrease X. The fact that most of us are rubbish piano players would not be evidence that lower levels of X are harmful, but very much the opposite.
It is an interesting idea, but let's not assume autistic traits make you more talented at anything. There certainly are highly intelligent people with autistic traits who are able to use hyper-focus to work very hard and succeed in academia or at work. I doubt any rational person is looking for "a cure" for the Alan Turings and Albert Einsteins of this world. Nor even for a regular, albeit slightly odd, chap like myself, who likes reading books alone with his cat and studying math instead of seeing other people.
However, there are people with severe autism that makes it more or less impossible for them to communicate with other people or live independently. If these people could have their lives improved, it might make a huge difference to them and their families.
> All autistic participants in the study had average or above average cognitive abilities. McPartland and collaborators are also working together on developing other approaches to PET scans that will enable them to include individuals with intellectual disabilities in future studies.
Simply put, they didn't even touch the keeners, the nonverbal, the piss-in-your-pants, or the perpetual 1-year-old autistics. They went after people who would previously have been diagnosed with "Asperger's syndrome".
But everything cognitive seems to be called 'autism spectrum disorder' these days.
I am not sure what conclusion you would like us to draw from this. Presumably it is simpler to recruit people for this sort of study if you can, y'know, ask them. The next step would be to repeat the study with a larger group, eventually also adding those who really, really need help. I doubt there's a Nobel waiting for someone for creating a drug that helps a chap who likes trains to look you in the eye while speaking to you.
Of course they didn't. It would be unethical to perform non-medically-necessary PET scans on people who are unable to give informed consent due to the radiation exposure.
First, one PET scan is around 25 mSv. 50 mSv is the yearly limit for radiation workers, and those limits are deliberately conservative to allow for accidental overage. 100 mSv is where detectable cancer risk starts. So the risk from one scan is basically zero.
Secondly, someone has medical power of attorney over the non-functional autistics. And in reality, they are the ones in most need of (almost passive) study to help them. We high-functioning autistics don't need anywhere near as much help. And we have no way to know whether Asperger's and traditional autism are even similar, other than that the spectrum brigade keeps adding more and more under "autism".
Simply put, if a guardian says yes to a single scan a year, I see no problem with it. More than one a year and we start getting into potential damage. Maybe in some pie-in-the-sky IRB what-if situation, sure. But one scan per year has no demonstrable damage.
I imagine it was a lot easier to get this version where the study participants can consent for themselves past an ethics panel. Now that there's a result suggesting something of value might be learned, there's a stronger argument for studies with greater ethical risk.
You're absolutely right; that assumption was implicit. The answer was written entirely within that framework. I'm not here to say what's right or wrong in determining something about people who lie outside the normal range on these measures, or what "normal" means.
So what I wrote should be read with an implicit "if this is held to be a condition that deserves remediation or avoidance of its manifestation" attached.
Most medical conditions are couched in this sense: a deficit or departure from the normal is a problem. In matters of brain chemistry, it pays to be more nuanced.
Amazingly, no one seems to have actually checked that this picture was really "circulating on social media". I've been investigating for the past hour or so and can't locate a single public post or reference anywhere other than reposts of the BBC article.
Typically, postings that gain traction have many, many reposts, and though some may be deleted, there's a long tail of reverberation left behind. I can't find that at all here.
I wonder if the hoaxer just emailed it to Network Rail directly?
Doing it without your customer's agreement is indeed unhelpful! I'm sorry you're that frustrated by this experimentation.
That's why I advocate using feature flags and beta labels and supervised user research to get feedback. Do those methods work for you, so you can opt in?
Thanks! That's not the positive use I have in mind here, where a manager has a real need and uses a demo to focus the work. Does that ever happen at your large corporate?
Roadmap [1] last updated in June 2025, so a reasonable chance that the project is alive. Though the status colours indicate there's a good percentage of development left to do even before early access.
The roadmap has both early-access and 1.0 goals. I just wrapped up terrain generation/modification, so all that's left is to add the municipal services, funds, and probably street parking, then wrap up the overlays.
I have to imagine that like pair programming, this multi-AI approach would be significantly more tiring than one-window, one-programmer coding. Do you have to force yourself to take breaks to keep up your stamina?
Well, I don't use as many instances as they do, but using codex, for example, will take some time researching the code. It mostly just becomes a practice so that I don't have to wait for the model; I just context-switch to a different one. It probably helps that I have ADHD, so I don't pay much of a cost to ping-pong around.
So I might tell one to look back in the git history to when something was removed and add it back into a class. So it will figure out what commit added it, what removed it, and then add the code back in.
While that terminal is doing that, on another I can kick off another agent to make some fixes for something else that I need to knock out in another project.
I just ping-pong back to the first window to look at the code, tell it to add a new unit test for the new possible state inside the class it modified, and I'm done.
While working, I may also periodically have a question about a best practice or something, which I'll kick off in the browser and leave running to read later.
This is not draining, and I keep a flow because I'm not sitting and waiting on something; they are waiting on me to context-switch back.
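For the curious, the git-history task above boils down to git's "pickaxe" search. A minimal sketch of the equivalent manual query in Python (the symbol name is made up for illustration):

    import subprocess

    def commits_touching(symbol: str) -> list[str]:
        """Hashes of commits that added or removed `symbol` (git log -S)."""
        result = subprocess.run(
            ["git", "log", "-S", symbol, "--format=%H"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.split()

    # git log prints newest-first: the last hash is typically where the
    # code was introduced, the first where it was removed; `git show <hash>`
    # then recovers the old body.
    hashes = commits_touching("retry_with_backoff")  # hypothetical symbol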
If you're getting AI slop, you're doing it wrong. You should be getting high-quality code. Of course, that's easier said than done, but AI slop is a sign that things have gone off the rails.
I have scarcely gotten decent code. The best a model has spat out is 'fine', which is ok for menial tasks.
I have yet to see anyone show me an AI generated project that I'd be willing to put into production.
IDK, I feel like "vibe coders", or people who heavily rely on LLMs, have allowed their skills (if they ever existed) to atrophy such that they're generally not great at assessing the output from models.
A friendly reminder that "refactor" means "make and commit a tiny change in less than a few minutes" (see links below). The OP and many comments here use "refactor" when they actually mean "rewrite".
I hear from my clients (but have not verified myself!) that LLMs perform much better with a series of tiny, atomic changes like Replace Magic Literal, Pull Up Field, and Combine Functions Into Transform.
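For instance, a minimal sketch of Replace Magic Literal (Python; names invented for illustration):

    # Before: a bare number whose meaning the reader must guess.
    def total_with_tax(amount):
        return amount * 1.25

    # After: the literal gets a name; behaviour is unchanged.
    VAT_MULTIPLIER = 1.25

    def total_with_tax(amount):
        return amount * VAT_MULTIPLIER

A change that small can be made, tested, and committed in under a minute.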
Everywhere I've worked over the years (35+), and in conversation with peers (outside of work), "refactor" means to change the structure of an existing program while retaining all of the original functionality, with no specificity regarding how big or small such changes may amount to.
A "rewrite", by contrast, usually implies starting from scratch, whether small or large, replacing existing implementations (of functions/methods/modules/whatever) with newly created ones.
Indeed, one can refactor a large codebase without actually rewriting much, if anything at all, of substance.
Maybe one could claim that this is actually lots of micro-refactors, but that doesn't flow particularly well in communication. And if the sum total is not specifically a "rewrite", what collective, overarching noun should be used for all of these smaller refactorings? If one spent time making lots of smaller changes but not actually re-implementing anything, then to me that's not a rewrite: the code has been refactored, even if it is a large piece of code with a lot of structural changes throughout.
Perhaps part of the issue in this context is that LLMs don't particularly refactor code anyhow; they generally rewrite (regenerate) it. That is where many of the subtle issues described in other comments creep in: the kinds of issues a human wouldn't necessarily introduce when refactoring (e.g. a changed regex, changed dates, other changes to functionality, etc.).
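To illustrate the regex point with a made-up example: a regenerated pattern can read the same at a glance while silently loosening the match.

    import re

    # Original: a strict YYYY-MM-DD date check.
    strict = re.compile(r"^\d{4}-\d{2}-\d{2}$")

    # A plausible regenerated version that looks equivalent in review
    # but accepts far more input.
    loose = re.compile(r"^\d+-\d+-\d+$")

    print(bool(strict.match("1-2-3")))  # False
    print(bool(loose.match("1-2-3")))   # True: behaviour has changed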
Yes, the incorrect usage is widespread! See Fowler's original book for the thinking behind the term -- every example therein is a 1-minute job, and many are macros in your IDE.
Good point that LLMs tend to rewrite unless corrected. I have heard (but not tested myself!) that if you tell them to apply a series of small changes they stay on track better. Fowler's list would probably be a good starting place.