amszmidt's comments | Hacker News

> [...] and notebooks have embedded code in documentation since Mathematica in the 1980's.

Late 80's, very late ... but the concept of "notebooks" predates Mathematica by at least a decade (it was very common to embed structure in source code files with markup).


(No need for LET*, just LET would suffice.)


LET* may be needed if the operations are time-dependent (so if the move-to operation must happen before the delay operation), as LET does the binding in parallel (at least, according to the spec).
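
For what it's worth, a minimal sketch of the difference with made-up variables (not the move-to/delay code under discussion):

    ;; LET: init forms are evaluated, then all bindings are made at once;
    ;; Y's init form sees the outer X, not the new one.
    (let ((x 1))
      (let ((x 2)
            (y x))
        (list x y)))   ; => (2 1)

    ;; LET*: bindings are made one at a time, so each init form
    ;; sees the bindings established before it.
    (let ((x 1))
      (let* ((x 2)
             (y x))
        (list x y)))   ; => (2 2)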


That is a very acute observation!


> What were the other colors?

Ada was designed through a competition between four companies; to make it unbiased, the proposals were named by colour.

Honeywell was Green, Intermetrics was Red, SofTech was Blue, and SRI was Yellow. There were no other colors.


Ada is a name, not an acronym like ADA, the Americans with Disabilities Act, which unfortunately collides with it when searching for Ada material.


While the Lisp Machine does use lists, the benefits of CDR coding were / are quite overblown. The Lisp Machine also used other data structures far more heavily than lists.

The Lisp Machine macroinstructions aren't that complicated; it is basically a stack-based machine -- the most complicated thing was the handling of function arguments from Lisp (the FEF), which has complicated semantics when it comes to optional, keyword, and other kinds of arguments.


Not sure where this "analysis" comes from, but it is generally agreed that the demise of the Lisp Machine had more to do with the speed of technological advances than with Common Lisp -- by the time Common Lisp was standardized, for all intents and purposes, all the Lisp Machine companies were already defunct.

Common Lisp was definitely not designed with the intention of better performance; the intention was literally to have a _common_ Lisp language that multiple implementations could use and where porting programs would be much easier. One needs to remember that when Common Lisp was first drafted, there were dozens of Lisp dialects -- all of them different.

The list of CPUs spans a very wide range, some predating Common Lisp (CLtL1 is from 1984, CLtL2 from 1990, and ANSI Common Lisp from 1996) by several years (the VAX, from 1977).

But other than that, the idea of a not-Unix system does fall into those two buckets ... make it Unix, or rewrite everything. One can see this in Oberon, Smalltalk-78, Mezzano, etc...


Sinks where you wash dishes tend to be far dirtier than any shower.

Having showers at work is awesome, it means you can take a proper bike ride to work, and freshen up.


It is documented in detail in the GNU screen manual, https://www.gnu.org/software/screen/manual/screen.html#Log
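
For example (file names here are illustrative), logging can be enabled from the command line or from ~/.screenrc:

    screen -L                # start a session with output logging enabled

    # or in ~/.screenrc:
    deflog on                # turn logging on for new windows
    logfile screenlog.%n     # log file name; %n is the window number

C-a H also toggles logging for the current window in a running session.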


> 1. Commit 3ba50ec which Marcan is complaining about was pushed in 2008 and didn't just delete attrib specifically, but all (?) VCS comments indiscriminately. The file was barely touched since

FWIW -- I think this is the commit being talked about: https://github.com/m87h/libogc/commit/3ba50ecd4134ef37a0f18f...

Looks honestly totally innocent ...


This by itself just looks to me like someone getting rid of the old CVS comments that are messing up the headers. Maybe the problem is that younger people don't recognize what a CVS header is?


Yeah entirely possible.

The threading code does seem to be "similar" enough to warrant some investigation, but from the looks of it, no license notice was "removed". Even the threading code in question has never had one, going back to the initial import from CVS/Subversion.

Making a mountain out of a molehill .... before the molehill is even there?


> A good way to make sure your project won't cross compile is to use Autoconf. Rampant use of Autoconf is the main reason distros gave up on cross compiling and started using QEMU. Developers who use Autoconf and who don't know what cross-compiling is will not end up with a cleanly cross-compiling project that downstream packagers don't have to patch into submission.

Cross compilation for distributions is a mess, but it is because of a wide proliferation of build systems, not because of the GNU autotools -- which have probably the most sane way of doing cross compilation out there. E.g., distributions have to figure out why ./configure does not support --host because someone decided to write their own thing ...

> The main idea behind Autoconf is political. Autoconf based programs are deliberately intended to hinder those who are able to build a program on a non-GNU system and then want to make contributions while just staying on that system, not getting a whole GNU environment.

Nothing could be further from the truth; GNU autoconf started as a bunch of shared ./configure scripts so that programs COULD build on non-GNU systems. It is also why GNU autoconf and GNU automake go to such great lengths to support cross compilation, to the point where you can build a compiler that targets one system, runs on another, and was built on a third (a Canadian cross compile).
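
As a sketch (the triplets are only illustrative), a Canadian cross of a compiler built on GNU/Linux, running on Windows, and generating code for bare-metal ARM comes down to:

    # --build:  where the compiler is being built
    # --host:   where the compiler will run
    # --target: what code the compiler will generate
    ./configure --build=x86_64-pc-linux-gnu \
                --host=x86_64-w64-mingw32 \
                --target=arm-none-eabi
    make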


> distributions have to figure out why ./configure does not support --host because someone decided to write their own thing ...

1. Most of the time, the reason ./configure won't properly support --host isn't that someone wrote their own thing, but that it's Autoconf.

2. --host is only for compilers and compiler-like tools, or programs which build their own compiler or code generation tools for their own build purposes, which have to run on the build machine rather than the target (or in some cases both). Most programs don't need any indications related to cross compiling because their build systems only build for the target. If a program that needs to build for the build machine has its own conventions for specifying the build machine and target machine toolchains, and those conventions work, that is laudable. Deviating from the conventions dictated by a pile of crap that we should stop using isn't the same thing as "does not work".


> A recent HN submission shows a ./configure system made from scratch using makefiles, which parallelizes the tests. That could be a good starting point for a C on Linux project today.

Not everything is C, or GNU/Linux. The example also misses much of the basic functionality that makes GNU autotools amazing.

The major benefit of GNU autotools is that it works well, especially for new platforms and cross compilation. If all you care about is your own system, a simple Makefile will do just fine. And with GNU autotools you can also choose to use just GNU autoconf ... or just GNU automake.
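
As a sketch (project and file names are made up), a minimal autoconf/automake project is little more than:

    # configure.ac
    AC_INIT([hello], [1.0])
    AM_INIT_AUTOMAKE([foreign])
    AC_PROG_CC
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT

    # Makefile.am
    bin_PROGRAMS = hello
    hello_SOURCES = hello.c

Running autoreconf --install then gives you a portable ./configure with --host, --prefix, VPATH builds, and so on for free.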

Having generated files in the release tarball is good practice; why should users have to install a bunch of extra tools just to get a PDF of the manual or other non-system-specific files? It is not just build scripts all over the place; installing TeX Live just to get a PDF manual of something is super annoying.

Writing your own ./configure that works remotely like what users expect is non-trivial and complicated -- we did that 30 years ago, before GNU autoconf. There is a reason why we stopped doing that ...

I'd go so far as to say that GNU autotools is the most sensible build system out there...


> Not everything is C,

AutoTools are squarely oriented toward C, though.

If you're not using C or C++, you're probably not using AutoTools.

(I think I might have seen someone's purely Python or shell script project using Autoconf, but that was just ridiculously unnecessary.)

> Having generated files in the release tarball is good practice

Without a doubt, it is a good idea to ship certain generated files in a source code distribution. For instance, if we ship a generated y.tab.c file, the user doesn't have to have a Yacc program (let alone the exact version of the one we would like them to use).

What's not good practice is for anything to be different in the release tarball compared to the git commit it was made from.

"Release tarball" itself a configuration management antipattern. We are a good two decades past tarballs now. A tarball should only be a convenience for people who don't need a git clone.

Every generated thing that a downstream user might need should be in version control, and updated whenever its prerequisites change. This is a special exception to the general rule that only primary objects should be in version control. Secondary objects for which downstream users don't have generation tools should also be in version control.

