
What would happen to projects like F-Droid, Termux, etc.?

Are you sure you aren't thinking of LaTeX?

TeX (plain TeX, not LaTeX) has phenomenally good logging and error messages IMO — everything you need is there, each error message comes in a “formal” and “informal” form and points you to exactly the place the error happened, and TeX lets you fix things on-the-fly without restarting the program. All this of course assumes you use TeX the way it is described in the manual (The TeXbook). The experience is the opposite with LaTeX, so I find it worth giving up all the convenience of LaTeX just for the wonderful experience with TeX.

As for “the TeX language”, there is no such thing. As Knuth has said many times, TeX is designed for typesetting, not programming. Sure it has macros to save some typing, but if you're writing elaborate programs in it (as is nearly inevitable if you're using LaTeX) you're doing something wrong. Knuth said:

> When I put in the calculation of prime numbers into the TeX manual I was not thinking of this as the way to use TeX. I was thinking, “Oh, by the way, look at this: dogs can stand on their hind legs and TeX can calculate prime numbers.”

But of course LaTeX does every such thing imaginable :-)

More on TeX not being a programming language: https://cstheory.stackexchange.com/a/40282/115

On the TeX error experience: https://news.ycombinator.com/item?id=15734980


G. Branden Robinson swears by U+2010 for hyphens in groff's Unicode output [0], but I see it as a hypercorrection. The most common convention by far (among authors who use Unicode and care about dashes) is to use U+002D for hyphens and U+2212 for minus signs. Not even the Unicode Consortium uses U+2010 for hyphens in its documents, and I'm not aware of any major organization that does.

As far as appearance goes, almost all fonts I've looked at make U+2010 identical to U+002D (i.e., they don't put any 'minus' into the 'hyphen-minus'), but a few make U+2010 a smidgeon shorter.
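 
In case it helps to see them side by side, here is a quick sketch in plain Python (nothing groff-specific about it):

    # Print the three dash-like code points discussed above.
    import unicodedata

    for ch in ("\u002D", "\u2010", "\u2212"):
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch):<14} {ch!r}")
    # U+002D  HYPHEN-MINUS   '-'
    # U+2010  HYPHEN         '‐'
    # U+2212  MINUS SIGN     '−'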

[0] https://news.ycombinator.com/item?id=38121765


Real programmers use octal not hex.

making everything a string, for example. representing string length with null termination. splitting shell variables into words after variable substitution (rather than having a list type). a zillion shell-script bugs when filenames, directory names, search strings, etc., contain spaces. not having user-defined subroutines in sh or awk. no command-line editing. command names and options that read the same as you type them (thus forcing you to choose between unreadable commands and commands that are too unwieldy to type comfortably). no tab completion. more generally, no interactive feedback on what is going to happen when you submit your command. no job control. having to retype command output as new input all the time. table layout that only works with fixed-width fonts. filenames limited to 14 characters. no text search through previous output. hardlinks rather than reflinks. an inflexible security policy defined in the kernel. the reification of . and .. in the filesystem. exposing the layout of filesystem directories to every program that enumerated directory entries. no way to undo rm or mv. no screen editors. arbitrary line-length limits all over the place. . in the path. textual macro expansion as the primary form of higher-order programming. backslashed backquotes. backslashed backslashes. total precedence orderings on infix operators. slightly different regular expression syntaxes for each program. self-perpetuating at jobs. etc.

a lot of these have been fixed, but some are hard to fix


45^2 = 2025

Happy perfect square year, everyone. The previous one was 1936 and the next one will be 2116.
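 
A quick sanity check, for anyone who likes to verify these things:

    # 44^2 = 1936, 45^2 = 2025, 46^2 = 2116
    import math

    for year in (1936, 2025, 2116):
        root = math.isqrt(year)
        print(f"{root}^2 = {root * root}", root * root == year)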


Apologies for the long response.

I am only partially qualified in that I am not a professional archeologist, but I have done post-doctoral archeological studies and have read enough archeological studies to understand the larger academic context.

It is not possible to present all the data informing a judgment in such a short work. Even in a book, it would not be possible. Thus it is common in archeology for papers to be written as part of an ongoing conversation / debate with the community - which would be defined as the small handful of other archeologists doing serious research on the same specific subject matter.

Part of that context here is that these tombs are well-established to be the royal tombs of Alexander's family, spanning a few generations including his father and his son. This is one of the most heavily studied sites in Greece for obvious reasons, and that is not something anybody is trying to prove.

In that context, his arguments are not trying to identify a body as one among millions, but as one among a small handful of fewer than ten possibilities.

At the same time, the fact that he is not a native English speaker, and the general style of archeological writing, both come into play. For example:

"the painter must have watched a Persian gazelle in Persia, since he painted it so naturalistically (contra Brecoulaki Citation2006). So the painter of Tomb II has to be Philoxenus of Eretria" sounds like a massive leap, and it is. He continues:

"... Tomb I (Tomb of Persephone) must have been painted hastily by Nicomachus of Thebes (Andronikos Citation1984; Borza Citation1987; Brecoulaki et al. Citation2023, 100), who was a very fast painter (Saatsoglou-Paliadeli Citation2011, 286) and was famous for painting the Rape of Persephone (Pliny, N. H. 35.108–109), perhaps that of Tomb I."

Another huge leap, and both are 'presented as conclusions'. However, he then goes on to indicate that these are just hypotheses: "These hypotheses are consistent with the dates of the tombs..."

So his English-language practice of presenting things as fact does not indicate certainty in the way those words would be used in everyday speech. He seems to perhaps misunderstand the force of the terms, but he also appears to be working within the context of the conversation with other archeologists that I mentioned at the start: they all know every affirmation is to be read as "probably", rarely anything more. So it is relatively common shorthand of the craft in that sense.

I believe you are overthinking his responses to other authors, although I understand the culture shock. It is an ongoing conversation and archeologists tend to be blunt in their assessments. Add Greek bluntness on top of this, and it does not seem to matter to the material.

As to your last question, is this legitimate research? The answer overall appears to be yes, although there are several points (such as the identification of artists I quoted above, and various other items I noticed) that I would never have put into ink the way he did. Still, most of his arguments are compelling. It is a shame that the aggressiveness of a few affirmations detracts from the overall value of his work. Archeology is not code, nor is it physics. It does not pursue universal truths that are easier to verify through repeated experiments, but unique historical ones which necessarily attempt to interweave physical details and ancient historical records. Each field has its own level of certainty, and the fact that we cannot establish these details with the same certainty as we can establish the chemical formula for water does not make them useless, or pure inventions. Far from it.


Chip design & manufacturing is probably the closest thing we have to witchcraft as a species.

what're the odds of two dead 0x users showing up at the same time in a thread previously without comments. gave me a chuckle

I have no data, but I wish that to be true. What an amazing tribute.

My brother just started learning web development this past year and his mind was blown when I told him you could send HTml over HTtp.

He thinks it's really amazingly cool! :D

I'm so happy to hear that I had some part to play in inspiring such a marvellous project.


“I speak Spanish to God, Italian to women, French to men, and German to my horse.” — Charles V

For those Python programmers out there, if you don't mind sharing your experiences, do you spend any time at all in the REPL? If so, what fraction of the time, approximately? Using IPython, JupyterLab, or something else? Or do you just run it directly from VS Code or PyCharm? Anything you may want to add about your routine would be appreciated.

Oh, if you (experienced programmer or not) happen to know about a good site or YouTube channel to see Python programmers in action (as opposed to tutorials), please share.

Thanks in advance, and apologies for the digression.


More trivia: ASCII characters were designed to support overstrike (using BS) to generate more characters. For example, á can be written as a BS '. Acute, grave, and circumflex accents, and umlauts, can be written this way on lower-case letters. ñ is n BS ~. Cedilla (ç) is c BS comma. Underline is character BS underscore; strikethrough is similar, using dash or tilde. Bold is character BS same_character.

Many Compose key sequences are based on this.

So ASCII, in a way, may well be the first, or one of the first, variable-length codesets.

Note that overstriking upper-case letters to add diacritical marks does not work for ASCII (see more below).

Also, overstriking was how one typed diacritical marks back in the days of mechanical typewriters. Spanish used to (and may still? I forget) permit upper-case letters to not carry diacritical marks precisely because it was difficult or costly to get typewriters and printers to print such letters. Adding diacritical marks to upper-case letters requires fonts designed for that, and clearly an overstrike sequence cannot work unless the typewriter/printer holds printing the character until it knows whether the next character is BS.
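 
To make the overstrike scheme concrete, here is a small sketch in Python (my own illustration, not anything from the standard; the diacritic forms only ever rendered on printing terminals and typewriters, while the underline/bold forms are still honoured by pagers such as less):

    # Overstrike: character, backspace (0x08), second character.
    BS = "\b"

    def overstrike(base, mark):
        return base + BS + mark

    a_acute   = overstrike("a", "'")   # a BS '
    n_tilde   = overstrike("n", "~")   # n BS ~
    c_cedilla = overstrike("c", ",")   # c BS ,

    # nroff-era emphasis, as described above:
    underlined = "".join(overstrike(ch, "_") for ch in "hello")
    bold       = "".join(overstrike(ch, ch) for ch in "hello")
    print(repr(a_acute), repr(underlined), repr(bold))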


Databases can usually be split into one of two types: OLTP (row-based) or OLAP (columnar). OLTP is used mostly for transactional workloads whereas OLAP is mostly used for analytics.

Here goes...

Take 1: sqlite is to Postgres what duckdb is to Snowflake/BigQuery.

Take 2: In a similar way that sqlite is an in-process/memory OLTP, duckdb is an in-process/memory OLAP.

I should mention some caveats/exceptions/notes about my statements above:

- there are OLAP projects out there that use Postgres as their basis.

- HTAPs are DBs that allow you to define tables as either row-based or columnar.

- duckdb works with sqlite tables, and its SQL is heavily based on postgres SQL

- duckdb 0.9.0 is being released next week :)

- It seems duckdb is poised to become an almost ubiquitous component of the analytical stack. Apache Iceberg, dbt, ...
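 
If a concrete picture helps, here is a minimal sketch of the in-process OLTP vs. in-process OLAP comparison, using the sqlite3 stdlib module and the duckdb Python package (table and column names are made up; DuckDB can also attach SQLite files directly via its sqlite extension, which is not shown here):

    # Both engines run fully in-process; the difference is the workload they target.
    import sqlite3
    import duckdb

    # OLTP-style: row-oriented, many small reads/writes.
    oltp = sqlite3.connect(":memory:")
    oltp.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    oltp.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.99), (2, 4.50)])
    oltp.commit()
    print(oltp.execute("SELECT amount FROM orders WHERE id = 2").fetchone())

    # OLAP-style: columnar, scan-heavy aggregation over many rows.
    olap = duckdb.connect()  # in-memory by default
    olap.execute("CREATE TABLE events AS SELECT range AS id, range % 7 AS bucket FROM range(1000000)")
    print(olap.execute("SELECT bucket, count(*) FROM events GROUP BY bucket ORDER BY bucket").fetchall())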


I miss this kind of playfulness in computing.

When I was at Amazon my manager told me that several years earlier he was responsible for updating the 404 page so he scanned a picture of a cat his daughter drew and made that the body of the page. In 2009 when I started, that was still the image, but at some point someone must have noticed and replaced it with a stock photo of a dog. The asset was still called kayli-kitty.jpg, though. It’s since been changed again to rotating pictures and references to the original are gone.


In 1981, I received a computer as a birthday gift at the young age of 13. It was likely the first micro-computer in my small California town. When I took it to my Junior High science fair, people were stunned by a kid with a computer. When you turned on the computer, all you got was a flashing cursor waiting for you to program in BASIC. The next year, a successful local business person asked me if I thought that in the future most people would have computers. I said yes. They replied they didn't think that would ever happen, and they didn't see the need for it in their business, even after seeing VisiCalc. In less than 5 years, they had computers in their business for Lotus 1-2-3. And not long after, they had computers in their home. I have seen this pattern repeat many times. I have also seen many branches of computer tech die (Amiga, OS/2, Palm…). But from my point of view, "figure out what it's for" is the ultimate computer hacker's playground. And perhaps Vision Pro will be the first headset "worth criticizing" regardless of its success.

I've been an iOS dev for... 12 years now. And I love it. I'm in a position now where I'm learning Android for the first time. It's Kotlin and I don't hate it. And I'm still trying to learn back end. I can't code my way out of a box using HTML. I guess you'd say I can't code my way out of a <div>

The ecosystem is amazing. I love the tools. Xcode is a joy of an IDE to work with. There's some room to improve, but compared to everything else... it's not even a question. The community is full of people who are full of wonder and excitement.

As a web dev, you probably already know some Ruby and have some familiarity with some build tools that I didn't when I came to iOS dev. That'll help. You won't be exclusive to Apple. You'll be using CocoaPods and Gradle and Bazel and Jenkins and Travis and god only knows what else. On any given day I code in Swift, Objective-C, Ruby, Kotlin, and Groovy.

At the end of the day, remember you'll be doing this 40 to 80 hours a week. Make sure you enjoy it.


So which cloud provider offers a HARD spend limit? I just want to fund my account e.g. $20/month and never, ever, spend a cent over that. Even if my account gets hacked for bitcoin mining or whatever, I don't want to spend a cent over that.

With AWS, you can do it, via a trigger on a spend notification and a script, but the whole thing is a giant kludge. It should be a default feature. It should be a default feature for all cloud providers.

I'm literally using them less because this isn't a feature they offer. Even the free tier of AWS is too risky without hard limits.
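 
For what it's worth, the notification half of that kludge looks roughly like this with boto3 (the account ID and SNS topic ARN are placeholders, and the script that actually shuts things down when the alert fires is the part you still have to write yourself):

    # Create a $20/month cost budget that notifies an SNS topic at 100% of actual spend.
    # Note this only *notifies*; nothing stops spending until your subscriber acts on it.
    import boto3

    budgets = boto3.client("budgets")
    budgets.create_budget(
        AccountId="123456789012",  # placeholder
        Budget={
            "BudgetName": "hard-cap-20",
            "BudgetLimit": {"Amount": "20", "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 100.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{
                "SubscriptionType": "SNS",
                "Address": "arn:aws:sns:us-east-1:123456789012:billing-alerts",  # placeholder
            }],
        }],
    )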


The fact that cloud providers don't have a simple "This is how much I can afford, don't ever bill me more than that!" box on their platforms makes development a lot scarier than it really needs to be.

[Flutter Eng. Dir here]

I guess I would like to think we are betting on the web. :) All of the Flutter founders came from Web backgrounds. After years of attempting to make the Mobile web awesome, we forked Chrome and built a new thing. Now we're bringing it back to the Web.

The web is a big tent. I think there is a lot of room for innovation here. We're attempting with Flutter to push on some of the newer aspects of the Web. There are still some pieces missing from the Web to make things like Flutter really shine (e.g. a multi-line text measurement API could help get rid of a ton of code in Flutter Web). As you saw in the keynote today, we're working with Chrome to continue to improve life for developers.

I don't think Flutter will ever be the right solution for all of the Web. Certainly not today. For example, we don't even support SEO or great indexability yet (although it's long been planned for and will be coming soon).

I just want to believe we can do better. We, developers, can all push development (including the web) to be better. Hopefully Flutter will do its part.


What is really annoying is that Cloudflare Pages will strip the file extension off of your HTML pages and perform permanent redirects to those new URLs. Now if you're intending to move to a new hosting service that doesn't do that (cgi-bin), all Google search results for your site will 404.

GitHub Pages and likely others also support this format, so moving between those two services doesn't exhibit this problem. Moving to cgi-bin will.

I'd suggest Cloudflare shouldn't try to establish their scheme as the canonical URL and should instead implement the GitHub Pages behavior, but what do I know... I'm just hosting an old-fashioned blog, not a JAMstack/SPA/whatever thing
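 
If you do migrate, a quick way to see what visitors (and Googlebot) will actually get is to request both forms of a URL and look at the final status; a rough sketch, with the site and paths as placeholders:

    # Check how a host resolves extensionless vs. .html URLs.
    # urllib follows redirects, so the printed status is what a visitor ends up with.
    import urllib.request
    import urllib.error

    def final_status(url):
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status
        except urllib.error.HTTPError as e:
            return e.code

    base = "https://example.com"  # placeholder
    for path in ("/about.html", "/about"):
        print(path, final_status(base + path))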


A ‘trick’ for fancy prompts containing information is to give them the form,

    : all the stuff;
As long as ‘all the stuff’ is parseable as shell words (which you can ensure by quoting with printf '%q' if there's doubt), you can copy and paste entire lines.

The trivial copyable prompt would be

    :;

A couple of personal points that I may as well insert here.

The account I'm now using, dang, was used briefly in 2010 by someone who didn't leave an email address. I feel bad for not being able to ask them if they still want it, so I'm considering this an indefinite loan. If you're the original owner, please email me and I'll give it back to you. The reason I want the name dang is that (a) it's a version of my real name and (b) it's something you say when you make a mistake.

Second, very minor, but I need to leave this discussion for a bit while I bike to a different location. After that, I'll be in the thread... but I'm not giving up my bike ride for this :)

(Edit: it turned out from old logs that someone had made a bunch of different accounts with names like "dang" all at once, so in retrospect I was reclaiming a dormant sockpuppet!)


I’ve found an iPad Pro to be perfectly acceptable for web development.

Working Copy is an excellent git client, and combining it with Textastic proved a good workflow. Textastic allows you to add any folder, including a Working Copy repo, as a source in the file browser, so no more swapping files between apps.

Using this, I built my personal portfolio site with Jekyll while commuting on the tube.

Other apps are available: DraftCode provides a full PHP/MySQL stack, I got Django running on Pythonista, found iSH could run Bundler and Jekyll, and got a long way towards getting Rails working. Also Processing runs, well, Processing. Play.js does React and Node, and Continuous does .NET and C#.

There are also Vim and Emacs apps, though I don’t know how good they are.


Yep, standard LaTeX with the book class.

The preview linked is a little outdated so it shows the old figures, but the latest version has all the figures done in TikZ (a vector drawing package using LaTeX syntax, see https://www.overleaf.com/learn/latex/TikZ_package for examples).

There is no "tooling" per se for the book production (just run pdflatex to get the PDF, then upload to lulu.com and amazon.com, and they take care of the printing). I do have some scripts to enforce naming and notation conventions though, and there is some advanced git-rebase kung fu going on that allows me to reuse the high school prerequisites from Chapter 1 of the MATH & PHYS book for the LINEAR ALGEBRA book as well (basically when I fix typo in the master branch, I have to rebase the LA branch).

One thing that has been tremendously useful and I highly recommend for any authoring task, is using text-to-speech for proofreading: https://docs.google.com/document/d/1mApa60zJA8rgEm6T6GF0yIem...


Sometimes I have unproductive weeks. It can be hard to overcome the miasma by refactoring code or picking favourites.

I have found the easiest way to get out of the funk is to start making Todo lists of my life. Get the things out of the way that are bothering me or need to be done as a priority and then slowly chip away at it.

Productivity isn't something you can hack. There is a maximum that you can do without sacrificing the quality of your work, and your goal should be to maximize quality, not to plan your day out in five-minute intervals; that's a great way to burn out.


A few random comments:

• Obviously, this is typeset with TeX.

• Though originally Knuth created TeX for books rather than single-page articles, he's most familiar with this tool so it's unsurprising that he'd use it to just type something out. (I remember reading somewhere that Joel Spolsky, who was PM on Excel, used Excel for everything.)

• To create the PDF, where most modern TeX users might just use pdftex, he seems to have first created a DVI file with tex (see the PDF's title “huang.dvi”), then gone via dvips (version 5.98, from 2009) to convert to PostScript, then used (perhaps on another computer?) “Acrobat Distiller 19.0 (Macintosh)” to go from PS to PDF.

• If you find it different from the “typical” paper typeset with LaTeX, remember that Knuth doesn't use LaTeX; this is typeset in plain TeX. :-) Unlike LaTeX which aims to be a “document preparation system” with “logical”/“structured” (“semantic”) markup rather than visual formatting, for Knuth TeX is just a tool; typically he works with pencil and paper and uses a computer/TeX only for the final typesetting, where all he needs is to control the formatting.

• Despite being typeset with TeX which is supposed to produce beautiful results, the document may appear very poor on your computer screen (at least it did when I first viewed it on a Linux desktop; on a Mac laptop with Retina display it looks much better though somewhat “light”). But if you zoom in quite a bit, or print it, it looks great. The reason is that Knuth uses bitmap (raster) fonts, not vector fonts like the rest of the world. Once bitten by “advances” in font technology (his original motivation to create TeX & METAFONT), he now prefers to use bitmap fonts and completely specify the appearance (when printed/viewed on a sufficiently high-resolution device anyway), rather than use vector fonts where the precise rasterization is up to the PDF viewer.

• An extension of the same point: everything in his workflow is optimized for print, not onscreen rendering. For instance, the PDF title is left as “huang.dvi” (because no one can look at it when printed), the characters are not copyable, etc. (All these problems are fixable with TeX too these days.)

• Note what Knuth has done here: he's taken a published paper, understood it well, thought hard about it, and come up with (what he feels is) the “best” way to present this result. This has been his primary activity all his life, with The Art of Computer Programming, etc. Every page of TAOCP is full of results from the research literature that Knuth has often understood better than even the original authors, and presented in a great and uniform style — those who say TAOCP is hard to read or boring(!) just have to compare against the original papers to understand Knuth's achievement. He's basically “digested” the entire literature, passed it through his personal interestingness filter, and presented it an engaging style with enthusiasm to explain and share.

> when Knuth won the Kyoto Prize after TAOCP Volume 3, there was a faculty reception at Stanford. McCarthy congratulated Knuth and said, "You must have read 500 papers before writing it." Knuth answered, "Actually, it was 5,000." Ever since, I look at TAOCP and consider that each page is the witty and insightful synthesis of ten scholarly papers, with added Knuth insights and inventions.

(https://blog.computationalcomplexity.org/2011/10/john-mccart...)

• I remember a lunchtime conversation with some colleagues at work a few years ago, where the topic of the Turing Award came up. Someone mentioned that Knuth won the Turing Award for writing (3 volumes of) TAOCP, and the other person did not find it plausible, and said something like “The Turing Award is not given for writing textbooks; it's given for doing important research...” — but in fact Knuth did receive the award for writing TAOCP; writing and summarizing other people's work is his way of doing research, advancing the field by unifying many disparate ideas and extending them. When he invented the Knuth-Morris-Pratt algorithm, in his mind he was “merely” applying Cook's theorem on automata to a special case; when he invented LR parsing he was “merely” summarizing various approaches he had collected for writing his book on compilers, etc. Even his recent volumes/fascicles of TAOCP are breaking new ground (e.g. simply by trying to write about Dancing Links as well as he can, he's coming up with applications of it to min-cost exact covers, etc.).

Sorry for long comment, got carried away :-)


I think it's mostly supply and demand, timing and where you live. But those are the factors we rarely talk about.

Instead, people complain they are not compensated enough relative to the importance of their work, their intelligence, the hours put in, their title or the length of their education or the risk they take.

If people were paid according to the importance of their jobs, I would argue that plumbers, sewage workers and farmers should be paid higher than programmers. If it were about intelligence, most physicists I know are underpaid and programmers are generally compensated fairly. If it were about hours put in, many of the people producing our clothes should be paid better than us. If it were about titles, programming should be behind almost any other engineering field. If it were about length of education, programmers would generally rank much lower than they do because so many don't have any or much formal training. If it were about risk, in just about any other profession people have taken larger risks than a programmer joining a startup. So you "risk" only making $40,000 for a couple of years? Yeah, well, that's the calculated risk your English high school teacher knowingly took on for life when she decided to do a bachelor's in the arts. And yes, she could have become a developer or lawyer instead. What about the risk of work injuries faced by construction workers every day? That's real risk. Our financial risks are modest worries.

Programmers in 2016 are generally very well compensated for a low risk, indoor job with a very low barrier to entry.

