
The modern formulation of functions as sets doesn’t require type theory but is entirely congruent with Russell’s definition, just much less cumbersome. In this view, φ is a relation on D × C (ie a subset of D × C) where D and C are the domain and codomain of the function (which he calls the “range of significance of x” and the “range of significance of φ(x)” respectively). So since he’s talking about propositional functions, here C is the set {true, false} and D is all the things that are like whatever x is, ie the set {x’: x’ is of the same type as x}.

Now a relation is just a particular kind of predicate (ie it too is a set), so here we have x ~ y iff φ(x) = y, that is, the relation is the set of pairs {(x, y) in D × C : φ(x) = y}.

Notice here both the propositional function and the type are sets.
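To make the extensional view concrete, here's a minimal Python sketch (finite domain, everything represented as literal sets; the choice of "x is even" as the propositional function is just an illustration):

```python
# A propositional function "x is even" over a finite domain D,
# represented extensionally as a set of (x, φ(x)) pairs in D × C.
D = {0, 1, 2, 3, 4}
C = {True, False}

phi = {(x, x % 2 == 0) for x in D}

# Functionality check: each x in D is related to exactly one y in C.
assert all(sum(1 for (a, _) in phi if a == x) == 1 for x in D)
assert all(y in C for (_, y) in phi)

print(sorted(phi))  # [(0, True), (1, False), (2, True), (3, False), (4, True)]
```

Both the "type" (the domain D) and the propositional function (the relation phi) are plain sets here, which is the point.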


> Yes -- in set theory sets can contain themselves

Hrbacek and Jech would like a word. It is very much not the case that in standard axiomatic set theory sets can contain themselves, precisely because this leads to things like Russell’s paradox. Sets containing themselves is generally prevented by the axiom of regularity. (Every non-empty set S contains an element which is disjoint from S.) https://en.wikipedia.org/wiki/Axiom_of_regularity

> types are not sets and sets are not types

This is also not true. All types can be expressed as sets but not all sets are types in the standard definitions.


Yes. Additionally you realise the original purpose of streets from their names (eg “love lane” in the City of London near the old Guildhall is a particular favourite of mine).

https://www.openstreetmap.org/way/8431660 Happens to be near "wood lane". Make of that what you will.

I studied on "Silk Street", which is nearby. Also nearby are "Oat Street", "Bread Street", "Milk Street", "Gutter Lane", "Goldsmith Street", "Poultry" and many more whose old names relate to their function.


The "odd" location names in London are a fun plot point in Gaiman's "Neverwhere" novel, though he focuses on tube stops (Blackfriars, Shepherd's Bush, King's Cross etc).

I like those, but IME most people have no clue what the old names mean; they are just sounds associated with a place most of the time.



Imagine finding this in the US...

https://en.wikipedia.org/wiki/Gropecunt_Lane


Sounds like a good name for renaming the President Donald J. Trump Boulevard leading up to Mar-A-Lago when the current bout of totalitarianism over there ends.

> In "The Miller's Tale", Geoffrey Chaucer writes "And prively he caughte hire by the queynte" (and intimately he caught her by her crotch),[14] and the comedy Philotus (1603) mentions "put doun thy hand and graip hir cunt."

It turns out “grab her by the pussy” has surprisingly robust precedent.


Indeed.

> they can use whatever mechanism they want to, without disclosure, to produce numbers.

That would be fraud against whoever participated in this round, so no. Just because they aren't regulated doesn't mean they are literally free to do whatever they want to close the round.


> Just because they aren't regulated doesn't mean they are literally free to do whatever they want to close the round.

What makes you think their public announcements are aligned with what they give prospective investors?


The fact that in all the rounds I have been involved in all public announcements related to the round go through the legal team to check for possible material misstatements that could cause exactly this kind of problem.

> The fact that in all the rounds I have been involved in all public announcements related to the round go through the legal team

All public announcements go through the legal team, regardless of whether it's related to the round or not.


it would be fraud only if they're also telling their investors the same numbers.

Well the canonical example is Diana Athill, who had a long and distinguished career as a literary editor for people like Philip Roth, John Updike, Margaret Atwood, Jack Kerouac and others, then retired at the age of 75 and started writing her own novels and memoirs, and is considered one of the greatest writers in English of the 20th century. “After a Funeral” is, I think, the one of hers I read, and it’s amazing.

https://en.wikipedia.org/wiki/Diana_Athill


Short answer: today I think there is genuinely nothing that anyone should use oracle for, but their database used to be seriously far ahead of the competition.

A very long time ago (circa 2000) there were basically 2 databases that worked for use cases where you needed high availability and vertical scalability: Oracle and Sybase. And Oracle was really the only game in town if you actually wanted certain features like online backups and certain replication configurations.

At the time, MySQL existed and was popular for things like websites but had really hard scalability caps[1] and no replication so if you wanted HA you were forced to go to oracle pretty much. Postgres also wasn't competitive above certain sizes of tables that seem pretty modest now but felt big back then, and you used to need to shut postgres access down periodically to do backups and vacuum the tables so you couldn't use it for any sort of always-on type of use case.

Oracle also had a lot of features that now we would use other free or cloud-hosted services for like message queues.

[1] in particular if you had multiple concurrent readers they would permanently starve writers so you could get to a situation where everyone could read your data but you could never update. This was due to a priority inversion bug in how they used to lock tables.


We were building a payments system in the early 2000s and got a diktat not to use Oracle. The amount of stuff we had to build to satisfy the availability and durability requirements was so huge it consumed the first few years of work. We didn’t get to the business side of things until much later. Funny thing is we ended up giving up on MySQL and went back to Oracle after all that work. The whole thing was scrapped after a couple of years.

To get to the level of scale that oracle can handle we had to build sharding and cluster replication from scratch. It still didn’t get to even 1/10th of a single oracle node. Obviously we made a lot of poor architecture decisions as well - in hindsight, of course.


We should really be more thankful for the existence of PostgreSQL

Yes, although a lot of the most advanced PostgreSQL features that would bear comparison in this discussion are relatively recent. PostgreSQL didn't have them in the 2000s, either, and where it did, the ergonomics were much worse than they are today.

I use Patroni (https://github.com/patroni/patroni) (no affiliation to me), a really nice and reliable high-availability framework for PostgreSQL that provides automatic failover, not just active-standby nodes with manual failover.

As I understand it, you would have to script a separate watchdog process for the basic PostgreSQL, to get high availability.
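The core of such a watchdog is deciding when a failed health check means the primary is really down. Here's a sketch of that decision logic reduced to a pure function so it's testable; the function name and threshold are illustrative, and real tools like Patroni use a distributed consensus store rather than a lone script like this:

```python
def decide_failover(health_checks, threshold=3):
    """Return True once `threshold` consecutive health checks on the
    primary have failed, so a transient blip doesn't trigger promotion."""
    consecutive = 0
    for alive in health_checks:
        consecutive = 0 if alive else consecutive + 1
        if consecutive >= threshold:
            return True
    return False

print(decide_failover([True, False, True, False]))    # False: isolated blips
print(decide_failover([True, False, False, False]))   # True: sustained outage
```

The consecutive-failure counter is the bit that's easy to get wrong when you script this by hand: without it, a single dropped packet promotes the standby and you end up with a split brain.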


Still lacks several features, or you have to pay as well for parity.

Salesforce have been building an Oracle replacement based on Postgres for years, named Sayonara and as far as I know, it’s not ready yet.

https://www.theregister.com/2016/05/16/salesforce_replace_or...


There was also DB2. DB2 was (still is) an excellent database that IBM has completely fumbled.

There are three different Db2 databases.

I believe the mainframe version was first.

There is a version baked into the os/400 operating system (i series).

Then unix/windows Db2 came last, if memory serves.

https://en.wikipedia.org/wiki/IBM_Db2


I only ever worked with the Linux/Windows variant. I can’t believe I am saying this about an IBM product, but I found it to be actually rather pleasant to work with.

It’s def got 80’s hacker movie vibes; typing “Initiate log rotation sequence;” etc just screams out for a green terminal emulator.

As an IBM hobbyist user, picture something worse than VMS in 'hackerdom'. IBM's mainframe OSes are like NT/OS2 taken to the total extreme with objects, because by default you don't see files but objects which might have files... or not.

Imagine the antithesis of Emacs. That's an IBM environment with 3270 terminals and obtuse commands to learn.


You’d never say that if you’ve been on the inside of a mainframe DB2. shudders

As somebody who administers several large DB2 clusters all linked together with multiple replication modes (HADR, SQLREP) for an emergency services communication platform, I can confirm this. It's pretty damn rock solid even on Linux these days.

It's very amusing to me that you bring up IBM in a discussion of the value of Oracle.

I came here to say that if you want to understand Oracle's value, think IBM with less history.


Kind of, but there are some subtle differences in my opinion. Oracle is top-to-bottom evil, whose business model basically boils down to screwing over their clients and everyone else at every possible opportunity, comparable to the likes of McKinsey or Accenture.

IBM is a bit more nuanced. My wife grew up in an IBM town and a lot of her family and her friends’ families used to work there in the 70s and 80s. People, especially the engineers, used to take pride in their work there.


I think of IBM and GE as being cut from the same cloth back then- they treated their people well and dominated their markets.

Didn't they famously help both the Nazis and Apartheid South Africa?

Yes. Leased IBM equipment was a critical infrastructure component of the holocaust.

"At the time, MySQL existed ..."

You had to be careful with MySQL back then as constraints were syntactic sugar but not enforced. PostgreSQL was indeed much tougher to manage but more full-featured.


Really, you've always had to be careful with MySQL. It really was the PHP of RDBMSs.

The silent "SHOW WARNINGS" system, nonsense dates like Feb 31, implicit type conversions like converting strings to 0s, non-deterministic group by enabled by default, index corruption, etc.


Good enough for Wordpress.

We are circling back to PHP, aren't we?

We never left

Not just constraints, transactions were also a no-op. The MyISAM engine is still available in modern versions if you want to experience this, it's just not default anymore.

Yep, I've had to work with a MyISAM project with no transactions - it's a reasonably simple system thank goodness but a little scary all the same (and lots of boilerplate to deal with partial failures).

I love Postgres in 2026, but it really was not a viable enterprise option before 2010. MySQL had decent binlog replication starting in 2000 which made up for a lot of the horrible warts it had.

mysql was great in 2000 if you knew all the foot guns to avoid and set it up correctly (and not just what sounded correct).

Not to mention there was Percona, and both Google & Facebook contributed a number of patches that made monitoring MySQL top notch (such as finding slow running queries, unused indexes, locks etc.).

SQL Server was pretty good until they went the Oracle way with their licensing shenanigans, but even with that they were a lot cheaper than Oracle. In fact SQL server was one of the few great products that came out of MS.

SQL Server started as a source fork of Sybase.

Having done both, with much better tooling. Sybase never had anything comparable to SSMS.

I remember MMC snap-ins.

Having written a rust client for it, even their documentation is absolutely stellar. You just read how the protocol works from the PDF and implement it.

Can't say the same about Oracle.


At the risk of getting downvoted:

MS SQL is still a very good product today; I've been using it for more than 20 years in different applications.

And: The free version with max up to 50 GB (?) of DB size is a very good option for smaller environments/apps


50GB sounds like nothing, but I believe you in the quality. Most big bucks paid databases need to be high quality though, otherwise they would fail as products

I was a SQL server DBA early in my career, I've not used it in the last decade, glad to hear that it's still a great product.

My first job was a SQL DBA. 15 years and 5 companies later, this startup I'm at (which got acquired recently), still uses SQL Server. It has stood the test of time.

Actually one of the very few really good MS products at all?

Visual Studio is also great and widely adopted.

But what else do they have? I had some good experiences with Exchange years ago, but this is just my personal experience, since most people seem to hate it.

What else do they have that is considered a good/solid product that you would recommend to someone?


I disagree as I was running clustered sql server 6.5 and 7 in 1998 for hundreds of concurrent users doing millions of reads per hour on NT basically commodity boxes. Replaced it with Oracle for 100x cost and lost performance.

I think even back then you were usually better off with distributed databases running mysql or postgres over Oracle. Although people liked to think a giant Oracle db was better.


For others like me who might be skeptical to hear throughput in any metric other than seconds (and is used to large numbers in hours/days being used to inflate), I think millions per hour is actually quite high for 1998.

Assume that means 5_000_000/hour. 5M/hr => 83k/min => 1400/s. That is impressive for late 90s. I was generous on what "millions per hour" meant, but even if its 2.5M/hr that would be 700/s, which is still quite good.
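Spelling out the arithmetic above (the 5M/hr figure is the commenter's generous reading of "millions per hour", not a measured number):

```python
reads_per_hour = 5_000_000            # generous reading of "millions per hour"
per_minute = reads_per_hour / 60
per_second = reads_per_hour / 3600

print(round(per_minute))   # 83333
print(round(per_second))   # 1389
```

Even at the conservative 2.5M/hr reading, halving these numbers still lands around 700/s.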


Those are big numbers especially for non-enterprise DBs in the 90s.

MySQL's big breakthrough (not specifically talking about perf) was InnoDB becoming the default in 2010.

Just 15+ years ago Postgres had major issues with concurrency as we think about it today.

And just 10+ years ago a LOT of DB drivers weren't thread safe and had their own issues dealing with concurrency.

So nearly 30 years ago? Fuhgeddaboudit.


What do you mean by a distributed database running mysql or postgres? Even today you can't have a distributed db running (real) Postgres; it doesn't do multi-master clustering.

What about DB2? I have no experience with it, but I guess IBM specifically designed it for enterprise-scale transaction processing workloads...

DB2 was crazy good for certain use cases but very weird. For one, the pattern for DB2 efficiency was pretty much the exact opposite of every other database. Every other database would say "Normalize your tables, use BCNF, blah blah, small reference tables, special indices etc".

DB2, the pattern was "denormalize everything into one gigantic wide table". If you did that it was insanely fast for the time and could handle very large datasets.


I have not had much experience with DB2, but given that the relational data model and normalization were invented at IBM (Codd) and IBM's implementation of those concepts was DB2, DB2 performing poorly with a normalized data model seems strange.

My recollection was that DB2 did not support multi version concurrency control like Oracle and Postgres did. The result was a lot of lock contention with DB2 if you were not careful. MVCC was eventually added to DB2, but by then it was too late.


Still around, and present form for many big corps.

That sounds oddly similar to how people recommend using Dynamo. It's super hard to do coming from SQL because everything just feels wrong.

DB2 had/has excellent data compression capabilities, achieving ratios for OLTP that would only be equaled by later OLAP columnar systems.

For raw performance needs, many financial services schema were going to be denormalized anyway. Compression was a great way to claw some of the resulting inefficient storage back.


> DB2, the pattern was "denormalize everything into one gigantic wide table". If you did that it was insanely fast for the time and could handle very large datasets.

Nobody got fired for buying IBM^W NoSQL


So it was an early version of mongoDB?

Just curious, how was SQL Server perceived at the time compared to Sybase and Oracle? I know it originated as a port of Sybase.

SQL Server 2000 was well received in the segments that mattered as a challenger. Oracle was in first place running on Unix. However, it was viewed as expensive and the procurement experience was regarded as unpleasant. People wanted competition even if they didn't think SQL Server, or another alternative, could unseat Oracle for the most important stuff.

Windows was really picking up steam and there was a move to web development in the Windows-based developer space. Visual Basic and Delphi were popular but desktop development had peaked. ASP was for building your apps and SQL Server was the natural backend. SQL Server fed off this wave. It wasn't dislodging Oracle, but rather than every app being built on Oracle, more apps started to use SQL Server as the backend.

Then ASP.NET appeared on the scene and demand grew even more. It was a well-integrated combo that appealed to a lot of shops. I started my career in a global pharma and there was a split between tech budget. IT was a Windows shop for many reasons and ran as much on SQL Server as possible. R&D was Unix/Linux with Oracle. There was a real battle going on in the .NET vs Java (how about some EJB 1) and the databases followed the growth curves of both rather than competing against each other.

The SQL Slammer worm brought a lot of attention to the product. There were instances running everywhere and IT didn't expect so much adoption. Back then you had a lot more servers running inside offices than you do today. My office was much like my homelab today. This validated the need, so the patches got applied, IT got involved in the upkeep, and adoption continued to grow.

Oracle's sales folk and lawyers were horrible to deal with. I had some experience of this directly as they tried pushing Java-related products and my boss dragged me into the evals. One of my in-laws was outside counsel in the IT space doing work with enterprise-sized companies. He claims they are the worst company he's ever had to deal with and wouldn't delegate any decision-making locally which endlessly dragged out deals. They had a good product but felt they could get away with anything. Over time he saw customers run lots of taskforces to chip away Oracle usage. This accelerated with SaaS because you could eliminate the app AND Oracle in one swoop.


I remember talking to one tech leader at the time who described it as "surprisingly good, for a microsoft product" which sort of summed it up. But it had similar characteristics to sybase except more so because you had to run it on an NT server (iirc) and so there was an even harder cap on the scale of hardware you could run it on, whereas you could run oracle on really top-end sparc hardware that was way more powerful than anything that ran windows.

Depends if the director or VP liked Microsoft or not. I’ve worked at places that loved SQL Server and Microsoft server products in general. Others did not use them anywhere in their datacenter and wouldn’t have considered them. Oracle, IBM, and Microsoft were very dependent on whether the people in charge liked them. Not so much technical merits.

SQL Server was very good and used in a lot of enterprises. ime the decision between Oracle and SQL Server tended to be down to whether the IT department or company was a "Microsoft Shop" or not. There were a lot of things that came free with SQL Server licenses and it had really nice integrations with other Microsoft enterprise systems software and desktop software.

Oracle was definitely seen as the more mature and resilient (and expensive!) RDBMS in all the years I worked in that space. It also ran on Unix/Linux whereas SQL Server was windows only. Many enterprises didn't like running Microsoft servers, for lots of (usually good) reasons.


MS SQL Server was forked from Sybase in 1993. Not sure how much the code had diverged by 2000. Informix was also a contender back then.

we still have an informix db for an old early 2000s application we have to support. shit runs on centos5 lmao. it's actually not too bad, around v12 there's cdc capabilities (requires you to build your own agent service to consume the data) that made the exercise of real time replicating the app db in our edw a cakewalk. which ironically has greatly extended the lifespan of the application since no one has to query informix anymore directly.

ibms docs and help sites suck butt tho.


7 was a rewrite, from C to C++; it also went from 2k pages to 8k pages.

MS SQL Server was a cheaper, friendlier plugin replacement for Sybase in the early 2000s.

I built apps in an active-active bidirectional replication telecom Sybase environment and was deeply involved in migrating it to MS SQL server in the early 2000s. I remember a fair amount of paranoia and effort around the transition as our entire business and customers' phone calls depended on it (for "reasons") but in hindsight it went quite smoothly and there were no regrets afterwards.

Then Microsoft went and added a nice BI stack to the whole thing, which added a new dimension of value creation at a new low price point.


My experience at the time was that it was perceived as not serious enough and lacking important features. If my memory isn't very bad, I believe as late as 2000 SQL Server still only supported AFTER triggers.

In my experience in the late 90s and early 00s, besides Oracle and Sybase, DB/2 and Informix were also regarded as good. Oracle was considered the best though.


SQL Server 2000 for sure had INSTEAD OF triggers... I used them :-)

Thanks for the clarification, I guess my memory is very bad after all! :)

Do you remember if that was a recent addition?

Full disclosure: I was quite the newbie back then and most of what I "knew" about SQL Server was what the more experienced coworkers told me. This was a very IBM-biased place, so I'm not surprised they would have stuck to some old shortcoming, like people who still talk about bad MySQL defaults that have been changed for at least 10 years.

Up until that job (which was my second Actual Formal Job), all my DB experience had been with either dBase (I think III Plus or IV) or Access, so this was a whole new world for me.

It was through MS SQL Server that a colleague taught me about backups and recovery, after I ran an update in prod but forgot to include the where clause ... :)


This is a very short comment on SQL Server's code improvements (post-Sybase).

https://news.ycombinator.com/item?id=18464429

The top comment in the post is a long complaint about the code quality of the Oracle database (worth a read).


SQL Server was Sybase until (I think) version 4.9, just rebranded as Microsoft SQL Server.

Then the two versions split and I don't think that any of the Sybase source code remains in what is SQL Server today.

That said, a lot of the concepts (like a significant number of system stored procedures) and also TSQL remain almost the same, with small differences (except for system functions, which SQL Server has a lot more functionality).

When you come from the Sybase world getting a start on SQL Server is quite straight forward when it comes to handling the database.

Internals and other low level nuts and bolts differ nowadays, of course.


I wrote a connector for Sybase back in 2000, based on our existing one for MS SQL Server 7, and some things had already changed on the protocol level.

I don't remember exactly what and why, just that for some specific DML commands another kind of connection was required.


The split must have happened in the mid nineties (I think) with SQL Server 6 and Sybase 10. The next version after 4.9.

It's notable that 10 was the worst Sybase version, ever.

Source: I worked for Sybase Professional Services from 95 - 99.


SQL Server's claim to fame was GUI admin tools making life easier for many who bore DBA responsibilities only in anger.

It remains one of the most reliable Microsoft products, but few would claim that is a high bar.


TOAD was fantastic for Oracle, though. I liked it better than SQL Server’s stuff.

I can't really speak to 3rd party utilities, I think Management Studio was sufficient to keep most competition from ever starting.

Open arms, especially given its graphical tooling.

Starting with version 7.5 it was quite alright, however being Microsoft, it has been mostly used in Microsoft shops, alongside VB, MFC two tier applications, ASP, .NET, Sharepoint, Dynamics,...


I keep landing in projects with Oracle, SQL Server, DB2.

Naturally our customers aren't companies that care about HN audience.


IIRC they also had the first native (100% Java) JDBC driver, so you could run from any platform and without weird JNI locking issues when using threads.

> A very long time ago (circa 2000) there were basically 2 databases that worked for use cases where you needed high availability and vertical scalability

... and both of them were Postgres.

I used it in the late 90s for the backend for websites written in PHP3, but everyone said this was ridiculous and silly and don't you know that everyone's using the MySQL thing.

So I used this MySQL thing, but by about 2005 I'd gone back to powering my lawnmower with a 500bhp Scania V8 because I just preferred having that level of ridiculous overkill in an engine.

Nowadays? Key/Value store in RAM is probably fine for testing -> Sqlite is often Good Enough -> Ah sod it, fire Postgres into a docker container and warn the neighbours, we're going down the Scanny V8 route yet again.


Java and VirtualBox. But both are free.

Sort of. The VirtualBox Extension Pack is free for personal or educational use. It is explicitly not free for work use[0]. You can download it for free, but accepting the license to do so obligates you to pay them for it.

[0]https://www.virtualbox.org/wiki/Licensing_FAQ


I was around back then and I call Bullshit on everything you claim. There were more database options in 2000 than there were in 1996. Even before that there was FoxPro… c’mon man. Oracle’s only value was they built a NO EXIT clause into their contracts…

Oracle was the ONLY game in town if you were serious. It was like buying IBM in the 80s. Source: programmed PL/SQL and embedded SQL at the Toronto Stock Exchange in the early 90s, on SCO Unix and Oracle.

it was soooooo the only game in town that they were like NVDA now, yea you got alternatives but you really don't and hence you charge insane prices and everyone is paying up with a grin on their faces. oracle was the only game in town 100% if you were serious!

Nobody was building WoW on FoxPro, c'mon.

You'd have to assume businesses were insane/stupid to go with Oracle to the tune of billions and billions of dollars if you believe that they had zero value to sell.


The intuition here is that combinators are higher order functions which take functions and combine them together in various ways. So for a simple example "fix" is a combinator in regular maths where

Fix f = {x in the domain of f : f(x) = x}

So if f is a function or a group action or whatever, the fixed-point set of f is all points x in the domain of f such that f(x) = x, ie the points which are unchanged by f. So if f is a reflection, it's the points which sit on the axis of reflection.

The fixed-point combinator is of particular relevance to this site because it's often called the y combinator.
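Both senses of "fix" can be shown in a few lines of Python: the fixed-point set of an ordinary function over a finite domain, and the fixed-point combinator itself, in its strict-evaluation form (the Z combinator, which is the variant of Y that works in an eagerly evaluated language like Python):

```python
# Fixed-point set: the points left unchanged by f.
def fix_set(f, domain):
    return {x for x in domain if f(x) == x}

print(fix_set(lambda x: -x, range(-3, 4)))   # {0}: reflection fixes only the axis

# Z combinator: builds recursion with no self-reference in the recursive body.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```

Note the lambda passed to Z never mentions itself by name; Z supplies it with `rec`, its own fixed point, which is the whole trick.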


No one who would ask that question would be able to understand your answer.

I’m going to frame this comment.

Hehe. Sorry. Yes perhaps you’re right. Wasn’t trying to be obtuse but I didn’t express that particularly clearly.

Perfectly clearly, just for a different audience.

Your explanation was several years worth of math studies beyond what GP was asking.

$100B sounds like a lot of money to any sane human being, but for the T-Bill market it's really a drop in the ocean. The current T-Bill market cap[1] is 29 trillion give or take a little, so $100B is about 35bps of the total. It would nudge the market a little bit, but not that much.

[1] Here's my source and they should of course know https://fred.stlouisfed.org/series/MVMTD027MNFRBDAL
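For concreteness, the basis-point arithmetic (using the comment's round figures):

```python
offering = 100e9                  # $100B
market = 29e12                    # ~$29T T-Bill market cap
bps = offering / market * 1e4     # one basis point = 0.01%

print(round(bps, 1))              # 34.5
```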


Complaining about downvotes is futile and is also against hn guidelines.

I'm not complaining "about downvotes" LOL I'm explaining why some people will be replaced by LLMs because of their own "context window" length.

This is spectacular. Nice work.
