The crypto model of single entries with "from" and "to" fields works well for transfers. For example, if you move $100 from your checking to your savings account, something like the following will capture it perfectly.
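```json
{
  "from": "Checking",
  "to": "Savings",
  "amount": 100
}
```

This is basically what a crypto ledger does.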
But the main reason why we need double-entry accounting is that not all accounting entries are transfers. For example, if we are logging a sale, cash increases by $100 and revenue increases by $100. What's the "from" here? Revenue isn't an account that money is taken out of; it is the "source" of the cash increase. So something like the following doesn't capture the true semantics of the transaction.
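```json
{
  "from": "Revenue",
  "to": "Cash",
  "amount": 100
}
```

Instead, in accounting, the above transaction is captured as the following.

```json
{
  "transaction": "Sale",
  "entries": [
    { "account": "Cash", "debit": 100, "credit": null },
    { "account": "Revenue", "debit": null, "credit": 100 }
  ]
}
```

It gets worse with other entries like: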
- Depreciation: Nothing moves. You're recognizing that a truck is worth less than before and that this consumed value is an expense.
- Accruals: Recording revenue you earned but haven't been paid for yet. No cash moved anywhere.
The limitation of ledgers with "from" and "to" is that they assume conservation of value (something moves from A to B). But accounting tracks value creation, destruction, and transformation, not just movement. Double-entry handles these without forcing a transfer metaphor onto non-transfer events.
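Depreciation, for instance, just records both sides of the consumed value without implying any transfer. A rough sketch in the same shape as the sale entry above (the account names and amount are illustrative):

```json
{
  "transaction": "Depreciation",
  "entries": [
    { "account": "Depreciation Expense", "debit": 100, "credit": null },
    { "account": "Accumulated Depreciation", "debit": null, "credit": 100 }
  ]
}
```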
But you don't need double entry to register increases or decreases of value; you could just as well use single-entry accounting and add or remove money from a single account, using transactions that are not transfers.
In the past this was problematic because you lost the error-checking that redundancy provides, but nowadays you can trust a computer to do all the checking with database transactions (which are more complex checks than double entry, though they don't need to be exposed to the business domain). Any tracking that you'd want to do with double-entry accounting could be done by creating single-entry accounts with similar meaning, and registering each transaction once, if you know that you can trust the transaction to be recorded correctly.
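As a rough sketch of what that could look like (field and account names are just illustrative, not from any particular system), each event becomes one record against one account, with the "other side" reduced to a category:

```json
[
  { "account": "Cash", "change": 100, "category": "Sales revenue" },
  { "account": "Truck", "change": -100, "category": "Depreciation" }
]
```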
I think this depends on what you consider to be the fundamental trait of double-entry accounting: the error-checking or the explanatory power.
It is true that by enforcing that value movements have both a source and a target, we make it possible to add a useful checksum to the system. But I believe this is a side benefit of the more fundamental trait of requiring all value flows to record both sides, in order to capture the cause of each effect.
I agree with your general perspective though: technology has afforded us new and different tools, and thus we should be open to new data models for accounting. I don't agree with other commenters that we should tread lightly in trying to decipher another field, nor do I agree with the view that the field of Accounting would have found a better way by now if there were one. Accountants are rarely, if ever, experts in CS or software engineering; likewise software developers rarely have depth in accounting.
Source: just my opinions. I've been running an accounting software startup for 5 years.
You should stop presuming that you know more about what’s needed than an entire field of business practice and study. There is actual theory behind accounting. Accountants understand that there can be different views on the same underlying data, and the system they continue to choose to use has a lot of benefits. You seem to be really stuck on the idea that the mental model accounting as a profession finds useful doesn’t seem useful to you. But it doesn’t need to make sense to you.
I'm not presuming I know anything about accounting. I know a great deal about data recording and management though, and my analysis is done from that perspective.
I explicitly recognized that the practice of accounting as a discipline keeps using traditional concepts that are culturally adequate for its purpose. What my posts are pointing out is that the original reason for double-entry records, which was having redundant data checks, is no longer a technical need in order to guarantee consistency because computers are already doing it automatically. From the pure data management perspective I'm analysing, that's undeniable.
The most obvious consequence of this analysis is that traditional bookkeeping is no longer the only viable way of tracking accountability; new tools open the possibility of exploring alternative methods.
Compare it to music notation: people keep proposing new ways to write music scores, some of them computer assisted, and though none of them is going to replace the traditional way any time soon, there are places where alternative methods may prove useful, such as guitar tablature, or piano-roll sheets for digital samplers. The same could be true for accounting (and in fact some people in this thread have pointed at different ways to build accounting software, such as the Resources, Events, Agents model).
The older one was the original and whoever created it did most of the work and should be blessed by the heavens.
The newer one took the older one, and replaced the screenshots of code with markdown equivalents so they could be rendered by Anki while saving memory. You can see this in the difference in the number of images between the two decks. This is the one I'd download and use.
I love Andy Matuschak! His podcast with Dwarkesh was so enlightening and his blog is great as well. He's one of those people whose work I go back and read every couple of months and I always learn something new
One of the reasons I'm still in the Apple world is how well my phone works with my computer. It's little things like being able to turn my phone into a scanner from my desktop, and synchronised DnD. All these little things add up to an experience where I can ignore the tech and get on with what I'm trying to do.
If KDE connect can give me something like that then I'm jumping, even if it means switching to Android (actually, that'd be a bonus because the iPhone cameras _suck_ in comparison).
My stack atm is neovim, python/R, an EC2 instance and postgres (sometimes SQL Server), with some use of arrow and duckdb. For queries on less than a few hundred GB this stack does great: fast, familiar, the EC2 is running 24/7 so it's there when I need it and can easily schedule overnight jobs, and no time is wasted waiting for it to boot.
You mentioned earlier how long it would take to acquire a new cluster in Databricks, but you are comparing it here to something that's always on. In a much larger environment, your setup is not really practical to have a lot of people collaborating.
Note that Databricks SQL Serverless these days can be provisioned in a few seconds.
> you are comparing it here to something that's always on
That's the point. Our org was told databricks would solve problems we just didn't have. Serverful has some wonderful advantages: simplicity, (ironically) lower cost (compared to something running just 3-4 hours a day but costing 10x), familiarity, reliability. Serverless also has advantages, but only if it runs smoothly, doesn't take an eternity to boot, isn't prohibitively expensive, and has little friction before using it - databricks meets 0/4 of those criteria, with the additional downside of restrictive SQL due to the spark backend, adding unnecessary refactoring/complexity to queries.
> your setup is not really practical to have a lot of people collaborating
Hard disagree. Our methods are simple and time-tested. We use git to share code (100x improvement on databricks' version of git). We share data in a few ways, the most common are by creating a table in a database or in S3. It doesn't have to be a whole lot more complicated.
I totally understand if Databricks doesn't fit your use cases.
But you are doing a disingenuous comparison here, because one can keep a "serverful" cluster up without shutting it down, and in that case you'd never need to wait for anything to boot up. If you shut down your EC2 instances, they will also take time to boot up. Alternatively, you can use the (relatively new) serverless offering from them that gets you compute resources in seconds.
To ensure I'm not speaking incorrectly (as I was going from memory), I grep'ed my several years of databricks notes. Oh boy.. the memories came flooding back!
We had 8 data engineers onboarding the org to databricks, and it took 2 solid years before they got to working on serverless (and only because users complained about the user unfriendliness of 'nodes', and managers about cost). But then, there were problems. A common pattern through my grep of slack convos is "I'm having this esoteric error where X doesn't work on serverless databricks, can you help".. a bunch of back and forth (sometimes over days) and screenshots, followed by "oh, unfortunately, serverless doesn't support X".
Another interesting note is someone compared serverless databricks to bigquery, and bigquery was 3x faster without the databricks-specific cruft (all bigquery needs is an authenticated user and a sql query).
Databricks isn't useless. It's just a swiss army knife that doesn't do anything well, except sales, and may improve the workflows for the least advanced data analysts/scientists at the expense of everyone else.
This matches my experiences as well. Databricks is great if 1. your data is actually big (processing 10s/100s of terabytes daily), and 2. you don't care about money.
This is a good place to use cumulants. Instead of working with joint characteristic functions, which gets messy, it lets you isolate the effects of correlation into a separate term. The only limitation is that this doesn't work if the moment doesn't exist.
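As a rough illustration of that separate term (my example, not the parent's): for the sum of two possibly correlated variables, the second cumulant splits as

\[ \kappa_2(X + Y) = \kappa_2(X) + \kappa_2(Y) + 2\,\kappa(X, Y), \]

where \( \kappa_2 \) is the variance and the joint cumulant \( \kappa(X, Y) = \mathrm{Cov}(X, Y) \) carries all of the correlation; if \(X\) and \(Y\) are independent it vanishes and the cumulants simply add.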
I've also tried out a few solutions for this problem, but I inevitably went back to using slack or signal. The reason is that these messenger apps are just always open, and it's far easier to paste stuff into them than to open an app dedicated to this purpose.
It's 80% clickbaity bullshit and 20% mediocre gadget reviews. Long gone are the days when Engadget was a relevant source of tech journalism. On a tangentially related note, does anyone remember Engadget's apple-only sister site TUAW? That was a great outlet for a good long while until Engadget started to shit the bed a couple years ago.
And bodies. Had to move my laptop dock to the other side of my desk when I switched headphones from a pair that had the transmitter in the left ear to the right.
Yeah that is perhaps the most problematic aspect of Bluetooth. The radio interface is designed with the presumption of the signal bouncing off nearby surfaces. Thus it is sensitive to line of sight issues.
Mind you, I have also been able to connect to a USB dongle through solid wood walls, so mileage may vary.
Never mind that you can get two classes of radios, on top of the LE stuff, with each class having different transmission strengths.
Both use the 2.4 GHz spectrum, so that makes sense - especially for bad microwaves.
I had an old microwave that completely tanked my 2.4 GHz WiFi back in the day (very poorly shielded).
```json { "from": "Checking", "to": "Savings", "amount": 100 } ```
This is basically what a crypto ledger does.
But the main reason why we need double entry accounting is that not all accounting entries are transfers. For example, is we are logging a sales, cash increases by $100, and revenue increases by $100. What's the "from" here? Revenue isn't an account where account is taken from, it is the "source" of the cash increase. So something like the following doesn't capture the true semantics of the transaction.
```json { "from": "Revenue", "to": "Cash", "amount": 100 } ```
Instead, in accounting, the above transaction is captured as the following.
```json { "transaction": "Sale", "entries": [ { "account": "Cash", "debit": 100, "credit": null }, { "account": "Revenue", "debit": null, "credit": 100 } ] } ```
It gets worse with other entries like:
- Depreciation: Nothing moves. You're recognizing that a truck is worth less than before and that this consumed value is an expense. - Accruals: Recording revenue you earned but haven't been paid for yet. No cash moved anywhere.
The limitation of ledgers with "from" and "to" is that it assumes conservation of value (something moves from A to B). But accounting tracks value creation, destruction, and transformation, not just movement. Double-entry handles these without forcing a transfer metaphor onto non-transfer events.