> Store of Value: Bitcoin is better than gold at what gold does. Gold is about $7tn. BTC ~$180bn
I don't know many women who feel this way, and women's interest in jewelry has historically helped establish an important floor for the value of gold. If civilization collapsed tomorrow, women would work hard to protect their gold, but not their Bitcoin (or similar). But of course, it doesn't require societal collapse to put downward pressure on Bitcoin.
Wait, is this Kyle Kingsbury? If so, where is his name? Who are the authors? I notice this text doesn't mention the authors:
"We wish to thank Jordan Halterman for his discussion of Hazelcast use cases. Luigi Dell’Aquila & Luca Garulli from OrientDB, and Denis Sukhoroslov from BagriDB, were instrumental in understanding those systems’ use of Hazelcast. Thanks also to Julia Evans, Sarah Huffman, Camille Fournier, Moishe Lettvin, Tim Kordas, André Arko, Allison Kaptur, Coda Hale, and Peter Alvaro for reading and offering comments on initial drafts. This research was performed independently by Jepsen, without compensation, and conducted in accordance with the Jepsen ethics policy."
Hahaha I suppose that's an entirely legitimate question, isn't it? Just to reassure you, Lisa, yes, this is me, and I'm the only person who works at Jepsen. I'm still exploring voices for the Jepsen brand--since at some point I might hire more researchers, more recent stuff is written with an organizational "we". Suppose I should start adding bylines. :)
completely off-topic:
have you ever thought about giving your training (https://jepsen.io/training) in an online way (MOOC, Udemy, whatever)?
I would love to learn about distributed systems from you, but today I think it would be almost impossible, since you only seem to give in-house trainings for organizations.
FWIW I think you're right to take this approach--most in-depth technical classes aren't good fits for online platforms, from a pricing perspective if nothing else. I charge a few hundred dollars a head to teach somebody in person. The Fundamentals of Cloud Architecture course that I typically do is one I could, for the most part, extend to an online course...but, given that it includes the ability to contact me, etc., I wouldn't want to charge less than $100 for it.
It's a good course; better, IMO, at what it teaches than equivalent Udemy courses. And I'm a pretty good teacher, I can be pretty engaging while talking about this stuff and it's fun. But at the race-to-the-bottom prices of the MOOC economy, it's a nonstarter. The Udemy "every class is ten bucks" disease discourages really capable, competent people from sharing what they know.
(And Packt et al. finding somebody to read some slides is not a good counterexample. I said "really capable, competent" for a reason. I was approached by one of their competitors--a bigger company than they are--to write a book on Mesos on the back of two blog posts...)
Have you explored high-level consulting in the inception phase of distributed system design? (And p.s.: thank you for your exceptional work and knowledge sharing.)
Oh yeah, I guess this is a startup website, isn't it! And Jepsen is... sort of... a startup?
For context, Jepsen started as a series of volunteer nights-and-weekends blog posts and conference talks. About three years in I bootstrapped the business as a consultancy, with one client lined up and ~15K USD in [available] savings. Scraped through the first year by dipping into credit cards, learned a lot about pricing and pipelines, and am doing pretty well now. Having money in the bank lets me do more volunteer work, like this analysis. :)
Jepsen makes money through consulting services (usually paid analysis work), training classes, speaking engagements (I charge at for-profit orgs and speak for free at nonprofit events), and Amazon Marketplace subscriptions.
The convention in serious consulting is that a report printed without author names under the company's name is backed by the whole company and speaks for the whole company, which turns out to be the case here.
I'd gladly voice my support for and validation of this research. It might not be glorious, but this fundamental synchronization validation keeps us all in check.
Or questions that are original but get marked as duplicate. This has happened to me several times. It's become my biggest frustration with StackOverflow. I might run into a problem that is a new variation on an old problem. I try to document it carefully, and I often link to old questions on StackOverflow so that people will realize that I've done my research, and my current problem is not the same as the older problem. But still, the moderators on StackOverflow don't always take the time to read all the way through a post. They seem to just read the first sentence or two and decide that the question is a repeat of an older question. I've been shocked at how aggressive they can be about marking a question as a duplicate.
Another frustration is stuff like PowerShell where it's entirely luck of the draw whether they answer your question or redirect it to SuperUser (where nobody is likely to be able to answer certain classes of question anyway).
pervycreeper is making the point that all computers are, for practical purposes, probabilistic, since you can never know whether a given result is due to a machine failure. In the normal sense of a math proof, one cannot "exhaust the problem space" using a computer. Using a computer remains an empirical approach, since there is always the chance that the computer is malfunctioning.
The low probability of machine failure, combined with error correction, means that with four layers of error correction the chance of an undetected failure (four successive failures) is essentially nil.
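To make that intuition concrete, here is a back-of-the-envelope Python sketch (not from the original comment) that simply compounds an assumed per-layer failure probability; the one-in-a-million figure is purely illustrative, not a measured rate.

```python
# Back-of-the-envelope sketch: assume each independent verification
# layer silently fails with some small probability p (the value used
# below is an illustrative assumption). An undetected error requires
# every layer to fail at once, so the combined probability is p ** layers.

def undetected_failure_probability(p: float, layers: int = 4) -> float:
    """Probability that all `layers` independent checks fail together."""
    return p ** layers

# Even a pessimistic one-in-a-million per-layer failure rate compounds
# to roughly 1e-24 across four layers.
print(undetected_failure_probability(1e-6, 4))  # ~1e-24
```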
And furthermore, you could run it on different machines and have humans verify the results. In any case I'd trust the machines more than a human mind when it comes to consistency, and we already trust human minds with proofs.
In my experience, when a computer proof disagrees with a person proof, it's more likely the person was wrong, rather than the computer (along with the computer's programmer and hardware of course).
Sure it can. As an example, d'Alembert "proved" the Fundamental Theorem of Algebra but based his work on some assumptions that made the proof incorrect. So did Euler.
Correct proofs came much later thanks to Lagrange and (especially) Gauss.
Problem is: a smart guy can catch a flaw in another smart guy's line of reasoning. Can they catch an error in 200TB of proof?
Yes, you could either put an army of people to work picking through the data, or write another software program that also picks through the data.
In either case, we are talking about two different things. The initial discussion was about probabilistic theorems: the idea of throwing a bunch of tests at a problem and getting a sense of how likely it is that the theorem is correct, versus going through the entire problem space by brute force (the contrast is sketched below).
Then somehow the discussion changed to whether people and computers can be trusted to not make mistakes when brute forcing the problem space, which is an entirely different subject from the topic of this post.
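As a toy illustration of that contrast (my own sketch, not part of the original thread), here is a trivial claim checked both ways; the claim, the bound, and the sample count are arbitrary and chosen only for illustration.

```python
# Toy contrast between exhaustive (brute-force) checking and
# probabilistic sampling of a claim over a finite space.
import random

def claim(n: int) -> bool:
    """Toy claim: n**2 + n is always even."""
    return (n * n + n) % 2 == 0

def exhaustive_check(limit: int) -> bool:
    """Brute force: verify every single case below `limit`."""
    return all(claim(n) for n in range(limit))

def sampled_check(limit: int, samples: int) -> float:
    """Probabilistic: estimate the fraction of random cases that hold."""
    hits = sum(claim(random.randrange(limit)) for _ in range(samples))
    return hits / samples

print(exhaustive_check(10**6))       # True: every case below the bound verified
print(sampled_check(10**9, 10_000))  # 1.0: strong evidence, but not a proof
```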
You won't find anything in any set of economic statistics that can offer the slightest justification for this statement:
"The JavaScript ecosystem nowadays is responsible for the global revolution the same way steam engines were responsible for the first industrial revolution"
That is ridiculous. Here is reality:
1.) Labor productivity has been in long-term decline, though computers did cause a temporary surge in the 1990s. See the charts in this post:
"Labor productivity has been in long-term decline"
If I read the graphs from the link correctly, they show that the percentage rate of productivity growth is declining. Still, it is positive, so productivity itself continues to grow. It's hard to say from the graphs, but it looks like the absolute productivity growth per year is bigger than (or at least the same as) in the '50s: an average of 2.5% growth over 60-70 years means roughly a 5x increase in the absolute level, so 4%/year growth in the '50s is the same absolute value as ~0.8% growth today.
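For what it's worth, here is the arithmetic behind that estimate as a quick Python sketch; the 2.5% average and the ~65-year span are the round numbers assumed in the comment above, not actual statistics.

```python
# Rough arithmetic behind the comment above, using the assumed round
# numbers (2.5% average growth over ~65 years), not measured data.

avg_growth = 0.025
years = 65
level_multiple = (1 + avg_growth) ** years   # ~5x the 1950s level
print(round(level_multiple, 2))              # ~4.98

# Absolute gain from 4% growth on the 1950s base (normalized to 1.0)
gain_1950s = 1.0 * 0.04
# Absolute gain from 0.8% growth on today's ~5x larger base
gain_today = level_multiple * 0.008
print(gain_1950s, round(gain_today, 3))      # 0.04 vs ~0.04
```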
Great teachers are expensive. This needs to be emphasized more. Even if there is strong demand for the dev bootcamps, the bootcamps still have to hire great teachers. The teachers are highly experienced developers, the kind of people who could be making somewhere between $120,000 and $160,000. So the schools face some very high expenses.
A comparison might be made to retail. Sometimes you'll see a store that is very busy, full of customers, and then it goes out of business. Why did it fail when it was so busy? Often the reason is that the landlord raised the rent.
High costs can kill an otherwise good business idea.
This article misses what seems like the most obvious explanation, which is that the USA economy has recovered from the Great Recession. We had a stretch from 2009 to 2014 when it was difficult to find any normal investment that paid better than 1% or 2% a year. Seeking yield, money poured into startups, because they seemed like the only thing that might be able to offer better than 20% returns (assuming a basket of startups, most of which fail and some of which return 100x).
Since 2015, the economy has returned to almost normal, so the need to invest in software startups is reduced. And this would most likely affect San Francisco the most, since it had received the most investment during the 2009-2014 era.
Now that the economy is almost back to normal, there are a lot of areas in the economy that offer possible returns. The need to focus on software startups is diminished.
You have it backwards. The plague started in east Asia, spread west along the Silk Road, arrived at the Black Sea, and was then carried to Venice, most likely in rats aboard the ships that Venetian merchants operated on the Black Sea. It arrived in Venice in 1347, and the next 36 months are the event that in the West is called the Black Death.
This was the 1346-53 epidemic.
The earlier black death epidemic, starting in 541 (https://en.wikipedia.org/wiki/Plague_of_Justinian), was a bit more influential and killed about 40% of the population. It came to Europe from Egypt.
But recent genetic studies also suggest it came from China instead, same as the later plagues.
On a similar note, recently a lot of research has looked at the extent to which utility functions fail to explore a space, and how the introduction of novelty is a crucial new strategy:
"Novelty search is a recent algorithm geared toward exploring search spaces without regard to objectives. When the presence of constraints divides a search space into feasible space and infeasible space, interesting implications arise regarding how novelty search explores such spaces. "