Frighteningly Ambitious Startup Ideas (paulgraham.com)
1032 points by anateus on March 10, 2012 | hide | past | favorite | 430 comments


Man finds a black kind of rock that burns; discovers that you can get a lot of this rock if you dig deeper, but deep mines have water. In order to successfully mine this rock, man devises a steam powered engine (neatly enough powered by this same rock) to pump out the water. No, not the steam engines you're familiar with. This is the Newcomen Steam Engine: http://en.wikipedia.org/wiki/Newcomen_engine

The Newcomen Engine has a fatal flaw: it cools the steam for the return stroke, losing energy to the latent heat of evaporation each time. James Watt discovers the latent heat of evaporation, and realizes that separating the condenser from the piston would improve efficiency. So let's go build some railroads, right? Not so fast. It would still be another 30 years (100 years from the invention of the Newcomen Engine) before railroads and ferry boats would be regularly powered by reciprocating steam engines.

What's the moral? For 100 years, vast leaps in technology came one after the other. In the process, the Laws of Thermodynamics were discovered and described. Many learned men stood around patting each other on the back at how successful, how inventive they were...at digging a black rock called "coal" from the ground.

But most people don't dig rock from the ground. Most people do travel from point A to point B on a fairly regular basis. The world changed when 100 years of technology left the mine shaft and the factory, and got people where they were going just a bit faster.

I'm convinced that computers are still at the Newcomen/Watt transition. We have a ways to go before the world truly changes.


This perspective is interesting when applied to communications.

Parley. Courier. Pigeon. Mail. Telegraph. Telephone. Transatlantic Comms. Fax. Early Internet. Email. Text Messaging. Live Chat. VoIP. Twitter. Facebook Messages. Video Chat.

It's a naïve summary of communications history, but look at the persistence of some of the early players. Many have not been replaced to this day - snail mail, POTS, fax, email, chat, VoIP, video chat - there are fundamental reasons to stick with certain technologies (fax, POTS, FB and Twitter excepted). There is disruption to be had, but there is still massive value in some of the oldest methods, with some evolutionary shifts.

The services need to adapt, and incumbents do restrict progress, but the 'email killer' notion is not well conceived. Most people don't use email as a 'todo' - that's an extension, not a replacement. This is why Rapportive has a market, but is not _the_ market.


An idea I've been musing on: is there a fundamental set of problems of humanity from which all economic activity is derived? For example:

"Shrink the world": couriers, seafarers, caravans, riders, roadbuilders, railroads, telegraphs, automobiles, steamships, dockworkers, truck drivers, aviation, telephones, email, social networking, videoconferencing.

"Organize labor": lords, finance, education, recruiting, HR, management, information systems, law, accounting.

"Keep us safe": militia, pikemen, shamans, legions, samurai, knights, musketeers, standing armies, chemists, doctors & nurses, the military/industrial complex.

"Food and shelter": self-explanatory.



"Money for primarily good feelings, not stuff": charity, religion, theater, tv, films, gambling, music, story books, fashion, holidays, tourism.


"Find stuff to burn" is a pretty big chunk too.


"Control more energy than your body can produce"


The "sex & war" category seems to be a great catalyst for innovation. These days, "sex" of course means "Internet porn".


Don't know if you'll get this,

but it's because we all want to grow - to keep the gains from growing, and ensure they're invested. We intrinsically don't want to see kids starve; why?

That answer is the perspective, I think.


Not sure if this answers your question, but the fundamental problem from which all economic activity is derived is scarcity.


> there is still massive value in some of the oldest methods, with some evolutionary shifts.

Yes, and newer protocols often carry emulation layers for older protocols, so we have things like POTS running on top of TCP/IP, when just several years ago most of us still had TCP/IP running on top of POTS via dialup modems.

The other day I needed my insurance company to send a fax to my bank (banking regulations mandate the use of faxes rather than electronic formats to share documents). The insurance agent did it by hitting a few keys on her computer. A piece of paper didn't leave her office, but it arrived on cue at the bank's fax machine nonetheless.

Sidenote: One of the interesting side effects of the internet disrupting traditional retail is that the losses in traditional lettermail delivery are being offset by massive gains in package delivery. The USPS has hobbled itself by doubling down with huge new investments in the part of its business that is shrinking instead of pivoting into the obvious growth opportunity.


Some would argue Email is little more than USPS over IP.

I think the most interesting aspect of modern communications - accidentally in the 90s, deliberately in the post-twitter-era - is the simple addressability of people.

There was a tradition of letter writing for centuries (visit the British Library), but it required some level of introduction to connect. The academic roots of email broke some communication boundaries (to the time-detriment of prominent academics), and Twitter has opened the same addressability to celebrities and field-leaders (with a more voluntary twist I would say).


Yes, but this addressability of people also means that the sender must have some value to provide.

Being able to self-create a platform of value that you can offer to people you wish to network with is crucial (I created a magazine to accomplish this objective).


> Yes, but this addressability of people also means that the sender must have some value to provide.

If only that were the whole of it. The sender must have some value to provide in the eyes of the recipient. But the recipient will actually have to look at the message in order to determine if this is the case or not. That decision alone makes many messages that were sent with value '0' a net negative to the recipient.

Hence all the spam. If the 'providing of value' would be a thing we could determine in advance then the low barrier would not be an issue.

Effectively a spam filter determines that the value of a message is '0' to the intended recipient to avoid them becoming negatives.


The breakthrough in your comm analogy will come with the translation of thought to word.

We will wear a device which will be able to read our brainwaves and determine which word we are thinking, à la dictation, then send that to the recipient.

This will be wired-telepathy - the recipient will get a message which they can receive any way they choose; visually (email - they read it) audio playback, or thought-injection. It is played back on the nerves and is "heard" in their head as a thought. (evolutionary results to be sure)

As a lifelong cyberpunk enthusiast who, at 37 years old, has been using computers daily since I was 8, I have real concerns about the mental health of the yet-to-be digital world.

I.e., the ADHD that will result from direct cerebral access to information 24/7.

What will be the impact on the (generally) serially wired brain to vastly parallel inputs?

I suspect massive upheaval on the social level. There will always be adopters of immersion, just as there will be the future Amish who will eschew all things digital, but the median social reaction will be a result more of our true, and unknown, innate biology that we won't even be aware of until this happens.


> We will wear a device which will be able to read our brainwaves and determine which word we are thinking, à la dictation

Since this thread is presumably being read by entrepreneurs making bets on the future of technology, it needs to be said that this will never happen with current imaging technology. "Brainwaves" implies EEG, and the research in this field strongly suggests that it is information-theoretically impossible to extract this information from the electrical activity on the scalp.

For this vision to become reality we need a new imaging device that has both the temporal resolution of an EEG, and a spatial resolution that probably needs to be better than an MRI.

In summary: Certain things are impossible. I can say with certainty that no algorithmic improvement will allow this to work using an EEG. I don't know whether it is physically possible to create a non invasive imaging device that allows such a signal to be detected reliably, but it certainly does not exist today, and it seems like a leap of faith to assume that it definitely will exist at some point in the future.


I can key Morse code at 40 wpm with two muscles. With one hand I can chord at 120 wpm. On a stenowriter I can transcribe about as quickly as most people can read - 250 wpm.

I've invested an extraordinary amount of effort into improving the speed at which I can interface with a computer; I think the practical limit is about 300 baud, half-duplex.

Of course, we're trying to establish an interface with a bafflingly complex lump of grey meat, but are we really daunted by the idea of outpacing a V.21 modem?
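For what it's worth, the back-of-envelope arithmetic behind that 300-baud estimate checks out; a minimal sketch (my own assumptions: 1 "word" = 5 characters, 8 bits per character - a real 8-N-1 serial frame would be 10 bits per character):

```python
# Convert a fast transcription speed into a serial line rate.
# Assumptions (mine, not the commenter's): 5 chars/word, 8 bits/char.
wpm = 250                          # stenowriter transcription speed
chars_per_sec = wpm * 5 / 60       # ~20.8 characters per second
bits_per_sec = chars_per_sec * 8   # ~167 bit/s

print(round(bits_per_sec))         # 167, comfortably under 300 baud
```

So even the fastest human text output sits well below a V.21 modem's capacity, which is the point being made.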


Your judgment that present technology is inadequate is based on the assumption that computers need to learn to read human thoughts.

What about the inverse, that the humans learn how to think in a way that a computer understands? That will be much easier, as humans learn much better than computers, and also much safer - I will have complete control over which of my thoughts the computer can detect and interpret.


The human learning to adapt to the machine has been the way EEG-based brain computer interfaces have been made for a couple of decades. Using machine learning to adapt the machine to the human is a much more recent development.

It is possible today to make EEG-controlled devices. They typically differentiate between a small number of real or imagined movements in the user. This is awesome, because it can allow severely paralyzed people to communicate, control a wheelchair, etc. Nevertheless, the algorithms used to do this are perfectly useless when it comes to distinguishing whatever words the user is internally vocalizing.


The keyboard is not very good at determining which words I'm internally vocalizing either, yet it still seems to work. The point I'm trying to convey is that maybe we can learn to transmit words using some form of brain reader that measures something other than vocalization.


Doesn't have to be "brainwaves". The brain has a few outputs that can be hijacked (e.g. a computer with a neural interface that appears to be another muscle in the body). I don't know whether the bandwidth of these outputs is sufficient for interesting communication; we've evolved to take in far more data than we produce.

Edit: It seems that more direct methods of neural interface are already plausible: http://www.technologyreview.com/biomedicine/37873/


It doesn't actually need to be noninvasive. If an invasive procedure is useful enough and can be made safe, eventually it will be ubiquitous.


The problem with invasive is upgrading.


Asher's Gridlinked supposed a limited set of society [operatives & wealthy] who could manage this fulltime connection, and even then it was perceived as unhealthy.

Wikipedia has undoubtedly changed how our generation views knowledge, but it's still a pull-technology. Outbound messaging will still be a push-technology (nobody wants to compose an email of their stream-of-consciousness, and brains are poorly wired to retain full structure in mental 'RAM')

Wetware doesn't add significant differences to the existing protocols - merely a more rapid input mechanism than checking your phone. Assuming contact is voluntary, people will not opt for the PubSub model for comms. If you choose to use it for trivia, caveat emptor.


Sure, but I was not saying that there will be compulsory receipt of info... though, given human nature and the already prevalent propensity for people to be overly responsive to the flood of alerts, I see a negative impact on consciousness.

It will be very interesting to say the least.

Personally, I am already overly susceptible to the karma endorphin boost from Reddit, Quora and HN. I was thinking about this just the other day; I was originally against karma being hidden on posts, but now I like the fact that I am less enticed toward bias based on that number.

We already continually scan for karma upticks on all our primary sites. This is bad...


The killer app for email is idiot-proof cryptographic signatures and encryption.

It's baffling to me that a squiggle on a bit of paper is more trustworthy than a properly implemented cryptographic signature.
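For anyone unfamiliar with the mechanics being proposed, here is a toy RSA sketch with textbook-sized primes (deliberately tiny and NOT secure; a real "idiot-proof" client would hide something like Ed25519 or GPG behind one button). The primes, exponent, and message are illustrative choices of mine:

```python
import hashlib

# Toy RSA signature with tiny textbook primes -- purely illustrative.
p, q = 61, 53
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (requires Python 3.8+)

def digest(msg: bytes) -> int:
    # Hash the message, reduced into the tiny modulus for this toy.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    return pow(digest(msg), d, n)          # private-key operation

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, n) == digest(msg)   # public-key operation

sig = sign(b"please pay invoice #42")
print(verify(b"please pay invoice #42", sig))  # True
```

The whole point of the comment stands: nothing in this mechanism is hard for software, yet no mainstream mail client makes it as effortless as scribbling a squiggle.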


You are contradicting yourself. On one hand, you state that a "killer app", i.e. something that everyone would use, is cryptography. On the other hand, you state that the vast majority of people trust a squiggle on paper rather than actual crypto. If the average Joe doesn't care about crypto, why would a crypto email be interesting?

You'd need to find a way to make crypto interesting. Most email doesn't have crypto, so if you sell something that does crypto well, then you can corner the newly created crypto market.

A peer-to-peer system for sharing music/films that does good strong crypto (and faster than tor) would do the job.


There is ResoMail which does it, and it seems it's not very popular.


I don't understand why you put "Facebook Messages" in a line of great inventions. Isn't it just another "Live Chat"? Am I missing something?


Facebook (and Twitter to some extent) solve one of the biggest problems of email, which is the concept of a verified sender.

I add it to the paradigm shifts as it resolves (in its own [large] namespace) a longstanding problem with email.

I add it to the 'transient' list as its solution is purely driven by network effects, which leaves it vulnerable to sideways market dissolution by the next player.


It's a bit of a stretch to say they solved it. I'm not on FB, so FB doesn't matter to me. On the other hand, I'm part of my company's Active Directory, so AD solves this problem for me at work. But neither of them guarantees that the email came from the particular person and not from a dog.


No, Facebook Messages and Facebook Chat are two different things.

http://www.facebook.com/about/messages/


Yes, anyone who says that "email isn't a messaging protocol" has probably been smoking too many of those funny jazz cigarettes.


Before the railroads, people didn't do all that much travelling from A to B. Unless they were very wealthy or had some external pressure, like the need to find work or religious persecution, they mostly stayed at A. Which makes the invention of railroads even more impressive in my eyes: it created demand rather than addressing it.


What moved from A to B was coal, ore, wheat, manufactures, but mostly by rivers. You can see this on a good map of northeast U.S. -- ask yourself why Pennsylvania is essentially rural in its center, with large cities at both ends, or why New York State is essentially rural 50 miles from the Hudson. Inland water transport was so important that the major capital projects of the early 19th century were canals.

The initial advantage of railroads over barges wasn't reach but speed. By moving goods faster, merchants were able to sell and receive payment faster, no small thing when credit was scarce, uncertain and expensive. I expect the initial rail lines served the same markets as the canals (that's where the business was), then began to extend their reach with spur lines.

There was probably some rail demand creation as the roads extended into the West -- farmland near a railroad was no different than that 20 miles away in anything but railroad proximity. But that was later, after the technology's dynamics were well understood and the players well capitalized.


Yes. The conscious revolution is just beginning, our species is very young, and the stars are still far away.


Yeah but until we invest as much time, money, and effort into defense against weapons and customized microorganisms as we do into their creation, we're running the risk of this young species (and maybe a lot of the other ones) disappearing very, very soon.


Oh please. Humanity is pretty much the definition of a species that should, in all probability, have died out on a multitude of occasions over the years. I am not just talking about the world wars and nuclear bombs, but things like the plagues, the Spanish flu and all the rest of the diseases; the host of predators for which we are no match at all (a standard monkey is stronger than all but the best-trained humans); the environments we inhabited (today that is not much of an issue) when we left Africa (why do you think we left? Probably because stronger tribes were pushing us further and further away, and the alternative was death on the shore), when we left the jungle, when we went to Siberia (again almost certainly pushed by stronger tribes) and when we went to Europe (same reason), where we had to fight the Neanderthals. Basically humanity has been on the winning side of terrible odds since we started out. Don't forget that we nearly didn't make it in Africa (http://news.nationalgeographic.com/news/2008/04/080424-human...).

No wonder we love an underdog -- there is no greater underdog than humanity.


There is an amusing passage in "The Black Swan" about how a turkey would estimate its own life expectancy as Thanksgiving draws closer...

Don't confuse the fact that we've been lucky not to go extinct yet with evidence that we won't, especially when you being around to make that inference is conditional on said luck.


Amusingly that's not a bad argument that we're living in a simulation. Disregarding any particular human, humanity itself has done remarkably well against the odds in the same sense that story-book characters do remarkably well against the odds and that, at least when they're on a winning trend, player-controlled populations (whether in Populous, C&C, etc.) do well against the video game's odds or against other players.


Isn't that mostly survival bias at work? No matter what the odds were, the winners write history. If we had died out in one of the steps along the way, we wouldn't be here writing about it.

Once humanity spread around the world, the odds of a natural catastrophe killing all of us at the same time became very low (limited to planetary-scale disasters).

Only since the industrial revolution have we been on a more dangerous path toward central points of failure. On a geological time scale, that's only a very short time. But we've been testing our luck really hard.


Or perhaps it's an argument for multiple realities. There may be uncountable parallel timelines where humanity died out at every possible point during our history.

That said I get the simulation theory feeling pretty solid sometimes, to the point where grappling with it is one of the major themes of the graphic novel I'm in the middle of...


> No wonder we love an underdog -- there is no greater underdog than humanity.

Tell that to the Dodo, the Quagga, the Javan Tiger and the Thylacine. And many others besides.


Weak and winning is "underdog", weak and losing is "loser".


By that definition, every loser that hasn't yet lost is a winner. There is no way to tell the difference except hindsight. I bet the dinosaurs told themselves that they were totally different than all those other weak-ass species that had joined the annals of history 76 Myr ago...


I second this. We can either destroy ourselves or become a more vibrant, multicultural civilization. See physicist Michio Kaku on this:

http://www.youtube.com/watch?v=7NPC47qMJVg


I think this is a good thread in which to recommend Andy Kessler's book "How We Got Here", which can be downloaded here (http://akessler.blogs.com/andy_kessler/2005/04/hwgh.html). It's a parallel history of economic and technological evolution from the genesis of the steam engine onward.


I think you are right.

I don't agree with PG when he talks about Apple. "None of them are run by product visionaries" is simply not true. Steve Jobs had great respect for companies like Sony because they are product-visionary companies. Sony, IBM, Philips, and a lot of others - all companies that gave us great products we are now using every day. But it took years to get there.

E Ink, for example, was created in 1997 and is still in development. But we all know e-readers. A great example of a visionary product, imho.


Sorry, Sony was a great engineering company, worthy of being respected, until they sold out and became a content creation company. Look at how they hobbled the MiniDisc years ago. Sony deserves absolutely no respect these days.


While I can totally see where you are coming from, I think it is not fair to say that Sony does not deserve any respect whatsoever anymore. They were innovators at first, too, and have come a long way since.

I do not agree with Sony's content creation affairs; however, they continue to innovate with good products (e.g. look at their digital cameras, camera sensors, etc.).

More often than not, there is no clear distinction between good and bad, certainly less so with amazingly huge companies such as Sony, Apple, Exxon, etc.


The key is 'gave'. Past tense.

I think a lot more start-up people would fear (or even care about) Microsoft if Bill Gates was still running it. The same goes for the rest of the companies you mention.


Similar, but different story. In the 800s, looking for an immortality elixir, man discovers a solid black chemical explosive. In time (around 1132), it finds itself serving as an early propellant in ranged missiles. Later it finds uses in other applications: mining, firearms, entertainment, bombs and myriad others.

But the use as missile propellant remains, driving the search for similar but more powerful propellants for these missiles (to give them more kinetic impact, longer range, and other properties).

Progress was slow until the late 19th and early 20th centuries, when liquid-fueled rockets were invented, along with an entire host of chemical explosives with vastly different properties.

Progress was rapid, and soon missiles could travel hundreds of miles, and had enough extra capacity to deliver even more powerful explosives to their target.

However, accuracy was poor at best. Various types of complex control systems were designed and built into the missiles. As man realized that missiles could be scaled up and lift payloads up into orbit and beyond, bringing the payloads down in the desired location became even more important.

Inertial systems, radio control, celestial navigation and even attempts at manned guidance!

However, navigating is a general problem, not just useful for missiles. Ships, people, surveyors and others all need to know where on the earth they are. In the 60s, with the advent of orbital missiles and satellites, a system of satellites was placed in orbit. Using a complicated collection of quartz oscillators, early computers and various radio receiving equipment, one could (after a number of minutes collecting data and feeding it into dedicated guidance computers) determine where they were on Earth to an accuracy of a few hundred meters.

Enter the nuclear age, which gave us ultra-precise clocks; combined with transistors, then integrated circuits, every segment of the navigation problem was improved. By 1978, these components were small and reliable enough that missiles pushed constellation after constellation of satellites into orbit. Ground receivers were small enough to fit into the bodies of other missiles, enabling them to navigate to targets with accuracy in the single meters, with constant positional fixes along the flight path.

Realizing that the navigation computers and radios were now small enough to fit on a missile, it also meant they could be fit onto ships, large aircraft, and large trucks.

Later improvements to atomic timekeeping, computers, radios and other pieces meant that the receivers could be made man carry-able and fit into backpacks, small vehicles and on and on. Accuracy was improved to inches, and missiles could suddenly be dropped into selected air vents in specific buildings.

Except suddenly nobody cared as much about moving missiles, they found that knowing where they and their stuff was in the world was far more interesting and useful. Further improvements in computing meant that the navigational radios and computers could fit into a handheld device, then be integrated with high quality geospatial data, heuristic path finding algorithms and suddenly we have satnav in our cars.

Reusing the same navigation tech, we find we can improve the geospatial data considerably, improving navigation further. Fix these devices to the ground and we can measure plate tectonics for the first time, track fleets of vehicles in real time (saving millions of dollars in fuel), and compute phasors in power lines (improving power delivery systems); ultra-precise atomic clocks mean precise time synchronization (to within ±10 ns).

Further improvements in miniaturization put navigation into handheld phones, and suddenly we know what restaurants are nearby. Tie it to a database collecting reviews and we know if a restaurant is good. Tie it to the previously developed navigation system and we can even get walking directions there.

In other words, most people don't need to guide a missile, but they do need to find a good place to eat. And when the technology developed for busting a bunker left the avionics systems it got people and their stuff where they wanted to go faster and with less confusion than before.

Today it's hard to imagine a time when we didn't know the exact location of just about everything.


As of 2012, start-ups with over 1 billion in growth potential appear to revolve around services or products offered to a large portion of society or to companies, which can generate 10 to 100 USD per person (employee or individual) per year: search (ads), email, education, showbiz, healthcare. Here are some additions: cheaper fuel, integrated software-as-a-service (kill Salesforce or Google Apps), faster travel, space travel, home robots/AI, and better science.

In some cases there is a need for someone to open a new market. If Apple or Google had built a home robot with a few basic functions, wouldn't a lot of people buy it, opening a new market?

More ambitious things to kill:

- Kill the "house"; since the days of the community cave a house has been human shelter #1; nowadays there might be better alternatives to an owned house.

- Kill the "state" or "a better citizenship"; provide the same things the state provides, leaner and cheaper.

- Kill the capital investment; create an automatic investor which selects virtually existing start-ups based on instant financial metrics.

- Kill the "company";

- Kill "democracy"; a better voting system

- Kill placental reproduction or sexual reproduction; nine months is too long.


I've been thinking about the 'Kill the state' idea for a while. It just makes sense. I have more in common with the average person in Germany than I do with the average person in Kansas or Alabama (not necessarily politically, but culturally). With online communities and instantaneous worldwide communication, I am also more closely in touch with my global counterparts than a significant portion of people in my own country.

In short, nation boundaries are slowly becoming outdated. We will probably have to wait for actual teleportation before this fully comes to pass :)


>It just makes sense. I have more in common with the average person in Germany than I do with the average person in Kansas or Alabama

Unless you socialize, talk to your family and take your entertainment in German, I struggle to imagine what you mean.


No, this works for me as well.

Not Germany specifically, but I left the US for France. I don't mesh with the culture 100%, but... more so than I did in the US, except for little localized pockets there. Just for starters, religion plays a far smaller/quieter role here; no one cares that I'm an atheist. When I come back to the US, it's just so obvious and... loud, and everywhere. It grates incredibly.

I certainly talk with my family frequently, though just between parents & siblings we're already split across both US coasts, the Netherlands, and (me in) France. My in-laws are in Malaysia & the US; we also talk regularly, and try to see everyone in person at least once a year.

But I never, never forget about state borders. I don't have that option; they don't let me. My wife is Malaysian; I'm American (as is our daughter), we reside in France and my employer is in the UK. We have to file tax returns in both US & France, and have wasted weeks of our lives doing paperwork and waiting in line in embassies and other government offices (sometimes forced to stay in hotels to be near an embassy in the morning... we don't live near one!) sorting out all of the incredibly stupid details.

My wife needs a visa for flights to the US, or Canada (and was once ejected from Canada because she had a US-bound flight with a Canadian stopover, and hadn't realized that mattered). Yup, it really does.

The hoops you have to jump through to emigrate to a country -- like she did to the US, and we both did to France -- are horrible, with uncertain outcomes, and often poorly documented.

I'll stop the rant, but I'd really, really love any progress away from current states.


I think he means he prefers the package of services provided by the German gov more than the package provided by the Kansas/Alabama gov.


It's not about language or political views. I was referring to culture and values.


Can you expand on this? Because I would find it impossible to separate German politics from culture/values, and I think many, many Anglophones would find the Deutschsprachige love of orderliness maddening; easily 30% even in a country like Denmark where practically everyone who is capable of working speaks nigh on flawless English. I realise the Danes aren't German but they're more ordentlich than the Dutch and it's that that would piss so many off, so fast.


Language tends to be very strongly intertwined with culture as the main means for conveying said culture. So I'm curious in what way you feel culturally closer to Germany. (disclaimer: I have spent about 80% of my life in Austria and about 18% in Britain, speak both languages but identify much more strongly with British culture)


But the English language is mostly a Germanic language, and the British people have mostly Germanic DNA. Contrasting Austria and Britain is not that big of a divide. A better contrast would be between Britain/Germany/Austria and one of the Latin countries, or a Slavic country, etc.

When the US was founded, there were essentially two separate civilizations within the country's borders. This was true until one annihilated the other during the US civil war, but the South was never assimilated until the last few decades. This means a major cultural divide still exists.

I would say that the North took more of the Anglo-Saxon Protestant values than the South: hard-working, industrious, religious but not overtly so.


You seem to either be defining "culture" very narrowly in terms of work ethic, or you haven't spent any significant amount of time in the countries mentioned. Or maybe both.

Even with the narrow definition of culture, I can't say I agree. Austrian society is rather conservative. There may be progressive and industrious individuals, but certainly not the country as a whole. I haven't lived in Germany, but I'd certainly say British people are more liberally minded and individualistic than Austrians.

Despite their disdain for politicians, Austrians will typically expect the state to solve their problems, and accept paying vast amounts in taxes. An example of this is higher education: the majority expects it to be completely free for students, very much at the expense of quality.

The civil service is enormous, inefficient and somewhat corrupt, but it seems to be accepted as a requirement for stability and welfare as a huge provider of jobs. Contrast this to the British fears of a "nanny state" and general grumbling about taxation.

Here's my theory of how the differences came about:

1. You're right to point out a certain north/south gradient across Europe, but you're forgetting that Austria and Bavaria were historically the centre of the counter-reformation, unlike central and northern Germany. So, for a long time, very much catholic like southwestern Europe; even anti-protestant. Religion obviously has very much taken a back seat in recent decades, but culture (there's that word again) changes much more slowly.

2. The Austro-Hungarian Empire used to stretch deep into the Balkans and Eastern Europe. Although the German language dominates today, Slavic and Hungarian surnames and people's appearances still hint at a past where Austria, and Vienna as the capital in particular, was much more heterogeneous. Even the use of the German language is not quite as straightforward as it seems: the names for many foods are different from those used in Germany and have Slavic, Hungarian and Italian origins. (and I would personally consider food to be a part of a society's culture)

The English-German connection you point out is, by the way, quite far in the past. The Norman invasion of England injected a lot of Latin vocabulary into the English language, and that was in the 11th century.

I'm not trying to make a moral judgment by the way; I have family in and from both countries. I can't predict which country will be more prosperous in 50 or 100 years' time; Austrians are probably the happier ones at the moment. I'll readily admit to having my own strong opinions on whether it will stay that way, but that's not really the point here.

For what it's worth, I likewise can't really comment on the differences in culture across the United States from personal experience. I'd be surprised, though, if the differences were as big as across Europe. (I'm leaving aside insular communities such as the Amish who deliberately do not mix with general society; you get those everywhere, and their populations tend to be small)


Could you elaborate on the idea of "Kill the house"? It sounds fascinating.


If modern human life is a drama (consumption), the house is the central location where most of it occurs. Most people work all their lives to pay for a house. You feel safe and comfortable in your house, you sleep there, and you keep all the stuff you bought, such as computers, there. Most of us are spending our Sunday at home enjoying the Internet.

We have had a lot of variations on the "house": the woods > shared caves within a community > single house > apartments (flats), with hotels and cottages as complements. So the question is, can current tech start-ups disrupt the concept of the house? It is a broad topic, but here are some random things which come to mind without going too deep:

- make virtual windows where you and selected Facebook friends see a common virtual place, or real scenery from places where you install cameras (for example, beautiful spots on earth), using high-quality displays and cameras. (Google's very fast internet may have a use here)

- kill the walls by replacing them with always-on displays and cameras connected to a remote counterpart, so that two distant houses feel virtually like "one". (a Raspberry Pi or cheaper hardware may help here)

- rethink home automation (iRobot does this in some sense), or simply build a branded robot which can fetch a sandwich. A central web service, or a social web service, could sit at the center.

- make location-independent apartments (flats) which are stackable, moveable, expandable.

- a managed kitchen or fridge, based on your dietary requirements, run by a web service.


I've always thought of creating a dining room where one wall would be a projected screen giving you the illusion that you're dining in a different part of the world.

It would actually be even cooler if you could get a live stream from restaurants for their "virtual table" dining guests.


Ballistic missiles have had internal inertial guidance systems since the early 60s, fully independent of any satellites. Navigational satellites would be shot down first in an all-out nuclear war.


I believe the UGM-27 was the first to be targeted via something like GPS, but satellite navigation was only used for determining the launch site, and I think that stayed the norm until the late 80s, when GPS receivers could be shrunk and hardened for high-g operation. So yes, most ballistic missiles maintained an inertial guidance system for many years after GPS was launched. But it was the requirement to know the accurate position of mobile launch sites (in order to preprogram the missiles' flight computers with an appropriate inertial target solution) that motivated the system in the first place.


If you're interested in this sort of thing, you should watch the BBC series "Connections".

http://www.youtube.com/watch?v=OcSxL8GUn-g


These ideas, and the idea that they are frighteningly ambitious, clearly come from the personal experience of living in northern California and dealing with tech startups most of your time.

These are largely first-world problems. Here are some ambitious ideas:

- distributed power generation that's cheap enough and renewable enough that people in rural parts of sub-Saharan Africa don't have brownouts anymore.

- synthetic food generation à la Star Trek

- desalination that is cheap enough for a farmer in Mozambique to do himself

There are more, lots more. People outside the valley bubble have real problems.


Solving first-world problems gets you first-world ROI. Y Combinator is an investment firm. I'm not judging; I don't have a dog in this fight. But I'm pretty sure that's why you don't see things like stopping brownouts in rural sub-Saharan Africa on this list. There's no first-world money in it.


It's a trade-off. There's a lot more money in the first world. But there are a lot more users in the third world. That's why the third world is called the majority world.


Check out the Google Solve for X videos. One presentation is about a clever new idea for cheap desalination: http://www.youtube.com/watch?feature=player_embedded&v=R...!

Another is about synthetic food...the presenter said his company could feed the entire world from an area the size of Rhode Island: http://www.youtube.com/watch?feature=player_embedded&v=r...!


I think the best ideas improve first-world issues while fixing third-world problems. Energy generation, for example: if you managed to figure out how to generate power or propulsion using a cheap, small physical device and a renewable resource, you would solve both first-world issues and third-world problems.

Education needs to catch up with the world in the first world, but if you do that in an async manner online, then you can at the same time solve the problem of education in Africa.


distributed power generation that's cheap enough and renewable enough so people in rural parts of sub-Saharan Africa don't have brownouts anymore.

desalination that is cheap enough for a farmer in Mozambique to do himself

I am sorry, but you cannot fix broken states with technology. Want to help Mozambique and sub-Saharan people? Find a way to turn their governments into well-functioning ones. This is at once infinitely more difficult and also the only thing which will really solve the common developing-world problems.


The University problem is potentially a "3rd world" problem, especially inasmuch as you interpret it as a learning/teaching problem.

Non-consumption is traditionally a good place to start, and I would suggest that the high school and junior university level is the most disruptive place to begin. Non-consumption of senior-high-school to junior-university education is in great abundance in poor countries.

There is also probably more of an incentive for potential students to play along in poor countries. A 9th-10th year dropout is more likely to be in that position because of access or some other problem that technological innovation is good at dealing with. More importantly, the ROI on those 2-4 years of education is probably much higher.

If you were to go after job skills/training as the point of attack (as opposed to general education), poor countries are also a great place to start. For a lot of skills there is demand at the bottom: bookkeeping, graphic design, programming, etc. Bringing a person making <$1-$2 per hour to a point where they can command $3+ per hour is fundamentally doable. That's a big incentive.


What if we made people in sub-Saharan Africa as rich as Europeans and Americans? Then the solution to "brownouts" there is the solution to brownouts here: better infrastructure.


If you can solve that problem, you will have done the world an amazing service, outperforming even Norman Borlaug, and the idea is certainly ambitious in its scope.

One question though: how would you go about doing it?


Let's study history and see how Europe and America got wealthy and try to do that.


Which lands and populations do you suggest Africans raid for resources and labour?


Oh yeah, forgot about that...


One big deal in Africa is logistics. The railways of the colonial era were designed for quickly moving raw materials to ports. And the new Chinese-built roads are no different.

The end result is that trucking food between cities and villages not that far away from each other is crazy slow, dangerous, and profitable. Not much different from trans-Pacific trade some 100-150 years ago.

If somebody would come up with a transport solution that wouldn't require massive infrastructure investments (for which there is no money), and which would be safer and faster than current trucks, Africa could start changing very quickly. And there would probably be a decent profit in that.

Where to start? Could locally produced, solar-powered small airships do the trick?


There are some interesting innovations in infrastructure-less transport:

1. Low-cost UAVs have been used in Africa to transport blood samples and meds between hospitals and cities. Matternet is a project focused on scaling a UAV transport network in Africa, currently probably for small weights, though capacity would probably increase in the future.

But the UAV industry is improving at a rapid pace because of military innovations, so it's a good place to be.

There's also a lot of open source innovation there, radically reducing the costs.

2. There's some work being done on airships (and plane/airship combos) for the army, and on airships for commercial transport in areas with no infrastructure (oil fields, etc.). These might also fit Africa.

3. There's a company working on a low-cost jeep suited to muddy African roads. It can also be cheaply supported by existing African repair networks.


Then resources can be raided in remote areas without doing anything for the people who live there. With efficient enough machinery you won't even need to train local people to take part in the operation from start to finish.

Having a major logistics route run through an area does not make that area prosperous. It does, however, make the owner of said trade route prosperous. What you're arguing for is an extension of the status quo for Africa and many other areas of the world with extensive natural wealth. Nothing gets locally produced if someone elsewhere can produce it more cheaply in a free market.


Airships? How did Europe go from crappy roads to good roads? Not with airships.


Europe had a governance model where infrastructure investments didn't end up on Swiss bank accounts.

The point here is to figure out a way to fix logistics in a way that individual entrepreneurs can do it.


The problem with search is that not only is Google getting worse, but I've also mostly outgrown it, in that it isn't sophisticated enough to answer pretty much any scientific question I would want to ask.

- No way to search for a scientific question and get a summary of the current scientific consensus or viewpoints on specific issues

- It's really hard to access academic journal articles online.

- Even when you can access journal articles, it's hard to know which ones to look in to answer your question. Sometimes it's hard to even know which field(s) your question falls under.

- Even if you vaguely know which field your question falls under, you don't necessarily know any of the vocabulary used by that field.

- No way to search by dependent and independent variables, confounding variables, etc.

- No way to sort articles by the quality of their methodology, the quality of the journal they were published in, the quality of the researchers, etc.

I know this isn't a product that more than 1% of the population would use, but if someone built it then maybe there are other things it could be used for.


You're talking about highly-focused, or micro, search. Yeah, Google doesn't seem to do that very well. They have a few segments, like book search and image search, but it's not specific enough.

One thing I search for sometimes is code examples in a particular language. Search for something in C on Google and you end up with lots of stuff for C++, C#, etc. GitHub, with its large repository of public code, lets you filter by programming language, and is much better than Google in these cases.
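As a concrete illustration: GitHub's search syntax has a `language:` qualifier for exactly this kind of filtering. Here's a minimal sketch (the helper function and its name are hypothetical; only the `github.com/search` URL shape and the `language:` qualifier come from GitHub itself):

```python
from urllib.parse import urlencode

def github_code_search_url(query, language=None):
    """Build a GitHub code-search URL.

    The language: qualifier restricts results to one language, so a
    search for C code doesn't drown in C++ and C# results.
    """
    q = query if language is None else f"{query} language:{language}"
    return "https://github.com/search?" + urlencode({"q": q, "type": "code"})

# github_code_search_url("qsort", "c")
# -> https://github.com/search?q=qsort+language%3Ac&type=code
```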

Bing copies Google. DuckDuckGo returns different things than Google, but is otherwise a copy. There's no micro search engine for specific topics and sub-topics, outside of site-specific search. Market opportunity...


We're trying to do something like this, i.e., let the user build more complex queries (than a free-text search) for their specialization without having them write actual SQL ;) We've started off with biotech - http://www.distilbio.com


www.dialog.com/products/guide/results/science_advanced.shtml

Dialog have been doing this over the internet for a long time. It's generalised search of the web, I feel, that is too expensive to tackle as yet.


> It's really hard to access academic journal articles online.

We obviously need something better than the status quo here, but the status quo isn't as bad as it seems if you know how it works.

Quick hints: email the article's author. They'll probably be more than happy to send you the article and a quick summary worded for a lay audience (not to mention talking your ear off about their more recent work...). They're not worried about you not paying the publishers; they want to spread their work around.

Another trick is knowing people in academia. Maybe you have a friend who's doing graduate work, or lecturing. You could ask your old lecturers if you went to university and if the paper you're after is in their field. There are also communities like /r/scholar on Reddit, though I imagine some people are against that sort of thing.


Actually, the use case you described seems very ripe for disruption if you ask me, because it amounts to hacking your way to the solution, whereas we could use a better one.


> The problem with search is that not only is Google getting worse, but I've also mostly outgrown it, in that it isn't sophisticated to answer pretty much any scientific question I would want to ask.

This simply means that Google doesn't work very well for you, and I mean no offense, but what you are searching for is a very, very small minority of search queries. Google still serves a crushing majority of people very well.

You are making the same mistake that Paul is making throughout his essay: he wants startups to build products for him and not for regular users. Seriously, email is actually a todo list? Come on, now.


Google holds the elite back, holds science back, holds our collective knowledge back. Even if it is a minority of queries, these queries are more important than your typical query.


We're surely working all the time to make search "more sophisticated"; many special-case queries are already smart (from "2+2" to geo, stocks, etc. - you don't need to go to special sites like calculator, maps and finance). And we surely have plans to go way beyond, but generally, this is Hard Stuff(TM). For things like journal articles, the information is often behind paywalls like ACM's, and even when it's not, specialized engines like CiteSeer are hard to beat because the info has very special organization needs, like collecting and measuring citations. On your most advanced requirements, I think only one of Asimov's positronic robots would be that smart ;) unless there's a specific effort to curate this data... which requires tons of human labor, so ads served to the very small number of people who need this service will not make it viable. It's the same problem we have with patents (see http://arstechnica.com/tech-policy/news/2012/03/opinion-the-...).


Listing all the reasons it's hard is why the area is ripe for someone else to do it.


No; these reasons show why the area is ripe for anyone to do it. The only fallacy in Paul Graham's comment (or in possible interpretations of it) is that Google has a weak spot there, so it's an opportunity for somebody else to beat Google. Trust me, we have a ton of resources dedicated to improvements in search and we have lots of cool things coming down the pipe, although maybe not at the velocity one could dream of (e.g. something like intelligent research on scientific papers is firmly in the sci-fi realm today, at least for fully automated computing).

BTW, Paul's article has a big #fail when he mentions code search as a possible idea; dude, we do have that and it's amazing (but unfortunately we recently shut it down; not sure if this will eventually resurface as part of some other product).

All that said, of course some company can always make a dedicated effort in a specialized niche that we are overlooking and beat Google Search in that niche; an excellent example is Wolfram Alpha. Still, not a big deal; to really "beat Google" you need a new general-purpose, full-Web search engine that beats Google's. Not impossible either, but the barrier to entry is simply colossal, and it amazes me that people don't realize that and dream that it just takes some cool new idea or clever new algorithm--we are not in 1998 anymore, when Google, still working out of a garage, started to beat the then-top engines like Yahoo! and AltaVista.


> The problem with search is that not only is Google getting worse,

Google, today, is after all a stockholder-owned company that aims to generate PROFITS, and to maximize them. It should be kind of obvious that at some point they (as a company) will try to maximize cash coming in and minimize cash going out. Therefore, their product [search] is geared toward the people asking the most common questions/search queries: what's playing in theatres, what car to buy, best LCD TV, pharmacy near me, etc. That's probably 90% of the search queries they're getting. I say, as long as they work in this zone and make sure simple queries return the best results, they are winning - winning the biggest chunk of the market and smiles on shareholders' faces.


> - It's really hard to access academic journal articles online.

That's because of the business model around funded research. Academic research funding is driven by a limited number of funding agencies being bombarded by huge numbers of proposals. One of the key metrics they use is how many peer-reviewed journals have accepted the author's work. Those journals make money by charging access fees and by being semi-trusted gatekeepers. Journals WANT it to be hard to access them online, since they view the Internet publishing paradigm as a threat.

People have been trying to disrupt that business model since the mid 90s.


I use it more as a way to shortcut sites. For example, if I want to Wikipedia "pi", instead of typing www.wikipedia.org in the address bar and then typing "pi" in the site's search bar, I enter it in Google and find the link. Firefox's Awesome Bar is gradually taking over, as I can favorite things and "search" for them just by typing a couple of letters, but I still use Google for anything I haven't favorited.


I've been doing that for years in Opera. Just right-click a search bar, give it a few letters to identify it, and off I go with 'w pi' to search Wikipedia. I do not miss having to go through Google first just to use a site's search, or going to wikipedia.org or wherever first.


I like Firefox's Keyword Search bookmarks. The Awesome Bar becomes a "web command line". Some example search bookmarks I've configured:

* "w pi" to search Wikipedia articles * "d pi" to search Dictionary.com definitions * "am pi" to search Amazon products * "map pie" to search Google Maps locations * "g pi" to search Google

and many others. :)
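For anyone who hasn't set these up: a keyword bookmark is an ordinary bookmark whose URL contains %s where the query goes; typing the keyword plus a query in the Awesome Bar substitutes the query into the URL. A sketch of what such entries might look like (the exact search URLs are assumptions; copy them from each site's own search form):

```
w    https://en.wikipedia.org/wiki/Special:Search?search=%s
d    https://www.dictionary.com/browse/%s
am   https://www.amazon.com/s?k=%s
g    https://www.google.com/search?q=%s
```

So "w pi" expands to the Wikipedia search URL with "pi" in place of %s.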


I'm honestly surprised that not every hacker does this. The vast majority of popular browsers support keyword searching, either out of the box or via a plugin.


This is something I've been thinking about seriously: building an "academic-level" search engine. I have the IR/NLP background. If anyone is interested in discussing/collaborating, ping me!


Does it bother anyone that "frighteningly ambitious" begins with search and email? Seriously? Is this the pinnacle of our contribution to mankind - building search engines and to-do lists?

Where's the lunar base? The flying car? The personal robot? The cyborg? Meh, maybe I'm just getting old and grumpy. (To be fair I did like the other ones).


Seriously?

Search is another word for artificial intelligence.

Email means a universal way for people to communicate.

Lunar bases & flying cars aren't ambitious, they are just expensive.

Personal robots are here now in vertical spaces[1] and there are at least 200,000 cyborgs walking around now[2]. I think both these areas are worth working in, but I think you underestimate the ambition of a word like search.

[1] http://www.irobot.com/

[2] http://en.wikipedia.org/wiki/Cochlear_implant


OK, but "cleaning" is yet another word for artificial intelligence. I think you underestimate the power of economics. Can you imagine how many people are busy cleaning things?

I find it amazing that the difference between tasks like washing clothes and cleaning toilets is so small for humans and so huge for machines. We've had washing machines forever, but we're still cleaning our toilets manually.

And cleaning is just one of a large number of tasks that require similar kinds of intelligence. If robotics is able to close that gap, the world changes more profoundly and more quickly than ever before.

All of a sudden, most of the jobs performed by untrained people will disappear without replacement. The current thinking is that tasks that get automated are replaced by "higher level" tasks, but I think that's not actually what happened.

Automated tasks have largely been replaced by other tasks that are equally simple for humans but more difficult for machines. As a result, most people are still doing work that anyone can pick up in a week.

If that gap closes, we're in for a pretty difficult but ultimately wonderful transition. People have been crying wolf out of fear of this transition for hundreds of years. This time it's for real. Even if humans find new things that aren't yet automated, the transition will be too fast for at least a generation.


It seems like a lot of people disagree with you, but I agree with this viewpoint. The idea that the Luddite fallacy will be a fallacy forever is a fallacy in itself.

I've said this before, but I'll keep repeating my viewpoint: at some point, we will have automated too many of the "useful" things for there to be enough work in the private sector for everyone. There are only so many cars, houses, gadgets and foods humanity needs, and the efficiency of producing these things just keeps growing. I'm not saying that we are headed for a dystopia where no one has a regular job - quite the opposite - but during a transition period a lot of people will be without work. Arguably this is part of the reason joblessness is so common in the US today. We will have to find some sort of tax system that makes it possible for those who can't participate in the elite to also live their lives, and hopefully contribute through some creative pursuit.

A lot of people will find this transition difficult, because a lot of people simply have no direction in life and just keep doing rote work. When the rote work gradually disappears, they will have to find something sufficiently engaging to do with their lives.


I've heard it described as the post-employment era. Scary consequences if you let your imagination run with the possibilities.

Some interesting reading:

http://www.wfs.org/content/postemployment-economy

http://www.impactlab.net/2011/10/11/post-employment-era-huma...


The really interesting question is whether this will further exacerbate inequalities or whether society will necessarily have to transition to a more "socialist" model if work no longer needs to be done.


"...at that time the economy of the United States will be going down and the next boat people will be Americans leaving America looking for work abroad."

Jacques Attali in his 1990 book "Millennium: Winners and Losers in the Coming World Order"

Still quite relevant, though I'd alter the quote to say "virtual boat people", since jobs are turning out to be anywhere you have an internet connection.


There are only so many cars, houses, gadgets and foods humanity needs.

Have you watched: http://www.youtube.com/watch?v=audakxABYUc ? (Rory Sutherland's TED talk 'Life lessons from an Ad man').

Particularly the note about luxury trains.


"We will have to find some sort of tax system that makes it possible for those who can't participate in the elite to also live their lives"

Rather than a tax system that gives them pretty much no incentive to get a job (if you were essentially paid to do whatever you wanted, would you want to get a job?), it would probably be better to have government-enforced jobs (i.e.: if you want money from the government, you need to work for it until you find a regular job; otherwise, you get nothing).


As I've said before, Norway has a de facto system that already works like this. Anyone who's deemed unfit to get a well-paying job gets a fixed monthly payment from the government.

What if you want more money than you can get through this system? Then you have to do work that pays. Money and status work relative to your peers, so there's an incentive to earn more if your peers are richer than you. I don't see that this is a problem if the society is wealthy enough.


Your line of reasoning is way off base. You're trying to shoehorn a scarcity-based economy into a post-scarcity society. The idea of "contributing" (the way it's currently thought of) will have to go away completely. Private ownership is a social construct. In a post-scarcity world our ideas of private ownership will have to change. Society will simply not allow the owner of the Acme robot company to literally own everything (because he has a monopoly on labor). A more communal society will necessarily spring up. No one will stand for a monstrously imbalanced society.


Unless the robots have guns...


you want money from the government, you need to work for them until you find a regular job

So you're saying the government should hire all the unemployed? That sounds like a surefire way to make even more people unemployed in sectors that have to compete with the government's army of cheap laborers.


Why should we incentivise people to work if nothing needs to be done?


Eventually, far out into the future, the trends might actually reverse.

Fundamentally, robots are made out of expensive stuff: metal, rare earth minerals. That material can be recycled, but recycling is expensive and not 100% efficient. Humans, on the other hand, are pretty cheap; we run on carbon, oxygen and water.

In the future, it might make sense for robots to do the work they're best suited for (e.g. very hard, dangerous work in human-unfriendly conditions, such as spaceship repairs in outer space). However, unless we discover a way of making robots dirt-cheap and 100% recyclable, it will probably be humans cleaning the ship from the inside, not robots.


Who knows what robots will be made of far in the future, and remember that humans have to be fed for at least a decade before they can do much of anything. Also, Earth isn't losing any material as far as I know, though I admittedly know very little about physics and chemistry. Can metals degrade into some lower, unrecoverable state the way energy can?


If you have an unlimited supply of energy, you can pretty much recover whatever. The only elements hopelessly lost are those that involve nuclear reactions.


Even then it's just a question of energy. You can do a lot with a particle accelerator.


Well, while you can in principle do something, an accelerator is a pretty blunt instrument. You're unlikely to have practical success putting krypton, barium and a few neutrons together into a uranium nucleus even if it's theoretically possible; the phase space for the reaction just isn't there. Chemistry has a lot more tricks for synthesizing molecules than just shooting them at each other.


Fundamentally, robots can be made out of less stuff than us, by virtue of being a mostly-empty lattice material at all levels, and won't need recycling as repairs happen on smaller scales until the repaired item is as close to new as you want.


Please. Stop. Bringing that old saw back while claiming that 'this time it is for real'.

Humans can do wonderful things when the alternative is to starve or lose their habitat. It is not different this time, it will not be different the next time, nor any time after that. It won't happen until we have a perfect AI which can do all jobs. Then, and only then, will humanity take a permanent vacation.


I'm not talking permanent. What is different this time is this:

a) The transition will happen more quickly because the kind of intelligence you need to clean a room is very broadly applicable. It affects almost all manual tasks at the same time.

b) Contrary to earlier transitions, there will not be many tasks left that untrained people can perform and that are similar enough to the old ones. A farmer can go work in a factory. Most cleaners will not become artists and entertainers within their lifetime.

I don't think humanity will ever take a permanent vacation. There will always be something that humans want from other humans and there will always be something to exchange as payment.


The same arguments were raised the last time something like this happened.

And while it is true that a farmer can go to work in a factory there is very, very little overlap between these two professions (could a factory worker go tend a farm and survive? Unlikely) -- just as there is little overlap between cleaning toilets and being a cashier in a shop.

Oh and cashiers are not going to be replaced by the same AI that can clean a bathroom, do the dishes and wipe the floor.


Good comment. This is definitely a huge issue that is going to need serious thought in the coming years.

We're going to need to do some more serious big thinking about more than "work", which is too narrow a frame now. We need to figure out how we're going to "occupy" people in the transition from a post-industrial/service/information-technology society to a roboticized, post-scarcity, arts-and-leisure society. If handled poorly, "social unrest", mass protests, and outright violence may become a regular part of the landscape, what with millions of always-idle, impoverished people just sitting on the sidelines, ignored. How long could this last? One hundred years, perhaps? That's a really long time to have constant social upheaval.

Sci-fi has dystopias full of rebellious robots, human vs. robot warfare, grappling with what it means to be sentient, etc., but has startlingly little that deals with a much more realistic question: what does a society where human labor is being made redundant look like in terms of day-to-day human behavior? Over the coming years, the consequences of mass automation and even "stupid" AI are going to come to the fore.

Ok, so what are things we're going to need to do? Encouraging the arts and music, competitive games, and various leisure activities is probably a given (and not much encouragement is likely to even be necessary, since many will do these things on their own given the ability to do so.) The other piece of the puzzle is the one that's really going to need a great deal of thought: how do we support them, and just as importantly, how do we run an economy where 40, 50, 60 percent of the population does not work (and thus, under the current system, has no income)?

The most obvious answer is to move towards a massive expansion of what we currently call the "welfare" state, along with planned population policies (hard, but probably necessary.) The big difference? We'll be not just supporting jobless people, but the economy itself (we can call it econofare or something.) What kind of economy can you have with few consumers? Not much of one. It will require a completely different perspective, one where it's not a bunch of unfortunate (or lazy, depending on your perspective) jobless people, but a bunch of people we essentially pay to be consumers. Items which are scarce will need special handling for sure (hopefully things like food and shelter can be made superabundant sooner rather than later), but for everything else, it'll be all about simply keeping the flow of money going. The main policy prescription here is the tax-free guaranteed income, like they have in certain Scandinavian countries (someone else in these comments mentioned this.)

That all said, the two types of jobs I would put on "won't be automated anytime soon" list:

1) Jobs that exist to make others feel powerful/superior. Waiters, house-servants, massage therapists. They do have jobs that could certainly be automated, but you don't get the same feeling of "lording over" others with robots that you do with people. Robots /will/ do these jobs for many, perhaps even most people on a day-to-day basis, but there will likely still be people who pay for this as an "experience."

2) Interdisciplinary generalists (i.e., modern Jacks-of-all-trades.) This job isn't a single "job" at all, but a collection of jobs requiring the understanding and ability to synthesize knowledge from different (and sometimes disparate) areas. We will continue to automate parts of specific types of cross-discipline "tasks", but we won't be automating the big-picture viewers.


They're expensive because they're inefficient. To accomplish them efficiently is ambitious and elegant without a doubt.

A startup that can chip away at what makes these things so inefficient is going in the right direction. Looking for inefficiencies, such as with present day email, is a good way to find a direction.

Economics is still at its heart the study of the problem of scarcity. Economies, and all economic activity, attempt to alleviate or manage that problem.


This is a nitpick, but much economic activity these days has nothing to do with alleviating scarcity and everything to do with introducing artificial scarcity where it didn't previously exist. See, e.g., intellectual property.


Economics != economic activity (at least in the context you are using it).

TheCowboy is right: Economics is still at its heart the study of the problem of scarcity.


I said it was a nitpick, but the post to which I responded did say "all economic activity".

Does it even matter? It matters if it means our fully automated jobless future is penury and starvation instead of the post-scarcity Culture we're looking forward to.


Yes, I agree with that. SpaceX etc are doing well at the "chip away" strategy.

I also agree that a cheap lunar base would be a wonderfully ambitious project!


That was just a selection of ambitious ideas. I didn't mean to imply it was a complete list; I never imagined anyone would think I meant that.

I almost added a section on robots, but the talk was already long enough and I figured 7 was enough to illustrate my point that ambitious startup ideas are frightening.


I'm not sure where you got the idea that my point was that the list wasn't exhaustive enough; my point was that you left out robotics and kept search and email in.

Robotics, actually, is an area where something huge seems right over the horizon. I'd certainly bet on the next game changer coming from robotics before any equally revolutionary activity comes from search or email.

I don't necessarily disagree with what you do say about the two, I just don't consider advances in these areas terribly important in the grand scheme of things.


You, diego, et al. seem to have difficulty coming to terms with the author. It is clear from the article that by "frighteningly ambitious" the author means problems people currently have which seem insane to take on. They are problems people have because the author has indicated that he has them or has discussed with others who have them. Search and email are insane to take on because if you succeed with search, you are decimating a company with market cap in the hundreds of billions; succeeding in email means disrupting one of the original Internet protocols which virtually everyone uses. Thus, these are frighteningly ambitious.

You instead see "frighteningly ambitious" and assume your own term which is in the realm of science fiction (cyborgs, personal robots, &c.). Robotics, incidentally, may be a frighteningly ambitious endeavor but only if you are considering something like autonomously scraping and repainting ships or actuation in camera pills: personal (autonomous) robotics won't happen in the near future (see iRobot Corp).

So, assuming the author's definition of "frighteningly ambitious", search and email are important because they are problems people have now. If search weren't a problem, Union Square wouldn't have funded DuckDuckGo, and more to the point, DuckDuckGo wouldn't be growing. (I've had the exact problems the author describes with Google for the past 2 years.) If it hasn't been shouted by Fred Wilson and Paul Graham enough: they have pain from dealing with email. They would throw their money at you if you offered a way to make email less painful. Maybe instead of spending two hours every day with email they could spend one hour with new/email and they could have a spare hour a day with their kids -- how valuable is an extra 14 whole days a year to these guys? How many people with means hate spending so much time on email? Probably enough to make it worth investigating the problem. For what it's worth, I'm frugal as Franklin and I would actually pay for email (i.e., I wouldn't accept any common free email service because they aren't even worth $0.00 to me, for a variety of reasons).


> I just don't consider advances in these areas terribly important in the grand scheme of things.

What is the grand scheme of things?


Facebook and Google have improved the daily lives of billions of people. Tesla Motors' electric car? Very impressive and a lot of promise, but so far all it's done is increase the self-satisfaction of ~1000 people.

Look at Star Trek. Other than the spaceship itself the most impressive things are the technology the people use. The Siri-like computer interface, iPad-like computers, Skype-like communications technology, etc.


> Facebook and Google have improved the daily lives of billions of people. Tesla Motors' electric car? Very impressive and a lot of promise, but so far all it's done is increase the self-satisfaction of ~1000 people.

The same could be said about computers in the 1950s. I'm talking about the famous misquote: http://en.wikipedia.org/wiki/Thomas_J._Watson#Famous_misquot...

> there was a world market for maybe five computers

If fossil fuels aren't going to drive cars in the future, it's going to be something else. Fuel cells or electricity.

If IBM, or anybody else, had dwelt on how difficult it would be to make computing power deliver any meaningful value, we wouldn't have computers today.


> If fossil fuels aren't going drive cars in the future its going to be something else.

The other option is we won't have cars.


And yet, Star Trek falls firmly into the social SF camp. The tech is incidental, left abstract or waved away with babble so that the personal interactions can proceed. Obsessing over the latest gadget incarnation wrapped in shiny branding is just the opposite, consumerism rather than tech which quietly enables.


The frightening part is the idea that you would be trying to disrupt seemingly invincible tech companies like google, which would be an insane business plan for a startup looking for a $50k kickstart investment.

But, I remember when Google was first released. I thought the search engine market was already full, with Yahoo and AltaVista dominating at the time. Who knew?!


What is it they say? It seems invincible until it suddenly doesn't any more.

But in any case, what companies seem like or appear to do doesn't matter to your company. What matters is what they are and what they do. If they seem scary, good; that will only serve to keep others from trying to beat them -> fewer competitors for you.

Now go forth and conquer.


Google offered to sell to Alta Vista (for somewhere in the 1-2 million range) if I recall correctly and they were turned down. Sometimes the giants don't see change coming.


It was $1,000,000 to Excite, not Alta Vista.

http://en.wikipedia.org/wiki/Excite


you are vastly underestimating how much email has changed the world. if a better email can do for this generation what email did for the last one, i'll back it against anything on your list for sheer bang for the buck.


Maybe it's not as bothersome as it sounds. Donald Knuth said the two biggest problems in Computer Science are searching and sorting.


The lunar base and the flying cars are pointless if your goal is to improve people's lives instead of just satisfying egos.

And the personal robot is already being worked on, but instead of a single humanoid robot (which I always considered to be a bad approach), they're small and specialized.


What about this?

"In the developing world, 24,000 children under the age of five die every day from preventable causes like diarrhea contracted from unclean water." - UNICEF

That's 54 jumbo jets a day.

I also remain unimpressed by the ambition of his list. And by yours.


That's an important problem. But my list was a list of startup ideas, not important problems generally. I'm not certain the problem of water supplies is best addressed by for-profit companies (it might be, but I'm not sure), so it doesn't make a good example to put on a list of startup ideas.


That's fair, but I think we, in this community, carry a stigma for working on trivial and superficial problems. There are far too many cat-photo-sharing sites like Color and not enough efforts like One Laptop per Child. We have to be diligent in clarifying that.

Our goal has to be to make the world a better place, not simply to make a few of us more money than we can spend.


OK.

One of the core problems the world faces going forwards is resource supply - water, food, fuel. It's long been predicted that the next large-scale war will be over water supplies; there are already a number of simmering conflicts that go back to access to water.

Why are we running out of resources? Significantly, because there's too many people for the resources we have.

Why do we have too many people? Explosive population growth over the past 100-150 years, particularly in the last 50. World population has gone up nearly 50% in 25 years.

What's historically been the best way of arresting population growth? War; kill lots of people. Hmm, not really an acceptable plan. What's the next best? Education, in particular female education.

Now, at its core I agree with your point that the list was under-ambitious. Well, let's refine that - only parochially ambitious. So let's go big. We want a sustainable, long-term end to hunger and world peace.

What do we do for that? Not "give a man a fish" but "teach a man to fish." Food parcels and free medicine are a sticking plaster - necessary in the short term, insufficient in the long term. Fair trade helps, but on its own it just improves their position at the bottom of the supply chain. Get improved education and access to knowledge, though, and we enable a long-term culture shift that both better equips the presently impoverished for the future and attacks the root causes of the resource instability that currently so impoverishes them.

Which is where projects like OLPC and the Indian tablets are so important, and where (if done properly), the startup idea to replace universities is the biggest and most important on the list. Take it all the way and it's not about knocking down Yale and Harvard; it's about world peace and an end to famine.


OLPC has been such a failure because it was designed without understanding or respect for the situation the people live in. I agree with you about medicines etc. being bandages and education being the most important thing. But I think when you veer into "universities" and "cheap computers" that things become sketchy.

Replacing universities doesn't help the developing world, not really, because the education would have to be tailored to them.

If you want to read a book about one of these problems (sanitation) and how aid efforts tend to go totally haywire for the same reasons OLPC does, read The Big Necessity.

The needs are more basic. People require self-sufficiency in the basics before having a tablet computer is going to help.

Agree with you on female education… especially also business education and entrepreneurial loans. In this vein, there is a great charity here: http://www.girleffect.org/question

And there is that Indian entrepreneur who developed affordable sanitary pads for women:

http://www.rnw.nl/english/article/entrepreneur-a-passion-per...

Note that this wasn't a charity effort, but a social entrepreneurship effort. And pads are important. Women are not just uneducated, they are hampered even when healthy.

Finally, many people in developing countries have a cell phone which they can use via SMS to determine market prices for their farmed produce etc., which is already a great boon.

I personally know some folks, who grew up in Africa, who are doing great & interesting things to serve these people where and how they are, not by assuming they need somebody to come in and "revolutionize" them. (Which will inevitably fail.)


The reason replacing universities will help the developing world (and why I still argue for cheap computers tailored to the needs of the developing world) is that it makes access to education that much easier. Yes, they need primary rather than tertiary education, but tertiary is the easier market to start deploying in, so use it as a beachhead and work down.

At present, why don't they get education? Is it that they don't want it? Aside from a few communities actively holding back education, by and large they know the benefit and want education. The present model doesn't work for them, though; the communities are frequently rural and some distance from the provision of education, and the families are operating in a subsistence economy and can't afford to release the children to education.

Technology can help attack this from two fronts. One, it enables distance learning at the time of the student's convenience. Education need not be directly opposed to family responsibilities this way; the two can be more easily dovetailed. Education can also more easily be tailored to individual needs: by removing the need for geographic concentration to provide the required training, more appropriate education can be delivered.

Two, education is a (potentially fundable) need on such a scale as to enable transformational economic change society-wide. So much of the economic disadvantage is down to lack of information and poor access to markets from the impoverished communities. An infrastructure to enable distance learning on a whole-society scale is just as able to provide general community information and revolutionise access to markets. Think eBay's had a transformational effect on some businesses? Wait until you see what the same thing could do for third-world subsistence farmers.

Transform education through technology-based distance learning. The easiest starting point for that is tertiary education in the west, which shows high demand but poor utilisation of possible technological effects coupled with high costs from the incumbents. Then use that infrastructure to achieve the really transformational change, by educating the third-world poor.


They die because they can't afford clean water, so there's no money to be made in solving that problem. Capitalism does not have a heart.


No, they die because they are ignorant of sanitation and therefore don't have any. Have you read anything on the subject, or are you just assuming?

Secondly, there is certainly money to be made in solving the problem, if you're not a truly lazy thinker who bases his conclusions off what he hears in TV news soundbites:

http://www.gizmag.com/go/4418/

http://opinionator.blogs.nytimes.com/2010/11/15/clean-water-...

http://www.waterforpeople.org/programs/how-we-work/initiativ...

http://www.unwater.org/downloads/Sanitationisabusiness.pdf

http://tlc.howstuffworks.com/family/green-inventors-solar-sa...

Lastly, I'm sure that TOMS and Warby Parker would be interested to hear that there's no money in helping poor people gain access to shoes or glasses, as well.


My statement was not meant to be factual but rather indicative of a certain cynicism on my part about what I think the motivations behind people starting (the vast majority) of companies are. Sorry that wasn't clear.


Well it worked out for the best! No worries. But I know there are people who are actually thinking that.

Sarcasm: it's dangerous (but sometimes useful) on the internet ;)


I thought about this too!

Disease is a serious case.

But it's shameful that at this stage in the evolution of humankind we have people dying of hunger and malnutrition.


It's not a disease which causes children to die of diarrhea, it's inadequate sanitation. That's right: exposure to human waste.


In my experience, Sand Hill Road does not want "frighteningly ambitious" startup ideas if substantial capital expenditure is involved. (In fairness, they are willing to hear those pitches – I guess that's something.)

> Now Steve is gone there's a vacuum we can all feel.

Pixar got funded only because Steve Jobs (Steve Jobs!) paid for it out of pocket to the tune of $50 million total. It's Pixar that made him a billionaire (not Apple, as most people assume). How often did Steve Jobs invest in companies? Virtually never. But he knew (correctly) that Pixar was on to something.

I'm dealing with the Pixar bootstrap-problem at my own company, Fohr. Fohr is the live-action version of Pixar (photography, not animation, is what gets computerized), and requires $32 million in capital to do the process today on a feature film (well over half of that is for hardware - $2 million alone for electricity!).

Fohr is only constrained by capital – the R&D has already been done (it took nearly 13 years to develop the tech) – so you'd think Fohr would be ripe for funding. And you'd be dead wrong. There are no Steve Jobs left to pay for it.

The startup world today seems to only want tech innovation on the cheap, and that includes Paul Graham and all the rest.


So if I wanted to do PixActing like I wanted to breathe, my five year plan would be a) get VC funded for anything, b) achieve a modestly successful exit, and then c) recruit one similarly situated person and just shake the money tree. Without making disparaging comments about identifiable businesses, it is not a controversial observation that proven entrepreneurs with existing networks have vastly superior access to capital compared to first-time entrepreneurs with no network, independent of idea quality, target market, or execution ability.

$40 million is not a number that is unachievable in 2012. The password is just a bit different than for $200k, $700k, or $5 million.


It's probably worth mentioning how Steve Jobs was introduced to (what became) Pixar:

http://engineering.stanford.edu/profile/alvy-ray-smith-ms-19...

> One of my champions at Xerox PARC was Alan Kay. So I knew Alan Kay, who was by this time a fellow at Apple. And Steve Jobs had expressed some interest in computer graphics, so Alan Kay said let me introduce you to the guys who do it best. So Alan Kay brought Steve up to spend an afternoon with us at Lucasfilm. That’s when we first got to know each other. I had actually had one earlier conversation with Steve at some design conference on the Stanford campus one summer, but that was just a first meeting sort of thing. The first serious meeting with business possibilities was that one at Lucasfilm with Alan Kay.

> Shortly after that, Steve and Apple broke up. And meanwhile, Lucasfilm was trying to sell us. Steve ended up buying us from Lucasfilm for $5 million.

So not only was Jobs alerted to Pixar by an existing contact, in buying it he was to a large extent reusing the business model that had already worked with the Macintosh: take PARC goodies and commercialise them, hiring some of the PARC guys themselves.


I'm basically doing that, actually. Fohr has ridiculous technology, and I'm parting it out (feels like chopping a car) as you describe.

I can do Fohr without the capital; it'll just take me longer as hardware gets cheaper and my own net worth goes up.

18 months ago, it would have cost over $100 million to operate Fohr, so time is on my side.


btw, Fohr looks really cool. However, the pull quote at angel.co/fohr is a little unfortunate:

  “Our dream of building a Pixar for films that are
  photographed is just weeks away from being realized.”

  (Posted 4 months ago)


A good laugh.


Thanks, I've updated AngelList with the latest news:

> Pre-production continues on the first computer-photographed film, Carpathia, and production begins on June 1, 2012 in Los Angeles.

At the time, we were very close to going through with a deal. (Obviously, that fell through.)


Pixar did it for 16 years (1979-1995) before they released a movie.


1986-1995 is a bit closer, I think; from 1979-1986 they were in effect an R&D division of Lucasfilm, and it wasn't their job to even think about making films. They were supposed to develop new tech and do special-effects in films, which they did do for quite a few prominent films.


You're right that VCs tend to be leery of the most ambitious ideas. That's another of the obstacles in your way if you pick one. But you shouldn't let your ambitions be limited by what VCs will fund.

(In any case you can trick them by only telling them about the initial few steps.)


Looks pretty interesting but I find myself having to guess what you're doing. It's important to be concise and reference things people already know ("Pixar") when you want to convey information quickly, but it's unclear what the value of the technology is in the 2 paragraphs I could find written about it. Everyone knows "Pixar" by name but I'm struggling to understand what you're doing. Is it animation software that maps photographs to the virtual world and renders it "almost-real"?


Please see my reply to @ricardobeat in the parent post.

Part of the issue is that Fohr has two sources of funding. One source is for the technical side of the company, which I expected tech funding to pay for. The stuff on AngelList is basically only about that.

The other source is film funding for the first computer-photographed film, Carpathia. I have a completely different talk for that, which is more about how the tech is actually used to make a live-action film.

Point is, AngelList is only a small (but expensive) part of the story.


It's likely that even Steve Jobs would not have invested in Pixar if he knew how much it was going to cost to make it successful. He originally put up $10 million ($5 million for Lucas and $5 million to finance it). It became a money pit that either pride or faith compelled him to keep funding.


Sand Hill Road is just one source of money, and money is the most fungible representation of wealth, so if they won't help you, go to somebody for whom $32 million is chump change or who is used to paying way more for a movie. Hollywood may be more receptive to your ideas (making a blockbuster isn't cheap and there is always risk, so they should be used to taking chances).


I'm curious. How does that work? You transform the film action into 3D and can then manipulate the animation?

The video on AngelList looks like just a rendering demo, and I can't find any other references.


See: http://erichocean.com/fohr/index.html for more info on how the tech works.

Filmmaking is a technical and artistic discipline, and only the films themselves are sold to a mass audience, so my pitch really only makes sense (and is tailored to) those in the industry.


Thanks for the link, that clears it up nicely. Maybe the video could explore the production workflow a bit more, it looked like a simple tech demo to me.


As an amateur historian, I found the Columbus bit interesting, and probably more on-point than Graham might have even known. Columbus, his backers, and his detractors all accepted that the world was round. What they disagreed about was how big it was, and how far it would be to Asia sailing west. Everybody, pretty much, by that point knew that the world was literally round (and flat only in stories). This was especially true in monastic and church circles, which had known it for longer.

In other words they all agreed it was a great idea and an ambitious project that might succeed. They disagreed about what it would take to get there, and whether there might be obstacles in the way.

Seems like a very fitting metaphor for an ambitious startup.

Edit: For sources, you can start with "Heaven and Earth in the Middle Ages: The Physical World Before Columbus" by Rudolf Simek, which is a book uncommon in its level of insight. His description of Marco Polo's purported encounter with a unicorn had me laughing in both humor and amazement.

Simek's basic thesis was that Columbus's expedition was important historically because it blew away an important piece of medieval ethnographic thought--once it became clear that the areas he had reached were not India, but were inhabited anyway, it doomed the Augustinian argument against the existence of inhabited continents beyond Africa, Asia, and Europe. This then paved the way for questioning the religious and classical basis for some aspects of the physical world, and led in many ways to the Renaissance (though I think the failure of the Crusades and the translation of Arabic writings into Latin had a strong hand there too). Either way, Columbus's voyage was important in changing the way we think about our place in the world. Another good point about ambitious startups?


I'm currently working through SICP and watching the 1986 lectures of Abelson & Sussman, and one interesting bit in Lecture 2b on Compound Data is when a student raises his hand and asks Hal Abelson about the axiom of doing all of your design before any of your code.

Abelson's response: "People who really believe that you design everything before you implement it are people who haven't designed many things. The real power is that you can pretend that you've made the decision and later on figure out which one is right, which decision you ought to have made, and when you can do that you have the best of both worlds."

Probably the same holds true for startups.


Thanks for the reference.

Of course, every sailor knew the Earth was round. Just take a morning off and watch the boats going out of a fishing harbour should your location allow this...

I found that pg quote useful as well, in a 'look for local advantage' way. Plan the next step based on how the weather and sea look today. Tomorrow it might be different.


The Simek book is quite interesting. Of course it wasn't just the sailors. By the 13th century, pretty much everyone knew the world was round, as it was a common description in popular literature. Their ideas of antipodes were rather funny and something the church struggled against for some time, until the discovery of the New World put an end to the question.... And until Vasco da Gama proved them wrong, they might not have believed you could sail across the equator.... but they knew it was round.

But the Simek work is interesting beyond that. It's largely on the basis of his work and the understanding that Europeans often knew more about Asia than Europe that it's fairly clear what a unicorn was: it's what you get when you describe an Asian rhino using a horse as a reference point. (Pliny's description of a monoceros is also frighteningly like a rhino, although some things are exaggerated.)


In re #4, I'd suggest that your biggest hurdle isn't movie studios (as we often like to suggest here). It's Comcast. It's Time Warner Cable. It's AT&T. These companies exercise an oligopoly on most people's internet connectivity, TV UI and UX, DVR experience, etc. They also set the terms, with the networks and studios, for what you actually get to watch on demand. They pushed their crappy DVR onto the masses, effectively killing off the far more innovative and superior TiVo, because they offered their boxes at point-of-cable-hookup to consumers. They control so many strategic channels in the TV business, on both the B2B and B2C ends, that they're basically running the industry. (They were also the prime movers in the PIPA/SOPA legislation, and they'll be back with another attempt as surely as the sun rises in the East.)

Netflix, Apple, and Amazon look like compelling alternatives to the cable oligopoly. Unfortunately, studios are deathly afraid of handing over monopolistic control of their distribution to a single player like Netflix, so they're fighting with Netflix and trying to push their own alternative onto consumers (Ultraviolet). Meanwhile, they remain relatively oblivious to the real snakes in the grass (Comcast, et al.) -- an obliviousness that's going to get even worse, now that Comcast owns a major player in the production system.

To beat Hollywood isn't to beat the studios. To beat Hollywood is to beat cable. This isn't a war over content; this is a war over distribution. Technology vs. technology. Content producers will go wherever there's distribution to be found, and money to be made.


Here's another tip: I'm African, and I don't understand what you are talking about here. Maybe the next entertainment innovation should force global scale...


It's an interesting analysis, but the problem for any upstart distribution technology is still getting content; it's a chicken-and-egg problem. I've worked in the past for a content-distribution technology company, and the main issue was getting content. There were also issues specific to the chosen technology that made it a sort-of dead end, but even apart from that, I didn't see how it could get content on relevant terms.

The technology to distribute the content is out there already, bittorrent showed the way and it is working at a fairly large scale. Any problem down the road technology-wise can be solved by some (non-trivial) amount of money and creativeness. It's not a technological problem.

The main trick is to get good content on a trivial distribution method. I've been thinking about this, but I'm a technology guy and couldn't figure out how to get the content.


I think personal health monitoring is probably the most important thing on that list. The thing that excited me the most when smartphones started becoming popular was the prospect that they could coordinate data collection from a number of sensors always collecting data - basic ones like Nike+, but perhaps also sensors measuring sleep, taking periodic bloodwork, etc. At the same time, perhaps you could automatically monitor personal behavior such as foods eaten.

Personal diagnostics would be an important use of that, but I think more importantly, with a very large public dataset of basic biometric data correlated with behavior data and medical results across a significant portion of the population, we could stop treating human health studies as bespoke one-offs put on at great expense and start treating them as data mining problems. You could begin to spot correlations between behaviors and results that are unintuitive given conventional wisdom. I think that the resulting burst of discoveries would be on par with any of history's great scientific revolutions.


There is a fundamental difference between a scientific study and data-mining.

Science is based on probability theory. Until we discover the "grand theory of everything", our other theories will be only approximate, and our experimental results not 100% predictable. Therefore, scientists consider a prediction confirmed if the chance of seeing the result at random is less than some threshold, usually 10%, 5% or 1%.

However, for this to work, each study must be based on new data. If you use the same data to check e.g. 10 predictions, each of which has a 10% chance of appearing confirmed even if incorrect, you will on average confirm 1 of your predictions, even if all are incorrect!


It's also fairly easy to, even unwittingly, begin tuning some of your procedures to the data set (i.e. overfitting), even if you're employing the usual precautions, like cross-validation. As an extreme example, if we had one gigantic fixed data set and gave researchers 10 years to work on it, by the end of that 10 years there would be entire techniques at least partially specialized to that specific dataset, with poor generalization outside of it.


No, if 10 predictions each have a 10% chance of a false positive, you will always have on average 1 false positive. Whether you test them on the same data or not doesn't matter (unless 10 predictions test the same thing, of course.)

What you need is a control sample that you know should be negative, so you can actually measure the false positive rate. (But with a sufficiently large base sample, you can look for correlations in small subsamples and use the whole sample as a control.)


> No, if 10 predictions each have a 10% chance of a false positive, you will always have on average 1 false positive. Whether you test them on the same data or not doesn't matter

That's true.

I should have said it differently. In fact, I'm not even sure that my understanding is correct.

The problem with not using new data to test each new prediction is that if a scientist wants to show A on data X, but data X doesn't confirm A, the scientist modifies A slightly and now tests A' on X, which is again rejected, then modifies it again, testing A'' on X, and so on... until the data X actually confirms hypothesis A'''''''!

That's the real problem - using data without a predefined plan for how you will use this data. In the above example, the data that you collected affected your decision-making process, so your results are not independent of the data (and thus not replicable!).
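The inflation being described can be seen in a toy simulation (all numbers invented for illustration): under a true null, a researcher who takes ten different "looks" at the same fixed dataset — here, ten slices of 100 fair coin flips, each tested for an excess of heads — gets a spurious "discovery" far more often than the nominal per-test rate suggests:

```python
import random

random.seed(42)
TRIALS = 10_000
K = 10  # number of reworked hypotheses tried against the same data

false_positive_runs = 0
for _ in range(TRIALS):
    # one fixed dataset: 100 fair coin flips (the null is true)
    data = [random.random() < 0.5 for _ in range(100)]
    # the researcher tries K different "slices" of the same data;
    # each slice is a 10-flip window tested for an excess of heads
    found = False
    for k in range(K):
        window = data[k * 10:(k + 1) * 10]
        # a crude one-sided test: 8+ heads out of 10 has p ~= 0.055
        if sum(window) >= 8:
            found = True
            break
    if found:
        false_positive_runs += 1

rate = false_positive_runs / TRIALS
# with 10 such looks, the chance of at least one "discovery" is
# roughly 1 - (1 - 0.055)**10 ~= 0.43, far above the per-test 5.5%
print(f"family-wise false positive rate: {rate:.2f}")
```

The fix the parent comment describes — a predefined analysis plan, or fresh data for each revised hypothesis — is exactly what keeps this family-wise rate at the nominal level.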


Yes, this is correct. It can still be useful to look for things that warrant further study, but it won't be proof of something in and of itself.


I was talking about discovering correlations which point to potential relationships between behaviors and results. If you have a sample which includes everyone in your study, that's as good as you can get on your confidence interval... you can use a different subset of the data for each experiment if you'd like, and that would be functionally equivalent to running individual experiments with a subset of the population each time.


Scott Adams (Dilbert guy) proposed this a few years ago as a way to win the Nobel Prize in Physiology or Medicine.

Being able to "tivo rewind real life" would be pretty amazing -- watch a bunch of things passively, record data, notice a spike in mortality, then find the common factors and stop the new plague (or the kids who found a Cobalt-60 source, or the pump infected with Cholera).


I think it would bring about enough breakthroughs for a lot more than one Nobel :-) But yeah, I'm definitely not the first one to fantasize about being able to correlate behaviors or even just basic bio signs across a big chunk of the population with the medical results.

I even think it'd be feasible to pull it together as a company, since there are lots of immediate potential benefits to using the things that would collect this data. A much better replacement for Life Alert, which works even if you're unconscious and can't press the button. Dieting aids, sleep aids, exercise aids, passive diabetes monitoring. It's a lot to bite off, especially as a startup, but I think a team that knew how to execute on this kind of product could roll out products serially and pull together that dataset. That dataset would enhance each subsequent product in the way that each of Google's views of the internet (DNS traffic, browser feedback, analytics) helps its core product. A hell of a defensible advantage. Not sure how one could convince that company to give up its crown jewels for the sake of medical research, though :-)


Interestingly, I have a friend (a hematologist) who says that they actually try not to overscan, for fear of finding something. There are plenty of cases where a scan reveals something that may never manifest as a noticeable issue in the lifetime of the patient, but which they are then obliged to treat. Unfortunately, the treatments themselves are often invasive (sometimes far more so than the issue itself might have been).

I guess that problem will diminish as advancements in medical diagnosis and treatment progress.


There's a definite opportunity to build a competitor to Apple and reach the hackers first: building a better PC for hackers to create new software built on Open Source and Web technologies.

The cracks are starting to show with using Mac OS X as a primary machine for hacking. It's got Unix under the hood, but every successive release has become more consumer-focused and less hacker-friendly. The proprietary nature of developing native apps also turns off a lot of the great OSS hackers.

If you could get an all star team together with someone like Rahul Sood to design the hardware and someone like Miguel De Icaza to design the OS and developer APIs, you'd be well on your way to tackling this problem and building the next Apple. And this time, it could be a lot more open source friendly.


Do you think there is really a large number of hackers who are not satisfied by the Mac on the one hand, or Linux on a Thinkpad on the other, and who would buy into this new system?

OK, so you really just want to target hackers first and then move into the consumer space once you have some traction. But consumers are starting to move away from laptops and toward things like smartphones and tablets. Apple is moving in this direction and has a head start. If you start by making laptops, Apple's head start is just going to grow.

Better to figure out what is going to come after smartphones and tablets, and get there before Apple does. I don't see any particular reason to retrace Apple's steps and start by building a laptop.

Some companies I suspect might be following a long-term strategy similar to this already:

* Google, with the rumored heads-up display project based on Android

* Jawbone

* Razer (in this case Apple is not the target)


> * Jawbone

+1! I think Jawbone has a great opportunity for post-phone consumer devices. They have a strong brand and are branching out to non-headset form factors.

I am surprised at how readily mainstream consumers have adopted Jawbone Bluetooth headsets. People wear them all the time, even though 99% of the time they are not taking a phone call. They look like dorky Borgs, but I think this is a sign that mainstream society will be open to transhuman/cyborg enhancements.


> Do you think there is really a large number of hackers who are not satisfied by the Mac on the one hand, or Linux on a Thinkpad on the other, and who would buy into this new system?

I don't know if it's a large number yet, but it's growing and you can see the cracks spreading. Most of the hardware outside of Apple's really sucks, and Apple's software ecosystem and the way they run things turn a lot of people off. If there were something viable to switch to, I think it could get some traction quickly.

> Better to figure out what is going to come after smartphones and tablets, and get there before Apple does. I don't see any particular reason to retrace Apple's steps and start by building a laptop.

This is intriguing. A Post PC device that hackers can use to create software. Definitely something that hasn't been fully explored.

There are a few challenges, however. Hackers need significant hardware to get anything done. Also, most of our tools that have stood the test of time (Emacs, Vi, C, etc) require traditional keyboards as input.


Text input is the biggest problem of post-PC. While it may eventually get solved, I'm exploring ways to make non-text-based programming productive: http://noflojs.org


What's wrong with running Linux on Apple hardware?


Better to figure out what is going to come after smartphones and tablets, and get there before Apple does.

I would figure out what the developing world needs the most and how they will be using computers and communications.


It's here, but not how you think: http://www.raspberrypi.org/

Do you know any hacker who knows about Raspberry Pi and isn't planning on buying one?

Radically undercutting existing platforms in price, but with comparable functionality enables totally new uses for general purpose computing devices. That's how to take on the Mac/PC business - not by hitting it head on.


rπ is a great project, but it's hardly out of left field.

There have been plug computers for years, at seriously affordable prices. You can buy a netbook for $100 these days and run Windows/Linux on it happily.

rπ is interesting because of its positioning as a learning platform equivalent to the BBC Micro which nurtured David Braben and many of the other backers. Not because it holds some magical quality over Gumstix, BeagleBoard, PandaBoard, CottonCandy, GuruPlug, DreamPlug, Arduino etc.


I think that purpose designed hardware at the $25 price point is a radically different proposition than a second hand netbook for 4 times the price (because you sure can't get a new Intel netbook for that price).

It does hold a magical property over all the platforms you've mentioned: price. At $25 it's close to disposable, whereas I'm going to look after a $90 BeagleBoard.


I once walked into the Arduino IRC and asked if it could run maxima. The response was that I'd be lucky to get it to run Linux. The arduino costs more than $25. Everything you listed there costs more than the utility value it has for most people.

For $25 a pop you can do innovative prototypes on the kids-allowance cheap. The size is an extra bonus. There is no end to the number of ~$50 projects I've thought up that consist of a pi and peripherals. To say that those other projects you mention are somehow comparable is ridiculous. They all cost multiples more money than the pi.

The pi is really going to speak to my demographic: Teens in their basement who have no money to spend on flashy prototyping equipment. But have tons of ideas they want to try out.

EDIT: And continuing: for doing a "production run", having production costs that are multiples lower is a serious advantage for cash-strapped endeavors. Not sure if the Pi foundation will let you order enough to do that, though.


I don't see the difference between what you just described and what Canonical is doing with Ubuntu.


Canonical != Hardware manufacturer.

The problem with the hardware business is that good hardware takes money to make. It's not the sort of thing that you can fund on the kind of margins that most startup rounds deal with.

(Disclosure: I have never attempted a startup or tried to obtain funding for one.)


I will not be upgrading to Lion or later -- I prefer Finder to show ~/Library, I don't want to see all the thousands of tiny UI images on my computer, I don't want the mouse to scroll backwards, or a stray finger on the mouse to go back a page in webkit when I wanted to scroll left. Don't get me started on the latest Xcodes, which I have to use at work.

So, speaking as the target market, Ubuntu has created a horrible user experience with Unity, and have made it not worth the effort it takes to remove it. I will never run Ubuntu (desktop) again.

My Linux laptop now runs Mint, which is fine, but for whatever reason cannot connect to a network without the Gnome graphical configuration tool. Probably layer 8 or ID10T in this case.


Agreed. My next laptop is going to be a Linux one, and I'm going to pay for a high-quality hardware system... I can only hope the drivers are high quality.

I'd pay $2250 for a high-quality Linux laptop (roughly the price of my OSX laptop). :-)


Linux runs great on an Apple MacBook.


"GMail is slow because Google can't afford to spend a lot on it. But people will pay for this. I'd have no problem paying $50 a month."

Ok. Number of Paul Grahams in the world times $600/year = ?

Most people on the web are ridiculously stingy. "I would pay for this" is a terrible way to think for an entrepreneur. Believing that what we think represents the masses is a rookie mistake.


Number of business email users? Tens of millions certainly.

"Would I pay for this?" is a great question for founders to ask, because it combines two of the most powerful techniques for generating startup ideas: solving problems you yourself have, and using payment as a test of how much people want something. One of my techniques for helping founders to come up with ideas is to ask them what they need so much that they'd pay for it.


But there are decades of precedent against how much people will pay for email.

Look at a large enterprise org - do you think that Lockheed, with 150K employees, would spend $50/yr/user on email?

Hell no. They don't spend $7.5MM per year on email accounts for their employees.

That's the problem with enterprise scaling vs. cloud/startup scaling. They are inverses:

The enterprise wants the cost per unit to go down when scaling. The startup/cloud wants profitability to increase at the same rate when scaling.

We all want great services, but NOBODY wants to pay for them.

I myself seek to offload cost at every opportunity; work pays for machine, phone, travel, software etc...

Same model.

Yeah - I'll seek to solve problems I have, but not based on how much I would pay - rather, on how much of that cost I could offload.

(Clearly there is a lot of grey here, and there are areas where this doesn't make sense -- and others where it does -- and these are not mutually exclusive. (i.e. in areas where I am both building for the consumer and the provider (healthcare))


"Look at a large enterprise org - you think that Lockheed with 150K employees would spend $50/yr/user on email? Hell no. they dont spend 7.5MM per year on their email accounts for the employees."

Yes, I certainly do believe that any American corporation with 150K employees spends significantly more than $7.5mm/year on their messaging system.

These systems actually get _more_ expensive as they grow larger - Disaster Recovery, Business Continuity, Sarbanes Oxley, Customer Service, SLAs, Data Loss Protection, Intrusion Detection - All these email services that the small enterprise doesn't worry about (that much) - add up significantly in larger enterprises.


Lockheed is a horrible example too because they have extensive classified operations (their support costs for email within classified projects probably exceed 7.5mm alone), and because Lockheed IS&GS is a major contractor for outsourced IT services.

I think the Gartner figure was something on the order of $500-1000/yr per employee for messaging in large high tech businesses. A lot of that is IT staff, and all the other systems for security and compliance. Email is one of the big apps within enterprise.


Would it be possible for you to elaborate on what your email needs are and how they're not met?

Would your problems be solved by hiring a personal assistant (a real person)? Then the solution is AI and it's hard.

Or do you believe it's impossible for anyone but you to sort through your mail?

If that's the case, what we need is to make sorting email "fun" (more enjoyable than TV).

We may be looking for the Angry Birds of email.


The problem with emails is not the spam or the sorting, it's that they're not actionable. Tasks in a to-do list are almost directly actionable, and that's what most emails aim for.


One simple idea for power-user email: show the sender the list of IMAP folders the receiver has created in his inbox, and allow the sender (or maybe an assistant) to target an email at one of those folders, turning organization into a collaborative effort.


Seeing as how you're on HN you may have heard of 37Signals, Fog Creek, and Atlassian. All sell products which are domain-specific improvements for email. They collectively have, conservatively, X00,000 paying users. These are small companies in this problem space - IBM has at least 48 options for I-can't-believe-it's-not-email at every price point between $200k and $200 million. Ditto Microsoft and a dozen other big software companies.

"The web" includes a bunch of stingy twenty-somethings on the consumer Internet, but it also includes the producer Internet, and the producer Internet is one giant system for turning piles of money into bigger piles of money. Something which makes that happen is cheap at any price.


Rich people have the same number of seconds per day as poor people, except each second is worth more. Right now everyone is driving a black Model-T; there isn't a premium edition of email available, though one is certainly desired (personal assistants and secretaries currently fill this gap).

If the cost of a premium email/task-list system is less than the dollars saved in time, and less than an assistant, people will buy it. Aim for the higher-end market, solve their problems first on their dollar, and later expand downward to everyone else.


there isn't a premium edition of email available, though one is certainly desired (personal assistants and secretaries currently fill this gap)

I think you misunderstand. The premium edition of email is much like the premium edition of anything else- it's the email you don't have to use, the plane you don't have to fly, the car you don't have to drive, the food you don't have to cook. So I don't know that personal assistants and secretaries are "filling a gap" in the stop-gap sense, but rather they are the solution.

If you want to capitalize on it, do like a chauffeur company would do, but with email.


Somehow people manage to pay Dropbox hundreds of millions of dollars a year, in spite of their stinginess. The email version of Dropbox would almost certainly be a bigger market.


"I would pay for this" is a terrible way to think for an entrepreneur.

Peter Drucker said: "The purpose of business is to create and keep a customer."

So an entrepreneur tackling this problem would already have at least one prospective customer, thus they'd immediately be in business. And who knows, maybe something "big" grows out of it (e.g. maybe you discover there's a big market for this in the enterprise space).


Most households have no problem paying $50/month for a service that their whole family uses frequently: cable TV. So here is my idea: make and sell this great email-like/todo-list/file-sharing/whatever service that an entire household can use, and charge $50/month.

That's only $10/month/user for a family of five. Come up with reasonable limits to ensure enough revenue (eg. max 10 users per plan).


I'm not sure you can compare cable to email. I may be in the minority here, but I agree that email is not really a pain point for me. I pay $50/month for cable because that represents a choice of having ready access to television or nothing.

The idea that households would pay $50/month for email is a bit of a stretch given that the alternative is free email that works pretty damn well.


I am not talking about email as you know it.

I am talking about this service that pg thinks needs to replace email. I do agree with him that people currently use email in ways it was not intended for (todo list, sending files, etc.), and that surely there must be a better way of doing these tasks, one that would be worth $50/month, while getting rid of limitations of email such as attachment size limits.

In fact I know I am right, because the reason we, in 2012, still have no easy way to share large files between friends and family members is precisely that the storage and bandwidth costs are higher than the cost of email.


You honestly believe there are a lot of families that would pay you $50 a month for a service that enables them to easily create todo lists and share attachments? You're a pretty optimistic person.

How many services do you yourself pay for today on a monthly basis? I can count mine on one hand. To get $50 from me every month you better have a life-changing service.


I think email needs to be changed from top to bottom, but offering speed alone would be a hard sell to launch with. You can get the Microsoft offering (Exchange Online or packaged with Office 365) starting at $5 a user. Microsoft can afford to spend money on it. I've used it and it is a lot faster than GMail (although it wasn't enough for me to switch).


One thing I noticed is that email is a problem for "important" people who receive tons of messages per day. They must read/answer that one vital email right away. Kevin Rose talked about this a few years ago, and I've heard many high-profile investors complaining about this.

For the vast majority of the people on the planet, GMail is just good enough. Yahoo Mail and Hotmail are still doing well.

Perhaps email is perceived as a problem just because it gets tons of "face time" with us every day. I'd leave it alone and focus on one of the countless unsolved problems in the world.


GMail is just good enough

It is, but only in terms of how we use email today. I think the next step will be making messages dynamic, which would basically allow you to receive/deliver an interface to an app. Of course, with all that power comes the issue of how to control it, but that would be an interesting problem to solve. Imagine that instead of getting a notification email for an update on a Basecamp todo list, you saw the actual todo list and could manage it right from your inbox.


The only reason you can't do that already is that email clients prohibit it for security reasons, and Hotmail is already experimenting with emails that allow that within a sandbox.

Emails can send HTML and JS, there's nothing innovative about it. You only need to convince developers to add JavaScript sandboxes to their email clients.


for some reason, shades of two-way-rss hype just came flooding back from 2006.


I thought of 2009, when Google Wave was announced.


That's exactly what my future start-up will do! Is anyone interested in making it happen?


Perhaps, but as Bentley and Gulfstream discovered, there is a tidy profit to be made servicing the niche needs of high-profile individuals.


Undoubtedly, but tidy profit and frighteningly ambitious are two different things.


People are like sheep. If you get the "important" people to use it, you will most certainly end up with a large user base of "regular" people.


Then what's a good way to think? If you can't start from "I would pay for this", then where do you start?

If you wouldn't pay for it, then why would you expect others to pay for it?


Would you pay for computer support, or extended warranties? I wouldn't, but millions of people do.

It really depends on the context. When thinking about something for the masses, you are one very particular data point. You want evidence that lots of people would pay for something.

If you are building something for people like you and:

- there are lots of people like you, and you can make it very cheap

or

- you ARE paying tons of money for it already, and it could be better and cheaper (e.g. travel)

then you may be on to something.


The last graph of #6 is great.

I hate to stray into politics but my scary ideas revolve around public policy and the various actions people undertake in the public sphere that affect it. More specifically: Is it possible, by providing better tools for publishing and accessing information, to substantially improve public policy debates? Can we reduce the very large rewards for dishonesty and the use of disinformation?

This is the crux of the problem with our current political system, I think. It's not campaign finance, it's not religion, it's not disagreements about economics, foreign policy, security vs liberty (a lovely false dichotomy) or what have you. It is simply the fact that lies win and truth loses. Or, if that statement is not necessarily true, it is true in the current practice.

So, if you buy my premise, how can technology help? Isn't it a problem of human nature? You can't force people to be honest. You also can't force people to learn how to recognize dishonesty in spheres where they have little competence. You can't impose good sense or decency.

But human nature is varied, and so maybe the seeming ascendancy of its more unfortunate aspects is situational. Maybe by improving the context and presentation of information they can be mitigated. Maybe technology can be used to recognize and reward honesty and to point out and discourage dishonesty. It hurts to think about, doesn't it? It does for me, because it is so hard, and that's what I took from pg's essay. Granted, I may not be talking about problems whose solutions would make you the next Google.

As an aside, I think that the utility of greater transparency of public actions (governmental or corporate) is already well understood by many, and much work is already being done in this direction, so I am leaving it out. But that doesn't mean there isn't room for new solutions there, as well.


Interesting idea. I think the most intractable part is that many people believe what they want to believe. They gravitate to a world view for various reasons, backfit it to the data, rationalize the cherrypicking of supporting data and discarding of refuting data, integrate it into their id or ego, fight tooth and nail to protect it, and happily accept the political dis/misinformation you're referring to.

I'd imagine you'd have to identify why people do that, why others don't, and whether it's formalizable and transferable. Pretty sure psychology has done some work in that area, but brain-fried atm and drawing a blank...


Yup, psych has been looking at this intently for a while - I think cognitive psychology in particular has been doing work on it.

There was an article on this a while back on HN as well - why Walmart knew someone's daughter was pregnant before her dad did. (Forbes, after taking it from Wired, iirc.)

It's based on the fact that people's habits (in regards to shopping) are ingrained, and that there are only a few times in life when those habits are open to change.

That's when there is a major change in their life - like a new job, a baby, marriage and so on.

Dan Ariely (from Arming the Donkeys) also had an old podcast episode on this.


It was Target and the New York Times Magazine, for what it's worth.

http://www.nytimes.com/2012/02/19/magazine/shopping-habits.h...


Thanks!


Yes, I agree. But these people do continue to be influenced by information in the public sphere, although they do tend to select sources that they agree with. They also were influenced when settling into their original positions.

I really don't think this general behavior will change, but it's not all or nothing; it's a matter of degrees. I think everyone does this at least a little bit. Unfortunately, a lot of people do it a lot. If you can shave away at it a little bit at a time, it could have a large impact in the end.


This is the crux of the problem with our current political system, I think. It's not campaign finance, it's not religion, it's not disagreements about economics, foreign policy, security vs liberty (a lovely false dichotomy) or what have you. It is simply the fact that lies win and truth loses. Or, if that statement is not necessarily true, it is true in the current practice.

I think Eric Drexler had hopes like this for hypertext before the World Wide Web started to hit it big. However, "In every age, in every place, the deeds of men remain the same."


You should check out Robin Hanson's idea of 'futarchy' here: http://hanson.gmu.edu/futarchy.pdf


But don't miss Mencius Moldbug's rebuttal of Hanson's Futarchy:

http://bit.ly/7gFYD


The problem here is that politics is largely not about facts, but the difference in interpretation of those facts.


The thing about replacing e-mail is that it isn't just a todo list. For many people it's just a receipt box - the place where I keep all my notifications that I bought stuff from Amazon. For others, it's still the primary means of business communication.

My work e-mail is largely about communications, with a todo element to it and unfortunately some file storage too. My "home" e-mail is completely different. It's where I get my monthly statements for banks and investments and where my notifications go. When replacing e-mail you would need to service all these components of what e-mail is.

The thing that originally made e-mail so important was its identity factor. That seems to have withered away as other services have replaced some components of what e-mail was for.

I would argue that e-mail needs to not be replaced, just reclaimed. My e-mail client (web or otherwise) should know that an e-mail in this case is actually just a twitter DM notification and be smart about how it presents that to me. It should know that something from Bank of America is probably something I want to keep, but something else from Bank of America is just marketing junk.

I haven't seen anything that is smart enough to do that on its own. I don't want to have to deal with creating filters - it should just know. I would totally switch from gmail if this were out there.
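The "it should just know" part is, at its core, a text-classification problem. A toy sketch of one standard approach — a multinomial naive Bayes over message words, trained on a handful of hand-labeled examples (the categories, training snippets, and labels here are all invented for illustration; a real client would train on far more data):

```python
import math
import re
from collections import Counter, defaultdict

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    """Multinomial naive Bayes with add-one smoothing."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.class_counts = Counter()            # label -> training doc count
        self.vocab = set()

    def train(self, text, label):
        ws = tokens(text)
        self.word_counts[label].update(ws)
        self.class_counts[label] += 1
        self.vocab.update(ws)

    def classify(self, text):
        best, best_lp = None, -math.inf
        total_docs = sum(self.class_counts.values())
        v = len(self.vocab)
        for label, cc in self.class_counts.items():
            lp = math.log(cc / total_docs)           # log prior
            wc = self.word_counts[label]
            denom = sum(wc.values()) + v             # smoothing denominator
            for w in tokens(text):
                lp += math.log((wc[w] + 1) / denom)  # smoothed likelihood
            if lp > best_lp:
                best, best_lp = label, lp
        return best

nb = NaiveBayes()
# hand-labeled examples (made up for the sketch)
nb.train("your monthly statement is ready to view", "keep")
nb.train("statement for account ending 1234", "keep")
nb.train("huge sale this weekend only save 50%", "marketing")
nb.train("exclusive offer just for you act now", "marketing")
nb.train("alice sent you a direct message", "notification")
nb.train("new follower notification from twitter", "notification")

print(nb.classify("your quarterly statement is available"))
```

The interesting product question is where the labels come from without making the user "create filters" — e.g. learning from which messages they archive versus delete.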


> for many people it's just a receipt box

There's a good startup idea right there! Sign up on receiptbox.com and give it my email username/password (or maybe some sort of oauth token). It periodically scans my email and looks for receipt emails from well-known e-commerce sites. It knows how to parse them and pull out the relevant details (like TripIt does for travel stuff), and it builds a nice searchable catalog of all my receipts.
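The parsing half of that idea is straightforward to sketch. Below is a minimal, hedged example using Python's stdlib email parser: the sender, subject, and regex patterns are toy assumptions (real receipts vary per merchant, which is why TripIt-style services maintain per-sender parsers), and the IMAP polling is left out:

```python
import email
import re
from email import policy

# Toy patterns for fields that commonly appear in order-confirmation
# emails; a real service would need one parser per merchant.
ORDER_RE = re.compile(r"Order\s*#?\s*([\w-]+)", re.I)
TOTAL_RE = re.compile(r"Total:?\s*\$([\d,]+\.\d{2})", re.I)

def parse_receipt(raw_message: bytes):
    """Return a dict of receipt fields, or None if it doesn't look like one."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    body = msg.get_body(preferencelist=("plain",))
    text = body.get_content() if body else ""
    order = ORDER_RE.search(text)
    total = TOTAL_RE.search(text)
    if not (order and total):
        return None
    return {
        "from": msg["From"],
        "subject": msg["Subject"],
        "order_id": order.group(1),
        "total": float(total.group(1).replace(",", "")),
    }

sample = b"""From: orders@example-shop.com
Subject: Your order has shipped
Content-Type: text/plain

Thanks for shopping! Order #A-12345
Total: $1,049.99
"""

print(parse_receipt(sample))
```

The catalog side is then just indexing these dicts; the hard parts are merchant coverage and, as the replies below note, getting users to trust you with mailbox access at all.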

I would sign up for this tomorrow if someone on here goes and builds it. :)


Give a third party my password so it can scan my email for financial data? No thanks.

I realise there are people who would love this convenience, and you'd make a killing on targeted ads, but this is a privacy nightmare. Good luck getting people to trust you. Furthermore, you really want the results of the filtering to be applied in the user's own mail client rather than having a separate UI.

Might be feasible as a client-side app. How about a Thunderbird/Outlook addon with a subscription service for known filters?

(What is the Google Chrome of desktop mail clients, anyway? Hardly any seem to use WebKit.)


Re: the first part, I'm reminded of this web application called Mint.com...


> I would sign up for this tomorrow if someone on here goes and builds it. :)

But would you pay for it? If so, there is a way to generate almost infinite revenue with this service, which is to charge a small fee for each receipt stored. Naturally, when receiptbox.com charges this fee, it issues a receipt, which it emails to you...


You don't need to charge me. You're getting information about everything I'm buying online (which increasingly means...everything I'm buying period).

Predict what I want to buy next and take a cut of the purchase.



A replacement for email should be a lot smarter, and I think what pg hints at is pretty much the same as you're saying here, but broader. If I receive an invitation to something, it should end up in my calendar, and whatever gadget I have on me should notify me and ask me if I want to participate.

If I receive a receipt it should be stored and analysed. For example, if the item had a 30 day guarantee it should ask me before that expires whether I am satisfied with it.

If I receive a shipment notice it should automatically tell me on the day it arrives and alert me when I'm in proximity of the post office that I need to pick it up.

Actually, I would want almost all of my emails to be read solely by a computer, so that I never even see them. I don't need to see that I've bought something; I know that! I need to be told when it's in my post box though, or if it's a license key I need my computer to pop up a question asking whether I should apply that license.

So a good startup idea here would be something that took your email, filtered it and just removed all of the receipts/etc messages from your view, while keeping it neatly organized somewhere else for the future.
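The filtering described above could start out as nothing fancier than a rule table that routes each message to a handler instead of the inbox. This is a sketch; the keyword predicates are made-up stand-ins for whatever real classifier you'd eventually train.

```python
# Each rule maps a category to a predicate over the message.
# Order matters: the first matching rule wins.
RULES = [
    ("invitation", lambda m: "invites you" in m["subject"].lower()),
    ("receipt",    lambda m: "receipt" in m["subject"].lower()
                          or "order confirmation" in m["subject"].lower()),
    ("shipment",   lambda m: "shipped" in m["subject"].lower()),
]

def triage(message):
    """Return the category a message should be routed to, or 'inbox'."""
    for category, predicate in RULES:
        if predicate(message):
            return category
    return "inbox"

print(triage({"subject": "Your order has shipped!"}))  # shipment
print(triage({"subject": "Lunch on Friday?"}))         # inbox
```

Everything triaged away from "inbox" gets archived and acted on (calendar entry, pickup reminder) without the user ever seeing the raw message.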


That's exactly what I'm planning to build! I'm not sure if I should go hybrid or all-in.

By hybrid I mean that people could receive regular human-readable emails, but senders could include a small url or tag that links to the semantic information (it could be an event invitation, receipt, valid email confirmation, password changing, task proposal, marketing offer, flight information, etc.).

The "smart" email client could then automatically interpret semantic emails, and act accordingly. It would also hide those emails, and only show you the relevant notification.
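The hybrid scheme amounts to attaching a small machine-readable payload alongside the human-readable body, with the client acting on payloads it recognizes. A sketch, with invented field names ("type", "starts_at", the "x-semantic" key are all assumptions, not any real standard):

```python
import json

def build_hybrid_email(body, semantic):
    """Sender side: regular text body plus an embedded semantic payload."""
    return {"body": body, "x-semantic": json.dumps(semantic)}

def interpret(message):
    """Smart client: return an action for known semantic types, else show the mail."""
    try:
        payload = json.loads(message.get("x-semantic", ""))
    except json.JSONDecodeError:
        return ("show", message["body"])  # no/invalid payload: plain old email
    if payload.get("type") == "event-invitation":
        return ("add-to-calendar", payload["starts_at"])
    return ("show", message["body"])

mail = build_hybrid_email(
    "Join us for the launch party on March 20 at 6pm!",
    {"type": "event-invitation", "starts_at": "2012-03-20T18:00"},
)
print(interpret(mail))  # ('add-to-calendar', '2012-03-20T18:00')
```

The nice property of the hybrid route is graceful degradation: dumb clients just show the body, smart clients hide it and surface the action.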


There were sites like this - Swipely and Blippy. Both failed, as I don't think people in general are OK with giving out their purchase information. I'm really against giving anyone permission to access my email.



Try otherinbox.com


0. free internet

   - as in beer and as in liberty
1. a new search engine

   http://duckduckgo.com/
2. replace email - with a todo list?

   - as I look at that old Palm IIIxe sitting in the cradle on my desk, my mind swims in the ideas of all the databases on all of the devices everywhere all being in sync

3. replace universities

   - yup, and recreate the free university of olde
4. internet drama

   - indie drama titles (movies/shows/etc) on netflix / apple
   - or just read the comments ;)
5. the next steve jobs - but I am still impressed by the iPad3 rollout and its screen

   - raspberry pi
6. bring back moore's law

   - easy parallelism in software
   - it's a compiler -- that's the hard part
   - a compiler on the web as a web "service"
   - an optimization marketplace: people in the machine doling out smart answers
7. ongoing diagnosis

   - how about ongoing prevention? because cancer is a symptom too
   - why limit yourself to 1000 years of ?barbaric? western medicine?
   - why not look at all of humanity's history of medicine from all cultures?
8. tactics

   - remember that columbus was a tyrant, and he didn't "discover" anything.
   - start small
   - yup, the best plan is not to have one, and never make one; when it's a fait accompli, then you announce the plan


> - why limit yourself to 1000 years of ?barbaric? western medicine?

> - why not look at all of humanity's history of medicine from all cultures?

Western medicine is nothing more than medicine being tested scientifically before being accepted. If you know anything better than the scientific method to test medicine then please do tell. How would you prove to people your medicine works - because "it is so"? Nonsense. If by "medicine from all cultures" you mean medicine that refuses to be tested scientifically, then thanks but no thanks.


I don't really think that's an accurate characterization of western medicine. What you're calling for, "Evidence-based medicine", or "Evidence-based practice", is actually new. Bayesian probability theory is still nowhere near the dominant method of measuring certainty in the field.

I hope mmphosis clarifies what he means by "western medicine", because I'm confused by that term myself. It's not about the modern issue where the US FDA mandates that "only a drug can cure, prevent, treat, or diagnose a disease" (so if you try and sell oranges under the claim that they prevent scurvy you can be thrown in jail), since he mentioned 1000 years. I don't think he meant that western medicine has any less a desire than non-western medicine to "make people better".

I hope he doesn't mean homeopathy since I hope we can all agree that's a silly enterprise good only for making its proponents more money. But I suspect the "western medicine" refers to alternative treatments that don't have to be homeopathic. I think his overall meaning is that for current medical research, the memory of the field only goes back 1000 years or so, which may or may not be true. (I'd argue it's closer to 100 years.)

It may be a call for more testing of what old societies used to do for various things, such as the Greeks, Romans, Egyptians, Persians, and Chinese, and whether any valid techniques are there that we should bring back. I seem to recall off-hand that St. John's Wort historically has a use for depression, and in some study had about the same effectiveness as some other depression drug (though neither were much better than a placebo); if this is really the case then we need more studies on St. John's Wort to confirm, and in the US particularly we need some new FDA rules and a reduction of power (or just get rid of them).

A similar point is made when people say "stop destroying rainforests, there may be a cure for cancer there!" There probably isn't, but it's not like we were looking very hard anyway, and maybe we should.


> Western medicine is nothing more than medicine being tested scientifically before being accepted. If you know anything better than the scientific method to test medicine then please do tell.

There is no such thing as the scientific method. What you're thinking of (presumably something Popperian) is not how science is really done. A lot of science is pig-headedly ignoring evidence which tells you a theory is wrong. Sometimes that works itself out and you get a better theory (see Einstein). Other times it doesn't.

Look, the purpose of medicine is to make people healthy. Definitions of 'healthy' are different for different cultures and social groups at different times. Western scientific medicine is only one approach for one kind of culture, and it often does more harm than good.

Furthermore, western scientific medicine often ignores procedures and knowledge coming from non-western sources which, when translated into scientific language, turn out to be correct within the WSM framework.

edit: formatting


I also bend toward the rationalist side, but the first rational step is to know the limits of reason. Here, there are limits to testing medicine. For instance, I have heard that aspirin, which is a great medicine, would not pass the tests were it invented today...


DDG isn't the answer - not yet, anyway

I find that for ~50% of my searches I end up re-running the query with a !g, !w or !wo in front of it.

I know the guy behind the site reads threads here, so I'd ask him to check his logs for the number of instances where a user has re-run a search query with Google or Bing, and then work out why his results for those queries aren't working.

I think the 'new google' will be built around a PageRank that takes into account online social networks rather than links


A sidenote, but it's interesting that your comment drifts into the 'fallacy of the now' which is the antithesis of the essay.

You posit Raspberry Pi (whether flippantly or not is unclear) as the 'next Steve Jobs' response... I love what Upton's trying to achieve, Braben is a hero of mine, and I cut my teeth on the BBC Micro in the early 80s, but it still stands that this is just the latest in a long line of interesting forays into the minicomputing sphere.

Jobs' post-NeXT vision was fascinating in that no steps were widely anticipated. No betas, no early release, just pure visionary product. I'm a linux guy, no Apple fanboyism here, but I see what pg is trying to reference. From nowhere, blow away your competition _and_ create new markets with singular product vision. The Anti-Lean. There's a Jedi aspect to this - the next one will arrive, we just have to hope (s)he doesn't turn to the dark side.


Huh? Free Internet is ipso facto not a business idea.


What about free email? Or free search engines? I'm not really suggesting that ad-supported Internet is a great idea, but it's at least a possibility.


No, it's better than that.


If Microsoft : Google :: Google : Facebook, I'm not sure that the frightening startup idea here is to replace Google. Don't get me wrong, I've also started to see some cracks in the G edifice; and Facebook has definitely begun to set their agenda, but doesn't that mean search in general is already waning and that the next big thing will be whatever replaces Facebook?

Great essay though, lots to think about. I really like the anecdote about bolting an iMac to the wall as well. I still have a TV, but its only purpose is to act as a large dumb monitor for my laptop, and I've been seeing a lot of this type of thing happening even among my non-hacker friends and family. I'd like to see an 'app-store' translation for drama as well, but it seems like tv / movies are not as amateur friendly to create as games. One person can develop a fun indie game, but it's nigh on impossible to create drama with a similarly small budget.

What aspiring drama writers / directors need are tools equivalent to game level editors to create their scenes without actors, cameras or studios. Packaged believable human CG characters may not be possible, but cartoon, animal, alien, etc. characters might be able to bridge the gap the way they do in video games and still tell a compelling story.


Even setting aside the inability to create life-like characters without serious investment, there are other problems, like animating them. But I'm reminded of the early days of the Ill Clan and machinima.

You would think a small team could have an animator to produce the needed animations, a few operators to control movement (and maybe voice), and a director. But I really haven't seen much movement in that direction since visual effects and YouTube became so accessible. And that space skews so heavily toward comedy that it's... well, maybe not annoying, but unfortunate.

Anyway, my point was that I can still easily see a future where people download actors and write, choreograph, then compile feature films in their bedrooms. And for a few programmers, I think it's a perfectly achievable goal.


> What aspiring drama writers / directors need are tools equivalent to game level editors to create their scenes without actors, cameras or studios.

I'm not trying to be intentionally dense, but that's just literature.

If you get rid of things like actors or cameras, you're no longer operating within the medium of television/cinema. And that's not specifically bad, but that's not specifically good, either.


So Pixar makes literature?


Pixar has actors and studios.


We're a YC company (www.post.fm) working on one of those ideas - email. We've encountered lots of headwinds, as PG mentions, and we've managed to stay true to our beliefs by keeping the team small and focused, and using our product ourselves to constantly remind us that what we're working on is better than Gmail.

It's not been easy, admittedly. We didn't come up with some small idea that could grow into an email replacement over time, or some add-on to Gmail to give us early traction, etc. We focused on replacing Gmail from day one. And that's no small feat, because who wants to use a minimum viable email service?

We also realize it's a huge bet, and we may be wrong. But at least we're building something for ourselves, so we can't be too wrong, and that thought keeps us going.

Can't say I agree with "Email was not designed to be used the way we use it now. Email is not a messaging protocol. It's a todo list."

Email was designed to be the electronic version of a letter - an async messaging channel. Not some to-do list protocol. But with increased volumes managing all that mail became difficult (I'm sure celebrities still struggle to catch up with physical mail). That's the problem we want to solve, by letting algorithms and better user interfaces help you manage your mail.

A to-do list is something different in my view, but naturally closely related (and should be part of the same application). A piece of mail often prompts you to create an associated to-do item, but today this functionality isn't integrated so we rarely bother.

Sure IM, Twitter, and To-Do list apps chip away at some of email's use cases, just like instagram is doing with facebook, but we're confident that email can be just as good if done right.

Now I just have to finish it and avoid thinking about my idea for the google-search-killer that would be oh-so-easy to try out ;)


Combining a better user interface with algorithms seems like the obvious angle of attack. But I think there might be a better one.

Automate the whole thing. Get rid of the interface.

You know those virtual assistant services people use to manage their mundane emails? Imagine if you automated all of that.

It sounds like a ridiculous idea. How can I allow a machine to reply for me?

But I would start with the simplest mundane use cases. If you can automate complete handling of even a small percentage of a busy user's emails, they will be delighted. On your end, you don't have to literally automate it on day one. You can semi-automate it with some human help and over time use the data to reduce the human input.

Over time as the user got more comfortable, they might not mind automating more of their emails.


We do in a way automate classification of mail in certain instances, which allows the user to correct the system in case we get it wrong, but it's important that the UI is intuitive and doesn't seem like a magical black box full of pretentious AI.


You talk so abstractly. Can you give some concrete examples of how you would be better than gmail?

In what ways will your service be faster than gmail? How much time will I save?


Unfortunately there isn't much I can say until we launch, but here goes: we've created a social network where email is the mode of communication, and everything is organized around people and organizations.

There isn't a single inbox as such, there are people, and groups of people, with correspondence between them. Like in Facebook or a CRM system.

What Gmail query do you have to do to see the correspondence between you and PG, assuming PG has 3 different email addresses? How do you find that email from some company you don't remember the name of?
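For illustration, the people-centric view amounts to keying mail by contact rather than by address, so that one person's several addresses collapse into a single correspondence thread. A toy sketch with invented addresses:

```python
from collections import defaultdict

# Alias table: several addresses resolve to one contact.
# These addresses are made up for illustration.
ALIASES = {
    "pg@ycombinator.com": "Paul Graham",
    "pg@paulgraham.com": "Paul Graham",
    "paulg@example.org": "Paul Graham",
}

def group_by_person(messages):
    """Group message subjects by the contact behind the sending address."""
    by_person = defaultdict(list)
    for msg in messages:
        person = ALIASES.get(msg["from"], msg["from"])
        by_person[person].append(msg["subject"])
    return dict(by_person)

inbox = [
    {"from": "pg@ycombinator.com", "subject": "Re: application"},
    {"from": "pg@paulgraham.com", "subject": "essay draft"},
    {"from": "alice@example.com", "subject": "hello"},
]
print(group_by_person(inbox)["Paul Graham"])  # ['Re: application', 'essay draft']
```

With that index, "correspondence between you and PG" is a single lookup, no Gmail query gymnastics required.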

These are just some examples of things that are truly intuitive on our system. I don't mean to sound snobbish, but it's a bit like trying to explain the first iphone - it's like a PDA without a stylus. But we all know the iphone is ultimately so much more. So you really have to try it for yourself.

I understand that claiming to be better than Gmail leads to lots of scepticism, and I have a huge amount of it myself every time I see a new email app. Luckily, no one has quite done what we're doing just yet, but it's only a matter of time.

All I can promise is: it's better for me and all my friends who have tried it. But that doesn't mean it will be better for you, or for some insanely heavy email user like PG, but I sure hope so.

Oh and in terms of speed, yeah it's faster. In fact we've just had to rewrite everything from scratch to make it blazing, moving almost all the logic to javascript and caching everything client-side. And did I mention it's awesomely beautiful compared to Gmail? :) But really that's the low hanging fruit - Gmail looks awful, more so every year it seems. Sure some people don't care about that, but I'm a horrible perfectionist, which is why we're probably the oldest startup still in stealth.


I don't know what your product is or does, but from what I just read, it doesn't seem to be very innovative.

You're building a faster horse instead of a car.

The solution lies in semantic communication. As long as you mainly communicate with text that someone has to read, analyze and understand, you're doing it wrong.


The problem as we see it isn't so much email's fault. The problem is that the number of contacts we have is growing exponentially due to the introduction of "white pages" (linkedin, facebook and the internet fall under this) - you no longer have to meet someone to find out their email address and email them.

Chat / IM is text and yet that isn't a problematic communication medium, because there is a small set of people you chat to. The problem isn't the number of emails, sentences or words; it's the number of people we are connected with. It's relationship management.

Sure some magical AI to read emails and books for you, summarize them and even reply for you may be great. Or organize your mail into folders for you based on clustering or some ontology. The problem is these solutions never work.

Our focus is on organizing mail around people, and in turn organizing people into groups, which is not some magical silver bullet, but still a radical improvement over an unorganized mailbox.


Thanks for the reply. It would be really cool, when you guys are ready to go public with the product, to post a youtube video of a split screen of your app and gmail; going over killer features that might get people to actually sign up and try it out.

I think many people need to actually see it first and then be motivated enough to play with your public beta. If you want lots of people to try it, keep the video as simple as possible.

You can't explain the iPhone with words to someone who has never seen it before. They just wouldn't be interested.

Good Luck.


I'm also working on the email problem, but on a different aspect. If, as PG suggests, you consider a new protocol, is there a way to share this work so others may build on it and contribute to push this big snowball, or do you plan to make it a proprietary technology?

I fully agree that SMTP is broken. It is so obvious when examining DKIM.


We're not creating a new protocol. Sure, SMTP and MIME are quite awful, but the network effects of these standards are so vast that even Google couldn't make a dent with Wave.

IMO a new protocol is something that can come later (we've explored that), but only an email service in a leadership position can make those sorts of changes to the standard, thus forcing others to implement them.

At the end of the day there can always be a more efficient protocol, but I don't think that's the biggest problem in email.


It all depends on the problem one is trying to solve. Since you are addressing the user experience, then indeed, the protocol is not in your way. It is, for the problem I'm trying to solve. Note that the barrier of network effect can be reduced by providing a gateway between the two communication domains (i.e. Facebook). I've subscribed to www.post.fm so I should be informed of your progress.


This all assumes conditions as usual. Which I think is a painfully mistaken notion. In an era of depleting fossil fuels, inferior ore quality of iron, copper and all the rare earth metals required to make modern electronics, water shortages and overpopulation, I doubt people will be worried about email overload in even 20 years. What's missing from this list are the real big issues of our time, such as quitting our reliance on fossil fuels, building a sustainable economy for the planet's resources, and creating a currency not solely based on debt expansion. In an economy with less energy surplus our problems will be more primitive than worrying about heart disease or faster computation or better search. Solving the shrinking energy surplus..., now that would be a really scary big startup of planet-wide implication.


"A New Search Engine": You don't need to compete head-on with the biggie(s) even if you want better search. You can vastly improve search by specializing, i.e. getting data that the biggie(s) don't yet have. Find something that is "universal" but where the necessary data is hard to obtain. Then innovate on how to obtain that data, rather than focus on how to search or match etc. Find a business model where you provide value for both those who generate/grant access to the data as well as those who use/search it. When you have that data, either build on it or sell it to one of the biggies.

"The Next Steve Jobs": Watching the TED talk by Cynthia Breazeal (http://www.youtube.com/watch?v=eAnHjuTQF3M&t=09m25s), it recently struck me that the next Apple will come from the robotics side. The engagement level and seamlessness you can achieve with the physical medium is on a completely different level from "devices". Even smartphones/apps require (at best) minimal cognitive facilities for interaction. Being able to I/O on a "reptilian brain" basis with body language, tone of voice, etc. could literally "change everything" by weaving intelligence into even the dumbest activity. People will not be able to do without such robots if they are well done.


> If you want to take on a problem as big as the ones I've discussed, don't make a direct frontal attack on it. Don't say, for example, that you're going to replace email. If you do that you raise too many expectations. Your employees and investors will constantly be asking "are we there yet?"

This is critical. I have tried it the other way, and struggled for these very reasons.

> I think the way to use these big ideas is not to try to identify a precise point in the future and then ask yourself how to get from here to there, like the popular image of a visionary. You'll be better off if you operate like Columbus and just head in a general westerly direction. Don't try to construct the future like a building, because your current blueprint is almost certainly mistaken. Start with something you know works, and when you expand, expand westward.

Eat small morsels, chew well!


Let me disagree.

Unless you plan to discover new ideas by mistake, you absolutely need a vision, and then find the way to get there.

If you start from the bottom (bottom-up), you'll constantly compromise technically, as lots of things are not yet possible to do.

If you start from the top (top-down), you'll "know" that it's possible to accomplish, and you'll only have to find out how to do it.

You have a far better chance of solving an enigma if you know there is an answer than if you don't. Visionary ideas make you believe the answer exists, which makes it much easier to accomplish.


Let us say your goal is building a search engine.

Clearly, you have to "know" that it's possible to accomplish and then find out how to do it. I don't think even PG was disagreeing with that.

When he says "don't have a blueprint", he is saying don't presuppose you know how to get from where you are to replacing Google as the de facto search tool. Instead, just make progress. It seems you agree with that.


>This is critical. I have tried it the other way, and struggled for these very reasons.

It's also the Lisp philosophy of building up from small functional pieces, rather than starting with an overdesigned top-down blueprint.


As a programmer, I intuitively understand the value of building up from small pieces.

I think what Paul is saying in the first part I quoted is different though. He is saying don't even say you are going to build the "grand vision" because it sets unreasonable expectations and defocuses you from making incremental progress.

The second part I quoted is about avoiding top-down design / blueprints and you are right on the money there.


> I've found myself nostalgic for the old days, when Google was true to its own slightly aspy self.

Can we please stop using that word? It trivializes the disorder and encourages the stereotype that engineers should be socially awkward.


Intel has pretty good automatic parallelization (https://www.google.com/search?client=opera&rls=en&q=...) in their latest C++ compiler, but it's $1000+. I remember at a php conference they simply compiled php with their compiler versus gcc, and everything ran significantly faster in various benchmarks.

For a while I was working on automatic parallelization, and wrote plans, white papers, etc. but at some point was introduced to the current methods of automatic parallelization and saw that there are some pretty good solutions out there right now such as Intel's C++ compiler.

Ideally, everything would be compiled with something along those lines at which point the baseline for everything else would take advantage of multiple cores, at least in the simple to advanced cases without additional direction from the programmer.

After all, so long as you're not eval'ing you know the entire scope of the program and you can link things up as parallel independent queues. It requires more storage during compilation, and likely longer compilation times, but the performance result can be dramatic.

It's disconcerting that something like Intel's apparently wonderful automatic parallelization C++ compiler isn't more popular, even though it's demonstrably better performance-wise than anything else I've seen.
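The "parallel independent queues" idea above is what an auto-parallelizing compiler like Intel's does at the IR level; the transformation can be sketched by hand. A minimal Python illustration (the compiler's real contribution is proving the iterations independent, which we simply assume here):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # Each iteration depends only on its own input: no shared state,
    # so the loop is safe to parallelize.
    return x * x

inputs = list(range(8))

# Sequential form of the loop...
sequential = [work(x) for x in inputs]

# ...and the parallelized equivalent: same results, iterations
# dispatched across a pool of workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(work, inputs))

print(parallel == sequential)  # True
```

An auto-parallelizing compiler does this rewrite (plus the dependence analysis that justifies it) without any direction from the programmer, which is exactly why it would make a good baseline for everything.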


There's a clear search engine improvement which I'd love, and my friends would love, but I think it's not yet technically possible. (the tech to do it exists, but it's a big engineering challenge to actually build it in an economically viable way).

A way to search (public, private) documents without leaking ANY information (beyond possibly "I did a search") to the operator. DDG's "trust us" security policy doesn't really go far enough. A mix-net anonymizing your query is the best option now, but it's insufficient. Just knowing someone in the world is searching for a specific piece of information is itself highly actionable in some contexts.

USG and other highly security conscious entities accomplish this by having the full search corpus onsite and running the searches on their own hardware. There's Google Enterprise (which was the most red-headed stepchild product I've ever seen from Google) too, and there are commercial ways to buy the crawl and run an engine on top of it, but this isn't really something even Fortune 500 companies do.

Basically, either a permanent "personal google appliance", potentially hosted in the cloud using some tricks, or a way to spontaneously instantiate a google each time you want to do a search.

Probably the way to do this is to write some interesting sci fi novel featuring the dangers of public search leaking, and also wait for some interesting prosecutions which use search data as evidence.

You could actually still do advertising this way, too; just requires some tricks.


The tech exists; it's called fully homomorphic encryption. But I concur with what olalonde says: average people simply don't care that much about privacy.


Hence why OP mentioned the need for a scifi novel and some court cases to create a market. I'm not sure that would work, it doesn't seem to have done so in other domains.


While there may be a niche for your idea, it is common knowledge that the average Joe simply doesn't care enough about privacy.


> So if Apple's not going to make the next iPad, who is? None of the existing players. None of them are run by product visionaries, and empirically you can't seem to get those by hiring them. Empirically the way you get a product visionary as CEO is for him to found the company and not get fired. So the company that creates the next wave of hardware is probably going to have to be a startup.

> I realize it sounds preposterously ambitious for a startup to try to become as big as Apple. But no more ambitious than it was for Apple to become as big as Apple, and they did it.

I thought about this before, and I think building a hardware startup like Apple, or a systems software company like Microsoft, is an order of magnitude harder than when they were founded. Let us take Apple. When Apple was founded, there was an ocean of people who did not have a PC in their homes. Big, uncharted market. When you hear Don Valentine (Apple's investor from Sequoia) talk about it, you can see how they did not care about anything but the market. Do we have that kind of market today? Maybe. At the moment everyone is occupied with their iPads, phones, and PCs.

Technology. Today the hardware is so complex that only the largest companies in the world can compete. It is not a coincidence that only Samsung can compete with Apple in mobile devices. Take a Texas Instruments or Qualcomm chipset and you will face a complexity barrier at every corner. You won't hear someone saying "my co-founder designed a chip so efficient it will be a game changer." Anyone remember the JooJoo before the iPad was released?

So what could be done? I think it comes down to playing on the above two variables. For a new hardware/systems startup, it must target uncharted territory, i.e. introduce (mobile) computers to an area of use where it has never been tried before, and make sure everyone in the world needs it. (Like that thermometer startup, except find a wider use case) Use existing cutting edge technology, and build your new technology upon them (e.g. I would probably start with a top notch chipset + android + add new, hard-to-replicate technology.)


100% agree with you.


Preventative diagnostics as Paul describes in #7 will really be the future - it's barbaric that we can only make a diagnosis when the disease has already manifested (in most cases). There are a few players in this space (Scanadu comes to mind), but it seems like nano biosensors and the like are still very new technologies. Correct me if I'm wrong.


I'm not optimistic. I think pg's discussion of automatic diagnosis is a bit ill-informed.

For example, the recent trial that showed screening CT scans reduce mortality in lung cancer cost 250 million dollars to run. Even then, nobody is sure if it is even a cost effective measure.

It is difficult and costly to produce a screening test. It also takes many years to validate. Then there is the problem of what to do with the results - for example, if you are diagnosed with possible pancreatic cancer, the treatment is a massive operation to replumb your upper abdomen. 5% of people die because of the surgery alone, and the surgery costs a fortune.

Unfortunately a simple relationship like "find cancer early = good outcome" does not exist. There are incredibly high barriers for a startup developing diagnostic tests for screening. There is a good reason why the only people doing cancer screening studies are large government funded research consortia that can afford to wait 10 years or more to prove a result.

The example of Bill Clinton is misapplied - cardiovascular disease is really common, maybe 30% or more of people will get heart disease in western countries. We don't need to have a cool machine to screen for it, we need to risk stratify people with a few simple tests (i.e. ask them if they have a family history, check their cholesterol and blood pressure) and improve their risk factors (eat better, quit smoking, exercise, lower cholesterol etc). But then you are talking about modifying human behavior...


Is that the case for all cancers though?

I live in Australia, and we are indoctrinated to check our skin for moles that may be cancerous. There are claims that the high rate of early detection leads to higher survival rates[1].

My understanding is that early detection of bowel, breast and prostate cancer is relatively easy and produces good outcomes too.

There are radical ways to do early detection (sub dermal computers continually monitoring, etc etc) but there are ugly hacked solutions that just might work, too.

How much would it cost to build a toilet with a bowel cancer test kit built in?

[1] http://www.cancer.org.au/policy/positionstatements/sunsmart/...


Not sure what you are arguing here... if you are arguing that screening for cancer can be useful and saves lives, then I agree with you!

If you are arguing that a start up could have come up with a screening program for bowel cancer for example, then I don't agree with you for the stated reasons.

Also: Prostate cancer screening is not recommended (http://www.cancer.org.au/File/PolicyPublications/Position_st...)

Breast cancer screening is not as useful as you would hope either. (http://dx.doi.org/10.1002%2F14651858.CD001877.pub4) 2000 women need to be screened for 10 years to save 1 life, with 200 initial false positives requiring biopsy. Also, I see lots of people diagnosed with breast cancer despite having mammograms.

Radical ways to do early detection are fine, but you have to prove that it works and that requires a lot of people for a lot of years and a lot of money.

Building a toilet with a bowel cancer screening kit built in is a form of behavior modification to improve uptake, and that is a great area for start ups to get involved in. pg was talking about something different however.


pg was talking about something different however

See, I don't think he was. "Ongoing diagnosis" doesn't have to mean new tests if you can make the existing tests radically cheaper and easier. Given that existing behaviour is always hard to modify, it would seem sensible to try and piggyback on existing behaviour.

Toilets with cancer sensors that would check for bowel cancer every time you go would be about as "ongoing" as diagnosis can get.

Maybe toothbrushes could be modified to check for viruses in saliva.

I'm sure there are other easy tests that could be done if you have blood. There are obvious ways that could be integrated into everyday life (for women, anyway).

I've read some studies that showed dogs could be trained to smell cancer. Maybe people would pay to have their clothes sniffed (!) when they have them sent to the laundry.

I've previously suggested (on HN) the idea of payment companies partnering with food outlets and exercise software vendors to log the calories you are buying. That's a good input into diagnosis software too...

I'm sure there are a lot of other ideas - look for low hanging fruit and you can do radically better than the status quo.


System-on-chip PCR machine? The lowest-hanging fruit would be miniaturization and better engineering of existing diagnostic machines. At the moment the medical diagnostic market is filled with overly expensive devices that could easily be made cheaper and more efficient (somewhere with a favourable patent/legal regime so you don't get sued into oblivion).


For insurance companies early detection of chronic illness can lead to a longer lifespan when old, which costs them more. Better you die fast after a certain age?


I look at medical diagnostics as though I'm looking at my production servers (a poor analogy in many ways, but let's go with it).

Watching charts on a production server, you see patterns over time - e.g. you can see a weekend or a holiday quite easily. Keeping an eye on these, you can see when things start hitting bottlenecks and you need to do something to improve the situation.

When I get my bloodwork checked, it's once a year - can you imagine checking in on your production servers just once a year?

I live in the Seattle area - Vitamin D deficiency is a huge (but not well known) problem in this area. I would pay real money to see a chart of my Vitamin D levels on a daily basis (without needing to draw blood!). I can then adjust my supplement dosage as needed.

I have a food allergy that is slow to flare up and slower to go away again - being able to check various levels of things in my system to track against food intake will help me find out exactly what foods cause what issues. The US food industry would pay a fortune to have this ability not be available!


There are a lot of players in this space, but you're right - it's still in the research space with not very many viable products yet.


I think that the hard part of tackling these ambitious projects is (often) not the actual engineering but rather making the ideas popular and fighting against the status quo. Another hard part is the difficulty of the transition from how we do things now to the new way.

So most of these things he mentions, people are working on them, or something similar, or even have functional software. That software just isn't popular. Not because it isn't useful, but because it didn't catch on.

And a lot of these ideas aren't really useful until they reach a critical mass of users, which makes it even harder.

The big ambitious thing I wanted to mention was DONA (data-oriented networking, or http://en.wikipedia.org/wiki/Content-centric_networking). You might be able to combine that with some type of semantic knowledge storage and engineering along with a type of e-democracy. People have working examples of these things; it's just hard for people to pick them up, start actually using them, and then mention them to others for them to become trends.

Before (or instead of) human-controlled knowledge engineering, we may see Google (http://mashable.com/2012/02/13/google-knowledge-graph-change...) (or possibly some startup) come out with a Watson-ish system that builds huge knowledge graphs by actually comprehending the semantics of the web pages it spiders and then lets you query them more naturally. (I guess that type of system does exist; it just didn't catch on, maybe because it wasn't quite up to human-level comprehension, or didn't become popular for whatever reason.)


One of the benefits of something like Y Combinator is that if you have one of these ideas, and you can actually make it work, YC and its associated network of people can be massively helpful in spreading the word.

In fact, that may be the major benefit of YC.


The KDD cup had some entries in 2006 on this idea. I found one of the papers by accident yesterday. It looks quite interesting, might be worth checking out. www.daimi.au.dk/~ifrim/publications/kdd2006.pdf


"Replace email" and "replace search" are two signs that Google is failing at its core strengths, and I agree with this. They have made many mistakes in the past few years and there is a definite opportunity in taking them head on in both of these markets.

With number 6, if you are going to break it up across cores you may as well break it up across computers and put it on the network. Hence MapReduce, etc.

With #7, our bodies are very good at telling us when something is wrong with warning signs. We can't afford health care as it is today, let alone with the system being clogged up with healthy people paranoid about possibly being ill.


our bodies are very good at telling us when something is wrong with warning signs

No, not really. It does warn you, but in many cases way too late, and then you lose the opportunity to fix the problem with a fast, cheap, and safe procedure. If you want to save money, early diagnosis is key.


I have my own to add which I'll tackle if I ever get smart enough. Code is horrible right now. The problem is that code is written linearly, when in our minds it is a graph. It's usually a bad sign when our minds see things differently than our computers do. I think if we could properly abstract the concepts, and change both our linear list of functions and our unsorted list of files to a single graph structure, we could understand our software so much better. I guess I'm thinking of UML diagrams with code, but in a way that feels natural to code in the first place, even for a beginner, not as a commented afterthought.
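The graph is already latent in the linear text; as a toy illustration of the point (the `SOURCE` snippet and function names here are made up, and this only catches direct calls by name), a Python sketch using the stdlib `ast` module can recover call-graph edges from an ordinary flat file:

```python
import ast
from collections import defaultdict

# A made-up module to analyze.
SOURCE = """
def load(path):
    return open(path).read()

def parse(text):
    return text.split()

def main():
    words = parse(load("data.txt"))
    return len(words)
"""

def call_graph(source):
    """Map each top-level function to the bare names it calls (a crude sketch)."""
    tree = ast.parse(source)
    graph = defaultdict(set)
    for fn in [n for n in tree.body if isinstance(n, ast.FunctionDef)]:
        for node in ast.walk(fn):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                graph[fn.name].add(node.func.id)
    return dict(graph)

graph = call_graph(SOURCE)
print(graph)  # nodes and edges a graphical editor could render
```

This is nowhere near "coding in a graph," but it suggests the structure is recoverable; the hard part the parent describes is making the graph the primary representation rather than a derived view.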


I just read an article today or yesterday about the reason that VIM's cursor keys are h,j,k,l or whatever.

I think the reason that code is ASCII text is pretty much the same reason VIM's cursor keys are h,j,k,l. All they had was a terminal so it had to be ASCII.

If you think about it, all information is structured and multidimensional. But people can only make one sound at a time, so information must be serialized to be communicated.

There are some easy starting points for understanding why we should get away from pure ASCII source code. One of them is to try first coding a complex UI with pure text and then build the same UI using a graphical editor with widgets.

Another one is this: just answer this question -- why can't I represent division in my source code using a numerator over a divisor the way that we are taught to write mathematics? Should we continue to pretend that we are required to edit our code on terminals from 1979?


As you suggest, this is definitely a legacy issue of being tied to the teat of a 1980s terminal. People still like to edit in simple editors.

I've viewed it as a Model-View-Controller problem, where everyone is attempting to merge the Model and View into one. Technically your editor should be worrying about the View. It can draw division however you configure it to. But when saving to a Model source file, that should be portable, without any (or very little) formatting embedded in it (e.g. MultiMarkdown).

You should be able to customize your editor, much like swapping out a CSS file on a website, and skin it to your desires.


We talked about exactly this at a Camp Smalltalk almost a decade ago. If it had flown, at least one computer language (Smalltalk) could've allowed each programmer to have their own customized code formatting (View) while the code was actually stored as the Abstract Syntax Tree nodes.
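The same store-the-AST idea can be sketched in Python rather than Smalltalk (a toy illustration, not the Camp Smalltalk tooling; `ast.unparse` needs Python 3.9+): two differently formatted sources reduce to the same Model, from which any View could be rendered.

```python
import ast

# Two "programmers" write the same function with different surface formatting.
a = "def area(w,h):\n    return w*h"
b = "def area(w, h):\n    return (w * h)"

# The shared Model: the abstract syntax tree, which ignores formatting.
tree_a, tree_b = ast.parse(a), ast.parse(b)
print(ast.dump(tree_a) == ast.dump(tree_b))  # identical structure

# One possible View, rendered from the Model; a real editor could
# render a different View per programmer from the same stored tree.
print(ast.unparse(tree_a))
```

Storing `tree_a` instead of the text is exactly the Model/View split the parent describes: formatting becomes editor configuration, like a CSS skin.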


Have you seen the intentional domain workbench?


I've been thinking exactly the same thing! My workaround so far has been to use IDEs that have some project management features, and separate the code into separate files.

For example, SAS Enterprise Guide lets you create a project flow like this: http://blogs.sas.com/sasdummy/uploads/egparallel4.png

Note some of the advantages: nice code structure, easy (minor) parallelization, easy code segmentation, ability to examine intermediate data, etc.

I haven't found anything nearly as useful for SQL or more general languages. I would love to see something similar in Clojure/Python.


Nice! I've started using files to separate logic. I recently went from 3-tiered (GUI/code/libraries) to 4-tiered (GUI design/GUI action code/data structures and algorithms/libraries) and added folders for elements (like a custom-made back arrow image) and IO samples to run my programs on. Structuring my software like this has helped, but it's far from letting me see even a simple algorithm in the graphical form I picture in my mind, since that algorithm would all live in a single section of my structure. I'll have to look into the separate-files idea, though perhaps I'll build my own custom code editor; I'm skilled in things like that, so perhaps I could pull it off.


It's not ideas which make billionaires, and it's not a lack of ambition which keeps these ideas from being reality. It's that it takes killer execution with a huge amount of luck. People think that they can predict the future, but they can't. Capitalism triumphs because it lets a million monkeys do a million zany things, and when a few become mega hits those particular monkeys are hailed as visionaries.


Let's add one more

    8. Replace prisons
Prisons do not make criminals back into normal people. Prisons are expensive. Prisons are a big market. Prisons have stayed the same for the last few centuries. What is a better way to punish, and a better way to bring criminals back to civil life?


Don't all governments have a monopoly over their country's prisons?


I think in the U.S. some prisons are private enterprises.


This is highly dependent on the locale. Some places have outsourced prisons, usually with poor results. Most have not.


http://www.youtube.com/watch?v=szNLMtgI7hU

Serco appear to run privatised prisons in the UK and Australia, so I guess not.


Monopsony might be the better word here.


"The popular image of the visionary is someone with a clear view of the future, but empirically it may be better to have a blurry one."

The future is uncertain, because each person is a variable and chaos is inherent in nature. However, with the sun as my witness and the earth as my ally, there is nothing that will stop my effort to liberate all beings from suffering through my startup. It's all I got left in the world, there is nothing else that matters to me. I am 22 and there is no job I want in the world, so I will create one through my ideals of universal compassion and scientific method. I will post on HN soon, I hope people understand my vision of leading Homo sapiens to become Homo universalis, that may be the only way we can actually have a type 1 or 2 civilization.


Now if I accidentally put the cursor in the wrong place, anything might happen.

This has been my experience with Google search and Gmail (the Google products I use most). It's really frustrating that sometimes I'm handling them the way I'd handle a Samurai sword. That's not how it should be.


Oh man! Replacing email is my personal Holy Grail. The ongoing escalation between spammers and spam fighters is proof enough that it is a system that has lived beyond its time.

I first became aware of PG when he was working on Bayesian spam-fighting techniques, circa 2002. Email already seemed absurd to me. I was thinking of writing my own email client, but I would have preferred to get on whatever email-killing bandwagon there might be on the horizon, so I sent him an email asking if he knew of such a successor. He wrote back and said no, he was not aware of such a thing.

That we are still using SMTP in this day and age just boggles my mind.


"Google used to give me a page of the right answers, fast, with no clutter. ... And the pages don't have the clean, sparse feel they used to."

If you disable Javascript and cookies for *.google.(tld), you'll be greeted with Google circa a few years ago: http://imgur.com/LDBLk .


Will it stop replacing my technical terms with non-technical synonyms, and randomly leaving out my search terms? Because whatever they've done to the UI, it pales in comparison to those two things.


This, absolutely. It's fiddly to get Google to perform well now. Quotes everywhere and inability to copy addresses from results ... bleurgh.


Use "verbatim" under "Search tools".


This is the most wonderful thing anyone has ever told me.


Maybe it's because of my search history, but I find Google gives the best technical results... and actually, it's when I search for something that has a non-technical meaning that the technical results come to the top.


I think the percentage of users who even know what Javascript is is so negligible that they don't even matter.

And people who don't know will look for a new search engine rather than learn how to disable Javascript.


This is great... thank you!


There are very few successful futurists in the literal sense.

The long bets are not on the current startup ideas which will still mould the world 5 years from now. YC's view of investing in those with the wherewithal to effect change - not those who necessarily have the answers to hand - bleeds through the ambiguous edges of this essay.

pg's reluctance to put his full belief behind a specific idea, given the evanescent nature of the current concept-du-jour, is good guidance - tackle the extant problems and retain half an eye on the bigger picture.


The example of email as an "irresistible force vs an immovable object" really resonates with me. There are many things in that class:

- The way we communicate (pg gave a good example of this one)

- The way we write software (text files? really?)

- The Operating Systems we use (all the major design decisions were made in the '80s)

- Computer input systems (are keyboards really a global maximum for efficient control?)

Eventually, these will definitely be replaced. Why not make "eventually" now? We won't be running Windows or Linux in 2050. Why not be the person who invents what we ARE using?


"2. Replace Email"

Google tried this with Wave and failed. I wish they had succeeded. I think they should have spun it as "Email 2.0" and made the transition easier.


I think we'll see it again, though not necessarily from Google. What failed about Wave was the way the project was organized and pitched to users, not the idea.


Personally, I don't see what Wave has to do with email; it's a completely new protocol that wasn't integrated with email in any way, and that's why it failed. They should have done this from the start: http://code.google.com/p/wave-email/wiki/Outline

Google Wave was also horribly complex. Email needs to be simplified, not just added to, which Google keeps doing with Circles and other stuff they keep bolting on to Gmail. Reminds me of how MS used to do stuff.


I actually loved Wave; I think it would have succeeded had it been tied in to your email account. I had to remind people to send me messages @wave.google.com or whatever it is. If all my Gmail messages automatically went to Wave and vice versa, I think it would have succeeded.


The CEO of that company, the "next Steve Jobs," might not measure up to Steve Jobs. But he wouldn't have to. He'd just have to do a better job than Samsung and HP and Nokia, and that seems pretty doable.

Some might say that Amazon is already doing better than Samsung, HP, and Nokia.


[3] Roger Bannister is famous as the first person to run a mile in under 4 minutes. But his world record only lasted 46 days. Once he showed it could be done, lots of others followed. Ten years later Jim Ryun ran a 3:59 mile as a high school junior.

One of the great stories of the last 100 years. There are many recountings of it, but "The Perfect Mile" is as good as any. It was supposedly claimed to be impossible, and that any person who broke the 4-minute mile would likely die from the effort. Bannister also wrote his own book about it.


I wonder if it bothered pg that two consecutive paragraphs in the Tactics section started with "Empirically,".

I wouldn't let that slide, because it triggers pattern matching not relevant to the subject at hand.


That was intentional. That was anaphora.


I totally agree regarding the decline of universities. In particular I think the research side will be the first to shift away from universities; at least with education you are essentially paying for a brand name, which has inherent value. With research, the principal investigator writes the grant to pay their own salary, the salaries of their graduate students and postdocs, and their equipment. The university then takes almost all of the scientist's IP and charges "indirect costs" equivalent to more than 50% of the grant to supply "Facilities and Administration" - which is what, exactly? Lights, building space, and a whole lot of bureaucracy.

Already, some really innovative initiatives are getting around this problem. The Pasadena Bioscience Collaborative offers lab space and equipment for ~$1,000 per month (no contract required!), and the EMBARK program administers scientists' grants and encourages them to outsource experiments to core-facility specialists (while providing access to a basic shared lab for those experiments that can't be easily outsourced). Both initiatives offer ways for scientists to avoid high indirect costs and burdensome admin - and importantly, the scientists retain 100% of their IP!

These initiatives are the way of the future - it's hard to see how big, inefficient universities will be able to attract the top talent for much longer.


I think Paul Graham and Y Combinator have done some great things for the world. However, I do disagree with some of the things that Paul Graham says, and this article is one of the points I disagree with him on.

5. The next Steve Jobs: Why does PG seem to think that there has to be the "next" Steve Jobs? Is there some sort of pattern to be recognized from the Apple story that a startup can emulate to be successful?

Wasn't Apple a large company already, even before Steve Jobs came back to it? Though Apple at that time was in dire condition, it wasn't exactly a startup. How could some hardware startup of the '90s, '00s, or this decade do what Apple has done?

I know people think Apple is constantly inventing something "new" and always needs something "new" to survive, but I don't think that is the case. I bet the iPhone wasn't really created in just 2 years; I am sure Apple had been working on it for a long time. That is a feat that is much more difficult for a startup: doing R&D for a long, sustained period and paying the bills with some other product. PG had the following quote:

PG: "well, and I asked him if the people now running the company would be able to keep creating new things the way Apple had under Steve Jobs."

I think Steve Jobs had a particular vision for his products for a long, long time. He might have thought about functions of the current iPhone and iPad back in the Newton days. Steve might have had 3-4 products that he wanted to create, and that's it. We don't have enough data to conclude that Steve would have kept pumping out "new" products if he were alive, like the iCar.

A lot of Apple's success has been through luck, timing, and making the right gambles. Jobs couldn't have brought Apple back without the help of numerous people, and the above mentioned.

I think if PG seriously wants to find the Apple formula in a start up; he may as well start playing the lottery. Eventually with enough time he will find one. But the odds don't look so good.

I am a fan of his writing, but I found this article to be disingenuous at best. A lot of the things we use today weren't formed just by startups; they were formed by sole inventors, governments, large corporations, and random hobbyists.

You can change the view of your world to include more items than startups.


Also frighteningly ambitious is the prospect of any meaningful startup driven disruption in the energy industry. Which is a worry considering how desperately disruption is needed.


Electric aviation and space launch. See our website, http://electrictakeoff.com, for details. We are looking for collaborators, co-founders, volunteers, etc.

I agree with PG that ambitious startups are better. However, I think engineers should focus on sectors where the customers are dissatisfied. Aviation seems like a good candidate. Most people dislike oil dependence also.

Tactically, we are aiming to fly advertising banners as a short-term path to revenue. The next market would be air freight with prices lower than diesel trucking. Passengers would come last after the technology is proven.


Guys, please do something that will replace Word and Excel. These tools were good in the '80s and '90s.

One of the main reasons I do not like to work for corporations is Word doc attachments hell.


On the other hand, I'm sure you'll love the Word Replacement attachment hell.

It's not the tools that need replacing here, it's the process.


In regard to email, Google Wave took a stab at changing the dynamic of communications, but that project has now been shelved.

The difficulty for a new "email" replacement is overcoming the hurdles of ingrained habits - see http://zenhabits.net/ for more on that psychology, or even www.iwillteachyoutoberich.com (people see the inbox as a to-do list that they grind through in mechanical fashion).

I think the way to tackle the problem of email is to put a new UI layer that wraps messaging into context while piggybacking off the traditional email protocol. For example, with my own work email we tag our subjects with TASK, FYI, MEETING, FOLLOWUP, FEEDBACK, etc., indicating what action we need done; it helps with searching and labels in Gmail. Take that "context" element and combine it with, say, www.trello.com's UI concept of boards/cards in a visual dashboard style, and it would be stellar.
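The subject-tagging convention described here is simple enough to sketch in a few lines of Python (the tag set, regex, and bucket names are illustrative, not any real product's behavior):

```python
import re

# Hypothetical tags, matching the convention described in the comment.
TAGS = {"TASK", "FYI", "MEETING", "FOLLOWUP", "FEEDBACK"}

def bucket(subject):
    """Route a tagged subject line to a board-style bucket, else 'INBOX'."""
    m = re.match(r"\s*([A-Z]+)\s*[:\-]", subject)
    return m.group(1) if m and m.group(1) in TAGS else "INBOX"

print(bucket("TASK: ship the release notes"))  # TASK
print(bucket("Re: lunch?"))                    # INBOX
```

A dashboard UI would then render one Trello-style column per bucket, with untagged mail falling through to a plain inbox column.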

I just imagine something like this on an iPad: I'm just swiping/sliding through my different buckets of communication (sort of like Flipboard). KILL THE INBOX :)


  6. Bring Back Moore's Law
hmmm, maybe "developer cycles are more valuable than machine cycles" really is getting out of sync with current conditions?

I didn't connect stalled clock speeds to the web being slow until reading this. One reason is that web serving is usually embarrassingly parallel, as each client is independent. Some other causes are increased client-side JS; assembling many services (e.g. Amazon); and increased usage with resources not keeping pace. But pg's point is surely a factor too.

Bloated frameworks, and software with many layers (some quite unnecessary) were facilitated by increasing clock-speeds - but at least it's possible to get rid of them. Also, work has been done on JS JIT compilation. Server languages are getting faster too.

This may seem like a tangent, but bear with me: Clayton Christensen (who coined disruption) makes an interesting point about "integrated" (closely-coupled, interdependent) vs. "modular" (clean interfaces enabling mix-and-match) architecture.

The advantage of integration is that you can make it perform fast - you can optimize "performance" according to a variety of definitions (e.g. smaller, lighter, more memory, less battery power, etc.). This wins when customers value increased performance - Christensen describes this willingness to pay more for performance as the product "not being good enough", because once it's good enough, they won't pay for more of it.

The economic advantage of modularity is you can develop fast, you can create and customize more quickly. Part of this is reusing components (e.g. buy off-the-shelf or open source, or reuse internally) - this wins when customers value that over performance. This usually doesn't happen until performance is "good enough": if it's too slow to use, who cares how configurable it is?

An example is iPhone/iPad (integrated) vs. Android (modular). The iPhone/iPad is fast, light, slim, long battery-life, better resolution, smoother animation etc. In contrast, there are many different Android devices, with different prices, displays, shapes etc, and many have customized UIs.

Christensen's fascinating point is not that one approach is better than the other, but that they change over time, cycling back and forth. It depends on what the market wants at the moment: what will customers pay for more of?

Following the example, once smart-phones become "good enough" in performance, customers will start to buy on other factors, such as price. This seems to be starting to happen for smart-phones; but not yet for tablets.

In relation to pg's observation of server slowness, it seems that formerly, performance was good enough, and so the developers that were most successful favoured mix-and-match layers, because they were faster to develop and easier to customize. But now, performance is a problem... which may mean that developers who favour integration will be most successful. It's not black and white, but an interesting perspective.


Bloated frameworks, and software with many layers (some quite unnecessary) were facilitated by increasing clock-speeds - but at least it's possible to get rid of them. Also, work has been done on JS JIT compilation. Server languages are getting faster too.

It's instructive to note how much more compact alternative operating systems have been made compared to the mainstream ones. Full fledged BeOS systems used to weigh in at just a couple of hundred megabytes, with feature sets comparable to OS distributions taking several gigabytes. Also note that users of Symbolics Lisp machines used to be under the impression that the company had hundreds of programmers, and were amazed to learn that there were just 8.

A bit of bringing back Moore's Law could be done by getting rid of bloat. Perhaps the advent of the Raspberry Pi will allow this to happen.


RPi is the antithesis of bloat removal: it is desired because it brings enough horsepower to shift "what you want" + "the everyday bloat that needs", in a small package.

Things like ddwrt on a netgear router are anti-bloat. Usable networked Linux in a 4MB image with 8MB RAM.


From the standpoint of embedded devices, this is true. By desktop OS standards, not as much.


Today the OS has to support way, way more hardware than BeOS and Symbolics ever had to. BeOS made their own boxes, right? Symbolics certainly did, and I doubt they had a TCP/IP stack built in...


Be did initially make BeBoxes, but BeOS later ran on off-the-shelf x86 hardware (provided it had drivers, of course, but basic graphics worked on pretty much anything of the era). BeOS R5 at least also had a TCP/IP stack, and while it wasn't highly regarded (Be was rewriting it when it went under) it more or less worked.

By this argument, OS X should be a lot lighter than Windows, which it isn't.


That is an interesting point, but I don't really buy it. According to what he said we should make the integration tighter -- favor simple code over frameworks -- but my guess is that this breaks down when a service can no longer be run on one computer or, as is the current problem, one processor core.

At that point we need more abstract abstractions. My guess is that much of the problem stems from low-level threading, and programs would be easier to write if we could use async message passing between threads rather than shared memory. I know the performance will suck compared to threads (say only 50% as efficient), but we'd only need a single iteration of Moore's Law to make up for it, and I believe it would scale much better where that matters -- in the brains of the developers who have to write these things.
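The message-passing style being proposed can be sketched with the Python stdlib (a toy illustration of the programming model, not a performance claim): each worker owns its state and sees only messages, never shared mutable memory, so no locks are needed.

```python
import queue
import threading

def worker(inbox, outbox):
    """Sum every number received; a None sentinel means shut down."""
    total = 0  # state owned by this thread alone
    while True:
        msg = inbox.get()
        if msg is None:
            outbox.put(total)  # report final state as a message
            return
        total += msg

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in range(1, 101):
    inbox.put(n)       # communicate only by sending messages
inbox.put(None)        # sentinel: no more work

t.join()
result = outbox.get()
print(result)          # 5050
```

The same shape scales from threads on one core to processes on many machines, which is the "scales in the developer's brain" point above.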


A recent blog post that didn't get enough love here: (http://www.ribbonfarm.com/2012/03/08/halls-law-the-nineteent...) seems to suggest that we are probably at a point where the new "Moore's Law" (or new new Hall's Law) is soon to be discovered.


For big ideas that others clearly understand to be problems and of which investors are afraid to get in on there always is... crowdfunding! I wrote about the pros and cons of raising money through crowdfunding to pay for your startup here: http://news.ycombinator.com/item?id=3687835


7. Ongoing Diagnosis

I always thought of creating a wearable device that can report the body's condition in real time: a device that tests the blood for haemoglobin, essential minerals, sugar, cholesterol, urea, water, etc. The device could be made safe enough to be inserted just below the skin, and made to transmit the information via radio waves to a receiver outside the body, where you can read it. We could write a program for the receiver that processes all the information, compares it to healthy values, and based on that provides real-time advice. E.g., when you are dehydrating, the receiver will say, "Hey dude, drink some water quickly, or else you'll faint in 30 minutes!" "Hey dude, you should get some Vitamin B/C/D/E/K." If the circulation of blood slows down it could say, "Hey, you've not exercised in ages. It is time to exercise." It will redefine how we take care of ourselves. Caring in real time!
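The receiver-side logic being imagined is essentially threshold checks against healthy bands; a toy Python sketch (the ranges and metric names are made up for illustration, not medical reference values):

```python
# Illustrative healthy bands (low, high) -- NOT medical reference values.
HEALTHY = {
    "glucose_mmol_per_L": (4.0, 7.8),
    "hemoglobin_g_per_dL": (12.0, 17.5),
    "hydration_pct": (55.0, 65.0),
}

def advise(readings):
    """Compare sensor readings to their healthy band and emit plain-language nudges."""
    tips = []
    for name, value in readings.items():
        low, high = HEALTHY[name]
        if value < low:
            tips.append(f"{name} is low ({value}); consider acting soon.")
        elif value > high:
            tips.append(f"{name} is high ({value}); consider acting soon.")
    return tips

tips = advise({"glucose_mmol_per_L": 3.2, "hydration_pct": 60.0})
print(tips)
```

The hard parts are of course the sensor and the validated reference ranges, not this loop; but the loop shows why continuous readings beat a once-a-year snapshot.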


E-mail isn't perfect; however, as a transport system it is not meant to solve the problem you described in "Replace e-mail".

If someone wants to have a meeting with me, they might send me an .ics attachment[1] that will work with almost ANY software that I have on my computer. Since most meetings are in fact in-person and something that we have been doing as a civilization for some time, the semantics are well defined and easy to model.
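For the curious, a hand-rolled sketch of what such an attachment contains (the UID, times, and summary are invented; real .ics files per RFC 5545 also need proper escaping and time-zone handling):

```python
# Build a minimal iCalendar VEVENT as a plain string. RFC 5545 mandates
# CRLF line endings, which is why we join with "\r\n".
def make_invite(uid: str, start: str, end: str, summary: str) -> str:
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//toy//EN",
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTART:{start}",
        f"DTEND:{end}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ]
    return "\r\n".join(lines) + "\r\n"

invite = make_invite("123@example.com", "20120315T170000Z",
                     "20120315T180000Z", "Coffee meeting")
```

The whole format is line-oriented text, which is a big part of why nearly every mail client can produce and consume it.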

A task list will always be harder to model, but not impossible, and there are certainly a lot of ideas on the topic. As long as it is an open standard this sounds great, but I wouldn't like to see my tasks locked into either a proprietary format or the cloud[2].

If you are going to replace something that is standards-based, it should be with a new standard of some sort. Not code -- that's an implementation detail -- but a standard.

1. http://en.wikipedia.org/wiki/ICalendar

2. I'm too old for this shit, and not everyone lives in the cloud.


I started off reading this essay a bit carelessly -- it seemed that pg was saying that these ideas are too nuts, and YCombinator would never back something like that. I was about to write a post completely disagreeing.

Our company, for example, is building a new type of social search engine that will in many cases replace email for messaging, AND will long term have a third-party platform that will enable websites to take advantage of our single sign-on and be instantly social, while safeguarding privacy. We have a patent application on this (yeah, I know...)

Is it too much to bite off? Maybe. But look at our usage already, after a year. http://qbix.com

And lastly, I am very much hoping to build value, and not just sell quickly. I haven't read Steve Jobs' biography yet, but I have heard he refers to such ambitious people as "real entrepreneurs". I don't know... all I know is, I am driven to accomplish this. And so far we've got some positive results.


How do you "just say you're building todo-list software" and not get laughed out of the room? Investors say the space is too crowded, and engineers joke about it being one of their first classroom assignments. And even if you're making traction on the vision to replace the inbox, Y-Combinator's partners will turn you down.


Good reason not to depend on VC funding


Actually, we bootstrapped for 2 years. We had launch coverage in Lifehacker, Mashable, The Atlantic and Entrepreneur magazine. We had over 10,000 users the first month. We got to the point where we needed some funding to go all-in, but everyone we spoke to was really burned out on the task management space. Based on this experience, I'm really interested in how someone can take on the task management space in a way that gets noticed.


Many startup ideas are about extracting a few more dollars from the end user - what kind of annoyance can we solve today? - but I think it's important to think beyond the scope of consumer products to get to the real game changers.

A lot of the value being created in the digital sphere right now revolves around collecting information about people and providing it to third parties, who in turn use it to solve problems (and collect even more data). I believe these services will change our lives the most.

For example, major innovations in the near future might revolve around creating 100% safe neighborhoods through smart surveillance. People are rapidly becoming accustomed to being tracked all the time, anyway.

Technological progress and major societal changes go hand-in-hand. I think whoever can best envision what those changes will be - and how to profit from them - will become the next Steve Jobs.


"...creating 100% safe neighborhoods through smart surveillance. People are rapidly becoming accustomed to being tracked all the time, anyway."

This comment shocks me. I'm not sure what to say. Are you being sarcastic?

An apt metaphor for people becoming accustomed to surveillance is slowly boiling frogs to death. And what is your definition of safe? If it were up to me, nobody could wear the color black at night -- too dangerous! Also, drinking too much at the club is not acceptable in my society!

I'm being silly, but where do you draw the line? It's a slippery slope...


“GMail has become painfully slow. [2]”

“[2] This sentence originally read "GMail is painfully slow." Thanks to Paul Buchheit for the correction.”

heh :)


I saw a neat Kickstarter a while back that seems like it was trying to tackle redesigning email from the ground up. Looks like they hit their goal too!

http://www.kickstarter.com/projects/1380180715/mail-pilot-em...


It would be great to have a todo list with a similar interface to today's email-integrated calendar apps. Someone could send me a todo item, and I could accept or reject it. If I accept it, it gets synced across both of our todo lists. That would be a huge step up over putting todo items in email.
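Interestingly, iCalendar already defines a VTODO component, and iTIP defines REQUEST/REPLY semantics that could carry exactly this accept-or-reject flow. A loose sketch with invented field values (real implementations per RFC 5545/5546 need CRLF endings, escaping, and more fields):

```python
# Sender proposes a task; recipient replies with ACCEPTED or DECLINED.
def todo_request(uid: str, summary: str, due: str) -> str:
    return "\n".join([
        "BEGIN:VCALENDAR",
        "METHOD:REQUEST",            # sender proposes the to-do
        "BEGIN:VTODO",
        f"UID:{uid}",
        f"SUMMARY:{summary}",
        f"DUE:{due}",
        "END:VTODO",
        "END:VCALENDAR",
    ])

def todo_reply(uid: str, accept: bool) -> str:
    partstat = "ACCEPTED" if accept else "DECLINED"
    return "\n".join([
        "BEGIN:VCALENDAR",
        "METHOD:REPLY",              # recipient accepts or rejects
        "BEGIN:VTODO",
        f"UID:{uid}",
        f"ATTENDEE;PARTSTAT={partstat}:mailto:me@example.com",
        "END:VTODO",
        "END:VCALENDAR",
    ])
```

The matching UID is what would let both parties' todo lists stay in sync, the same way calendar invites do today.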


Task management apps like Asana probably let you do stuff like this. Email is just for messaging. Nothing is stopping people attaching a to-do item as a file that gets opened in some to-do app, just like attaching vcards or calendar items.

It's just that a very low percentage of emails actually maps to some associated to-do item. And in most cases you really just want to set up a reminder.


www.taskforceapp.com (Disclaimer: I'm the developer for Taskforce)


I think all of those things are being worked on right now. Khan Academy and co on replacing universities. The whole 'quantified self' set of gadgets, like the Basis band, the Zeo sleep tracker, the Withings scale, etc. DuckDuckGo for a search engine startup, etc.


This is now my favorite essay of Paul Graham because it shares my thoughts.

The only thing I feel less comfortable with is that it emphasizes financial value over a useful contribution to mankind. In my view the latter is more relevant than a goal to become the richest person in the cemetery.

I guess this is the kind of perception distortion one gets when the main variables considered on a day-to-day basis are ROI, wealth, influence, power, etc.

I'm aware that wealth provides a significant leverage to contribute to mankind's good, but it is easy to forget about this relevant next step by solely focusing on increasing one's wealth.

Open source is one example showing the difference and it also proves that we don't need to be a billionaire to significantly contribute to mankind's good.


I don't get the Augustus reference.


He was Julius Caesar's grand-nephew and successor, who ultimately did all of the things that Caesar wanted to do but never had a chance to see done (the month of August is even named after him). In the lines where Paul mentions his legacy, I think he means that someone similar to Steve Jobs (his would-be successor in our pop culture) could continue on a path similar to Jobs'. It's now clear that being a visionary in that position is possible, and he showed us all what just one person can do there.


That may be it. However, it's not completely clear.

>Steve Jobs has shown us what's possible. That helps would-be successors both directly, as Roger Bannister did, by showing how much better you can do than people did before, and indirectly, as Augustus did, by lodging the idea in users' minds that a single person could unroll the future for them.

We've got Bannister, who led, and Augustus who followed (I mean in succession - he was still a leader), and ... well, it's a little confused but I guess you're right.


At the death of Julius, Rome being controlled by one man was unconscionable. By the time Augustus died the opposite was true.


For a peek into a future with 'ongoing diagnosis', see the very cool Greg Egan short story Yeyuka:

http://www.infinityplus.co.uk/stories/yeyuka.htm

"So why did you go into medicine?"

"Family expectations. It was either that or the law. Medicine seemed less arbitrary; nothing in the body can be overturned by an appeal to the High Court. What about you?"

I said, "I wanted to be in on the revolution. The one that was going to banish all disease."

"Ah, that one."

"I picked the wrong job, of course. I should have been a molecular biologist."

"Or a software engineer."


I enjoyed this essay very much, but I think that presenting Apple as a hardware company ("the company that creates the next wave of hardware...", "If a new company led boldly into the future of hardware...") won't help to find the next Steve Jobs. Apple products' success is based on the perfect combination of well-designed hardware and software. I think the next Steve Jobs will need the same holistic approach to product design.


The conclusion on Tactics and starting small and achievable is pure gold and something I took a decade to learn. I'd simply emphasize that it's important to fulfill a real need on a small scale. e.g. Harvard students really wanted to stalk each other, a BASIC interpreter really was needed, Columbus sailed west to find faster trade routes.


Why can't I configure a Bayesian filter on my inbox to do more than just filter out crap I don't want? Can't I have another filter that builds a model of the things I click on first and, given any set of new emails, generates a likely list of things I will want to see first?

Is anyone doing this?
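A toy sketch of the idea -- a naive-Bayes-style score over word counts, trained on which messages got opened first (purely illustrative; a real ranker would use many more features than bag-of-words):

```python
# Rank new messages by a log-likelihood ratio: words common in messages
# you opened first vs. words common in messages you ignored.
import math
from collections import Counter

def train(opened_first, ignored):
    pos = Counter(w for m in opened_first for w in m.lower().split())
    neg = Counter(w for m in ignored for w in m.lower().split())
    return pos, neg

def score(message: str, pos: Counter, neg: Counter) -> float:
    # Add-one smoothing so unseen words don't zero out the score.
    total_pos, total_neg = sum(pos.values()) + 1, sum(neg.values()) + 1
    s = 0.0
    for w in message.lower().split():
        s += math.log((pos[w] + 1) / total_pos)
        s -= math.log((neg[w] + 1) / total_neg)
    return s

def rank(messages, pos, neg):
    return sorted(messages, key=lambda m: score(m, pos, neg), reverse=True)
```

Every click you make is a free training label, which is what makes this kind of personal ranking so tempting.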


I'm not a Gmail user so I'm not sure how well it works, but I thought Gmail's Priority Inbox was built to solve precisely this problem. How intelligent it is at determining what gets the priority label I don't know but at least someone's pursuing it.


I can say the priority inbox works extremely well for me.


Maybe how frightening these ideas seem is a measure of your ambition. I have no fear at all of these because they're all so much bigger than what I can tackle that they become fun theoretical "how will the future be" ideas.


I like this article, but am I the only one for whom gmail is still seemingly as fast as ever? I don't doubt that it is painfully slow for PG, would be interesting to see the distribution of performance and what impacts it.


"The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us."

http://en.wikipedia.org/wiki/Parallel_Extensions
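Parallel Extensions is .NET; as a rough analogue in Python's standard library -- explicit data parallelism rather than a parallelizing compiler:

```python
# "Parallelize this loop for me": map independent work items across a
# thread pool while keeping the result order of a sequential loop.
from concurrent.futures import ThreadPoolExecutor

def expensive(n: int) -> int:
    return sum(i * i for i in range(n))   # stand-in for real work

def parallel_map(values):
    with ThreadPoolExecutor(max_workers=4) as pool:
        # pool.map preserves input order, so it's a drop-in for map()
        return list(pool.map(expensive, values))
```

Caveat: for CPU-bound pure-Python work the GIL limits threads; one would reach for ProcessPoolExecutor instead. The point is only that the programmer still has to say where the parallelism is -- which is exactly the problem a parallelizing compiler would remove.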


I love this:

"There's a scene in Being John Malkovich where the nerdy hero encounters a very attractive, sophisticated woman. She says to him:

Here's the thing: If you ever got me, you wouldn't have a clue what to do with me.

That's what these ideas say to us..."


I'd add micropayments to this list. If someone built a system (and -- the hard part -- it got widespread adoption) to charge a few cents with low friction, it could disrupt advertising.


Re: "5. The Next Steve Jobs" / "None of the existing players. None of them are run by product visionaries, and empirically you can't seem to get those by hiring them."

Jack Dorsey.


Agreed. When I was reading it, I was thinking of him as well. Square is a shining example of simple design and value.


JD is very good, but sorry to say he is not even close to being next SJ.


Is there a reason why Paul Graham's essay titles are images?


Of the 7, only Replace Universities (already happening) and Ongoing Diagnosis are frightening and ambitious, imo.

I would have included:

1. Alternative Energy

2. Fix the Government



Small nitpick: what does it mean to say "...search queries to be Turing complete"? I didn't think that the SERP defined a set of rules. ;-)


You type something in, you get something back. Why can't the search engine be a REPL for some program environment, with the entire web as its data?
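A very loose sketch of what "search as a REPL" could look like (the corpus and the evaluation rule are placeholders, nothing like a real engine):

```python
# Read a query, evaluate it against a corpus, print the matches, repeat.
def evaluate(query: str, corpus: dict) -> list:
    terms = query.lower().split()
    return [doc_id for doc_id, text in corpus.items()
            if all(t in text.lower() for t in terms)]

def repl(corpus):
    while True:
        query = input("search> ")
        if query == ":quit":
            break
        print(evaluate(query, corpus))
```

Making that `evaluate` step Turing complete -- real expressions over the whole web as data -- is the frightening part.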


I wonder if PG would create more value by doing one of these ambitious startups than he does by running Y Combinator.


Probably not. Y Combinator is an entire startup ecosystem. It is a convening and inspiring source for a multitude of startups. Any single company -- no matter how grand its ambition or its achievements -- offers less scale and less total value than an entire ecosystem of potentially big companies.


The medicine comments are spot-on. Right now we only fix our bodies when we feel pain, by which point it is often too late!


Takeaway for me: Great ideas will change the world, but you never start with them in the first place


I always think back to Linus Torvalds's first email about starting linux.

"I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones."


We don't need to replace email. Just add an NLP engine and the ability to build third-party apps on it.


I want a pause button. For my life.


pg, reports like these (http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/M...) give confidence to the ambitious.

Relevance dictates the shift.


Change all the "Replace" with "Displace" and you are on your way to IPO.


An obvious but scary startup idea to me is a Skype competitor!


Is there anything like crowdfunding? The most ambitious startup should kill the jobs of VCs for good.


the 'Automatic medical diagnosis' suffers from a common misconception - that people want to 'go to the doctor'. It's pretty rare that people do, and in most cases they avoid doing anything medical unless they absolutely have to.

How would you gather the information to make the 'ongoing diagnosis' if the people aren't going to come to you to do it? And that's just getting the symptoms - what do you do for tests for more info, which people like doing even less?

I also think there are some seriously fundamental technical issues, but I'll leave those off due to 'ambitious'.


I'm working on #2, but the solution I've found is not exactly what Paul suggests. I ran into a need while working on my latest startup.

I'm in bed with a large tech investor for my current company, and I'm working on this tech on the side. My term sheet is such that my company owns whatever I create right now, so I'm hoping said large investor isn't too annoyed with me allocating some time on the side; my plan is to ask forgiveness instead of permission.

Aiming to kick it out to the public in a month optimistically.


It's all about primary dependencies when it comes to large disruptions, so any ideas that target the primary technologies, hierarchies or costs associated with the agriculture, energy, manufacturing, telecoms, trading and transport sectors.

And the really ambitious ideas are the ones that simultaneously target as many of them as possible.


PG is back. :)


Not at all surprised that aapl has taken a dip in after hours trading. Wouldn't be surprised to see it fall monday either. The last time PG publicly endorsed amzn we saw it briefly spike before returning to ~183. Almost reminds me of what 50cent did with hnhi.

Not sure whether it's a clearly causal thing or that PG simply has his finger on the tip of investor consciousness.


Great post, but minor quibble:

The CEO of that company, the "next Steve Jobs," might not measure up to Steve Jobs. But he wouldn't have to. He'd just have to do a better job than Samsung and HP and Nokia, and that seems pretty doable.

That really should be:

The CEO of that company, the "next Steve Jobs," might not measure up to Steve Jobs. But they wouldn't have to. They'd just have to do a better job than Samsung and HP and Nokia, and that seems pretty doable.


Hell, thanks, you just pointed out that pg is not PC. Hope he will never start writing in this silly new grammar.


I agree that nitpicking is unhelpful, but you're wrong about singular they. It goes back at least to Chaucer and is as English as can be. Shakespeare used it ("God send every one their heart's desire") and Jane Austen used it (http://www.crossmyt.com/hc/linghebr/austhlis.html). No one can accuse Austen of "silly new grammar".

The point is that all the good writers have used it forever. That includes Byron ("Nobody here seems to look into an Author, ancient or modern, if they can avoid it") and Wilde ("Experience is the name everyone gives to their mistakes") and, as Geoff Pullum wonderfully discovered, E.B. White: http://books.google.com/books?id=aqe9RoZorLIC&lpg=PA109&...

The kicker is that the real PC silliness turns out to have been generic he, which was imposed 200 years ago by grammarians who wanted to make English more proper and Latin-like. It never fully took, and now it's fading as social norms shift. If those meddling ideologues had left well enough alone in the first place, we wouldn't be having tedious pronoun battles today. Given the history, though, it's pretty clear that English is slowly reverting to what the real masters have been doing all along. Sounds fine to me.


Whether or not you find "they" as a third-person singular pronoun objectionable, it's not "new"--it goes back to at least Shakespeare (through Austen).


I object to:

- the idea that changing grammar is a way to fix moral problems.

- any barrier, taboo, or artificial distance between what one think and what one write.


I wasn't talking about the grammar in what was written, I was talking about the content which implied that the person would be male.

any barrier, taboo, or artificial distance between what one thinks and what one writes.

How was this case -- implying the person would be male, vs being gender neutral -- a barrier between what one is thinking and what one is writing?


I agree - we should get more people to think in singular they, rather than just write.


I don't like this kind of thinking police, sound too 1984ish to me.


Any time you advocate a position you are encouraging people to think in a particular way. I don't think we're at risk of being thought police as long as no one is forced to change what they think.


I don't find 'they' any less readable than using 'he' or using 'she'.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: