hharrison's comments | Hacker News

Yes. AI/cog sci could learn a lot from this. Intelligent structures are self-organized, the order isn't imposed from above or from some "central executive."


The correct tense is "is learning". They've been studying this for years. This is not news to the AI/cog sci community, to the point that it's insulting to claim that it is.


Yes and no. I am aware of the status of the field (well, AI less so). I myself am a psychologist working on the self-organization of behavior. Yes, we have been using examples like termites and ants for years. But we are far from the mainstream. For example, in Pinker's How the Mind Works, he dismisses self-organization as "fairy dust".

Some aspects of our work have been trickling into the mainstream. But if you don't assume that the mechanisms of mind are computations on representations, then it is very difficult to get a job without compromising your principles.

And perhaps you could argue that the mainstream thinks this system of computations is the result of self-organization on some lower level. And I wouldn't disagree. But for the majority of the field that is just lip service. Their research goals are to uncover the algorithms that the mind is running. There's no self-organization at that level.

Of course there are always exceptions. But if anyone is still reading this and is interested I can provide literature staking out the relevant positions.


Take a look at Marvin Minsky's The Society of Mind. Consciousness as an emergent property of the communication between simple agents is a large arm of cognitive science.

http://www.amazon.com/The-Society-Mind-Marvin-Minsky/dp/0671...


I am aware of Minsky's work, see my other reply. I was too flippant in my comment. Perhaps I should have said intelligent behavior, rather than intelligent structures, is self-organized.

But, I don't think any approach to consciousness can be considered a large arm of cognitive science. Most cognitive scientists don't want to touch consciousness with a ten-foot pole. But of course you're right that connectionism lives on.

Let me try to explain the kind of self-organization I have in mind. Consider the fundamental question: how is behavior organized? The behaviorists pointed to organization in the environment. Cognitivists point to organization of internal representations. Connectionists and similar approaches point to organization of neural structures. Yes, something intelligent emerges from simple, perhaps self-organized, components in this scheme. But they are unwilling to take self-organization to the level of behavior.

In my opinion, a true self-organizational approach to behavior is to say that behavior emerges from the interaction between organism and environment. This is the level at which we need to accept self-organization. It is far from the mainstream. The mainstream approach to vision, for example, starts with the retinal image and asks what can be inferred from it. Yes, maybe they say that this inference engine is itself a self-organized structure. But it still reifies an input-process-output view of cognition. The sensory system receives input, constructs a model of the world. The "higher cognitive" centers formulate plans from this model. The action system instantiates these plans.

To bring it back to the ants: The ants demonstrate what can be done without explicit planning. Modern cognitive science studies explicit planning, even if they agree that this capability emerges from simple components.

As I said in the other post, I could provide literature if you are interested in any of these specific debates.


The analogy that a mind is like a computer is just that, an analogy. It's used to present the high-level structure of the brain in an accessible, understandable way, not as a logical argument.

Actually, this analogy is the foundation of our modern understanding of the mind and is accepted by nearly all experimental Psychologists/Cognitive Scientists. It is certainly pervasive enough to be worth arguing against. Few of them seriously recognize its limitations.

This analogy is not the justification for why people believe that "the mind can be replicated on a computer". And yet the article tries to disprove the latter, deep and meaningful point by attacking the relatively superficial analogy between [current] computers and minds.

I think you're missing the point. No one cares whether the mind can be replicated on a computer. Researchers/theorists care about whether the mind is a computer. The solar system can be modeled on a computer, and yet there isn't an entire theoretical approach to astronomy predicated on uncovering the algorithms running on the solar system computer. This line of discussion about the capabilities of math and computation is beside the point.


Yes, thank you. So many people here are missing the point. The debate isn't whether or not minds are simply too special to be modeled by a computer. The debate is whether or not the mind itself is a computational system.

I mean, consider the solar system. Can the solar system be modeled by a computer? Yes of course. Is the solar system a computational system? Is it organized on computational principles? Of course not. These questions are not the same thing, and yet conversation about the latter always seems to get drowned out by people saying "How could you possibly deny the former!"


I feel like I should share a perspective from Cognitive Science/Psychology.

In theoretical/experimental Psychology, the dominant paradigm is a computational/representational paradigm. Take vision for example. The accepted facts are that we receive as input an impoverished view of the world, insufficient to know what's really out there. So we have to take that input and build upon it, based on assumptions and past experiences and what-have-you, until we have an internal representation of the external world. And then we can reason with this internal representation, we can refer to it when planning actions, etc. So in this view, we are not really in contact with the external world, only our reconstruction of it in our heads.

This view is probably familiar to you in some form. I am part of a group of scientists pushing an alternative view, however within Psychology we are considered fringe for questioning this dogma. We take a non-representational view of the mind. Going back to the vision debate, if you assume a stationary vantage point and a single snapshot of an "image", and if you assume that the final "output" of visual cognition is a representation in 3D-coordinates, then yes, the visual input is underspecified. However, if you assume a moving point of observation, if you realize that the really rich information is not in the snapshot but in the way the light changes over time, if you realize that in order to successfully control actions you don't need a full 3D map of the world, then there is enough input. Some of what we do is to work out mathematically that the information is there to support certain actions, then to demonstrate experimentally that indeed, people do seem to use these "shortcut" strategies that don't require intermediate representations.

Of course, there's a lot more to it than that. I could talk about thermodynamics, self-organization, and lots of other interesting stuff. But what I wanted to show is that the debate about computation is alive and well within Psychology, and indeed the computation side is extremely dominant. It may take the tack of representationalism vs non-representationalism; however, the representational theories are firmly computational. Research has the explicit goal of figuring out: what is the storage/transmission format of these representations? What operations are performed on the input to create them? What operations are performed on them to use them? Etc.

Also, despite what some of the comments here suggest, at issue is not whether or not computers can model a mind. All the behaviors that we think are done without representations? We model them and study them with the aid of computer models. Of course. But that's not really interesting at all. And yes if you modeled a brain physically, you might get a mind (I'd argue you would need to model the body as well, not to mention quite a bit of environment). But that's not really the point. Today, Psychologists do research with the idea that they are setting out to discover the software that the brain is running. This is very different from the claim that a computer could model a brain, and it pervades how we think about minds, even (and especially) among experts in the field.


Any good editor should be able to figure out the indents when pasting. I'm not an emacs user, but I'd be surprised if there wasn't a plugin with smart python pasting.


My point is that it's not always possible for the editor to know what the indentation is supposed to be because it can't know what the code is supposed to do.

Suppose you have code like this:

    [...]
        if a:
            b
        c
    [...]
And then you paste some snippet you got from somewhere else between b and c:

    [...]
        if a:
            b
    pasted_snippet
        c
    [...]
The editor cannot know how to indent that properly. It's not a problem in most other languages.
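
To make that concrete, here are the two syntactically valid readings of the paste, written out as runnable Python (print statements standing in for the placeholder names above); nothing in the pasted text tells the editor which one was intended:

    a = True
    if a:
        print("b")
        print("pasted snippet")   # reading 1: part of the if block
    print("c")

    if a:
        print("b")
    print("pasted snippet")       # reading 2: a sibling of c, outside the if
    print("c")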

Again, I'm not trying to say it's a deal breaker and Python is useless as a result, I just think it's a small mistake in the design of the language. It's like non-breaking switch/case in C, it doesn't make the language unusable but it is an annoyance.


In that example, a good editor should indent pasted_snippet at least to the first indent level. If you wanted it to be part of the if statement then you could just select the pasted block (both Vim and Emacs should be able to do this with a single command) and indent it by one more.

Python's use of semantic whitespace is more a function of its inheritance than anything. It's based on ABC [1].

[1] http://en.wikipedia.org/wiki/ABC_%28programming_language%29


> The editor cannot know how to indent that properly. It's not a problem in most other languages.

It will have a pretty good idea. If you paste the snippet, then hit 'tab', odds are high that a good editor (I use emacs python-mode) will Do The Right Thing on the first try, although sometimes you'll have to hit tab again or backspace a couple of times to get the right indent level.

Occasionally I need to use a keyboard macro to fix the indent after a paste, but this is very easy to do and doesn't happen too often, really. I'm sure by now, with Python's popularity, there are more advanced indentation management tools, but I still just use emacs keyboard macros.

It's a very small price to pay for the huge benefits of semantic whitespace.

Generally the worst case scenario for copy/paste is that I'm using emacs in a terminal window and I forget to switch to fundamental-mode before pasting. Because the terminal is handling the paste, not emacs, python-mode treats it all as if it had been typed one line at a time: it auto-indents everything after a colon, the pasted lines already carry their own indentation, and the result is a complete mess. (But then I just undo it all and re-paste it the right way.) GUI emacs doesn't have this problem.

The worst case scenario I can think of for semantic whitespace (outside of copy/paste) is accidentally changing the indent level of a piece of code without realizing it, in a way that doesn't produce a syntax error, meaning there's now a logic flaw in your program you don't know about. Python is more susceptible to that sort of regression error than most languages. That said, usually that sort of mistake WILL cause a syntax error and be easily fixed.
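
A contrived sketch of that failure mode: the second loop is what you get if the last line accidentally loses one indent level. It is still valid Python, but it now counts every iteration instead of only the even ones, with no syntax error to warn you.

    total = 0
    for n in range(10):
        if n % 2 == 0:
            print(n)
            total += 1   # intended: count only the even numbers

    total = 0
    for n in range(10):
        if n % 2 == 0:
            print(n)
        total += 1       # accidentally dedented: counts every n, silently wrong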


Yep, ggplot is the one thing that I keep coming back to R for, plus the odd statistical model I can't find in statsmodels - which is rarer and rarer.

One of the many awesome features of IPython (the interactive Python shell and notebook) is that you can run code blocks in R just by prefacing them with %%R. So my plotting habit is usually to first try the Python port of ggplot, and if that can't handle my situation I just jump into R without having to switch windows or do any complicated data transfer.
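
Loading the extension and pushing a data frame across looks roughly like this (a sketch, assuming the rpy2-based R magic is installed; depending on your IPython version the extension is loaded as rmagic or rpy2.ipython):

    # first notebook cell: load the R magic and build some data in Python
    %load_ext rpy2.ipython

    import pandas as pd
    df = pd.DataFrame({"x": range(10), "y": [v ** 2 for v in range(10)]})

    # next cell: -i pushes df into R as a data.frame, then plain R code runs
    %%R -i df
    library(ggplot2)
    print(ggplot(df, aes(x, y)) + geom_point())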

It's worth mentioning that matplotlib is designed to mimic Matlab's plotting API, so for people coming from Matlab there's very little change, plus there are all the benefits of the other plotting libraries others have mentioned.
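
As a rough sketch of what that Matlab-flavored (pyplot) interface looks like:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 2 * np.pi, 100)
    plt.plot(x, np.sin(x))    # plot() call, much like Matlab's
    plt.xlabel("x")
    plt.ylabel("sin(x)")
    plt.title("pyplot mimicking Matlab")
    plt.show()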


Yeah, I'm a Python convert like the author, though coming mostly from Matlab rather than R, and everyone in my field reacts with surprise when I tell them I prefer Python. They're open-minded, and I'm hoping to convert a few myself, but I don't think the mass migration has happened yet.

Regarding your second comment: you're correct of course, but what makes this a "blind spot"? After all, if the user is writing code in Python, they're doing scientific computing in Python, regardless of what the Python library calls behind the scenes. In my experience, a lot of people doing scientific computing--particularly those more interested in the science than the computing--couldn't care less about what's going on behind the curtain. Any moment they have to think about implementation is a moment not thinking about science and therefore a waste of time. So it's actually a benefit for an ecosystem to hide the underlying mechanics--calling it a "bizarre blind spot" seems to imply they're doing something wrong.


I call it a "bizarre blind spot" because it seems like there's a silent consensus to never talk about this basic fact. It's a bit surreal attending SciPy and hearing all of these people talking about scientific computing in Python when almost every single person in the room spends the vast majority of their time and energy writing C code.

I disagree that the separation between implementation and user-land that's enforced by two-language designs like C/Python or C/R is socially beneficial:

1. If your high-level code doesn't perform fast enough (or isn't memory efficient enough), you're basically stuck. You either live with it or you have to port your code to a low-level language. Not impossible, but not ideal either.

2. When there are problems with some package, most users are not in a position to identify or fix those problems – because of the language boundary. If the implementation language and the user language are the same, anyone who encounters a problem can easily see what's wrong and fix it.

3. Basically a corollary of 2, but having the implementation language and user language be the same is great for "suckering" users into becoming developers. In other words, this isn't just a one-time benefit: as users use the high-level language, they automatically become more and more qualified to contribute to the ecosystem itself. It is crucial to understand that this does not happen in Python. You can use NumPy until the cows come home and you will be no more qualified to contribute to its internals than you were when you started.

These benefits aren't just hypothetical – this is what is actively happening with Julia, where almost all of its high-performance packages are written in Julia. In fact, I never realized just how important these social effects were until experiencing them first hand. The author of the article wrote:

> It turns out that the benefits of doing all of your development and analysis in one language are quite substantial.

It turns out that it is even more beneficial to not only do development and analysis, but also build libraries in one language. Of course, Julia has a lot of catching up to do, but it's hard to not see that the author's own logic implies that it eventually will catch up and surpass two-language systems for scientific computing.


> You can use NumPy until the cows come home and you will be no more qualified to contribute to its internals than you were when you started.

Just for whatever it's worth, as an occasional contributor to numpy who is an absolutely terrible C programmer, there's a _lot_ you can contribute with pure python. Yes, the core of the functionality is in C, but most of the user-facing functionality isn't.

That having been said, I completely agree on the benefits of Julia.

However, I'd argue that Julia has the potential to compete with (or replace) the scientific python ecosystem for a completely different reason: It's more seamless to call C/Fortran functions from Julia than from Python. (Though Cython and f2py make it pretty easy in Python.)

There's an awful lot of very useful, well-designed, very well-tested scientific libraries written in C and Fortran. It's far better (i.m.o.) to have a higher-level language be able to call them seamlessly than to have a high-level language where reimplementing them is a better option. (Julia does wonderfully in this regard. Python does pretty well, but not as well, i.m.o.)
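
For what it's worth, even without Cython or f2py, plain ctypes already gets you a long way in Python. A minimal sketch, assuming a Unix-like system where the C math library can be located by name:

    import ctypes
    import ctypes.util

    # load the system C math library and declare cos()'s signature
    libm = ctypes.CDLL(ctypes.util.find_library("m"))
    libm.cos.argtypes = [ctypes.c_double]
    libm.cos.restype = ctypes.c_double

    print(libm.cos(0.0))  # 1.0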

Also, from what I've seen, I think the Julia and scientific python ecosystems are more complementary than competing, at the moment. There seems to be a lot of cross-pollination of ideas and collaboration, which is a very good thing.

(...And I just realized who I'm replying to... Well, ignore most of what I said. You know all of that far, far better than I do! Julia is _really_ interesting and useful, by the way!)


:-)

I completely agree that Julia and SciPy are complementary rather than competing. I've attended the SciPy conference for several years and it's great – I love the Python and SciPy communities. It's definitely crucial to both be able to easily call existing C and Fortran libraries and write code in the high-level language that's as fast as it would have been in C. You don't want to reimplement things like BLAS, LAPACK and FFTW – but you do want to be able to implement new libraries without coding in Fortran or C, and more importantly, be able to write them in a very generic, reusable fashion.


I'd just like to add that what I love about Julia is that it actually lets you go deeper than C code. For high-performance computing it's easy to hit a wall with C (i.e. with SIMD vector instructions), and it's fairly difficult to jump the barrier to programming assembly. Julia makes it easy to muck around with the generated LLVM IR code as well as native assembly code. You can go as deep as you want without leaving the Julia REPL.


Thanks for this reply. I thought the "bizarre blind spot" comment was some sort of (absurd) thought that numpy users were unaware that C was being used under the hood.

> it eventually will catch up and surpass two-language systems for scientific computing.

Assuming that, like hardware engineers, scientists have a fair bit of general-purpose scripting to do, Julia will itself be part of a different kind of two-language solution unless it is up-to-snuff w.r.t. said general-purpose scripting. This implies libraries and good interaction with OS utilities. Any thoughts on whether or not this will be an issue with Julia?


Julia is designed to be a good general-purpose language. There are already a bunch of database drivers, a simple web framework, etc., etc. http://docs.julialang.org/en/release-0.2/packages/packagelis...


In addition to what jamesjporter mentioned, Julia has IMHO a very nice, clean shell interaction paradigm for this very use case ("glue"):

http://julialang.org/blog/2012/03/shelling-out-sucks/

One of the best examples of this is the package manager's concise wrapping of git CLI commands:

https://github.com/JuliaLang/julia/blob/master/base/pkg/git....

(aside: there has been some discussion of moving to libgit2 for performance reasons)

Until recently, the startup time somewhat precluded use for general scripting. However, on the trunk the system image is statically compiled for fast startup, so scripting usage is viable.


WRT shell integration, a follow-up post details the safe (no code injection) and straightforward Julia implementation.

http://julialang.org/blog/2013/04/put-this-in-your-pipe/


Fair enough. And interesting, I hadn't thought of some of this.

SciPy users are certainly doing scientific computing in Python, but it is surprising to get that attitude from the developers.


>Regarding your second comment- you're correct of course, but what makes this a "blind spot"? After all, if the user is writing code in Python, they're doing scientific computing in Python, regardless of what the Python library calls behind the scenes.

What he means is, they can't expand the core primitives provided for them in Python itself, so they are constrained by what's given if they want performance. Unlike with, say, Julia.


I think Nimrod could really make inroads in those cases. It's almost as easy to write as (and in fact looks a lot like) Python, with lightweight type annotations, and its runtime characteristics are those of C, since that's what it compiles via.

http://www.nimrod-lang.org


Eventually, but strong AI just means intelligence matching or exceeding human intelligence. We already have billions of entities with human intelligence, and it is taking us a long time to produce something smarter than ourselves. If the first strong AI is just a little smarter than us, and if it chooses to put its energy toward the creation of more AIs, then maybe it will eventually produce something smarter than itself. But it's not so simple that we just get to the singularity straight away.


The problem with people is multiplicative intelligence parity. (Like that phrase? Feel free to use it as your own.)

You can find one smart person. But how do you find another smart person for them to work with? With AI-level smarts, you reduce the coordination problem to zero since it can spawn multiple copies of its own brain state (maybe?) and work concurrently and intelligently on the same problem. Just ignore the problem of killing off divergent brain states once the task is complete (nobody cares about the life of a kage bunshin).

I sure could get a lot more done if I had five more of me (unpaid, of course) to tackle a problem all at once.


We are far, far from hard AI. If anything, this article shows that we're only just now starting to ask the right questions. And that's even debatable. Plus they're very hard questions.

The problem is we have no theory of intelligence, no theory of psychology. Research in the cognitive fields is fractured, all about tiny insignificant phenomena with little relation to anything else. Our best theory is "the brain is like a computer" which is, frankly, a terrible theory.

Here's something I find more promising: On Intelligence From First Principles: Guidelines for Inquiry Into the Hypothesis of Physical Intelligence [1]

In short, what we really need to understand is self-organization and non-equilibrium thermodynamics. Not image labeling.

[1] http://www.tandfonline.com/doi/pdf/10.1080/10407413.2012.645...


> The problem is we have no theory of intelligence

I don't think we can have a theory of intelligence. At least in the public consciousness, intelligence is one of those "God-of-the-gaps" style concepts that continually evolves in order to maintain the illusion of human superiority.


Well, to the extent that it's a scientific problem, it sure would help to have a theory. I sure don't expect that theory to enter the public consciousness anytime soon, if at all.

But I do agree with your sentiment as far as the way intelligence is usually discussed, even among the science-literate.


Intelligence certainly exists. There is a reason humans are building space ships and chimpanzees are playing around with sticks.


The point (maybe irrelevant to the larger discussion) is that as soon as we figure out how to implement intelligent behavior in a machine, it stops seeming intelligent. Chess used to be a prime example of intelligent human strategic thinking. Now it's just an item on that long list of things computers can beat us at (incidentally, I predict Jeopardy will go off-air sometime in the next 10 years due to declining interest now that we have Watson).

Once we figure out all the issues of general intelligence, it will stop seeming so special. We may even begin to think that humans are really bad at it after all.


Chess-playing programs work because they use unfathomable amounts of computing power to essentially brute-force the problem. I don't think there are any chess programs that play anything like a human does.

Because of this, there are a number of games at which computers still can't beat humans, because just stupidly trying every possible move doesn't work like it does for chess.

Watson actually does use a lot of natural language processing and machine learning, so it is kind of intelligent. Though at its core it's still just a glorified search engine. Jeopardy was always just a game of memorizing facts, not a demonstration of intelligence.


I suggest actually looking into the architecture of Deep Blue and follow-on programs, because right now you are exhibiting the very fallacy I was talking about. Exhaustive search over board states would take longer than the lifetime of the universe to compute a single move. Master chess programs work by using sophisticated algorithms to manage the search process. It's not the process that humans use, but it is intelligent nonetheless. Of course, now that it is a solved problem, the common perception is different...
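
To give a sense of what "managing the search" means, here is a textbook alpha-beta pruning sketch (not Deep Blue's actual code; moves, apply_move, and evaluate are hypothetical game-specific callbacks). The point is that whole subtrees which provably can't change the outcome are skipped, which is what makes deep lookahead tractable at all:

    def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
        # moves(state): legal moves; apply_move(state, m): resulting state;
        # evaluate(state): heuristic score from the maximizer's point of view
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)
        if maximizing:
            value = float("-inf")
            for m in legal:
                value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                             alpha, beta, False, moves, apply_move, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break  # prune: the minimizing player will never allow this line
            return value
        value = float("inf")
        for m in legal:
            value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, True, moves, apply_move, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune
        return value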


It's a guided search, so what? There is no fallacy here; Deep Blue is not intelligent. You can solve any problem with enough computing power and a basic search. No one has ever claimed otherwise or said that it would be intelligent.

What people did predict wrong is that it would take general intelligence to solve chess. As in, if you solved chess, you could also pass the Turing test and everything else. Here is a quote from Douglas Hofstadter:

>There may be programs which can beat anyone at chess, but they will not be exclusively chess players. They will be programs of general intelligence, and they will be just as temperamental as people. "Do you want to play chess?" "No, I'm bored with chess. Let's talk about poetry." That may be the kind of dialogue you could have with a program that could beat everyone.

And they would have been right if computers hadn't become exponentially faster.


Right. But how do you define it intensionally? Saying that "humans have a lot of it", "chimpanzees have less of it", "ditto for dolphins", "reptiles have very little of it", etc. is defining intelligence extensionally. Why is this a problem? Because an extensional definition doesn't tell you how to add new elements to the set.


I'll try: a system is intelligent if it is able to respond to low-energy deposits (information) with high-energy reactions (e.g., movement) in order to seek non-local sources of negentropy to dissipate.

But then again I'm more interested in the intelligence that differentiates a slime mold from a hurricane than the intelligence that differentiates a human from a chimpanzee.

For example: hurricanes are self-organized, constituted by a structured flow of energy and matter rather than specific pieces of matter. But a hurricane is a slave to the local potential. It will dissipate all the negentropy in its wake, and in doing so maintain its structure. But once there is no more energy differential to dissipate, the hurricane will itself dissipate as it is not able to break free of the local potential and use information to seek out non-local negentropy sources. The question for research is what is necessary to make that jump from self-organization to intelligence, given that operationalization.


Don't wildfires have the ability to break free of a local potential? All it takes is a small spark, carried on the wind.

Likewise for seeds, fish eggs (carried in the gut of birds), etc.


Wind is a local potential in this example. An intelligent wildfire would be one whose sparks can go against the wind, because it perceives more fuel in that direction.

Also: my definition is meant to include fish and birds, even plants, as intelligent.


I've seen sparks go against the wind, at least on a small scale. Sparks can be launched pretty far when the burning wood falls and breaks apart.


You've responded to the second part of: "able to respond to low-energy deposits (information) with high-energy reactions (e.g., movement)"

How does your comment about sparks address the alternatively stated requirement: "because it perceives more fuel in that direction"


How do you define perception?

* A spark flies out randomly and contacts a fuel source

* A blind person reaches out randomly and finds a glass of water

What is the essential difference between these events?


Either the blind person is responding to some low-energy distribution (scattered sound waves, perhaps, or past samplings of the energy distribution, i.e. memory) or the blind person isn't perceiving any more than the spark is (in this example).

In any case my post above implied a definition for perception: responding to low-energy distributions with an asymmetric high-energy response.


They're not just arbitrary examples; there is reasoning behind them. You can look at humans building spaceships, using tools, and demonstrating understanding of abstract concepts. There are a number of tests you could do that would confirm something is intelligent, like looking for any of those things.

Some rough and imperfect, but still useful, definitions of intelligence could be the ability to make good predictions based on past data, the ability to solve optimization problems well, and learning ability.


> demonstrating understanding of abstract concepts.

What sort of test can show that a subject demonstrates an understanding of abstract concepts?

So far, from what I've seen, if a test can be written then software can be written to solve the test.


You could talk to it or you could have it solve a difficult problem.


Right. That's commonly called the Turing test. This just pushes back the problem of defining intelligence to one of creating a proper Turing test. How do we do that?


The Turing test is actually a pretty decent and straightforward test.

I don't understand why this is an issue, though. Testing intelligence was never the hard part of AI. There are so many tasks that computers currently suck at that we would be happy if they were solved, regardless of what label you gave the solution. And I don't think many people could see a computer doing tasks like having conversations or solving difficult problems and deny that it is intelligent, even if there is no formal test to perform that is 100% certain.


> The Turing test is actually a pretty decent and straightforward test.

How so? To me, it appears completely open-ended. You could sit there forever asking questions and never reach a definitive result.


I don't see how. Have you ever tried talking to a chatbot? It becomes apparent pretty quickly that it's not intelligent.


Has anyone bothered to develop a full mathematical definition of "optimization power"? Because I've been thinking about how to do it.



Bugger. That bastard has 10 years' head start on me; simply not fair.

OTOH, looks like not very much was actually formalized.


Go read Juergen Schmidhuber's research and then tell me we're so far away we can't even see Hard AI on the horizon.


Read the paper I linked and you'll understand why I'm not impressed.


I wasn't talking about his neural-networks work. That's just what he does to get funding ;-).


I'm already familiar with some of his work. What did you have in mind specifically? In any case, he may be one of the best in the traditional computational approach to AI, but I think framing intelligence in terms of computation is inherently misguided. Of course I'm not going to get far with that unorthodox perspective on HN :)


I had meant AIXI and Goedel Machines, but if you've got a non-computational view that's also scientifically grounded, I'd love to hear it.


I think the first true strong AI will come from research on non-equilibrium thermodynamics. We need to get down to the basics: where do entities come from that self-organize, more specifically that are able to use information in structured energy arrays to find and dissipate negentropy deposits, and dissipate that energy in order to maintain their own state away from equilibrium and hence avoid dissipating themselves? In short, strong AI will not come from top-down research on problem solving or learning, but bottom-up research on what makes autonomy and agency possible.

Goedel machines might actually be the closest thing to this in the computing literature; my reaction is less about the work itself than the rhetoric surrounding it, to be honest. JS should collaborate with a physicist on the thermodynamic side of the problem.

If you're intrigued, you could start with the article I linked above, or if you have journal access, anything from the same special issue. I chose that article just because it's the only one not behind a paywall.


Yes, I think the same. Understanding how thinking machines self-organise out of a set of sufficient conditions is very important. I am commenting mainly so I can read your link later.


I'm taking a course with this book right now... and yes, I stole it at the authors' recommendation. Textbooks are too expensive and I don't want to carry it around anyway.


I think that it's legal to copy a book if the author says it's OK to copy it... but don't quote me on that.


Only if the authors are also the copyright holders; ironically, that is rarely the case.

