Python built-ins worth learning (2019) (treyhunner.com)
287 points by type0 on March 9, 2022 | hide | past | favorite | 118 comments


I didn’t know about breakpoint(), which landed in 3.7, about a year after I stopped using Python regularly (I considered myself an expert at the time; now, the ecosystem has moved on enough and I’m rusty enough that I’d rate myself below expert, but able to become expert again in a very short time). Until hearing about this, if I were needing to debug Python stuff I’d have been perfectly happy continuing to use pdb.set_trace(), but breakpoint() is nicer now that I know about it.
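For anyone else catching up: breakpoint() just calls sys.breakpointhook(), which defaults to pdb.set_trace() but honors the PYTHONBREAKPOINT environment variable (PEP 553). A minimal sketch, setting the variable in-process purely for illustration:

```python
import os

# PEP 553: breakpoint() consults PYTHONBREAKPOINT on every call;
# "0" turns it into a no-op, which is handy for CI runs.
os.environ["PYTHONBREAKPOINT"] = "0"

def double(x):
    breakpoint()  # no-op here because PYTHONBREAKPOINT=0
    return x * 2

print(double(21))  # 42
```

You can also point PYTHONBREAKPOINT at a different debugger (e.g. an importable dotted name) without touching the code.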

This reveals something important that is easy to overlook: while you’re actively working in an ecosystem, you may hear about new things that come along, but if you spend some time away, things may well happen that you didn’t hear about, and so in some cases beginners may even surpass you in current idioms. It’s worthwhile, in such cases, at least skimming over some materials and especially the release notes since you were last an expert—if I were going back to working with Python/Django extensively, I’d read at the very least the headings of the release notes for Python 3.6–3.10 and Django 1.8–.


I tend to drift in and out of Python, and follow the same pattern as you suggest when coming back after a couple of releases. Python's version notes are truly excellent¹.

The only thing I really wish for is an ecosystem "what's now" guide, it would list things such as where the community has moved for common libraries. Off the top of my head it would have covered things like nose to pytest and such. Obvious problem being I just can't imagine how you'd get buy-in to make such a thing work, especially without it getting overrun with grifters.

¹ https://docs.python.org/3/whatsnew/index.html


There's the Hitchhiker's Guide to Python which has a lot of advice and covers best practices and/or projects over a number of big areas like web frameworks, databases, command line tools, etc.: https://docs.python-guide.org/


Yeah, something like that.

However, taking "Command-line Applications"¹ as an example it lists six alternatives. I think in the version I'd like to see a good few of them would disappear. For example, I'll presume plac/fire/cement aren't in common use given that only two are packaged in Debian and the two that are don't have reverse dependencies. Cliff looks largely uninteresting too, as outside of OpenStack nothing seems to use it.

I think my ideal implementation would probe recent releases of popular packages for dependencies to see what the community has settled on or moved to. I also suspect this comment that somewhat moves the goalposts also points to the difficulties of maintaining such a document.

¹ https://docs.python-guide.org/scenarios/cli/


Totally agree with you. I would put argparse, the built-in command line argument parser, over Click on that list. Click is great, but it's one more API to internalize, for something that's not really the meat and potatoes of your application.
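For what it's worth, a minimal argparse sketch (the argument names here are made up for illustration):

```python
import argparse

# Build a tiny CLI parser: one positional argument, one boolean flag.
parser = argparse.ArgumentParser(description="greeting demo")
parser.add_argument("name")
parser.add_argument("--shout", action="store_true")

# parse_args() accepts an explicit argv list; by default it reads sys.argv.
args = parser.parse_args(["world", "--shout"])
message = f"hello {args.name}"
print(message.upper() if args.shout else message)  # HELLO WORLD
```

You also get --help generation and error messages for free, with no extra dependency.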

That said, when I started out using Python professionally, I got a lot of mileage out of this guide.


The problem is, any such what's-now guide is soon outdated, at least in some aspects. It should be a wiki which everyone can edit.

And btw, such a wiki exists, even officially: https://wiki.python.org/moin/FrontPage

Although, as you might expect, it is a mixture of everything: outdated and more recent, messy and detailed, lists and guides.

Which libraries are currently common also doesn't really have a simple answer. Maybe this should be auto-generated based on Google or GitHub trends.


>> Which libraries are currently common also doesn't really have a simple answer. Maybe this should be auto-generated based on Google or GitHub trends.

I followed the Python and PSF community very closely for about 12 years through 2018. I haven't kept up for personal reasons. I suspect they are uncomfortable with doing this because they don't want to feel like they are picking winners. Whatever they list will be "blessed" no matter how many disclaimers are attached. In a way, I observe a similar problem impacting how they've approached ecosystem problems like packaging. There is a clear issue and a central authority was needed to sort it out and that never emerged when it was originally required.

Right now PyPI highlights some trends (https://pypi.org), but there is more they could do. Python success stories at https://www.python.org/success-stories/ could probably be evolved to highlight some of the libraries and tools which get serious usage. Anything autogenerated from other sources probably has value, but would only work if it's detached from python.org; expanding PyPI trend information would be more compatible with python.org.


I’ve been using JS continually since the mid 90s. But once the changes started coming fast and furious with ES6, I just couldn’t keep up with them. I think the only thing I’ve really adopted is template literals. People who are newly coming into JS probably know and use the newer changes. For example, I’m so used to prototype inheritance that I don’t think I’ll ever use class. But I imagine new people use class and might never even learn what is actually happening under the syntactic sugar.


Of all the new features I think `class` is the one I've seen least adoption of. Not that I've read a huge amount of JS.

There are some cases where it can be nice, but often it feels like a somewhat unnatural bolt-on to the language. I guess OO isn't that widely used in JS anyway.

Stuff like nullish coalescing and spread operators however... those things are great - they can save a lot of code.


JS classes are one of the things I'm most unsure about in the direction of the language. On the one hand, it is nice to have a form of abstraction of prototypes that is quite close to the typical mode of programming used in popular languages like Python and Java. On the other hand, it is quite a needless abstraction for a well-functioning system. The ease with which a class system can be implemented on top of prototypes just shows how prototypes embody the JS spirit: Flexible, powerful, a bit too easy to shoot yourself in the foot with.


I learned about breakpoint() while prepping for an internship interview. I ended up not getting that internship, but it ended up massively improving my workflow. Now I annoy all my friends into using it whenever we run into bugs while working on Python together.


I seem totally incapable of stopping using “import pdb; pdb.set_trace()” in my code. Once every couple months I’m reminded breakpoint() exists and that I’m still apparently not using it.


What do you think you would have to learn to become expert again?


Mostly it’d just be about dusting off the cobwebs and using it all again; I still know it all, but my skill will doubtless have atrophied at least a little from six or seven years of disuse. In such a case, having been familiar with Python 2.5–2.7 first probably also works against me, as the distance of time will probably make recollecting the ancient Python 2 way occur more often relative to the Python 3 way than it would have while I was actively using Python 3. As far as actual ecosystem changes are concerned, packaging practices are probably the most significant area where I gather there’s been a fair bit of change happening.


I think you're really overestimating the gap between python 2 and 3. My experience was that just diving into a large python3 codebase got me most of the way really quickly.

The only thing I can think of that I had to really read up on was string handling. Everything else I could just google as I came across it. And even after 6 or 7 years, it all comes back really quickly.

Packaging stuff... weren't we using virtualenv 6 or 7 years ago? And even that - it's hardly difficult to get working with.


The advent of declarative package management (poetry et al.) is a significant change. Not that hard to get working with, but definitely a big departure from previous practices.
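For anyone who hasn't seen it, a minimal sketch of the declarative style (field names follow poetry 1.x conventions; the project name and versions here are made up):

```toml
[tool.poetry]
name = "myapp"
version = "0.1.0"
description = "example project"
authors = ["Jane Dev <jane@example.com>"]

[tool.poetry.dependencies]
python = "^3.10"
requests = "^2.28"
```

Dependencies get declared in pyproject.toml and resolved into a lock file, rather than imperatively pip-installed into an environment.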


Which language(s) did you switch to?


Personally, Rust; professionally, mostly web frontend stuff (HTML/CSS/JavaScript, plenty of each) rather than Python web backend stuff.


Getting a bit off topic but useful.

Input with history and editing:

  import readline

  while True:
      print(input('text>'))
https://docs.python.org/3.8/library/readline.html


That's really cool. I knew that Python had the `readline` module, but didn't know that it monkey-patched the `input` function to add functionality automatically.


I very recently learned about dict().setdefault()

For instance:

  def myfunc(**kwargs):
    # we need to be sure kwargs has ['foo'] set, with 'bar' as the default: 
    kwargs['foo'] = kwargs.get('foo', 'bar')

We could write that this way:

  def myfunc(**kwargs):
    kwargs.setdefault('foo', 'bar')


For this particular use case, shouldn’t you just set the default value in the function definition or am I missing something?

    def myfunc(foo="bar"):
        pass


It’s convenient when you need to pass the entire kwargs dict to something else, but still need to ensure that one or two values have defaults if they aren’t set.


That would make the local variable foo in myfunc equal to "bar", as opposed to setting a key named foo in the dictionary kwargs equal to "bar".

edit

Nevermind, I see what you mean. Yes, it would be equivalent, but it can be used in other scenarios. This was just the one I thought of.


A good example scenario for setdefault/defaultdict is grouping a list of tuples:

    grouped = {}
    for (key, value) in ungrouped:
        group = grouped.setdefault(key, [])
        group.append(value)
defaultdict(list) would make it even shorter, but setdefault() is more general, as it makes it possible to have a "default" value that depends on the key itself.

defaultdict combined with late-binding closures also makes it possible to define nested dictionaries:

    from collections import defaultdict

    nested_dict = lambda: defaultdict(nested_dict)
    nested = nested_dict()
    nested[0][1][2] = "value"


You can't use this if you also have a nonlocal or global variable named foo that you want to access.

    foo = 123
    def myfunc(foo="bar"):
        global foo # problem!


The main 'problem' there is the `global` keyword :^) (it's not needed for read access)


How would you read foo if it is hidden by a local parameter?


I guess not at all. It's shadowed. But this being Python, this is a way:

    foo = 123
    def lol(foo="bar"):
        print(globals()["foo"], foo)
That will output "123 bar". But it's terrible!

Your example is a SyntaxError, it doesn't even compile:

    In [1]: def myfunc(foo="bar"):
       ...:     global foo
      Input In [1]
        global foo
        ^
    SyntaxError: name 'foo' is parameter and global


> But this being Python, this is a way

That's why I said nonlocal or global ;)


Seems like you shouldn’t do that?


Thank you! I have had to use the crutch of your first example many times and never knew about dict.setdefault.


setdefault also returns the value:

    is_foo_enabled = config.setdefault('foo', False)


Why not just use defaultdict?


because you may not have been the one that created the dict


A built-in a lot of people don't know is operator.itemgetter (and operator.attrgetter). They can be used together with functools and itertools for a lot of purposes; for example, you can sort a list of tuples using operator.itemgetter and sorted: https://docs.python.org/3/library/operator.html
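A quick sketch of the sorting case (sample data made up):

```python
from operator import itemgetter

# Sort a list of (name, score) tuples by their second element.
pairs = [("bob", 3), ("ann", 7), ("cid", 5)]
by_score = sorted(pairs, key=itemgetter(1))
print(by_score)  # [('bob', 3), ('cid', 5), ('ann', 7)]
```

itemgetter(1) behaves roughly like `lambda x: x[1]`, but it also supports multiple indices, e.g. itemgetter(1, 0) returns a tuple of both items.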


Woah the other operators are even more interesting to me. My goto is an in-line lambda to avoid the import but I didn’t realize there were so many operators defined in the stdlib. Thanks for sharing.


While I do know about those and they are indeed much more pythonic and cleaner, I often end up just using `lambda x: x[1]` instead of `import operator; operator.itemgetter(1)`. For sort, I'll also often do `sorted(mydict, key=mydict.get)` which is again not the most pythonic.

I just wish there were nicer ways to do these in the global namespace, but polluting builtins is probably not a great idea either. Maybe `slice.itemgetter` or `object.attrgetter`.


Hm, I didn't know about those. At first I thought that I'd just replace it with a lambda, but it seems to support looking up multiple elements/members at the same time, so that might come in handy at some point.


Structural Pattern Matching[0] (PEP 636) that landed in 3.10 was a pretty nice addition too.

[0] https://peps.python.org/pep-0636/


That is extremely useful. Thank you.


This!


I would add f-strings to the list and some of the string methods especially .strip(), .split(), and .join()


For me, specifically f"{x:,}" to return the value of x as a comma-separated number string


fwiw `format(x, ',')` will do this too and looks a bit more readable imo
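Both spellings side by side:

```python
# f-string format spec vs. the format() built-in: same mini-language.
x = 1234567
print(f"{x:,}")        # 1,234,567
print(format(x, ","))  # 1,234,567
```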


I think that holds true for just x, but not for, say,

f"Total Cost: {round(x / 1000000, 1):,}M"


The reason f-strings are not as popular is that they were added in 3.6 and for a while no one could use them for backward-compatibility reasons.

Now that 3.7 is the minimum supported version, they're safe to use.


F-strings are the reason I upgraded all my projects.


we also FINALLY got removeprefix() and removesuffix()! no more strange `string[:-len(suffix)]`
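A quick sketch (3.9+; the filename is made up):

```python
# removesuffix/removeprefix only strip when the affix actually matches.
name = "report_2022.csv"
print(name.removesuffix(".csv"))     # report_2022
print(name.removeprefix("report_"))  # 2022.csv
print(name.removesuffix(".txt"))     # report_2022.csv (unchanged)
```

Unlike str.strip(), which removes a *set* of characters, these treat the argument as a literal prefix/suffix.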


f-strings are so incredibly handy!


Favorite thing I learned recently was that instead of doing

    print(f'some_var: {some_var}')
one can just do

    print(f'{some_var=}')
What’s the next cool thing I should know about f-strings?


In case anyone is wondering how they missed this: The feature was added in Python 3.8, so it wasn't part of the original f-strings feature set.

https://docs.python.org/3/whatsnew/3.8.html#f-strings-suppor...


I hadn't come across f'{some_var=}' before. Output looks like:

  some_var='abcd'
Great find!


It's even more useful for complex expressions e.g.

  f'{f(x, y)=}' -> 'f(x, y)=5'


Wow, didn't know this one (just added to my lecture notes!) thanks


    # Instead of doing this
    if len(numbers) == 0:
        print("The numbers list is empty")

    # Many of us do this
    if not numbers:
        print("The numbers list is empty")
I tend to write the first form because it is how I think, and I'm writing what I think. "If this list is empty, then..." I suppose a list is only falsy if it is empty. I used to exploit Python's truthyness / falsyness a lot more, but now I like to be more explicit. Ultimately it doesn't matter much, making a single line a tiny bit longer is rarely the most important issue in a project.


The second case is preferable if numbers is an Optional[List], which is usually the case when passing lists around as arguments.

  from typing import List, Optional

  def custom_sum(numbers: Optional[List[int]] = None) -> int:
    if not numbers:
      return 0  # covers both cases

    return sum(numbers)
This can handle both cases - if numbers is None, or numbers is []


If you use the technique more generally, you need to watch out for the first case:

  bool(0) -> False
  bool(1) -> True
  bool(str('0')) -> True
  bool(str('')) -> False
  bool(None) -> False
If you are checking that a variable is set bool(some_var) doesn't work for 0.


Wouldn't it be easier to say

    if numbers == []:
       print ("The numbers list is empty")
if your intent is to express "is the list empty?" in the condition?


That condition would fail on non-list sequences, like `numbers == (1,2,3)`; the `len` version makes the code more permissive.


I often use the same if len() construct.

It will fail for non-list inputs, which is another reason it can be handy if you want to clarify that you’re enforcing a list.


> I often use the same if len() construct. It will fail for non-list inputs, which is another reason it can be handy if you want to clarify that you’re enforcing a list.

len will work for strings, tuples, dicts, sets, plus all kinds of custom classes that implement the __len__ magic method.


Ah yes, that's another reason I do it, but I failed to mention it. If you see "if not x:" then you're left wondering what x is (a boolean presumably), but "if len(x) == 0:" lets you know you're checking to see if a container is empty.


If you want to do type-checking, I don't think this a good way to do it. As a sibling comment points out, it doesn't work all that well since many objects respond to `len()`. I think it would be preferable to add type annotations and use mypy to check them or perhaps, in rare circumstances, do an isinstance check.


Alas, I find this suggestion unpersuasive for the use I have in mind. I get that there would be cases it does apply to.

I just want to enforce that the object acts like a list, mainly versus a float, int, or boolean. Tuples are definitely 100% ok, and if someone wants to give me a set, that's almost certainly OK too, in the use cases I'm considering. And so is a numpy vector!

Often these objects are held in a container (a dict or class variable), so type annotations would be clumsy as far as I know. Wouldn't I need to create a separate object to enforce the annotation?

And the type annotation would need to allow tuples and all flavors of lists or numpy arrays.

On the other end of the scale, I don't think either of us wants to use isinstance checks!

My primary intent isn't to be bulletproof, it's to show intent. The main flaw of using len() in this way is that it accepts strings.


I don't see how using len() this way is any different than doing an isinstance check. You're essentially saying, "If this object doesn't respond to len(), raise a TypeError."


> I don't see how using len() this way is any different than doing an isinstance check.

It's a protocol (structural) runtime type check, not a class (nominal) runtime type check.


Perhaps true but misses the point. If you’re using len() for type checking, it’s essentially no different than doing isinstance(items, Sized). The latter explicitly calls out what you’re doing and makes it clear, to me, that it’s not really correct, in that it doesn’t show your intent, which is to enforce a sequence of items in a container, not that the object has a size.


Tuple, list, and numpy array are all different isinstance()’s, so I’d have to call them all out. len() avoids that.


This is incorrect, because you can generally do this for sequence types:

    isinstance(items, Sequence)
Or perhaps:

    isinstance(items, Iterable)  # works with numpy arrays
Or, equivalent to using len() for type checking, you could do:

    isinstance(items, Sized)


“supports len()” and “is a Sequence” are fairly reliably equivalent, and using len() fails fast in many cases (e.g., a potentially infinite iterable) where iterating over it would not, which is desirable.

Type checking via mypy, pyright, etc., is great, but in a library (as opposed to an app) you can't count on consumers doing that, and isinstance() checks don't support duck-typing.


I wish `any()` returned the first truthy item in an iterable instead of `True`.


The itertools documentation¹ includes a recipe to do just that, and links to the excellent more-itertools² package which bundles up a heap of recipes that make iterators even better. This specific case is handled by first_true² in more-itertools.

¹ https://docs.python.org/3/library/itertools.html#itertools-r...

² https://more-itertools.readthedocs.io/en/stable/api.html#mor...


I think the first_true option is great and I highly recommend more-itertools, but if you’re looking for a built-in alternative I usually use

  next((i for i in iterable if is_thing(i)), default)
It’s slightly more work than any but is pretty much all I need when I need to handle whatever values are returned from the any.


You can now get the first item by abusing the walrus operator, e.g. `any((z := x) for x in [3,5,7] if x % 3)` -- afterwards, z will have retained the first value for which `x % 3` was truthy.


In MAGMA, a computer algebra programming language, there is an “exists” statement which takes a comprehension and evaluates to true if any item is true, something like exists{x % 2 == 0 for x in numbers}. But it also has an extra parameter which can bind the value of the first item satisfying the condition, something like

If exists(y){x : x % 2 == 0 for x in numbers}: Print y

There is also a “for all” statement which can return the first offender which does not satisfy the condition.

It’s an awkward but incredibly useful language choice - it would be interesting to see it explored more in mainstream languages.


`next((x for x in xs if x), False)` is pretty concise


Sounds like coalesce() from SQL.


Similar, though a more perfect analog to coalesce() would return the first non-None item.


I like.


I'm extremely surprised to see `chr` and `ord` in the "You likely don't need these" section, and that the author has "never really found a use for them in [his] code". I guess he's never done any reverse engineering or NLP work?


I've worked with Unicode and ASCII conversion directly before in Python, and even then I didn't need these two. There is `str.translate`, which requires ordinals, aka Unicode code point integers, which you would use `ord(str)` for. But then there's also `str.maketrans` which does this 'low-level' fiddling for you: https://docs.python.org/3/library/stdtypes.html#str.maketran... . So even then, arguably a prime use case for `ord` and `chr`, I was able to forego them and work a 'level higher'.
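To illustrate working that level higher, a tiny sketch (the character mapping is made up):

```python
# str.maketrans builds the code point table internally,
# so you never touch ord() or chr() yourself.
table = str.maketrans({"a": "4", "e": "3"})
print("release".translate(table))  # r3l34s3
```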


I guess the author has never done any (two extremely specific subfields) work? Most people don’t need to convert characters into numbers, there’s nothing overly surprising about that…


... sure.

But most Python programmers don't do reverse engineering or NLP work either.


I'll probably get a lot of flak for this, but I find myself reaching for locals() a lot more than you'd think! It's great for prototyping, but also nice when you want to generalize things like filling a data class from kwargs, then an environment variable if that didn't work, then a config file if that didn't work, then a default if that didn't work. Probably not idiomatic, but reads simply enough and pretty bulletproof, especially considering the next easiest way is something quite dangerous like eval().

Happy to be proven wrong and learn something though!


I personally wouldn't let any usage of locals() through code review, except perhaps in some extremely rare circumstance.

Do you have an example of "filling a data class from kwargs"?


A dataclass is just a class that does little more than fill internal attributes (lots of self.x = "foo"). To fill a dataclass from kwargs, instead of:

  def __init__(self, x=None, y=None):
    self.x = x
    self.y = y
You can do:

  def __init__(self, **kwargs):
    self.__dict__.update(kwargs) # update() takes a mapping, so plain kwargs is correct here
With this, the following code works with both:

  myObj = MyClass(x=1, y=2)
Now, you might say "but with the kwargs method you can add _ANY_ attribute not just x and y!"

That's precisely the point. When you're still iterating on the ergonomics, you want the ability to add new things without repeating a bunch of boilerplate. Once you've figured out what those look like, you can and should go back and set the names in stone.


I was looking for an example of using locals() to "fill a data class from kwargs" or something similar to that. The example here doesn't use locals().

That aside, I generally wouldn't use the kwargs approach shown in this example either. I'd use https://docs.python.org/3/library/dataclasses.html or https://www.attrs.org/ instead.


What kind of style/idiom is this? And in what industry/space do you usually see this practice? Is it common? Just genuinely curious because my Python is only used for scripting/pandas.


Concretely, why?


This might be the flak you're expecting, but when I see locals() used in production code I've had problems refactoring it. I don't see how it's different from "from foo import *". I appreciate the terseness, but looking at the code, or even with the IDE's help, I can't really see what's expected (especially if I'm looking at someone else's code). If I move things around I'll get bit at runtime--perhaps in a corner case that doesn't get caught the first time you run it.

Maybe this is the use-case you're talking about, but when I have something super generic I usually have clear and careful error handling inside. It kind of defeats the terseness of using locals(), but the goal is more about flexibility.


Here's a locals() example from apsw:

  # You can use local variables as the dictionary
  title="..."
  isbn="...."
  cursor.execute(sql, locals())
In the pre-dataclass / pre-namedtuple days I would use:

  class Spam:
    def __init__(self, a=1, b=2, c=3, d="four", .. lots args ..):
      self.__dict__.update(locals())
      del self.self
in prototype code. But that wouldn't go into production. (I wouldn't put the namedtuple version in production either.)


I don’t necessarily want to encourage this behavior but at the same time if it works and is “pretty bulletproof” in your deployments then I’d say more power to you.


None of those sound like use cases for locals. Why not **kwargs for the first example?


Can't set default arguments with kwargs. See my example at https://news.ycombinator.com/item?id=30623987 .


I’d love to see a high level example of this too, as I’m not quite sure I get the use case.


Share a link, I'm curious


Any other .NET folks out there that have accidentally misused the any() builtin when doing something in Python? About 2-3 times (with gaps of a year or so) I have accidentally used Python's any(some_collection) ("is there any element of some_collection that is truthy") where in .NET I would use someCollection.Any() ("is someCollection non-empty").


Wouldn't `Count` ( https://docs.microsoft.com/en-us/dotnet/api/system.collectio... ) be the preferred way to check if a collection has values? `someCollection.Any()` is the same as Python's `any`


I didn't mean specifically ICollection, maybe I just used the wrong word (probably "enumerable" or "sequence" is better?). But I was referring to this method: https://docs.microsoft.com/en-us/dotnet/api/system.linq.enum...

> Determines whether any element of a sequence exists or satisfies a condition.


See also "Understanding all of Python, through its builtins" https://sadh.life/post/builtins/


I love these kinds of “programming life hacks”, but I feel like Python is full of “did you know you could …” articles, which I don’t see so much for other languages.

Since I work in data science I would love to read more “how to solve typical problems elegantly using base …” for both R and Julia.

Is it a community or language thing? E.g, Python comes with more tools to do clever solutions, or is it something else? (Obviously the large user base helps)


Python’s user base is growing more each year than many other languages’ total user base. I’d guess more people will start learning Python this month than the total number of Fortran, COBOL, and Lisp programmers who have ever lived. People who don’t even identify as programmers use Python. It’s just a different world from most languages in this respect. (Although R probably has a lot of users in this category doing various data analysis.)


I got confused at the very start:

> What’s the best way to learn the functions we’ll need day-to-day like enumerate and range?

Day to day? I don’t think I’ve used either, or needed either, over the last several thousand lines of Python I’ve written. Why would those functions be particularly useful?


Funny, I use them quite often.

Enumerate will let you iterate anything iterable while also giving you your current index.

    for idx, item in enumerate(foo):
        if idx > 100:
            break
        otherwise()
I use the above example all the time to pluck the first few items from a generator when debugging, for instance. A generator is something that hasn’t been fully actualized; it might be an infinite list or a stream, for example.

Logging is also fun:

    for idx, item in enumerate(thing):
        logger.info(f'item {idx}: {item!r}')
        yield item
This could be used to wrap an existing iterator to just log each element with a counter prefixed. Also a handy debug tool.

range() is useful when you want an iterable of N size.

    for i in range(num_tasks_to_create):
        foo()
It can be used in this way to “do this thing N times”

Or you can use it other ways like:

    if year in range(2000, 2020):
tl;dr I can think of a half dozen ways to use either!


Check out more-itertools which has things like take, tail and a bunch of other nice functions for working with iterators if you haven’t. Would allow you to do something like

  for el in take(100, iter):
    print(el)
Plus there’s a bunch of other things in there for windowing, and for finding out whether or where there are empty stretches in the generator.

https://more-itertools.readthedocs.io/


BTW you can get head and tail with plain old unpacking :)

    >>> head, *tail = range(3)
    >>> head
    0
    >>> tail
    [1, 2]
Granted, unpacking also coalesces lazy iterables (like range in this case) into lists, but if you aren't working with large/infinite iterables you should be alright.


I use this lib for this exact reason! Great suggestion. I wish the standard library had more functional functions like this. head/tail/take etc


It… does?

They’re called next, islice, and islice.

more_itertools is nice but none of those 3 is a compelling argument for it.
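e.g., the take case with stdlib only:

```python
from itertools import count, islice

# islice() takes the first n items of any iterator, even an infinite one.
first_five = list(islice(count(1), 5))
print(first_five)  # [1, 2, 3, 4, 5]
```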


As a variant of your logging example, start enumerate at 1 to get the current line numbers:

  for line_num, line in enumerate(open("filename.txt"), 1):
    if not line.strip():
      print(f"Empty line {line_num}.")


Aha, thanks, I do occasionally need an index.


Interesting site, I liked it. I learned about vars and I keep forgetting dir.

Only thing is that unvisited links are stylized as purple so I was slightly confused by the first three links I encountered.


partial() is really useful and often overlooked
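A minimal sketch (function names made up):

```python
from functools import partial

def power(base, exponent):
    return base ** exponent

# partial() pre-binds arguments to produce a new callable.
square = partial(power, exponent=2)
print(square(5))  # 25
```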


But it’s not a builtin. partial() is in the functools module.


I’m confused on this point, isn’t functools part of the standard library? Or has this changed?


Built-ins are available without import. Stdlib are available without separate install.


> Built-ins are available without import.

That distinction is a bit muddy:

    >>> import os
    >>> type(os.scandir)
    <class 'builtin_function_or_method'>


heapq exposes a min-heap by default, but there is a max-heap as well that, for some reason, is not exposed! Just do heapq._heapify_max
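Worth noting the _-prefixed helpers are private API and may change; the commonly suggested supported alternative is to negate the values:

```python
import heapq

# Max-heap behavior via the standard min-heap: store negated values.
nums = [3, 1, 4, 1, 5]
heap = [-n for n in nums]
heapq.heapify(heap)
largest = -heapq.heappop(heap)
print(largest)  # 5
```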



