
You are entirely correct. The massive investment in science (and the culture of valuing scientific knowledge) that started with the Cold War is coming to an end. It turns out that destruction is far easier than creation.

I was a PhD student in machine learning during the first Trump administration, and even then things were on very shaky ground. The Muslim ban alone hit really hard, and was a boon for research institutions outside the US. (Look at Canada's Google Brain branch, for instance.)

But, until recently, there was still the plausibility that the whole Trump thing was a flash in the pan. When Trump lost in 2020, there was a sigh of relief that science would continue in the US.

This is on top of plummeting educational attainment in the US and the as-yet-uncertain ramifications of students' widespread reliance on LLMs.

It is very difficult to imagine a path back to the good reputation we had in science.


> the compute on a phone is now good enough to do most things most users do on desktop.

Really, the compute on a phone has been good enough for at least a decade now, ever since we got USB-C. We're still largely doing on our phones and laptops the same things we were doing in 2005. I'm surprised it took this long.

I'm happy this is becoming a real thing. I hope they'll also allow the phone's screen to be used like a trackpad. It wouldn't be ideal, but there's no reason the touchscreen can't be a fully featured input device.

I fully agree with you on the wasted processing power -- I think we'll eventually head toward a model of having one computing device with a number of thin clients which are locally connected.


> I hope they'll also allow the phone's screen to be used like a trackpad. It wouldn't be ideal, but there's no reason the touchscreen can't be a fully featured input device.

I might have misunderstood, but do you mean as an input device attached to your desktop computer? KDE Connect has made that available out of the box for quite some time. (Although it's been a long time since I used it, and when I tested it just now, apparently I've somehow managed to break the input-processing half of the software on my desktop in the interim.)


Yes! I enjoy KDE Connect a lot for that :) With the phone being the computer, the latency can probably be made low enough that it just feels like a proper touchpad.

> We're still largely doing on our phones and laptops the same things we were doing in 2005. I'm surprised it took this long.

Approximately no-one was watching 4k feature-length videos on their phones in 2005, or playing ray traced 3d games on their laptops.

Sending plain text messages is pretty much the same as back then, yes. But these days I'm also taking high-resolution photos and videos and sharing them with others via my phone.

> I hope they'll also allow the phone's screen to be used like a trackpad.

Samsung's DeX already does that.

> I fully agree with you on the wasted processing power -- I think we'll eventually head toward a model of having one computing device with a number of thin clients which are locally connected.

Your own 'good enough' logic already suggests otherwise? Processors are still getting cheaper and better, so why not just duplicate them? Instead of having a dumb large screen (and keyboard) that you plug your phone into, it's not much extra cost to add some processing power to that screen and make it a full desktop PC.

If we do get to a 'thin client' world, it'll be because of the cloud, not because of connecting to our phones. Even today, most of what people do on their desktops can be done in the browser. So we'll likely see more of that.


> Approximately no-one was watching 4k feature-length videos on their phones in 2005, or playing ray traced 3d games on their laptops.

Do people really do this now? Watching a movie on my phone is so suboptimal I'd only consider it if I really have no other option. Holding it up for 2 hours, being stuck with that tiny screen, brrr.

I can imagine doing it on a plane ride when I'm not really interested in the movie and am just doing it to waste some time. But when it's a movie I'm really looking forward to, I'd want to really experience it. A VR headset does help here but a mobile device doesn't.


You position it vertically against something in bed and keep it close enough (half a meter) so that it's practically the same size as a TV that's 4-5 meters away, and you enjoy the pixels. I love doing this a few times a week when I'm going to sleep or just chilling.

Hmm, OK. For me a phone at 50 cm is way smaller than a TV, but mine is also not 5 m away. In bed I usually use my Meta Quest in lie-down mode.

We were watching videos and playing games on our laptops in 2005. Of course they mostly weren't 4K or raytraced, don't be silly.

The thin-client world is one that anticipates having fewer resources to make these excess chips. It's just speculation about what things will look like when we can't sustain what is unsustainable.


> We were watching videos and playing games on our laptops in 2005. Of course they mostly weren't 4K or raytraced, don't be silly.

The video comment was about phones. The raytracing was about laptops.

Yes, laptops were capable of playing DVDs in 2005. (But they weren't capable of playing much YouTube, because YouTube only started later that year. Streaming video was in its infancy.)

> It's just a speculation of what things will look like when we can't sustain what is unsustainable.

Huh? We are sitting on a giant ball of matter, and much of what's available in the crust is silicates. You mostly only need energy to turn rocks into computer chips. We get lots and lots of energy from the sun.

How is any of this unsustainable?

(And a few computer chips is all you save with the proposed approach. You still need to make just as many screens and batteries etc.)


Last time I used DeX, your phone did become a touchpad for the desktop when plugged into a monitor.

Yes it can; in fact, it can also become a keyboard.

One thing I'm kind of missing is that it doesn't seem to be able to become both at the same time on a device that has the screen space for that, like a tablet or the Z Fold series.


:D I avoid Samsung products, but I'm happy that at least exists. I hope it's not patented, that Google is able to put the same thing into Android, and that it's available in AOSP.

This concept has been floating around for a long time. I think Motorola was pitching it in 2012, and I'm sure confidential concepts in the same vein have been tried in the labs of most of the big players.

And there is a mutually understood degree of nuance. There is no space to consider every route of uncertainty or qualify every statement. You can say "the Earth is round" instead of "most of us agree that the Earth very very likely exists and is very likely to be round".

Can you quote the contents of this tweet for those of us without Twitter accounts?

You can still read the tweet without an account.

Just replace x.com with xcancel.com

I don't think you should be getting downvoted; you are right. I don't think it's "100x more", but more complicated rules require more resources dedicated to their management.

It's one of the reasons people push for UBI. Welfare programs waste a lot of money trying to make sure the "right" people are getting it; UBI just gets rid of the waste.

The solution is simple: make school lunch free for every student.


I love Rust, but this roughly lines up with my experience -- especially the rapid iteration. I tried things out with Bevy, but went back to Godot.

There are so many QoL things that would make Rust better for gamedev without revamping the language. Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev. But that's a really hard sell (and might be harder to implement than I imagine).


I wish more languages would lean into having a really permissive compiler that emits a lot of warnings. I have CI so I'm never going to actually merge anything that makes warnings. But when testing, just let me do whatever I want!

GHC has an -fdefer-type-errors option that lets you compile and run this code:

    a :: Int
    a = 'a'
    main = print "b"

Which obviously doesn't typecheck since 'a' is not an Int, but will run just fine since the value of `a` is not observed by this program. (If it were observed, -fdefer-type-errors guarantees that you get a runtime panic when it happens.) This basically gives you the no-types Python experience when iterating, then you clean it all up when you're done.

This would be even better in cases where it can be automatically fixed. Just like how `cargo clippy --fix` will automatically fix lint errors whenever it can, there's no reason it couldn't also add explicit coercions of numeric types for you.
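For instance (a hypothetical sketch -- clippy has no such fix today), a pass like that could mechanically insert the casts you'd otherwise type by hand:

    let frames: i32 = 42;
    // What you'd like to write (doesn't compile): let speed: f32 = frames * 0.5;
    // What a hypothetical auto-fix could rewrite it to:
    let speed: f32 = (frames as f32) * 0.5;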


> I wish more languages would lean into having a really permissive compiler that emits a lot of warnings. I have CI so I'm never going to actually merge anything that makes warnings. But when testing, just let me do whatever I want!

I’d go even further and say I wish my whole development stack had a switch I can use to say “I’m not done iterating on this idea yet, cool it with the warnings.”

Unused imports, I’m looking at you… stop bitching that I’m not using this import line simply because I commented out the line that uses it in order to test something.

Stop complaining about dead code just because I haven’t finished wiring it up yet, I just want to unit test it before I go that far.

Stop complaining about unreachable code because I put a quick early return line in this function so that I could mock it to chase down this other bug. I’ll get around to fixing it later, I’m trying to think!

In Rust I can go to lib.rs somewhere and #![allow(unused_imports, dead_code, unreachable_code)] and then remember to drop it by the time I get the branch ready for review, but that's more cumbersome than it ought to be. My whole IDE/build/other tooling should have a universal understanding of "this is a work in progress, please let me express my thoughts with minimal obstructions" mode.


On the other hand, I can't count how many times I've written some code like

    let iteration_1_id = 10;
    dostuff(iteration_1_id);
    let iteration_2_id = 11; // warning, unused variable!!
    dostuff(iteration_1_id);
and then spent 10 minutes debugging why iteration_2 wasn't working, when it would have been resolved instantly if I had paid attention to the warnings.


Yeah this is my absolute dream language. Something that lets you prototype as easily as Python but then compile as efficiently and safely as Rust. I thought Rust might actually fit the bill here and it is quite good but it's still far from easy to prototype in - lots of sharp edges with say modifying arrays while iterating, complex types, concurrency. Maybe Rust can be something like this with enough unsafe but I haven't tried. I've also been meaning to try more Typescript for this kind of thing.


You should give Julia a shot. That's basically it. You can start with super-dynamic code in a REPL and gradually hammer it into stricter and hyper-efficient code. It doesn't have a borrow checker, but it's expressive enough that you can write something similar as a package (see BorrowChecker.jl).


Yes. This is good writing on the topic: https://github.com/heyx3/Bplus.jl/blob/master/docs/!why.md


Unless you would like to AOT-deploy your code; then good luck using this third-party package with scarce documentation.

Or even enums, which are a joke in Julia.

Julia had so much potential, and such poor implementation.


Some Common Lisp implementations like SBCL have supported this style of development for many years. Everything is dynamically typed by default but as you specify more and more types the compiler uses them to make the generated code more efficient.


I quite like Common Lisp, but I don't believe any existing implementation gets you anywhere near the same level of compile-time safety. Maybe something like Typed Racket, but that's still only doing a fraction of what Rust does.


I think OCaml could be such a language, personally. It's like Rust-lite or a functional Go.


Xen and Wall St. folks use it.


Yeah, I tinkered for around a year with a Bevy competitor, Amethyst, until that project shut down. By now, I just don't think Rust is good for client-side or desktop game development.

In my book, Rust is good at moving runtime risk to compile-time pain and effort. For the space of C code running nuclear reactors, robots, and missiles, that's a good tradeoff.

For the space of making an enemy move in the other direction from the player in 80% of cases, except for that story choice, and also inverted, and spawning impossible enemies a dozen times if you killed that cute enemy over yonder, and... where the worst case is a crash and a revert to a save at the start of the level... less so.

And these are very regular requirements in a game, tbh.

And a lot of _very_silly_physics_exploits_ are safely typed float interactions going entirely nuts, btw. Type safety doesn't help there.


> Yeah, I tinkered for around a year with a Bevy competitor, Amethyst, until that project shut down. By now, I just don't think Rust is good for client-side or desktop game development.

I don't think your experience with Amethyst justifies your conclusion about the state of gamedev in Rust, especially given Amethyst's own take on Bevy [1, 2].

1: https://web.archive.org/web/20220719130541mp_/https://commun...

2: https://web.archive.org/web/20240202140023/https://amethyst....


> Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev.

C# is stricter about float vs. double for literals than Rust is, and the default in C# (double) is the opposite of the one you want for gamedev. That hasn't stopped Unity from gaining enormous market share. I don't think this is remotely near the top issue.


I have written a lot of C# and I would very much not want to use it for gamedev either. I can only speak for my own personal preference.


I used to hate the language, but statically typed GDScript feels like the perfect weight for indie development.


It is indeed great for creating a prototype. After that, one can gradually migrate to Rust to benefit from faster execution times. The Rust bindings are in pretty decent shape by now:

https://godot-rust.github.io/


Nowadays we have the luxury of LLMs to help migrate projects/code from one language to another. I would imagine a pipeline with Rust as an intermediate “compiled” step might be possible. LLM accuracy isn’t there yet, but I can dream.


It is not that complicated or time-consuming to do the transformation manually. On the contrary, it's even fun and good practice (but admittedly, I do have a rather conservative view on the matter).


Yeah I haven't really used it much but from what I've seen it's kind of what Python should have been. Looks way better than Lua too.


I like it better than Python now, but it's still got some quirks. The lack of structs and typed callables are the biggest holes right now, IMO, but you can work around those.


What numeric types typically need conversions?


The fact you need a usize specifically to index an array (and most collections) is pretty annoying.
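A contrived example of the friction (variable names made up):

    let data = vec![10, 20, 30];
    let i: u32 = 1;               // indices often arrive as u32/i32 from elsewhere
    // let x = data[i];           // error: Vec cannot be indexed by u32
    let x = data[i as usize];     // the cast you end up writing everywhere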


That's a feature, not an annoyance. We need to keep Rust like it is to preserve its core value delivery: robust software quality in exchange for development and compile-time pain. In other words, "the pain is moved from production to development"; you cannot have joy in both at the same time.

I'm not sure Rust should be promoted for building games; I prefer it being pushed for building mission-critical software.


This could be different in game dev, but in my last few years of writing Rust (outside of learning the language) I have very rarely needed to index any collection.

There is a very particular way Rust is supposed to be used, which is a negative on its own, but it will lead to a fulfilling and productive programming experience. (My opinion.) If you need to regularly index something, then you're using the language wrong.


I'm no game dev but I have had friends who do it professionally.

Long story short, yes, it's very different in game dev. It's very common to pre-allocate space for all your working data as large statically sized arrays, because dynamic allocation is bad for performance. Oftentimes the data gets organized in parallel arrays (https://en.wikipedia.org/wiki/Parallel_array) instead of in collections of structs. This can save a lot of memory (because the data gets packed more densely), be more cache-friendly, and make it much easier to make efficient use of SIMD instructions.

This is also fairly common in scientific computing (which is more my wheelhouse), and for the same reason: it's good for performance.
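A rough Rust sketch of the difference (type and field names made up for illustration):

    // Array-of-structs: each entity's fields sit next to each other in memory.
    struct EnemyAos { x: f32, y: f32, health: f32 }

    // "Parallel arrays" / struct-of-arrays: each field is packed contiguously,
    // so a loop over positions never pulls `health` into cache and is easier
    // for the compiler to auto-vectorize.
    struct EnemiesSoa {
        x: Vec<f32>,
        y: Vec<f32>,
        health: Vec<f32>,
    }

    impl EnemiesSoa {
        fn advance(&mut self, dx: f32) {
            for x in self.x.iter_mut() {
                *x += dx;
            }
        }
    }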


> Oftentimes the data gets organized in parallel arrays (https://en.wikipedia.org/wiki/Parallel_array) instead of in collections of structs. This can save a lot of memory (because the data gets packed more densely), be more cache-friendly, and make it much easier to make efficient use of SIMD instructions.

That seems like something that could very easily be turned into a compiler optimisation and enabled with something like an annotation. It would have some issues when calling across library boundaries (a lot like the handling of gradual types), but within a single codebase that'd be easy.


The underlying issue with game engine coding is that the problem is shaped in this way:

* Everything should be random access(because you want to have novel rulesets and interactions)

* It should also be fast to iterate over per-frame(since it's real-time)

* It should have some degree of late-binding so that you can reuse behaviors and assets and plug them together in various ways

* There are no ideal data structures to fulfill all of this across all types of scene, so you start hacking away at something good enough with what you have

* Pretty soon you have some notion of queries and optional caching and memory layouts to make specific iterations easier. Also it all changes when the hardware does.

* Congratulations, you are now the maintainer of a bespoke database engine

You can succeed at automating parts of it, but note that parent said "oftentimes", not "always". It's a treadmill of whack-a-mole engineering, just like every other optimizing compiler; the problem never fully generalizes into a right answer for all scenarios. And realistically, gamedevs probably haven't come close to maxing out what is possible in a systems-level sense of things since the 90's. Instead we have a few key algorithms that go really fast and then a muddle of glue for the rest of it.


It's not at all easy to implement as an optimisation, because it changes a lot of semantics, especially around references and pointers. It is something that you can implement using, e.g., Rust procedural macros, but it's far from transparent to switch between the two representations.

(It's also not always a win: it can work really well if you primarily operate on the 'columns', and on each column more or less once per update loop, but otherwise you can run into memory bandwidth limitations. For example, games with a lot of heavily interacting systems and an entity list that doesn't fit in cache will probably be better off with trying to load and update each entity exactly once per loop. Factorio is a good example of a game which is limited by this, though it is a bit of an outlier in terms of simulation size.)


Meh. I've tried "SIMD magic wand" tools before, and found them to be verschlimmbessern.

At least on the scientific computing side of things, having the way the code says the data is organized match the way the data is actually organized ends up being a lot easier in the long run than organizing it in a way that gives frontend developers warm fuzzies and then doing constant mental gymnastics to keep track of what the program is actually doing under the hood.

I think it's probably like sock knitting. People who do a lot of sock knitting tend to use double-pointed needles. They take some getting used to and look intimidating, though. So people who are just learning to knit socks tend to jump through all sorts of hoops and use clever tricks to allow them to continue using the same kind of knitting needles they're already used to. From there it can go two ways: either they get frustrated, decide sock knitting is not for them, and go back to knitting other things; or they get frustrated, decide magic loop is not for them, and learn how to use double-pointed needles.


Very much agree, and I love your analogy, but there is a third option: make a sock-knitting machine.


> verschlimmbessern

Thank you for this delightful word.


I'm not a game dev, but what's a straightforward way of adjusting some channel of a pixel at coordinate X,Y without indexing the underlying raster array? Iterators are fine when you want to perform some operation on every item in a collection but that is far from the only thing you ever might want to do with a collection.
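Concretely, the kind of CPU-side access I have in mind looks roughly like this (buffer layout and names made up):

    // RGBA8 image stored row-major in a flat byte buffer.
    fn set_channel(pixels: &mut [u8], width: usize, x: usize, y: usize, channel: usize, value: u8) {
        let idx = (y * width + x) * 4 + channel; // 4 bytes per pixel
        pixels[idx] = value;
    }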


Game dev here. If you're concerned about performance, the only answer to this is a pixel shader, as anything else involves either CPU-based rendering or a texture copy back and forth.


A compute shader could update some subset of pixels in a texture. It's on the programmer to prevent race conditions though. However that would again involve explicit indexing.

In general I think GP is correct. There is some subset of problems that absolutely requires indexing to express efficiently.


You can manipulate texture coordinate derivatives in order to just sample a subset of the whole texture on a pixel shader and only shade those pixels (basically the same as mipmapping, but you can have the "window" wherever you want really).

This is something you can't do on a compute shader, given you don't have access to the built-in derivative methods (building your own won't be cheaper either).

Still, if you want those changes to persist, a compute shader would be the way to go. You _can_ do it using a pixel shader but it really is less clean and more hacky.


That is true. Hadn't occurred to me because I'd had in mind pixel sorting stuff I did in the past where the fetches and stores aren't contiguous.

Interestingly enough the derivative functions are available to compute shaders as of SM 6.6. [0] Oddly SPIR-V only makes the associated opcodes [1] available to the fragment execution model for some reason. I'm not sure how something like DXVK handles that.

I'm not clear if the associated DXIL or SPIR-V opcodes are actually implemented in hardware. I couldn't immediately find anything relevant in the particular ISA I checked and I'm nowhere near motivated enough to go digging through the Mesa source code to see how the magic happens. Relevant because since you mentioned it I'm curious how much of a perf hit rolling your own is.

[0] https://microsoft.github.io/DirectX-Specs/d3d/HLSL_SM_6_6_De...

[1] https://registry.khronos.org/SPIR-V/specs/unified1/SPIRV.htm...


Huh wasn't aware of that. Nice.

About the performance question: during the frag shader phase, neighbouring pixels are already being tracked, so calling those is almost free. It would be difficult to match that performance in the compute phase.


That's just a matter of what's in cache. If your compute shader operates in coherent blocks it should generally be on par with the equivalent fragment shader. The potential exceptions are where access to dedicated hardware functionality is concerned.

What I'm curious about is if there's a hardware intrinsic that computes derivatives or if the implementation of those opcodes is generally in software.


I chose to focus on the fact the frag stage is already tracking those changes because at that point it's basically free. And you don't need to worry too much.

To answer your question, which is very pertinent, they seem to use different hardware accelerated mechanisms. In the compute stage, wave based derivatives are used, and you need to account for different lane counts between GPU architectures.

Understanding that now makes me believe you're right. But one needs to benchmark them to be sure.


You're right - I should have just said "shader" and left it at that.

> There is some subset of problems that absolutely requires indexing to express efficiently.

Sure. But it's almost certainly quicker to run a shader over them and ignore the values you don't want to operate on than it is to copy the data back, modify it in a safe bounds-checked array in Rust, and then copy it again.


> run a shader over them, and ignore the values you don't want to operate on

Use a compute shader. Run only as many invocations as you care about. Use explicit indexing in the shader to fetch and store.

Obviously that doesn't make sense if you're targeting 90% of the slots in the array. But if you're only targeting 10% or if the offsets aren't a monotonic sequence it will probably be more efficient - and it involves explicit indexing.


On the contrary, I find indices to be the most natural way to represent anything that resembles a graph in Rust. They allow you to sidestep the usual issues that arise with ownership and borrowing, particularly with mutability, by handing ownership to the collection and using indices to allow nodes to refer to one another. It's delightfully simple compared to the mess of Arc and RefCell that tends to result when one tries to apply patterns from languages that leave "shared XOR mutable" as the programmer's responsibility. That's not to say that Vec and usize are appropriate for the task, but Rust's type system can be used to do a lot better.
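A bare-bones sketch of the pattern with plain Vec and usize (in practice a newtype around the index buys more safety):

    struct Node {
        value: i32,
        neighbors: Vec<usize>, // indices into Graph::nodes instead of references
    }

    struct Graph {
        nodes: Vec<Node>,
    }

    impl Graph {
        fn add(&mut self, value: i32) -> usize {
            self.nodes.push(Node { value, neighbors: Vec::new() });
            self.nodes.len() - 1
        }

        fn connect(&mut self, a: usize, b: usize) {
            // No fight with the borrow checker: we never hold two &mut Node at once.
            self.nodes[a].neighbors.push(b);
            self.nodes[b].neighbors.push(a);
        }
    }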


This is getting downvoted, but it's kind of true. Indexing collections all the time usually means you're not using iterators enough. (Although iterators become very annoying for fallible code where you want to return a Result, so sometimes it's cleaner not to use them.)

However this problem does still come up in iterator contexts. For example Iterator::take takes a usize.


An iterator works if you're sequentially visiting every item in the collection, in the order they're stored. It's terrible if you need random access, though.

Concrete example: pulling a single item out of a zip file, which supports random access, is O(1). Pulling a single item out of a *.tar.gz file, which can only be accessed by iterating it, is O(N).


History lesson for the cheap seats in the back:

Compressed tars are terrible for random access because the compression occurs after the concatenation and so knows nothing about inner file metadata, but they're good for streaming and backups. Uncompressed tars are much better for random access. (Tar was used as a backup mechanism to tape -- hence the name, tape archive.)

Zips are terrible for streaming because their metadata is stored at the end, but are better for 1-pass creation and on-disk random access. (Remember that zip files and programs were created in an era of multiple floppy disk-based backups.)

When fast tar enumeration is desired, at the cost of compatibility and compression potential, it might be worth compressing files and then tarring them, when and if zipping alone isn't achieving enough compression and/or decompression performance. FUSE compressed-tar mounting gets really expensive with terabyte archives.


> compressing files and then taring them

Just use squashfs if that is the functionality that you need.


That works too if it's available.


In C++, random access iterators are a thing. Indeed, raw pointers satisfy the requirements of a random access iterator concept. Is that not the case in Rust?


While you maybe "shouldn't" be indexing collections often (which I also don't agree with; there is a reason we have more collections than linked lists, and lookup is important), even just getting the size of a collection, which is often very related to business logic, can be quite annoying.


For data that needs to be looked up, mostly I want a hashtable. Not always, but mostly. It's rare that I want to look up something by its position in a list.


The actual problem with this is how to add it without breaking type inference for literal numbers.


What I mean is, I want to be able to use i32/i64/u32/u64/f32/f64s interchangeably, including (and especially!) in libraries I don't own.

I'm usually working with positive values, and almost always with values within the range of integers f32 can safely represent (+- 16777216.0).

I want to be able to write `draw(x, y)` instead of `draw(x as u32, y as u32)`. I want to write "3" instead of "3.0". I want to stop writing "as".

It sounds silly, but it's enough to kill that gamedev flow loop. I'd love it if the Rust compiler could (optionally) do that work for me.



Please correct me if I'm wrong, but I don't think this would let me, say, pass an i32 returned from one method directly as an f64 argument in another method.


No, it would not. Even conversions using "as" are discouraged in favor of conversion traits such as From and TryFrom. Rust's goals of being explicit and correct are at odds with people wanting things to be immediately simple and easy to use.
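For illustration, the sanctioned conversions look roughly like this (a small sketch, nothing library-specific):

    let x: i32 = 3;
    let a: f64 = f64::from(x);              // infallible widening conversion
    let b: u32 = u32::try_from(x).unwrap(); // fallible; would fail at runtime for negative x
    let c: f32 = x as f32;                  // compiles, but is lossy for large values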


String conversions too


I can't tell if this is a serious suggestion, or if it's proposed in the same tone as "a modest proposal".

In case you are serious: this is a pretty horrifying proposal. Humans can get microchipped, but the chips cost money, are very painful to administer, and, importantly, are RFID only, i.e. not useful for finding one's own children.


I am a fan of Mr. Swift; the suggestion was not serious, but my musings about ICE's sadism are.


I agree, the API change was the last nail in the coffin, honestly. Reddit was always bad for several reasons, but it always had enough smart people to place it alongside StackExchange and Hacker News. But 2022 and 2023 really saw a mass exodus of expertise from Reddit (and Twitter, etc.).

Lots of smart people left for Mastodon, at least.


My Reddit account, and, since I was the founder, also the sub r/AmazonRedshift, was banned in September 2023 by an automated system.

The sub was working normally, I posted about the Amazon Redshift Serverless PDF, and then Reddit began behaving oddly.

After some investigation, and some guesswork, I concluded my account had been silently shadow-banned, and the sub banned (and then shortly after, deleted).

Two years of posts and the sub disappeared -- instantly, abruptly, without warning, reason, appeal process, or notification -- and Reddit, by shadowbanning, was trying to lead me into thinking my account was still active.

I used a utility to scramble (you can't delete) all the posts I'd ever made to Reddit, and closed my account.

(There's a bit of a happy ending: about a year later, someone who was doing work with Reddit and had an archive of all my posts sent them to me as a thank you. I processed the JSON and put them up on my Redshift site.)


I like how you both responded to GP's roast of "I agree" comments by saying "I agree". Maybe that was intentional.

Anyway, I agree. I used Reddit fairly regularly before the API change, though I was already starting to get disenchanted with the political hive mind by that point. The death of the FOSS third-party clients that made the platform bearable to use was the straw that broke the camel's back for me. I've completely left it behind since.


The complaint was about comments that are functionally an overly large upvote, not comments that have the word agree in them.


I know, but it's still funny.


I hate that they took the Apollo app from us.


In this case, per the link:

> went sleuthing and quickly found a PDF from the campaign site with the font embedded

So, the PDF had the font Xband Rough embedded inside of it.


But was Xband created by copying the math from FF Confidential?


I wonder how much of it can be blamed on the Vision Pro being nothing more than a big wobbly iPod Touch, instead of a real computer.

For me, a Vision Pro would have been fantastically useful if it were a little bit more like macOS (or Android) and shipped with a native, real terminal that I could run things on. $3500 is suddenly a lot easier to swallow if I could think of it as 20 monitors to run terminals on.


The better analogy is that it's the Apple TV interface for your face. All the buttons are ginormous, the options are barebones, the information density is crap.

