Hacker News | CrLf's comments

If this were an actual person performing, I would have smiled all the way to the end. Since it's AI-generated, I feel no emotion towards it and only listened for a few seconds. It's technically interesting, though.


Seriously, the "creatives are screwed" narrative has fallen apart for me because the stuff made with AI has proven to be worthless.

Why is it worthless? Because the point of art is to communicate or convey something to other people, and the AI has no idea what it's conveying because it's not human.

A few words or a sequence of sounds can be enough to transfer a great deal of feeling and meaning, because we all run roughly the same software and as a result can generate the same output from a little bit of input. This is all done by looking inside and externalising it: someone feels something and makes a song from it, and that song can be used to regenerate those feelings in other people.

The current AI tech doesn't have a way to do that, because it doesn't have a way to look inside. At best, it can imitate things within some context, but the output doesn't have any meaning at all. The most successful AI content was maybe the "Pope wearing Balenciaga" image, but that wasn't because the AI thought it meant something; it was because someone looked inside and thought it could be interesting.

So no, AI isn't taking over the creative process. AI is taking over the mechanical part of it only, that is the part where the artist traditionally had to master a method of production or an instrument.

The AI evangelists keep pushing short videos or drawings that look "professional" and claiming that Hollywood is done, artists are screwed, etc., but those are worthless outside of the context that AI made them. No one is interested in paying, or even spending time, to consume this content; it's extremely dull.


Get a room in any hotel run by one of the large chains - Accor, Hilton, IHG...

On the wall, you will find an Obligatory Art. Sometimes it's just a canvas with 3-4 stripes of paint: you can imagine a purely mechanical process for churning these out; a conveyor belt with brushes hanging over it, perhaps. Other times it's a little more creative. Each room is slightly different. You can also sometimes see these in cheap home decor shops. It may not be much, but it does the job - it really does make the space more pleasant than just blank walls would be.

There are a lot of rooms to fill. Someone has to make all these. It may not be all that creative, but it sure beats working in, say, a produce packing plant. Meanwhile, it's hard to make a living in art - some are wildly successful, yes, but the tip of that pyramid is very small and getting there takes as much luck as skill; and there are a lot of people further down the pyramid who also need to eat while waiting for their big break.

Those are the jobs at risk from generative AI in its current state.


Sure, worthless filler images are something that AI can do.

But these jobs never existed in the first place; that generic art is done by contractors who charge for the materials.


I fully agree, but that’s not what all music is. Most commercial music is pure craft created by expensive professionals, whom the music corporations would be very happy to swap for expendable stand-ins and cheap AI models.

It boils down to the economic model and the financial and political choices like in every creative industry.

Regarding potential displacement, I would apply the stock photography theory to any creative industry. Ask yourself: is what I do in my creative endeavor the equivalent of stock content for the visual imaging industry? If the answer is yes, you might want to future-proof your craft. If the answer is no (as in, your art is more than a simple soulless piece of easily digested and quantity-oriented content) then you will be fine in the long run after the current unsustainable hype cycle dies out.


I think even people who are writing songs for cash are actually looking inside; they've just perfected a method of doing it and can do it all the time. AI wouldn't be able to do that unless it is designed to work like a human and has human experience.

The stock photography stuff is either documenting events or providing low-effort illustration for low-effort productions. I guess AI can be good at churning out Apple images for low-effort Apple news.


> Because the point of art was to communicate or convey something with other people and the AI has no idea because its not human.

This is mixing up art with the art industry. Artists will struggle just like copywriters are struggling after the arrival of LLMs. Not everything in the art industry is trying to break new artistic ground or communicate some deep emotion to the listener. For much of the industry, "good enough" will suffice if it's 10x cheaper.


I disagree; the industry participants still need this ability to look inside when doing their work, even if they do routine/mundane work. That's how you get people who are better or worse at their jobs.

Maybe with the exception of strictly technical work, like people who remove backgrounds in images or calibrate instruments. Those people are screwed, yes.


What if you didn't know either way? Would you refuse to enjoy a song, until you were absolutely sure it was performed by a human?


It will depend on the person, but I think generally speaking, a song is but one aspect of an artist / the art of music; if you're a mass consumer that just has something playing in the background, it probably doesn't make a jot of difference (consider also "muzak" / elevator music), but if you're more of an active listener you may look into and enjoy the story behind the music and the artist as well.

Personally I think knowing the story behind music makes it better. The music isn't to everyone's taste, but for example Devin Townsend's wiki page / story is a trip: https://en.wikipedia.org/wiki/Devin_Townsend


Completely agree, but it seems to me there's a difference between "I like this, so I'm going to find out everything I can about who made it and why because I will enjoy it even more that way" and "I can't possibly like this unless I know who made it and why".


I think we kind of already have the answer to this one. Commercial music that uses computers to enhance the song or singer is already prevalent. Even straight AI-assisted track generation probably already happens. People don't tend to mind it as long as they don't know. Once they do, many still don't care, but some feel betrayed and think honesty and humanity are part of the art. I haven't heard of a single person who refuses to enjoy songs without knowing how they were produced, though, and I strongly doubt the parent would either.


Would you refuse to enjoy food if it didn't come from a reputable source? Of course you would. You don't just eat shit at random.

If a friend recited a poem, would it matter to you if they read it off the Internet or composed it themselves? Of course it would.

If someone tells you they love you, does it matter if they are a robot or an honest human or a con-artist human catfishing you? YES, THAT MATTERS TO YOU. Yes you "refuse to enjoy" things that have suspicious sources.


I don't think those examples are equivalent at all. Food isn't just about taste, but could literally kill you. Not many people have friends who recite poetry. And as for love, that's something that takes years to meaningfully develop.

But, I barely know any musical artist's name today. Most music is just something to listen to at the gym. It's pleasant enough, but I don't dig in to know who is singing at all. Every source is equivalent to me, as long as it's pleasant to the ear.

Perhaps you're different. The only question is, which attitude is more prevalent?


That's a poignant question, but with an easy answer: if I didn't know, I'd probably enjoy it up to its imperfections. But I'd feel defrauded once I found out.

Like so many people felt defrauded when they discovered that the Milli Vanilli leads didn't actually sing, and that wasn't even AI. https://en.wikipedia.org/wiki/Milli_Vanilli

Edit: I might add that I already suspect any illustration that even superficially looks like it might have been generated by AI. This has ruined the enjoyment of so many people's artwork whose style has been co-opted by AI.


This is the thing. A live performance is always gonna be different from something pre-recorded or AI-generated. That's why music lovers like to go to live concerts and performances.


https://www.youtube.com/watch?v=scu8bz1yM4k

https://www.youtube.com/watch?v=IvUU8joBb1Q

https://www.youtube.com/watch?v=yoAbXwr3qkg

https://mikuexpo.com/

That's the thing about art: when people make general statements like this one, others will go on to create things purely to see what's outside the box.


I think people will gladly engage on a superficial level, but refuse to engage more deeply if that makes sense?


What if you didn't know? You wouldn't know whether you like it until you've learned more about the artist?


AI Audio isn't far enough along to be convincing in song (at least in this song, anyway).

This song sounds like a disturbing uncanny-valley rendition of a slow Phoebe Bridgers song performed by Taylor Swift using a broken Auto-Tune.

While I think it's technically impressive that this exists at all, I think this tune is still at the stage of "Pope in a puffer jacket".


Yes, that's how communication between humans works. It is context dependent.


For me it's the opposite. This is interesting in its absurdity, the mistakes it makes, how it tries and somewhat fails to convey emotion etc.

Someone making a "proper" song where they just sang these words would be quite boring, then I'd rather spend my time checking out other music.


I'd be curious to hear your reasoning behind this?

Why does this being computer generated ruin it for you?

Auto-tune has been around since 1997 so it's not like computers have not been a big part of a lot of music we hear every day.


Because the cause of a person singing is their mental states (desire, emotion, intention, etc.) and the cause of this generation of audio is that the words are associated with some backcatalogue of previous music.

Listening to songs, as speaking with people, is in large part about enjoying the causes of the song rather than the mere variations in pitch.

Beethoven's 5th even, purely instrumental, is enjoyable because of how the composer is clearly playing with you.

To generate pitch variations identical to Beethoven's 5th makes this an illusion, one hard to sustain if you know it's an illusion. It isn't an illusion in the case of the 5th itself: Beethoven really had those desires.


The cause of many popular performers singing is primarily their desire to make money. It's not even some kind of closely held secret. And they still sell albums by the millions.

What you describe certainly exists, but it's not the entirety of art, and I would argue that at this point it's not even most of art.


Meanwhile, Hatsune Miku remains popular. There are even concerts.


Hatsune Miku has more in common with Gorillaz than AI slop.


I did consider mentioning Gorillaz, but they are voiced by actual people, whereas Miku is software synthesis. The suggestion was that "the cause of a person singing is their mental states", but there is no person singing here and therefore no mental states in the singer - it's just Vocaloid.

Meanwhile, "To what extent can a piece of art be a thing that is of interest in itself, divorced from its creator, context, or any representation of anything in particular?" is absolutely a valid area for people to explore and one that artists are exploring all the time. There is now one more tool to play with in the toolbox.

I heard the exact same objections to the modtracker scene three decades ago - "it's just computer generated slop, I'm not interested unless it's a real person performing on real instruments". I maintain that not only was it a perfectly valid mode of expression then, but tools like Ableton grew in part from those experiences and are an integral part of much - most? - music now.


For me all art I enjoy has some aspect of connection to someone that's sharing my human experience.

If we get AGI, I could imagine feeling something towards the art such an entity creates, since a big part of the human experience that we would probably share with an AGI is inescapable death.

But for today's "AI" generated music, I feel the same towards it as I would towards the random step function output of a given tool in Ableton - sounds cool, now what can we do with that to make it into music?


> sounds cool, now what can we do with that to make it into music?

So a human using the tools of sound production is what transforms the function output into music. (Please let me know if I’m misunderstanding you).

I think I see what you’re saying, but that’s already happened here hasn’t it? I mean, it's not as though an AI made the decision to generate this all by itself, a human had an idea to create this piece and wrote a prompt which created this output.

The order of events is reversed from your Ableton example, but I would contend that this kind of production is no less musical than what someone could create using a DAW, simply that the tools are more accessible

(and I presume there is less direct control over what the end result is going to sound like, but the same could be said of conducting an orchestra versus playing a piano.)

Eta: For example, some people in this thread have complained that the AI generated voice falls into the uncanny valley. I agree, and I think that’s part of the art here.


> but the same could be said of conducting an orchestra versus playing a piano.

Conducting an orchestra is an important role but the music is mostly a result of first of all the composer, and then the conductor / arranger's interpretation as well as the skill of the musicians. I really don't see the similarity to a human input of "GNU license, sad, jazzy." The resolution is just way too rough.

In fact, imagine comparing the experience of reading Snow Crash, to reading the sentence, "Cyberpunk story with sci fi elements, VR universe, pizza delivery guy with samurai sword."


I’m not meaning to equate the level of effort or skill involved. And I’ll grant that I know very little about music composition beyond my experience in Middle School band in which the musicians’ personality and skill presents a significant constraint for the conductor/arranger :)

I would readily compare the experience of reading Snow Crash (one of the first SciFi books I read of my own volition) to the output that a LLM may produce from such a prompt. My iPhone informs me that I’ve spent nearly 10 hours playing with Characters.ai in which SF storytelling characters are my favorite to interact with. When I first read Snow Crash I felt like “finally, an author that understands that part of the story that _I’m_ interested in!” and my experiences of AI driven creative writing has felt similar. Certainly it feels less “magical” since I’m aware that I’m customizing the author to my personal taste - is that “magic” of feeling connected to the artist *the* art?


I'm not here to yuck your yum; if you're having fun with it, by all means. If you're getting output from characters.ai that is on par with a Neal Stephenson novel, I would really enjoy seeing that and learning how I could do the same. That sounds very fun.


When I go up to the self-service booth in McDonald's, go through the menu and select a portion of McNuggets, food gets made. The food was only made because of my actions; no nuggets would have been made if I specifically didn't want them. But to say I am the one who cooked them would be absurd.

Like, if Trent Reznor had produced "Hurt" not by putting his doubts, self-loathing and pain into words and music, but by typing "sad, trending on artstation" into a console and then heading for lunch, I don't think it would be anywhere near as meaningful, even if it was note for note, beat for beat the same output.


The meaning the listener imparts to the song is constructed in the listener's head, a combination of the song and the listener's own knowledge, experiences, personality and emotions.

I knew nothing about Trent Reznor the first time I heard "Hurt". Often when a song is heard on the radio - perhaps a sentence that dates me, but even so - there is no explanation of where it came from or even what it's called to accompany it; or perhaps there may have been, but the listener wasn't paying attention until after they realised they liked what they were hearing; indeed, there used to be an entire industry for solving the problem of "I heard a song I like and want to know more about it, or at the very least find out what it's called so I can hear it again".

When I first heard "Hurt", it resonated because of how those sounds interacted with my own experience. Everything else came after. Had those exact same sounds any other origin, that first experience would not have been affected - I would have had no way to know.


> The meaning the listener imparts to the song is constructed in the listener's head, a combination of the song and the listener's own knowledge, experiences, personality and emotions.

This is reductionist, IMO. The equivalent seems to me to be "the meaning the reader imparts to the words is constructed in the reader's head...", but clearly the vast majority of the meaning of the words is derived from the writer's intention. Of course that can be misinterpreted, reinterpreted, co-opted, etc., but regardless, it doesn't mean the author can simply be ignored, or that a pseudo-random generation can be treated the same as a human one.


> clearly the vast majority of the meaning of the words is derived from the writer's intention

This is not clear at all.

Written words are just marks on a surface. Whoever made them may have intended to convey something, but they made them and walked away; they are now absent, taking their intents with them; only the marks remain, and those are not sentient or even alive - they contain no intent. There is nothing about the patterns left behind in themselves that makes them different from any other patterns the universe contains as far as the universe is concerned. If handed a set of marks on paper with no other information, you have no way of knowing for sure how they came to be. You could guess, you could be super confident, but you couldn't be /certain/.

If a reader later comes along who happens to have studied the same pattern-codes as the creator of the marks, however, seeing them will make that reader recall the associations and build up meanings in their head. These may or may not be the same meanings the writer intended to convey.

Children learning to read understand this very well - reading is /hard/, associating meaning with code is /hard/, decoding similar meanings to everyone else is /hard/, aligning the associations spoken words trigger in your head with others around you takes /study/ and /effort/, even realising that you end up with different meaning in your head when presented with some symbol to what others get, though frequent, takes deliberate effort. Reading comprehension questions in elementary school tests are there for a reason.

To a well-practiced reader, the process is natural and seamless, and feels like telepathy; it /feels like/ meaning has been transferred directly from the writer to the reader. But it is not that, and many problems arise when people forget this.

Once you have learned to seamlessly decode symbols into meaning in your head and the process is fully automatic, symbols you encounter in the world will seamlessly trigger meanings in your head regardless of their origin.

This is the human condition: we are all locked inside our own heads, and you can't take a piece of yourself and place it inside another directly. The best we can do is shout into the void and hope something similar inside the person across from you resonates; but it turns out that human shouts are not the only thing that can make those strings vibrate.

When we encounter combinations of symbols in the world that trigger complex meaning inside us, we /expect/ them to have an author who intended to convey something like that meaning, because in the entire history of human experience to date, the only other way for such things to appear in the world has been incredibly unlikely coincidence, invariably accompanied by context that makes it clear it is coincidence. (A certain proportion of social media content is, in fact, people sharing instances of these coincidences!)

However, this is an assumption that we make, and the world is rapidly changing in ways that mean it may, going forward, no longer be a valid one.

More and more, we will encounter combinations of symbols in the world that trigger complex meanings in our heads but originate from no human intent beyond "I need a combination of symbols that will trigger these meanings in the heads of those who encounter them", if even that much is explicit. We are rapidly improving the processes that produce them, and one of the ways we are improving them is removing tells. You will see the symbols, they will decode into stories in your head, and you won't know for sure if a human author was involved or not.

It is vital, going forward, that we all remember that symbols triggering satisfying meaning in our heads does not automatically imply deliberate human intent behind that meaning, lest we be entirely unprepared for the brave new oncoming world, just the way a chunk of the populace was unprepared for Nigerian prince emails in their heyday, or is still unprepared for telephone calls from "internet tech support" right now.


Art is primarily a means for one human to convey emotions to another human, but for something to be art, the artist must also have invested some skill/effort into the artwork(1).

AI-generated art may have a bit of the former (assuming the human had enough control over the details of the final output), but has practically none of the latter.

Hence, AI-generated output is not art. But art can be produced using AI tools somewhere in the process.

(1) When I look at art produced by one of those "artists" that commission the actual work to someone else, it's similar (I don't recognize the "artist" as the human I'm connecting to, ideas are a dime a dozen). However, it's still art because I can connect with the anonymous human which actually implemented it.


In reality it will not be exploited like that by the big players. It will be used to create hits even more cheaply, and then the labels will go looking for a puppet singer to perform for the audience.

At first it will be kept secret, and then, as it spreads more and more through the industry, it will be more or less openly stated; but by that point people will already be accustomed to it.


While I find the current pervasiveness of AI articles¹, artwork, & code³ irritating, especially when it claims to be something else², those cases are different because they often try to appear not AI-generated, or are presented by others as not being.

This states from the outset that it is using AI tools, at which point I become more understanding. Someone had an idea, but lacked the singing voice, or a friend with a singing voice & free time, so used a tool to fill the gap. This is a better, or at the very least more honest, use of tech as a tool than, for instance, autotune on studio albums, IMO.

If you _really_ want it with a real human voice, perhaps contact some of the many performers on social media to suggest it might be an amusing way to generate some content to monetise. Or, of course, sing it yourself!

--

[1] I've gone from clicking very few of facebook's “recommend for you” articles to clicking absolutely none of them – the number that are, or are indistinguishable from, hallucinations from an LLM that doesn't understand what is actually being written about, already dwarfs things that are worth reading. SciFi TV/film/book reviews and essays seem to be particularly affected, with “local” news links not far behind.

[2] “you won't believe this isn't AI generated!” — no, I won't, because it quite obviously is. I don't know whether to be insulted that you think just saying that will convince me otherwise or sad for the state of humanity that many do seem fooled.

[3] Too many people seem to think that slapping code out of copilot into a stackoverflow answer with nothing to check it for correctness in any way is acceptable, and before that was possible there was already too much bad (sometimes working but blatantly insecure) code out there that people were blindly copying. And that is before the potential licensing & moral issues that mean I have not yet been convinced to use anything like copilot myself, but I'm getting far off-topic here…


This. AI production loses “soul” as human empathy is a crucial component.


Art is like a lossy compression algorithm. If there is a soul of any form the only reason you think you're observing it in human-produced art is because your decompression algorithm is adding it.

While I don't disagree there's a "human touch" to art, I'm not convinced it can't be synthesized to some degree. It may not be innovative, but I think since AI is extrapolating from learned data, it can at least mimic the current pop culture.


And Arkanoid's spinner had 486 steps, which made me fear this project wouldn't have enough resolution for decent gameplay, yet it works perfectly (by my standards, at least).

I'm not sure why Arkanoid's spinner had so many steps. It could be to allow for pixel-by-pixel movement at its 336-pixel horizontal resolution, but it could also be that the way they polled the encoder might miss some quadrature steps, so they needed the extra resolution to ensure smooth control. Perhaps a combination of the two.


I built one of those many years ago as well. :)

  * https://github.com/carlosefr/DisKnobUI
  * https://www.youtube.com/watch?v=MvpPVjJnbao
I thought about using that (the spindle motor from a hard disk) for this project. The issue is that it's not very precise at very low speeds. It sort of works, but it falls out of sequence too often.


This project does use interrupts for the encoder. It uses the "Encoder" library mentioned in another comment.

https://www.pjrc.com/teensy/td_libs_Encoder.html


I might be missing something but I think the author of this library missed a very simple implementation.

This library is hundreds of lines of assembly implementing a jump table for all possible input combinations.

I propose instead shifting the two inputs left into an 8-bit accumulator. Then there are only 3 states to match: increment, decrement, and invalid (which happens if you jiggle the encoder in place, effectively doing a half-increment/half-decrement).

Something like that:

    sw = (A << 1) | B
    if ((state & 3) != sw)
        state = (state << 2) | sw

state == 0b00101101 for increment, state == 0b01111000 for decrement. Any other state is ignored.

If this code runs on interrupts, the conditional can be skipped if spurious interrupts are not possible.

Any switch bounce can be filtered either in hardware (capacitor + resistor), or in software by averaging over time.
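The whole scheme can be sketched end to end. Below is a minimal Python model of the accumulator decoder described above (Python only for readability; on a real MCU this would be a few lines of C in the pin-change ISR). The names INC, DEC and make_decoder are mine, not from any library, and the idle state is assumed to be A=B=0.

```python
# Accumulator-based quadrature decoder: each sampled (A, B) pair is shifted
# into an 8-bit history. A full detent in one direction produces one unique
# byte, the other direction produces another; everything else is ignored.

INC = 0b00101101  # samples 00 -> 10 -> 11 -> 01, oldest to newest
DEC = 0b01111000  # samples 01 -> 11 -> 10 -> 00, oldest to newest

def make_decoder():
    state = 0      # 8-bit history of the last four 2-bit samples
    position = 0

    def on_edge(a, b):
        """Call from the pin-change interrupt with the current A/B levels."""
        nonlocal state, position
        sw = (a << 1) | b
        if (state & 3) != sw:                  # skip spurious repeat samples
            state = ((state << 2) | sw) & 0xFF
            if state == INC:
                position += 1
            elif state == DEC:
                position -= 1
        return position

    return on_edge

# One full clockwise detent starting from the idle 00 state:
decoder = make_decoder()
for a, b in [(1, 0), (1, 1), (0, 1), (0, 0)]:
    pos = decoder(a, b)
print(pos)  # 1
```

Note that the invalid "jiggle in place" case needs no explicit branch: a half-step back and forth simply never assembles either magic byte, so it contributes nothing to the count.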


It's about time we stopped calling projects that require copyright assignments "open-source", because they aren't. Regardless of license.


All GNU projects require assigning copyright to the FSF[0]. It feels a little absurd to call a GNU project "not open source".

But I would certainly trust the FSF not to change licensing terms (aside from moving to newer versions of the GPL/LGPL) to something unsavory, while the same can't be said of any old random project out there. I think that trust (or lack thereof) is the real issue. Ultimately, though, it's better to just not have to trust; I don't sign over my copyright to projects either, unless it's part of a job and the stuff that I write would otherwise be owned by my employer anyway.

[0] https://www.gnu.org/licenses/why-assign.en.html


The FSF requires copyright assignment so they can upgrade to newer GPL versions at will. I happen to disagree with that as well.


Agreed, and this is why I never have contributed to a project with a CLA.


Not even to e.g. something from the Apache Foundation? Or Eclipse? Or CNCF?


There are two organizations I would consider assigning copyright to for free work: the FSF and the ASF, which are both organizations with noble goals.

Certainly not the CNCF.


Yup! I might make an exception at some point, but so far I haven't. I believe that all contributors should be equals, not that some should have more rights than others. Also, I need to research and trust the entity I'd sign the CLA with.


Also, non-tracking cookies (e.g. load-balancing, settings selected by the user) don't require consent. Still, many websites put cookie popups in place for those, adding to the noise.


Because everyone mindlessly copies everything they see on the net.


Two-year warranties after purchase are already common throughout the EU, and so are warranties upon repair. What this directive appears to do is make the former also cover replacement throughout the warranty period, and extend the latter to one year instead of just a couple of months.

"which would then come into force 20 days after it is published in the Official Journal of the European Union"

One important detail...

EU directives don't come into force until they are translated/incorporated as law in each member state, which can take years. This is unlike regulations (e.g. GDPR) which don't require local incorporation.


It may be difficult to understand, but maybe the EU has other things where they want to be competitive instead? Maybe, I don't know, quality of life...?

Please stop measuring the EU using US standards.


> Maybe, I don't know, quality of life...?

I’m very happy with my public healthcare. I think every American would be as well.

And not to mention that our kids don’t need to do active shooter drills in school.


Public healthcare? You mean the free healthcare for 1000EUR that single German freelancers have to pay monthly? For $1k you can get US insurance for the whole family!


It’s tax funded, so you have a point. Still, it covers everyone so you never see anyone doing gofundmes just to stay alive.


Well, maybe try not paying your health insurance as a freelancer: the insurance company promptly disowns you, and you are immediately in the same spot as uninsured US folks. In Germany. Just because the fees are hidden from you doesn't mean they aren't there, and failure to pay them results in consequences similar to the US.


I wouldn't know, but I live in Ireland anyway, and while we have a two-tier system, we pay insurance only for the better hotel services in hospitals - things such as single-patient apartments.


Maybe ask yourself how the EU is going to pay for all that if it misses all the trends in the industry and over-regulates anything that could be the next growth factor.


Doesn't seem to be working out too badly so far.


I must be living in another world, then. The GDP of the EU has been stagnant since 2008, whereas US and Chinese GDP exploded during that time. Ultimately, whoever has the most money is going to win, so let's not try to redefine economic indicators and say that GDP is no longer relevant, etc.


As a member of the eastern EU bloc, I sometimes look at the average salary in China (114029.00 CNY/year per [tradingeconomics](https://tradingeconomics.com/china/wages)) and wonder when they will earn more (though the wages are basically incomparable due to different taxes). It won't take long.

15% inflation in 2022 was fun, 11% in 2023 even more fun.


AFAIK Chinese workers are already more expensive than eastern EU ones (but tight logistics integration makes the overall manufacturing cheaper), and Indian devs in India are already more expensive than eastern EU devs. "Prosperity"


Japan's GDP has been stagnant since 1995.


Archaic hardware should be running period-correct software anyway. It’s not that they can’t stay on a previous kernel.

Itanium isn’t truly archaic, though. It was only discontinued 3 years ago.


"The week number for the 6502 is important because apparently 6502s made up to week 26 had faults!"

I thought only the first-revision 6502s (1975, in a white ceramic package) had the ROR bug, or are these faults something other than ROR?

In a video from early this year, Eric Schlaepfer claims the ROR behavior in the early 6502s isn't actually a bug; the instruction was consciously left out. I found the explanation interesting:

https://www.youtube.com/watch?v=Uk_QC1eU0Fg


Also of note: datecodes in "wwyy" format. "yyww" is far more common.

Funny thing is (assuming that KIM-1 still works): you buy a brand-new computer today. In 5, 8, maybe 10 years (if you're lucky) it will be e-waste. Quicker if it's a phone. And chances are that by then the KIM-1 will still work, or be easy to repair. Similar with many machines of that period.

Tech has advanced tremendously. But in some aspects, backwards.

