Hacker News – osswid's comments

Yes


Go to telehack.com and type 'notes'.

It's not comprehensive, but will give you a taste of the flavor of what Usenet was like.


  In the days when Sussman was a novice Minsky once came to him as he sat hacking at the PDP-6.  "What are you doing?", asked Minsky.
  "I am training a randomly wired neural net to play Tic-Tac-Toe."
  "Why is the net wired randomly?", asked Minsky.
  "I do not want it to have any preconceptions of how to play."
  Minsky shut his eyes,
  "Why do you close your eyes?", Sussman asked his teacher.
  "So that the room will be empty."
  At that moment, Sussman was enlightened.
 -- AI koan


This allegory has haunted me for ten years. I always see it posted here, and invariably the poster doesn't elaborate or explain it at all. I'm pretty sure the people who post it just do it to look smart. I can't find an explanation anywhere on the web. I asked ChatGPT what it means and it said this:

This allegory, often referred to as an "AI koan," is a story that conveys a deeper meaning about the nature of artificial intelligence and the process of learning.

In the story, Sussman is a novice who is attempting to train a neural net to play Tic-Tac-Toe. When Minsky, a renowned AI researcher, asks why the net is wired randomly, Sussman responds that he does not want the net to have any preconceptions of how to play. Minsky then closes his eyes, explaining that he is doing so in order to empty the room.

The meaning of this story is open to interpretation, but one possible interpretation is that it is highlighting the importance of approaching problems with an open mind, free of preconceived notions and biases. By wiring the neural net randomly, Sussman is allowing it to learn through trial and error, without being constrained by prior assumptions about the game. Similarly, by closing his eyes, Minsky is symbolically "emptying the room" of preconceptions and biases, allowing himself to approach the problem with fresh eyes and an open mind.

Overall, the story encourages us to approach complex problems with a beginner's mind, free of preconceptions and biases, in order to allow for creative solutions and new insights to emerge.

Back to human commentary: I'm not sure that makes sense. Will someone please explain this stupid allegory and let me finally rest?


I can't speak to the technical details, but my basic interpretation was the opposite of what ChatGPT just said: no real-world agent ever approaches a problem with zero preconceptions. That's simply not how learning works for humans. Presumably AI models also have 'preconceptions', at least in how they are designed.


The way the net is wired is the preconception. Just as closing your eyes doesn't actually empty the room, randomizing the preconception doesn't make it go away.
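This point can be made concrete with a toy sketch (entirely my own, purely illustrative, not from the koan): a randomly initialized scorer already ranks Tic-Tac-Toe positions before it has seen a single game, so its "random" wiring is itself a preconception.

```python
import random

# A toy "randomly wired" evaluator: one random weight per board cell.
random.seed(0)  # any seed works; the point is the weights are fixed once drawn
weights = [random.uniform(-1, 1) for _ in range(9)]

def score(board):
    # board: 9 cells in {-1, 0, +1} for O, empty, X
    return sum(w * c for w, c in zip(weights, board))

# Before any training, the net already "prefers" some positions to others.
# Nobody chose these preferences deliberately, but they exist all the same.
b1 = [1, 0, 0, 0, 0, 0, 0, 0, 0]  # X in the top-left corner
b2 = [0, 0, 0, 0, 0, 0, 0, 0, 1]  # X in the bottom-right corner
print(score(b1), score(b2))  # two different scores, almost surely
```

Randomizing the weights doesn't remove the bias; it just makes the bias arbitrary and invisible, which is exactly the "closed eyes" of the koan.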


The room doesn't become empty by the mere act of closing one's eyes.

Closing one's eyes temporarily pauses sensory input from the room to one's brain (through the eyes). To fully pause sensory input to one's brain, one will also need to block the ears (from picking up sounds) and perhaps stand aloof from others in the room (to avoid touching any surfaces or being touched by others).

Even if it were possible to block all senses: visual (via the eyes), auditory (via the ears), olfactory (via the nose) and haptic (via the hands) input, this doesn't make the room become empty.

Put differently, reality (what is there) and our perception of reality (what we perceive) are two distinct concepts that are easy to conflate.

So, for anyone to claim they successfully removed preconceptions or bias from a neural net, in a way that others can independently verify, they would first have to enumerate all forms of bias known to man, then show that every one of those biases was avoided in the programming of the neural net.

At least, that's how I understand the koan.


I take it to mean that a random wiring still represents some (random) preconception even though we don’t know it - just like the world still exists when you close your eyes even if you don’t see it.

I could be getting it wrong, maybe ChatGPT is more intelligent than me…


Maybe Marvin is saying that there might be advantageous patterns in the random values, and that the researcher is only selecting for what he can see but not necessarily what exists, since many chess algorithms might just look like random noise to a human. So the point of the allegory is that intelligence is much broader algorithmically than what human intuition is able to grasp.


I think you have to loosely interpret "randomly wired" as "fully connected", i.e. no predefined structure (because an actually randomly wired net probably wouldn't learn well). The thing is, as we've seen with convnets and now transformers, the structure of the network actually matters quite a bit. Even though a fully connected network could theoretically learn the right weights so that it emulates a convnet, in practice this is too hard to do. See the discussion at https://news.ycombinator.com/item?id=34748190
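A back-of-the-envelope comparison makes the structural difference vivid (sizes are my own, chosen for illustration): count the weights a fully connected layer must search over versus a small convolution on the same image.

```python
# Hypothetical sizes for illustration: a 28x28 grayscale image.
H = W = 28

# Fully connected layer mapping the flattened image to a same-sized output:
# every input pixel connects to every output unit (biases ignored).
fc_params = (H * W) * (H * W)

# 3x3 convolution, one input and one output channel: the same 9 weights
# are reused at every spatial position. Locality and weight sharing are
# the "preconceptions" built into the architecture.
conv_params = 3 * 3

print(fc_params, conv_params)  # 614656 vs 9
```

The fully connected layer *could* in principle learn weights that imitate the convolution, but it has to find those 9 effective parameters inside a space of over 600,000, which is why the built-in structure matters so much in practice.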


Minsky and Sussman slaving away in front of that apocryphal PDP-6 in 1970. And 50 years later, last November, suddenly: BOOM. Everything they yearned for comes into view.


I keep stumbling on this story, yet I can't seem to grasp it.

We have senses, so our brain has some sort of a priori knowledge of the world? We're all almost blind when we're born.


The point of Minsky's action is to demonstrate to Sussman that Sussman's intent is essentially "if you can't see it, it's not there". But of course the room doesn't become empty when Minsky closes his eyes: the neural net won't lose its preconceptions just because you randomly wired it; you just lose the ability to see and control what those preconceptions are.


ls -f


       -f      do not sort, enable -aU, disable -ls --color
       
       -a      do not ignore entries starting with .
       -U      do not sort; list entries in directory order
       -l      use a long listing format
       -s      print the allocated size of each file, in blocks
       --color colorize the output

I assume you mean to imply that by turning off sorting/filtering/formatting, ls will run in a more optimized mode where it can avoid buffering and just dump the dentries as described in the article?


Yeah, exactly. OP is changing 3 variables and concluding that the getdents buffer size was the significant one, but actually the problem was likely (1) stat calls, for --color, and (2) buffer-and-sort, which adds O(N log N) sorting time to the total run+print time. (Both of which are avoided by using getdents directly.)
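The two modes can be sketched in a few lines of Python (my own illustration; on Linux, os.scandir ultimately reads entries via the getdents64 syscall and yields them in directory order, much like `ls -f`, while a sorted listing must buffer everything first, like plain `ls`):

```python
import os
import tempfile

# Throwaway directory with a few files, created in non-alphabetical order.
tmp = tempfile.mkdtemp()
for name in ("b", "a", "c"):
    open(os.path.join(tmp, name), "w").close()

# Streaming, unsorted: entries arrive in directory order and could be
# printed immediately, one batch of dentries at a time. No stat needed
# unless you ask for entry.stat() (which is what --color forces ls to do).
unsorted_names = [entry.name for entry in os.scandir(tmp)]

# Buffered and sorted: read everything, sort (O(N log N)), then print.
sorted_names = sorted(os.listdir(tmp))

print(unsorted_names, sorted_names)
```

Same set of names either way; the difference is that the first form never has to hold and sort the whole directory before emitting output, which is what matters for directories with millions of entries.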



But why is this bad? Let people spend their money how they want: on geography, on food, on trips, on saving, on whatever. Pay them for their work and let them allocate their spending according to their own priorities. Why does it matter where they live, or where groups cluster? Seems like a fear-driven response.


It's not bad in absolute terms, but to me it's understandable that globally distributed tech companies might want some portion of their workforce to come from tech hubs, or their home countries, or just to have a certain level of diversity in terms of where people are from, instead of primarily selecting for people from low cost of living places. It's not only about saving money, and I don't see how "fear" has much to do with it either. These incentives are real and could have a huge effect on the culture of a company.


Well done! This must have been a massive project. 1000 hours?

Don't miss the project photo album: https://photos.app.goo.gl/7yxiuzpsFReUh5Yy5


It was a year from idea to finished, installed clock. 1000 hours? Maybe. Yes, if you include time spent worrying about how to make the clock hands look good and stay on.


“The pennies you save will be the dollars your widow’s husband spends.”


Why is it important to leave something behind?



We had shadow banning in the Topix forums in 2005.

