
I wish Taiwan’s reactors were never shut down in the first place, and I hope Taiwan can hold out long enough to get them started back up again. It’s a step towards being able to withstand a blockade (Taiwan lacks oil, gas, and coal resources, so it relies on imports). If the PRC chose to attack a nuclear power plant, it might create the necessary pressure for international intervention.

For what it’s worth, I’ve personally walked around the nuclear containment area on Orchid island and swam in the waters around it. It’s a well managed and nice place.


Hasn't Russia chosen to attack a nuclear power plant in their recent aggression? Unless you're thinking of a more destructive kind of attack, we probably shouldn't be counting on international intervention.


I don’t mean to suggest it alone would tip the scales. And I agree the hope for international intervention is dimmer than it has ever been. But it would be one thing on the scales, as it has been in Ukraine as well. While there has not been direct military intervention in Ukraine, the support that has been provided relies on political popularity, and Russia’s endangering of Zaporizhzhia has contributed to the disdain for, and attention towards, Russia’s invasion.


Scale AI is a provider of human data labeling services https://scale.com/rlhf


This is true again for the most advanced fighter aircraft, except the active hand is now a computer.


Rockets too

Can't do that in Kerbal Space Program (at least not without mods), but it works fine in meatspace


If the machine can decide how to train itself (adjust weights) when faced with a type of problem it hasn’t seen before, then I don’t think that would go against the spirit of general intelligence. I think that’s basically what humans do when they decide to get better at something: they figure out how to practice that task until they improve at it.


In-context learning is a very different problem from regular prediction. It is quite simple to fit a stationary solution to noisy data; that's just a matter of tuning some parameters with fairly even gradients. In-context learning implies you're essentially learning a mesa-optimizer for the class of problems you're facing, which in the form of transformers essentially means fitting something not that far from a differentiable Turing machine with no inductive biases.


I am familiar with ‘it’ as a default closure input from Kotlin. From a quick search, that in turn seems to be inspired by Groovy.
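
For anyone who hasn't run into it, here's a minimal sketch of Kotlin's implicit ‘it’ parameter (the example values are just for illustration):

    // Kotlin: a lambda with exactly one parameter can refer to it as `it`
    val lengths = listOf("a", "bb", "ccc").map { it.length }   // [1, 2, 3]

    // Equivalent form with the parameter named explicitly
    val lengths2 = listOf("a", "bb", "ccc").map { s -> s.length }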


This goes at least as far back as anaphoric macros: https://en.m.wikipedia.org/wiki/Anaphoric_macro.


Some of the text that the LLM is trained on is fictional, and some of the text that it’s trained on is factual. Telling it not to make things up can steer it towards generating text that’s more like the factual text. I’m not saying it does work, but this is one way it might work.


Did that model also factor in risk of damage, liability, and normal wear and tear that a tenant brings over a vacant unit?


If the “sound” is an internal perception, then noise cancelling headphones would not help at all. They might make it worse by quieting any background sounds that could otherwise help cover up the internally produced sensations.


My tinnitus gets worse afterwards if I'm subjected to noise (as in an airplane). Noise-cancelling headphones are a must for me at this point if I'm to experience prolonged increased sound levels.


It depends on your tinnitus itself. My tinnitus gets crowded out by a loud environment; I don't tend to hear it. I only hear my tinnitus when there's no sound. So for me, noise-cancelling headphones do give some temporary symptom relief.

Wearing a Bose QC 35 is so important for me when I go to sleep, because the ANC also blocks out sound and blocks out my tinnitus to some extent. It's a bit of a skill to sleep with them (you can get audio feedback from the ANC mics), but I've mastered it and improved my sleep game a lot because of it.


But active noise cancellation removes (perceived) sound. Wouldn't that make it worse, then?


The way I experience it is as follows.

Normally:

- Tinnitus: 100%

With ANC:

- Tinnitus: 50%

- Bose ANC: 50%

I like the ANC sound more than my tinnitus.

I haven't noticed my tinnitus itself getting worse or better from it.


Oh, by “the ANC sound”, do you mean the white noise floor of the ANC?

In that case, have you tried something like the Bose Sleepbuds? Same idea, much more comfortable to sleep with.


Bose recently retired their 2nd attempt. The team behind those have a new one coming out in January: https://ozlosleep.com. I'm pretty interested... hard to tell how long the pre-sale discount lasts.


It doesn't seem to have ANC, so it doesn't noise cancel; it only masks. And if ANC were the same as masking, then they'd need to put that in their marketing.

Also: yep, that's what I mean by it, the white noise sound.


An idea I often hear in talks about LLMs is that training on larger (assuming constant quality) and more varied data leads to the emergence of greater generalization and reasoning (if I may use this word) across task categories. While the general quality of a model has a somewhat predictable correlation with the amount of training, the amount of training at which specific generalization and reasoning capabilities emerge is much less predictable.


I can only speak from my own internal experience, but don’t your unspoken thoughts take form and exist as language in your mind? If you imagine taking the increasingly common pattern of “think through the problem before giving your answer”, but hiding the pre-answer text from the user, then it seems like that would be pretty analogous to how humans think before communicating.


> don’t your unspoken thoughts take form and exist as language in your mind?

Not really. More often than not my thoughts take form as sense impressions that aren't readily translatable into language. A momentary discomfort making me want to shift posture - i.e., something in the domain of skin-feel / proprioception / fatigue / etc, with a 'response' in the domain of muscle commands and expectation of other impressions like the aforementioned.

The space of thoughts people can think is wider than what language can express, for lack of a better way to phrase it. There are thoughts that are not <any-written-language-of-choice>, and my gut feeling is that the vast majority are of this form.

I suppose you could call all that an internal language, but I feel as though that is stretching the definition quite a bit.

> it seems like that would pretty analogous to how humans think before communicating

Maybe some, but it feels reductive.

My best effort at explaining my thought process behind the above line: trying to make sense of what you wrote, I got a 'flash impression' of a ??? shaped surface 'representing / being' the 'ways I remember thinking before speaking' and a mess of implicit connotation that escapes me when I try to write it out, but was sufficient to immediately produce a summary response.

Why does it seem like a surface? Idk. Why that particular visual metaphor and not something else? Idk. It came into my awareness fully formed. Closer to looking at something and recognizing it than any active process.

That whole cycle of recognition as sense impression -> response seems to me to differ in character to the kind of hidden chain of thought you're describing.


Mine do, but not so much in words. I feel as though my brain has high processing power, but a short context length. When I thought to respond to this comment, I got an inclination that something could be added to what I see as an incomplete idea: the idea being that humans must form a whole answer in their mind before responding. It is difficult for my brain to keep complex chains juggling around. I know because whenever I code without some level of planning, it ends up taking 3x longer than it should.

As a shortcut my brain "feels" something is correct or incorrect, and then I logically parse out why I think so. I can only keep so many layers in my head, so if I feel nothing is wrong in the first 3 or 4 layers of thought, I usually don't feel the need to discredit the idea. If someone tells me a statement that sounds correct on the surface, I am more likely to take it as correct. However, upon digging deeper it may be provably incorrect.


This depends for me. In the framework of that book Thinking, Fast and Slow: for me the fast version is closer to an LLM, in that I'll start the sentence without consciously knowing where I'm going with it. Sometimes I'll trip up and/or realise I'm saying something incorrect (disclaimer: ADHD may be a factor).

The thinking slow version would indeed be thought through before I communicate it.


My unspoken thought-objects are wordless concepts, sounds, and images, with words only loosely hanging off those thought-objects. It takes additional effort to serialize thought-objects to sequences of words, and this is a lossy process - which would not be the case if I were thinking essentially in language.


You have no clue how GPT-4 functions, so I don't know why you're assuming it's "thinking in language".


I am comfortable asserting that an LLM like GPT-4 is only capable of thinking in language; there is no distinction for an LLM between what it can conceive of and what it can express.


It certainly "thinks" in vector spaces, at least. It's also multimodal, so I'm not sure how that plays in.

