croniev's comments

I like the idea! Now you're just left with the dilemma of what happens when you reach many people with it - will Scrappy be made for thousands of users, polished and flashy?


I agree, but there are also cases where it is blatantly clear that companies are not only on Trump's side but are taking the initiative themselves to corrode our political culture, and people here in Europe are too comfortable to make a switch, apart from the lack of comparable alternatives.

X and Meta are the most obvious. I don't know about Google's involvement, but I have been trying to convince people to move away from it for years. It's a similar situation with streaming services.


I'm sorry, but the way you are putting it, it sounds like you are trying to spread conspiracy theories. Do you have some examples?



In comparing neural networks to brains it seems like you are implying a relation between the size/complexity of a thinking machine and the reasonability of its thinking. This gives us nothing, because it disregards the fundamental difference that a neural network is a purely mathematical thing, while a brain belongs to an embodied, conscious human being.

For your implication to be plausible, you either need to deny that consciousness plays a role in the reasonability of thinking (making you a physicalist reductionist), or you need to posit that a neural network can have consciousness (some sort of mystical functionalism).

As both of these alternatives imply some heavy metaphysical assumptions and are completely unsupported, I'd advise against thinking of neural networks as an analogue of brains with regard to thinking and reasonability. Don't expect that they will make more sense with more size. It is, and will continue to be, mere statistics.


I'm not implying anything or delving into metaphysical matters.

All I'm saying above is that the number of neuron-neuron connections in current AI systems is still tiny, so as of right now, we have no way of knowing in advance what the future capabilities of these AI systems will be if we are able to scale them up by 10x, 100x, 1000x, and more.
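
To put rough numbers on that, here is a back-of-envelope sketch in Python; the figures are commonly cited ballpark estimates, not precise values:

    # Back-of-envelope comparison; all figures are order-of-magnitude
    # estimates commonly cited in the literature, not precise values.
    brain_neurons = 86e9            # ~86 billion neurons in a human brain
    synapses_per_neuron = 1e4       # ~10,000 connections per neuron
    brain_synapses = brain_neurons * synapses_per_neuron  # ~8.6e14

    llm_parameters = 1e12           # generous estimate for a frontier model

    print(f"brain synapses : {brain_synapses:.1e}")
    print(f"LLM parameters : {llm_parameters:.1e}")
    print(f"ratio          : {brain_synapses / llm_parameters:.0f}x")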

Please don't attack a straw-man.


Piranesi by Susanna Clarke... I've been wanting to read that too! Jonathan Strange & Mr Norrell has been one of my favorite and most immersive books, absolutely brilliant!

Too bad Susanna Clarke got CFS, a very poorly researched illness :/


Timeshift does not work for me because I encrypted my SSD and decrypt it on boot, but Linux sees every file twice, once encrypted and once decrypted, and thinks that my storage is full, so Timeshift refuses to make backups due to lack of space. At least that's how I understand it at the moment.


> Linux sees every file twice, once encrypted and once decrypted

fixing this should prove profitable


Many commenters here seem to be hostile towards philosophy. Here's a take for you:

This is only a paradox if you think of language as a way of describing some "real", static state of affairs in the world (look up "correspondence theory of truth"). There is no paradox here if we think of language as pragmatic and action-directing, since it is obvious what the sentence should convey (look up "pragmatism"). Some people will argue that there is some sort of static meaning hidden behind the actual words which enters the consciousness of the listener; others will say that the meaning is only generated by the person hearing the words.

This is as philosophical a question as it gets, and it has been debated all the more heatedly since Wittgenstein.

If you do not see that debating these questions is relevant and interesting, but would rather reduce all of philosophy to that first obvious-seeming and thus "not a real problem" position, then I wish you a good time bathing in your ignorance.

However, if you comprehend that what we take for granted in every area and discipline can be subjected to reasonable reflection, then I welcome you to the dark side. Nothing is clear, no knowledge absolute - many engineers seem to forget this while overindulging in an overly simplistic worldview :)


I think it is okay to philosophise about situations and events that seem a bit paradoxical even when the explanation is obvious. More than that, it is a core trait of philosophy.

There are many similar situations where what we hear, read, or see is technically incorrect. Since the sender of the message (or the activator of an agent) in such a case assumes the interpreter has enough common knowledge, it is perfectly okay communication.

A videotape containing a recording: "If you're watching this, it means I'm dead."

A secretary of a company impersonating the company when sending a message to many recipients.

An actor speaking about their character as if they were somebody they know very well.

Writing that an AI hallucinates.

My car informing me that one of the tyres is low on pressure, even though it does not know what a tyre is, let alone how to measure pressure.


> [...] My car informing me that one of the tyres is low on pressure [...]

Thank you for putting this in a larger philosophical perspective. For I have something on the next level: a car that tells me "a tire has low pressure" but does NOT tell me which tire it is. I did the best I could to understand why my beloved car would do this, but gave up and had to interpret it as a deliberately malicious act.

My suspicions were confirmed when one of the four sensors (inside the tires) failed, and the technician read the diagnostics and proceeded directly to the tire in question. I had been desperately holding on to the remote possibility that the car really didn't know which tire had low pressure or which sensor had failed, but it told the technician and not me.

I am wondering if it would be best to give the car to the technician to ensure my personal safety. How would the next aspect of this dislike manifest? Most scenarios I can think of result in the car's own suicide, but perhaps it will run me over? Please help.

Signed, LOW TIRE PRESSURE


Weird that tires have pressure sensors. I really thought that the control unit's logic simply averaged rpm values to identify low-pressure tires. I'm glad my car informs me which tire it is convinced is suffering from 'depression'. I already checked that it identified the correct one.


Two different design philosophies: indirect (not all wheels rotating at the expected speed for the current steering), or direct (a wireless sensor in each tire).
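
A minimal sketch of the indirect approach in Python (all readings and the threshold are made up for illustration; a real system also compensates for cornering and only samples during steady driving):

    # Indirect TPMS sketch: a softer tire has a smaller rolling radius,
    # so it spins slightly faster than the others at the same road speed.
    def suspect_low_tire(wheel_rpm: dict[str, float], threshold: float = 0.01):
        avg = sum(wheel_rpm.values()) / len(wheel_rpm)
        # Flag any wheel spinning noticeably faster than the average.
        return [pos for pos, rpm in wheel_rpm.items()
                if (rpm - avg) / avg > threshold]

    readings = {"FL": 752.0, "FR": 751.5, "RL": 763.9, "RR": 752.3}
    print(suspect_low_tire(readings))  # ['RL']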


Lots of linguists only work on static structures, diagramming the morphosyntax and semantics of a standalone utterance. All of the fuzzy social-context stuff doesn't generally get broken down into a "mathematical" structure in the same way.

The piece we're missing here is called "grounding": the mapping of words in the utterance to entities in the context. I think it gets largely ignored because of this focus on static structures. I don't know of any generic parsing framework/theory/tool that comes with grounding out of the box. It's just not done. Please prove me wrong if you know of a tool that does this. Other keywords are "deictic" and "pragmatics".

Generally you only start to worry about grounding when you're doing some kind of robotics or other human-computer interaction and you need a human's words to map to sensor data or something on the screen in order to understand what the intention is.

As a possible exception, anaphora/cataphora resolution is pretty well studied and is supported, for example, in spaCy (https://spacy.io/universe/project/neuralcoref), but this maps one word in an utterance to another word in the same utterance, rather than to an entity in the context.
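
For what it's worth, the usage looks roughly like this, per the neuralcoref README (note it targets spaCy 2.x; the model name here is just the standard small English model):

    # Coreference resolution with spaCy + neuralcoref (spaCy 2.x era).
    import spacy
    import neuralcoref

    nlp = spacy.load("en_core_web_sm")
    neuralcoref.add_to_pipe(nlp)    # adds 'neuralcoref' to the pipeline

    doc = nlp("My sister has a dog. She loves him.")
    print(doc._.has_coref)          # True
    print(doc._.coref_clusters)     # [My sister: [My sister, She], a dog: [a dog, him]]
    print(doc._.coref_resolved)     # "My sister has a dog. My sister loves a dog."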


Human creativity turns into a point on that curve. One mission of art is to find a different dimension, out of the curve's reach, until eventually it becomes more common and the curve can be fitted to it again. AI cannot think outside of the box because it cannot think at all; there is no meaning behind what it does.


>Human creativity turns into a point on that curve.

All human creativity consists of points ON the SAME curve.

It doesn't matter what turns what into some point on the curve, or whether you find a different "dimension".

If you come up with an algorithm that can traverse that curve, you've found the algorithm for human creativity.

We are close, deadly close, to the end.

Especially given the fact that these AI algorithms literally treat the problem as a best-fit curve from a mathematical perspective. The analogy I made is not even really an analogy; it's the reality of how these algorithms actually work.
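
To make the curve-fitting framing concrete, a toy least-squares fit in numpy (the data and polynomial degree are arbitrary; real models fit vastly higher-dimensional "curves"):

    # Toy "best-fit curve": ordinary least squares on noisy samples.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)  # noisy "data"

    coeffs = np.polyfit(x, y, deg=5)   # fit: minimize squared error
    y_hat = np.polyval(coeffs, x)      # evaluate ("traverse") the curve

    print("mean squared error:", float(np.mean((y - y_hat) ** 2)))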


> All human creativity consists of points ON the SAME curve.

That's what they said about mathematical proofs.

> If you come up with an algorithm that can traverse that curve, you've found the algorithm for human creativity.

And that's what they said about programs that take finite time to prove whether or not other programs halt.

Neither of those turned out to be true though. (In particular, because the curve you're talking about is infinitely large and so you cannot compute on it in finite time.)

Also, real world things cannot be reduced to their bit descriptions because they have metadata even if their descriptions are identical: https://ansuz.sooke.bc.ca/entry/23


>And that's what they said about programs that take finite time to prove whether or not other programs halt.

An actualization of human creativity exists. Your brain.

The existence of your brain as a physical thing indicates that human creativity can be actualized. It's literal proof by existence.

It’s the complete opposite of what you describe. We have proof that it is 100 percent possible.


Turing completeness arguments don't apply to brains because we have infinite storage space ;)

(it's in the environment around us and other people's brains.)

Obviously you can construct something with human level creativity, it just takes two people and nine months.


"We're at the tippity top of the mountain, but we're only halfway up".


We're at the foot of the mountain. We only just started.


Surveys of humans strongly demonstrate that they are rather impressed with their mountain-climbing accomplishments thus far. The main shortcoming of humanity is only the actions of those other people.


I'm in the following camp: it is wrong to think about the world or the models as "complex systems" that may or may not be understood by human intelligence. There is no meaning beyond that which is created by humans. There is no 'truth' that we can grasp in parts but not entirely. Being unable to understand these complex systems means that we have framed them in a way (e.g. as millions of matrix operations) that does not suit our symbol-based, causal mode of reasoning. That is on our framing, not on our capabilities or the universe.

All our theories are built on observation, so these empirical models yielding such useful results is a great thing - it satisfies the need for observing and acting. The missing explainability of the models merely means we have less ability to act precisely - but it does not devalue our ability to act coarsely.


But the human brain has limited working memory and experience. Even in software development we are often teetering at the edge of our mental power to grasp and relate ideas. We have tried so much to manage complexity, but real-world complexity doesn't care about human capabilities. So there might be high-dimensional problems where we simply can't use our brains directly.


A human mind is perfectly capable of following the same instructions as the computer did. Computers are stupidly simple and completely deterministic.

The concern is about "holding it all in your head", and depending on your preferred level of abstraction, "all" can perfectly reasonably be held in your head. For example: "This program generates the most likely outputs" makes perfect sense to me, even if I don't understand some of the code. I understand the system. Programmers went through this decades ago. Physicists had to do it too. Now, chemists I suppose.


Abstraction isn't the silver bullet. Not everything is abstractable.

"This program generates the most likely outputs" isn't a scientific explanation, it's teleology.


"this tool works better than my intuition" absolutely is science. "be quiet and calculate" is a well worn mantra in physics is it not?


“Calculate” in that phrase refers to doing the math, and the understanding that that entails, not pressing the “=” button on a calculator.


Why do you think systems of partial differential equations (common in physics) somehow provide more understanding than the corresponding ML math? At the end of the day, both can produce results using a lot of matrix multiplications.
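
As a toy illustration of that point (a 1-D heat equation; sizes and values are arbitrary):

    # Both a discretized PDE step and a neural-network layer boil down
    # to matrix multiplications. Toy 1-D heat equation vs. one NN layer.
    import numpy as np

    n, alpha = 10, 0.1
    # Discrete Laplacian as a matrix; explicit Euler step u <- (I + a*L) u.
    L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    step = np.eye(n) + alpha * L

    u = np.random.rand(n)
    u_next = step @ u                   # one PDE time step: a matmul

    # A neural-network layer: the same kind of operation plus a nonlinearity.
    W, b = np.random.rand(n, n), np.random.rand(n)
    h = np.maximum(0, W @ u + b)        # one layer: matmul + ReLU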


... because people understand things about what is described when dealing with such systems in physics, and people don't understand how the weights in machine-learned NNs produce the overall behavior? (For one thing, the number of parameters is much greater with the NNs.)


Looking at Navier-Stokes equations tells you very little about the weather tomorrow.


Sure. It does tell you things about fluids though.


What is an example of something that isn't abstractable?


Stuff that we can't program directly, but can program using machine learning.

Speech recognition. OCR. Recommendation engines.

You don't write OCR by going "if there's a line at this angle going for this long and it crosses another line at this angle then it's an A".

There are too many variables, and the influence of each of them is too small and too tightly coupled with the others, to abstract it into something that is understandable to a human brain.


AI arguably accomplishes this using some form of abstraction though does it not?

Or, consider the art world broadly: artists routinely engage in various forms of unusual abstraction.


> AI arguably accomplishes this using some form of abstraction though does it not?

It's unabstractable for people, because the most abstract model that works still has far too many variables for our puny brains.

> artists routinely engage in various forms of unusual abstraction

Abstraction in art is just another, unrelated meaning of the word. Like execution of a program vs execution of a person. You could argue executing the journalist for his opinions isn't bad, because execution of mspaint.exe is perfectly fine, but it won't get you far :)


> It's unabstractable for people, because the most abstract model that works still has far too many variables for our puny brains.

Abstraction doesn't have to be perfect, just as "logic" doesn't have to be.

> Abstraction in art is just another, unrelated meaning of the word.

Speaking of art: have you seen the movie The Matrix? It's rather relevant here.


This is just wrong.

While computer operations in isolation are computable by humans, the billions of rapid computations are unachievable for a human. In just a few seconds, a computer can perform more basic arithmetic operations than a human could in a lifetime.


I'm not saying it's achievable, I'm saying it's not magic. A chemist who wishes to understand what the model is doing can get as far as anyone else, and can reach a level of "this prediction machine works well and I understand how to use and change it". Even if it requires another PhD in CS.

That the tools became complex is not a reason to fret in science. No more than statistical physics or quantum mechanics or CNNs for image processing - it's complex and opaque and hard to explain, but perfectly reproducible. "It works better than my intuition" is a level of sophistication that most methods can only dream of achieving.


"There is no 'truth' that we can grasp in parts but not entirely."

The value of pi is a simple counterexample.


We can predict the digits of pi with a formula; to me, that counts as grasping it.
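
For instance, a short Python sketch using Machin's formula, pi/4 = 4*arctan(1/5) - arctan(1/239), with scaled integer arithmetic (the guard-digit count is an arbitrary safety margin):

    # Digits of pi from Machin's formula, using big-integer arithmetic.
    # pi = 16*arctan(1/5) - 4*arctan(1/239), everything scaled by 10**digits.

    GUARD = 10  # extra guard digits against truncation error

    def arctan_scaled(inv_x, digits):
        """arctan(1/inv_x) * 10**(digits + GUARD), via the Taylor series."""
        scale = 10 ** (digits + GUARD)
        term = scale // inv_x
        total, n, sign = term, 1, 1
        while term:
            term //= inv_x * inv_x
            n, sign = n + 2, -sign
            total += sign * (term // n)
        return total

    def pi_digits(digits):
        pi_scaled = 4 * (4 * arctan_scaled(5, digits) - arctan_scaled(239, digits))
        return str(pi_scaled // 10 ** GUARD)  # drop the guard digits

    print(pi_digits(50))  # 314159265358979323846...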



> There is no 'truth' that we can grasp in parts but not entirely

It appears that your own comment is disproving this statement.


> There is no 'truth' that we can grasp in parts but not entirely.

If anyone actually thought this way -- no one does -- they definitely wouldn't build models like this.


I agree. Saying that "the establishment is pro-fraud" is not helping at all, because conforming to such a simple label will make us blind to the actual reasons why fraud occurs. It is not that all fraudulent scientists are part of some cult or get taught to be fraudulent. The fraud rather results from a long series of seemingly small decisions that stack up to the point where they hire paper mills, all the while not performing enough self-reflection. If you want to combat this, you will not be successful labeling the people doing this as fraudulent; some may take offense and stop listening to you. Rather, apart from systemic change, more reflection and ethical awareness should also be promoted. (Please stop thinking and teaching that science has nothing to do with ethics - that just allows scientists to be blind to their own faults.)

