
There are no neutral actions, though.


I work in formal ethics, mostly on formal aspects of value structure, and I agree with you. This article is a hodgepodge of many different ideas, some of them interesting, others a bit naive, and it has almost nothing to do with formal models for ethics. There is plenty of research in formal ethics, such as deontic logics, abstract argumentation frameworks for preferences and norms, nonmonotonic logics for value-based reasoning, normative systems, input/output logics, and formal axiology.

One thing that the author does not address in enough detail is the simple fact that there are many different ethical traditions that come to different conclusions about particular normative issues, and that there are plenty of authors in ethics who (still) consider their business a normative one. A "bottom-up", machine-learning-based approach to this would invariably fail and miss the whole point. Some ethicists consider it a mostly descriptive endeavor (Schopenhauer was one of the first, for example), but they are in a minority.

As long as experts in ethics cannot agree on what the "right" ethics is, it's hard to see how we would be able to teach it to machines. Many meta-ethicists, including me, would even deny that there can be an "expert" on moral and particularly ethical questions at all. However, I have no doubt that various robotics companies will implement whichever ethical rules and approaches best serve their interests as companies.

That's why I think robot ethics is kind of misguided. What we need are laws that regulate AI, put its use into a legal framework, and close loopholes. This is a political, not just an ethical, issue.


Hello, I saw the pingback for this thread today. I'm surprised (and a bit giddy) that my obscure little blog wound up on HN, so I hope you don't mind my coming in a little late.

The purpose of this post was exploratory rather than to draw specific conclusions, which I am aware can be frustrating. Thanks for taking the time to read it anyway.

You are correct that I was not very interested in formalizing ethics. I'm not sure why OP chose that title for the thread, because only part of the post was oriented towards overviewing the AI safety discourse, and within that discourse, the development of a complete formal system for ethical reasoning is considered a naive way to attack the problem. I never wrote about the formalization angle directly.

The problem, such as it is, is that it's troublesome to encode what we want of an AI agent, even more so to encode how it should reason about what we want of it under unforeseen circumstances, no matter what it is that we want. Ultimately, in order to successfully implement regulation, an agent must be "taught" (in whatever sense) to make judgments according to what is wanted of it. Technically speaking, this is a hard problem. However, there does not necessarily need to be an absolute answer as to what the correct ethics is, in order to teach a machine methods of reasoning about ethical problems, nor for us to reason about what sorts of imperatives a machine should be reasoning about.

The thrust of that section, however, was to raise an issue with the (in my opinion) narrow way in which that problem is typically formulated, to which you have also alluded. Specifically, my beef was ontological (which, for the sake of communication, was distilled into the bottom-up/top-down distinction), but there are conceivably other gaps.

This could have been elaborated; the ontological approach is one of the ways in which ethical traditions are distinguished, but far from the only one. Any such distinguishing factor may have been missed by an approach like IAD.

> That's why I think robot ethics is kind of misguided. What we need are laws that regulate AI, put its use into a legal framework, and close loopholes. This is a political, not just an ethical, issue.

This much I agree with wholeheartedly. I made the case that we shouldn't shaft the ethics for the politics, but the same goes for the reverse. Where how we consider ethical problems comes into play again in the second case is that putting the use of AI into a legal framework and "closing loopholes" has roughly the same shape as the problem of AI ethics; it just potentially chooses different parties to determine what it is we want of the agents.

Anyway, thanks again! If you have a moment, I'd be interested in reading what you thought was naive in my approach.


Sorry, I've only just seen your post, so my reply is a bit late. I think we mostly agree, but it appears to me that you might not be fully aware of the magnitude of the metaethical problems and the persistent disagreement about them. To give you an example from formal ethics: according to Temkin's Spectrum arguments, strict "better than" comparisons are not transitive. Some authors agree with him, others disagree. Some want to give up completeness instead of transitivity, others opt for lexicographic value hierarchies. Others deny having the intuitions and argue for the status quo. Even this one simple issue has far-reaching normative consequences, though. If Temkin is right, then even if we all agreed to be classical utilitarians, the position would be infeasible and any account based on utility functions would be wrong from the start.
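
To make the utility-function point concrete, here is a minimal sketch in Python (my own toy example with a made-up three-outcome cycle, not Temkin's actual spectrum cases): a brute-force check over all rankings confirms that no assignment of real-valued utilities can represent an intransitive strict "better than" relation.

    from itertools import permutations

    # Hypothetical cyclic strict preferences over three outcomes; each pairwise
    # judgment may look plausible locally, but together they form a cycle.
    better_than = {("A", "B"), ("B", "C"), ("C", "A")}

    def representable(relation, items):
        """True iff some utility assignment u satisfies u(x) > u(y) for every (x, y)."""
        for ranking in permutations(items):            # ranking[0] gets the highest utility
            u = {item: -i for i, item in enumerate(ranking)}
            if all(u[x] > u[y] for x, y in relation):
                return True
        return False

    print(representable(better_than, ["A", "B", "C"]))                # False: no utility function fits
    print(representable({("A", "B"), ("B", "C")}, ["A", "B", "C"]))   # True once the cycle is broken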

At best, we could try to implement conformance with systems of law as hard constraints, so that at least robots would not openly break the law. The rest of moral behavior could then be learned. Or so one might think. However, even that is not possible. It is well known from the philosophy of law that systems of law are not contradiction-free. There are conflicts between opposing laws. This is studied in normative systems research and there are solutions to it (essentially, by logicians in the computer science tradition). However, these require some form of defeasible rules, and among the myriad of nonmonotonic logics that can express some form of nonmonotonic reasoning, not a single one has a normative justification. So again, as long as human standards are not coherent enough, it's going to be impossible to make a machine conform to them in a satisfying way.
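
To illustrate the kind of defeasible machinery involved, here is a toy sketch of prioritized default rules in Python. The rule names, priorities, and scenario are invented for the example; this is just one of many possible conflict-resolution schemes, and, as said, none of them has a normative justification.

    # Two norms conflict; an explicit priority ordering decides which one wins.
    # Rules and scenario are illustrative, not drawn from any real legal code.
    rules = [
        # (name, priority, condition, obligation); higher priority overrides
        ("keep_promises", 1, lambda s: s["promised"],  ("meet_friend", True)),
        ("render_aid",    2, lambda s: s["emergency"], ("meet_friend", False)),
    ]

    def conclude(situation):
        applicable = [r for r in rules if r[2](situation)]
        verdict = {}
        for name, priority, _, (prop, value) in sorted(applicable, key=lambda r: r[1]):
            verdict[prop] = (value, name)   # later (higher-priority) rules override earlier ones
        return verdict

    # A promise conflicts with a duty to help in an emergency; priority resolves it.
    print(conclude({"promised": True, "emergency": True}))
    # {'meet_friend': (False, 'render_aid')}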

Related to that, I believe there are two main issues that you haven't addressed in your post:

1. Different standards for blame and accountability: The standards for blame and for correctness of decision making are completely different between machines and humans. Even very intelligent AI would be judged by much higher standards than humans. I simply don't believe that we would accept robots that commit murder just because they commit murder less often than humans do. By the same token, we expect machines to make substantially fewer errors, and in certain areas would not allow errors at all. So it's not just about teaching them to follow human ethical standards: they need to excel at this and may not break the law. However, as I tried to show with the above examples, we don't know and agree on our own standards well enough to be able to start making sure AI fulfills these high expectations. There is also a much higher need for transparency of decision making for machines than for humans. An idiot driver is an idiot driver. If a car is driving like an idiot by itself, however, you'd expect to be able to at least retrospectively find out why it did what it did.

2. The political dimension: Laws are the result of a political process. Appointed judges resolve potential conflicts between laws and interpret them. More broadly, whatever standards we expect AIs to fulfill hinge on ethical positions and personal preferences, and can ultimately only be decided as the outcome of a political process as well. For example, how safe a self-driving car needs to be, how it makes decisions (e.g. "Should I try to save my driver, or save the pedestrian, or not have any priority at all?"), and which standards it needs to fulfill are up for debate. That is not decided by moral philosophers. It needs to be decided by the publicly accountable, democratically elected representatives of the people. The idea of teaching a machine to behave morally is fine, but there are also strict standards of safety and transparency it has to fulfill. This is a political problem and can only be solved in a broader context, within the debate of how much AI should be allowed at all. For example, should AI judge your creditworthiness? If so, what false positive rate would we tolerate? I believe using AI for such purposes should be strictly prohibited. Others disagree; I'm sure AI is already used for that. Such issues can only be resolved politically.

I considered your post a bit naive because it omitted these two crucial issues: the higher standards we expect from AI, and the political dimension. Other than that, I agree with much of what you've said.


I can't really put my finger on it, but this article seems slightly biased against Silicon Valley CEOs.


The observers are part of the universe, too, as is my pocket calculator.


Turtles all the way down. I think we should try to nail down answers on simple cases before trying to unravel apparent paradoxes.


This is a rather interesting article. I'm a Platonist. From what we know today, real numbers cannot exist in a finite space, but they seem to exist mathematically. The same can be said about many other mathematical structures, including those that can only be characterized adequately (categorically) in higher-order logic.

It's also worth noting that physical and mathematical existence are based on completely different criteria. For non-constructivists, mathematical objects exist once they are not demonstrably contradictory (although the absence of contradictions often cannot be proved in an absolute sense). In contrast, for physicists an object exists once it can be measured, where measurement is ultimately tied to sensory experience. There are also theoretical entities in physics that cannot be directly measured, but their existence is usually downplayed; they are not supposed to "really" exist, only as theory-dependent entities. In any case, the two "kinds of existence" are very different from each other.


The perception of numbers exists mathematically. Likewise for more abstract structures.

You cannot show me threeness. You can show three things and tell me to generalise, and if you keep generalising you'll end up with a consistent-ish framework of sorts for your observations that can be applied to certain other physical experiences.

But fundamentally this is an exploration of the consequences of psychological processes (like perceptual grouping, and inductive relationship inference), not observations of external phenomena.

It doesn't seem that way, but there is no external authority you can appeal to which will state definitively that when mathematicians all agree on something their experience of "true" is absolutely and objectively correct, and not a distorted and limited artefact of human cognition.


This kind of sensualism has often been defended in this debate, but it has problems, too. I think the history of mathematics makes the position implausible.

For a long time, up until recently, mathematics was way ahead of the applications of mathematics to physics. In Ancient Greece there was a general consensus that infinity and irrational numbers do not exist. But then it was discovered that the side of a triangle must sometimes have an irrational length. However, you can never measure SQRT(2) precisely. Whatever number you extract from the physical world is only a rough finite approximation. You need to represent the number in a different way and solve the problem algebraically. Many scientists in Ancient Greece rejected this idea vigorously. Still, real numbers are very useful for describing the physical world, so useful that we couldn't possibly do without them today.
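
To illustrate what "represent the number in a different way and solve the problem algebraically" means, here is a small Python sketch (my example, not historical): exact arithmetic on pairs a + b*sqrt(2) makes SQRT(2) squared exactly 2, while any finite, measurement-style approximation misses.

    from fractions import Fraction

    class QSqrt2:
        """Exact arithmetic on numbers of the form a + b*sqrt(2), a and b rational."""
        def __init__(self, a, b=0):
            self.a, self.b = Fraction(a), Fraction(b)
        def __mul__(self, other):
            # (a + b*r)(c + d*r) = (ac + 2bd) + (ad + bc)*r, using r*r = 2
            return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                          self.a * other.b + self.b * other.a)
        def __eq__(self, other):
            return (self.a, self.b) == (other.a, other.b)

    root2 = QSqrt2(0, 1)
    print(root2 * root2 == QSqrt2(2))      # True: exact algebraic representation
    print(1.4142135623730951 ** 2 == 2.0)  # False: a finite approximation never squares to 2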

Imaginary numbers are another example. When they were first described, they were ridiculed as abstract nonsense and widely believed to have no physical reality or application at all. Despite all that, they play a vital role in modern physics.

There are many more examples like that. To cut a long story short, at least until recently mathematics was always ahead of physics (now they seem to move more in tandem). This fact makes it very implausible that mathematical structures are merely useful abstractions we invented to describe the physical world. That account simply doesn't match what happened in mathematics. And I find the idea that mathematicians just came up with arbitrary imaginations equally implausible.

> no external authority you can appeal to which will state definitively that when mathematicians all agree on something their experience of "true" is absolutely and objectively correct, and not a distorted and limited artefact of human cognition

That is true for everything; it's just a radically skeptical position. Nevertheless, mathematics has the highest standards of rigor for proofs among all disciplines.


Quite an astute observation. I’ve been thinking about this quite heavily recently. It has odd occult references too in the sense of “nothing creates something.”

That which is measured, exists.

No measurement, in and of itself, is a thing that exists. You can measure 3cm, but “3” and “cm” don’t exist. They are virtual values assigned by consensus. All axioms are agreements. Truth, or existence itself, is convergent and resembles certainty only through the majority agreeing to it.

Imagination allows nothingness (imaginary things) to be measured.

Imagine any unit, give it a “rule” and now it can be measured!

Pretty interesting bridge between the “imaginary” and the “real.”

There’s so much nuance in what existence is, and isn’t, or even neither is nor isn’t, depending on whether you subscribe to classical logic, which we currently describe as mostly Aristotelian, or use many-valued logics, which are making a comeback yet are rooted in the Vedas...

I find this extremely profound and it’s one of my top focal areas of study right now.

It’s also related to my observation of disagreement, chaos, and behavior of the human animal. It seems the zeitgeist of the egregore lacks the ability to adjust perspective and see truth depending on the “rule/roles being played” (all disciplines are games/acting in a way).

Tons to unpack.


I'm not very convinced by this AI. I got 96% for http://peppermind.com and only 68% for https://talumriel.de. I'm almost certain that most users would prefer talumriel, and both scores seem overrated to me. I mean, I really have no clue about design...


Thanks for this, really. I looked at it and, trust me, the AI has got this completely wrong. We will include this as an exception when we train again. Thanks for trying.


Wait, how do you know it got it completely wrong?


talumriel is unreadable for me because of the overlong text width. Dunno, maybe it's a factor, but overall it does look cleaner.


I have a question about this. Is SEO really still relevant and necessary? Is it really effective?

I've been wondering about this for a while. It seems to me that everybody is doing it anyway, and that in the end what counts is that many pages link to your page. Meanwhile, search engines heavily penalize certain "optimizations".

If you have a page that loads instantly and contains all the standard meta tags and keywords, does additional SEO really make a difference? Are there really any "tricks" that work?


Yes, SEO is still very much relevant and effective, IMO. Regardless of the paid ads, Google sends a lot of organic traffic when you rank for the right keywords. If you were to buy this traffic, you would spend thousands of dollars doing it. Same for YouTube. A good video is an amazing source of targeted prospects.

Just think about how you find a new service: chances are you googled it and tried the first 2-3 results.

Now as for the question: is it necessary? The answer is yes, because if you don't do it, your competitor will. And as the joke goes, "The best place to hide a dead body is page 2 of Google".

> Are there really any "tricks" that work?

I'm no SEO expert, so I don't care much for meta tags and keywords. Title and page description are still important, I think. I believe the most relevant things are the time a user spends on your page, social signals (shares, etc.), and unfortunately backlinks from high-authority pages (this is the worst part of SEO).

The good news is that you don't have to do anything sneaky for SEO anymore. Make an excellent page on which a user spends a lot of time (so good that they actually bookmark or share it) and it starts ranking. For all its evilness, Google is still doing something right here.


sitemap.xml can cause your search result to appear with sub-links.
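
For what it's worth, here is a minimal sketch of generating such a sitemap.xml with Python's standard library (the URLs are placeholders; whether the sub-links actually appear is at Google's discretion):

    import xml.etree.ElementTree as ET

    NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=NS)
    for loc in ["https://example.com/", "https://example.com/about"]:  # placeholder URLs
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)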

Keyword density (just the right amount) and total word count seem to have an effect and I’ve seen targeted landers work very well for specific search phrases.

Backlinks will probably always be relevant to rankings, as they were the original bedrock principle behind PageRank. If you get prominent blogs to link to you, that can help a lot. Conversely, use rel=nofollow on anchors to avoid leaking relevance to other pages.

Ever since Mobilegeddon, your site MUST be mobile-friendly or you will get penalized. Other UI stuff matters too (e.g. don’t put ads above the fold).

Not an SEO expert here, but I would consider those effective, small things you can do for SEO.


Also performance: make sure you use this tool https://developers.google.com/speed/pagespeed/insights/


A lot of people don’t do technical SEO right. It sounds like you’re asking whether there is something to get right other than technical SEO and links?

Not really. If you write good content, with relevant keywords, and people link to it, that is good SEO. Not clear what distinction you’re trying to draw.

There may be some scammy stuff with short term results, but you’re always one step away from an algo change or a manual penalty.

Edit: I forgot good internal linking/url schemes. Those matter.


There's gotta be like a github repo somewhere that lays out the actual technical aspects of SEO, without any of the bullshit. Anyone know of a project like that?


It would change based on Google rollouts. A forum would have more up-to-date information.


I would also like to know, but SEO is also about the backlinks: quantity, what anchor text they use, where they are, etc.


In my experience, the best software is software that no longer distinguishes between these markets. For example, I've just found the Serif Affinity graphics suite. Each of its programs can be used as a substitute for Adobe software that costs ten times as much, and is good enough for professionals (with caveats and limitations, I suppose) while at the same time being affordable for non-professional consumers.

Recently, almost the whole pro audio market has shifted in a similar way. Companies realized that they can make more money by targeting hobbyists and musicians instead of studio owners, so now most of them offer 70% to 90% discounts from time to time. Others have started to launch software at a tenth of the price that used to be normal: $30 instead of $300.

I prefer that to the "free plus pro" version model. But I admit that I have no clue about marketing and sales and don't know the downsides of aiming at consumers and professionals at the same time.


It will substantially harm sales of B2C products, but it may make sense for B2B products that require support and training and aren't based on any substantial know-how/trade secrets. In that case, releasing as open source might even be beneficial. A good trade-off is to release libraries as open source but keep parts of your final product (e.g. the GUI) proprietary.

Even in the B2B market successful open source companies often use dual licenses and release the open source version as a sort of crippleware by keeping essential libraries and tools proprietary or under very restrictive open source licenses.


I think Kong* does this rather well. They have a usable open source product that has a lot of the core functionality and is more than enough for small businesses and hobbyists. They also provide enterprise support and an enterprise product that includes features typically used by enterprises (like fine-grained auth).

* https://konghq.com/


I've got to agree. The Three-Body Problem was the second-worst science fiction novel I've ever read; to be honest, I only read 3/4 of it. But he's not an English author, so he doesn't count.


Which was the worst? I am not a great fan of this trilogy, but I've read many worse sci-fi books, including some considered classics.


I found it almost unreadable, and I can tolerate a lot. Maybe something gets lost in the translation. As for the worst, I forget the author's name. It was a military sci-fi novel with an alien invasion theme, so you'd expect it to be bad. However, I love the topic and was willing to tolerate the focus on right-wing US prepping. I'm fine with all of that, but, SPOILER AHEAD: The problem with the novel was that the fight was pretty much unwinnable, so in the end the author simply introduced vampires, who allied with the paramilitary preppers and swiftly eliminated all the aliens in the last chapter. That was by far the worst plot I've ever seen.


Sounds like David Weber's Out Of The Dark


https://en.wikipedia.org/wiki/Out_of_the_Dark_(Weber_novel)

> With nearly every Shongairi base on Earth destroyed, Thikair orders his remaining forces off the planet, while planning to outright destroy Earth from orbit. However, as the last fleeing units reach their ships, the Shongairi dreadnoughts suddenly destroy the rest of the warships, all but one. Thikair is confronted on his own ship by the same enemy that had destroyed the bases and smashed his fleet: vampires, who need no air to breathe, and can travel as mist.

> Their leader, Mircea Basarab, is actually the immortal Vlad the Impaler, whom humans remember as Dracula. He and his kind have been hibernating, and kept hidden before the Shongairi's arrival forced them to "protect" the people of Earth by creating more vampires and building an army to eliminate the Shongairi. Thikair is told that the hijacked dreadnoughts are being sent to each of the Shongairi worlds to destroy their Empire. Thikair is slain by one of the vampires, Stephen Buchevsky, whose human family was killed by the initial bombardment. Humanity now possesses Shongairi and Hegemony technology from the Shongairi industrial ships, and is fully united under the newly established Terran Empire, becoming a mighty adversary to the Hegemony which had so casually sent the invaders against an innocent and unsuspecting world.

Haha!

