Voice Dream Reader is also very good on the iPhone. It can use any of the Apple voices (download the high-quality versions in Accessibility settings), and has other voices available. It can OCR PDF files that are otherwise inaccessible. It also makes a great audiobook player.
More to the point, I’ve had trouble getting Siri to speak certain titles in iBooks on iOS. It would drop words, or entire sentences or paragraphs, and was unusable for those titles. Not sure if it was bad OCR on the source file or what. I haven’t tried it on titles purchased from Apple, so I can’t speak to that.
I think this is an excellent explanation of why it may have been shut down, and of the need for some degree of monitoring and accountability.
The fear is not that the chatbot will come to life, but rather that it could regurgitate dangerous responses drawn from the text content it was trained on.
I don't think it's too far of a leap to see someone taking the output from the bot too literally and possibly creating a negative situation.
The whole concept of a dangerous response from a chatbot is anathema to a society that values the free exchange of ideas.
Who gets to decide what's "dangerous"? Why? Over and over in human history, we've seen speech restrictions that were ostensibly meant to protect the public used instead to impose orthodoxy and delay progress. Even if some utterance might be acutely dangerous, the risk of restrictions being abused to cement power is too great to tolerate them.
I reject AI safety rules for the same reason I reject restrictions on human speech. There is no such thing as a dangerous book or a dangerous ML model. If such a thing is dangerous, it's a danger only to those who have done wrong.
>There is no such thing as a dangerous book or a dangerous ML model.
ML models that discriminate against women and black people seem self evidently dangerous to me; who has done wrong here?
Also, ML models that are inadequately designed and tested and then mooted as useful for medical applications seem dangerous too - like drugs that aren't tested before being given to infants.
I just don't understand your reasoning about this - if books can't be dangerous then why are they so powerful? If ML models can't be dangerous then how can they have utility?
> If ML models can't be dangerous then how can they have utility?
> Something can only have utility if it's dangerous.
smh
> ML models that discriminate against women and black people seem self evidently dangerous to me; who has done wrong here?
> Also ML models that are inadequately designed and tested and then mooted as useful for medical applications seem dangerous to - like drugs that aren't tested before being given to infants.
Whoever decided to take the results of that model and directly translate what it says into actions without any further thought.
So awesome! RIP my weekend plans. I've done a similar thing for fun using Python, modifying the class dictionary (half working), more in the scope of multi-objective optimization.
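Roughly what I mean by modifying the class dictionary - a minimal sketch with made-up placeholder behaviours, and the two objectives collapsed into a single score rather than a proper multi-objective setup:

```python
# Sketch: "evolve" an Agent by swapping entries in the class's attribute dictionary.
import random

class Agent:
    def step(self, x):
        return x  # placeholder behaviour, replaced during the search

# Candidate behaviours the search can splice into the class.
CANDIDATES = [
    lambda self, x: x + 1,
    lambda self, x: x * 2,
    lambda self, x: x - 3,
]

def fitness(agent, target=10):
    # Two objectives collapsed into one score: closeness to target, then small magnitude.
    value = agent.step(4)
    return -abs(value - target) - 0.01 * abs(value)

best_score, best_step = float("-inf"), None
for _generation in range(20):
    candidate = random.choice(CANDIDATES)
    setattr(Agent, "step", candidate)   # mutate the class dictionary in place
    score = fitness(Agent())
    if score > best_score:
        best_score, best_step = score, candidate

setattr(Agent, "step", best_step)
print("best score:", best_score, "Agent().step(4) ->", Agent().step(4))
```

A real version would keep a Pareto front instead of summing the objectives, but the trick of swapping behaviour on the class itself is the same.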
One of the largest programs generated consisted of about 300 instructions.
This particular program included if/then conditionals, counting down in a loop, concatenation of a numeric value with text, and displaying output.
The system also attempts to minimize the number of programming instructions executed, in which case complexity can be judged by the resulting behavior rather than by LOC.
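As a rough illustration of that idea (not the actual implementation), a fitness function along these lines scores output correctness first and uses the executed-instruction count only as a tie-breaker:

```python
# Sketch of a fitness function that rewards correct output and, secondarily,
# fewer executed instructions. Names and weights are illustrative only.
def fitness(program_output, target_output, instructions_executed, max_instructions=10_000):
    # Primary objective: how closely the output matches the target string.
    correctness = sum(1 for a, b in zip(program_output, target_output) if a == b)
    correctness -= abs(len(program_output) - len(target_output))
    # Secondary objective: a small bonus for executing fewer instructions,
    # so complexity is judged by behaviour rather than by lines of code.
    efficiency = 1.0 - (instructions_executed / max_instructions)
    return correctness + 0.1 * efficiency

# Example: a candidate that prints "hi" using 120 executed instructions.
print(fitness("hi", "hi", instructions_executed=120))
```

Weighting the instruction-count term lightly keeps the search focused on producing the right behavior before it starts trimming redundant instructions.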
That's... decent. It's a lot larger than I ever got with a GA.
I don't think we're ever going to get a GA to produce an air-traffic control system, or a database, or an OS. But 300 lines (that work) is further than I was aware had been possible.
(When I said "decent" in the first paragraph, that sounds like I'm damning it with faint praise. And I kind of am. But I am doing so in light of the Linux kernel, which is on the order of tens of millions of lines. I am not doing so to minimize this as an achievement within the world of GA-generated code. I'm just saying that we're a ways from being able to handle most real-world problems with GA-generated code.)
I discovered Hexo when I decided to build myself a micro-blogging site about my code learning experiences 3 years ago. I was impressed with the whole experience then - I was after a simple-yet-not-stupid static site generator and that's what Hexo gave me. I did have to write my own tooling to get the site to deploy to AWS S3 but I assume that they will have a plugin for that sort of scenario nowadays.
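The tooling I wrote was roughly along these lines - a sketch rather than my exact script, with the bucket name and paths as placeholders:

```python
# Generate the site with Hexo, then push the public/ folder to an S3 bucket with boto3.
import os
import subprocess
import boto3

BUCKET = "my-hexo-site"   # hypothetical bucket name
PUBLIC_DIR = "public"     # Hexo's default output directory

# Build the static site.
subprocess.run(["hexo", "generate"], check=True)

# Upload every generated file, keyed by its path relative to public/.
s3 = boto3.client("s3")
for root, _dirs, files in os.walk(PUBLIC_DIR):
    for name in files:
        local_path = os.path.join(root, name)
        key = os.path.relpath(local_path, PUBLIC_DIR).replace(os.sep, "/")
        s3.upload_file(local_path, BUCKET, key)
        print("uploaded", key)
```

Nowadays `aws s3 sync public/ s3://my-hexo-site` from the AWS CLI would do much the same job in one line.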
The Traveling Salesman Problem is indeed a potential application of quantum computing. Grover’s search could theoretically give a quadratic speedup over brute force (on the order of sqrt(n!) evaluations versus n!). On a physical quantum computer, this would have a profound impact on many different real-world applications.
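To put rough numbers on that speedup (taking the n! brute-force count at face value, and noting that sqrt(n!) is still super-polynomial):

```python
# Grover's search needs on the order of sqrt(N) oracle queries to search N items,
# so scanning all n! tours drops from ~n! to ~sqrt(n!) queries.
import math

for n in (5, 10, 15, 20):
    tours = math.factorial(n)
    print(f"n={n:2d}  classical ~{tours:.2e}  Grover ~{math.sqrt(tours):.2e}")
```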
My blog about artificial intelligence, quantum computing, and software development topics.
I also post frequently on Medium.
https://medium.com/@KoryBecker