Hacker News | primaryobjects's comments

Composition also facilitates utilizing (GOF) design patterns. They can make code much cleaner and reusable when used properly and sparingly.


Congratulations! I’m curious whether a quantum algorithm could help with this, since the problem involves factoring large numbers.


Nice. I created a maze solver in JS and canvas as well, solving with the A* and Trémaux algorithms.

You can even create your own mazes by passing JSON in the URL.

Simple Maze Solver in HTML5 and JS http://www.primaryobjects.com/maze/
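For anyone curious about the A* half, here is a minimal sketch of A* on a grid maze (Manhattan-distance heuristic, a sorted array standing in for a priority queue). It is my own illustrative code, not the linked project's.

```javascript
function astar(grid, start, goal) {
  // grid: 2D array, 0 = open, 1 = wall. start/goal: [row, col].
  const key = ([r, c]) => `${r},${c}`;
  const h = ([r, c]) => Math.abs(r - goal[0]) + Math.abs(c - goal[1]); // Manhattan heuristic
  const open = [{ pos: start, g: 0, f: h(start), path: [start] }];
  const closed = new Set();

  while (open.length > 0) {
    open.sort((a, b) => a.f - b.f);     // cheapest f = g + h first (a heap would be faster)
    const node = open.shift();
    if (key(node.pos) === key(goal)) return node.path;
    if (closed.has(key(node.pos))) continue;
    closed.add(key(node.pos));
    const [r, c] = node.pos;
    for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nr = r + dr, nc = c + dc;
      if (nr < 0 || nc < 0 || nr >= grid.length || nc >= grid[0].length) continue;
      if (grid[nr][nc] === 1 || closed.has(key([nr, nc]))) continue;
      open.push({
        pos: [nr, nc],
        g: node.g + 1,
        f: node.g + 1 + h([nr, nc]),
        path: [...node.path, [nr, nc]],
      });
    }
  }
  return null; // no path exists
}
```

Trémaux is essentially a depth-first search with marked passages, so it drops the heuristic and explores blindly; A* tends to visit far fewer cells on open mazes.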


Cheese and yogurt both have lactose.


Most people can consume cheese and yogurt, but cannot consume raw milk. This is the distinction commonly meant by lactose tolerance (whether it holds up to a technical analysis or not).


I assume you meant plain pasteurized milk. I've seen anecdotal claims that drinking actual raw milk increases lactose tolerance, the hypothesis being that pasteurization kills off some beneficial bacteria (not to mention denaturing various proteins).


Yeah, sorry, I forgot that “raw milk” was a thing.


Some types of cheese and yogurt have greatly reduced amounts of lactose compared to plain milk. Many people who can't tolerate plain milk can still tolerate some types of cheese and yogurt.


Yes, this is what I meant.


This is a classic AI planning problem, solvable with STRIPS.

Solution to the Sussman Anomaly

https://github.com/primaryobjects/strips/blob/master/Readme....
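The linked repo solves it with proper STRIPS operators. As a sketch of the underlying idea only (not the library's API), a brute-force breadth-first search over blocks-world states is enough to find the three-move plan for the Sussman anomaly:

```javascript
// Sussman anomaly: C sits on A, A and B on the table.
// Goal: A on B, B on C, C on the table.
function solveSussman() {
  const blocks = ['A', 'B', 'C'];
  const start = { A: 'table', B: 'table', C: 'A' }; // block -> what it rests on
  const goal  = { A: 'B', B: 'C', C: 'table' };
  const id = s => blocks.map(b => `${b}:${s[b]}`).join(',');
  const clear = (s, x) => blocks.every(b => s[b] !== x); // nothing stacked on x

  const queue = [{ state: start, plan: [] }];
  const seen = new Set([id(start)]);
  while (queue.length > 0) {
    const { state, plan } = queue.shift();
    if (id(state) === id(goal)) return plan;
    for (const b of blocks) {
      if (!clear(state, b)) continue;               // only clear blocks can move
      for (const dest of ['table', ...blocks]) {
        if (dest === b || state[b] === dest) continue;
        if (dest !== 'table' && !clear(state, dest)) continue;
        const next = { ...state, [b]: dest };
        if (seen.has(id(next))) continue;
        seen.add(id(next));
        queue.push({ state: next, plan: [...plan, `move ${b} onto ${dest}`] });
      }
    }
  }
  return null;
}
```

Because it is breadth-first, the first plan found is a shortest one: move C to the table, B onto C, then A onto B. The anomaly is exactly that greedy sub-goal ordering (work on "A on B" first, or "B on C" first) fails, while whole-state search does not.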


I'd imagine it's solvable by just about everything. This is one of those cases where the core problem isn't that the problem is hard, it is that it is so easy that it is a challenge to get humans to even perceive it as a problem on such a small scale. Any planning system will hit something like this all but immediately.

I'd say a lot of the earliest AI optimism was a failure to understand just how hard things that in the simplest case are so simple we don't even perceive them as problems in the first place. For a much more complicated example, I doubt anyone in 1970 would imagine that in 2018 we're still trying to get robots to walk from here to there. (Yes, lots of progress has been made, but it's still very much cutting edge, a research project, and a bespoke one-off effort each time, not a "Oh, I'll just swing on down to the robot store and pick up a walking chassis".) We don't see it as a challenge because for our conscious minds, by the time we're old enough to be doing robotics research, it's zero cognitive effort for us.


Moravec's paradox: stuff we thought was hard (chess, calculating derivatives) turns out to be easy, while stuff we think is easy (face recognition, getting out of a car, opening a door) is really hard.

https://en.wikipedia.org/wiki/Moravec%27s_paradox

One of my favourite videos is this compilation of robots falling (failing) during the simplest tasks at the 2015 DARPA Robotics Challenge:

https://www.youtube.com/watch?v=g0TaYhjpOfo


To be fair, it takes us humans a solid couple of years of continuous training to make sense of our sensory inputs and get accustomed to our limbs enough to begin doing anything significant with them. So I am regularly flabbergasted at the expectations we place on our lowly contraptions to outdo us by two to three orders of magnitude.


And yet humans need even more years to learn to play chess well.


Humans have had millions of years of system level optimisation (evolution) which then only needs a couple of additional years to train the wetware to operate the system for daily activity.

Chess has only been around a mere 1,500 years and has had an evolutionary impact on only a very minute portion of the population. Hence it takes a few additional years to specialise a human for optimal chess performance.


Optimal chess performance is not remotely achievable by humans.


This solves the problem by searching the graph of reachable states (and presumably doing cycle detection, at least in depth-first mode), which avoids the problem of picking an unsuitable sub-goal by considering all possible ones. While this is a perfectly good solution, it doesn't seem to throw much light on how human intelligence works. A person is likely to notice, for example, that extending any stack that does not have C on the bottom is not helpful, without mentally running through all possibilities, but just on the basis that it will have to be torn down at some point, to move the bottom block to where it belongs.


It also just "blows up" on any problem of any real complexity, and as such isn't really a solution.


Of course human intelligence also stumbles on planning problems of true complexity. So it is still possible that human intelligence does something like a naive graph search, and this is good enough in the common case.


I do not think naive search would give rise to the sort of reasoning that I posit in my initial post, even as a post-facto rationalization rather than as part of actually solving the problem.


Are there any unsolved/unsolvable anomalies similar to this one?


I'd say that all the NP-complete problems are of this kind and much worse.


Here are the results of my research into program synthesis using genetic algorithms.

Using Artificial Intelligence to Write Self-Modifying/Improving Programs

http://www.primaryobjects.com/2013/01/27/using-artificial-in...

There is also a research paper, if you prefer the sciency format.

BF-Programmer: A Counterintuitive Approach to Autonomously Building Simplistic Programs Using Genetic Algorithms

http://www.primaryobjects.com/bf-programmer-2017.pdf
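The linked work evolves Brainfuck programs; as a toy illustration of the same mutate-score-select loop (my own sketch, with arbitrary parameters, not the paper's setup), here is a genetic algorithm evolving a target string:

```javascript
function evolve(target, { popSize = 100, mutationRate = 0.05, maxGen = 20000 } = {}) {
  const alphabet = 'abcdefghijklmnopqrstuvwxyz ';
  const randChar = () => alphabet[Math.floor(Math.random() * alphabet.length)];
  // Fitness: number of characters already in the right place.
  const fitness = s => [...s].filter((ch, i) => ch === target[i]).length;
  const mutate = s =>
    [...s].map(ch => (Math.random() < mutationRate ? randChar() : ch)).join('');

  // Random initial population of candidate "programs".
  let pop = Array.from({ length: popSize }, () =>
    Array.from({ length: target.length }, randChar).join(''));

  for (let gen = 0; gen < maxGen; gen++) {
    pop.sort((a, b) => fitness(b) - fitness(a));          // best first
    if (pop[0] === target) return { best: pop[0], gen };
    const parents = pop.slice(0, popSize / 10);           // truncation selection
    pop = [pop[0], ...Array.from({ length: popSize - 1 }, // elitism: keep the best
      () => mutate(parents[Math.floor(Math.random() * parents.length)]))];
  }
  pop.sort((a, b) => fitness(b) - fitness(a));
  return { best: pop[0], gen: maxGen };
}
```

The hard part in the BF-Programmer work is not this loop but the fitness function: scoring arbitrary generated programs by running them is far noisier than comparing characters.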


Buyers are typically more interested in buying a niche customer base/traffic than they are in buying a website's code base.

In fact, many buyers don't even consider the programming language or platform the site is hosted on as being important, so long as they can get it up and running - and it has good PR and traffic analytics.

Hence why Flippa works (at least for selling a site for a few hundred, if you're lucky).


I understand Flippa, but there's one part of it that grates on me.

A person is selling a site for $200, and it makes them $50 a month. Many of these sites require little to no maintenance so why are they selling the site in the first place? Surely it's worth keeping the site going until the traffic dies out because you'll make more money that way?

I've been tempted to buy a site on Flippa before, but even though this is an extreme example I've often felt like some of the deals are too good to be true.


The deals are too good to be true.

Otherwise people (me included) would be snatching them up.

Most of the sites on Flippa are newly created WordPress sites. Often the sellers build a site once, then sell it repeatedly, each time with a new domain name. They can sell them for $60 or something with a minimal amount of time invested, plus a domain name.

These sites likely will never make their new owner one cent, although perhaps they're not a bad way to get your feet wet.

Anyway, any site selling buy-it-now for 4x MRR (monthly recurring revenue) is lying about something.



I'd recommend checking out DataTables with Bootstrap styling. You just include the CDN link and select the table to style via JavaScript/jQuery. Very simple, and you get column sorting, paging, and search.

https://datatables.net/examples/basic_init/zero_configuratio...
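For reference, the zero-configuration setup described above amounts to a single call, assuming the jQuery and DataTables CDN script/stylesheet tags are already on the page (the `example` table id here is just from the linked demo page):

```javascript
// Requires the jQuery and DataTables <script>/<link> CDN tags in the page.
$(document).ready(function () {
  // One call adds sorting, paging, and search to an existing <table id="example">.
  $('#example').DataTable();
});
```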


Very cool. I forgot to mention that I use React, so I think a React-based component would be easier to adapt. Will look at it in more detail.


Thanks!!


Are you referring to something like this?

Using Artificial Intelligence to Write Self-Modifying/Improving Programs http://www.primaryobjects.com/2013/01/27/using-artificial-in...


No, that looks like GAs with string/array representations. A similar thing worked for me when I tried randomly referencing nodes within a chromosome (say, from index A to index B), generating a graph-like structure.

The outputs are like this for some not-so-easy targets:

Op nodes ['ADD', 'ADD', 'MUL'] EXPR: ['ADD[ni_99](ADD[ni_49](I__7[ni_43](), ADD[ni_19](I__8[ni_66](), ADD[ni_79](GET_CONST_3[ni_25](), I__9[ni_71]()))), I__1[ni_61]())', 'ADD[ni_13](I__6[ni_17](), ADD[ni_49](I__7[ni_43](), ADD[ni_19](I__8[ni_66](), ADD[ni_79](GET_CONST_3[ni_25](), I__9[ni_71]()))))', 'MUL[ni_68](GET_CONST_8[ni_73](), FLOAT[ni_42](I__1[ni_91]()))']

With genetic programming (using an AST), it can solve complex equations. However, evolving even this simple equation, whose correct answer is "(a + b + c - d) / e", results in either of these (depending on my luck, maybe):

Case1: ((int)((b+((c-d)+a))/e)&(int)((b+((c-d)+a))/e))

Case2:

((((int)a&(int)(((((((mod(e,a)c)/e)(((((int)(d-a)&(int)b)+e)/a)/e))/((((int)e&(int)c)+e)+(c+(e/a))))c)/e)e))/((((int)e&(int)c)+e)+((b/e)+(e/e))))+(((((((((mod(e,a)c)/e)(((((int)(d-e)&(int)b)+e)/a)/e))/((((int)e&(int)c)+e)+((b/e)+(e/e))))c)/e) <..........10383 characters here.........> ))))))/e))/((((int)e&(int)c)+e)+(((d-e)/e)+((d/b)/e))))+(((mod(e,a)/e)+(b/e))+(((c/b)+((d+b)/e))/e)))))))))

The GP output (case 1 and 2) above was generated with a tweaked version of https://github.com/rogeralsing/go-genetic-math
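For readers unfamiliar with the AST representation such a GP system manipulates, here is a minimal sketch of the tree evaluation and fitness measure (the evolution loop is omitted; the `OPS` table and protected division are my own illustrative choices, not go-genetic-math's actual API):

```javascript
const OPS = {
  ADD: (x, y) => x + y,
  SUB: (x, y) => x - y,
  MUL: (x, y) => x * y,
  DIV: (x, y) => (y === 0 ? 1 : x / y), // protected division, a standard GP trick
};

// A tree is either a variable name or [op, leftSubtree, rightSubtree].
function evalTree(tree, env) {
  if (typeof tree === 'string') return env[tree];
  const [op, l, r] = tree;
  return OPS[op](evalTree(l, env), evalTree(r, env));
}

// Fitness: total absolute error against the target function on sample inputs
// (lower is better; an exact match scores 0).
function fitness(tree, target, samples) {
  return samples.reduce(
    (err, env) => err + Math.abs(evalTree(tree, env) - target(env)), 0);
}
```

Crossover and mutation swap or replace subtrees, which is why zero-error solutions like case 2 can still be wildly bloated: the fitness function only measures output error, not tree size.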


The paper references program synthesis via neural networks. Here is my take on it, using genetic algorithms.

http://www.primaryobjects.com/2013/01/27/using-artificial-in...

