questerzen's comments | Hacker News

Here is an example of a case where backtracking is required even though there is only one solution:

      11110
      -----
   11|....0
    2|....0
    0|00000
    0|00000
    0|00000

The sequential constraint solver fails here even though the deduction required is trivial. The first row can only be 10010 or else the 2 in the second row isn't possible.
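
For anyone curious what "using one constraint at a time" means concretely, here is a minimal Python sketch of that propagation step (not my actual solver; the clue-tuple and None-for-unknown representation is just for illustration): enumerate every filling of a line that is consistent with its clue and the cells already fixed, and keep only the cells that agree across all of them.

    def line_fillings(clue, line):
        # Yield every complete 0/1 filling of `line` (cells are 0, 1 or None for
        # unknown) that matches `clue`, e.g. clue (1, 1) means two separate 1-blocks.
        n = len(line)
        blocks = [b for b in clue if b > 0]

        def place(start, remaining, prefix):
            if not remaining:
                candidate = prefix + [0] * (n - len(prefix))
                if all(old is None or old == new for old, new in zip(line, candidate)):
                    yield candidate
                return
            b = remaining[0]
            latest = n - (sum(remaining) + len(remaining) - 1)   # last feasible start
            for s in range(start, latest + 1):
                yield from place(s + b + 1, remaining[1:],
                                 prefix + [0] * (s - len(prefix)) + [1] * b)

        yield from place(0, blocks, [])

    def propagate(clue, line):
        # Fix every cell that takes the same value in all consistent fillings;
        # return None on a contradiction.
        options = list(line_fillings(clue, line))
        if not options:
            return None
        return [options[0][i] if all(o[i] == options[0][i] for o in options) else line[i]
                for i in range(len(line))]

    # First row of the example above: clue "11", with the last column already 0.
    # No further cells are forced by this one constraint on its own.
    print(propagate((1, 1), [None, None, None, None, 0]))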

A more difficult problem is the following:

      11 1 
      11211
      -----
    1|..0..
    2|..0..
   11|.010.
    4|.111.
    0|00000

There are two choices for completing the fourth row at this point. One option is:

      11 1 
      11211
      -----
    1|..0..
    2|..0..
   11|.010.
    4|01111
    0|00000

And we immediately run into a problem: the cell in the 3rd row, 1st column needs to be both 1 and 0.

The other choice solves the problem immediately, as every remaining square is then specified by a single constraint.

      11 1 
      11211
      -----
    1|00010
    2|11000
   11|00101
    4|11110
    0|00000

Compared to most sudoku puzzles, I think this is pretty straightforward (you only need to look ahead one move, to one other square), and it would be fair game to give as a problem.

Of the games with a unique solution that the sequential solver can't handle, almost all of the ones I looked at fell somewhere between these two in difficulty. I didn't find any that require more than one move of lookahead.


I calculated that 25,309,575 games have a unique solution. My back-tracking solver correctly finds all answers for all of the 28,781,820 possible distinct games.


My sequential constraint solver also found 24,976,511 games that it could solve without backtracking and without using more than one constraint at a time.

To get an idea of the speed difference between solving sequentially vs solving using backtracking, on my 10 year old MacBook Pro running on a single core, solving all 28,781,820 possible distinct games:

- sequential solver: ~75s to either solve or abandon each problem
- backtracking solver: ~175s (2.3x) to find every solution for every problem

The backtracking solver is unfairly disadvantaged in this comparison as it takes more time to solve the difficult games requiring backtracking and those with multiple solutions - for example the game {1, 1, 1, 1, 1 | 1, 1, 1, 1, 1} has 120 solutions that need to be found, but the sequential solver abandons its attempt after making no progress on its first pass through the constraints.

For the 25,309,575 uniquely determined games only, the gap in performance is a bit narrower:

- sequential solver: ~60s
- backtracking solver: ~120s (2.0x)

The recursive backtracking solver was a far simpler program to write though!
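
To give a flavour of how little code it takes, here is a deliberately naive sketch of the recursive approach (not my actual program; the clue format is an assumption): fill the grid cell by cell and check each row or column as soon as it is complete. A real solver would also prune partially filled lines, which is where most of the speed comes from.

    def blocks(cells):
        # Run lengths of 1s in a completed line, e.g. [1, 0, 0, 1, 1] -> (1, 2);
        # an all-zero line gives (0,).
        runs, run = [], 0
        for c in cells:
            if c:
                run += 1
            elif run:
                runs.append(run)
                run = 0
        if run:
            runs.append(run)
        return tuple(runs) or (0,)

    def solve(row_clues, col_clues, n=5, grid=None, pos=0):
        # Yield every n x n grid satisfying the clues (clues are tuples of
        # block lengths, (0,) for an empty line).
        if grid is None:
            grid = [[None] * n for _ in range(n)]
        if pos == n * n:
            yield [row[:] for row in grid]
            return
        r, c = divmod(pos, n)
        for value in (0, 1):
            grid[r][c] = value
            if c == n - 1 and blocks(grid[r]) != row_clues[r]:
                continue      # completed row doesn't match its clue
            if r == n - 1 and blocks([grid[i][c] for i in range(n)]) != col_clues[c]:
                continue      # completed column doesn't match its clue
            yield from solve(row_clues, col_clues, n, grid, pos + 1)
        grid[r][c] = None

    # The first example above: rows 11, 2, 0, 0, 0 and columns 1, 1, 1, 1, 0.
    rows = [(1, 1), (2,), (0,), (0,), (0,)]
    cols = [(1,), (1,), (1,), (1,), (0,)]
    for solution in solve(rows, cols):
        for row in solution:
            print("".join(map(str, row)))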

Incidentally, generating a database of all possible games took ~35s, and finding all solutions for all of the games by looking them up took ~20s in total.


I realised the backtracker can stop early as soon as all squares are filled in (doh!). As a result the timings have changed dramatically.

- Database generation: 25s
- Sequential solver (all 'solvable' problems or abandon): 52s
- Backtracking solver (all solutions): 19s
- Database lookup (all solutions): 16s

The key takeaway is that the backtracker is not only much simpler, it's actually much faster (for a computer at least) and almost as fast as looking up the answer in a table.


My definition of a "valid" nonogram is a little different as it excludes puzzles that require branching or back-tracking strategies. I think my use of the term "solvable" led to some confusion, sorry.

https://news.ycombinator.com/item?id=44153840


The number of unique 5x5 grids is 33,554,432 (2^25), and the number if you ignore rotations and reflections is 4,211,744. What is 28,781,820?


That's the number of unique combinations of clues for all 2^25 puzzles. There are 13 possible clues for each row/column (0 1 2 3 4 5 11 12 13 21 22 31 111), but not all 13^10 possible combinations can appear together - for instance, 5 5 5 5 5 / 0 0 0 0 0 is obviously impossible.

(I wrote a program to calculate this by generating all 2^25 puzzles and their clues, then sorting by the clues. I also verified the count of 25,309,575 clue sets with unique solutions.)
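
For what it's worth, the brute-force count can be sketched in Python along the following lines (the same idea, not the actual program referred to above; it's slow in pure Python and the Counter needs a few GB of memory):

    from collections import Counter
    from itertools import product

    def clue(cells):
        # Block lengths of a line, e.g. (1, 0, 1, 1, 0) -> (1, 2); empty -> (0,).
        runs, run = [], 0
        for c in cells:
            if c:
                run += 1
            elif run:
                runs.append(run)
                run = 0
        if run:
            runs.append(run)
        return tuple(runs) or (0,)

    # The 13 possible clues for a single 5-cell line, as listed above.
    INDEX = {c: i for i, c in enumerate(sorted({clue(line) for line in product((0, 1), repeat=5)}))}

    counts = Counter()
    for bits in range(1 << 25):
        grid = [[(bits >> (5 * r + c)) & 1 for c in range(5)] for r in range(5)]
        lines = [clue(row) for row in grid] + \
                [clue([grid[r][c] for r in range(5)]) for c in range(5)]
        key = 0
        for line in lines:            # pack the 10 clue indices into a single integer
            key = key * 13 + INDEX[line]
        counts[key] += 1

    print(len(counts))                              # expect 28,781,820 distinct games
    print(sum(v == 1 for v in counts.values()))     # expect 25,309,575 unique solutions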


That's great, thanks. I've also independently verified the 25,309,575 using similar techniques.


My experience with Packt has been wholly negative.

To give an example of what to expect, here is my experience with one particular book on Scientific Python:

* most code examples wouldn't run without significant modification, mainly due to missing lines and multiple typos
* some of the text was cryptic to the point of absurdity
* topic coverage was unstructured, patchy and made no coherent sense overall
* the website for the book was broken and I couldn't find any way to feed my corrections back to the author

Clearly Packt put no effort into quality assurance or editing. I'd suggest you'd be better off waiting for something from a more reliable publisher: such poorly edited books will just waste your time.


Funny, I just reread "What Do You Care What Other People Think?" yesterday. The parallel between NASA's design process for avionics software and TDD is both striking and revealing. People struggling with TDD could benefit a lot from thinking hard about the examples of the main engine design process and the avionics software. Thanks for posting!


Other people have also suggested Latin as the most likely base language. A better discussion is provided here: http://www.science20.com/patrick_lockerby/patterns_of_latin_...


Growing up around the tech industry in the 1980s, I found the casual racism and sexism of computer engineers difficult to deal with. The behaviour was "excused" by its being an exclusively white male niche, with "geeks" derided by society, excluded from the social core and under-valued as people. As long as this is the defining self-image of people in technology, framed by the US high-school geek-vs-jock enmity, there is little hope for change. This was not my experience, and it is not a frame that is useful given the dominance of technology in the wider culture. We need, as an industry, to stop behaving as outsiders and start to accept the responsibilities inherent in leadership. TBBT is one example of the antediluvian attitudes that no longer reflect reality and need to be changed. Kudos to the creator of this video for calling it out.


The Black Swan has a good point to make about reliance on over-simplified mathematical models, but Taleb's macho anti-intellectualism is ultimately nihilistic, since he denies even the possibility of achieving a deeper mathematical understanding of risk. What I really have a problem with, though, is that he has created a band of idiot Twitter acolytes who can be counted on to heap brainless abuse on anyone Taleb chooses to disagree with, which these days seems to be almost everyone whose views diverge even slightly from his. He will not attempt reasonable debate, especially when faced with superior erudition: in this regard Mary Beard is only the latest of his many victims. The guy is a pure and simple thug.


I followed the drama around it, and my conclusion (to paraphrase somebody's tweet) was:

Mary Beard is respectable, well educated and gracious. Taleb is right.


So I hate C++. What I really want is the speed, flexibility, universality and simplicity of C. But I wish structs had destructors, and a way to bind namespaced related functions. Only now that I basically have classes, I should really think about adding inheritance. And I also find prototyping, testing and non-speed-critical programming are a pain without a good generic container library. Containers can be implemented fairly elegantly with templates, but then I probably also need some compile-time processing ability. Oh crap, I just ended up at C++ again. And so I persist with C++ as my main language, with a constant voice in the back of my mind saying, "this is just stupid, there must be a better way. Why is there SO g-d* much accidental complexity in this language?" In my view, C++ has probably wasted more programmer hours, and added more sadness and despair to the world (at the very least MY world), than any other technology. But given my language wish list it's very, very hard not to be tempted to fire up c++-mode for just one more hit.


Other than C, are there any languages that interest you? Are languages like Rust or D or other systems languages not what you're looking for? How come?


I use Python a great deal, especially for tooling and prototyping, and appreciate the fact that programming in it is fast, easy and focused on the inherent complexity of the problem. I've had similar positive experiences with Smalltalky and Lispy languages as embedded languages. But I write a lot of performance-critical libraries for programs that need good C/C++ interop and where C++ is pretty much the lingua franca. Swift, D and Go, for example, all have their strong points, and I plan to do more experiments when I have some time ... after this one last C++ project, though!


D? Nim?

Jonathan Blow's upcoming language?


> But I wish structs had destructors

In C++, structs are just classes whose members are public by default.


I'm constantly surprised by how often I still need to look up file formats, and having a resource like this is a great starting point. Some examples from my recent experience where using a library would have been overkill or more effort: examining a .bmp file to figure out why the alpha channel got dropped during resizing; extracting metadata (the top-left pixel's alpha value) from a .png file; fixing a failed library-call attempt to change the aspect ratio of an image; working out the version of an ancient document file so I could find an application to read it. We shouldn't become so dependent on libraries that this kind of simple manipulation forces us to install a huge dependency and learn a complex API, or else we collapse hopelessly into despair. Especially when a simple Python script and a one-page file description let us do the job in a few seconds.
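
To make that concrete, this is the sort of throwaway script I have in mind (a sketch using only the standard library; the fields follow the standard BMP file header and BITMAPINFOHEADER), e.g. for checking whether a resized .bmp still has the 32 bits per pixel needed to carry an alpha channel:

    import struct
    import sys

    def bmp_info(path):
        # Print the header fields relevant to a missing alpha channel in a .bmp file.
        with open(path, "rb") as f:
            header = f.read(54)   # 14-byte file header + 40-byte BITMAPINFOHEADER
        magic, file_size, _, _, data_offset = struct.unpack_from("<2sIHHI", header, 0)
        (hdr_size, width, height, planes, bpp,
         compression, img_size, xppm, yppm, used, important) = struct.unpack_from(
            "<IiiHHIIiiII", header, 14)
        print(f"magic={magic!r} size={file_size} pixel data at offset {data_offset}")
        print(f"{width} x {height}, {bpp} bits/pixel, compression={compression}")
        # 32 bpp is needed for an alpha channel; compression 3 (BI_BITFIELDS) means
        # the channel masks follow the header and are worth inspecting too.

    if __name__ == "__main__":
        bmp_info(sys.argv[1])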


And that is why you, sir, are the guy who knows the file formats. I look forward to repurposing your scripts for manipulating .pwn files.


It is probably a sign of a good language when the words it uses most appear with roughly the same frequency as they would in pseudo code. When words like "end" (Ruby), "self" (Python), "import" (Java), or "err"/"error" (Go and Node) are over-represented, it's likely a sign that the language is introducing accidental complexity. By this metric Swift looks astonishingly sane.
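
For anyone who wants to eyeball this on their own code, the sort of crude count I have in mind is only a few lines (a rough sketch: it splits source files on identifier-like tokens, so strings and comments get counted too):

    import re
    import sys
    from collections import Counter
    from pathlib import Path

    def word_counts(root, suffix):
        # Crude token frequency over all files under `root` with the given suffix.
        counts = Counter()
        for path in Path(root).rglob(f"*{suffix}"):
            text = path.read_text(errors="ignore")
            counts.update(re.findall(r"[A-Za-z_]\w*", text))
        return counts

    if __name__ == "__main__":
        root, suffix = sys.argv[1], sys.argv[2]   # e.g. ./myproject .py
        for word, n in word_counts(root, suffix).most_common(20):
            print(f"{n:8d}  {word}")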

