
Hey—congrats on the launch! I'm a big Valorant player (peaked Immortal 3 a few seasons back) and I've also used/talked to the people at visor.gg and pursuit.gg when they were around.

Totally understand the concerns here around being at the whim of the publisher, and previous companies have been blindsided by changes in how companies like Blizzard decide what is and isn't allowed on their platforms. Interestingly, YC has had a few of these companies go through similar experiences, but they still seem to think there's something to be built in this space.

For what it's worth, I think this is a good idea that's going to run into similar issues, regardless of what safeguards are put in place to avoid stepping on the publishers' toes. I'm sure there are some learnings from talking to Ivan or James from the last two YC companies that tried to do this. Also happy to share some more of my thoughts :)


Thank you very much for sharing your thoughts. We're aware of visor.gg (had a talk with Ivan) and pursuit.gg and are trying to learn from previous experience in this domain.


Congrats Geoff! You helped us a lot when we were young, first-time founders and we learned a lot from you and Tim. Thanks again for all your advice—excited about the future for YC :)


Chris is an amazing person who takes the time to get to know you.

When we were fundraising, instead of having us do a regular pitch he took me to an event where Peter Thiel was giving a talk about his new book. We were a chess education company, so he thought it would be good for us to talk with Thiel as he's a chess master himself.

He didn't end up investing in us but my experience with him was much more memorable than any of the other meetings we had while fundraising.


How much funding did you seek for a chess education company?


It was called Chesscademy and we were raising $750k.

We were tackling the adaptive learning problem by creating tailored curricula. The plan was to move outside of chess to other subjects as we grew, but we found monetization to be very difficult.


How does this compare with other companies in the space like Chorus, Gong, and other YC companies like People.ai or VOIQ?

Congrats on the launch!


The biggest difference is single call vs. trend data.

We're focused not just on how to dissect a single call (Chorus, Gong), but on how to make sense of a larger dataset of calls. That lets us understand multi-day/week trends and how to best optimize call behavior across an entire team.

As I understand it, People.ai is looking at metadata (call length, percentage of time spent talking vs. listening, etc.).


Hey Daria, thank you for the shoutout! We do a bit more than just voice metadata tracking. To use a baseball/Moneyball analogy, while voice is the first/entry thing you do in sales, just like catching the ball, People.ai works across all sales activity channels, such as email, calendar, phone, conference systems and even Slack to optimize individual sales rep and team behavior.

While voice analytics tools can help produce the best catchers, we are focused on making every rep a five-tool player.


I think the move rate is capped at two per second, so even bots will have limitations in their movement speed.


If I understand your code correctly, in analyse_evaluations you're defining a "surprising move" as a move whose evaluation changes a lot when it's considered at a higher depth. So if a "human" (really Stockfish at depth 5) evaluates a move as +1 and a "computer" (Stockfish now at depth 11) evaluates the move as +5, the move is surprising.
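
For anyone following along, here's roughly that check as I understand it, sketched with python-chess. I'm assuming a local Stockfish binary on the PATH, and the depths and centipawn threshold are placeholders, not the author's actual numbers:

    # Hedged sketch: flag a move as "surprising" when its evaluation jumps
    # between a shallow ("human") and a deep ("computer") search.
    import chess
    import chess.engine

    SHALLOW, DEEP = 5, 11        # assumed "human" vs "computer" depths
    THRESHOLD_CP = 150           # arbitrary centipawn cutoff for "large change"

    def eval_after(engine, board, move, depth):
        """Centipawn evaluation of the position after `move`, from the mover's side."""
        board.push(move)
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        board.pop()
        return info["score"].pov(board.turn).score(mate_score=100000)

    def is_surprising(engine, board, move):
        shallow = eval_after(engine, board, move, SHALLOW)
        deep = eval_after(engine, board, move, DEEP)
        # the move looks much better at depth than it does at a glance
        return deep - shallow >= THRESHOLD_CP

    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    board = chess.Board()
    print(is_surprising(engine, board, chess.Move.from_uci("e2e4")))
    engine.quit()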

This is pretty interesting, but I'm not sure if it fully captures all the nuances of what a surprising move is. You might be able to classify a move as tactically surprising if it becomes clear after depth 7 that the ending position is favorable. However, in my opinion truly surprising moves are ones that carry plans that I haven't even considered. Hence, this methodology doesn't capture moves that are positionally surprising as there wouldn't be such a drastic change in evaluation at different depths. I'm not sure where you would start to figure that one out though :)

That being said this is really cool work!


You can filter for this:

Take a database of games from pro players. Take the set of all moves where Stockfish-5 agrees that the move actually played is the optimum. Then filter for all the moves where Stockfish-11 has a different opinion that results in a big gain in position. What you get is a list of moves that would surprise pro players under time pressure.
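
A rough sketch of that filter, again with python-chess (the PGN path, the depths, and the 150-centipawn gain cutoff are all placeholder choices on my part, not anything from the original project):

    import chess
    import chess.engine
    import chess.pgn

    def deep_eval(engine, board, move, depth=11):
        """Eval of `move` in centipawns, from the mover's point of view."""
        board.push(move)
        score = engine.analyse(board, chess.engine.Limit(depth=depth))["score"]
        board.pop()
        return score.pov(board.turn).score(mate_score=100000)

    def surprise_candidates(pgn_path, engine, gain_cp=150):
        hits = []
        with open(pgn_path) as pgn:
            while (game := chess.pgn.read_game(pgn)) is not None:
                board = game.board()
                for played in game.mainline_moves():
                    shallow = engine.analyse(board, chess.engine.Limit(depth=5))
                    deep = engine.analyse(board, chess.engine.Limit(depth=11))
                    best5, best11 = shallow["pv"][0], deep["pv"][0]
                    # keep positions where Stockfish-5 agrees with the pro
                    # but Stockfish-11 prefers something clearly better
                    if played == best5 and best11 != played:
                        gain = deep_eval(engine, board, best11) - deep_eval(engine, board, played)
                        if gain >= gain_cp:
                            hits.append((board.fen(), played.uci(), best11.uci()))
                    board.push(played)
        return hits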

I wouldn't be surprised if professional chess players are all running a version of this against individual known opponents before a tournament to probe for weaknesses.

A harder problem would be to cross-reference this final list with the post-game opinions published by professional commentators and identify major discrepancies. This would be the "wouldn't have thought of it in a million years" list.


The list that's returned still contains mainly tactical surprises where Stockfish inaccurately evaluated the position at depth 5. What I'm trying to say is that there are some moves in a position that aren't tactically surprising (a piece sacrifice, a crazy attacking move, etc.) but positionally surprising (a long maneuver to get a piece to a certain square that I didn't think of). These positionally surprising moves aren't captured by this methodology because they don't involve large fluctuations in evaluation when the depth changes.

As to your second point, an issue with how computer chess affects the modern scene is how playing the "best" move in any given position isn't representative of how humans play. Humans carry out plans and evaluate positions to the best of their ability, but the heuristics and procedure they use aren't the same as a computer's. For example, Karjakin didn't prepare for his match against Carlsen last month by playing a bunch of games against Stockfish. Rather he probably analyzed Carlsen's past games and opening choices to come up with a strategy.

I do think you can come up with a way to prepare against individually known opponents by identifying weaknesses programmatically. You can model a human's approach to playing chess as a set of weighted parameters (material, king safety, pawn structure, etc.) that takes in the current position and returns the best move. You also have Stockfish's evaluation, which returns the "best" move. With this, it's possible that you could build a neural network that learns to play very similarly to a certain player by using their past games as a training set and comparing the chosen move to Stockfish's move. The network could learn to mimic the heuristics that the individual uses to make decisions, and playing against this new AI would be great practice for preparing against specific opponents.
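
To make the idea concrete, here's a very rough sketch of the supervised "mimic" part in PyTorch. The board encoding, the from-square x to-square move vocabulary, and the network size are all simplifications I made up; it ignores promotions and the Stockfish comparison entirely:

    import chess
    import torch
    import torch.nn as nn

    def encode(board):
        """12x64 one-hot piece planes (6 piece types x 2 colors), flattened."""
        planes = torch.zeros(12, 64)
        for square, piece in board.piece_map().items():
            idx = (piece.piece_type - 1) + (0 if piece.color == chess.WHITE else 6)
            planes[idx, square] = 1.0
        return planes.flatten()

    def move_index(move):
        """Crude 64*64 from-square x to-square move vocabulary (no promotions)."""
        return move.from_square * 64 + move.to_square

    model = nn.Sequential(
        nn.Linear(12 * 64, 512), nn.ReLU(),
        nn.Linear(512, 64 * 64),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(positions, chosen_moves):
        """positions: chess.Board objects with the target player to move;
        chosen_moves: the moves that player actually chose in those positions."""
        x = torch.stack([encode(b) for b in positions])
        y = torch.tensor([move_index(m) for m in chosen_moves])
        loss = loss_fn(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

At play time you'd restrict the output to legal moves and pick the highest-scoring one; the divergence from Stockfish's choice could presumably be fed back in as an extra signal.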


I'm not sure I follow your point about tactical vs positional surprise. Surely the ultimate goal of the positional surprise is the same as the tactical surprise - you get an advantage at the end of an expected series of moves. Otherwise what's the point of getting into a surprising position that's not better than the conventional one?

My question is, is there any difference here that can't be solved by, say, upping the ply-number?

On humanlike chess-AI: have an adversarial network that works to classify human vs machine players, and optimize for humanness * strength-of-play in the AI?


The difference is that the positional sacrifice is less tangible. A space advantage, a tempo advantage, more mobile pieces, improved cohesion/coordination of pieces (Kasparov was legendary for taking this last kind of advantage and turning it into a lethal attack). It's a dynamic advantage rather than a static/permanent advantage, which also means there's a risk of that advantage dissipating as the game drags on.

These advantages aren't the kind where you can sit back and let the game play out confident of winning. It's a deliberate unbalancing of the equilibrium of the position, and one where this temporary dynamic advantage needs to be used to create a longer-lasting and static advantage.


Would it be fair to say you are trying to optimize for future positions where you aren't sure you will win, but the positions resemble certain archetypal positions/ share certain features that are advantageous (i.e. has a high probability of transforming into conventionally advantageous situations)?

I'm sure the chess AIs are full of this sort of knowledge internally, though, in the form of computation optimization algorithms. Perhaps the issue is to translate it to a human-usable format.


Indeed, chess engines do have heuristics to include positional advantage in their evaluation of a board, so they "know" in some way that a doubled pawn is disadvantageous or that development of pieces or attacking central squares is beneficial, much as humans know these things.

I've never heard experts discuss this, but I bet it's true that human beings still succeed in appreciating many of these benefits at a higher level of abstraction than machines do. An argument for this is that computers needed an extremely large advantage in explicit search depth to be able to beat human grandmasters. So the humans had other kinds of advantages going for them and most likely still do. One of those advantages that seems plausible is more sophisticated evaluation of why a position is strong or weak, without explicit game tree searches.

I looked at the Stockfish code very briefly during TCEC and it looks like a number of the evaluation heuristics that are not based on material (captures) are manually coded based on human reasoning about chess positions. But if I understood correctly, they are also running machine learning with huge numbers of simulated games in order to empirically reweight these heuristics, so if a particular heuristic turns out to help win games, it can be assessed as more valid/higher priority.
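
As a toy illustration of that structure (this is not Stockfish's actual code; the terms and weights are invented), an evaluation of this shape is just a weighted sum of hand-written heuristic terms, where the weights are the part you could re-tune empirically from simulated games:

    import chess

    WEIGHTS = {"material": 1.0, "doubled_pawns": -0.2, "mobility": 0.05}  # made-up values

    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9}

    def doubled_pawns(board, color):
        """Number of pawns beyond the first on each file."""
        files = [chess.square_file(sq) for sq in board.pieces(chess.PAWN, color)]
        return sum(files.count(f) - 1 for f in set(files))

    def evaluate(board, color):
        material = sum(v * (len(board.pieces(pt, color)) - len(board.pieces(pt, not color)))
                       for pt, v in PIECE_VALUES.items())
        terms = {
            "material": material,
            "doubled_pawns": doubled_pawns(board, color) - doubled_pawns(board, not color),
            "mobility": board.legal_moves.count() if board.turn == color else 0,
        }
        return sum(WEIGHTS[k] * terms[k] for k in terms)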

You could imagine that there are some things that human players know tacitly or explicitly that Stockfish or other engines still have no representation of at all, and they might contribute quite a bit to the humans' strength.


Perhaps the positional sacrifice can be identified by similar means. The most superficial measurement of a position is the material left on the board. So when you compare the superficial measurement to a deeper positional measurement and they are divergent, then we have something positional.
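
One crude way to operationalize that divergence, sketched with python-chess (the piece values and engine depth are placeholder choices of mine):

    import chess
    import chess.engine

    PIECE_VALUES_CP = {chess.PAWN: 100, chess.KNIGHT: 300, chess.BISHOP: 300,
                       chess.ROOK: 500, chess.QUEEN: 900}

    def material_cp(board, color):
        """Raw material balance for `color`, in centipawns (kings excluded)."""
        return sum(v * (len(board.pieces(pt, color)) - len(board.pieces(pt, not color)))
                   for pt, v in PIECE_VALUES_CP.items())

    def positional_compensation(board, engine, color, depth=15):
        """Engine eval minus raw material: a large positive value means the side
        is worse on the superficial (material) count but better positionally."""
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        engine_cp = info["score"].pov(color).score(mate_score=100000)
        return engine_cp - material_cp(board, color)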

I think one of Kasparov's games against Karpov in the New York portion of one of their World Championship matches involved Kasparov sacrificing a queen for positional compensation on the black side of a King's Indian. It would be interesting to see what this project thinks of that game.


This is pretty much what I wanted to say.

What is a surprising move varies greatly from player to machine. Here's a good example:

http://www.chessgames.com/perl/chessgame?gid=1064780

Capa's move 10 here (Bd7) is completely surprising to the vast majority of players and computers. It breaks most of the standard 'rules' of development and space control. However, it doesn't move the needle in terms of tactical significance at all. To me, that's a surprising move.


Surprising may be a bit subjective. I know that I am all too often surprised by my opponent - and not in a good way. This may be a good way to study what kind of patterns have interesting weaknesses.


Surprising is very subjective. I was playing chess in a cabin full of people and found the checkers game next to me more entertaining, most likely because of the people. My moves were not quite random, but since I wasn't really paying attention they frustrated my opponent, who couldn't make sense of what I was doing. My moves were either not very logical or so brilliant that he didn't know what I was up to, and it was really getting into his head. Surprising moves? Yes. Good moves? Not so much.


I didn't build this, it's a site that's been trending on my news feed. Two people in my network built it over a weekend.


I would tell those two guys to build this out (including the parts that should be there but need more data, etc.), targeting it as a value-add service that Disney can sell to ESPN subscribers. Done properly, with multiple sports over multiple years and consistent annotations, I would think sports fans would go nuts over this, wouldn't they? (Any sports fans here?) Like, think of the ability to jump immediately to the video of most any noteworthy event in a sports game in recent history: "hey, remember that time PlayerX did that funny fake tag out to PlayerY? Let's pull that up and watch it, grab this slider, you can move it back and forth to see exactly how he pulled off that trick, hahaha awesome".

I think Disney has plenty of in-house talent and they're usually pretty forward-thinking, so it's surprising they haven't done something along these lines.


I used to work at ESPN.com. My job application was a proof of concept of a similar idea; since it was 2002 and this wasn't technically possible to build yet, it was really meant to show that I could code and think creatively about consuming sports online.

When I took the job, the realities of licensing footage hit hard: the leagues were not keen on letting a third party like ESPN have historical, searchable footage like this available on demand. The leagues were still figuring out how to make money from digital, and the rights to their games are the biggest asset they have.

Things have changed since then, but the tl;dr is that from 2002-2005 the reason this wasn't built by ESPN was not technical or due to a lack of imagination, but a business one: it was prohibitively expensive or outright impossible to get licensing rights from the leagues.


I can see it working, especially if it's community driven. Use AI/machine learning to create the foundation for recognizing which event corresponds to which scene, and then have viewers tag or write comments while the video is rolling.


We're building exactly what you're describing at http://www.swish.io (available on both iOS and Android). We organize clips by plays, teams, and games, and annotate videos using the metadata from the play-by-play. We even have the slo-mo slider capabilities you mention: if you check out our app, you can press down on any video and move your finger back and forth to watch in slo-mo (works much better on iOS at the moment).

We source our content from social media, not from the leagues' sites, so we don't have every play, but we have a lot: usually 30 or more per game.

If you're a sports fan, try it out; we'd love to hear your feedback!


They're probably updating the servers so that the games won't be as laggy. Shouldn't be long!


They've added a few more ML courses: the go-to class for undergrads is now ORF 350 (Analysis of Big Data) with Han Liu. ELE 535 (Pattern Recognition and Machine Learning) was also added last year.


For those interested in the "Probability and Statistics" portion of the guide, Princeton recently created a certificate program in statistics and machine learning that has some more updated information on courses: http://sml.princeton.edu/

I'm pursuing the certificate right now and the courses have been great so far. Princeton's known for having a rather theory-heavy approach in their quantitative classes but I've found a good balance with applications in some of the classes (COS 424, COS 402).

