MS has always shown that they can do research; HoloLens is one such product, and the recent ML enablement spree is another example. It is a good thing that they are going beyond their comfort zone of milking Windows. What will be interesting is to see whether their efforts make any difference to the status quo. I say this after using AzureML, which I liked; there is nothing else on the internet that lets you build an ML model without knowing a programming language. It is a web app that asks you to put data into it and click a few buttons, and it generates Python or R code for you. Just brilliant.
Their "News QA dataset" contains 120k Q&As collected from CNN articles:
Documents are CNN news articles. Questions are written by human users in natural language. Answers may be multiword passages of the source text. Questions may be unanswerable.
NewsQA is collected using a 3-stage, siloed process. Questioners see only an article's headline and highlights. Answerers see the question and the full article, then select an answer passage. Validators see the article, the question, and a set of answers that they rank. NewsQA is more natural and more challenging than previous datasets.
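To make that concrete, a single NewsQA-style record presumably looks something like the sketch below (the field names are my illustrative guesses, not the dataset's actual schema):

    # Hypothetical NewsQA-style record; field names are illustrative,
    # not the dataset's actual schema.
    record = {
        "story_id": "cnn/article_0001",        # source CNN article
        "question": "Who was appointed CEO?",  # written from headline + highlights only
        "answer_spans": [(412, 431)],          # character offsets into the article text
        "is_answerable": True,                 # some questions have no answer in the text
        "validator_ranks": [1, 1, 2],          # rankings from the validation stage
    }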
Their "Frames" dataset contains 1369 dialogues for vacation scheduling:
With this dataset, we also present a new task: frame tracking. Our main observation is that decision-making is tightly linked to memory. In effect, to choose a trip, users and wizards talked about different possibilities, compared them and went back-and-forth between cities, dates, or vacation packages.
Current systems are memory-less. They implement slot-filling for search as a sequential process where the user is asked for constraints one after the other until a database query can be formulated. Only one set of constraints is kept in memory: when the user mentions Montreal, it overwrites Toronto as the destination city. However, behaviours observed in Frames imply that slot values should not be overwritten. One use-case is comparisons: it is common that users ask to compare different items, and in this case different sets of constraints are involved (for instance, different destinations). Frame tracking consists of keeping in memory all the different sets of constraints mentioned by the user. It is a generalization of the state-tracking task to a setting where not only the current frame is memorized.
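As a minimal sketch of the difference (my own illustration, not Maluuba's implementation), compare a memory-less slot-filler that overwrites values with a frame tracker that keeps every set of constraints around:

    # Minimal sketch contrasting memory-less slot filling with frame
    # tracking. Structures are illustrative, not Maluuba's implementation.

    # Memory-less slot filling: one dict, new values overwrite old ones.
    state = {"destination": "Toronto", "budget": 2000}
    state["destination"] = "Montreal"  # Toronto is lost; no comparison possible

    # Frame tracking: keep every set of constraints the user has mentioned.
    frames = [{"destination": "Toronto", "budget": 2000}]

    def mention(frames, slot, value):
        """Start a new frame when the user introduces a conflicting value,
        instead of overwriting the current one."""
        current = frames[-1]
        if slot in current and current[slot] != value:
            new_frame = dict(current)   # carry over compatible constraints
            new_frame[slot] = value
            frames.append(new_frame)    # the old frame stays available
        else:
            current[slot] = value
        return frames

    mention(frames, "destination", "Montreal")
    # frames now holds both trips, so "which one is cheaper?" can be answered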
Adding this kind of conversational memory is key to building agents which do not simply serve as a natural language interface for searching a database but instead accompany users in their exploration and help them find the best item.
---
Can anyone with experience in ML/AI comment on how novel/complex these projects are, and how expensive it would be to build out these datasets? It would be interesting to see what it takes to publish a few datasets built from 20 days of conversations between real people, and get acquired by Microsoft/Apple/Google.
> how expensive it would be to build out these datasets?
Assuming you have the expertise necessary to design and run the process in-house, the major expense is going to be compensating the humans in the loop, which can add up quickly.
This is why organizations that already have access to large datasets have such a huge advantage.
I think that one of the reasons we're seeing such a rush to deploy chatbots is that even a minimally-useful bot will quickly start accumulating extremely useful (and very clean) training data.
There is a lot of noise being made about "democratizing AI", but as long as the best results require a lot of training and huge amounts of training data, data will remain the bottleneck.
Look for progress on 1-shot and 0-shot learning to get a better feel for how much progress is being made toward real democratization.
Thanks for the pointers - I'm looking forward to seeing how AI/ML is brought to market, as we hear a lot about research but not as much on the product side just yet. Sounds like MSFT will be pushing in this direction as well with the Maluuba team:
Last fall, we formed the Artificial Intelligence and Research organization, bringing engineering and research closer together to accelerate the pipeline from cutting-edge research to product development. Maluuba, too, has closely aligned its research and engineering teams, and we’re looking forward to learning from their experiences as well.
Understood; I'm looking for an analysis along the lines of: it would cost x to hire a team of researchers of the same size, with comparable research experience, for the same amount of time to build out these datasets as proof-of-concept products, plus whatever other costs an ML research team accrues.
You're missing the variable N, the number of times you'd have to repeat the experiment to replicate the quality of the results. N might be 10, might be 100.
As an analyst it should be easy for you to figure out x for Maluuba, just McKinsey it. But that isn't really a meaningful number. Researchers aren't fungible commodities traded on an exchange.
Sure, but I think the size of a research team's budget/grant allocation is a good proxy for how "valuable" that research team is. Also, just because you and I are paid salaries doesn't make us commodities, but it does give information about our worth to our organizations and our place in the overall economy.
Someone had to evaluate this deal, and I'm interested in understanding the process behind valuing something so far-removed from any commercial product or market.
If you want to know /how "valuable" that research team is/, find out what Microsoft paid for Maluuba. The number is not in line with their salaries. A few notes on that:
"DeepMind was said to have around 75 employees when Google picked it up in 2014. If the deal closed at the rumored amount, that would put the cost per employee at an eye-popping $6.66 million. Magic Pony is said to have had a team of 14 when it was acquired by Twitter—$10.7 million per employee."
http://pitchbook.com/news/articles/tech-giants-paying-big-mo...
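Backing out the implied totals from those per-employee figures is simple arithmetic (the rumored deal sizes themselves aren't quoted above, so treat these as rough reconstructions):

    # Reverse the per-employee arithmetic to recover the implied deal sizes.
    # The totals are inferred from the quoted figures, not from the article.
    deepmind_total = 6.66e6 * 75     # ~= $500M implied for DeepMind
    magic_pony_total = 10.7e6 * 14   # ~= $150M implied for Magic Pony
    print(f"DeepMind: ~${deepmind_total / 1e6:.0f}M")
    print(f"Magic Pony: ~${magic_pony_total / 1e6:.0f}M")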
Just FYI, the Frames dataset hasn't been available for download since the day they "released" it (although the QA dataset has been). If it becomes available, someone do let me know :)
"Frame tracking" and adaptive slot filling without pre-canning is actually a very active area of research - the goal is to provide not just an Eliza style infinite conversational ability, but to be able to reason about and get the user to a specific outcome in a real world use case by prompting them for information in a semi supervised manner.
Being able to do it via a series of differentiable (in the calculus sense) functions is a fundamental step toward convergence between statistical systems and logical reasoning (which harks back to "classical" AI problems such as planning). Microsoft has some very interesting research here; see papers like [1][2]. Maluuba has some good stuff here as well [3][4].
I doubt the company was acquired for their datasets. They've been a well respected AI company with some top notch researchers, and I think their area of research gels well with MSR's own NLP research.
I doubt they were acquired based on their datasets! The "Frames" dataset states 12 users over 20 days.
But if one of the next 12 users was a Microsoft executive who liked what they saw, well then that's an important data point :) Having Bengio as an advisor definitely helps too.
I think you may be confused... they had 12 "wizard" participants converse with each other for 20 days in order to build out the logic needed for the dataset, not 12 people using the dataset for their own projects/products or as customers.
Here's a presentation from one of their researchers, Harm van Seijen[0], to give you an idea of what sort of work they do.
Applying reinforcement learning to dialogue systems seems incredibly difficult, but if Maluuba (or others) can get a handle on the problem it would not be unreasonable to expect another revolution in the vein of applying convolutional nets to vision.
Not sure this was a successful exit. LinkedIn shows the employee count being cut in half over the last little while. Also, two of the other cofounders left?