DeepMind readies first commercial product (ft.com)
158 points by s1512783 on March 31, 2019 | 126 comments



https://github.com/iamadamdev/bypass-paywalls-chrome

No need to rely on third party service.



Do you know why it's not added to the official directory of Firefox Add-ons?


The add-on encourages/enables piracy of paid content. Mozilla and Google would face legal pressure from news publishers if they put it in their official stores.


Dude, don’t leave everyone hanging, spill the beans!


Me? It wasn't rhetorical, I really don't know. But I installed it from GitHub. When I first tried that on my phone, I mistook the popup in question for my device refusing to install it for security reasons, which made me think I should get it from the Add-ons directory instead (but couldn't find it there).


Actually surprised this service hasn't been turned into an HN bot yet.


On FT, Outline only does a summary; check the bottom of the page:

" Summary Data

Page length reduced by 38%

⏱️ 3 min read"


thanks for the outline. article was behind a paywall from my end.


thanks


this is stealing content right?


yes


nope


Thank you.


Was gonna complain that ft was behind a paywall. Ty.


you should still complain, this nonsense has to end.


Nonsense? Why do you say that? Is it nonsense that a publication needs to pay its employees? Is it nonsense that non-ad driven revenue allows a publication to focus on quality material versus optimizing for clicks?


You would be right if the article had some unique exclusive information. It doesn't.

Maybe HN should ban paywalled sources.


Most publications including the New York Times allow for a limited number of free article views before a paywall is imposed. That approach allows for casual viewers to get a feel for the content, without open access to all the material.


I’ll bet you say the same thing when you are forced to pay for groceries before eating them.


Because, as with groceries, you're depriving someone else of content by consuming it. You wouldn't download a car.

It's the most disingenuous argument against free media (or piracy etc) that's being thrown around.


> It's the most disingenuous argument

No, it's not. I actually used to write for a newspaper before. These days, they are shoestring operations locked in a constant battle for financial survival. The stories are written by real people who get paid (very little) real money. Take that out of the equation and the news goes away. I deeply value journalism and want it to survive. The level of entitlement evinced by you, gp, and many others on this site as regards the whole "information wants to be free" meme is simply stunning. Like, you really haven't given a moment's thought to what "free" implies here.


While I do agree with some concerns about quality journalism, and I want it to survive, I find the specific argument you used to be disingenuous. We're dealing with a perfectly and infinitely reproducible 'product'. Any scarcity here is artificially imposed. It's not like groceries, where we have a finite amount of goods that can only be consumed once.

I know that the reality of dealing with non-scarce goods doesn't gel very well with free-market capitalism, but I think limiting access to journalism (and IP laws such as copyrights and patents) is a net negative for our society, so we shouldn't resort to bending our only post-scarcity type of good into a scarcity-driven economic system.


Splendid. Impressive. Who pays? It’s a very simple question I’m asking you.


Get rid of market capitalism and nobody has to


I'm sure you're joking. Information is not a consumable, it doesn't get used up in the process.


And yet it still costs money to produce...


Karl Marx said the same thing about collecting firewood in the forest.


The cost blowout in healthcare isn't due to diagnosis or care but administrative costs. All these diagnostic AIs won't significantly reduce the cost of healthcare nor improve quality. What we really need is automated administration and billings to really move the needle and that's more human bottlenecked than technology.


> What we really need is automated administration and billings to really move the needle

Right, because every other developed country gets similar outcomes at lower costs (per capita, per GDP, or on any other reasonable basis for comparison) because they are far ahead of us on automated administration and billing.

There might be countries where that's what's needed, but in the US there's clearly a lot of lower hanging fruit that doesn't require any new technology.


Yeah, so I work at a huge health tech company. A lot of the issue is actually private insurance.

Currently in the US the largest health insurance provider is actually medicare/medicaid. When a patient with medicare goes to the doctor, the government says "This is what we will pay for that procedure, no exceptions." The hospital has a choice to either accept that rate, or to lose out on the massive medicare market.

Private health insurance plans have vastly fewer subscribers, and don't have the power to negotiate prices like the government does.

But regardless, why is health care and insurance a for-profit industry? It creates incentives to put profits ahead of people's health.


> Currently in the US the largest health insurance provider is actually medicare/medicaid.

Medicaid is more like 50 different insurance providers (it's run by each state, with separate programs for, at least, D.C., Puerto Rico, and Guam, and it's not even a single insurer in every state; e.g., California uses a number of county-level managed care plans as well as a traditional fee-for-service plan). And all of them are separate from Medicare. Which is also less of a single insurer with common coverage policies than it seems on its face (even excluding Medicare Advantage, which is just publicly subsidized private insurance), since whether any given procedure is reasonable and necessary, and therefore covered, in any given geographical area in Medicare depends on local coverage determinations made by the private insurer that is the Medicare Administrative Contractor for that region and claim type (there are separate contractors, with different geographical regions, for regular Part A & B claims, home health and hospice claims, and durable medical equipment claims).


Another high cost of healthcare source is the lack of competition on price. You have zero idea how much each procedure costs. Imagine going to a supermarket where there are no price tags, and you must buy the items once you pick them off the shelf. It's madness.


It's worse.

If you have insurance, the price tag will be X, and they will pay for some part (most?) of it.

If you don't have insurance, the price tag can easily be 100X and it's your responsibility.

If you have insurance, but you happened to pick up that can of pepsi from the left aisle (out of network) instead of the right aisle (in network), they won't cover it, and you're stuck with the 100X price. You won't know that before you buy the can of pepsi, of course.

If you try to ask your insurance if they cover the specific can of pepsi from the left aisle, they may or may not tell you. They may or may not give you the right answer, and if they tell you it's covered but then refuse to cover it, it's your problem (and again, the price will be 100X).


It should be illegal to charge two different people two different prices for the same procedure at the same facility.

The whole model of negotiated rates, rebates, etc. needs to go.

I also think the concept of networks is bizarre on the face of it. A certified medical provider should be covered to perform procedures in their area of expertise. Period.

Personally I like the idea of anyone who wants to be able to buy into Medicare A & B. And if you don’t have Medicare then you can always pay the Medicare rate of the procedure at 100% (versus having Medicare where your copay is 20%).

If insurance companies can’t compete with that then great.


I agree with you. I wonder what it would take to get rid of those discounts in terms of legislature and how individual voters could work together to fix it. The current system is ridiculous. A $17 procedure should not be billed at $1,200 "because they can".


So the solution is more layers of administration over the prices of the pseudo marketplace?


How is that a fair representation of what I proposed?

I think part of the problem is absolutely opaque, discriminatory, and predatory pricing.

I went to get a basic blood count last month. I gave the lab my insurance card, but they must have copied a number wrong, because when I got the bill they had me down as self-insured, with a bill that said:

  Lab Services     :  $1,541.00
  Patient Adjust   : -$  385.25
  Total Due        :  $1,155.75
I called back and gave them my insurance card, and they said to ignore the bill; a new one would come in the mail.

Last week I got the new bill:

  Lab Services     :  $   17.12
  Insurance Pay    :  $   12.12
  Total Due        :  $    5.00
Yes, I absolutely think it should be illegal to try to bilk a cash carrying customer over $1,000 for a $17 blood count test.


You missed the part where you chose an in-network can (hospital) that is filled with out-of-network soda (doctors).


That's funny. I'm not familiar with the US, but are costs not disclosed at all until after they're incurred?


That's generally true in the states, but it is even worse. You as the patient don't see the actual price before the procedure (unless you ask). After the work is done, you then get a statement saying how much the procedure cost, what your insurance paid and what you owe. All those numbers are opaque and negotiable. Even better, it is common for insurance and the provider to continue sending you bills for months after even common procedures like child birth. It's a total mess.


Even if you ask, they may not be able to tell you. Many places are just not set up to be able to give that information beforehand.


Yeah, mostly a hospital could tell you what they will probably charge but they don't have all the variables to come up with what the actual negotiated charge will be nor what your co-pay would be.


Oh and just wait until you get to the sleazy methodology for charging those rates. Childbirth for instance, the room will be billed to the Mother for the days before the birth. So the Mother probably meets their deductible, then as soon as the child is born the room and all the things in it get billed to the child who now has a new deductible to hit before the insurance company has to pay their full amount.


Costs in healthcare aren't very "real". US providers use what they call a charge master to determine prices. So while an "IV" may show up on the bill, it's not the actual IV you're purchasing when you're in the hospital. Behind that number the hospital has factored in all sorts of cost centers such as part of the nursing staff cost for example. These "costs" therefore do not represent the true actual cost of providing the service, but rather the charge that the insurance company will pay. Insurance companies have a negotiated rate that they will then pay to the hospital. One of the major "hidden" problems in our current healthcare market is that there exists extreme consumer price insensitivity (since most cases are covered by some type of insurance) alongside large consolidation of healthcare providers.


> All these diagnostic AIs won't significantly reduce the cost of healthcare nor improve quality.

Thanks but this sounds ridiculous to me. Doctors are people too who really do get paid and really do make mistakes.

Using AI to learn from the best of the best will improve results overall, and automating tasks that expensive doctors did will reduce cost.

Yes we need to handle the exploding and ridiculous administrative costs, but clearly there can be gains on both fronts of this battle.


Agree that it's silly to write off potential gains, but it's also important to recognize that consumer costs of healthcare aren't driven by supply costs. It may be easy for administrative and insurance layers to claim the surplus created by AI.

This is an important factor in how we choose to build products out of AI advances. If your product replaces a part of a service chain that is not either right next to the transaction or in a competitive part of the supply chain, it might only help incumbents aggregate more power and charge more rent. If you can deploy in a way that improves the competitive landscape, you not only distribute gains more but probably keep more leverage as a solution.

This is harder for AI than many technology areas because incumbents tend to have more data and regulatory protection on that data. Health care might be the boss level for that problem.


The cost blowout is a US thing, diagnostic AIs can be used anywhere. And even if the prices weren't affected (which doesn't seem plausible), this can hugely increase the availability and the timeliness of a reliable diagnosis.


It's not a US thing; other countries such as Germany deal with it by making their hospital doctors work insane hours (mandatory 24-hour shifts on weekends from time to time (400-600€ extra for that, unless you are in training, then you just have to do it), one assistant doctor for a psych ward overnight) and by introducing collective bargaining. So regardless of whether you are a heart surgeon or a dermatologist, you will be paid based on seniority and rank, not individual performance. Unions have for the most part colluded with industry, resulting in an inflation-adjusted decrease in average salary over the last 30 years. Paperwork takes up the majority of time for a station doctor; they do a few rounds a day and the rest is spent booking treatments and billing for their patients in their internal '80s-'90s-era IT infrastructure.


>It's not a US thing, other countries such as Germany deal with it by making their hospital doctors work insane hours (mandatory 24-hour shifts on weekends from time to time (400-600€ extra for that unless you are in training, then you just have to do it), one assistant doctor for a psych ward overnight)

In fact, it IS "a US thing":

- Most major teaching hospitals mandate 24-hour weekend shifts for residents

- One "assistant" (aka resident) doctor for a psych ward overnight is standard

Retired neurosurgical anesthesiologist here who's been there and done that many, many times....


They could actually be better and much faster than humans for initial diagnosis. This has far greater implications; e.g., early detection reduces a big burden on the system.


That has been available since the 1970s. Healthcare is not a very complex compsci issue for 80% of cases. The problem is that presentation of symptoms is highly subjective and needs highly complex interpretation that no AI will ever be able to achieve, plus the liability conundrum to deal with.


No one claims AI will be the authority; it is good for finding candidates much earlier. We never had devices that match the success rate of specialists on several eye problems in one go, and definitely not in the '70s. I don't understand what you are arguing against, to be honest.


> liability conundrum

So many programs and devices are used where someone would be liable if they malfunctioned. In a production line for example, if something goes wrong and it has to be turned off, every hour costs $$$ to the production plant owner. Similarly for robots: there have been cases where industrial robots have killed people. Accidents with machines can happen in so many industries. If the machine is wrong in 0.2% of cases, that's a risk that can be calculated. If its rate of misdiagnoses is equal to the rate of a human expert, then replacing non-experts with it will improve patient experience. Of course, there might be super experts whose patients would be worse off if they were treated by an AI.
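The "calculable risk" point can be made concrete with a toy expected-value comparison. All rates here are invented for illustration; a machine matched to the human error rate leaves expected misdiagnoses unchanged:

```python
# Toy expected-error comparison with invented rates (illustrative only).

def expected_misdiagnoses(cases: int, error_rate: float) -> float:
    """Expected number of misdiagnosed cases at a given error rate."""
    return cases * error_rate

human_rate = 0.002    # hypothetical 0.2% error rate for a human expert
machine_rate = 0.002  # machine matched to the human rate

# If the machine's rate equals the human's, expected harm is identical,
# so the swap is neutral on this axis and can win on cost and availability.
print(round(expected_misdiagnoses(100_000, human_rate)))    # 200
print(round(expected_misdiagnoses(100_000, machine_rate)))  # 200
```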


>that no AI will ever be able to achieve

The Luddites are back! We'll see how this prediction holds up in 30 years.


The jury is out on whether the Luddites were indeed troglodytes[1]. If you were to apply the 30 year nostalgia rule in 2049, then we might be hankering for the present, where 'AI' and the surrounding issues are still in their infancy.

However, a pragmatic approach would suggest that any form of AI and its derivatives would be assistive in the medical field and play a hybrid role, rather than being a panacea[2].

[1] https://news.ycombinator.com/item?id=17667375

[2] https://towardsdatascience.com/why-ai-will-not-replace-radio...


That user talked about "liability"


I really don't believe we want more automation in this area. Think of YouTube or Google Play Store moderation; do you want that for medical billing? And I doubt even 10% of medical costs could be attributed to regular administrative overhead.


Depending on the type of procedure/practice a quick google search indicates somewhere north of 20% is administration. https://www.nytimes.com/2018/07/16/upshot/costs-health-care-...

Every time I visit the doctor in the US there is a trail of paperwork produced. Outside of basic checkups (where you don't dare mention any niggling health issues because you're instantly going to get billed consultation fees) there is always a back and forth between the patient, the doctor's office, and insurance over what the doctor's office is asking to be paid and what the insurance says it is prepared to pay. This is an enormous time-wasting game: the doctors (or their medical group) ask for much more than the procedure cost to perform, knowing full well that the insurance company will push back with a lower accepted payment, with the patient left sitting in the middle having to negotiate between the two.

Removing the pricing game would lower costs substantially (and vastly reduce the time wasted by regular Americans calling their insurance/doctors trying to sort out their individual billing messes).

Having lived in both the USA and the UK there is no question in my mind that the US health care system is a dysfunctional mess. In my experience the only place the US system has an advantage is that the healthcare providers are very (too) willing to send a patient for tests/scans to check for every eventuality and are happy to prescribe whatever medicine is 'best' for an ailment.

In the UK a fit person with a common cold would get sent home and told not to waste the doctor's time; in the US you'll get a consultation and a dose of Z-Pack (and hey, if I'm paying for my insurance I should use it when I get sick, shouldn't I).


AI is being developed to make MRI scans 10x faster.

https://www.forbes.com/sites/samshead/2018/08/20/facebook-ai...

If DeepMind is trained on millions of MRIs, we might have better preventative medicine.


I have talked at length with several people in the FastMRI project and in my opinion it is actually very dangerous. There is no feasible way to validate that such a model will not hallucinate normal tissue in the presence of a rare abnormality. The argument often used is that advanced reconstruction techniques such as compressed sensing have not required validating against very rare abnormalities, however when deep neural networks get involved you have a nearly universal function approximator whose behavior is nowhere near as bounded.

The FDA will probably set the low bar of validating that the reconstruction algorithm fares well in the face of a few abnormalities and call it a day, instead of the proper (admittedly infeasible) validation of testing against all abnormalities that the reconstruction algorithm will encounter. Facebook will probably happily jump over this low bar, and patients will get hurt.


I never understood these arguments. Isn't it like claiming self-driving cars via neural networks would never be possible because you can't test that the neural network would take the correct decision in every situation?

I view the whole issue stochastically, with the immediate aim being to make (significantly) fewer errors than the current approach, which is having a human decide. I don't claim that I can design an experiment which could serve as an indication of whether we are improving upon human judgments, but I think this should be the goal.

Reflecting upon my view, I think it comes from the experience of training ML algorithms. You are always minimizing errors, but your goal is almost never to make 0 errors, because often your data is noisy and you are probably overfitting. I know medical environments are more sensitive, but I can't really wrap my head around how we could design a learning algorithm that makes no errors and works on all abnormalities. I think it will always misclassify.

Rephrasing my argument: I think approval should be given if a significant expected improvement over the distribution of real-life abnormalities can be detected, not over the uniform distribution over all abnormalities.

EDIT: detecting out-of-distribution samples is hard and I don't think this is a solution and leads to a false sense of security.
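The proposed weighting (expected improvement over the real-life distribution of abnormalities, rather than uniform over abnormality types) can be sketched with invented numbers:

```python
# Sketch of prevalence-weighted vs uniform evaluation. All numbers invented.

improvement = {           # model sensitivity minus human sensitivity, per type
    "common_a": +0.02,
    "common_b": +0.01,
    "rare_x":   -0.20,    # model does badly on a rare abnormality
}
prevalence = {"common_a": 0.60, "common_b": 0.39, "rare_x": 0.01}

# Uniform weighting over abnormality types vs weighting by real prevalence.
uniform = sum(improvement.values()) / len(improvement)
weighted = sum(improvement[k] * prevalence[k] for k in improvement)

print(round(uniform, 4))   # -0.0567: looks harmful under uniform weighting
print(round(weighted, 4))  #  0.0139: net positive at real-world prevalence
```

The same model can fail the uniform criterion while passing the prevalence-weighted one, which is exactly the distinction the comment above is drawing (and why the severity of a rare miss still matters).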


It isn't. It obviously is possible to drive a car. The problem here is you're trying to reconstruct an image from less data than what is fundamentally possible. Compressed sensing is already as low as you can get; unless you prove it is possible to reconstruct the image accurately from even less data, it's probably pointless to try to use neural networks to decode it, and it's actually a bad idea anyway.

The problem with neural networks is that they can reconstruct something that looks "normal", not necessarily something that is accurate. The more abnormal the scan, the more likely it will get reconstructed as something that looks perfectly fine even when it's not.


I really don't know anything about MRI and its usage; I was mainly criticising that his arguments sounded like a lot of the arguments I've heard against other uses of ML algorithms.

What I meant: There's probably a medical reason why you want such a product, and if a more readily available MRI saves (really significantly) more lives than the chance that it might miss some abnormalities which could lead to death, then I think we should allow it. That's what I meant with a stochastic view. If we, for example, only have a few scans per hospital available because of the chance that we might misclassify something, and lots of people get worse or delayed treatment because they are not high enough on the priority list to get access to a super-resolution MRI with a fidelity they don't really need (again, I don't know anything, just to illustrate my point), then I think something is wrong.

His argument just sounded dismissive without giving a, to my uninformed point of view, valid reason.


The distribution of abnormalities should be weighted by the severity of a miss. Unfortunately there are many abnormalities with lethal outcomes if missed, so it's very hard to get a sense of whether a reconstruction algorithm will adversely affect outcomes. I would appreciate even a more detailed look at trying to estimate this, especially in light of how much more unpredictable a deep learning system can be. However, that is not what's happening. Everyone is happy to just jump over the bar that has been set for previous reconstruction algorithms.


I don't have strong feelings about this, but a few thoughts come to mind regarding exploring accelerated MRIs via neural networks.

1. If an MRI can be done 10x faster with the same results except in exceptionally rare cases, might that not still be a win? Order of magnitude reduction in time may translate to substantial reduction in cost and increase opportunity for applications. It seems like it is worth considering whether these benefits might be worth compromising the accuracy of the imaging.

2. How accurate are radiologists at diagnosing / detecting these rare abnormalities that validation might miss? If radiologists are actually pretty mediocre at this, might it be OK to make the scan slightly imperfect if the next stage (the human) is already very imperfect?


1. Maybe, but that should be a tradeoff that is made consciously with some analysis and care, instead of just jumping over a low FDA bar, which is what everyone in this space seems to be doing. I think there are enough abnormalities that can be fatal which a net would just fill in (e.g. aortic dissection?)

2. There are thousands of different abnormalities. From what I understand about the FDA validation process for this sort of thing there would be only 10s of abnormalities. There are likely, many, many of them that are quite obvious to radiologists. And once again, this would be a question that should be studied carefully when people's lives are at risk, instead of just assuming that it will be fine then going ahead to "move fast and break things"


> And once again, this would be a question that should be studied carefully when people's lives are at risk

I agree 100% with this, but how can this question be studied if the research to develop these systems doesn't go forward? You earlier characterized the FastMRI research as, in your view, "actually very dangerous." As I see it, the research here is potentially extremely valuable, and the danger comes not from the research, but from somebody deciding to deploy it without considering these questions. Typically the road from research program to wide deployment is quite long, and I disagree that we should discourage the research because of potential flaws in the productization process. (Unless there is ill will on the part of the researchers, which I'm assuming there is not.)


I think it's the lack of will to study these sorts of things that is dangerous. There just doesn't seem to be a sense of caution among the people working on this. They seem to just want to jump over the low bar set by the FDA and move ahead to patient care.


It's my understanding it's sometimes exactly those obvious abnormalities that are missed by radiologists.


Adding another lossy step will only make even more anomalies get missed, or worse, misdiagnosed as something else (I'm thinking about the photocopiers that sometimes change numbers in documents).

The argument that "our system is already bad so we should just merge this new bad component because it doesn't make it worse" is bad in software, and unacceptable in medicine.


I don't think that is a fair characterization of what I wrote.

If the scanning process has a sensitivity of 99.999%, and the next stage in the signal chain has a sensitivity of 50%, and we consider what happens if the MRI sensitivity drops to 99.9%, that's well in the noise in terms of diagnostic value. Use some of the extra money freed up by doing the MRI 10x faster to pay a radiologist to look at the scan for 10% longer, and perhaps the net accuracy has improved.

What is unacceptable about this kind of holistic reasoning about the system? High performing systems are frequently not composed of perfect components.
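That chain reasoning is just multiplication: if a detection must survive every stage independently, net sensitivity is the product of per-stage sensitivities. A minimal sketch, using the hypothetical numbers from the comment above:

```python
# Net sensitivity of a serial detection chain, assuming stage independence.
# Rates are the hypothetical ones from the discussion, not real data.

def chain_sensitivity(*stages: float) -> float:
    """Probability a true positive survives every stage of the chain."""
    result = 1.0
    for s in stages:
        result *= s
    return result

slow_scan = chain_sensitivity(0.99999, 0.5)  # original MRI, then radiologist
fast_scan = chain_sensitivity(0.999, 0.5)    # degraded MRI, then radiologist

print(round(slow_scan - fast_scan, 6))  # 0.000495: lost in the radiologist's noise
```

The independence assumption is the weak point, per the sibling replies: an ML reconstruction may fail in correlated, outlier-heavy ways that this product model doesn't capture.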


It's fine to talk about statistical accuracy when the distribution is somewhat normal. ML systems we've seen so far don't typically degrade in nice, statistical, predictable ways.

Omitting 90% of samples (based on 10x speed claim) leaves an awful lot of room for bizarre errors.


One advantage of software is that it can't really have a bad day. The radiologist can be affected by external human factors. Perhaps an angry wife and a dinner getting cold at home. He might not take the extra picture a bit further down the lower back where the golfball tumor was growing (story from my mother-in-law).


To be honest I am highly skeptical that FB would be able to have the sort of institutional capacity to launch any sort of medical device health product. It takes a non trivial amount of effort and time to fully productize it (in terms of system integration/ product testing/ FDA submission). Unless there is involvement with one of the major MRI manufacturers, I don't foresee this going far. Many more players in the game that actually have a decent shot of getting something like this to market.


The thing is that because the highest-ROI application of machine learning is online advertising, unquestionably the two deepest pools of machine learning talent are at Google and Facebook. So this may be able to overcome the obvious culture mismatch between "move fast and break things" and the FDA.


Google has a long, long history of failing in the medical space. They tend to overengineer their products in a way that makes sense if you're a computer engineer, but not if you understand anything about the healthcare system. Facebook has no legitimate history developing these products either. It's more involved to develop health products than it is to sell ads. And when it comes to deep learning, optimizing your ML for 2D RGB is not the same beast as volumetric 3D data.

And besides, let's say tomorrow they have a method to do it: how do they prospectively scan patients? How do they deploy the algorithm in a clinical setting? For any of this to work as a product, it would have to be integrated into an MRI controller. Unless I'm eating my words at RSNA this year and DeepMind is presenting their work, I'll remain highly skeptical that this is going anywhere beyond a PR story that's been sold to the media.


Deepmind has projects related to 'formal verification' and conservative bounding, presumably for these very reasons.


From what I understand there are tons of MRI applications. If you are looking for something exotic, of course you will try to get a human, probably the best in the field.

But if you are trying to bring this to the masses, one can start with something "simple" like bones or some relatively trivial organ. Not that MRI prices themselves are dropping much; gotta wait for room-temperature superconductors.


Nobody expects these systems to be perfect, which is why they have less validation against extremely rarely observed problems: the total impact is small enough that we don't prevent good systems from being approved.

Rather than saying "very dangerous", it would be safer to say that some individuals would not be well-served, but ideally the overall health of the population would increase for a reasonable expenditure.


"Using AI, it may be possible to capture less data and therefore scan faster, while preserving or even enhancing the rich information content of magnetic resonance images, says Facebook.

The key will be to train artificial neural networks to recognise the underlying structure of the images in order to fill in detail omitted from an accelerated scan."

Ah, right, what I want is for an ANN to invent information in a medical image. It's one thing to upscale textures in a game, but I can't see the use for this at all in a medical imaging device.


I don't care what it has to do, it can be drawing pentagrams in blood for all I care, as long as it has even a fraction better health outcomes to unassisted doctors I'm all for it.

The point here is making people healthy, not providing some sort of validated, theoretically clean design. If the RCTs say it works, bring on the deep-learning voodoo.

(edit: I, for the record, want to trust in average accuracy.)


This is not how trust works


You mean it's not enough to trust in average accuracy, you want trust in specific instances of application?


Averages are not adequate descriptions of situations with bizarre outliers. An ML model that creates nonexistent detail in a medical image screams bizarre outliers.


Medical conditions are usually binary choices.


The article is about faster scanning, not making a diagnosis.


Well, imagine a doctor having a suspicion of some kind, sending a patient to get scanned and an AI ignoring an anomaly and substituting it with an interpolation of normal tissue. The doctor looks at the result and thinks he might have been wrong and says to the patient that he is fine.

Even if it happens more rarely than a doctor missing an anomaly on the scan, it doesn't feel right. A patient had a good chance to be diagnosed and he wasn't.

What's more, in case of normal MRI a patient or a doctor can ask another doctor to have a look, but in case of AI and the fast scan described in the article, the AI's interpretation of data is final, unless you go make another scan, this time in the traditional way.


I think this is more a problem with your feelings than with the actual process.

You're focusing on the person who got falsely detected as healthy, and are ignoring the people who would be correctly detected as unhealthy. That's why the important question is expected outcomes.

I don't really care whether the doctor gets to be morally culpable or not. I'm not sure why I would ever care about something like that as a patient. If I get sick and die, the fact that a person was responsible rather than an algorithm gives me precisely zero comfort.
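To make the expected-outcomes point concrete, here is a back-of-the-envelope sketch. The prevalence, sensitivities, and throughput figures are entirely hypothetical, not from the article:

```python
prevalence = 0.01            # assumed disease rate among those scanned

# Policy A: slow conventional scan, fewer patients per day
sens_slow, patients_slow = 0.95, 100
# Policy B: faster AI-assisted scan, slightly lower sensitivity, more throughput
sens_fast, patients_fast = 0.90, 150

found_slow = prevalence * sens_slow * patients_slow   # expected cases caught/day
found_fast = prevalence * sens_fast * patients_fast

print(found_slow, found_fast)   # ~0.95 vs ~1.35 cases caught per day
```

Under these made-up numbers, the faster scan misses more often per patient but still catches more disease overall, which is exactly the population-level trade-off being argued here.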


You falsely believe that it is a binary choice. Cost-cutting allowed by this fast and creative scanning is not the only way to help more patients.


I'd assume (hope) they mean to infer data and not invent it.


It is clear to any intelligent mind that you cannot give something less information and expect it to give you back better decisions.


Not only can you, you have to in order to make better decisions. It's called focusing, and everyone does it all the time.


Idk. A lot of the applications of AI to compressed sensing result in plausible but inaccurate data being filled in. That's fine for, say, video games, but not as great for medical imaging, I'd guess.


I find it somewhat interesting that purely analytical techniques can be replicated using CNNs. It sort of goes to show that there's a divergence between purely classical EECS methods and some of the newer approaches that integrate advanced signal processing with deep learning. Hopefully these hybrid approaches can advance the field. That said, most of the current publications on this topic don't seem to push the field far compared to conventional signal processing approaches.


This assumes that’s what medical companies want. If we get closer to full value-based care, then yes.


Which medical company doesn't want to move costs down?

You don't need high costs to charge a lot.


If your business is making money on treating issues, preventing them isn’t aligned. Worst case, you neglect issues until they get to a point where you start making money. The legacy dialysis business is a good example: they need ass-in-seats to justify clinics, so prevention isn’t as big a priority. Also, doctors often have shares in clinic revenue.


A friend of mine works for a software company that was using Medq AI's brain bleed/hemorrhage detection algorithm to identify urgent cases for review by a radiologist.

They were pitching it to a medical group, whose response was: that's great for patient care, but how does it cut costs for the organization...


This makes sense, but Facebook / Google have enough resources to push their solutions onto hospitals (give it away free for a few years, maybe give the CEO some other fun stuff).


MRI manufacturers want to develop products that bring new functionality and the ability to justify their prices. A brand new 1.5T scanner easily costs several million dollars to make, and a clinic purchasing one has to make a business case to justify the expense. For clinics, acquiring new service lines (such as adding cardiac imaging) is one way to do so, even in a more value-based care setting.


I think this was already achieved in Switzerland without AI: https://www.dagensmedicin.se/artiklar/2018/11/20/en-mix-av-b...


That's Sweden


I'll believe it if I see it at RSNA 2019.


Is that really the way to make MRI faster? It seems like a time-and-motion problem. You lie in the machine for a long time, but there's a lot of slack; the actual imaging takes up barely any of the time. The remainder is spent waiting for things to whiz around and for the tech to click "next" on their console.


That's the problem: current MRIs are like taking high-definition pictures in raw format, one pixel at a time. There's loads of redundant information in the raw image encoding. If there were some way to acquire only the information we need, the process could go faster.

Adding more "cameras" to an MRI machine so we can do parallel imaging is expensive.
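A toy 1-D analogue of that idea (not a real MRI pipeline; actual scanners sample 2-D/3-D k-space with far more sophisticated reconstruction): skipping frequency-domain samples speeds up "acquisition", but a naive zero-filled reconstruction aliases the detail away, which is the gap AI-based reconstruction is supposed to fill.

```python
import numpy as np

signal = np.zeros(128)
signal[60:68] = 1.0                      # a small "lesion-like" feature

k = np.fft.fft(signal)                   # full "k-space" acquisition
mask = np.zeros(128, dtype=bool)
mask[::2] = True                         # keep every other sample: 2x faster
k_under = np.where(mask, k, 0)           # skipped samples are simply unknown

recon = np.fft.ifft(k_under).real        # naive zero-filled reconstruction
err = np.abs(recon - signal).max()       # aliasing: feature halved and ghosted
```

Here the reconstruction halves the feature's intensity and mirrors a ghost copy of it elsewhere in the image; recovering the true picture from undersampled data requires a prior of some kind, whether compressed sensing, parallel imaging, or a learned model.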


Depends on what you're trying to image. Tbh, if it takes 15 mins vs 30 mins to get an image of your knee, it's maybe not really that much of a game changer.

From the perspective of an MRI purchaser (say, a clinic or hospital director), maybe you can squeeze 1 or 2 more cases into a day, but it seems like a marginal improvement overall. I think cardiac is clearly the exception, where quicker acquisition brings on board potentially new functionality.


Presumably there will have to be a proper medical trial with a declared up front hypothesis and double blind testing?


I couldn't read the article, but I wonder if the NHS will get this for free, since we "gave" them all our data in order to train the model.


> A DeepMind spokesperson said that if the research results in a product that passes clinical trials and regulatory approvals, doctors at Moorfields will be able to use the product for free for an initial period of five years.


Depends on the data transfer agreements. However, there are plenty of ways you can use public data even if you don't end up using the actual data in the final commercialized model. You just have to be clever. ;)


I thought DeepMind's actual first commercial product was Google Cloud's premium voice generation service (built on WaveNet).


Maybe it's semantics?

Google Cloud's voice generation uses models/algorithms developed by DeepMind; it's not directly a DeepMind product.


What’s next, an AI to render a judicial verdict? Seriously though, this is very exciting. I can imagine a future in which medical diagnosis is damn near 100% accurate - all you need to do is lie down and get scanned and smelled by an all-in-one machine, and your diagnosis is displayed with recommended treatment. If treatable with drugs or molecular repair, it's done on the spot.


> an AI to make a judicial verdict?

There is some discussion towards this direction: https://www.wired.com/story/can-ai-be-fair-judge-court-eston...


I hope AI can assist in the knowledge gathering phases, and for civil cases I think it can be a huge value add.

For criminal cases though: the current judicial system is way too punitive. And an AI that would apply the letter of the law would likely criminalize society even more than has already happened.

One happy scenario would be if the laws were more responsive and were changed to not be so punitive since the AI would have a high rate of conviction.

But then, you might have other failure scenarios. Rich people buying AI programmers and hackers to mess with the system.

It's a constant game of outsmarting the latest tech.


This would be a major holy grail and a turning point, where major professional analysis and decision making are outsourced to AI services. You would have a hybrid of lawyers with AI, and that should lower costs... unless opposing counsel uses their own AI to raise them.

We are in for some truly exciting times.


Didn't the DeepMind folks try controlling one of the data centers a while back, which ended up reducing overall power requirements? I was hoping they would go that route...


Unrelated but anyone knows why the URL is not "SEO friendly"? I'm guessing they don't bother because the content is behind a paywall anyways?


How does one invest in DeepMind?


Buy shares in GOOGL.


For those not being able to read the article, you can use this "no paywall" bookmark:

    javascript:window.location.href='https://m.facebook.com/l.php?u='+encodeURIComponent(window.location.href);


I’m offering name suggestions:

DeepEye AEye


[flagged]


MindSweep


April Fools!




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: