Hacker News | throwaway010317's comments

Could somebody give an example of a use case for this?


This comment reminds me of something I read recently on Yaron Minsky's twitter:

"An odd habit of functional programmers: when confronted with a nice, but clearly imperative way of structuring a program, they will often declare that this technique is in fact functional." (https://twitter.com/yminsky/status/950883335324225541)

and then

"Case in point: structuring a program as an imperative, deterministic state machine, where the state is determined fully by the state machine logic plus the sequence of transactions." (https://twitter.com/yminsky/status/950883598189686784)


I use C# in my day job and try to learn as much as I can about cool ways to use its type system. Genuinely curious to see some examples if you're able to point me in the right direction?

> I’ve been able to do great things with C# generics + value types


You can see a few tricks in my project here: https://github.com/naasking/Dynamics.NET

You can use static classes to cache information about types. That project caches delegates to perform deep copies, to invoke constructors, to check for deep immutability, and whether a type can contain any cycles. This is all done structurally via reflection, but the results are executed only once and cached in static fields by exploiting runtime types.

The cached delegate trick can be used whenever you need some efficient dynamic dispatch. I use it here [1] in another project to cache the Dapper methods to invoke, so I can easily compose smaller queries into larger ones that return very complicated objects.
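A rough sketch of the cached-delegate idea, translated to TypeScript (the C# version keys the cache on the generic type via static fields on a generic static class; since TypeScript erases generics, a Map keyed by the constructor stands in here — the names and the deep-copy stand-in are illustrative, not from the library):

```typescript
// Build the handler for a type once, cache it, and reuse it on every
// subsequent call. The expensive per-type analysis runs only once.
type Copier<T> = (value: T) => T;

const copierCache = new Map<Function, Copier<any>>();

function getCopier<T extends object>(ctor: new (...args: any[]) => T): Copier<T> {
  let copier = copierCache.get(ctor);
  if (copier === undefined) {
    // Imagine an expensive structural analysis of the type happening
    // here (the C# code does this with reflection); it runs once.
    copier = (value: T) => Object.assign(Object.create(ctor.prototype), value);
    copierCache.set(ctor, copier);
  }
  return copier;
}

class Point {
  constructor(public x: number, public y: number) {}
}

// Repeated lookups return the same cached delegate.
const copy = getCopier(Point)(new Point(1, 2));
console.log(copy.x, copy.y); // 1 2
```

The C# variant is faster still because the JIT resolves the generic static field directly, with no hash lookup at all; the Map here only mimics the "compute once per type" structure.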

A more complicated example would be the fastest immutable dictionary available for .NET in my Sasa library [2]. Generic methods and types defined on structs are JITted separately, so the dispatching overhead is low. I exploit this to create a very efficient hash array mapped trie.

[1] https://github.com/naasking/Dapper.Compose/blob/master/Dappe...

[2] https://sourceforge.net/p/sasa/code/ci/default/tree/Sasa.Col...


DeepMind claims that AlphaGo has already paid for itself when it was used to cut Google's data center cooling costs, so it is plausible that many other domains may benefit as well.


Where did you read that? Are you sure they weren't talking about machine learning in general?


It has been cited in a number of articles quoting Demis Hassabis. Hassabis repeated the claim in his recent talk on AlphaGo (specifically, that deep RL related to AlphaGo has reduced Google datacenter cooling costs by 40% and is saving Google hundreds of millions of dollars a year) and added the additional interesting detail that Google has changed its datacenter design plans to add sensors & controls for the deep RL agents.

They haven't gone into detail as to how exactly this is done, but my guess is that they use their historical data series on cooling/electricity consumption to bootstrap and then it learns online normally with policy gradients (since cooling and temperature control seem like something with continuous actions rather than a few discrete ones, so an actor-critic rather than DQN).


If you look more closely at the details here (beyond just DeepMind's blog post) you'll see that: 1. It has not yet been deployed to all data centers. They just turned the system off for a period of time, looked at how much extra money it cost to run the data center that way, and extrapolated.

2. A Google engineer already applied neural networks to this problem back in 2012 and saw huge gains (there is a blog post about this somewhere). In the DeepMind blog post they don't compare against that system, so for all we know it could just be a small refinement of it. It is actually not even clear from the DeepMind blog post whether deep RL was used: it is only sort of implied.


https://www.theverge.com/2017/5/30/15712300/alphago-ai-human...

"Say you’re a data center architect working at Google. It’s your job to make sure everything runs efficiently and coolly. To date, you’ve achieved that by designing the system so that you’re running as few pieces of cooling equipment at once as possible — you turn on the second piece only after the first is maxed out, and so on. This makes sense, right? Well, a variant of AlphaGo named Dr. Data disagreed.

“What Dr. Data decided to do was actually turn on as many units as possible and run them at a very low level,” Hassabis says. “Because of the switching and the pumps and the other things, that turned out to be better — and I think they’re now taking that into new data center designs, potentially. They’re taking some of those ideas and reincorporating them into the new designs, which obviously the AI system can’t do. So the human designers are looking at what the AlphaGo variant was doing, and then that’s informing their next decisions.” Dr. Data is at work right now in Google’s data centers, saving the company 40 percent in electricity required for cooling and resulting in 15 percent overall less energy usage."


At the very least I think this is deceptive, because the source for the 40 percent savings (if you keep clicking on links) is the original blog post, and there's no other information saying things have been rolled out fully, yet the Verge seems to imply that here.

I am somewhat convinced that something resembling RL was used based on your argument in the other thread, but I think even you would agree that calling it "a variant of AlphaGo" is a pretty big stretch.


> If you look more closely at the details here (beyond just deepminds blog post)

Where would that be? As far as I know, the DM blog post and Hassabis's occasional discussions are the most detailed public information available. And they don't mention that it's just a brief demo.

> 2. There was already a Google engineer back in 2012 who applied neural networks to this problem and saw huge gains (There is a blog post about this somewhere)

I don't remember this.



So, that was 2 years deeper into the deep learning revolution, post-DeepMind acquisition, doesn't actually say it was a NN (the diagram could be literally any ML model, from linear model to random forest), doesn't say they reduced costs by anything approaching 40%, or even that they are using it in production at all, aside from the one instance where they patched around some downtime.


"Today we’re releasing a white paper (PDF) on how we’re using neural networks to optimize data center operations and drive our energy use to new lows."


Ah, missed that. In any case, the paper confirms what I said: they haven't used it in practice, and the only time they have was the brief one mentioned in the post, where it resulted in a small PUE saving (it quotes 0.02 off an unspecified reduced load; note for comparison the average PUE of ~1.12, so saving anything remotely like 40% is unlikely).


Here is a followup from the same lead author of the paper referred to in that first blog post (Jim Gao), who apparently was involved in DeepMind's project. Note the conspicuous lack of any reference to deep reinforcement learning.

https://blog.google/topics/environment/deepmind-ai-reduces-e...


Using forecasting for 'control' doesn't make much sense (why the need to train a second ensemble to prevent overshoot if it's just supervised learning?), and the first author on that post is not Gao but Richard Evans, who is a DeepMind deep RL researcher (most recent publications: "Deep Reinforcement Learning in Large Discrete Action Spaces", "Deep Reinforcement Learning with Attention for Slate Markov Decision Processes with High-Dimensional States and Actions", "Reinforcement Learning in a Neurally Controlled Robot Using Dopamine Modulated STDP").


Misremembered the year. Actually 2014.


I can believe "deep RL related to AlphaGo" but that was somehow transmuted to "[AlphaGo] was used to cut Google’s data center cooling costs", which doesn't make sense.

(Just nitpicking here.)


FYI, I heard this firsthand from a DeepMind employee.


Could you please elaborate on how TypeScript's type system is better than C#'s?


I disagree with OP, but two interesting corollaries of TS type semantics are its strong type inference and structural typing:

const x = [{x: 3, y: "hi"}, {x: 9, y: "bye"}];

automatically gets you x: Array<{ x: number, y: string }>. And if you declare the same type somewhere else, as long as they're "structurally equivalent" (i.e. same x and y types) you can actually use them interchangeably. You can't do that e.g. with C#, where one class will never be exchangeable with an unrelated class, no matter how similar the definitions.
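A minimal sketch of both points (the interface name here is just for illustration):

```typescript
// An independently declared shape with the same structure.
interface Tagged {
  x: number;
  y: string;
}

// Inferred with no annotation: { x: number; y: string }[]
const xs = [{ x: 3, y: "hi" }, { x: 9, y: "bye" }];

function describe(t: Tagged): string {
  return `${t.y}:${t.x}`;
}

// xs[0] was never declared as a Tagged, but since the members line
// up structurally, it type-checks and works anyway.
console.log(describe(xs[0])); // "hi:3"
```

The equivalent in C# would require xs[0] to nominally implement a Tagged interface, or a manual conversion between the two types.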

This lets you do some funny stuff like "subset type detection": https://gist.github.com/hraban/66c1778cdd31868034b12db93fcce...

All in all, it's more of an oddity than an actual strength, if you ask me. It's necessary to emulate JS semantics, but I wouldn't consider it an advantage in a new language.


> You can't do that e.g. with C#, where one class will never be exchangeable with an unrelated class, no matter how similar the definitions.

Agreed. Duck typing is a killer feature. I can't say how many times I've rolled a new class for no reason other than data structuring, which would have been far preferable as something just defined inline.


Mostly, for me, it comes down to union types, intersection types, the ability to disallow implicit nulls (meaning null must be declared as a valid value at the type level, and therefore cannot creep in unexpectedly), string literal types, discriminated unions (most of the good stuff from algebraic data types), and index types.

All demonstrated here: https://www.typescriptlang.org/docs/handbook/advanced-types....

The result is a very nice, expressive way to state your types and get maximal benefit from the type checker.
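A small sketch of a couple of those features together, discriminated unions plus explicit nulls (the Shape type here is the usual textbook example, not from the handbook page above):

```typescript
// A discriminated union: the `kind` string-literal field lets the
// checker narrow to the right variant in each branch, and the switch
// is checked for exhaustiveness.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2; // narrowed: radius is visible here
    case "rect":
      return s.width * s.height;      // narrowed: width/height visible here
  }
}

// With strictNullChecks, null is not a member of string; it must be
// opted into at the type level, so it cannot creep in unexpectedly.
let label: string | null = null; // fine: null was declared
// let bad: string = null;       // compile error under strictNullChecks

console.log(area({ kind: "rect", width: 3, height: 4 })); // 12
```

Adding a third variant to Shape turns every non-exhaustive switch over it into a compile error, which is most of the practical value of algebraic data types.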

