Hacker News | duality's comments

Do these "life hacks" apply in other developed countries, or is the US special in this way too?


> Do these "life hacks" apply in other developed countries

If by “developed” you mean the U.K., Australia, France, Germany or Japan, then yes. Parsimonious retention is a generally prudent measure.


If a company is okay with employees "cross training", is that a problem? They're not forced to train employees, but the good ones will.


I stopped reading when he said IBM Watson was somehow key in this "globotics" revolution. Even IBM seems to have stopped advertising Watson.


Given where Watson is now, the fact that systems mostly only get better, and all the other advances in AI, I think it's safe to say that it will get better. He could also be using Watson as a general example of AI that might be relevant to a wider audience.


Watson isn't a thing, it's just fancy words for their army of low-paid consultants. I worked for a competitor and everyone largely decided Watson wasn't anything to worry about.


Watson is, essentially, just the code name for their consulting services.


Can anyone working in this field comment on how it's applied? For example, what is computed? Homology groups, just the ranks of these groups, or something else? Given these, what does one learn about the data set under analysis? https://en.wikipedia.org/wiki/Topological_data_analysis#Appl... is pretty scant on detail.


There are a number of approaches; since it's still relatively new, there's a lot of "playing around" with techniques.

1) The first idea is to just see which topological features persist as the filtration/threshold parameter varies. I personally find the barcode illustration, rather than the birth-death diagram, to be more intuitive in this respect. For instance, say you had LIDAR data about a vehicle. Looking at the persistent homology of this point cloud could allow you to ascertain the size and number of windows (smaller windows would have shorter persistence than larger windows, and the number of persistent bars would correspond to the number of windows or "openings" on the vehicle). This might allow you to figure out if the vehicle is a van or an SUV.

2) For applications like those in neuroscience (I like, for instance, the talk you can find on Youtube titled "Kathryn Hess (6/27/17) Bedlewo: Topology meets neuroscience") there is instead a look at how these ranks behave over time. A rough sketch is this: look at the topology of the pattern of activation in neurons as a mouse (or an AI) learns something. As the learning process happens, what happens to the Betti numbers?

3) Sometimes one might want explicit generators. A thought-provoking small case of this can be found in "Mind the Gap: A Study in Global Development through Persistent Homology" https://arxiv.org/abs/1702.08593 This paper looks at statistics like GDP and infant mortality of countries around the world and finds explicit oddities in the data. It's a proof of concept, to me; I'm interested to see where it'll go.

There is a lot more, of course, but that gives you three interesting yet different directions.
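To make direction (1) concrete in the simplest case, here is a toy sketch of my own (not how any particular TDA library is implemented, though union-find is the standard tool in degree 0): as the distance threshold grows, points merge into components, and every merge ends one H_0 bar. The long bars then count well-separated clusters.

```python
from itertools import combinations
import math

def h0_barcode(points):
    """Toy H_0 persistence: every point is born at threshold 0; a component
    dies when it merges into another one (one bar per merge event)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # All pairwise edges sorted by length: the degree-0 part of a Rips filtration.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri          # merge: one bar dies at threshold d
            bars.append((0.0, d))
    bars.append((0.0, math.inf))     # one component persists forever
    return sorted(bars, key=lambda b: b[1])
```

On two tight clusters far apart, this yields two short bars (within-cluster merges), one long bar (the cross-cluster merge), and one infinite bar; the single long finite bar is the signal that there are two clusters.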


We recently applied TDA to get more information about the 'relevance' of nodes in graphs, which in turn we employed to improve graph classification. Let me know whether you want to discuss this some more!


I don't work in the field, but I did a little reading about the theory last year. (I can't say anything about how any of this is applied---I have been wondering for a while whether it does anything beyond giving really nice looking diagrams!)

The basic input is a filtered simplicial complex. There are many ways to build one, but at least in what I read there are two main constructions that come from point clouds: the Čech and the Rips complexes. Both are built, roughly, by taking n-tuples of points that are mutually within some distance ε of each other to form the simplices. The filtration is indexed by these epsilons.

(From what I remember, the Čech complex is better behaved in terms of convergence properties, but the Rips complex is easier to compute with. That said, the two interleave in the filtration: each Čech complex is sandwiched between Rips complexes at related scales, and vice versa. There are some papers on detecting holes in sensor networks that exploit this fact, using Rips complexes to infer the existence of a relative 2-cycle and thereby prove a sensor network is hole-free.)
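For concreteness, here is a naive Rips construction (a toy sketch of my own; real libraries such as GUDHI or Ripser are vastly more efficient): a k-simplex is included exactly when its k+1 vertices are pairwise within ε.

```python
from itertools import combinations
import math

def rips_complex(points, eps, max_dim=2):
    """Vietoris-Rips complex at scale eps: a k-simplex is any (k+1)-subset
    of points that are pairwise within eps of each other."""
    n = len(points)
    close = {
        (i, j): math.dist(points[i], points[j]) <= eps
        for i, j in combinations(range(n), 2)
    }
    simplices = [(i,) for i in range(n)]   # vertices
    for k in range(2, max_dim + 2):        # edges, triangles, ...
        simplices += [
            s for s in combinations(range(n), k)
            if all(close[(i, j)] for i, j in combinations(s, 2))
        ]
    return simplices
```

On the four corners of a unit square, ε = 1 gives the four sides but no diagonals (a topological circle), while ε = 1.5 > √2 fills in the diagonals and all four triangles, killing the loop: a two-step filtration in miniature.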

Since homology groups are functorial, the filtration gives induced maps on all the corresponding homology groups. Given a homological generator for a particular epsilon, one can then push it forward for increasing epsilons, seeing how long it persists, until, potentially, it is eventually in the kernel and dies. Conversely, any particular homological generator might be in the image from a previous epsilon, and there is some value of epsilon at which the generator is born, in the sense that it is not the image of a generator from a previous epsilon.

A barcode (maybe also a persistence diagram, but I'm not sure) is the collection of intervals representing all the homological generators for all values of epsilon, where each interval represents the persistence of one generator. Somehow practitioners make judgments about how long a generator persists to learn about topological features of a given data set.

It seems most of the time the homology computations are done with field coefficients. This, in particular, implies the homology groups themselves are vector spaces, and thus by knowing the rank you know the group. So, the barcode is a perfect representation of all the homology groups.
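Since the coefficients form a field, the ranks really are all you need, and they come from boundary-matrix ranks: b_k = dim C_k − rank ∂_k − rank ∂_{k+1}. A small GF(2) sketch of my own (not a production algorithm; real persistence software uses a smarter column reduction):

```python
from itertools import combinations

def rank_gf2(rows):
    """Rank over GF(2) of a 0/1 matrix whose rows are given as int bitmasks."""
    basis = {}  # leading-bit position -> reduced row
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead in basis:
                row ^= basis[lead]   # eliminate the leading bit
            else:
                basis[lead] = row
                break
    return len(basis)

def betti(simplices, k):
    """k-th Betti number over GF(2); simplices are sorted vertex tuples."""
    def boundary_rank(dim):  # rank of the boundary map C_dim -> C_(dim-1)
        if dim == 0:
            return 0
        faces = [s for s in simplices if len(s) == dim]
        index = {f: i for i, f in enumerate(faces)}
        rows = []
        for cell in (s for s in simplices if len(s) == dim + 1):
            row = 0
            for face in combinations(cell, dim):  # faces of the cell, mod 2
                row |= 1 << index[face]
            rows.append(row)
        return rank_gf2(rows)

    n_k = sum(1 for s in simplices if len(s) == k + 1)
    return n_k - boundary_rank(k) - boundary_rank(k + 1)
```

A hollow triangle (a circle) gives b_0 = b_1 = 1; filling it in with the 2-simplex kills the loop, b_1 = 0.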

In a way I do not comprehend, what is "really" going on is that, since the point cloud has only finitely many points, the filtration has only finitely many discrete steps, so there is an associated spectral sequence that relates all the homology groups to each other. I think this might give a finer-grained barcode, where births and deaths sometimes come from events in higher and lower dimensions.

In the off-chance you already have a complex, there is another kind of filtration to try, which is from the sublevel sets. Persistent homology techniques apply, and I have heard of symplectic geometers using it to prove things about symplectomorphisms, where the sublevel sets come from a symplectic flow.

As an example of sublevel set persistent homology---and a point about what "birth" and "death" really mean---imagine two circles moving toward each other then merging. Each step of this event corresponds to a level set for a pair of pants. At the beginning, with two circles, there are two H_0 generators, one per circle, and two H_1 generators, one per circle. At the exact point the two circles merge, then there is a single H_0 generator but still two H_1 generators. Just after the merge, there is a single H_0 generator and a single H_1 generator for the resulting circle. The two generators of H_0 and H_1 "merged" in a sense. For H_0, it is arbitrary which of the two generators is the one that persisted and which was the one that died at the merge event. For H_1, it is somewhat strange: it is the sum of the two obvious generators that persists for all time, and either of them can be taken as the other interval in the barcode, which then dies at the merge event.

This illustrates that the actual generators in a barcode diagram are somewhat arbitrary. It seems to be only as arbitrary as bases for a vector space might be (though sometimes finding the "right" basis is key to understanding a pattern!)


Very likely different forms of "lower social cohesion."


What, if anything, do the quotation marks mean here?


never relax


What is the right data format to move around? JSON?


The point is you're writing mostly business logic and glue. You get a server request, you transform it with some logic, call some other servers, combine the responses and run some more logic, and return a response.

The scalability and interesting work has been factored out and handed off to infrastructure teams that build stuff like this auth framework, load balancers, highly scalable databases, data center cluster management tools, etc.

Which really is the smart way to do it. To the extent that you can stand on the shoulders of giants who've basically made scalability the default, you are free to focus on what you're actually trying to build. The only downside is that if all the interesting engineering challenges are already solved for you, the remainder might not be that interesting to people who enjoy engineering challenges.


It's just a saying. All we do is move protos from one service to another.

JSON is definitely not the right stuff.


The encoding/decoding cost is painful :(

I mean in this context, if you're operating at that level of scale. For a lot of purposes JSON is totally fine.


It really isn't: compared to protos you give up the compact wire format, static type checks, and loads of other stuff.
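As a rough, self-contained illustration of the wire-cost gap being discussed (using Python's `struct` as a stand-in for a schema-based binary format like protobuf, which it is not; the record and its field names are made up):

```python
import json
import struct

# A hypothetical RPC payload: three integers and a float.
record = {"user_id": 123456, "clicks": 42, "impressions": 1000, "ctr": 0.042}

json_bytes = json.dumps(record).encode("utf-8")

# A schema-based binary encoding: field names live in the schema,
# not on the wire, and numbers are fixed-width instead of decimal text.
binary_bytes = struct.pack("<iiif", record["user_id"], record["clicks"],
                           record["impressions"], record["ctr"])

print(len(json_bytes), len(binary_bytes))  # JSON is several times larger
```

JSON also re-parses every field name and re-converts every number from text on each decode, which is where much of the CPU cost mentioned upthread comes from.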


"An atom can be described by its quantum state only if it's isolated and in that case its energy is constant."

How do you figure? As a counterexample, take your atom + electromagnetic field system, describe the transition from excited atom to unexcited atom + photon state, and project out the E&M field. Voila, now you have a quantum description of an atom transitioning between different energy states. Its dynamics may look funny, i.e. they may appear nonlocal, they may not conserve energy, etc., but that's different from saying "there is no quantum description of these dynamics," which is what you're claiming.


https://en.m.wikipedia.org/wiki/Quantum_state

> project out the E&M field. Voila, now you have a quantum description

There is no wavefunction describing the state of the subsystem because the system is not separable.

> Its dynamics may look funny, i.e. they may appear nonlocal, they may not conserve energy, etc. but that's different from saying "there is not a quantum description of these dynamics" which is what you're claiming.

What is the “quantum description of these dynamics”?

Quantum mechanics is usually based on something like the following postulate: “The state of a physical system is described by a well-behaved function of the coordinates and time, Ψ(q, t). The function contains all the information that can be known about the system.”


The system is separable after you measure out your ancilla. Imagine a stream of qubits B_1, B_2, ... prepared in +X, which interact weakly with a qubit A prepared in +X and are then strongly measured projectively. After the brief interaction, A and a given B will be entangled. Projectively measuring that B updates our knowledge of A, but because the entanglement is weak, the adjustment is small and incremental. The state of each such B qubit is then known, so there is no remaining entanglement between A and B, and thus A and B are separable. After many such Bs, an outside observer privy to all measurement outcomes will see that A is performing an absorbing random walk toward the poles of the Bloch sphere. Throwing away this information and taking an ensemble average, you'd see an exponential decay of coherence.

This is a toy model of a qubit being weakly measured by an incident flying field.
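This isn't the exact weak-measurement model above, but an even smaller dephasing toy in the same spirit makes the "throw away the record, get decay" point numerically: each interaction kicks the qubit's phase by ±θ at random, every individual trajectory stays pure, yet averaging the off-diagonal element over all 2^n records gives (1/2)cos(θ)^n, an exponential decay of coherence.

```python
import cmath
from itertools import product

def mean_coherence(theta, n):
    """Ensemble-averaged off-diagonal element <0|rho|1> of a qubit starting
    in |+> after n random phase kicks of +/-theta (each with probability 1/2).
    Enumerates all 2**n kick records exactly."""
    total = 0.0
    for signs in product((+1, -1), repeat=n):
        phase = cmath.exp(1j * theta * sum(signs))
        total += phase.real / 2     # initial coherence of |+> is 1/2
    return total / 2 ** n
```

Keeping the record, each trajectory is a pure state with full coherence; it's only the average over discarded outcomes that decays, which is exactly the pure-state-vs-mixture distinction in the sibling comment.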


> Throwing away this information and taking an ensemble average, you'd see an exponential decay of coherence.

That is described by a density matrix but is not a pure state. The true state may be a pure state (because as you said the systems are separable after the measurement) but our description is a mixture reflecting our imperfect information (and not a superposition of the pure states corresponding to the potential outcomes).


Not sure exactly why you're linking the Wikipedia article on quantum states, but if you like it as a reference, check out https://en.wikipedia.org/wiki/Quantum_state#Mixed_states. The operation of "projecting out" the E&M field in my previous comment would be realized on the density matrix as a partial trace over the electromagnetic field. You can also take the partial trace of the Hamiltonian operator over the field to see the effective dynamics of the atom when "ignoring" the E&M field. This is a fully quantum description of the state, so I stand by the statement that your claim "An atom can be described by its quantum state only if it's isolated and in that case its energy is constant." is incorrect.
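The partial-trace bookkeeping is easy to check numerically, for what it's worth. A sketch with a Bell state (assuming numpy is available): the joint state is pure (purity Tr ρ² = 1), but tracing out the second subsystem leaves the maximally mixed state with purity 1/2, which is precisely the sense in which the subsystem has no wavefunction of its own.

```python
import numpy as np

# Entangled two-qubit state (|00> + |11>) / sqrt(2).
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())          # pure state of the full system

# Partial trace over the second qubit: reshape to (2,2,2,2) and contract
# the second ket index with the second bra index.
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

purity_full = np.trace(rho @ rho).real   # 1.0: the pair is in a pure state
purity_A = np.trace(rho_A @ rho_A).real  # 0.5: the reduced state is mixed
```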


Do you agree that “The state of a physical system is described by a well-behaved function of the coordinates and time, Ψ(q, t). The function contains all the information that can be known about the system.”?

When the atom is coupled with the electromagnetic field and the state of the system is not separable there is no complete description of the atom given by a wavefunction defining its quantum state. You can have an incomplete description by tracing out the rest of the system, I agree.

Let’s say then that "An atom can be completely described by its quantum state only if it's isolated and in that case its energy is constant."

Edit: in any case, my point was (and I think that we will agree) that it is misleading to say “Consider a system that transitions from energy state 0 to an adjacent energy state 1. [...] To go smoothly from 0 to 1, the system transitions through a series of superpositions of both states”.

The atom goes from the state 0 to the state 1 but during the transition it’s not described by a superposition of those states (that would be a pure state). If anything, it is described by an (improper) mixture of those states, obtained by tracing out the rest of the system.


Yes, but note that I deliberately used the word "system" rather than "atom". The system is the combination of the atom plus whatever it's absorbing energy from or emitting energy to. And that (entangled) system is in a superposition.


Ok, I was confused because if you say “Consider a system that transitions from energy state 0 to an adjacent energy state 1” it sounds as if the energy of the “system” is changing and when you say that “a particle [...] can be in two different energy states at the same time” it seems that you are referring to the atom being in a superposition of states with different energy.


One can speak meaningfully of "an atom in a superposition of energy states" despite the fact that, strictly speaking, such a thing is not possible, just as one can speak meaningfully of "the force of gravity" despite the fact that, strictly speaking, there is no such force. The latter is understood as the force-like effect of curved spacetime, and the former is understood as "an atom being a component of a system in a superposition of states with different distributions of energy" (or something like that). Communication between humans becomes more productive when we cut each other a little terminological slack.


Maybe a different class of games will do best on a streaming platform.


How did you buy Inbox? Wasn't it free?

Seems like a false equivalence: a "free skin on top of Gmail" != a paid-for service like this one.


"Buying into" isn't only about money. A lot of people put time and effort into Inbox, Reader etc. and they were badly burned when Google decided they weren't making enough millions of dollars.


The biggest difference between Stadia and Inbox, Reader, etc is that people pay for Stadia. That alone seems like comparing apples to oranges.


And the other big difference is that Stadia is a lot more expensive to run.


Is regulation really "one and done"? MS got fined (repeatedly) around 2000, so they're done with regulation?

They at least have shady sales tactics.


Twenty years ago. Please try again. I hope you're constantly held accountable for things you did decades ago.


Going to prison doesn't exempt you from the law for the rest of your life.


No, but it cleans your slate of the crime you committed because you've paid your dues. We don't charge people - or businesses - twice with the same crime.

