Is there such a thing as half a derivative? (askamathematician.com)
200 points by ColinWright on May 27, 2015 | 61 comments



If you have a bit of signal processing background, fractional derivatives actually make perfect sense in the frequency domain.

A derivative is equivalent to highpass filtering with a slope of +20 dB/decade (about +6 dB/octave), and an integral to lowpass filtering at -20 dB/dec. So if you filter something with a +10 dB/dec slope, you get half a derivative.
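
(To spell out the dB arithmetic: a derivative has magnitude response |H(f)| = 2 pi f, and 20 log10(f) rises 20 dB per decade of f; a half-derivative has |H(f)| proportional to f^(1/2), and 20 log10(f^(1/2)) = 10 log10(f), i.e. +10 dB/dec.)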


I assume that this is what the author alluded to when he said "One way [to define non-integer derivatives] is to use Fourier Transforms."

To be more specific, if we write the Fourier transform of a function as FT(x(t)), then FT(d/dt x(t)) = (j2pif) FT(x(t)).

In fact, this equality holds for higher order derivatives:

FT(d^n/dt^n x(t)) = (j2pif)^n FT(x(t))

Where d^n/dt^n is the nth derivative.

We naturally extend this definition to "1/2 derivatives" the same way we often extend integer-valued functions to take rational arguments: we plug in a rational and see what happens:

FT(d^(1/2)/dt^(1/2) x(t)) = (j2pif)^(1/2) FT(x(t)) = sqrt(j2pif) FT(x(t))

Then we take the inverse Fourier transform to find the half-derivative, which is what we were originally looking for:

d^(1/2)/dt^(1/2) x(t) = FT^-1 (sqrt(j2pif) FT(x(t)))
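
If anyone wants to play with it, here's a minimal numpy sketch of that recipe (my own sketch; it assumes a periodic, well-sampled signal, and np.sqrt silently picks the principal branch of the square root, which is just one of several possible choices):

    import numpy as np

    def half_derivative(x, dt):
        # FT^-1( sqrt(j 2 pi f) * FT(x) ); np.sqrt takes the
        # principal branch, one choice among several
        X = np.fft.fft(x)
        f = np.fft.fftfreq(len(x), d=dt)
        return np.fft.ifft(np.sqrt(2j * np.pi * f) * X)

    # Sanity check: applying it twice recovers the ordinary derivative.
    t = np.linspace(0, 1, 512, endpoint=False)
    x = np.sin(2 * np.pi * t)
    dx = half_derivative(half_derivative(x, t[1] - t[0]), t[1] - t[0]).real
    # dx is approximately 2*pi*cos(2*pi*t)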


You can treat derivation as a linear operator on the vector space of functions; let's name it D. The second and third derivatives are then quite simple as operators: they are just D^2 and D^3. The question is whether you can well-define sqrt(D), that is, whether there is one and only one operator D_half such that D_half^2 is D (existence and uniqueness). It breaks at uniqueness, since there are multiple half-derivative constructions (the frequency-domain approach, the Taylor-series approach, ...).

On the other hand, sqrt is well defined for positive real numbers (though not for general complex numbers). There is a similar definition for positive operators on Hilbert spaces (you pick the positive operator from among all the possible square roots). However, derivation is not a positive operator on the most commonly used Hilbert spaces.
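
A finite-dimensional toy version of the non-uniqueness, if it helps (a quick numpy check, my own illustration): even the 2x2 identity matrix has many square roots.

    import numpy as np

    I = np.eye(2)
    # three different matrices R, each satisfying R @ R = I
    for R in (np.eye(2), -np.eye(2), np.diag([1.0, -1.0])):
        print(np.allclose(R @ R, I))  # True, three times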


At some point in my college Linear Algebra class I realized that the exponential function is an eigenvector of the derivative operator. Hilbert spaces are fun.
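
(Concretely: D e^(ax) = a e^(ax), so each exponential e^(ax) is an eigenvector of the derivative operator D with eigenvalue a.)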


And the Gaussian distribution is an eigenvector of the Fourier transform.

Not surprisingly, both of these facts correlate with how fundamental the exponential function is to solving linear differential equations, and how fundamental the Gaussian distribution is in probability theory.
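
(Concretely, with the FT convention used upthread, FT(x)(f) = integral of x(t) e^(-j 2 pi f t) dt: the Gaussian e^(-pi t^2) transforms to e^(-pi f^2), i.e. it is its own Fourier transform, an eigenvector with eigenvalue 1.)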


Your approach seems simpler than the blog post's. Can't you just note that the pth derivative of exp(ax) is a^p exp(ax) and then extend via a Fourier series? Maybe the non-uniqueness is too unsatisfying.


I assume you are thinking of the pth derivative of exp(jkx), which is (jk)^p exp(jkx), and extending this to the Fourier transform (or series). However, (j*k)^(1/2) is not so well-defined: you can pick arbitrarily between two complex values for each k. This approach leads to infinitely many half-derivative constructions.


Yes, and a single irrational power of a non-zero number has an infinite set of possible values to choose from.


I remember reading a paper that claimed that integrating on a Grassmann variable space was equivalent to integrating in a space with -2 dimensions of the usual commuting variables.


Nitpick: the act of taking derivatives is not "derivation", it is "differentiation"


Nitpick squared: http://en.wikipedia.org/wiki/Derivation_(differential_algebr...

When I first learned this in a graduate abstract algebra class, however, I was convinced that my professor, who was not a native English speaker, was simply mistranslating from whatever language he was used to talking about these things in. (He was, IIRC, a Romanian, educated in Germany.)


Intuitive essence of fractional derivatives: interpret the n-th derivative as (n! times) the coefficient of \epsilon^n in a "generalized" Taylor expansion of f(x+\epsilon). If the "function" behaves weirdly enough not to have a nice Taylor expansion with integer powers, you can still extract its behaviour under increments by \epsilon this way. HTH. (IANAMathematician :-)


Is the author's definition consistent with that?


From the functional form of the author's definition, it looks like yes. However, to connect the two definitions probably requires using Cauchy's integral formula (http://en.wikipedia.org/wiki/Cauchy%27s_integral_formula) and/or the complex method of residues, along with some line integral tricks (http://en.wikipedia.org/wiki/Methods_of_contour_integration).


BTW, I believe the "generalization" of the Taylor expansion mentioned here is the Laurent series (http://en.wikipedia.org/wiki/Laurent_series).


At first look it appears to be yes, but (and maybe the GP can explain more) I don't see how it really gives any insight into the fractional derivative. It's just subbing out the derivative operator for finite differences, and you'd still need to introduce the gamma function, so you aren't gaining much that way either. What am I missing here?


Wikipedia also has a good discussion of the basics as well as some generalizations:

http://en.m.wikipedia.org/wiki/Fractional_derivative

Turns out there are even some applications, albeit rather esoteric ones.


I remember rediscovering fractional derivatives in college after learning about the Laplace transform derivative formula. I even called them fractional derivatives, and I have a Maple notebook somewhere in which I worked out fractional derivatives of various smooth functions. An interesting fact is that:

sin^(a)(x) = sin(x+a*pi/2)

(perhaps with some normalizing factor in front).

thus d/dx sin(x) = cos(x), etc.

I found the symmetry of this to be really beautiful.

I was very proud of my accomplishment until I googled the term and realized someone had beat me to it by ~50-100 years :)
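
(As a consistency check on that identity: applying the half-derivative twice gives sin(x + pi/4 + pi/4) = sin(x + pi/2) = cos(x), the ordinary derivative, as it should.)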


Bright minds think alike!


A while back I realised you could do a similar thing by just working out the nth derivative analytically and chucking a real number in. For example, the nth derivative of x^m is

    (x^(m - n) m!)/(m - n)!
You can gradually vary n from 0 to 1 to see the smooth change (reading the factorials as gamma functions, m! = gamma(m + 1), so that non-integer n makes sense). It doesn't always behave how you'd expect it to.

Same for e^ix:

    i^n e^(i x)
Which means you can fractionally differentiate sin, cos, etc. (In fact, the power-rule derivative lets you differentiate any function via its Taylor series.)

What do you think the gradient of sin looks like if you take it gradually and animate it? You might expect it to slowly move to the left and become cos. It does, but it also does a barrel roll around the real axis at the same time. Kinda neat – I wish there was an easy way to put a demo up.

I tried to figure out some physical applications for this but I've fallen short so far – would be interested to know if there are any.
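
Here's roughly that demo as a few lines of Python, for anyone who wants to animate it (a sketch of my own, reading the factorials as gamma functions so non-integer n works):

    import numpy as np
    from scipy.special import gamma

    def frac_deriv_power(m, n, x):
        # n-th derivative of x**m, with factorials read as gamma
        # functions: (m! / (m - n)!) x^(m - n)
        return gamma(m + 1) / gamma(m - n + 1) * x ** (m - n)

    x = np.linspace(0.1, 2, 100)
    for n in (0.0, 0.25, 0.5, 0.75, 1.0):
        y = frac_deriv_power(2, n, x)  # morphs smoothly from x^2 to 2x
        # plot (x, y) for each n to watch the interpolation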


In a similar way, you can define fractional B-splines. The nth order B-spline is the (n+1)-fold convolution of a box function. Those n+1 convolutions turn into raising the Fourier transform of the box function to the (n+1)th power in the Fourier domain, which generalizes to non-integer powers. Then you just invert. Cool stuff :)
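
Concretely (same FT convention as upthread): the unit box transforms to sinc(f) = sin(pi f)/(pi f), so a degree-alpha fractional B-spline can be defined as FT^-1(sinc(f)^(alpha+1)), modulo a branch choice for the fractional power; if I remember the construction correctly, a common symmetric flavor uses |sinc(f)|^(alpha+1).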


So, if you plot a three dimensional graph of y = f^(N)(x) (axes: x, y, N), I wonder what it would look like, for, say, f(x) = x. Anyone handy with Matlab?


Not too interesting (a is the derivative axis and x is x):

https://i.imgur.com/mFEGQ3d.gif

Basically it just interpolates between two lines, one with slope 1 and one with slope 0.


The wikipedia article linked in another comment has an animated 2d graph that gets you part of the way there.


The Wikipedia article on fractional calculus mentions that fractional derivatives are not local in the same way that integer derivatives are. Is that right? That seems profoundly weird in the context of differential equations.


Because the local behavior is already completely captured by the integer-order derivatives, there is no information left over that could go into fractional-order derivatives. You could make fractional-order derivatives interpolate between neighboring integer-order derivatives, but I guess that would cause some problems; a non-local definition just makes more sense.
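
One concrete way to see the non-locality is the Grunwald-Letnikov form of the fractional derivative, which generalizes the limit-of-finite-differences definition:

D^a f(x) = lim_{h -> 0} h^(-a) sum_{k=0..inf} (-1)^k C(a, k) f(x - kh)

For integer a the binomial coefficients C(a, k) vanish once k > a, so only finitely many nearby samples matter; for fractional a they never vanish, and points arbitrarily far to the left of x keep contributing.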


A more in-depth and complete treatment: http://mathpages.com/home/kmath616/kmath616.htm


Next question: is there such a thing as a derivative of code?


The derivative of a type is its "zipper", a datatype for representing a "hole" that moves around the data structure. http://strictlypositive.org/diff.pdf


There is such a thing as a derivative of a context-free language, which turns out to be useful for writing parsers: http://matt.might.net/articles/parsing-with-derivatives/


This derivative (the Brzozowski derivative) is in fact applicable to any language, in the sense of a set of words over some alphabet. The language doesn't need to be context-free.
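
For the set-of-words view it really is a one-liner (a toy sketch; the function name is mine):

    def derivative(L, c):
        # Brzozowski derivative: the words of L that start with c,
        # with that leading c removed
        return {w[1:] for w in L if w.startswith(c)}

    print(derivative({"cat", "car", "dog"}, "c"))  # {'at', 'ar'}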


http://www.informatik.uni-marburg.de/~pgiarrusso/papers/pldi... defines one for programs as something that "maps changes in the program’s input directly to changes in the program’s output, without reexecuting the original program."


Algorithmic differentiation has been around since at least 1957. Here's a paper I co-authored, dealing with derivatives and generic programming: http://www.axiomatics.org/~gdr/ad/issac07.pdf.
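
For anyone wondering what algorithmic differentiation looks like in miniature, here's a forward-mode sketch with dual numbers (my own toy example, far simpler than the generic-programming treatment in the paper):

    class Dual:
        # carries a value and its derivative through arithmetic
        def __init__(self, val, eps):
            self.val, self.eps = val, eps
        def __add__(self, o):
            return Dual(self.val + o.val, self.eps + o.eps)
        def __mul__(self, o):  # product rule
            return Dual(self.val * o.val, self.val * o.eps + self.eps * o.val)

    x = Dual(3.0, 1.0)   # seed dx/dx = 1
    y = x * x + x        # f(x) = x^2 + x
    print(y.val, y.eps)  # 12.0 7.0, i.e. f(3) and f'(3)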


You could take the continuous differential equations that describe the elements of a CPU, consider what's going on as the program feeds through, and take the derivative of that.

For quantum computers all programs correspond to unitary matrices. You could take the logarithm of the matrix M, define f(x) = e^{ln(M) x}, then compute the derivative of f at 1. You might need a factor of i to make it work. (It works extremely nicely for single-qubit operations.)

(Alternatively, you could apply that process separately to all of the individual gates making up the circuit, vary them all at once, and get a different continuous transformation with a derivative.)
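
A quick scipy check of the single-qubit case (a sketch, assuming the principal matrix log is the branch you want, which it is for Pauli-X):

    import numpy as np
    from scipy.linalg import expm, logm

    X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli-X (NOT) gate
    f = lambda t: expm(t * logm(X))                # continuous family with f(1) = X
    sqrt_not = f(0.5)
    print(np.allclose(sqrt_not @ sqrt_not, X))     # True: the "square root of NOT"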


Also see http://en.wikipedia.org/wiki/Hausdorff_dimension for how you can have fractional dimensions.


Fractional function application (called "Fractional iteration") is also pretty neat.

http://en.wikipedia.org/wiki/Iterated_function#Fractional_it...
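
(Toy example: for f(x) = 2x the half-iterate is g(x) = sqrt(2) x, since g(g(x)) = 2x = f(x).)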


From my master's project covering this, I recommend http://www.amazon.com/The-Fractional-Calculus-Applications-D...

I ended up coding up something similar to, but less general than, Podlubny's http://www.mathworks.com/matlabcentral/fileexchange/36570-ma...


So instead of swapping (N-1)! out for gamma(N), can you swap in any other continuous extension to the factorial function, or does it have to be gamma?

I just read about Hadamard's & Luschny's gammas/factorial extensions [1]: would those not work out?

[1] http://www.luschny.de/math/factorial/hadamard/HadamardsGamma...


Probably not. gamma(N) is the single mostly-analytic function for which gamma(x+1) = x*gamma(x). This is a requirement.


There are lots of analytic functions with this property (and agreeing with the ordinary gamma function on integer arguments). For example, gamma(x) * cos(2πx) [or, in complete generality, gamma(x) * f(x) for any analytic function f of period 1, normalized to take value 1 at integers; thus, there are infinitely many examples].

Rather, the gamma function is the unique function satisfying this recurrence, taking the usual value at integers, with the asymptotics that gamma(n + x)/(gamma(n) * n^x) approaches 1 as x is held fixed and n grows large. [See http://www.quora.com/How-exactly-does-the-gamma-function-ext... for a detailed exposition]
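
A two-line numeric check of the first claim, for the skeptical (a sketch; any x away from the poles works):

    import math

    # f(x) = gamma(x) * cos(2 pi x) satisfies f(x + 1) = x * f(x),
    # because cos(2 pi x) has period 1
    f = lambda x: math.gamma(x) * math.cos(2 * math.pi * x)
    x = 2.37
    print(abs(f(x + 1) - x * f(x)) < 1e-9)  # True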


> single mostly-analytic function

I think by "mostly-analytic" here you mean meromorphic.

One really cool thing about the gamma function is that it is not the only meromorphic function which satisfies that recurrence relation on the integers! E.g.: gamma(x) + sin(x*pi) is meromorphic and agrees with the factorial.


gamma(x) + sin(x * pi) agrees with the factorial [shifted by one, because the gamma function has that stupid shift in its definition for no good reason] on integer arguments, but does not satisfy the recurrence relation on non-integer arguments.

That having been said, one can devise infinitely many analytic functions which do satisfy the recurrence relation in general. See my other comment.


Yes indeed, that's why I only said they agreed "on the integers". Your other example is deviously clever, nicely done.


I wonder why Fourier and Laplace transforms were dismissed very early on? It is very tempting to generalize the differentiation property to non-integer n and do an inverse transform to get any non-integer derivative. For instance, I would imagine that f^(1/2) = L^{-1}[ sqrt(s) L[f] ]. Is there a particular difficulty in doing the integral?


Let's turn it up a notch... can you have i-th order derivative (or integral)?


It looks like you can. Most fractional derivative solutions on the Wikipedia page use the gamma function, which is defined for complex numbers.


What properties would we want this to have? For a 1/2 derivative, we want that taking it twice gives the regular derivative. Not sure what the analogue would be here.


The author seems pretty certain there are no applications and that his/her definition is just arbitrary, but I wonder...?


In quantum field theory you frequently do integrals in 4-ϵ dimensions as a way of regularizing divergent integrals.


Fractional derivatives have found some uses in signal processing and control systems. You can search that exact term in Google Scholar.


Fractional derivatives naturally occur in the study of fractional differential equations, a field with applications in generalizing some physical laws.


From the domain name, I expected this to be in English. It's not.


From the top of the page:

"There is! For readers not already familiar with first year calculus, this post will be a lot of non-sense."

FWIW, it is a pretty straightforward explanation of 'half-integrals'... which by induction means there are 'half-derivatives'.


It isn't nonsense, but I still couldn't follow all of the steps even with an okay level of calculus knowledge (multivariable but not PDEs).


So what's the first bit you don't understand?


Around "When you integrate a function the result is more continuous and more smooth."


So suppose you have a function. Don't draw a nice one, draw one that's jaggy, or even discontinuous.

Now start from somewhere and move rightwards, drawing the function of how much area is under the original. You'll find that (except for truly pathological cases) the function you draw changes continuously, never jumping radically, because as you go a little further right you can only get a little more area.

So the integral is, in a very real sense, "smoother" than the original.

You can turn this around and think of it the other way. If you have the function f(x)=abs(x) and you differentiate it, the derivative has a jump discontinuity at x=0. So differentiating can make things less smooth, and hence integrating makes things more smooth. As it says immediately after the sentence you quote:

    > When you integrate a function the result is more
    > continuous and more smooth.  In order to get
    > something out that’s discontinuous at a given
    > point, the function you put in needs to be
    > infinitely nasty at that point (technically,
    > it has to be so nasty it’s not even a function).
Does that help?


Just to add to that (for GP, obviously): if "drawing" seems too abstract or calculating the area on the fly seems too hard, you can get similar intuition by generating a wiggly/crinkly sequence (technical term) of numbers on your computer, then repeatedly taking cumulative sums. This can even be done in Excel (mash the number keys along a column for the initial function), and then you can graph the resulting sequences as functions. The cumulative sum is analogous to the integral, and the sequence will visibly get smoother.

For the derivatives, take the difference repeatedly.
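
If spreadsheets aren't your thing, the same experiment in a few lines of numpy (a sketch; plot the three arrays to see the effect):

    import numpy as np

    rng = np.random.default_rng(0)
    jaggy = rng.standard_normal(200)  # a rough, wiggly "function"
    once = np.cumsum(jaggy)           # one cumulative sum ~ one integration
    twice = np.cumsum(once)           # visibly smoother still
    # np.diff(once) recovers jaggy (minus the first sample): the "derivative"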


Thank you. And to Colin, too.


There's nothing wrong with not understanding calculus. No one is born with that knowledge. There is something wrong with dismissing things you don't understand as nonsense.


Be fair, please. I did not dismiss it as nonsense--why would anybody do that?

I merely expressed the fact that I expected one thing, and got another.

https://en.wikipedia.org/wiki/Language_of_mathematics

I don't speak German or Greek, either, but obviously documents written in those languages are generally not nonsense.



