I took a break from Julia a year or two ago because of some of these issues, one of the big ones being that I didn't want to write and maintain a set of non-allocating LAPACK wrappers for iterative solvers, but the memory churn was killing my performance. So, so glad FastLapackInterface and LinearSolve are a thing now, and the MKL situation is much easier with the trampoline development; it makes me want to start working on Julia solvers again.
It does feel difficult to write performant Julia if you don't put a lot of effort into staying "in the know", as a lot of this knowledge is very dispersed, but I guess that makes sense as the language is still changing quite rapidly.
I love Julia and choose to work in it almost exclusively, but I agree with the points in the article. I've run into a lot of issues just writing numerical linear algebra type algorithms.
Even core libraries, and not-quite-core ones maintained by core devs, like Distributed.jl and IterativeSolvers.jl, can feel pretty rough. For example, IterativeSolvers has had strange type issues and hasn't allowed multiple right-hand sides for linear solves for years, afaik due to some aspects of the type system and some indecision in the linalg interface. DistributedArrays is still very poorly documented and looks like it hasn't been touched in 3 years.
I've run into problems when I need more explicit memory management: for example, none of the BLAS/LAPACK routines have interfaces for the work arrays, so you either get the work array reallocated on every call or have to rewrite the ccall wrapper yourself. It can also be hard to tell where the memory allocation is happening.
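These days my workaround is to hoist every buffer out of the hot loop and lean on the in-place routines; just a rough sketch of the pattern, not my actual code:

    using LinearAlgebra

    n = 256
    A = rand(n, n); b = rand(n)
    x = similar(b)        # pre-allocated solution buffer
    F = lu!(copy(A))      # factor once, in place, on a scratch copy

    for _ in 1:100
        ldiv!(x, F, b)    # in-place solve, no new arrays
        mul!(b, A, x)     # in-place mat-vec, reuses b
    end

    @time ldiv!(x, F, b)  # quick allocation check; `julia --track-allocation=user` for the full picture

That keeps the hot loop allocation-free, but the LAPACK work arrays are still hidden inside the stock wrappers, which is the part you end up hand-rolling ccalls for.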
My most recent problem has been with Distributed and DistributedArrays, where everything is fine if you just want a basic parallel mapreduce, but it has been a huge pain past that. It's not even clear to me if Distributed/DistributedArrays has been more or less abandoned in favor of MPI.jl, which for me removes most of the benefit of writing in Julia, since you then have to run it through MPI. There is an MPI-style interface for DistributedArrays, but that part is not well documented and looks like more of an afterthought.
My use case isn't even that complex: I just want to persistently store some matrices across the nodes, run some linear algebra routines on them every iteration and send an update across the nodes, then collect at the end. If anyone has any idea how to do this correctly in Distributed or DistributedArrays, or can point me to some examples, that would be amazing, because it has been taking me forever to piece it together.
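What I've pieced together so far looks roughly like the sketch below, with worker-local globals holding the persistent matrices; all the names are made up and I honestly don't know if this is the intended pattern:

    using Distributed
    addprocs(4)

    @everywhere using LinearAlgebra

    # Each worker keeps its own persistent block for the whole run.
    @everywhere function init_block(n)
        global BLOCK = rand(n, n)
        return nothing
    end

    @everywhere local_step(u) = BLOCK * u   # per-node linear algebra on the stored block
    @everywhere get_block() = BLOCK         # for collecting at the end

    foreach(w -> remotecall_fetch(init_block, w, 1_000), workers())

    function run_iterations(u, niter)
        for _ in 1:niter
            partials = [remotecall_fetch(local_step, w, u) for w in workers()]
            u = sum(partials) ./ length(partials)   # combine, then send back out next round
        end
        return u
    end

    u = run_iterations(rand(1_000), 10)
    blocks = [remotecall_fetch(get_block, w) for w in workers()]   # collect at the end

It runs, but every remotecall_fetch reserializes the update vector, and I have no idea whether DistributedArrays is supposed to make this nicer or whether plain Distributed like this is the intended endpoint.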
Not going to stop using Julia but there are many basic things even just in a scientific computing workflow that still feel like they were rushed and they can really take the wind out of your sails.
Agreed, but it's funny that criticism of Julia broadly falls into two categories:
1) Julia doesn't have X. X is critical for modern programming languages, and without X, we should not even entertain the idea that Julia may be usable
2) Julia's feature X is too unstable. It's like they tried to implement too many things in Julia 1.0 and developer time got stretched thin. They should have just not implemented all this stuff!
I mean yes, we all would like a programming language that materializes with 1,000,000 developer hours already poured into it, great editor support out of the box, and which is somehow born with 10 years of usage. It's similar to wanting an employee who enters the work force with 10 years of industry experience. Nice, but it's not very realistic.
Assuming you mean something like fixed-point arithmetic? Well, if you know you want to support all of 1 BTC's satoshis but also want to support Danish kroner, you could be wasting a lot of space by just delegating everything to whichever currency currently requires the most precision (or whatever, depending on your use case).
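To put rough, purely illustrative numbers on it:

    # 21e6 BTC max supply at 8 decimals = 2.1e15 satoshis, which needs ~51 bits
    # DKK's native precision is 2 decimals, so the same balances fit in far fewer bits;
    # storing DKK at 8 decimals "just in case" pads every amount with 6 always-zero digits.
    btc_supply = Int64(21_000_000) * 10^8   # all of BTC, in satoshis
    dkk_native = Int64(150) * 10^2          # 150 DKK in øre
    dkk_padded = Int64(150) * 10^8          # same 150 DKK forced to 8 decimal places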
Unless I'm not understanding what you mean by fixed point?
You can make big money trading the volatility, if you get really lucky. The price of some puts I looked at went up 500%+ during the drop today. Would not try this personally lol
Short term puts yes, long term puts no. For short term puts you pay (or are rewarded) in volatility, for long term puts you pay (or are rewarded) in premium.
At any rate, my point isn't you can't make money off this, it's that there's no easy arbitrage opportunity happening on GME. It's basically trading like a volatile stock would trade, and it's being priced as such. Whatever free or easy money you think there is to be made on GME is likely just a coin toss.
Looking right now, some of the options a few months out have implied volatility of 1000%+; your best bet might be selling them off on big crashes instead of waiting it out? Some of the puts I looked at went up 500%+ after this dip
haha I am the developer of it so I can't really say anything lol. Although we don't have OTC, a lot of other stuff is free on the website so give it a go with a free account. Always happy to answer questions.
It looks good, I'll definitely give it a real try if I start trading options more; the order flow data alone looks like it would make it worthwhile. Do you incorporate L2 data? Couldn't find that anywhere, and it's the only thing that seems to be missing.
You absolutely can use regular jupyter notebooks for julia! Pluto has some advantages, like being stored as a normal julia file. The julia startup time issues affect both.
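Roughly, from memory (details vary by version and the UUIDs here are made up), a Pluto notebook on disk is just a .jl script with cell markers in comments:

    ### A Pluto.jl notebook ###
    # v0.19.x

    # ╔═╡ 1a2b3c4d-0000-0000-0000-000000000001
    x = 1:10

    # ╔═╡ 1a2b3c4d-0000-0000-0000-000000000002
    sum(x .^ 2)

    # ╔═╡ Cell order:
    # ╠═1a2b3c4d-0000-0000-0000-000000000001
    # ╠═1a2b3c4d-0000-0000-0000-000000000002

Afaik no outputs are stored in the file at all; it runs as a plain script, and Pluto just re-executes it to regenerate the outputs.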
Oh, man, this is indeed a major feature. My main point of friction with jupyter notebooks is the stupid json ipynb format. Why can't it be just a regular language file with comments?
> They contain code, rendered Markdown, images, plots, video players, widgets, etc.
The code could be verbatim python code (or whatever language the notebook uses), and the rest could be embedded inside comments. I don't see any problem with that (besides the very concept of "rendered Markdown" being totally out of order). The fact that they save it as json by default seems more like laziness by the developers than a well thought-out solution; it could be just a straightforward serializer.
>and the rest could be embedded inside comments. I don't see any problem with that
Do you mean embedding images and plots inside comments? If yes, please elaborate on how you see that happening in the real world.
>The fact that they are saving it as json by default seems more to be laziness by the developers than a well thought-out solution, that could be just a straightforward serializer.
So, how would that well thought-out solution in the form of a "straightforward serializer" work? I have a flat file, and I want to display images, plots that you can zoom in and out of, figures, etc. as comments. How would that happen?
>At the very least, you could put the whole json stuff inside a comment. It's already plain text, isn't it?
So instead of having the whole file as JSON, which is lazy and not well thought-out, we'll put all content in JSON, then put that JSON inside a comment in a plain text file. Do I read you correctly?
I feel we're making progress faster than these lazy Jupyter org bandits.
> we'll put all content in JSON, then put that JSON inside a comment in a plain text file
Only the "output" content. The code inside the cells is verbatim, and the markdwon cells are regular text comments.
See, I'm not arguing with you just for the sake of it. I have a legitimate problem with ipynb: very often I want to run the code of a notebook from the command line, or import it from another python program. This is quite cumbersome with ipynb, but it would be trivial if it were a simple program with comments.
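Concretely, something in the spirit of jupytext's "percent" format is what I'm picturing, with the output blobs (if you insist on keeping them) stuffed into a comment; the [output] marker here is made up, not any real standard:

    # %% [markdown]
    # ## Analysis
    # Prose lives in ordinary comments, so the file stays a plain script
    # you can run from the command line or import from another program.

    # %%
    x = [1, 2, 3]
    y = [v * v for v in x]

    # %% [output]
    # {"image/png": "iVBORw0KGgo...<base64, truncated>..."}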
I believe people reading this are not detecting the sarcasm. I'm demonstrating that the Jupyter folks are not lazy engineers, and the "obvious" solutions people come up with are not that well thought-out when you start actually thinking about them.
You are making a lot of ontological and epistemological assumptions that are contentious in the philosophy of math. Not saying you are wrong in thinking this (metaphysical questions don't necessarily have answers), but many would not agree with you.