We tend to find Qwen3-Coder-Next better at coding, at least on anecdotal examples from our codebases. It's somewhat better at tool calling too; maybe the current templates for Qwen3.5 don't yet enjoy the same "mature" support as Qwen3 on vLLM. I can say that in my team MiniMax2.5 is currently the favorite.
Use a larger model like Qwen3.5-122B-A10B quantized to 4/5/6 bits depending on how much context you need; use the MLX versions if you want the best tok/s on Mac hardware.
If you are able to run something like mlx-community/MiniMax-M2.5-3bit (~100GB), my guess is that the results are much better than 35B-A3B.
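As a rough sanity check on those sizes, a quantized model's footprint is approximately params × bits / 8, times a small overhead factor for embeddings, quantization scales, and the like (the 1.1 factor below is an illustrative assumption, not a measured value):

```go
package main

import "fmt"

// approxGB estimates the on-disk/in-memory size of a quantized model:
// billions of params * bits per weight / 8 bits per byte, scaled by an
// overhead factor for scales, embeddings, etc. (assumption: ~10%).
func approxGB(paramsB, bits, overhead float64) float64 {
	return paramsB * bits / 8 * overhead
}

func main() {
	// A 122B-parameter model at common quantization levels.
	for _, bits := range []float64{3, 4, 5, 6} {
		fmt.Printf("122B @ %g-bit ≈ %.0f GB\n", bits, approxGB(122, bits, 1.1))
	}
}
```

Lower bit widths shrink the weights, which is what frees up RAM for a longer context.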
The CPU is certified by AMD to run at up to 105 °C, but it thermal-throttles automatically at 95 °C, so out of the box it's probably not enough to boil water, but just barely :P
The fun fact is that if you manually reduce the power limit to 65 W, the initial single-thread results show virtually no loss in ST performance vs 170 W, and it appears that the original AMD slides claiming ~75% more efficient cores at that level are not too far off.
The previous generation of Intel and AMD CPUs could not consume more than 20 to 30 W with a single active core (non-overclocked).
So with the power limit set to 65 W or more the single-thread performance was always limited by the maximum turbo frequency (which may depend on the temperature of the CPU) and never by the power limit.
I have not yet seen any published value for the single-core power consumption of Zen 4, but it is likely no higher. It is certainly much less than 65 W, even at 5.85 GHz.
So the expected behavior is that the single-thread performance does not depend on whether you set the steady-state power limit in the BIOS to 170 W, 105 W or 65 W. Only the multi-threaded performance is affected by the power limit, because when the power limit is reached, the clock frequency is decreased until the power consumption matches the limit.
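That behavior can be sketched with a toy model: assume dynamic power scales roughly as f³ (P ∝ f·V², with V ∝ f). The 25 W single-core figure and the cube law here are illustrative assumptions, not AMD specifications:

```go
package main

import (
	"fmt"
	"math"
)

const (
	maxFreq         = 5.85 // GHz, assumed max boost clock
	singleCoreWatts = 25.0 // assumed single-core power draw at max boost
)

// clockUnderCap returns the highest frequency (capped at maxFreq) at which
// `cores` active cores fit within the package power limit, assuming each
// core's power scales as (f/maxFreq)^3 relative to singleCoreWatts.
func clockUnderCap(cores int, limitWatts float64) float64 {
	f := maxFreq * math.Cbrt(limitWatts/(float64(cores)*singleCoreWatts))
	return math.Min(f, maxFreq)
}

func main() {
	for _, limit := range []float64{65, 105, 170} {
		fmt.Printf("limit %3.0f W: 1 core -> %.2f GHz, 16 cores -> %.2f GHz\n",
			limit, clockUnderCap(1, limit), clockUnderCap(16, limit))
	}
}
```

With these numbers, a single core hits the frequency ceiling long before any of the three power limits, so only the 16-core clock moves when the limit changes — matching the observation above.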
That's the consumer variants; the Threadrippers will almost certainly not be at a lower rated TDP than the current gen's 280 W. If they increased it by the same percentage as they did for the consumer parts, it'd be 450 W, but that's unlikely; 350 W might be in the cards, though.
To me this happens when the CPU starts throttling due to high temperatures; if you don't have something like MenuMeters installed, it probably won't show up in any other native Mac GUI.
For a long time, these are the workarounds I've used to deal with this issue:
1) Charge from the ports on the right! The charging circuit on the left aggravates the issue.
2) Disable turbo boost by setting the Mac to Low Power Mode in System Preferences -> Battery
3) Raise the computer off the table for increased airflow, so it's not in contact with latent warmth from the table, which has been heated by the Mac itself.
4) Lower AC temperature
5) Change the fan speed from Automatic (default) to full blast, or make it sensor-based but lower the min/max temperatures at which the fans start spinning faster (e.g., 38-60 °C)
I was having throttling issues with a new 4k monitor. Teams seemed to make the issue worse. Pointing an external USB cabinet fan at the laptop has resolved my issues.
Ever since my team started using Splunk (circa 2012), we've clamored for a more open version we could tinker with that wouldn't cost an arm and a leg to ingest multiple terabytes of daily data.
Positioning it as an open-source Splunk would be an interesting play.
Going through your docs, the union() function looks like it returns a set, akin to Splunk's values(); is there an equivalent to list()?
Elastic is great in its lane, but it requires more resources and carries a monolithic weight that left a sour taste after our internal testing. A minimal Elasticsearch-compatible API would open up your target audience; are there any plans to do it in the short term (< 1 year)?
That's a cool idea. We've had many collaborators using Zed lakes for search at smallish scale and we are still building the breadth of features needed for a serious search platform, but I think we have a nice architecture that holds the promise to blend the best of both worlds of warehouses and search.
As for the list() and values() functions, Zed has native arrays and sets, so there's no need for a "multi-value" concept as in Splunk. If you want to turn a set into an array, a cast will do the trick.
This, but with extra noise around the signature, and with at least 4 unique copies — the maximum number of times I've had to sign my full name on a document (in my personal experience).
Whoever is going to read it and check whether it's digital will probably look closer at the signed pages. Also make sure the signature isn't too perfect and the ink isn't too regular :)
How much funding can they realistically provide for those priorities?
State grants from Horizon Europe projects or similar could be an alternative source of funding, probably with better odds of recurring, but it would require a champion to lead it through the bureaucracy, and in the end it's a coin toss (or worse) whether it gets approved.
(a) I think the article isn't asking them to provide more funding, but to consider re-allocating what they already do.
(b) I think even without actual funding, PinePhone can set the tone of priorities for the community. If they speak about this issue, people will at least listen.
And, to that end, there's a lot of routes they could go to get this sort of thing done.
A public bug tracker + bounty, for example, would go a LONG way in messaging what needs to be done. Even a small amount of money ($100, $200) towards whatever issues are important would be a good signal to the community that "hey, we really need someone to do this".
What if I make a $1b fund and put the Krita guy and the Blender team in charge of forming the company apparatus to produce a phone that's not a god damn piece of shit? I would want to stipulate right out the gate that there should be at least 100,000 $1,000 bounties paid in the first three years.
I would turn that question around. Given the stories here of bad user experience, it sounds like there is a very limited niche for the hardware. If they made something more reliable, they might be able to provide a lot more funding.
It doesn't matter what they do, someone will be unhappy. They fill a niche that I'm glad someone does. They could go for a different niche, but then they leave this one unfilled.
It's very good for CLI apps and quick one-off programs. Web dev is decent — Lucky is an awesome framework — but the compile times make the code-compile-reload cycle subpar, especially if you compare it to scripting languages. Performance is normally excellent, but the GC is suboptimal for some workloads: it's not yet written in Crystal, and it's too conservative about reclaiming memory. The nicest side effect of Crystal is that its programs are far more likely to work on the first compile than Ruby scripts, and the type system makes refactoring a large code base much safer.
Multithreading works with the -Dpreview_mt flag; it's decent if the units of work are large enough (at least ~0.01 ms), otherwise the channel overhead will dominate.
I find the API nice and easy, especially for those familiar with CSP or Go.
Performance should improve once it gets the love necessary to ship enabled by default.
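Since the API is similar to Go's channels, the granularity point can be sketched in Go: each channel send pays a fixed synchronization cost, so tiny work units are dominated by it while batches amortize it (the batch sizes below are illustrative, not tuned):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// sumVia distributes the numbers 1..n to workers over a channel in
// batches of `chunk` items. A chunk of 1 maximizes per-message channel
// overhead; larger chunks amortize it over more real work.
func sumVia(n, chunk int) (total int64, elapsed time.Duration) {
	start := time.Now()
	jobs := make(chan []int, runtime.NumCPU())
	var wg sync.WaitGroup
	var mu sync.Mutex

	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			var local int64
			for batch := range jobs {
				for _, v := range batch {
					local += int64(v)
				}
			}
			mu.Lock()
			total += local
			mu.Unlock()
		}()
	}

	for i := 1; i <= n; i += chunk {
		end := i + chunk
		if end > n+1 {
			end = n + 1
		}
		batch := make([]int, 0, end-i)
		for v := i; v < end; v++ {
			batch = append(batch, v)
		}
		jobs <- batch
	}
	close(jobs)
	wg.Wait()
	return total, time.Since(start)
}

func main() {
	const n = 1_000_000
	for _, chunk := range []int{1, 1000} {
		total, took := sumVia(n, chunk)
		fmt.Printf("chunk=%4d total=%d took=%v\n", chunk, total, took)
	}
}
```

Both runs compute the same sum, but the chunk=1 run spends most of its time in channel sends rather than arithmetic — the same trade-off the -Dpreview_mt comment describes.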
Windows requires a lot of boring work, especially considering most of the core devs use Linux / MacOS.
Once they make it a priority, it should be doable within a few months of work.
Half of all devs work on Windows platforms, so IMHO, if the team wants the kind of traction needed for Crystal to reach a self-sustaining level, this should be a high-priority task.
Note that all of Crystal's competitors give first-class support to Windows: Go, Zig, Nim, Ruby, etc.
I say this as a suggestion, rather than a critique. Actually, as a hopeful suggestion. :-)
Further: I'm in the other half of developers, but if I'm going to use Crystal to develop desktop applications (which I'd like to do - particularly games, because that'd be a nice change of pace from business software), it'd be nice to be able to support the platform that - for better or worse - the vast majority of desktop users use.
It's worth noting that Ruby's support for Windows could not be described as first class for a long time. And that was in a world without WSL.
There's sense in just releasing for *nix platforms if you have them ready to go, especially when you expect your initial target market to be deploying on *nix systems.