
Absolutely screwed! Every single product they have is doing well: the OS (dominant desktop and laptop OS by a wide margin), Azure (gaining in second place, now 25% vs AWS's 31%), M365 (also dominant, particularly in terms of revenue). None of these show any sign of going anywhere, and if anything, the numbers for cloud and M365 are trending up.

I could only wish my own business were this screwed.


Their success is a big part of why the experience is so bad: they have to appeal to the lowest common denominator.

At the same time, they also win on the little things that diehard opponents choose to ignore, like search that kind of works. I don't like Office 365 but I'm a paying customer because, after long research, I haven't found a competitor that meets all my requirements.


I think it means one should read the very next sentence:

> Pieper emphasized that current over-the-counter NAD+-precursors have been shown in animal models to raise cellular NAD+ to dangerously high levels that promote cancer. The pharmacological approach in this study, however, uses a pharmacologic agent (P7C3-A20) that enables cells to maintain their proper balance of NAD+ under conditions of otherwise overwhelming stress, without elevating NAD+ to supraphysiologic levels.


Anyone have any idea why the cables are arranged like this? https://8400e186.delivery.rocketcdn.me/articles/wp-content/u...

What's the zig-zag pattern for? It seems like a fair bit of extra conductor.


This view is just very extreme; it is much less zig-zag than it looks. The cable is simply mounted to the wall at the high points, with slack in between. There is certainly also a reason for the exact amount of slack, like thermal expansion.

https://cdn.ca.emap.com/wp-content/uploads/sites/9/2018/04/l...


A thousand words but 200 of them are bullshit, and they didn't even have to use Photoshop.

That’s just slack. You’re seeing a very long distance with extreme foreshortening in this image.

Do you mean the catenary [0]?

[0] https://en.wikipedia.org/wiki/Catenary


> Anyone have any idea why the cables are arranged like this?

I think that's just cables sagging, which is a requirement to accommodate thermal and seismic displacements.


Great photo from an artistic POV, but completely useless for getting a sense of the tunnel's main subject: the cables carrying electricity.

First guess (may be wrong) is to manage thermal expansion/contraction constantly on a micro-scale.

I would also add that there's some slack for repairs.

Maybe to reduce electromagnetic coupling? Seems they're offset a bit.

My understanding is that it's mostly mechanical and thermal, not electrical cleverness.

It's the lens distorting the view.

Is that a tandem bicycle? Cute.

Tandem bike or a new SCP variant.

Yes. It's the first time I've seen a tandem in such a tunnel. Until now I had only seen and used normal bicycles in this kind of tunnel.

There are three reasons:

* Cable thermal expansion under current load (see the sketch below this list for rough numbers): https://www.ahelek.com/news/cable-thermal-expansion-and-its-...

* The amount pictured is in excess of what thermal expansion requires. The excess is to have some spare length in case of modifications, for example if you have to replace the transformer and the terminals are not in the same location. You cannot easily extend a massive cable like that without degrading its specs.

* The sine wave pattern makes it into AC of course (/s)
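
To put rough numbers on the thermal expansion point: the linear-expansion formula is delta_L = alpha * L * delta_T. The figures below are assumptions for illustration, not measurements from the tunnel pictured.

    # Back-of-envelope slack estimate; every figure here is an
    # assumption for illustration, not a measurement from the tunnel.
    ALPHA_COPPER = 17e-6  # linear expansion coefficient of copper, 1/degC
    span_m = 100.0        # assumed run between wall mounts
    delta_t = 40.0        # assumed temperature rise under full load
    growth_m = ALPHA_COPPER * span_m * delta_t
    print(f"Extra length per {span_m:.0f} m span: {growth_m * 100:.1f} cm")
    # prints: Extra length per 100 m span: 6.8 cm

A few centimetres per span is tiny next to the slack pictured, which supports the second bullet: most of it is spare length, not thermal allowance.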


It looks like AI slop to be honest. My second best guess is that it could be arrayed transformers.

I don't think a utility company in their right mind would allow workers to bicycle inside a tunnel powering the grid.


Question. Have you worked for a utility company?

The first stages always do, that's why corporations keep pulling the enshittification lever.


Billionaires benefit from it, they are the ones buying off our government. All the other "wedge issues" are designed to deflect attention away from our oligarchs and both parties are on the take. Look at how the DNC reacted to Mamdani for a recent example.

To be clear, because I know it will come up - not saying both parties are the same, but I am saying they are both doing the oligarchs' bidding. The US government is now fully captured by billionaires and is currently being used for personal enrichment of those in power.


I’m certainly no authority but i tend to write the same way for casual communication, came from the 90s era BBS days. It was (and still is) common on irc nets too. Autocorrect fixes up some of it, but sometimes i just have ideas i’m trying to dump out of my head and the shift key isn’t helping that go faster. Emails at work get more attention, but bullshittin with friends on the PC? No need.

I’ll code switch depending on the venue, on HN i mostly Serious Post so my post history might demonstrate more care for the language than somewhere i consider more casual.


Single word answer: trust.

Y’all seem like nice people but trust isn’t automatic these days.


Trust with regards to...? Orion doesn't have any telemetry, doesn't force any updates on you, doesn't require any account. You can audit the application's behavior with standard tools to verify that it isn't "phoning home", etc.; it doesn't need to be open source to do that, nor would making it open source obviate auditing the final executable anyways.

What do you perceive as the risk to "trusting" Orion in this case?

edit: Sandboxing the app also further reduces the surface area for "trust", though I'm unfamiliar with macOS as a platform when it comes to that.


Personally, I have some software engineering skills. For me it’s about trust in your development team and product direction.

To be at least somewhat certain of the future, I want to own critical pieces of software, not rent them from someone, no matter how benevolent-looking.

While things are well, I want to be able to contribute. There are myriad minor things that your development team would never get time to look into. If something is a wart, I might have the skills to fix it myself and - hopefully - ask you to incorporate my patches. I did that for a few pieces of software I trust and use, and I consider the ability to do this fairly important, even though I do it very rarely.

And if things go sour, it could be impossible to keep up with long-term maintenance of this complex machinery, but I still want that option open too. I want to know that if you folks decide to do something unpleasant to the browser, I’ll be able to begrudgingly take over and still fully own the software, at least while I’m investigating replacement options. Not be at someone else’s mercy.

To be persuaded otherwise, I need to be aware of your reasons for not providing users software freedoms and agree that they’re serving our mutual interests.

(Needless to say, Orion is a very different product from Kagi Search, which is why I apply a different set of requirements. I can switch search engines much more easily than user agent software.)


It may not phone home now, but it could tomorrow, or phoning home could be enabled and then immediately disabled within some minor release. Even if people didn't catch those shenanigans immediately, it would be evident from the commit history. I'd say open source forces a certain discipline.

There is also the risk of a rug pull, or of the product getting cancelled. With the source available, a few people could step up to maintain it, at least until most users migrate to a different product.


As a paying kagi customer that uses orion, I’ll just point out that there’s a reason “enshittification” was the word of the year recently.

Much of it had to do with testimony during the Google antitrust trial. It’s hard to understand how Kagi wouldn’t be ultra-sensitive to guaranteeing there will be escape hatches if it enshittifies. (Your funding model is a great first step!)


> it doesn't need to be open source to do that, nor would making it open source obviate auditing the final executable anyways

It doesn't need to be open source to do that, but it really helps. Ideally you'd publish source and have reproducible builds, so that users could look at the code to see that it's not doing anything objectionable and a handful of people could make sure that that code matched the official binaries.
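
As a sketch of how that last check would work, assuming reproducible builds existed (the file paths here are hypothetical placeholders, not anything Kagi publishes):

    # Compare a from-source build against the vendor's download.
    # With reproducible builds the digests should match bit-for-bit
    # (modulo code signing); both paths below are hypothetical.
    import hashlib
    def sha256sum(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()
    mine = sha256sum("build/Orion")       # binary I compiled myself
    theirs = sha256sum("official/Orion")  # binary the vendor shipped
    print("verified" if mine == theirs else "MISMATCH: audit further")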

> You can audit the application's behavior with standard tools to verify that it isn't "phoning home", etc.

Can you? Practically? Lots of programs are easy: you put them in a sandbox with zero network access, or very carefully restricted access, and that eliminates 90% of likely problems. But this is a web browser; its purpose is to connect over the network, all day every day, to arbitrary, dynamic domains in large numbers, such that I would seriously question whether it is in fact practical to audit in a black-box approach.
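
For concreteness, here is roughly what the black-box approach looks like, a minimal sketch using the psutil library (the process name "Orion" is an assumption, and reading another process's sockets generally needs elevated privileges). Run it against a browser and the problem is immediate: a firehose of perfectly legitimate endpoints, with no way to tell telemetry apart from ordinary browsing.

    # Poll a named process's remote connections (pip install psutil).
    # Fine for a simple app; nearly useless for a browser, which
    # legitimately talks to arbitrary hosts all day long.
    import time
    import psutil
    def watch(proc_name: str, interval: float = 1.0) -> None:
        while True:
            for proc in psutil.process_iter(["name"]):
                if proc.info["name"] != proc_name:
                    continue
                try:
                    # psutil >= 6; use .connections() on older versions
                    conns = proc.net_connections(kind="inet")
                except (psutil.AccessDenied, psutil.NoSuchProcess):
                    continue  # may need privileges; process may be gone
                for conn in conns:
                    if conn.raddr:  # established remote endpoint
                        print(proc_name, "->", conn.raddr.ip, conn.raddr.port)
            time.sleep(interval)
    watch("Orion")  # hypothetical process name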


>Orion doesn't have any telemetry, doesn't force any updates on you, doesn't require any account.

Source: "Trust me".

As another person mentioned, telemetry could be sent out Sundays @ 2:00am, so my use of standard tools to verify that it isn't phoning home on a Tuesday afternoon is useless. This is just one isolated example.

>it doesn't need to be open source to do that, nor would making it open source obviate auditing the final executable anyways.

Trust is not a single bit that is flipped from "Fully trust" to "Fully distrust". Things become more trustworthy when the source can be reviewed, and less trustworthy when an employee says "We don't do this, trust us, but we're keeping the box closed because ~reasons~".

In my eyes, Kagi has a lot of trust-building to do, despite being the darling child that can do no wrong in many HNers' eyes (for whatever reason).


Browser handling is way more personal than any other piece of software. It need not be open-source licensed, but being able to compile and install the exact binary (minus signing) from source is a huge plus in today's world. Otherwise it isn't doing much differently from the Chrome, Brave, Firefox, etc. of today. Open source would be the cherry on top.

Trust in Kagi Search is already there, w.r.t. both the tool and the company, but it is not transferable to trust in the Orion browser.


It's relatively hard to audit a binary. You can audit the behavior of single runs, but you can't nearly as easily audit the behavior of the program itself. What if it pings only on Tuesdays, what if it does some sort of DNS reach-out that's a false positive for something else you didn't realize the browser was doing, what if there are platform-specific differences in behavior?

The same goes for auditing the final executable. Open source gives two options there: build it, or trust it. The latter may seem like zero gain but, again, there is actually a big difference between trying to audit a black box for every possible behavior and knowing what the baseline behavior is supposed to be, then looking for any differences in the premade binaries. There is a 3rd option, reproducible builds... but I doubt that's a reasonable goal in this case.

I'm not saying Kagi/Orion should necessarily care about providing that level of auditability, just that the response that a pre-made binary is as trustable as a binary with its source code falls quite flat.


I think Kagi / Orion should go down the independent auditor route like TrailOfBits, Cure53 and others.

That way the software would be audited and it doesn't have to be open source.


Also trust that it won't be abandoned like Opera was.


Would you pay for Orion not to be abandoned?

There is Orion+, which can be paid for and keeps development going.


I paid for Opera too :)

But, sure, I could imagine paying for a browser again - although I don't immediately see what features would be worth paying for.

For Opera it was the custom, fast rendering engine and the email client.

Now with basically two equivalent rendering engines, I don't imagine performance in that area will be enough to pay for.

Maybe a Smalltalk-like developer mode with better debugger and REPL support?


If it gets abandoned—so what? Switching browsers is trivial.


It really isn’t, and especially not when one of the browser’s unique selling points is its multi-browser extension compatibility that no other browser offers.

Also some of us simply don’t want to learn new UIs and/or risk dealing with an “AI” infused alternative if we have a tool that already Just Works. Switching away from Just Works sucks.


It is completely trivial to switch browser. Anybody who doubts it can try it in this very moment.


The worst part about Opera dying was the email client, imnho - and it wasn't trivial to find a replacement.

I'm not sure what I'd seek in a browser I'd pay for - but it would be features not present or great in foss browsers.

Maybe email, podcast, and RSS clients, modal vi-like browsing (like Vimperator, but first class), a good reader mode/style override, a proper editor for text input (like "It's All Text"), automatically forcing text selection to work, "save as..." for images...

But whatever would be useful enough to pay for, would likely be a pain to lose.


By pushing back on someone over trust, you’ve eliminated the interest I briefly held in evaluating Orion. It would’ve been far better to acknowledge the concern than nitpick it.


What? Since when was asking questions to clarify someone's position considered "pushing back?"

Can you help me understand what about the questions make you uncomfortable?

I am completely unaffiliated with Kagi. I find it concerning that we've come to a world where we can't ask questions without it being taken as something hostile to the person/people/idea being questioned. Is that not what science is?


If you don’t think “you can just audit the binary with tools” is pushing back, then I don’t know what is, and especially so when you’ve framed the invitation with “I'd rather listen”.

I’m reminded of the number of times I’ve had vendors sit across the table from me and argue that our fixed requirements for <whatever> are just a preference or a nice-to-have. This generally doesn’t bode well for their prospects.


Fair enough. I personally did not read push back in the questions/statements asked/made.

> Trust with regards to...?

I took this to be a good-faith ask for clarification.

> Orion doesn't have any telemetry... You can audit the application's behavior with standard tools to verify that it isn't "phoning home", etc...

I took this as a statement of what I could do, not specifically what I should do instead of getting it open sourced.

Maybe I read it with more good faith intention and curiosity than I should have. I see your point on how that could be perceived as push back, but I landed somewhere different from where you might have.


> "you can just audit the binary with tools"

That statement also said you have to audit the binary even if the code is open source. Which isn't entirely true, as other comments pointed out (reproducible builds), but the idea doesn't seem like pushing back to me. It was to point out that open source doesn't automatically imply any level of trust when it comes to security/privacy.


I'm assuming the people who are asking for Orion to be open source are not paying for it.

I think a blog post on Orion's transparency is enough. The fact that there is Orion+ means there is no need for tracking or 'enshittification'.

If you like Kagi and Orion, supporting development by paying for it makes sense.

Open sourcing everything in Orion means that Orion+ will be open source, which defeats the point of supporting development of Orion directly.

I've seen projects start open source, change to closed source, and then add in the enshittification later. It doesn't matter if the code is 'open'; the source would eventually go unmaintained and accumulate security holes that no one else in the world has time to fix.


> I'm assuming the people who are asking for Orion to be open source are not paying for it.

I think this is an odd/slightly-disingenuous statement.

I mean, I'm on Linux, so I'm not. I'm happily paying for Kagi though, and would pay for Orion+ if it were available to me :)

I would also very much like it if Orion was open source, it would make me feel a lot better committing to and recommending a browser if I had actual assurances it's behaving appropriately, beyond a company saying "trust me", no matter how nice/cool they seem at the time.

Honestly, I kinda wish Orion+ was the only option; I think having a free option (and the incentives that can create) is kind of antithetical to Kagi's whole raison d'être.


> I would also very much like it if Orion was open source, it would make me feel a lot better committing to and recommending a browser if I had actual assurances it's behaving appropriately, beyond a company saying "trust me", no matter how nice/cool they seem at the time.

Kagi isn't 100% open source but you still use it and recommend it?

How do you know they aren't spying on the backend?


There's not really a reasonable local alternative to running something like Kagi, so one kind of just has to hope for the best with the least shady-looking option, or not use web search at all. It would be nice if they at least had a 3rd party audit validate their privacy claims... but Kagi is at least a step in a better direction than any common search option, even if they might still actually be spying on you for all you know (and keep that in mind if you choose to use it).

The same is not true of browsers; you can even build/use privacy-conscious versions of Google's browser project because Chromium is open source! To trade that away for closed source, on the promise of another company that was only able to build a browser because of an open source engine, is an unnecessary step backwards and should be bothering people, as much as Kagi appears to be the nice company for now.


> I'm assuming the people who are asking for Orion to be open source are not paying for it.

I don't know about the others, but I'm an Orion+ lifetime purchaser just because I like what they are trying to do and it's a good phone browser for my work phone. I'm not sure I follow why specifically people who pay are supposed to be uninterested in it being open sourced?

> If you like Kagi and Orion, supporting development by paying for it makes sense.

> Open sourcing everything of Orion means that Orion+ will be open source which defeats the point of supporting development of Orion directly.

Sure, one should support the development costs. Can you elaborate why you feel that relates to Orion being freeware vs open source or why it defeats the point of Orion+? The two aren't differentiated by functionality, Orion+ is a token of development support.

> I've seen projects start open source, change to closed source and then add in the enshittification later. It doesn't matter if the code is 'open' the source code would eventually be unmaintained and have security holes which there is no time in the world for anyone else to maintain.

Open source isn't a promise that the code will be maintained forever; nothing can guarantee that. It's a promise that if the company decides to go closed source, the community can decide what to do. Or, even if you don't care about that, a promise of easy/public auditing and hacking. Just look at how many Chromium/Firefox build customizations, UI tweaks, and forks people have made despite the possibility that Google stops contributing to Chromium in the future.


VMware has been so good and reasonably priced for so long that there hasn't been a competitive market in the enterprise virtualization space for the past two decades. In a way, I think Broadcom's moves here might be healthy for the enterprise datacenter longer term, it has created the opportunity for others to step in and broadened the ecosystem significantly.


Talking to midmarket and enterprise customers, nobody is taking Proxmox seriously quite yet, I think due to concerns around support availability and long-term viability. Hyper-V and Azure Local come up a lot in these conversations if you run a lot of Windows (Healthcare in the US is nearly entirely Windows based). Have some folks kicking tires on OpenShift, which is a HEAVY lift and not much less expensive than modern Broadcom licenses.

My personal dark horse favorite right now is HPE VM Essentials. HPE has a terrible track record of being awesome at enterprise software, but their support org is solid and the solution checks a heck of a lot of boxes, including broad support for non-HPE servers, storage, and networking. The solution is priced to move, and I expect HPE smells blood in these waters; they've clearly been dumping a lot of development resources into the product this past year.


I used it professionally back in the 0.9 days (2008), and it was already quite useful and very stable (all advertised features worked). 17 years looks pretty good to me; Proxmox will not go away (neither the product nor the company).


>(Healthcare in the US is nearly entirely Windows based).

This wasn't my experience in over a decade in the industry.

It's Windows dominant, but our environment was typically around a 70/30 split of Windows/Linux servers.

Cerner shops in particular are going to have a larger Linux footprint. Radiology, biomed, interface engines, and med records also tended to have quite a bit of *nix infrastructure.

One thing that can be said is that containerization has basically zero penetration with any vendors in the space. Pretty much everyone is still doing a pets over cattle model in the industry.


HPE VM Essentials and Proxmox are just UIs/wrappers (plus extras) on top of kvm/virsh/libvirt for the virtualization side.

You can grow out of either by just moving to self-hosted libvirt, or you can avoid both for the virtualization part, if you don't care about the VMware-like GUI and you are an automation-focused company.

If we could do it 20 years ago, once VT-x arrived, with production Oracle EBS instances at a smaller but publicly traded company with an IT team of 4, almost any midmarket enterprise could do it today, especially with modern tools.

It is culture, web-UI requirements, and FUD that cause issues, not the underlying products, which are stable today but hidden from view.


Correction: in Proxmox VE we're not using virsh/libvirt at all; rather, we have our own stack for driving QEMU at a low level. Without it, our in-depth integration, especially live local storage migration and our Backup Server's dirty-bitmap support (known as changed block tracking in the VMware world), would not be possible in the form we have it. Same w.r.t. our own stack for managing LXC containers.

The web UI part is actually one of our smaller code bases relative to the whole API and lower level backend code.


Correct, sorry; I don't use the web UIs and was confusing it with oVirt. I forgot that you are using Perl modules to call QEMU/LXC.

I would strongly suggest more work on your NUMA/cpuset limitations. I know people have been working on it slowly, but with the rise of E and P cores you can't stick to pinning for many use cases, and while I get that hyperconvergence has its costs and platforms have to choose simplicity, the kernel's cpuset system works pretty well there and dramatically reduces latency, especially for lakehouse-style DP.

I do have customers who would be better served by a proxmox type solution, but need to isolate critical loads and/or avoid the problems with asymmetric cores and non-locality in the OLAP space.

IIRC lots of things that have worked for years in qemu-kvm are ignored when added to <VMID>.conf etc...


PVE itself is still made up of a lot of Perl, but nowadays we actually do almost everything new in Rust.

We already support CPU sets and pinning for containers and VMs, but this can definitely be improved, especially if you mean something more automated/guided by the PVE stack.

If you have something more specific, ideally somewhat actionable, it would be great if you could create an enhancement request at https://bugzilla.proxmox.com/ so that we can actually keep track of these requests.


There is a bit of a problem with polysemy here.

While the input for QEMU is called a "pve-cpuset" for affinity[0], it is explicitly using the taskset[1][3] command.

This is different from cpuset[2], or how libvirt allows the creation of partitions[4] using systemd slices in your case.

The huge advantage is that basic slices can be set up when provisioning the hypervisor, and you don't have to hard-code CPU pinning numbers as you would with taskset; plus, in theory, it could be dynamic.

From the libvirt page[4]

     ...
     <resource>
       <partition>/machine/production</partition>
     </resource>
     ...
As cpusets are hierarchical, one could use various namespace schemes, which change per hypervisor, without exposing that implementation detail to the guest configuration. Think of migrating from an old 16-core CPU to something more modern, and how all those guests would otherwise stay pinned to a fraction of the new cores without user interaction.

Unfortunately I am deep into podman right now and don't have a Proxmox install at the moment, or I would try to submit a bug.

This page[5] covers how inter-CCD traffic, even on Ryzen, is ~5x slower compared to local. That is something that would break the normal affinity if you move to a chip with more cores per CCD, as an example. And you can't see CCD placement in the normal NUMA-ish tools.

To be honest, most of what I do wouldn't generalize, but you could use cpusets with a hierarchy and open up the option of improving latency without requiring each person launching a self-service VM to hard-code core IDs.

I do wish I had the time and resources to document this well, but hopefully that helps explain more about at least the cpuset part, and that's without even applying the hard partitioning you could do to ensure, say, Ceph is still running when you start to thrash, etc...
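
To make that concrete, here is a minimal sketch of the hierarchical-cpuset idea (assumes cgroup v2 mounted at /sys/fs/cgroup, the cpuset controller enabled in the parent's cgroup.subtree_control, and root; the partition name, CPU range, and PID are all hypothetical):

    # Define a "production" cpuset partition once per hypervisor, then
    # drop guests into it; guest configs never hard-code core IDs, so
    # a 16-core -> 64-core migration only changes this one CPU list.
    from pathlib import Path
    CGROUP_ROOT = Path("/sys/fs/cgroup")
    def make_partition(name: str, cpus: str) -> Path:
        part = CGROUP_ROOT / name
        part.mkdir(exist_ok=True)
        (part / "cpuset.cpus").write_text(cpus)  # e.g. CCD-local cores
        return part
    def place(partition: Path, pid: int) -> None:
        # writing a PID to cgroup.procs confines it to the partition
        (partition / "cgroup.procs").write_text(str(pid))
    prod = make_partition("production", "0-7")  # assumed CPU range
    place(prod, 12345)                          # hypothetical QEMU PID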

[0] https://git.proxmox.com/?p=qemu-server.git;a=blob;f=src/PVE/...

[1] https://git.proxmox.com/?p=qemu-server.git;a=blob;f=src/PVE/...

[2] https://docs.kernel.org/admin-guide/cgroup-v2.html#cpuset

[3] https://man7.org/linux/man-pages/man1/taskset.1.html

[4] https://libvirt.org/cgroups.html#using-custom-partitions

[5] https://kb.blockbridge.com/technote/proxmox-tuning-low-laten...


KVM is awesome enough that there isn’t a lot of room left to differentiate at the hypervisor level. Now the problem is dealing with thousands of the things, so it’s the management layer where the product space is competing.


Thus why libvirt was added: it works with KVM, Xen, VMware ESXi, QEMU, etc... But yes, most of the tools like Ansible only support libvirt_lxc and libvirt_qemu today; still, it isn't too hard to use for any modern admin with automation experience.

Libvirt is the abstraction API that mostly hides the concrete implementation details.

I haven't tried oVirt or the other UIs on top of libvirt, but it seems less painful to me than digging through the Proxmox Perl modules when I hit a limitation of their system, though most people may not agree.

All of those UIs have to make sacrifices to be usable; I just miss the full power of libvirt/qemu/kvm for placement and reduced latency, especially in the era of P vs E cores, dozens of NUMA nodes, etc...

I would argue that for long-lived machines, automation is the trick for dealing with 1000s of things, but I get that that is not always true for others' use cases.

I think some people may be surprised by just targeting libvirt vs looking for some web UI.
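
As a taste of what "just targeting libvirt" looks like, a minimal sketch with the libvirt-python bindings (qemu:///system is the standard local KVM URI; the guest name is hypothetical):

    # pip install libvirt-python; talks straight to libvirtd, no web UI.
    import libvirt
    conn = libvirt.open("qemu:///system")  # local KVM hypervisor
    try:
        for dom in conn.listAllDomains():
            state, _reason = dom.state()
            running = state == libvirt.VIR_DOMAIN_RUNNING
            print(dom.name(), "running" if running else "stopped")
        dom = conn.lookupByName("oracle-ebs-prod")  # hypothetical guest
        if not dom.isActive():
            dom.create()  # boot the already-defined guest
    finally:
        conn.close()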


The industry tends to use the even harder-to-understand term "shrink". Not always theft, just any loss of product versus what the books say they should have.

