Friends don’t let friends build auth. Use open source, standards compliant solutions if you don’t trust a company. Just don’t reinvent the wheel.


There are plenty of solutions you can run in-house, like Keycloak or PingFederate. Not using Okta or Azure AD doesn't mean hand-crafting your own tooling.


One of my first paid jobs came in the summer after my college freshman year, in 1988: I did some consulting work for an insurance salesman. He had an IBM 8086-based machine, and it was taking 15 hours to run an insurance estimate.

I got him an 8087 math coprocessor and it reduced the time to about 10 minutes. I think it cost around $500 at the time.

I made my first $50 in tech, and his business was demonstrably improved. What a thrill!


This is a really great summary (with reference links) of the problem.

As an anecdotal experience that supports this: I’ve ordered products from Instagram ads exactly twice, with the same experience in each case: a higher-quality, lower-priced alternative was available elsewhere. And, in both cases, the products shipped from overseas and took weeks to arrive.


In 1988, I had a friend in my computer science program who got some local notoriety when he made a spell checker for DOS. It was a terminate-and-stay-resident (TSR) program, and it used only a tiny bit of the available 640K of RAM.

We were all pretty amazed by it. He got some seed money to start a business, and then was crushed about a year later when the same feature was just built into WordStar.



On T-Mobile, in the US, it’s definitely worse. I usually leave the cell settings on LTE on my iPhone. Occasionally, I’ll switch to 5G Auto. When I get the “UC” symbol next to the 5G, it’s often faster, but not always. I’d say I have a 30% hit rate on 5G where the speed is noticeably better. But as others have said, often I can’t get a web page to load even with the phone reporting full signal strength.


“Employees will have up to 3 years to exercise their options post-exit.”

That’s pretty remarkable, actually.


Forgive me if this is a dumb question, but at some point of proliferation, won't wind turbines have an impact on wind flow?


Yes. If they didn’t, we would place them in dense grids, separated only by enough distance to keep them from colliding with each other as they turn in response to changes in wind direction.

Also, there’s the small matter of conservation of energy. Wind turbines turn kinetic energy from moving air into electricity. That means the moving air must lose kinetic energy. It won’t lose mass, so it has to lose velocity.

Here's a sample paper discussing this: https://www.pnas.org/content/113/48/13570
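
As a crude back-of-the-envelope illustration of that energy balance (my own round numbers, not figures from the paper), in Python:

    # Crude single-turbine energy balance; ignores actuator-disk details.
    # All numbers are illustrative assumptions, not from the paper.
    import math

    rho = 1.225      # air density at sea level, kg/m^3
    radius = 50.0    # rotor radius, m (a large utility-scale turbine)
    v_in = 10.0      # upstream wind speed, m/s
    cp = 0.4         # fraction of power extracted; the Betz limit is ~0.593

    area = math.pi * radius**2
    p_available = 0.5 * rho * area * v_in**3   # kinetic energy flux, W
    p_extracted = cp * p_available

    # The same mass flow leaves with less kinetic energy, hence less speed:
    v_out = v_in * math.sqrt(1 - cp)           # ~7.7 m/s here
    print(f"{p_extracted / 1e6:.1f} MW extracted, wind slows "
          f"from {v_in} to {v_out:.1f} m/s")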


It seems that if we did reduce global wind speeds with all the new friction from turbines, there would be less transfer of air between the poles and the tropics. Thus, the northland wouldn't warm as much as it currently does. Then a cooler Arctic would mean more driving force for winds: a negative feedback loop.


I don’t think wind turbines are tall enough (yet) to affect wind speeds at altitude much.


They create more local turbulence, but don't have a big overall impact on wind flow.

They do, however, redistribute moisture and heat which can contribute to temperature changes.


Nvidia's lack of support for Wayland really hurts Linux desktop adoption.

It's especially bad news if there's no more X.org support.


> Nvidia's lack of support for Wayland really hurts Linux desktop adoption.

I doubt it has any significant impact. Linux desktop adoption is hurt by many, many things.


I think they mean Wayland adoption on the Linux desktop, but I could be wrong.


I'm not even sure that's all that true. Wayland is still missing a bunch of things that people want from X or from other OSes' display systems.


Such as?


Allegedly: screenshots and AutoKey-like functionality.

I don't know who says that Wayland is ready for "prime time", but I also don't know how a major distro would force Wayland without an implementation of those features.


I think it's reasonable to assume that someone else will pick up the development. IIRC, Ubuntu 20.04, at least as of now, will still go with X.org.


No. The Linux stack is pretty much whatever Red Hat says it is. If Red Hat says X is moribund, that will prompt upstreams to drop X support from their toolkits and the other distros will fall in line and go full Wayland. Maybe Slackware will hang on for a couple releases more. No one's going to maintain that big chungus code base just to buck the direction the wind is very obviously blowing.


> It's especially bad news if there's no more X.org support

"Maintenance mode" means bugs are still being fixed, but no huge new changes or features are coming. It doesn't mean that it's no longer supported.


Linux on the desktop would be a reality already if Linux had a stable driver API.


Not really. It is well known across the industry how you get drivers into Linux now. There are many players, big (Dell, Lenovo) and small (System76, Entroware, etc.), that sell Linux-supported devices.

Here's a great write-up on the technical merits of Linux's approach to drivers: https://www.kernel.org/doc/html/latest/process/stable-api-no...


I think the OP was thinking about commercial players who do not like to upstream their driver code. This might be for copyright/GPL reasons, or for trade-secret/code-obscurity reasons. That is impossible with Linux drivers. To quote from your link:

> So, if you have a Linux kernel driver that is not in the main kernel tree, what are you, a developer, supposed to do? Releasing a binary driver for every different kernel version for every distribution is a nightmare, and trying to keep up with an ever changing kernel interface is also a rough job.

> Simple, get your kernel driver into the main kernel tree (remember we are talking about drivers released under a GPL-compatible license here, if your code doesn’t fall under this category, good luck, you are on your own here, -snip-).

Thing is, this excludes quite a lot of drivers from getting into the kernel. And that kind of sucks.


I’m a noob as far as kernel specifics go, so this is probably a stupid question: basically, if you want the kernel to support all possible hardware in the world, you would need to add all of those drivers to the mainline kernel? That seems like a bad idea, because the kernel’s source code wouldn't fit on a terabyte-sized disk.


If you want your driver to run in kernel space, you need that code to be in the mainline kernel. Anything that runs in user space can be kept outside of the main kernel repo.

There is a bad middle version, where you put some interface exposed to user space into the mainline kernel and then dump in binary blobs that talk to it. The bad part is doing this when you will be the only user of that interface, and when you do it to intentionally keep your driver out of the kernel. I believe there was/is an attempt by Nvidia to essentially get a shim for their Windows drivers into the kernel.

The good version of this is where the user-space interface evolves naturally, in cooperation between multiple consumers, or at least where there end up being multiple consumers of the interface.
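
To make the user-space route concrete, here's a minimal sketch using PyUSB (libusb underneath); the vendor/product IDs and endpoint addresses are made-up placeholders, not a real device:

    # Minimal user-space "driver" sketch with PyUSB (pip install pyusb).
    # The kernel only provides its generic USB stack; all device-specific
    # logic lives here. IDs and endpoints below are hypothetical.
    import usb.core

    dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)
    if dev is None:
        raise ValueError("device not found")

    dev.set_configuration()        # use the default configuration
    dev.write(0x01, b"\x00\x01")   # send a command to OUT endpoint 0x01
    data = dev.read(0x81, 64)      # read up to 64 bytes from IN endpoint 0x81
    print(bytes(data))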


There are fewer drivers needed these days as interfaces become standardized. Most hardware uses a standard USB driver, for example.


You can place a shim into a kernel module and leave the rest in userland. Yes, it hurts performance, but that's the tradeoff.
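
Linux's UIO framework is one real instance of that split. Here's a rough sketch of the userland half, assuming a small kernel-side shim has already exposed the hardware as /dev/uio0:

    # Userland half of a shim-style driver via Linux's UIO framework.
    # Assumes a tiny kernel module has registered the device as /dev/uio0.
    import mmap
    import os
    import struct

    fd = os.open("/dev/uio0", os.O_RDWR | os.O_SYNC)

    # Map the device's first register region into this process.
    regs = mmap.mmap(fd, 4096, mmap.MAP_SHARED,
                     mmap.PROT_READ | mmap.PROT_WRITE)

    # A read on the fd blocks until an interrupt and returns a 4-byte count.
    irq_count = struct.unpack("I", os.read(fd, 4))[0]
    print("interrupts seen so far:", irq_count)

    regs.close()
    os.close(fd)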


That write-up does a good job of explaining the current situation, but it doesn't completely refute the validity of the demand. The primary people asking for stable driver APIs and ABIs are desktop folks who want to be able to run closed-source drivers for their GPUs and other consumer peripherals. That means it's a very limited set of architectures, maybe just x86 and x64.

The other thing is that Linux has a non-modular, monolithic kernel design, which means that simply having a stable API for GPU drivers isn't enough: you'd probably also need a stable PnP API layer, a stable I/O API layer, a stable file system layer, and whatever other OS services a driver would use.

The argument about deprecating interfaces and fixing bugs is valid, but everything in OS design is a trade-off. Linux's implementation of a monolithic design turned out to be very stable, when, in theory, microkernels have a vastly better design when it comes to stability. Whether theoretical benefits are realized at the ground level depends on a LOT of factors. Personally, I think the effort required to create a stable API layer would be too enormous to undertake at this point.


> Linux on the desktop would be a reality already if Linux had a stable driver API.

Linux on the desktop (and other consumer devices) is a reality, via Android and ChromeOS. What's not a reality is consumer use of the userspace tools and desktop environments lots of people mentally associate with Linux, but I doubt very much that's about driver APIs.


Ah, the typical pat on the back.

ChromeOS and Android could be running on top of the Windows kernel, and userspace would hardly notice.


And "Android's adoption shows that Linux doesn't need a stable driver ABI" is a funny position to take, given how many problems that's caused Android, to the point where Google is adding a way for vendor drivers to work across versions of Android.

https://android-developers.googleblog.com/2017/05/here-comes...


Incidentally, the Windows Subsystem for Linux was reportedly the result of an at least somewhat successful effort to run the Android userspace on top of the Windows kernel, so your assertion is completely true.

(https://venturebeat.com/2016/02/25/microsoft-kills-project-a...)


Not having to commit to a stable driver API has advantages, too.


Love that idea. Although the original Game Boy is a little bulky to carry around ;)

