I think this is basically what OSX has had forever. And I think this is one of those things where it's _really_ easy to miss how big a deal it is in making the experience feel solid.
My Ubuntu 16.04 machine does this weird thing where it shows my login in 1/4 of the screen then suddenly it stretches to fill. Really sets the tone for how janky the experience is going to be.
The startup experience sets the tone for the entire thing. I know some engineers who won't get why it's a big deal. It's a _big_ deal.
My favourite thing my Ubuntu machine does is that when you wake it from sleep, it shows you the desktop as it was when you put it to sleep for a fraction of a second, then flickers a few times while it puts up a lock screen, then shows two login dialogs stacked on top of each other, one larger than the other.
That's probably because the lockscreen is just a fullscreen application under X11. Wayland actually understands the concept of a lockscreen and keeps things a tad more secure.
With KDE4 I often have to wait whilst the lockscreen is paged back in from swap.
Invariably I miss the fact that the disk-activity light is still lit, start typing my password and then become annoyed because it only caught the last few characters. So then I have to wait AGAIN whilst it proudly announces an authentication failure and punishes me with a further delay.
So your problem was a process going to swap (or not; Linux still has problems with disk I/O, especially coming back from hibernation, although bug #12309 is gone), and the rest are the consequences.
I've seen this behavior with the old 'gnome-screensaver' (which I think Ubuntu hasn't used for quite a while), but the current gnome-shell lock screen behaves properly and doesn't black out the screen when I open my laptop.
At the risk of sounding like a fanboi: try Pop!_OS (I know, the spelling is ridiculous). It's essentially Ubuntu with a bunch of those things that make it feel janky ironed out. The biggest one for me is it does a great job of managing the discrete GPU on my laptop. I've been using it for a few weeks and couldn't be happier.
My understanding, looking back at those bugs, is that they were driver bugs where the driver lied about having committed to the back-buffer. GNOME, at least, tries to blank things and submit the right commands to ensure it's cleared before releasing the suspend lock.
But if the GPU driver is broken about command committal, then there is only so much you can do short of fixing those drivers.
Every distinct piece of Apple's boot process (firmware, boot loader, kernel, desktop) can have the correct video driver, be aware of the desired final video resolution, and trust that any previous piece has left the video card in the proper state. It all ships together.
The PC BIOS doesn't know you want 2560x1600 for your desktop. It just picks a mode suitable for the BIOS. If anything, the BIOS is required to leave the screen in VGA text mode. The boot loader faces the same issue. At best, a Linux distribution might decide to use a VESA mode, but that can't cover the desired 2560x1600 (VESA doesn't go that high).
OS X /could/ do that on a random PC - Windows 8 and later already do so if your firmware provides appropriate functionality, and Fedora is making use of this same functionality. The firmware exports information about its bootsplash in an ACPI table called BGRT, and this allows the bootloader and OS to draw stuff without overwriting the firmware's artwork. This is how recent versions of Windows draw the boot animation underneath the vendor firmware logo. Right now Fedora doesn't do that, but it wouldn't be super hard to teach Plymouth how to.
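For the curious, the Linux kernel already exposes the firmware's BGRT through sysfs, so you can poke at it yourself. A minimal sketch (these paths are the real kernel interface; whether the files exist depends on your firmware actually publishing a BGRT):

```
# If the firmware published a BGRT, the kernel exposes it here:
ls /sys/firmware/acpi/bgrt/
# image  status  type  version  xoffset  yoffset

# The image is a plain BMP; copy it out to inspect the vendor logo
# the OS is drawing around:
cp /sys/firmware/acpi/bgrt/image /tmp/firmware-logo.bmp
cat /sys/firmware/acpi/bgrt/xoffset /sys/firmware/acpi/bgrt/yoffset
```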
UEFI graphics drivers usually support much more fine-grained modesetting than VESA, assuming you have a native UEFI graphics driver rather than one that's thunking down to a legacy BIOS option ROM. They're also (usually) capable of reading and parsing EDID and will set a native mode if possible. The number of people not running their desktop at their display's native mode is sufficiently low that it's not really worth worrying about.
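You can check what the kernel's DRM driver derived from each connector's EDID without any extra tools; a rough sketch (connector names like eDP-1 or HDMI-A-1 vary by hardware):

```
# List each connector, whether a display is attached, and the modes
# parsed from its EDID (the preferred/native mode is typically first):
for c in /sys/class/drm/card0-*; do
  echo "== $c ($(cat "$c/status"))"
  head -n 3 "$c/modes" 2>/dev/null
done
```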
> The PC BIOS doesn't know you want 2560x1600 for your desktop...
That's why we switched to UEFI around a decade ago; I'm pretty sure it doesn't have all those problems. So a modern PC can boot directly into the correct resolution.
UEFI stands for “Unified Extensible Firmware Interface”, where “Firmware” is an ancient African word meaning “Why do something right when you can do it so wrong that children will weep and brave adults will cower before you”, and “UEI” is Celtic for “We missed DOS so we burned it into your ROMs”.
–Matthew Garrett (he wrote a lot of the Linux kernel UEFI implementation)
My custom build shows the motherboard logo at the correct resolution immediately on POST and never has to mode switch even through grub. It flashes off and on to load the nvidia proprietary drivers, but never has to do so when I boot to windows.
As a side note, OS X totally fails to do this on high-resolution external displays. You can’t unlock your disk with the FileVault password on those 35” wide displays, for example.
It's probably been such low priority in the Linux community because most of us are big nerds and we like seeing different technologies interact; it can be nice to be able to see the boot sequence bios>bootloader>OS>Desktop.
OSX does the same thing, but hides it all behind a single load screen, as if all of those elements from hardware to DE were a monolithic "mac".
It feels like lying to the user to me. The BIOS isn't in control anymore, so why is the motherboard logo still on the screen? How do I know when the OS takes over from grub if both are hidden under the motherboard logo?
It's not in there because it's generally a gigantic hack. For the 40 or 50 years that people have been developing device drivers, the basic principles have largely not changed: you start with a clean slate and do a reset to put the HW into a known good state. Every block of silicon ever devised comes with means to reset its internal state.
What this is doing is telling the driver to ignore its normal reset procedure because someone else, but specifically not you, has already done the super complex modesetting and you are to trust it implicitly.
So next year, when the sales people come up with the idea that they want to show an animation across all your DisplayPort chained monitors at boot, the guys at Apple buckle down for a month, hack that into the bootloader, hack their kernel to recognize it, and ship it the same year. On Linux, for the first ten years nobody cares because there are much larger fires to put out, then ten years are spent on a vendor- and display-technology-agnostic, VR-ready boot animation description format, ten more years pass for a critical mass of boot firmware and kernel graphics drivers to support it, and then it ships with Ubuntu but every third boot has flicker or a weird rotation issue.
I’m a big nerd but I don’t care when the bios hands off to the bootloader, or when that hands off to the kernel, or when that hands off to the window manager. I just want those steps to go as quickly as possible without a bunch of manufacturers spamming their logos at me.
The only time I want to see these handoffs and an ugly text dump is when things are broken and I’m debugging it, which should never happen anyway.
Why do we expect such mediocrity out of our software?
"The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair." -Douglas Adams
I think it's a matter of taste. I find seeing boot messages about services starting up etc. more aesthetically pleasing than some logo. It reminds me of people who buy mechanical watches where the gears are visible. Seeing the underlying bits of a complex system exposed can certainly be appealing to some people.
I am exactly the kind of person who buys watches where the gears are visible, and I even use gear-shaped watch faces on my smartwatches, so yeah maybe this whole thread is just me being me.
I do also turn off the splash screen in linux so that I can watch the service startups fly by.
I think it's fair to say that in general, users want to be lied to. That's why UX patterns like screenshots of suspended apps being displayed while the app actually loads became standard. The appearance of speed is effectively as valuable as actual speed, just like the appearance of polish is valuable regardless of the cruft that lies beneath it.
Lying has nothing to do with it. Users don't _care_ about the details of what the system is doing and it's just cognitive noise, distracting from the task at hand.
The suspended app screenshot is way more than appearance. We all need a moment to remember the context of what we were doing. Showing us a screenshot of the suspended state lets us use the loading time productively so that we are immediately ready to interact once it's up.
The task at hand is waiting. People love to be distracted from waiting without any evidence of progress. Isn't that why they put speakers into so many dial-up modems?
Speaking as a user of dial up modems in the past, no.
The speaker was vital to inform the user as to what was happening: if the phone line was in use, if there was a dialing problem, if you called the wrong number, etc. And it also gave you an indication of the connection speed and line quality.
To change the initial splash (from power-on), you'd need to get that pixmap embedded into your EFI. However, with the recent changes, any of these systems after initial EFI can alter the framebuffer and have that persisted up until wayland/xorg/etc.
Pressing a key to get the "full dump" (Esc, by default) is in fact how plymouth works, which many Linux distributions use from the kernel up until gdm/display-manager.
> Bios isn't in control anymore, why is the motherboard logo still on the screen?
There is nothing stopping any of these components from changing the display framebuffer. The "news" here, so to speak, is that finally we'll have things working where you can avoid any buffer changes and simply start drawing on the old background.
If you want, you could have grub2/etc change the framebuffer and now that could persist up to the gdm hand-off.
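A minimal sketch of that, assuming a stock grub2 setup (file paths vary by distro):

```
# /etc/default/grub
GRUB_GFXMODE=auto            # let GRUB pick a mode from the display's EDID
GRUB_GFXPAYLOAD_LINUX=keep   # hand that same mode to the kernel, no re-set

# then regenerate the config (Fedora path shown; Debian/Ubuntu use update-grub)
grub2-mkconfig -o /boot/grub2/grub.cfg
```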
True that. I want to see the output of OpenRC when I boot to make sure it all looks familiar (I only reboot after a kernel upgrade really). Then I login and type startx.
I understand I'm completely out of touch with how normal people use computers. I don't care.
You can still see the output of your kernel and init system... on the serial console.
Personally, I wish serial-over-USB (and serial-over-Lightning) cables were more common. (I don't mean that they're expensive—they're dirt cheap. It's just that nobody has any idea they exist.)
They used to be a kind of arcane thing—why would you need serial access over USB when the PC already has a serial port? And why are you debugging a PC on another PC—(taking up quite a lot of space on your desk, eh?)—when you could instead debug the PC on the PC itself?
But these days, almost nothing has a serial port, and we have perfect "secondary PCs" to serve as the recipient of the serial input: our phones and tablets. (Heck, they probably have space reserved for them on your desk already!)
Imagine: sit an iPad or some cheapo Android tablet below your monitor. Plug the "send" end of the serial-over-USB cable into the USB hub built into your monitor. Plug the "receive" end of the cable into your tablet. Open a terminal-emulator app on your phone/tablet. Bam: console logs, flowing along below your PC, as needed.
Best of all, it's not unidirectional; your phone/tablet is now a VT100, and you can log into your computer over it, even if the display server or login manager or desktop environment is wedged.
If owning these little cables was common, I'd honestly suggest just turning off the Linux text-console virtual framebuffers entirely for the Desktop versions of distributions. You want to see what's going on underneath the gloss? Tap in.
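For anyone who wants to try it, the setup is tiny. A sketch, assuming a grub-based distro and a USB serial adapter that shows up as ttyUSB0 on the client:

```
# On the target machine: mirror the console to the first serial port.
# /etc/default/grub
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"

# systemd notices the console= argument and spawns a login getty on the
# serial line automatically. On the client (laptop/tablet/phone):
screen /dev/ttyUSB0 115200
```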
I'm not really impressed by that idea. Pretend I copied your whole post, except it's about wanting to run a web browser. It's so convenient to grab an entire different device to tether into your computer to browse on, why would you want to use the built-in screen for browsing?
Huh? Because you want to see/interact with what's on the screen approximately 100% of the time when browsing, but a tiny percentage of the time (only when something's broken) when booting.
"Best of all, it's not unidirectional; your phone/tablet is now a VT100, and you can log into your computer over it, even if the display server or login manager or desktop environment is wedged."
And wanting to interact is the least of the reasons to not want to hook up an external screen for something that could easily be on your main screen.
Maybe you don’t have a “main screen.” A lot of devices don’t. For example, servers, NASes, or smart speakers. Or maybe your main screen is just not made for display of text at the level of detail required—like a smartwatch, or an IoT device like a thermostat, or even a jumbotron/billboard computer that’s too big to see up close.
In a lot of these cases, it should be pretty obvious that there are also no permanent input peripherals attached to the target device, and perhaps not enough ports to conveniently plug them in.
So, tell me what’s more convenient: lugging a secondary display, mouse and keyboard around to plug into your devices to debug them? Or just plugging your laptop into the server and opening a terminal emulator?
The advantages of serial access are exactly the same as the advantages of SSH access, except that serial access doesn’t require an active network stack and daemon running to facilitate it, and so is usable even in a post-crash single-user mode, or in an unable-to-begin bootstrap stage.
Plus, with serial, the solution is universal—anything that supports serial output can be attached to anything that supports serial input. Whereas those peripherals aren’t guaranteed to do anything much if you plug them into a crashed PC.
Oh, and one more advantage: if you’ve ever tried writing your own kernel driver or unikernel (or game on a game console), you’ll have experienced the fact that a kernel crash that happens while the display is in graphical mode is very hard to get displayed on the screen, since you can’t know what subsystems (like the framebuffer, or the scheduler) are still in a valid state. Spewing lines onto the serial console, on the other hand, works perfectly. (And for this sort of development you can usually go even further, producing a debug build of your kernel or game that will crash into a debugger breakpoint accessible via the serial console, expecting something like gdb to be running on the other end of the serial connection. The main difference between a game console and its development kit is basically the presence of serial debugging.)
> Maybe you don’t have a “main screen.” A lot of devices don’t. For example, servers, NASes, or smart speakers. Or maybe your main screen is just not made for display of text at the level of detail required—like a smartwatch, or an IoT device like a thermostat, or even a jumbotron/billboard computer that’s too big to see up close.
> In a lot of these cases, it should be pretty obvious that there are also no permanent input peripherals attached to the target device, and perhaps not enough ports to conveniently plug them in.
You are now describing almost the opposite of your original scenario, which was plugging ancillary devices into a PC to see the kernel messages of the PC.
> So, tell me what’s more convenient: lugging a secondary display, mouse and keyboard around to plug into your devices to debug them? Or just plugging your laptop into the server and opening a terminal emulator?
That's a weird response when I was the one arguing for having one screen instead of two.
> Oh, and one more advantage: if you’ve ever tried writing your own kernel driver or unikernel
> True that. I want to see the output of OpenRC when I boot to make sure it all looks familiar (I only reboot after a kernel upgrade really). Then I login and type startx.
What I’d really love is Linux to be able to unlock a fully encrypted root partition similar to macOS. On macOS the user is prompted to login at a very early stage of the boot process, and after this the disk is decrypted and the OS is booted. The user is then logged in automatically, so they only need to enter their password once to decrypt the disk and authenticate their account.
It's definitely possible to configure this... it's just not a default on any distro I'm aware of. Linux Full Disk Encryption definitely stops very early and asks for the FDE password before being able to continue booting. After that, you would just have to have your user set to "automatically log in" for it to just go all the way in.
In fact, I think you could choose to set it up this way from the Ubuntu installer directly. By default, it just encourages you to sign in manually in addition to setting up FDE.
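For reference, the auto-login half is just a couple of lines in GDM's config ("alice" is a placeholder user name):

```
# /etc/gdm/custom.conf
[daemon]
AutomaticLoginEnable=True
AutomaticLogin=alice
```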
Actually I have my desktop configured with an encrypted root. Turns out GRUB can actually decrypt the partition holding /boot before loading the grub.cfg. Though I have this set up on ArchLinux and had to refer to the Arch wiki a lot to get this set up properly. Definitely not that user friendly, but I can say it is a possible setup after having used it for years on my desktop now.
EDIT: To clarify, my setup requires inputting passwords twice: once for decrypting the root partition, and once to log in after everything has booted. During the boot process the system needs to remount everything, so I had the encrypted partition(s) also be decrypted with a key file (typically `dd if=/dev/urandom of=keyfile bs=1M count=4`; LUKS encryption can have multiple keyfiles/passwords to decrypt) and had the key file(s) put in the initramfs so after GRUB has decrypted root and loaded /boot/grub.cfg, the booted system could decrypt and mount everything needed with the key file(s).
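To sketch the whole thing out (device names and paths are placeholders; this follows the Arch wiki's encrypted-/boot recipe):

```
# Generate a keyfile and add it as an extra LUKS key:
dd if=/dev/urandom of=/root/cryptkey.bin bs=1M count=4
chmod 000 /root/cryptkey.bin
cryptsetup luksAddKey /dev/sda2 /root/cryptkey.bin

# Let GRUB unlock the partition holding /boot:
#   /etc/default/grub -> GRUB_ENABLE_CRYPTODISK=y

# Embed the keyfile in the initramfs so you aren't prompted a second time:
#   /etc/mkinitcpio.conf -> FILES=(/root/cryptkey.bin)
#   kernel cmdline       -> cryptkey=rootfs:/root/cryptkey.bin
mkinitcpio -P
```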
I have a similar setup for my Arch install. I also put my swap into the encrypted LVM partition. Nothing is on the disk unencrypted. I also had to refer to the Arch wiki a lot, and it took me 2 attempts to get it right.
My Windows 10 machine, if I start it up before I turn the screen on (DisplayPort connection), shows the login screen with the background picture in what looks like a stretched super low JPEG-artifacted resolution. If I turn the screen on first, or if I use the login screen again later, it's fine. The background image is a local, high-resolution image file on my computer.
It didn't happen in early versions of Windows 10. It started with one of the major updates. My guess is that if you don't have a screen on when the login page first shows, it down-samples your background image to some low resolution, and then just stretches it out to fit your actual screen res when you do turn your screen on.
On my external 5k display connected to a 2013 Mac Pro, the File Vault protected login screen is displayed in the middle of the screen using maybe 30% of the screen real estate.
Once logged in you get the full 5k. So it's still not 100% over in Mac land. :D
It's not a big deal for you, it's a big deal for UX.
UX isn't about what you as a user feel, it's about what users feel, and OP is probably talking about UX for all users, not just those comfortable with Linux and its tendency to have a janky UI.
It is a big deal, for experience, and you're right about it being the kind of thing certain engineers don't see as a big deal.
But- I'd rather they fix a bunch of other things first.
I'd put "things that don't work" in the first todo bucket, then move on to "things that are painful to endure", followed by "things that make it feel janky"
I wanted to use Fedora recently and it totally failed to handle my perfectly common laptop graphics card. Ubuntu did it fine.
I’ve used various MacBook Pro models from 2007 to 2018; they’ve all done this at least occasionally. The iBook I had previously didn’t, though. Apple’s PPC stuff always felt just that little bit saner.
I don't think it's a big deal at all. Or rather, I think it's important, but it's the last bit of polish that you put on things after the rest of your house is in order. It's a nice-to-have.
Meanwhile, I installed Debian buster on a 2016 MacBook Pro -- a two year old laptop -- and sound and suspend-to-RAM don't work (hell, the keyboard and trackpad didn't even work at all without installing an out-of-tree driver post-OS-install; I had to do the install using a USB keyboard). Yes, I get that this probably isn't super common hardware, and it's also a bit exotic, but... c'mon. I just don't care if my screen jumps between text and graphics mode or changes resolution a couple times before I get to the login screen. It just isn't on my radar at all.
The Xorg touchpad drivers are a mess: libinput is one-size-fits-none, synaptics is buggy and unmaintained, and the fork of mtrack that's still maintained might be able to come close to something decent, but I'm not sure because I've already spent a couple hours tweaking its settings, and I still can't get it to not randomly send scroll and button events while I'm typing.
So no, I don't care at all about something useless like "flicker free boot". I want my laptop to be actually polished in ways that are functional first.
And I get it: it's not up to me to tell people what to work on during their free time. Everyone has their own itch to scratch, and that's a very personal thing. But to then (as you have) arbitrarily decide that visual polish during a functionally-irrelevant period of the machine's operation is more important than day-to-day functional polish while I'm actually interacting with the machine? Nope... not buying it.
This is just a matter of target audience.
I think if you want ordinary people to use your OS a flickering boot will get noticed and not make a great first impression.
The users who don't care at all about it are probably already using Linux.
And I agree with you that the basics need to run well first, but in my experience many common distros got that covered. Debian is one of the exceptions, it's pretty much the only Linux distro where not everything runs fine out of the box on my laptop.
> And I get it: it's not up to me to tell people what to work on during their free time.
So it's up to me now to tell you: If you want to run Linux on Apple hardware, help get it to work! Otherwise please just use hardware with Linux supported by its vendor instead of complaining here.
One great thing about open source is that everybody can work on what he needs or what he enjoys. So if people enjoy working on flicker free boot, that's great. Other people enjoy reverse engineering Thunderbolt adapters to get them to work with Linux or dig into the depths of the state machine required for touch pads to properly work. In the end we all benefit from such work, as each little piece gets us closer to the completed puzzle where everything works smoothly.
I specifically search out the kernel option to pass in grub2 to see the dmesg scroll past in text during boot, so I think I am the opposite of people who care about flickers and visual disturbances during boot.
More than once having the kernel boot visible has helped diagnose hardware failure or other OSI layer 1 issues with the system.
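If anyone else wants the same thing, it's just a matter of dropping the options that hide boot messages from the kernel command line; a sketch for grub-based distros:

```
# /etc/default/grub: remove the options that hide boot messages
# (Fedora uses "rhgb quiet", Ubuntu uses "quiet splash"):
GRUB_CMDLINE_LINUX=""

# then regenerate the config:
grub2-mkconfig -o /boot/grub2/grub.cfg   # or: update-grub
```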
Flicker-free boot can help you too—it's not a synonym for concealing the boot process. Wouldn't it be nice to watch the text logs progress tidily as the graphics drivers load? "Flickers and visual disturbances" can disrupt your ability to read potentially important boot text.
The level of disruption can vary greatly depending on your particular hardware; some external monitors take multiple seconds to show an image after the display mode is changed.
I agree with another comment that said they want the pretty picture when it works and the text dump only when something has gone wrong. The best of both worlds.
I think this is also one of those things that is easy to overstate. OSX has often annoyed me by how obviously unready it was for me, even though the login screen was there.
For the most part, though, I just don't notice it. One way or the other.
> I think this is basically what OSX has had forever.
And classic Mac.
First time I saw Windows 3.11 boot my jaw dropped. What an ugly hack. That’s what’s conquering the world? The monitor wouldn’t even turn itself on. Never mind having to manually eject a floppy after you just told the computer to do it for you.
I have to agree. I almost feel guilty admitting that it really annoys the crap out of me every time I see all these mode-switches and text consoles when booting my Linux desktops. It really should not matter at all because the end result is always the same ('computer is booted and usable') but there's some psychological effect (probably related to my OCD tendencies) that makes me notice it every.single.time, and it bugs me :-/
It's a really fundamental UX issue - imagine you open a book, it's got random half pages of text, smeared print, blocks of colour; what's your impression of the production quality? Are you expecting it to be well edited?
Or you put on the radio and the first track is random snippets of talking/music/sound at various volumes.
People put luxury goods in expensive packaging for a reason ... and no, _I_ don't like expensive packaging.
(Now, actually a book like that would be intriguing to me, but ...)
That's how things have worked for ages; Windows had mode switches at least until Windows XP. It's quite normal if you think about it: the computer turns on, does some basic self-testing, then enhances its capabilities, and then the OS starts and activates more and more functionality.
I know it looks very polished when all this is hidden. But when something stops working, the repair option might then be hidden as well though. The UX of having a dysfunctional system for days or weeks is even worse. In fact this is the reason I switched to Linux, everything is so much more predictable and transparent. Of course stuff looks more stitched together and sometimes things are harder to get running. But once things are set up properly, it just works and works.
Windows has improved a lot over the years, but still, if you keep an installation over time, you end up with more and more spyware and crappy software on your system; even on the Mac it's advisable to reinstall everything from time to time. This means backing up all data, not forgetting anything, and also taking care of app reinstallations. For anyone using computers for things they depend on, this is nightmarish UX.
Basic internet and computer hygiene makes avoiding spyware/adware/malware and crappy software incredibly easy, almost second nature. Some people who don't practice it might find nuking it all easier than trying to uninstall everything, and that's valid, but unless you've corrupted something on a system-wide scale, clean installs are a relic from way back when all software was a lot more unstable, many years ago. I've been running the same install of Windows, upgraded numerous times, with nary an issue, thanks to the above best practices. Same with MacOS.
This tends to be the general sentiment in both communities (outside of people who recommend reinstalls because it's all they know, but I wouldn't listen to them regardless): if you keep good computer/internet hygiene practices, and barring any widespread corruption, systemic malware, or other external factors, clean installs just aren't needed.
Aside from that, I'm always a huge advocate for the polished UX. As long as the options aren't removed, the users who need them will be able to use them perfectly fine, and everyone else won't be put off by them, especially since the need for clean installs only arises in the specific cases I mentioned and doesn't come up often at all.
I wouldn't care about flickering screens, but this is actually about more than just that one second. Once you deal with external screens and proprietary graphics drivers, it might happen that your system doesn't boot at all when using the flicker-free setup. It's a nice demo, but I also want to point out that the (allegedly technically inferior) Ubuntu has had this since last year or so. It's quite difficult to turn off, but I know how, because it works really badly if you have an NVidia card and also need the GPU capabilities.
Neat - but it’d be nicer to replace that motherboard logo with a Fedora logo or something, so it’s clear when the kernel has taken the reins. Otherwise, people might think that the system is stuck in the BIOS!
Yes, I know the menu disappears - but that’s only going to be true for some subset of BIOSes, and it’s not an indicator that most people would be expecting.
Speaking as one of the people who originally implemented the Linux support for the Boot Graphics Resource Table (BGRT) which provides the BIOS logo, here's why both the BIOS and the OS want this:
BIOS vendors support this because otherwise the BIOS boots so fast (with modern system requirements) that they don't get as much branding opportunity.
OS vendors love this because then when you're waiting around for your system to boot, you're looking at the BIOS logo and blaming the BIOS, not the OS.
As pointed out this isn't what happens, but even if vendors did ship Coreboot it'd have the same behaviour (because that's what the vendors want) and you still wouldn't be able to reflash it because they'd still be using Boot Guard. Coreboot isn't some magical freedom enhancing technology that can be sprinkled on a platform to make it respect its users, in the same way that the use of Linux in Android doesn't guarantee any strong user benefits over the iOS kernel.
As the other reply in this thread pointed out, this doesn't mean that the BIOS takes longer to boot, it means that the logo keeps showing as the OS boots.
(I would still love to see more open BIOSes, though.)
At least there's some "sign of life" from the OS in this case - although I can totally imagine this being a confusing experience for people as well! If it's technically infeasible to replace the OEM logo while avoiding a modeset (flicker), maybe a small spinner or animated logo would do - an animated Fedora logo could look extremely cool there!
They note in the post that this functionality is coming soon:
> 2. Write a new plymouth theme based on the spinner theme which used the vendor logo as background and draws the spinner beneath it. Since this keeps the logo and black background as is and just draws the spinner on top this avoids the current visually jarring transition from logo screen to plymouth, allowing us to set plymouth.splash-delay to 0. This also has the advantage that the spinner will provide visual feedback that something is actually happening as soon as plymouth loads.
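Today you can already play with the pieces this builds on; a sketch (theme availability varies by distro):

```
# See which Plymouth themes are installed, then switch
# (-R rebuilds the initrd so the theme is available at early boot):
plymouth-set-default-theme --list
plymouth-set-default-theme -R spinner

# The splash-delay knob mentioned in the post goes on the kernel cmdline:
#   plymouth.splash-delay=0
```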
That's because you probably launched Windows from a grub bootloader. What Windows actually does is just leave up whatever was the last thing drawn on the screen, which for single-OS systems is usually the manufacturer logo.
That's basically what sbupdate [0] for Arch does: it allows you to set your own bitmap for the boot logo (after the vendor logo, of course).
Another benefit of sbupdate (besides Secure Boot) is that it allows running the Linux kernel directly as a UEFI executable, no GRUB or systemd-boot needed!
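For anyone curious what "kernel as a UEFI executable" looks like in practice, here's a sketch using efibootmgr (the disk, partition, and root= values are placeholders for your system):

```
# Register the kernel itself as a UEFI boot entry (no bootloader at all):
efibootmgr --create --disk /dev/sda --part 1 \
  --label "Arch Linux (EFISTUB)" \
  --loader /vmlinuz-linux \
  --unicode 'root=/dev/sda2 rw initrd=\initramfs-linux.img'
```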
> even the inbuilt UEFI firmware of most manufacturers.
I have a <3yo laptop that boots NVMe OSes fine, Windows and Linux alike ... except for the Windows feature-upgrade process, which just hangs indefinitely on the OEM logo.*
I also have a desktop whose UEFI firmware doesn't output anything to its PCI videocard, and also refuses to boot at all if anything is connected to its integrated videocard. So getting Linux installed on that silly thing at all was an exercise in frustration.
I furthermore used to have a little "mini PC" that utterly refused to allow its UEFI boot entries to be modified at all - on top of having no boot-time video output and a soldered-on bootable storage. So I had to abandon and rollback my efforts to install Linux on it, lest I brick it. Brick, an x86-based system. Ugh!
In short, I don't respect manufacturers' UEFI firmware any farther than I can defenestrate their products.
-
* I would say something about how imaging from a SATA SSD onto an NVME SSD even required a reinstall of Windows, whereas the dual-booted Linux install worked perfectly; but that's probably just Windows being its usual stupid self.
Is it safe to assume that this is being done for Intel before AMD/Nvidia due to the more open nature of Intel's drivers? (I had thought AMDGPU was supposed to be better?)
I'm just wondering if features like this can be used as a carrot/stick for AMD/NVIDIA -- "Hey, look what we can build and provide for Intel customers because their driver devs are at the table."
On a separate note, insert here unending words of praise for the money and effort that Red Hat puts into Linux, and specifically technologies related to Linux on the desktop. systemd/logind, pulseaudio, pipewire, gnome, wayland (including porting various other projects)... all critical stuff that I see go unappreciated sometimes.
In my experience you cannot tar AMD and Nvidia with the same brush regarding Linux drivers. I have been running Fedora with an Nvidia card for a couple of years. The open drivers have been unusable for me: crashing the system at random (often mid-install of the proprietary drivers) and corrupting the display server (I got quite handy with dnf history rollback). The proprietary drivers have been made much easier to install in the last few releases. But the same machine with Windows gave way better performance.
I recently bought an AMD card. I just popped in the card, booted the machine, and the mesa drivers included with Fedora worked without a hitch. The only thing I had to do was install Vulkan to play F1 2017; other than that it has been plug'n'play with good performance.
> Is it safe to assume that this is being done for Intel before AMD/Nvidia due to the more open nature of Intel's drivers? (I had thought AMDGPU was supposed to be better?)
Not really - it's more that the functionality for copying the firmware's mode configuration into the kernel exists for the Intel gpu driver and nobody's done the work for the others. I spent a while back in 2011 or so looking at doing this for nouveau but never got things working sufficiently well to merge it.
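If I remember right, the knob on the Intel side is the i915 "fastboot" module parameter, which tells the driver to keep the firmware's mode configuration instead of doing a fresh modeset at boot. A quick way to check and enable it:

```
# Check whether fastboot is currently enabled for i915:
cat /sys/module/i915/parameters/fastboot

# Enable it via the kernel command line (e.g. in /etc/default/grub):
#   i915.fastboot=1
```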
Now it'd be nice to add some kind of animated logo to the boot process. Windows has the little spinner, macOS has a progress bar, and even Ubuntu has a five-dots progress bar thing to indicate boot progress. I mocked up an animated logo really quickly here: https://imgur.com/gallery/Zt6K55V (disclaimer: I don't have rights to the logo, it's just a toy test and shouldn't be used for anything serious, etc.)
Some other ideas: a little ball rolling around the infinity logo (maybe eventually rolling back to center and unfolding into the "f"), a pair of "tubes" moving around which occasionally intersect to make the "f" shape, etc. I think this is a good opportunity to do something interesting with the boot screen!
Honestly my vote is to bring back the Beefy Miracle.
Let the mustard once again indicate progress!
(The Plymouth splash for Beefy Miracle had a friendly hotdog with arms waving at you while a mustard squiggle filled itself in up its body. Once the hotdog was fully dressed, you were booted!)
No we don't. Because it is hard, thankless work, and the people doing this work are often harassed endlessly online to the point where nobody actually wants to do the work anymore. And yet, some people are still amazed by how bad things are, completely oblivious to why that might be...
The magician engineers at the core of making things like this just work and be pleasant do not get anywhere near the credit they deserve - this is a herculean task to have Just Work(tm) on white box hardware.
Hans deserves a huge thanks from all of us for slogging through this thankless work.
Can you not default to pretty and have a way to unhide it when necessary? Macs take this approach - they show the Apple logo until the login screen is ready, but if you need to change the boot device or whatever, you can hold down the Option key.
To me, this is an entirely reasonable behavior that optimizes for the 99% of time where you just want your computer to boot quickly and start using it.
Really pleased to see this kind of work being done. My experience of desktop Linux has been on a downward trend in recent years, with so many little graphical glitches and oddities popping up at every opportunity. I believe the root cause is the deep stack of components that it takes to draw on the screen, with little mutual understanding across boundaries, and some black boxes that cannot be understood by OSS devs anyway.
It would be really slick if they could fade in the blue background of the login over time in the boot process, so the only thing that changes in the end is the bios logo swapping out (or a faded swap) to the login entry.
It might require storing the target color as a kernel param, but it might be worth it for the wow factor.
Windows must boot in UEFI mode for it to work. It keeps the same display mode as the firmware left it in, and just animates the spinner in the UEFI framebuffer.
The native display driver kicks in with the login screen. (For some reason, UEFI doesn't initialize my graphics card to the panel's native resolution, so the change with the login screen is visible.)
That depends on your graphics card manufacturer and your motherboard. With my MSI parts I had to get a specially crafted GPU BIOS from an MSI employee to make it all work. I had hoped that situation had improved over the years.
A quick description of the modern Linux boot sequence:
1) Press power
2) Get motherboard/manufacturer logo
3) Switch to a text-based Grub interface
4) Actual Linux kernel starts booting, sometimes flashing some text, sometimes setting a video mode directly
5) Plymouth loads a graphic screen showing an Ubuntu/Fedora/whatever logo, hiding the rest of the kernel loading output
6) The login manager (GDM or something similar) loads, displaying a different background from the previous logo.
Each one of those steps introduces a somewhat jarring "flicker". The new process looks like this:
1) Press power
2) Get motherboard/manufacturer logo
3) Login manager shows up
Personally, I don't care that much about something that only happens during boot. What does bug the crap out of me is how scaling for HiDPI displays is still all kinds of broken and you still get tiny cursors every now and then. Or how some applications decide to ignore which audio sink is currently setup in PulseAudio. Or how battery life is still better in Windows.
I haven't tried desktop Linux in a while, but it used to switch from graphical UEFI mode to text mode then back to graphical. Or sometimes it would switch between different graphical resolutions during boot, causing the monitor to resync.
How is this different from the BetterStartup feature of Fedora 10? I remember being really impressed by the kernel modesetting magic that was going on. No flicker. No resizing.
Have there been regressions because of architecture changes since then?
The video is using Intel's onboard VGA chipset, which plays extra nice with Intel's onboard EFI. You can bet the framebuffer code between the EFI and 3rd-party hardware (AMD/NVidia) sucks much, much worse.
I expect ServerFault posts about graphic instability in Fedora 29 any minute now...
Often enough to matter, given that the preferred update method for OS updates in Fedora increasingly relies on offline updates so that correctness can be enforced.
Especially as Fedora moves towards Silverblue (aka Atomic/OSTree) which uses a content-store, hard-link farm, and pivot-root at boot for atomic upgrades.
Systems using A/B partition flipping (such as ChromeOS) would also benefit from this.
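The user-facing flow on Silverblue shows why atomic, boot-time activation matters; a sketch:

```
# rpm-ostree stages the new tree as hard links into the content store;
# nothing goes live until the next boot pivots into it:
rpm-ostree status    # show current and pending deployments
rpm-ostree upgrade   # stage an upgrade for the next boot
```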
Absolutely everyone boots from cold, a non-zero number of times. If it looks bad, then everyone will see that... and it will be the first thing they see, every single time they do that.
Sounds cool, though I'm having a deeply frustrating facepalm moment that the folks doing graphics integration on Fedora want to work on boot glitz and not unbreaking remote desktop sharing, which still doesn't work out of the box with their Wayland setup.
Different team members work on each of these avenues of work you describe, and their effort isn't fungible.
There are patches by other desktop team members for chrom(ium), Firefox, and WebKit to work with desktop sharing using Wayland/pipewire but it takes time to get upstream to merge them into their products. Fedora 29, I believe, will be shipping with some of them applied so that even various commercial desktop streaming applications work.
Do you know if they're collaborating with the Sway folks? The Sway folks, via wlroots, seem to have their own protocol for surface capturing. It sounds like there is a potential for an adapter that speaks Sway on one side and Pipewire on the other side, and I'm out of my depth here, but I'm also interested in the Wayland + "screen capture/sharing" story becoming more solid, even for those of us that aren't in GNOME.