I work in the NHS; this is the current user experience:
1. Type in user name and password onto computer
2. Logs in to Windows, normally taking ~60 seconds, unless you've got a computer where the WiFi signal is poor (yes, WiFi on desktops!) and you get a 'no logon servers available' error
3. Windows finally loads, click the icon for the software for viewing blood test results
4. An Internet Explorer window opens, then closes, then after 10 seconds the software opens
5. Type in your username and password, wait 10 seconds
6. Now I want to prescribe some medications, close the first software (computer can't cope with two things open at once), click the logo for the prescribing software
7. A Google Chrome window opens, slowly loads the prescribing software website
8. Type in username and password
9. Navigate through the slow and unintuitive prescribing software
10. Oh wait, I can't prescribe this particular drug without checking a blood test result, close the prescribing software and go to step 4
11. Some alarm goes off, so I have to lock the computer and run. Return from dealing with the alarm, go back to step 1
1) PC is desperately under-provisioned, probably several years old and upgraded to Windows 10
2) Too large roaming profile (easy situation to get into and hard to spot other than "it's slow")
3) Too slow AD profile server to get the large profile from
4) Internal websites "should" use NTLM authentication if available, which would remove the requirement to log in again, but forwarding this outside the domain properly is remarkably hard
5) Smartcard auth can offer the drop-in use case with no typing, but it's a pain to provision and costs more
In many cases you'd be better off with an IBM 3270 terminal but with higher resolution text and images...
In my previous job we built a Windows CE-based system that let you log in instantly by tapping a keyfob and brought the screen you were last using up anywhere on the site, running on fifteen-year-old hardware. It was for selling beer.
My sister works in the NHS. It's all smartcard based. Also, Kerberos could achieve the same SSO as NTLM, and it's more secure and less complicated in multi-domain setups, since the client does all the hard work rather than servers having to contact remote DCs themselves.
If you have a server that is connected to the entire network via a 100Mbps link, yeah, that's not enough. When multiple clients are hitting it, e.g. during login or while downloading updates from WSUS, it will be painfully slow.
Instead of IBM, just use a Bloomberg terminal. It is as secure as, if not much more secure than, all that security-checklist-ticking healthcare software. For good or for bad, a lot more money is on the line for Bloomberg than for, e.g., a personal data breach in healthcare.
The notion that it should take 40 million pounds to fix this is just so reflective of the disconnect between the bureaucracy and the people that actually get work done that I probably couldn't think of a better example if I tried.
I can think of a couple ways to make that faster for free. Many dozens more for a lot less than a million pounds, forget 40. Arguments about "but corporate / compliance / vendor contracts / red tape" be damned. (Edit: Most of the replies to this comment are exactly this sort of thinking - It's too complicated, there's too many vendors, we can't do it, it's too hard, no one will do it for free. This is the exact kind of lack of political will to get anything done that I'm talking about. It's someone's existing job to do IT, task them with making it better - plugging in the computers to the wired network is a good free first step. Cleaning up the GPOs so Windows login doesn't take 60 seconds (!) is another freebie. Yes, these systems are complicated. That's not an excuse for them to be garbage).
The decadence of IT spend is horrifying. I suspect most of it winds up lining someone's pocket.
This is a pretty simplistic comment that seriously underestimates how complex this stuff is. (Are you Dominic Cummings by any chance?) IT systems like the one in the NHS are huge, have grown organically over time and are mission-critical. You can't just trash one system and start from scratch (witness the Universal Credit debacle in the UK). It's like repairing a ship while it's sailing. Yes, there's quite a bit of money involved, but I can only imagine someone who writes a comment along the lines of "it's easy, just do X" has never touched a system like this, or had to write budgets for it.
I'd also like to see you walk into a situation like this, no matter how much power you had, and see how far 'Arguments about "but corporate / compliance / vendor contracts / red tape" be damned.' gets you.
Ah no, you clearly misunderstand: the NHS doesn't actually use a single system for everything. Which IT systems are used depends on the hospital in question, so the £40M isn't really about the complexity of a single system - it's simply a budget to pay for upgrades to many systems running in parallel with barely any interop whatsoever.
It's free? Does that mean you'll work for free until it's done?
The idea that anything is free is ludicrous. You are either spending time (wages) or money (buying solutions).
Why would £40 million be a lot of money for the IT systems of the entire public healthcare system in the UK? The NHS has a yearly budget of £114 billion, so this particular IT spend is only 0.03% the size of their yearly budget.
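The percentage checks out as simple arithmetic (figures taken from the comment above):

```python
# Back-of-the-envelope: how big is £40M relative to the NHS's yearly budget?
it_spend = 40_000_000            # £40 million for this initiative
yearly_budget = 114_000_000_000  # £114 billion NHS yearly budget

share = it_spend / yearly_budget * 100
print(f"{share:.3f}% of the yearly budget")  # prints "0.035% of the yearly budget"
```

So the precise figure is about 0.035%, in the same ballpark as the ~0.03% quoted.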
There's nothing decadent about this. It's essentially deferred maintenance that is long overdue, probably overdue because the government had to wade through people like you who can't help but complain every time a government organization wants to invest in modernizing.
My original calculation added in an unintended currency conversion (Google assumed that "40 million" meant "40 million US dollars"), and I also meant to clarify that I was talking about this single initiative, not the overall IT budget. I've since edited my number (0.03% instead of 0.02%).
> plugging in the computers to the wired network is a good free first step
Plugging into wired ethernet is only free if: you have a wired port near the computer; it connects (properly) to an available port on a network switch; you have patch cables onsite; you have free labor to connect them (including labor that can access the network closet if needed); and you have free labor to configure the computer to prefer wired over wireless, if necessary.
Ideally, it's not a lot of labor, and there's likely to be some switch ports available and some patch cables available, but there's also probably a reason the computers are using wifi if they are.
You can argue this is probably part of someone's job already, but if they aren't doing it because they have too much other stuff to do, you'll need to pay someone else to do it.
I do feel that you haven't understood the complexity of the landscape. Not of the vendors, because in theory we could just impose standards of interoperability upon them with RFC like documents. But the complexity of NHS providers. There are thousands of GPs, hundreds of NHS trusts, and they've all got to talk to each other while providing strong access controls and audits.
EDIT: I don't normally ask about downvotes, but I'm curious about the votes in this post, and I'd be grateful for any explanation about downvoting.
Their pay is 30% below market, so they wind up blowing even more money on contractors. I’m pretty sure increasing pay will save them a chunk of change, but the NHS ties itself up in knots to prevent sensible changes.
They've been having some trouble recruiting and retaining staff.
And yet I didn't see any obvious job openings. I didn't see an obvious way of checking civilservicejobs.service.gov.uk as neither NHSX nor National Health… turned up any results when plugging them into the organization filter. It's an interesting mission, but if they're not hiring then I can only imagine they're starved for funding or they've successfully retained enough folks.
> if they're not hiring then I can only imagine they're starved for funding
The overall context is a government that has been openly antagonistic to the NHS for at least a decade, pushed for disastrously expensive "privatisation by stealth" efforts [0][1][2], and is now effectively trying to outsource the digital side of NHS operations to private companies [3]. Strategically starving certain departments is par for the course.
If the goal is to "starve the beast", then NHSX isn't really "having trouble" with retention, are they?
I agree with everything you say, but GPs have always been private companies operating under NHS GP contracts, and Babylon Health isn't at all unusual in that regard.
They are a very large provider, and that's causing problems for their hosting CCG.
None of this detracts from your point: the NHS has been deliberately under-funded for years, and the Conservatives are pushing for non-NHS providers to provide services.
You are dealing with multiple vendors, who all bring their layers of customer support and sales people before you even get an estimate of the required change.
Contracts might include processes that must be followed for any modification so nobody gets sued afterwards.
The developers who implemented the software might not even be with the contractor anymore. So you pay for training someone else, too.
This is an unwarranted slight on CfH, which _did_ produce lots of useful work.
Unfortunately it was far too ambitious and effectively killed private development of healthcare software for the U.K. market in the process, making (for an example relevant to one area of my expertise) dose-based eprescribing a pipe dream despite having produced a ton of useful standards.
Taking a lead from GDS and starting to create user-friendly, predictable applications with a shared design language and reusable components (like SSO, email/SMS/letter sending) would be a step in the right direction.
They've already started, but it will take a long, long time.
What you're describing sounds most like a) badly-written login scripts, or b) misconfigured DNS confounding the client's domain controller locator, or c) both. IME the folks who tinker with login scripts don't tend to think about execution time or failure modes, and it's tough to get a good developer assigned to that kind of work--and the MIS folks are unlikely to invite that kind of attention to their works.
I've worked on a widely used NHS IT system and specifically looked at login times for it.
We had an issue at one site where instead of taking 30 seconds to log in it took over 20 minutes. Our JDBC driver was using the default fetch size and the network latency was killing it.
As you may be able to tell, that had nothing to do with authentication - it was a thick client that downloaded a bunch of reference data at login. I left that role well over a decade ago but I've been back into a hospital in the last year or so and it was still in use, in fact, I couldn't see that it had changed at all.
Good luck to them, it's money well spent, but I doubt £40m is going to fix much.
No, the data was necessary (though there are a stack of different optimisation strategies they could have pursued with varying trade offs).
Fetch size is a standard JDBC parameter for tweaking the number of rows paged in per database query. The Sun/Oracle default was 10 rows and the app needed to retrieve thousands. So it would retrieve 10, process them, retrieve 10 more, and so on. That's minimal overhead on most networks, but on a network with high latency it meant perhaps a couple of seconds' delay with each fetch.
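A sketch of the arithmetic (the row count and per-round-trip latency below are illustrative assumptions, not measurements from that site):

```python
# Rough model of JDBC fetch-size cost: one network round trip per fetch.
rows_needed = 10_000  # app downloads thousands of reference rows at login (assumed)
latency_s = 0.1       # per-round-trip delay on the slow link (assumed)

def login_delay(fetch_size: int) -> float:
    """Total time spent on fetch round trips alone."""
    round_trips = -(-rows_needed // fetch_size)  # ceiling division
    return round_trips * latency_s

print(login_delay(10))    # default fetch size: 1000 round trips -> 100.0 seconds
print(login_delay(1000))  # larger fetch size:    10 round trips ->   1.0 seconds
```

The same query goes from minutes to seconds purely by raising the fetch size, which is why the fix was so cheap once the cause was found.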
Sadly the only login system I’ve seen that truly made a positive difference in these types of organisations has been discontinued for years.
Sun had SunRay terminals with smartcards that allowed users to seamlessly move from terminal to terminal. It was a solution that would have been revolutionary if Microsoft had implemented it in Windows. Being Solaris-only meant that only a few of us ever saw what could be done, and how easy logins and moving from terminal to terminal could be.
With the SunRays you'd log in to everything, pull your card, and your session and applications would still be running on the Solaris server in the basement. Put the card back in and your session and applications would be right back where you were. So unless applications automatically logged you out after some time, there would be no reason to log in again.
Sure you’d still not have SSO, but you could just let everything running, logged in, in your session on the server.
* Nobody uses windows terminal server for anything serious, because of its reputation for security holes (which may or may not be outdated, but y'know, "java is slow" etc etc).
* Nobody likes Citrix, even (or particularly) when they use it every day. The amount of compromises and hoops that app developers have to consider to deploy on it, is significant.
> Nobody uses windows terminal server for anything serious, because of its reputation for security holes (which may or may not be outdated, but y'know, "java is slow" etc etc).
Entire companies run on remote desktop. It's the industry standard, at least here in Germany. I'm working with a lot of enterprise customers and I never heard about particular security concerns with RDP. If anything, the protocol has an excellent security track record.
135 acute non-specialist trusts (including 84 foundation trusts)
17 acute specialist trusts (including 16 foundation trusts)
54 mental health trusts (including 42 foundation trusts)
35 community providers (11 NHS trusts, 6 foundation trusts, 17 social enterprises and 1 limited company)
10 ambulance trusts (including 5 foundation trusts)
7,454 GP practices
853 for-profit and not-for-profit independent sector organisations, providing care to NHS patients from 7,331 locations
Virtual desktops are a whole different ball of horror. Sometimes giving everyone a plain old PC is a ton cheaper than figuring out how to troubleshoot the super expensive nightmare server that has to handle the load of replacing everyone's PCs.
Maybe, but I've never seen the solution implemented on Windows. I've worked for multiple organisations where it would have made more sense than deploying individual desktops, but I've never seen it done, at least not with a thin client. I've seen and used SunRays.
Confirming that Windows' Terminal Services works the same way using the RDP protocol. I worked at an organisation several years ago that had WYSE thin terminals which connected to a Windows server over RDP. If you logged in at another machine it would connect to your existing session as-is. Even certain people with full PCs (like myself when I worked in a video editing role) would sometimes use a desktop RDP client to log into the system for certain tasks because it was sometimes faster than running natively. Pretty useful, as you noted with the SunRays! The only downside was that at the time many remote offices had poor ADSL connectivity and the head office had rather limited bandwidth. When everyone was logged in and actively working, the bandwidth at either end was usually maxed out to the point that performance was sometimes very poor, especially for certain remote offices. It didn't help that the thin clients didn't support all the applications that certain staff needed, so the system was both popular and very unpopular depending on what people's roles were.
You only needed Solaris for the server directly handling the sun rays. The host handling the actual sessions could be Linux, that's how my university was running most things when I started in 2001.
somewhat unrelated, but... I had an interaction with a state agency IT dept. I do some contract work on a project for a state agency, and we're being forced to move all server/hosting to in-house state IT. Our team was given logins, and on an initial run-through with the PM, he said "it's a little slow, but it may just be my login". Well... mine was slow too - SSHing into the system brings up a 20-40 second pause, then a password prompt, then... another 20-40 seconds before a shell comes up.
I did some digging - there are errors in /var/log/messages and /var/log/secure (RHEL 7), and... most of this is caused by an initial hang while trying to reach an ldap server which doesn't exist - it just hangs and times out.
I pointed this out to their tech people, asking for some assistance, and had multiple back-and-forths where they kept saying "we're seeing you logged in". I had to keep replying "the problem is it takes 65-70 seconds to log in - this surely can't be normal". No one ever said it was normal, but one guy wrote privately and said, more or less, "we deal with a lot of systems, and have standard setups. it's better to just have everyone use the standard configs for all systems, even if they're a bit buggy, but not completely broken".
I'm not even sure if I should be surprised by this, but... it was certainly disheartening.
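For reference, that kind of SSH login hang is classically caused by NSS/PAM consulting an LDAP source that never answers, so each lookup waits out a full connect timeout. A hedged sketch of the sort of mitigation involved (the option names are from nss-pam-ldapd's nslcd.conf; the exact files and knobs vary by distro and configuration):

```
# /etc/nsswitch.conf -- drop the dead source, or at least consult local files first
passwd: files ldap
group:  files ldap

# /etc/nslcd.conf -- cap how long a lookup can hang on an unreachable server
bind_timelimit 3
timelimit 5
```

Even when the broken server can't be removed, bounding the timeouts turns a 60-second hang into a few seconds.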
My SO works in healthcare in the USA. Logging in to the VA takes about 3-5 minutes. To do some research she often has to double-RDP into a special machine that's allowed to see her anonymized datasets, inception-style. I'm a research engineer with lots of computational skills in a different field, but I can never help with a quick Python/scipy script because I'm not allowed to get near the anonymized data. It's frustrating because she could be at least 10x more productive, especially on the data processing side, with a better IT situation. Most doctors don't know Python, but the ones doing research could use it.
I love that we've gotten down to the point we're no longer even pretending that we're going to consolidate the mess that is the NHS IT system. Much better that we just make sure it's easy to log in to the 57 different services than to actually rationalise the systems in the first place.
Hilarious that this is a news story with zero content.
So they are spending 40 million to consolidate AD. Translation: they are probably hiring a dozen consultants, buying Quest and Imprivata, and moving one department of a hospital in 4 years. They may do a demo of Azure AD to check the AI box.
I think you're wrong on both counts - it's coming to multiple trusts (several friends and family have mentioned it), and despite the timing of the article is actually already in production or currently rolling out - I guess they forgot to release it before now.
If that’s the case, I bet it’s the implementation phase of a bigger project. That scope is too big for that amount of money in a government healthcare setting.
With all the disparate systems an NHS member of staff needs to use, they really need a robust SSO and Context Management solution. Even with a wall-to-wall EPR like EPIC there are some huge gaps that other systems need to plug, hence multiple logins and extended waits before a user is productive.
I've looked at NHS login [1] as a developer. The process is a mess of filling out forms, waiting months for reviews (the service is being rationed like many things in the NHS), answering questions about political imperatives and executive sponsorship, requests for more reviews and paperwork, etc. It's no wonder their previous attempts at this have failed; it's simply easier to meet budgets and deadlines (read: funding cycles) by rolling your own auth.
If this next one is going to work, they need to consider the developer experience as well as the end users.
I have tried to investigate a similar (if much smaller in scale) problem at one job. Our logins were just taking entirely too long. Granted, we had a notoriously "it's on your end" unhelpful parent IT department to work with, but I found just trying to understand the list of steps that go on in the background for a single Windows login to finish to be something of an occult science. Finding the tools to measure the time each step takes was almost impossible; I had hints for a few steps there, a suggestion for one step over here, and so on, but nothing complete.
Even an empty profile on a relatively new, recently wiped computer was painful.
I loathe every login screen except for Google's. Why is it that only one company in the world (and sshd but there's no login screen when set up properly) can get it right? Is it really a piece of software that requires hundreds of millions of dollars to get right?
And people who try to fix it so often make things worse. I was very excited about notion.so until I learned I will have to keep clicking magic links for eternity. (Yeah, I know, they support logging in via Google which reinforces my point.)
Odd. I dislike Google's very much. Although I use Google's with many accounts (personal/work/partner), sometimes URLs will just assume my identity is "logged-in account index 0" and give me a permission denied, or pass me to a page which has no way of changing the active account. I found that most URLs can have "u/1/" inserted pretty much anywhere to change accounts, but if they have rewritten the URL before I see the permission denied or the black hole, then it's useless.
There’s also the fact that google recently forced Javascript and specific browsers to be used in order to be able to login. So I was locked out of my account for a while until someone discovered that the “rules” are a little more lax for Firefox useragents.
To add to this: I'm certainly no Microsoft fanboy, but the best login I have experienced (in my life, I think) is Azure AD with SAML. It seems to use Kerberos silently in the background to do the SAML handshake. It's truly seamless and very fast.
Even if I'm logged in to Outlook from Office 365 (from a non-enrolled computer), I won't even be presented with another login screen. I'll just be logged in like "magic".
> There’s also the fact that google recently forced Javascript and specific browsers to be used in order to be able to login. So I was locked out of my account for a while until someone discovered that the “rules” are a little more lax for Firefox useragents.
There's a ton of threat mitigation and detection of unusual activities going on behind the scenes. This is a big reason why my company uses GSuite SSO - it's basically impossible to achieve a similar level of security with a DIY SSO setup. Auth is very hard to get right with all corner cases considered.
JS trickery is key to detecting bots, and blocking super-outdated browsers like Konqueror that basically lack all modern security mitigations is a reasonable thing to do (and probably allows them to remove less strict fallbacks for those browsers that were previously abused by bad actors).
> blocking super-outdated browsers like Konqueror that basically lack all modern security mitigations is a reasonable thing to do
Except they still allow browsers far more insecure, like IE. And they could do feature detection to see if the security features are implemented or not.
Blocking user-agents does nothing good for anyone.
Yeah, Google's SSO is fine unless the first logged-in account (/u/0) is one with a redirect to a company portal login. At which point you have to manipulate the URL to get to a different account.
When logging in to Gmail (G Suite really) for work, I notice that I'm briefly redirected to accounts.youtube.com after putting in the OTP, even though the account has no YouTube association. Any ideas why? I've tested blocking the domain and it still lets me through (after hitting back after it fails to load). Is this just "because Firefox"?
From what I can tell, it's setting some kind of cookie for YouTube as part of the standard Google login flow. They probably have things set up in such a way that authentication data is only accessible from first-party websites (via cookies).
I can see 11 cookies on the Youtube.com domain in my browser. Some of those seem to be purely for authentication reasons. I'm pretty sure that's why Google flows through Youtube when you log in.
As for why they always try to set YouTube cookies: I don't know, but it's probably because they don't feel like splitting up their authentication flow. Doing the redirect even for accounts that don't have anything YouTube-related is probably a no-op; leaving it like that doesn't cost anything close to the time and resources it would take to segregate the entire workflow and verify that the security is still sound.
> I use Google’s with many accounts (personal/work/partner)
You don't mention your use case but it's almost always a bad idea to be logging in as someone else, not just at Google but everywhere. Some sort of access delegation would probably be much better.
As for a work account, I keep it in a separate Chrome profile. Not because of logging in but because you can sync everything without risking spillage.
At this rate, software developers are going to get worse reputations than lawyers before too long.
One app I use every day at work makes me provide my email address every day, then log in. Then, for some unknown reason, the app passes authentication to some kind of Microsoft federation service, which asks for my email address again, plus a password.
Prior to their recent 'redesign' it was perfectly streamlined before they "refreshed" the app with a brand new bullshit UI that just made everything worse. I literally can't think of a positive that came out with the new version, it's worse in every conceivable way.
Personally I think someone scammed the company into redesigning something that didn't need it, subcontracted it out to the lowest bidder and pocketed a nice stack of cash for themselves.
I despise Google's login screens. Two separate forms, one for just an email address another for just a password. Why? Why should I have to invoke my password manager twice just to fill in two fields? I can't think of a single sane reason for doing this.
This is because certain email addresses have SSO enabled and don't use a password. The only way to know is to get the username first and then redirect to the right place.
Not only SSO, but also two-factor authentication. Without asking for the user identifier first, you don't know which authentication method to present, because this varies per user account.
The alternative is to let the user choose the authentication method right on the first screen, but that would mean showing all possible methods (some of which may be limited to certain groups of accounts) and getting a lot of users stuck because they chose the wrong method.
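The identifier-first flow described above can be sketched in a few lines (a toy illustration; the account table and method labels are invented, not Google's actual implementation):

```python
# Toy identifier-first login router: the server must see the username
# before it knows which authentication step to present.
ACCOUNTS = {
    "alice@corp.example":  "saml_sso",     # SSO-managed domain, no password held
    "bob@mail.example":    "password",     # standard password account
    "carol@mail.example":  "password+2fa", # password, then a second factor
}

def next_auth_step(email: str) -> str:
    """Return the auth method to show after the user submits their email."""
    return ACCOUNTS.get(email, "password")  # default for unknown accounts

print(next_auth_step("alice@corp.example"))  # saml_sso
print(next_auth_step("bob@mail.example"))    # password
```

The password field can't be shown on the first screen because, for `alice`, there is no password to collect at all.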
So leave the password field empty if you want to use SSO, but keep it for standard Google accounts, so at least people using those don’t have to deal with the madness?
The reason they do this is SSO logins for other organizations.
For example, to log into my college Gmail account, I type my email, get redirected to the school's SSO page (retype my username and password), then get redirected back to Gmail.
Both Chrome and Firefox handle that form just fine. In my case it's four steps: login, password, hardware key, and clickthrough. It is fast and painless though.
Plus if you have a bunch of tabs open (G Suite), it will log you in all of them once you log in on one.
Oh god... how can a human being stand this humiliation? I imagine this inflating to eight steps by 2030: login, password, 2FA key, face scan, reCAPTCHA, ID scan, phone call, a checkbox for an AI TOS no one reads, and a final click on "We're done!"
I work for a system integrator that also offers Identity and Access Management systems. Some of our offerings have both modern directory services and SSO, plus multi-factor authentication with access control policies, like SAASPASS. I don't understand how this deployment would cost 40 million pounds even with IAM consulting and training.
It would be good to see the actual breakdown of the spending as it probably includes other spending as well. If not, then it is questionable spending.
I suspect you don't grasp the sheer size of the NHS: 2.15 million staff (of whom 1.4M are doctors, nurses, and medical specialists), at least 350 hospitals (depending on definitions)... in terms of employment, it's the fifth largest employer in the world. This isn't about a single, universal, organization-wide SSO system: there are a bunch of systems, and it's a given that many of them are obsolescent, underpowered legacy systems, including exotic or proprietary kit.
Think of it in terms of SSO for an organization the size of the entire US Armed Forces and you'll begin to grasp the scale of the problem.
I think you VASTLY underestimate the size of US Armed Forces in terms of what's required for authentication, authorization, and access control.
US DOD has well over 4 million active duty, civilian employees, contractors, etc. Though numbers are hard to pin down, since the number of contractors is in many cases not what the contract pays for (e.g. firm-fixed-price and service-based contracts like IDIQ, where the contract holder dedicates cleared staff but headcounts are otherwise hard to know; compared to time-and-materials/"butts-in-seats" contracts, where the Government knows the numbers per contract).
The US DOD attempted to move everything to MS AD at one point but hit a limitation on the number of objects AD lets you create (~2bn).