You're not far off: the batteries are (probably) made of lithium.
Also, why batteries in a datacenter? When you implement a flush() command at the lowest level you're faced with two choices: 1) actually write to disk, then return from the call, 2) write to some cache/RAM and have just enough battery locally to ensure that you can write it to disk even if all power goes out.
Then there's the other problem of surviving long enough between a power interruption and diesel generators starting up. But this is a smaller problem, rebooting all instances in a datacenter is less bad than losing some data that was correctly flush()ed by software. Bad flush() behaviour can result in errors that cannot be recovered from without a complicated manual intervention (for example if it causes corrupted and unreadable database files).
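Choice (1) above is what a durable write looks like in practice: return only after the OS reports the data is on stable media. A minimal sketch in Python, assuming a POSIX-style system; the `durable_write` name and the file path are illustrative, not any particular database's API:

```python
import os
import tempfile

def durable_write(path, data: bytes) -> None:
    # Choice (1): actually write to disk, then return from the call.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # block until the kernel says the data reached stable storage
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "journal.log")
durable_write(path, b"committed\n")
print(open(path, "rb").read())  # b'committed\n'
```

Choice (2) is what a battery-backed RAID controller or drive cache does: it acknowledges the fsync() immediately and relies on its local battery to finish the physical write if power is lost. Either way, the contract to the application is the same: once flush() returns, the data must survive a power cut.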
The batteries in the datacenter are simply there to hold the power until the generators are all up and running, and the phases are in sync.
They create 3 separate arrays of batteries in each bank. Each array represents a power phase (A, B, C, if I remember correctly); each array has a number of low-voltage, 2000 A batteries connected in series to make up a 2000 A, 480 V leg on the other end.
In a Tier 4 plus-one datacenter, they have 4 battery rooms and 4 generators for each data pod. You have a primary generator and UPS battery set, and a backup generator set for each pod. That backup generator set in turn has its own primary and secondary battery sets. The end result is that they can work on any piece of equipment without interrupting power: if they lose the primary set, or need to take it offline for maintenance, they have the whole redundant secondary set to fall back on.
The servers on the receiving end of the power cord, after it passes the switchgear, never know that there has been a power source change on the other end.
Everything serious in the telecom/ISP infrastructure sector has a big -48VDC battery plant, or preferably separate A and B side -48VDC battery plants, to provide a significant buffer between power going Grid --> AC-to-DC Rectifiers --> Equipment, and when a generator can start up, warm up, and transfer switch does its job.
Even if a bunch of servers don't have any UPS or battery backup because they're designed to tolerate individual node (or whole rack, or whole row) failures, the core network equipment in a datacenter will still have a huge battery plant.
Ideally, if you have a chilled water loop for cooling, you do not want it anywhere near your big-ass racks of batteries. Or near the racks that contain the rectifiers, DC breakers, and distribution bus bars.
If you look at the battery racks in a traditional telco CO in the US for instance you will see that all of the cabling and batteries are a minimum of 1 foot off the floor, so that the whole place could theoretically flood and the DC distribution would remain unaffected. Same principle that applies to very traditional setups with wet-cell 2V lead acid batteries also applies to more modern things if building from scratch.
Very different trade-offs in play for Google, who run with a relatively high tolerance for failure at the individual machine or even rack level. At one point I believe there were batteries in every rack, though I don't know what they're building these days. A telco DC is gonna have more network interconnect with a lower tolerance for failure, due to a capacity impact that isn't easy to double.
Think like a fiber termination demarc vs an in-cluster mesh.
What I was saying above is that the 'core' of a Google DC has a massive amount of network interconnect, and its need for battery backup is not very different from a big IX point or a traditional "primary CO" for a city in a telco environment.
By square footage, maybe 95% of a Google DC might have no UPS or battery backup, but the core network equipment like routers and DWDM transport absolutely will.
If they were unlucky enough that a burst cooling loop met the battery plant for the core gear in a building, or small campus of buildings...