MySQL has had first-class replication and failover built into it for years. You deploy a server, enable binlogs, clone the server (Percona's xtrabackup can live-clone a server locally or remotely) and start the new instance. Then you point the replica at the master using 'CHANGE MASTER TO <host> ...' and it starts pulling binlogs and applying them. On the master, you can repeat this last step, making the replica its master. This gives you master-master (circular) replication. And it just works. There are other tools you can use to detect failure of the current master and switch to another, but that part is up to you.
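The replica setup described above looks roughly like this. This is a sketch, not a copy-paste recipe: the hostname, credentials and binlog coordinates are placeholders, and the real coordinates come from SHOW MASTER STATUS on the master (or from the xtrabackup output).

```sql
-- On the replica: point it at the master and start pulling binlogs.
-- Host, user, password and coordinates below are placeholder values.
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '...',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;

-- For master-master: run the same statement on the original master,
-- pointed at the replica's host and the replica's binlog coordinates.
```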
There are also solutions like MySQL Cluster and Galera which provide a more cluster-like setup with synchronous replication. If you've got a suitable use case (low writes, high reads and no gigantic transactions) this can work extremely well. You bootstrap a cluster on node 1, and new members automatically take a copy of the cluster data when they join. You can run 3- or 5-node clusters (odd sizes, so quorum works), and reads can be distributed across all nodes since replication is synchronous. Beware though: operating one of these things requires care. I've seen people suffer read downtime or full cluster outages by doing operations without understanding how they work under the hood. And if your cluster is hard-down, you need to re-bootstrap from the node with the most recent transaction or you can lose transactions.
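For the Galera case, the "pick the node with the most recent transaction" recovery roughly goes like this. This is a sketch; exact paths and service commands vary by distro and Galera/MariaDB version.

```shell
# Recovering a hard-down Galera cluster:
# 1. On each node, find the last committed transaction:
mysqld --wsrep-recover
#    -> logs a line like "Recovered position: <cluster-uuid>:<seqno>"
# 2. On the node with the highest seqno, mark it safe to bootstrap:
#    edit /var/lib/mysql/grastate.dat and set "safe_to_bootstrap: 1"
# 3. Bootstrap the cluster from that node:
galera_new_cluster
# 4. Start mysqld normally on the remaining nodes; they rejoin and
#    sync from the bootstrapped node (via IST or a full SST).
```

Bootstrapping from a node with a lower seqno is exactly how transactions get lost, which is why Galera makes you set safe_to_bootstrap explicitly.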