With Lync Server 2010, Full Site Resiliency (metropolitan data center resiliency) was a supported topology, which looks something like this:
Although this solution provides the highest level of availability, resiliency, and automatic failover, it is complex to deploy and maintain and can only be deployed within the same metropolitan area (town/city) due to a number of additional infrastructure requirements:
- SQL geo-clustering for the back-end services (including a stretched VLAN)
- Synchronous storage-array-level data replication (for the SQL data)
- A low-latency WAN (less than 20 milliseconds round trip)
- High bandwidth between the sites (greater than 1 Gbps available bandwidth)
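The latency requirement above can be sanity-checked from a server in one data center against a server in the other. A minimal sketch using standard Windows PowerShell — the FQDN is a placeholder for a host in the remote site:

```powershell
# Measure average round-trip time to the remote data center.
# Metropolitan resiliency requires less than 20 ms RTT.
# "sql02.contoso.com" is a placeholder hostname, not from the original post.
$pings  = Test-Connection -ComputerName "sql02.contoso.com" -Count 10
$avgRtt = ($pings | Measure-Object -Property ResponseTime -Average).Average
Write-Host "Average RTT: $avgRtt ms (requirement: < 20 ms round trip)"
```

A one-off ping test is only indicative; latency should also be measured under load before committing to this topology.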
Now, with Lync pool pairing, you might think you could stretch your Front-End Pool across two sites with a SQL mirror, something like this:
Although you could deploy a database witness in a third site to handle automatic failover of the SQL mirror, this does not overcome the brick architecture of the Front-End pool: Windows Fabric is used to replicate data between pool members, which is in effect a distributed architecture similar to a majority node set. As a result, if you stretch the Front-End pool across two sites and the WAN between them fails, you will end up with a split-brain pool. Unlike Exchange Server, which has Datacenter Activation Coordination (DAC) mode to overcome split brain, Lync has no automatic equivalent. Do note there is something similar when you have a two-server Front-End Pool, where you can force quorum, but this isn't automatic, and two-server Front-End Pools are not recommended with Lync Server 2013!
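For reference, the manual quorum-loss recovery mentioned above is performed from the Lync Server Management Shell. A sketch — the pool FQDN is a placeholder:

```powershell
# Manually force the surviving Front-End Servers to restart their services
# and recover user data from the Backup Store after the pool loses quorum.
# This is an administrator-driven action, NOT automatic, and can lose
# data that had not yet replicated. Pool FQDN below is a placeholder.
Reset-CsPoolRegistrarState -PoolFqdn "pool01.contoso.com" -ResetType QuorumLossRecovery
```

This is exactly why a stretched pool is dangerous: after a WAN partition, each half would need manual intervention, with no coordinator deciding which side stays active.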
In effect, this is not a supported configuration!
Instead, you should use pool pairing to provide site failover capability (DR), which should look something like the following:
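With paired pools, failover to the backup pool is likewise an administrator-invoked action rather than an automatic one. A minimal sketch of the failover and failback cmdlets — pool FQDNs are placeholders:

```powershell
# Check that Backup Service replication between the paired pools is
# up to date before relying on a failover. FQDNs are placeholders.
Get-CsBackupServiceStatus -PoolFqdn "pool01.contoso.com"

# Fail the affected pool over to its paired pool. -DisasterMode is used
# when the primary pool is completely unreachable.
Invoke-CsPoolFailOver -PoolFqdn "pool01.contoso.com" -DisasterMode

# Once the failed pool has been restored, fail its users back.
Invoke-CsPoolFailBack -PoolFqdn "pool01.contoso.com"
```

Because replication between paired pools is asynchronous, a disaster-mode failover can lose recently written data, which is the trade-off for avoiding the stretched-VLAN and low-latency requirements of the metropolitan design.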
Alternatively, by using two separate database instances you could stretch SQL over two sites; however, this just adds complexity, and you should really use the architecture above rather than the one below and keep it simple!
Credit to Kevin Peters (MCM) for assistance with this answer
 Topologies and Components for Front End Servers, Instant Messaging, and Presence