On July 7 around 10am ET, our Cloud team received alerts from our monitoring system for US customers hosted on one of our database servers. The team began investigating reports of login failures and slow performance. The issue appeared to be caused by an unusually high number of database connections on the server. Attempts to reset the connections and restart the database were unsuccessful, and a restart of the database server itself was ultimately required. We also increased the server's CPU and memory capacity. Performance returned to normal around 1:15pm ET, and web functions resumed normal operation. A spot check of client environments confirmed all were healthy. After monitoring for another 3 hours, we verified that everything was running correctly and marked the incident as fully resolved.