tag:www.githubstatus.com,2005:/historyGitHub Status - Incident History2026-03-26T06:40:17ZGitHubtag:www.githubstatus.com,2005:Incident/292391642026-03-24T20:56:05Z2026-03-24T20:56:05ZDisruption with some GitHub services<p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>20:56</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>20:38</var> UTC</small><br><strong>Update</strong> - We are investigating elevated error rates affecting multiple GitHub services including Actions, Issues, Pull Requests, Webhooks, Codespaces, and login functionality. Some users may have experienced errors when accessing these features. Most services are now showing signs of recovery. We'll post another update by 21:00 UTC.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>20:23</var> UTC</small><br><strong>Update</strong> - Issues is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>20:23</var> UTC</small><br><strong>Update</strong> - Pull Requests is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>20:20</var> UTC</small><br><strong>Update</strong> - Webhooks is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>20:18</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p>tag:www.githubstatus.com,2005:Incident/292356442026-03-24T19:51:16Z2026-03-24T19:51:16ZTeams GitHub Notifications App is down<p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>19:51</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>18:50</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability from Azure Teams APIs, which is impacting notifications from GitHub to Microsoft Teams. We are awaiting resolution from Azure.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>17:43</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability from Azure APIs, which is impacting notifications from GitHub to Microsoft Teams. We are working with Azure to resolve the issue.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>17:09</var> UTC</small><br><strong>Update</strong> - We found an issue impacting notifications from GitHub to Microsoft Teams. 
We are working on mitigation and will keep users updated on progress.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>16:59</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/291794382026-03-22T10:02:05Z2026-03-22T10:02:05ZDisruption with some GitHub services<p><small>Mar <var data-var='date'>22</var>, <var data-var='time'>10:02</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'>22</var>, <var data-var='time'>09:27</var> UTC</small><br><strong>Update</strong> - We are investigating intermittently high latency and errors from Git operations.</p><p><small>Mar <var data-var='date'>22</var>, <var data-var='time'>09:08</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/291232672026-03-20T01:58:40Z2026-03-25T14:54:45ZDisruption with Copilot Coding Agent Sessions<p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>01:58</var> UTC</small><br><strong>Resolved</strong> - On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests with significant retry amplification. Both incidents were caused by the same underlying system authentication issue that prevented the service from connecting to its backing datastore.<br /><br />We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. The mitigation time was 01:24. The second occurrence was due to an incomplete remediation of the first.<br /><br />We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>01:26</var> UTC</small><br><strong>Update</strong> - We are rolling out our mitigation and are seeing recovery.</p><p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>01:00</var> UTC</small><br><strong>Update</strong> - We are seeing widespread issues starting and viewing Copilot Agent sessions. We understand the cause and are working on remediation.</p><p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>00:58</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/291140792026-03-20T00:05:15Z2026-03-20T00:05:15ZGit operations for users on the west coast are experiencing an increase in latency<p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>00:05</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. 
A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>00:05</var> UTC</small><br><strong>Update</strong> - We have reached stability with git operations through our changes deployed today.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>23:52</var> UTC</small><br><strong>Update</strong> - We are seeing early signs of improvement. We are working on one more small change to further improve traffic routing on the west coast.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>22:57</var> UTC</small><br><strong>Update</strong> - We have completed the rollout of our new network path and are monitoring its impact.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>21:59</var> UTC</small><br><strong>Update</strong> - We are beginning the rollout of our new network path. During this change, users will continue to see higher latency from the west coast. We will provide another update when the rollout is complete.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>18:27</var> UTC</small><br><strong>Update</strong> - We are working to enable a new network path on the west coast to reduce load and will monitor the impact on latency for Git Operations</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>17:49</var> UTC</small><br><strong>Update</strong> - We are still seeing elevated latency for Git operations on the west coast and are continuing to investigate</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>17:01</var> UTC</small><br><strong>Update</strong> - We are redirecting traffic back to our Seattle region and customers should see a decrease in latency for Git operations</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>16:25</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Git Operations</p>tag:www.githubstatus.com,2005:Incident/291109992026-03-19T14:32:55Z2026-03-25T15:39:32ZIssues with Copilot Coding Agent<p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>14:32</var> UTC</small><br><strong>Resolved</strong> - On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests with significant retry amplification. Both incidents were caused by the same underlying system authentication issue that prevented the service from connecting to its backing datastore.<br /><br />We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. The mitigation time was 01:24. 
The second occurrence was due to an incomplete remediation of the first.<br /><br />We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>14:06</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>14:02</var> UTC</small><br><strong>Update</strong> - We are investigating reports that Copilot Coding Agent session logs are not available in the UI.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>13:45</var> UTC</small><br><strong>Update</strong> - Copilot is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>13:44</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/290988322026-03-19T02:52:44Z2026-03-25T15:40:07ZDisruption with Copilot Coding Agent sessions<p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>02:52</var> UTC</small><br><strong>Resolved</strong> - On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests with significant retry amplification. Both incidents were caused by the same underlying system authentication issue that prevented the service from connecting to its backing datastore.<br /><br />We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. The mitigation time was 01:24. The second occurrence was due to an incomplete remediation of the first.<br /><br />We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>02:46</var> UTC</small><br><strong>Update</strong> - We have rolled out our mitigation and are seeing recovery for Copilot Coding Agent sessions</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>02:25</var> UTC</small><br><strong>Update</strong> - We are seeing widespread issues starting and viewing Copilot Agent sessions. We have a hypothesis for the cause and are working on remediation.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>02:05</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/290949362026-03-19T01:44:01Z2026-03-19T01:44:01ZDisruption with some GitHub services<p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>01:44</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. 
A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>01:43</var> UTC</small><br><strong>Update</strong> - We are seeing recovery in git operations for customers on the West Coast of the US.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>00:56</var> UTC</small><br><strong>Update</strong> - We continue to investigate the slow performance of Git Operations affecting the US West Coast.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>00:10</var> UTC</small><br><strong>Update</strong> - We continue to investigate degraded performance for git operations from the US West Coast.</p><p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>23:33</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate degraded performance for git operations from the US West Coast.</p><p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>22:48</var> UTC</small><br><strong>Update</strong> - We are experiencing increased latency when performing git operations, especially large pushes and pulls from customers on the west coast of the US. We are not seeing an increase in failures. We are continuing to investigate.</p><p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>22:36</var> UTC</small><br><strong>Update</strong> - Git Operations is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>22:36</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/290905962026-03-18T19:46:38Z2026-03-19T21:15:58ZWebhook delivery is delayed<p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>19:46</var> UTC</small><br><strong>Resolved</strong> - On March 18, 2026, between 18:18 UTC and 19:46 UTC all webhook deliveries experienced elevated latency. During this time, average delivery latency increased from a baseline of approximately 5 seconds to a peak of approximately 160 seconds. This was due to resource constraints in the webhook delivery pipeline, which caused queue backlog growth and increased delivery latency. We mitigated the incident by shifting traffic and adding capacity, after which webhook delivery latency returned to normal. We are working to improve capacity management and detection in the webhook delivery pipeline to help prevent similar issues in the future.</p><p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>19:25</var> UTC</small><br><strong>Update</strong> - We are seeing recovery and are continuing to monitor the latency for webhook deliveries</p><p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>18:51</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Webhooks</p>tag:www.githubstatus.com,2005:Incident/290349762026-03-16T15:28:22Z2026-03-17T22:49:31ZErrors starting and connecting to Codespaces<p><small>Mar <var data-var='date'>16</var>, <var data-var='time'>15:28</var> UTC</small><br><strong>Resolved</strong> - On 16 March 2026, between 14:16 UTC and 15:18 UTC, Codespaces users encountered a download failure error message when starting newly created or resumed codespaces. At peak, 96% of the created or resumed codespaces were impacted. Active codespaces with a running VSCode environment were not affected. 
<br /><br />The error was a result of an API deployment issue with our VS Code remote experience dependency and was resolved by rolling back that deployment. We are working with our partners to reduce our incident engagement time, improve early detection before they impact our customers, and ensure safe rollout of similar changes in the future.</p><p><small>Mar <var data-var='date'>16</var>, <var data-var='time'>15:27</var> UTC</small><br><strong>Update</strong> - Errors starting or resuming Codespaces have resolved.</p><p><small>Mar <var data-var='date'>16</var>, <var data-var='time'>15:06</var> UTC</small><br><strong>Update</strong> - We are investigating reports of users experiencing errors when starting or connecting to Codespaces. Some users may be unable to access their development environments during this time. We are working to identify the root cause and will implement a fix as soon as possible.</p><p><small>Mar <var data-var='date'>16</var>, <var data-var='time'>15:01</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/289681052026-03-13T16:15:33Z2026-03-17T13:27:46ZDegraded performance for various services<p><small>Mar <var data-var='date'>13</var>, <var data-var='time'>16:15</var> UTC</small><br><strong>Resolved</strong> - On March 13, 2026, between 13:35 UTC and 16:02 UTC, a configuration change to an internal authorization service reduced its processing capacity below what was needed during peak traffic. This caused intermittent timeouts when other GitHub services checked user permissions, resulting in four to five waves of errors over roughly two hours and forty minutes. In total, 0.4% of users were denied access to actions they were authorized to perform. <br /><br />The root cause was a resource right-sizing change deployed to the authorization service the previous day. It reduced CPU allocation below what was required at peak, causing the service's network gateway to throttle under load. Because the change was deployed after peak traffic on March 12, the reduced capacity wasn't surfaced until the next day's peak. <br /><br />The incident was mitigated by manually scaling up the authorization service and reverting the configuration change. <br /><br /> <br />To prevent recurrence, we are adding further resource utilization monitors across our entire stack to detect throttling and improving error handling so transient infrastructure timeouts are distinguished from authorization failures, enabling quicker detection of the root issue.</p><p><small>Mar <var data-var='date'>13</var>, <var data-var='time'>16:02</var> UTC</small><br><strong>Update</strong> - We have deployed mitigations and are actively monitoring for recovery. We'll post another update by 17:00 UTC.</p><p><small>Mar <var data-var='date'>13</var>, <var data-var='time'>15:47</var> UTC</small><br><strong>Update</strong> - We are investigating intermittent performance degradation affecting Actions, Feeds, Issues, Package Registry, Profiles, Registry Metadata, Star, and User Dashboard. Users may experience elevated error rates and slower response times when accessing these services. We have identified a potential cause and are implementing mitigations to restore normal service. We'll post another update by 16:15 UTC.</p><p><small>Mar <var data-var='date'>13</var>, <var data-var='time'>15:20</var> UTC</small><br><strong>Update</strong> - Packages is experiencing degraded performance. 
We are continuing to investigate.</p><p><small>Mar <var data-var='date'>13</var>, <var data-var='time'>15:14</var> UTC</small><br><strong>Update</strong> - We are investigating reports of issues with service(s): Actions, Feeds, Issues, Profiles, Registry Metadata, Star, User Dashboard. We will continue to keep users updated on progress towards mitigation.</p><p><small>Mar <var data-var='date'>13</var>, <var data-var='time'>15:12</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions and Issues</p>tag:www.githubstatus.com,2005:Incident/289426492026-03-12T18:53:33Z2026-03-16T20:39:15ZDegraded Codespaces experience<p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>18:53</var> UTC</small><br><strong>Resolved</strong> - On March 12, 2026, between 01:00 UTC and 18:53 UTC, users saw failures downloading extensions within created or resumed codespaces. Users would see an error when attempting to use an extension within VS Code. Active codespaces with extensions already downloaded were not impacted.<br /><br />The extension download failures were the result of a change introduced in our extension dependency and were resolved by updating the configuration of how those changes affect requests from Codespaces. We are enhancing observability and alerting of critical issues within regular codespace operations to better detect and mitigate similar issues in the future.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>17:59</var> UTC</small><br><strong>Update</strong> - Codespaces IPs are no longer being blocked from Visual Studio Marketplace operations and we are monitoring for full recovery</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>17:20</var> UTC</small><br><strong>Update</strong> - We're seeing intermittent failures downloading from the extension marketplace from codespaces, caused by IP blocks for some codespaces. We're working to remove those blocks.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>16:09</var> UTC</small><br><strong>Update</strong> - We're seeing intermittent failures downloading from the extension marketplace from codespaces and are investigating.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>15:08</var> UTC</small><br><strong>Update</strong> - We're seeing partial recovery for the issue affecting extension installation in newly created Codespaces. Some users may still experience degraded functionality where extensions hit errors. The team continues to investigate the root cause while monitoring the recovery.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>14:29</var> UTC</small><br><strong>Update</strong> - We have deployed a fix for the issue affecting extension installation in newly created Codespaces. New Codespaces are now being created with working extensions. We'll post another update by 15:30 UTC.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>13:50</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate an issue where extensions fail to install in newly created Codespaces. Users can create and access Codespaces, but extensions will not be operational, resulting in a degraded experience. The team is working on a fix. All newly created Codespaces are affected. 
We'll post another update by 15:00 UTC.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>13:07</var> UTC</small><br><strong>Update</strong> - We're investigating an issue where extensions fail to install in newly created Codespaces. Users can still create and access Codespaces, but extensions will not be operational, resulting in a degraded development experience. Our team is actively working to identify and resolve the root cause. We'll post another update by 14:00 UTC.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>13:06</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Codespaces</p>tag:www.githubstatus.com,2005:Incident/289350062026-03-12T06:02:07Z2026-03-16T17:22:02ZActions failures to download (401 Unauthorized)<p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>06:02</var> UTC</small><br><strong>Resolved</strong> - On March 12, 2026, between 02:30 and 06:02 UTC, some GitHub Apps were unable to mint server-to-server tokens, resulting in 401 Unauthorized errors. During the outage window, ~1.3% of requests incorrectly resulted in 401 errors. This manifested in GitHub Actions jobs failing to download tarballs, as well as failing to mint fine-grained tokens. During this period, approximately 5% of Actions jobs were impacted.<br /><br />The root cause was a failure in the authentication service’s token cache layer, a newly created secondary cache layer backed by Redis. Kubernetes control plane instability left the service unable to read certain tokens, which resulted in 401 errors. The mitigation was to fall back reads to the primary cache layer backed by MySQL. As permanent mitigations, we have made changes to how we deploy Redis so that it does not rely on the Kubernetes control plane and maintains service availability during similar failure modes. We also improved alerting to reduce overall impact time from similar failures.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>06:02</var> UTC</small><br><strong>Monitoring</strong> - Actions is operating normally.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>05:40</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate reports of degraded performance for Actions and GitHub Apps</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>04:46</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p>tag:www.githubstatus.com,2005:Incident/289330132026-03-12T02:45:55Z2026-03-12T02:45:55ZDisruption with some GitHub services<p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>02:45</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>02:44</var> UTC</small><br><strong>Update</strong> - We've identified the root cause and are working on resolving the underlying issue. Some users may have encountered intermittent failures and errors. We're continuing to see reduced error rates.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>02:13</var> UTC</small><br><strong>Update</strong> - We are investigating elevated error rates. 
Error rates are now decreasing and we're continuing to monitor the situation.</p><p><small>Mar <var data-var='date'>12</var>, <var data-var='time'>01:54</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/289224002026-03-11T15:53:15Z2026-03-13T20:03:40ZDegraded experience with Copilot Code Review<p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>15:53</var> UTC</small><br><strong>Resolved</strong> - On March 11, 2026, between 13:00 UTC and 15:23 UTC the Copilot Code Review service was degraded and experienced longer than average review times. On average, Copilot Code Review requests took 4 minutes and peaked at just under 8 minutes. This was due to hitting worker capacity limits and CPU throttling. We mitigated the incident by increasing partitions, and we are improving our resource monitoring to identify potential issues sooner.</p><p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>15:53</var> UTC</small><br><strong>Update</strong> - Copilot Code Review queue processing has returned to normal levels.</p><p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>15:31</var> UTC</small><br><strong>Update</strong> - We experienced degraded performance with Copilot Code Review starting at 14:01 UTC. Customers experienced extended review times and occasional failures. Some extended processing times may continue briefly. We are monitoring for full recovery. We'll post another update by 16:30 UTC.</p><p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>14:28</var> UTC</small><br><strong>Monitoring</strong> - We are investigating degraded performance with Copilot Code Review. Customers may experience extended review times or occasional failures. We are seeing signs of improvement as our team works to restore normal service. We'll post another update by 15:30 UTC.</p><p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>14:25</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/289225992026-03-11T15:02:23Z2026-03-25T20:03:16ZIncident with API Requests<p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>15:02</var> UTC</small><br><strong>Resolved</strong> - On March 11, 2026, between 14:25 UTC and 14:34 UTC, the REST API platform was degraded, resulting in increased error rates and request timeouts. REST API 5xx error rates peaked at ~5% during the incident window with two distinct spikes: the first impacting REST services broadly, and the second driven by sustained timeouts on a subset of endpoints. <br /><br />The incident was caused by a performance degradation in our data layer, which resulted in increased query latency across dependent services. Most services recovered quickly after the initial spike, but resource contention caused sustained 5xx errors due to how certain endpoints responded to the degraded state. <br /><br />A fix addressing the behavior that prolonged impact has already been shipped. We are continuing to work to resolve the primary contributing factor of the degradation and to implement safeguards against issues causing cascading impact in the future.</p><p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>15:02</var> UTC</small><br><strong>Update</strong> - We are investigating elevated timeouts that affected GitHub API requests. 
The incident began at 14:37 UTC. Some users experienced slower response times and request failures. System metrics have returned to normal levels, and we are now investigating the root cause to prevent recurrence.</p><p><small>Mar <var data-var='date'>11</var>, <var data-var='time'>14:37</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for API Requests</p>tag:www.githubstatus.com,2005:Incident/290873372026-03-10T23:00:00Z2026-03-18T16:06:25ZIncident With Webhooks<p><small>Mar <var data-var='date'>10</var>, <var data-var='time'>23:00</var> UTC</small><br><strong>Resolved</strong> - On March 10, 2026, between 23:00 UTC and 23:40 UTC, the Webhooks service was degraded and ~6% of users experienced intermittent errors when accessing webhook delivery history, retrying webhook deliveries, and listing webhooks via the UI and API. Approximately 0.37% of requests resulted in errors, peaking at 0.5% of requests.<br /><br />This was due to unhealthy infrastructure. We mitigated the incident by redeploying affected services, after which service health returned to normal.<br /><br />We are working to improve detection of unhealthy infrastructure and strengthen service safeguards to reduce time to detect and mitigate similar issues in the future.</p>tag:www.githubstatus.com,2005:Incident/288823822026-03-09T17:03:40Z2026-03-17T21:50:37ZIncident with Webhooks<p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>17:03</var> UTC</small><br><strong>Resolved</strong> - On March 9, 2026, between 15:03 and 20:52 UTC, the Webhooks API was degraded, resulting in higher average latency on requests and, in certain cases, error responses. Approximately 0.6% of total requests exceeded the normal latency threshold of 3s, while 0.4% of requests resulted in 500 errors. At peak, 2.0% of requests experienced latency greater than 3 seconds and 2.8% of requests returned 500 errors.<br /><br />The issue was caused by a noisy actor that led to resource contention on the Webhooks API service. We mitigated the issue initially by increasing CPU resources for the Webhooks API service, and ultimately applied lower rate limiting thresholds to the noisy actor to prevent further impact to other users.<br /><br />We are working to improve monitoring to more quickly identify noisy traffic and will continue to improve our rate-limiting mechanisms to help prevent similar issues in the future.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>17:03</var> UTC</small><br><strong>Update</strong> - Webhooks is operating normally.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>15:56</var> UTC</small><br><strong>Update</strong> - We are experiencing latency on the API and UI endpoints. We are working to resolve the issue.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>15:50</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Webhooks</p>tag:www.githubstatus.com,2005:Incident/288707872026-03-09T03:51:42Z2026-03-10T18:32:23ZIncident with Codespaces<p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>03:51</var> UTC</small><br><strong>Resolved</strong> - On March 9, 2026, between 01:23 UTC and 03:25 UTC, users attempting to create or resume codespaces in the Australia East region experienced elevated failures, peaking at a 100% failure rate for this region. 
Codespaces in other regions were not affected.<br /><br />The create and resume failures were caused by degraded network connectivity between our control plane services and the VMs hosting the codespaces. This was resolved by redirecting traffic to an alternate site within the region. While we are addressing the core network infrastructure issue, we have also improved our observability of components in this area to improve detection. This will also enable our existing automated failovers to cover this failure mode. These changes will prevent or significantly reduce the time any similar incident causes user impact.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>03:51</var> UTC</small><br><strong>Update</strong> - This incident has been resolved. New Codespace creation requests are now completing successfully.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>03:32</var> UTC</small><br><strong>Update</strong> - We are seeing recovery, with the failure rate for new Codespace creation requests dropping from 5% to about 3%.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>03:04</var> UTC</small><br><strong>Update</strong> - We are seeing about 5% of new Codespace creation requests failing. We are investigating the root cause and identifying the impacted regions.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>03:04</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Codespaces</p>tag:www.githubstatus.com,2005:Incident/288295282026-03-06T23:28:13Z2026-03-12T16:53:04ZIncident with Webhooks<p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>23:28</var> UTC</small><br><strong>Resolved</strong> - On March 6, 2026, between 16:16 UTC and 23:28 UTC the Webhooks service was degraded and some users experienced intermittent errors when accessing webhook delivery histories, retrying webhook deliveries, and listing webhooks via the UI and API. On average, the error rate was 0.57% and peaked at approximately 2.73% of requests to the service. This was due to unhealthy infrastructure affecting a portion of webhook API traffic.<br /><br />We mitigated the incident by redeploying affected services, after which service health returned to normal.<br /><br />We are working to improve detection of unhealthy infrastructure and strengthen service safeguards to reduce time to detection and mitigation of issues like this one in the future.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>23:28</var> UTC</small><br><strong>Update</strong> - Webhooks is operating normally.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>23:26</var> UTC</small><br><strong>Update</strong> - We have deployed a fix and are observing a full recovery. The affected endpoint was the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. We will continue monitoring to confirm stability.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>22:35</var> UTC</small><br><strong>Update</strong> - We are preparing a new mitigation for the issue affecting the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. 
Overall impact remains low, with under 1% of requests failing for a subset of customers.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>21:34</var> UTC</small><br><strong>Update</strong> - The previous mitigation did not resolve the issue. We are investigating further. The affected endpoint is the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. Overall impact remains low, with under 1% of requests failing for a subset of customers.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>20:18</var> UTC</small><br><strong>Update</strong> - We have deployed a fix for the issue causing some users to experience intermittent failures when accessing the Webhooks API and configuration pages. We are monitoring to confirm full recovery.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>19:39</var> UTC</small><br><strong>Update</strong> - We continue working on mitigations to restore service.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>19:07</var> UTC</small><br><strong>Update</strong> - We continue working on mitigations to restore service.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>18:39</var> UTC</small><br><strong>Update</strong> - We continue working on mitigations to restore service.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>18:07</var> UTC</small><br><strong>Update</strong> - We continue working on mitigations to restore full service.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>17:43</var> UTC</small><br><strong>Update</strong> - Our engineers have identified the root cause and are actively implementing mitigations to restore full service.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>17:19</var> UTC</small><br><strong>Update</strong> - This problem is impacting less than 1% of UI and webhook API calls.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>17:12</var> UTC</small><br><strong>Update</strong> - We are investigating an issue affecting a subset of customers experiencing errors when viewing webhook delivery histories and retrying webhook deliveries. UI and webhook API is impacted. Engineers have identified the cause and are actively working on mitigation.</p><p><small>Mar <var data-var='date'> 6</var>, <var data-var='time'>16:58</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Webhooks</p>tag:www.githubstatus.com,2005:Incident/288139352026-03-05T23:55:20Z2026-03-05T23:55:20ZActions is experiencing degraded availability<p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>23:55</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>23:40</var> UTC</small><br><strong>Update</strong> - We are close to full recovery. Actions and dependent services should be functioning normally now.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>23:37</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded performance. 
We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>23:15</var> UTC</small><br><strong>Update</strong> - Actions and dependent services, including Pages, are recovering.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>23:00</var> UTC</small><br><strong>Update</strong> - We applied a mitigation and we should see a recovery soon.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>22:54</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded availability. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>22:53</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p>tag:www.githubstatus.com,2005:Incident/288084292026-03-05T19:30:54Z2026-03-06T17:21:02ZMultiple services are affected, service degradation<p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>19:30</var> UTC</small><br><strong>Resolved</strong> - On Mar 5, 2026, between 16:24 UTC and 19:30 UTC, Actions was degraded. During this time, 95% of workflow runs failed to start within 5 minutes, with an average delay of 30 minutes, and 10% of workflow runs failed with an infrastructure error. This was due to Redis infrastructure updates that were being rolled out to production to improve our resiliency. These changes introduced a set of incorrect configuration changes into our Redis load balancer, causing internal traffic to be routed to an incorrect host and leading to two incidents.<br /><br />We mitigated this incident by correcting the misconfigured load balancer. Actions jobs were running successfully starting at 17:24 UTC. The remaining time until we closed the incident was spent burning through the queue of jobs.<br /><br />We immediately rolled back the updates that were a contributing factor and have frozen all changes in this area until we have completed the follow-up work from this incident. We are working to improve our automation to ensure incorrect configuration changes are not able to propagate through our infrastructure. We are also working on improved alerting to catch misconfigured load balancers before they become an incident. Additionally, we are updating the Redis client configuration in Actions to improve resiliency to brief cache interruptions.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>19:17</var> UTC</small><br><strong>Update</strong> - Webhooks is operating normally.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>19:05</var> UTC</small><br><strong>Update</strong> - Actions is operating normally.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>18:59</var> UTC</small><br><strong>Update</strong> - Actions is now fully recovered.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>18:15</var> UTC</small><br><strong>Update</strong> - The queue of requested Actions jobs continues to make progress. 
Job delays are now approximately 6 minutes and continuing to decrease.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>17:48</var> UTC</small><br><strong>Update</strong> - We are back to queueing Actions workflow runs at nominal rates and we are monitoring the clearing of queued runs during the incident.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>17:25</var> UTC</small><br><strong>Update</strong> - We have applied mitigations for connection failures across backend resources and we are observing a recovery in queueing Actions workflow runs.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>16:52</var> UTC</small><br><strong>Update</strong> - We are observing delays in queuing Actions workflow runs. We’re still investigating the causes of these delays.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>16:47</var> UTC</small><br><strong>Update</strong> - Webhooks is experiencing degraded availability. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>16:41</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded availability. We are continuing to investigate.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>16:35</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p>tag:www.githubstatus.com,2005:Incident/287951132026-03-05T01:30:37Z2026-03-06T20:15:53ZDisruption with some GitHub services<p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:30</var> UTC</small><br><strong>Resolved</strong> - On March 5, 2026, between 12:53 UTC and 13:35 UTC, the Copilot mission control service was degraded. This resulted in empty responses returned for users' agent session lists across GitHub web surfaces. Impacted users were unable to see their lists of current and previous agent sessions in GitHub web surfaces. This was caused by an incorrect database query that falsely excluded records that have an absent field.<br /><br />We mitigated the incident by rolling back the database query change. There were no data alterations nor deletions during the incident.<br /><br />To prevent similar issues in the future, we're improving our monitoring depth to more easily detect degradation before changes are fully rolled out.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:30</var> UTC</small><br><strong>Update</strong> - Copilot coding agent mission control is fully restored. Tasks are now listed as expected.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:21</var> UTC</small><br><strong>Update</strong> - Users were temporarily unable to see tasks listed in mission control surfaces. The ability to submit new tasks, view existing tasks via direct link, or manage tasks was unaffected throughout. 
A revert is currently being deployed and we are seeing recovery.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:13</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/287947832026-03-05T01:13:31Z2026-03-11T19:35:39ZSome OpenAI models degraded in Copilot<p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:13</var> UTC</small><br><strong>Resolved</strong> - On March 5th, 2026, between approximately 00:26 and 00:44 UTC, the Copilot service experienced a degradation of the GPT 3.5 Codex model due to an issue with our upstream provider. Users encountered elevated error rates when using GPT 3.5 Codex, impacting approximately 30% of requests. No other models were impacted.<br /><br />The issue was resolved by a mitigation put in place by our provider.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:13</var> UTC</small><br><strong>Update</strong> - The issues with our upstream model provider have been resolved, and gpt-5.3-codex is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>00:53</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the gpt-5.3-codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.<br /></p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>00:47</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:www.githubstatus.com,2005:Incident/287713442026-03-03T21:11:30Z2026-03-03T23:03:29ZClaude Opus 4.6 Fast not appearing for some Copilot users<p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>21:11</var> UTC</small><br><strong>Resolved</strong> - On March 3, 2026, between 19:44 UTC and 21:05 UTC, some GitHub Copilot users reported that the Claude Opus 4.6 Fast model was no longer available in their IDE model selection. After investigation, we confirmed that this was caused by enterprise administrators adjusting their organization's model policies, which correctly removed the model for users in those organizations. No users outside the affected organizations lost access.<br /><br />We confirmed that the Copilot settings were functioning as designed, and all expected users retained access to the model. The incident was resolved once we verified that the change was intentional and no platform regression had occurred.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>21:05</var> UTC</small><br><strong>Update</strong> - We believe that all expected users still have access to Claude Opus 4.6. We confirm that no users have lost access.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>20:31</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>