tag:www.githubstatus.com,2005:/historyGitHub Status - Incident History2026-01-09T18:08:15ZGitHubtag:www.githubstatus.com,2005:Incident/279516062026-01-09T18:08:15Z2026-01-09T18:08:15ZDisruption with some GitHub services<p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>18:08</var> UTC</small><br><strong>Update</strong> - Agent Session activity is still observable in audit logs, and this only impacts the AI Controls UI.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>17:57</var> UTC</small><br><strong>Update</strong> - We are investigating an incident causing missing Agent Session data on the AI Settings page of the Agent Control Plane.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>17:53</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/279247342026-01-08T01:32:48Z2026-01-08T01:32:48ZIncident with Copilot<p><small>Jan <var data-var='date'> 8</var>, <var data-var='time'>01:32</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Jan <var data-var='date'> 8</var>, <var data-var='time'>01:31</var> UTC</small><br><strong>Update</strong> - The issues with our upstream model provider have been resolved, and Grok Code Fast 1 is once again available in Copilot Chat and across IDE integrations.<br /><br />We will continue monitoring to ensure stability, but mitigation is complete.</p><p><small>Jan <var data-var='date'> 8</var>, <var data-var='time'>00:45</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Jan <var data-var='date'> 8</var>, <var data-var='time'>00:45</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:www.githubstatus.com,2005:Incident/279203822026-01-07T21:07:09Z2026-01-07T21:07:09ZSome models missing in Copilot<p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>21:07</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>19:43</var> UTC</small><br><strong>Update</strong> - We have implemented a mitigation and confirmed that Copilot Pro and Business accounts now have access to the previously missing models. We will continue monitoring to ensure complete resolution.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>19:29</var> UTC</small><br><strong>Update</strong> - We continue to investigate. We'll post another update by 19:50 UTC.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>19:10</var> UTC</small><br><strong>Update</strong> - Correction - Copilot Pro and Business users are impacted. 
Copilot Pro+ and Enterprise users are not impacted.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>19:06</var> UTC</small><br><strong>Update</strong> - We continue to investigate this problem and have confirmed only Copilot Business users are impacted. We'll post another update by 19:30 UTC.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>18:44</var> UTC</small><br><strong>Update</strong> - We are currently investigating reports of some Copilot Pro premium models including Opus and GPT 5.2 being unavailable in Copilot products. We'll post another update by 19:08 UTC.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>18:33</var> UTC</small><br><strong>Update</strong> - We have received reports that some expected models are missing from VSCode and other products using Copilot. We are investigating the cause of this to restore access.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>18:32</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:www.githubstatus.com,2005:Incident/279031232026-01-06T17:06:11Z2026-01-06T17:06:11ZIncident with Actions<p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>17:06</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>16:44</var> UTC</small><br><strong>Update</strong> - We are investigating issues downloading artifacts from Actions workflows. All customers are affected when attempting to download through the web interface. We're actively working on a fix and will post another update by 17:15 UTC.</p><p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>16:41</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p>tag:www.githubstatus.com,2005:Incident/278974802026-01-06T10:08:04Z2026-01-09T10:31:11ZIncident with Copilot<p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>10:08</var> UTC</small><br><strong>Resolved</strong> - On January 6th, 2026, between approximately 8:41 and 10:07 UTC, the Copilot service experienced a degradation of the GPT-5.1-Codex-Max model due to an issue with our upstream provider. During this time, up to 14.17% of requests to GPT-5.1-Codex-Max failed. No other models were impacted.<br /><br />The issue was resolved by a mitigation put in place by our provider. GitHub is working with our provider to further improve the resiliency of the service to prevent similar incidents in the future.</p><p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>10:07</var> UTC</small><br><strong>Update</strong> - The issues with our upstream model provider have been resolved, and GPT-5.1-Codex-Max is once again available.<br />We will continue monitoring to ensure stability.</p><p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>09:03</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the GPT-5.1-Codex-Max model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. 
We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>08:56</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:www.githubstatus.com,2005:Incident/278430352026-01-01T22:31:49Z2026-01-08T20:41:51ZDisruption with some GitHub services<p><small>Jan <var data-var='date'> 1</var>, <var data-var='time'>22:31</var> UTC</small><br><strong>Resolved</strong> - On December 31, 2025, between 04:00 UTC and 22:31 UTC, all users visiting https://github.com/features/copilot were unable to load the page and were instead redirected to an error page.<br />The issue was caused by an unexpected content change that resulted in page rendering errors.<br />We mitigated the incident by reverting the change, which restored normal page behavior.<br />To reduce the likelihood and duration of similar issues in the future, we are improving monitoring and alerting for increased error rates on this page and similar pages, and strengthening validation and safeguards around content updates to prevent unexpected changes from causing user-facing errors.</p><p><small>Jan <var data-var='date'> 1</var>, <var data-var='time'>21:24</var> UTC</small><br><strong>Update</strong> - Our Copilot feature page (https://github.com/features/copilot) is returning 500s. We are currently investigating. This does not impact the core GitHub application.</p><p><small>Jan <var data-var='date'> 1</var>, <var data-var='time'>21:24</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/277194902025-12-23T10:32:24Z2026-01-06T17:34:56ZIncident with Issues and Pull Requests<p><small>Dec <var data-var='date'>23</var>, <var data-var='time'>10:32</var> UTC</small><br><strong>Resolved</strong> - On December 23, 2025, between 09:15 UTC and 10:32 UTC the Issues and Pull Requests search indexing service was degraded and caused search results to contain stale data up to 3 minutes old for roughly 1.3 million issues and pull requests. This was due to search indexing queues backing up from resource contention caused by a running transition.<br /><br />We mitigated the incident by cancelling the running transition.<br /><br />We are working to implement closer monitoring of search infrastructure resource utilization during transitions to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Dec <var data-var='date'>23</var>, <var data-var='time'>10:32</var> UTC</small><br><strong>Update</strong> - Issues and Pull Requests are operating normally.</p><p><small>Dec <var data-var='date'>23</var>, <var data-var='time'>10:29</var> UTC</small><br><strong>Update</strong> - We are seeing recovery in search indexing for Issues and Pull Requests. The queue has returned to normal processing times, and we continue to monitor service health. We'll post another update by 11:00 UTC.</p><p><small>Dec <var data-var='date'>23</var>, <var data-var='time'>09:58</var> UTC</small><br><strong>Update</strong> - We're experiencing delays in search indexing for Issues and Pull Requests. Search results may show data up to three minutes old due to elevated processing times in our indexing pipeline. We're working to restore normal performance. 
We'll post another update by 10:30 UTC.</p><p><small>Dec <var data-var='date'>23</var>, <var data-var='time'>09:56</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Issues and Pull Requests</p>tag:www.githubstatus.com,2005:Incident/277129682025-12-23T00:17:33Z2025-12-30T22:16:18ZDisruption with some GitHub services<p><small>Dec <var data-var='date'>23</var>, <var data-var='time'>00:17</var> UTC</small><br><strong>Resolved</strong> - On December 22, 2025, between 22:01 UTC and 22:32 UTC, unauthenticated requests to github.com were degraded, resulting in slow or timed out page loads and API requests. Unauthenticated requests from Actions jobs, such as release downloads, were also impacted. Authenticated traffic was not impacted. This was due to a severe spike in traffic, primarily to search endpoints.<br /><br />Our immediate response focused on identifying and mitigating the source of the traffic increase, which along with automated traffic management restored full service for our users.<br /><br />We improved limiters for load to relevant endpoints and are continuing work to more proactively identify these large changes in traffic volume, improve resilience in critical request flows, and improve our time to mitigation.</p><p><small>Dec <var data-var='date'>23</var>, <var data-var='time'>00:06</var> UTC</small><br><strong>Update</strong> - All services at healthy levels. We're finalizing the change to prevent future degradations from the same source.</p><p><small>Dec <var data-var='date'>22</var>, <var data-var='time'>23:32</var> UTC</small><br><strong>Update</strong> - We're investigating elevated traffic affecting GitHub services, primarily impacting logged-out users with some increased latency on Issues. We're preparing additional mitigations to prevent further spikes.</p><p><small>Dec <var data-var='date'>22</var>, <var data-var='time'>22:57</var> UTC</small><br><strong>Update</strong> - We are experiencing elevated traffic affecting some GitHub services, primarily impacting logged-out users. We're actively investigating the full scope and working to restore normal service. We'll post another update by 23:45 UTC.</p><p><small>Dec <var data-var='date'>22</var>, <var data-var='time'>22:48</var> UTC</small><br><strong>Update</strong> - Issues is experiencing degraded performance. We are continuing to investigate.</p><p><small>Dec <var data-var='date'>22</var>, <var data-var='time'>22:31</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/276500492025-12-18T19:09:50Z2025-12-23T21:43:04ZDisruption with some GitHub services<p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>19:09</var> UTC</small><br><strong>Resolved</strong> - On December 18, 2025, between 16:25 UTC and 19:09 UTC the service underlying Copilot policies was degraded and users, organizations, and enterprises were not able to update any policies related to Copilot. No other GitHub services, including other Copilot services were impacted. This was due to a database migration causing a schema drift.<br /><br />We mitigated the incident by synchronizing the schema. 
We have hardened the service to make sure schema drift does not cause any further incidents, and will investigate improvements in our deployment pipeline to shorten time to mitigation in the future.</p><p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>19:09</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>19:05</var> UTC</small><br><strong>Update</strong> - We have observed full recovery with updating Copilot policy settings, and are validating that there is no further impact.</p><p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>18:43</var> UTC</small><br><strong>Update</strong> - Copilot is experiencing degraded performance. We are continuing to investigate.</p><p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>18:10</var> UTC</small><br><strong>Update</strong> - We have identified the source of this regression and are preparing a fix for deployment. We will update again in one hour.</p><p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>17:36</var> UTC</small><br><strong>Update</strong> - We are seeing an increase in errors on the user and organization Copilot policy settings pages when updating a policy. <br /></p><p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>17:36</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/276494132025-12-18T17:41:41Z2025-12-19T21:15:46ZIntermittent networking failures across GitHub-hosted Actions runners<p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>17:41</var> UTC</small><br><strong>Resolved</strong> - On December 18th, 2025, from 08:15 UTC to 17:11 UTC, some GitHub Actions runners experienced intermittent timeouts for GitHub API calls, which led to failures during runner setup and workflow execution. This was caused by network packet loss between runners in the West US region and one of GitHub’s edge sites. Approximately 1.5% of jobs on larger and standard hosted runners in the West US region were impacted, representing 0.28% of all Actions jobs during this period.<br /><br />By 17:11 UTC, all traffic was routed away from the affected edge site, mitigating the timeouts. We are working to improve early detection of cross-cloud connectivity issues and faster mitigation paths to reduce the impact of similar issues in the future.</p><p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>17:29</var> UTC</small><br><strong>Update</strong> - We are observing recovery with requests from GitHub-hosted Actions runners and will continue to monitor.</p><p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>16:35</var> UTC</small><br><strong>Update</strong> - Since approximately 8:00 UTC, we have observed intermittent failures on GitHub-hosted Actions runners. The failures have been observed both during runner setup and during workflow execution. 
We are continuing to investigate.<br /><br />Self-hosted runners are not impacted.</p><p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>16:33</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p>tag:www.githubstatus.com,2005:Incident/276351482025-12-16T12:00:00Z2025-12-17T19:04:34ZIncident With Copilot<p><small>Dec <var data-var='date'>16</var>, <var data-var='time'>12:00</var> UTC</small><br><strong>Resolved</strong> - From 11:50 to 12:25 UTC, Copilot Coding Agent was unable to process new agent requests. This affected all users creating new jobs during this timeframe, while existing jobs remained unaffected. The cause of this issue was a change to the Actions configuration where Copilot Coding Agent runs, which caused the setup of the Actions runner to fail, and the issue was resolved by rolling back this change.<br />As a short-term solution, we hope to tighten our alerting criteria so that we can be alerted more quickly when an incident occurs, and in the long term we hope to harden our runner configuration to be more resilient against errors.</p>tag:www.githubstatus.com,2005:Incident/276022312025-12-15T18:22:12Z2025-12-19T22:30:23ZCopilot Code Review is degraded, and not returning responses to users<p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>18:22</var> UTC</small><br><strong>Resolved</strong> - On December 15, 2025, between 15:15 UTC and 18:22 UTC, Copilot Code Review experienced a service degradation that caused 46.97% of pull request review requests to fail, requiring users to re-request a review. Impacted users saw the error message: “Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.” The remaining requests completed successfully.<br /><br />The degradation was caused by elevated response times in an internal, model-backed dependency, which led to request timeouts and backpressure in the review processing pipeline, resulting in sustained queue growth and failed review completion.<br /><br />We mitigated the issue by temporarily bypassing fix suggestions to reduce latency, increasing worker capacity to drain the backlog, and rolling out a model configuration change that reduced end-to-end latency. Queue depth and request success rates returned to normal and remained stable through peak traffic.<br /><br />Following the incident, we increased baseline worker capacity, added instrumentation for worker utilization and queue health, and are improving automatic load-shedding, fallback behavior, and alerting to reduce time to detection and mitigation for similar issues.</p><p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>18:21</var> UTC</small><br><strong>Update</strong> - We have seen recovery for Copilot Code Review requests and are investigating long-term availability and scaling strategies.</p><p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>17:43</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/275994402025-12-15T15:45:52Z2025-12-19T14:26:09ZIncident with Copilot Grok Code Fast 1<p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>15:45</var> UTC</small><br><strong>Resolved</strong> - On Dec 15th, 2025, between 14:00 UTC and 15:45 UTC, the Copilot service was degraded for the Grok Code Fast 1 model. 
On average, 4% of the requests to this model failed due to an issue with our upstream provider. No other models were impacted.<br /><br />The issue was resolved after the upstream provider fixed the problem that caused the disruption. GitHub will continue to enhance our monitoring and alerting systems to reduce the time it takes to detect and mitigate similar issues in the future.</p><p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>15:06</var> UTC</small><br><strong>Update</strong> - We are continuing to work with our provider on resolving the incident with Grok Code Fast 1. Users can expect some requests to intermittently fail until all issues are resolved.</p><p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>14:13</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>14:12</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:www.githubstatus.com,2005:Incident/275494932025-12-11T20:05:46Z2025-12-11T20:17:58ZDisruptions in Login and Signup Flows<p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>20:05</var> UTC</small><br><strong>Resolved</strong> - Between 13:25 UTC and 18:35 UTC on Dec 11th, GitHub experienced an increase in scraper activity on public parts of our website. This scraper activity caused load on a low-priority web request pool to increase and eventually exceed total capacity, resulting in users experiencing 500 errors. In particular, this affected Login, Logout, and Signup routes, along with less than 1% of requests from within Actions jobs. At the peak of the incident, 7.6% of login requests were impacted, which was the most significant impact of this scraping attack.<br /><br />Our mitigation strategy identified the scraping activity and blocked it. We also added capacity to the impacted web request pool, and lastly we upgraded key user login routes to higher-priority queues. <br /><br />In the future, we’re working to more proactively identify this particular scraper activity and have faster mitigation times.</p><p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>20:05</var> UTC</small><br><strong>Update</strong> - We see signs of full recovery and will post a more in-depth update soon.</p><p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>19:58</var> UTC</small><br><strong>Update</strong> - We are continuing to monitor and continuing to see signs of recovery. We will update when we are confident that we are in full recovery.</p><p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>19:04</var> UTC</small><br><strong>Update</strong> - We've applied a mitigation to fix intermittent failures in anonymous requests and downloads from GitHub, including Login, Signup, Logout, and some requests from within Actions jobs. We are seeing improvements in telemetry, but we will continue to monitor for full recovery.</p><p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>18:47</var> UTC</small><br><strong>Update</strong> - We currently have ~7% of users experiencing errors when attempting to sign up, log in, or log out. 
We are deploying a change to mitigate these failures.</p><p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>18:40</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/275474842025-12-11T17:53:22Z2025-12-16T15:27:44ZWe are investigating a rise in request failures on several services<p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>17:53</var> UTC</small><br><strong>Resolved</strong> - Between 13:25 UTC and 18:35 UTC on December 11th, GitHub experienced elevated traffic to portions of GitHub.com that exceeded previously provisioned capacity for specific request types. As a result, users encountered intermittent 500 errors. Impact was most pronounced on Login, Logout, and Signup pages, peaking at 7.6% of login requests. Additionally, fewer than 1% of requests originating from GitHub Actions jobs were affected. <br /><br />This incident was driven by the same underlying factors as the previously reported <a href="https://www.githubstatus.com/incidents/40730vhmg6y8">disruption to Login and Signup flows</a>.<br /><br />Our immediate response focused on identifying and mitigating the source of the traffic increase. We increased available capacity for web request handling to relieve pressure on constrained pools. To reduce recurrence risk, we also re-routed critical authentication endpoints to a different traffic pool, ensuring sufficient isolation and headroom for login-related traffic.<br /><br />In the future, we’re working to more proactively identify these large changes in traffic volume and improve our time to mitigation.<br /><br /></p><p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>17:20</var> UTC</small><br><strong>Update</strong> - Git Operations is operating normally.</p><p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>17:19</var> UTC</small><br><strong>Update</strong> - We believe we have narrowed down the affected users to primarily those who are signing up or signing in, as well as logged-out usage. We are continuing to investigate the root cause and are working on multiple mitigation angles.</p><p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>16:41</var> UTC</small><br><strong>Update</strong> - We are experiencing intermittent web request failures across multiple services, including login and authentication. Our teams are actively investigating the cause and working on mitigation.</p><p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>16:09</var> UTC</small><br><strong>Update</strong> - Codespaces, Copilot, Git Operations, Packages, Pages, Pull Requests and Webhooks are experiencing degraded performance. We are continuing to investigate.</p><p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>16:01</var> UTC</small><br><strong>Update</strong> - API Requests and Actions are experiencing degraded performance. 
We are continuing to investigate.</p><p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>15:47</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Issues</p>tag:www.githubstatus.com,2005:Incident/275306052025-12-10T14:52:42Z2025-12-19T14:09:44ZSome macOS Actions jobs routing to Ubuntu instead<p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>14:52</var> UTC</small><br><strong>Resolved</strong> - Between December 9th, 2025 21:07 UTC and December 10th, 2025 14:52 UTC, 177 macos-14-large jobs were run on an Ubuntu larger runner VM instead of MacOS runner VMs. The impacted jobs were routed to a larger runner with incorrect metadata. We mitigated this by deleting the runner.<br /><br />The routing configuration is not something controlled externally. A manual override was done previously for internal testing, but left incorrect metadata for a large runner instance. An infrastructure migration caused this misconfigured runner to come online which started the incorrect assignments. We are removing the ability to manually override this configuration entirely, and are adding alerting to identify possible OS mismatches for hosted runner jobs.<br /><br />As a reminder, hosted runner VMs are secure and ephemeral, with every VM reimaged after every single job. All jobs impacted here were originally targeted at a GitHub-owned VM image and were run on a GitHub-owned VM image.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>14:32</var> UTC</small><br><strong>Update</strong> - We've applied a mitigation to ensure all macOS jobs route to macOS fulfillers and are monitoring for full recovery.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>13:34</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p>tag:www.githubstatus.com,2005:Incident/275279492025-12-10T11:05:35Z2025-12-12T22:37:11ZSome Actions customers experiencing run start delays<p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>11:05</var> UTC</small><br><strong>Resolved</strong> - On December 10, 2025 between 08:50 UTC and 11:00 UTC, some GitHub Actions workflow runs experienced longer-than-normal wait times for jobs starting or completing. All jobs successfully completed despite the delays. At peak impact, approximately 8% of workflow runs were affected.<br /><br />During this incident, some nodes received a spike in workflow events that led to queuing of event processing. Because runs are pinned to nodes, runs being processed by these nodes saw delays in starting or showing as completed. The team was alerted to this at 8:58 UTC. Impacted nodes were disabled from processing new jobs to allow queues to drain.<br /><br />We have increased overall processing capacity and are implementing safeguards to better balance load across all nodes when spikes occur. 
This is important to ensure our available capacity can always be fully utilized.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>11:05</var> UTC</small><br><strong>Update</strong> - Actions is operating normally.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>11:05</var> UTC</small><br><strong>Update</strong> - We have validated the mitigation and are no longer seeing impact.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>10:58</var> UTC</small><br><strong>Update</strong> - We are seeing improvements in telemetry and are monitoring for full recovery.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>10:25</var> UTC</small><br><strong>Update</strong> - We've applied a mitigation to fix the issues with queuing and running Actions jobs. We will continue monitoring to confirm whether this resolves the issue.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>09:41</var> UTC</small><br><strong>Update</strong> - The team continues to investigate issues with some Actions jobs being queued for a long time. We will continue providing updates on the progress towards mitigation.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>09:13</var> UTC</small><br><strong>Update</strong> - We're investigating Actions workflow runs taking longer than expected to start.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>09:11</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p>tag:www.githubstatus.com,2005:Incident/275081952025-12-08T22:33:23Z2025-12-12T04:37:56ZDisruption with some GitHub services<p><small>Dec <var data-var='date'> 8</var>, <var data-var='time'>22:33</var> UTC</small><br><strong>Resolved</strong> - On December 8, 2025, between 21:15 and 22:24 UTC, Copilot code completions experienced a significant service degradation. During this period, up to 65% of code completion requests failed.<br /><br />The root cause was an internal feature flag that caused the primary model supporting Copilot code completions to appear unavailable to the backend service. The issue was resolved once the flag was disabled.<br /><br />To prevent recurrence, we expanded test coverage for Copilot code completion models and are strengthening our detection mechanisms to better identify and respond to traffic anomalies.</p><p><small>Dec <var data-var='date'> 8</var>, <var data-var='time'>22:10</var> UTC</small><br><strong>Update</strong> - We are beginning to see signs of resolution after applying a mitigation. We expect full resolution within approximately 30 minutes.</p><p><small>Dec <var data-var='date'> 8</var>, <var data-var='time'>22:04</var> UTC</small><br><strong>Update</strong> - We're continuing to investigate and mitigate issues with the GPT 4o model for Copilot completions. 
Users can currently work around this issue by updating their VS Code settings with "github.copilot.advanced.debug.overrideEngine": "gpt-41-copilot".</p><p><small>Dec <var data-var='date'> 8</var>, <var data-var='time'>21:32</var> UTC</small><br><strong>Update</strong> - We are currently investigating failures with the GPT 4o model for Copilot completions.</p><p><small>Dec <var data-var='date'> 8</var>, <var data-var='time'>21:28</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/275071302025-12-08T21:06:10Z2025-12-12T19:56:45ZPotential disruption with our Agent Control Plane UI Settings<p><small>Dec <var data-var='date'> 8</var>, <var data-var='time'>21:06</var> UTC</small><br><strong>Resolved</strong> - Between approximately 02:24 UTC on November 26th, 2025 and 20:26 UTC on December 8th, 2025, enterprise administrators experienced a disruption when viewing agent session activities in the Enterprise AI Controls page. During this period, users were unable to list agent session activity in the AI Controls view. This did not impact viewing agent session activity in audit logs, directly navigating to individual agent session logs, or otherwise managing AI Agents.<br /><br />The issue was caused by a misconfiguration in a change deployed on November 25th that unintentionally prevented data from being published to an internal Kafka topic responsible for feeding the AI Controls page with agent session activity information.<br /><br />The problem was identified and mitigated on December 8th by correcting the configuration issue. GitHub is improving monitoring for data pipeline dependencies and enhancing pre-deployment validation to catch configuration issues before they reach production.</p><p><small>Dec <var data-var='date'> 8</var>, <var data-var='time'>19:52</var> UTC</small><br><strong>Update</strong> - We are investigating an incident causing missing Agent Session data on the AI Settings page of the Agent Control Plane.</p><p><small>Dec <var data-var='date'> 8</var>, <var data-var='time'>19:51</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/274675922025-12-05T22:20:19Z2025-12-10T16:38:05ZTeam synchronization is experiencing delays for non enterprise managed users<p><small>Dec <var data-var='date'> 5</var>, <var data-var='time'>22:20</var> UTC</small><br><strong>Resolved</strong> - On December 5th, 2025, between 12:00 UTC and 21:00 UTC, our Team Synchronization service experienced a significant degradation, preventing over 209,000 organization teams from syncing their identity provider (IdP) groups. The incident was triggered by a buildup of synchronization requests, resulting in elevated Redis key usage and high CPU consumption on the underlying Redis cluster.<br /><br />To mitigate further impact, we proactively paused all team synchronization requests between 15:00 UTC and 20:15 UTC, allowing us to stabilize the Redis cluster. Our engineering team also resolved the issue by flushing the affected Redis keys and queues, which promptly stopped runaway growth and restored service health. Additionally, we scaled up our infrastructure resources to improve our ability to process the high volume of synchronization requests. 
All pending team synchronizations were successfully processed following service restoration.<br /><br />We are working to strengthen the Team Synchronization service by implementing a killswitch, adding throttling to prevent excessive enqueueing of synchronization requests, and improving the scheduler to avoid duplicate job requests. Additionally, we’re investing in better observability to alert when job drops occur. These efforts are focused on preventing similar incidents and improving overall reliability going forward.</p><p><small>Dec <var data-var='date'> 5</var>, <var data-var='time'>21:40</var> UTC</small><br><strong>Update</strong> - We believe we reached a scaling limit and are increasing the amount of resources available to reduce the delays for the team synchronization process.</p><p><small>Dec <var data-var='date'> 5</var>, <var data-var='time'>19:17</var> UTC</small><br><strong>Update</strong> - We're continuing to investigate the delays in the team synchronization and will report back once we have more information.</p><p><small>Dec <var data-var='date'> 5</var>, <var data-var='time'>18:38</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/275650322025-12-03T22:30:00Z2025-12-12T21:34:38ZWebhooks delivery degradation<p><small>Dec <var data-var='date'> 3</var>, <var data-var='time'>22:30</var> UTC</small><br><strong>Resolved</strong> - On December 3, 2025, between 22:21 UTC and 23:44 UTC, the Webhooks service experienced a degradation that delayed writes of webhook delivery records to our database. During this period, many webhook deliveries were not visible in the webhook delivery UI or API for more than an hour after they were sent. As a result, customers were temporarily unable to request redeliveries for those delayed records. The underlying cause was throttling of database writes due to high replication lag.<br /><br />We mitigated the incident by temporarily disabling delivery history for a small number of very high‑volume webhook owners to reduce write pressure and stabilize the service. We are contacting the affected customers directly with more details.<br /><br />We are improving our webhook delivery storage architecture so it can scale with current and future webhook traffic, reducing the likelihood and impact of similar issues.</p>tag:www.githubstatus.com,2005:Incident/273581972025-11-28T08:23:18Z2025-12-02T01:35:25ZIncident with Copilot<p><small>Nov <var data-var='date'>28</var>, <var data-var='time'>08:23</var> UTC</small><br><strong>Resolved</strong> - On November 28th, 2025, between approximately 05:51 and 08:04 UTC, Copilot experienced an outage affecting the Claude Sonnet 4.5 model. Users attempting to use this model received an HTTP 400 error, resulting in 4.6% of total chat requests during this timeframe failing. Other models were not impacted.<br /><br />The issue was caused by a misconfiguration deployed to an internal service which made Claude Sonnet 4.5 unavailable. The problem was identified and mitigated by reverting the change. 
GitHub is working to improve cross-service deploy safeguards and monitoring to prevent similar incidents in the future.</p><p><small>Nov <var data-var='date'>28</var>, <var data-var='time'>07:52</var> UTC</small><br><strong>Update</strong> - We have rolled out a fix and are monitoring for recovery.</p><p><small>Nov <var data-var='date'>28</var>, <var data-var='time'>07:04</var> UTC</small><br><strong>Update</strong> - We are investigating degraded performance with the Claude Sonnet 4.5 model in Copilot.</p><p><small>Nov <var data-var='date'>28</var>, <var data-var='time'>06:59</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:www.githubstatus.com,2005:Incident/273047072025-11-24T15:04:23Z2025-11-25T22:22:08ZDisruption with some GitHub services<p><small>Nov <var data-var='date'>24</var>, <var data-var='time'>15:04</var> UTC</small><br><strong>Resolved</strong> - On November 24, 2025, between 12:15 and 15:04 UTC, Codespaces users encountered connection issues when attempting to create a codespace after choosing the recently released VS Code Codespaces extension, version 1.18.1. Users were able to downgrade to the 1.18.0 version of the extension during this period to work around this issue. At peak, the error rate was 19% of connection requests. This was caused by mismatching version dependencies for the released VS Code Codespaces extension.<br /><br />The connection issues were mitigated by releasing the VS Code Codespaces extension version 1.18.2 that addressed the issue. Users utilizing version 1.18.1 of the VS Code Codespaces extension are advised to upgrade to version >=1.18.2.<br /><br />We are improving our validation and release process for this extension to ensure functional issues like this are caught before release to customers and to reduce detection and mitigation times for extension issues like this in the future.</p><p><small>Nov <var data-var='date'>24</var>, <var data-var='time'>14:26</var> UTC</small><br><strong>Update</strong> - Version 1.18.2 of the GitHub Codespaces VSCode extension has been released. This version should resolve the connection issues, and we are continuing to monitor success rate for Codespaces creation.</p><p><small>Nov <var data-var='date'>24</var>, <var data-var='time'>14:00</var> UTC</small><br><strong>Update</strong> - We are testing a new version of the GitHub Codespaces VSCode extension that should resolve the connection issues, and expect that to be available in the next 30 minutes.</p><p><small>Nov <var data-var='date'>24</var>, <var data-var='time'>13:26</var> UTC</small><br><strong>Update</strong> - Codespaces is experiencing degraded performance. We are continuing to investigate.</p><p><small>Nov <var data-var='date'>24</var>, <var data-var='time'>13:25</var> UTC</small><br><strong>Update</strong> - We are seeing Codespaces connection issues related to the latest version of the VSCode Codespaces extension (1.18.1). 
Users can select the 1.18.0 version of the extension to avoid issues (View -> Command Palette, run "Extensions: Install specific version of Extension..."), while we work to remove the affected version.</p><p><small>Nov <var data-var='date'>24</var>, <var data-var='time'>13:10</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p>tag:www.githubstatus.com,2005:Incident/272405632025-11-21T00:22:14Z2025-12-02T18:42:12ZDisruption with some GitHub services<p><small>Nov <var data-var='date'>21</var>, <var data-var='time'>00:22</var> UTC</small><br><strong>Resolved</strong> - Between November 19th, 16:13 UTC and November 21st, 12:22 UTC, the GitHub Enterprise Importer (GEI) service was in a degraded state, during which customers of the service experienced delays when reclaiming mannequins post-migration.<br /><br />We have taken steps to prevent similar incidents from occurring in the future.</p><p><small>Nov <var data-var='date'>21</var>, <var data-var='time'>00:22</var> UTC</small><br><strong>Update</strong> - Processing of these jobs has resumed.</p><p><small>Nov <var data-var='date'>19</var>, <var data-var='time'>16:13</var> UTC</small><br><strong>Update</strong> - GitHub Enterprise Importer migration systems are currently impacted by a pause to Migration Mannequin Reclaiming.<br />At 19:43 UTC on 2025-11-19, we paused the queue that processes Mannequin Reclaiming work done at the end of a migration.<br />This was done after observing load that threatened the health of the overall system. The cause has been identified, and efforts to fix it are underway.<br /><br />In the current state:<br /> - all requests to Reclaim Mannequins will be held in a queue<br /> - those requests will be processed when repair work is complete and the queue unpaused, at which time the incident will be closed<br /><br />This does not impact processing of migration runs using GitHub Enterprise Importer, only mannequin reclamation.</p><p><small>Nov <var data-var='date'>19</var>, <var data-var='time'>16:13</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p>tag:www.githubstatus.com,2005:Incident/272567362025-11-20T19:24:33Z2025-11-21T22:50:15ZDisruption with some GitHub services<p><small>Nov <var data-var='date'>20</var>, <var data-var='time'>19:24</var> UTC</small><br><strong>Resolved</strong> - Between 17:16 UTC and 19:08 UTC on November 20, 2025, some users experienced delayed or failed Git Operations for raw file downloads. On average, the error rate was less than 0.2%. This was due to a sustained increase in unauthenticated repository traffic.<br /><br />We mitigated the incident by applying regional rate limiting and are taking steps to improve our monitoring and time to mitigation for similar issues in the future.</p><p><small>Nov <var data-var='date'>20</var>, <var data-var='time'>19:24</var> UTC</small><br><strong>Update</strong> - Mitigation has been applied and operations have returned to normal.</p><p><small>Nov <var data-var='date'>20</var>, <var data-var='time'>18:44</var> UTC</small><br><strong>Update</strong> - We continue to see a small number of errors when accessing raw file content. 
We are deploying a mitigation.</p><p><small>Nov <var data-var='date'>20</var>, <var data-var='time'>18:05</var> UTC</small><br><strong>Update</strong> - We're investigating elevated error rates for a small number of customers when accessing raw file content.</p><p><small>Nov <var data-var='date'>20</var>, <var data-var='time'>18:04</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p>