Blacksmith Managed Runners - Operational
API - Operational
Website - Operational
GitHub → Actions - Operational
GitHub → API Requests - Operational
GitHub → Webhooks - Operational
We identified the issue as a network maintenance window with our upstream cloud provider, which now appears to be resolved. We're continuing to monitor.
We're seeing alerts indicating that a percentage of Blacksmith cache restore attempts are failing. We're investigating this issue.
We are noticing partial degradation in GitHub Actions services, once again manifesting as longer queue times.
This incident has been resolved, and queue times are returning to normal. We will continue to monitor the situation.
We are noticing partial degradation in GitHub Actions services. This is manifesting as longer queue times and logs not streaming for some jobs.
Queue times have returned to normal. We will continue to monitor the situation.
We are noticing GitHub taking longer than usual to assign jobs to our runners. We are looking into this at the moment.
We're seeing GitHub take longer than usual to assign jobs to our runners. We are currently investigating this incident.
We are seeing some signs of recovery again and are closely monitoring the situation.
We're seeing signs of the same incident reported earlier today, with GitHub's control plane not assigning jobs to our runners.
The incident is now resolved.
We're seeing some signs of recovery as jobs are starting to get assigned to our runners as expected.
We've confirmed that this appears to be at least a partial outage on GitHub's side. We're closely monitoring the situation.
GitHub has resolved the incident on their end, and we are seeing normal queue times once again.
GitHub is continuing to investigate delays in status updates for Actions Workflow Runs, Workflow Job Runs, and Check Steps.
GitHub has declared an incident; we are continuing to monitor the situation: https://www.githubstatus.com/incidents/9yk1fbk0qjjc
We are currently seeing higher-than-normal queue times because GitHub is not assigning jobs to some of our runners. We are also seeing excessive queueing on ubuntu-latest.
This incident has been resolved; error rates are back to zero and jobs are being assigned to our runners.
We are currently investigating this incident. It appears to be an outage affecting GitHub's endpoints and webhook delivery.
We're seeing some recovery and are monitoring the situation.
We're seeing intermittent connection failures to our database provider. We are currently investigating this incident.
The connection issues have subsided, and we are monitoring the system to ensure queue times remain normal.
We're seeing connection issues with our database provider that may result in GitHub Actions cache failures. We are investigating the issue.
We've deployed a fix and are seeing short queue times again. We're actively monitoring this fix.
We are looking into reported delays in job pickup by our ARM runners.
Oct 2024 to Dec 2024