January 03, 2024

Migrating from Sidekiq to Solid Queue

With the recent release of Solid Queue, and a little bit of extra free time over the holidays, I decided to migrate my apps away from Sidekiq. The prospect of simplifying my infrastructure by removing Redis from the equation, combined with my relatively vanilla requirements for a background queuing system, was enough to convince me this was a worthwhile exercise.

I won’t bore you with every machination of copying, pasting, and deleting code, since it’s relatively straightforward, but I did think it was worth sharing a few highlights, considerations, and gotchas I came across.

The Migration

The switchover in my case was very straightforward, since my use of Sidekiq was entirely via Active Job. It consisted of:

- Swapping the sidekiq gem for solid_queue in my Gemfile
- Pointing Active Job at the new adapter (shown below)
- Running Solid Queue's migrations to create its tables
- Updating my Procfile so the worker process boots Solid Queue

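The adapter swap itself is a single line of Rails config (shown here in config/application.rb, though an environment-specific file works just as well):

# config/application.rb
config.active_job.queue_adapter = :solid_queue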
I migrated two apps to Solid Queue. One of them mainly uses Active Job to send emails and run reports, so I simply relied on the default config, but my other app utilizes multiple queues, so I ended up with the following config for it:

# config/solid_queue.yml
default: &default
  workers:
    - queues: "*"
      polling_interval: 2
    - queues: real_time
      polling_interval: 0.1

development:
  <<: *default

test:
  <<: *default

production:
  <<: *default

My goal here was to create a priority queue, called real_time, so I added a second worker dedicated solely to those jobs, polling for them at a much higher frequency.
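Routing a job to that queue is then just a matter of declaring it with Active Job's queue_as. NotificationJob here is a hypothetical example:

class NotificationJob < ApplicationJob
  # Jobs on the real_time queue get picked up by the dedicated
  # low-latency worker defined in config/solid_queue.yml
  queue_as :real_time

  def perform(user_id)
    # deliver the time-sensitive notification...
  end
end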

A Few Considerations

My app with the completely vanilla config deployed and worked without a hitch, but I did run into a few hiccups with my custom config.

DB Connection Pool

I deploy my apps on Heroku, and by default the app is set up to look at the DB_POOL environment variable to size the connection pool in config/database.yml. This is fine for app instances, but since I’m running a dedicated worker dyno for Solid Queue, some extra work was needed.
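That wiring looks roughly like this (a sketch; your database.yml will likely have more keys):

# config/database.yml
production:
  <<: *default
  pool: <%= ENV.fetch("DB_POOL", 5) %>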

By default each Solid Queue worker has 5 threads, meaning that since my app runs two workers, I needed a connection pool that allowed 10 connections. I was able to accomplish this with a specific environment variable, SOLID_QUEUE_DB_POOL, as suggested in the Heroku docs.

# Procfile
web: bundle exec puma -C config/puma.rb
worker: DB_POOL=$SOLID_QUEUE_DB_POOL bundle exec rake solid_queue:start
release: bundle exec rails db:migrate

Error Handling

Note: A previous version of this post referred to the on_thread_error setting as a way to catch job errors. After some more digging I realized this setting is not for job errors, but for errors that occur in the worker’s threads themselves.
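If you do want to be notified of those thread-level errors, the setting can be configured like so (this mirrors the example in the Solid Queue README):

# config/environments/production.rb
config.solid_queue.on_thread_error = ->(exception) { Rails.error.report(exception, handled: false) }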

By default the library silently handles job errors and marks the jobs as failed, but I prefer to be proactively notified when something goes wrong. To do this you’ll need to leverage an around_perform callback to wrap your jobs in an error handler, then report errors as necessary:

class ApplicationJob < ActiveJob::Base
  around_perform do |job, block|
    capture_and_record_errors(job, block)
  end

  private

  def capture_and_record_errors(job, block)
    block.call
  # I had to use rescue here instead of a `Rails.error` block because Honeybadger ignores the `Rails.error.report` call
  # in favor of their own error handler, which is fine in most cases, but unfortunately doesn't work here. Report would be
  # great here because it re-raises the error, but instead I have to do that manually
  rescue Exception => e
    Honeybadger.notify(e, context: error_context(job))
    raise e
  end

  def error_context(job)
    {
      active_job: job.class.name,
      arguments: job.arguments,
      scheduled_at: job.scheduled_at,
      job_id: job.job_id
    }
  end
end
A huge thanks goes out to Rosa Gutierrez for not only her wonderful work on this gem, but also for taking the time to respond to GitHub issues.

Failures and Retries

By default, Sidekiq has a built-in retry mechanism, but Solid Queue does not. If automatic retrying is important to you, you’ll need to configure it on a per-job basis, or globally in application_job.rb, via Active Job’s retry_on API.
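A blanket policy in application_job.rb might look like this (the error class and timings are just placeholders; tune them to your jobs):

# app/jobs/application_job.rb
class ApplicationJob < ActiveJob::Base
  # Retry transient failures a few times, waiting 5 seconds between attempts,
  # before letting the job land in Solid Queue's failed executions
  retry_on Timeout::Error, wait: 5.seconds, attempts: 3
end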

Wrapping Up

So far I’m feeling good about the simplification this has afforded me and my apps. I do find myself missing the UI that Sidekiq provided, but it doesn’t seem we’ll have to wait too long for an answer there either.