Rails on Heroku: Guide to how many dynos and which size

I just published an [exhaustive and opinionated guide](https://railsautoscale.com/how-many-dynos/) to dynos on Heroku. It answers the questions I've been hearing over and over for years:

* How many dynos should you be running?
* Which dyno type is right for your app?

I hope you find it helpful!

8 thoughts on “Rails on Heroku: Guide to how many dynos and which size”

  1. The one thing that doesn’t feel right to me, based on my own experience, is the compute power: Performance dynos (perhaps because they aren’t virtualized alongside other Heroku customers) were simply faster.

    My app was not memory-constrained on a Standard-2X given my server configuration, but I got a ~20% speedup in response times by switching to the Performance tier (even while keeping the same server configuration).

    Granted, I ran this test in 2017, so whatever EC2 instances sit under the hood may have changed since, but there was definitely some runtime benefit to “playing up a league,” and you had fewer memory surprises to let you sleep at night.

  2. Great article.

    Something relevant here is that Heroku [recommends](https://devcenter.heroku.com/articles/h12-request-timeout-in-ruby-mri) using the rack-timeout gem for all Ruby apps in order to kill long-running requests in Puma.

    In my experience this gem makes a _big_ difference in handling concurrency.

    The fewer threads you run, the better the gem works. Adding threads will at some point [hose your server](https://www.schneems.com/2017/02/21/the-oldest-bug-in-ruby-why-racktimeout-might-hose-your-server/) (a fantastic article). I found this out the hard way when consolidating 1–10 smaller dynos into 1–2 Performance dynos: we went from 2–4 threads per dyno to ~24 threads per dyno.

    As we added threads, requests would gradually start slowing down over several days. Eventually, after a few hours, the app would crash so hard we’d get the Heroku “purple screen of death.” Rack timeout was locking up threads and throwing memory errors because it couldn’t garbage collect, and the only fix was to restart the dynos. This wasn’t an issue when we had 1–10 small dynos being autoscaled by HireFire, because they were always restarting. We removed rack-timeout and saw memory use plummet, which let us add even more threads.

    TL;DR: rack-timeout is great for lots of small dynos, and a disaster for big ones.

  3. What a great article: useful, clear information on a complex topic, with almost no sales pitch. I instantly signed up for your newsletter even though I don’t need to scale any Heroku app at the moment (but who knows about the future?).
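For readers who want to apply the thread-count advice from the comments above, here is a minimal sketch of the Puma configuration under discussion. The env var names (`WEB_CONCURRENCY`, `RAILS_MAX_THREADS`, `RACK_TIMEOUT_SERVICE_TIMEOUT`) are the usual Heroku/Rails conventions; the numbers are illustrative defaults, not recommendations for any particular dyno size.

```ruby
# config/puma.rb -- dyno-sized process/thread settings.
# One Puma worker per process slot, a small number of threads each.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))

threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads threads_count, threads_count  # min == max keeps behavior predictable

preload_app!
port ENV.fetch("PORT", 3000)

# Gemfile -- rack-timeout hooks into the Rails middleware stack automatically:
#   gem "rack-timeout"
# Its service timeout is then set via RACK_TIMEOUT_SERVICE_TIMEOUT (seconds).
```

As the commenter notes, rack-timeout behaves best with few threads per process; when moving to larger dynos, scaling out with more workers (or more dynos) rather than more threads per worker is the safer lever.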

