We’ve got a query which runs pretty slow. It takes about 1.7 ms per row to return, for a wall-clock time of around 30 seconds in production. The delay is large enough that clients are complaining and it’s creating a lot of slow query log entries. At its core, the query fetches the journal entries belonging to a set of daily journals created by a specific person. We implemented the query using the ORM and I would prefer to keep the description at that level. The rows selected from the journal entry table are approximately 1.7% of that table, yet the shape of the query leads Postgres to scan the entire (large) table.

I think my next step is to change the query to select the journals within the given time range rather than the journal entries. And that code is a steaming pile of spaghetti, err. After looking through it and trying to carve out a special query case, I realized I need to grab a QuerySet for a different entity entirely. Perhaps I’m approaching this problem from the wrong direction and the client should really be querying for the journals first, then drilling down into the journal entries over time as we need them. Currently we fetch everything up front and do a lot of work in the client. It was a fine early implementation idea, but the implementation is definitely showing the strain now.

A client approach?

Preempted by other work :-(.