PgBouncer internals

PgBouncer is a connection pooler for Postgres. It has all kinds of internal limits and limited resources. Here is how it looks from a client's (say, some web application's) point of view:

1. Client connects to PgBouncer.
2. Client makes an SQL request / query / transaction.
3. Gets a response.
4. Repeat steps 2-3 as many times as needed.

Here is the client connection state diagram:

During the LOGIN phase/state (the CL_ prefix stands for client), PgBouncer might authorize a client based on some local info (such as an auth_file, certificates, PAM or HBA files), or in a remote way, with an auth_query in a database. Thus a client connection, while logging in, might need to execute a query; let's show that as an Executing substate.

Clients in the CL_ACTIVE state might also be actually executing queries, and thus linked to actual server connections by PgBouncer, or idling, doing nothing. This linking/matching of clients and server connections is the whole raison d'etre of PgBouncer. PgBouncer links a client with a server connection only for some time, depending on the database's pool_mode: either for a session, a transaction, or just one request. As transaction pooling is the most common, we'll assume it for the rest of this post.

So a client in the cl_active state might or might not be linked to a server connection. To account for that, we split this state in two: active and active-linked/executing. So here is a new diagram:

These server connections that clients get linked to are "pooled": limited in number and reused. Because of that, it might happen that while a client sends some request (beginning a transaction or performing a query), the corresponding server connection pool is exhausted, i.e. PgBouncer has opened as many connections as it was allowed to and all of them are occupied by (linked to) other clients. In this scenario PgBouncer puts the client into a queue, and this client's connection goes into the CL_WAITING state.
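The linking and queueing mechanics described above can be sketched as a toy model. This is purely illustrative: the Pool class, its method names, and the state strings below are inventions for this sketch, not PgBouncer's actual code or API.

```python
from collections import deque

# Toy model of client/server linking in a pooler (illustrative only,
# not PgBouncer's real implementation). pool_size caps server conns.
class Pool:
    def __init__(self, pool_size):
        self.pool_size = pool_size
        self.linked = {}        # client -> server connection id
        self.free = []          # idle server connections
        self.opened = 0         # server connections opened so far
        self.waiting = deque()  # clients queued in cl_waiting

    def client_state(self, client):
        if client in self.linked:
            return "active-linked"
        if client in self.waiting:
            return "cl_waiting"
        return "cl_active"

    def request(self, client):
        """Client begins a transaction: link it or queue it."""
        if self.free:                       # reuse an idle server conn
            self.linked[client] = self.free.pop()
        elif self.opened < self.pool_size:  # open a new server conn
            self.linked[client] = self.opened
            self.opened += 1
        else:                               # pool exhausted -> queue
            self.waiting.append(client)

    def done(self, client):
        """Transaction ends: hand the server conn to the next waiter."""
        conn = self.linked.pop(client)
        if self.waiting:
            self.linked[self.waiting.popleft()] = conn
        else:
            self.free.append(conn)

pool = Pool(pool_size=2)
for c in ("a", "b", "c"):
    pool.request(c)
print(pool.client_state("c"))  # 'cl_waiting': pool of 2 is exhausted
pool.done("a")
print(pool.client_state("c"))  # 'c' got the freed connection
```

With transaction pooling this link/unlink cycle happens on every transaction, which is why a pool much smaller than the client count can still serve everyone, as long as transactions are short.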
This might happen while a client is still logging in as well, so there is a separate state for that: CL_WAITING_LOGIN (in addition to the plain CL_WAITING).

On the other end there are the server connections: from PgBouncer to the actual database. Those have corresponding states: SV_LOGIN while authorizing, SV_ACTIVE when linked with (and used or not by) a client connection, and SV_IDLE when free.

PgBouncer has an administration interface available through a connection to a special 'virtual' database named pgbouncer. There are a number of SHOW commands in it; one of them, SHOW POOLS, shows the number of connections in each state for each pool:

We see here 4 client connections opened, all of them cl_active, and 5 server connections: 4 sv_active and one sv_used. Here is a nice write-up on how to monitor these states. But basically you would want to track them in whatever way you do monitoring, so that you have a historical picture.

Pool size

It's not that simple: PgBouncer has 5 different settings related to limiting connection count!

pool_size can be specified for each proxied database. If not set, it defaults to default_pool_size, which in turn defaults to 20.

max_db_connections is exactly suitable for covering this problem: it limits the total number of connections to any one database, so badly behaving clients won't be able to create too many Postgres backends.

reserve_pool_size is a limit on an additional, reserve pool, which kicks in when a regular pool is exhausted, i.e. there are already pool_size open server connections. As I understand it, it was designed to help serve a burst of clients.

max_user_connections limits the total number of connections from one user to any database. From my point of view it's a very strange limit; it makes sense only in the case of multiple databases with the same users.

max_client_conn limits the total number of incoming client connections. It differs from max_user_connections in that it includes connections from any user.
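Put together, these limits live in pgbouncer.ini. A minimal sketch, assuming a single proxied database named mydb; the values here are arbitrary examples, not recommendations:

```ini
; pgbouncer.ini (fragment, example values only)
[databases]
; per-database pool_size overrides default_pool_size
mydb = host=127.0.0.1 port=5432 dbname=mydb pool_size=30

[pgbouncer]
pool_mode = transaction
default_pool_size = 20      ; fallback pool_size per database/user pair
reserve_pool_size = 5       ; extra connections when a pool is exhausted
max_db_connections = 50     ; cap on server connections to one database
max_user_connections = 60   ; cap on server connections from one user
max_client_conn = 500       ; cap on incoming client connections
```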
Besides SHOW POOLS, PgBouncer's administration interface also has a SHOW DATABASES command, which shows the actually applied limits and all configured and currently present pools:

So dividing current_connections by pool_size gives you pool utilization, and you can trigger an alert if it gets somewhere close to 100%.

PgBouncer also provides a SHOW STATS command with statistics (not a surprise, I know) on requests and traffic for every proxied database:

Here, for the purpose of measuring pool utilization, we are mostly interested in total_query_time: the total number of microseconds spent by PgBouncer actively connected to PostgreSQL, executing queries. Dividing this by the respective pool size (considering pool size to be the number of seconds that all the server connections together might spend serving queries within one wall-clock second) gives another measure/estimate of pool utilization; let's call it "query time utilization".

Here is my article on monitoring PgBouncer with the USE and RED monitoring methods.

Why is it not enough to watch Utilization, and why do you need a Saturation metric as well? The problem is that even with cumulative stats like total_query_time, one can't tell whether there were short periods of high utilization between the two moments at which we looked at the stats. For example, say you have some cron jobs configured to start simultaneously and make some queries to a database. If these queries are short enough, i.e. shorter than the stats collection period, then measured utilization might still be low, while at cron start time the jobs might exhaust the pool. Looking only at the Utilization metric, you won't be able to diagnose that.

How can we track that in PgBouncer? A straightforward (and naive) approach is to count clients in the cl_waiting state, which we discussed above, in the SHOW POOLS output. Under normal circumstances you won't see any, so a number of waiting clients greater than 0 means pool saturation.
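Both utilization estimates can be computed from the admin console output. A minimal sketch in Python, assuming you have already sampled the relevant fields; the function names and the numbers are made up for illustration:

```python
# Two ways to estimate pool utilization, as described above.
# Inputs mimic SHOW DATABASES / SHOW STATS fields; the sample
# numbers below are invented for illustration.

def connection_utilization(current_connections, pool_size):
    """From SHOW DATABASES: fraction of the pool currently occupied."""
    return current_connections / pool_size

def query_time_utilization(total_query_time_prev, total_query_time_now,
                           interval_seconds, pool_size):
    """From two SHOW STATS samples: total_query_time is cumulative
    microseconds spent executing queries.  A pool of N connections can
    serve at most N * interval_seconds seconds of query time between
    the two samples."""
    busy_seconds = (total_query_time_now - total_query_time_prev) / 1e6
    return busy_seconds / (pool_size * interval_seconds)

# e.g. 18 of 20 connections in use right now:
print(connection_utilization(18, 20))  # 0.9

# e.g. 450s of query time over a 60s interval with pool_size=20
# (20 * 60 = 1200 available connection-seconds):
print(query_time_utilization(0, 450_000_000, 60, 20))  # 0.375
```

Note that the second estimate is an average over the whole sampling interval, which is exactly why it can hide the short saturation spikes discussed above.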
But as you know, you can only sample SHOW POOLS, and this leads to the possibility of missing such waits.

Check out my other articles on Postgres and monitoring.