Public preview Open source

database_observability.postgres

Public preview: This is a public preview component. Public preview components are subject to breaking changes, and may be replaced with equivalent functionality that covers the same use case. To enable and use a public preview component, you must set the stability.level flag to public-preview or below.

Usage

Alloy
database_observability.postgres "<LABEL>" {
  data_source_name = <DATA_SOURCE_NAME>
  forward_to       = [<LOKI_RECEIVERS>]
  targets          = <TARGET_LIST>
}

Arguments

You can use the following arguments with database_observability.postgres:

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| `data_source_name` | `secret` | Data Source Name for the Postgres server to connect to. | | yes |
| `forward_to` | `list(LogsReceiver)` | Where to forward log entries after processing. | | yes |
| `targets` | `list(map(string))` | List of targets to scrape. | | yes |
| `disable_collectors` | `list(string)` | A list of collectors to disable from the default set. | | no |
| `enable_collectors` | `list(string)` | A list of collectors to enable on top of the default set. | | no |

The following collectors are configurable via enable_collectors and disable_collectors:

| Name | Description | Enabled by default |
|------|-------------|--------------------|
| `query_details` | Collect query information. | yes |
| `query_samples` | Collect query samples and wait events information. | yes |
| `schema_details` | Collect schemas, tables, and columns from PostgreSQL system catalogs. | yes |
| `explain_plans` | Collect query explain plans. | yes |

The error_logs collector is always active and processes PostgreSQL error logs in JSON format that are forwarded to it via the forward_to parameter. It does not need to be explicitly enabled.
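For example, you can remove a collector from the default set with disable_collectors. The label, data source, and referenced components in this sketch are placeholders:

Alloy
database_observability.postgres "example" {
  data_source_name   = "postgres://user:pass@localhost:5432/dbname"
  forward_to         = [loki.write.logs_service.receiver]
  targets            = prometheus.exporter.postgres.example.targets

  // Keep the default collectors except schema_details.
  disable_collectors = ["schema_details"]
}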

Blocks

You can use the following blocks with database_observability.postgres:

| Block | Description | Required |
|-------|-------------|----------|
| `cloud_provider` | Provide cloud provider information. | no |
| `cloud_provider` > `aws` | Provide AWS database host information. | no |
| `query_details` | Configure the queries collector. | no |
| `query_samples` | Configure the query samples collector. | no |
| `schema_details` | Configure the schema and table details collector. | no |
| `explain_plans` | Configure the explain plans collector. | no |
| `error_logs` | Configure the error logs collector. | no |

The > symbol indicates deeper levels of nesting. For example, cloud_provider > aws refers to an aws block defined inside a cloud_provider block.

cloud_provider

The cloud_provider block has no attributes. It contains zero or more aws blocks. You use the cloud_provider block to provide information related to the cloud provider that hosts the database under observation. This information is appended as labels to the collected metrics. The labels make it easier for you to filter and group your metrics.

aws

The aws block supplies the ARN identifier for the database being monitored.

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| `arn` | `string` | The ARN associated with the database under observation. | | yes |
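A cloud_provider block with a nested aws block looks like the following. The ARN shown is a placeholder:

Alloy
cloud_provider {
  aws {
    // Replace with the ARN of your RDS or Aurora instance.
    arn = "arn:aws:rds:us-east-1:123456789012:db:orders-db"
  }
}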

query_details

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| `collect_interval` | `duration` | How frequently to collect information from the database. | `"1m"` | no |

query_samples

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| `collect_interval` | `duration` | How frequently to collect information from the database. | `"15s"` | no |
| `disable_query_redaction` | `bool` | Collect unredacted SQL query text, which might include parameters. | `false` | no |
| `exclude_current_user` | `bool` | Don't collect query samples for the current database user. | `true` | no |
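For example, to sample more frequently and include samples from the current database user, configure the block as follows. The values shown are illustrative, not recommendations:

Alloy
query_samples {
  collect_interval     = "5s"
  exclude_current_user = false
}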

schema_details

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| `collect_interval` | `duration` | How frequently to collect information from the database. | `"1m"` | no |
| `cache_enabled` | `bool` | Whether to enable caching of table definitions. | `true` | no |
| `cache_size` | `int` | Cache size. | `256` | no |
| `cache_ttl` | `duration` | Cache TTL. | `"10m"` | no |
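For example, to collect schema details less often and widen the table-definition cache, you could configure the block as follows. The values are illustrative:

Alloy
schema_details {
  collect_interval = "5m"
  cache_size       = 512
  cache_ttl        = "30m"
}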

explain_plans

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| `collect_interval` | `duration` | How frequently to collect information from the database. | `"1m"` | no |
| `per_collect_ratio` | `float64` | The ratio of queries to collect explain plans for. | `1.0` | no |
| `explain_plan_exclude_schemas` | `list(string)` | Schemas to exclude from explain plans. | `[]` | no |
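For example, to collect explain plans for only half of eligible queries and skip PostgreSQL's internal schemas, you could configure the block like this. The values are illustrative:

Alloy
explain_plans {
  collect_interval             = "2m"
  per_collect_ratio            = 0.5
  explain_plan_exclude_schemas = ["pg_catalog", "information_schema"]
}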

error_logs

The error_logs collector is always active and processes PostgreSQL error logs in JSON format (requires log_destination = 'jsonlog' in PostgreSQL configuration). It receives log entries through the component’s forward_to parameter, extracts error information, maps SQLSTATE codes to human-readable error names, and tracks metrics by error type and severity.
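PostgreSQL only emits JSON-formatted logs when configured to do so. A minimal postgresql.conf fragment (PostgreSQL 15 or later supports the jsonlog destination, and the logging collector must be running):

log_destination = 'jsonlog'
logging_collector = on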

Unlike other collectors, error_logs doesn't need to be enabled with enable_collectors; it automatically processes any PostgreSQL JSON logs that are forwarded to the component.

By default, SQL statements and Personally Identifiable Information (PII) in error log fields are redacted to protect sensitive data. This includes:

  • SQL literals in statement and internal_query fields
  • Data values in detail and context fields (such as constraint violations, key values, and row data)
  • Parenthesized values in error messages (e.g., Key (id)=(123) becomes Key (id)=(?))

| Name | Type | Description | Default | Required |
|------|------|-------------|---------|----------|
| `disable_query_redaction` | `bool` | Disable redaction of SQL queries and PII in error logs. Use with caution in production environments. | `false` | no |

Warning

Setting disable_query_redaction to true will expose SQL query parameters, constraint values, and other potentially sensitive information in error logs. Only enable this in non-production environments or when you are certain the logs do not contain sensitive data.

Example

Alloy
database_observability.postgres "orders_db" {
  data_source_name = "postgres://user:pass@localhost:5432/dbname"
  forward_to       = [loki.relabel.orders_db.receiver]
  targets          = prometheus.exporter.postgres.orders_db.targets

  enable_collectors = ["query_samples", "explain_plans"]

  cloud_provider {
    aws {
      arn = "your-rds-db-arn"
    }
  }

  // error_logs collector is always active - configure it here
  error_logs {
    disable_query_redaction = false // Recommended: Keep redaction enabled in production
  }
}

prometheus.exporter.postgres "orders_db" {
  data_source_name   = "postgres://user:pass@localhost:5432/dbname"
  enabled_collectors = ["stat_statements"]
}

loki.relabel "orders_db" {
  forward_to = [loki.write.logs_service.receiver]
  rule {
    target_label = "job"
    replacement  = "integrations/db-o11y"
  }
  rule {
    target_label = "instance"
    replacement  = "orders_db"
  }
}

discovery.relabel "orders_db" {
  targets = database_observability.postgres.orders_db.targets

  rule {
    target_label = "job"
    replacement  = "integrations/db-o11y"
  }
  rule {
    target_label = "instance"
    replacement  = "orders_db"
  }
}

prometheus.scrape "orders_db" {
  targets    = discovery.relabel.orders_db.targets
  job_name   = "integrations/db-o11y"
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

prometheus.remote_write "metrics_service" {
  endpoint {
    url = sys.env("<GRAFANA_CLOUD_HOSTED_METRICS_URL>")
    basic_auth {
      username = sys.env("<GRAFANA_CLOUD_HOSTED_METRICS_ID>")
      password = sys.env("<GRAFANA_CLOUD_RW_API_KEY>")
    }
  }
}

loki.write "logs_service" {
  endpoint {
    url = sys.env("<GRAFANA_CLOUD_HOSTED_LOGS_URL>")
    basic_auth {
      username = sys.env("<GRAFANA_CLOUD_HOSTED_LOGS_ID>")
      password = sys.env("<GRAFANA_CLOUD_RW_API_KEY>")
    }
  }
}

Replace the following:

  • <GRAFANA_CLOUD_HOSTED_METRICS_URL>: The URL for your Grafana Cloud hosted metrics.
  • <GRAFANA_CLOUD_HOSTED_METRICS_ID>: The user ID for your Grafana Cloud hosted metrics.
  • <GRAFANA_CLOUD_RW_API_KEY>: Your Grafana Cloud API key.
  • <GRAFANA_CLOUD_HOSTED_LOGS_URL>: The URL for your Grafana Cloud hosted logs.
  • <GRAFANA_CLOUD_HOSTED_LOGS_ID>: The user ID for your Grafana Cloud hosted logs.

Compatible components

database_observability.postgres can accept arguments from the following components:

database_observability.postgres has exports that can be consumed by the following components:

Note

Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.