Here's a practical guide on managing Terraform provider configurations for different Yandex Cloud regions using Terragrunt.

## What you'll need

- Terraform >= 1.9.7
- Terragrunt >= 0.67.16
- Yandex Cloud provider >= 0.129.0

## Setup Steps

Let's look at how to use Terragrunt to dynamically create provider configs for Yandex Cloud. I'll break this down into digestible pieces.

### Basic provider setup

First, we'll set up the base Yandex Cloud config in the root terragrunt.hcl. This will automatically generate versions.tf for each module:

```hcl
locals {
  tf_providers = {
    yandex = ">= 0.129.0"
  }
}

generate "providers_versions" {
  path      = "versions.tf"
  if_exists = "overwrite"
  contents  = <<EOF
terraform {
  required_version = ">= 1.9.7"

  required_providers {
    yandex = {
      source  = "yandex-cloud/yandex"
      version = "${local.tf_providers.yandex}"
    }
  }
}
EOF
}
```
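Each module opts into this generation by including the root config. A minimal sketch of a child terragrunt.hcl (the `include` label and the argument-less `find_in_parent_folders()` are the common convention, not something mandated by this setup):

```hcl
# Child module's terragrunt.hcl (sketch): including the root config is what
# triggers the root generate blocks, so each module gets its own versions.tf.
include "root" {
  path = find_in_parent_folders()
}
```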
### Region settings

For regions like the newly created KZ region, additional endpoints need to be specified, because the provider defaults to the RU region endpoints. We can set them at the project level, for example in env.hcl, and generate providers.tf dynamically for each module:

```hcl
locals {
  cloud_id    = "SOME_ID"
  folder_id   = "SOME_ID"
  sa_key_file = "${get_repo_root()}/key.json"

  endpoint         = "api.yandexcloud.kz:443"  # Region-specific
  storage_endpoint = "storage.yandexcloud.kz"  # Region-specific
}

generate "providers_configs" {
  path      = "providers.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
provider "yandex" {
  service_account_key_file = "${local.sa_key_file}"
  cloud_id                 = "${local.cloud_id}"
  folder_id                = "${local.folder_id}"
  endpoint                 = "${local.endpoint}"
  storage_endpoint         = "${local.storage_endpoint}"
}
EOF
}
```
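If you prefer to keep the generate block in the root terragrunt.hcl and only the region-specific values in env.hcl, one way to wire the two together is `read_terragrunt_config` — a minimal sketch, assuming env.hcl sits somewhere above each module directory:

```hcl
# Root terragrunt.hcl (sketch): pull region settings out of the nearest env.hcl
locals {
  env = read_terragrunt_config(find_in_parent_folders("env.hcl")).locals
}

generate "providers_configs" {
  path      = "providers.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
provider "yandex" {
  service_account_key_file = "${local.env.sa_key_file}"
  cloud_id                 = "${local.env.cloud_id}"
  folder_id                = "${local.env.folder_id}"
  endpoint                 = "${local.env.endpoint}"
  storage_endpoint         = "${local.env.storage_endpoint}"
}
EOF
}
```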
### Additional providers

If you're working with Kubernetes / Kubectl / Helm in Terraform, you'll need these additional provider configs to manage your cluster. The simplest solution is to pass cluster_id from a Terragrunt dependency into the called module:

```hcl
dependencies {
  paths = ["path/to/your/mks"]
}

dependency "mks" {
  config_path = "path/to/your/mks"

  mock_outputs_allowed_terraform_commands = ["init", "validate", "plan", "destroy"]
  mock_outputs_merge_strategy_with_state  = "shallow"
  mock_outputs = {
    cluster_id = "cluster_id"
  }
}

terraform {
  source = "path/to/your/module"
}

inputs = {
  cluster_id = dependency.mks.outputs.cluster_id

  # ... <OTHER_INPUTS> ...
}
```
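For the dependency to resolve, the mks module has to expose the cluster ID as an output — a minimal sketch, assuming the cluster resource in that module is named `this`:

```hcl
# In the mks module (assumed resource name):
output "cluster_id" {
  description = "Managed Kubernetes cluster ID"
  value       = yandex_kubernetes_cluster.this.id
}
```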
Then use data resources in the module to configure the providers:

```hcl
variable "cluster_id" {
  type        = string
  default     = null
  description = "Managed Kubernetes Service cluster ID"
}

data "yandex_kubernetes_cluster" "this" {
  cluster_id = var.cluster_id
}

data "yandex_client_config" "this" {}

provider "kubernetes" {
  host                   = data.yandex_kubernetes_cluster.this.master[0].external_v4_endpoint
  cluster_ca_certificate = data.yandex_kubernetes_cluster.this.master[0].cluster_ca_certificate
  token                  = data.yandex_client_config.this.iam_token
}

provider "helm" {
  kubernetes {
    host                   = data.yandex_kubernetes_cluster.this.master[0].external_v4_endpoint
    cluster_ca_certificate = data.yandex_kubernetes_cluster.this.master[0].cluster_ca_certificate
    token                  = data.yandex_client_config.this.iam_token
  }
}

provider "kubectl" {
  host                   = data.yandex_kubernetes_cluster.this.master[0].external_v4_endpoint
  cluster_ca_certificate = data.yandex_kubernetes_cluster.this.master[0].cluster_ca_certificate
  token                  = data.yandex_client_config.this.iam_token
}
```
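These providers also need to be declared in versions.tf. A sketch extending the root generate block from earlier — the sources shown (hashicorp/kubernetes, hashicorp/helm, gavinbunney/kubectl) are the commonly used ones, and the version constraints are illustrative:

```hcl
locals {
  tf_providers = {
    yandex     = ">= 0.129.0"
    kubernetes = ">= 2.30.0"  # illustrative pin
    helm       = ">= 2.14.0"  # illustrative pin
    kubectl    = ">= 1.14.0"  # illustrative pin
  }
}

generate "providers_versions" {
  path      = "versions.tf"
  if_exists = "overwrite"
  contents  = <<EOF
terraform {
  required_version = ">= 1.9.7"

  required_providers {
    yandex = {
      source  = "yandex-cloud/yandex"
      version = "${local.tf_providers.yandex}"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "${local.tf_providers.kubernetes}"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "${local.tf_providers.helm}"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "${local.tf_providers.kubectl}"
    }
  }
}
EOF
}
```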
## Notes

**Using Terragrunt for configuration management:** Terragrunt simplifies configuration management for multiple environments by dynamically generating provider configurations via the generate block in the .hcl files. This setup allows for easy handling of multi-region deployments from a single configuration source.

**Set up a JSON key for Terragrunt:** To access Yandex Cloud resources, place the service account's JSON key in the root directory of your project. Don't forget to add it to .gitignore. Alternatively, you can use a static access key.

**Configuring the module:** Even if you don't manage the Terraform module directly, you can almost always override its configuration with a generate block when calling the module from Terragrunt, as sketched below.
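For example, a child terragrunt.hcl can replace the providers.tf generated by the root config for just one module — a minimal sketch, assuming the usual merge behavior where a child generate block with the same label takes precedence; all values here are placeholders:

```hcl
# Child terragrunt.hcl (sketch): module-local override of the provider config
generate "providers_configs" {
  path      = "providers.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
provider "yandex" {
  service_account_key_file = "another-key.json"  # hypothetical override
  cloud_id                 = "SOME_OTHER_ID"
  folder_id                = "SOME_OTHER_ID"
  endpoint                 = "api.yandexcloud.kz:443"
  storage_endpoint         = "storage.yandexcloud.kz"
}
EOF
}
```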
## Conclusion

This setup gives you a clean way to manage Terraform configs across different Yandex Cloud regions. It handles authentication properly and works well whether you're just using basic cloud resources or diving into Kubernetes and Helm deployments.