Requirements and Build Information
The following sections describe the requirements and prerequisites for the Azure reference platform.
Prerequisites
Before starting the deployment, ensure you have the following tools and access:
Terraform 1.0+: Terraform is the infrastructure-as-code tool used to manage the deployment.
kubectl 1.28+: kubectl is the command-line tool for interacting with the Kubernetes cluster.
Azure CLI 2.68.0+: The Azure CLI (az) lets you interact with Azure services from the command line.
Owner access to the Azure subscription: Deploying the full architecture requires Owner access to the target Azure subscription.
Installing Helm 3.17.0 or later is recommended, but not required, for troubleshooting Helm-based Kubernetes deployments.
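If you want Terraform itself to enforce the minimum version above, you can pin it in your configuration. This is a minimal sketch; the exact constraint you choose is up to you:

```hcl
# Optional: have Terraform refuse to run with a version older than 1.0.
terraform {
  required_version = ">= 1.0.0"
}
```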
Azure Access Requirements
The account running this Terraform must be assigned the Owner role on the target subscription because the configuration assigns roles to the managed identity used by the Kubernetes control plane. The role assignments for the Kubernetes cluster are as follows:
Reader - scoped to the Disk Encryption Set created during this process. Allows the identity to read the disk encryption set used for node disk encryption.
Network Contributor - scoped to the resource group created by this Terraform. Allows the identity to bind a managed load balancer to a public IP created during the Terraform run for environment access.
Key Vault Crypto User - scoped to the Azure Key Vault created during this process. Allows the disk encryption set's managed identity to use the key vault for disk encryption.
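For reference, a role assignment like the Reader grant above is typically expressed with the azurerm_role_assignment resource. The sketch below is illustrative only; the resource and identity names are hypothetical placeholders, not the names used by this module:

```hcl
# Illustrative sketch: resource names ("this") are hypothetical.
resource "azurerm_role_assignment" "des_reader" {
  # Scope the grant to the Disk Encryption Set created by the module.
  scope                = azurerm_disk_encryption_set.this.id
  role_definition_name = "Reader"
  # Grant it to the cluster's system-assigned managed identity.
  principal_id         = azurerm_kubernetes_cluster.this.identity[0].principal_id
}
```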
IP-based Access Restrictions
There are five variables that control public access to the environment. You set these in your Terraform configuration file, terraform.tfvars.
IP range variables:
ip_ranges_allowed_to_kubeapi = ["192.168.3.32/32", "192.168.4.1/32"]
ip_ranges_allowed_https = ["192.168.1.0/24"]
ip_ranges_allowed_kv_access = ["192.168.3.32/32", "192.168.4.1/32"]

Network access control variables:

kubernetes_private_cluster_enabled = true # Optional: Make k8s API private-only
logscale_lb_internal_only = true # Optional: Make LogScale internal-only

| Type | Description |
|---|---|
| ip_ranges_allowed_to_kubeapi | The Kubernetes API is publicly available by default. This variable limits which IP ranges can reach the API, and therefore which hosts can run Kubernetes API commands. |
| ip_ranges_allowed_https | The ingress endpoint for LogScale UI access and ingestion is publicly available by default. This variable limits access to the listed ranges. |
| ip_ranges_allowed_kv_access | Access to the Azure Key Vault is limited to the ranges defined here. |
| kubernetes_private_cluster_enabled | When true, the Kubernetes API is only accessible from within the Azure VNet (requires VPN/ExpressRoute for external access). Overrides ip_ranges_allowed_to_kubeapi restrictions. |
| logscale_lb_internal_only | When true, LogScale uses an internal load balancer with no public IP (requires VPN/ExpressRoute for external access). Overrides ip_ranges_allowed_https restrictions. |
Note
ip_ranges_allowed_kv_access and ip_ranges_allowed_to_kubeapi must be set correctly for Terraform to operate as expected. For enterprise deployments, consider enabling kubernetes_private_cluster_enabled and logscale_lb_internal_only for enhanced security.
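As an example, a fully private enterprise deployment might combine the variables like this in terraform.tfvars. The IP ranges shown are placeholders, not recommended values:

```hcl
# Hypothetical terraform.tfvars for a private enterprise deployment.
kubernetes_private_cluster_enabled = true              # API reachable only inside the VNet
logscale_lb_internal_only          = true              # LogScale behind an internal load balancer
ip_ranges_allowed_kv_access        = ["10.10.0.0/16"]  # Ranges permitted to reach Key Vault
```

With both flags set to true, the two ip_ranges_allowed_* variables for the API and HTTPS ingress are overridden and external access requires VPN or ExpressRoute.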
Cluster Size Configuration
The cluster_size.tpl file specifies the available
parameters for different sizes of LogScale clusters. This template
defines the available cluster sizes, for example xsmall,
small, and medium, and their associated
configurations, including node counts, instance types, disk sizes, and
resource limits. The Terraform configuration uses this template to
dynamically configure the LogScale deployment based on the
selected cluster size.
The data from cluster_size.tpl is retrieved and
rendered by the locals.tf file. The
locals.tf file uses the jsondecode
function to parse the template and select the appropriate cluster size
configuration based on the logscale_cluster_size
variable.
Example:
# Local Variables
locals {
  # Render a template of available cluster sizes
  cluster_size_template = jsondecode(templatefile("${path.module}/cluster_size.tpl", {}))
  cluster_size_rendered = {
    for key in keys(local.cluster_size_template) :
    key => local.cluster_size_template[key]
  }
  cluster_size_selected = local.cluster_size_rendered[var.logscale_cluster_size]
}
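For orientation, cluster_size.tpl is a JSON document keyed by size name, which is why jsondecode can parse the rendered template directly. The field names and values below are illustrative only, not the actual contents of the shipped template:

```json
{
  "xsmall": {
    "node_count": 3,
    "instance_type": "Standard_D8s_v5",
    "data_disk_size_gb": 256
  },
  "small": {
    "node_count": 6,
    "instance_type": "Standard_D16s_v5",
    "data_disk_size_gb": 512
  }
}
```

Setting logscale_cluster_size = "small" would then make local.cluster_size_selected resolve to the second object above.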