Deploy GCP Resources and LogScale
To deploy the following Terraform you will need sufficient permissions: the Terraform must be able to create, modify, and delete resources in the following GCP services (the matching APIs also need to be enabled in the project, as sketched after the list):
Compute Engine
GCS
IAM
Kubernetes Engine
VPC Networks
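If the APIs are not yet enabled, you can enable them up front. A minimal sketch covering the services above (VPC networks are part of the Compute Engine API):

$ gcloud services enable \
    compute.googleapis.com \
    container.googleapis.com \
    storage.googleapis.com \
    iam.googleapis.com \
    --project=google-projectid-XXXXXX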
Deploy GCP Infrastructure
Clone the logscale-gcp and logscale-gcp-components repositories:
$ mkdir logscale-gcp-example
$ cd logscale-gcp-example
$ git clone https://github.com/CrowdStrike/logscale-gcp.git
$ git clone https://github.com/CrowdStrike/logscale-gcp-components.git
Create a bucket to store the Terraform state in the region where you will deploy the GKE cluster. By default the Terraform assumes the region is us-east1; this can be changed via the _override.tf file or by overriding it on the command line in the commands that follow.
$ gcloud storage buckets create gs://UNIQUE_PREFIX-logscale-terraform-state-v1 --location=us-east1
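For example, to change the region you could override the variable's default in _override.tf. A minimal sketch, assuming the module exposes the region as a variable named region (check the repository's variables.tf for the exact name):

# _override.tf
variable "region" {
  default = "us-central1"
}

Alternatively, pass it on the command line with -var region=us-central1.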
Update the backend.tf files to set the bucket used for Terraform state to the bucket created in the previous step. The files are located in the gcp directory of both the logscale-gcp and logscale-gcp-components repositories.
# Terraform State Bucket and Prefix
terraform {
  backend "gcs" {
    bucket = "UNIQUE_PREFIX-logscale-terraform-state-v1"
    prefix = "logscale/gcp/terraform/tf.state"
  }
}
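To avoid editing both files by hand, a sed substitution can set the bucket name in one pass. A sketch using GNU sed, run from the logscale-gcp-example directory and assuming the backend.tf files follow the layout above (review the result before running terraform init):

$ sed -i 's/^\( *bucket *= *\).*/\1"UNIQUE_PREFIX-logscale-terraform-state-v1"/' \
    logscale-gcp/gcp/backend.tf \
    logscale-gcp-components/gcp/backend.tf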
Deploy the GCP resources required for LogScale:
$ cd logscale-gcp/gcp
$ terraform init
$ terraform apply -var project_id=google-projectid-XXXXXX -var logscale_cluster_type=basic -var logscale_cluster_size=xsmall
# Example output:
# Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
#
# Outputs:
#
# bastion_hostname = "logscale-XXXX-bastion"
# bastion_ssh = "gcloud compute ssh logscale-XXXX-bastion --project=google-projectid-XXXXXX --zone=us-central1-a --tunnel-through-iap"
# bastion_ssh_proxy = "gcloud compute ssh logscale-XXXX-bastion --project=google-projectid-XXXXXXX --zone=us-central1-a --tunnel-through-iap --ssh-flag=\"-4 -L8888:localhost:8888 -N -q -f\""
# gce-ingress-external-static-ip = "24.14.19.33"
# .......

Once the Terraform has been applied, the credentials for the GKE cluster must be downloaded. The Terraform output contains the gcloud command to do this.
$ terraform output | grep gke_credential_command

# Example output:
# gke_credential_command = "gcloud container clusters get-credentials logscale-XXXX-gke --region us-central1 --project google-projectid-XXXXXX"

Run the gcloud command:

$ gcloud container clusters get-credentials logscale-XXXX-gke --region us-central1 --project google-projectid-XXXXXX

# Example output:
# Fetching cluster endpoint and auth data.
# kubeconfig entry generated for logscale-XXXX-gke

Next, configure kubectl to use the new kubeconfig entry:

$ kubectl config use-context gke_google-projectid-111111_us-central1_logscale-XXXX-gke
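You can confirm that the new context is active:

$ kubectl config current-context

# gke_google-projectid-111111_us-central1_logscale-XXXX-gke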
By default the Terraform will create a bastion host to facilitate access to the GKE cluster. If you have disabled the bastion in favor of a VPN or another means of access, you can skip this step.
Grep the bastion SSH proxy command from the Terraform output.
$ terraform output | grep bastion_ssh_proxy

# Example output:
# bastion_ssh_proxy = "gcloud compute ssh logscale-XXXX-bastion --project=google-projectid-111111 --zone=us-central1-a --tunnel-through-iap --ssh-flag=\"-4 -L8888:localhost:8888 -N -q -f\""

Run the command and set the HTTPS_PROXY environment variable:

$ gcloud compute ssh logscale-XXXX-bastion --project=google-projectid-111111 --zone=us-central1-a --tunnel-through-iap --ssh-flag="-4 -L8888:localhost:8888 -N -q -f"

# Use the SSH proxy in your terminal
$ export HTTPS_PROXY=localhost:8888
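Note that HTTPS_PROXY affects every HTTPS client in that shell, not just kubectl, so unset it when you are finished working with the cluster:

$ unset HTTPS_PROXY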
Verify connectivity to the GKE cluster by listing the pods.
$ kubectl get pods -A

# Example output:
# NAMESPACE    NAME              READY   STATUS    RESTARTS   AGE
# gmp-system   collector-6rls7   2/2     Running   0          46m
# gmp-system   collector-bbwql   2/2     Running   0          45m
# gmp-system   collector-c9z8c   2/2     Running   0          46m
# gmp-system   collector-d4ltf   2/2     Running   0          45m
# gmp-system   collector-g77mx   2/2     Running   0          46m
# gmp-system   collector-mmmc7   2/2     Running   0          45m
# gmp-system   collector-qfpx4   2/2     Running   0          46m
# gmp-system   collector-rhm48   2/2     Running   0          45m
# gmp-system   collector-w77c8   2/2     Running   0          46m
# .......

Next, LogScale will be deployed using the logscale-gcp-components repository. Supplying the LogScale license key and the public URL for the cluster is required.
$ cd ../../logscale-gcp-components/gcp
$ terraform init
$ export TF_VAR_humiocluster_license="YOUR LICENSE KEY"
$ terraform apply -var project_id=google-projectid-XXXXXX -target=null_resource.install_cert_manager -target=helm_release.strimzi_operator -target=null_resource.humio_operator_crds

# Example output:
# │ Warning: Applied changes may be incomplete
# │
# │ The plan was created with the -target option in effect, so some changes requested in the
# │ configuration may have been ignored and the output values may not be fully updated.
# │ Run the following command to verify that no other changes are pending:
# │     terraform plan
# │
# │ Note that the -target option is not suitable for routine use, and is provided only for
# │ exceptional situations such as recovering from errors or mistakes, or when Terraform
# │ specifically suggests to use it as part of an error message.
#
# Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Apply the remaining Terraform:

$ terraform apply -var project_id=google-projectid-XXXXXX -var logscale_cluster_type=basic -var logscale_cluster_size=xsmall -var public_url=logscale.mycompany.com
# Example output:
# kubernetes_manifest.humio_cluster_type_basic[0]: Creating...
# kubernetes_manifest.humio_cluster_type_basic[0]: Creation complete after 3s
# kubernetes_service.logscale_basic_nodeport[0]: Creating...
# kubernetes_ingress_v1.logscale_basic_ingress[0]: Creating...
# kubernetes_ingress_v1.logscale_basic_ingress[0]: Creation complete after 0s [id=logging/logscale-XXXX-basic-ingress]
# kubernetes_service.logscale_basic_nodeport[0]: Creation complete after 0s [id=logging/logscale-XXXX-nodeport]
# kubernetes_manifest.logscale_basic_ingress_backend[0]: Creating...
# kubernetes_manifest.logscale_basic_ingress_backend[0]: Creation complete after 1s
#
# Apply complete! Resources: 19 added, 0 changed, 0 destroyed.
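The public_url must resolve to the cluster's static ingress IP, reported as gce-ingress-external-static-ip in the earlier logscale-gcp apply output. A minimal sketch using Cloud DNS, assuming a managed zone named mycompany-zone (the zone name and DNS provider are placeholders; use whatever manages your domain):

$ gcloud dns record-sets create logscale.mycompany.com. \
    --zone=mycompany-zone --type=A --ttl=300 \
    --rrdatas="$(cd ../../logscale-gcp/gcp && terraform output -raw gce-ingress-external-static-ip)"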
Check the status of the pods by running:

$ kubectl get pods -n logging

# Example output:
# humio-operator-945b5845f-hld8b              1/1   Running   0   5m
# logscale-XXXX-core-ckjksp                   3/3   Running   0   100s
# logscale-XXXX-core-dninhg                   3/3   Running   0   100s
# logscale-XXXX-core-zdevoe                   3/3   Running   0   99s
# logscale-XXXX-strimzi-kafka-kafka-0         1/1   Running   0   6m
# logscale-XXXX-strimzi-kafka-kafka-1         1/1   Running   0   6m
# logscale-XXXX-strimzi-kafka-kafka-2         1/1   Running   0   6m
# logscale-XXXX-strimzi-kafka-zookeeper-0     1/1   Running   0   6m
# logscale-XXXX-strimzi-kafka-zookeeper-1     1/1   Running   0   6m
# logscale-XXXX-strimzi-kafka-zookeeper-2     1/1   Running   0   6m
# strimzi-cluster-operator-86948f6756-9nj4p   1/1   Running   0   6m

Check the status of the HumioCluster by running:
$ kubectl get humiocluster -n logging

# Example output:
# NAME            STATE     NODES   VERSION
# logscale-XXXX   Running

Initially the cluster will go into the Bootstrapping state as it starts up; once all nodes have started, it will transition to the Running state.
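To follow the transition, you can watch the resource until the STATE column reports Running:

$ kubectl get humiocluster -n logging -w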