<![CDATA[David Nguyen]]><![CDATA[A passionate full-stack developer from VIETNAM.
]]>https://eplus.devRSS for NodeThu, 02 Apr 2026 04:28:48 GMT<![CDATA[en]]>60<![CDATA[Arcade March 2026 Sprint 4 (Solution)]]><![CDATA[Overview
Welcome to Arcade March 2026 Sprint 4! This quick quiz will help you check your understanding and stay on track as you continue to build your Google Cloud skills.
Click Start Lab to begin.
Note: Take a moment to read each question carefully and double-check your answers before submitting. To ensure your completion is recorded, keep the quiz open for at least 10 minutes. Submitting earlier may result in an incomplete attempt.
Quiz
In Google Cloud, how will you send messages to a Pub/Sub topic using the Python library?
Select ONE answer that would be relevant
DNS client
Storage client
SQL client
Publisher client
In Google Cloud, how will you pull messages from a Pub/Sub topic to your application?
Select ONE answer that would be relevant
Registry
Snapshot
Inventory
Subscription
In the "Multimodal Content Generation with Gemini on Vertex AI" lab, which specific model version are you tasked to invoke?
Select ONE answer that would be relevant
imagen-3.0
gemini-2.0-flash
text-bison@001
gemini-1.5-pro
What specific type of "multimodal" input does the generative model process in the "Multimodal Content Generation with Gemini on Vertex AI" lab?
Select ONE answer that would be relevant
Audio and Video
Text only
A mix of text and images
Images only
According to the schema provided in the Data Ingestion into BigQuery from Cloud Storage lab, which data type is required for the employee_id column?
Select ONE answer that would be relevant
STRING
INTEGER
FLOAT
BOOLEAN
In the Data Ingestion into BigQuery from Cloud Storage lab, you are tasked with importing data into the new table. Where is the source employees.csv file stored?
Select ONE answer that would be relevant
Google Drive
Cloud Storage
Local Disk
Cloud SQL
]]>https://eplus.dev/arcade-march-2026-sprint-4-solutionhttps://eplus.dev/arcade-march-2026-sprint-4-solution<![CDATA[Arcade March 2026 Sprint 4 (Solution)]]><![CDATA[Arcade March 2026 Sprint 4]]><![CDATA[Arcade March 2026]]><![CDATA[David Nguyen]]>Mon, 16 Mar 2026 03:04:46 GMT<![CDATA[GitHub Copilot for Students: What Changed in March 2026]]><![CDATA[GitHub Updates Copilot Access for Students (March 2026)
GitHub recently announced an update regarding how GitHub Copilot will be provided to verified students. The goal is to ensure that Copilot remains free and sustainable for millions of students worldwide.
Key Changes
Starting March 12, 2026, Copilot access for verified students will be managed under a new plan called:
GitHub Copilot Student Plan
Students who already have GitHub Education benefits do not need to take any action. Their Copilot access will continue automatically.
Model Availability Changes
As part of this transition, some premium models will no longer be available for manual selection under the student plan, including:
GPT-5.4
Claude Opus
Claude Sonnet
Although these models are removed from manual selection, students will still have access to powerful AI models through Auto mode.
Auto Mode
With Auto mode, Copilot automatically selects the most suitable model for the task. These models may come from providers such as:
OpenAI
Anthropic
Google
GitHub plans to continue improving Auto mode and adding new models over time.
Why This Change?
GitHub states that these adjustments are necessary to:
Keep Copilot free for verified students
Support a growing global student community
Maintain long-term sustainability of the service
Future Updates
GitHub will continue collecting feedback from students and educators and may adjust:
Available models
Feature limits
Usage policies
Additionally, GitHub is working on making it easier for students to upgrade from the Copilot Student plan to Copilot Pro in the future.
Source
GitHub Official Announcement GitHub Copilot for Students Update (March 2026)
At GitHub, we believe the next generation of developers should have access to the latest industry technology. That's why we provide students with free access to the GitHub Student Developer Pack, run the Campus Experts program to help student leaders build tech communities, and partner with Major League Hacking (MLH) and Hack Club to support student hackathons and youth-led coding communities. It's also why we offer verified students free access to GitHub Copilot; today, nearly two million students are using it to build, learn, and explore new ideas.
Copilot is evolving quickly, with new capabilities, models, and experiences shipping fast. As Copilot evolves and the student community continues to grow, we need to make some adjustments to ensure we can provide sustainable, long-term GitHub Copilot access to students worldwide.
Our commitment to providing free access to GitHub Copilot for verified students is not changing. What is changing is how Copilot is packaged and managed for students.
What this means for you
Starting today, March 12, 2026, your complimentary Copilot access will be managed under a new GitHub Copilot Student plan, alongside your existing GitHub Education benefits. Your academic verification status will not change, and there is nothing you need to do to continue using Copilot. You will see that you are on the GitHub Copilot Student plan in the UI, and your existing premium request unit (PRU) entitlements will remain unchanged.
As part of this transition, however, some premium models, including GPT-5.4 and the Claude Opus and Sonnet models, will no longer be available for self-selection under the GitHub Copilot Student Plan. We know this will be disappointing, but we're making this change so we can keep Copilot free and accessible for millions of students around the world.
That said, through Auto mode, you'll continue to have access to a powerful set of models from providers such as OpenAI, Anthropic, and Google. We'll keep adding new models and expanding the intelligence in Auto mode that helps match the right model to your task and workflow. We support a global community of students across thousands of universities and dozens of time zones, so we're being intentional about how we roll out changes. Over the coming weeks, we will be making additional adjustments to available models or usage limits on certain features, the specifics of which we'll be testing with your feedback.
We want your input
Your experience matters to us, and your feedback will directly shape how this plan evolves. Leave a comment below: what's working for you, what gets in the way, and what you need most. We will also continue to host 1:1 conversations with students, educators, and Campus Experts, and use insights from our recent November 2025 student survey to help inform what's next.
GitHub's investment in students is not slowing down. We are committed to ensuring that Copilot remains a powerful, free tool for verified students, and we will continue to improve and expand the student experience over time.
We will share updates as we learn more from testing and your feedback. Thank you for building with us.
We're currently working on making it easier to upgrade from your GitHub Copilot Student plan to GitHub Copilot Pro. We'll share an update here soon.
https://github.com/orgs/community/discussions/189268#discussioncomment-16108204]]>https://eplus.dev/github-copilot-for-students-what-changed-in-march-2026https://eplus.dev/github-copilot-for-students-what-changed-in-march-2026<![CDATA[GitHub Copilot for Students: What Changed in March 2026]]><![CDATA[Understanding the New GitHub Copilot Student Plan (2026 Update)]]><![CDATA[GitHub Copilot Student Plan – 2026 Update]]><![CDATA[David Nguyen]]>Fri, 13 Mar 2026 01:54:00 GMT<![CDATA[Arcade March 2026 Sprint 3 (Solution)]]><![CDATA[Overview
Welcome to Arcade March 2026 Sprint 3! This quick quiz will help you check your understanding and stay on track as you continue to build your Google Cloud skills.
Click Start Lab to begin.
Note: Take a moment to read each question carefully and double-check your answers before submitting. To ensure your completion is recorded, keep the quiz open for at least 10 minutes. Submitting earlier may result in an incomplete attempt.
Quiz
In Google Cloud, how will you count specific occurrences within your log entries using Cloud Logging?
Select ONE answer that would be relevant
BigQuery export
Cloud SQL
Pub/Sub trigger
Logs-based metrics
In Google Cloud, how will you receive an automatic notification when a Cloud Logging metric reaches a threshold?
Select ONE answer that would be relevant
Alerting policy
Firewall rule
IAM role
Load balancer
In Google Cloud, how will you troubleshoot code errors for your Cloud Run Functions?
Select ONE answer that would be relevant
Cloud Artifacts
Cloud Logging
Cloud Build
Cloud Domains
In Google Cloud, how will you check the execution duration of your Cloud Run Function in the Console?
Select ONE answer that would be relevant
Metadata tab
Monitoring tab
Source tab
Variables tab
In Google Cloud, how will you create a virtual machine running a Windows Server operating system?
Select ONE answer that would be relevant
Compute Engine
App Engine
Cloud Run
Cloud Functions
In Google Cloud, how will you connect to your Windows VM instance to manage it remotely?
Select ONE answer that would be relevant
HTTP
SMTP
RDP
SSH
]]>https://eplus.dev/arcade-march-2026-sprint-3-solutionhttps://eplus.dev/arcade-march-2026-sprint-3-solution<![CDATA[Arcade March 2026 Sprint 3 (Solution)]]><![CDATA[Arcade March 2026 Sprint 3]]><![CDATA[Arcade March 2026]]><![CDATA[David Nguyen]]>Fri, 13 Mar 2026 01:29:13 GMT<![CDATA[Build an AI Science Tutor Application with Vertex AI (Solution)]]><![CDATA[Overview
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included cloud terminal is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
Challenge scenario
Scenario: You're a developer at an educational technology company that provides online tutoring and educational resources. The company wants to create an interactive science tutoring assistant to help students with questions related to astronomy and other scientific topics, and has decided to use Google Cloud's Vertex AI SDK to build a chat-based solution that can provide informative answers. You need to complete the following tasks:
Task: Develop a Python function named science_tutoring(prompt). This function should invoke the gemini-2.5-flash-lite model with the supplied prompt and return the generated response. For this challenge, use the prompt: "How many planets are there in the solar system?"
Follow these steps to interact with the Generative AI APIs using Vertex AI Python SDK.
Click File > New File to open a new file within the Code Editor.
Write the Python code to use Google's Vertex AI SDK to interact with the pre-trained Text Generation AI model.
Create and save the Python file.
Execute the Python file by running the command below in the terminal within the Code Editor pane, replacing FILE_NAME with your file's name, to view the output.
/usr/bin/python3 /FILE_NAME.py
Note: You can ignore any warnings related to Python version dependencies.
Click Check my progress to verify the objective.
Create and run a file to send a chat prompt to Gen AI and receive a response
Solution of Lab
https://www.youtube.com/watch?v=Yn_4Ij-7ilw
```plaintext
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/build-an-ai-science-tutor-application-with-vertex-ai-solution/lab.sh
source lab.sh
```

**Script Alternative**
```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Replace with your actual project details
PROJECT_ID = "your-project-id"
LOCATION = "us-central1"

# Initialize Vertex AI
vertexai.init(project=PROJECT_ID, location=LOCATION)

def science_tutoring(prompt):
    """
    Sends a prompt to the Gemini 2.5 Flash Lite model
    and returns the generated response.
    """
    try:
        # Load the Gemini 2.5 Flash Lite model
        model = GenerativeModel("gemini-2.5-flash-lite")

        # Generate the response
        response = model.generate_content(prompt)
        return response.text
    except Exception as e:
        return f"Error occurred: {str(e)}"

if __name__ == "__main__":
    test_prompt = "How many planets are there in the solar system?"
    result = science_tutoring(test_prompt)
    print("Response:")
    print(result)
```
]]>https://eplus.dev/build-an-ai-science-tutor-application-with-vertex-ai-solutionhttps://eplus.dev/build-an-ai-science-tutor-application-with-vertex-ai-solution<![CDATA[Build an AI Science Tutor Application with Vertex AI (Solution)]]><![CDATA[Build an AI Science Tutor Application with Vertex AI]]><![CDATA[David Nguyen]]>Wed, 11 Mar 2026 12:05:54 GMT<![CDATA[Arcade March 2026 Sprint 2 (Solution)]]><![CDATA[Overview
Welcome to Arcade March 2026 Sprint 2! This quick quiz will help you check your understanding and stay on track as you continue to build your Google Cloud skills.
Click Start Lab to begin.
Note: Take a moment to read each question carefully and double-check your answers before submitting. To ensure your completion is recorded, keep the quiz open for at least 10 minutes. Submitting earlier may result in an incomplete attempt.
Quiz
In Google Cloud, how will you scale your Managed Instance Group (MIG) based on application-specific metrics?
Select ONE answer that would be relevant
Custom metrics
CPU usage
Static sizing
Manual toggle
In Google Cloud, how will you create a logical grouping of keys within Cloud KMS?
Select ONE answer that would be relevant
Project settings
Billing reports
Cloud Monitoring
User roles
In Google Cloud, how will you create a visual representation of your resource health using Cloud Monitoring?
Select ONE answer that would be relevant
Datasets
Topics
Buckets
Dashboards
In Google Cloud, how will you verify your application is globally accessible using Cloud Monitoring?
Select ONE answer that would be relevant
Uptime checks
Log exports
Data streams
Code traces
In Google Cloud, how will you aggregate monitoring data from several projects into a single unified view?
Select ONE answer that would be relevant
Service account
Folder sync
Shared VPC
Metrics Scope
In Google Cloud, how will you define the primary project used to view multi-project data in Cloud Monitoring?
Select ONE answer that would be relevant
Scoping project
Host project
Target project
Guest project
]]>https://eplus.dev/arcade-march-2026-sprint-2-solutionhttps://eplus.dev/arcade-march-2026-sprint-2-solution<![CDATA[Arcade March 2026 Sprint 2 (Solution)]]><![CDATA[Arcade March 2026 Sprint 2]]><![CDATA[Arcade March 2026 Sprint]]><![CDATA[David Nguyen]]>Wed, 11 Mar 2026 06:40:40 GMT<![CDATA[Arcade March 2026 Sprint 1 (Solution)]]><![CDATA[Overview
Welcome to Arcade March 2026 Sprint 1! This quick quiz will help you check your understanding and stay on track as you continue to build your Google Cloud skills.
Click Start Lab to begin.
Note: Take a moment to read each question carefully and double-check your answers before submitting. To ensure your completion is recorded, keep the quiz open for at least 10 minutes. Submitting earlier may result in an incomplete attempt.
Quiz
How will you create a new Linux server instance in Google Cloud using the Console?
Select ONE answer that would be relevant
Use Compute Engine
Use Cloud Spanner
Use Cloud Functions
Use Google Drive
In Google Cloud, what does the "Machine Type" configuration primarily determine?
Select ONE answer that would be relevant
OS version
Network speed
Hardware resources
Disk type
Which gcloud command is used to display all the configuration properties of your current environment?
Select ONE answer that would be relevant
gcloud info
gcloud help
gcloud config list
gcloud auth list
Which gcloud command is used to view a list of active account names in your environment?
Select ONE answer that would be relevant
gcloud info
gcloud help
gcloud config list
gcloud auth list
In Google Cloud, how will you create a new persistent disk in a specific zone using the command line?
Select ONE answer that would be relevant
gcloud storage new
gcloud compute disks create
gcloud disk provision
gcloud make disk
Which Google Cloud command is used to attach an existing Persistent Disk to a virtual machine instance?
Select ONE answer that would be relevant
Click Delete
Send it to a printer
gcloud compute instances attach-disk
gcloud vm mount-disk
]]>https://eplus.dev/arcade-march-2026-sprint-1-solutionhttps://eplus.dev/arcade-march-2026-sprint-1-solution<![CDATA[Arcade March 2026 Sprint 1 (Solution)]]><![CDATA[Arcade March 2026 Sprint 1]]><![CDATA[David Nguyen]]>Wed, 11 Mar 2026 06:31:46 GMT<![CDATA[Data Ingestion into BigQuery from Cloud Storage (Solution)]]><![CDATA[Overview
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included cloud terminal is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
Challenge scenario
You are managing Google BigQuery, a data warehouse service that lets you store, manage, and analyze large datasets. In this scenario, you need to create a dataset and a table within BigQuery to store employee details. The dataset will act as a container for your tables, while the table will hold the actual employee information.
You need to complete the following tasks:
Create a BigQuery dataset: work_day
Create a table named employee with the following schema:

| Column | Type |
| --- | --- |
| employee_id | INTEGER |
| device_id | STRING |
| username | STRING |
| department | STRING |
| office | STRING |
Import the CSV data into your newly created table from the pre-created Cloud Storage bucket named qwiklabs-gcp-02-a85ba8626654-a1f8-bucket. The bucket already contains the employees.csv file.
Click Check my progress to verify the objective.
Create BigQuery Schema and upload csv data
Solution of Lab
```plaintext
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/build-an-ai-science-tutor-application-with-vertex-ai-solution/lab.sh
source lab.sh
```

Script Alternative

```plaintext
export BUCKET=
bq mk work_day && bq load --source_format=CSV --skip_leading_rows=1 work_day.employee gs://$BUCKET/employees.csv employee_id:INTEGER,device_id:STRING,username:STRING,department:STRING,office:STRING
```
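Before running the load, it can help to sanity-check that the CSV actually matches the schema, since a single non-integer employee_id will fail the job. A quick stdlib sketch (the validate_rows helper and the inline sample rows are illustrative, not part of the lab):

```python
import csv
import io

# Schema from the lab: column name and required BigQuery type.
SCHEMA = [
    ("employee_id", "INTEGER"),
    ("device_id", "STRING"),
    ("username", "STRING"),
    ("department", "STRING"),
    ("office", "STRING"),
]

def validate_rows(csv_text: str) -> int:
    """Check a CSV payload against the schema; return the number of data rows."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    if header != [name for name, _ in SCHEMA]:
        raise ValueError(f"unexpected header: {header}")
    count = 0
    for row in reader:
        if len(row) != len(SCHEMA):
            raise ValueError(f"wrong column count: {row}")
        for (name, bq_type), value in zip(SCHEMA, row):
            if bq_type == "INTEGER":
                int(value)  # raises ValueError if the cell is not an integer
        count += 1
    return count

sample = (
    "employee_id,device_id,username,department,office\n"
    "1,d-1,alice,engineering,nyc\n"
    "2,d-2,bob,hr,sfo\n"
)
print(validate_rows(sample))  # 2
```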
]]>https://eplus.dev/data-ingestion-into-bigquery-from-cloud-storage-solutionhttps://eplus.dev/data-ingestion-into-bigquery-from-cloud-storage-solution<![CDATA[Data Ingestion into BigQuery from Cloud Storage (Solution)]]><![CDATA[Data Ingestion into BigQuery from Cloud Storage]]><![CDATA[David Nguyen]]>Mon, 09 Mar 2026 01:43:32 GMT<![CDATA[The Arcade Base Camp March 2026]]><![CDATA[🏕 Arcade Base Camp March 2026
Welcome to Base Camp March 2026, where you'll develop key Google Cloud skills and earn an exclusive credential that will open doors to the cloud for you. No prior experience is required!
🔗 Main: https://www.skills.google/games/5703/labs/36448
📝 Solution: http://eplus.dev/start-here-dont-skip-this-arcade-lab
Deadline (all): 31/03/2026, 11:59 PM
🎯 Levels & Learning Zones
Arcade Base Camp March 2026 https://www.skills.google/games/7054 1q-basecamp-10550
Work Meets Play: Metrics in Motion https://www.skills.google/games/7058 1q-worknplay-31032
Base Camp Levels
Arcade Adventure: Security, Data, and Cloud Operations https://www.skills.google/games/7055 1q-cloudops-31269
Arcade Voyage: AI and Cloud Deployment https://www.skills.google/games/7056 1q-deploy-02057
Arcade Trail: Automation and Analytics https://www.skills.google/games/7057 1q-automation-5931
🧩 Trivia Challenges
Sprint 1 https://www.skills.google/games/7050 1q-sprint-10247
Sprint 2 https://www.skills.google/games/7051 1q-sprint-10284
Sprint 3 https://www.skills.google/games/7052 1q-sprint-10269
Sprint 4 https://www.skills.google/games/7053 1q-sprint-10229
👨 Guide
]]>https://eplus.dev/the-arcade-base-camp-march-2026https://eplus.dev/the-arcade-base-camp-march-2026<![CDATA[David Nguyen]]>Tue, 03 Mar 2026 06:24:15 GMT<![CDATA[Using Cloud Trace on Kubernetes Engine - GSP484]]><![CDATA[Overview
When supporting a production system that services HTTP requests or provides an API, it is important to measure the latency of your endpoints to detect when a system's performance is not operating within specification. In monolithic systems this single latency measure may be useful to detect and diagnose deteriorating behavior. With modern microservice architectures, however, this becomes much more difficult because a single request may result in numerous additional requests to other systems before the request can be fully handled.
Deteriorating performance in an underlying system may impact all other systems that rely on it. While latency can be measured at each service endpoint, it can be difficult to correlate slow behavior in the public endpoint with a particular sub-service that is misbehaving.
Enter distributed tracing. Distributed tracing uses metadata passed along with requests to correlate requests across service tiers. By collecting telemetry data from all the services in a microservice architecture and propagating a trace id from an initial request to all subsidiary requests, developers can much more easily identify which service is causing slowdowns affecting the rest of the system.
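The core idea can be illustrated without any tracing library: each service reuses the trace ID from the incoming request (here a plain dict stands in for HTTP headers, and a list stands in for the tracing backend), so every span recorded along the way can later be correlated into one trace. All names below are illustrative.

```python
import uuid

TRACE_HEADER = "x-trace-id"
collected_spans = []  # stands in for the tracing backend

def handle_request(service: str, headers: dict) -> dict:
    """Reuse the caller's trace ID, or start a new trace at the edge."""
    trace_id = headers.get(TRACE_HEADER) or uuid.uuid4().hex
    collected_spans.append({"service": service, "trace": trace_id})
    return {TRACE_HEADER: trace_id}  # headers to propagate downstream

# A request hits the frontend, which fans out to two backend services.
ctx = handle_request("frontend", {})
handle_request("auth-service", ctx)
handle_request("db-service", ctx)

trace_ids = {span["trace"] for span in collected_spans}
print(len(collected_spans), len(trace_ids))  # 3 spans sharing 1 trace ID
```

Because all three spans carry the same trace ID, a backend like Cloud Trace can reassemble them into a single timeline and show which service contributed the latency.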
Google Cloud provides the Operations suite of products to handle logging, monitoring, and distributed tracing. This lab will be deployed to Kubernetes Engine and will demonstrate a multi-tier architecture implementing distributed tracing. It will also take advantage of Terraform to build out necessary infrastructure.
This lab was created by GKE Helmsman engineers to give you a better understanding of GKE Binary Authorization. You can view this demo by running the gsutil cp -r gs://spls/gke-binary-auth/* . and cd gke-binary-auth-demo commands in Cloud Shell. We encourage any and all to contribute to our assets!
Setup and requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito (recommended) or private browser window to run this lab. This prevents conflicts between your personal account and the student account, which may cause extra charges incurred to your personal account.
Time to complete the lab. Remember, once you start, you cannot pause a lab.
Note: Use only the student account for this lab. If you use a different Google Cloud account, you may incur charges to that account.
How to start your lab and sign in to the Google Cloud console
Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method. On the left is the Lab Details pane with the following:
The Open Google Cloud console button
Time remaining
The temporary credentials that you must use for this lab
Other information, if needed, to step through this lab
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
Note: If you see the Choose an account dialog, click Use Another Account.
If necessary, copy the Username below and paste it into the Sign in dialog.
[email protected]
You can also find the Username in the Lab Details pane.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
vGCaxTeSxpgN
You can also find the Password in the Lab Details pane.
Click Next.
Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
Note: Using your own Google Cloud account for this lab may incur extra charges.
Click through the subsequent pages:
Accept the terms and conditions.
Do not add recovery options or two-factor authentication (because this is a temporary account).
Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Note: To access Google Cloud products and services, click the Navigation menu or type the service or product name in the Search field.
Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on the Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell at the top of the Google Cloud console.
Click through the following windows:
Continue through the Cloud Shell information window.
Authorize Cloud Shell to use your credentials to make Google Cloud API calls.
When you are connected, you are already authenticated, and the project is set to your Project_ID, qwiklabs-gcp-00-86734d2ce627. The output contains a line that declares the Project_ID for this session:
Your Cloud Platform project in this session is set to qwiklabs-gcp-00-86734d2ce627
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
(Optional) You can list the active account name with this command:
gcloud auth list
Click Authorize.
Output:
ACTIVE: *
ACCOUNT: [email protected]
To set the active account, run:
$ gcloud config set account `ACCOUNT`
(Optional) You can list the project ID with this command:
gcloud config list project
Output:
[core]
project = qwiklabs-gcp-00-86734d2ce627
Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.
Clone demo
Clone the resources needed for this lab by running:
git clone https://github.com/GoogleCloudPlatform/gke-tracing-demo
Go into the directory for this demo:
cd gke-tracing-demo
Set your region and zone
Certain Compute Engine resources live in regions and zones. A region is a specific geographical location where you can run your resources. Each region has one or more zones.
Note: Learn more about regions and zones and see a complete list in Regions & Zones documentation.
Run the following to set a region and zone for your lab (you can use the region/zone that's best for you):
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-f
Architecture
The lab begins by deploying a Kubernetes Engine cluster. To this cluster will be deployed a simple web application fronted by a load balancer. The web app will publish messages provided by the user to a Cloud Pub/Sub topic. The application is instrumented such that HTTP requests to it will result in the creation of a trace whose context will be propagated to the Cloud Pub/Sub publish API request. The correlated telemetry data from these requests will be available in the Cloud Trace Console.
Introduction to Terraform
Following the principles of infrastructure as code and immutable infrastructure, Terraform supports the writing of declarative descriptions of the desired state of infrastructure. When the descriptor is applied, Terraform uses Google Cloud APIs to provision and update resources to match. Terraform compares the desired state with the current state so incremental changes can be made without deleting everything and starting over. For instance, Terraform can build out Google Cloud projects and compute instances, etc., even set up a Kubernetes Engine cluster and deploy applications to it. When requirements change, the descriptor can be updated and Terraform will adjust the cloud infrastructure accordingly.
This example will start up a Kubernetes Engine cluster using Terraform. Then you will use Kubernetes commands to deploy a demo application to the cluster. By default, Kubernetes Engine clusters in Google Cloud are launched with a pre-configured Fluentd-based collector that forwards logging events for the cluster to Cloud Monitoring. Interacting with the demo app will produce trace events that are visible in the Cloud Trace UI.
Running Terraform
There are three Terraform files provided with this demo, located in the /terraform subdirectory of the project. The first one, main.tf, is the starting point for Terraform. It describes the features that will be used, the resources that will be manipulated, and the outputs that will result. The second file is provider.tf, which indicates which cloud provider and version will be the target of the Terraform commands, in this case Google Cloud. The final file is variables.tf, which contains a list of variables that are used as inputs into Terraform. Any variables referenced in main.tf that do not have defaults configured in variables.tf will result in prompts to the user at runtime.
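A variables.tf in this style typically looks like the following. This is a sketch of the pattern, not the demo's exact file; the descriptions and the default zone are illustrative.

```hcl
variable "project" {
  description = "The Google Cloud project to deploy the demo into"
  type        = string
}

variable "zone" {
  description = "The zone for the Kubernetes Engine cluster"
  type        = string
  default     = "us-central1-f"
}
```

Because "project" has no default, Terraform prompts for it at runtime unless it is supplied via terraform.tfvars or a -var flag.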
Task 1. Initialization
Given that authentication was configured above, you are now ready to deploy the infrastructure.
Run the following command from the root directory of the project:
cd terraform
Update the provider.tf file
Remove the Terraform provider version from the provider.tf script file.
Edit the provider.tf script file:
nano provider.tf
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.84.0"
    }
  }
}

provider "google" {
  project = var.project
}
Then save the file with CTRL + X > Y > Enter.
After modification your provider.tf script file should look like:
...
provider "google" {
  project = var.project
}
From here, initialize Terraform.
Enter:
terraform init
This will download the dependencies that Terraform requires. Terraform also needs two inputs: the Google Cloud project and the Google Cloud zone to which the demo application should be deployed. Terraform will prompt for these values if it does not know them already. By default, it looks for a file called terraform.tfvars or files with a suffix of .auto.tfvars in the current directory to obtain those values.
This demo provides a convenience script to prompt for project and zone and persist them in a terraform.tfvars file.
Run:
../scripts/generate-tfvars.sh
Note: If the file already exists you will receive an error.
The script uses previously-configured values from the gcloud command. If they have not been configured, the error message will indicate how they should be set. The existing values can be viewed with the following command:
gcloud config list
If the displayed values don't correspond to where you intend to run the demo application, change the values in terraform.tfvars to your preferred values.
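The generated terraform.tfvars is just key/value assignments, for example (the values below are placeholders, not the lab's actual project):

```hcl
project = "your-project-id"
zone    = "us-central1-f"
```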
Task 2. Deployment
Having initialized Terraform, you can see the work that Terraform will perform with the following command:
terraform plan
This command can be used to visually verify that settings are correct and Terraform will inform you if it detects any errors. While not necessary, it is a good practice to run it every time prior to changing infrastructure using Terraform.
After verification, tell Terraform to set up the necessary infrastructure:
terraform apply
The changes that will be made are displayed, and Terraform asks you to confirm with yes.
Note: If you get deprecation warnings related to the zone variable, please ignore it and move forward in the lab.
While you're waiting for your build to complete, set up a Cloud Monitoring workspace to be used later in the lab.
Test completed task
Click Check my progress to verify your performed task. If you have successfully deployed necessary infrastructure with Terraform, you will see an assessment score.
Use Terraform to set up the necessary infrastructure
Create a Monitoring Metrics Scope
Set up a Monitoring Metrics Scope that's tied to your Google Cloud Project. The following steps create a new account that has a free trial of Monitoring.
In the Cloud Console, click Navigation menu > View All Products > Observability > Monitoring.
When the Monitoring Overview page opens, your metrics scope project is ready.
Task 3. Deploy demo application
Back in Cloud Shell, after you see the Apply complete! message, return to the Console.
In the Navigation menu, go to Kubernetes Engine > Clusters to see your cluster.
Click Navigation menu, click View all products, then scroll down to the Analytics section and click Pub/Sub to see the Topics and Subscriptions.
Now, deploy the demo application using the Kubernetes kubectl command:
kubectl apply -f tracing-demo-deployment.yaml
Once the app has been deployed, it can be viewed in the Kubernetes Engine > Workloads. You can also see the load balancer that was created for the application in the Gateways, Services & Ingress > Services section of the console.
It may take a few minutes for the application to be deployed. It is still starting if your workloads console resembles the following, with a status of "Does not have minimum availability":
Refresh the page until you see an "OK" in the status bar:
Incidentally, the endpoint can be programmatically acquired using the following command:
echo http://$(kubectl get svc tracing-demo -n default -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
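The jsonpath expression above walks the Service object's status fields. The same traversal can be sketched in Python over a sample Service dict; the values here are made up purely for illustration:

```python
# Mimic kubectl's jsonpath '{.status.loadBalancer.ingress[0].ip}'
# over a sample (hypothetical) Service object.
sample_service = {
    "kind": "Service",
    "metadata": {"name": "tracing-demo"},
    "status": {"loadBalancer": {"ingress": [{"ip": "203.0.113.10"}]}},
}

def endpoint_url(svc: dict) -> str:
    # Each segment of the jsonpath maps to one dict/list lookup.
    ip = svc["status"]["loadBalancer"]["ingress"][0]["ip"]
    return f"http://{ip}"

print(endpoint_url(sample_service))  # http://203.0.113.10
```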
Test completed task
Click Check my progress to verify your performed task. If you have successfully deployed demo application, you will see an assessment score.
Deploy demo application
Task 4. Validation
Generating telemetry data
Once the demo application is deployed, you should see a list of your exposed services.
Still in the Kubernetes window, under Gateways, Services & Ingress click on Services to view the exposed services.
Click on the endpoint listed next to the tracing-demo load balancer to open the demo app web page in a new tab of your browser.
Note that your IP address will likely be different from the example above. The page displayed is simple:
To the URL, add the string ?string=CustomMessage and see that the message is displayed:
As you can see, if a string parameter is not provided it uses a default value of Hello World. The app is used to generate trace telemetry data.
Replace "CustomMessage" with your own messages to generate some data to look at.
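The defaulting behavior described above can be sketched with the standard library. This is an illustration of the behavior only, not the demo app's actual code:

```python
from urllib.parse import urlparse, parse_qs

def message_for(url: str) -> str:
    # Fall back to "Hello World" when no ?string= parameter is supplied,
    # mirroring the demo app's behavior described above.
    params = parse_qs(urlparse(url).query)
    return params.get("string", ["Hello World"])[0]

print(message_for("http://203.0.113.10/"))                       # Hello World
print(message_for("http://203.0.113.10/?string=CustomMessage"))  # CustomMessage
```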
Test completed task
Click Check my progress to verify your performed task. If you have successfully generated telemetry data, you will see an assessment score.
Generate Telemetry Data
Examining traces
In the Console, select Navigation menu > View all products, scroll to the Observability section, and click Trace > Trace explorer. You should see a chart displaying trace events plotted on a timeline with latency as the vertical metric.
If not, click the Auto Run toggle button to see the most up to date data.
Click on a dark block in the top graph. The graph is a "Heatmap" view, which shows the density of spans occurring at a specific duration and time.
The top entry in the list is known as the root span and represents the duration of the HTTP request, from the moment the first byte arrives until the moment the last byte of the response is sent. The bottom entry in the list represents the duration of the request made when sending the message to the Pub/Sub topic.
Since the handling of the HTTP request is blocked by the call to the Pub/Sub API it is clear that the vast majority of the time spent within the HTTP request is taken up by the Pub/Sub interaction. This demonstrates that by instrumenting each tier of your application you can easily identify where the bottlenecks are.
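The reasoning above, comparing the child span's duration to the root span's, can be expressed numerically. A small sketch with hypothetical timings (the real durations come from your trace data):

```python
def bottleneck_fraction(root_ms: float, child_ms: float) -> float:
    """Fraction of the root span's duration spent inside a child span."""
    return child_ms / root_ms

# Hypothetical timings: a 120 ms HTTP request whose Pub/Sub publish took 100 ms,
# i.e. the Pub/Sub call accounts for the vast majority of the request.
print(round(bottleneck_fraction(120.0, 100.0), 2))  # 0.83
```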
Pulling Pub/Sub messages
As described in the Architecture section of this document, messages from the demo app are published to a Pub/Sub topic.
These messages can be consumed from the topic using the gcloud CLI:
gcloud pubsub subscriptions pull --auto-ack --limit 10 tracing-demo-cli
Output:
DATA: Hello World
MESSAGE_ID: 4117341758575424
ORDERING_KEY:
ATTRIBUTES:
DELIVERY_ATTEMPT:
DATA: CustomMessage
MESSAGE_ID: 4117243358956897
ORDERING_KEY:
ATTRIBUTES:
DELIVERY_ATTEMPT:
Pulling the messages from the topic has no impact on tracing. This section simply provides a consumer of the messages for verification purposes.
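For scripting, the FIELD: value lines printed by the pull command can be grouped back into per-message records. This parser is purely illustrative and is not part of the lab:

```python
def parse_pull_output(text):
    """Group 'FIELD: value' lines from gcloud pubsub subscriptions pull into
    one dict per message (a new message starts at each DATA: line)."""
    messages = []
    for line in text.strip().splitlines():
        field, _, value = line.partition(":")
        if field == "DATA":
            messages.append({})
        if messages:
            messages[-1][field] = value.strip()
    return messages

sample = """DATA: Hello World
MESSAGE_ID: 4117341758575424
DATA: CustomMessage
MESSAGE_ID: 4117243358956897"""

for msg in parse_pull_output(sample):
    print(msg["DATA"], msg["MESSAGE_ID"])
```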
Monitoring and logging
Cloud Monitoring and Logging are not the subject of this demo, but it is worth noting that the application you deployed will publish logs to Cloud Logging and metrics to Cloud Monitoring.
In the Console, select Navigation menu > Monitoring > Metrics Explorer.
In the Select a metric field, select VM Instance > Instance > CPU Usage then click Apply.
You should see a chart of this metric plotted for different pods running in the cluster.
To see logs, select Navigation menu > View all products, scroll to the Observability section, and click Logging.
In Log fields section, set the following:
RESOURCE TYPE: Kubernetes Container
CLUSTER NAME: tracing-demo-space
NAMESPACE NAME: default
Task 5. Troubleshooting in your own environment
Several possible errors can be diagnosed using the kubectl command. For instance, a deployment can be shown:
kubectl get deployment tracing-demo
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
tracing-demo 1/1 1 1 21m
More details can be shown with describe:
kubectl describe deployment tracing-demo
This command will show a list of deployed pods:
kubectl get pod
Output:
NAME READY STATUS RESTARTS AGE
tracing-demo-59cc7988fc-h5w27 1/1 Running 0 23m
Again, details of the pod can be seen with describe:
kubectl describe pod tracing-demo
Note the pod Name to use in the next step.
Once you know the pod name, use the name to view logs locally:
kubectl logs <POD_NAME>
Output:
10.60.0.1 - - [22/Jun/2018:19:42:23 +0000] "HEAD / HTTP/1.0" 200 - "-" "-"
Publishing string: Hello World
10.60.0.1 - - [22/Jun/2018:19:42:23 +0000] "GET / HTTP/1.1" 200 669 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"
The install script fails with a "Permission denied" error when running Terraform
The credentials that Terraform is using do not provide the necessary permissions to create resources in the selected projects. Ensure that the account listed in gcloud config list has the necessary permissions to create resources. If it does, regenerate the application default credentials using gcloud auth application-default login.
Task 6. Teardown
Qwiklabs will take care of shutting down all the resources used for this lab, but here's what you would need to do to clean up your own environment to save on cost and to be a good cloud citizen:
terraform destroy
As with apply, Terraform will prompt for a yes to confirm your intent.
Since Terraform tracks the resources it created it can tear down the cluster, the Pub/Sub topic, and the Pub/Sub subscription.
Note: If you get deprecation warnings related to the zone variable, ignore it.
Solution of Lab
Quick
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/GSP484/lab.sh
source lab.sh
Manual
]]>https://eplus.dev/using-cloud-trace-on-kubernetes-engine-gsp484https://eplus.dev/using-cloud-trace-on-kubernetes-engine-gsp484<![CDATA[GSP484]]><![CDATA[Using Cloud Trace on Kubernetes Engine]]><![CDATA[Using Cloud Trace on Kubernetes Engine - GSP484]]><![CDATA[David Nguyen]]>Sat, 28 Feb 2026 06:10:01 GMT<![CDATA[How to Use a Network Policy on Google Kubernetes Engine - GSP480]]><![CDATA[Overview
This lab will show you how to improve the security of your Kubernetes Engine by applying fine-grained restrictions to network communication.
The Principle of Least Privilege is widely recognized as an important design consideration in enhancing the protection of critical systems from faults and malicious behavior. It suggests that every component must be able to access only the information and resources that are necessary for its legitimate purpose. This document demonstrates how the Principle of Least Privilege can be implemented within the Kubernetes Engine network layer.
Network connections can be restricted at two tiers of your Kubernetes Engine infrastructure. The first, and coarser grained, mechanism is the application of Firewall Rules at the Network, Subnetwork, and Host levels. These rules are applied outside of the Kubernetes Engine at the VPC level.
While Firewall Rules are a powerful security measure, Kubernetes enables you to define even finer grained rules via Network Policies. Network Policies are used to limit intra-cluster communication. Network policies do not apply to pods attached to the host's network namespace.
For this lab you will provision a private Kubernetes Engine cluster and a bastion host with which to access it. A bastion host provides a single host that has access to the cluster, which, when combined with a private Kubernetes network, ensures that the cluster isn't exposed to malicious behavior from the internet at large. Bastions are particularly useful when you do not have VPN access to the cloud network.
Within the cluster, a simple HTTP server and two client pods will be provisioned. You will learn how to use a Network Policy and labeling to only allow connections from one of the client pods.
This lab was created by GKE Helmsman engineers to give you a better understanding of GKE network policies. You can view this demo by running the gsutil cp -r gs://spls/gsp480/gke-network-policy-demo . and cd gke-network-policy-demo commands in Cloud Shell. We encourage any and all to contribute to our assets!
Architecture
You will define a private, standard mode Kubernetes cluster that uses Dataplane V2. Dataplane V2 has network policies enabled by default.
Since the cluster is private, neither the API nor the worker nodes will be accessible from the internet. Instead, you will define a bastion host and use a firewall rule to enable access to it. The bastion's IP address is defined as an authorized network for the cluster, which grants it access to the API.
Within the cluster, provision three workloads:
hello-server: this is a simple HTTP server with an internally-accessible endpoint
hello-client-allowed: this is a single pod that repeatedly attempts to access hello-server. The pod is labeled such that the Network Policy will allow it to connect to hello-server.
hello-client-blocked: this runs the same code as hello-client-allowed but the pod is labeled such that the Network Policy will not allow it to connect to hello-server.
Setup and requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito (recommended) or private browser window to run this lab. This prevents conflicts between your personal account and the student account, which may cause extra charges to be incurred on your personal account.
Time to complete the lab. Remember, once you start, you cannot pause a lab.
Note: Use only the student account for this lab. If you use a different Google Cloud account, you may incur charges to that account.
How to start your lab and sign in to the Google Cloud console
Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method. On the left is the Lab Details pane with the following:
The Open Google Cloud console button
Time remaining
The temporary credentials that you must use for this lab
Other information, if needed, to step through this lab
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
Note: If you see the Choose an account dialog, click Use Another Account.
If necessary, copy the Username below and paste it into the Sign in dialog.
[email protected]
You can also find the Username in the Lab Details pane.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
uEaNw77wW5EL
You can also find the Password in the Lab Details pane.
Click Next.
Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
Note: Using your own Google Cloud account for this lab may incur extra charges.
Click through the subsequent pages:
Accept the terms and conditions.
Do not add recovery options or two-factor authentication (because this is a temporary account).
Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Note: To access Google Cloud products and services, click the Navigation menu or type the service or product name in the Search field.
Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on the Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell at the top of the Google Cloud console.
Click through the following windows:
Continue through the Cloud Shell information window.
Authorize Cloud Shell to use your credentials to make Google Cloud API calls.
When you are connected, you are already authenticated, and the project is set to your Project_ID, qwiklabs-gcp-02-668b9ffe0190. The output contains a line that declares the Project_ID for this session:
Your Cloud Platform project in this session is set to qwiklabs-gcp-02-668b9ffe0190
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
(Optional) You can list the active account name with this command:
gcloud auth list
Click Authorize.
Output:
ACTIVE: *
ACCOUNT: [email protected]
To set the active account, run:
$ gcloud config set account `ACCOUNT`
(Optional) You can list the project ID with this command:
gcloud config list project
Output:
[core]
project = qwiklabs-gcp-02-668b9ffe0190
Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.
Clone demo
Copy the resources needed for this lab exercise from a Cloud Storage bucket:
gsutil cp -r gs://spls/gsp480/gke-network-policy-demo .
Go into the directory for the demo:
cd gke-network-policy-demo
Make the demo files executable:
chmod -R 755 *
Task 1. Lab setup
First, set the Google Cloud region and zone.
Set the Google Cloud region.
gcloud config set compute/region "europe-west1"
Set the Google Cloud zone.
gcloud config set compute/zone "europe-west1-d"
This lab uses the following Google Cloud service APIs, which have been enabled for you:
compute.googleapis.com
container.googleapis.com
cloudbuild.googleapis.com
In addition, the Terraform configuration takes three parameters to determine where the Kubernetes Engine cluster should be created:
project ID
region
zone
For simplicity, these parameters are specified in a file named terraform.tfvars, in the terraform directory.
To ensure the appropriate APIs are enabled and to generate the terraform/terraform.tfvars file based on your gcloud defaults, run:
make setup-project
Type y when asked to confirm.
This will enable the necessary Service APIs, and it will also generate a terraform/terraform.tfvars file containing the project, region, and zone keys.
Verify that the values match the output of gcloud config list by running:
cat terraform/terraform.tfvars
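Assuming the gcloud defaults configured earlier in this lab, the generated file will look roughly like this (your project ID will differ):

```hcl
project="qwiklabs-gcp-02-668b9ffe0190"
region="europe-west1"
zone="europe-west1-d"
```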
Provisioning the Kubernetes Engine cluster
Next, apply the Terraform configuration within the project root:
make tf-apply
When prompted, review the generated plan and enter yes to deploy the environment.
This will take several minutes to deploy.
Task 2. Validation
Terraform outputs a message when the cluster's been successfully created.
...snip...
google_container_cluster.primary: Still creating... (3m0s elapsed)
google_container_cluster.primary: Still creating... (3m10s elapsed)
google_container_cluster.primary: Still creating... (3m20s elapsed)
google_container_cluster.primary: Still creating... (3m30s elapsed)
google_container_cluster.primary: Creation complete after 3m34s (ID: gke-demo-cluster)
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
Test completed task
Click Check my progress to verify your performed task. If you have successfully deployed necessary infrastructure with Terraform, you will see an assessment score.
Use Terraform to set up the necessary infrastructure (Lab setup)
Now ssh into the bastion for the remaining steps:
gcloud compute ssh gke-demo-bastion
Existing versions of kubectl and custom Kubernetes clients contain provider-specific code to manage authentication between the client and Google Kubernetes Engine. Starting with v1.26, this code will no longer be included as part of the OSS kubectl. GKE users will need to download and use a separate authentication plugin to generate GKE-specific tokens. This new binary, gke-gcloud-auth-plugin, uses the Kubernetes Client-go Credential Plugin mechanism to extend kubectl's authentication to support GKE. For more information, you can check out the following documentation.
To have kubectl use the new binary plugin for authentication instead of using the default provider-specific code, use the following steps.
Once connected, run the following command to install the gke-gcloud-auth-plugin on the VM.
sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin
Set export USE_GKE_GCLOUD_AUTH_PLUGIN=True in ~/.bashrc:
echo "export USE_GKE_GCLOUD_AUTH_PLUGIN=True" >> ~/.bashrc
Run the following command:
source ~/.bashrc
Run the following command to force the config for this cluster to be updated to the Client-go Credential Plugin configuration.
gcloud container clusters get-credentials gke-demo-cluster --zone europe-west1-d
On success, you should see this message:
kubeconfig entry generated for gke-demo-cluster.
The newly-created cluster will now be available for the standard kubectl commands on the bastion.
Task 3. Installing the hello server
The test application consists of one simple HTTP server, deployed as hello-server, and two clients, one of which will be labeled app=hello and the other app=not-hello.
All three services can be deployed by applying the hello-app manifests.
On the bastion, run:
kubectl apply -f ./manifests/hello-app/
Output:
deployment.apps/hello-client-allowed created
deployment.apps/hello-client-blocked created
service/hello-server created
deployment.apps/hello-server created
Verify all three pods have been successfully deployed:
kubectl get pods
You will see one running pod for each of hello-client-allowed, hello-client-blocked, and hello-server deployments.
NAME READY STATUS RESTARTS AGE
hello-client-allowed-7d95fcd5d9-t8fsk   1/1     Running   0          14m
hello-client-blocked-6497db465d-ckbn8   1/1     Running   0          14m
hello-server-7df58f7fb5-nvcvd           1/1     Running   0          14m
Test completed task
Click Check my progress to verify your performed task. If you have successfully deployed a simple HTTP hello server, you will see an assessment score.
Installing the hello server
Task 4. Confirming default access to the hello server
First, tail the "allowed" client:
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=hello)
Press CTRL+C to exit.
Second, tail the logs of the "blocked" client:
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=not-hello)
Press CTRL+C to exit.
You will notice that both pods are successfully able to connect to the hello-server service. This is because you have not yet defined a Network Policy to restrict access. In each of these windows you should see successful responses from the server.
Hostname: hello-server-7df58f7fb5-nvcvd
Hello, world!
Version: 1.0.0
Hostname: hello-server-7df58f7fb5-nvcvd
Hello, world!
Version: 1.0.0
Hostname: hello-server-7df58f7fb5-nvcvd
...
Task 5. Restricting access with a Network Policy
Now you will block access to the hello-server pod from all pods that are not labeled with app=hello.
The policy definition you'll use is contained in manifests/network-policy.yaml
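You can inspect the file with cat manifests/network-policy.yaml. Its contents are not reproduced in this lab, but a policy implementing the rule described above would look roughly like the following sketch; the pod labels here are assumptions inferred from the deployment names, so defer to the actual file:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: hello-server-allow-from-hello-client
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: hello-server   # assumed label on the server pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: hello      # only the "allowed" client carries this label
```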
Apply the policy with the following command:
kubectl apply -f ./manifests/network-policy.yaml
Output:
networkpolicy.networking.k8s.io/hello-server-allow-from-hello-client created
Tail the logs of the "blocked" client again:
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=not-hello)
You'll now see that the output looks like this in the window tailing the "blocked" client:
wget: download timed out
wget: download timed out
wget: download timed out
wget: download timed out
wget: download timed out
wget: download timed out
wget: download timed out
wget: download timed out
wget: download timed out
...
The network policy has now prevented communication to the hello-server from the unlabeled pod.
Press CTRL+C to exit.
Task 6. Restricting namespaces with Network Policies
In the previous example, you defined a network policy that restricts connections based on pod labels. It is often useful to instead label entire namespaces, particularly when teams or applications are granted their own namespaces.
You'll now modify the network policy to only allow traffic from a designated namespace, then you'll move the hello-allowed pod into that new namespace.
First, delete the existing network policy:
kubectl delete -f ./manifests/network-policy.yaml
Output:
networkpolicy.networking.k8s.io "hello-server-allow-from-hello-client" deleted
Create the namespaced version:
kubectl create -f ./manifests/network-policy-namespaced.yaml
Output:
networkpolicy.networking.k8s.io/hello-server-allow-from-hello-client created
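For contrast with the pod-label policy, a namespace-scoped variant replaces the podSelector in the from clause with a namespaceSelector. This is a sketch, not the lab's actual manifest; in particular, the team: hello namespace label is hypothetical:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: hello-server-allow-from-hello-client
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: hello-server     # assumed label on the server pods
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: hello       # hypothetical label; check the real manifest
```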
Now observe the logs of the hello-allowed-client pod in the default namespace:
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=hello)
You will notice it is no longer able to connect to the hello-server.
Press CTRL+C to exit.
Finally, deploy a second copy of the hello-clients app into the new namespace.
kubectl -n hello-apps apply -f ./manifests/hello-app/hello-client.yaml
Output:
deployment.apps/hello-client-allowed created
deployment.apps/hello-client-blocked created
Test completed task
Click Check my progress to verify your performed task. If you have successfully deployed a second copy of the hello-clients app into the new namespace, you will see an assessment score.
Deploy a second copy of the hello-clients app into the new namespace
Task 7. Validation
Next, check the logs for the two new hello-app clients.
View the logs for the "hello" labeled app in the hello-apps namespace by running:
kubectl logs --tail 10 -f -n hello-apps $(kubectl get pods -oname -l app=hello -n hello-apps)
Output:
Hostname: hello-server-6c6fd59cc9-7fvgp
Hello, world!
Version: 1.0.0
Hostname: hello-server-6c6fd59cc9-7fvgp
Hello, world!
Version: 1.0.0
Hostname: hello-server-6c6fd59cc9-7fvgp
Hello, world!
Version: 1.0.0
Hostname: hello-server-6c6fd59cc9-7fvgp
Hello, world!
Version: 1.0.0
Hostname: hello-server-6c6fd59cc9-7fvgp
Both clients are able to connect successfully because as of Kubernetes 1.10.x NetworkPolicies do not support restricting access to pods within a given namespace. You can allowlist by pod label, namespace label, or allowlist the union (i.e. OR) of both. But you cannot yet allowlist the intersection (i.e. AND) of pod labels and namespace labels.
Press CTRL+C to exit.
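The union (OR) semantics described above can be modeled in a few lines. This toy matcher is purely illustrative and ignores most of the real NetworkPolicy API:

```python
def allowed(peer, from_entries):
    """Toy model of a NetworkPolicy ingress 'from' list: the entries form a
    union (OR), so a peer is admitted if ANY single entry matches it."""
    def matches(entry):
        pod_ok = all(peer["pod_labels"].get(k) == v
                     for k, v in entry.get("podSelector", {}).items())
        ns_ok = all(peer["ns_labels"].get(k) == v
                    for k, v in entry.get("namespaceSelector", {}).items())
        return pod_ok and ns_ok
    return any(matches(e) for e in from_entries)

# A pod labeled app=not-hello running in a namespace labeled team=hello
# (the namespace label is hypothetical, as in the manifest above).
peer = {"pod_labels": {"app": "not-hello"}, "ns_labels": {"team": "hello"}}

# Two separate entries = OR: the namespace entry alone admits the peer,
# which is why both clients in the new namespace can connect.
print(allowed(peer, [{"podSelector": {"app": "hello"}},
                     {"namespaceSelector": {"team": "hello"}}]))  # True
```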
Task 8. Teardown
Qwiklabs will take care of shutting down all the resources used for this lab, but here's what you would need to do to clean up your own environment to save on cost and to be a good cloud citizen:
Log out of the bastion host:
exit
Run the following to destroy the environment:
make teardown
Output:
...snip...
google_compute_subnetwork.cluster-subnet: Still destroying... (ID: us-east1/kube-net-subnet, 20s elapsed)
google_compute_subnetwork.cluster-subnet: Destruction complete after 25s
google_compute_network.gke-network: Destroying... (ID: kube-net)
google_compute_network.gke-network: Still destroying... (ID: kube-net, 10s elapsed)
google_compute_network.gke-network: Still destroying... (ID: kube-net, 20s elapsed)
google_compute_network.gke-network: Destruction complete after 26s
Destroy complete! Resources: 5 destroyed.
Task 9. Troubleshooting in your own environment
The install script fails with a "permission denied" error when running Terraform
The credentials that Terraform is using do not provide the necessary permissions to create resources in the selected projects. Ensure that the account listed in gcloud config list has necessary permissions to create resources. If it does, regenerate the application default credentials using gcloud auth application-default login.
Invalid fingerprint error during Terraform operations
Terraform occasionally complains about an invalid fingerprint when updating certain resources.
If you see this error, simply re-run the command.
Solution of Lab
Quick
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/GSP480/lab.sh
source lab.sh
Manual
]]>https://eplus.dev/how-to-use-a-network-policy-on-google-kubernetes-engine-gsp480https://eplus.dev/how-to-use-a-network-policy-on-google-kubernetes-engine-gsp480<![CDATA[How to Use a Network Policy on Google Kubernetes Engine]]><![CDATA[How to Use a Network Policy on Google Kubernetes Engine - GSP480]]><![CDATA[David Nguyen]]>Sat, 28 Feb 2026 05:36:26 GMT<![CDATA[Build an AI-Powered Interview Question Generator using Gemini (Solution)]]><![CDATA[Overview
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included IDE is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
Challenge scenario
Scenario: You're a developer at a recruitment firm that specializes in tech talent acquisition. You are looking for ways to streamline the interview preparation process for hiring managers by generating tailored interview questions for various roles using AI. You need to finish the following task:
Task: Develop a Python function named interview(prompt). This function should invoke the gemini-2.5-flash-lite model with the supplied prompt and generate a response. For this challenge, use the prompt: "Give me ten interview questions for the role of program manager."
Follow these steps to interact with the Generative AI APIs using Vertex AI Python SDK.
Click File > New File to open a new file within the Code Editor.
Write the Python code to use Google's Vertex AI SDK to interact with the pre-trained Text Generation AI model.
Create and save the python file.
Execute the Python file by running the command below, replacing FILE_NAME, inside the terminal within the Code Editor pane to view the output.
/usr/bin/python3 /FILE_NAME.py
Note: You can ignore any warnings related to Python version dependencies.
Click Check my progress to verify the objective.
Create and run a file to send a text prompt to Gen AI and receive a response
Solution of Lab
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/build-an-ai-powered-interview-question-generator-using-gemini-solution/lab.sh
source lab.sh
Script Alternative
cat <<'EOF' > lab.py
#!/usr/bin/python3
import vertexai
from vertexai.generative_models import GenerativeModel

# Prompt required by the lab
PROMPT = "Give me ten interview questions for the role of program manager."


def interview(prompt: str) -> str:
    """
    Invoke Vertex AI Gemini model (gemini-2.5-flash-lite) with the supplied prompt
    and return the generated text response.
    """
    # Auto-detect project and region from the gcloud environment (Qwiklabs usually sets these)
    project_id = None
    location = None
    try:
        import subprocess
        project_id = subprocess.check_output(
            ["gcloud", "config", "get-value", "project"],
            text=True
        ).strip()
        location = subprocess.check_output(
            ["gcloud", "config", "get-value", "ai/region"],
            text=True
        ).strip()
    except Exception:
        pass

    # Sensible defaults for most labs if ai/region isn't set
    if not project_id:
        raise RuntimeError("Could not detect GCP project. Run: gcloud config get-value project")
    if not location or location == "(unset)":
        location = "us-central1"

    vertexai.init(project=project_id, location=location)

    model = GenerativeModel("gemini-2.5-flash-lite")
    response = model.generate_content(
        prompt,
        generation_config={
            "temperature": 0.7,
            "max_output_tokens": 512,
        },
    )

    # Return the text output
    return response.text if hasattr(response, "text") else str(response)


if __name__ == "__main__":
    print(interview(PROMPT))
EOF
Run
/usr/bin/python3 lab.py
]]>https://eplus.dev/build-an-ai-powered-interview-question-generator-using-gemini-solutionhttps://eplus.dev/build-an-ai-powered-interview-question-generator-using-gemini-solution<![CDATA[Build an AI-Powered Interview Question Generator using Gemini]]><![CDATA[Build an AI-Powered Interview Question Generator using Gemini (Solution)]]><![CDATA[David Nguyen]]>Sat, 28 Feb 2026 05:24:48 GMT<![CDATA[Generate AI Images and Summarize them Using Gemini and Python (Solution)]]><![CDATA[Overview
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included IDE is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
In a challenge lab you're given a scenario and a set of tasks. Instead of following step-by-step instructions, you will use the skills learned from the labs in the course to figure out how to complete the tasks on your own! An automated scoring system (shown on this page) will provide feedback on whether you have completed your tasks correctly.
When you take a challenge lab, you will not be taught new Google Cloud concepts. You are expected to extend your learned skills, like changing default values and reading and researching error messages to fix your own mistakes.
To score 100% you must successfully complete all tasks within the time period! Are you ready for the challenge?
Follow these steps to interact with the Generative AI APIs using Vertex AI Python SDK.
Click File > New File to open a new file within the Code Editor.
Write the Python code to use Google's Vertex AI SDK to interact with the pre-trained Text Generation AI model.
Create and save the python file.
Execute the Python file by running the command below, replacing FILE_NAME, inside the terminal within the Code Editor pane to view the output.
/usr/bin/python3 /FILE_NAME.py
To view the generated image, use EXPLORER.
Note: You can ignore any warnings related to Python version dependencies.
Challenge scenario
Scenario: You're a developer at Cymbal Inc., an AI-powered bouquet design company. Your clients can describe their dream bouquet, and your system generates realistic images for their approval. To further enhance the experience, you're integrating cutting-edge image analysis to provide descriptive summaries of the generated bouquets. Your main application will invoke the relevant methods based on the users' interaction, and to facilitate that, you need to finish the below tasks:
Task 1: Develop a Python function named generate_bouquet_image(prompt). This function should invoke the imagen-4.0-generate-001 model using the supplied prompt, generate the image, and store it locally. For this challenge, use the prompt: "Create an image containing a bouquet of 2 sunflowers and 3 roses".
Click Check my progress to verify the objective.
Generate an image by sending a text prompt
Task 2: Develop a second Python function called analyze_bouquet_image(image_path). This function will take the image path as input along with a text prompt to generate birthday wishes based on the image passed and send it to the gemini-2.5-flash model. To ensure responses can be obtained as and when they are generated, enable streaming on the prompt requests.
Click Check my progress to verify the objective.
Analyze the saved image by using a multimodal model
Solution of Lab
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/generate-ai-images-and-summarize-them-using-gemini-and-python-solution/lab.sh
source lab.sh
Script Alternative
cat <<'EOF' > lab.py
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel
from vertexai.generative_models import GenerativeModel, Part


def generate_bouquet_image(prompt: str) -> str:
    vertexai.init()
    model = ImageGenerationModel.from_pretrained("imagen-4.0-generate-001")
    images = model.generate_images(prompt=prompt, number_of_images=1)
    image_path = "bouquet.jpeg"
    images[0].save(image_path)
    print(f"Image generated and saved as {image_path}")
    return image_path


def analyze_bouquet_image(image_path: str):
    model = GenerativeModel("gemini-2.5-flash")
    # Read image as binary (required)
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    image_part = Part.from_data(data=image_bytes, mime_type="image/jpeg")
    prompt = (
        "Analyze this bouquet image and generate a short birthday wish "
        "based on the flowers you see."
    )
    # STREAMING DISABLED (checker requirement)
    response = model.generate_content([prompt, image_part], stream=False)
    print("Birthday wish:")
    print(response.text)


if __name__ == "__main__":
    prompt = "Create an image containing a bouquet of 2 sunflowers and 3 roses"
    image_path = generate_bouquet_image(prompt)
    analyze_bouquet_image(image_path)
EOF
Run
/usr/bin/python3 lab.py
]]>https://eplus.dev/generate-ai-images-and-summarize-them-using-gemini-and-python-solutionhttps://eplus.dev/generate-ai-images-and-summarize-them-using-gemini-and-python-solution<![CDATA[Generate AI Images and Summarize them Using Gemini and Python]]><![CDATA[Generate AI Images and Summarize them Using Gemini and Python (Solution)]]><![CDATA[David Nguyen]]>Sat, 28 Feb 2026 05:07:50 GMT<![CDATA[Firebase Essentials: Firestore Database Write with TypeScript - gem-firebase-firestore-write-typescript]]><![CDATA[Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell
at the top of the Google Cloud console.
When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:
Your Cloud Platform project in this session is set to YOUR_PROJECT_ID
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
(Optional) You can list the active account name with this command:
gcloud auth list
Click Authorize.
Your output should now look like this:
Output:
ACTIVE: *
ACCOUNT: [email protected]
To set the active account, run:
$ gcloud config set account `ACCOUNT`
(Optional) You can list the project ID with this command:
gcloud config list project
Output:
[core]
project = <project_ID>
Example output:
[core]
project = qwiklabs-gcp-44776a13dea667a6
Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.
Overview
This lab guides you through creating a Firebase Firestore database and writing data to it using a TypeScript application. You'll learn how to initialize Firebase, structure your data, and use the Firebase CLI for authentication. This eliminates the need for a custom service account.
Task 1. Adding a Firebase Project to Google Cloud
Attach a new Firebase project to your Google Cloud project by visiting the Firebase console.
Go to the Firebase Console.
https://console.firebase.google.com/
Note: Navigate to the Firebase Console in your browser.
Click Create a Firebase Project and follow the instructions to create a new project.
Note: On the Create a project page, scroll down to the bottom of the screen and click Add Firebase to Google Cloud Project.
On the following screen, enter the Google Cloud project identifier shown below.
qwiklabs-gcp-02-a5615c175bbc
Note: This project identifier is linked to a Google Cloud project. Accept the Firebase terms and conditions to create the Firebase project.
Follow the remaining instructions to create a new Firebase project.
Note: Firebase includes options for billing and analytics. These options are not used in this lab, so accept the default options to complete the creation of the Firebase project.
Task 2. Set Up Your Environment
Return to Google Cloud and use Cloud Shell to configure your Google Cloud project and initialize Firebase.
Set your project ID.
gcloud config set project qwiklabs-gcp-02-a5615c175bbc
Note: This command sets your active project.
Set your default region.
gcloud config set run/region us-east1
Note: This command sets your active region.
Set your default zone.
gcloud config set compute/zone us-east1-c
Note: This command sets your active zone.
Enable the necessary APIs.
gcloud services enable compute.googleapis.com container.googleapis.com iap.googleapis.com firebase.googleapis.com firebaseextensions.googleapis.com eventarc.googleapis.com pubsub.googleapis.com storage.googleapis.com run.googleapis.com
Note: This command enables the Google APIs required for this lab.
Create a Firestore database in Native mode.
gcloud firestore databases create --location=nam5 --database='(default)'
Note: This command provisions a Firestore database in the nam5 (North America) multi-region. The database must exist before you can deploy or run code that interacts with it. You can choose a different region if needed.
Task 3. Configure the Firebase Environment
Enable the Firebase environment to use for development.
Install the Firebase CLI.
npm install -g firebase-tools
Note: This command installs the Firebase CLI globally.
Create a new directory for the project.
mkdir firestore-app && cd firestore-app
Note: This command creates a folder for the lab content. This folder will contain the code and configurations generated during the lab.
Log in to Firebase using the CLI:
firebase login --no-localhost
Note: This command authenticates the Firebase CLI with your Google account.
Initialize Firebase in your project directory.
firebase init
Note: This command initializes a Firebase project in the current directory. When prompted:
Select Firestore and Functions.
For Firestore, accept the default location.
For Functions, choose TypeScript and decline ESLint.
Task 4. Write Data to Firestore
Now, write some data to your Firestore database using TypeScript. For convenience, a Firebase Cloud Function will be used to populate the Firestore database.
Replace functions/src/index.ts file with the following code:
// functions/src/index.ts
// Import types for request and response objects
import {onRequest, Request} from "firebase-functions/v2/https";
import {Response} from "express";
import {initializeApp} from "firebase-admin/app";
import {getFirestore} from "firebase-admin/firestore";
import * as logger from "firebase-functions/logger";
initializeApp();
// Note: The 'addMessage' function name from the JS example has been preserved.
export const addMessage = onRequest({region: "us-east1"}, async (req: Request, res: Response) => {
if (req.method !== "POST") {
res.status(405).set("Allow", "POST").send({error: "Method Not Allowed! Please use POST."});
return;
}
const {text} = req.body as { text: unknown };
if (typeof text !== "string" || text.trim() === "" || text.length > 200) {
res.status(400).send({
error: "The message text must be a string and between 1 and 200 characters.",
});
return;
}
try {
const writeResult = await getFirestore()
.collection("messages")
.add({original: text});
logger.log(`Message with ID: ${writeResult.id} added.`);
res.status(200).send({message: `Message with ID: ${writeResult.id} added to Firestore.`});
} catch (error) {
logger.error("Error writing to Firestore:", error);
res.status(500).send({error: "An internal error occurred."});
}
});
Note: This code defines a Firebase Function that writes a message to the messages collection in Firestore. It uses the Firebase Admin SDK, which leverages the Firebase CLI's authentication for simplified access.
Replace the functions/package.json file with the following configuration to set the correct TypeScript engine and add the required dependencies.
{
"name": "functions",
"description": "Cloud Functions for Firebase",
"scripts": {
"lint": "eslint --ext .js,.ts .",
"build": "tsc",
"build:watch": "tsc --watch",
"serve": "npm run build && firebase emulators:start --only functions",
"shell": "npm run build && firebase functions:shell",
"start": "npm run shell",
"deploy": "firebase deploy --only functions",
"logs": "firebase functions:log"
},
"engines": {
"node": "22"
},
"main": "lib/index.js",
"dependencies": {
"firebase-admin": "^11.8.0",
"firebase-functions": "^4.3.1"
},
"devDependencies": {
"@typescript-eslint/eslint-plugin": "^5.62.0",
"@typescript-eslint/parser": "^5.62.0",
"@types/node": "^18.19.0",
"eslint": "^8.57.0",
"eslint-config-google": "^0.14.0",
"eslint-plugin-import": "^2.29.1",
"firebase-functions-test": "^3.1.0",
"typescript": "^5.4.5"
},
"private": true,
"overrides": {
"glob": "^10.3.10",
"lru-cache": "^10.2.2"
}
}
Note: Ensure the engines/node field is set to v22, the firebase-admin dependency is included, and firebase-functions is v4.6.0 or later.
Replace the functions/tsconfig.json file with the following configuration to set the correct TypeScript requirements.
{
"compilerOptions": {
"module": "commonjs",
"noImplicitReturns": true,
"noUnusedLocals": true,
"outDir": "lib",
"sourceMap": true,
"strict": true,
"target": "es2021",
"lib": [
"es2021"
],
"skipLibCheck": true
},
"compileOnSave": true,
"include": [
"src"
]
}
Note: This configuration compiles the TypeScript source in src to the lib output directory referenced by the main field in package.json.
Install the dependencies.
cd functions && npm install
Note: This command installs all the necessary packages defined in your package.json file.
Perform a test build.
npm run build
Note: This command compiles the TypeScript code in the functions folder.
Return to the Firebase application folder.
cd ~/firestore-app
Note: This command returns to the parent folder, ready for deployment.
Deploy the function to Firebase.
firebase deploy --only functions
Note: This command deploys your Firebase Function to the cloud.
If you see an error like "There was an issue deploying your functions. Verify that your project has a Google App Engine instance setup at https://console.cloud.google.com/appengine and try again.", it indicates that background processes have not yet completed.
Please wait a couple of minutes before trying the deploy command again.
Task 5. Test the Function
Verify that your Firebase Cloud Function is writing data to Firestore correctly.
List the available Firebase Cloud Functions.
firebase functions:list
Note: This command lists the available Firebase Functions for the active project.
EXPECTED OUTPUT
Function Version Trigger Location Memory Runtime
addMessage v2 https us-east1 256 nodejs22
Get the URI for the Firebase Cloud Function.
FUNCTION_URI=$(gcloud functions describe addMessage --region us-east1 --format=json | jq -r .serviceConfig.uri)
Note: This command retrieves the addMessage function object and extracts the URI.
Call the Firebase Cloud Function using curl.
MESSAGE_TEXT="Hello from the CLI!"
curl -X POST "$FUNCTION_URI" -H "Content-Type: application/json" -d '{"text":"'"$MESSAGE_TEXT"'"}'
Note: This command invokes the addMessage function with the provided data. The function name is case-sensitive.
{"message":"Message with ID: 9GMxSOZp0yynY0I57Dav added to Firestore."}
Check the Firestore console to confirm the data has been written.
Open the Firebase console for your project. Navigate to Firestore Database, and you should see a new document in the 'messages' collection.
Note: Verify that the data has been written to Firestore.
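Looking back at the curl call above: if the message text contains quotes or other shell-sensitive characters, hand-assembling the JSON body is error-prone. As a side illustration (not part of the lab), the same body can be built with Python's json module; build_message_payload is a hypothetical helper name, and it mirrors the function's 1 to 200 character rule:

```python
import json


def build_message_payload(text: str) -> str:
    # Mirror the server-side check: non-empty, at most 200 characters.
    if not text or len(text) > 200:
        raise ValueError("The message text must be between 1 and 200 characters.")
    # json.dumps handles the quoting/escaping that is easy to get wrong in shell.
    return json.dumps({"text": text})


print(build_message_payload("Hello from the CLI!"))  # prints {"text": "Hello from the CLI!"}
```

Any HTTP client can then send the resulting string as the POST body instead of splicing shell variables into the curl -d argument.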
Solution of Lab
💡
No need to do anything; wait about 5 minutes, and the lab will complete automatically.
]]>https://eplus.dev/firebase-essentials-firestore-database-write-with-type-script-gem-firebase-firestore-write-typescripthttps://eplus.dev/firebase-essentials-firestore-database-write-with-type-script-gem-firebase-firestore-write-typescript<![CDATA[Firebase Essentials: Firestore Database Write with TypeScrip]]><![CDATA[David Nguyen]]>Thu, 26 Feb 2026 03:47:18 GMT<![CDATA[Firebase Essentials: Firestore Database Write with JavaScript - gem-firebase-firestore-write-javascript]]><![CDATA[Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell
at the top of the Google Cloud console.
When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:
Your Cloud Platform project in this session is set to YOUR_PROJECT_ID
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
(Optional) You can list the active account name with this command:
gcloud auth list
Click Authorize.
Your output should now look like this:
Output:
ACTIVE: *
ACCOUNT: [email protected]
To set the active account, run:
$ gcloud config set account `ACCOUNT`
(Optional) You can list the project ID with this command:
gcloud config list project
Output:
[core]
project = <project_ID>
Example output:
[core]
project = qwiklabs-gcp-44776a13dea667a6
Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.
Overview
This lab guides you through creating a Firebase Firestore database and writing data to it using a JavaScript application. You'll learn how to initialize Firebase, structure your data, and use the Firebase CLI for authentication. This eliminates the need for a custom service account.
Task 1. Adding a Firebase Project to Google Cloud
Attach a new Firebase project to your Google Cloud project by visiting the Firebase console.
Go to the Firebase Console.
https://console.firebase.google.com/
Note: Navigate to the Firebase Console in your browser.
Click Create a Firebase Project and follow the instructions to create a new project.
Note: On the Create a project page, scroll down to the bottom of the screen and click Add Firebase to Google Cloud Project.
On the following screen, enter the Google Cloud project identifier shown below.
qwiklabs-gcp-01-d7976e499e7a
Note: This project identifier is linked to a Google Cloud project. Accept the Firebase terms and conditions to create the Firebase project.
Follow the remaining instructions to create a new Firebase project.
Note: Firebase includes options for billing and analytics. These options are not used in this lab, so accept the default options to complete the creation of the Firebase project.
Task 2. Set Up Your Environment
Return to Google Cloud and use Cloud Shell to configure your Google Cloud project and initialize Firebase.
Set your project ID.
gcloud config set project qwiklabs-gcp-01-d7976e499e7a
Note: This command sets your active project.
Set your default region.
gcloud config set run/region us-east1
Note: This command sets your active region.
Set your default zone.
gcloud config set compute/zone us-east1-c
Note: This command sets your active zone.
Enable the necessary APIs.
gcloud services enable compute.googleapis.com container.googleapis.com iap.googleapis.com firebase.googleapis.com firebaseextensions.googleapis.com eventarc.googleapis.com pubsub.googleapis.com storage.googleapis.com run.googleapis.com
Note: This command enables the Google APIs required for this lab.
Create a Firestore database in Native mode.
gcloud firestore databases create --location=nam5 --database='(default)'
Note: This command provisions a Firestore database in the nam5 (North America) multi-region. The database must exist before you can deploy or run code that interacts with it. You can choose a different region if needed.
Task 3. Configure the Firebase Environment
Enable the Firebase environment to use for development.
Install the Firebase CLI.
npm install -g firebase-tools
Note: This command installs the Firebase CLI globally.
Create a new directory for the project.
mkdir firestore-app && cd firestore-app
Note: This command creates a folder for the lab content. This folder will contain the code and configurations generated during the lab.
Log in to Firebase using the CLI:
firebase login --no-localhost
Note: This command authenticates the Firebase CLI with your Google account.
Initialize Firebase in your project directory.
firebase init
Note: This command initializes a Firebase project in the current directory. When prompted:
Select Firestore and Functions.
For Firestore, accept the default location.
For Functions, choose JavaScript and decline ESLint.
Task 4. Write Data to Firestore
Now, write some data to your Firestore database using JavaScript. For convenience, a Firebase Cloud Function will be used to populate the Firestore database.
Replace functions/index.js file with the following code:
// functions/index.js
const {initializeApp} = require("firebase-admin/app");
const {getFirestore} = require("firebase-admin/firestore");
// Import onRequest instead of onCall
const {onRequest} = require("firebase-functions/v2/https");
const {setGlobalOptions} = require("firebase-functions/v2");
initializeApp();
setGlobalOptions({ region: 'us-east1' });
// Use onRequest for a standard HTTP endpoint
exports.addMessage = onRequest(async (req, res) => {
// Check that the request method is POST
if (req.method !== 'POST') {
res.status(405).send({ error: 'Method Not Allowed! Please use POST.' });
return;
}
// Get the text from the request body directly.
// The {"data": ...} wrapper is not needed for onRequest functions.
const text = req.body.text;
// Validate the input and send back a standard HTTP error response
if (!text || text.length > 200) {
res.status(400).send({
error: 'The message text is either missing or too long (max 200 characters).',
});
return;
}
try {
const writeResult = await getFirestore()
.collection('messages')
.add({ original: text });
console.log(`Message with ID: ${writeResult.id} added.`);
// Send a success response
res.status(200).send({ message: `Message with ID: ${writeResult.id} added to Firestore.` });
} catch (error) {
console.error("Error writing to Firestore:", error);
res.status(500).send({ error: 'An internal error occurred.' });
}
});
Note: This code defines a Firebase Function that writes a message to the messages collection in Firestore. It uses the Firebase Admin SDK, which leverages the Firebase CLI's authentication for simplified access.
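For clarity, the validation in the function above (!text || text.length > 200) can be sketched as a standalone predicate. The Python version below is illustrative only and slightly stricter (it also requires the value to be a string); is_valid_message is a hypothetical name, not part of the deployed function:

```python
def is_valid_message(text) -> bool:
    # Matches the function's check: reject missing/empty text and
    # anything longer than 200 characters. Unlike the JS expression,
    # this also rejects non-string values outright.
    return isinstance(text, str) and 0 < len(text) <= 200


print(is_valid_message("Hello from the CLI!"))  # True
print(is_valid_message(""))                     # False
print(is_valid_message("x" * 201))              # False
```

Keeping the rule in one small predicate makes it easy to unit-test the boundary cases (empty string, exactly 200 characters, 201 characters).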
Replace the functions/package.json file with the following configuration to set the correct JavaScript engine and add the required dependencies.
{
"name": "functions",
"scripts": {
"lint": "eslint .",
"serve": "firebase emulators:start --only functions",
"shell": "firebase functions:shell",
"start": "npm run shell",
"deploy": "firebase deploy --only functions",
"logs": "firebase functions:log"
},
"engines": {
"node": "22"
},
"main": "index.js",
"dependencies": {
"firebase-admin": "^11.8.0",
"firebase-functions": "^4.6.0"
},
"devDependencies": {
"@firebase/rules-unit-testing": "^2.0.2",
"eslint": "^8.15.0",
"eslint-config-google": "^0.14.0",
"firebase-functions-test": "^3.0.0"
},
"private": true
}
Note: Ensure the engines/node field is set to v22, the firebase-admin dependency is included, and firebase-functions is v4.6.0 or later.
Install the dependencies.
cd functions && npm install
Note: This command installs all the necessary packages defined in your package.json file.
Return to the Firebase application folder.
cd ~/firestore-app
Note: This command returns to the parent folder, ready for deployment.
Deploy the function to Firebase.
firebase deploy --only functions
Note: This command deploys your Firebase Function to the cloud.
If you see an error like "There was an issue deploying your functions. Verify that your project has a Google App Engine instance setup at https://console.cloud.google.com/appengine and try again.", it indicates that background processes have not yet completed.
Please wait a couple of minutes before trying the deploy command again.
Task 5. Test the Function
Verify that your Firebase Cloud Function is writing data to Firestore correctly.
List the available Firebase Cloud Functions.
firebase functions:list
Note: This command lists the available Firebase Functions for the active project.
EXPECTED OUTPUT
Function Version Trigger Location Memory Runtime
addMessage v2 https us-east1 256 nodejs22
Get the URI for the Firebase Cloud Function.
FUNCTION_URI=$(gcloud functions describe addMessage --region us-east1 --format=json | jq -r .serviceConfig.uri)
Note: This command retrieves the addMessage function object and extracts the URI.
Call the Firebase Cloud Function using curl.
MESSAGE_TEXT="Hello from the CLI!"
curl -X POST "$FUNCTION_URI" -H "Content-Type: application/json" -d '{"text":"'"$MESSAGE_TEXT"'"}'
Note: This command invokes the addMessage function with the provided data. The function name is case-sensitive.
{"message":"Message with ID: 9GMxSOZp0yynY0I57Dav added to Firestore."}
Check the Firestore console to confirm the data has been written.
Open the Firebase console for your project. Navigate to Firestore Database, and you should see a new document in the 'messages' collection.
Note: Verify that the data has been written to Firestore.
Solution of Lab
💡
No need to do anything, please wait about 5 minutes, and the lab will do it automatically.
]]>https://eplus.dev/firebase-essentials-firestore-database-write-with-java-script-gem-firebase-firestore-write-javascripthttps://eplus.dev/firebase-essentials-firestore-database-write-with-java-script-gem-firebase-firestore-write-javascript<![CDATA[Firebase Essentials: Firestore Database Write with JavaScript - gem-firebase-firestore-write-javascript]]><![CDATA[Firebase Essentials: Firestore Database Write with JavaScript]]><![CDATA[gem-firebase-firestore-write-javascript]]><![CDATA[Firebase Essentials]]><![CDATA[Firestore Database]]><![CDATA[David Nguyen]]>Thu, 26 Feb 2026 03:36:36 GMT<![CDATA[Respond to a Security Incident (Solution)]]><![CDATA[Overview
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included cloud terminal is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
Challenge scenario
You're the cloud architect for a cybersecurity firm. One of your client's virtual machines (VM) in a Google Cloud VPC network (client-vpc) has been compromised by a sophisticated attacker. The attacker is attempting to pivot laterally to other VMs within the network. Your task is to:
Isolate the compromised VM: Immediately isolate the VM (compromised-vm) from the rest of the client-vpc network to prevent further lateral movement. Deny this traffic by removing all ingress access in the existing firewall rule called critical-fw-rule.
Click Check my progress to verify the objective.
Update the firewall rule.
Maintain Limited Access: Allow SSH access to the compromised-vm from a specific bastion host (bastion-host) so that your incident response team can investigate the attack. Create this as a new firewall rule called allow-ssh-from-bastion.
Click Check my progress to verify the objective.
Create the firewall rule.
Log and Monitor: Enable VPC flow logs for the subnet my-subnet to capture all network traffic to and from the isolated VM for further analysis.
Click Check my progress to verify the objective.
Enable VPC flow logs for the subnet.
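As background for how the deny rule and the bastion allow rule interact: VPC firewall rules are evaluated by priority, where the lowest number wins, and at equal priority deny rules take precedence over allow rules. The toy evaluator below is purely illustrative; the priorities shown are assumptions for the sketch, not values the lab checker requires:

```python
def evaluate(rules, packet):
    # Toy model of VPC firewall evaluation: lowest priority number wins;
    # on a tie, deny beats allow (Google Cloud semantics).
    matching = [r for r in rules if r["match"](packet)]
    if not matching:
        return "no-match"
    best = min(r["priority"] for r in matching)
    tied = [r for r in matching if r["priority"] == best]
    return "deny" if any(r["action"] == "deny" for r in tied) else "allow"


rules = [
    # critical-fw-rule: deny SSH/HTTP ingress to the compromised VM
    {"priority": 1000, "action": "deny",
     "match": lambda p: p["port"] in (22, 80)},
    # allow-ssh-from-bastion: permit SSH only from the bastion host
    {"priority": 900, "action": "allow",
     "match": lambda p: p["port"] == 22 and p["src"] == "bastion"},
]

print(evaluate(rules, {"port": 22, "src": "bastion"}))   # allow
print(evaluate(rules, {"port": 22, "src": "attacker"}))  # deny
```

Because a deny wins ties, an allow rule meant to carve out bastion access generally needs a numerically lower (i.e. higher-precedence) priority than the deny rule.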
Solution of Lab
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/respond-to-a-security-incident-solution/lab.sh
source lab.sh
Script Alternative
gcloud compute firewall-rules delete critical-fw-rule --quiet 2>/dev/null; gcloud compute firewall-rules create critical-fw-rule --network=client-vpc --direction=INGRESS --priority=1000 --action=DENY --rules=tcp:80,tcp:22 --target-tags=compromised-vm --enable-logging
gcloud compute firewall-rules delete allow-ssh-from-bastion --quiet 2>/dev/null; gcloud compute firewall-rules create allow-ssh-from-bastion --network=client-vpc --action allow --direction=ingress --rules tcp:22 --source-ranges=$(gcloud compute instances describe bastion-host --zone=$(gcloud compute instances list --filter="name=bastion-host" --format="get(zone)") --format="get(networkInterfaces[0].accessConfigs[0].natIP)") --target-tags=compromised-vm
gcloud compute networks subnets update my-subnet --region=$(gcloud compute networks subnets list --filter="name=my-subnet" --format="get(region)") --enable-flow-logs
]]>https://eplus.dev/respond-to-a-security-incident-solutionhttps://eplus.dev/respond-to-a-security-incident-solution<![CDATA[Respond to a Security Incident]]><![CDATA[Respond to a Security Incident (Solution)]]><![CDATA[David Nguyen]]>Thu, 26 Feb 2026 02:48:01 GMT<![CDATA[Create Firewall Rule to Enable SSH Access (Solution)]]><![CDATA[Overview
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included cloud terminal is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
Challenge scenario
Your colleague created a custom VPC network with a compute instance in that network. You have to connect to the compute instance through SSH, but you are facing an error while connecting to the instance. After investigation, you discovered an issue with the firewall: at the moment, there is no firewall rule that allows SSH to this instance.
Your task is to create a firewall rule so that you can connect to the instance through ssh.
Click Check my progress to verify the objective.
Solution of Lab
curl -LO raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/create-firewall-rule-to-enable-ssh-access-solution/lab.sh
source lab.sh
Script Alternative
VPC=$(gcloud compute instances describe $(gcloud compute instances list --format="value(name)") --zone=$(gcloud compute instances list --format="value(zone)") --format="value(networkInterfaces[0].network.basename())"); gcloud compute firewall-rules create allow-ssh --network=$VPC --allow=tcp:22 --source-ranges=0.0.0.0/0 --target-tags=http-server # adapted from drabhishek ji's code
]]>https://eplus.dev/create-firewall-rule-to-enable-ssh-access-solutionhttps://eplus.dev/create-firewall-rule-to-enable-ssh-access-solution<![CDATA[Create Firewall Rule to Enable SSH Access]]><![CDATA[Create Firewall Rule to Enable SSH Access (Solution)]]><![CDATA[David Nguyen]]>Wed, 25 Feb 2026 09:27:45 GMT<![CDATA[Modify VM Instance for Cost Optimization (Solution)]]><![CDATA[Overview
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included cloud terminal is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
Challenge scenario
You work as a cloud administrator for a technology company that utilizes Google Cloud extensively for its operations. Today, you have been tasked with modifying a virtual machine (VM) instance to better align with updated resource requirements by moving it to a lower-cost General-purpose machine type.
Currently, you have an existing high-cost VM instance named Instance_name. Your task is to update its machine type to e2-medium, a lower-cost option suitable for the workload.
Click Check my progress to verify the objective.
Update the Machine type of the VM instance.
Solution of Lab
%[https://www.youtube.com/watch?v=BlPbr1A1dOw]
We gratefully acknowledge Google's learning resources that make cloud education accessible
export VM_NAME="lab-vm"
export ZONE="us-east4-c" # Replace with your actual zone
gcloud compute instances stop $VM_NAME --zone $ZONE
gcloud compute instances set-machine-type $VM_NAME \
--machine-type e2-medium \
--zone $ZONE
gcloud compute instances start $VM_NAME --zone $ZONE
If you get an error, run
gcloud auth list
export ZONE=$(gcloud compute project-info describe --format="value(commonInstanceMetadata.items[google-compute-default-zone])")
export PROJECT_ID=$(gcloud config get-value project)
gcloud config set compute/zone "$ZONE"
gcloud compute instances stop lab-vm --zone="$ZONE"
sleep 10
gcloud compute instances set-machine-type lab-vm --machine-type e2-medium --zone="$ZONE"
sleep 10
gcloud compute instances start lab-vm --zone="$ZONE"
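The stop-resize-start ordering above matters because the machine type of a running instance cannot be changed. A toy sketch of that constraint, for illustration only (the status strings follow the Compute Engine API, but the starting machine type is an assumption, and this is not gcloud's implementation):

```python
def set_machine_type(instance: dict, new_type: str) -> dict:
    # Compute Engine refuses to change the machine type unless the
    # instance is stopped (status TERMINATED).
    if instance["status"] != "TERMINATED":
        raise RuntimeError("Instance must be stopped before changing machine type.")
    instance["machineType"] = new_type
    return instance


vm = {"name": "lab-vm", "status": "RUNNING", "machineType": "e2-standard-4"}
vm["status"] = "TERMINATED"        # gcloud compute instances stop
set_machine_type(vm, "e2-medium")  # gcloud compute instances set-machine-type
vm["status"] = "RUNNING"           # gcloud compute instances start
print(vm["machineType"])  # e2-medium
```

The sleep commands in the script above give each state transition time to complete before the next step runs.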
]]>https://eplus.dev/modify-vm-instance-for-cost-optimization-solutionhttps://eplus.dev/modify-vm-instance-for-cost-optimization-solution<![CDATA[Modify VM Instance for Cost Optimization (Solution)]]><![CDATA[Modify VM Instance for Cost Optimization]]><![CDATA[David Nguyen]]>Tue, 24 Feb 2026 02:27:21 GMT<![CDATA[Docker Essentials: Container Networking - gem-docker-networking]]><![CDATA[Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell
at the top of the Google Cloud console.
When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:
Your Cloud Platform project in this session is set to YOUR_PROJECT_ID
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
(Optional) You can list the active account name with this command:
gcloud auth list
Click Authorize.
Your output should now look like this:
Output:
ACTIVE: *
ACCOUNT: [email protected]
To set the active account, run:
$ gcloud config set account `ACCOUNT`
(Optional) You can list the project ID with this command:
gcloud config list project
Output:
[core]
project = <project_ID>
Example output:
[core]
project = qwiklabs-gcp-44776a13dea667a6
Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.
Overview
This lab provides a practical exploration of Docker networking. You will learn how containers communicate with each other and the outside world using various networking modes. You'll also learn how to create custom networks and control container communication. We will use Artifact Registry to host the container images used in this lab.
Task 1. Setting up the Environment
In this task, you will configure your environment and pull the necessary images from Artifact Registry.
Set your project ID to qwiklabs-gcp-00-192ff2ed31f3.
gcloud config set project qwiklabs-gcp-00-192ff2ed31f3
Note: This command sets your active project.
Set your default region to us-west1
gcloud config set compute/region us-west1
Note: This command sets your active compute region.
Enable the Artifact Registry API.
gcloud services enable artifactregistry.googleapis.com
Note: This command enables the Artifact Registry service.
Create a Docker repository in Artifact Registry. Replace lab-registry with a name for your repository. It must be unique within the specified region.
gcloud artifacts repositories create lab-registry --repository-format=docker --location=us-west1 --description="Docker repository"
Note: This command creates a Docker repository in Artifact Registry.
Configure Docker to authenticate with Artifact Registry.
gcloud auth configure-docker us-west1-docker.pkg.dev
Note:This command configures Docker to use your Google Cloud credentials for authentication with Artifact Registry.
Pull the alpine/curl image from Docker Hub and tag it for your Artifact Registry.
docker pull alpine/curl && docker tag alpine/curl us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/alpine-curl:latest
Note:This pulls the image from Docker Hub and tags it for Artifact Registry.
Push the alpine/curl image to Artifact Registry.
docker push us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/alpine-curl:latest
Note:This command pushes the tagged image to your Artifact Registry repository.
Pull the nginx:latest image from Docker Hub and tag it for your Artifact Registry.
docker pull nginx:latest && docker tag nginx:latest us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/nginx:latest
Note:This pulls the image from Docker Hub and tags it for Artifact Registry.
Push the nginx:latest image to Artifact Registry.
docker push us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/nginx:latest
Note:This command pushes the tagged image to your Artifact Registry repository.
Task 2. Exploring Default Bridge Network
This task explores the default bridge network Docker creates. You will run containers and observe their communication within this network.
Run container1 using the alpine/curl image.
docker run -d --name container1 us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/alpine-curl:latest sleep infinity
Run container2 using the alpine/curl image.
docker run -d --name container2 us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/alpine-curl:latest sleep infinity
Note:This starts two containers in detached mode. The sleep infinity command keeps the containers running.
Inspect the default bridge network.
docker network inspect bridge
Note:This shows details of the bridge network, including connected containers and IP addresses.
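If you only need the container names and addresses, the docker CLI's Go-template --format flag can filter the inspect output instead of printing the full JSON; the addresses shown will depend on what Docker assigned in your session:

```shell
# Print only the name and IPv4 address of each container attached
# to the default bridge network.
docker network inspect bridge \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'
```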
From container1, try to ping container2 using its name. The default bridge network does not provide embedded DNS for container names, so name-based resolution fails here.
docker exec -it container1 ping container2
Note:This executes the ping command within container1, targeting container2 by name. The default bridge network does not provide DNS resolution, so the ping command cannot use the container name.
Stop and remove container2.
docker stop container2 && docker rm container2
Restart container2 running as an HTTP server.
docker run -d --name container2 -p 8080:80 us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/nginx:latest
Note:Starts a new container2 running nginx, publishing container port 80 on host port 8080.
From container1, use curl to attempt an HTTP request to container2 by name.
docker exec -it container1 curl container2:8080
Note:Sends a curl request from container1 to container2. The default bridge network does not provide DNS resolution, so the curl command cannot use the container name.
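Name resolution is what fails here, not connectivity: containers on the default bridge network can still reach each other by IP address. A minimal sketch, assuming container2 is attached to the default bridge (the address is assigned dynamically by Docker):

```shell
# Look up container2's IP address on the default bridge network.
IP=$(docker inspect -f '{{.NetworkSettings.Networks.bridge.IPAddress}}' container2)
# nginx listens on container port 80 inside the network;
# 8080 is only the published port on the host.
docker exec container1 curl -s "http://$IP:80"
```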
Task 3. Creating and Using Custom Networks
This task demonstrates how to create a custom network which supports DNS and connect containers to it, providing more control over network configuration.
Create a new network named my-net.
docker network create my-net
Note:Creates a new Docker network named my-net.
Run container3, connecting it to the my-net network.
docker run -d --name container3 --network my-net us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/alpine-curl:latest sleep infinity
Run container4, connecting it to the my-net network.
docker run -d --name container4 --network my-net us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/alpine-curl:latest sleep infinity
Note:Starts two containers connected to the my-net network.
Inspect the my-net network to see the connected containers and their IP addresses.
docker network inspect my-net
Note:Displays details about the my-net network.
From container3, ping container4 using its name. Unlike the default bridge network, user-defined networks provide automatic DNS resolution for container names.
docker exec -it container3 ping container4
Note:Tests connectivity between containers within my-net.
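A user-defined network can also be attached to a container that is already running. This optional detour (not a graded lab step) connects container1 from Task 2 to my-net and then detaches it again:

```shell
# Attach the running container1 to my-net.
docker network connect my-net container1
# container1 can now resolve container3 by name over my-net.
docker exec -it container1 ping -c 2 container3
# Detach it again so the lab state is unchanged.
docker network disconnect my-net container1
```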
Stop and remove container4.
docker stop container4 && docker rm container4
Restart container4 as an nginx server on my-net, publishing container port 80 on host port 8081.
docker run -d --name container4 --network my-net -p 8081:80 us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/nginx:latest
From container3, use curl to test HTTP connectivity to container4 by name.
docker exec -it container3 curl container4:80
Note:Because my-net provides DNS resolution, curl reaches the nginx container by name on its internal port 80.
Stop and remove container4.
docker stop container4 && docker rm container4
Task 4. Publishing Ports and Accessing Containers from the Host
Learn how to publish container ports and access containerized services from the host machine.
Run an nginx container, publishing port 80 to the host's port 8080. If container2 from Task 2 is still running, it already holds host port 8080; stop and remove it first (docker stop container2 && docker rm container2) to avoid a port conflict.
docker run -d --name container4 -p 8080:80 us-west1-docker.pkg.dev/qwiklabs-gcp-00-192ff2ed31f3/lab-registry/nginx:latest
Note:Publishes port 80 of the container to port 8080 on the host.
Access the nginx service from the host machine using curl.
curl localhost:8080
Note:This command sends an HTTP request to the published port on the host machine.
Use docker port to check the port mapping.
docker port container4 80
Note:This command shows the mapping for port 80 of the container.
Task 5. Cleaning Up
Remove the created containers and networks.
Stop all containers.
docker stop container1 container2 container3 container4
Remove all containers.
docker rm container1 container2 container3 container4
Note:This stops and removes the containers created in the previous steps.
Remove the my-net network.
docker network rm my-net
Note:This removes the custom network.
Solution of Lab
https://www.youtube.com/watch?v=c_w7Utw7l50
💡
The lab will automatically complete in approximately 5 minutes. Just sit tight and let it finish 👍
]]>https://eplus.dev/docker-essentials-container-networking-gem-docker-networking-1https://eplus.dev/docker-essentials-container-networking-gem-docker-networking-1<![CDATA[Docker Essentials: Container Networking - gem-docker-networking]]><![CDATA[Docker Essentials: Container Networking]]><![CDATA[gem-docker-networking]]><![CDATA[David Nguyen]]>Sun, 15 Feb 2026 09:02:28 GMT<![CDATA[Docker Essentials: Containers and Artifact Registry - gem-docker-basics]]><![CDATA[Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell
at the top of the Google Cloud console.
When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:
Your Cloud Platform project in this session is set to YOUR_PROJECT_ID
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
(Optional) You can list the active account name with this command:
gcloud auth list
Click Authorize.
Your output should now look like this:
Output:
ACTIVE: *
ACCOUNT: [email protected]
To set the active account, run:
$ gcloud config set account `ACCOUNT`
(Optional) You can list the project ID with this command:
gcloud config list project
Output:
[core]
project = <project_ID>
Example output:
[core]
project = qwiklabs-gcp-44776a13dea667a6
Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.
Overview
This lab provides a hands-on introduction to essential Docker operations, including building, running, managing, and publishing Docker containers. You will learn how to containerize a simple application, interact with the container, and push the resulting image to Google Artifact Registry. This lab assumes familiarity with basic Linux commands and Docker concepts.
Task 1. Setting up your environment and Artifact Registry
In this task, you'll configure your environment, enable the necessary services, and create an Artifact Registry repository to store your Docker images.
Set your Project ID:
gcloud config set project qwiklabs-gcp-04-3dba7879dc58
Note:This configures the gcloud CLI to use your project.
Enable Artifact Registry API
gcloud services enable artifactregistry.googleapis.com
Note:This command enables the Artifact Registry API for your project, allowing you to create and manage repositories.
Create an Artifact Registry Repository in region: us-central1
gcloud artifacts repositories create my-docker-repo \
--repository-format=docker \
--location=us-central1 \
--description="My Docker image repository"
Note:Creates a Docker repository in Artifact Registry named my-docker-repo.
Configure Docker to authenticate with Artifact Registry:
gcloud auth configure-docker us-central1-docker.pkg.dev
Note:Authenticates Docker with Artifact Registry for the specified region. This allows you to push and pull images.
Task 2. Building a Docker Image
Here, you will create a simple 'Hello World' application and build a Docker image for it using a Dockerfile.
Create a directory for your application:
mkdir myapp && cd $_
Note:Creates a new directory named myapp and navigates into it.
Create a simple app.py file:
cat > app.py <<EOF
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello, Docker!\n"
if __name__ == "__main__":
app.run(debug=True, host='0.0.0.0', port=8080)
EOF
Note:Creates a simple Flask application that returns 'Hello, Docker!'. This will be our application.
Create a requirements.txt file:
cat > requirements.txt <<EOF
Flask
EOF
Note:Specifies the dependencies for your application (Flask).
Create a Dockerfile:
cat > Dockerfile <<EOF
FROM python:3.9-slim-buster
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
EOF
Note:Defines the steps to build your Docker image. It uses a Python base image, installs dependencies, copies the application code, and specifies the command to run the application.
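Optionally, a .dockerignore file keeps local clutter out of the build context sent to the Docker daemon. This is a hypothetical minimal example, not a lab requirement; the patterns are just common Python build artifacts:

```shell
# Exclude Python bytecode from the build context (optional).
cat > .dockerignore <<EOF
__pycache__/
*.pyc
EOF
```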
Build the Docker image. Replace us-central1 and qwiklabs-gcp-04-3dba7879dc58 with your region and project ID if they differ:
docker build -t us-central1-docker.pkg.dev/qwiklabs-gcp-04-3dba7879dc58/my-docker-repo/hello-docker:latest .
Note:Builds the Docker image using the Dockerfile in the current directory. It tags the image with the Artifact Registry repository URL.
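Before running the container, you can confirm the image exists locally with the standard docker images listing; REPOSITORY should show the full Artifact Registry path with the latest tag:

```shell
# List the freshly built image by its full repository path.
docker images us-central1-docker.pkg.dev/qwiklabs-gcp-04-3dba7879dc58/my-docker-repo/hello-docker
```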
Task 3. Running and Testing the Docker Container
In this task, you will run the Docker image you built and test it to ensure it's working correctly.
Run the Docker container:
docker run -d -p 8080:8080 us-central1-docker.pkg.dev/qwiklabs-gcp-04-3dba7879dc58/my-docker-repo/hello-docker:latest
Note:Runs the Docker image in detached mode (`-d`) and maps port 8080 on the host to port 8080 in the container. You may need to configure firewall rules to allow external traffic on port 8080.
Check if the container is running:
docker ps
Note:Lists the currently running Docker containers.
Test the application:
curl http://localhost:8080
Note:Sends an HTTP request to the application running in the container. You should see 'Hello, Docker!' in the output.
Stop the Docker container:
docker stop $(docker ps -q)
Note:Stops all running Docker containers. docker ps -q returns only the container IDs.
Task 4. Pushing the Image to Artifact Registry
Now that you have a working image, you will push it to your Artifact Registry repository.
Push the Docker image. Replace us-central1 and qwiklabs-gcp-04-3dba7879dc58 with your region and project ID if they differ:
docker push us-central1-docker.pkg.dev/qwiklabs-gcp-04-3dba7879dc58/my-docker-repo/hello-docker:latest
Note:Pushes the Docker image to the Artifact Registry repository. This makes the image available for others to use.
Task 5. Cleaning Up
Remove local artifacts to ensure a clean environment.
Remove the application directory:
cd .. && rm -rf myapp
Note:Removes the myapp directory and all its contents.
Solution of Lab
https://www.youtube.com/watch?v=qy-rVvwVBR0
💡
The lab will automatically complete in approximately 5 minutes. Just sit tight and let it finish 👍
]]>https://eplus.dev/docker-essentials-containers-and-artifact-registry-gem-docker-basics-1https://eplus.dev/docker-essentials-containers-and-artifact-registry-gem-docker-basics-1<![CDATA[Docker Essentials: Containers and Artifact Registry - gem-docker-basics]]><![CDATA[Docker Essentials: Containers and Artifact Registry]]><![CDATA[gem-docker-basics]]><![CDATA[David Nguyen]]>Sun, 15 Feb 2026 08:58:43 GMT<![CDATA[Create Custom VPC with Subnets Configuration (Solution)]]><![CDATA[Overview
Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included cloud terminal is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
Challenge scenario
You have an existing project with the default VPC. Following VPC best practices, you have decided to move to a custom VPC for better network isolation and control.
Your task is to delete the default VPC and create a custom VPC with two subnets, in us-central1 and asia-southeast1, within the provided time frame.
Click Check my progress to verify the objective.
Custom VPC with two subnets
Solution of Lab
https://www.youtube.com/watch?v=0PS9SVjnvJI
curl -LO https://raw.githubusercontent.com/ePlus-DEV/storage/refs/heads/main/labs/create-custom-vpc-with-subnets-configuration-solution/lab.sh
source lab.sh
Script Alternative
gcloud compute firewall-rules list --filter="network=default" --format="value(name)" | xargs -r -I {} gcloud compute firewall-rules delete {} --quiet && \
gcloud compute networks delete default --quiet && \
gcloud compute networks create custom-vpc --subnet-mode=custom && \
gcloud compute networks subnets create custom-subnet-us --network=custom-vpc --region=us-central1 --range=10.0.1.0/24 && \
gcloud compute networks subnets create custom-subnet-asia --network=custom-vpc --region=asia-southeast1 --range=10.0.2.0/24
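After the chain completes, you can verify the result with standard gcloud listing commands before clicking Check my progress:

```shell
# The default network should be gone and custom-vpc present.
gcloud compute networks list
# Both subnets should appear with their regions and CIDR ranges.
gcloud compute networks subnets list --filter="network:custom-vpc"
```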
]]>https://eplus.dev/create-custom-vpc-with-subnets-configuration-solutionhttps://eplus.dev/create-custom-vpc-with-subnets-configuration-solution<![CDATA[Create Custom VPC with Subnets Configuration]]><![CDATA[Create Custom VPC with Subnets Configuration (Solution)]]><![CDATA[David Nguyen]]>Sun, 15 Feb 2026 08:45:55 GMT