gcloud cheat-sheet
gcloud auth login
gcloud config set project PROJECT_ID
gcloud config list
where PROJECT_ID is your GCP project ID. You can get it by running:
gcloud config get-value project
Enable APIs for use in a project:
gcloud services enable compute.googleapis.com
gcloud services enable container.googleapis.com
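Verify which APIs are enabled (a quick sanity check):
gcloud services list --enabled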
https://hackersandslackers.com/connect-to-your-google-cloud-compute-engine
gcloud compute ssh instancename
https://cloud.google.com/dns/docs/quickstart
gcloud beta dns --project=monolc18 managed-zones create skystone-analytics-zone --description="" --dns-name=skyscs.com.
https://cloud.google.com/sql/docs/mysql/connect-admin-ip
gcloud sql connect [INSTANCE_ID] --user=root
gcloud compute instances create example-instance \
--custom-cpu 6 --custom-memory 3072MB --custom-vm-type n2
List GKE Clusters
gcloud container clusters list
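Fetch kubeconfig credentials so kubectl can talk to a cluster (cluster name and zone are placeholders):
gcloud container clusters get-credentials CLUSTER_NAME --zone=us-central1-a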
https://cloud.google.com/sql/docs/postgres/quickstart-proxy-test?authuser=2
Install:
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
./cloud_sql_proxy -instances=flaskapps-270710:us-central1:chandb2=tcp:5432
psql "host=127.0.0.1 port=5432 sslmode=disable dbname=chanpostgres user=postgres"
Example
#!/bin/bash
# Ensure local install of postgres/mysql is stopped in systemd
sudo systemctl stop postgresql.service
# Check it's not running:
#sudo systemctl status postgresql.service
# Start connection server
~/cloud_sql_proxy -instances=your-project-id:us-central1:db2=tcp:5432
# DO THIS IN SEPARATE SHELL
# get psql shell to your db
psql "host=127.0.0.1 port=5432 sslmode=disable dbname=fastapitest user=postgres"
Startup Script
sudo apt update
sudo apt upgrade -y
Create a GCE Ubuntu 18.04 instance
gcloud beta compute --project=monol8 instances create cloudron2 \
  --zone=us-central1-a --machine-type=n1-standard-1 \
  --subnet=default --network-tier=PREMIUM \
  --metadata=startup-script=sudo\ apt\ update$'\n'sudo\ apt\ upgrade\ -y \
  --maintenance-policy=MIGRATE \
  --service-account=8-compute@developer.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --tags=http-server,https-server \
  --image=ubuntu-1804-bionic-v20200129a --image-project=ubuntu-os-cloud \
  --boot-disk-size=100GB --boot-disk-type=pd-standard \
  --boot-disk-device-name=cloudron2 --reservation-affinity=any
gcloud compute ssh your_instance_name
wget https://cloudron.io/cloudron-setup && chmod +x cloudron-setup && ./cloudron-setup --provider gce
gcloud beta dns --project=monolithdec18 managed-zones create skystone-analytics-zone --description="" --dns-name=your_domain.com.
Then copy the Cloud DNS name server (NS) records into your Google Domains DNS records.
steps:
# build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/flaskapps-2/skystone', '.']
# push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/flaskapps-2/skystone']
# deploy the container image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'run', 'deploy', 'skystone', '--image', 'gcr.io/flaskapps-2/skystone', '--region', 'us-central1', '--platform', 'managed', '--quiet']
images:
- gcr.io/flaskapps-2/skystone
1 Select the service you want to make public.
2 Click Show Info Panel in the top right corner to show the Permissions tab.
3 In the Add members field, enter allUsers.
4 Select the Cloud Run Invoker role from the Select a role drop-down menu.
5 Click Add.
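The same grant can be done from the CLI; a sketch, assuming a managed service named skystone in us-central1:
gcloud run services add-iam-policy-binding skystone --member=allUsers --role=roles/run.invoker --platform=managed --region=us-central1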
https://github.com/RedisLabs/redis-microservices-for-dummies
https://redislabs.com/blog/digging-into-redis-enterprise-on-google-cloud/
Pricing:
https://cloud.google.com/memorystore/pricing
Memorystore for Redis
https://cloud.google.com/memorystore/docs/redis/redis-overview
Python client library:
https://googleapis.dev/python/redis/latest/index.html
https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/tasks
https://cloud.google.com/tasks/docs/quickstart-appengine?authuser=2
https://cloud.google.com/tasks/docs/creating-http-target-tasks?authuser=2
https://cloud.google.com/tasks/docs/tutorial-gcf?authuser=2
https://cloud.google.com/tasks/docs/comp-tasks-sched?authuser=2
psq library:
https://github.com/GoogleCloudPlatform/psq
https://cloud.google.com/scheduler/docs/configuring/cron-job-schedules?authuser=2#defining_the_job_schedule
http://man7.org/linux/man-pages/man5/crontab.5.html
Field | Format of valid values |
---|---|
Minute | 0-59 |
Hour | 0-23 |
Day of the month | 1-31 |
Month | 1-12 |
Day of the week | 0-6 (Sunday to Saturday) |
Sample Schedule | Cloud Scheduler Format |
---|---|
Every minute | * * * * * |
Every Saturday at 23:45 (11:45 PM) | 45 23 * * 6 |
Every Monday at 09:00 | 0 9 * * 1 |
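A sketch of wiring a schedule to an HTTP target with Cloud Scheduler (job name and URI are placeholders):
gcloud scheduler jobs create http my-job --schedule="45 23 * * 6" --uri=https://example.com/task --http-method=POST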
https://cloud.google.com/scheduler/pricing
Job cost | Free tier |
---|---|
$0.10 per job per month | 3 free jobs per month, per billing account |
Install requirements:
steps:
- name: python:3.10
  entrypoint: pip
  args: ["install", "-r", "requirements.txt", "--user"]
# or
- name: python:3.10
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    source ./scripts/install_reqs
    install_python_requirements
    python -m pytest tests/schemas/test_schemas.py --junitxml=${SHORT_SHA}_test_log.xml
Run pytest:
steps:
- name: python
  entrypoint: python
  args: ["-m", "pytest", "tests/schemas/test_schemas.py", "--junitxml=${SHORT_SHA}_test_log.xml"]
  # or, to run the whole test suite:
  # args: ["-m", "pytest", "--junitxml=${SHORT_SHA}_test_log.xml"]
Private Repo
https://cloud.google.com/build/docs/interacting-with-dockerhub-images#storing_docker_credentials_in
Notes:
vscode extension
https://cloud.google.com/code/docs/vscode/yaml-editing
https://cloud.google.com/code/docs/vscode/creating-a-cloud-run-app
https://cloud.google.com/code/docs/vscode/deploying-a-cloud-run-app
Machine types:
https://cloud.google.com/compute/docs/machine-types
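List available machine types before picking one (zone filter is an example):
gcloud compute machine-types list --zones=us-central1-a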
Build config files are modeled using the Cloud Build API's Build resource.
https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.builds#resource:-build
https://cloud.google.com/cloud-build/docs/build-config
A build config file has the following structure:
steps:
- name: string
  args: [string, string, ...]
  env: [string, string, ...]
  dir: string
  id: string
  waitFor: [string, string, ...]
  entrypoint: string
  secretEnv: string
  volumes: object(Volume)
  timeout: string (Duration format)
- name: string
  ...
- name: string
  ...
timeout: string (Duration format)
queueTtl: string (Duration format)
logsBucket: string
options:
  env: [string, string, ...]
  secretEnv: string
  volumes: object(Volume)
  sourceProvenanceHash: enum(HashType)
  machineType: enum(MachineType)
  diskSizeGb: string (int64 format)
  substitutionOption: enum(SubstitutionOption)
  logStreamingOption: enum(LogStreamingOption)
  logging: enum(LoggingMode)
substitutions: map (key: string, value: string)
tags: [string, string, ...]
secrets: object(Secret)
images:
- [string, string, ...]
artifacts: object (Artifacts)
https://cloud.google.com/cloud-build/docs/build-config#id
Use the id field to set a unique identifier for a build step.
id is used with the waitFor field to configure the order in which build steps should be run.
For instructions on using waitFor and id, see Configuring build step order.
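A minimal sketch of id/waitFor ordering (step contents are illustrative): B starts at the beginning of the build because it waits for '-', while C waits for A to finish.
steps:
- id: 'A'
  name: 'ubuntu'
  args: ['echo', 'step A']
- id: 'B'
  name: 'ubuntu'
  args: ['echo', 'runs in parallel with A']
  waitFor: ['-']
- id: 'C'
  name: 'ubuntu'
  args: ['echo', 'runs after A completes']
  waitFor: ['A']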
https://cloud.google.com/cloud-build/docs/build-config#env
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/myimage', 'frontend=gcr.io/myproject/myimage']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-east1-b'
  - 'CLOUDSDK_CONTAINER_CLUSTER=node-example-cluster'
https://cloud.google.com/cloud-build/docs/build-config#volumes
steps:
- name: 'ubuntu'
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    echo "Hello, world!" > /persistent_volume/file
- name: 'ubuntu'
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'
  args: ['cat', '/persistent_volume/file']
https://cloud.google.com/cloud-build/docs/build-config#substitutions
steps:
- name: 'ubuntu'
  args: ['echo', 'hello ${_SUB_VALUE}']
substitutions:
  _SUB_VALUE: world
options:
  substitution_option: 'ALLOW_LOOSE'
https://cloud.google.com/cloud-build/docs/build-config#network
When Cloud Build runs each build step, it attaches the step's container to a local Docker network named cloudbuild.
The cloudbuild network hosts Application Default Credentials (ADC) that Google Cloud services can use to automatically find your credentials.
If you're running nested Docker containers and want to expose ADC to an underlying container, use the --network flag in your docker build step:
steps:
- name: gcr.io/cloud-builders/docker
  args: ["build", "--network=cloudbuild", "."]
https://cloud.google.com/sdk/gcloud/reference/sql
https://stackoverflow.com/questions/58990302/cloudbuild-yaml-create-a-sql-instance
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ["sql", "instances", "create", "$_INSTANCE_NAME", "--database-version=POSTGRES_11", "--region=$_REGION"]
https://github.com/anthcor/cloudrun-fastapi/blob/master/cloudbuild.yaml
steps:
- name: gcr.io/cloud-builders/docker
  id: "postgres_up"
  args:
    [
      "run",
      "-d",
      "-e",
      "POSTGRES_HOST_AUTH_METHOD=trust",
      "-p",
      "5432:5432",
      "-v",
      "/var/run/postgresql:/var/run/postgresql",
      "--name=postgres",
      "--network=cloudbuild",
      "postgres",
    ]
- id: "cloud_run_deploy"
  name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
  args:
    [
      "gcloud",
      "run",
      "deploy",
      "api-$BRANCH_NAME",
      "--image=us.gcr.io/$PROJECT_ID/api:latest",
      "--cpu=2",
      "--port=8080",
      "--memory=2048Mi",
      "--timeout=600",
      "--concurrency=20",
      "--platform=managed",
      "--max-instances=1000",
      "--region=us-central1",
      "--allow-unauthenticated",
      "--revision-suffix=$SHORT_SHA",
      "--set-env-vars=PROJECT_ID=$PROJECT_ID,SHORT_SHA=$SHORT_SHA",
      "--set-cloudsql-instances=$PROJECT_ID:us-central1:cloudrunfastapi",
      "--service-account=cloudrunsrvacct@$PROJECT_ID.iam.gserviceaccount.com",
    ]
https://github.com/GoogleCloudPlatform/cloud-builders-community
- name: 'gcr.io/$PROJECT_ID/packer'
  args:
  - build
  - -var
  - project_id=$PROJECT_ID
  - packer.json
gcloud builds submit --config cloudbuild.yaml .
gcloud container images list --filter packer
https://cloud.google.com/deployment-manager/docs/configuration/templates/create-basic-template
https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-gke
https://kompose.io/
https://github.com/kubernetes/kompose
https://github.com/kubernetes/kompose/releases
https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
Example docker-compose file for testing kompose conversions.
https://raw.githubusercontent.com/kubernetes/kompose/master/examples/docker-compose-v3.yaml
version: "3"
services:
redis-master:
image: k8s.gcr.io/redis:e2e
ports:
- "6379"
redis-slave:
image: gcr.io/google_samples/gb-redisslave:v1
ports:
- "6379"
environment:
- GET_HOSTS_FROM=dns
frontend:
image: gcr.io/google-samples/gb-frontend:v4
ports:
- "80:80"
environment:
- GET_HOSTS_FROM=dns
labels:
kompose.service.type: LoadBalancer
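To try the conversion (a sketch; kompose names its output files after the compose services, e.g. frontend-service.yaml):
kompose convert -f docker-compose-v3.yaml
# then apply the generated manifests
kubectl apply -f frontend-service.yaml -f frontend-deployment.yaml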
minikube:
- Manages the VM itself.
kubectl:
- Manages containers in the node.
@startmindmap
title Application Overview
* minikube
** kubectl
@endmindmap
https://cloud.google.com/compute/docs/startupscript
#!/bin/bash
sudo apt update
sudo apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
sudo apt update
apt-cache policy docker-ce
sudo apt install docker-ce docker-compose
sudo systemctl status docker
sudo usermod -aG docker ${USER}
su - ${USER}
# Confirm your user is in the docker group
id -nG
# Add another user to the group if needed
#sudo usermod -aG docker username
# Verify the docker CLI works
docker
# Note: start docker-compose as sudo
https://cloud.google.com/compute/docs/startupscript
You could, for example, provide a Python script instead of a bash script.
Keep in mind that Compute Engine runs the script verbatim, regardless of the type of script.
To execute a script that is not bash, add a shebang (a line that starts with #!) at the top of the file.
This tells the operating system which interpreter to use.
For example, if you use a Python script, you can add the following shebang line:
#! /usr/bin/python
Example bash startup script that installs Apache and serves a static page:
#! /bin/bash
apt update
apt -y install apache2
cat <<EOF > /var/www/html/index.html
<html><body><h1>Hello World</h1>
<p>This page was created from a startup script.</p>
</body></html>
EOF
https://cloud.google.com/compute/docs/containers/deploying-containers
https://cloud.google.com/run/docs/reference/container-contract#env-vars
These are reserved variables inside Cloud Run.
They cannot be set via Secret Manager.
Name | Description | Example |
---|---|---|
PORT | The port your HTTP server should listen on. | 8080 |
K_SERVICE | The name of the Cloud Run service being run. | hello-world |
K_REVISION | The name of the Cloud Run revision being run. | hello-world.1 |
K_CONFIGURATION | The name of the Cloud Run configuration that created the revision. | hello-world |
https://cloud.google.com/run/docs/reference/container-contract#port
The container must listen for requests on 0.0.0.0 on the port defined by the PORT environment variable.
In Cloud Run container instances, the PORT environment variable is always set to 8080,
but for portability reasons, your code should not hardcode this value.
If your container cannot listen on the port defined by the PORT environment variable, you can configure a specific container port for the service instead.
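A minimal sketch of honoring the port contract from a container entrypoint (assumes python3 is in the image):
#!/bin/bash
# Bind to 0.0.0.0 on the port Cloud Run injects, defaulting to 8080 for local runs
exec python3 -m http.server --bind 0.0.0.0 "${PORT:-8080}"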
https://cloud.google.com/run/docs/reference/container-contract#metadata-server
https://cloud.google.com/compute/docs/storing-retrieving-metadata#querying
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts1234567898-compute@developer.gserviceaccount.com/?query_path=https%3A%2F%2Flocalhost%3A8200%2Fexample%2Fquery&another_param=true" -H "Metadata-Flavor: Google"
https://cloud.google.com/run/docs/using-gcp-services#connecting_to_services_in_code
You can use Cloud Run (fully managed) with the supported Google Cloud services using the client libraries they provide. For code samples showing how to connect with a particular Google Cloud service, refer to the documentation provided for that Google Cloud service.
You do not need to provide credentials manually inside Cloud Run (fully managed) container instances when using the Google Cloud client libraries.
Note that Cloud Run (fully managed) uses a default runtime service account that has the Project > Editor role, which means it is able to call all Google Cloud APIs and have read and write access on all resources in your Google Cloud project. You can restrict this by assigning a service account with a minimal set of permissions to your Cloud Run services. For example, if your Cloud Run service is only reading data from Firestore, we recommend assigning it a service account that only has the Firestore User IAM role.
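A sketch of that minimal-permissions pattern (names are placeholders; roles/datastore.user is the Firestore User role):
gcloud iam service-accounts create minimal-sa
gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:minimal-sa@PROJECT_ID.iam.gserviceaccount.com" --role="roles/datastore.user"
gcloud run services update SERVICE_NAME --service-account=minimal-sa@PROJECT_ID.iam.gserviceaccount.com --platform=managed --region=us-central1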
https://cloud.google.com/run/docs/using-gcp-services#recommended-services
The following table lists services recommended for Cloud Run (fully managed).
Service | Description |
---|---|
Cloud Build | Build container images, continuous integration and delivery. |
Container Registry | Store container images. |
Artifact Registry | Store container images. |
Google Cloud's operations suite | Monitoring and logging of Cloud Run services. |
Firestore | Fully managed NoSQL database. |
Cloud Storage | Object storage. Store objects and serve static content. |
Pub/Sub | Push events to Cloud Run services. Refer to the Using Pub/Sub with Cloud Run Tutorial. |
Cloud Scheduler | Trigger Cloud Run services on a schedule. |
Cloud Tasks | Execute asynchronous tasks on Cloud Run. Refer to HTTP Target tasks with authentication tokens. |
Identity Platform | Login your users. |
Secret Manager | Create and access secrets. |
BigQuery | Fully managed cloud data warehouse for analytics. |
Firebase Hosting | Fully managed hosting service for static and dynamic content with configurable CDN caching. |
Cloud Endpoints (Beta) | API management including routing, authentication, API keys, rate limiting, and quota. Endpoints for Cloud Run are in Beta. |
https://cloud.google.com/run/docs/using-gcp-services#unsupported-services
The following table lists services that are not yet supported by Cloud Run (fully managed).
Service | Notes |
---|---|
Virtual Private Cloud | Cloud Run (fully managed) cannot connect to VPC network. |
Memorystore | Cloud Run (fully managed) cannot connect to VPC network. |
Filestore (NAS) | Filestore is not Firestore, which is supported. |
Cloud Load Balancing | Cloud Run (fully managed) does not work with Cloud Load Balancing. |
Google Cloud Armor | Cloud Run (fully managed) does not work with Cloud Load Balancing. |
Cloud CDN | Cloud Run (fully managed) does not work with Cloud Load Balancing. |
Identity-Aware Proxy | Cloud Run (fully managed) does not work with Cloud Load Balancing. |
VPC Service Controls | Cloud Run (fully managed) cannot be deployed into a VPC network. |
Cloud Asset Inventory | |
Cloud Run (fully managed) containers are sandboxed with gVisor:
https://gvisor.dev/docs/user_guide/quick_start/docker/
from google.cloud import storage

def list_buckets():
    storage_client = storage.Client()
    buckets = storage_client.list_buckets()
    for bucket in buckets:
        print(bucket.name)
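The client libraries used in this sheet are plain pip installs:
pip install google-cloud-storage google-cloud-secret-manager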
https://cloud.google.com/secret-manager/docs/overview?authuser=2#secret
A secret is a project-global object that contains a collection of metadata and secret versions.
The metadata can include replication locations, labels, and permissions.
The secret versions store the actual secret data, such as an API key or credential.
A secret version stores the actual secret data, such as API keys, passwords, or certificates.
You can address individual versions of a secret. You cannot modify a version, but you can delete it.
You rotate a secret by adding a new secret version to the secret.
Any version of a given secret can be accessed, as long as that version is enabled.
To prevent a secret version from being used, you can disable that version.
It is not possible to schedule a secret for automatic rotation.
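The same lifecycle from the CLI (secret name and payload are examples):
gcloud secrets create my-secret --replication-policy=automatic
echo -n "s3cr3t" | gcloud secrets versions add my-secret --data-file=-
gcloud secrets versions access latest --secret=my-secret
gcloud secrets versions disable 1 --secret=my-secret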
from google.cloud import secretmanager

def create_secret(project_id, secret_id):
    """
    Create a new secret with the given name. A secret is a logical wrapper
    around a collection of secret versions. Secret versions hold the actual
    secret material.
    """
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the parent project.
    parent = client.project_path(project_id)
    # Create the secret.
    response = client.create_secret(parent, secret_id, {
        'replication': {
            'automatic': {},
        },
    })
    # Print the new secret name.
    print('Created secret: {}'.format(response.name))
from google.cloud import secretmanager

def add_secret_version(project_id, secret_id, payload):
    """
    Add a new secret version to the given secret with the provided payload.
    """
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the parent secret.
    parent = client.secret_path(project_id, secret_id)
    # Convert the string payload into bytes. This step can be omitted if you
    # pass in bytes instead of a str for the payload argument.
    payload = payload.encode('UTF-8')
    # Add the secret version.
    response = client.add_secret_version(parent, {'data': payload})
    # Print the new secret version name.
    print('Added secret version: {}'.format(response.name))
https://cloud.google.com/secret-manager/docs/creating-and-accessing-secrets#access_a_secret_version
from google.cloud import secretmanager

def access_secret_version(project_id, secret_id, version_id):
    """
    Access the payload for the given secret version if one exists. The version
    can be a version number as a string (e.g. "5") or an alias (e.g. "latest").
    """
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret version.
    name = client.secret_version_path(project_id, secret_id, version_id)
    # Access the secret version.
    response = client.access_secret_version(name)
    # Print the secret payload.
    # WARNING: Do not print the secret in a production environment - this
    # snippet is showing how to access the secret material.
    payload = response.payload.data.decode('UTF-8')
    print('Plaintext: {}'.format(payload))
https://cloud.google.com/secret-manager/docs/managing-secrets?authuser=2#listing_secrets
def list_secrets(project_id):
    """List all secrets in the given project."""
    # Import the Secret Manager client library.
    from google.cloud import secretmanager
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the parent project.
    parent = client.project_path(project_id)
    # List all secrets.
    for secret in client.list_secrets(parent):
        print('Found secret: {}'.format(secret.name))
def get_secret(project_id, secret_id):
    """
    Get information about the given secret. This only returns metadata about
    the secret container, not any secret material.
    """
    # Import the Secret Manager client library.
    from google.cloud import secretmanager
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret.
    name = client.secret_path(project_id, secret_id)
    # Get the secret.
    response = client.get_secret(name)
    # Get the replication policy.
    if response.replication.automatic:
        replication = 'AUTOMATIC'
    elif response.replication.user_managed:
        replication = 'MANAGED'
    else:
        raise Exception('Unknown replication {}'.format(response.replication))
    # Print data about the secret.
    print('Got secret {} with replication policy {}'.format(response.name, replication))
https://cloud.google.com/secret-manager/docs/managing-secrets?authuser=2#managing_access_to_secrets
from google.cloud import secretmanager

def iam_grant_access(project_id, secret_id, member):
    """
    Grant the given member access to a secret.
    """
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret.
    name = client.secret_path(project_id, secret_id)
    # Get the current IAM policy.
    policy = client.get_iam_policy(name)
    # Add the given member with access permissions.
    policy.bindings.add(
        role='roles/secretmanager.secretAccessor',
        members=[member])
    # Update the IAM policy.
    new_policy = client.set_iam_policy(name, policy)
    # Confirm the update.
    print('Updated IAM policy on {}'.format(secret_id))
from google.cloud import secretmanager

def update_secret(project_id, secret_id):
    """
    Update the metadata about an existing secret.
    """
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret.
    name = client.secret_path(project_id, secret_id)
    # Update the secret's labels, masking the update to the labels field only.
    secret = {'name': name, 'labels': {'secretmanager': 'rocks'}}
    update_mask = {'paths': ['labels']}
    response = client.update_secret(secret, update_mask)
    # Print the updated secret name.
    print('Updated secret: {}'.format(response.name))
from google.cloud import secretmanager

def delete_secret(project_id, secret_id):
    """
    Delete the secret with the given name and all of its versions.
    """
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret.
    name = client.secret_path(project_id, secret_id)
    # Delete the secret.
    client.delete_secret(name)
https://cloud.google.com/free/docs/gcp-free-tier#always-free-usage-limits
Service | Notes |
---|---|
App Engine | The Google Cloud Free Tier is available only for the Standard Environment. |
AutoML Natural Language | |
AutoML Tables | |
AutoML Translation | |
AutoML Video Intelligence | |
AutoML Vision | |
BigQuery | |
Cloud Build | |
Cloud Functions | |
Cloud Logging and Cloud Monitoring | |
Cloud Natural Language API | |
Cloud Run | The free tier is available only for Cloud Run (fully managed). |
Cloud Shell | |
Cloud Source Repositories | |
Cloud Storage | Always Free is only available in us-west1, us-central1, and us-east1. |
Cloud Vision | |
Compute Engine | The Free Tier also covers external IP addresses in use by VM instances: in-use external IPs are available without additional cost until you have used a number of hours equal to the total hours in the current month, usage is combined across all in-use external IPs, and the allowance applies to all instance types. Compute Engine offers discounts for sustained use of virtual machines; Always Free use doesn't factor into sustained use. GPUs and TPUs are not included in the Always Free offer, and you are always charged for GPUs and TPUs that you add to VM instances. |
Firestore | |
Google Kubernetes Engine | |
Google Maps | |
Pub/Sub | |
Speech-to-Text | |
Video Intelligence API | |