Rolling out updates to managed instance groups
This document describes how to automatically apply configuration updates to the virtual machine (VM) instances in a managed instance group (MIG).
Compute Engine maintains the VMs in a MIG based on the configuration that you specify in an instance template and optional stateful configuration. From time to time, you might want to update this configuration.
When you set up an automated update, the MIG automatically rolls out a new version of the instance template to all or to a subset of the group's VMs. If you have stateful configuration, the MIG also applies any unapplied per-instance configurations to the corresponding VMs.
You can control the speed of deployment, the level of disruption to your service, and, by using a canary update, the number of instances that the MIG updates with the new template. After you specify a new configuration, you do not need to provide additional input and the update completes on its own.
Alternatively, if you want to selectively apply a new configuration only to new or to specific instances in a MIG, see Selectively updating instances in a MIG. To help you decide, see Choosing between automated and selective updates.
Before you begin
- If you want to use the API examples in this guide, set up API access.
- If you want to use the command-line examples in this guide, install the Google Cloud CLI.
- If you're updating a stateful MIG, review Applying, viewing, and removing stateful configuration in MIGs.
Limitations
- If you have a stateful MIG and you want to use automated rolling updates, you must keep the instance names when replacing instances or, equivalently, set the replacement method to RECREATE.
Starting a basic rolling update
A basic rolling update is an update that is gradually applied to all instances in a MIG until all instances have been updated to the latest intended configuration. The rolling update automatically skips instances that are already running their latest configuration.
You can control various aspects of a rolling update, such as how many instances can be taken offline for the update, how long to wait between updating instances, whether the new template affects all or just a portion of instances, and so on.
Here are things to keep in mind when making a rolling update:
- Updates are intent-based. When you make the initial update request, the Compute Engine API returns a successful response to confirm that the request is valid, but that doesn't indicate that the update succeeded. You must check the status of the group to determine whether your update was deployed successfully.
- The Instance Group Updater API is a declarative API. The API expects a request to specify the desired post-update configuration of the MIG, rather than explicit instructions for how to perform the update.
- Automatic updates support up to two instance template versions in your MIG. This means that you can specify two different instance template versions for your group, which is useful for performing canary updates.
To start a basic rolling update where the update is applied to all instances in the group, follow the instructions below.
Console
- In the Cloud console, go to the Instance groups page.
Go to Instance groups
- Select the MIG that you want to update.
- Click Update VMs.
- Under New template, click the drop-down list and select the new template to update to. The target size is automatically set to 100%, indicating that all your instances will be updated.
- Under Update configuration, expand the selection menu and select Automatic as the Update type. Leave default values for the other options.
- Click Update VMs to start the update.
gcloud
Use the rolling-action start-update command.

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=INSTANCE_TEMPLATE_NAME \
    [--zone=ZONE | --region=REGION]
Replace the following:
- INSTANCE_GROUP_NAME: the name of the MIG
- INSTANCE_TEMPLATE_NAME: the new instance template
- ZONE: for zonal MIGs, provide the zone
- REGION: for regional MIGs, provide the region
API
Call the patch method on a regional or zonal MIG resource.
For example, for a regional MIG, the following request shows the minimal configuration necessary for automatically updating 100% of the instances to the new instance template.

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instanceGroupManagers/INSTANCE_GROUP_NAME

{
  "instanceTemplate": "global/instanceTemplates/NEW_TEMPLATE",
  "updatePolicy": {
    "type": "PROACTIVE"
  }
}
After you make a request, you can monitor the update to know when the update has finished.
For advanced configurations, include other update options. If you don't specify otherwise, the maxSurge and maxUnavailable options default to 1 multiplied by the number of affected zones. This means that only 1 instance is taken offline in each affected zone, and the MIG creates only 1 additional instance per zone, during the update.
Configuring options for your update
For more complex updates, you can configure additional options, as described in the following sections.
Mode
Managed instance groups support two types of update mode:
- Automatic, or proactive, updates
- Selective, or opportunistic, updates
If you want to apply updates automatically, set the mode to proactive.
Alternatively, if an automated update is potentially too disruptive, you can choose to perform an opportunistic update. The MIG applies an opportunistic update only when you manually initiate the update on selected instances or when new instances are created. New instances can be created when you or another service, such as an autoscaler, resizes the MIG. Compute Engine does not actively initiate requests to apply opportunistic updates to existing instances.
For more information about automated versus selective updates, see Choosing between automated and selective updates.
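For example, a minimal sketch of switching a MIG to opportunistic updates through the API (PROJECT_ID, REGION, and INSTANCE_GROUP_NAME are placeholders) patches the group's update policy:

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instanceGroupManagers/INSTANCE_GROUP_NAME

{
  "updatePolicy": {
    "type": "OPPORTUNISTIC"
  }
}

Setting the type back to PROACTIVE resumes automated rollouts.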
Maximum surge
Use the maxSurge option to configure how many new instances the MIG can create above its targetSize during an automated update. For example, if you set maxSurge to 5, the MIG uses the new instance template to create up to 5 new instances above your target size. Setting a higher maxSurge value speeds up your update, at the cost of additional instances, which are billed according to the Compute Engine price sheet.
You can specify either a fixed number or, if the group has 10 or more instances, a percentage. If you set a percentage, the Updater rounds up the number of instances if necessary.
If you don't set the maxSurge value, the default value is used. For zonal MIGs, the default for maxSurge is 1. For regional MIGs, the default is the number of zones associated with the group, which is 3 by default.
maxSurge works only if you have enough quota or resources to support the additional resources.
If your update does not require VMs to be replaced, this option is ignored. You can force VMs to be replaced during an update by setting the minimal action option.
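As an illustration, the same option can be expressed in the API by adding a maxSurge value to the update policy; the 20% figure below is an arbitrary example and the names are placeholders:

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instanceGroupManagers/INSTANCE_GROUP_NAME

{
  "instanceTemplate": "global/instanceTemplates/NEW_TEMPLATE",
  "updatePolicy": {
    "type": "PROACTIVE",
    "maxSurge": { "percent": 20 }
  }
}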
Maximum unavailable
Use the maxUnavailable option to configure how many instances are unavailable at any time during an automated update. For example, if you set maxUnavailable to 5, then only 5 instances are taken offline for updating at a time. Use this option to control how disruptive the update is to your service and to control the rate at which the update is deployed.
This number also includes any instances that are unavailable for other reasons. For example, if the group is in the process of being resized, instances in the middle of being created might be unavailable. These instances count toward the maxUnavailable number.
You can specify a fixed number or, if the group has 10 or more instances, a percentage. If you set a percentage, the Updater rounds down the number of instances, if necessary.
If you do not want any unavailable machines during an update, set the maxUnavailable value to 0 and the maxSurge value to greater than 0. With these settings, Compute Engine removes each old machine only after its new replacement machine is created and running.
If you don't set the maxUnavailable value, the default value is used. For zonal MIGs, the default is 1. For regional MIGs, the default is the number of zones associated with the group, which is 3 by default.
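For example, a sketch of the zero-unavailability configuration described above, for a zonal MIG with placeholder names (the surge value is an arbitrary choice):

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=NEW_TEMPLATE \
    --max-unavailable=0 \
    --max-surge=3 \
    --zone=ZONE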
Minimum wait time
Use the minReadySec option to specify the amount of time to wait before considering a new or restarted instance as updated. Use this option to control the rate at which the automated update is deployed. The timer starts when both of the following conditions are satisfied:
- The instance's status is RUNNING.
- If health checking is enabled, the health check returns HEALTHY.
Note that for the health check to return healthy, the Updater waits for the following conditions:
- Waits for up to the period of time specified by the MIG's autohealingPolicies.initialDelaySec value for the health check to return HEALTHY.
- Then, waits for the period of time specified by minReadySec.
If the health check doesn't return HEALTHY within the initialDelaySec, then the Updater declares the VM instance unhealthy and potentially stops the update. While the VM instance is waiting for verification during the initialDelaySec and minReadySec periods, the instance's currentAction is VERIFYING. However, the underlying VM instance status remains RUNNING.
If there are no health checks for the group, then the timer starts when the instance's status is RUNNING.
The maximum value for the minReadySec field is 3600 seconds (one hour).
The following diagram shows how the target size, maximum unavailable, maximum surge, and minimum wait time options affect your instances. For more information about target size, see Canary updates.
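For example, a sketch of setting a 5-minute minimum wait time through the API, with placeholder names (the 300-second value is an arbitrary example):

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instanceGroupManagers/INSTANCE_GROUP_NAME

{
  "instanceTemplate": "global/instanceTemplates/NEW_TEMPLATE",
  "updatePolicy": {
    "type": "PROACTIVE",
    "minReadySec": 300
  }
}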
Minimal action
Use the minimal action option to minimize disruption as much as possible or to apply a more disruptive action than is strictly necessary. For example, Compute Engine does not need to restart a VM to change its metadata. But if your application reads instance metadata only when a VM is restarted, you can set the minimal action to restart in order to pick up metadata changes.
If your update requires a more disruptive action than you set with this flag, Compute Engine performs the necessary action to execute the update. For example, if you specify a restart as the minimal action, the Updater attempts to restart instances to apply the update. But if you are changing the OS image, which can't be done by restarting the instance, then the Updater replaces the instances in the group with new VM instances.
For more information, including valid options, see Controlling the disruption level during a rolling update.
Most disruptive allowed action
Use the most disruptive allowed action option to prevent an update if it requires more disruption than you can afford. If an update cannot be completed due to this setting, then the update fails and your VMs keep their previous configuration.
For more information, see Controlling the disruption level during a rolling update.
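As a sketch, assuming a gcloud CLI version that supports these flags, you can set both limits when starting a rolling update (placeholder names):

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=NEW_TEMPLATE \
    --minimal-action=refresh \
    --most-disruptive-allowed-action=restart \
    [--zone=ZONE | --region=REGION]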
Replacement method
By default, when you proactively update a MIG, the group deletes your VM instances and replaces them with new instances that have new names. If you need to preserve the names of your VM instances, use the replacementMethod option.
Preserving existing instance names might be useful if you have applications or systems that rely on specific instance names. For example, some applications, like Memcached, rely on instance names because they don't have a discovery service; as a consequence, whenever an instance name changes, the application loses its connection to that specific VM.
To preserve instance names, set the replacement method to RECREATE instead of SUBSTITUTE if you update the MIG with the gcloud CLI or the Compute Engine API. Alternatively, if you update the MIG from the Cloud console, select the checkbox Keep instance names when replacing instances.
Valid replacementMethod values are:
- SUBSTITUTE (default). Replaces VM instances faster during updates because new VMs are created before old ones are shut down. However, instance names aren't preserved because the names are still in use by the old instances.
- RECREATE. Preserves instance names through an update. Compute Engine releases the instance name as the old VM is shut down. Then Compute Engine creates a new instance using that same name. To use this method, you must set maxSurge to 0.
For more information, see Preserving instance names.
Additional update examples
Here are some command-line examples with common configuration options.
Perform a rolling update of all VM instances, but create up to 5 new instances above the target size at a time

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=NEW_TEMPLATE \
    --max-surge=5 \
    [--zone=ZONE | --region=REGION]

Perform a rolling update with at most 3 unavailable machines and a minimum wait time of 3 minutes before marking a new instance as available

gcloud beta compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=NEW_TEMPLATE \
    --min-ready=3m \
    --max-unavailable=3 \
    [--zone=ZONE | --region=REGION]

Perform a rolling update of all VM instances, but create up to 10% new instances above the target size at a time
For example, if you have 1,000 instances and you run the following command, the Updater creates up to 100 instances before it starts to remove instances that are running the previous instance template.

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=NEW_TEMPLATE \
    --max-surge=10% \
    [--zone=ZONE | --region=REGION]
Canary updates
A canary update is an update that is applied to a subset of instances in the group. With a canary update, you can test new features or upgrades on a random subset of instances, instead of rolling out a potentially disruptive update to all your instances. If an update is not going well, you only need to roll back the subset of instances, minimizing the disruption for your users.
A canary update is the same as a standard rolling update, except that the number of instances that should be updated is less than the total size of the instance group. As with a standard rolling update, you can configure additional options to control the level of disruption to your service.
Starting a canary update
To initiate a canary update, specify up to two instance template versions: typically a new instance template to canary and the current instance template for the rest of the instances. For example, you can specify that 20% of your instances be created based on NEW_INSTANCE_TEMPLATE while the rest of the instances continue to run on OLD_INSTANCE_TEMPLATE. You can't specify more than two instance templates at a time.
You must always specify a target size (targetSize) for the canary version. You can't initiate a canary update if you omit the target size for the canary version. For example, if you specify that 10% of instances should be used for canarying, the remaining 90% are untouched and use the current instance template.
Console
- In the Cloud console, go to the Instance groups page.
Go to Instance groups
- Select the managed instance group that you want to update.
- Click Update VMs.
- Click Add a second template and choose the new instance template to canary.
- Under Target size, enter the percentage or fixed number of instances that you want to use to canary the new instance template.
- If you want, you can configure other update options.
- Click Update VMs to start the update.
gcloud
Use the rolling-action start-update command. Provide both the current template and the new template to explicitly express how many instances should use each template:

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=CURRENT_INSTANCE_TEMPLATE_NAME \
    --canary-version=template=NEW_TEMPLATE,target-size=SIZE \
    [--zone=ZONE | --region=REGION]
Replace the following:
- INSTANCE_GROUP_NAME: the instance group name.
- CURRENT_INSTANCE_TEMPLATE_NAME: the instance template that the instance group is currently running.
- NEW_TEMPLATE: the new template that you want to canary.
- SIZE: the number or percentage of instances that you want to apply this update to. You must apply the target-size property to the --canary-version template. You can only set a percentage if the group contains 10 or more instances.
- ZONE: for zonal MIGs, provide the zone.
- REGION: for regional MIGs, provide the region.
For example, the following command performs a canary update that rolls out example-template-B to 10% of instances in the group:

gcloud compute instance-groups managed rolling-action start-update example-mig \
    --version=template=example-template-A \
    --canary-version=template=example-template-B,target-size=10%
API
Call the patch method on a regional or zonal MIG resource. In the request body, include both the current instance template and the new instance template that you want to canary. For example:

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instanceGroupManagers/INSTANCE_GROUP_NAME

{
  "versions": [
    {
      "instanceTemplate": "global/instanceTemplates/NEW_TEMPLATE",
      "targetSize": {
        "[percent|fixed]": NUMBER|PERCENTAGE  # Use `fixed` for a specific number of instances
      }
    },
    {
      "instanceTemplate": "global/instanceTemplates/CURRENT_INSTANCE_TEMPLATE_NAME"
    }
  ]
}
Replace the following:
- NEW_TEMPLATE: the name of the new template that you want to canary.
- NUMBER|PERCENTAGE: the fixed number or percentage of instances to canary this update to. You can only set a percentage if the group contains 10 or more instances. Otherwise, provide a fixed number.
- CURRENT_INSTANCE_TEMPLATE_NAME: the name of the current instance template that the group is running.
For more options, see Configuring options for your update.
After you make a request, you can monitor the update to know when the update has finished.
Rolling forward a canary update
After running a canary update, you can decide whether you want to commit the update to 100% of the MIG or roll it back.
Console
- In the Cloud console, go to the Instance groups page.
Go to Instance groups
- Select the managed instance group that you want to update.
- Click Update VMs.
- Under New template, update the target size of the canary template to 100% to roll forward the template to all your instances. Alternatively, you can replace the primary template with the canary template and remove the second template field.
- Click Update VMs to start the update.
gcloud
If you want to commit to your canary update, roll forward the update by issuing another rolling-action start-update command, but set only the --version flag and omit the --canary-version flag.

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=NEW_TEMPLATE \
    [--zone=ZONE | --region=REGION]
API
Call the patch method on a regional or zonal MIG resource. In the request body, specify the new instance template as a version and omit the earlier instance template from your request body. Omit the target size specification to roll out the update to 100% of instances. For example:

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instanceGroupManagers/INSTANCE_GROUP_NAME

{
  "versions": [
    {
      "instanceTemplate": "global/instanceTemplates/NEW_TEMPLATE"  # New instance template
    }
  ]
}
Monitoring updates
After you initiate an update, it can take some time for the new configuration to finish rolling out to all affected instances. You can monitor the progress of an update by checking the following:
- To verify whether all VMs have reached their target template version, view the group status.
- To verify that specific VMs have reached their target template version, view current actions on instances.
- For stateful MIGs, see also Verifying whether per-instance configurations have been applied.
Group status
At the group level, Compute Engine populates a read-only field called status that contains a versionTarget.isReached flag and an isStable flag. You can use the gcloud CLI or the Compute Engine API to access these flags. You can also use the Cloud console to see the current and planned number of instances being updated.
Console
You can monitor a rolling update for a group by going to the group's details page.
- In the Cloud console, go to the Instance groups page.
Go to Instance groups
- Select the managed instance group that you want to monitor. The overview page for the group shows the template that each instance is using.
- To view the details, click the Details tab.
- Under Instance template, you can see the current and target number of instances for each instance template, as well as the update parameters.
gcloud
Use the describe command.

gcloud compute instance-groups managed describe INSTANCE_GROUP_NAME \
    [--zone=ZONE | --region=REGION]
You can also use the gcloud compute instance-groups managed wait-until command with the --version-target-reached flag to wait until status.versionTarget.isReached is set to true for the group:

gcloud compute instance-groups managed wait-until INSTANCE_GROUP_NAME \
    --version-target-reached \
    [--zone=ZONE | --region=REGION]
API
Call the get method on a regional or zonal MIG resource.

GET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instanceGroupManagers/INSTANCE_GROUP_NAME
Verifying whether an update rollout is complete
Verify whether the rollout of an update is complete by checking the value of the MIG's status.versionTarget.isReached field:
- status.versionTarget.isReached set to true indicates that all VM instances have been or are being created using the target version.
- status.versionTarget.isReached set to false indicates that at least one VM is not yet using the target version. Or, in the case of a canary update, false indicates that the number of VMs using a target version doesn't match its target size.
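For example, one way to read just this flag from the command line is the standard gcloud --format projection; the field path below is illustrative:

gcloud compute instance-groups managed describe INSTANCE_GROUP_NAME \
    --format="value(status.versionTarget.isReached)" \
    [--zone=ZONE | --region=REGION]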
Checking whether a managed instance group is stable
Verify that all instances in a managed instance group are running and healthy by checking the value of the group's status.isStable field.
- status.isStable set to false indicates that changes are active, pending, or that the MIG itself is being modified.
- status.isStable set to true indicates the following:
  - None of the instances in the MIG are undergoing any type of change and the currentAction for all instances is NONE.
  - No changes are pending for instances in the MIG.
  - The MIG itself is not being modified.
Remember that the stability of a MIG depends on numerous factors because a MIG can be modified in numerous ways. For example:
- You make a request to roll out a new instance template.
- You make a request to create, delete, resize, or update instances in the MIG.
- An autoscaler requests to resize the MIG.
- An autohealer is replacing one or more unhealthy instances in the MIG.
- In a regional MIG, some of the instances are being redistributed.
As soon as all actions are finished, status.isStable is set to true again for that MIG.
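For example, to block a script until the group reports stable, you can use the wait-until command with the --stable flag:

gcloud compute instance-groups managed wait-until INSTANCE_GROUP_NAME \
    --stable \
    [--zone=ZONE | --region=REGION]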
Current actions on instances
Use the Google Cloud CLI or the Compute Engine API to see details about the instances in a managed instance group. Details include instance status and current actions that the group is performing on its instances.
gcloud
All managed instances
To check the status and current actions on all instances in the group, use the list-instances command.

gcloud compute instance-groups managed list-instances INSTANCE_GROUP_NAME \
    [--zone=ZONE | --region=REGION]

The command returns a list of instances in the group, including their status, current actions, and other details:

NAME               ZONE           STATUS    HEALTH_STATE  ACTION    INSTANCE_TEMPLATE  VERSION_NAME  LAST_ERROR
vm-instances-9pk4  us-central1-f                          CREATING  my-new-template
vm-instances-h2r1  us-central1-f  STOPPING                DELETING  my-old-template
vm-instances-j1h8  us-central1-f  RUNNING                 NONE      my-old-template
vm-instances-ngod  us-central1-f  RUNNING                 NONE      my-old-template

The HEALTH_STATE column appears empty unless you have set up health checking.
A specific managed instance
To check the status and current action for a specific instance in the group, use the describe-instance command.

gcloud compute instance-groups managed describe-instance INSTANCE_GROUP_NAME \
    --instance INSTANCE_NAME \
    [--zone=ZONE | --region=REGION]

The command returns details about the instance, including instance status, current action, and, for stateful MIGs, preserved state:

currentAction: NONE
id: '6789072894767812345'
instance: https://www.googleapis.com/compute/v1/projects/example-project/zones/us-central1-a/instances/example-mig-hz41
instanceStatus: RUNNING
name: example-mig-hz41
preservedStateFromConfig:
  metadata:
    example-key: example-value
preservedStateFromPolicy:
  disks:
    persistent-disk-0:
      autoDelete: NEVER
      mode: READ_WRITE
      source: https://www.googleapis.com/compute/v1/projects/example-project/zones/us-central1-a/disks/example-mig-hz41
version:
  instanceTemplate: https://www.googleapis.com/compute/v1/projects/example-project/global/instanceTemplates/example-template
API
Call the listManagedInstances method on a regional or zonal MIG resource. For example, to see details about the instances in a zonal MIG, you can make the following request:

GET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instanceGroupManagers/INSTANCE_GROUP_NAME/listManagedInstances

The call returns a list of instances for the MIG, including each instance's instanceStatus and currentAction.

{
  "managedInstances": [
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/example-project/zones/us-central1-f/instances/vm-instances-prvp",
      "id": "5317605642920955957",
      "instanceStatus": "RUNNING",
      "instanceTemplate": "https://www.googleapis.com/compute/v1/projects/example-project/global/instanceTemplates/example-template",
      "currentAction": "REFRESHING"
    },
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/example-project/zones/us-central1-f/instances/vm-instances-pz5j",
      "currentAction": "DELETING"
    },
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/example-project/zones/us-central1-f/instances/vm-instances-w2t5",
      "id": "2800161036826218547",
      "instanceStatus": "RUNNING",
      "instanceTemplate": "https://www.googleapis.com/compute/v1/projects/example-project/global/instanceTemplates/example-template",
      "currentAction": "REFRESHING"
    }
  ]
}
To see a list of valid instanceStatus field values, see VM instance life cycle.
If an instance is undergoing some type of change, the managed instance group sets the instance's currentAction field to one of the following actions to help you track the progress of the change. Otherwise, the currentAction field is set to NONE.
Possible currentAction values are:
- ABANDONING. The instance is being removed from the MIG.
- CREATING. The instance is in the process of being created.
- CREATING_WITHOUT_RETRIES. The instance is being created without retries; if the instance isn't created on the first attempt, the MIG doesn't try to replace the instance again.
- DELETING. The instance is in the process of being deleted.
- RECREATING. The instance is being replaced.
- REFRESHING. The instance is being removed from its current target pools and re-added to the list of current target pools (this list might be the same as or different from the existing target pools).
- RESTARTING. The instance is in the process of being restarted using the stop and start methods.
- VERIFYING. The instance has been created and is in the process of being verified.
- NONE. No actions are being performed on the instance.
Rolling back an update
There is no explicit command for rolling back an update to a previous version, but if you decide to roll back an update (either a fully committed update or a canary update), you can do so by making a new update request and passing in the instance template that you want to roll back to.
gcloud
For example, the following gcloud CLI command rolls back an update as fast as possible. Replace OLD_INSTANCE_TEMPLATE_NAME with the name of the instance template that you want to roll back to.

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=OLD_INSTANCE_TEMPLATE_NAME \
    --max-unavailable=100% \
    [--zone=ZONE | --region=REGION]
API
Call the patch method on a regional or zonal MIG resource.
In the request body, specify the earlier instance template as a version:

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instanceGroupManagers/INSTANCE_GROUP_NAME

{
  "updatePolicy": {
    "maxUnavailable": {
      "percent": 100
    }
  },
  "versions": [
    {
      "instanceTemplate": "global/instanceTemplates/OLD_INSTANCE_TEMPLATE_NAME"  # Old instance template
    }
  ]
}

For a regional MIG with fewer than 10 instances, you must use a fixed value for maxUnavailable and set the value to the number of instances in the group.
The Updater treats a rollback request the same as a regular update request, so you can specify additional update options.
Stopping an update
There is no explicit method or command to stop an update. However, you can change an update from proactive to opportunistic, and if the group is not being resized by other services such as an autoscaler, the change to opportunistic effectively stops the update.
To change an update from proactive to opportunistic by using the gcloud CLI, run the following command:

gcloud compute instance-groups managed rolling-action stop-proactive-update INSTANCE_GROUP_NAME \
    [--zone=ZONE | --region=REGION]
To stop the update completely after converting it from proactive to opportunistic, follow these steps:
- Make a request to determine how many instances have been updated:

gcloud compute instance-groups managed list-instances INSTANCE_GROUP_NAME \
    [--zone=ZONE | --region=REGION]

The gcloud CLI returns a response that includes a list of instances in the group and their current statuses:

NAME               ZONE           STATUS   HEALTH_STATE  ACTION    INSTANCE_TEMPLATE     VERSION_NAME  LAST_ERROR
vm-instances-9pk4  us-central1-f  RUNNING  HEALTHY       NONE      example-new-template
vm-instances-j1h8  us-central1-f  RUNNING  HEALTHY       NONE      example-old-template
vm-instances-ngod  us-central1-f  STAGING  UNKNOWN       CREATING  example-new-template

In this example, two instances have already been updated.
- Next, make a request to perform a new update, but pass in the number of instances that have already been updated as the target size:

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version template=OLD_INSTANCE_TEMPLATE_NAME \
    --canary-version template=NEW_INSTANCE_TEMPLATE_NAME,target-size=2 \
    [--zone=ZONE | --region=REGION]

To the Updater, this update appears complete, so no other instances are updated, effectively stopping the update.
Controlling the speed of a rolling update
By default, when you make an update request, the Updater performs the update as fast as possible. If you aren't sure that you want to apply an update fully or are tentatively testing your changes, you can control the speed of the update by using the following methods:
- Start a canary update rather than a full update.
- Set a large minReadySec value. Setting this value causes the Updater to wait this number of seconds before considering the instance successfully updated and proceeding to the next instance.
- Enable health checking to cause the Updater to wait for your application to start and report a healthy signal before considering the instance successfully updated and proceeding to the next instance.
- Set low maxUnavailable and maxSurge values. This ensures that only a minimal number of instances are updated at a time.
- Selectively update instances in a MIG instead of using an automated update.
You can also use a combination of these methods to control the rate of your update, as in the sketch that follows.
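For example, a conservative rollout on a zonal MIG might combine a small canary with low limits and a long ready delay; this is a sketch with placeholder names and arbitrary values:

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=CURRENT_INSTANCE_TEMPLATE_NAME \
    --canary-version=template=NEW_TEMPLATE,target-size=10% \
    --max-unavailable=1 \
    --max-surge=1 \
    --min-ready=5m \
    --zone=ZONE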
Controlling the disruption level during a rolling update
Depending on the nature of an update, it might disrupt an instance's life cycle state. For example, changing an instance's boot disk requires replacing the instance. You can control the level of disruption during a rolling update by setting the following options:
- Minimal action: Use this option to minimize disruption as much as possible or to apply a more disruptive action than is necessary.
  - To limit disruption as much as possible, set the minimal action to REFRESH. If your update requires a more disruptive action, Compute Engine performs the necessary action to execute the update.
  - To apply a more disruptive action than is strictly necessary, set the minimal action to RESTART or REPLACE. For example, Compute Engine does not need to restart a VM to change its metadata. But if your application reads instance metadata only when a VM is restarted, you can set the minimal action to RESTART in order to pick up metadata changes.
- Most disruptive allowed action: Use this option to prevent an update if it requires more disruption than you can afford. If your update requires a more disruptive action than you set with this flag, the update request fails. For example, if you set the most disruptive allowed action to RESTART, then an attempt to update the boot disk image fails because that update requires instance replacement, a more disruptive action than a restart.
Both of these options accept the following values:
Value | Description | Which instance properties can be updated?
---|---|---
REFRESH | Do not stop the instance. | Additional disks, instance metadata, labels, tags
RESTART | Stop the instance and start it again. | Additional disks, instance metadata, labels, tags, machine type
REPLACE | (Default.) Replace the instance according to the replacement method option. | All instance properties stored in the instance template or per-instance configuration
The most disruptive allowed action can't be less disruptive than the minimal action.
When you automatically roll out updates, the following defaults apply:
- The default minimal action is REPLACE. If you want to prevent unnecessary disruption, set the minimal action to be less disruptive.
- The default most disruptive allowed action is REPLACE. If you cannot tolerate such disruption, set the most disruptive allowed action to be less disruptive.
You can change the default behavior by using the Compute Engine API to set the updatePolicy.minimalAction and updatePolicy.mostDisruptiveAllowedAction fields in your MIG resource, for example, by calling the regionInstanceGroupManagers.patch method. Alternatively, you can select the specific Actions allowed to update VMs when you update your MIG from the Cloud console. To view the current settings, see Getting a MIG's properties.
An update fails if it requires a more disruptive action than you allowed. If this happens, you can try the update again with a more disruptive allowed action, or you can selectively update the instance. Compute Engine performs best-effort validation to check whether instances can be updated within the specified disruption limit, but due to concurrent changes in the system, the situation can change after the update starts. If an operation on a particular instance fails, list instance errors to see the error.
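For illustration, a minimal sketch of a patch request (placeholder names) that limits a rollout to in-place refreshes while capping disruption at a restart:

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instanceGroupManagers/INSTANCE_GROUP_NAME

{
  "instanceTemplate": "global/instanceTemplates/NEW_TEMPLATE",
  "updatePolicy": {
    "type": "PROACTIVE",
    "minimalAction": "REFRESH",
    "mostDisruptiveAllowedAction": "RESTART"
  }
}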
Performing a rolling replace or restart
A rolling restart stops and restarts all instances, while a rolling replace replaces instances according to the replacement method option. A rolling restart or replace does not change anything else about the group, including the instance template.
There are many reasons why you might want a rolling restart or a rolling replace. For example, you might want to restart or replace your VM instances from time to time for one of the following reasons:
- Clear up memory leaks.
- Restart your application so that it can run from a fresh machine.
- Apply a periodic replace as a best practice to test your VMs.
- Update your VM's operating system image or rerun startup scripts to update your software.
Use the Cloud console, the Google Cloud CLI, or the Compute Engine API to perform a restart or replace.
Console
- In the Cloud console, go to the Instance groups page.
Go to Instance groups
- Select the managed instance group that has the VMs that you want to restart or replace.
- Click Restart/replace VMs.
- Under Operation, select Restart or Replace.
  - If you select Restart, toggle the following parameters:
    - Maximum unavailable instances
    - Minimum wait time
  - If you select Replace, do the following:
    - Choose whether you want to keep the instance names when replacing instances.
    - Toggle the following parameters:
      - Temporary additional instances
      - Maximum unavailable instances
      - Minimum wait time
- To start the operation, click Restart VMs or Replace VMs.
gcloud
Use the restart command or the replace command.
The following command replaces all instances in the MIG, one at a time:

gcloud compute instance-groups managed rolling-action replace INSTANCE_GROUP_NAME

The following command restarts each instance, one at a time:

gcloud compute instance-groups managed rolling-action restart INSTANCE_GROUP_NAME

You can further customize each of these commands with the same options available for updates (for example, maxSurge and maxUnavailable).
API
Call the patch method on a regional or zonal MIG resource.
In the updatePolicy.minimalAction field, specify either RESTART or REPLACE. In both cases, you must also provide the versions.instanceTemplate and versions.name properties to trigger the action.
For example, for a zonal MIG, the following request shows the minimal configuration necessary to automatically restart 100% of the instances.

PATCH https://compute.googleapis.com/compute/v1/projects/example-project/zones/ZONE/instanceGroupManagers/INSTANCE_GROUP_NAME

{
  "updatePolicy": {
    "minimalAction": "RESTART",
    "type": "PROACTIVE"
  },
  "versions": [
    {
      "instanceTemplate": "global/instanceTemplates/CURRENT_INSTANCE_TEMPLATE_NAME",
      "name": "v2"
    }
  ]
}
Additional replace/restart examples
Perform a rolling restart of all VMs, two at a time
This command restarts all VMs in the group, two at a time. Notice that no new instance template is specified.

gcloud compute instance-groups managed rolling-action restart INSTANCE_GROUP_NAME \
    --max-unavailable=2 \
    [--zone=ZONE | --region=REGION]

Perform a rolling restart of all VMs as quickly as possible

gcloud compute instance-groups managed rolling-action restart INSTANCE_GROUP_NAME \
    --max-unavailable=100% \
    [--zone=ZONE | --region=REGION]

Perform a rolling replace of all VMs as quickly as possible

gcloud compute instance-groups managed rolling-action replace INSTANCE_GROUP_NAME \
    --max-unavailable=100% \
    [--zone=ZONE | --region=REGION]
Preserving instance names
If you need to preserve the names of your VM instances across an update, set the replacementMethod to RECREATE. You must also set maxUnavailable to be greater than 0 and maxSurge to be 0. Recreating instances instead of replacing them causes your update to take longer to complete, but the updated instances keep their names.
If you do not specify a replacement method, the MIG's current updatePolicy.replacementMethod value is used. If it's not set, then the default value of SUBSTITUTE is used, which replaces VM instances with new instances that have randomly generated names.
gcloud
When issuing a rolling-action command, include the --replacement-method=recreate flag.

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --replacement-method=recreate \
    --version=template=NEW_TEMPLATE \
    --max-unavailable=5 \
    [--zone=ZONE | --region=REGION]
API
Call the patch method on a regional or zonal MIG resource. In the request body, include the updatePolicy.replacementMethod field:

PATCH /compute/v1/projects/PROJECT_ID/regions/REGION/instanceGroupManagers/INSTANCE_GROUP_NAME

{
  "updatePolicy": {
    "type": "PROACTIVE",
    "maxUnavailable": { "fixed": 5 },
    "replacementMethod": "RECREATE"
  },
  "versions": [
    {
      "instanceTemplate": "global/instanceTemplates/NEW_TEMPLATE"
    }
  ]
}

After you make a request, you can monitor the update to know when the update has finished.
Updating a regional managed instance group
A regional MIG contains VM instances that are spread across multiple zones within a region, as opposed to a zonal MIG, which only contains instances in one zone. Regional MIGs let you distribute your instances across more than one zone to improve your application's availability and to support extreme cases where one zone fails or an entire group of instances stops responding.
Performing an update on a regional MIG is the same as performing an update on a zonal MIG, with a few exceptions described below. When you initiate an update to a regional MIG, the Updater always updates instances proportionally and evenly across each zone. You cannot choose which instances in which zones are updated first, nor can you choose to update instances in only one zone.
Differences between updating regional versus zonal MIGs
Regional MIGs have the following default update values:
- maxUnavailable=NUMBER_OF_ZONES
- maxSurge=NUMBER_OF_ZONES
NUMBER_OF_ZONES is the number of zones associated with the regional MIG. By default, the number of zones for a regional MIG is 3, but you might select a different number.
If you are using fixed numbers when specifying an update, the fixed number must be either 0 or equal to or greater than the number of zones associated with the regional MIG. For example, if the group is distributed across three zones, then you can't set maxSurge to 1 or to 2 because the Updater has to create an additional instance in each of the three zones.
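For example, for a regional MIG distributed across three zones, a sketch of a valid fixed configuration (placeholder names) sets both values to at least the number of zones:

gcloud compute instance-groups managed rolling-action start-update INSTANCE_GROUP_NAME \
    --version=template=NEW_TEMPLATE \
    --max-surge=3 \
    --max-unavailable=3 \
    --region=REGION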
Using a fixed number or a percentage in update requests
If you specify a fixed number in your update requests, the number you specify is divided by the number of zones in the regional MIG and distributed evenly. For example, if you specify maxSurge=10, then the Updater divides 10 across the number of zones in the region and creates instances based on that number. If the number of instances does not divide evenly across zones, the Updater adds the remaining instances to a random zone. So, for 10 instances across 3 zones, two of the zones get 3 instances and one zone gets 4 instances. The same logic is applied to the maxUnavailable and targetSize parameters for canary updates.
You can specify a percentage only if your MIG contains 10 or more VM instances. Percentages are handled slightly differently depending on the situation:
- If you specify a percentage of VM instances for a canary update, the Updater attempts to distribute the instances evenly across zones. The remainder is rounded either up or down in each zone, but the total difference isn't more than 1 VM instance per group. For example, for a MIG with 10 instances and a target size percentage of 25%, the update is rolled out to two to three VM instances.
- If you specify a percentage for update options like maxSurge and maxUnavailable, the percentages are rounded independently per zone.
What's next
- Learn about Viewing info about MIGs and managed instances.
- Learn about Creating instance templates.
- Learn how to use image families and a rolling replace to update the OS image on all VMs in a MIG.
Source: https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups