@swamp/gcp/ml
v2026.04.23.1
Google Cloud ML infrastructure models
Repository
https://github.com/systeminit/swamp-extensions
Labels
gcp, google-cloud, ml, cloud, infrastructure
Contents
Quality score
Verified by Swamp. How well-documented and verifiable this extension is.
Grade A
- Has README or module doc (2/2 earned)
- README has a code example (1/1 earned)
- README is substantive (1/1 earned)
- Most symbols documented (1/1 earned)
- No slow types (1/1 earned)
- Has description (1/1 earned)
- At least one platform tag (or universal) (1/1 earned)
- Two or more platform tags (or universal) (1/1 earned)
- License declared (2/2 earned)
- Verified public repository (2/2 earned)
Install
$ swamp extension pull @swamp/gcp/ml

@swamp/gcp/ml/jobs v2026.04.23.1 (jobs.ts)
Global Arguments
| Argument | Type | Description |
|---|---|---|
| name | string | Instance name for this resource (used as the unique identifier in the factory pattern) |
| jobId? | string | Required. The user-specified id of the job. |
| labels? | record | Optional. One or more labels that you can add, to organize your jobs. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. |
| predictionInput? | object | Represents input parameters for a prediction job. |
| predictionOutput? | object | Represents results of a prediction job. |
| trainingInput? | object | Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the --config command-line argument. For details, see the guide to [submitting a training job](/ai-platform/training/docs/training-jobs). |
| location? | string | The location for this resource (e.g., 'us', 'us-central1', 'europe-west1') |
create
Create a job
| Argument | Type | Description |
|---|---|---|
| waitForReady? | boolean | Wait for the resource to reach a ready state after creation (default: true) |
get
Get a job
| Argument | Type | Description |
|---|---|---|
| identifier | string | The name of the job |
update
Update job attributes
| Argument | Type | Description |
|---|---|---|
| waitForReady? | boolean | Wait for the resource to reach a ready state after update (default: true) |
sync
Sync job state from GCP
cancel
Cancel the job
Resources
state(infinite)— Represents a training or prediction job.
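The jobs actions above can be sketched as follows. This is a minimal illustration only: the argument names (`name`, `jobId`, `labels`, `trainingInput`, `location`) come from the tables above, but the factory call shapes shown in comments are assumptions, not a documented API, and all values are hypothetical.

```typescript
// Argument shape assembled from the documented global arguments for jobs.
interface JobArgs {
  name: string;                            // unique instance name (factory pattern)
  jobId?: string;                          // user-specified job id
  labels?: Record<string, string>;         // key/value labels for organizing jobs
  trainingInput?: Record<string, unknown>; // training job input parameters
  location?: string;                       // e.g. "us-central1"
}

// Hypothetical values throughout.
const trainingJob: JobArgs = {
  name: "census-training",
  jobId: "census_training_20260423",
  labels: { team: "ml-platform", env: "dev" },
  trainingInput: {
    region: "us-central1",
    runtimeVersion: "2.11",
    pythonVersion: "3.7",
  },
  location: "us-central1",
};

// Assumed call shapes, mirroring the documented actions:
//   jobs.create(trainingJob, { waitForReady: true });
//   jobs.get({ identifier: trainingJob.jobId });
//   jobs.cancel({ identifier: trainingJob.jobId });
console.log(trainingJob.jobId); // → census_training_20260423
```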
@swamp/gcp/ml/locations v2026.04.23.1 (locations.ts)
Global Arguments
| Argument | Type | Description |
|---|---|---|
| name | string | Instance name for this resource (used as the unique identifier in the factory pattern) |
| location? | string | The location for this resource (e.g., 'us', 'us-central1', 'europe-west1') |
get
Get a location
| Argument | Type | Description |
|---|---|---|
| identifier | string | The name of the location |
sync
Sync locations state from GCP
Resources
state(infinite)— Get the complete list of CMLE capabilities in a location, along with their lo...
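A small sketch of the locations module, which only reads state. The argument names match the table above; the `get`/`sync` call shapes in comments are assumptions and the identifier value is illustrative.

```typescript
// Argument shape from the documented global arguments for locations.
interface LocationArgs {
  name: string;      // unique instance name (factory pattern)
  location?: string; // e.g. "us-central1"
}

// Hypothetical instance used to look up capabilities in one region.
const capsLookup: LocationArgs = {
  name: "caps-us-central1",
  location: "us-central1",
};

// Assumed call shapes:
//   locations.get({ identifier: "us-central1" });
//   locations.sync();
console.log(capsLookup.location);
```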
@swamp/gcp/ml/models v2026.04.23.1 (models.ts)
Global Arguments
| Argument | Type | Description |
|---|---|---|
| defaultVersion? | object | Represents a version of the model. Each version is a trained model deployed in the cloud, ready to handle prediction requests. A model can have multiple versions. You can get information about all of the versions of a given model by calling projects.models.versions.list. |
| description? | string | Optional. The description specified for the model when it was created. |
| labels? | record | Optional. One or more labels that you can add, to organize your models. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Note that this field is not updatable for mls1* models. |
| name? | string | Required. The name specified for the model when it was created. The model name must be unique within the project it is created in. |
| onlinePredictionConsoleLogging? | boolean | Optional. If true, online prediction nodes send `stderr` and `stdout` streams to Cloud Logging. These can be more verbose than the standard access logs (see `onlinePredictionLogging`) and can incur higher cost. However, they are helpful for debugging. Note that [logs may incur a cost](/stackdriver/pricing), especially if your project receives prediction requests at a high QPS. Estimate your costs before enabling this option. Default is false. |
| onlinePredictionLogging? | boolean | Optional. If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each request. Note that [logs may incur a cost](/stackdriver/pricing), especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option. Default is false. |
| regions? | array | Optional. The list of regions where the model is going to be deployed. Only one region per model is supported. Defaults to 'us-central1' if nothing is set. See the available regions for AI Platform services. Note: * No matter where a model is deployed, it can always be accessed by users from anywhere, both for online and batch prediction. * The region for a batch prediction job is set by the region field when submitting the batch prediction job and does not take its value from this field. |
| location? | string | The location for this resource (e.g., 'us', 'us-central1', 'europe-west1') |
create
Create a model
get
Get a model
| Argument | Type | Description |
|---|---|---|
| identifier | string | The name of the model |
update
Update model attributes
delete
Delete the model
| Argument | Type | Description |
|---|---|---|
| identifier | string | The name of the model |
sync
Sync models state from GCP
Resources
state(infinite)— Represents a machine learning solution. A model can have multiple versions, e...
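The model arguments above can be put together like this. A hedged sketch: field names are taken from the table, but the `create` call shape is an assumption about the extension's API, and the model name, description, and labels are hypothetical.

```typescript
// Argument shape from the documented global arguments for models.
interface ModelArgs {
  name?: string;                            // required by the API; unique per project
  description?: string;
  labels?: Record<string, string>;
  onlinePredictionLogging?: boolean;        // access logs to Cloud Logging
  onlinePredictionConsoleLogging?: boolean; // stderr/stdout streams to Cloud Logging
  regions?: string[];                       // only one region per model is supported
  location?: string;
}

// Hypothetical model definition.
const censusModel: ModelArgs = {
  name: "census_model",
  description: "Income classifier",
  regions: ["us-central1"],                 // defaults to us-central1 if unset
  onlinePredictionLogging: true,            // note: logs may incur a cost
  labels: { env: "dev" },
};

// Assumed call shape:
//   models.create(censusModel);
console.log(censusModel.regions![0]);
```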
@swamp/gcp/ml/models-versions v2026.04.23.1 (models_versions.ts)
Global Arguments
| Argument | Type | Description |
|---|---|---|
| acceleratorConfig? | object | Represents a hardware accelerator request config. Note that the AcceleratorConfig can be used in both Jobs and Versions. Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and [accelerators for online prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| autoScaling? | object | Options for automatically scaling a model. |
| container? | object | Specification of a custom container for serving predictions. This message is a subset of the [Kubernetes Container v1 core specification](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#container-v1-core). |
| deploymentUri? | string | The Cloud Storage URI of a directory containing trained model artifacts to be used to create the model version. See the [guide to deploying models](/ai-platform/prediction/docs/deploying-models) for more information. The total number of files under this directory must not exceed 1000. During projects.models.versions.create, AI Platform Prediction copies all files from the specified directory to a location managed by the service. From then on, AI Platform Prediction uses these copies of the model |
| description? | string | Optional. The description specified for the version when it was created. |
| explanationConfig? | object | Message holding configuration options for explaining model predictions. There are three feature attribution methods supported for TensorFlow models: integrated gradients, sampled Shapley, and XRAI. [Learn more about feature attributions.](/ai-platform/prediction/docs/ai-explanations/overview) |
| framework? | enum | Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`, `XGBOOST`. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version of the model to 1.4 or greater. Do **not** specify a framework if you're deploying a [custom prediction routine](/ai-platform/prediction/docs/custom-pred |
| labels? | record | Optional. One or more labels that you can add, to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Note that this field is not updatable for mls1* models. |
| machineType? | string | Optional. The type of machine on which to serve the model. Currently only applies to online prediction service. To learn about valid values for this field, read [Choosing a machine type for online prediction](/ai-platform/prediction/docs/machine-types-online-prediction). If this field is not specified and you are using a [regional endpoint](/ai-platform/prediction/docs/regional-endpoints), then the machine type defaults to `n1-standard-2`. If this field is not specified and you are using the glo |
| manualScaling? | object | Options for manually scaling a model. |
| name? | string | Required. The name specified for the version when it was created. The version name must be unique within the model it is created in. |
| packageUris? | array | Optional. Cloud Storage paths (`gs://…`) of packages for [custom prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines) or [scikit-learn pipelines with custom code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code). For a custom prediction routine, one of these packages must contain your Predictor class (see [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally, include any dependencies used by your Predictor or scikit-learn pipeline uses t |
| predictionClass? | string | Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the [`packageUris` field](#Version.FIELDS.package_uris). Specify this field if and only if you are deploying a [custom prediction routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines). If you specify this field, you must set [`runtimeVersion`](#Version.FIELDS. |
| pythonVersion? | string | Required. The version of Python used in prediction. The following Python versions are available: * Python '3.7' is available when `runtime_version` is set to '1.15' or later. * Python '3.5' is available when `runtime_version` is set to a version from '1.4' to '1.14'. * Python '2.7' is available when `runtime_version` is set to '1.15' or earlier. Read more about the Python versions available for [each runtime version](/ml-engine/docs/runtime-version-list). |
| requestLoggingConfig? | object | Configuration for logging request-response pairs to a BigQuery table. Online prediction requests to a model version and the responses to these requests are converted to raw strings and saved to the specified BigQuery table. Logging is constrained by [BigQuery quotas and limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits, AI Platform Prediction does not log request-response pairs, but it continues to serve predictions. If you are using [continuous evaluation](/ml-engine/ |
| routes? | object | Specifies HTTP paths served by a custom container. AI Platform Prediction sends requests to these paths on the container; the custom container must run an HTTP server that responds to these requests with appropriate responses. Read [Custom container requirements](/ai-platform/prediction/docs/custom-container-requirements) for details on how to create your container image to meet these requirements. |
| runtimeVersion? | string | Required. The AI Platform runtime version to use for this deployment. For more information, see the [runtime version list](/ml-engine/docs/runtime-version-list) and [how to manage runtime versions](/ml-engine/docs/versioning). |
| serviceAccount? | string | Optional. Specifies the service account for resource access control. If you specify this field, then you must also specify either the `containerSpec` or the `predictionClass` field. Learn more about [using a custom service account](/ai-platform/prediction/docs/custom-service-account). |
| location? | string | The location for this resource (e.g., 'us', 'us-central1', 'europe-west1') |
create
Create a version
| Argument | Type | Description |
|---|---|---|
| waitForReady? | boolean | Wait for the resource to reach a ready state after creation (default: true) |
get
Get a version
| Argument | Type | Description |
|---|---|---|
| identifier | string | The name of the version |
update
Update version attributes
| Argument | Type | Description |
|---|---|---|
| waitForReady? | boolean | Wait for the resource to reach a ready state after update (default: true) |
delete
Delete the version
| Argument | Type | Description |
|---|---|---|
| identifier | string | The name of the version |
sync
Sync versions state from GCP
set_default
Set the default version
Resources
state(infinite)— Represents a version of the model. Each version is a trained model deployed i...
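A version deployment can be sketched from the arguments above. This is illustrative only: field names (`deploymentUri`, `framework`, `runtimeVersion`, `pythonVersion`, `machineType`) follow the table, the bucket path is hypothetical, and the `create`/`set_default` call shapes are assumed.

```typescript
// Argument shape from the documented global arguments for versions.
interface VersionArgs {
  name?: string;           // unique within the parent model
  deploymentUri?: string;  // gs:// directory with trained model artifacts
  framework?: "TENSORFLOW" | "SCIKIT_LEARN" | "XGBOOST";
  runtimeVersion?: string; // AI Platform runtime version
  pythonVersion?: string;  // must be compatible with the runtime version (see table)
  machineType?: string;    // e.g. n1-standard-2 on regional endpoints
}

// Hypothetical version definition.
const v1: VersionArgs = {
  name: "v1",
  deploymentUri: "gs://my-bucket/census/model/", // hypothetical bucket
  framework: "TENSORFLOW",
  runtimeVersion: "2.11",
  pythonVersion: "3.7",
  machineType: "n1-standard-2",
};

// Assumed call shapes:
//   versions.create(v1, { waitForReady: true });
//   versions.set_default({ identifier: "v1" });
console.log(v1.framework);
```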
@swamp/gcp/ml/studies v2026.04.23.1 (studies.ts)
Global Arguments
| Argument | Type | Description |
|---|---|---|
| name | string | Instance name for this resource (used as the unique identifier in the factory pattern) |
| studyConfig? | object | Represents configuration of a study. |
| studyId? | string | Required. The ID to use for the study, which will become the final component of the study's resource name. |
| location? | string | The location for this resource (e.g., 'us', 'us-central1', 'europe-west1') |
create
Create a study
| Argument | Type | Description |
|---|---|---|
| waitForReady? | boolean | Wait for the resource to reach a ready state after creation (default: true) |
get
Get a study
| Argument | Type | Description |
|---|---|---|
| identifier | string | The name of the study |
delete
Delete the study
| Argument | Type | Description |
|---|---|---|
| identifier | string | The name of the study |
sync
Sync studies state from GCP
Resources
state(infinite)— A message representing a Study.
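A study definition can be sketched from the arguments above. Hedged: `name`, `studyId`, and `location` are documented in the table, but the contents of `studyConfig` shown here (metrics, parameters, value specs) are an assumption modeled on Vizier-style study configuration, and all values are hypothetical.

```typescript
// Argument shape from the documented global arguments for studies.
interface StudyArgs {
  name: string;     // unique instance name (factory pattern)
  studyId?: string; // final component of the study's resource name
  studyConfig?: Record<string, unknown>;
  location?: string;
}

// Hypothetical learning-rate sweep; studyConfig fields are assumed.
const lrStudy: StudyArgs = {
  name: "lr-sweep",
  studyId: "lr_sweep_01",
  studyConfig: {
    metrics: [{ metric: "accuracy", goal: "MAXIMIZE" }],
    parameters: [
      {
        parameter: "learning_rate",
        type: "DOUBLE",
        doubleValueSpec: { minValue: 1e-4, maxValue: 1e-1 },
      },
    ],
  },
  location: "us-central1",
};

// Assumed call shape:
//   studies.create(lrStudy, { waitForReady: true });
console.log(lrStudy.studyId);
```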
@swamp/gcp/ml/studies-trials v2026.04.23.1 (studies_trials.ts)
Global Arguments
| Argument | Type | Description |
|---|---|---|
| name | string | Instance name for this resource (used as the unique identifier in the factory pattern) |
| finalMeasurement? | object | A message representing a measurement. |
| measurements? | array | A list of measurements that are strictly lexicographically ordered by their induced tuples (steps, elapsed_time). These are used for early stopping computations. |
| parameters? | array | The parameters of the trial. |
| state? | enum | The detailed state of a trial. |
| location? | string | The location for this resource (e.g., 'us', 'us-central1', 'europe-west1') |
create
Create a trial
| Argument | Type | Description |
|---|---|---|
| waitForReady? | boolean | Wait for the resource to reach a ready state after creation (default: true) |
get
Get a trial
| Argument | Type | Description |
|---|---|---|
| identifier | string | The name of the trial |
delete
Delete the trial
| Argument | Type | Description |
|---|---|---|
| identifier | string | The name of the trial |
sync
Sync trials state from GCP
add_measurement
Add a measurement to the trial
| Argument | Type | Description |
|---|---|---|
| measurement? | any | |
check_early_stopping_state
Check early stopping state
complete
Complete the trial
| Argument | Type | Description |
|---|---|---|
| finalMeasurement? | any | |
| infeasibleReason? | any | |
| trialInfeasible? | any | |
list_optimal_trials
List optimal trials
stop
Stop the trial
suggest
Suggest trials
| Argument | Type | Description |
|---|---|---|
| clientId? | any | |
| suggestionCount? | any | |
Resources
state(infinite)— A message representing a trial.
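The trial lifecycle actions above (suggest, add_measurement, complete) can be sketched as follows. Everything here is an assumption layered on the argument names in the tables: the measurement fields (`stepCount`, `metrics`) are modeled on Vizier-style measurements, and the call shapes in comments are not a documented API.

```typescript
// Arguments for suggest, per the table above (types are listed as `any`).
const suggestArgs = { clientId: "worker-0", suggestionCount: 2 };

// Hypothetical measurement shape (stepCount/metrics fields are assumed).
const measurement = {
  stepCount: "100",
  metrics: [{ metric: "accuracy", value: 0.91 }],
};

// Arguments for complete, per the table above.
const completeArgs = {
  finalMeasurement: measurement,
  trialInfeasible: false,
};

// Assumed call shapes, mirroring the documented actions:
//   trials.suggest(suggestArgs);
//   trials.add_measurement({ measurement });
//   trials.complete(completeArgs);
console.log(completeArgs.trialInfeasible);
```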
2026.04.04.1 (100.2 KB, Apr 4, 2026)
Google Cloud ML infrastructure models
Release Notes
- Updated: jobs, studies
Platforms: linux-x86_64, linux-aarch64, darwin-x86_64, darwin-aarch64
Labels: gcp, google-cloud, ml, cloud, infrastructure
2026.04.03.3 (100.1 KB, Apr 3, 2026)
Google Cloud ML infrastructure models
Release Notes
- Updated: jobs, locations, studies, studies_trials, models, models_versions
Platforms: linux-x86_64, linux-aarch64, darwin-x86_64, darwin-aarch64
Labels: gcp, google-cloud, ml, cloud, infrastructure
2026.04.03.1 (99.3 KB, Apr 3, 2026)
Google Cloud ML infrastructure models
Release Notes
- Updated: jobs, locations, studies, studies_trials, models, models_versions
Platforms: linux-x86_64, linux-aarch64, darwin-x86_64, darwin-aarch64
Labels: gcp, google-cloud, ml, cloud, infrastructure
2026.04.02.2 (99.2 KB, Apr 2, 2026)
Google Cloud ML infrastructure models
Platforms: linux-x86_64, linux-aarch64, darwin-x86_64, darwin-aarch64
Labels: gcp, google-cloud, ml, cloud, infrastructure
2026.03.27.1 (96.5 KB, Mar 27, 2026)
Google Cloud ML infrastructure models
Release Notes
- Added: jobs, locations, studies, studies_trials, models, models_versions
Platforms: linux-x86_64, linux-aarch64, darwin-x86_64, darwin-aarch64
Labels: gcp, google-cloud, ml, cloud, infrastructure