Cleanup old docs

Signed-off-by: Gabriel Adrian Samfira <gsamfira@cloudbasesolutions.com>
Gabriel Adrian Samfira 2023-07-20 10:46:22 +00:00
parent 018692ecc3
commit cc228a035b
5 changed files with 27 additions and 416 deletions


@@ -1,14 +1,14 @@
# GitHub Actions Runner Manager (garm)
# GitHub Actions Runner Manager (GARM)
[![Go Tests](https://github.com/cloudbase/garm/actions/workflows/go-tests.yml/badge.svg)](https://github.com/cloudbase/garm/actions/workflows/go-tests.yml)
Welcome to garm!
Welcome to GARM!
Garm enables you to create and automatically maintain pools of [self-hosted GitHub runners](https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners), with autoscaling that can be used inside your GitHub workflow runs.
The goal of ```garm``` is to be simple to set up, simple to configure and simple to use. It is a single binary that can run on any GNU/Linux machine, with no requirements other than the providers it uses to create runners. It is intended to be easy to deploy in any environment and can create runners in any system you can write a provider for. There is no complicated setup process and no extremely complex concepts to understand. Once set up, it's meant to stay out of your way.
The goal of ```GARM``` is to be simple to set up, simple to configure and simple to use. It is a single binary that can run on any GNU/Linux machine, with no requirements other than the providers it uses to create runners. It is intended to be easy to deploy in any environment and can create runners in any system you can write a provider for. There is no complicated setup process and no extremely complex concepts to understand. Once set up, it's meant to stay out of your way.
Garm supports creating pools on either GitHub itself or on your own deployment of [GitHub Enterprise Server](https://docs.github.com/en/enterprise-server@3.5/admin/overview/about-github-enterprise-server). For instructions on how to use ```garm``` with GHE, see the [credentials](/doc/github_credentials.md) section of the documentation.
Garm supports creating pools on either GitHub itself or on your own deployment of [GitHub Enterprise Server](https://docs.github.com/en/enterprise-server@3.5/admin/overview/about-github-enterprise-server). For instructions on how to use ```GARM``` with GHE, see the [credentials](/doc/github_credentials.md) section of the documentation.
## Join us on slack
@@ -18,7 +18,7 @@ Whether you're running into issues or just want to drop by and say "hi", feel fr
## Installing
Check out the [quickstart](/doc/quickstart.md) document for instructions on how to install ```garm```. If you'd like to build from source, check out the [building from source](/doc/building_from_source.md) document.
Check out the [quickstart](/doc/quickstart.md) document for instructions on how to install ```GARM```. If you'd like to build from source, check out the [building from source](/doc/building_from_source.md) document.
## Installing external providers
@@ -27,11 +27,11 @@ External providers are binaries that GARM calls into to create runners in a part
* [OpenStack](https://github.com/cloudbase/garm-provider-openstack)
* [Azure](https://github.com/cloudbase/garm-provider-azure)
Follow the instructions in the README of each provider to install them.
Follow the instructions in the README of each provider to install them.
## Configuration
The ```garm``` configuration is a simple ```toml```. The sample config file in [the testdata folder](/testdata/config.toml) is fairly well commented and should be enough to get you started. The configuration file is split into several sections, each of which is documented in its own page. The sections are:
The ```GARM``` configuration is a simple ```toml```. The sample config file in [the testdata folder](/testdata/config.toml) is fairly well commented and should be enough to get you started. The configuration file is split into several sections, each of which is documented in its own page. The sections are:
* [The default section](/doc/config_default.md)
* [Database](/doc/database.md)
@@ -41,11 +41,13 @@ The ```garm``` configuration is a simple ```toml```. The sample config file in [
* [JWT authentication](/doc/config_jwt_auth.md)
* [API server](/doc/config_api_server.md)
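To give a rough idea of the overall shape of the file, here is a heavily abridged sketch. The section and key names mirror the sample config as far as we can tell, so treat them as illustrative and defer to [the testdata folder](/testdata/config.toml) for the authoritative version:

```toml
# Abridged sketch -- key names are illustrative; defer to testdata/config.toml.
[default]
# URL where runners can POST status updates back to GARM.
callback_url = "https://garm.example.com/api/v1/callbacks/status"

[database]
backend = "sqlite3"
# Passphrase used to encrypt sensitive data at rest (see the sample config for requirements).
passphrase = "n8bmoGq9EMpkTFJVHkqyEBRpRRmvmVWM"
  [database.sqlite3]
  db_file = "/home/garm/garm.db"

[jwt_auth]
secret = "some-random-jwt-secret"
time_to_live = "8760h"

[apiserver]
bind = "0.0.0.0"
port = 9997
use_tls = false
```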
## Optimizing your runners
If you would like to optimize the startup time of new instances, take a look at the [performance considerations](/doc/performance_considerations.md) page.
## Write your own provider
The providers are interfaces between ```garm``` and a particular IaaS in which we spin up GitHub Runners. These providers can be either **native** or **external**. The **native** providers are written in ```Go```, and must implement [the interface defined here](https://github.com/cloudbase/garm/blob/main/runner/common/provider.go#L22-L39). **External** providers can be written in any language, as they are in the form of an external executable that ```garm``` calls into.
The providers are interfaces between ```GARM``` and a particular IaaS in which we spin up GitHub Runners. These providers can be either **native** or **external**. The **native** providers are written in ```Go```, and must implement [the interface defined here](https://github.com/cloudbase/garm/blob/main/runner/common/provider.go#L22-L39). **External** providers can be written in any language, as they are in the form of an external executable that ```GARM``` calls into.
There is currently one **native** provider for [LXD](https://linuxcontainers.org/lxd/) and two **external** providers for [Openstack and Azure](/contrib/providers.d/).


@@ -2,7 +2,7 @@
Performance is often important when running GitHub action runners with garm. This document shows some ways to improve the creation time of a GitHub action runner.
## garm specific performance considerations
## GARM specific performance considerations
### Bundle the GitHub action runner


@@ -1,6 +1,6 @@
# Provider configuration
GARM was designed to be extensible. Providers can be written either as built-in plugins or as external executables. The built-in plugins are written in Go, and they are compiled into the ```garm``` binary. External providers are executables that implement the needed interface to create/delete/list compute systems that are used by ```garm``` to create runners.
GARM was designed to be extensible. Providers can be written either as built-in plugins or as external executables. The built-in plugins are written in Go, and they are compiled into the ```GARM``` binary. External providers are executables that implement the needed interface to create/delete/list compute systems that are used by ```GARM``` to create runners.
GARM currently ships with one built-in provider for [LXD](https://linuxcontainers.org/lxd/introduction/) and the external provider interface which allows you to write your own provider in any language you want.
@@ -12,7 +12,7 @@ GARM currently ships with one built-in provider for [LXD](https://linuxcontainer
## LXD provider
Garm leverages the virtual machines feature of LXD to create the runners. Here is a sample config section for an LXD provider:
GARM leverages LXD to create the runners. Here is a sample config section for an LXD provider:
```toml
# Currently, providers are defined statically in the config. This is due to the fact
@@ -24,7 +24,7 @@ Garm leverages the virtual machines feature of LXD to create the runners. Here i
[[provider]]
# An arbitrary string describing this provider.
name = "lxd_local"
# Provider type. Garm is designed to allow creating providers which are used to spin
# Provider type. GARM is designed to allow creating providers which are used to spin
# up compute resources, which in turn will run the github runner software.
# Currently, LXD is the only supported provider, but more will be written in the future.
provider_type = "lxd"
@@ -32,7 +32,7 @@ Garm leverages the virtual machines feature of LXD to create the runners. Here i
# be included in the information returned by the API when listing available providers.
description = "Local LXD installation"
[provider.lxd]
# the path to the unix socket that LXD is listening on. This works if garm and LXD
# the path to the unix socket that LXD is listening on. This works if GARM and LXD
# are on the same system, and this option takes precedence over the "url" option,
# which connects over the network.
unix_socket_path = "/var/snap/lxd/common/lxd/unix.socket"
@@ -57,7 +57,7 @@ Garm leverages the virtual machines feature of LXD to create the runners. Here i
project_name = "default"
# URL is the address on which LXD listens for connections (ex: https://example.com:8443)
url = ""
# garm supports certificate authentication for LXD remote connections. The easiest way
# GARM supports certificate authentication for LXD remote connections. The easiest way
# to get the needed certificates, is to install the lxc client and add a remote. The
# client_certificate, client_key and tls_server_certificate can be then fetched from
# $HOME/snap/lxd/common/config.
@@ -99,7 +99,7 @@ You can choose to connect to a local LXD server by using the ```unix_socket_path
### LXD remotes
By default, garm does not load any image remotes. You get to choose which remotes you add (if any). An image remote is a repository of images that LXD uses to create new instances, either virtual machines or containers. In the absence of any remote, garm will attempt to find the image you configure for a pool of runners on the LXD server we're connecting to. If one is present, it will be used; otherwise the operation will fail and you will need to configure a remote.
By default, GARM does not load any image remotes. You get to choose which remotes you add (if any). An image remote is a repository of images that LXD uses to create new instances, either virtual machines or containers. In the absence of any remote, GARM will attempt to find the image you configure for a pool of runners on the LXD server we're connecting to. If one is present, it will be used; otherwise the operation will fail and you will need to configure a remote.
The sample config file in this repository has the usual default ```LXD``` remotes:
@@ -111,13 +111,13 @@ When creating a new pool, you'll be able to specify which image you want to use.
You can also create your own image remote, where you can host your own custom images. If you want to build your own images, have a look at [distrobuilder](https://github.com/lxc/distrobuilder).
Image remotes in the ```garm``` config are a map of strings to remote settings. The name of the remote is the last segment of the section header. For example, the section ```[provider.lxd.image_remotes.ubuntu_daily]``` defines the image remote named **ubuntu_daily**. Use this name to reference images inside that remote.
Image remotes in the ```GARM``` config are a map of strings to remote settings. The name of the remote is the last segment of the section header. For example, the section ```[provider.lxd.image_remotes.ubuntu_daily]``` defines the image remote named **ubuntu_daily**. Use this name to reference images inside that remote.
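For illustration only, a remote definition could look roughly like the following. The section name comes from the example above; the individual keys mirror the sample config and should be double-checked there:

```toml
# Illustrative sketch of an image remote -- verify the keys against the sample config.
[provider.lxd.image_remotes.ubuntu_daily]
  # Address of the remote image server.
  addr = "https://cloud-images.ubuntu.com/daily"
  # Whether the remote is public (no authentication needed).
  public = true
  # Protocol used to talk to the remote.
  protocol = "simplestreams"
  # Skip TLS verification of the remote's certificate.
  skip_verify = false
```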
You can also use locally uploaded images. Check out the [performance considerations](./performance_considerations.md) page for details on how to customize local images and use them with garm.
You can also use locally uploaded images. Check out the [performance considerations](./performance_considerations.md) page for details on how to customize local images and use them with GARM.
### LXD Security considerations
Garm does not apply ACLs of any kind to the instances it creates. That task remains the responsibility of the user. [Here is a guide for creating ACLs in LXD](https://linuxcontainers.org/lxd/docs/master/howto/network_acls/). You can of course use ```iptables``` or ```nftables``` to create any rules you wish. I recommend you create a separate, isolated LXD bridge for runners, and secure it using ACLs/iptables/nftables.
GARM does not apply ACLs of any kind to the instances it creates. That task remains the responsibility of the user. [Here is a guide for creating ACLs in LXD](https://linuxcontainers.org/lxd/docs/master/howto/network_acls/). You can of course use ```iptables``` or ```nftables``` to create any rules you wish. I recommend you create a separate, isolated LXD bridge for runners, and secure it using ACLs/iptables/nftables.
You must make sure that the code that runs as part of the workflows is trusted, and if that cannot be done, you must make sure that any malicious code pulled in by the actions and run as part of a workload is as contained as possible. There is a nice article about [securing your workflow runs here](https://blog.gitguardian.com/github-actions-security-cheat-sheet/).
@@ -132,7 +132,7 @@ The configuration for an external provider is quite simple:
```toml
# This is an example external provider. External providers are executables that
# implement the needed interface to create/delete/list compute systems that are used
# by garm to create runners.
# by GARM to create runners.
[[provider]]
name = "openstack_external"
description = "external openstack provider"
@@ -151,11 +151,11 @@ The external provider has two options:
* ```provider_executable```
* ```config_file```
The ```provider_executable``` option is the absolute path to an executable that implements the provider logic. Garm will delegate all provider operations to this executable. This executable can be anything (bash, python, perl, go, etc). See [Writing an external provider](./external_provider.md) for more details.
The ```provider_executable``` option is the absolute path to an executable that implements the provider logic. GARM will delegate all provider operations to this executable. This executable can be anything (bash, python, perl, go, etc). See [Writing an external provider](./external_provider.md) for more details.
The ```config_file``` option is a path on disk to an arbitrary file that is passed to the external executable via the environment variable ```GARM_PROVIDER_CONFIG_FILE```. This file is only relevant to the external provider. Garm itself does not read it. In the case of the sample OpenStack provider, this file contains access information for an OpenStack cloud (what you would typically find in a ```keystonerc``` file) as well as some provider-specific options like whether or not to boot from volume and which tenant network to use. You can check out the [sample config file](../contrib/providers.d/openstack/keystonerc) in this repository.
The ```config_file``` option is a path on disk to an arbitrary file that is passed to the external executable via the environment variable ```GARM_PROVIDER_CONFIG_FILE```. This file is only relevant to the external provider. GARM itself does not read it. In the case of the sample OpenStack provider, this file contains access information for an OpenStack cloud (what you would typically find in a ```keystonerc``` file) as well as some provider-specific options like whether or not to boot from volume and which tenant network to use. You can check out the [sample config file](../contrib/providers.d/openstack/keystonerc) in this repository.
If you want to implement an external provider, you can use this file for anything you need to pass into the binary when ```garm``` calls it to execute a particular operation.
If you want to implement an external provider, you can use this file for anything you need to pass into the binary when ```GARM``` calls it to execute a particular operation.
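As a rough sketch, an external provider executable could be structured as shown below. The ```GARM_PROVIDER_CONFIG_FILE``` variable is documented above; the ```GARM_COMMAND``` variable, the command names and ```GARM_INSTANCE_ID``` are assumptions based on the external provider interface, so check them against [Writing an external provider](./external_provider.md):

```bash
#!/bin/bash
set -e

# Path to the provider config file, passed by GARM (documented above).
CONFIG_FILE="${GARM_PROVIDER_CONFIG_FILE}"
# Requested operation. The variable and command names below are assumptions;
# consult the external provider documentation for the exact contract.
COMMAND="${GARM_COMMAND}"

# Load provider specific settings (cloud credentials, networks, etc).
[ -f "$CONFIG_FILE" ] && source "$CONFIG_FILE"

case "$COMMAND" in
    CreateInstance)
        # Bootstrap parameters arrive on stdin as JSON; create the instance in
        # your IaaS and print the resulting instance details as JSON on stdout.
        ;;
    DeleteInstance)
        # Remove the instance GARM asked about (e.g. via GARM_INSTANCE_ID).
        ;;
    GetInstance|ListInstances|RemoveAllInstances|Stop|Start)
        # Implement the remaining lifecycle operations here.
        ;;
    *)
        echo "unknown command: ${COMMAND}" >&2
        exit 1
        ;;
esac
```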
### Available external providers


@@ -387,7 +387,7 @@ garm-cli pool add \
--image ubuntu:22.04 \
--max-runners 5 \
--min-idle-runners 0 \
--os-arch arm64 \
--os-arch amd64 \
--os-type linux \
--tags ubuntu,generic
```
@@ -435,7 +435,7 @@ gabriel@rock:~$ garm-cli pool ls -a
+--------------------------------------+--------------+---------+----------------------------------------+------------------+-------+---------+---------------+
| ID | IMAGE | FLAVOR | TAGS | BELONGS TO | LEVEL | ENABLED | RUNNER PREFIX |
+--------------------------------------+--------------+---------+----------------------------------------+------------------+-------+---------+---------------+
| 344e4a72-2035-4a18-a3d5-87bd3874b56c | ubuntu:22.04 | default | self-hosted arm64 Linux ubuntu generic | gsamfira/scripts | repo | true | garm |
| 344e4a72-2035-4a18-a3d5-87bd3874b56c | ubuntu:22.04 | default | self-hosted amd64 Linux ubuntu generic | gsamfira/scripts | repo | true | garm |
+--------------------------------------+--------------+---------+----------------------------------------+------------------+-------+---------+---------------+
```
@@ -521,7 +521,7 @@ gabriel@rossak:~$ garm-cli runner show garm-tdtD6zpsXhj1
| Addresses | 10.44.30.155 |
| Status Updates | 2023-07-18T14:32:26: runner registration token was retrieved |
| | 2023-07-18T14:32:26: downloading tools from https://github.com/actions/runner/releases/download/v2.3 |
| | 06.0/actions-runner-linux-arm64-2.306.0.tar.gz |
| | 06.0/actions-runner-linux-amd64-2.306.0.tar.gz |
| | 2023-07-18T14:32:30: extracting runner |
| | 2023-07-18T14:32:36: installing dependencies |
| | 2023-07-18T14:33:03: configuring runner |


@@ -1,391 +0,0 @@
# Running garm
Create a folder for the config:
```bash
mkdir $HOME/garm
```
Create a config file for ```garm```:
```bash
cp ./testdata/config.toml $HOME/garm/config.toml
```
Customize the config whichever way you want, then run ```garm```:
```bash
garm -config $HOME/garm/config.toml
```
This will start the API and migrate the database. Note: if you're using MySQL, you will need to create a database, grant access to a user and configure those credentials in the ```config.toml``` file.
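For example, a minimal MySQL preparation could look like the following (the database name, user and password are placeholders; the matching connection details then go into the ```[database]``` section of ```config.toml```):

```bash
# Placeholder database, user and password -- adjust to your environment.
mysql -u root -p <<'EOF'
CREATE DATABASE garm;
CREATE USER 'garm'@'%' IDENTIFIED BY 'super-secret-password';
GRANT ALL PRIVILEGES ON garm.* TO 'garm'@'%';
FLUSH PRIVILEGES;
EOF
```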
## First run
Before you can use ```garm```, you need to initialize it. This means we need to create an admin user and log in:
```bash
ubuntu@experiments:~$ garm-cli init --name="local_garm" --url https://garm.example.com
Username: admin
Email: root@localhost
✔ Password: *************█
+----------+--------------------------------------+
| FIELD | VALUE |
+----------+--------------------------------------+
| ID | ef4ab6fd-1252-4d5a-ba5a-8e8bd01610ae |
| Username | admin |
| Email | root@localhost |
| Enabled | true |
+----------+--------------------------------------+
```
Alternatively you can run this in non-interactive mode. See ```garm-cli init -h``` for details.
## Enabling bash completion
Before we begin, let's make our lives a little easier and set up bash completion. The wonderful [cobra](https://github.com/spf13/cobra) library gives us completion for free:
```bash
mkdir $HOME/.bash_completion.d
echo 'source $HOME/.bash_completion.d/* >/dev/null 2>&1|| true' >> $HOME/.bash_completion
```
Now generate the completion file:
```bash
garm-cli completion bash > $HOME/.bash_completion.d/garm
```
Completion for multiple shells is available:
```bash
ubuntu@experiments:~$ garm-cli completion
Generate the autocompletion script for garm-cli for the specified shell.
See each sub-command's help for details on how to use the generated script.
Usage:
garm-cli completion [command]
Available Commands:
bash Generate the autocompletion script for bash
fish Generate the autocompletion script for fish
powershell Generate the autocompletion script for powershell
zsh Generate the autocompletion script for zsh
Flags:
-h, --help help for completion
Global Flags:
--debug Enable debug on all API calls
Use "garm-cli completion [command] --help" for more information about a command.
```
## Adding a repository/organization/enterprise
To add a repository, we need credentials. Let's list the available credentials currently configured. These credentials are added to ```garm``` using the config file (see above), but we need to reference them by name when creating a repo.
```bash
ubuntu@experiments:~$ garm-cli credentials list
+---------+------------------------------+
| NAME | DESCRIPTION |
+---------+------------------------------+
| gabriel | github token or user gabriel |
+---------+------------------------------+
```
Now we can add a repository to ```garm```:
```bash
ubuntu@experiments:~$ garm-cli repository create \
--credentials=gabriel \
--owner=gabriel-samfira \
--name=scripts \
--webhook-secret="super secret webhook secret you configured in github webhooks"
+-------------+--------------------------------------+
| FIELD | VALUE |
+-------------+--------------------------------------+
| ID | 77258e1b-81d2-4821-bdd7-f6923a026455 |
| Owner | gabriel-samfira |
| Name | scripts |
| Credentials | gabriel |
+-------------+--------------------------------------+
```
To add an organization, use the following command:
```bash
ubuntu@experiments:~$ garm-cli organization create \
--credentials=gabriel \
--name=gsamfira \
--webhook-secret="$SECRET"
+-------------+--------------------------------------+
| FIELD | VALUE |
+-------------+--------------------------------------+
| ID | 7f0b83d5-3dc0-42de-b189-f9bbf1ae8901 |
| Name | gsamfira |
| Credentials | gabriel |
+-------------+--------------------------------------+
```
To add an enterprise, use the following command:
```bash
ubuntu@experiments:~$ garm-cli enterprise create \
--credentials=gabriel \
--name=gsamfira \
--webhook-secret="$SECRET"
+-------------+--------------------------------------+
| FIELD | VALUE |
+-------------+--------------------------------------+
| ID | 0925033b-049f-4334-a460-c26f979d2356 |
| Name | gsamfira |
| Credentials | gabriel |
+-------------+--------------------------------------+
```
## Creating a pool
Pools are objects that define one type of worker and rules by which that pool of workers will be maintained. You can have multiple pools of different types of instances. Each pool can have different images, be on different providers and have different tags.
Before we can create a pool, we need to list the available providers. Providers are defined in the config (see above), but we need to reference them by name in the pool.
```bash
ubuntu@experiments:~$ garm-cli provider list
+-----------+------------------------+------+
| NAME | DESCRIPTION | TYPE |
+-----------+------------------------+------+
| lxd_local | Local LXD installation | lxd |
+-----------+------------------------+------+
```
Now we can create a pool for repo ```gabriel-samfira/scripts```:
```bash
ubuntu@experiments:~$ garm-cli pool add \
--repo=77258e1b-81d2-4821-bdd7-f6923a026455 \
--flavor="default" \
--image="ubuntu:20.04" \
--provider-name="lxd_local" \
--tags="ubuntu,simple-runner,repo-runner" \
--enabled=false
+------------------+-------------------------------------------------------------+
| FIELD | VALUE |
+------------------+-------------------------------------------------------------+
| ID | fb25f308-7ad2-4769-988e-6ec2935f642a |
| Provider Name | lxd_local |
| Image | ubuntu:20.04 |
| Flavor | default |
| OS Type | linux |
| OS Architecture | amd64 |
| Max Runners | 5 |
| Min Idle Runners | 1 |
| Tags | ubuntu, simple-runner, repo-runner, self-hosted, x64, linux |
| Belongs to | gabriel-samfira/scripts |
| Level | repo |
| Enabled | false |
+------------------+-------------------------------------------------------------+
```
There are a bunch of things going on here, so let's break it down. We created a pool for repo ```gabriel-samfira/scripts``` (identified by the ID ```77258e1b-81d2-4821-bdd7-f6923a026455```). This pool has the following characteristics:
* flavor=default - The **flavor** describes the hardware aspects of an instance. In LXD terms, this translates to [profiles](https://linuxcontainers.org/lxd/docs/master/profiles/). In LXD, profiles describe how much memory, CPU, NICs and disks a particular instance will get, much like the flavors in OpenStack or any public cloud provider.
* image=ubuntu:20.04 - The image describes the operating system that will be spun up on the provider. LXD fetches these images from one of the configured remotes, or from the locally cached images. On AWS, this would be an AMI (for example).
* provider-name=lxd_local - This is the provider on which we'll be spinning up runners. You can have as many providers defined as you wish, and you can reference either one of them when creating a pool.
* tags="ubuntu,simple-runner,repo-runner" - This list of tags will be added to all runners maintained by this pool. These are the tags you can use to target these runners in your workflows. By default, the github runner will automatically add a few default tags (self-hosted, x64, linux in the above example)
* enabled=false - This option creates the pool in **disabled** state. When disabled, no new runners will be spun up.
By default, a pool is created with a max worker count of ```5``` and a minimum idle runner count of ```1```. This means that this pool will, by default, create one runner, and will automatically add more as jobs are triggered on GitHub. The idea is to have at least one runner ready to accept a workflow job. The pool will keep adding workers until the max runner count is reached. Once a workflow job is complete, the runner is automatically deleted and replaced.
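If you would rather set different limits at creation time, the same flags used with ```pool update``` later in this document can also be passed to ```pool add```. A sketch, with illustrative values:

```bash
garm-cli pool add \
    --repo=77258e1b-81d2-4821-bdd7-f6923a026455 \
    --provider-name="lxd_local" \
    --image="ubuntu:20.04" \
    --flavor="default" \
    --tags="ubuntu,simple-runner,repo-runner" \
    --min-idle-runners=2 \
    --max-runners=8 \
    --enabled=true
```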
To update the pool, we can use the following command:
```bash
ubuntu@experiments:~$ garm-cli pool update fb25f308-7ad2-4769-988e-6ec2935f642a --enabled=true
+------------------+-------------------------------------------------------------+
| FIELD | VALUE |
+------------------+-------------------------------------------------------------+
| ID | fb25f308-7ad2-4769-988e-6ec2935f642a |
| Provider Name | lxd_local |
| Image | ubuntu:20.04 |
| Flavor | default |
| OS Type | linux |
| OS Architecture | amd64 |
| Max Runners | 5 |
| Min Idle Runners | 1 |
| Tags | ubuntu, simple-runner, repo-runner, self-hosted, x64, linux |
| Belongs to | gabriel-samfira/scripts |
| Level | repo |
| Enabled | true |
+------------------+-------------------------------------------------------------+
```
Now, if we list the runners, we should see one being created:
```bash
ubuntu@experiments:~$ garm-cli runner ls fb25f308-7ad2-4769-988e-6ec2935f642a
+-------------------------------------------+----------------+---------------+--------------------------------------+
| NAME | STATUS | RUNNER STATUS | POOL ID |
+-------------------------------------------+----------------+---------------+--------------------------------------+
| garm-edeb8f46-ab09-4ed9-88fc-2731ecf9aabe | pending_create | pending | fb25f308-7ad2-4769-988e-6ec2935f642a |
+-------------------------------------------+----------------+---------------+--------------------------------------+
```
We can also do a show on that runner to get more info:
```bash
ubuntu@experiments:~$ garm-cli runner show garm-edeb8f46-ab09-4ed9-88fc-2731ecf9aabe
+-----------------+-------------------------------------------+
| FIELD | VALUE |
+-----------------+-------------------------------------------+
| ID | 089d63c9-5567-4318-a3a6-e065685c975b |
| Provider ID | garm-edeb8f46-ab09-4ed9-88fc-2731ecf9aabe |
| Name | garm-edeb8f46-ab09-4ed9-88fc-2731ecf9aabe |
| OS Type | linux |
| OS Architecture | amd64 |
| OS Name | ubuntu |
| OS Version | focal |
| Status | running |
| Runner Status | pending |
| Pool ID | fb25f308-7ad2-4769-988e-6ec2935f642a |
+-----------------+-------------------------------------------+
```
If we check out LXD, we can see the instance was created and is currently being bootstrapped:
```bash
ubuntu@experiments:~$ lxc list
+-------------------------------------------+---------+-------------------------+------+-----------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------------------------------+---------+-------------------------+------+-----------------+-----------+
| garm-edeb8f46-ab09-4ed9-88fc-2731ecf9aabe | RUNNING | 10.247.246.219 (enp5s0) | | VIRTUAL-MACHINE | 0 |
+-------------------------------------------+---------+-------------------------+------+-----------------+-----------+
```
It might take a couple of minutes for the runner to come online, as the instance will do a full upgrade, then download the runner and install it. But once the installation is done you should see something like this:
```bash
ubuntu@experiments:~$ garm-cli runner show garm-edeb8f46-ab09-4ed9-88fc-2731ecf9aabe
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| FIELD | VALUE |
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| ID | 089d63c9-5567-4318-a3a6-e065685c975b |
| Provider ID | garm-edeb8f46-ab09-4ed9-88fc-2731ecf9aabe |
| Name | garm-edeb8f46-ab09-4ed9-88fc-2731ecf9aabe |
| OS Type | linux |
| OS Architecture | amd64 |
| OS Name | ubuntu |
| OS Version | focal |
| Status | running |
| Runner Status | idle |
| Pool ID | fb25f308-7ad2-4769-988e-6ec2935f642a |
| Status Updates | 2022-05-06T13:21:54: downloading tools from https://github.com/actions/runner/releases/download/v2.291.1/actions-runner-linux-x64-2.291.1.tar.gz |
| | 2022-05-06T13:21:56: extracting runner |
| | 2022-05-06T13:21:58: installing dependencies |
| | 2022-05-06T13:22:07: configuring runner |
| | 2022-05-06T13:22:12: installing runner service |
| | 2022-05-06T13:22:12: starting service |
| | 2022-05-06T13:22:13: runner successfully installed |
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
```
If we list the runners for this pool, we should see one runner with a ```RUNNER STATUS``` of ```idle```:
```bash
ubuntu@experiments:~$ garm-cli runner ls fb25f308-7ad2-4769-988e-6ec2935f642a
+-------------------------------------------+---------+---------------+--------------------------------------+
| NAME | STATUS | RUNNER STATUS | POOL ID |
+-------------------------------------------+---------+---------------+--------------------------------------+
| garm-edeb8f46-ab09-4ed9-88fc-2731ecf9aabe | running | idle | fb25f308-7ad2-4769-988e-6ec2935f642a |
+-------------------------------------------+---------+---------------+--------------------------------------+
```
## Updating a pool
Let's update the pool and request that it maintain a minimum of 3 idle runners:
```bash
ubuntu@experiments:~$ garm-cli pool update fb25f308-7ad2-4769-988e-6ec2935f642a \
--min-idle-runners=3 \
--max-runners=10
+------------------+----------------------------------------------------------------------------------+
| FIELD | VALUE |
+------------------+----------------------------------------------------------------------------------+
| ID | fb25f308-7ad2-4769-988e-6ec2935f642a |
| Provider Name | lxd_local |
| Image | ubuntu:20.04 |
| Flavor | default |
| OS Type | linux |
| OS Architecture | amd64 |
| Max Runners | 10 |
| Min Idle Runners | 3 |
| Tags | ubuntu, simple-runner, repo-runner, self-hosted, x64, linux |
| Belongs to | gabriel-samfira/scripts |
| Level | repo |
| Enabled | true |
| Instances | garm-edeb8f46-ab09-4ed9-88fc-2731ecf9aabe (089d63c9-5567-4318-a3a6-e065685c975b) |
+------------------+----------------------------------------------------------------------------------+
```
Now if we list runners we should see 2 more in ```pending``` state:
```bash
ubuntu@experiments:~$ garm-cli runner ls fb25f308-7ad2-4769-988e-6ec2935f642a
+-------------------------------------------+---------+---------------+--------------------------------------+
| NAME | STATUS | RUNNER STATUS | POOL ID |
+-------------------------------------------+---------+---------------+--------------------------------------+
| garm-edeb8f46-ab09-4ed9-88fc-2731ecf9aabe | running | idle | fb25f308-7ad2-4769-988e-6ec2935f642a |
+-------------------------------------------+---------+---------------+--------------------------------------+
| garm-bc180c6c-6e31-4c7b-8ce1-da0ffd76e247 | running | pending | fb25f308-7ad2-4769-988e-6ec2935f642a |
+-------------------------------------------+---------+---------------+--------------------------------------+
| garm-37c5daf4-18c5-47fc-95de-8c1656889093 | running | pending | fb25f308-7ad2-4769-988e-6ec2935f642a |
+-------------------------------------------+---------+---------------+--------------------------------------+
```
We can see them in LXD (via the ```lxc``` client) as well:
```bash
ubuntu@experiments:~$ lxc list
+-------------------------------------------+---------+-------------------------+------+-----------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------------------------------+---------+-------------------------+------+-----------------+-----------+
| garm-37c5daf4-18c5-47fc-95de-8c1656889093 | RUNNING | | | VIRTUAL-MACHINE | 0 |
+-------------------------------------------+---------+-------------------------+------+-----------------+-----------+
| garm-bc180c6c-6e31-4c7b-8ce1-da0ffd76e247 | RUNNING | | | VIRTUAL-MACHINE | 0 |
+-------------------------------------------+---------+-------------------------+------+-----------------+-----------+
| garm-edeb8f46-ab09-4ed9-88fc-2731ecf9aabe | RUNNING | 10.247.246.219 (enp5s0) | | VIRTUAL-MACHINE | 0 |
+-------------------------------------------+---------+-------------------------+------+-----------------+-----------+
```
Once they transition to ```idle```, you should see them in your repo settings, under ```Actions --> Runners```.
The procedure is identical for organizations. Have a look at the garm-cli help:
```bash
ubuntu@experiments:~$ garm-cli -h
CLI for the github self hosted runners manager.
Usage:
garm-cli [command]
Available Commands:
completion Generate the autocompletion script for the specified shell
credentials List configured credentials
debug-log Stream garm log
enterprise Manage enterprise
help Help about any command
init Initialize a newly installed garm
organization Manage organizations
pool List pools
profile Add, delete or update profiles
provider Interacts with the providers API resource.
repository Manage repositories
runner List runners in a pool
version Print version and exit
Flags:
--debug Enable debug on all API calls
-h, --help help for garm-cli
Use "garm-cli [command] --help" for more information about a command.
```