Update docs
Signed-off-by: Gabriel Adrian Samfira <gsamfira@cloudbasesolutions.com>
This commit is contained in: parent affb56f9a0, commit d1d8bfa703
3 changed files with 41 additions and 140 deletions

README.md (16 changes)

@@ -30,17 +30,17 @@ Thanks to the efforts of the amazing folks at @mercedes-benz, GARM can now be in

## Supported providers

GARM has a built-in LXD provider that you can use out of the box to spin up runners on any machine that runs either a stand-alone LXD instance, or an LXD cluster. The quick start guide mentioned above will get you up and running with the LXD provider.

GARM also supports external providers for a variety of other targets.

GARM uses providers to create runners in a particular IaaS. The providers are external executables that GARM calls into to create runners. Before you can create runners, you'll need to install at least one provider.

## Installing external providers

External providers are binaries that GARM calls into to create runners in a particular IaaS. There are currently two external providers available:

External providers are binaries that GARM calls into to create runners in a particular IaaS. There are several external providers available:

* [OpenStack](https://github.com/cloudbase/garm-provider-openstack)
* [Azure](https://github.com/cloudbase/garm-provider-azure)
* [Kubernetes](https://github.com/mercedes-benz/garm-provider-k8s) - Thanks to the amazing folks at @mercedes-benz for sharing their awesome provider!
* [LXD](https://github.com/cloudbase/garm-provider-lxd)
* [Incus](https://github.com/cloudbase/garm-provider-incus)

Follow the instructions in the README of each provider to install them.
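
As a rough illustration of what that usually involves (the exact release names and paths below are assumptions; each provider's README is authoritative), installing an external provider typically means dropping a binary somewhere GARM can execute it and pointing a `[[provider]]` section of your config at it:

```bash
# Hypothetical example: install the LXD external provider.
# Adjust the version, URL and paths to whatever the provider's own README says.
sudo mkdir -p /opt/garm/providers.d
wget -q -O - \
  https://github.com/cloudbase/garm-provider-lxd/releases/download/v0.1.0/garm-linux-amd64.tgz | \
  sudo tar xzf - -C /opt/garm/providers.d/
```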

@@ -62,10 +62,4 @@ If you would like to optimize the startup time of new instances, take a look at t

## Write your own provider

The providers are interfaces between ```GARM``` and a particular IaaS in which we spin up GitHub Runners. These providers can be either **native** or **external**. The **native** providers are written in ```Go```, and must implement [the interface defined here](https://github.com/cloudbase/garm/blob/main/runner/common/provider.go#L22-L39). **External** providers can be written in any language, as they are in the form of an external executable that ```GARM``` calls into.

There is currently one **native** provider for [LXD](https://linuxcontainers.org/lxd/) and several **external** providers linked above.

If you want to write your own provider, you can choose to write a native one, or implement an **external** one. I encourage you to opt for an **external** provider, as those are the easiest to write and you don't need to merge it into GARM itself to be able to use it. Faster to write, faster to iterate. The LXD provider may at some point be split from GARM into its own external project, at which point we will remove the native provider interface and only support external providers.

Please see the [Writing an external provider](/doc/external_provider.md) document for details. Also, feel free to inspect the two available sample external providers in this repository.

The providers are interfaces between ```GARM``` and a particular IaaS in which we spin up GitHub Runners. **External** providers can be written in any language, as they are in the form of an external executable that ```GARM``` calls into. Please see the [Writing an external provider](/doc/external_provider.md) document for details. Also, feel free to inspect the two available sample external providers in this repository.
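
To give a feel for what an external provider looks like, here is a minimal, hypothetical skeleton in bash. The environment variable and command names below are assumptions for illustration only; the authoritative interface is described in the external provider document linked above and in the sample providers.

```bash
#!/bin/bash
# Hypothetical skeleton of an external provider executable.
# Assumption: GARM passes the requested operation in an environment variable
# (here called GARM_COMMAND) and the path to the provider's own config file
# (here called GARM_PROVIDER_CONFIG_FILE).
set -e

case "${GARM_COMMAND}" in
    CreateInstance)
        # Read bootstrap parameters from stdin, create a VM/container in your
        # IaaS, then print a JSON description of the new instance to stdout.
        ;;
    DeleteInstance)
        # Remove the instance identified by the arguments/environment.
        ;;
    ListInstances)
        # Print a JSON list of the instances managed by this provider.
        ;;
    *)
        echo "unknown command: ${GARM_COMMAND}" >&2
        exit 1
        ;;
esac
```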

doc/providers.md (121 changes)

@@ -1,126 +1,10 @@

# Provider configuration

GARM was designed to be extensible. Providers can be written either as built-in plugins or as external executables. The built-in plugins are written in Go, and they are compiled into the ```GARM``` binary. External providers are executables that implement the needed interface to create/delete/list compute systems that are used by ```GARM``` to create runners.

GARM was designed to be extensible. Providers can be written as external executables. External providers are executables that implement the needed interface to create/delete/list compute systems that are used by ```GARM``` to create runners.

GARM currently ships with one built-in provider for [LXD](https://linuxcontainers.org/lxd/introduction/) and the external provider interface which allows you to write your own provider in any language you want.

- [LXD provider](#lxd-provider)
  - [LXD remotes](#lxd-remotes)
  - [LXD Security considerations](#lxd-security-considerations)
- [External provider](#external-provider)
  - [Available external providers](#available-external-providers)

## LXD provider

GARM leverages LXD to create the runners. Here is a sample config section for an LXD provider:

```toml
# Currently, providers are defined statically in the config. This is due to the fact
# that we have not yet added support for storing secrets in something like Barbican
# or Vault. This will change in the future. However, for now, it's important to remember
# that once you create a pool using one of the providers defined here, the name of that
# provider must not be changed, or the pool will no longer work. Make sure you remove any
# pools before removing or changing a provider.
[[provider]]
# An arbitrary string describing this provider.
name = "lxd_local"
# Provider type. GARM is designed to allow creating providers which are used to spin
# up compute resources, which in turn will run the github runner software.
# Currently, LXD is the only supported provider, but more will be written in the future.
provider_type = "lxd"
# A short description of this provider. The name, description and provider types will
# be included in the information returned by the API when listing available providers.
description = "Local LXD installation"
[provider.lxd]
# the path to the unix socket that LXD is listening on. This works if GARM and LXD
# are on the same system, and this option takes precedence over the "url" option,
# which connects over the network.
unix_socket_path = "/var/snap/lxd/common/lxd/unix.socket"
# When defining a pool for a repository or an organization, you have an option to
# specify a "flavor". In LXD terms, this translates to "profiles". Profiles allow
# you to customize your instances (memory, cpu, disks, nics, etc).
# This option allows you to inject the "default" profile along with the profile selected
# by the flavor.
include_default_profile = false
# instance_type defines the type of instances this provider will create.
#
# Options are:
#
# * virtual-machine (default)
# * container
#
instance_type = "container"
# enable/disable secure boot. If the image you select for the pool does not have a
# signed bootloader, set this to false, otherwise your instances won't boot.
secure_boot = false
# Project name to use. You can create a separate project in LXD for runners.
project_name = "default"
# URL is the address on which LXD listens for connections (ex: https://example.com:8443)
url = ""
# GARM supports certificate authentication for LXD remote connections. The easiest way
# to get the needed certificates, is to install the lxc client and add a remote. The
# client_certificate, client_key and tls_server_certificate can be then fetched from
# $HOME/snap/lxd/common/config.
client_certificate = ""
client_key = ""
tls_server_certificate = ""
[provider.lxd.image_remotes]
# Image remotes are important. These are the default remotes used by lxc. The names
# of these remotes are important. When specifying an "image" for the pool, that image
# can be a hash of an existing image on your local LXD installation or it can be a
# remote image from one of these remotes. You can specify the images as follows:
# Example:
#
# * ubuntu:20.04
# * ubuntu_daily:20.04
# * images:centos/8/cloud
#
# Ubuntu images come pre-installed with cloud-init which we use to set up the runner
# automatically and customize the runner. For non Ubuntu images, you need to use the
# variant that has "/cloud" in the name. Those images come with cloud-init.
[provider.lxd.image_remotes.ubuntu]
addr = "https://cloud-images.ubuntu.com/releases"
public = true
protocol = "simplestreams"
skip_verify = false
[provider.lxd.image_remotes.ubuntu_daily]
addr = "https://cloud-images.ubuntu.com/daily"
public = true
protocol = "simplestreams"
skip_verify = false
[provider.lxd.image_remotes.images]
addr = "https://images.linuxcontainers.org"
public = true
protocol = "simplestreams"
skip_verify = false
```

You can choose to connect to a local LXD server by using the ```unix_socket_path``` option, or you can connect to a remote LXD cluster/server by using the ```url``` option. If both are specified, the unix socket takes precedence. The config file is fairly well commented, but I will add a note about remotes.

### LXD remotes

By default, GARM does not load any image remotes. You get to choose which remotes you add (if any). An image remote is a repository of images that LXD uses to create new instances, either virtual machines or containers. In the absence of any remote, GARM will attempt to find the image you configure for a pool of runners on the LXD server we're connecting to. If one is present, it will be used; otherwise it will fail and you will need to configure a remote.

The sample config file in this repository has the usual default ```LXD``` remotes:

* <https://cloud-images.ubuntu.com/releases> (ubuntu) - Official Ubuntu images
* <https://cloud-images.ubuntu.com/daily> (ubuntu_daily) - Official Ubuntu images, daily build
* <https://images.linuxcontainers.org> (images) - Community maintained images for various operating systems

When creating a new pool, you'll be able to specify which image you want to use. The images are referenced by ```remote_name:image_tag```. For example, if you want to launch a runner on Ubuntu 20.04, the image name would be ```ubuntu:20.04```. For a daily image it would be ```ubuntu_daily:20.04```. And for one of the unofficial images it would be ```images:centos/8-Stream/cloud```. Note that for unofficial images you need to use the tags that have ```/cloud``` in the name. These images come pre-installed with ```cloud-init```, which we need to set up the runners automatically.

You can also create your own image remote, where you can host your own custom images. If you want to build your own images, have a look at [distrobuilder](https://github.com/lxc/distrobuilder).

Image remotes in the ```GARM``` config are a map of strings to remote settings. The name of the remote is the last bit of string in the section header. For example, the section ```[provider.lxd.image_remotes.ubuntu_daily]``` defines the image remote named **ubuntu_daily**. Use this name to reference images inside that remote.

You can also use locally uploaded images. Check out the [performance considerations](./performance_considerations.md) page for details on how to customize local images and use them with GARM.

### LXD Security considerations

GARM does not apply any ACLs of any kind to the instances it creates. That task remains the responsibility of the user. [Here is a guide for creating ACLs in LXD](https://linuxcontainers.org/lxd/docs/master/howto/network_acls/). You can of course use ```iptables``` or ```nftables``` to create any rules you wish. I recommend you create a separate isolated LXD bridge for runners, and secure it using ACLs/iptables/nftables.
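
As a rough starting point, a minimal sketch of an isolated bridge with an ACL attached could look like the following (this assumes a recent LXD with network ACL support; the names, subnets and rules are placeholders you should adapt):

```bash
# Create an isolated bridge dedicated to runner instances.
lxc network create runnersbr0 ipv4.address=10.77.77.1/24 ipv4.nat=true ipv6.address=none

# Create an ACL that only allows outbound HTTPS, then attach it to the bridge.
# Real deployments will at minimum also need DNS and whatever else your workflows require.
lxc network acl create runners-acl
lxc network acl rule add runners-acl egress action=allow protocol=tcp destination_port=443
lxc network set runnersbr0 security.acls=runners-acl
```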

You must make sure that the code that runs as part of the workflows is trusted, and if that cannot be done, you must make sure that any malicious code that is pulled in by the actions and run as part of a workload is as contained as possible. There is a nice article about [securing your workflow runs here](https://blog.gitguardian.com/github-actions-security-cheat-sheet/).

## External provider

The external provider is a special kind of provider. It delegates the functionality needed to create the runners to external executables. These executables can be either binaries or scripts. As long as they adhere to the needed interface, they can be used to create runners in any target IaaS. This is identical to what ```containerd``` does with ```CNIs```.

@@ -163,6 +47,9 @@ For non-testing purposes, there are two external providers currently available:

* [OpenStack](https://github.com/cloudbase/garm-provider-openstack)
* [Azure](https://github.com/cloudbase/garm-provider-azure)
* [Kubernetes](https://github.com/mercedes-benz/garm-provider-k8s) - Thanks to the amazing folks at @mercedes-benz for sharing their awesome provider!
* [LXD](https://github.com/cloudbase/garm-provider-lxd)
* [Incus](https://github.com/cloudbase/garm-provider-incus)

Details on how to install and configure them are available in their respective repositories.

@@ -96,23 +96,31 @@ At this point, we have a valid config file, but we still need to add `provider`

This is where you have a decision to make. GARM has a number of providers you can leverage. At the time of this writing, we have support for:

* LXD
* Azure
* OpenStack

* [OpenStack](https://github.com/cloudbase/garm-provider-openstack)
* [Azure](https://github.com/cloudbase/garm-provider-azure)
* [Kubernetes](https://github.com/mercedes-benz/garm-provider-k8s) - Thanks to the amazing folks at @mercedes-benz for sharing their awesome provider!
* [LXD](https://github.com/cloudbase/garm-provider-lxd)
* [Incus](https://github.com/cloudbase/garm-provider-incus)

The LXD provider is built into GARM itself and has no external requirements. The [Azure](https://github.com/cloudbase/garm-provider-azure) and [OpenStack](https://github.com/cloudbase/garm-provider-openstack) ones are `external` providers in the form of an executable that GARM calls into.

All currently available providers are `external`.

Both the LXD and the external provider configs are [documented in a separate doc](./providers.md).

The easiest provider to set up is probably the LXD provider. You don't need an account on an external cloud. You can just use your machine.

The easiest provider to set up is probably the LXD or Incus provider. Incus is a fork of LXD, so the functionality is identical (for now). For the purpose of this document, we'll continue with LXD. You don't need an account on an external cloud. You can just use your machine.

You will need to have LXD installed and configured. There is an excellent [getting started guide](https://documentation.ubuntu.com/lxd/en/latest/getting_started/) for LXD. Follow the instructions there to install and configure LXD, then come back here.
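
If you just want a quick local setup to follow along with, a snap-based install typically looks something like this (a minimal sketch; the getting started guide above remains the authoritative reference):

```bash
# Install LXD from the snap and apply a default, non-interactive configuration.
sudo snap install lxd
sudo lxd init --auto

# Optional: allow your user to talk to LXD without sudo.
# Log out and back in for the group change to take effect.
sudo usermod -aG lxd "$USER"
```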

Once you have LXD installed and configured, you can add the provider section to your config file. If you're connecting to the `local` LXD installation, the [config snippet for the LXD provider](./providers.md#lxd-provider) will work out of the box. We'll be connecting using the unix socket so no further configuration will be needed.

Once you have LXD installed and configured, you can add the provider section to your config file. If you're connecting to the `local` LXD installation, the [config snippet for the LXD provider](https://github.com/cloudbase/garm-provider-lxd/blob/main/testdata/garm-provider-lxd.toml) will work out of the box. We'll be connecting using the unix socket so no further configuration will be needed.
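
To give you an idea of what goes into that file, here is a rough sketch modelled on the options of the former built-in LXD provider shown earlier in this commit. The key names are an assumption; treat the linked testdata file in the garm-provider-lxd repository as the authoritative reference.

```toml
# Hypothetical /etc/garm/garm-provider-lxd.toml - verify key names against the
# provider's own sample config before using.
unix_socket_path = "/var/snap/lxd/common/lxd/unix.socket"
include_default_profile = false
instance_type = "container"
secure_boot = false
project_name = "default"

[image_remotes]
  [image_remotes.ubuntu]
    addr = "https://cloud-images.ubuntu.com/releases"
    public = true
    protocol = "simplestreams"
    skip_verify = false
```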

Go ahead and copy and paste that entire snippet in your GARM config file (`/etc/garm/config.toml`).

Go ahead and create a new config somewhere where GARM can access it and paste that entire snippet. For the purposes of this doc, we'll assume you created a new file called `/etc/garm/garm-provider-lxd.toml`. Now we need to define the external provider config in `/etc/garm/config.toml`:

You can also use an external provider instead of LXD. You will need to define the provider section in your config file and point it to the executable and the provider config file. The [config snippet for the external provider](./providers.md#external-provider) gives you an example of how that can be done. Configuring the external provider is outside the scope of this guide. You will need to consult the documentation for the external provider you want to use.

```toml
[[provider]]
name = "lxd_local"
provider_type = "external"
description = "Local LXD installation"
[provider.external]
provider_executable = "/opt/garm/providers.d/garm-provider-lxd"
config_file = "/etc/garm/garm-provider-lxd.toml"
```
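
Later, once GARM is up and running and initialized (covered further down in this guide), you should be able to see the provider registered with something along these lines, assuming `garm-cli` is installed and configured:

```bash
garm-cli provider list
```
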
## The credentials section

@@ -154,7 +162,7 @@ docker run -d \
  -p 80:80 \
  -v /etc/garm:/etc/garm:rw \
  -v /var/snap/lxd/common/lxd/unix.socket:/var/snap/lxd/common/lxd/unix.socket:rw \
  ghcr.io/cloudbase/garm:v0.1.3
  ghcr.io/cloudbase/garm:v0.1.4
```

You will notice we also mounted the LXD unix socket from the host inside the container where the config you pasted expects to find it. If you plan to use an external provider that does not need to connect to LXD over a unix socket, feel free to remove that mount.
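
For instance, when using a provider that only talks to a remote cloud API, the run command could be reduced to something like this (same image and config mount as above, with the LXD socket mount dropped; adjust to your provider's needs):

```bash
docker run -d \
  -p 80:80 \
  -v /etc/garm:/etc/garm:rw \
  ghcr.io/cloudbase/garm:v0.1.4
```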

@@ -187,7 +195,7 @@ Adding the `garm` user to the LXD group will allow it to connect to the LXD unix

Next, download the latest release from the [releases page](https://github.com/cloudbase/garm/releases).

```bash
wget -q -O - https://github.com/cloudbase/garm/releases/download/v0.1.3/garm-linux-amd64.tgz | tar xzf - -C /usr/local/bin/
wget -q -O - https://github.com/cloudbase/garm/releases/download/v0.1.4/garm-linux-amd64.tgz | tar xzf - -C /usr/local/bin/
```

We'll be running under an unprivileged user. If we want to be able to listen on any port under `1024`, we'll have to set some capabilities on the binary:

@@ -196,6 +204,18 @@ We'll be running under an unprivileged user. If we want to be able to listen on

setcap cap_net_bind_service=+ep /usr/local/bin/garm
```
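
If you want to confirm that the capability was applied (assuming the `getcap` utility from libcap is installed):

```bash
# Should print something like: /usr/local/bin/garm cap_net_bind_service=ep
getcap /usr/local/bin/garm
```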

Create a folder for the external providers:

```bash
sudo mkdir -p /opt/garm/providers.d
```

Download the LXD provider binary:

```bash
wget -q -O - https://github.com/cloudbase/garm-provider-lxd/releases/download/v0.1.0/garm-linux-amd64.tgz | sudo tar xzf - -C /opt/garm/providers.d/
```
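
A quick sanity check that the archive extracted a binary where the provider config above expects it (this assumes the binary is named `garm-provider-lxd`, matching the `provider_executable` path used earlier):

```bash
ls -l /opt/garm/providers.d/garm-provider-lxd
```
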
Change the permissions on the config dir:
```bash