Volumes
A volume is a physical space containing files. Each volume has a name. Different types of volumes are available, with different characteristics (shareability, POSIX compliance, persistence, …).
Volumes in ScaleDynamics rely on the following concepts:
- The description of container needs.
- The management of physical volumes.
- Links between container needs and volumes according to the environment.
Container needs
Containers declare their volume needs in their warp.config.js by adding a volumes section.
For example, a web server could declare the need for a volume to store its assets with a configuration like:
module.exports = {
  container: 'my-web-server',
  image: {
    name: 'nginx:latest',
  },
  volumes: [
    {
      id: 'asset',
      description: 'Files to serve',
      path: '/usr/share/nginx/html',
      mode: 'ro',
    },
  ],
}
id
: an id used by the SDK during the linking phase.

description
: a description used for documentation and information during the linking phase.

path
: the absolute path inside the container where the volume needs to be mounted.

mode
: the access mode: ro if the container only needs to read the volume, rw if the container needs to write to the volume.
Of course, a container can declare several different volume needs, as sketched below.
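For instance, a hypothetical configuration with a read-only asset volume and a writable upload volume (the upload id, description, and path are made up for illustration):
module.exports = {
  container: 'my-web-server',
  image: {
    name: 'nginx:latest',
  },
  volumes: [
    {
      id: 'asset',
      description: 'Files to serve',
      path: '/usr/share/nginx/html',
      mode: 'ro',
    },
    {
      // hypothetical second volume need, writable by the container
      id: 'upload',
      description: 'User uploads',
      path: '/data/upload',
      mode: 'rw',
    },
  ],
}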
Volume management
To create a volume, use the command warp volume create
and follow the interactive questions, or add the relevant options.
As usual, you can list all available volumes with warp volume list
, get more information on a volume with warp volume info
, or delete a volume with warp volume delete
.
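These commands are interactive and will prompt for anything missing; a minimal sketch of the invocations (prompts and output depend on your setup):
$ npx warp volume list
$ npx warp volume info
$ npx warp volume delete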
For example, for our web server, we create an S3 volume with the following command:
$ warp volume create --path /
✔ Enter a new volume name: … my-prod-asset
✔ Pick a model: › s3fs
✔ Select S3 generic connector: › my-s3-connector
✔ Enter a S3 bucket name: … prod-asset
Volume 'my-prod-asset' has been created
With the --path /
option, we attach the volume to the organization, so every project in that organization will be able to see and use this volume. You can also attach the volume to a specific project by using /my-project
, or to a specific environment by using /my-project/my-env
.
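For instance, a sketch assuming a project named my-project with an environment my-env:
$ warp volume create --path /my-project
$ warp volume create --path /my-project/my-env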
Here we pick the s3fs
model and use an S3 connector. The connector contains the credentials and endpoint URL of your S3 storage. Finally, enter the name of the S3 bucket where your assets are located.
Volume linking
Now that the container has declared its needs, and the volumes are created, we need to link each container need with a specific physical volume for an environment. For example, you can link a volume my-prod-asset
to the asset
need in environment prod
, and a volume my-dev-asset
to the asset
need in environment dev
. With that, when you deploy your container, the volume my-prod-asset
or my-dev-asset
will be mounted automatically in your container, depending on the environment prod
or dev
you are deploying into.
You can link a volume with the command warp env volume link
, and remove a link with warp env volume unlink
. For example, let's link our web server's assets volume with:
$ npx warp env volume link
✔ Pick a project: › my-web
✔ Pick an environment in project 'my-web': › prod
✔ Pick a service: › my-web-server
✔ Pick a service volume ID › asset
✔ Pick a volume: › my-prod-asset
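To undo this later, the corresponding unlink command is interactive as well:
$ npx warp env volume unlink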
But, as usual in interactive mode, the command warp deploy
will ask all the necessary questions if this setup was not done before. For example, let's deploy the web server with the following command:
$ npx warp deploy .
Project settings:
- Configuration: 'warp.config.js'
- Service container 'my-web-server' (from 'warp.config.js')
✔ Pick a project: › my-web
✔ Pick an environment in project 'my-web': › prod
You need to select a volume for service volume ID 'asset'
✔ Pick a volume (Files to serve): › my-prod-asset
Volume 'my-prod-asset' has been linked with 'my-web-server' service volume ID 'asset' in environment 'prod'
Deploying project 'my-web' in environment 'prod':
[…]
- Container 'my-web-server' with base URL 'my-web-server' at https://bk7nxl97zh5c5pf9uw5e29nid.scaledynamics.cloud
Now, you can use this URL to access files in the volume my-prod-asset
through the nginx server.
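For instance, assuming the bucket contains a file named index.html (a hypothetical file name), you could fetch it with:
$ curl https://bk7nxl97zh5c5pf9uw5e29nid.scaledynamics.cloud/index.html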
Volume types and characteristics
The following command lists all accessible volume types:
$ npx warp volume model list
Name    Scope    Instantiable
local   Global   Yes
s3fs    Global   Yes
…
You can get more details about a volume type with the following command:
$ npx warp volume model info
✔ Pick a model: › local
Name: local
Description: Local volume on runner, persistent, POSIX compliant, shareable on the same runner
Currently instantiable
local volume type
The local
volume type is a local volume on the file system of the runner. Every container running on this runner, from different services or deployments, shares the volume, but containers on different runners access separate copies. With this type of volume, be careful about horizontal scaling: when a new runner is allocated, your local
volume will start empty on the new runner.
This volume type is typically useful for a single-runner infrastructure, or for caching data.
The volume is fully POSIX compliant and persistent.
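For instance, a hypothetical declaration of a writable cache in warp.config.js (the container name, image, volume id, and path are all made up for illustration):
module.exports = {
  container: 'my-worker',
  image: {
    name: 'node:20',
  },
  volumes: [
    {
      // hypothetical cache volume; with the local type it starts
      // empty on each newly allocated runner
      id: 'cache',
      description: 'Scratch cache, safe to lose on a new runner',
      path: '/var/cache/app',
      mode: 'rw',
    },
  ],
}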
Creation
For a local
volume, use:
npx warp volume model create --model local
s3fs volume
The s3fs
volume type is an S3 object storage mounted with the s3fs tool on the runner. Every container running on any runner shares the same data on this volume. See the s3fs documentation for more details.
This type of volume is typically useful for sharing assets, updating assets, …
The volume is persistent and shareable across different runners, but is NOT fully POSIX compliant.
S3 connector
In order to create an s3fs volume you need an S3 connector. An S3 connector contains the S3 endpoint URL and its credentials (access key ID & secret access key). You can create different volumes with the same S3 connector by changing the bucket name.
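For instance, reusing the connector from the earlier example to create a second volume on another bucket (the volume and bucket names are illustrative):
$ warp volume create --path /
✔ Enter a new volume name: … my-dev-asset
✔ Pick a model: › s3fs
✔ Select S3 generic connector: › my-s3-connector
✔ Enter a S3 bucket name: … dev-asset
Volume 'my-dev-asset' has been created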
Creation
For an s3fs
volume, use:
npx warp volume model create --model s3fs
You need two things:
- An S3 generic connector.
- An S3 bucket name: the name of your S3 bucket, conforming to the naming rules.
Permissions
Permissions on s3fs volumes are managed by meta labels, so you may experience some trouble if your container does not run as the root
user. Ownerships and permissions are fully supported when using s3fs exclusively. But if you use other tools to populate your S3 volume, you need to be careful about this meta information. Additionally, s3fs can read meta information coming from s3sync.
In case of permission trouble, you can use warp deployment exec
to open a shell on your container and diagnose the issue.
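For instance, once the shell is open, a quick check could look like this (a sketch, assuming the asset volume is mounted at /usr/share/nginx/html as in the earlier example):
$ npx warp deployment exec
# inside the container: check which user the processes run as
$ id
# inspect ownership and permissions on the mounted volume
$ ls -la /usr/share/nginx/html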