System Storage Manager

A single tool to manage your storage.

Description

System Storage Manager provides an easy to use command line interface to manage your storage using various technologies like lvm, btrfs, encrypted volumes and more.

In more sophisticated enterprise storage environments, management with Device Mapper (dm), Logical Volume Manager (LVM), or Multiple Devices (md) is becoming increasingly more difficult. With file systems added to the mix, the number of tools needed to configure and manage storage has grown so large that it is simply not user friendly. With so many options for a system administrator to consider, the opportunity for errors and problems is large.

The btrfs administration tools have shown us that storage management can be simplified, and we are working to bring that ease of use to Linux filesystems in general.

Download

You can get System Storage Manager from the git repository on the GitHub project page https://github.com/system-storage-manager/ssm/releases/. There are two branches: master, which contains the stable release, and devel, where the development is happening. Once in a while the devel branch is merged into master, releasing a new version of System Storage Manager. Obviously the devel branch is more up-to-date; however, it might not be as stable as the master branch. System Storage Manager has its own regression testing suite, so we’re trying hard not to break things that already work.

You can check out the master branch of the git repository:

git clone https://github.com/system-storage-manager/ssm.git

Licence

(C)2017 Red Hat, Inc., Jan Tulak <jtulak@redhat.com>
(C)2011 Red Hat, Inc., Lukas Czerner <lczerner@redhat.com>

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.

Synopsis

ssm [-h] [--version] [-v] [-vv] [-vvv] [-f] [-b BACKEND] [-n] {check,resize,create,list,info,add,remove,snapshot,mount,migrate} …

ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL] [-I STRIPESIZE] [-i STRIPES] [-p POOL] [-e [{luks,plain}]] [-o MNT_OPTIONS] [-v VIRTUAL_SIZE] [device [device …]] [mount]

ssm list [-h] [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

ssm info [-h] [item]

ssm remove [-h] [-a] [items [items …]]

ssm resize [-h] [-s SIZE] volume [device [device …]]

ssm check [-h] device [device …]

ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

ssm add [-h] [-p POOL] device [device …]

ssm mount [-h] [-o OPTIONS] volume directory

ssm migrate [-h] source target

Options

-h, --help show this help message and exit
--version show program’s version number and exit
-v, --verbose Show additional information while executing.
-vv Show yet more additional information while executing.
-vvv Show yet more additional information while executing.
-f, --force Force execution in the case where ssm has some doubts or questions.
-b BACKEND, --backend BACKEND
 Choose backend to use. Currently you can choose from (lvm,btrfs,crypt).
-n, --dry-run Dry run. Do not do anything, just parse the command line options and gather system information if necessary. Note that with this option ssm will not perform all the checks, as some of them are done by the backends themselves. This option is mainly used for debugging purposes, but still requires root privileges.
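
For example, to preview a create operation using the btrfs backend with extra verbosity, without modifying the system (the device names below are only illustrative):

# ssm -v -n -b btrfs create /dev/loop0 /dev/loop1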

System Storage Manager Commands

Introduction

System Storage Manager has several commands that you can specify on the command line as a first argument to ssm. They all have a specific use and their own arguments, but global ssm arguments are propagated to all commands.

Create command

ssm create [-h] [-s SIZE] [-n NAME] [--fstype FSTYPE] [-r LEVEL] [-I STRIPESIZE] [-i STRIPES] [-p POOL] [-e [{luks,plain}]] [-o MNT_OPTIONS] [-v VIRTUAL_SIZE] [device [device …]] [mount]

This command creates a new volume with the defined parameters. If a device is provided, it will be used to create the volume and will therefore be added into the pool prior to volume creation (see the Add command section). More than one device can be used to create a volume.

If the device is already being used in a different pool, then ssm will ask you whether you want to remove it from the original pool. If you decline, or the removal fails, then the volume creation fails if the SIZE was not provided. On the other hand, if the SIZE is provided and some devices can not be added to the pool, the volume creation might still succeed if there is enough space in the pool.

In addition to specifying the size of the volume directly, a percentage can be specified as well. Specify --size 70% to indicate that the volume size should be 70% of the total pool size. Additionally, a percentage of the used or free pool space can be specified using the keywords USED or FREE respectively.
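
For instance, using device names that are only illustrative, a volume spanning 70% of the pool, or all of the remaining free space of an existing pool, could be created with:

# ssm create -s 70% /dev/loop0 /dev/loop1
# ssm create -s 100%FREE -p lvm_pool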

The POOL name can be specified as well. If the pool exists, a new volume will be created from that pool (optionally adding the device into the pool). However, if the POOL does not exist, then ssm will attempt to create a new pool with the provided device, and then create a new volume from this pool. If the --backend argument is omitted, the default ssm backend will be used. The default backend is lvm.

ssm also supports creating a RAID configuration; however, some back-ends might not support all RAID levels, or may not support RAID at all. In this case, volume creation will fail.

If a mount point is provided, ssm will attempt to mount the volume after it is created. However, this will fail if a mountable file system is not present on the volume.

If the backend allows it (currently only supported by the lvm backend), ssm can be used to create thinly provisioned volumes by specifying the --virtual-size option. This will automatically create a thin pool of the size given with the --size option and a thin volume of the size given with the --virtual-size option, with the name given with the --name option. The virtual size can be much bigger than the available space in the pool.
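
A minimal sketch, assuming the lvm backend and an illustrative device and volume name: this creates a 1G thin pool and a 100G thin volume named thinvol backed by it:

# ssm create --size 1G --virtual-size 100G --name thinvol /dev/loop0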

-h, --help show this help message and exit
-s SIZE, --size SIZE
 Gives the size to allocate for the new logical volume. A size suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to define ‘power of two’ units. If no unit is provided, it defaults to kilobytes. This is optional and if not given, maximum possible size will be used. Additionally the new size can be specified as a percentage of the total pool size (50%), as a percentage of free pool space (50%FREE), or as a percentage of used pool space (50%USED).
-n NAME, --name NAME
 The name for the new logical volume. This is optional and if omitted, name will be generated by the corresponding backend.
--fstype FSTYPE
 Gives the file system type to create on the new logical volume. Supported file systems are (ext3, ext4, xfs, btrfs). This is optional and if not given, a file system will not be created.
-r LEVEL, --raid LEVEL
 Specify a RAID level you want to use when creating a new volume. Note that some backends might not implement all supported RAID levels. This is optional and if not specified, a linear volume will be created. You can choose from the following list of supported levels (0,1,10).
-I STRIPESIZE, --stripesize STRIPESIZE
 Gives the number of kilobytes for the granularity of stripes. This is optional and if not given, the backend default will be used. Note that you have to specify a RAID level as well.
-i STRIPES, --stripes STRIPES
 Gives the number of stripes. This is equal to the number of physical volumes to scatter the logical volume over. This is optional; if stripesize is set and multiple devices are provided, the number of stripes is determined automatically from the number of devices. Note that you have to specify a RAID level as well.
-p POOL, --pool POOL
 Pool to use to create the new volume.
-e [{luks,plain}], --encrypt [{luks,plain}]
 Create an encrypted volume. The extension to use can be specified.
-o MNT_OPTIONS, --mnt-options MNT_OPTIONS
 Mount options are specified with a -o flag followed by a comma separated string of options. This option is equivalent to the -o mount(8) option.
-v VIRTUAL_SIZE, --virtual-size VIRTUAL_SIZE
 Gives the virtual size for the new thinly provisioned volume. A size suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to define ‘power of two’ units. If no unit is provided, it defaults to kilobytes.

Info command

ssm info [-h] [item]

EXPERIMENTAL This feature is currently experimental. The output format can change and fields can be added or removed.

Show detailed information about all detected devices, pools, volumes and snapshots found on the system. The info command can be used either alone to show all available items, or you can specify a device, pool, or any other identifier to see information about the specific item.
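
For example, to show everything, or only the details of a single item (here the default lvm pool name is used purely for illustration):

# ssm info
# ssm info lvm_pool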

-h, --help show this help message and exit

List command

ssm list [-h] [{volumes,vol,dev,devices,pool,pools,fs,filesystems,snap,snapshots}]

Lists information about all detected devices, pools, volumes and snapshots found on the system. The list command can be used either alone to list all of the information, or you can request specific sections only.

The following sections can be specified:

{volumes | vol}
List information about all volumes found in the system.
{devices | dev}
List information about all devices found on the system. Some devices are intentionally hidden, such as cdrom or DM/MD devices, since those are actually listed as volumes.
{pools | pool}
List information about all pools found in the system.
{filesystems | fs}
List information about all volumes containing filesystems found in the system.
{snapshots | snap}
List information about all snapshots found in the system. Note that some back-ends do not support snapshotting and some cannot distinguish snapshot from regular volumes. In this case, ssm will try to recognize the volume name in order to identify a snapshot, but if the ssm regular expression does not match the snapshot pattern, the problematic snapshot will not be recognized.
-h, --help show this help message and exit

Remove command

ssm remove [-h] [-a] [items [items …]]

This command removes an item from the system. Multiple items can be specified. If the item cannot be removed for some reason, it will be skipped.

An item can be any of the following:

device
Remove a device from the pool. Note that this cannot be done in some cases where the device is being used by the pool. You can use the -f argument to force removal. If the device does not belong to any pool, it will be skipped.
pool
Remove a pool from the system. This will also remove all volumes created from that pool.
volume
Remove a volume from the system. Note that this will fail if the volume is mounted and cannot be forced with -f.
-h, --help show this help message and exit
-a, --all Remove all pools in the system.
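
For example, to remove a single volume and then an entire pool, including all of its volumes (the names below match the lvm defaults used elsewhere in this document):

# ssm remove /dev/lvm_pool/lvol001
# ssm remove lvm_pool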

Resize command

ssm resize [-h] [-s SIZE] volume [device [device …]]

Change the size of the volume and file system. If there is no file system, only the volume itself will be resized. You can specify a device to add into the volume pool prior to the resize. Note that the device will only be added into the pool if the volume size is going to grow.

If the device is already used in a different pool, then ssm will ask you whether or not you want to remove it from the original pool.

In some cases, the file system has to be mounted in order to be resized. This will be handled by ssm automatically by mounting the volume temporarily.

In addition to specifying the new size of the volume directly, a percentage can be specified as well. Specify --size 70% to resize the volume to 70% of its original size. Additionally, a percentage of the used or free pool space can be specified using the keywords USED or FREE respectively.
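
For example, to grow a volume by 20% of its current size, or to shrink it by 5G (the volume name matches the quick examples later in this document):

# ssm resize -s +20% /dev/lvm_pool/lvol001
# ssm resize -s-5G /dev/lvm_pool/lvol001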

Note that resizing a btrfs subvolume is not supported; only the whole file system can be resized.

-h, --help show this help message and exit
-s SIZE, --size SIZE
 New size of the volume. With the + or - sign the value is added to or subtracted from the actual size of the volume and without it, the value will be set as the new volume size. A size suffix of [k|K] for kilobytes, [m|M] for megabytes, [g|G] for gigabytes, [t|T] for terabytes or [p|P] for petabytes is optional. If no unit is provided the default is kilobytes. Additionally the new size can be specified as a percentage of the original volume size ([+][-]50%), as a percentage of free pool space ([+][-]50%FREE), or as a percentage of used pool space ([+][-]50%USED).

Check command

ssm check [-h] device [device …]

Check the file system consistency on the volume. You can specify multiple volumes to check. If there is no file system on the volume, this volume will be skipped.

In some cases the file system has to be mounted in order to check the file system. This will be handled by ssm automatically by mounting the volume temporarily.
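
For example, to check the file systems on two volumes at once (the volume names are only illustrative):

# ssm check /dev/lvm_pool/lvol001 /dev/lvm_pool/myvolume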

-h, --help show this help message and exit

Snapshot command

ssm snapshot [-h] [-s SIZE] [-d DEST | -n NAME] volume

Take a snapshot of an existing volume. This operation will fail if the back-end to which the volume belongs does not support snapshotting. Note that you cannot specify both NAME and DEST since those options are mutually exclusive.

In addition to specifying the snapshot size directly, a percentage can be specified as well. Specify --size 70% to indicate that the new snapshot size should be 70% of the origin volume size. Additionally, a percentage of the used or free pool space can be specified using the keywords USED or FREE respectively.
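
For example, to take a named snapshot, or one sized to 15% of the origin volume (the volume name is only illustrative):

# ssm snapshot -n backup_snap /dev/lvm_pool/lvol001
# ssm snapshot -s 15% /dev/lvm_pool/lvol001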

In some cases the file system has to be mounted in order to take a snapshot of the volume. This will be handled by ssm automatically by mounting the volume temporarily.

-h, --help show this help message and exit
-s SIZE, --size SIZE
 Gives the size to allocate for the new snapshot volume. A size suffix K|k, M|m, G|g, T|t, P|p, E|e can be used to define ‘power of two’ units. If no unit is provided, it defaults to kilobytes. This is optional and if not given, the size will be determined automatically. Additionally the new size can be specified as a percentage of the original volume size (50%), as a percentage of free pool space (50%FREE), or as a percentage of used pool space (50%USED).
-d DEST, --dest DEST
 Destination of the snapshot specified with an absolute path to be used for the new snapshot. This is optional and if not specified, the default backend policy will be used.
-n NAME, --name NAME
 Name of the new snapshot. This is optional and if not specified, the default backend policy will be used.

Add command

ssm add [-h] [-p POOL] device [device …]

This command adds a device into the pool. By default, the device will not be added if it’s already a part of a different pool, but the user will be asked whether or not to remove the device from its pool. When multiple devices are provided, all of them are added into the pool. If one of the devices cannot be added into the pool for any reason, the add command will fail. If no pool is specified, the default pool will be chosen. If the pool does not exist, it will be created using the provided devices.

-h, --help show this help message and exit
-p POOL, --pool POOL
 Pool to add the device into. If not specified, the default pool is used.

Mount command

ssm mount [-h] [-o OPTIONS] volume directory

This command will mount the volume at the specified directory. The volume can be specified in the same way as with mount(8), however in addition, one can also specify a volume in the format as it appears in the ssm list table.

For example, instead of finding out what the device and subvolume id of the btrfs subvolume “btrfs_pool:vol001” is in order to mount it, one can simply call ssm mount btrfs_pool:vol001 /mnt/test.

One can also specify OPTIONS in the same way as with mount(8).
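
For example, to mount the same subvolume read-only using its ssm name (the mount point is only illustrative):

# ssm mount -o ro btrfs_pool:vol001 /mnt/test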

-h, --help show this help message and exit
-o OPTIONS, --options OPTIONS
 Options are specified with a -o flag followed by a comma separated string of options. This option is equivalent to the same mount(8) option.

Migrate command

ssm migrate [-h] source target

Move data from one device to another. For btrfs and lvm their specialized utilities are used, so the data are moved in an all-or-nothing fashion and no other operation is needed to add/remove the devices or rebalance the pool. Devices that do not belong to a backend that supports specialized device migration tools will be migrated using dd.
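
For example, to move data from one device to another (the device names are only illustrative):

# ssm migrate /dev/sda /dev/sdb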

This operation is not intended to be used for duplication, because the process can change metadata and access to the data may be difficult.

-h, --help show this help message and exit

Back-ends

Introduction

Ssm aims to create a unified user interface for various technologies like Device Mapper (dm), Btrfs file system, Multiple Devices (md) and possibly more. In order to do so we have a core abstraction layer in ssmlib/main.py. This abstraction layer should ideally know nothing about the underlying technology, but rather comply with device, pool and volume abstractions.

Various backends can be registered in ssmlib/main.py in order to handle a specific storage technology, implementing methods to create, snapshot, or remove volumes and pools. The core will then call these methods to manage the storage without needing to know what lies underneath it. There are already several backends registered in ssm.

Btrfs backend

Btrfs is a file system with many advanced features, including volume management. This is the reason why btrfs is handled differently from other conventional file systems in ssm: it is used as a volume management back-end.

Pools, volumes and snapshots can be created with the btrfs backend, and here is what that means from the btrfs point of view:

pool

A pool is actually a btrfs file system itself, because it can be extended by adding more devices, or shrunk by removing devices from it. Subvolumes and snapshots can also be created. When a new btrfs pool is to be created, ssm simply creates a btrfs file system, which means that every new btrfs pool has one volume of the same name as the pool itself which cannot be removed without removing the entire pool. The default btrfs pool name is btrfs_pool.

When creating a new btrfs pool, the name of the pool is used as the file system label. If a btrfs file system without a label already exists on the system, a btrfs pool name will be generated for internal use in the following format: “btrfs_{device base name}”.

A btrfs pool is created when the create or add command is used with devices specified and a non-existent pool name.

volume

A volume in the btrfs back-end is actually just a btrfs subvolume, with the exception of the first volume created on btrfs pool creation, which is the file system itself. Subvolumes can only be created on the btrfs file system when it is mounted, but the user does not have to worry about that since ssm will automatically mount the file system temporarily in order to create a new subvolume.

The volume name is used as the subvolume path in the btrfs file system, and every object in this path must exist in order to create a volume. The volume name used for internal tracking, and which is visible to the user, is generated in the format “{pool_name}:{volume name}”, but volumes can also be referenced by their mount point.

Btrfs volumes are only shown in the list output when the file system is mounted, with the exception of the main btrfs volume, the file system itself.

Also note that btrfs volumes and subvolumes cannot be resized. This is mainly a limitation of the btrfs tools, which currently do not work reliably.

A new btrfs volume can be created with the create command.

snapshot

The btrfs file system supports subvolume snapshotting, so you can take a snapshot of any btrfs volume in the system with ssm. However, btrfs does not distinguish between subvolumes and snapshots, because a snapshot is actually just a subvolume with some blocks shared with a different subvolume. This means that ssm is not able to directly recognize a btrfs snapshot. Instead, ssm will try to recognize a special name format of the btrfs volume that denotes a snapshot. However, if a NAME that does not match the special pattern is specified when creating the snapshot, the snapshot will not be recognized by ssm and will be listed as a regular btrfs volume.

A new btrfs snapshot can be created with the snapshot command.

device
Btrfs does not require a special device to be created on.

Lvm backend

Pools, volumes and snapshots can be created with lvm, and they map closely to the corresponding lvm abstractions.

pool

An lvm pool is just a volume group in lvm language. It groups devices, and new logical volumes can be created out of the lvm pool. The default lvm pool name is lvm_pool.

An lvm pool is created when the create or add commands are used with devices specified and a non-existent pool name.

Alternatively, a thin pool can be created as a result of using the --virtual-size option to create a thin volume.

volume
An lvm volume is just a logical volume in lvm language. An lvm volume can be created with the create command.
snapshot
Lvm volumes can be snapshotted as well. When a snapshot is created from an lvm volume, a new snapshot volume is created, which can be handled like any other lvm volume. Unlike btrfs, lvm is able to distinguish a snapshot from a regular volume, so there is no need for the snapshot name to match a special pattern.
device
Lvm requires a physical volume to be created on the device, but with ssm this is transparent to the user.

Crypt backend

The crypt backend in ssm uses cryptsetup and dm-crypt target to manage encrypted volumes. The crypt backend can be used as a regular backend for creating encrypted volumes on top of regular block devices, or even other volumes (lvm or md volumes for example). Or it can be used to create encrypted lvm volumes right away in a single step.

Only volumes can be created with the crypt backend. This backend does not support pooling and does not require special devices.
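
A sketch of both uses, with illustrative device names: the first command creates an encrypted volume directly on top of a block device using the crypt backend, the second creates an encrypted lvm volume in a single step:

# ssm -b crypt create /dev/loop0
# ssm create -e luks -s 10G /dev/loop0 /dev/loop1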

pool
The crypt backend does not support pooling, and it is not possible to create a crypt pool or add a device into a pool.
volume

A volume in the crypt backend is the volume created by dm-crypt, which represents the data on the original encrypted device in unencrypted form. The crypt backend does not support pooling, so only one device can be used to create a crypt volume. It also does not support RAID or any device concatenation.

Currently two modes, or extensions, are supported: luks and plain. Luks is used by default. For more information about the extensions, please see the cryptsetup manual page.

snapshot
The crypt backend does not support snapshotting; however, if the encrypted volume is created on top of an lvm volume, the lvm volume itself can be snapshotted. The snapshot can then be opened using cryptsetup. It is possible that this might change in the future so that ssm will be able to activate the volume directly without the extra step.
device
The crypt backend does not require a special device to be created on.

MD backend

The MD backend in ssm is currently limited to gathering information about MD volumes in the system. You cannot create or manage MD volumes or pools, but this functionality will be extended in the future.

Multipath backend

The multipath backend in ssm is currently limited to gathering information about multipath volumes in the system. You cannot create or manage multipath volumes or pools, but this functionality will be extended in the future.

Environment variables

SSM_DEFAULT_BACKEND
Specify which backend will be used by default. This can be overridden by specifying the -b or --backend argument. Currently only lvm and btrfs are supported (see the example after this list).
SSM_LVM_DEFAULT_POOL
Name of the default lvm pool to be used if the -p or --pool argument is omitted.
SSM_BTRFS_DEFAULT_POOL
Name of the default btrfs pool to be used if the -p or --pool argument is omitted.
SSM_PREFIX_FILTER
When this is set, ssm will filter out all devices, volumes and pools whose name does not start with this prefix. It is used mainly in the ssm test suite to make sure that we do not scramble the local system configuration.
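
For example, to make btrfs the default backend for a single invocation (the device names are only illustrative):

# SSM_DEFAULT_BACKEND=btrfs ssm create /dev/loop3 /dev/loop4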

Availability

System storage manager is available from http://system-storage-manager.github.io. You can subscribe to storagemanager-devel@lists.sourceforge.net to follow the current development.

Installation

To install System Storage Manager into your system simply run:

python setup.py install

as root in the System Storage Manager directory. Make sure that your system configuration meets the requirements in order for ssm to work correctly.

Note that you can run ssm even without installation by using the local sources with:

bin/ssm.local

Requirements

Python 2.6 or higher is required to run this tool. System Storage Manager can only be run as root since most of the commands require root privileges.

There are other requirements listed below, but note that you do not necessarily need all dependencies for all backends. However if some of the tools required by a backend are missing, that backend will not work.

Python modules

  • argparse
  • atexit
  • base64
  • datetime
  • fcntl
  • getpass
  • os
  • pwquality
  • re
  • socket
  • stat
  • struct
  • subprocess
  • sys
  • tempfile
  • termios
  • threading
  • tty

System tools

  • tune2fs
  • fsck.SUPPORTED_FS
  • resize2fs
  • xfs_db
  • xfs_check
  • xfs_growfs
  • mkfs.SUPPORTED_FS
  • which
  • mount
  • blkid
  • wipefs
  • dd

Lvm backend

  • lvm2 binaries

Some distributions (e.g. Debian) have thin provisioning tools for LVM as an optional dependency, while others install it automatically. Thin provisioning without these tools installed is not supported by SSM.

Btrfs backend

  • btrfs progs

Crypt backend

  • dmsetup
  • cryptsetup

Multipath backend

  • multipath

Quick examples

List system storage:

# ssm list
----------------------------------
Device          Total  Mount point
----------------------------------
/dev/loop0    5.00 GB
/dev/loop1    5.00 GB
/dev/loop2    5.00 GB
/dev/loop3    5.00 GB
/dev/loop4    5.00 GB
/dev/sda    149.05 GB  PARTITIONED
/dev/sda1    19.53 GB  /
/dev/sda2    78.12 GB
/dev/sda3     1.95 GB  SWAP
/dev/sda4     1.00 KB
/dev/sda5    49.44 GB  /mnt/test
----------------------------------
------------------------------------------------------------------------------
Volume     Pool      Volume size  FS     FS size      Free  Type   Mount point
------------------------------------------------------------------------------
/dev/dm-0  dm-crypt     78.12 GB  ext4  78.12 GB  45.01 GB  crypt  /home
/dev/sda1               19.53 GB  ext4  19.53 GB  12.67 GB  part   /
/dev/sda5               49.44 GB  ext4  49.44 GB  29.77 GB  part   /mnt/test
------------------------------------------------------------------------------

Create a volume of the defined size with the defined file system. The default back-end is set to lvm and the lvm default pool name (volume group) is lvm_pool:

# ssm create --fs ext4 -s 15G /dev/loop0 /dev/loop1

The name of the new volume is ‘/dev/lvm_pool/lvol001’. Resize the volume to 10GB:

# ssm resize -s-5G /dev/lvm_pool/lvol001

Resize the volume to 100G; this may require adding more devices into the pool:

# ssm resize -s 100G /dev/lvm_pool/lvol001 /dev/loop2

Now we can try to create a new lvm volume named ‘myvolume’ from the remaining pool space with the xfs file system and mount it to /mnt/test1:

# ssm create --fs xfs --name myvolume /mnt/test1

List all volumes with file systems:

# ssm list filesystems
-----------------------------------------------------------------------------------------------
Volume                  Pool        Volume size  FS      FS size      Free  Type    Mount point
-----------------------------------------------------------------------------------------------
/dev/lvm_pool/lvol001   lvm_pool       25.00 GB  ext4   25.00 GB  23.19 GB  linear
/dev/lvm_pool/myvolume  lvm_pool        4.99 GB  xfs     4.98 GB   4.98 GB  linear  /mnt/test1
/dev/dm-0               dm-crypt       78.12 GB  ext4   78.12 GB  45.33 GB  crypt   /home
/dev/sda1                              19.53 GB  ext4   19.53 GB  12.67 GB  part    /
/dev/sda5                              49.44 GB  ext4   49.44 GB  29.77 GB  part    /mnt/test
-----------------------------------------------------------------------------------------------

You can then easily remove the old volume with:

# ssm remove /dev/lvm_pool/lvol001

Now let’s try to create a btrfs volume. Btrfs is a separate backend, not just a file system. That is because btrfs itself has an integrated volume manager. The default btrfs pool name is btrfs_pool:

# ssm -b btrfs create /dev/loop3 /dev/loop4

Now we create btrfs subvolumes. Note that the btrfs file system has to be mounted in order to create subvolumes. However, ssm will handle this for you:

# ssm create -p btrfs_pool
# ssm create -n new_subvolume -p btrfs_pool


# ssm list filesystems
-----------------------------------------------------------------
Device         Free      Used      Total  Pool        Mount point
-----------------------------------------------------------------
/dev/loop0  0.00 KB  10.00 GB   10.00 GB  lvm_pool
/dev/loop1  0.00 KB  10.00 GB   10.00 GB  lvm_pool
/dev/loop2  0.00 KB  10.00 GB   10.00 GB  lvm_pool
/dev/loop3  8.05 GB   1.95 GB   10.00 GB  btrfs_pool
/dev/loop4  6.54 GB   1.93 GB    8.47 GB  btrfs_pool
/dev/sda                       149.05 GB              PARTITIONED
/dev/sda1                       19.53 GB              /
/dev/sda2                       78.12 GB
/dev/sda3                        1.95 GB              SWAP
/dev/sda4                        1.00 KB
/dev/sda5                       49.44 GB              /mnt/test
-----------------------------------------------------------------
-------------------------------------------------------
Pool        Type   Devices     Free      Used     Total
-------------------------------------------------------
lvm_pool    lvm    3        0.00 KB  29.99 GB  29.99 GB
btrfs_pool  btrfs  2        3.84 MB  18.47 GB  18.47 GB
-------------------------------------------------------
-----------------------------------------------------------------------------------------------
Volume                  Pool        Volume size  FS      FS size      Free  Type    Mount point
-----------------------------------------------------------------------------------------------
/dev/lvm_pool/lvol001   lvm_pool       25.00 GB  ext4   25.00 GB  23.19 GB  linear
/dev/lvm_pool/myvolume  lvm_pool        4.99 GB  xfs     4.98 GB   4.98 GB  linear  /mnt/test1
/dev/dm-0               dm-crypt       78.12 GB  ext4   78.12 GB  45.33 GB  crypt   /home
btrfs_pool              btrfs_pool     18.47 GB  btrfs  18.47 GB  18.47 GB  btrfs
/dev/sda1                              19.53 GB  ext4   19.53 GB  12.67 GB  part    /
/dev/sda5                              49.44 GB  ext4   49.44 GB  29.77 GB  part    /mnt/test
-----------------------------------------------------------------------------------------------

Now let’s free up some of the loop devices so that we can try to add them into the btrfs_pool. We’ll simply remove the lvm volume myvolume and resize lvol001 so we can remove /dev/loop2. Note that myvolume is mounted, so we have to unmount it first:

# umount /mnt/test1
# ssm remove /dev/lvm_pool/myvolume
# ssm resize -s-10G /dev/lvm_pool/lvol001
# ssm remove /dev/loop2

Add device to the btrfs file system:

# ssm add /dev/loop2 -p btrfs_pool

Now let’s see what happened. Note that to actually see btrfs subvolumes you have to mount the file system first:

# mount -L btrfs_pool /mnt/test1/
# ssm list volumes
------------------------------------------------------------------------------------------------------------------------
Volume                         Pool        Volume size  FS      FS size      Free  Type    Mount point
------------------------------------------------------------------------------------------------------------------------
/dev/lvm_pool/lvol001          lvm_pool       15.00 GB  ext4   15.00 GB  13.85 GB  linear
/dev/dm-0                      dm-crypt       78.12 GB  ext4   78.12 GB  45.33 GB  crypt   /home
btrfs_pool                     btrfs_pool     28.47 GB  btrfs  28.47 GB  28.47 GB  btrfs   /mnt/test1
btrfs_pool:2012-05-09-T113426  btrfs_pool     28.47 GB  btrfs  28.47 GB  28.47 GB  btrfs   /mnt/test1/2012-05-09-T113426
btrfs_pool:new_subvolume       btrfs_pool     28.47 GB  btrfs  28.47 GB  28.47 GB  btrfs   /mnt/test1/new_subvolume
/dev/sda1                                     19.53 GB  ext4   19.53 GB  12.67 GB  part    /
/dev/sda5                                     49.44 GB  ext4   49.44 GB  29.77 GB  part    /mnt/test
------------------------------------------------------------------------------------------------------------------------

Remove the whole lvm pool, one of the btrfs subvolumes, and one unused device from the btrfs pool. Note that with btrfs, pools have the same name as their volumes:

# ssm remove lvm_pool /dev/loop2 /mnt/test1/new_subvolume/

Snapshots can also be done with ssm:

# ssm snapshot btrfs_pool
# ssm snapshot -n btrfs_snapshot btrfs_pool

With lvm, you can also create snapshots:

# ssm create -s 10G /dev/loop[01]
# ssm snapshot /dev/lvm_pool/lvol001

Now list all snapshots. Note that btrfs snapshots are actually just subvolumes with some blocks shared with the original subvolume, so there is currently no way to distinguish between them. ssm uses a little trick, searching for name patterns to recognize snapshots, so if you specify your own name for the snapshot, ssm will not recognize it as a snapshot, but rather as a regular volume (subvolume). This problem does not exist with lvm:

# ssm list snapshots
-------------------------------------------------------------------------------------------------------------
Snapshot                            Origin   Volume size     Size  Type    Mount point
-------------------------------------------------------------------------------------------------------------
/dev/lvm_pool/snap20120509T121611   lvol001      2.00 GB  0.00 KB  linear
btrfs_pool:snap-2012-05-09-T121313              18.47 GB           btrfs   /mnt/test1/snap-2012-05-09-T121313
-------------------------------------------------------------------------------------------------------------

For developers

We are accepting patches! If you’re interested in contributing to the System Storage Manager code, just check out the git repository on GitHub. Please base all of your work on the devel branch since it is more up-to-date and it will save us some work when merging your patches:

git clone --branch devel git@github.com:system-storage-manager/ssm.git storagemanager-code

Any form of contribution (patches, documentation, reviews or rants) is appreciated. See the Mailing list section.

Tests

System Storage Manager contains a regression testing suite to make sure that we do not break things that should already work. We recommend that every developer run these tests before sending patches:

python test.py

Tests in System Storage Manager are divided into four levels.

  1. First the doctest is executed.

  2. Then we have unittests in tests/unittests/test_ssm.py, which test the core of ssm (ssmlib/main.py). They check for basic things like required backend methods and variables, flag propagation, proper class initialization, and finally whether commands actually result in the proper backend callbacks. They do not require root permissions and do not touch your system configuration in any way. They should not invoke any shell command, and if they do, it’s a bug.

  3. The second part of the unittests is backend testing. Here we mainly test whether ssm commands result in the proper backend operations. It does not require root permissions and it does not touch your system configuration in any way. It should not invoke any shell command, and if it does, it’s a bug.

  4. And finally there are real bash tests located in tests/bashtests. The bash tests are divided into files; each file tests one command for one backend and contains a series of test cases followed by checks on whether the command created the expected result. In order to test real system commands, we have to create system devices to test on and not touch the existing system configuration.

    Before each test a number of devices are created using dmsetup in the test directory. These devices are used in the test cases instead of real devices. Real operations are performed on those devices just as they would be on real system devices. This phase requires root privileges and it will not be run otherwise. In order to make sure that ssm does not touch any existing system configuration, each device, pool and volume name includes a special prefix, and the SSM_PREFIX_FILTER environment variable is set to make ssm exclude all items that do not match this special prefix.

    The standard place for temporary files is within SSM tests directory. However, if for some reason you don’t want/can’t use this location, set the SSM_TEST_DIR environment variable to any other path.

    Even though we have tried hard to make sure that the bash tests do not change your system configuration, we recommend not running the tests with root privileges on your work or production system, but rather on a testing machine.

If you change or create new functionality, please make sure that it is covered by the System Storage Manager regression test suite to make sure that we do not break it unintentionally.

Important

Please, make sure to run full tests before you send a patch to the mailing list. To do so, simply run python test.py as root on your test machine.

Documentation

System Storage Manager documentation is stored in the doc/ directory. The documentation is built using Sphinx, which helps us avoid duplicating text for different types of documentation (man page, html pages, readme). If you are going to modify the documentation, please make sure not to modify the manual page, html pages or README directly, but rather modify the doc/*.rst and doc/src/*.rst files so that the change is propagated to all documents.

Moreover, parts of the documentation, such as the synopsis or the ssm command options, are parsed directly from the ssm help output. This means that when you add or change ssm arguments, the only thing you have to do is make the change in the ssmlib/main.py source code and then run make dist in the doc/ directory; all the documents will then be updated automatically.
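
For example, after changing arguments in ssmlib/main.py, regenerating the documents boils down to:

cd doc
make dist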

Important

Please make sure to update the documentation when you add or change ssm functionality, if the nature of the change requires it. Then regenerate all the documents using make dist and include the changes in the patch.

Mailing list

System Storage Manager developers communicate via the mailing list. The address of our mailing list is storagemanager-devel@lists.sourceforge.net and you can subscribe on the SourceForge project page https://lists.sourceforge.net/lists/listinfo/storagemanager-devel. Mailing list archives can be found at http://sourceforge.net/mailarchive/forum.php?forum_name=storagemanager-devel.

This is also the list where patches are sent and where the review process is happening. We do not have a separate user mailing list, so feel free to drop your questions there as well.

Posting patches

As already mentioned, we are accepting patches! And we are very happy for every contribution. If you’re going to send a patch in, please make sure to follow some simple rules:

  1. Before posting a patch, please run our regression testing suite to make sure that your change does not break someone else’s work. See the Tests section.
  2. If you’re making a change that might require a documentation update, please update the documentation as well. See the Documentation section.
  3. Make sure your patch has all the requisites: a short description, preferably at most 50 characters, summarizing the main idea of the change; a longer description explaining what was changed and why; and finally a Signed-off-by tag.
  4. The preferred way of accepting patches is through pull requests on GitHub, but it is possible to send them to the mailing list if you don’t have a GitHub account.
  5. If you’re going to send a patch to the mailing list, please send the patch inlined in the email body. It is much better for the review process.

Hint

You can use git to do all the work for you. git format-patch and git send-email will help you with creating and sending the patch to the mailing list.
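
A possible invocation, assuming a single commit on top of the devel branch (the exact flags are only one way to do it):

git format-patch --signoff -1
git send-email --to=storagemanager-devel@lists.sourceforge.net 0001-*.patch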
