---
title: Howto LVM
...

Support: Freenode/#lvm

LVM is a system for managing "logical volumes" (LVs) independently of the physical disks.
It provides a more flexible partitioning scheme than plain disk partitions.
In particular, it makes it much easier to grow a partition later.

## Installation

~~~
# apt install lvm2
~~~

### PV: LVM partitions

We first create partitions of type "Linux LVM" (code `8E`), for example with `parted` as sketched below.
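
A minimal sketch, assuming a blank second disk `/dev/sdb` (hypothetical device name, adapt to your setup):

~~~
# parted -s /dev/sdb mklabel msdos
# parted -s /dev/sdb mkpart primary 0% 100%
# parted -s /dev/sdb set 1 lvm on
~~~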

Then we initialize these partitions for LVM:

~~~
# pvcreate /dev/hda1
# pvcreate /dev/hdb1
~~~

**Note**: in some cases you will need the `-ff` option (for example to re-initialize a partition); see the sketch below.
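
For instance, a one-line sketch that force-wipes the existing LVM label on `/dev/hda1` (destructive, so only if you really mean it):

~~~
# pvcreate -ff /dev/hda1
~~~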

We can then list the system's LVM partitions with `pvdisplay` or `pvs`:

~~~
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/hda1
  VG Name               group1
  PV Size               124.84 GB / not usable 1.52 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              31959
  Free PE               7639
  Allocated PE          24320
  PV UUID               T12qj5-SEkv-zNrB-QUdG-tFua-b6ok-p1za3e

  --- Physical volume ---
  PV Name               /dev/hdb1
  VG Name               group1
  PV Size               13.08 GB / not usable 2.08 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              3347
  Free PE               3347
  Allocated PE          0
  PV UUID               CQEeDw-TYNK-n0nh-G7ti-3U3J-4zgk-a7xg2S

# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/hda1  group1 lvm2 a-   124.84G 29.84G
  /dev/hdb1  group1 lvm2 a-    13.07G 13.07G

# pvs -o pv_mda_count,pv_mda_free /dev/hda1
  #PMda #PMdaFree
      1    91.50K

# pvscan
  PV /dev/hda1   VG group1   lvm2 [124.84 GB / 29.84 GB free]
  PV /dev/hdb1   VG group1   lvm2 [13.07 GB / 13.07 GB free]
  Total: 2 [137.91 GB] / in use: 2 [137.91 GB] / in no VG: 0 [0   ]
~~~

If the underlying partition is resized, we can grow the PV accordingly:

~~~
# pvresize /dev/hda1
~~~

### VG: volume groups

Once our PVs are initialized, we create one or more volume groups (VGs) in which the logical volumes (LVs) will be carved out.

~~~
# vgcreate group1 /dev/hda1 /dev/hdb1
  Volume group "group1" successfully created
~~~

We can then list them with the `vgdisplay` or `vgs` commands:

~~~
# vgdisplay
  --- Volume group ---
  VG Name               group1
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  28
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               4
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               137.91 GB
  PE Size               4.00 MB
  Total PE              35306
  Alloc PE / Size       24320 / 95.00 GB
  Free  PE / Size       10986 / 42.91 GB
  VG UUID               zwApn7-SCSx-ju4h-6Y1R-x6ie-3wl0-uSE1DE

# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  group1   2   5   0 wz--n- 137.91G 42.91G

# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "group1" using metadata type lvm2
~~~

### LV: logical volumes

We can now carve out our final volumes:

~~~
# lvcreate -L5G -nfirstlvmvol group1
  Logical volume "firstlvmvol" created

# lvcreate -L10G -nsecondlvmvol group1
  Logical volume "secondlvmvol" created
~~~

This gives us usable block devices (accessible via `/dev/mapper/<VG>-<LV>` or `/dev/<VG>/<LV>`) that we can format:

~~~
# mkfs.ext3 /dev/mapper/group1-firstlvmvol
# mkfs.ext3 /dev/group1/secondlvmvol
~~~
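
A minimal sketch for mounting one of them, assuming a hypothetical mount point `/srv/data`; adapt the `/etc/fstab` line to your filesystem and options:

~~~
# mkdir -p /srv/data
# mount /dev/mapper/group1-firstlvmvol /srv/data

# echo '/dev/mapper/group1-firstlvmvol /srv/data ext3 defaults 0 2' >> /etc/fstab
~~~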

We can list the LVs with `lvdisplay` or `lvs`:

~~~
# lvdisplay
  --- Logical volume ---
  LV Name                /dev/group1/firstlvmvol
  VG Name                group1
  LV UUID                iHCvHy-ow0G-Idf2-hNOi-TRFe-BqvW-tmowLj
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                5.00 GB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/group1/secondlvmvol
  VG Name                group1
  LV UUID                S5GPY7-7q6n-1FCy-ydKA-Js2e-BAOy-wlgYQO
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                10.00 GB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

# lvs
  LV           VG     Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  firstlvmvol  group1 -wi-ao  5.00G
  secondlvmvol group1 -wi-ao 10.00G

# lvscan
  ACTIVE            '/dev/group1/firstlvmvol' [5.00 GB] inherit
  ACTIVE            '/dev/group1/secondlvmvol' [10.00 GB] inherit
~~~

### LVM snapshots

An LVM snapshot "freezes" a partition while it is live, for example so that a proper backup can then be taken at leisure afterwards.

Typical example: an SQL database stores its files in `/srv/sql`, which is on LVM (a shell sketch of the whole procedure follows the list):

* Stop the SQL database (or lock it)
* Take an LVM snapshot of `/srv/sql`
* Restart (or unlock) the SQL database: it will only have been down for a few seconds!
* Then mount the snapshot and take the backup at leisure (tar, dd, rsync, etc.)
* Finally, remove the snapshot (it would not survive a reboot anyway)
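
A minimal sketch of that procedure, assuming a hypothetical VG `group1`, an LV `sql` mounted on `/srv/sql`, a systemd service named `mysql`, and a backup destination `/backup/sql`:

~~~
# systemctl stop mysql
# lvcreate -L10G -s -n sqlsnap /dev/group1/sql
# systemctl start mysql

# mkdir -p /mnt/sqlsnap
# mount -o ro /dev/group1/sqlsnap /mnt/sqlsnap
# rsync -a /mnt/sqlsnap/ /backup/sql/
# umount /mnt/sqlsnap
# lvremove /dev/group1/sqlsnap
~~~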

**Note**: it seems you cannot really have two snapshots at the same time (see the session below, where creating a second snapshot invalidates the first).

~~~
# lvcreate -L100M -s -n snap /dev/mylvmtest/firstlvmvol
  Logical volume "snap" created

# lvdisplay
  --- Logical volume ---
  LV Name                /dev/mylvmtest/firstlvmvol
  VG Name                mylvmtest
  LV UUID                4vOXer-YH8x-AB9T-3MoP-BESB-7fyn-ce0Rho
  LV Write Access        read/write
  LV snapshot status     source of
                         /dev/mylvmtest/snap [active]
  LV Status              available
  # open                 0
  LV Size                500.00 MB
  Current LE             125
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/mylvmtest/snap
  VG Name                mylvmtest
  LV UUID                lF0wn9-7O3A-FacC-gnVM-SPwE-fCnI-5jb9wz
  LV Write Access        read/write
  LV snapshot status     active destination for /dev/mylvmtest/firstlvmvol
  LV Status              available
  # open                 0
  LV Size                500.00 MB
  Current LE             125
  COW-table size         100.00 MB
  COW-table LE           25
  Allocated to snapshot  0.02%
  Snapshot chunk size    8.00 KB
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:3

# mkdir /tmp/snap

# mount /dev/mylvmtest/snap /tmp/snap/

# lvcreate -L100M -s -n snap2 /dev/mylvmtest/firstlvmvol
  Logical volume "snap2" created

device-mapper: snapshots: Invalidating snapshot: Unable to allocate exception.
Buffer I/O error on device dm-3, logical block 530
lost page write due to I/O error on dm-3
Buffer I/O error on device dm-3, logical block 530
lost page write due to I/O error on dm-3
Buffer I/O error on device dm-3, logical block 530
lost page write due to I/O error on dm-3
Buffer I/O error on device dm-3, logical block 1
lost page write due to I/O error on dm-3
Buffer I/O error on device dm-3, logical block 1
lost page write due to I/O error on dm-3

# umount /tmp/snap

# lvremove /dev/mylvmtest/snap
  Do you really want to remove active logical volume "snap"? [y/n]: y
  Logical volume "snap" successfully removed
~~~

### LVM mirror: RAID1 with LVM

A little-known LVM feature is the ability to do RAID1.
Of limited interest a priori, except perhaps to get resizable RAID1 without managing an "MDADM + LVM" stack.

~~~
# pvcreate /dev/sda7
  Physical volume "/dev/sda7" successfully created

# pvcreate /dev/sdb5
  Physical volume "/dev/sdb5" successfully created

# lvcreate -L180G -m1 -nlvmirror --corelog vg00 /dev/sda7 /dev/sdb5
  Logical volume "lvmirror" created

# lvs -a
  LV                  VG   Attr   LSize   Origin Snap%  Move Log Copy%
  lvmirror            vg00 mwi-ao 180.00G                         8.60
  [lvmirror_mimage_0] vg00 iwi-ao 180.00G
  [lvmirror_mimage_1] vg00 iwi-ao 180.00G
~~~

To extend an LVM mirror, we first add PVs to the VG:

~~~
# pvcreate /dev/sda8
  Physical volume "/dev/sda8" successfully created

# pvcreate /dev/sdb6
  Physical volume "/dev/sdb6" successfully created

# vgextend vg00 /dev/sda8 /dev/sdb6
  Volume group "vg00" successfully extended
~~~

Then we deactivate the mirror, resize it, and reactivate it:

~~~
# lvextend -L+25G /dev/vg00/lvmirror
  Extending 2 mirror images.
  Mirrors cannot be resized while active yet.

# umount /dev/vg00/lvmirror

# lvchange -an /dev/vg00/lvmirror

# lvextend -L+25G /dev/vg00/lvmirror
  Extending 2 mirror images.
  Extending logical volume lvmirror to 205.00 GB
  Logical volume lvmirror successfully resized

# lvchange -ay /dev/vg00/lvmirror
~~~

Finally we resize the filesystem:

~~~
# e2fsck -f /dev/vg00/lvmirror
  e2fsck 1.40-WIP (14-Nov-2006)
  Pass 1: Checking inodes, blocks, and sizes
  Pass 2: Checking directory structure
  Pass 3: Checking directory connectivity
  Pass 4: Checking reference counts
  Pass 5: Checking group summary information
  /dev/vg00/lvmirror: 216749/23592960 files (13.5% non-contiguous), 44825506/47185920 blocks

# resize2fs /dev/vg00/lvmirror
  resize2fs 1.40-WIP (14-Nov-2006)
  Resizing the filesystem on /dev/vg00/lvmirror to 53739520 (4k) blocks.
  The filesystem on /dev/vg00/lvmirror is now 53739520 blocks long.
~~~
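
Note that it is this legacy `-m1` mirror type that forces the offline resize above. On a recent LVM, one would rather use the MD-backed `raid1` type when creating the volume, which can be extended while active; a sketch, assuming the same VG and PVs:

~~~
# lvcreate --type raid1 -m1 -L180G -nlvmirror vg00 /dev/sda7 /dev/sdb5
# lvextend -L+25G /dev/vg00/lvmirror
~~~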

## FAQ

### Remove an LVM partition from a VG

First move the allocated extents off the PV with `pvmove` (this requires the device-mapper "mirror" target in the kernel), then remove the PV from the VG:

~~~
# pvmove -v /dev/hde1
  Finding volume group "mylvmtest"
  Archiving volume group "mylvmtest" metadata.
  Creating logical volume pvmove0
  mirror: Required device-mapper target(s) not detected in your kernel

# vgreduce mylvmtest /dev/hde1
  Removed "/dev/hde1" from volume group "mylvmtest"
~~~
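
Optionally, once the PV is out of the VG, wipe its LVM label so it is no longer detected as a PV:

~~~
# pvremove /dev/hde1
~~~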

### Remove an LV

~~~
# lvremove -v /dev/testlvm/testlvm2
    Using logical volume(s) on command line
  Do you really want to remove active logical volume "testlvm2"? [y/n]: y
    Archiving volume group "testlvm" metadata.
    Found volume group "testlvm"
    Removing testlvm-testlvm2
    Found volume group "testlvm"
    Releasing logical volume "testlvm2"
    Creating volume group backup "/etc/lvm/backup/testlvm"
  Logical volume "testlvm2" successfully removed
~~~

### Grow an LV

~~~
# umount /dev/mylvmtest/thirdlvmvol

# lvextend -L+1G /dev/mylvmtest/thirdlvmvol
  Extending logical volume thirdlvmvol to 4,00 GB
  Logical volume thirdlvmvol successfully resized

# resize2fs -p /dev/mylvmtest/thirdlvmvol

# e2fsck -f /dev/mylvmtest/thirdlvmvol -C0
  e2fsck 1.35 (28-Feb-2004)
  Passe 1: vérification inodes, blocs, et des tailles
  Passe 2: vérification de la structure répertoire
  Passe 3: vérification de la connectivité répertoire
  Pass 4: vérification des compteurs de références
  Pass 5: vérification de l'information du sommaire groupe
  /dev/mylvmtest/thirdlvmvol: 11/393216 fichier (0.0% non contigus), […] blocs

# resize2fs -p /dev/mylvmtest/thirdlvmvol
  resize2fs 1.35 (28-Feb-2004)
  Resizing the filesystem on /dev/mylvmtest/thirdlvmvol to […]
  Le système de fichiers /dev/mylvmtest/thirdlvmvol a maintenant une taille de […]8576 blocs.
~~~
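
With ext3/ext4 the grow can also be done online, without unmounting; a one-line sketch using `-r`, which makes `lvextend` run the filesystem resize itself:

~~~
# lvextend -r -L+1G /dev/mylvmtest/thirdlvmvol
~~~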

### Shrink an LV

~~~
# lvscan
  ACTIVE            '/dev/mylvmtest/secondlvmvol' [60,00 GB] inherit

# umount /dev/mylvmtest/secondlvmvol

# e2fsck -f /dev/mylvmtest/secondlvmvol -C0

# resize2fs /dev/mylvmtest/secondlvmvol 50G
  resize2fs 1.41.12 (17-May-2010)
  Resizing the filesystem on /dev/mylvmtest/secondlvmvol to 13107200 (4k) blocks.
  The filesystem on /dev/mylvmtest/secondlvmvol is now 13107200 blocks long.

# lvreduce -L-10G /dev/mylvmtest/secondlvmvol
  WARNING: Reducing active logical volume to 50,00 GB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
  Do you really want to reduce secondlvmvol? [y/n]: y
  Reducing logical volume secondlvmvol to 50,00 GB
  Logical volume secondlvmvol successfully resized

# mount /dev/mylvmtest/secondlvmvol
~~~

### LVM and sizes

The sizes reported by LVM are not very reliable.

A concrete example, with a VG announcing:

~~~
VG Size               137.91 GB
PE Size               4.00 MB
Total PE              35306
Alloc PE / Size       23040 / 90.00 GB
Free  PE / Size       12266 / 47.91 GB
~~~

One is therefore inclined to believe there is space left… yet an `lvextend` or `lvcreate` fails.

For example:

~~~
# lvextend -L+10G /dev/group1/data
  Extending logical volume data to 30.00 GB
  device-mapper: resume ioctl failed: Invalid argument
  Unable to resume group1-data (253:3)
  Logical volume data successfully resized

# lvcreate -L5G -ntest group1
  device-mapper: resume ioctl failed: Invalid argument
  Unable to resume group1-test (253:4)
  /dev/group1/test: write failed after 0 of 4096 at 0: No space left on device
  Logical volume "test" created
~~~

### Restore

**/!\\ Handle with great care /!\\**

LVM backs up its metadata in `/etc/lvm/backup` and `/etc/lvm/archive`.
It can be restored if needed with the `vgcfgrestore` command, as sketched below.
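
A sketch, assuming the hypothetical VG `group1` and with its LVs deactivated; the archive file name below is made up, pick a real one from the `--list` output:

~~~
# vgcfgrestore --list group1
# vgcfgrestore -f /etc/lvm/archive/group1_00042-123456789.vg group1
~~~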

### Volume information

~~~
# dmsetup info -c
# dmsetup info
~~~

---
title: Howto LVM over RAID
...

* See [HowtoLVM]()
* See [HowtoRAIDLogiciel]()

For years we were reluctant to run LVM on top of RAID volumes, scared off by rumors of major data loss… but above all wary of stacking two complex technologies for something as critical as storage.

Then, little by little, after using software RAID (essential when there is no hardware RAID, and actually not that bad) and LVM (handy in many situations) independently, we came around to it.

## Creating the RAID

We create two partitions `/dev/sda9` and `/dev/sdb9` of type "Linux raid autodetect" (code `FD`).

~~~
# mdadm --create /dev/md8 --chunk=64 --level=raid1 --raid-devices=2 /dev/sda9 /dev/sdb9
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md8 started.

# mdadm --query /dev/md8
/dev/md8: 37.25GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.

# mdadm --detail /dev/md8
/dev/md8:
        Version : 1.2
  Creation Time : Sat Nov 20 22:23:28 2010
     Raid Level : raid1
     Array Size : 39060920 (37.25 GiB 40.00 GB)
  Used Dev Size : 39060920 (37.25 GiB 40.00 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Nov 20 22:23:28 2010
          State : clean, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

 Rebuild Status : 2% complete

           Name : cap:8  (local to host cap)
           UUID : 66e00042:c24c606d:a276cb64:bdb476cc
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8        9        0      active sync   /dev/sda9
       1       8       25        1      active sync   /dev/sdb9

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md8 : active raid1 sdb9[1] sda9[0]
      39060920 blocks super 1.2 [2/2] [UU]
      [==>..................]  resync = 14.7% (5757312/39060920) finish=7.2min speed=75992K/sec
~~~
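
To have the array reassembled automatically at boot, it is usually worth recording it in `mdadm.conf` and regenerating the initramfs; a sketch, assuming Debian paths:

~~~
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u
~~~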

## Creating the LVM

~~~
# pvcreate /dev/md8
  Physical volume "/dev/md8" successfully created

# vgcreate vg-over-raid /dev/md8
  Volume group "vg-over-raid" successfully created

# lvcreate -L37G -nsrv vg-over-raid
  Logical volume "srv" created
~~~

## Usage

~~~
# mkfs.ext3 /dev/vg-over-raid/srv
# mount /dev/vg-over-raid/srv /srv
~~~