
Recently I replaced the HDDs in a RAID 1 mdadm array due to a lack of space.

The structure is as follows:

- 2 HDD in RAID 1 (Originally 2x2TB, now 2x4TB)
  - LVM
    - Swap
    - BTRFS
      - 3 SubVols for /, /home, and /var
- 1 SSD for LVM writethrough cache (120GB)
- 1 USB for EFI booting and the /boot partition

I replaced the HDDs one by one to grow the array, and that worked out fine.
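For reference, the one-at-a-time replacement went roughly like this (a sketch only: the array name /dev/md0 and device names /dev/sdb1 and /dev/sdc1 are placeholders, not my actual devices):

```shell
# Fail and remove the first old 2 TB disk from the array
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# Physically swap in the new 4 TB disk, partition it, then add it back
mdadm /dev/md0 --add /dev/sdc1

# Watch the rebuild and wait for it to finish before touching the other disk
cat /proc/mdstat

# After repeating for the second disk, grow the array to the new disk size
mdadm --grow /dev/md0 --size=max
```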

However, once I got to LVM and tried to increase its size, the tooling told me that I have to remove the cache volume before I can resize the BTRFS volume.
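The growth steps I was attempting looked roughly like this (the VG/LV names are from my layout above; the PV path /dev/md0 is an assumption):

```shell
# Tell LVM that the underlying RAID physical volume has grown
pvresize /dev/md0

# Try to extend the cached BTRFS logical volume into the new free space
lvextend -l +100%FREE /dev/Cube/BtrfsVol
# -> this is the step that refuses to proceed while the LV is cached
```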

Trying to remove the cache with:

lvconvert --uncache /dev/Cube/BtrfsVol

Results in this error continually scrolling down the screen:

[root@Cube ~]# lvconvert --uncache /dev/Cube/BtrfsVol 
  Unknown feature in status: 8 2756/11264 256 915001/915040 2279791 11774797 1695959 1757411 781 744 850445 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 smq 0 rw - 
  Flushing 850445 blocks for cache Cube/BtrfsVol.
  Unknown feature in status: 8 2756/11264 256 915001/915040 2210922 11495817 2206901 1555153 0 0 914991 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 cleaner 0 rw - 
  Flushing 914991 blocks for cache Cube/BtrfsVol.
  Unknown feature in status: 8 2756/11264 256 915001/915040 2210922 11495817 2206901 1555153 0 0 914991 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 cleaner 0 rw - 
  Flushing 914991 blocks for cache Cube/BtrfsVol.
  Unknown feature in status: 8 2756/11264 256 915001/915040 2210922 11495817 2206907 1555153 0 0 914991 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 cleaner 0 rw - 
  Flushing 914991 blocks for cache Cube/BtrfsVol.
  Unknown feature in status: 8 2756/11264 256 915001/915040 2210922 11495817 2206907 1555153 0 0 914991 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 cleaner 0 rw - 
  Flushing 914991 blocks for cache Cube/BtrfsVol.
  Unknown feature in status: 8 2756/11264 256 915001/915040 2210923 11495817 2206985 1555159 0 0 914991 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 cleaner 0 rw - 
  Flushing 914991 blocks for cache Cube/BtrfsVol.
  Unknown feature in status: 8 2756/11264 256 915001/915040 2210923 11495817 2206985 1555159 0 0 914991 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 cleaner 0 rw - 
  Flushing 914991 blocks for cache Cube/BtrfsVol.
  Unknown feature in status: 8 2756/11264 256 915001/915040 2210923 11495817 2206985 1555159 0 0 914984 3 metadata2 writethrough no_discard_passdown 2 migration_threshold 2048 cleaner 0 rw - 
  Flushing 914984 blocks for cache Cube/BtrfsVol.

Even running lvconvert with --force changed nothing.

I did let this run overnight and it was still going by the time I checked on it this morning.

Examples I've seen of --uncache don't show or mention long output like this, so I'm not sure whether it's working as intended and I just need to leave it running longer, or whether it's an actual error.

Zelec

1 Answer


This morning I tried running:

lvconvert --splitcache /dev/Cube/BtrfsVol

And that seemed to fix my issue. Oddly, it deleted the cache volume outright rather than splitting it off, which is what the command name and docs suggest it should do.
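With the cache detached, the remaining growth steps would be roughly the following (a sketch, assuming the names from my layout above and that / is the mounted BTRFS root; flags may need adjusting):

```shell
# Extend the now-uncached LV into the free space in the volume group
lvextend -l +100%FREE /dev/Cube/BtrfsVol

# Grow the mounted BTRFS filesystem to fill the enlarged LV
btrfs filesystem resize max /
```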

Zelec