@graf Pleromer ran out of space
@graf Likewise, they haven't gotten to me yet.
@dcc @splitshockvirus poast database for your perusal, friend
[code]
Personalities : [raid10]
md127 : active raid10 nvme0n1p2[0] nvme3n1p1[3] nvme2n1p1[2] nvme1n1p1[1]
3748384768 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
bitmap: 8/28 pages [32KB], 65536KB chunk
/dev/md127:
Version : 1.2
Creation Time : Mon Oct 23 21:38:52 2023
Raid Level : raid10
Array Size : 3748384768 (3.49 TiB 3.84 TB)
Used Dev Size : 1874192384 (1787.37 GiB 1919.17 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Jan 16 03:43:41 2024
State : active
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : bitmap
Name : livecd:0
UUID : 9001ffa6:8f0d009c:ba814e3f:c93a2598
Events : 5670
    Number   Major   Minor   RaidDevice State
       0     259        5        0      active sync set-A   /dev/nvme0n1p2
       1     259        8        1      active sync set-B   /dev/nvme1n1p1
       2     259        7        2      active sync set-A   /dev/nvme2n1p1
       3     259        6        3      active sync set-B   /dev/nvme3n1p1
[/code]
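For reference, an array with this geometry (4 NVMe partitions, RAID10 with near=2 copies, 512K chunk, internal write-intent bitmap) could be created with something like the following. This is reconstructed from the --detail output above, not the actual command that was run:
[code]
# 4-device RAID10, two near copies, 512K chunk, internal bitmap
mdadm --create /dev/md127 --level=10 --layout=n2 --chunk=512 \
      --bitmap=internal --raid-devices=4 \
      /dev/nvme0n1p2 /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1
[/code]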
@dcc @splitshockvirus poast media:
[code]
md127 : active raid10 nvme4n1p1[4] nvme7n1p1[7] nvme3n1p1[3] nvme2n1p1[2] nvme5n1p1[5] nvme1n1p1[1] nvme0n1p1[0] nvme6n1p1[6]
7500963840 blocks super 1.2 512K chunks 2 near-copies [8/8] [UUUUUUUU]
bitmap: 2/56 pages [8KB], 65536KB chunk
/dev/md127:
Version : 1.2
Creation Time : Tue Oct 24 04:04:50 2023
Raid Level : raid10
Array Size : 7500963840 (6.99 TiB 7.68 TB)
Used Dev Size : 1875240960 (1788.37 GiB 1920.25 GB)
Raid Devices : 8
Total Devices : 8
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Jan 16 03:48:23 2024
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : bitmap
Name : livecd:0
UUID : c1e08cf9:1ec433fa:808457a1:3850d892
Events : 9362
    Number   Major   Minor   RaidDevice State
       0     259        2        0      active sync set-A   /dev/nvme0n1p1
       1     259        3        1      active sync set-B   /dev/nvme1n1p1
       2     259       13        2      active sync set-A   /dev/nvme2n1p1
       3     259       10        3      active sync set-B   /dev/nvme3n1p1
       4     259       14        4      active sync set-A   /dev/nvme4n1p1
       5     259       11        5      active sync set-B   /dev/nvme5n1p1
       6     259       15        6      active sync set-A   /dev/nvme6n1p1
       7     259       12        7      active sync set-B   /dev/nvme7n1p1
[/code]
@dcc @splitshockvirus working on hot and cold storage with nginx, but we have about 2.5 TB of live media right now:
[code]
/dev/md127      7.0T  2.5T  4.2T  37%  /var/lib/pleroma/uploads
[/code]
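A rough sketch of how a hot/cold split like that can be expressed in nginx (the paths and the cold-storage hostname are placeholders, not the actual config): serve a file from the local RAID10 if it exists, otherwise hand the request off to the cold provider.
[code]
# hot path: media that still lives on the local array
location /media/ {
    root /var/lib/pleroma/uploads;     # hot copy; real path layout may differ
    try_files $uri @cold;              # not here -> fall through to cold storage
}

# cold path: proxy misses to the external provider
location @cold {
    proxy_pass https://cold.example.com;   # placeholder cold-storage endpoint
}
[/code]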
poast local hot backup server; this is used for the database too:
[code]
md127 : active raid5 sdg[2] sdf[1] sde[0] sdh[4]
29094441984 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 44/73 pages [176KB], 65536KB chunk
/dev/md127:
Version : 1.2
Creation Time : Wed Dec 20 18:20:54 2023
Raid Level : raid5
Array Size : 29094441984 (27.10 TiB 29.79 TB)
Used Dev Size : 9698147328 (9.03 TiB 9.93 TB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Jan 16 03:50:48 2024
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : store:elf-rehab
UUID : 3f5ba02d:269ec369:f2771511:e41ba52c
Events : 38557
    Number   Major   Minor   RaidDevice State
       0       8       64        0      active sync   /dev/sde
       1       8       80        1      active sync   /dev/sdf
       2       8       96        2      active sync   /dev/sdg
       4       8      112        3      active sync   /dev/sdh
[/code]
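The thread doesn't show the actual backup job, but a minimal sketch of how the database could land on this box nightly (database name, user, hostname, and paths are placeholders) would be something like:
[code]
# dump the Pleroma Postgres database and ship it to the hot backup host
pg_dump -Fc -U pleroma pleroma > /backups/db/pleroma-$(date +%F).dump
rsync -a /backups/db/ backup@store:/backups/db/
[/code]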
cold storage is via another provider (one week of database backups, permanent backup of media)
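The provider and tooling aren't named in the thread, but the stated retention policy (one week of database dumps, media kept forever) maps naturally onto something like rclone, since sync mirrors deletions while copy never removes anything on the remote. Remote names and paths below are placeholders:
[code]
# rotate local DB dumps to one week, then mirror that window off-site
find /backups/db/ -name '*.dump' -mtime +7 -delete
rclone sync /backups/db/ coldremote:poast-db                  # remote mirrors the 7-day set
rclone copy /var/lib/pleroma/uploads coldremote:poast-media   # copy only adds, never deletes
[/code]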
Just finished provisioning this one while listening to some cuck on YT rank im@s characters
[code]
root@cn56.sfo-ca:~# cat /proc/mdstat
Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
md1 : active raid10 nvme1n1p3[3] nvme3n1p3[1] nvme0n1p3[2] nvme2n1p3[0]
12500080640 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
bitmap: 2/94 pages [8KB], 65536KB chunk
md0 : active raid10 nvme1n1p2[3] nvme3n1p2[1] nvme0n1p2[2] nvme2n1p2[0]
2093056 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
unused devices: <none>
root@cn56.sfo-ca:~# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Mon Jan 8 14:26:57 2024
Raid Level : raid10
Array Size : 12500080640 (11921.01 GiB 12800.08 GB)
Used Dev Size : 6250040320 (5960.50 GiB 6400.04 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon Jan 15 22:48:56 2024
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : bitmap
Name : cn56:1 (local to host cn56)
UUID : 8dadefa4:0cc53826:eac8f188:b9e1678f
Events : 38176
    Number   Major   Minor   RaidDevice State
       0     259        8        0      active sync set-A   /dev/nvme2n1p3
       1     259       12        1      active sync set-B   /dev/nvme3n1p3
       2     259        5        2      active sync set-A   /dev/nvme0n1p3
       3     259       15        3      active sync set-B   /dev/nvme1n1p3
[/code]
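One follow-up step that usually comes after provisioning like this (assuming a Debian-style layout; the thread doesn't say what distro the box runs) is recording the arrays so they assemble under stable names at boot:
[code]
# append ARRAY lines for md0/md1 and rebuild the initramfs so early boot can assemble them
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
[/code]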