shnladd
(Shnladd)
November 23, 2016, 4:09am
1
I have a 4-disk mdadm RAID10 array in an external eSATA enclosure that I’m trying to get running on a fresh install of Ubuntu 16.04 LTS (a botched upgrade and the resulting video driver issues led me to conclude a fresh install was the best course of action). When I installed mdadm via apt-get and ran the assemble command (mdadm --assemble --scan), it reported that the array had been started with 3 drives out of 4.
The drives in the array prior to reinstalling were:
/dev/sde
/dev/sdf
/dev/sdg
/dev/sdh
As of right now, my “sudo mdadm --misc --detail /dev/md/PBR101” output shows the following:
Active devices: 3
Working devices: 4
Failed devices: 0
Spare devices: 1
spare rebuilding /dev/sde
active sync set-B /dev/sdf
active sync set-A /dev/sdg
active sync set-B /dev/sdh
It looks like /dev/sde dropped from the array for some reason. I did not physically replace the drive. I did not format the drive. I did not alter the drive or attach it to any other host. Rebuild is currently at 6% so I have some time before I can tell what’s going to happen.
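In case it helps, here is roughly what I’m using to inspect and watch it in the meantime (just a sketch; PBR101 is the name I gave the array, and --examine on the dropped member is simply where I’d expect to find a clue about why it was left out):
sudo mdadm --detail /dev/md/PBR101   # overall array state and per-member roles
sudo mdadm --examine /dev/sde        # superblock on the dropped member (event count, last update time)
cat /proc/mdstat                     # rebuild progress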
I have two questions.
The first question: When the rebuild is completed, will this drive automatically switch from being a spare to being the other drive in sync set A?
The second question: If the answer to the first question is “no”, what is the best means of non-destructively forcing this disk to be an active member in the set and provide a mirror to /dev/sdg? These are all 3TB disks and the 6TB usable space is about 80% full. It’s a lot of data to potentially lose and have to recover, and this array was built with the understanding that RAID10 was a very robust RAID model that would be easy to work with without worrying about data loss.
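From skimming the mdadm man page, my guess is that something like the following is what’s involved, but I haven’t run any of it against this array and would like confirmation first (sketch only):
sudo mdadm --manage /dev/md/PBR101 --re-add /dev/sde   # re-attach using the existing superblock, no wipe
sudo mdadm --manage /dev/md/PBR101 --add /dev/sde      # fallback: add as a fresh member and let it resync
cat /proc/mdstat                                       # watch the resync either way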
Can someone offer me some guidance here? I’m already on edge from having to do the reinstall and I really don’t want any more surprises. Patience and reassurance will be repaid with disproportionately excessive gratitude. Thank you in advance.
Gary-D-Williams
(Gary D Williams)
November 23, 2016, 11:37am
2
First question - Do you have a backup of the data?
shnladd
(Shnladd)
November 23, 2016, 1:40pm
3
Not this data - it’s an HTPC, so the data is both too voluminous and insufficiently important to warrant the space to back it up. On the one hand, I’m glad it’s noncritical entertainment data. On the other, it’s making me consider options for backup anyway.
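If I do go that route, even a periodic rsync of the genuinely irreplaceable folders to a single external disk would probably cover it (just a sketch - the mount points are placeholders, not my real paths):
rsync -a /mnt/PBR101/keepers/ /mnt/usb-backup/keepers/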
UPDATE: The rebuild completed and I checked the details again. Lo and behold, the disk shows “active sync set-A” and it seems all has ended well. Thanks for the help, thanks for entertaining my high-strung agonizing.
scottalanmiller
(Scott Alan Miller)
November 23, 2016, 5:45pm
4
Gratuitous point-outs… 1. this is why you don’t run GUIs on servers (is this a server, or maybe a workstation?), and 2. 16.04 LTS is one version old and the latest fixes are in 16.10. Running old Ubuntu, even a little old, leaves you with less stability than running current.
scottalanmiller
(Scott Alan Miller)
November 23, 2016, 5:46pm
5
Sounds like all is well now.
viviancollier2
(Vivian Collier)
December 5, 2016, 5:57am
6
Scott Alan Miller:
16.04 LTS is one version old and the latest fixes are in 16.10. Running old Ubuntu, even a little old, leaves you with less stability than running current.
Personally, with Ubuntu I’ve stuck with the LTS releases because around 10.04 - 10.10 the newer releases kept causing me issues. Ever since, I’ve stuck with the latest .04 release.
As for the OP - maybe just show us the output of
cat /proc/mdstat
That way we know if it finished the rebuild.
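For a healthy 4-disk RAID10 it should show all four members and [UUUU] at the end - something along these lines (purely illustrative; the md number, device order and block counts will differ on your box):
md127 : active raid10 sde[0] sdf[1] sdg[2] sdh[3]
      5860268032 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
If it’s still rebuilding there will also be a recovery progress line and the member count will read [4/3].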
shnladd
(Shnladd)
December 7, 2016, 4:21pm
7
Vivian Collier:
Maybe just show us the output of
cat /proc/mdstat
That way we know if it finished the rebuild.
EDIT: Never mind, wrong host. Will grab this when I get home tonight.
scottalanmiller
(Scott Alan Miller)
December 7, 2016, 4:30pm
8
Vivian Collier:
Personally, with Ubuntu I’ve stuck with the LTS releases because around 10.04 - 10.10 the newer releases kept causing me issues. Ever since, I’ve stuck with the latest .04 release.
We had the opposite. Had to drop LTS because it was unstable and Canonical only officially fully supports the latest release, not LTS. So if you want all the stability and updates and vendor support, you have to stay current. LTS is a name only, not actually a support guarantee. They provide some additional support for it, but never as much as the current release, it’s always partial.
shnladd
(Shnladd)
December 7, 2016, 6:12pm
9
Scott Alan Miller:
We had the opposite. Had to drop LTS because it was unstable and Canonical only officially fully supports the latest release, not LTS. LTS is a name only, not actually a support guarantee.
Damn, that sucks. Never realized that they didn’t actively support LTS releases - awfully deceptive naming they’ve got going on there.
scottalanmiller
(Scott Alan Miller)
December 7, 2016, 6:32pm
10
Yeah, just a bit. They do support them “some”, just not fully. So it does have “long term support”, just not full support, you see. Tricky English. But 12.04 gets more support today than does 11.10 or 12.10, that much is true. But 16.04 does not get as much support as 16.10. Which, if you think about it, is really the only way that it would work well. That’s why RHEL doesn’t do in-between releases.
viviancollier2
(Vivian Collier)
December 8, 2016, 12:38am
11
Scott Alan Miller:
16.04 does not get as much support as 16.10. Which, if you think about it, is really the only way that it would work well.
That’s really sucky.
Pasted from Ubuntu release cycle | Ubuntu
Ubuntu Server and desktop release end of life
Standard Ubuntu releases are supported for 9 months and Ubuntu LTS (long-term support) releases are supported for five years on both the desktop and the server. During that time, there will be security fixes and other critical updates.
I have moved a lot of stuff over to Debian. But maybe I should look at moving the LTS stuff up to the current release.
shannonwheeler
December 15, 2016, 1:56pm
12
Well I guess you got showed, eh? [that RAID 10 is robust and easy to work with]