New server, new SSDs. I put two Seagate Enterprise PRO 600 240GB SSDs in and created a RAID 1 virtual disk (VD).
Every time the server boots, the process gets past the PERC BIOS, which shows the logical volume is present. Then the next screen checks interface inventory, the LCD panel turns amber, and it says drives 0 and 1 have a problem. The caddies for both drives have their amber LEDs blinking quickly, but the drives seem to function fine.
If I reboot, the LCD turns blue (normal) and the errors are cleared, until the Interface Inventory screen comes up and the process repeats.
I had read elsewhere that the Seagate PRO 600s were compatible with the R620 and the H710P (monolithic).
Everything is updated (H710P driver and firmware, system BIOS). I'm not sure whether there's a BMC-equivalent firmware that also needs updating; is that the Lifecycle Controller?
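For what it's worth, this is roughly how I've been checking the firmware picture from the OS. It's just a sketch that wraps the Dell CLIs in Python; it assumes racadm and the OpenManage tools are installed on the host and that local (in-band) racadm access works, which may not be true on every setup.

```python
import subprocess

def run(cmd):
    """Run a command and return its output text (stdout, or stderr on failure)."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
        return result.stdout or result.stderr
    except FileNotFoundError:
        return f"{cmd[0]} not found -- is the tool installed on this host?"

if __name__ == "__main__":
    # 'racadm getversion' reports BIOS, iDRAC, and Lifecycle Controller versions.
    print(run(["racadm", "getversion"]))
    # OMSA's CLI gives a summary of installed firmware and driver versions.
    print(run(["omreport", "system", "version"]))
```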
I did install and run OpenManage Server Administrator, and it shows both drives marked with a yellow exclamation mark and a "non-critical" status. Digging deeper, there's something about "sense" reported on each physical drive.
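Here's the kind of query I've been running to dig into that "sense" detail. Again just a sketch: controller index 0 is a guess for a single-controller R620, and the output will obviously differ on other boxes.

```python
import subprocess

# Dump what OMSA exposes for the H710P: per-drive state/status (where the
# non-critical / sense information shows up), the RAID 1 VD state, and the
# controller summary. Controller index 0 is an assumption on my part.
CONTROLLER = "controller=0"

def report(*args):
    """Print the output of an omreport query, or the error text if it fails."""
    result = subprocess.run(["omreport", *args], capture_output=True, text=True)
    print(result.stdout or result.stderr)

if __name__ == "__main__":
    report("storage", "pdisk", CONTROLLER)   # physical drive state/status
    report("storage", "vdisk", CONTROLLER)   # RAID 1 virtual disk state
    report("storage", "controller")          # controller overview
```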
I'm just worried that these SSDs are going to have problems, even though they appear to be working correctly right now.
Ideas?