[illumos-Developer] Webrev: New ARECA arcmsr driver

Chad Cantwell chad at iomail.org
Tue Mar 22 12:55:20 PDT 2011


On Tue, Mar 22, 2011 at 12:20:25PM -0700, Garrett D'Amore wrote:
> On Tue, 2011-03-22 at 12:13 -0700, Chad Cantwell wrote:
> > If by eliminating the CLI bypass logic you mean eliminating support for the CLI
> > entirely, I don't think that is a good idea.  I haven't used the 1880, but in
> > the 1240/60/80ML models with onboard NICs, the NIC was not a replacement for
> > the CLI.  It gave you console BIOS access via telnet, and a web interface,
> > both of which eased administration but neither of which replaced all of the CLI
> > functionality.  Also, there is a lot you can do with the CLI to automate
> > administration tasks in cron jobs that would be much more difficult using
> > only the onboard NIC.
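> > 
> > As a rough illustration (written from memory, so the binary name, install
> > path, and subcommands here are assumptions that may differ between CLI
> > versions), a couple of nightly cron entries could record disk and event
> > status so a failing drive shows up in the logs:
> > 
> >   # log RAID disk and event status every night (paths/commands approximate)
> >   0 2 * * * /opt/areca/cli64 disk info >> /var/log/areca-disk.log 2>&1
> >   5 2 * * * /opt/areca/cli64 event info >> /var/log/areca-event.log 2>&1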
> 
> Of course, we would prefer you didn't do those tasks-- ZFS and
> illumos/Solaris do best when given JBODs.
> 
> Still, the feedback here is useful.  (Unfortunately, the CLI I have
> doesn't work with the 1880 -- it segfaults, and I can't debug it as it
> is closed code.)

I agree, my preference is ZFS and JBODs.  The only Areca hardware RAID array I made
in OpenSolaris was for a small root pool a long time ago, before I knew how to
make ZFS rpool mirrors.  To be fair, though, there are times when booting off
a hardware array is more reliable than a ZFS rpool mirror, depending on how well
the BIOS handles trying each of the disks in turn (a disk might still show up in
the BIOS as the boot device but have failed, so the system halts while trying to
boot from it instead of falling back to the other half of the mirror).  I wouldn't
ordinarily be concerned about this, though, and don't think it alone would
typically justify having an Areca controller.
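
For anyone else reading later: attaching a second disk to an existing rpool is
simple once you know the commands.  On the OpenSolaris x86 builds I was running
it was roughly the following (device names here are made up, and you also need
to put the boot blocks on the new disk so the BIOS can boot from either half):

  # mirror the root pool onto a second disk
  zpool attach rpool c0t0d0s0 c0t1d0s0
  # install GRUB boot blocks on the newly attached disk
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0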

> > That said, my last experience with Areca's drivers in OpenSolaris (ONNV circa
> > build 134 to 147) wasn't great.  They seemed reliable enough but tended to hog
> > resources on the system.  Gigabit throughput over ZFS/SMB sometimes dropped to
> > as low as 10 Mbit/s while scrubbing a 16-disk raidz3 vdev on an Areca 1280ML,
> > with no similar slowdown on other disk controllers.  The same slowdowns showed
> > up in iperf tests, indicating the gigabit chip was being severely throttled by
> > the Areca I/O; the NIC itself was a known-good Intel chip using the e1000g
> > driver.  After trying various tweaks relating to interrupt assignment and ZFS
> > throttling, none of which really helped, I stopped using my old Areca
> > controllers in OpenSolaris and replaced them with LSI 1068e-based cards.
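> > 
> > For the record, the sort of thing I tried (the exact tunable value below is
> > just an example of the kind of experiments I ran, not a recommendation):
> > watching where the arcmsr and e1000g interrupts landed with intrstat(1M),
> > and capping the per-vdev I/O queue in /etc/system, e.g.:
> > 
> >   # watch per-CPU interrupt load at 5 second intervals
> >   intrstat 5
> > 
> >   # /etc/system: limit outstanding I/Os per vdev (takes effect after reboot)
> >   set zfs:zfs_vdev_max_pending = 4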
> 
> Good choice, really. :-)  I still think hardware RAID is a mistake if
> you have ZFS available.  Of course, if you don't have ZFS, then hardware
> RAID may be the lesser evil of the choices remaining to you...
> 
> > 
> > I only mention this as an aside.  Maybe the newer driver fixed this problem.
> > My main point is just that with a properly functioning Areca driver the CLI
> > is useful.
> 
> 
> That's good info, thanks.
> 
> Did you use the CLI on Solaris/OpenSolaris?

I have used it before.  It acted kind of funny, but it was usable.  If I recall
correctly, it detected the same Areca card in my system several times over, and
attaching the CLI to any of the duplicate entries worked.  It would usually work,
but sometimes crashed at random (under OpenSolaris it never got to the point of
crashing every time; if it crashed I could just run it again).

Chad

> 
> 	- Garrett
> 
> > 
> > Chad
> > 
> > On Mon, Mar 21, 2011 at 05:14:03PM -0700, Garrett D'Amore wrote:
> > > Thanks for the feedback Rocky.
> > > 
> > > Eliminating the CLI bypass logic would go a long way to reducing the
> > > complexity of this particular driver.
> > > 
> > > 	- Garrett
> > > 
> > > On Mon, 2011-03-21 at 16:31 -0700, Rocky Shek wrote:
> > > > Great to see Areca 1880 support in illumos!
> > > > 
> > > > We have used Areca products for a long time.
> > > > 
> > > > The good thing about them is that they have a dedicated management NIC for
> > > > RAID creation and maintenance.
> > > > 
> > > > With that, we no longer need the Areca CLI or archttp utilities the way we
> > > > did in the old days.
> > > > 
> > > > From our point of view, we are fine without CLI support.
> > > > 
> > > > Looking forward to the integration.
> > > > 
> > > > Rocky
> > > > -----Original Message-----
> > > > From: Garrett D'Amore [mailto:garrett at nexenta.com] 
> > > > Sent: Friday, March 18, 2011 5:15 PM
> > > > To: developer at lists.illumos.org
> > > > Subject: [illumos-Developer] Webrev: New ARECA arcmsr driver
> > > > 
> > > > Recently, ARECA supplied me with a code drop of their latest Solaris
> > > > code, which supports the Areca 1880 cards, adds support for the newer
> > > > interrupt model, and fixes some bugs.
> > > > 
> > > > I want to give a great big *THANK YOU* to Areca for supporting illumos
> > > > in this way!
> > > > 
> > > > That said, the code had a number of issues, which I've addressed.  (If
> > > > anyone wants the original code drop, or the list of issues that I've
> > > > fixed, let me know.)
> > > > 
> > > > The updated driver is posted here in webrev form:
> > > > 
> > > > http://mexico.purplecow.org/gdamore/webrev/arcmsr/
> > > > 
> > > > (Some of the changes are gratuitous reordering... I've not tried to
> > > > remove that reordering... the job was big enough as it was.)
> > > > 
> > > > This code is lint and cstyle clean.
> > > > 
> > > > There are a few more changes I'd like to consider making; I would like
> > > > feedback on them, and I think they would be better done as part of a
> > > > later change set:
> > > > 
> > > > a) Removal of the special firmware passthrough support.  It isn't safe,
> > > > and from what I can tell it doesn't work.  (Their utility crashes, and I
> > > > cannot get source code for it, limiting my ability to do any debugging.)
> > > > If anyone uses the Areca CLI or archttp utilities with arcmsr, I would
> > > > *really* like to know.  (You can configure and manage the RAID-mode
> > > > cards using the system BIOS, of course.)
> > > > 
> > > > b) Possibly refactor the code into object-oriented bits for each of the
> > > > type A, B, and C variants.  (I had done this, but the changes were too
> > > > large and risky, so I want to proceed with these lower-risk changes
> > > > first.)
> > > > 
> > > > c) Switch to SCSAv3.  This would give some performance boost and remove
> > > > a lot of complexity from the driver.  I've already got a version with
> > > > these changes, but they were again rather large and sweeping, so I've
> > > > shelved them for a future integration, both to mitigate risk and to
> > > > keep the review task a bit more manageable.
> > > > 
> > > > d) Remove reset(9e) and replace it with quiesce(9e) -- testing is needed
> > > > for this.
> > > > 
> > > > e) The code has support for handling 64-bit CDB addressing, but uses DMA
> > > > attributes that prevent it (for good reason!).  We should simplify that
> > > > code by removing the special cases for 64-bit CDB handling.  (Note
> > > > that this does not relate to the *buffer* addresses, which are fully
> > > > 64-bit capable.)
> > > > 
> > > > Anyway, I would really, really like feedback on this driver.  Ideally I
> > > > will be integrating this soon.
> > > > 
> > > > 	- Garrett