
The (current) state of (e)SATA port multipliers

For the most part, open source has been good to me. I can run my business on free software that is, for the most part, better than its commercial counterparts. I'm able to use frequently updated and relatively bug-free software in every facet of my life. However, a couple of things have been bugging me - one of them is SATA Port Multiplier (PMP, not to be confused with PM - Power Management!) support. 1TB drives hit an all-time low last week (at least in my ads), which got me motivated to check into the status of PMP support once again. Previously there weren't many people using it, and not much feedback or information was available. Today I have a bunch of good information, but still not the magic combination, which for me includes ZFS (as it is still the only filesystem out there to offer all the good checksumming, self-healing, etc.)

This post isn't going to talk about why ZFS rocks; most people already know why. I want a filesystem that has been built for integrity. This is for home archiving, and I'll be damned if bit rot is going to ruin my memories.

My goal was to find an OS with ZFS support, which currently means choosing between Solaris (or OpenSolaris, Nexenta, etc.) and FreeBSD 7 (which has "experimental" but usable ZFS support). I'd prefer Linux, as it has the fastest rate of bug fixes and driver support and I am already using it on one of my home machines, but ZFS doesn't run natively on it (FUSE is not native enough.)

What follows is the current status as I understand it, taken from direct emails exchanged with Adriaan de Groot (who said he might port PMP support to FreeBSD 7) and Tejun Heo (who maintains the PMP driver libata-pmp for Linux), plus Google searches, mailing list threads, and forum postings as of today.


Windows

I'm not even going to bother here. Most drivers for these controllers are coded for Windows first, and then, if we're lucky, an open source Linux/etc. version is put out. So Windows support is probably 100% - but you're stuck using NTFS or something (blech.)


FreeBSD

According to Adriaan:

"Don't bother, as port multipliers are not supported. I had some PMP support hacked in at some point, but without NCQ there's not much (performance) point. It *might* be that Soren has added NCQ / PMP in the meantime, but I'm not aware of it."


"PMP requires NCQ to be useful; otherwise you end up queuing all the requests for the different disks and performance goes down a fair bit - but like I said, it's *possible* and if all you're after is big storage, it will work. Anyway, it's not the cards' fault: there just isn't any NCQ in the (FreeBSD) kernel."

Tejun's comment:

"Although NCQ and PMP supports don't really have to go together. Non-NCQ PMP should work as good as direct non-NCQ. Nothing more to lose there because it's attached via PMP. Maybe they were talking about pushing multiple commands to a port. If the driver can't do that, PMP won't work too nice."

As of now according to Adriaan, there is no PMP support in FreeBSD (except possibly some highly experimental patches.)

Solaris (and variants)

From Adriaan:

"I run OSOL ... but OSOL also does not support PMP"

Additional links:


Mac OS X

Possibly? I did not look into this. It would be a shame if some controllers were supported with NCQ and the code not given back to FreeBSD... this guy claims to have what sounds like two 4:1 multipliers on his Mac:

http://www.mail-archive.com/[email protected]/msg13749.html


Linux

(There is a site that tries to keep up to date with SATA support in the Linux kernel; it is probably the best place to look, and as of this writing it still appears current: http://linux-ata.org/driver-status.html)

According to Tejun, who was extremely helpful answering my barrage of questions gracefully, it does look like Linux has some PMP options (now if only we had ZFS or BTRFS...!) with the following notes for each chipset:

Controller chipsets: SiI3124, SiI3132 - both will work. His comments:

"3124/3132 controllers are good"

"Performance-wise, all the PMPs on the market should be okay but 3124/3132s have certain limitations on PCI bus side and can't deliver full bandwidth concurrently from all drives attached through PMP. I don't remember the exact numbers but it maxes out after three drives or so."

"I have second gen chips from SIMG and they behave really good."

Note: SiI3124-1 is only 1.5Gbps. SiI3124-2 supports 3Gbps.
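Tejun's "maxes out after three drives or so" observation squares with a quick back-of-envelope check. The figures below are my own assumptions for illustration (an effective host-link ceiling around 250 MB/s and ~80 MB/s sustained per 2008-era drive), not numbers from Tejun:

```python
# Back-of-envelope: how many drives saturate the host-side link?
# Assumed figures (not from Tejun): ~250 MB/s effective link ceiling,
# ~80 MB/s sustained throughput per drive.
link_mb_s = 250
drive_mb_s = 80

def drives_to_saturate(link, per_drive):
    """Smallest drive count whose combined throughput meets the link ceiling."""
    n = 1
    while n * per_drive < link:
        n += 1
    return n

print(drives_to_saturate(link_mb_s, drive_mb_s))  # 4 - past three drives, adding more gains little
```

With faster drives the knee comes even earlier, which is consistent with the "three drives or so" figure above.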

Enclosure chipsets: SiI3726, SiI4726, Marvell 88SM4140  - all should work great. His comments:

"3726/4726 PMPs are the first gen PMPs and both are a bit quirky"

"Marvell port multipliers behave really well."

"That said, 3726/4726's work okay too. The only problem is that they always need their first slot occupied to operate correctly. As long as you occupy the first slot, there shouldn't be much problem."

Update: "There have been reports of 3726/4726 PMPs having trouble with 3Gbps link speed under certain configurations. I still don't know what's the exact cause.  It doesn't seem common but if you can maybe trying out with one drive before ordering the whole array is a good idea."

Note: the Marvell is only a 4:1 multiplier, the SiI ones are 5:1.

So the big winner here for the time being is Linux - it looks like I will be purchasing a SiI3124-2-based controller (probably this one, which claims to be a 3124A, a successor to the 3124-2) and a SiI3726- or SiI4726-based PMP enclosure. I am trying to determine which one will be the quietest right now. I don't believe hardware RAID-5 is supported on any PMP, only RAID 0, 1, and 10; it gets confusing because most advertise RAID, but they actually leverage software RAID under the hood. So I will probably be using mdadm for RAID5 at home.
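For reference, mdadm RAID5 across the five bays of a 5:1 enclosure gives (n - 1) drives of usable space, since one drive's worth of capacity goes to parity. A minimal sketch of that arithmetic (the 1TB figure matches the drives mentioned above; the function is mine):

```python
# Usable capacity of a RAID5 array: one drive's worth of space is
# consumed by distributed parity, the rest is usable.
def raid5_usable_tb(drive_tb, n_drives):
    if n_drives < 3:
        raise ValueError("RAID5 needs at least 3 drives")
    return drive_tb * (n_drives - 1)

print(raid5_usable_tb(1, 5))  # 4 TB usable from five 1TB drives
```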

Hopefully this is a useful (and correct) summary. I invite anyone with updated information to leave a comment, and I will correct it. Especially if you have experience with eSATA PMP enclosures and have anything to add! Not many people have ventured out into these waters yet, but it looks pretty neat - off a single PCI-X or PCI-e card you could chain 20 drives all with decent bandwidth and only have one cable for each set of five drives...
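The 20-drive figure is just fan-out arithmetic: a four-port card with a 5:1 PMP on each port. A sketch of that math, assuming (my number, not a quoted one) a ~300 MB/s payload ceiling per 3Gbps eSATA cable shared by each set of five drives:

```python
# Fan-out and worst-case per-drive bandwidth share for a PMP chain.
ports = 4          # eSATA ports on the host card
fanout = 5         # drives behind each SiI3726/4726 port multiplier
link_mb_s = 300    # assumed ~3Gbps SATA payload ceiling per cable

total_drives = ports * fanout
per_drive_share = link_mb_s / fanout  # when all five drives on a cable stream at once

print(total_drives)      # 20
print(per_drive_share)   # 60.0 MB/s each in the worst case
```

Each shelf still gets the full 300 MB/s when only one of its drives is busy; the share only matters under concurrent load.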

Categories: Software, Toys
  1. brian
    May 25th, 2008 at 06:49 | #1

    It's a little late, Highpoint's FreeBSD drivers are closed source, but they do support port multipliers just fine. I have a file server with ZFS and 4 drives on a port multiplier (SiI-3726, I think), which sounds exactly like what you want. ZFS on FreeBSD is stable enough for home use right now, as long as you have plenty of RAM (at least 2GB) and don't push it too hard.

  2. Alain Kelder
    February 9th, 2009 at 18:36 | #2

    Good info, thanks! I can report success with this Sil3132 based controller: http://www.sansdigital.com/adapters/ha-san-2espcie2.html connected to this enclosure: http://www.sansdigital.com/towerraid/tr2utb.html on Debian Lenny (haven't tested Etch with this config, but have every reason to believe it should work also).

    Some useful links I've found researching this subject:

    (1) A nice comparison chart of (e)SATA Controllers based on the Silicon Image chipsets:

    (2) Great info on Linux support status of various SATA chipsets:


  3. Anonymous
    July 14th, 2009 at 11:55 | #3

    A better solution would be to use a 4-port Hardware Port Multiplier (HPM) with an Oxford OXUFS936QSE chipset. It has an integrated controller on the HPM and basically raid mode selection is done via a rotary switch and setup is done by a push button switch.

    Supports FAST2 (2 drive RAID0 Striping), FAST4 (4 drive RAID0 Striping), SAFE2 (RAID1 Mirroring), SAFE FAST (Mirrored Striped), BIG 2 (2 drive Concatenation), BIG 4 (4 drive Concatenation), RAID 5 over 4 drives, or RAID 5+S using built-in hardware RAID

    No drivers to install so any linux distribution will work on this HPM. No more using software raid.

    Go to http://www.theraidbox.com to learn more

  4. mike
    July 14th, 2009 at 12:10 | #4

    Nice advertisement but mainly against the point 🙂

  5. Gary Coy
    October 4th, 2009 at 13:45 | #5

    Looks like theraidbox dot com is nothing more than a re-seller of Addonics hardware with drives thrown in (for a profit, I'd guess). Nothing special there.

    I'm looking for a good OpenSolaris solution. I want to use COMSTAR+ZFS to serve up iSCSI targets to various machines on my home network/lab.

    I'm considering running Linux (Ubuntu or CentOS) as a base OS if I have to (for driver support), then running OpenSolaris as a VirtualBox guest - using the base OS's drives as 'virtual drives' to map through to OpenSolaris. I'm really interested in the COMSTAR+ZFS option. I'd hate to have to do this. I want to run OpenSolaris natively, and serve up iSCSI from an external (Addonics) device. Will this work or am I up in the night?

  6. mike
    October 4th, 2009 at 13:54 | #6

    I would bring this up on the storage-discuss mailing list (and maybe zfs-discuss) and see what people say... I honestly couldn't help you there 🙂

  7. FireWire
    July 22nd, 2010 at 10:40 | #7

    Over 6 months ago this would have been impossible, but it is now....
    a completely driverless hardware RAID using port-multiplier technology allows you to have 10TB RAID5 without drivers

    This means you can use this RAID5 in Linux, Mac, Windows, Solaris or VMware for that matter

    http://www.datoptic.com/esata-hardware-raid-controller-spm394.html with LCD display
    http://www.datoptic.com/5x-drive-hardware-raid-controller.html without LCD display

  8. mike
    July 27th, 2010 at 01:33 | #8

    These are onboard RAID controllers in the unit. The idea here was to leverage the cable simplicity, performance and expandability of eSATA PMPs but have the host treat them as individual disks, most likely running ZFS as it is the most bit-rot-resistant filesystem out there right now.

    I am not sure I would buy an eSATA unit with onboard RAID - at that point why not just attach a NAS box and not have to have a controller machine deal with it?
