
NC BioGrid Testbed Hardware

One of the primary requirements of the NC BioGrid is to support a heterogeneous environment of hardware and operating systems that mirrors the existing IT infrastructures of the Testbed sites. To that end, the Testbed currently consists of three hardware and operating system platforms (Intel/Linux, SPARC/Solaris, and POWER4/AIX) across four sites (NCSC, NC State, UNC-CH, and Duke) connected via the NCREN network.
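
For reference, that platform mix can be summarized as a simple data structure. The Python sketch below is purely illustrative (it is not part of any Testbed software); the site and system names are taken from the listings that follow.

    # Illustrative summary of the Testbed platform mix described on this page.
    # Site and system names come from the listings below; the structure itself
    # is not part of any NC BioGrid software.
    TESTBED = {
        "NCSC (RTP)": [
            ("SunFire 3800", "SPARC/Solaris 8", "data grid server"),
            ("SunFire V880", "SPARC/Solaris 8", "compute node"),
            ("IBM p690", "POWER4/AIX 5.1", "compute node"),
            ("IBM eServer 1300 cluster", "Intel/Red Hat Linux 7.3", "compute/infrastructure"),
            ("Sun iFORCE rack", "SPARC/Solaris 8", "grid infrastructure"),
        ],
        "NC State": [
            ("SunFire V880", "SPARC/Solaris 8", "compute node"),
        ],
        "UNC-CH": [
            ("IBM eServer 1300 cluster", "Intel/Red Hat Linux 7.3", "compute nodes"),
        ],
        "Duke": [
            ("IBM eServer 1300 cluster", "Intel/Red Hat Linux 7.3", "compute nodes"),
        ],
    }

    if __name__ == "__main__":
        for site, systems in TESTBED.items():
            for name, platform, role in systems:
                print(f"{site}: {name} ({platform}) - {role}")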

NC Supercomputing Center / Research Triangle Park, NC

SunFire 3800 - Data Grid Server

  • Solaris 8
  • One Processor/Memory board, single domain
  • Two System Controllers
  • Four 900-MHz Superscalar SPARC V9 / UltraSPARC III Cu processors
  • 8-MB of ECC external cache per processor
  • 4-GB of system memory (four 1-GB DIMM's)
  • Sun Fireplane system interconnect (9.6 GBps sustained throughput)
  • cPCI Gigabit Ethernet NIC (optical, SC style connector) and integrated 10/100 NIC's
  • Two cPCI Fibre Channel network adapters, 100-MB/sec per channel
  • Six 18-GB UltraSCSI hard drives
  • Two Sun StorEdge T3 disk arrays, with eighteen 36.4-GB 10K RPM FC-AL hard drives configured in two ~280-GB RAID-5 sets, connected via FC-AL to the host system
  • Redundant power supplies, transfer units, and fan trays

SunFire V880 - Compute Node

  • Solaris 8
  • Sun HPC ClusterTools 4.0 and Sun ONE "Forte" Compiler Collection 7.0
  • Four Processor-Memory Modules
  • Eight 750-MHz Superscalar SPARC V9 / UltraSPARC III Cu processors
  • 64-KB data and 32-KB instruction on-chip L1 cache
  • 8-MB of L2 cache
  • 32-GB of system memory
  • Sun Fireplane system interconnect (9.6 GBps sustained throughput)
  • Integrated Gigabit Ethernet (optical, SC style connector) and 10/100 NIC's
  • Twelve 36.4-GB 10K RPM FC-AL hard drives across two disk backplanes
  • Three (N+1) power supplies and redundant cooling fan trays

IBM p690 - Compute Node

  • AIX 5.1
  • Thirty-two 1.3-GHz POWER4 processors
  • 128-GB of ECC "Chipkill" system memory
  • Four Multi-Chip Modules (MCM) - 8 processors, 32-GB of four-way interleaved memory, and 1.44-MB of L2 cache per MCM
  • 32-KB data and 64-KB instruction L1 cache
  • 512-MB L3 cache
  • 100-GBps between L1 and L2 cache; 10-GBps within an MCM; ~5-GBps between MCM's
  • Gigabit Ethernet (optical, SC style connector)
  • Fourteen 36-GB 10K RPM UltraSCSI hard drives

IBM eServer 1300 Linux Cluster

  • Red Hat Linux 7.3
  • One IBM xSeries 342 - Master Node for Cluster Administration
    • Two 1.0-GHz Intel Pentium III processors
    • 256-KB of L2 cache
    • 512-MB of system memory (two 256-MB DIMM's)
    • PCI Gigabit Ethernet (optical, SC style connector) and 10/100 NIC's
    • Integrated 10/100 NIC
    • Two 36-GB UltraSCSI 10K RPM hard drives w/ ServeRAID controller, configured for RAID 1
  • Sixteen IBM xSeries 330's - ten generic Compute Nodes; two Interactive Servers; one Globus GIIS Server; one Portal Server; one Kerberos Master; one Kerberos Slave (see the GIIS query sketch after this list)
    • Two 1.26-GHz Intel Pentium III processors
    • 256-KB of L2 cache
    • 512-MB of system memory (two 256-MB DIMM's)
    • Two integrated 10/100 NIC's
    • One 36-GB UltraSCSI 10K RPM hard drive
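
Because one of the xSeries 330 nodes runs the Globus GIIS, Testbed resource information can be pulled over LDAP. The sketch below uses the python-ldap module and assumes a Globus Toolkit 2-style MDS/GIIS listening on the conventional port 2135; the hostname is a placeholder and the base DN shown is only the common GT2 default, not necessarily the Testbed's actual configuration.

    # Minimal sketch of querying a GT2-style GIIS over LDAP with python-ldap.
    # The hostname is a placeholder; "Mds-Vo-name=local, o=grid" is the
    # conventional GT2 default base DN and may differ on a real deployment.
    import ldap

    GIIS_URL = "ldap://giis.example.org:2135"   # placeholder hostname
    BASE_DN = "Mds-Vo-name=local, o=grid"

    conn = ldap.initialize(GIIS_URL)
    conn.simple_bind_s()  # GIIS queries are typically anonymous

    # Pull every entry under the base DN and print its distinguished name.
    for dn, attrs in conn.search_s(BASE_DN, ldap.SCOPE_SUBTREE, "(objectClass=*)"):
        print(dn)

    conn.unbind_s()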

Sun iFORCE Accelerator Rack

  • Solaris 8
  • Two Sun E220R's w/ 2-GB of RAM - one for Avaki Bootstrap; one for Avaki Domain Controller
    • Two 450-MHz UltraSPARC II processors
    • 4-MB of ECC external cache per processor
    • 2-GB of system memory (eight 256-MB DIMM's)
    • Integrated 10/100 NIC
    • PCI Quad FastEthernet (QFE) 10/100 NIC
    • PCI Dual UltraSCSI card
    • Sun StorEdge D1000 UltraSCSI disk tray, split between the two systems, with four 18-GB hard drives, two per system
  • Two Sun E220R's w/ 1-GB of RAM - one for Avaki Grid Server; one for iPlanet Directory Server and Certificate Management Server; one for Avaki Domain Controller
    • Two 450-MHz UltraSPARC II processors
    • 4-MB of ECC external cache per processor
    • 1-GB of system memory (four 256-MB DIMM's)
    • Integrated 10/100 NIC
    • PCI Quad FastEthernet (QFE) 10/100 NIC
    • PCI Dual UltraSCSI card
    • Sun StorEdge D1000 UltraSCSI disk tray, split between the two systems, with six 18-GB hard drives, three per system
  • Two Sun Netra t1's - one for read-only iPlanet Directory Server and Certificate Management Server; one for Solaris Interactive Server
    • One 440-MHz UltraSPARC II processor
    • 512-MB of system memory
    • Two integrated 10/100 NIC's
    • One 18-GB SCSI hard drive

NC State University / Raleigh, NC

SunFire V880 - Compute Node

  • Solaris 8
  • Four processor-memory modules
  • Eight 750-MHz Superscalar SPARC V9 / UltraSPARC III Cu processors
  • 64-KB data and 32-KB instruction on-chip L1 cache
  • 8-MB of L2 cache
  • 32-GB of system memory
  • Sun Fireplane system interconnect (9.6 GBps sustained throughput)
  • Integrated Gigabit Ethernet (optical, SC style connectors) and 10/100 NIC's
  • Twelve 36.4-GB 10K RPM FC-AL hard drives across two disk backplanes
  • Three (N+1) power supplies and redundant cooling fan trays

University of North Carolina / Chapel Hill, NC

IBM eServer 1300 Linux Cluster - Compute Nodes

  • Red Hat Linux 7.3
  • Myrinet 2000 interconnect (~236-MBps sustained for one-way communication of large messages, and 312-MBps for two-way)
  • Platform LSF for job scheduling (see the submission sketch after this list)
  • One IBM xSeries 342 Master Node for cluster administration
    • Two 1.0-GHz Intel Pentium III processors
    • 256-KB of L2 cache
    • 512-MB of system memory (two 256-MB DIMM's)
    • PCI Gigabit Ethernet (optical, SC style connectors) and 10/100 NIC's
    • Integrated 10/100 NIC
    • PCI Myrinet card
    • Two 36-GB UltraSCSI 10K RPM hard drives w/ ServeRAID controller, configured for RAID 1
  • Sixteen IBM xSeries 330 Slave Nodes for compute
    • Two 1.26-GHz Intel Pentium III processors
    • 256-KB of L2 cache
    • 512-MB of system memory (two 256-MB DIMM's)
    • Two integrated 10/100 NIC's
    • PCI Myrinet card
    • One 36-GB UltraSCSI 10K RPM hard drive
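
Platform LSF accepts batch work through its bsub command; the sketch below builds a submission for an MPI job spread across the Myrinet-connected compute nodes. It is only an illustration of the bsub interface: the queue name, executable path, and processor count are placeholders, and the exact MPI launch wrapper depends on how the cluster is configured.

    # Illustrative LSF submission for an MPI job on the Myrinet cluster.
    # Queue name, executable path, and processor count are placeholders;
    # bsub's -J, -n, -q, and -o options are standard LSF flags.
    import subprocess

    def submit_mpi_job(executable, nprocs=16, queue="normal"):
        cmd = [
            "bsub",
            "-J", "biogrid-test",          # job name
            "-n", str(nprocs),             # number of processors requested
            "-q", queue,                   # target queue (placeholder name)
            "-o", "biogrid-test.%J.out",   # output file, %J expands to the job ID
            executable,
        ]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        submit_mpi_job("./my_mpi_app")     # placeholder executable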

Duke University / Durham, NC

IBM eServer 1300 Linux Cluster - Compute Nodes

  • Red Hat Linux 7.3
  • Sun Grid Engine 5.3 for job scheduling (see the submission sketch after this list)
  • One IBM xSeries 342 Master Node for cluster administration
    • Two 1.0-GHz Intel Pentium III processors
    • 256-KB of L2 cache
    • 512-MB of system memory (two 256-MB DIMM's)
    • PCI Gigabit Ethernet (optical, SC style connectors) and 10/100 NIC's
    • Integrated 10/100 NIC
    • Two 36-GB UltraSCSI 10K RPM hard drives w/ ServeRAID controller, configured for RAID 1
  • Sixteen IBM xSeries 330 Slave Nodes for compute
    • Two 1.26-GHz Intel Pentium III processors
    • 256-KB of L2 cache
    • 512-MB of system memory (two 256-MB DIMM's)
    • Two integrated 10/100 NIC's
    • One 36-GB UltraSCSI 10K RPM hard drive
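
Sun Grid Engine takes job scripts through qsub. The following is a minimal sketch that writes a throwaway job script and submits it; the script body and job name are placeholders, and any queue or parallel-environment details would need to match the actual Grid Engine configuration on the Duke cluster.

    # Illustrative Sun Grid Engine submission: write a job script and hand it
    # to qsub.  The #$ -N and #$ -cwd directives are standard SGE usage;
    # the command being run is a placeholder.
    import subprocess

    JOB_SCRIPT = """#!/bin/sh
    #$ -N biogrid-test
    #$ -cwd
    echo "running on `hostname`"
    """

    with open("biogrid-test.sh", "w") as f:
        f.write(JOB_SCRIPT)

    subprocess.run(["qsub", "biogrid-test.sh"], check=True)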