(2) [OCP-Storage-NVMe] NVMe-HDD requirements questions to subgroup members


Junhyeok Im (임준혁)
 

Hello Mohamad,

 

Here is my question after reviewing the materials you kindly shared (thank you very much!).

 

AFAIK, PCI-SIG has recently been discussing dropping PCIe Gen5 support on SFF-8639,

and there was also a mention that OCP (NVMe-HDD) has capped its use at Gen4.

I understand that the current requirement for NVMe-HDD in this workstream is Gen4,

but I still wonder whether we will really have no need for Gen5 in the future.

 

I am sorry for missing the previous meetings, and thank you in advance for answering a newbie's questions.

 

Thanks, 

Junhyeok Im

--------- Original Message ---------

Sender : Mohamad Elbatal via groups.io <mohamad.elbatal@...>

Date : 2021-04-10 00:08 (GMT+9)

Title : Re: [OCP-Storage-NVMe] NVMe-HDD requirements questions to subgroup members

To : OCP-Storage-NVMeHDD@OCP-All.groups.io<OCP-Storage-NVMeHDD@OCP-All.groups.io>

CC : Jason Adrian<jason.adrian@...>, Dave Landsman<Dave.Landsman@...>, Curtis Stevens<curtis.stevens@...>, Tim Walker<tim.t.walker@...>, Jim Hatfield<james.c.hatfield@...>, Alvin Cox<alvin.cox@...>, Matt Shumway<matt.l.shumway@...>

 

Thanks for summarizing the message, Jason. 

 

It makes a lot of sense for a Cloud High-Density storage subsystem to use the Single-lane option with SRIS/SRNS support and then leverage the existing high-volume SATA connector for the lowest $/slot Simplex NVMe-HDD enclosure. Most Cloud applications should be satisfied with Single-lane PCIe Gen3 (~800 MB/s) for Sequential-Write and Random-Read workloads, even from the fastest Dual-Actuator drive, for the next 5-8 years.
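
For reference, a quick back-of-the-envelope check of that ~800 MB/s figure in Python. Only the Gen3 line rate and 128b/130b encoding are fixed by the PCIe spec; the protocol-efficiency factor and the drive's sequential rate below are illustrative assumptions, not measured values:

    # Rough sanity check of the ~800 MB/s single-lane PCIe Gen3 figure.
    line_rate_gtps = 8.0            # PCIe Gen3 line rate, GT/s per lane
    encoding = 128.0 / 130.0        # Gen3 uses 128b/130b encoding
    protocol_efficiency = 0.85      # assumed TLP/DLLP/flow-control overhead

    raw_mb_s = line_rate_gtps * 1e9 * encoding / 8 / 1e6    # ~985 MB/s payload line rate
    usable_mb_s = raw_mb_s * protocol_efficiency            # ~837 MB/s after overhead

    dual_actuator_seq_mb_s = 550    # assumed aggregate sequential rate of a
                                    # near-term dual-actuator HDD (illustrative)

    print(f"raw x1 Gen3: {raw_mb_s:.0f} MB/s, usable: {usable_mb_s:.0f} MB/s")
    print("headroom over the drive:", usable_mb_s > dual_actuator_seq_mb_s)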

 

It also makes a lot of sense for an Enterprise OEM High-Density storage subsystem to use the Dual-lane option with SRIS/SRNS support and leverage the existing SAS connector in existing Chassis and Baseplanes/Midplanes, and then later upgrade to new NVMe IOM cards with Storage-PCIe-Switches instead of SAS-Expanders, with minimal added investment, to move into the NVMe-oF composable infrastructure.

 

I personally think the I2C trees and RefClks to each drive make sense for lower-drive-count storage solutions, where the extra clock lanes and I2C fanouts are not significantly cumbersome; however, in a High-Density storage solution, routing the extra I2C and RefClks instead of using In-band management and spread-spectrum clock sampling is going to create uncompetitive storage enclosure solutions.
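
To put a rough number on that routing cost, here is a small illustrative calculation; the 96-slot count is a made-up example, and the per-slot signal breakdown follows the REFCLK (E7/E8) and SMBus (E23/E24) pins covered in the pin discussion below:

    # Back-of-the-envelope routing cost of per-drive RefClk + I2C/SMBus in a
    # high-density enclosure. The slot count is an illustrative assumption.
    drive_slots = 96                 # hypothetical high-density JBOD

    refclk_signals_per_slot = 2      # REFCLK+/- (E7/E8); doubles if REFCLKB (E1/E2) is also used
    smbus_signals_per_slot = 2       # SMBCLK/SMBDAT (E23/E24)

    per_slot_extra = refclk_signals_per_slot + smbus_signals_per_slot
    total_extra_signals = per_slot_extra * drive_slots

    print(f"extra signals per slot: {per_slot_extra}")
    print(f"extra routed signals across the enclosure: {total_extra_signals}")
    # With SRIS/SRNS (drive-local clocking) and in-band management,
    # none of these need to be routed on the backplane.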

 

Mohamad

 


From: OCP-Storage-NVMeHDD@OCP-All.groups.io on behalf of Jason Stuhlsatz via groups.io
Sent: Friday, April 9, 2021 7:06 AM
To: OCP-Storage-NVMeHDD@OCP-All.groups.io
Cc: Jason Adrian; Dave Landsman; Curtis Stevens; Tim Walker; Jim Hatfield; Alvin Cox; Matt Shumway
Subject: Re: [OCP-Storage-NVMe] NVMe-HDD requirements questions to subgroup members

 

I note that based on Mohamad's pinout list:

 

If no I2C, no 2nd channel (lane 1), and SRIS required, then the connector is just a standard SATA connector.

If the 2nd channel is added, it's just a standard SAS connector.

Add the I2C and/or RefClks, and in my experience the connector vendors would rather just sell the fully pinned SFF-8639 than create yet another pin-count version besides the above two.  Granted, I'm not a connector vendor, and OCP NVMe-HDD volumes may prod a change of mind.
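
A minimal sketch of that connector-selection logic in Python (the function and flag names are made up for illustration; the mapping itself follows the note above):

    # Assumes the drive supports SRIS/SRNS, per the discussion in this thread.
    def pick_connector(lanes: int, needs_i2c: bool, needs_refclk: bool) -> str:
        if not needs_i2c and not needs_refclk:
            if lanes == 1:
                return "standard SATA connector"
            if lanes == 2:
                return "standard SAS connector (SFF-8482)"
        # Anything needing I2C/RefClk (or more lanes) falls back to U.3 SFF-8639,
        # fully pinned or a reduced-pin variant.
        return "SFF-8639"

    # Example: cloud-style simplex single-lane NVMe-HDD, in-band managed, SRIS
    print(pick_connector(lanes=1, needs_i2c=False, needs_refclk=False))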

 

 

 

 

Jason Stuhlsatz

Board Architect

Global Board Engineering

jason.stuhlsatz@...

 

678-728-1406

4385 River Green Pkwy

Duluth, GA  30096

 

 

On Wed, Apr 7, 2021 at 5:49 PM Mohamad Elbatal via groups.io <mohamad.elbatal=seagate.com@groups.io> wrote:

 Happy Wednesday everyone!

 

As we discussed last week, I'm sending this email to get everyone within the NVMe-HDD OCP subgroup to voice their clear and open position regarding three areas in question within the proposed OCP-Subgroup NVMe-HDD Connector, Drive, and Slot requirements:

  1. A Drive & Slot Requirement related set of questions: Should an Initial-Minimum-Drive-Power-Requirement be specified in the OCP NVMe-HDD drive specification, for a given NVMe-HDD to be able to power up and negotiate a Max-Slot-Power for Spin-up? (The answer should include a Minimum-Drive-Current-Required per voltage rail. The idea is that when the drive is powered up and is configured to hold off on spinning up until it negotiates a higher slot power for Spin-up, the system must supply a specific Minimum Current Requirement on each of the +5V and +12V voltage rails to enable the drive to function and process commands. A rough sketch of this bring-up flow appears after question 3 below.) Possible answers are:
    1. Yes, and the Minimum Required Currents are x-Amps for +5V and y-Amps for +12V rails. 
      1. Please provide the required Currents if you are an OCP Drive supplier.
      2. Please provide information on a typical OCP High-Density HDD enclosure slot's current capability on the +5V and +12V rails at start of day, if you are a system supplier.
    2. No, since traditional OCP system architects have always staggered spin-up, and we will continue to provide adequate power for a specific number of HDDs to Spin-up Automatically on a pre-architected legacy SAS and SATA Spin-up power budget per slot.
  2. A Connector Requirement related set of questions: Last week we discussed using the existing SFF-8639 connector requirements as the starting point and basis for the future OCP NVMe-HDD connector requirements. That said, some of the OCP NVMe-HDD subgroup members would prefer to see a subset of the SFF-8639 high-speed connector pins (on the right side of the connector, images attached to this thread) omitted or removed from the OCP NVMe-HDD version of the SFF-8639 connector requirements, depending on the needs of a specific Drive or a specific System. Here is the current proposal, with a number of TBDs associated with each non-essential pin. Please respond if you disagree with the stated Optional or Required status below for each marked TBD:
    1.   Up to Eleven new pins on the U.3 SFF-8639 connector could be needed or optional for the NVMe-HDD (the pins that are not needed are listed first, followed by a summary and then the eleven E-side pins in question):
       

      Pins that are neither needed nor essential for Single-Lane NVMe-HDD support:

      • E9, E12, E15:  (TBD - Not Required - Open)

      • E10, E11, E13, E14: (TBD - Not Required - Open)

      • E16: HPT1 (TBD - Not Required - should be grounded on HDD PCB)

      • E17-E22: (TBD - Not Required - Open)

      • S8-S14: 2nd NVMe Port (TBD - Optional for the Systems Slots, but Required support from NVMe-HDD)

      • S15: HPT0 (TBD - Not Required - should be grounded on HDD PCB)

      • S16-S22: 3rd NVMe Port (TBD - Not Required)

      • S23-S28: 4th NVMe Port (TBD - Not Required)

      All Twenty-Nine existing SFF-8482 pins are required for a Dual-Lane NVMe-HDD drive.
      The bottom line is that one can design an NVMe-HDD system with only a single lane of NVMe Gen3 or Gen4 using the lowest-cost SATA-type connector, or use a dual-lane NVMe Gen3 or Gen4 NVMe-HDD using the equivalent of a SAS connector, or pay a little extra and use the modified SFF-8639 connector with reduced pins, which we would have to name and potentially take to SNIA to get a designation of its own. Please provide your comments or disagreements.

      •   E1: REFCLKB+ (TBD - Optional to the System/Slot that supports SRIS or SRNS - Drive must always support SRIS or SRNS in the absence of REFCLK)
         
      •   E2: REFCLKB-  (TBD - Optional to the System/Slot that supports SRIS or SRNS - Drive must always support SRIS or SRNS in the absence of REFCLK)
         
      •   E3: +3.3Vaux  (TBD - Optional for System/Slot and Drive - For systems requiring U.3-SSD support: SMBus EEPROM power = 1mA SMBus inactive & 5mA SMBus active power, or +12V Active)
         
      •   E4: PERSTB# (TBD - Optional to the System/Slot but Required support from the Drive)
         
      •   E5: PERST# (TBD - Optional to the System/Slot but Required support from the Drive)
         
      •   E6: IFDet2# (TBD - Optional to the System/Slot, but required Pull down on HDD PCB - See attached SFF-TA-1005 table)
         
      •   E7: REFCLK+ (TBD - Optional to the System/Slot that supports SRIS or SRNS - Drive must always support SRIS or SRNS in the absence of REFCLK)
         
      •   E8: REFCLK- (TBD - Optional to the System/Slot that supports SRIS or SRNS - Drive must always support SRIS or SRNS in the absence of REFCLK)
         
      •   E23: SMBCLK (TBD - Optional to the System/Slot, but Required support from the Drive)
         
      •   E24: SMBDAT (TBD - Optional to the System/Slot, but Required support from the Drive)
         
      •   E25: DualPortEn# (TBD - Optional to the System/Slot and Drive, but Reserved specifically for enterprise solutions)
         
       
  3. A Connector Requirement related question: Can any of you think of any reason for us not to borrow the existing SFF-8639 connector power rating for the NVMe-HDD connector, per existing specification?
    1.  Max Continuous Current for +12V → 1.5A/pin x 2-pins = 3.0A
       
    2.  Max Peak Current for +5V → 2.5A/pin for 1.5sec = 5.0A
       
    3.  Max Continuous Current for +5V → 1.5A/pin x 2-pins = 3.0A
       
    4.  Max Peak Current for +12V → 2.5A/pin for 1.5sec = 5.0A
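
Circling back to question 1 above, here is a rough, non-normative sketch in Python of the intended bring-up flow. Every numeric value is a placeholder rather than a proposed limit, and negotiate_slot_power stands in for whatever in-band mechanism the drive specification ends up defining:

    MIN_5V_AMPS = 0.5             # hypothetical minimum +5V current to enumerate and process commands
    MIN_12V_AMPS = 0.2            # hypothetical minimum +12V current before spin-up
    SPINUP_12V_AMPS = 2.0         # hypothetical +12V surge the drive would request for spin-up
    SFF8639_PEAK_12V_AMPS = 5.0   # question-3 rating: 2.5A/pin x 2 pins for 1.5 sec

    def negotiate_slot_power(requested_amps: float) -> float:
        # Placeholder for the in-band negotiation the spec would define;
        # here the enclosure simply grants the request.
        return requested_amps

    def bring_up_drive(slot_5v_amps: float, slot_12v_amps: float) -> str:
        if slot_5v_amps < MIN_5V_AMPS or slot_12v_amps < MIN_12V_AMPS:
            return "drive cannot enumerate: slot below minimum current"
        # Drive is alive on the link; the motor is held off until power is granted.
        granted_12v = negotiate_slot_power(SPINUP_12V_AMPS)
        if granted_12v >= SPINUP_12V_AMPS:
            assert granted_12v <= SFF8639_PEAK_12V_AMPS, "exceeds connector peak rating"
            return "spin-up started"
        return "holding off spin-up until more power is granted"

    print(bring_up_drive(slot_5v_amps=1.0, slot_12v_amps=0.5))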
       

(I initially wanted to send three separate emails, but I figured we could hopefully handle it with one email.)

 

 

Thanks,

Mohamad

