ice-1.0.4

ice Linux* Base Driver for the Intel(R) Ethernet Controller 800 Series

June 23, 2020

===============================================================================

Contents

  • Overview
  • Identifying Your Adapter
  • Important Notes
  • Building and Installation
  • Command Line Parameters
  • Additional Features & Configurations
  • Performance Optimization
  • Known Issues/Troubleshooting

Overview

This driver supports kernel versions 3.10.0 and newer. The associated Virtual
Function (VF) driver for this driver is iavf.

Driver information can be obtained using ethtool, lspci, and ifconfig.
Instructions on updating ethtool can be found in the section Additional
Configurations later in this document.
This driver is only supported as a loadable module at this time. Intel is not
supplying patches against the kernel source to allow for static linking of the
drivers.

For questions related to hardware requirements, refer to the documentation
supplied with your Intel adapter. All hardware requirements listed apply to use
with Linux.

This driver supports XDP (Express Data Path) on kernel 4.14 and later. Note
that XDP is blocked for frame sizes larger than 3KB. This driver supports
AF_XDP zero-copy on kernel 4.18 and later.
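
For illustration, an XDP program can be attached with iproute2 (a hedged
sketch; xdp_prog.o and its 'xdp' section are placeholder names, not files
shipped with this driver):

# Attach a compiled XDP program in native (driver) mode
ip link set dev <ethX> xdpdrv obj xdp_prog.o sec xdp

# Detach the program
ip link set dev <ethX> xdpdrv off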

Identifying Your Adapter

For information on how to identify your adapter, and for the latest Intel
network drivers, refer to the Intel Support website:
http://www.intel.com/support

Important Notes

Poor receive performance and dropped packets

Devices based on the Intel(R) Ethernet Controller 800 Series may exhibit poor
receive performance and dropped packets. The following steps may improve the
situation:

  1. In your system's BIOS/UEFI settings, select the "Performance" profile.
  2. On RHEL 7.x/8.x, use the tuned power management tool to set the
    "latency-performance" profile.
  3. In other operating systems and environments, use the equivalent tool to set
    the equivalent profile.

Configuring SR-IOV for improved network security

In a virtualized environment, on Intel(R) Ethernet Network Adapters that
support SR-IOV, the virtual function (VF) may be subject to malicious behavior.
Software-generated layer two frames, like IEEE 802.3x (link flow control), IEEE
802.1Qbb (priority based flow-control), and others of this type, are not
expected and can throttle traffic between the host and the virtual switch,
reducing performance. To resolve this issue, and to ensure isolation from
unintended traffic streams, configure all SR-IOV enabled ports for VLAN tagging
from the administrative interface on the PF. This configuration allows
unexpected, and potentially malicious, frames to be dropped. See "Configuring
VLAN Tagging on SR-IOV Enabled Adapter Ports" in this README for configuration
instructions.

Do not unload port driver if VF with active VM is bound to it

Do not unload a port's driver if a Virtual Function (VF) with an active Virtual
Machine (VM) is bound to it. Doing so will cause the port to appear to hang.
Once the VM shuts down, or otherwise releases the VF, the command will complete.

Firmware Recovery Mode

A device will enter Firmware Recovery mode if it detects a problem that
requires the firmware to be reprogrammed. When a device is in Firmware Recovery
mode it will not pass traffic or allow any configuration; you can only attempt
to recover the device's firmware. Refer to the Intel(R) Ethernet Adapters and
Devices User Guide for details on Firmware Recovery Mode and how to recover
from it.

Building and Installation

The ice driver requires the Dynamic Device Personalization (DDP) package file
to enable advanced features (such as dynamic tunneling, Flow Director, RSS, and
ADQ). The driver installation process installs the default DDP package file and
creates a soft link ice.pkg to the physical package ice-x.x.x.x.pkg in the
firmware root directory (typically /lib/firmware/ or /lib/firmware/updates/).
The driver install process also puts both the driver module and the DDP file in
the initramfs/initrd image.

NOTE: When the driver loads, it looks for intel/ice/ddp/ice.pkg in the firmware
root. If this file exists, the driver will download it into the device. If not,
the driver will go into Safe Mode where it will use the configuration contained
in the device's NVM. This is NOT a supported configuration and many advanced
features will not be functional. See "Dynamic Device Personalization" later for
more information.

To build a binary RPM package of this driver

Note: RPM functionality has only been tested in Red Hat distributions.

  1. Run the following command, where <x.x.x> is the version number for the
    driver tar file.

    rpmbuild -tb ice-<x.x.x>.tar.gz

    NOTE: For the build to work properly, the currently running kernel MUST
    match the version and configuration of the installed kernel sources. If
    you have just recompiled the kernel, reboot the system before building.

  2. After building the RPM, the last few lines of the tool output contain the
    location of the RPM file that was built. Install the RPM with one of the
    following commands, where <RPM> is the location of the RPM file:

    rpm -Uvh <RPM>

    or

    dnf/yum localinstall <RPM>

NOTES:

  • To compile the driver on some kernel/arch combinations, you may need to
    install a package with the development version of libelf (e.g. libelf-dev,
    libelf-devel, elfutils-libelf-devel).
  • When compiling an out-of-tree driver, details will vary by distribution.
    However, you will usually need a kernel-devel RPM or some RPM that provides the
    kernel headers at a minimum. The RPM kernel-devel will usually fill in the link
    at /lib/modules/`uname -r`/build.

To manually build the driver

  1. Move the base driver tar file to the directory of your choice.
    For example, use '/home/username/ice' or '/usr/local/src/ice'.

  2. Untar/unzip the archive, where <x.x.x> is the version number for the
    driver tar file:

    tar zxf ice-<x.x.x>.tar.gz

  3. Change to the driver src directory, where <x.x.x> is the version number
    for the driver tar:

    cd ice-<x.x.x>/src/

  4. Compile the driver module:

    make install

    The binary will be installed as:
    /lib/modules/<KERNEL VERSION>/updates/drivers/net/ethernet/intel/ice/ice.ko

    The install location listed above is the default location. This may differ
    for various Linux distributions.

    NOTE: To compile the driver with ADQ (Application Device Queues) flags set,
    use the following command, where <n> is the number of logical cores:

    make -j<n> CFLAGS_EXTRA='-DADQ_PERF -DADQ_PERF_COUNTERS' install

    (This command also performs the 'make install' step above.)

  5. Load the module using the modprobe command.

    To check the version of the driver and then load it:

    modinfo ice

    modprobe ice

    Alternately, make sure that any older ice drivers are removed from the
    kernel before loading the new module:

    rmmod ice; modprobe ice

NOTE: To enable verbose debug messages in the kernel log, use the dynamic debug
feature (dyndbg). See "Dynamic Debug" later in this README for more information.

  6. Assign an IP address to the interface by entering the following,
    where <ethX> is the interface name that was shown in dmesg after modprobe:

    ip address add <IP_address>/<netmask_length> dev <ethX>

  7. Verify that the interface works. Enter the following, where <IP_address>
    is the IP address for another machine on the same subnet as the interface
    that is being tested:

    ping <IP_address>

Command Line Parameters

The only command line parameter the ice driver supports is the debug parameter
that can control the default logging verbosity of the driver. (Note: dyndbg
also provides dynamic debug information.)

In general, use ethtool and other OS-specific commands to configure
user-changeable parameters after the driver is loaded.

Additional Features and Configurations

ethtool

The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest ethtool
version is required for this functionality. Download it at:
https://kernel.org/pub/software/network/ethtool/

NOTE: The rx_bytes value of ethtool does not match the rx_bytes value of
Netdev, due to the 4-byte CRC being stripped by the device. The difference
between the two rx_bytes values will be 4 x the number of Rx packets. For
example, if Rx packets are 10 and Netdev (software statistics) displays
rx_bytes as "X", then ethtool (hardware statistics) will display rx_bytes as
"X+40" (4 bytes CRC x 10 packets).

Viewing Link Messages

Link messages will not be displayed to the console if the distribution is
restricting system messages. In order to see network driver link messages on
your console, set dmesg to eight by entering the following:

dmesg -n 8

NOTE: This setting is not saved across reboots.

Dynamic Device Personalization

Dynamic Device Personalization (DDP) allows you to change the packet processing
pipeline of a device by applying a profile package to the device at runtime.
Profiles can be used to, for example, add support for new protocols, change
existing protocols, or change default settings. DDP profiles can also be rolled
back without rebooting the system.

The ice driver automatically installs the default DDP package file during
driver installation. NOTE: It's important to do 'make install' during initial
ice driver installation so that the driver loads the DDP package automatically.

The DDP package loads during device initialization. The driver looks for
intel/ice/ddp/ice.pkg in your firmware root (typically /lib/firmware/ or
/lib/firmware/updates/) and checks that it contains a valid DDP package file.

If the driver is unable to load the DDP package, the device will enter Safe
Mode. Safe Mode disables advanced and performance features and supports only
basic traffic and minimal functionality, such as updating the NVM or
downloading a new driver or DDP package. Safe Mode only applies to the affected
physical function and does not impact any other PFs. See the "Intel(R) Ethernet
Adapters and Devices User Guide" for more details on DDP and Safe Mode.

NOTES:

  • If you encounter issues with the DDP package file, you may need to download
    an updated driver or DDP package file. See the log messages for more
    information.

  • The ice.pkg file is a symbolic link to the default DDP package file installed
    by the Linux-firmware software package or the ice out-of-tree driver
    installation.

  • You cannot update the DDP package if any PF drivers are already loaded. To
    overwrite a package, unload all PFs and then reload the driver with the new
    package.

  • Only the first loaded PF per device can download a package for that device.

You can install specific DDP package files for different physical devices in
the same system. To install a specific DDP package file:

  1. Download the DDP package file you want for your device.

  2. Rename the file ice-xxxxxxxxxxxxxxxx.pkg, where 'xxxxxxxxxxxxxxxx' is the
    unique 64-bit PCI Express device serial number (in hex) of the device you want
    the package downloaded on. The filename must include the complete serial number
    (including leading zeros) and be all lowercase. For example, if the 64-bit
    serial number is b887a3ffffca0568, then the file name would be
    ice-b887a3ffffca0568.pkg.

To find the serial number from the PCI bus address, you can use the following
command:

lspci -vv -s af:00.0 | grep -i Serial

Capabilities: [150 v1] Device Serial Number b8-87-a3-ff-ff-ca-05-68

You can use the following command to format the serial number without the
dashes:

lspci -vv -s af:00.0 | grep -i Serial | awk '{print $7}' | sed s/-//g

b887a3ffffca0568

  3. Copy the renamed DDP package file to /lib/firmware/updates/intel/ice/ddp/.
    If the directory does not yet exist, create it before copying the file.

  4. Unload all of the PFs on the device.

  5. Reload the driver with the new package.

NOTE: The presence of a device-specific DDP package file overrides the loading
of the default DDP package file (ice.pkg).
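
For example, the steps above can be combined as follows (a hedged sketch; the
PCI address af:00.0 and the package file name ice-x.x.x.x.pkg are placeholders
carried over from the examples above):

# Derive the 64-bit device serial number for the device at af:00.0
serial=$(lspci -vv -s af:00.0 | grep -i Serial | awk '{print $7}' | sed s/-//g)

# Copy the package under its device-specific name
mkdir -p /lib/firmware/updates/intel/ice/ddp/
cp ice-x.x.x.x.pkg /lib/firmware/updates/intel/ice/ddp/ice-${serial}.pkg

# Reload the driver so it reads the new package
rmmod ice; modprobe ice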

RDMA (Remote Direct Memory Access)

Remote Direct Memory Access, or RDMA, allows a network device to transfer data
directly to and from application memory on another system, increasing
throughput and lowering latency in certain networking environments.

The ice driver supports the following RDMA protocols:

  • iWARP (Internet Wide Area RDMA Protocol)
  • RoCEv2 (RDMA over Converged Ethernet)
    The major difference is that iWARP performs RDMA over TCP, while RoCEv2 uses
    UDP.

For detailed installation and configuration information, see the README file in
the RDMA driver tarball.

Notes:

  • Devices based on the Intel(R) Ethernet Controller 800 Series do not support
    RDMA when operating in multiport mode with more than 4 ports.

  • You cannot use RDMA or SR-IOV when link aggregation (LAG)/bonding is active,
    and vice versa. To enforce this, on kernels 4.5 and above, the ice driver
    checks for this mutual exclusion. On kernels older than 4.5, the ice driver
    cannot check for this exclusion and is unaware of bonding events.

Application Device Queues (ADQ)

Application Device Queues (ADQ) allow you to dedicate one or more queues to a
specific application. This can reduce latency for the specified application,
and allow Tx traffic to be rate limited per application.

The ADQ information contained here is specific to the ice driver. For more
details, contact your Intel Corp. representative to obtain the E810 ADQ
Configuration Guide.

Requirements:

  • Kernel version 4.19 or later
  • Operating system: Red Hat* Enterprise Linux* 7.5+ or SUSE* Linux Enterprise
    Server* 12+
  • The sch_mqprio, act_mirred and cls_flower modules must be loaded
  • The latest version of iproute2
  • The latest ice driver and NVM image (Note: You must compile the ice driver
    with the ADQ flag as shown in the "Building and Installation" section.)

When ADQ is enabled:

  • You cannot change RSS parameters, the number of queues, or the MAC address in
    the PF or VF. Delete the ADQ configuration before changing these settings.
  • The driver supports subnet masks for IP addresses in the PF and VF. When you
    add a subnet mask filter, the driver forwards packets to the ADQ VSI instead of
    the main VSI.
  • When the PF adds or deletes a port VLAN filter for the VF, it will extend to
    all the VSIs within that VF.

Known issues:

  • The kernel driver does not support ADQ. You must use the latest out-of-tree
    driver to use ADQ.
  • If the application stalls, the application-specific queues may stall for up
    to two seconds. Configuring only one application per Traffic Class (TC) channel
    may resolve the issue.
  • DCB and ADQ cannot coexist. A switch with DCB enabled might remove the ADQ
    configuration from the device. To resolve the issue, do not enable DCB on the
    switch ports being used for ADQ. You must disable LLDP on the interface and
    stop the firmware LLDP agent using the following command:

    ethtool --set-priv-flags <ethX> fw-lldp-agent off

  • MACVLAN offloads and ADQ are mutually exclusive. System instability may occur
    if you enable l2-fwd-offload and then set up ADQ, or if you set up ADQ and then
    enable l2-fwd-offload.
  • Commands such as 'tc qdisc add' and 'ethtool -L' will cause the driver to
    close the associated RDMA interface and reopen it. This will disrupt RDMA
    traffic for 3-5 seconds until the RDMA interface is available again for
    traffic.
  • Commands such as 'tc qdisc add' and 'ethtool -L' will clear other tuning
    settings such as interrupt affinity. These tuning settings will need to be
    reapplied. When the number of queues are increased using 'ethtool -L', the new
    queues will have the same interrupt moderation settings as queue 0 (i.e., Tx
    queue 0 for new Tx queues and Rx queue 0 for new Rx queues). You can change
    this using the ethtool per-queue coalesce commands.
  • TC filters may not get offloaded in hardware if you apply them immediately
    after issuing the 'tc qdisc add' command. We recommend you wait 5 seconds after
    issuing 'tc qdisc add' before adding TC filters. Dmesg will report the error if
    TC filters fail to add properly.
  • Each TC filter bound to a device based on the Intel(R) Ethernet Controller
    800 Series consumes a certain number of hardware resources that are shared at
    the device level. Once resources are assigned, deleting individual filters does
    not completely free those hardware resources. To free them completely, you must
    unload and reload the driver. See the ADQ Configuration Guide for more details.

To set up the adapter for ADQ, where <ethX> is the interface in use:

  1. Reload the ice driver to remove any previous TC configuration:

    modprobe -r ice

    modprobe ice

  2. Enable hardware TC offload on the interface:

    ethtool -K <ethX> hw-tc-offload on

  3. Disable LLDP on the interface, if it isn't already:

    ethtool --set-priv-flags <ethX> fw-lldp-agent off

  4. Verify settings:

    ethtool -k <ethX> | grep "hw-tc"

    ethtool --show-priv-flags <ethX>

Example output:
Private flags for p1p1:
link-down-on-close : off
fw-lldp-agent : off
channel-inline-flow-director : off
channel-pkt-inspect-optimize : on

To create traffic classes (TCs) on the interface:
NOTE: Run all TC commands from the ../iproute2/tc/ directory.

  1. Use the tc command to create traffic classes. You can create a maximum of
    16 TCs per interface.

    tc qdisc add dev <ethX> root mqprio num_tc <tcs> map <priorities>
    queues <count1@offset1 ...> hw 1 mode channel shaper bw_rlimit
    min_rate <min_rate1 ...> max_rate <max_rate1 ...>

    Where:
    num_tc <tcs>: The number of TCs to use.
    map <priorities>: The map of priorities to TCs. You can map up to
    16 priorities to TCs.
    queues <count1@offset1 ...>: For each TC, <num queues>@<offset>. The max
    total number of queues for all TCs is the number of cores.
    hw 1 mode channel: 'channel' with 'hw' set to 1 is a new hardware offload
    mode in mqprio that makes full use of the mqprio options, the TCs,
    the queue configurations, and the QoS parameters.
    shaper bw_rlimit: For each TC, sets the minimum and maximum bandwidth
    rates. The totals must be equal to or less than the port speed. This
    parameter is optional and is required only to set up the Tx rates.
    min_rate <min_rate1 ...>: Sets the minimum bandwidth rate limit for each TC.
    max_rate <max_rate1 ...>: Sets the maximum bandwidth rate limit for each
    TC. You can set a min and max rate together.

NOTE: See the mqprio man page and the examples below for more information.

  2. Verify the bandwidth limit using network monitoring tools such as ifstat or
    sar -n DEV [interval] [number of samples]

NOTE: Setting up channels via ethtool (ethtool -L) is not supported when the
TCs are configured using mqprio.

  3. Enable hardware TC offload on the interface:

    ethtool -K <ethX> hw-tc-offload on

  4. Apply TCs to ingress (Rx) flow of the interface:

    tc qdisc add dev <ethX> ingress

EXAMPLES:
See the tc and tc-flower man pages for more information on traffic control and
TC flower filters.

  • To set up two TCs (tc0 and tc1), with 16 queues each, priorities 0-3 for
    tc0 and 4-7 for tc1, and max Tx rate set to 1Gbit for tc0 and 3Gbit for tc1:

    tc qdisc add dev ens4f0 root mqprio num_tc 2 map 0 0 0 0 1 1 1 1 queues
    16@0 16@16 hw 1 mode channel shaper bw_rlimit max_rate 1Gbit 3Gbit

    Where:
    map 0 0 0 0 1 1 1 1: Sets priorities 0-3 to use tc0 and 4-7 to use tc1
    queues 16@0 16@16: Assigns 16 queues to tc0 at offset 0 and 16 queues
    to tc1 at offset 16

  • To set a minimum rate for a TC:

    tc qdisc add dev ens4f0 root mqprio num_tc 2 map 0 0 0 0 1 1 1 1 queues
    4@0 8@4 hw 1 mode channel shaper bw_rlimit min_rate 25Gbit 50Gbit

  • To set a maximum data rate for a TC:

    tc qdisc add dev ens4f0 root mqprio num_tc 2 map 0 0 0 0 1 1 1 1 queues
    4@0 8@4 hw 1 mode channel shaper bw_rlimit max_rate 25Gbit 50Gbit

  • To set both minimum and maximum data rates together:

    tc qdisc add dev ens4f0 root mqprio num_tc 2 map 0 0 0 0 1 1 1 1 queues
    4@0 8@4 hw 1 mode channel shaper bw_rlimit min_rate 10Gbit 20Gbit
    max_rate 25Gbit 50Gbit

  • To configure TCP TC filters, where:
    protocol: Encapsulation protocol (valid options are IP and 802.1Q).
    prio: Priority.
    flower: Flow-based traffic control filter.
    dst_ip: IP address of the device.
    ip_proto: IP protocol to use (TCP or UDP).
    dst_port: Destination port.
    src_port: Source port.
    skip_sw: Flag to add the rule only in hardware.
    hw_tc <tc>: Route incoming traffic flow to this hardware TC. The TC count
    starts at 0. For example, hw_tc 1 indicates that the filter
    is on the second TC.

  • TCP: Destination IP + L4 Destination Port
    To route TCP traffic with a matching destination IP address and destination
    port to the given TC:

    tc filter add dev <ethX> protocol ip parent ffff: prio 1 flower dst_ip
    <ip_address> ip_proto tcp dst_port <port_number> skip_sw hw_tc 1

  • TCP: Destination IP + L4 Source Port
    To route TCP traffic with a matching destination IP address and source port
    to the given TC:

    tc filter add dev <ethX> protocol ip parent ffff: prio 1 flower dst_ip
    <ip_address> ip_proto tcp src_port <port_number> skip_sw hw_tc 1

  • To verify successful TC creation and traffic filtering, after filters are
    created:

    tc qdisc show dev <ethX>

  • To view all filters:

    tc filter show dev <ethX> parent ffff:

Intel(R) Ethernet Flow Director

The Intel Ethernet Flow Director performs the following tasks:

  • Directs receive packets according to their flows to different queues
  • Enables tight control on routing a flow in the platform
  • Matches flows and CPU cores for flow affinity

NOTE: An included script (set_irq_affinity) automates setting the IRQ to CPU
affinity.

NOTE: This driver supports the following flow types:

  • IPv4
  • TCPv4
  • UDPv4
  • SCTPv4
  • IPv6
  • TCPv6
  • UDPv6
  • SCTPv6
    Each flow type supports valid combinations of IP addresses (source or
    destination) and UDP/TCP/SCTP ports (source and destination). You can supply
    only a source IP address, a source IP address and a destination port, or any
    combination of one or more of these four parameters.

NOTE: This driver allows you to filter traffic based on a user-defined flexible
two-byte pattern and offset by using the ethtool user-def and mask fields. Only
L3 and L4 flow types are supported for user-defined flexible filters. For a
given flow type, you must clear all Intel Ethernet Flow Director filters before
changing the input set (for that flow type).

NOTE: Flow Director filters impact only LAN traffic. RDMA filtering occurs
before Flow Director, so Flow Director filters will not impact RDMA.

The following table summarizes supported Intel Ethernet Flow Director features
across Intel(R) Ethernet controllers.


Feature             500 Series      700 Series       800 Series

VF FLOW DIRECTOR    Supported       Routing to VF    Not supported
                                    not supported

IP ADDRESS RANGE    Supported       Not supported    Field masking
FILTER

IPv6 SUPPORT        Supported       Not supported    Supported

CONFIGURABLE        Configured      Configured       Configured
INPUT SET           per port        globally         per port

ATR                 Supported       Supported        Not supported

FLEX BYTE FILTER    Starts at       Starts at        Starts at
                    beginning       beginning of     beginning
                    of packet       payload          of packet

TUNNELED PACKETS    Filter matches  Filter matches   Filter matches
                    outer header    inner header     inner header

Flow Director Filters

Flow Director filters are used to direct traffic that matches specified
characteristics. They are enabled through ethtool's ntuple interface. To enable
or disable the Intel Ethernet Flow Director and these filters:

ethtool -K <ethX> ntuple <off|on>

NOTE: When you disable ntuple filters, all the user programmed filters are
flushed from the driver cache and hardware. All needed filters must be re-added
when ntuple is re-enabled.

To display all of the active filters:

ethtool -u <ethX>

To add a new filter:

ethtool -U <ethX> flow-type <type> src-ip <ip> [m <ip_mask>] dst-ip <ip>
[m <ip_mask>] src-port <port> [m <port_mask>] dst-port <port> [m <port_mask>]
action <queue>
Where:
<ethX> - the Ethernet device to program
<type> - can be ip4, tcp4, udp4, sctp4, ip6, tcp6, udp6, sctp6
<ip> - the IP address to match on
<ip_mask> - the IPv4 address to mask on
NOTE: These filters use inverted masks.
<port> - the port number to match on
<port_mask> - the 16-bit integer for masking
NOTE: These filters use inverted masks.
<queue> - the queue to direct traffic toward (-1 discards the
matched traffic)

To delete a filter:

ethtool -U <ethX> delete <N>

Where <N> is the filter ID displayed when printing all the active filters,
and may also have been specified using "loc <N>" when adding the filter.

EXAMPLES:
To add a filter that directs packets to queue 2:

ethtool -U <ethX> flow-type tcp4 src-ip 192.168.10.1 dst-ip \
192.168.10.2 src-port 2000 dst-port 2001 action 2 [loc 1]

To set a filter using only the source and destination IP address:

ethtool -U <ethX> flow-type tcp4 src-ip 192.168.10.1 dst-ip \
192.168.10.2 action 2 [loc 1]

To set a filter based on a user-defined pattern and offset:

ethtool -U <ethX> flow-type tcp4 src-ip 192.168.10.1 dst-ip \
192.168.10.2 user-def 0x4FFFF action 2 [loc 1]

where the value of the user-def field contains the offset (4 bytes) and
the pattern (0xffff).

To match TCP traffic sent from 192.168.0.1, port 5300, directed to 192.168.0.5,
port 80, and then send it to queue 7:

ethtool -U enp130s0 flow-type tcp4 src-ip 192.168.0.1 dst-ip 192.168.0.5 \
src-port 5300 dst-port 80 action 7

To add a TCPv4 filter with a partial mask for a source IP subnet:

ethtool -U <ethX> flow-type tcp4 src-ip 192.168.0.0 m 0.255.255.255 dst-ip \
192.168.5.12 src-port 12600 dst-port 31 action 12

NOTES:
For each flow-type, the programmed filters must all have the same matching
input set. For example, issuing the following two commands is acceptable:

ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.1 src-port 5300 action 7

ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.5 src-port 55 action 10

Issuing the next two commands, however, is not acceptable, since the first
specifies src-ip and the second specifies dst-ip:

ethtool -U enp130s0 flow-type ip4 src-ip 192.168.0.1 src-port 5300 action 7

ethtool -U enp130s0 flow-type ip4 dst-ip 192.168.0.5 src-port 55 action 10

The second command will fail with an error. You may program multiple filters
with the same fields, using different values, but, on one device, you may not
program two tcp4 filters with different matching fields.

The ice driver does not support matching on a subportion of a field, thus
partial mask fields are not supported.

Flex Byte Flow Director Filters

The driver also supports matching user-defined data within the packet payload.
This flexible data is specified using the "user-def" field of the ethtool
command in the following way:
+----------------------------+--------------------------+
| 31 28 24 20 16 | 15 12 8 4 0 |
+----------------------------+--------------------------+
| offset into packet payload | 2 bytes of flexible data |
+----------------------------+--------------------------+

For example,
... user-def 0x4FFFF ...

tells the filter to look 4 bytes into the payload and match that value against
0xFFFF. The offset is based on the beginning of the payload, and not the
beginning of the packet. Thus

flow-type tcp4 ... user-def 0x8BEAF ...

would match TCP/IPv4 packets which have the value 0xBEAF 8 bytes into the
TCP/IPv4 payload.

Note that ICMP headers are parsed as 4 bytes of header and 4 bytes of payload.
Thus to match the first byte of the payload, you must actually add 4 bytes to
the offset. Also note that ip4 filters match both ICMP frames as well as raw
(unknown) ip4 frames, where the payload will be the L3 payload of the IP4 frame.

The maximum offset is 64. The hardware will only read up to 64 bytes of data
from the payload. The offset must be even because the flexible data is 2 bytes
long and must be aligned to byte 0 of the packet payload.

The user-defined flexible offset is also considered part of the input set and
cannot be programmed separately for multiple filters of the same type. However,
the flexible data is not part of the input set and multiple filters may use the
same offset but match against different data.

RSS Hash Flow

Allows you to set the hash bytes per flow type and any combination of one or
more options for Receive Side Scaling (RSS) hash byte configuration.

ethtool -N <ethX> rx-flow-hash <type> <option>

Where <type> is:
tcp4 signifying TCP over IPv4
udp4 signifying UDP over IPv4
tcp6 signifying TCP over IPv6
udp6 signifying UDP over IPv6
And <option> is one or more of:
s Hash on the IP source address of the Rx packet.
d Hash on the IP destination address of the Rx packet.
f Hash on bytes 0 and 1 of the Layer 4 header of the Rx packet.
n Hash on bytes 2 and 3 of the Layer 4 header of the Rx packet.
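
For example, to hash tcp4 flows on the source and destination IP addresses and
the source and destination ports (a typical configuration; <ethX> is your
interface):

ethtool -N <ethX> rx-flow-hash tcp4 sdfn

To display the current hash options for tcp4:

ethtool -n <ethX> rx-flow-hash tcp4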

Accelerated Receive Flow Steering (aRFS)

Devices based on the Intel(R) Ethernet Controller 800 Series support
Accelerated Receive Flow Steering (aRFS) on the PF. aRFS is a load-balancing
mechanism that allows you to direct packets to the same CPU where an
application is running or consuming the packets in that flow.

NOTES:

  • aRFS requires that ntuple filtering is enabled via ethtool.
  • aRFS support is limited to the following packet types:
    • TCP over IPv4 and IPv6
    • UDP over IPv4 and IPv6
    • Nonfragmented packets
  • aRFS only supports Flow Director filters, which consist of the
    source/destination IP addresses and source/destination ports.
  • aRFS and ethtool's ntuple interface both use the device's Flow Director. aRFS
    and ntuple features can coexist, but you may encounter unexpected results if
    there's a conflict between aRFS and ntuple requests. See "Intel(R) Ethernet
    Flow Director" for additional information.

To set up aRFS:

  1. Enable the Intel Ethernet Flow Director and ntuple filters using ethtool.

    ethtool -K <ethX> ntuple on

  2. Set up the number of entries in the global flow table. For example:

    NUM_RPS_ENTRIES=16384

    echo $NUM_RPS_ENTRIES > /proc/sys/net/core/rps_sock_flow_entries

  3. Set up the number of entries in the per-queue flow table. For example:

    NUM_RX_QUEUES=64

    for file in /sys/class/net/$IFACE/queues/rx-*/rps_flow_cnt; do

    echo $(($NUM_RPS_ENTRIES/$NUM_RX_QUEUES)) > $file;

    done

  4. Disable the IRQ balance daemon (this is only a temporary stop of the service
    until the next reboot).

    systemctl stop irqbalance

  5. Configure the interrupt affinity.

    set_irq_affinity <ethX>

To disable aRFS using ethtool:

ethtool -K ntuple off

NOTE: This command will disable ntuple filters and clear any aRFS filters in
software and hardware.

Example Use Case:

  1. Set the server application on the desired CPU (e.g., CPU 4).

    taskset -c 4 netserver

  2. Use netperf to route traffic from the client to CPU 4 on the server with
    aRFS configured. This example uses TCP over IPv4.

    netperf -H <Host IPv4 address> -t TCP_STREAM

Enabling Virtual Functions (VFs)

Use sysfs to enable virtual functions (VF).

For example, you can create 4 VFs as follows:

echo 4 > /sys/class/net/<ethX>/device/sriov_numvfs

To disable VFs, write 0 to the same file:

echo 0 > /sys/class/net/<ethX>/device/sriov_numvfs

The maximum number of VFs for the ice driver is 256 total (all ports). To check
how many VFs each PF supports, use the following command:

cat /sys/class/net/<ethX>/device/sriov_totalvfs

Note:
You cannot use RDMA or SR-IOV when link aggregation (LAG)/bonding is active,
and vice versa. To enforce this, on kernels 4.5 and above, the ice driver
checks for this mutual exclusion. On kernels older than 4.5, the ice driver
cannot check for this exclusion and is unaware of bonding events.

Configuring VLAN Tagging on SR-IOV Enabled Adapter Ports

To configure VLAN tagging for the ports on an SR-IOV enabled adapter, use the
following command. The VLAN configuration should be done before the VF driver
is loaded or the VM is booted. The VF is not aware of the VLAN tag being
inserted on transmit and removed on received frames (sometimes called "port
VLAN" mode).

ip link set dev <PF netdev id> vf <id> vlan <vlan id>

For example, the following will configure PF eth0 and the first VF on VLAN 10:

ip link set dev eth0 vf 0 vlan 10

Enabling a VF link if the port is disconnected

If the physical function (PF) link is down, you can force link up (from the
host PF) on any virtual functions (VF) bound to the PF. Note that this requires
kernel support (Red Hat kernel 3.10.0-327 or newer, upstream kernel 3.11.0 or
newer, and associated iproute2 user space support).

For example, to force link up on VF 0 bound to PF eth0:

ip link set eth0 vf 0 state enable

Note: If the command does not work, it may not be supported by your system.

Setting the MAC Address for a VF

To change the MAC address for the specified VF:

ip link set <ethX> vf 0 mac <MAC address>

For example:

ip link set <ethX> vf 0 mac 00:01:02:03:04:05

This setting lasts until the PF is reloaded.

NOTE: Assigning a MAC address for a VF from the host will disable any
subsequent requests to change the MAC address from within the VM. This is a
security feature. The VM is not aware of this restriction, so if this is
attempted in the VM, it will trigger MDD events.

Trusted VFs and VF Promiscuous Mode

This feature allows you to designate a particular VF as trusted and allows that
trusted VF to request selective promiscuous mode on the Physical Function (PF).

To set a VF as trusted or untrusted, enter the following command in the
Hypervisor:

ip link set dev <ethX> vf 1 trust [on|off]

NOTE: It's important to set the VF to trusted before setting promiscuous mode.
If the VM is not trusted, the PF will ignore promiscuous mode requests from the
VF. If the VM becomes trusted after the VF driver is loaded, you must make a
new request to set the VF to promiscuous.

Once the VF is designated as trusted, use the following commands in the VM to
set the VF to promiscuous mode. For promiscuous all:

ip link set <ethX> promisc on

Where <ethX> is a VF interface in the VM

For promiscuous Multicast:

ip link set <ethX> allmulticast on

Where <ethX> is a VF interface in the VM

NOTE: By default, the ethtool private flag vf-true-promisc-support is set to
"off," meaning that promiscuous mode for the VF will be limited. To set the
promiscuous mode for the VF to true promiscuous and allow the VF to see all
ingress traffic, use the following command:

ethtool --set-priv-flags <ethX> vf-true-promisc-support on

The vf-true-promisc-support private flag does not enable promiscuous mode;
rather, it designates which type of promiscuous mode (limited or true) you will
get when you enable promiscuous mode using the ip link commands above. Note
that this is a global setting that affects the entire device. However, the
vf-true-promisc-support private flag is only exposed to the first PF of the
device. The PF remains in limited promiscuous mode regardless of the
vf-true-promisc-support setting.

Next, add a VLAN interface on the VF interface. For example:

ip link add link eth2 name eth2.100 type vlan id 100

Note that the order in which you set the VF to promiscuous mode and add the
VLAN interface does not matter (you can do either first). The result in this
example is that the VF will get all traffic that is tagged with VLAN 100.

Virtual Function (VF) Tx Rate Limit

Use the ip command to configure the maximum or minimum Tx rate limit for a VF
from the PF interface.

For example, to set a maximum Tx rate limit of 8000Mbps for VF 0:

ip link set eth0 vf 0 max_tx_rate 8000

For example, to set a minimum Tx rate limit of 1000Mbps for VF 0:

ip link set eth0 vf 0 min_tx_rate 1000

NOTE:

  • If DCB or ADQ are enabled on a PF, you cannot set a minimum Tx rate on the
    VFs associated with that PF.
  • If both DCB and ADQ are disabled on a PF, then you can set a minimum Tx rate
    on the VFs associated with that PF.
  • If you set a minimum Tx rate limit on a PF for SR-IOV VFs and then apply a
    DCB or ADQ configuration, the PF cannot guarantee the minimum Tx rate limits
    for those VFs.
  • If you set a minimum Tx rate on VFs across multiple ports that have an
    aggregate bandwidth over 100Gbps, the PFs cannot guarantee the minimum Tx rate
    set for the VFs.

Malicious Driver Detection (MDD) for VFs

Some Intel Ethernet devices use Malicious Driver Detection (MDD) to detect
malicious traffic from the VF and disable Tx/Rx queues or drop the offending
packet until a VF driver reset occurs. You can view MDD messages in the PF's
system log using the dmesg command.

  • If the PF driver logs MDD events from the VF, confirm that the correct VF
    driver is installed.
  • To restore functionality, you can manually reload the VF or VM or enable
    automatic VF resets.
  • When automatic VF resets are enabled, the PF driver will immediately reset
    the VF and reenable queues when it detects MDD events on the receive path.
  • If automatic VF resets are disabled, the PF will not automatically reset the
    VF when it detects MDD events.

To enable or disable automatic VF resets, use the following command:

ethtool --set-priv-flags <ethX> mdd-auto-reset-vf on|off

MAC and VLAN Anti-Spoofing Feature for VFs

When a malicious driver on a Virtual Function (VF) interface attempts to send a
spoofed packet, it is dropped by the hardware and not transmitted.

NOTE: This feature can be disabled for a specific VF:

ip link set <ethX> vf <vf id> spoofchk {off|on}

Jumbo Frames

Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU)
to a value larger than the default value of 1500.

Use the ifconfig command to increase the MTU size. For example, enter the
following, where <ethX> is the interface number:

ifconfig <ethX> mtu 9000 up

Alternatively, you can use the ip command as follows:

ip link set mtu 9000 dev <ethX>

ip link set up dev <ethX>

This setting is not saved across reboots. The setting change can be made
permanent by adding 'MTU=9000' to the following file:
/etc/sysconfig/network-scripts/ifcfg-<ethX> for RHEL
or
/etc/sysconfig/network/<config_file> for SLES
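
A hedged example for RHEL, assuming the interface is named ens4f0 (an existing
ifcfg file will contain additional settings; only the MTU line is relevant
here):

# /etc/sysconfig/network-scripts/ifcfg-ens4f0
DEVICE=ens4f0
BOOTPROTO=none
ONBOOT=yes
MTU=9000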

NOTE: The maximum MTU setting for jumbo frames is 9702. This corresponds to the
maximum jumbo frame size of 9728 bytes.

NOTE: This driver will attempt to use multiple page sized buffers to receive
each jumbo packet. This should help to avoid buffer starvation issues when
allocating receive packets.

NOTE: Packet loss may have a greater impact on throughput when you use jumbo
frames. If you observe a drop in performance after enabling jumbo frames,
enabling flow control may mitigate the issue.

Speed and Duplex Configuration

In addressing speed and duplex configuration issues, you need to distinguish
between copper-based adapters and fiber-based adapters.

In the default mode, an Intel(R) Ethernet Network Adapter using copper
connections will attempt to auto-negotiate with its link partner to determine
the best setting. If the adapter cannot establish link with the link partner
using auto-negotiation, you may need to manually configure the adapter and link
partner to identical settings to establish link and pass packets. This should
only be needed when attempting to link with an older switch that does not
support auto-negotiation or one that has been forced to a specific speed or
duplex mode. Your link partner must match the setting you choose. 1 Gbps speeds
and higher cannot be forced. Use the autonegotiation advertising setting to
manually set devices for 1 Gbps and higher.

Speed, duplex, and autonegotiation advertising are configured through the
ethtool* utility. ethtool is included with all versions of Red Hat after Red
Hat 7.2. For the latest version, download and install ethtool from the
following website:

https://kernel.org/pub/software/network/ethtool/

To see the speed configurations your device supports, run the following:

ethtool <ethX>

Caution: Only experienced network administrators should force speed and duplex
or change autonegotiation advertising manually. The settings at the switch must
always match the adapter settings. Adapter performance may suffer or your
adapter may not operate if you configure the adapter differently from your
switch.

Data Center Bridging (DCB)

NOTE:
The kernel assumes that TC0 is available, and will disable Priority Flow
Control (PFC) on the device if TC0 is not available. To fix this, ensure TC0 is
enabled when setting up DCB on your switch.

DCB is a configuration Quality of Service implementation in hardware. It uses
the VLAN priority tag (802.1p) to filter traffic. That means that there are 8
different priorities that traffic can be filtered into. It also enables
priority flow control (802.1Qbb) which can limit or eliminate the number of
dropped packets during network stress. Bandwidth can be allocated to each of
these priorities, which is enforced at the hardware level (802.1Qaz).

Adapter firmware implements LLDP and DCBX protocol agents as per 802.1AB and
802.1Qaz respectively. There are potentially two DCBX modes on Linux, depending
on the underlying PF device:

  • Intel Ethernet Controller 800 Series adapters support both firmware DCBX and
    software DCBX. If FW-LLDP is enabled, DCBX will run in firmware.

DCB parameters can be established via a firmware LLDP/DCBX agent or a software
LLDP/DCBX agent. Only one LLDP/DCBX agent can be active on a single interface
at a time. When the firmware DCBX agent is active, software agents will not be
able to receive or transmit LLDP frames.

See the "FW-LLDP (Firmware Link Layer Discovery Protocol)" section in this
README for the ethtool commands to query the status of the firmware LLDP/DCBX
agent.

When operating in firmware DCBX mode, the adapter is in an "always willing"
state. DCB settings are applied on the adapter by transmitting a nonwilling
configuration from the link partner. Typically this is a switch. For
configuring DCBX parameters on a switch, please consult the switch
manufacturer's documentation.

NOTES:

  • The ice driver supports DCB when the firmware agent is on or off by
    supporting software DCBX agents.
  • When the firmware LLDP agent is disabled, you can configure DCB parameters
    using software LLDP/DCBX agents that interface with the Linux kernel's DCB
    Netlink API. We recommend using OpenLLDP as the DCBX agent when running in
    software mode. For more information, see the OpenLLDP man pages and
    https://github.com/intel/openlldp. A brief sketch follows these notes.
  • iSCSI with DCB is not supported.
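
As a hedged sketch of running DCBX in software mode with OpenLLDP (the
lldptool syntax is assumed from the OpenLLDP documentation; verify it against
the lldptool man pages):

# First disable the firmware LLDP/DCBX agent on the port
ethtool --set-priv-flags <ethX> fw-lldp-agent off

# Then let lldpad transmit and receive LLDP frames on the port
lldptool set-lldp -i <ethX> adminStatus=rxtx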

FW-LLDP (Firmware Link Layer Discovery Protocol)

Use ethtool to change FW-LLDP settings. The FW-LLDP setting is per port and
persists across boots.

To enable LLDP:

ethtool --set-priv-flags <ethX> fw-lldp-agent on

To disable LLDP:

ethtool --set-priv-flags <ethX> fw-lldp-agent off

To check the current LLDP setting:

ethtool --show-priv-flags <ethX>

NOTE: You must enable the UEFI HII "LLDP Agent" attribute for this setting to
take effect. If "LLDP AGENT" is set to disabled, you cannot enable it from the
OS.

Flow Control

Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
receiving and transmitting pause frames for ice. When transmit is enabled,
pause frames are generated when the receive packet buffer crosses a predefined
threshold. When receive is enabled, the transmit unit will halt for the time
delay specified when a pause frame is received.

NOTE: You must have a flow control capable link partner.

Flow Control is disabled by default.

Use ethtool to change the flow control settings.

To enable or disable Rx or Tx Flow Control:

ethtool -A <ethX> rx <on|off> tx <on|off>

Note: This command only enables or disables Flow Control if auto-negotiation is
disabled. If auto-negotiation is enabled, this command changes the parameters
used for auto-negotiation with the link partner.

Note: Flow Control auto-negotiation is part of link auto-negotiation. Depending
on your device, you may not be able to change the auto-negotiation setting.

NOTE:

  • The ice driver requires flow control on both the port and link partner. If
    flow control is disabled on one of the sides, the port may appear to hang on
    heavy traffic.
  • You may encounter issues with link-level flow control (LFC) after disabling
    DCB. The LFC status may show as enabled but traffic is not paused. To resolve
    this issue, disable and reenable LFC using ethtool:

ethtool -A <ethX> rx off tx off

ethtool -A <ethX> rx on tx on

NAPI

This driver supports NAPI (Rx polling mode).
For more information on NAPI, see
https://www.linuxfoundation.org/collaborate/workgroups/networking/napi

MACVLAN

This driver supports MACVLAN. Kernel support for MACVLAN can be tested by
checking if the MACVLAN driver is loaded. You can run 'lsmod | grep macvlan' to
see if the MACVLAN driver is loaded or run 'modprobe macvlan' to try to load
the MACVLAN driver.

NOTE:

  • In passthru mode, you can only set up one MACVLAN device. It will inherit the
    MAC address of the underlying PF (Physical Function) device.

ice devices support L2 Forwarding Offload. This will offload the processing
required for L2 Forwarding from the system processors to the ice device.
Perform the following steps to enable L2 Forwarding Offload:

  1. Enable L2 Forwarding offload:

    ethtool -K <ethX> l2-fwd-offload on

  2. Create the MACVLAN netdevs and bind them to the PF (see the example after
    these steps).

  3. Bring up/enable the MACVLAN netdevs.
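
A hedged example of steps 2 and 3, assuming the PF is named ens4f0 and the
MACVLAN device is named macvlan0:

# Create a MACVLAN netdev bound to the PF
ip link add link ens4f0 name macvlan0 type macvlan mode bridge

# Bring the MACVLAN netdev up
ip link set macvlan0 up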

NOTE: MACVLAN offloads and ADQ are mutually exclusive. System instability may
occur if you enable l2-fwd-offload and then set up ADQ, or if you set up ADQ
and then enable l2-fwd-offload.

IEEE 802.1ad (QinQ) Support

The IEEE 802.1ad standard, informally known as QinQ, allows for multiple VLAN
IDs within a single Ethernet frame. VLAN IDs are sometimes referred to as
"tags," and multiple VLAN IDs are thus referred to as a "tag stack." Tag stacks
allow L2 tunneling and the ability to segregate traffic within a particular
VLAN ID, among other uses.

NOTES:

  • 802.1ad (QinQ) is supported in 3.19 and later kernels.
  • 802.1ad (QinQ) and RDMA are not compatible.
  • Receive checksum offloads and VLAN acceleration are not supported for 802.1ad
    (QinQ) packets.
  • 0x88A8 traffic will not be received unless VLAN stripping is disabled with
    the following command:

    ethtool -K <ethX> rxvlan off

  • 0x88A8/0x8100 double VLANs cannot be used with 0x8100 or 0x8100/0x8100 VLANS
    configured on the same port. 0x88a8/0x8100 traffic will not be received if
    0x8100 VLANs are configured.
  • The VF can only transmit 0x88A8/0x8100 (i.e., 802.1ad/802.1Q) traffic if:
    1. The VF is not assigned a port VLAN.
    2. spoofchk is disabled from the PF. If you enable spoofchk, the VF will not
      transmit 0x88A8/0x8100 traffic.
  • The VF may not receive all network traffic based on the Inner VLAN header
    when VF true promiscuous mode (vf-true-promisc-support) and double VLANs are
    enabled in SR-IOV mode.

The following are examples of how to configure 802.1ad (QinQ):

ip link add link eth0 eth0.24 type vlan proto 802.1ad id 24

ip link add link eth0.24 eth0.24.371 type vlan proto 802.1Q id 371

Where "24" and "371" are example VLAN IDs.

IEEE 1588 Precision Time Protocol (PTP) Hardware Clock (PHC)

Precision Time Protocol (PTP) is used to synchronize clocks in a computer
network. PTP support varies among Intel devices that support this driver. Use
'ethtool -T <ethX>' to get a definitive list of PTP capabilities supported by
the device.
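
For example, to list the device's PTP capabilities and then start clock
synchronization (the ptp4l invocation is a hedged sketch and assumes the
linuxptp package is installed):

ethtool -T <ethX>

# Run PTP on the interface, printing status messages to stdout
ptp4l -i <ethX> -m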

Tunnel/Overlay Stateless Offloads

Supported tunnels and overlays include VXLAN, GENEVE, and others depending on
hardware and software configuration. Stateless offloads are enabled by default.

To view the current state of all offloads:

ethtool -k <ethX>

UDP Segmentation Offload

Allows the adapter to offload transmit segmentation of UDP packets with
payloads up to 64K into valid Ethernet frames. Because the adapter hardware is
able to complete data segmentation much faster than operating system software,
this feature may improve transmission performance.
In addition, the adapter may use fewer CPU resources.

NOTES:

  • UDP transmit segmentation offload requires Linux kernel 4.18 or later.
  • The application sending UDP packets must support UDP segmentation offload.

To enable/disable UDP Segmentation Offload, issue the following command:

ethtool -K <ethX> tx-udp-segmentation [off|on]

Performance Optimization

Driver defaults are meant to fit a wide variety of workloads, but if further
optimization is required, we recommend experimenting with the following
settings.

IRQ to Adapter Queue Alignment

Pin the adapter's IRQs to specific cores by disabling the irqbalance service
and using the included set_irq_affinity script. Please see the script's help
text for further options.

  • The following settings will distribute the IRQs across all the cores
    evenly:

    scripts/set_irq_affinity -x all <interface1> [ <interface2> ... ]

  • The following settings will distribute the IRQs across all the cores that
    are local to the adapter (same NUMA node):

    scripts/set_irq_affinity -x local <interface1> [ <interface2> ... ]

  • For very CPU-intensive workloads, we recommend pinning the IRQs to all
    cores.

Rx Descriptor Ring Size

To reduce the number of Rx packet discards, increase the number of Rx
descriptors for each Rx ring using ethtool.

  • Check if the interface is dropping Rx packets due to buffers being full
    (rx_dropped.nic can mean that there is no PCIe bandwidth):

    ethtool -S <ethX> | grep "rx_dropped"

  • If the previous command shows drops on queues, it may help to increase
    the number of descriptors using 'ethtool -G':

    ethtool -G <ethX> rx <N>

    Where <N> is the desired number of ring entries/descriptors

    This can provide temporary buffering for issues that create latency while
    the CPUs process descriptors.

Interrupt Rate Limiting

This driver supports an adaptive interrupt throttle rate (ITR) mechanism that
is tuned for general workloads. The user can customize the interrupt rate
control for specific workloads, via ethtool, adjusting the number of
microseconds between interrupts.

To set the interrupt rate manually, you must disable adaptive mode:

ethtool -C <ethX> adaptive-rx off adaptive-tx off

For lower CPU utilization:

  • Disable adaptive ITR and lower Rx and Tx interrupts. The examples below
    affect every queue of the specified interface.

  • Setting rx-usecs and tx-usecs to 80 will limit interrupts to about
    12,500 interrupts per second per queue:

    ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 80
    tx-usecs 80

For reduced latency:

  • Disable adaptive ITR and ITR by setting rx-usecs and tx-usecs to 0
    using ethtool:

    ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 0
    tx-usecs 0

Per-queue interrupt rate settings:

  • The following examples are for queues 1 and 3, but you can adjust other
    queues.

  • To disable Rx adaptive ITR and set static Rx ITR to 10 microseconds or
    about 100,000 interrupts/second, for queues 1 and 3:

    ethtool --per-queue <ethX> queue_mask 0xa --coalesce adaptive-rx off
    rx-usecs 10

  • To show the current coalesce settings for queues 1 and 3:

    ethtool --per-queue <ethX> queue_mask 0xa --show-coalesce

Bounding interrupt rates using rx-usecs-high:

  • Valid Range: 0-236 (0=no limit)

    The range of 0-236 microseconds provides an effective range of 4,237 to
    250,000 interrupts per second. The value of rx-usecs-high can be set
    independently of rx-usecs and tx-usecs in the same ethtool command, and is
    also independent of the adaptive interrupt moderation algorithm. The
    underlying hardware supports granularity in 4-microsecond intervals, so
    adjacent values may result in the same interrupt rate.

  • The following command would disable adaptive interrupt moderation, and allow
    a maximum of 5 microseconds before indicating a receive or transmit was
    complete. However, instead of resulting in as many as 200,000 interrupts per
    second, it limits total interrupts per second to 50,000 via the rx-usecs-high
    parameter.

    ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs-high 20
    rx-usecs 5 tx-usecs 5

Virtualized Environments

In addition to the other suggestions in this section, the following may be
helpful to optimize performance in VMs.

  • Using the appropriate mechanism (vcpupin) in the VM, pin the CPUs to
    individual LCPUs, making sure to use a set of CPUs included in the
    device's local_cpulist: /sys/class/net/<ethX>/device/local_cpulist
    (see the example after this list).

  • Configure as many Rx/Tx queues in the VM as available. (See the iavf driver
    documentation for the number of queues supported.) For example:

    ethtool -L <virt_interface> rx <N> tx <N>
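
A hedged example of vCPU pinning with libvirt (the guest name vm1 and the CPU
numbers are placeholders; pick CPUs from the device's local_cpulist):

# Pin guest vCPU 0 to host CPU 8 and vCPU 1 to host CPU 9
virsh vcpupin vm1 0 8
virsh vcpupin vm1 1 9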

Known Issues/Troubleshooting

Dynamic Debug

If you encounter unexpected issues during driver load, some of the most useful
information for developers to receive in a bug report can include driver
logging. This logging uses a kernel feature called Dynamic Debug, which is
generally enabled in most kernel configurations (CONFIG_DYNAMIC_DEBUG=y).

To load the driver with dynamic debug enabled, run modprobe with the dyndbg
parameter:

modprobe ice dyndbg=+p

The driver will then load and print debugging information into the kernel log
(dmesg) and is usually logged into the system log viewable by journalctl or in
/var/log/messages. Saving this information to a file and attaching it to any
bug report can help shorten the reproduction and debugging time for a developer.

To enable dynamic debug during runtime operation of the driver, use this
command:

echo "module ice +p" > /sys/kernel/debug/dynamic_debug/control

For more details, see the Dynamic Debug documentation included in the Linux
kernel instructions.

'ethtool -S' does not display Tx/Rx packet statistics

Issuing the command 'ethtool -S <ethX>' does not display Tx/Rx packet statistics. This
is by convention. Use other tools (e.g. ifconfig, ip) that display standard
netdev statistics such as Tx/Rx packet statistics.

Unexpected Issues when the device driver and DPDK share a device

Unexpected issues may result when an ice device is in multi driver mode and the
kernel driver and DPDK driver are sharing the device. This is because access to
the global NIC resources is not synchronized between multiple drivers. Any
change to the global NIC configuration (writing to a global register, setting
global configuration by AQ, or changing switch modes) will affect all ports and
drivers on the device. Loading DPDK with the "multi-driver" module parameter
may mitigate some of the issues.

Fiber optics and auto-negotiation

Modules based on 100GBASE-SR4, active optical cable (AOC), and active copper
cable (ACC) do not support auto-negotiation per the IEEE specification. To
obtain link with these modules, you must turn off auto-negotiation on the link
partner's switch ports.

'ethtool -a' autonegotiate result may vary between drivers

For kernel versions 4.6 or higher, 'ethtool -a' will show the advertised and
negotiated autoneg settings. For kernel versions below 4.6, ethtool will only
report the negotiated link status.

The issue is cosmetic and does not affect functionality. Installing the latest
ice driver and upgrading your kernel to version 4.6 or higher will resolve the
issue.

AF_XDP fails to allocate buffers

On kernels older than 5.3, you may see an undesirable CPU load during packet
processing if you enable AF_XDP in native mode and the Rx ring size is larger
than the UMEM fill queue. This is due to a known issue in the kernel and was
fixed in 5.3. To address the issue, upgrade your kernel to 5.3 or newer.

SCTP checksum offloads aren't indicated on Geneve tunnel

For SCTP traffic over a Geneve tunnel, the SCTP checksum isn't offloaded to the
device, even when tx-checksum-sctp is on. This is due to a limitation in the
Linux kernel. However, for Rx traffic, the SCTP checksum is verified if
rx-checksumming is on. For both Tx and Rx traffic, you can offload the outer
UDP checksum to the device.

Incorrect link speed reported on older VF drivers

Linux distributions with older iavf or i40evf drivers (including Red Hat
Enterprise Linux 8) may show an incorrect link speed on VF interfaces. This
issue is cosmetic and does not affect VF functionality. To resolve the issue,
download the latest iavf driver.

Older VF drivers on Intel Ethernet Controller 800 Series based adapters

Some Windows* VF drivers from Release 22.9 or older may encounter errors when
loaded on a PF based on the Intel Ethernet Controller 800 Series on Linux KVM.
You may see errors and the VF may not load. This issue does not occur starting
with the following Windows VF drivers:

  • v40e64, v40e65: Version 1.5.65.0 and newer

To resolve this issue, download and install the latest iavf driver.

'VF X failed opcode 24' error message in dmesg on host

With a Microsoft Windows Server 2019 guest machine running on a Linux host, you
may see 'VF <vf_number> failed opcode 24' error messages in dmesg on the host.
This error is benign and does not affect traffic. Installing the latest iavf
driver in the guest will resolve the issue.

Windows guest OSs on a Linux host may not pass traffic across VLANs

The VF is not aware of the VLAN configuration if you use Load Balancing and
Failover (LBFO) to configure VLANs in a Windows guest. VLANs configured using
LBFO on a VF driver may result in failure to pass traffic.

SR-IOV virtual functions have identical MAC addresses

When you create multiple SR-IOV virtual functions, the VFs may have identical
MAC addresses. Only one VF will pass traffic, and all traffic on other VFs with
identical MAC addresses will fail. This is related to the
"MACAddressPolicy=persistent" setting in
/usr/lib/systemd/network/99-default.link.

To resolve this issue, edit the /usr/lib/systemd/network/99-default.link file
and change the MACAddressPolicy line to "MACAddressPolicy=none". For more
information, see the systemd.link man page.

MDD events in dmesg when creating maximum number of VLANs on the VF

When you create the maximum number of VLANs on the VF, you may see MDD events
in dmesg on the host. This is due to the asynchronous design of the iavf
driver. It always reports success to any VLAN requests, but the requests may
fail later. The guest OS could try to send traffic on a VLAN that is not
configured on the VF, which will cause a Malicious Driver Detection (MDD) event
in dmesg on the host.

This issue is cosmetic. You do not need to reload the PF driver.

'ip address' or 'ip link' command displays an error on a single-port NIC
with 245+ VFs

When you use the 'ip address' or 'ip link' command on a Linux host configured
with 245 or more VFs on a single-port adapter, you may encounter a "Buffer too
small for object" error. This is due to a known issue in the iproute2 tools.
Please use ifconfig instead of iproute2. You can install ifconfig via the
net-tools-deprecated package.

Support

For general information, go to the Intel support website at:
http://www.intel.com/support/

or the Intel Wired Networking project hosted by Sourceforge at:
http://sourceforge.net/projects/e1000

If an issue is identified with the released source code on a supported kernel
with a supported adapter, email the specific information related to the issue
to e1000-devel@lists.sf.net.

License

This program is free software; you can redistribute it and/or modify it under
the terms and conditions of the GNU General Public License, version 2, as
published by the Free Software Foundation.

This program is distributed in the hope it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc., 51 Franklin
St - Fifth Floor, Boston, MA 02110-1301 USA.

The full GNU General Public License is included in this distribution in the
file called "COPYING".

Copyright(c) 2017 - 2020 Intel Corporation.

Trademarks

Intel is a trademark or registered trademark of Intel Corporation or its
subsidiaries in the United States and/or other countries.

  • Other names and brands may be claimed as the property of others.
