
NVME-CONNECT(1) NVMe Manual NVME-CONNECT(1)

NAME

nvme-connect - Connect to a Fabrics controller.

SYNOPSIS

nvme connect

[--transport=<trtype> | -t <trtype>]
[--nqn=<subnqn> | -n <subnqn>]
[--traddr=<traddr> | -a <traddr>]
[--trsvcid=<trsvcid> | -s <trsvcid>]
[--host-traddr=<traddr> | -w <traddr>]
[--hostnqn=<hostnqn> | -q <hostnqn>]
[--hostid=<hostid> | -I <hostid>]
[--nr-io-queues=<#> | -i <#>]
[--nr-write-queues=<#> | -W <#>]
[--nr-poll-queues=<#> | -P <#>]
[--queue-size=<#> | -Q <#>]
[--keep-alive-tmo=<#> | -k <#>]
[--reconnect-delay=<#> | -c <#>]
[--ctrl-loss-tmo=<#> | -l <#>]
[--duplicate_connect | -D]
[--disable_sqflow | -d]
[--hdr_digest | -g]
[--data_digest | -G]

DESCRIPTION

Create a transport connection to a remote system (specified by --traddr and --trsvcid) and create an NVMe over Fabrics controller for the NVMe subsystem specified by the --nqn option.

OPTIONS

-t <trtype>, --transport=<trtype>

This field specifies the network fabric being used for an NVMe over Fabrics network. Current string values include:
Value  Definition
rdma   The network fabric is an RDMA network (RoCE, iWARP, InfiniBand, basic RDMA, etc.)
fc     (WIP) The network fabric is a Fibre Channel network.
loop   Connect to an NVMe over Fabrics target on the local host
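
For example, a loopback connection needs only the transport and the subsystem NQN, since the target runs on the local host (the NQN below is the placeholder used in the EXAMPLES section):

# nvme connect --transport=loop \
--nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432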

-n <subnqn>, --nqn=<subnqn>

This field specifies the name for the NVMe subsystem to connect to.

-a <traddr>, --traddr=<traddr>

This field specifies the network address of the Controller. For transports using IP addressing (e.g. rdma) this should be an IP-based address (e.g. IPv4).

-s <trsvcid>, --trsvcid=<trsvcid>

This field specifies the transport service id. For transports using IP addressing (e.g. rdma) this field is the port number. By default, the IP port number for the RDMA transport is 4420.
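
For example, to state the RDMA port explicitly (the address and NQN are the illustrative values used in the EXAMPLES section):

# nvme connect --transport=rdma --traddr=192.168.1.3 --trsvcid=4420 \
--nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432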

-w <traddr>, --host-traddr=<traddr>

This field specifies the network address used on the host to connect to the Controller.

-q <hostnqn>, --hostnqn=<hostnqn>

Overrides the default Host NQN that identifies the NVMe Host. If this option is not specified, the default is read from /etc/nvme/hostnqn first. If that does not exist, the autogenerated NQN value from the NVMe Host kernel module is used next. The Host NQN uniquely identifies the NVMe Host.
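
For example, to override the Host NQN for a single connection (the host NQN below is only a placeholder):

# nvme connect --transport=rdma --traddr=192.168.1.3 \
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000 \
--nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432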

-I <hostid>, --hostid=<hostid>

Overrides the default Host Identifier, a UUID (Universally Unique Identifier) that identifies the NVMe Host.

-i <#>, --nr-io-queues=<#>

Overrides the default number of I/O queues created by the driver.

-W <#>, --nr-write-queues=<#>

Adds additional queues that will be used for write I/O.

-P <#>, --nr-poll-queues=<#>

Adds additional queues that will be used for polling latency sensitive I/O.

-Q <#>, --queue-size=<#>

Overrides the default number of elements in the I/O queues created by the driver.
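
For example, to request eight I/O queues of 256 elements each (illustrative values; the driver may apply its own limits):

# nvme connect --transport=rdma --traddr=192.168.1.3 \
--nr-io-queues=8 --queue-size=256 \
--nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432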

-k <#>, --keep-alive-tmo=<#>

Overrides the default keep alive timeout (in seconds).

-c <#>, --reconnect-delay=<#>

Overrides the default delay (in seconds) before a reconnect is attempted after a connection loss.

-l <#>, --ctrl-loss-tmo=<#>

Overrides the default controller loss timeout period (in seconds).
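
For example, to retry a lost connection every 10 seconds and give up after 600 seconds of controller loss (illustrative values):

# nvme connect --transport=rdma --traddr=192.168.1.3 \
--reconnect-delay=10 --ctrl-loss-tmo=600 \
--nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432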

-D, --duplicate_connect

Allows duplicate connections between the same transport host and subsystem port.

-d, --disable_sqflow

Disables SQ flow control to omit head doorbell updates for submission queues when sending NVMe completions.

-g, --hdr_digest

Generates/verifies header digest (TCP).

-G, --data_digest

Generates/verifies data digest (TCP).
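
For example, the following enables both digests on a TCP connection; this assumes the kernel and this nvme-cli build support the NVMe/TCP transport, which is not listed in the transport table above (address, port, and NQN are placeholders):

# nvme connect --transport=tcp --traddr=192.168.1.3 --trsvcid=4420 \
--hdr_digest --data_digest \
--nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432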

EXAMPLES

• Connect to a subsystem named nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432 on the IPv4 address 192.168.1.3. Port 4420 is used by default:

# nvme connect --transport=rdma --traddr=192.168.1.3 \
--nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
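
• Illustrative only: establish a second connection to the same subsystem (for example, over an additional path) by allowing duplicate connections:

# nvme connect --transport=rdma --traddr=192.168.1.3 --duplicate_connect \
--nqn=nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432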

SEE ALSO

nvme-discover(1) nvme-connect-all(1)

AUTHORS

This was co-written by Jay Freyensee[1] and Christoph Hellwig[2] for Keith Busch[3].

NVME

Part of the nvme-user suite

NOTES

1.
Jay Freyensee
mailto:james.p.freyensee@intel.com
2.
Christoph Hellwig
mailto:hch@lst.de
3.
Keith Busch
mailto:keith.busch@intel.com
04/15/2019 NVMe