# Simple Performance Testing
This section provides an example of testing the performance of Fusion File Share Server to assess its ability to utilize the entire network bandwidth. The test can help identify potential bottlenecks in your environment.
Tools Used:

- `iperf3`: measures raw network throughput between the client and the server
- `fio`: measures file I/O throughput on the mounted share

Assumptions and Environment:

- The test is performed on a single Fusion File Share Server and a single Windows client.
- The server's IP address is `10.210.0.3`.
- The server has a share that's mapped to the `Z:` drive on the client.
- There's no additional network traffic on the server and client, and minimal network traffic on the LAN during the test.

First, use `iperf` to determine the maximum network throughput. Then, test the share's throughput using `fio` to evaluate whether it can fully utilize the available network bandwidth.
## Network Throughput Test with iperf
1. Install `iperf` on the server and the client:

   - On the server: Use your package manager to install `iperf3` (the following example is for Debian or Ubuntu; for other distributions, use the appropriate package manager):

     ```shell
     sudo apt update
     sudo apt install iperf3
     ```

   - On the client: Download the latest release of `iperf3` for Windows from here, and extract it to `C:\iperf3`.
2. Run the test:

   - On the server, in a new terminal, start the `iperf3` server:

     ```shell
     iperf3 -s
     ```

   - On the client, open a command prompt, and run the following command to connect to the server (replace `10.210.0.3` with the IP address of your server):

     ```
     C:\> C:\iperf3\iperf3.exe -c 10.210.0.3
     ```

     If a single TCP stream cannot saturate the link, add the `-P` option (for example, `-P 4`) to open several parallel streams.
The test runs for about 10 seconds by default, and its output will resemble:
```
C:\> iperf3.exe -c 10.210.0.3
Connecting to host 10.210.0.3, port 5201
[  5] local 10.210.0.2 port 49837 connected to 10.210.0.3 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   206 MBytes  1.72 Gbits/sec
[  5]   1.00-2.00   sec   227 MBytes  1.90 Gbits/sec
[  5]   2.00-3.00   sec   230 MBytes  1.93 Gbits/sec
[  5]   3.00-4.01   sec   231 MBytes  1.92 Gbits/sec
[  5]   4.01-5.00   sec   226 MBytes  1.92 Gbits/sec
[  5]   5.00-6.01   sec   238 MBytes  1.98 Gbits/sec
[  5]   6.01-7.00   sec   234 MBytes  1.98 Gbits/sec
[  5]   7.00-8.01   sec   238 MBytes  1.98 Gbits/sec
[  5]   8.01-9.00   sec   233 MBytes  1.97 Gbits/sec
[  5]   9.00-10.01  sec   229 MBytes  1.91 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.01  sec  2.24 GBytes  1.92 Gbits/sec  sender
[  5]   0.00-10.03  sec  2.24 GBytes  1.92 Gbits/sec  receiver

iperf Done.
```
From the output, you can determine the maximum network throughput between the server and the client. In this example, the throughput is around 1.92 Gbits/sec, which is about 240 MB/s.
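The conversion from the `iperf3` bitrate to bytes per second is simple arithmetic, which you can reproduce for your own measurement:

```shell
# iperf3 reports decimal gigabits per second; multiply by 1000 (Gbit -> Mbit)
# and divide by 8 (bits per byte) to get decimal megabytes per second.
# For the 1.92 Gbits/sec measured above:
awk 'BEGIN { printf "%.0f MB/s\n", 1.92 * 1000 / 8 }'
# prints: 240 MB/s
```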
## SMB Throughput Test with fio
To test the throughput of your SMB share, you can use the popular `fio` tool. It allows you to simulate various I/O patterns and measure the performance of file I/O, making it ideal for evaluating the performance of an SMB server.

The configuration file `test.fio` you created earlier contains two jobs: one for sequential writes and another for sequential reads. These jobs write and read 5 GB of data to and from the share (five 1 GB jobs for each pattern).
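The contents of `test.fio` are defined in the earlier section and are not repeated here. As a rough illustration only, a configuration matching the job names and sizes visible in the example output might look like the following; treat every parameter as an assumption and use your actual file:

```ini
; Hypothetical sketch only -- not the actual test.fio from the earlier section.
; Job names and sizes are inferred from the example output.
[global]
size=1g        ; 1 GB per job
numjobs=5      ; 5 jobs per section, 5 GB total per section
bs=8m          ; assumed from the ~8 MiB average I/O size in the output
thread

[sequential-write]
rw=write

[sequential-read]
stonewall      ; assumed: start reads only after the writes finish
rw=read
```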
1. Install `fio` on the client:

   Download the latest release of `fio` for Windows from here. The installer will extract it to `C:\Program Files\fio`.
2. Run the test:

   > **Note:** Starting with Windows 11 24H2 and Windows Server 2025, clients require SMB message signing by default, which might affect client performance and skew the results. To ensure accurate results, disable message signing on the client by running the following PowerShell command (re-enable it after testing by setting the value back to `$true`):
   >
   > ```powershell
   > Set-SmbClientConfiguration -RequireSecuritySignature $false
   > ```

   Run the `fio` tool with the configuration file you created earlier:

   ```
   C:\> Z:
   Z:\> "C:\Program Files\fio\fio.exe" test.fio
   ```
The output will resemble:
```
fio-3.37
Starting 10 threads
sequential-write-direct: Laying out IO file (1 file / 1024MiB)
sequential-write: Laying out IO file (1 file / 1024MiB)
sequential-write: Laying out IO file (1 file / 1024MiB)
sequential-write: Laying out IO file (1 file / 1024MiB)
sequential-write: Laying out IO file (1 file / 1024MiB)
sequential-read: Laying out IO file (1 file / 1024MiB)
sequential-read: Laying out IO file (1 file / 1024MiB)
sequential-read: Laying out IO file (1 file / 1024MiB)
sequential-read: Laying out IO file (1 file / 1024MiB)
sequential-read: Laying out IO file (1 file / 1024MiB)
Jobs: 5 (f=2): [_(5),f(2),R(1),f(1),R(1)][100.0%][r=238MiB/s][r=29 IOPS][eta 00m:00s]
sequential-write-direct: (groupid=0, jobs=10): err= 0: pid=3348: Thu May 23 12:42:54 2024
  read: IOPS=25, BW=205MiB/s (215MB/s)(5120MiB/25026msec)
    slat (usec): min=330, max=8734, avg=678.68, stdev=656.30
    clat (msec): min=92, max=341, avg=189.41, stdev=27.46
     lat (msec): min=92, max=346, avg=190.09, stdev=27.58
    clat percentiles (msec):
     |  1.00th=[  110],  5.00th=[  153], 10.00th=[  163], 20.00th=[  169],
     | 30.00th=[  176], 40.00th=[  182], 50.00th=[  188], 60.00th=[  194],
     | 70.00th=[  203], 80.00th=[  211], 90.00th=[  222], 95.00th=[  232],
     | 99.00th=[  262], 99.50th=[  271], 99.90th=[  342], 99.95th=[  342],
     | 99.99th=[  342]
   bw (  KiB/s): min=97661, max=327045, per=100.00%, avg=213054.79, stdev=9158.66, samples=242
   iops        : min=   11, max=   39, avg=23.82, stdev= 1.21, samples=242
  write: IOPS=27, BW=218MiB/s (229MB/s)(5120MiB/23474msec); 0 zone resets
    slat (msec): min=79, max=353, avg=180.41, stdev=31.09
    clat (nsec): min=634, max=5567.9k, avg=62988.34, stdev=271977.83
     lat (msec): min=79, max=353, avg=180.48, stdev=31.11
    clat percentiles (nsec):
     |  1.00th=[    1012],  5.00th=[    1464], 10.00th=[    1688],
     | 20.00th=[    2064], 30.00th=[    2512], 40.00th=[    3024],
     | 50.00th=[    3856], 60.00th=[   19584], 70.00th=[   29056],
     | 80.00th=[   63744], 90.00th=[  130560], 95.00th=[  205824],
     | 99.00th=[  782336], 99.50th=[ 1679360], 99.90th=[ 5537792],
     | 99.95th=[ 5537792], 99.99th=[ 5537792]
   bw (  KiB/s): min=144571, max=324207, per=100.00%, avg=226041.45, stdev=8452.16, samples=228
   iops        : min=   14, max=   37, avg=25.31, stdev= 1.11, samples=228
  lat (nsec)   : 750=0.16%, 1000=0.23%
  lat (usec)   : 2=8.67%, 4=16.09%, 10=3.05%, 20=1.80%, 50=8.67%
  lat (usec)   : 100=3.98%, 250=5.39%, 500=1.02%, 750=0.39%, 1000=0.16%
  lat (msec)   : 2=0.31%, 10=0.08%, 100=0.39%, 250=48.52%, 500=1.09%
  cpu          : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=640,640,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=205MiB/s (215MB/s), 205MiB/s-205MiB/s (215MB/s-215MB/s), io=5120MiB (5369MB), run=25026-25026msec
  WRITE: bw=218MiB/s (229MB/s), 218MiB/s-218MiB/s (229MB/s-229MB/s), io=5120MiB (5369MB), run=23474-23474msec
```
From the output, you can determine the read and write throughput of the share. In this example, the write throughput is approximately 229 MB/s, and the read throughput is around 215 MB/s, indicating a successful result. Ideally, these values should be close to the maximum network throughput determined earlier with `iperf`, allowing for minor degradation due to network protocol overhead.
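To put a number on that degradation, you can express the measured SMB throughput as a percentage of the raw network throughput. A minimal sketch, using the figures from this example (substitute your own measurements):

```shell
# SMB throughput as a percentage of the iperf3 network throughput.
# net = 240 MB/s from iperf3; 229 MB/s write and 215 MB/s read from fio.
awk 'BEGIN {
    net = 240
    printf "write efficiency: %.0f%%\n", 229 / net * 100
    printf "read efficiency:  %.0f%%\n", 215 / net * 100
}'
# prints: write efficiency: 95%
#         read efficiency:  90%
```

Efficiency in the 90-95% range is typical once SMB protocol overhead is accounted for; significantly lower values suggest a bottleneck worth investigating.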
## Investigating Results
If the SMB throughput does not closely match the network throughput, consider investigating the following areas: