Matrix performance section
Comparison of NFSv4 vs. NFSv3 for common use cases
ID | test | test tool | status | owner | notes |
---|---|---|---|---|---|
IV.A.1 | Time to perform sequence of unique read/write operations | Iozone | In progress | Bull | Done by Bull in 2004 |
IV.A.2 | Time to perform sequence of cacheable read/write operations | Iozone | Open | Bull | |
IV.A.3 | Random reads/writes/opens from many clients to one server | Iozone | In progress | Bull | Done by Bull in 2004 |
IV.A.4 | Industry standard loads | SPECsfs, SPECweb99 | New | | Tools do not exist. |
IV.A.5 | Time to read file from beginning to end and then rewrite it | IOzone | In progress | Bull | Part of IOZone standard tests |
IV.A.6 | Time to append data to a log file sporadically over time | Iozone | New | | |
IV.A.7 | Metadata - open/close intensive workload | Iozone | New | ||
IV.A.8 | Metadata - directory scanning | Iozone | Done | Bull | Directory scanning over NFSv4 is analysed here. Time to stat a directory is O(n²) where n is the number of files in the directory. |
IV.A.9 | Metadata - create/delete | Iozone | New | ||
IV.A.10 | Metadata - changing attributes (chown, chmod) while dir scanning | IOZone | New | ||
IV.A.11 | How many locks can be made and released over time | LTP | Open | Bull | |
IV.A.12 | Comparison of speeds attainable with different NICs | | New | | |
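The metadata rows above (IV.A.8 directory scanning, IV.A.9 create/delete) can also be exercised without IOzone. The following is a minimal sketch, not part of the original plan; the mount point /mnt/nfs4 and the file counts are placeholders for the system under test.

```python
#!/usr/bin/env python3
"""Rough timing harness for the metadata rows above (IV.A.8 directory
scanning, IV.A.9 create/delete). A sketch only; the mount point and the
file counts are placeholders for the system under test."""
import os
import time

MOUNT = "/mnt/nfs4"                # assumed NFSv4 mount point
COUNTS = [1000, 10000, 100000]

def scan_directory(path):
    """stat() every entry, as a readdir-heavy client would."""
    start = time.monotonic()
    for name in os.listdir(path):
        os.stat(os.path.join(path, name))
    return time.monotonic() - start

for n in COUNTS:
    workdir = os.path.join(MOUNT, f"scan_{n}")
    os.makedirs(workdir, exist_ok=True)

    start = time.monotonic()                       # IV.A.9: create
    for i in range(n):
        open(os.path.join(workdir, f"f{i}"), "w").close()
    create_time = time.monotonic() - start

    scan_time = scan_directory(workdir)            # IV.A.8: scan + stat

    start = time.monotonic()                       # IV.A.9: delete
    for i in range(n):
        os.remove(os.path.join(workdir, f"f{i}"))
    delete_time = time.monotonic() - start

    print(f"n={n}: create {create_time:.2f}s, scan {scan_time:.2f}s, "
          f"delete {delete_time:.2f}s")
```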
NFSv4 on TCP vs. RDMA
ID | test | test tool | status | owner | notes |
---|---|---|---|---|---|
IV.B | Compare latency, throughput, etc. of NFSv4 on TCP vs. RDMA | | New | | Only prototypes exist currently; possibly will be more fully implemented by end of 2005 |
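A rough idea of how the TCP vs. RDMA comparison could be scripted once both transports are mountable. This is a sketch only; the mount points are placeholders, and the sequential pass may be served from the client cache unless the export is remounted between the write and the read phases.

```python
#!/usr/bin/env python3
"""Sketch for IV.B: compare small-read latency and sequential read
throughput on two mounts of the same export, one over TCP and one over
RDMA. Mount points are placeholders; remount between the write and the
read phases if cold-cache numbers are wanted."""
import os
import time

MOUNTS = {"tcp": "/mnt/nfs4_tcp", "rdma": "/mnt/nfs4_rdma"}  # assumed paths
SMALL = 4096
LARGE = 64 * 1024 * 1024
BLOCK = 1024 * 1024

for label, mnt in MOUNTS.items():
    path = os.path.join(mnt, "probe.dat")
    with open(path, "wb") as f:          # working file on the export
        f.write(os.urandom(LARGE))

    fd = os.open(path, os.O_RDONLY)

    # latency: many scattered small reads
    start = time.monotonic()
    for i in range(1000):
        os.pread(fd, SMALL, (i * 7919 * SMALL) % (LARGE - SMALL))
    lat_us = (time.monotonic() - start) / 1000 * 1e6

    # throughput: one sequential pass
    os.lseek(fd, 0, os.SEEK_SET)
    start = time.monotonic()
    while os.read(fd, BLOCK):
        pass
    mb_s = LARGE / (time.monotonic() - start) / 1e6
    os.close(fd)

    print(f"{label}: ~{lat_us:.0f} us per {SMALL}-byte read, "
          f"~{mb_s:.0f} MB/s sequential")
```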
Test performance on different local filesystems
ID | test | test tool | status | owner | notes |
---|---|---|---|---|---|
IV.C.1 | Analyze whether file system choice affects performance | Iozone | Done | Bull | NFSv4 performance does not depend on the local file system used |
IV.C.2 | Test performance with ext2 on server with metadata / ACLs | IOzone/FFSB | New | | |
IV.C.3 | Test performance with ext3 on server with metadata / ACLs | IOzone/FFSB | New | | |
IV.C.4 | Test performance with Reiser3 on server with metadata / ACLs | IOzone/FFSB | New | | |
IV.C.5 | Test performance with XFS on server with metadata / ACLs | IOzone/FFSB | New | | |
IV.C.6 | Test performance with JFS on server with metadata / ACLs | IOzone/FFSB | New | | |
IV.C.7 | Test performance with Reiser4 on server with metadata / ACLs | IOzone/FFSB | New | | |
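For the "metadata / ACLs" runs, a simple attribute-churn workload can complement IOzone/FFSB. The sketch below uses chmod/utime as a stand-in for real ACL changes (which would go through setfacl or an ACL library); the mount point and file count are placeholders.

```python
#!/usr/bin/env python3
"""Attribute-churn sketch for the 'metadata / ACLs' runs (IV.C.2-IV.C.7).
chmod/utime stand in for real ACL changes here; the mount point and the
file count are placeholders. Rerun against the same export while the
server's backing store is reformatted as ext2, ext3, Reiser3, XFS, ..."""
import os
import time

MOUNT = "/mnt/nfs4"        # assumed client-side mount of the export
NFILES = 5000

workdir = os.path.join(MOUNT, "meta_acl")
os.makedirs(workdir, exist_ok=True)
paths = [os.path.join(workdir, f"f{i}") for i in range(NFILES)]
for p in paths:
    open(p, "w").close()

start = time.monotonic()
for p in paths:
    os.chmod(p, 0o640)     # attribute change (SETATTR on the wire)
    os.utime(p)            # timestamp update
elapsed = time.monotonic() - start
print(f"{NFILES} chmod+utime pairs in {elapsed:.2f}s "
      f"({NFILES / elapsed:.0f} files/s)")
```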
Test performance on different cluster filesystems
ID | test | test tool | status | owner | notes |
---|---|---|---|---|---|
IV.D.1 | Test performance when using GFS cluster file system | | New | | |
IV.D.2 | Test performance when using Lustre cluster file system | | New | | |
IV.D.3 | Test performance when using GPFS cluster file system | | New | | |
IV.D.4 | Test performance when using Polyserve cluster file system | | New | | |
Evaluation in various load scenarios
ID | test | test tool | status | owner | notes |
---|---|---|---|---|---|
IV.E.1 | Test performance with large numbers of small (<4k) files | ad hoc tool | In progress | Bull | Most NFS functionality is not affected by the number of files (tested with 2,000,000 empty files); see the stat and open sub-topics below. |
IV.E.1 sub-topic 1 | Test performance with large numbers of small (<4k) files - stat function; empty files | ad hoc tool | Done | Bull | stat response time is O(n²). More details here |
IV.E.1 sub-topic 2 | Test performance with large numbers of small (<4k) files - open function; empty files | ad hoc tool | Near done | Bull | Open is O(n), with a bottleneck at n = 1,620,000. More details here. Comparisons with local file system and NFSv3. |
IV.E.2 | Test performance with a few very large (>1G) files | IOzone | Open | Bull | Goals need clarification: are we manipulating files (ACLs/metadata/moving...) or reading/writing files? |
IV.E.3 | 4-16 clients generating high load on 1 server in lab environment | Mail/user dir | New | ||
IV.E.4 | 2000-5000 clients on 5-10 servers in production environment | Clusters | New | NetApp | |
IV.E.5 | NFS "Cluster" scenario with 1000 clients and several servers | Film industry, HPC or visualization workload | New | ||
IV.E.6 | NFS front end with cluster backend; 100 clients | | New | | |
IV.E.7 | Pure cluster; 100 clients | | New | | |
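The IV.E.1 sub-topic results (stat roughly O(n²), open roughly O(n)) can be re-checked with a small ad hoc script such as the sketch below. The mount point and the much smaller file counts are placeholders; the original Bull runs went up to about 2,000,000 files.

```python
#!/usr/bin/env python3
"""Sketch reproducing the IV.E.1 sub-topic measurements: how stat and
open behave as the number of empty files in one directory grows. The
mount point and the (much smaller) file counts are placeholders."""
import os
import time

MOUNT = "/mnt/nfs4"               # assumed NFSv4 mount
STEPS = [10_000, 50_000, 100_000]

workdir = os.path.join(MOUNT, "smallfiles")
os.makedirs(workdir, exist_ok=True)

created = 0
for target in STEPS:
    # grow the directory up to 'target' empty files
    for i in range(created, target):
        open(os.path.join(workdir, f"f{i}"), "w").close()
    created = target

    stride = max(1, target // 1000)
    sample = [os.path.join(workdir, f"f{i}") for i in range(0, target, stride)]

    t0 = time.monotonic()
    for p in sample:
        os.stat(p)                          # sub-topic 1: stat cost vs. n
    stat_t = time.monotonic() - t0

    t0 = time.monotonic()
    for p in sample:
        os.close(os.open(p, os.O_RDONLY))   # sub-topic 2: open cost vs. n
    open_t = time.monotonic() - t0

    print(f"n={target}: {len(sample)} stats in {stat_t:.3f}s, "
          f"{len(sample)} opens in {open_t:.3f}s")
```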
Evaluation in stress scenarios
ID | test | test tool | status | owner | notes |
---|---|---|---|---|---|
IV.F.1 | Measure performance of server when in limited resource situations | | New | | |
IV.F.2 | Measure performance of client when in limited resource situations | | New | | |
IV.F.3 | Graceful failure mode | | New | | See Chuck for more info |
IV.F.4 | Measure memory/network/CPU efficiency of client for fixed workload | IOzone/FFSB | In progress | Bull | |
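For the client side of the limited-resource tests (IV.F.2, IV.F.4), one simple approach is to cap the test process with setrlimit and rerun a fixed workload. Below is a sketch with placeholder mount point, limit and file size; server-side limits (IV.F.1) have to be imposed on the server host itself.

```python
#!/usr/bin/env python3
"""Client-side sketch for IV.F.2/IV.F.4: run a fixed read workload while
this process is held under a small address-space limit. Mount point,
limit and file size are placeholders."""
import os
import resource
import time

MOUNT = "/mnt/nfs4"                       # assumed NFSv4 mount
LIMIT = 256 * 1024 * 1024                 # 256 MB address-space cap
SIZE = 512 * 1024 * 1024
BLOCK = 1024 * 1024

path = os.path.join(MOUNT, "stress.dat")
with open(path, "wb") as f:               # create the working file first
    for _ in range(SIZE // BLOCK):
        f.write(b"\0" * BLOCK)

# constrain this process before running the measured workload
resource.setrlimit(resource.RLIMIT_AS, (LIMIT, LIMIT))

start = time.monotonic()
with open(path, "rb") as f:
    while f.read(BLOCK):
        pass
elapsed = time.monotonic() - start
print(f"read {SIZE >> 20} MB in {elapsed:.2f}s under a "
      f"{LIMIT >> 20} MB RLIMIT_AS cap")
```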
Scalability (performance)
ID | test | test tool | status | owner | notes |
---|---|---|---|---|---|
IV.G.1 | Verify server scalability with clients generating various basic requests (ACCESS, GETATTR, etc.) | Iozone | New | | |
IV.G.2 | Verify server scalability with clients using compound requests | Iozone | New | ||
IV.G.3 | Measure effects of scaling up number of connections | IOZone | Open | Bull | SMP - Measure number of mounts per second on client and server |
IV.G.4 | Measure effects of increasing number of files | Ad hoc tool | Open | Bull | |
IV.G.5 | Measure effects of increasing file size (with/without cache) | IOzone | Open | Bull | |
IV.G.6 | Measure effects when increasing size of on-the-wire NFS read or write operations | Iozone | Open | Bull | |
IV.G.7 | Measure performance when scaling CPU count per node on SMP | | | | |
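As a starting point for the connection/client scaling measurements (IV.G.3, IV.G.5), the sketch below varies the number of reader processes on one client and reports aggregate throughput. Mount point, file size and process counts are placeholders; client caching will inflate the numbers unless the export is remounted between runs.

```python
#!/usr/bin/env python3
"""Sketch for IV.G.3/IV.G.5: aggregate read throughput as the number of
concurrent reader processes on one client grows. Mount point, file size
and process counts are placeholders."""
import multiprocessing as mp
import os
import time

MOUNT = "/mnt/nfs4"                      # assumed NFSv4 mount
SIZE = 128 * 1024 * 1024                 # per-reader file size
BLOCK = 1024 * 1024
WORKERS = [1, 2, 4, 8]

def prepare(i):
    """Create one working file per prospective reader."""
    path = os.path.join(MOUNT, f"scale_{i}.dat")
    with open(path, "wb") as f:
        for _ in range(SIZE // BLOCK):
            f.write(b"\0" * BLOCK)
    return path

def read_all(path):
    """Sequentially read the whole file."""
    with open(path, "rb") as f:
        while f.read(BLOCK):
            pass

if __name__ == "__main__":
    paths = [prepare(i) for i in range(max(WORKERS))]
    for n in WORKERS:
        start = time.monotonic()
        procs = [mp.Process(target=read_all, args=(paths[i],)) for i in range(n)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        elapsed = time.monotonic() - start
        print(f"{n} readers: {n * SIZE / elapsed / 1e6:.0f} MB/s aggregate")
```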