Lustre (file system)

Lustre
Developer(s): Cluster File Systems
Stable release: 1.6.0.1 / May 25, 2007
Operating system: Linux
Type: Distributed file system
License: GPL
Website: http://www.lustre.org

Lustre is a free-software distributed file system, generally used for large-scale cluster computing. The name is a portmanteau of Linux and cluster. The project aims to provide a file system that can serve clusters of tens of thousands of nodes with petabytes of storage capacity without compromising speed or security, and it is available under the GNU GPL.

Lustre is designed, developed and maintained by Cluster File Systems, Inc., with input from many other individuals and companies.

Many of the fastest supercomputers in the world are clusters using the Lustre file system for storage, such as systems at Oak Ridge National Laboratory, Pacific Northwest National Laboratory, Lawrence Livermore National Laboratory and Los Alamos National Laboratory.

Design

Each file stored on a Lustre file system is considered an object. Lustre presents all clients with standard POSIX semantics and concurrent read and write access to the shared objects. A Lustre file system has four functional units: a metadata server (MDS) that stores metadata; object storage targets (OSTs) that store the actual data; object storage servers (OSSs) that manage the OSTs; and the clients that access and use the data. OSTs are block-based devices. An MDS, OSS and OST can reside on the same node or on different nodes. Lustre does not talk to or manage the OSTs directly; it delegates this responsibility to the OSSs in order to remain scalable on large clusters and supercomputers.
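Because clients see standard POSIX semantics, applications use ordinary file-system calls; metadata lookups go to the MDS and the data itself is read and written on the OSTs through their OSSs, transparently to the program. The following is a minimal sketch in C, assuming (purely for illustration) that the file system is already mounted at /mnt/lustre:

    /* Ordinary POSIX I/O against a Lustre mount.
     * The mount point and file name are hypothetical; the MDS resolves
     * the pathname, and the file's data lives on one or more OSTs
     * reached through their OSS nodes. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/mnt/lustre/example.dat";
        const char msg[] = "written with standard POSIX calls\n";
        char buf[64];

        int fd = open(path, O_CREAT | O_RDWR, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, msg, sizeof msg - 1) < 0)    /* data goes to the OSTs */
            perror("write");
        if (pread(fd, buf, sizeof msg - 1, 0) < 0) /* other clients may read and
                                                      write the file concurrently */
            perror("pread");
        close(fd);
        return 0;
    }

Any node configured as a Lustre client can run the same program against the same path, which is what provides the shared, concurrent access described above.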

Implementation

On a massively parallel processor (MPP) system, computational processors can access a Lustre file system by redirecting their I/O requests to the job-launcher service node, provided that node is configured as a Lustre client. Although this is the simplest method, it gives poor overall performance. A slightly more complicated approach, which delivers much better performance, is to use the liblustre library. Liblustre is a user-level library that allows computational processors to mount and use the Lustre file system as a client, bypassing the redirection to the service node. With liblustre, the computational processors can access a Lustre file system even if the service node on which the job was launched is not a Lustre client. Liblustre moves data directly between application space and the Lustre OSSs without an intervening copy through the lightweight kernel, giving computational processors low-latency, high-bandwidth access to the file system. Because of this performance and scalability, it is the most suitable way to use Lustre on MPP systems. Liblustre is the biggest design difference between Lustre on MPPs such as the Cray XT3 and on conventional clustered systems.
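The key point is that liblustre does not change the programming model: a compute-node application is still written against the usual POSIX calls, and building it against liblustre (the exact build and linking procedure is platform-specific and is an assumption here) causes those calls to be serviced in user space and sent straight to the OSSs rather than through a kernel client or the service-node redirection. A sketch of such an I/O kernel, with the path and per-process naming chosen only for illustration:

    /* A compute-node checkpoint writer using plain POSIX calls.
     * Built normally it relies on a kernel Lustre client (or the slow
     * service-node redirection); built against liblustre the same calls
     * run in user space and the data moves directly from the
     * application's buffers to the OSSs. Path and rank handling are
     * hypothetical. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int rank = (argc > 1) ? atoi(argv[1]) : 0;  /* e.g. an MPI rank */
        char path[64];
        snprintf(path, sizeof path, "/mnt/lustre/checkpoint.%04d", rank);

        double state[1024] = {0};                   /* stand-in for real data */

        int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, state, sizeof state) != (ssize_t)sizeof state)
            perror("write");
        close(fd);
        return 0;
    }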

Lustre technology has been integrated into the HP StorageWorks Scalable File Share product.

See also