Developer(s) | Jakub Kruszona-Zawadzki / Core Technology
---|---
Initial release | 30 May 2008 (v. 1.5.0)
Stable release | 4.56.6 / 23 September 2024
Preview release | 4.56.6 / 23 September 2024
Repository | github.com/moosefs/moosefs
Operating system | Linux, FreeBSD, NetBSD, macOS, Solaris, OpenIndiana
Type | Distributed file system
License | GPLv2 / proprietary
Website | https://moosefs.com
Moose File System (MooseFS) is an open-source, POSIX-compliant distributed file system developed by Core Technology. MooseFS aims to be a fault-tolerant, highly available, high-performance, scalable, general-purpose network distributed file system for data centers. Initially proprietary software, it was released to the public as open source on May 30, 2008.
Currently two editions of MooseFS are available:
- MooseFS - released under the GPLv2 license,
- MooseFS Professional Edition (MooseFS Pro) - released under a proprietary license in the form of binary packages.
Design
MooseFS follows design principles similar to those of Fossil, the Google File System, Lustre and Ceph. The file system comprises four components:
- Metadata server (MDS) — manages the location (layout) of files, file access and the namespace hierarchy. The current version of MooseFS supports multiple metadata servers and automatic failover. Clients talk to the MDS only to retrieve or update a file's layout and attributes; the data itself is transferred directly between clients and chunk servers. The metadata server is a user-space daemon; the metadata is kept in memory and lazily stored on local disk.
- Metalogger server — periodically pulls the metadata from the MDS to store it for backup. Since version 1.6.5, this is an optional feature.
- Chunk servers (CSS) — store the data and optionally replicate it among themselves. There can be many of them, though the scalability limit has not been published. The biggest cluster reported so far consists of 160 servers. The Chunk server is also a user-space daemon that relies on the underlying local file system to manage the actual storage.
- Clients — talk to both the MDS and CSS. MooseFS clients mount the file system in user space via FUSE (a simplified sketch of this interaction follows the list).
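The split of responsibilities between the metadata server and the chunk servers can be illustrated with a short sketch. The following Python fragment is a simplified, hypothetical model of the read path, not the actual MooseFS protocol or API; the class and function names are invented for illustration. It shows the key idea: the client asks the MDS only for a file's chunk layout and then fetches the chunks directly from the chunk servers.

```python
# Simplified, hypothetical model of the MooseFS read path (illustrative only;
# not the real MooseFS wire protocol or API).

class MetadataServer:
    """Keeps the namespace and the chunk layout of every file in memory."""
    def __init__(self):
        # path -> list of (chunk_id, [addresses of chunk servers holding a replica])
        self.layout = {}

    def lookup_chunks(self, path):
        # Only metadata travels between the client and the MDS.
        return self.layout[path]


class ChunkServer:
    """Stores chunks (up to 64 MiB each) on its local file system."""
    def __init__(self):
        self.chunks = {}  # chunk_id -> bytes

    def read_chunk(self, chunk_id):
        return self.chunks[chunk_id]


def client_read(mds, chunk_servers, path):
    """Read a whole file: layout from the MDS, data directly from chunk servers."""
    data = b""
    for chunk_id, replicas in mds.lookup_chunks(path):
        # The data itself never passes through the metadata server.
        data += chunk_servers[replicas[0]].read_chunk(chunk_id)
    return data
```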
Features
To achieve high reliability and performance, MooseFS offers the following features:
- Fault tolerance — MooseFS uses replication: data can be replicated across chunk servers, and the replication ratio (N) is set per file or directory. If N−1 replicas fail, the data is still available. At the moment MooseFS does not offer any other fault-tolerance technique, so fault tolerance for very big files requires a vast amount of space: N × filesize instead of filesize + (N × stripesize), as would be the case for RAID 4, RAID 5 or RAID 6. Version 4.x Pro of MooseFS implements 8+n erasure coding (a worked comparison of the space overhead follows this list).
- Striping — Large files are divided into chunks (up to 64 megabytes) that might be stored on different chunk servers in order to achieve higher aggregate bandwidth.
- Load balancing — MooseFS attempts to use storage resources evenly; the current algorithm appears to take only the consumed space into account.
- Security — apart from classical POSIX file permissions, since the 1.6 release MooseFS offers a simple, NFS-like authentication/authorization mechanism.
- Coherent snapshots — Quick, low-overhead snapshots.
- Transparent "trash bin" — Deleted files are retained for a configurable period of time.
- Data tiering / storage classes — the possibility to "label" servers, combine labels into definitions called "storage classes" and decide on which types of servers the data is stored.
- "Project" quotas support
- POSIX locks, flock locks support
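To make the space overhead mentioned under fault tolerance concrete, the short calculation below compares plain replication with an 8+n erasure-coded layout of the kind introduced in MooseFS 4.x Pro. It is a back-of-the-envelope sketch, not output of MooseFS itself, and the helper functions are hypothetical.

```python
# Back-of-the-envelope storage overhead for a single file (illustrative only).

def replication_usage(file_size, goal):
    """Replication with goal N stores N full copies of the file."""
    return goal * file_size

def erasure_coding_usage(file_size, data_parts=8, parity_parts=1):
    """An 8+n layout stores the data split into 8 parts plus n parity parts."""
    return file_size * (data_parts + parity_parts) / data_parts

one_tib = 2 ** 40
# Both configurations below tolerate the loss of two chunk servers:
print(replication_usage(one_tib, goal=3) / one_tib)             # 3.0  TiB of raw space
print(erasure_coding_usage(one_tib, parity_parts=2) / one_tib)  # 1.25 TiB of raw space
```

For the same level of redundancy against two failures, goal 3 replication consumes 3 TiB of raw space for a 1 TiB file, while 8+2 erasure coding consumes 1.25 TiB, which is why erasure coding is attractive for very large files.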
Hardware, software and networking
Similarly to other cluster-based file systems, MooseFS runs on commodity hardware with a POSIX-compliant operating system. TCP/IP is used as the interconnect.
MooseFS in figures
- Storage size is up to: 2^64 bytes = 16 EiB = 16 384 PiB
- Single file size is up to: 2^57 bytes = 128 PiB
- Number of files is up to: 2^31 ≈ 2.1 × 10^9
- Number of active clients is unlimited; in practice it depends on the number of file descriptors available in the system
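The limits above follow directly from the widths of the underlying counters; the following lines simply recompute them (a plain arithmetic check, not MooseFS code):

```python
# Recomputing the published MooseFS limits from the counter widths.
EiB = 2 ** 60
PiB = 2 ** 50

print(2 ** 64 / EiB)   # 16.0        -> total storage: 2^64 bytes = 16 EiB
print(2 ** 57 / PiB)   # 128.0       -> single file:   2^57 bytes = 128 PiB
print(2 ** 31)         # 2147483648  -> number of files ≈ 2.1 × 10^9
```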
See also
- BeeGFS
- Ceph
- Distributed file system
- GlusterFS
- Google File System
- List of file systems § Distributed fault-tolerant file systems
- LizardFS – a fork of MooseFS v. 1.6.x
- Lustre
References
- "Contributors to moosefs/moosefs". GitHub.
- "About us – Core Technology – MooseFS fault tolerant network distributed file system". Core Technology.
- "Date of the first public release: 2008-05-30". https://github.com/moosefs/moosefs/blob/master/README.md
- "MooseFS 1.5 (2008-05-30)". https://github.com/moosefs/moosefs/blob/master/NEWS
- "Support – documentation, status and best practices – MooseFS".
- "moosefs/NEWS at master · moosefs/moosefs". GitHub. 14 July 2022.
- "Releases · moosefs/moosefs". GitHub.
- "We also successfully compiled MooseFS from sources on OpenIndiana Hipster." https://moosefs.com/download.html Archived 2016-03-23 at the Wayback Machine.
- Mariusz Gądarowski (2010-04-01). "MooseFS: Bezpieczny i rozproszony system plików" [MooseFS: A secure and distributed file system] (PDF) (in Polish). Linux Magazine Poland.
- "MooseFS 3.0 Storage Classes Manual". https://moosefs.com/Content/Downloads/moosefs-storage-classes-manual.pdf
- "MooseFS Factsheet".