OCFS2 filesystem
================
OCFS2 is a general purpose extent based shared disk cluster file
system with many similarities to ext3.  It supports 64 bit inode
numbers, and has automatically extending metadata groups which may
also make it attractive for non-clustered use.

You'll want to install the ocfs2-tools package in order to at least
get "mount.ocfs2" and "ocfs2_hb_ctl".

Project web page:    http://oss.oracle.com/projects/ocfs2
Tools web page:      http://oss.oracle.com/projects/ocfs2-tools
OCFS2 mailing lists: http://oss.oracle.com/projects/ocfs2/mailman/

All code copyright 2005 Oracle except when otherwise noted.

CREDITS:
Lots of code taken from ext3 and other projects.

Authors in alphabetical order:
Joel Becker   <joel.becker@oracle.com>
Zach Brown    <zach.brown@oracle.com>
Mark Fasheh   <mfasheh@suse.com>
Kurt Hackel   <kurt.hackel@oracle.com>
Tao Ma        <tao.ma@oracle.com>
Sunil Mushran <sunil.mushran@oracle.com>
Manish Singh  <manish.singh@oracle.com>
Tiger Yang    <tiger.yang@oracle.com>

Caveats
=======
Features which OCFS2 does not support yet:
	- Directory change notification (F_NOTIFY)
	- Distributed Caching (F_SETLEASE/F_GETLEASE/break_lease)

Mount options
=============

OCFS2 supports the following mount options (a few illustrative C
sketches follow the list):
(*) == default

barrier=1               Enables/disables barriers: barrier=0 disables
                        them, barrier=1 enables them.
errors=remount-ro(*)    Remount the filesystem read-only on an error.
errors=panic            Panic and halt the machine if an error occurs.
intr (*)                Allow signals to interrupt cluster operations.
nointr                  Do not allow signals to interrupt cluster
                        operations.
atime_quantum=60(*)     OCFS2 will not update atime unless this number
                        of seconds has passed since the last update.
                        Set to zero to always update atime.
data=ordered (*)        All data is forced directly out to the main
                        file system prior to its metadata being
                        committed to the journal.
data=writeback          Data ordering is not preserved; data may be
                        written into the main file system after its
                        metadata has been committed to the journal.
preferred_slot=0(*)     During mount, try to use this filesystem slot
                        first.  If it is in use by another node, the
                        first empty one found will be chosen.  Invalid
                        values will be ignored.
commit=nrsec (*)        OCFS2 can be told to sync all its data and
                        metadata every 'nrsec' seconds.  The default
                        value is 5 seconds.  This means that if you
                        lose your power, you will lose at most the
                        latest 5 seconds of work (your filesystem will
                        not be damaged though, thanks to the
                        journaling).  This default value (or any low
                        value) will hurt performance, but it's good
                        for data-safety.  Setting it to 0 will have
                        the same effect as leaving it at the default
                        (5 seconds).  Setting it to very large values
                        will improve performance.
localalloc=8(*)         Allows a custom localalloc size in MB.  If the
                        value is too large, the filesystem will
                        silently revert it to the default.
localflocks             Disables cluster-aware flock; flock locks then
                        only apply on the local node (see the flock()
                        sketch after this list).
inode64                 Indicates that OCFS2 is allowed to create
                        inodes at any location in the filesystem,
                        including those which will result in inode
                        numbers occupying more than 32 bits of
                        significance.
user_xattr (*)          Enables Extended User Attributes.
nouser_xattr            Disables Extended User Attributes.
acl                     Enables POSIX Access Control Lists support.
noacl (*)               Disables POSIX Access Control Lists support.
resv_level=2 (*)        Sets how aggressive allocation reservations
                        will be.  Valid values range from 0
                        (reservations off) to 8 (maximum space for
                        reservations).
dir_resv_level= (*)     By default, directory reservations scale with
                        file reservations - users should rarely need
                        to change this value.  If allocation
                        reservations are turned off, this option has
                        no effect.
coherency=full (*)      Disallow concurrent O_DIRECT writes; the
                        cluster inode lock will be taken to force
                        other nodes to drop their caches, so full
                        cluster coherency is guaranteed even for
                        O_DIRECT writes.
coherency=buffered      Allow concurrent O_DIRECT writes without
                        taking the EX lock among nodes, which gains
                        performance at the risk of reading stale data
                        on other nodes (see the O_DIRECT sketch after
                        this list).
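Example: the options above can also be passed programmatically through
mount(2).  The sketch below is illustrative only - the device path,
mount point, and option string are assumptions, and a real OCFS2 mount
additionally requires the cluster stack to be up (which is why most
users go through mount.ocfs2 instead):

    /* Hedged sketch: mount an OCFS2 volume with explicit options.
     * /dev/sdb1 and /mnt/ocfs2 are hypothetical. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mount.h>

    int main(void)
    {
            const char *dev  = "/dev/sdb1";   /* hypothetical device */
            const char *dir  = "/mnt/ocfs2";  /* hypothetical mount point */
            /* Filesystem-specific options from the table above. */
            const char *opts = "data=writeback,atime_quantum=120,commit=30";

            if (mount(dev, dir, "ocfs2", 0, opts) != 0) {
                    perror("mount");
                    return EXIT_FAILURE;
            }
            printf("%s mounted on %s\n", dev, dir);
            return EXIT_SUCCESS;
    }

The same option string works on the command line, e.g.
"mount -t ocfs2 -o data=writeback,commit=30 /dev/sdb1 /mnt/ocfs2".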
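The effect of localflocks can be seen with plain flock(2) calls.  A
minimal sketch, assuming a hypothetical file on a /mnt/ocfs2 mount:

    /* With the default mount options, LOCK_EX excludes holders on
     * every node in the cluster; with localflocks it only excludes
     * other processes on the local node. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = open("/mnt/ocfs2/lockfile", O_RDWR | O_CREAT, 0644);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            if (flock(fd, LOCK_EX) != 0) {  /* cluster-wide by default */
                    perror("flock");
                    return 1;
            }
            /* ... critical section ... */
            flock(fd, LOCK_UN);
            close(fd);
            return 0;
    }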
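For the coherency= options, the relevant access pattern is an O_DIRECT
write.  A minimal sketch, assuming a hypothetical file on the mount;
note that O_DIRECT requires the buffer, file offset, and transfer size
to be suitably aligned (4096 bytes here):

    #define _GNU_SOURCE             /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            void *buf;
            int fd;

            /* O_DIRECT buffers must be aligned; 4096 matches a
             * common block size. */
            if (posix_memalign(&buf, 4096, 4096))
                    return EXIT_FAILURE;
            memset(buf, 'x', 4096);

            fd = open("/mnt/ocfs2/testfile",
                      O_WRONLY | O_CREAT | O_DIRECT, 0644);
            if (fd < 0) {
                    perror("open");
                    return EXIT_FAILURE;
            }
            /* Under coherency=full this write forces other nodes to
             * drop cached data first; under coherency=buffered it
             * does not, so they may read stale data. */
            if (write(fd, buf, 4096) != 4096)
                    perror("write");
            close(fd);
            free(buf);
            return EXIT_SUCCESS;
    }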