Dublin OSS barcamp<\/a>) :<\/p>\nNFS<\/strong><\/p>\n\n- Pro: standard, cross-platform, easy to implement<\/li>\n
- Con: Poor performance, single point of failure (single locking manager, even in HA)<\/li>\n<\/ul>\n
GFS2<\/strong><\/p>\n\n- Pro: Very responsive with large data files, works on physical and virtual machines, quota and SELinux support, faster than ext3 when I\/O operations stay on the same node<\/li>\n
- Con: Supported only with Red Hat; performance issues when accessing small files across several subdirectories from different nodes<\/li>\n<\/ul>\n
OCFS2<\/strong><\/p>\n\n- Pro: Very fast with both large and small data files across different nodes, with two tuned performance models (mail, datafiles). Works on physical and virtual machines.<\/li>\n
- Con: Supported only through a contract with Oracle or on SLES; no quota support; no on-line resize<\/li>\n<\/ul>\n
First we need to install the OCFS2 tools:<\/p>\n
sudo apt-get install ocfs2-tools<\/code><\/p>\nThere is another package, ocfs2console, that you may want to install to configure the cluster via a GUI, but since I\u2019m using Ubuntu Server I\u2019m skipping it and configuring my cluster manually.<\/p>\n
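The ocfs2-tools package provides the command-line utilities used in the rest of this walkthrough. A quick check that they landed where expected, a sketch (the tool names are assumed from the standard Ubuntu package):

```shell
# Check which OCFS2 utilities are available on this machine; anything
# reported missing should appear after installing ocfs2-tools.
for tool in mkfs.ocfs2 fsck.ocfs2 tunefs.ocfs2 mounted.ocfs2; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing (install ocfs2-tools)"
  fi
done
```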
Create \/etc\/ocfs2\/cluster.conf on every node attached to the storage:<\/p>\n
sudo vi \/etc\/ocfs2\/cluster.conf<\/code><\/p>\nwith the content below; just replace node1 and node2 with each node\u2019s actual name and IP address:<\/p>\n
node:
\n	name = node1
\n	cluster = ocfs2
\n	number = 0
\n	ip_address = 10.10.0.0
\n	ip_port = 7777
\n
\nnode:
\n	name = node2
\n	cluster = ocfs2
\n	number = 1
\n	ip_address = 10.10.0.1
\n	ip_port = 7777
\n
\ncluster:
\n	name = ocfs2
\n	node_count = 2<\/code><\/p>\nNote that the parameter lines under each stanza must be indented. Now reconfigure ocfs2-tools, keeping the default values:<\/p>\n
sudo dpkg-reconfigure ocfs2-tools<\/code><\/p>\nThen restart the services:<\/p>\n
sudo \/etc\/init.d\/o2cb restart
\nsudo \/etc\/init.d\/ocfs2 restart<\/code><\/p>\nIf your fibre card is connected to your host\/storage and the virtual disks have been created and presented, you should be able to see them with fdisk:<\/p>\n
$ sudo fdisk -l
\nDisk \/dev\/sda: 1073.7 GB, 1073741824000 bytes
\n255 heads, 63 sectors\/track, 130541 cylinders
\nUnits = cylinders of 16065 * 512 = 8225280 bytes
\nSector size (logical\/physical): 512 bytes \/ 512 bytes
\nI\/O size (minimum\/optimal): 512 bytes \/ 512 bytes
\nDisk identifier: 0x02020202<\/code><\/p>\nThe result has been truncated to show only one virtual disk; you might see multiple disks (\/dev\/sda, \/dev\/sdb, \/dev\/sdc\u2026) in addition to your local hard disks, depending on your configuration. What I have done is create a 1 TB partition that I will share between my two nodes:<\/p>\n
$ sudo fdisk \/dev\/sda<\/code><\/p>\nIn the fdisk menu, choose \u201cn\u201d for a new partition and set the partition size according to your requirements, then use \u201cw\u201d to write the changes and exit.<\/p>\n
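fdisk\u2019s prompts can also be answered non-interactively by piping the same keystrokes in. A sketch of the n\/w sequence, run here against a scratch image file (a stand-in so nothing real is touched; point it at your \/dev\/sda only once you are confident):

```shell
# Create a small scratch image to practice on; fdisk treats a regular
# file like a block device, so no root is needed for this rehearsal.
truncate -s 10M scratch.img

# Answer fdisk's prompts from a pipe:
# n = new partition, p = primary, 1 = partition number, two blank
# lines accept the default first/last sectors, w = write and exit.
printf 'n\np\n1\n\n\nw\n' | fdisk scratch.img

# Verify the new partition shows up in the listing.
fdisk -l scratch.img
```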
Finally, we create an OCFS2 file system on the device:<\/p>\n
$ sudo mkfs.ocfs2 \/dev\/sda<\/code><\/p>\nThen mount your partition:<\/p>\n
$ sudo mkdir \/archives
\n$ sudo mount -t ocfs2 \/dev\/sda \/archives<\/code><\/p>\nOr you can add it to \/etc\/fstab to mount it automatically on boot:<\/p>\n
\/dev\/sda \/archives ocfs2 _netdev 0 0<\/code><\/p>\nThe _netdev option prevents the system from attempting to mount this file system until the network has been enabled.<\/p>\n
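An fstab entry always has six whitespace-separated fields, and a malformed line can stall the boot, so it is worth a quick sanity check before committing it. A minimal sketch:

```shell
# Anatomy of the fstab entry (six whitespace-separated fields):
#   device     mountpoint  type   options  dump  pass
#   /dev/sda   /archives   ocfs2  _netdev  0     0
entry='/dev/sda /archives ocfs2 _netdev 0 0'

# Sanity-check the field count before appending it to /etc/fstab.
echo "$entry" | awk '{ print (NF == 6 ? "ok: 6 fields" : "bad: " NF " fields") }'
```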
Now test your new partition: you will notice that every file or folder created on node1 is immediately available on node2, and vice versa.<\/p>\n
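The check itself is just ordinary file operations; a sketch using a stand-in directory so the sequence can be tried anywhere (on the real cluster, \/archives is the OCFS2 mount, the touch runs on node1 and the ls on node2):

```shell
# Stand-in for the shared /archives mount; on the cluster, both nodes
# mount the same OCFS2 volume at this path.
ARCHIVES=./archives-demo
mkdir -p "$ARCHIVES"

# On node1: drop a marker file onto the shared file system.
touch "$ARCHIVES/hello-from-node1"

# On node2: the file is visible at once -- no sync step is needed,
# because both nodes are reading the same disk through OCFS2.
ls -l "$ARCHIVES/hello-from-node1"
```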
Enjoy!<\/p>\n