HPC Cluster Computing
ClusterVision has a 100% dedicated focus on high performance cluster computing (HPCC). We specialise in:
ClusterVision's compute clusters consist of a number of AMD- or Intel-based servers, typically connected by Gigabit Ethernet and a high-bandwidth, low-latency network interconnect such as InfiniBand.
ClusterVision's storage clusters combine high-end commodity server hardware with high-quality SATA, SAS or FCAL (Fibre Channel) RAID units or NAS servers to build clusters of file servers with virtually unlimited storage capacity.
BigData and Hadoop
ClusterVision has experience in the design and build of database cluster systems for BigData and Hadoop-style applications, and works with a number of companies offering commercial implementations of and/or support for Hadoop, including Cloudera CDH and the NetApp Open Solution.
Contact us: individual customer references for ClusterVision's existing Hadoop and BigData cluster installations in Europe are available on request.
Examples include BigScience projects, such as the Large Hadron Collider, radio frequency identification, internet management and telecommunication records, retail, e-commerce and a range of military, surveillance and other security related applications.
BigData is one of the most rapidly growing areas of High Performance Computing, and most of the leading hardware manufacturers, including many of ClusterVision's closest Technology Partners, offer BigData-oriented solutions.
The application is divided into many small work fragments, each of which may be processed across any node in the cluster system. Hadoop implementations therefore typically require a specific distributed file system for data storage and retrieval which provides a very high aggregate bandwidth across the cluster. In addition to standard File Transfer Protocol (FTP), Hadoop file systems include HDFS (Hadoop Distributed File System), Amazon S3, and CloudStore.
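The "small work fragments" idea above can be sketched in a few lines of Python. This is an illustrative word-count example, not Hadoop code: each fragment of the input is mapped independently (in a real cluster, on separate nodes), and the partial results are then reduced into a final answer.

```python
from collections import defaultdict

def map_fragment(fragment):
    """Map step: count words in one fragment of the input.

    In a real Hadoop job each fragment would be processed on a
    different node of the cluster.
    """
    counts = defaultdict(int)
    for word in fragment.split():
        counts[word] += 1
    return counts

def reduce_counts(partials):
    """Reduce step: merge the per-fragment counts into one result."""
    total = defaultdict(int)
    for partial in partials:
        for word, n in partial.items():
            total[word] += n
    return dict(total)

if __name__ == "__main__":
    text = "the quick brown fox jumps over the lazy dog the fox"
    # Split the job into small fragments, one per "node".
    words = text.split()
    fragments = [" ".join(words[i::3]) for i in range(3)]
    partials = [map_fragment(f) for f in fragments]
    print(reduce_counts(partials))
```

Because each map step depends only on its own fragment, the fragments can be processed in any order and on any node, which is what makes the approach scale across a cluster.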
A small Hadoop cluster typically comprises a single master and multiple compute or data-processing nodes. In a larger Hadoop cluster, in addition to the data-processing nodes, the distributed file system is typically managed by a dedicated server hosting the file-system index, together with a secondary node that generates snapshots of the master's in-memory structures in order to secure the file-system data against loss or corruption.
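The master/secondary arrangement above can be illustrated with a minimal Python sketch. This is not Hadoop code: the class and function names are hypothetical, and the sketch only shows the concept that the master keeps the file-system index in memory while a secondary node periodically serialises a snapshot of it, so the index can be rebuilt after a failure.

```python
import json

class MasterIndex:
    """Master node's in-memory file-system index: path -> block locations.

    Stands in for the dedicated server hosting the file-system index.
    """
    def __init__(self):
        self.index = {}

    def add_file(self, path, block_locations):
        self.index[path] = block_locations

def take_snapshot(master):
    """Secondary node's job: serialise the in-memory index to a snapshot."""
    return json.dumps(master.index, sort_keys=True)

def restore_from_snapshot(snapshot):
    """Rebuild the master's index from the latest snapshot after a crash."""
    master = MasterIndex()
    master.index = json.loads(snapshot)
    return master
```

In a production system the snapshot would be written to stable storage (and combined with an edit log) rather than kept as an in-memory string, but the division of labour is the same: the master serves lookups fast from memory, the secondary makes that memory durable.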
ClusterVision's database clusters provide a fully redundant, turn-key database solution by combining elements from our compute and storage clusters with a parallel database and Bright Cluster Manager. Available databases include Oracle 11g and MySQL Cluster.
ClusterVision is a worldwide Oracle partner.
Copyright 2002-2012 ClusterVision BV