Etu Appliance

Easier than Ever
The Only Choice for Hadoop Platforms


Big Data Processing in a Box

  • Scalability - Architecture of Public Cloud Lineage
  • Reliability - Carrier-grade Quality
  • Performance - Efficiency for Enterprises


Why is an Etu Appliance Cluster indispensable to a Hadooper?

High performance, security, reliability, and easy management



  • Highly Integrated Appliance
    A Hadoop-based Big Data processing platform that combines distributed storage with parallel computing, integrating software and hardware in a single appliance. Etu Appliance is designed for simplicity and optimization, bringing together high performance, effective management, security, reliability, scalability, and openness.


  • High-Performance Operating System
    Starting from bare metal, an Etu Appliance cluster with more than 100 Worker Nodes can be automatically deployed and configured, with the comprehensive software stack installed and optimized, in just over ten minutes.


  • Full System High Availability
    Full-system HA, rather than just Hadoop Name Node High Availability, comprehensively eliminates the risk of a single point of failure.


  • Multi-Tenant Security
    Etu Appliance has a built-in multi-tenant identity and access control mechanism, providing a level of data security comparable to a public cloud.


  • Distinctive Network Design
    Etu Appliance combines multi-interface bandwidth with adapter fault tolerance, delivering high-performance network transmission and high availability.


  • Human-Based Centralized Management
    Etu Appliance offers a web-based graphical user interface for centralized management and configuration across all nodes in the cluster.


  • Low-Risk Adoption
    The minimal Etu Appliance cluster contains 1 Master Node and 2 Worker Nodes, offering the flexibility to purchase less than a full Hadoop rack, or even half of one.


  • Linear Scalability
    By adding Worker Nodes, an Etu Appliance cluster can be scaled out by tens to hundreds of times.


  • Open and Value-Added Platform
    The open Big Data processing platform enables a variety of applications to be deployed on it.



Enjoy the beauty of Hadoop cluster deployment and simple management


10+ minutes to deploy a 10-node Hadoop cluster

Think about this. As a Hadoop-based application developer or a Hadoop system administrator, what is a reasonable amount of time for you to get a 10-node Hadoop cluster up and running? Would it be three days, one day, three hours, or maybe one hour?


3 to 12 times more powerful than self-installed systems

Imagine a product designed for running Hadoop jobs that enables you to process Big Data at the same cluster scale with a five-fold leap in performance over your current system. Wouldn't it make more sense to satisfy your users by replacing the existing application system with one that offers instantaneous access?


New benchmark for the reliability of a Hadoop system

Are you still trying to figure out a solution for Hadoop Name Node High Availability? In reality, the job has already been done: a broader built-in HA covers the whole cluster system, including computing, storage, hosts, network services, security mechanisms, and network interfaces, going beyond a Hadooper's usual notion of HA.
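For comparison, on a general Hadoop platform even Name Node HA alone must be wired up by hand. A minimal sketch using standard Apache Hadoop hdfs-site.xml properties (the nameservice ID "mycluster" and the hostnames are placeholders, not Etu Appliance settings):

```xml
<!-- hdfs-site.xml: standard Apache Hadoop NameNode HA settings (sketch).
     "mycluster", nn1/nn2, and the hostnames are placeholders. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```

Even this fragment covers only the Name Node; shared edit storage, ZooKeeper failover controllers, and fencing must still be configured separately, which is exactly the manual effort a built-in full-system HA removes.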


High integration with Kerberos and LDAP to guarantee multi-tenant security

When it comes to storing and computing Big Data, how does one get the full benefit of the multi-tenancy emphasized in cloud computing? Imagine achieving the goal of authenticating and authorizing different users and applications so that each accesses only its own segmented data.
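On a stock Hadoop distribution, an administrator would otherwise enable this security model by hand. A minimal sketch using standard Apache Hadoop core-site.xml properties (the LDAP URL is a placeholder; these are upstream Hadoop settings, not Etu Appliance internals):

```xml
<!-- core-site.xml: standard Apache Hadoop security settings (sketch).
     The LDAP server URL is a placeholder. -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value>ldap://ldap.example.com:389</value>
</property>
```

Beyond these switches, per-service Kerberos principals and keytabs must be created and distributed to every node, the lengthy integration process that a built-in Kerberos/LDAP stack spares the administrator.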


Boost first-call resolution with one GUI and one URL

As a business grows, its Hadoop cluster should ideally scale out accordingly. Imagine you have a 50-node cluster that needs a system overhaul, parameter optimization, and software maintenance. How many hosts would you have to connect to, and how many command lines would it take, to accomplish this task?





 1  Staff can perform their respective Hadoop-related jobs and make progress together, which improves professional productivity.

When it comes to storage and computing for multi-structured Big Data, neither developers nor system administrators are free from taxing Hadoop Ecosystem problems. These challenges range from software stacks that are hard to install, to difficulty in managing systems and applications, to not knowing the right way to achieve optimization. Etu Appliance saves time for developers and system administrators, which directly improves the productivity of your professionals through efficiency.


 2  More confidence in mission-critical Big Data computing.

Etu Appliance delivers full-system HA, giving enterprises peace of mind that their most crucial Big Data computing tasks can be shifted to a Hadoop platform that continuously extracts value from multi-structured Big Data.


 3  A Big Data PaaS is ready for a full run.

Etu Appliance’s centralized web-based Management Console functions as a PaaS user portal for enterprises and Internet service providers that need a strong application platform for building both public and private clouds. Besides simplifying Hadoop cluster management, companies gain enterprise-level multi-tenant authentication and authorization and meet major requirements for self-service, security, scalability, and stability.


 4  The ability to repeatedly access and compute on accumulated data offers unlimited commercial value.

The platform is characterized by a high level of openness, allowing multiple applications to run on it. Over time, the data accumulated in an Etu Appliance cluster will keep increasing in value for your business.


 5  Performance doubles, TCO drops

Performance optimization starts from the distinctive operating system and continues throughout the whole system. Etu Appliance complements Hadoop, taking advantage of features such as its parallel computing architecture and linear scalability. Compared to general Hadoop platforms, adopting an Etu Appliance cluster reduces the number of nodes needed for computing tasks, significantly cutting an enterprise's CAPEX and management costs.



The differences between an Etu Appliance and a general Hadoop platform


  • Operating System
    Etu Appliance: Etu OS, specialized for running Hadoop jobs.
    General Hadoop Platform: General Linux distributions.

  • Computing Performance
    Etu Appliance: 3 to 12 times the computing performance of a general Hadoop platform, achieved through full-system optimization.
    General Hadoop Platform: Uncalibrated computing performance.

  • Data Acquisition
    Etu Appliance: The Etu Log Collector receives UDP packets at more than 60,000 EPS per node, with an accuracy rate above 99.99%.
    General Hadoop Platform: General data collection performance, without optimization.

  • Cluster Deployment
    Etu Appliance: One-click deployment takes more than 100 Worker Nodes from bare metal to up and running, deployed and configured in just over ten minutes, with an HA system architecture.
    General Hadoop Platform: An operating system and cluster services must be installed manually on each node before the Hadoop Ecosystem software stack can be installed and configured.

  • Distinctive Network Design
    Etu Appliance: Combines multi-interface bandwidth with adapter fault tolerance for HA.
    General Hadoop Platform: General networks lack design for performance and fault tolerance.

  • High Availability Architecture
    Etu Appliance: Built-in, highly integrated full-system HA covering computing, storage, hosts, network services, security mechanisms, and network interfaces, avoiding single points of failure in all aspects.
    General Hadoop Platform: Typically only a Hadoop Name Node HA mechanism is provided; users must carry out many detailed steps, which can lead to a high failure rate.

  • Multi-Tenant Security
    Etu Appliance: Built-in Kerberos and LDAP fully meet the requirements of enterprise-level multi-tenant authentication and authorization.
    General Hadoop Platform: Users must implement Kerberos and LDAP integration themselves, a lengthy and complicated process with a very high integration threshold.

  • Cluster Management
    Etu Appliance: A web-based graphical user interface provides centralized management and configuration across all nodes in the cluster.
    General Hadoop Platform: Root access to each node is needed to manage the host system.



How Etu Appliance Works


Etu Appliance integration is flexible enough to meet the expansion needs of any enterprise. Begin by investing in the minimum package and expand according to business growth. The minimum package is a cluster of three nodes (1 Master Node + 2 Worker Nodes). As your Big Data processing workloads grow, Worker Nodes are simply added through scale-out expansion without interrupting ongoing services.
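On the Hadoop side, the scale-out pattern described above corresponds to starting the standard worker daemons on each new node. A sketch using stock Apache Hadoop 3 commands (Etu Appliance automates this through its management console, so the procedure on the appliance itself may differ):

```shell
# On a newly added worker node (standard Apache Hadoop 3 commands;
# assumes the node's configuration already points at the cluster).

# Join HDFS as a DataNode: storage capacity grows without a restart.
hdfs --daemon start datanode

# Join YARN as a NodeManager: compute capacity grows without a restart.
yarn --daemon start nodemanager

# From any node, confirm the cluster now sees the new worker.
hdfs dfsadmin -report
yarn node -list
```

Because both daemons register with the running masters, existing jobs and stored data are untouched, which is what makes non-disruptive scale-out possible.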




Etu Appliance Editions and Roles in Cluster


Etu Appliance Edition

  • Standard Edition
  • Enterprise Edition

Etu Appliance Role

  • Master Node: Etu 1000M Series
  • Worker Node: Etu 1000W Series
  • Worker Node: Etu 2000W Series


※ The Etu Appliance Edition Comparison Chart is listed in the "Etu Appliance v2 Datasheet".




For further inquiries about Etu Appliance, please contact





Etu, Etu Appliance, and Etu logo are trademarks of Etu Corporation. All other brands and trademarks referenced herein are acknowledged to be trademarks or registered trademarks of their respective holders.