News Release
1999.03.02
Pacific HiTech Announces
Turbolinux Cluster Web Server
with Clustering Capability
Pacific HiTech, Inc. (head office: 1-14-4 Umegaoka, Setagaya-ku, Tokyo; President and CEO: Cliff Miller) today announced Turbolinux Cluster Web Server (tentative name), an enterprise Linux operating system incorporating the first fault-tolerance system for Linux. Shipment is scheduled for May.

We would like to acknowledge the work of the open source community, and in particular the achievements of the Linux Virtual Server Project by Wensong Zhang.
Pacific HiTech's clustering effort originally proceeded along an entirely separate development path, but it has come to share many concepts with the Linux Virtual Server. We have therefore adopted Wensong's excellent kernel patch and added several features of our own on top of it.
We have also implemented some of the ideas published on Wensong's web page.

Our thanks go to Wensong Zhang.

The remainder of this release is in English; we ask for your understanding.
A follow-up news release (in Japanese) will be distributed once the details are finalized.
================================================================

Contact:
Craig Oda                    Lonn Johnston
Pacific HiTech               A&R Partners
510-663-9153                 650-363-0982 ext. 3923
craigoda@turbolinux.com      ljohnston@arpartners.com

Pacific HiTech Announces World's First Linux Clustering Solution for Corporations; Asia's Largest Linux Company Demonstrates New Technology at LinuxWorld Expo.

OAKLAND, Calif., March 1, 1999 -- Pacific HiTech, the leader in high performance Linux, today announced the world's first high availability Linux clustering technology -- Turbolinux Cluster Web Server.

Designed to accommodate web traffic of millions of daily hits, Turbolinux Cluster Web Server runs on standard Intel CPU-based servers and offers high availability and performance features previously only available from proprietary high-end Unix-based systems.

Pacific HiTech is demonstrating its new clustering technology at LinuxWorld Expo in San Jose, from March 2 through March 4.

"Linux clustering for most people means Beowulf-type systems -- parallel and distributed computing that is mainly used in research labs to solve scientific problems," said Cliff Miller, CEO of Pacific HiTech. "We are offering something quite different. Turbolinux Cluster Web Server is the first commercial clustering solution for Linux that meets the high availability and performance needs of corporate customers. For E-commerce and ISP customers, system downtime is a direct hit on the bottomline."

Turbolinux Cluster Web Server scales from two to scores of Intel servers with automatic load balancing and fault tolerance. Servers can be located in a corporate "farm" in one location or connected via a WAN in remote locations.

The cluster configuration is transparent to the outside world and clients see only a single internet host. On the inside, the cluster architecture consists of multiple Linux servers, one of them acting as a load balancing router which intercepts the client requests and forwards them to the clustered servers. The Turbo Cluster software automatically detects failures and reconfigures the cluster to exclude the failed servers. Its built-in scalability allows the administrators to plug new servers into the cluster and improve the site performance by sharing the load better among the servers. Turbolinux Cluster is protocol independent and supports many internet server protocols, including HTTP, proxy, FTP, SNMP, DNS and others.

Turbolinux Cluster Web Server will be available in May direct from the Pacific HiTech web site or through authorized PHT distributors. Pricing is not yet available.
Pacific HiTech will support Turbolinux Cluster Web Server through a team of staff engineers at its U.S. headquarters and with national authorized support partners.

About PHT and Turbolinux

PHT, Asia's largest Linux company, is a privately held company founded in 1992. PHT distributes a consumer and corporate suite of products in English, Japanese and Chinese versions called Turbolinux for Intel desktop and server platforms. PHT enjoys strategic relationships with leading hardware and software vendors in Asia and the United States, including Oracle, Adaptec, Accton, Empress, Softbank, DDI and Ado Denshi (Japan's largest electronics retailer). PHT, with offices in the United States, Australia, China and Japan, can be found on the Internet at www.pht.com or, in Japanese, at www.pht.co.jp.
###

================================================================

TurboCluster for Linux

1. Introduction

TurboCluster for Linux provides scalable and fault-tolerant internet servers based on a cluster of Linux servers. The cluster configuration is transparent to the outside world; clients see only a single internet host. On the inside, the cluster architecture consists of several Linux servers, one of them acting as a load balancing router, which intercepts the client requests and forwards them to the clustered servers.

The TurboCluster software automatically detects failures and reconfigures the cluster to exclude the failed servers. Its built-in scalability allows the administrators to plug new servers into the cluster and improve the site performance by sharing the load better among the servers. TurboCluster is protocol independent and can therefore support many internet server protocols like HTTP, proxy, FTP, SNMP, DNS and others.

2. Architecture

An important part of the TurboCluster software resides inside the Linux kernel, where the load balancing code uses IP Masquerading, IP Encapsulation and other IP Routing related technologies to distribute the load among the servers.

The load balancer chooses one of the available servers for each client request and forwards all the packets from the client directly to the actual server by using the IP Encapsulation technique. The server simply responds to the request as if the client had connected directly to it.
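
The encapsulation step can be illustrated with a minimal sketch in Python. The addresses are placeholders, the checksum is left at zero for brevity, and in the real system this work is done by the kernel's tunneling code, not in user space.

    import socket
    import struct

    def ipv4_header(src, dst, payload_len, proto):
        # Minimal 20-byte IPv4 header, no options; the checksum is left
        # at zero for readability (the kernel fills it in for real traffic).
        ver_ihl = (4 << 4) | 5              # version 4, header length 5 words
        total_len = 20 + payload_len
        return struct.pack("!BBHHHBBH4s4s",
                           ver_ihl, 0, total_len, 0, 0,
                           64, proto, 0,
                           socket.inet_aton(src), socket.inet_aton(dst))

    # Inner packet: the client's original datagram, addressed to the
    # cluster's virtual IP (protocol 6 = TCP; payload elided).
    inner = ipv4_header("198.51.100.7", "203.0.113.10", 8, 6) + b"\x00" * 8

    # Outer packet: the master wraps the inner packet whole and addresses
    # it to the chosen real server (protocol 4 = IP-in-IP).
    outer = ipv4_header("10.0.0.1", "10.0.0.2", len(inner), 4) + inner

The server at 10.0.0.2 strips the outer header and sees the client's packet exactly as it left the client, which is why it can reply as if the client had connected directly.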

Apart from the changes in the kernel code, the clustered computers are just common Linux servers. They can run any server software, such as Apache, Squid, Sendmail and other well-known server applications.

In order to provide fault tolerance, a turbod daemon is kept running on each TurboCluster computer. The daemons on the servers communicate with each other continuously in order to detect server failures and quickly exclude failed servers from the cluster. When a server comes back online, it registers itself back into the cluster and starts accepting client requests again.
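
The heartbeat pattern behind this can be sketched as follows; the port number, interval and timeout are illustrative assumptions, not turbod's actual protocol.

    import socket
    import time

    HEARTBEAT_PORT = 9999          # assumed for illustration
    INTERVAL, TIMEOUT = 1.0, 5.0   # assumed values, not the product's

    def announce(peers):
        # Each server sends an "alive" datagram to every peer once per INTERVAL.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            for peer in peers:
                sock.sendto(b"alive", (peer, HEARTBEAT_PORT))
            time.sleep(INTERVAL)

    def monitor(peers):
        # Listen for "alive" datagrams and report peers that go silent.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", HEARTBEAT_PORT))
        sock.settimeout(INTERVAL)
        last_seen = {p: time.monotonic() for p in peers}
        while True:
            try:
                _, (addr, _) = sock.recvfrom(64)
                last_seen[addr] = time.monotonic()
            except socket.timeout:
                pass
            for peer, seen in list(last_seen.items()):
                if time.monotonic() - seen > TIMEOUT:
                    print("server %s considered failed" % peer)
                    del last_seen[peer]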

Since the turbo server is protocol independent, it cannot take responsibility for the consistency of the data served by the clustered computers. In the case of an HTTP server, it is the administrator's responsibility to provide the same content to all servers. The content can be replicated to each server's local file system, shared via a network file system (NFS), or provided by a distributed file system such as CODA, for example.

3. Functionality

The TurboCluster software takes the responsibility for:

- Starting up the turbod daemons on each clustered server at system startup.

- Configuring the servers to participate in the cluster and setting up each computer's network interfaces.

- Monitoring the functionality and availability of the servers in the cluster and automatically excluding the failed servers from the cluster.

- Monitoring the internet daemons on the clustered servers by issuing appropriate polls or requests according to the protocol the daemons support. Currently only the HTTP protocol is checked, by issuing HTTP requests to the httpd.

- Sending e-mail messages to the address specified in the configuration file. Mail is sent upon server or protocol daemon failures and contains a failure description as well as the current cluster configuration (see the sketch after this list).

- Shutting down each node or the whole cluster upon user request.
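
The notification item above can be illustrated with Python's standard smtplib; the addresses, SMTP host and message format are placeholders rather than the product's actual configuration.

    import smtplib
    from email.message import EmailMessage

    def notify_failure(failed_server, cluster_state,
                       admin="admin@example.com", smtp_host="localhost"):
        # Compose a short report: what failed, plus the current cluster
        # configuration, as the feature list above describes.
        msg = EmailMessage()
        msg["Subject"] = "cluster alert: %s failed" % failed_server
        msg["From"] = "turbod@example.com"
        msg["To"] = admin
        msg.set_content("Failed server: %s\n\nCluster state:\n%s"
                        % (failed_server, cluster_state))
        with smtplib.SMTP(smtp_host) as smtp:
            smtp.send_message(msg)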

4. Detailed description

When the clustered servers start up, the system initialization scripts automatically start the turbod daemons. All the daemons in the cluster immediately begin interdaemon communication in order to detect which servers are present. One of the servers is then selected to become the master, namely the router. The preferred master can be specified in the configuration file.
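
The selection rule itself is not spelled out here, but any deterministic rule over a shared view of the live servers would do. A minimal sketch, assuming the preferred master wins if it is alive and the lowest address wins otherwise:

    def elect_master(live_servers, preferred=None):
        # Deterministic election: every daemon, given the same view of
        # the live servers, arrives at the same answer.
        if preferred is not None and preferred in live_servers:
            return preferred
        return min(live_servers)   # e.g. lowest IP address wins

    # Example: the preferred master is down, so the lowest address wins.
    print(elect_master({"10.0.0.2", "10.0.0.3"}, preferred="10.0.0.1"))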

Once the master is selected, it creates an additional ethernet interface alias with the virtual IP address of the cluster. The master then notifies the local ethernet router and the neighboring hosts that the master now possesses the virtual IP address. This is accomplished by broadcasting ARP (Address Resolution Protocol) messages to the local network. The master is now configured to receive all packets sent to the virtual IP address.
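
The announcement can be sketched with a Linux raw socket (root required). In a gratuitous ARP the sender and target IP are both set to the virtual address, so neighboring hosts update their caches; the interface name and addresses below are placeholders.

    import socket
    import struct

    def send_gratuitous_arp(ifname, vip, mac):
        # Linux-only: AF_PACKET raw socket, requires root privileges.
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
        s.bind((ifname, 0))
        bcast = b"\xff" * 6
        eth = bcast + mac + struct.pack("!H", 0x0806)      # EtherType ARP
        arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)    # ARP request, IPv4 over ethernet
        vip_b = socket.inet_aton(vip)
        # Gratuitous ARP: sender IP == target IP == the virtual IP.
        arp += mac + vip_b + b"\x00" * 6 + vip_b
        s.send(eth + arp)

    # send_gratuitous_arp("eth0", "203.0.113.10", bytes.fromhex("00aabbccddee"))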

The turbod program configures the master kernel tables to forward internet packets sent to the virtual IP address to one of the clustered servers. This is done on a per-connection basis, which means each TCP or UDP connection established by a client can be routed to a different local server. Additionally, the master can route some of these connections to itself and can therefore play a server role besides being the master.

The master sends the routed IP packets to the servers through IP tunnels. In order for the tunneling to work, the turbod program configures an additional detunneling network interface on each server. The detunneling interfaces on the servers make sure that IP packets received through the master are restored to the identical form they had when they started their journey across the internet. This means the packet's traversal through the master is transparent to the servers.
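
On a present-day kernel, the equivalent real-server setup could be sketched with iproute2 and sysctl, driven from Python here. This is a modern approximation with placeholder addresses, not the commands the 2.0.36-era product actually used.

    import subprocess

    def run(cmd):
        print("+", cmd)
        subprocess.run(cmd.split(), check=True)

    # Bring up the IPIP de-tunneling interface (assumes the ipip module is
    # loaded) and bind the cluster's virtual IP to it, so decapsulated
    # packets are accepted locally.
    run("ip link set tunl0 up")
    run("ip addr add 203.0.113.10/32 dev tunl0")
    # Accept packets for the virtual IP arriving over the asymmetric path.
    run("sysctl -w net.ipv4.conf.tunl0.rp_filter=0")
    # Keep real servers quiet about the virtual IP on the shared network,
    # so only the master answers ARP for it.
    run("sysctl -w net.ipv4.conf.all.arp_ignore=1")
    run("sysctl -w net.ipv4.conf.all.arp_announce=2")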

Servers respond by sending the response packets directly to the client, this time bypassing the master. Since most IP traffic is generated by the servers and sent to the clients, with only a small percentage flowing in the other direction, this architecture maintains the highest possible network performance for the cluster as a whole.

The master computer has the ability to balance the load on the servers. Each time a client requests a connection, the master chooses a server to which it forwards the request. The choice of server is controlled by a scheduling algorithm and the scheduler configuration. The basic scheduling algorithms are round-robin, which simply forwards each new request to the next available server, and weighted round-robin, which is similar but forwards more requests to more powerful servers. In the latter case, the weight, which reflects each server's relative capacity, is specified in the cluster configuration file.

The more advanced scheduling algorithms keep track of the number of active connections to each server and choose a server among those with the fewest connections. These scheduling algorithms are called least-connection and weighted least-connection.
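
The four schedulers can be sketched in a few lines of Python; the server names, weights and connection counts are illustrative, and the real implementation lives in the kernel, as noted below.

    import itertools

    servers = ["A", "B", "C"]
    weights = {"A": 3, "B": 1, "C": 1}      # illustrative capacities
    active  = {"A": 0, "B": 0, "C": 0}      # open connections per server

    # round-robin: each new request goes to the next server in turn
    rr = itertools.cycle(servers)

    # weighted round-robin: faster servers appear more often in the cycle
    wrr = itertools.cycle([s for s in servers for _ in range(weights[s])])

    def least_connection():
        return min(servers, key=lambda s: active[s])

    def weighted_least_connection():
        # the fewest connections relative to capacity wins
        return min(servers, key=lambda s: active[s] / weights[s])

    for _ in range(4):
        print(next(rr), next(wrr), least_connection(), weighted_least_connection())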

Scheduling algorithms are currently hardcoded into the kernel and can only be switched by recompiling the kernel. In the future, the scheduling algorithms will be implemented as kernel modules, which will permit more runtime configurability.

While the TurboCluster is running, the turbod programs keep checking the servers for errors. In case of a master failure, the daemons running on the servers choose another server and reconfigure it to become the master. In case of a server failure, the master computer simply excludes the failed server from the routing table and lets the other servers take on the additional load. The failure response time is usually around 15 to 30 seconds.

Sometimes, even though a server itself is functioning normally, one of its internet services may fail for some reason. Such a condition can be detected by regular software checks of the running services. HTTP checking, for example, is already implemented by fetching web documents from the server. Checks for additional protocols can easily be added to turbod.
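
A check of this kind can be sketched with Python's standard library; the path, port and timeout below are assumptions.

    import http.client

    def http_alive(host, port=80, path="/", timeout=5.0):
        # Fetch a document and treat any well-formed reply as "alive";
        # a connection error, timeout or malformed response marks the
        # service as failed.
        try:
            conn = http.client.HTTPConnection(host, port, timeout=timeout)
            conn.request("GET", path)
            status = conn.getresponse().status
            conn.close()
            return status < 500
        except (OSError, http.client.HTTPException):
            return False

    # A 200..499 reply means the httpd answered; 5xx or no answer means failure.
    print(http_alive("10.0.0.2"))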

5. Requirements

- Two or more Linux servers connected to the internet to form the cluster.

- Additional IP address for the cluster.

- Linux kernel version 2.0.36 with the Virtual Server patch version 0.7 or later. The kernel must be compiled with IP networking, aliasing, firewalling, masquerading, tunneling and virtual server support. (Plans are already under way for implementing the system to support the 2.2.x kernel.)

- The TurboCluster software with accompanying turbod program, scripts, configuration files and utilities.

- Disk space: approximately 1 MB for program and configuration files, plus additional space for log files, which grow with usage.