IFS Middleware Server

Overview

IFS Middleware Server is the default application server used by IFS Applications. The application server is distributed in the IFS component "IFS Middleware Server" and is based on Oracle WebLogic Enterprise Edition. Since it is embedded in an IFS component, all installation, administration and configuration is done using IFS tools.

IFS Middleware Server introduces some new keywords and concepts that are important to know in order to understand how the application server functions. This page describes central concepts such as Node Manager, Admin Server and Managed Server. It also describes how clustering works with and without an external load balancer and how the servers are controlled. Finally, it explains why IFS Middleware Server is more than a single Java process: the architecture enables high availability and rigorous security for managing the cluster, as well as better scalability and control.

Concepts

An IFS Applications instance requires a number of resources such as data sources, messaging, enterprise applications and application servers in order to be operational. The umbrella under which these resources are managed is called a domain.

Node Manager

Node Manager is a utility that enables starting, shutting down and restarting the Admin Server and Managed Servers on a node, either from the host itself or from another node. The Node Manager has a low memory footprint and should always run as a service.
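
Since IFS Middleware Server is based on Oracle WebLogic, the Node Manager can in principle also be reached with the standard WebLogic Scripting Tool (WLST). The sketch below is an illustration only; the host name, port, credentials and domain paths are hypothetical, and in an IFS Applications installation these operations are normally performed through the IFS tools.

    # WLST (Jython) sketch - hypothetical host, port, credentials and domain paths.
    # Connect to the Node Manager running on the node that hosts the server.
    nmConnect('weblogic', 'welcome1', 'nodea.example.com', 5556,
              'ifs_domain', '/u01/domains/ifs_domain', 'ssl')

    # Ask the Node Manager to start the Admin Server on that node,
    # then check what it reports for the server and disconnect.
    nmStart('AdminServer')
    nmServerStatus('AdminServer')
    nmDisconnect()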

Admin Server

The Admin Server is a server that controls the master configuration. It does not host any applications.

Managed Server

A Managed Server hosts one or more applications.

Host

Host is the term used for a physical or virtual machine, i.e. a computer.

Machine

Machine is the term for the logical representation of a host within the domain.

HTTP Server

The HTTP Server is the entry point for all applications. The HTTP Server is sometimes referred to as the Web Server.

Cluster

There are two types of clustering: vertical and horizontal. Vertical clustering means that two or more servers run on the same node, and horizontal clustering means that two or more servers run on different nodes. With IFS Middleware Server it is possible to configure how many servers to run on each node, thus giving the option to run both a horizontal and a vertical cluster at the same time.
A configuration of IFS Middleware Server is a cluster even if it contains only a single Managed Server.

Node

Any node in the cluster will contain the following:

- a Node Manager for the application server
- a Node Manager for the HTTP Server
- zero or more Managed Servers
- an HTTP Server
- the Admin Server (master node only)

Each of the items listed above requires unique ports in order to function. This applies not only to the current node but to all other nodes in the cluster (with the exception of the Admin Server).

Each node will contain two Node Managers: one for the application server and one for the HTTP Server. Which port the Node Managers listen to is configurable, but the same port will be used on all nodes in the cluster. That is, it is not possible to have the application server's Node Manager listen on port 5556 on one node and on port 6665 on another. The same applies to the HTTP Node Manager.

When forming the cluster, one of the nodes has to be the master node, which will always be the host running the IFS Installer. The master node will contain the Admin Server, which is used as the center point of the cluster. The Admin Server holds the master configuration, and the other nodes will periodically check for changes and update their configuration if necessary. Managed Servers can also be started, stopped and monitored using the Admin Server. No node other than the master node will have an Admin Server.

The number of Managed Servers each node will host is configurable; a node might not contain any Managed Servers at all, a scenario which only makes sense if you wish to decrease the load on the master node.

Each node will also contain everything necessary to set up an HTTP Server, but after the installation it will only run on the master node. The other HTTP Servers can be used as well if desired, but most likely they will only be used together with an external load balancer. In that case the other HTTP Servers will have to be started manually after the load balancer has been configured. The HTTP Server requires a Node Manager port, an admin port and a listener port for HTTP and possibly HTTPS.
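
As a minimal sketch of the port rules above (all port numbers and server names are hypothetical), the Node Manager ports are identical on every node while each Managed Server needs a port of its own:

    # Hypothetical example of the port rules described above.
    NM_PORT_APPSERVER = 5556  # application server Node Manager, same port on every node
    NM_PORT_HTTP = 5557       # HTTP Server Node Manager, same port on every node

    managed_server_ports = {
        "NodeA": {"ManagedServer1": 8001},
        "NodeB": {"ManagedServer2": 8011, "ManagedServer3": 8012},
    }

    # Managed Server ports must not clash with each other or with the Node Manager ports.
    all_ports = [NM_PORT_APPSERVER, NM_PORT_HTTP]
    for node in managed_server_ports.values():
        all_ports.extend(node.values())
    assert len(all_ports) == len(set(all_ports)), "port clash in the cluster"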

Default configuration

The default configuration of IFS Middleware Server is a single node hosting one Managed Server, as illustrated below.

Default setup.

Default load balancing.

Extended configuration

Below is an example configuration with three nodes in a cluster, hosting five Managed Servers in total. NodeA is the master node and hosts one Managed Server, while the other nodes host two Managed Servers each. The master node also hosts the Admin Server, which only exists on NodeA. The first image illustrates a configuration where no external load balancer is used and the HTTP Server (and its Node Manager) is disabled on NodeB and NodeC. The second image illustrates the same scenario, except that an external load balancer is used and the HTTP Servers on NodeB and NodeC have been enabled to handle requests. Please note that this setup is for demonstration purposes only and should not be considered a recommended cluster setup.
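
The same example topology can be sketched as follows (illustrative only; the server names are hypothetical):

    # Illustrative sketch of the example cluster above - not a recommended setup.
    # NodeA is the master node and the only node with an Admin Server.
    example_cluster = {
        "NodeA": {"admin_server": True,  "managed_servers": ["MS1"]},
        "NodeB": {"admin_server": False, "managed_servers": ["MS2", "MS3"]},
        "NodeC": {"admin_server": False, "managed_servers": ["MS4", "MS5"]},
    }

    # First image: no external load balancer, only the master node's HTTP Server serves requests.
    http_servers_without_lb = ["NodeA"]

    # Second image: an external load balancer distributes requests over all three HTTP Servers.
    http_servers_with_lb = ["NodeA", "NodeB", "NodeC"]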

Cluster configuration without load balancer.

Cluster configuration with an external load balancer.

Load Balancing

The load is always balanced between the application servers by the HTTP Server, whether or not there is an external load balancer in front. This typically eliminates the need to run more than one HTTP Server when there is no external load balancer.

Load balancing example for one HTTP Server and five managed servers in a cluster.
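
As a rough illustration of the idea (not the actual HTTP Server implementation, which also handles session stickiness and failover), the distribution of requests from a single HTTP Server over five Managed Servers can be pictured as a simple rotation:

    from itertools import cycle

    # Rough illustration only: one HTTP Server spreading requests over five Managed Servers.
    managed_servers = ["MS1", "MS2", "MS3", "MS4", "MS5"]
    next_server = cycle(managed_servers)

    for request_id in range(7):
        print(f"request {request_id} -> {next(next_server)}")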

While the HTTP Server balances the load between the application servers, an external load balancer can be used to balance the load between the HTTP Servers, thus increasing the throughput for HTTP calls if needed. This means that although the external load balancer forwards your request to NodeC, you might still end up communicating with the application server on NodeA. The image below illustrates load balancing where an external load balancer is used and the HTTP Servers on all nodes are serving requests.

Load balancing example with an external load balancer, three HTTP Servers and five managed servers.
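
A similarly rough two-tier sketch (hypothetical names, ignoring stickiness and health checks): the external load balancer picks an HTTP Server, and that HTTP Server then independently picks a Managed Server anywhere in the cluster, which is why a request routed to NodeC can still be served by the application server on NodeA.

    import random

    # Tier 1: the external load balancer picks one of the HTTP Servers.
    http_servers = ["HTTP@NodeA", "HTTP@NodeB", "HTTP@NodeC"]
    # Tier 2: the chosen HTTP Server picks any Managed Server in the cluster.
    managed_servers = ["MS1", "MS2", "MS3", "MS4", "MS5"]

    for request_id in range(5):
        front = random.choice(http_servers)
        backend = random.choice(managed_servers)
        print(f"request {request_id}: {front} -> {backend}")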

The Node Manager's Role

The Node Manager acts as the entry point on each node, so it is important that it is up and running at all times. It also handles server crash recovery if the system goes down or a running server crashes. This is why it is important that the Node Manager is started automatically when the host starts; only then can it start up the servers that terminated unexpectedly. It is equally important that Managed Servers are stopped correctly using the stop scripts if they need to be stopped for any reason, otherwise they will bounce right back up again.

The Admin Server's Role

The Admin Server plays an important role in the cluster. As previously mentioned, it is the center point of the entire cluster. It maintains the master configuration and propagates any changes to the application servers; it is responsible for managing new or updated applications; and it starts and stops the Managed Servers. Should the Admin Server become unavailable, the application servers will continue to run independently but will try to reconnect on a regular basis in order to receive configuration changes.
When starting an application server, the Admin Server is contacted and asked to start the specific server. The Admin Server then contacts the Node Manager on the machine where the application server resides, which in turn tells the Managed Server to start and reports back to the Admin Server. The Admin Server can now communicate directly with the application server and can, for instance, tell it to stop.

Example of starting or stopping a managed server.
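
Since the Admin Server is a standard WebLogic Admin Server, the same flow can be illustrated with WLST (URL, credentials and server name below are hypothetical; in an IFS Applications installation the start and stop scripts normally drive this):

    # WLST (Jython) sketch - hypothetical URL, credentials and server name.
    # Connect to the Admin Server, which relays the request to the right Node Manager.
    connect('weblogic', 'welcome1', 't3://nodea.example.com:7001')

    # Start a Managed Server; the Admin Server asks the Node Manager on the target
    # machine to launch it and reports the result back.
    start('ManagedServer1', 'Server')

    # Once the server is running, the Admin Server talks to it directly and can stop it.
    shutdown('ManagedServer1', 'Server')

    disconnect()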

More information about how to control the cluster can be found here.