Reputation: 131
We have two PCs running Windows Server 2008 R2. Both have GlassFish installed with a web application. The first PC (PDC) has the IP 192.168.1.7, the second PC (BDC) has the IP 192.168.1.8, and users log in to the application using the IP 192.168.1.7.
What we want is that if a user can't reach the application at 192.168.1.7, the client falls back to 192.168.1.8 without the user having to do anything.
We found out that GlassFish can do this, but it uses Apache as an intermediary: if the PC running Apache fails, there is no failover and the application becomes unreachable. We have also seen that EasyDNS can do it via a domain name, but the network has no Internet access, so we discarded that option too.
Is there any way to do failover without depending on Internet access or on a program running on a single central PC?
Upvotes: 0
Views: 628
Reputation: 1463
One needs a way of receiving requests on one shared IP address and having them distributed over several nodes (load balancing / high-availability failover).
Optionally one can layer intermediary functionality between the LB/HA and the backend services, such as offloading static content, certain types of request filtering and so on.
For just the LB/HA part, one commonly uses either a dedicated load balancer in front of the nodes or an OS-level redundancy mechanism (such as Windows NLB) on the nodes themselves; both are discussed below.
It is not clear from your question what problem you intend to solve with Apache beyond redundancy, which it usually doesn't provide since it tends to be a single point of failure itself. IIS has the same dilemma: the ARR add-on is frequently used to reverse-proxy to backend servers, but it too is a single point of failure. Just as Windows has NLB to provide the redundancy link for several identically configured servers (such as Java servers, IIS-with-ARR nodes, or Apache nodes), Apache on Linux has mechanisms such as described here.
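To make the mechanics concrete, here is a minimal sketch (not production code) of what such a reverse-proxying intermediary does: accept connections on one address and relay each to a live backend, skipping nodes that refuse the connection. The port and the round-robin policy are assumptions for illustration, and the process running this sketch is itself exactly the kind of single point of failure discussed above.

```java
import java.io.*;
import java.net.*;

/** Minimal round-robin TCP forwarder, for illustration only. It accepts
 *  connections on one address and relays each one to the next backend,
 *  skipping backends that refuse the connection. Port and hosts are
 *  assumptions, not part of the original answer. */
public class TinyBalancer {
    private static final String[] BACKENDS = { "192.168.1.7", "192.168.1.8" };
    private static final int PORT = 8080;
    private static int next = 0;

    public static void main(String[] args) throws IOException {
        ServerSocket listener = new ServerSocket(PORT);
        while (true) {
            Socket client = listener.accept();
            Socket backend = connectToBackend();
            if (backend == null) { client.close(); continue; } // no node alive
            relay(client, backend);
            relay(backend, client);
        }
    }

    /** Try each backend once, starting at the round-robin cursor. */
    private static Socket connectToBackend() {
        for (int i = 0; i < BACKENDS.length; i++) {
            String host = BACKENDS[(next + i) % BACKENDS.length];
            try {
                Socket s = new Socket();
                s.connect(new InetSocketAddress(host, PORT), 2000); // 2 s timeout
                next = (next + i + 1) % BACKENDS.length;
                return s;
            } catch (IOException dead) {
                // backend down or unreachable: try the next one
            }
        }
        return null;
    }

    /** Copy bytes from one socket to the other on a daemon thread. */
    private static void relay(final Socket from, final Socket to) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    byte[] buf = new byte[8192];
                    InputStream in = from.getInputStream();
                    OutputStream out = to.getOutputStream();
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                        out.flush();
                    }
                } catch (IOException ignored) {
                } finally {
                    try { from.close(); } catch (IOException ignored) {}
                    try { to.close(); } catch (IOException ignored) {}
                }
            }
        });
        t.setDaemon(true);
        t.start();
    }
}
```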
However, these are not as feature-rich and efficient as dedicated, specialized load balancers fronting the backend servers and any optional IIS/Apache/other frontend web nodes. Additionally, they tend to create their own problem domains, which limits the feature set and adds further dependency complexity. In short, they tend to consume a disproportionate amount of maintenance/development time unless the setup is very simple.
Clustered services sharing the same disk, as suggested in a comment, are commonly not used in web redundancy scenarios, since they are better suited to other design problems (such as highly available file or print services). One reason to use the web redundancy technologies discussed in this answer is the ease with which nodes can be recycled for maintenance; another is scalability, and so on. The only drawback compared to clustering solutions is that they do not keep track of user state. However, since clustering solutions are commonly specialized towards specific implementations to preserve user session awareness, whilst web delivery solutions offer a wide range of session-awareness mechanisms, that argument is moot most of the time.
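As one hedged illustration of such a session-awareness mechanism: the application can keep its session state client-side in an HMAC-signed value (for example in a cookie), so that either node can serve any request and a failover loses nothing. A minimal sketch, assuming Java 8 and a shared secret deployed to both GlassFish nodes; the key handling and cookie plumbing are left out:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

/** Sketch: seal session state into a signed, URL-safe string the client
 *  carries with each request, so no node has to remember it. The shared
 *  key is a placeholder; a real deployment would provision it securely
 *  on both nodes. */
public class SignedState {
    private static final byte[] KEY =
            "change-me-shared-secret".getBytes(StandardCharsets.UTF_8);

    /** base64url(state) + "." + HMAC over the state. */
    public static String seal(String state) throws Exception {
        return Base64.getUrlEncoder().encodeToString(
                state.getBytes(StandardCharsets.UTF_8)) + "." + hmac(state);
    }

    /** Returns the state if the signature checks out, otherwise null.
     *  A real implementation would use a constant-time comparison. */
    public static String open(String sealed) throws Exception {
        int dot = sealed.lastIndexOf('.');
        if (dot < 0) return null;
        String state = new String(Base64.getUrlDecoder()
                .decode(sealed.substring(0, dot)), StandardCharsets.UTF_8);
        return hmac(state).equals(sealed.substring(dot + 1)) ? state : null;
    }

    private static String hmac(String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
        return Base64.getUrlEncoder().encodeToString(
                mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
    }
}
```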
Using peripheral services such as DNS for failover tends to be regarded with well-founded suspicion, as these, with few exceptions (such as MX records), are not designed with that scenario in mind. Specialized variants which are may still suffer from the extensive caching built into commonly used DNS subsystems, which forms a weak link in such a setup.
DNS as the primary redundancy mechanism may still be considered if the caching caveats above can be controlled, e.g. on a closed network where you run the DNS servers yourself, publish short TTLs, and know that the clients actually honor them.
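One of those client-side caveats sits in the JVM itself: by default it caches successful lookups (indefinitely when a security manager is installed), so a running Java client may never see a DNS-based failover. A minimal sketch of relaxing that cache via the standard networkaddress.cache.ttl security property; the host name is a placeholder:

```java
import java.net.InetAddress;
import java.security.Security;

/** Sketch: lower the JVM's DNS cache so a client re-resolves names and
 *  can pick up a DNS-based failover. Must run before the first lookup.
 *  The host name below is a placeholder. */
public class DnsTtlDemo {
    public static void main(String[] args) throws Exception {
        // Cache successful lookups for at most 30 seconds...
        Security.setProperty("networkaddress.cache.ttl", "30");
        // ...and do not cache failed lookups at all.
        Security.setProperty("networkaddress.cache.negative.ttl", "0");

        // Re-resolving in a loop now reflects a record change within
        // roughly 30 seconds instead of holding the first answer forever.
        for (int i = 0; i < 3; i++) {
            System.out.println(
                InetAddress.getByName("app.example.local").getHostAddress());
            Thread.sleep(30000);
        }
    }
}
```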
The alternative to providing the LB/HA logic server-side is to build it into the client.
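For the scenario in the question, that could look like the following minimal sketch: the client tries 192.168.1.7 first and falls back to 192.168.1.8 on any connection failure. The port (GlassFish's default 8080) and the request path are assumptions; a real client would typically retry only idempotent requests and remember which node last answered.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

/** Sketch of client-side failover across the two nodes from the
 *  question. Port and path are assumptions for illustration. */
public class FailoverClient {
    private static final String[] NODES = {
        "http://192.168.1.7:8080", "http://192.168.1.8:8080" };

    public static InputStream get(String path) throws IOException {
        IOException last = null;
        for (String node : NODES) {
            try {
                HttpURLConnection conn =
                    (HttpURLConnection) new URL(node + path).openConnection();
                conn.setConnectTimeout(2000); // give up on a dead node fast
                conn.setReadTimeout(5000);
                if (conn.getResponseCode() == 200) {
                    return conn.getInputStream();
                }
                last = new IOException(node + " answered " + conn.getResponseCode());
            } catch (IOException e) {
                last = e; // node down or unreachable: try the next one
            }
        }
        if (last == null) last = new IOException("no nodes configured");
        throw last;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = get("/myapp/");
        // ... consume the response ...
        in.close();
    }
}
```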
Upvotes: 1