OBIEE High Availability - Presentation Services and Scheduler
Continuing from my previous post, I'd like to talk about how to configure and set up High Availability (HA) for the Presentation Services (PS) and the Scheduler.
Presentation Services
With the BI services already set up in a clustered mode, managed by the Cluster Controller, the next step is to configure a new cluster-aware ODBC connection. This connection communicates with the Cluster Controller instead of connecting directly to a single BI server instance. Each Presentation Services instance in the cluster is then configured to use this new ODBC connection. On a Unix box, the connections are defined in the [OracleBI]/setup/odbc.ini file; if you are running on a Windows box, use the [Control Panel/Administrative Tools/Data Sources (ODBC)] wizard to set up a similar connection (make sure not to use a space/blank in the connection name, as this will not work). This new ODBC connection must be defined on each PS node we are setting up. Although there is nothing stopping you from naming each ODBC DSN differently, I strongly suggest you keep things consistent and stick to the same name.
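For reference, here is a rough sketch of what the clustered DSN entry in odbc.ini can look like. Treat it as an illustration only: the server names are the ones from my environment, the driver path and the Cluster Controller port of 9706 are the defaults from a standard Linux install, and you should copy any remaining entries from your existing AnalyticsWeb DSN.
[ODBC Data Sources]
AnalyticsWeb_Cluster = Oracle BI Server
[AnalyticsWeb_Cluster]
Driver = /opt/OracleBI/server/Bin/libnqsodbc.so
Description = Oracle BI Server (clustered)
IsClusteredDSN = Yes
PrimaryCCS = aravis4.rmcvm.com
PrimaryCCSPort = 9706
SecondaryCCS = aravis1.rmcvm.com
SecondaryCCSPort = 9706
The important bits are IsClusteredDSN=Yes and the primary/secondary Cluster Controller settings; everything else mirrors a normal BI server DSN.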
The [OracleBIData]/web/config/instanceconfig.xml file on each node contains the configuration properties of the PS. The first interesting bit is which ODBC connection the PS uses to connect to the BI server. Find the <DSN> entry in this file and change it to refer to our new cluster DSN instead of the default AnalyticsWeb connection. This step needs to be done on each box we are setting up.
<DSN>AnalyticsWeb_Cluster</DSN>
Another thing to note is the <CatalogPath> entry. As all the PS instances need to share the same web catalog, we have to set up a shared directory (much like we did for the BI server repository and global cache) to store the catalog files. I have set up a share on /media/share/Catalog that I use for this purpose. Make sure you copy your entire catalog directory structure to this shared directory.
<CatalogPath>/media/share/Catalog/samplesales</CatalogPath>
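If your current catalog lives in the default location, getting it onto the share is just a copy. A minimal sketch, assuming OracleBIData is /opt/OracleBIData and the catalog is called samplesales (stop the PS first so you copy a consistent catalog):
# copy the existing catalog to the shared location (paths assumed from my setup)
cp -R /opt/OracleBIData/web/catalog/samplesales /media/share/Catalog/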
This takes care of what we need to run each PS in our cluster. Restart each service for the changes to take effect.
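On Linux, a quick sketch of the restart, assuming the standard control scripts in [OracleBI]/setup (on Windows, restart the Oracle BI Presentation Server service instead):
# restart Presentation Services on each node
cd /opt/OracleBI/setup
./run-saw.sh stop
./run-saw.sh start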
OC4J
For the purpose of this write-up I am using a single OC4J instance. I am assuming that you already have an OC4J instance set up, running the analytics web application. In a simple deployment, the application is configured to point at a single PS instance, and all incoming web requests are forwarded directly to that node. We can reconfigure this so that each PS in our cluster gets a round-robin share of the incoming requests. The file in question is called web.xml and can be found in the [OASHOME]/j2ee/home/applications/analytics/analytics/WEB-INF/ directory (if you deployed your WAR file locally in a simple stand-alone OBIEE manner, then OASHOME is replaced by [OracleBI]/oc4j_bi/). Instead of two separate name/value parameters (host and port), we use a single parameter that lists all the host:port values for the PS nodes.
Replace
<init-param>
  <param-name>oracle.bi.presentation.sawserver.Host</param-name>
  <param-value>localhost</param-value>
</init-param>
<init-param>
  <param-name>oracle.bi.presentation.sawserver.Port</param-name>
  <param-value>9710</param-value>
</init-param>
With
<init-param>
  <param-name>oracle.bi.presentation.sawservers</param-name>
  <param-value>aravis4.rmcvm.com:9710;aravis1.rmcvm.com:9710</param-value>
</init-param>
Make sure you restart the application after you edit this file.
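How you bounce it depends on your deployment. For the stand-alone OC4J that ships with OBIEE, the following is a rough sketch only; the oc4jadmin password and the RMI port are assumptions, so use whatever your install was configured with (on a full OAS install you would use opmnctl instead):
# shut down and restart the stand-alone OC4J (port and credentials are assumptions)
cd /opt/OracleBI/oc4j_bi/j2ee/home
java -jar admin.jar ormi://localhost:23791 oc4jadmin <password> -shutdown
java -jar oc4j.jar &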
Scheduler
Assuming that you have already gone through the default scheduler configuration on each node, adding these to the cluster is quite straightforward. In the [OracleBI]/server/Config/NQClusterConfig.INI file, add the following line
SCHEDULERS = "aravis4.rmcvm.com:9705:9708", "aravis1.rmcvm.com:9705:9708";
(obviously replacing my server names with your own). This basically tells the Cluster Controller that the first server in the list, namely aravis4, will be the main/active scheduler, and aravis1 will be the passive one. The next step is to configure each scheduler to join the cluster. On each node, use the schconfig tool to change the Advanced settings of the Scheduler (choices 1, 3 and then 3) and set 'Participant in a Cluster' to True. Save your settings and exit the tool. Now restart the cluster controllers on each node and then start up the schedulers.
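On Linux, a sketch of that restart sequence, again assuming the default control scripts in [OracleBI]/setup (Windows users can simply restart the Oracle BI Cluster Controller and Oracle BI Scheduler services):
# restart the Cluster Controller, then start the Scheduler, on each node
cd /opt/OracleBI/setup
./run-ccs.sh stop
./run-ccs.sh start
./run-sch.sh start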
JavaHost
By default, each PS will communicate with the JavaHost running on the local machine, on the default port of 9810. We can, if the mood strikes us, decide to share the JavaHost services on each node with the other nodes, in (you guessed it) a round-robin manner. Would there really be any gain in doing so, though? Certainly we could put the JavaHost services on yet another set of nodes, which would then require that we configure each PS to use those. Again, edit the [OracleBIData]/web/config/instanceconfig.xml file and add the following code within the <ServerInstance> tags
<JavaHostProxy>
  <Hosts>
    <Host address="aravis1.rmcvm.com" port="9810" />
    <Host address="aravis4.rmcvm.com" port="9810" />
  </Hosts>
</JavaHostProxy>
Make sure to restart the JavaHost and PS services on each node. This method can also be used to let the PS service know that the JavaHost is running on a non-default port (i.e. you had a port conflict and changed the port in the [OracleBI]/web/javahost/config/config.xml file).
And remember: when in doubt, reboot. It can save you a lot of time and frustration to restart all the services each time, even though you might think that restarting one service is all that should be required ;)