Upgrade Replication Server 5.0 - 5.1 (Linux)

To upgrade your PoolParty installation, follow the steps described below.

Note

In order to upgrade to a specific PoolParty release, your current PoolParty installation must be at the immediately preceding release; you cannot skip intermediate releases when upgrading PoolParty. For example, if your installation is at version 4.6 and you want to end up at 5.1, you must first upgrade to 5.0 and then to 5.1. PoolParty's automated upgrade process for GNU/Linux hosts handles each of these upgrades for you.

Note that as of release 4.1, a 64-bit build of the Sun/Oracle Java 7 Runtime Environment is required to run a PoolParty server.
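Before starting the upgrade you can confirm the runtime from a shell. A minimal sketch; the exact version string differs per build:

```shell
# Pre-flight check (sketch): confirm that a Java runtime is on the PATH and
# inspect its version. Suitable output contains "1.7" and "64-Bit".
if command -v java >/dev/null 2>&1; then
  java -version 2>&1
else
  echo "no java runtime found on PATH" >&2
fi
```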

These instructions assume you are upgrading a PoolParty default installation for GNU/Linux with PoolParty installed at /opt/poolparty, unless otherwise stated.

Upgrade Procedure

STEP 1: Stop the Replication (Slave) PoolParty server

Stop the slave PoolParty server. Then extract the archive containing the PoolParty Replication Server upgrade files on your system, change to the resulting directory, and run ./replication-update-5.1.1.bash from there.

Note

If you installed PoolParty at a location other than the default (/opt/poolparty), supply the full path to your PoolParty installation directory as the first argument to the script, e.g. ./replication-update-5.1.1.bash /usr/local/poolparty. If your PPT instance runs under a different user or group (default: poolparty), supply the owner and group as the second and third arguments, e.g. ./replication-update-5.1.1.bash installpath owner group.

Additionally, it is now possible to run the update script with separate PoolParty installation and data directories. For this to work, specify the full path to your PoolParty data directory as the fourth argument to the script, e.g. ./replication-update-5.1.1.bash installpath owner group datapath

STEP 2: Reconfigure replication on the master server

Note

In this step we assume that you have already upgraded your master server to PoolParty 5.1.

Stop the master server. Then add the following code to the solrconfig.xml file of each of the following cores on the master server.

  • conceptData

  • conceptMatching

  • thesaurusBasedDisambiguation

  • corpusTerm

  • geoIndexCitiesAndCountries

  • excludedTerms

The files can be found here:

  • /opt/poolparty/data/solr/<core-name>/conf/

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">startup</str>
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt,elevate.xml</str>
    <str name="commitReserveDuration">00:00:10</str>
  </lst>
  <str name="maxNumberOfBackups">1</str>
</requestHandler>

Start the master server.
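To confirm that every core picked up the handler, you can grep the configuration files. A minimal sketch, assuming the default core layout described above; check_cores is a hypothetical helper, not part of PoolParty:

```shell
# check_cores: report whether each core's solrconfig.xml contains a
# replication request handler. The Solr home defaults to the standard
# PoolParty location; pass another directory as the first argument.
check_cores() {
  solr_home=${1:-/opt/poolparty/data/solr}
  for core in conceptData conceptMatching thesaurusBasedDisambiguation \
              corpusTerm geoIndexCitiesAndCountries excludedTerms; do
    cfg="$solr_home/$core/conf/solrconfig.xml"
    if [ -f "$cfg" ] && grep -q 'solr.ReplicationHandler' "$cfg"; then
      echo "$core: replication handler configured"
    else
      echo "$core: replication handler missing"
    fi
  done
}

# Example: check_cores                 # default /opt/poolparty/data/solr
#          check_cores /srv/solr-data  # custom Solr home
```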

STEP 3: Configure replication on the replication instance (slave server)

Add the following code to the solrconfig.xml file of each of the following cores on the slave server, and update the settings for masterUrl and httpBasicAuthPassword.

  • conceptData

  • conceptMatching

  • thesaurusBasedDisambiguation

  • corpusTerm

  • geoIndexCitiesAndCountries

  • excludedTerms

The files can be found in the same place as on the master server.

Replication core configuration:

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://<source-server>:80/solr/<core-name>/replication</str>
    <str name="pollInterval">00:00:20</str>
    <str name="compression">internal</str>
    <str name="httpConnTimeout">5000</str>
    <str name="httpReadTimeout">10000</str>
    <str name="httpBasicAuthUser">solr</str>
    <str name="httpBasicAuthPassword">solr-password</str>
  </lst>
</requestHandler>
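Rather than editing the masterUrl placeholder in six files by hand, you can script the substitution. A sketch, assuming the files still contain the literal <source-server> placeholder from the template above; set_master_url and the example host name are illustrative only:

```shell
# set_master_url: replace the <source-server> placeholder in each slave
# core's solrconfig.xml with the real master host name.
set_master_url() {
  master_host=$1
  solr_home=${2:-/opt/poolparty/data/solr}
  for core in conceptData conceptMatching thesaurusBasedDisambiguation \
              corpusTerm geoIndexCitiesAndCountries excludedTerms; do
    cfg="$solr_home/$core/conf/solrconfig.xml"
    if [ -f "$cfg" ]; then
      # GNU sed edits the file in place
      sed -i "s|<source-server>|$master_host|g" "$cfg"
    fi
  done
}

# Example: set_master_url pp-master.example.com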

STEP 4: Start the Replication Server again

Post Installation Test and Configuration

You can test the setup by requesting the following URLs in a browser. Each should return a valid replication status reply.

Master Server

http://<master-server>/solr/<core-name>/replication?command=details

Slave Server

http://<slave-server>/solr/<core-name>/replication?command=details
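The same checks can also be run from the command line with curl. A sketch; the helper name and example hosts are not part of PoolParty, and the credentials come from the basic-auth settings configured above:

```shell
# replication_details HOST CORE: print the replication status of one core.
replication_details() {
  host=$1
  core=$2
  # -s silences the progress meter, -f turns HTTP errors into a non-zero exit;
  # add  -u solr:solr-password  when HTTP basic authentication is enabled
  curl -sf "http://$host/solr/$core/replication?command=details"
}

# Example: replication_details pp-master.example.com conceptData
#          replication_details pp-slave.example.com  conceptData
```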