Timelock Server Installation

This guide explains how to install the Timelock Server, and how to configure existing services to use Timelock Server rather than the embedded AtlasDB timestamp and lock services. The basic approach is to first install Timelock without any clients, and then, for each existing service, prepare the service and Timelock to talk to each other.

Obtain the Timelock binaries

Tip

Internal consumers can skip these steps.

  1. Clone the git repository.

    git clone git@github.com:palantir/atlasdb.git; cd atlasdb
    
  2. Generate a tar file with Gradle:

    ./gradlew timelock-server:distTar
    

This will place a tar file in the build/distributions directory of the timelock-server project. The tar file follows semantic versioning.
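
For example, once the build completes you can confirm that the artifact was produced (the version in the file name depends on the revision you checked out; 0.25.0 is simply the version used in the examples below):

    # conventional Gradle output location for the distribution
    ls timelock-server/build/distributions/
    # e.g. timelock-server-0.25.0.sls.tgz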

Install Timelock (without clients)

We recommend deploying either a 3-node or a 5-node Timelock cluster. A 3-node cluster will provide better performance, whereas a 5-node cluster has greater fault tolerance.

Tip

Internal consumers can deploy the binaries by the usual method.

For real-world installations, we recommend provisioning a dedicated host machine for each Timelock server.

  1. Copy this tar file to each machine on which you want to deploy Timelock Server.

  2. On each machine on which you intend to deploy Timelock Server, untar the archive.

    tar -zxvf timelock-server-0.25.0.sls.tgz; cd timelock-server-0.25.0
    
  3. Configure the Timelock Server - see Timelock Server Configuration for a guide on this. The configuration file is at var/conf/timelock.yml relative to the root directory of the Timelock Server.

  4. Start the Timelock Server with service/bin/init.sh start. This should output the process ID of the Timelock Server. You can view the logs in the var/log directory (the default location).
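
    For example, to start the server and then check on it (the status subcommand is part of the standard init script shipped in the distribution, but treat the exact set of subcommands as an assumption and check service/bin/init.sh on your installation):

    service/bin/init.sh start    # prints the process ID
    service/bin/init.sh status   # assumed subcommand: reports whether the server is running
    ls var/log/                  # inspect the logs written so far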

  5. It may be useful to run a health-check to verify that the Timelock Server is running. To do this, you can curl the server’s healthcheck endpoint on its admin port (default: 8081).

    curl localhost:8081/healthcheck
    

    The output should indicate that the Timelock Server is healthy, and should resemble the following:

    {
        "deadlocks": {
            "healthy": true
        }
    }
    
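    If you are running a multi-node cluster, you can repeat this health-check against every node. A minimal sketch, where the hostnames are placeholders for your Timelock hosts and 8081 is the default admin port:

    for host in palantir-1.com palantir-2.com palantir-3.com; do
        # each node should report "healthy": true
        curl "$host:8081/healthcheck"
    done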

Add client(s) to Timelock

  1. Back up each client.

  2. Configure each client to use Timelock (see the detailed AtlasDB client configuration documentation). You must remove any leader, timestamp, or lock blocks; the timelock block to add looks like this:

    atlasdb:
      timelock:
        serversList:
          servers:
            - palantir-1.com:8080
            - palantir-2.com:8080
            - palantir-3.com:8080
          sslConfiguration:
            trustStorePath: var/security/truststore.jks
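
    After editing the configuration, it can be worth double-checking that no leader, timestamp, or lock blocks remain. A minimal sketch, assuming the client's AtlasDB configuration lives at var/conf/<service>.yml (a placeholder path - adjust it for your service):

    # any matches below, other than the new timelock block, should be removed
    grep -nE '^[[:space:]]*(leader|timestamp|lock):' var/conf/<service>.yml
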
  3. (Optional) For verification purposes, you may retrieve a timestamp from each client you are configuring to use Timelock. This can typically be done with the Fetch Timestamp CLI or the Dropwizard bundle. For example, using the Dropwizard bundle:

    ./service/bin/<service> atlasdb timestamp fetch

    Note down the value of the timestamp returned; we will use these values later to verify that the migration took place.

  4. Shut down each client that has been newly added.

  5. Restart your Timelock cluster.

  6. Migrate each client to the Timelock Server - see the separate migration docs. For Cassandra KVS, this is automatic.

Warning

Do not skip this step if your client uses DbKvs! Failure to migrate your client will cause severe data corruption, as Timelock will serve timestamps starting from 1.

  7. Restart each client.

  8. (Optional) To verify that the migration worked correctly, get a fresh timestamp for each client from the Timelock Server. For each client, the timestamp returned should be strictly greater than the corresponding timestamp obtained in step 3; a rough sketch of this check follows below.
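
    A minimal sketch, reusing the Dropwizard bundle invocation from step 3 (the exact output format of the fetch command is not shown here, so read the fresh timestamp off the command's output and compare it by hand):

    # fetch a fresh timestamp now that the client talks to Timelock
    ./service/bin/<service> atlasdb timestamp fetch
    # the value printed must be strictly greater than the one noted down in step 3;
    # if it is not, stop and consult the migration docs before serving traffic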