In previous posts I pointed out the benefits of using web services (instead of direct LDAP access) to perform LDAP writes. In this post I'm going to take the whole discussion a few steps further and propose a mechanism (based on the basic principles of the LDAP Content Synchronization protocol used by OpenLDAP for replication) to keep heterogeneous data systems synced (in our case a database with an LDAP server).

There are two quite different scenarios when writing to an LDAP server, depending on the client performing the updates:

  • The first is the case where you use some sort of administration/modification interface (usually a web page) and perform the modifications through it. Here the user works online and will usually wait until a positive response arrives from the underlying LDAP server. Even if you use a web services mechanism to perform the LDAP changes the end result will be the same: the web service call will return a positive (or negative) result, along with the corresponding error message if the operation failed at some point. The user can then take whatever action is necessary to make things work (change a value to meet some constraint, use an attribute value that is not already taken if we have a uniqueness restriction on that attribute, and so on) or just abandon the operation and move on. The changes performed are targeted, small, contained and serialized. Most of the error handling and decisions are performed by the user; the web services interface need only perform the actions and return the result, without worrying about keeping things in sync.
  • The second scenario is one where you have an external data source of some kind (usually a database) which needs to stay synchronized with LDAP. Say your main employee source is your payroll system and you need to keep your LDAP synced with it (add people when they join the firm, modify them when needed and delete them when they leave). In this case you will usually create a web services operation interface and invoke the matching operation whenever needed (call a web service add() operation when you create a user in the payroll system, and so on).

The second scenario poses a few serious constraints and problems:

  1. Before enabling the above mechanism the external data source and LDAP need to be in sync. If your LDAP is empty and your payroll system already contains users you need a way to actually populate (usually with a manual offline mechanism) your Directory.
  2. You need a queuing mechanism so that changes are buffered and sent
    1. at a rate that the LDAP server can handle
    2. even if the LDAP server is down for a while
  3. Changes should be sent serialized, in the order they were committed to the data source, and you cannot afford to lose a modification (that might break future modifications that depend on it).
  4. An error in one operation should not bring the whole synchronization process to a halt, and it should be recoverable in most cases.
  5. If somehow you lose a modification you need a mechanism to resync the two parties.

The above complexities make it obvious that just keeping a changelog and replaying the operations through the web services mechanism is not actually enough. What is needed is a full, robust synchronization protocol. The OpenLDAP folks have taken great pains to design and implement such a mechanism, the LDAP Content Synchronization protocol (described in the OpenLDAP Admin Guide and in RFC 4533). It makes sense to build on that work and keep its basic principles when designing a mechanism for our case.

First of all we need to define a key used to synchronize the entries. That key is application specific: we could have a unique employee id in our payroll system and decide to keep the same value in the employeeNumber attribute of our LDAP entries. Obviously, we could have multiple data sources for each entry (for different attributes), in which case we need to define one key per data source.

Along with that we have to define a new attribute to hold the data sources for each entry (if we have just one, this step can be skipped). The attribute will be multivalued, with one value per data source (for instance myCompanyEntryDataSource=payroll). We will also need to define a 'friendly name' for each data source.

If we keep multiple entry types (for instance students, faculty, employees) that each need special handling, we will need an attribute to hold that as well (sometimes eduPersonAffiliation will suffice). If our entries are all of the same type we can skip this one too.

Thankfully, we don't need to change much in our data source. The only thing required is a Global Data Version (GDV): a global, unique number that is incremented on every write (add/delete/modify) operation on the database. The GDV could be kept as a separate resource, maintained atomically and transactionally for every operation committed: it must be incremented by exactly one per operation, the increment must be atomic, and a write operation on our data must always end with an increment of the GDV value. A minimal sketch of the idea follows.
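
Here is one way this could look, assuming a relational data source and using sqlite3 purely for illustration (the gdv table and helper names are hypothetical):

import sqlite3

conn = sqlite3.connect("payroll.db")
conn.execute("CREATE TABLE IF NOT EXISTS gdv (version INTEGER NOT NULL)")
if conn.execute("SELECT COUNT(*) FROM gdv").fetchone()[0] == 0:
    with conn:
        conn.execute("INSERT INTO gdv (version) VALUES (0)")  # seed once

def commit_with_gdv(write_sql, params=()):
    # The data write and the GDV increment share one transaction,
    # so the GDV can never get out of step with the data.
    with conn:  # BEGIN ... COMMIT; rolls back on exception
        conn.execute(write_sql, params)
        conn.execute("UPDATE gdv SET version = version + 1")

def current_gdv():
    return conn.execute("SELECT version FROM gdv").fetchone()[0]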

We now have to define a few more things:

  • Keep the global data version in our LDAP tree base in the form of a multivalued attribute. We keep one value per data source and distinguish between them using value tags (based on the data source friendly names we have defined). The GDV can live on our data tree base entry for easy access.
  • Create a PresentEntry() web service operation. That operation simply sends all the data a data source holds and shares with the LDAP directory for an entry, and updates (or creates, if necessary) the entry on the LDAP server accordingly. If the entry is already in sync no modification is performed. A sketch follows this list.
  • Add an update timestamp attribute to the schema; it will be present on all entries and updated by the PresentEntry() operation.
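
Here is a hedged sketch of what the PresentEntry() provider could do, assuming python-ldap, an employeeNumber sync key, a uid-based RDN and attribute values already encoded as lists of bytes (all names are illustrative, not a fixed interface):

import ldap
import ldap.modlist

def present_entry(conn, base, key, attrs, update_ts):
    # attrs: every attribute this data source shares with LDAP
    # (including objectClass), as {name: [bytes, ...]};
    # update_ts: the UpdateTimestamp passed to StartTotalUpdate().
    attrs = dict(attrs, updateTimestamp=[update_ts])
    res = conn.search_s(base, ldap.SCOPE_SUBTREE,
                        "(employeeNumber=%s)" % key)
    if not res:
        # entry missing: create it (the RDN choice is illustrative)
        dn = "uid=%s,%s" % (attrs["uid"][0].decode(), base)
        conn.add_s(dn, ldap.modlist.addModlist(attrs))
        return
    dn, old = res[0]
    mods = ldap.modlist.modifyModlist(old, attrs)
    if mods:  # an empty modlist means the entry is already in sync
        conn.modify_s(dn, mods)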

We are now one step away from actually defining our synchronization mechanism. The last step is to manually walk our LDAP tree and set the data source attribute accordingly on all the entries that depend on it (if a data source attribute is maintained at all).

Synchronization Process (a driver sketch follows the list):

  • Call a web service called GetDataVersion(). That will return the GDV stored in LDAP for our data source (or null if we have never synchronized). If that GDV is stale compared to the one maintained in the data source, then we can actually start the synchronization. Otherwise there's no reason to perform any further action.
  • Call a web service called StartTotalUpdate(DataVersion, UpdateTimestamp). DataVersion is the GDV of the source, while UpdateTimestamp is simply the timestamp of our call to StartTotalUpdate(). We use this operation to signal the start of the synchronization process and to achieve isolation: the sync process is not allowed to run more than once simultaneously. The operation will return:
    • DataCurrent if no action is needed
    • UpdateStarted
    • UpdateInProgress if another update is already in progress
    • NoDataVersion if LDAP does not have a GDV (in the case of initialization)
    • Error in case of an error (like the LDAP server is not available)

    It will also return a TotalUpdateId, a unique number which must be passed (as an extra argument) to all subsequent web service operations.

  • Call the PresentEntry() operation for all the entries maintained on the data source (which should be synchronized with the LDAP directory). The calls will be serialized, and the LDAP side could implement a throttling mechanism so that operations arrive at a rate the LDAP server can handle. WS-ReliableMessaging could also be used to make sure that calls to the operation are ordered and happen exactly once. Every entry that is updated will also get an update timestamp attribute with the value of the UpdateTimestamp passed to StartTotalUpdate(). If the operation hits an unrecoverable error it will return the error to the caller and the update process will stop.
  • Call a web service called EndTotalUpdate(TotalUpdateId, Boolean success, error message). This operation signals the end of the update process. If the update failed at some point we can signal that and log a corresponding error message. If it was successful we update the GDV attribute in the base entry and run a post-update process that scans all entries (with a data source attribute corresponding to this source) whose update timestamp holds a value other than the UpdateTimestamp. These entries were not presented during the update, so they no longer exist in the data source and should simply be deleted.
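
Putting the calls together, the caller side might look roughly like this (the ws proxy object and the entries iterable are assumptions, standing in for whatever SOAP/REST client the web services are exposed through):

import time

def total_update(ws, source_gdv, entries):
    if ws.GetDataVersion() == source_gdv:
        return                      # already in sync, nothing to do
    ts = time.strftime("%Y%m%d%H%M%SZ", time.gmtime())
    status, update_id = ws.StartTotalUpdate(source_gdv, ts)
    if status == "DataCurrent":
        return
    if status not in ("UpdateStarted", "NoDataVersion"):
        raise RuntimeError(status)  # UpdateInProgress or Error
    try:
        for entry in entries:       # serialized, in a stable order
            ws.PresentEntry(update_id, entry)
    except Exception as exc:
        ws.EndTotalUpdate(update_id, False, str(exc))
        raise
    ws.EndTotalUpdate(update_id, True, "")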

When we initiate a total update and assign a TotalUpdateId we can also assign an idle timeout and an idle timer. Every call to the PresentEntry() operation resets the timer. If the idle timeout is exceeded we kill the update process and return a corresponding error to any subsequent PresentEntry() or EndTotalUpdate() call.
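
A minimal sketch of such a timer on the provider side, assuming Python's threading.Timer (the timeout value is arbitrary):

import threading

IDLE_TIMEOUT = 300                  # seconds; arbitrary for this sketch

class TotalUpdateSession:
    def __init__(self, update_id):
        self.update_id = update_id
        self.aborted = False
        self.timer = None
        self.touch()

    def touch(self):
        # Called on every PresentEntry(): restart the idle countdown.
        if self.timer is not None:
            self.timer.cancel()
        self.timer = threading.Timer(IDLE_TIMEOUT, self.abort)
        self.timer.start()

    def abort(self):
        # Idle for too long: mark the update dead so later calls to
        # PresentEntry()/EndTotalUpdate() can return an error.
        self.aborted = True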

The above needs minimal changes in the data source and only requires coding the PresentEntry() caller and provider. On the other hand, even a single update means walking through all the data maintained, which is not scalable beyond a few hundred entries. If we have already implemented operation-specific web service operations (add, change, delete etc.) and we are willing to maintain a changelog on our data source, we can define a process that only sends updates to the LDAP server, keeping the above process for initial syncing and as a fallback if we lose sync.

In this case we need to create a changelog in the data source that will be updated on every write operation with the following data (a table sketch follows the list):

  • GDV: The Global Data Version that corresponds to this operation
  • Entry unique id
  • Entry type
  • Change Type
  • Changed Data
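
Under the same sqlite3 assumption as before, the changelog could look roughly like this (the column names simply mirror the fields above):

import sqlite3

conn = sqlite3.connect("payroll.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS changelog (
    gdv         INTEGER PRIMARY KEY,  -- GDV of this operation
    entry_id    TEXT NOT NULL,        -- entry unique id (the sync key)
    entry_type  TEXT,                 -- e.g. employee, student, faculty
    change_type TEXT NOT NULL,        -- add / modify / delete
    change_data TEXT                  -- the changed data, serialized
)""")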

In this case the update process can be defined as follows (quite similar to the total update process, actually):

  • Call the GetDataVersion() operation.
  • Call a StartUpdate() service similar to StartTotalUpdate() (with the same arguments, return values and functionality).
  • Call the operation-specific web services. The services perform the operation, update the update timestamp attribute and return success or an error. An optimization would be to only create a delete() operation for entry removal and keep the PresentEntry() operation for the rest of your needs.
  • Call an EndUpdate() service similar to EndTotalUpdate(). The only difference is that in this case there's no reason to walk the tree searching for entries that need deletion. A sketch of this loop follows.
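
A hedged sketch of the changelog-driven loop, under the same assumptions as before (the ws proxy and the sqlite3 changelog are illustrative):

import time

class NeedTotalUpdate(Exception):
    """Raised when a full PresentEntry() pass is required first."""

def incremental_update(ws, conn, source_gdv):
    ldap_gdv = ws.GetDataVersion()
    oldest = conn.execute("SELECT MIN(gdv) FROM changelog").fetchone()[0]
    if ldap_gdv is None or oldest is None or ldap_gdv < oldest - 1:
        raise NeedTotalUpdate()     # the changelog can't bridge the gap
    status, update_id = ws.StartUpdate(
        source_gdv, time.strftime("%Y%m%d%H%M%SZ", time.gmtime()))
    if status != "UpdateStarted":
        return
    rows = conn.execute(
        "SELECT gdv, entry_id, change_type, change_data FROM changelog"
        " WHERE gdv > ? ORDER BY gdv", (ldap_gdv,))
    for gdv, entry_id, change_type, data in rows:
        if change_type == "delete":
            ws.Delete(update_id, entry_id)
        else:                       # add/modify both go through PresentEntry()
            ws.PresentEntry(update_id, entry_id, data)
    ws.EndUpdate(update_id, True, "")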

The above process can be used as long as the LDAP tree has been initialized and the changelog contains enough data to perform synchronization. Otherwise a total update is performed first and we then resume normal changelog-based updates.

In a previous post I described how a single entry change could lead to a replication halt and a need to reinitialize things. In OpenLDAP, since syncrepl uses a present phase, replication conflicts can be fixed easily: just make the appropriate changes to the problematic entries and replication will resume and the replicas will get synchronized.

The problem arises with delta-syncrepl. Since it depends on a changelog database holding entry changes, when it reaches the problematic change it will keep trying to execute it on the replicas and replication will halt. As a result, it's important to be able to clear the changelog of the corresponding entries so that replication can continue.

The easiest way to do that is to simply wipe everything from the changelog database. Syncrepl will fall back to full replication, the replicas will get synchronized, and the changelog can again facilitate delta-syncrepl after synchronization. Since RefreshAndPersist replication will usually be in use and the replicas will not be far behind, the accesslog will probably be quite small and deleting it won't make any large difference in synchronization time.
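
If you prefer to do the wiping over LDAP rather than at the database level, something along these lines should work (a python-ldap sketch, assuming the accesslog suffix is cn=accesslog and the bound identity is allowed to delete there; the URL and credentials are placeholders):

import ldap

conn = ldap.initialize("ldap://provider.example.com")
conn.simple_bind_s("cn=manager,dc=ntua,dc=gr", "secret")
# accesslog records sit one level below the accesslog suffix
for dn, _ in conn.search_s("cn=accesslog", ldap.SCOPE_ONELEVEL,
                           "(objectClass=*)", ["1.1"]):  # "1.1": no attrs
    conn.delete_s(dn)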

The hard way is to find the faulting entries and delete them. Now is a good time to read the slapo-accesslog man page and also look at some example entries in the accesslog database. Here are three entries, corresponding to an entry add, modify and delete:

# 20090125102756.000001Z, accesslog
dn: reqStart=20090125102756.000001Z,cn=accesslog
objectClass: auditAdd
reqStart: 20090125102756.000001Z
reqEnd: 20090125102757.000000Z
reqType: add
reqSession: 17
reqAuthzID: cn=manager,dc=ntua,dc=gr
reqDN: uid=test-user123,dc=ntua,dc=gr
reqResult: 0
reqMod: objectClass:+ top
reqMod: objectClass:+ person
reqMod: objectClass:+ organizationalPerson
reqMod: objectClass:+ inetOrgPerson
reqMod: uid:+ test-user123
reqMod: cn:+ test user
reqMod: sn:+ user
reqMod: givenName:+ test
reqMod: structuralObjectClass:+ inetOrgPerson
reqMod: entryUUID:+ 93a43296-7f16-102d-9d8f-c7eb2283aa50
reqMod: creatorsName:+ cn=manager,dc=ntua,dc=gr
reqMod: createTimestamp:+ 20090125102756Z
reqMod: entryCSN:+ 20090125102756.999906Z#000000#001#000000
reqMod: modifiersName:+ cn=manager,dc=ntua,dc=gr
reqMod: modifyTimestamp:+ 20090125102756Z

# 20090125102958.000001Z, accesslog
dn: reqStart=20090125102958.000001Z,cn=accesslog
objectClass: auditModify
reqStart: 20090125102958.000001Z
reqEnd: 20090125102958.000002Z
reqType: modify
reqSession: 21
reqAuthzID: cn=manager,dc=ntua,dc=gr
reqDN: uid=test-user123,dc=ntua,dc=gr
reqResult: 0
reqMod: cn:= test user
reqMod: entryCSN:= 20090125102958.789043Z#000000#001#000000
reqMod: modifiersName:= cn=manager,dc=ntua,dc=gr
reqMod: modifyTimestamp:= 20090125102958Z

# 20090125103004.000001Z, accesslog
dn: reqStart=20090125103004.000001Z,cn=accesslog
objectClass: auditDelete
reqStart: 20090125103004.000001Z
reqEnd: 20090125103004.000002Z
reqType: delete
reqSession: 22
reqAuthzID: cn=manager,dc=ntua,dc=gr
reqDN: uid=test-user123,dc=ntua,dc=gr
reqResult: 0

As you can see, the important parts are the RDN, which is based on the request time (reqStart); the request type (reqType, with values add, modify and delete, although you can also just use the corresponding objectClass values); and the request target DN (reqDN, holding the DN of the entry). Consequently, in order to search the accesslog database efficiently, the administrator should add an index for the reqDN attribute and perform queries of the form: (&(reqDN=<entry DN>)(|(objectClass=auditAdd)(objectClass=auditModify)(objectClass=auditDelete))).
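
As an illustration, the same search with python-ldap (the server URL and credentials are placeholders):

import ldap

conn = ldap.initialize("ldap://provider.example.com")
conn.simple_bind_s("cn=manager,dc=ntua,dc=gr", "secret")
flt = ("(&(reqDN=uid=test-user123,dc=ntua,dc=gr)"
       "(|(objectClass=auditAdd)(objectClass=auditModify)"
       "(objectClass=auditDelete)))")
# list every accesslog record touching the entry, oldest first
for dn, attrs in conn.search_s("cn=accesslog", ldap.SCOPE_ONELEVEL, flt,
                               ["reqType", "reqStart"]):
    print(dn, attrs["reqType"])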

Depending on the conflicting entry's final form, the administrator should delete all accesslog entries that would create a conflict, leaving only the resolving request (or none at all, if there isn't one).

I was curious to see how long online initialization would take with our server setup. First of all I wanted to test how much time slapadd takes with our user database. Keep in mind that we keep quite a lot of attributes in our user entries (the inetorg* set of objectclasses, eduPerson and various service-specific attributes), so we've ended up configuring 20 indexes on our database. That probably means slapadd has a lot of work to do when adding entries.
Plain slapadd took around 10 minutes to complete for 23,000 entries, while slapadd with the -q option took only 1.5 minutes! Talk about a speedup. The -q option is clearly a must.
The main difference between slapadd and online initialization, though, is that the latter goes through all the modules configured on the LDAP server. In our case this configuration includes:

  • Changelog overlay (only added to the server configuration for the last init run).
  • Audit log (thus all operations will get logged on the audit log file).
  • constraint and attribute uniqueness overlays.
  • Schema checking.

Consequently, I would expect online initialization to take more time than the original slapadd, since it actually has to update two databases (the actual user database and the accesslog db) as well as log the operations to the audit log.

(Note: please don't take the absolute init times into account, just the difference between them. Right now both replicas are hosted on the same machine as jails, so I'm planning to update the numbers when I move one to another machine.)

The first try at online init failed and seemed to get stuck somewhere. Adding Sync to the value of olcLogLevel under cn=config can be very helpful for troubleshooting. It turned out that there's an obvious chicken-and-egg problem with the value list attribute constraint of the constraint overlay. When setting up a value list constraint, the list is taken from the result of an LDAP URL. Since during initialization there are no entries in the directory, that LDAP URL returns an empty list, so as soon as you try to add an entry with an attribute covered by a value list constraint the operation fails and initialization halts. The solution was to disable the value list constraint until initialization finished.

The next try was a success but took much more time than anticipated. One tuning action that had nice results was increasing the checkpoint interval. At first I had configured checkpoints to happen every 60KB on the user database and every 120KB on the changelog database. Since I had also configured checkpoints to happen every 5 minutes, it was a good idea to increase the size to 1MB. This dropped online initialization to 22 minutes (no accesslog overlay), a bit more than double the time plain slapadd needs on the same data.
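
In slapd.conf terms the resulting setting would look roughly like this (the suffix is illustrative; the directive takes kbytes and minutes):

database hdb
suffix "dc=ntua,dc=gr"
# checkpoint every 1MB of log data or every 5 minutes, whichever comes first
checkpoint 1024 5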

Just to give an idea of the performance increase from checkpoint tuning, here are the results on my test platform (the consumer has a minimal configuration with just the back-hdb database and none of the changelog, auditlog, uniqueness and constraint overlays):

  • Checkpoint Size: 120KB, Init Time: 14 minutes
  • Checkpoint Size: 1MB, Init Time: 7 minutes (50% decrease!)

After enabling the accesslog overlay, the increase in total operation processing was quite impressive: this time online init took around 33 minutes (compared to 22 minutes).

UPDATE:

Since syncrepl will fall back to a full present/delete phase if the changelog does not contain enough information or is outdated, it becomes clear that the accesslog database and its contents are not mission critical. Consequently, it is advisable to enable the dbnosync configuration directive for the accesslog database. Enabling the setting dropped online init time to around 25 minutes, almost removing the additional accesslog update cost.
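
In slapd.conf this is a single directive on the accesslog database (the directory path is illustrative):

database hdb
suffix "cn=accesslog"
directory "/var/lib/ldap/accesslog"
# don't flush BDB buffers to disk on every write; the accesslog can be
# rebuilt, so losing a few records on a crash is acceptable
dbnosync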

Lastly, a couple more notes on tuning the BDB database:

  • db_stat -m will give you statistics on the BDB cache usage and hit ratio. It can be a helpful tool for checking whether your BDB cache size setting (set_cachesize in DB_CONFIG) is working properly or is set too low.
  • This wiki page on Sun (although targeted at Sun DS) can provide useful hints for determining whether your database page size is optimal for your configuration. In OpenLDAP's back-hdb the corresponding configuration directive is dbpagesize.
