You are currently browsing the tag archive for the ‘openldap’ tag.

There are many cases where you want to move from an old, legacy LDAP backend to OpenLDAP. Sometimes this transition requires moving to a new naming context (for instance from o=&lt;company&gt;,c=&lt;country&gt; style to dc-based naming) along with a lot of schema changes. The problem the administrator usually faces is performing the necessary changes on the actual LDAP data, a task that usually requires writing a script to manipulate an LDIF export. That is always hard work, and any error or omission is not easily fixed.
Another way to do things is to use the backends/overlays provided by OpenLDAP to transform the actual online data, so that a simple LDAP search on the whole tree is enough to produce an LDIF file ready for import on the new system. The necessary steps include (in the order described):

  • The meta backend to proxy requests to the legacy LDAP server.
  • The rwm overlay to map attributes and objectclasses to new names and delete those that will no longer be needed.
  • The relay backend to perform a suffix massage (if one is required). The suffix massage could be done earlier, but doing it at a later stage has the advantage of being able to transform DN-syntax values of mapped attributes.

Here’s an example (real life) configuration for the above scenario:
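A minimal sketch of the chain described above, with placeholder suffixes, hostname and attribute/objectclass names (the actual values obviously depend on the legacy server), might look like this in slapd.conf:

```
# Proxy the legacy server under its original naming context
database    meta
suffix      "o=Example,c=US"
uri         "ldap://legacy.example.com/o=Example,c=US"

# Map attributes/objectclasses to their new names; a mapping with no
# local name drops the foreign attribute entirely
overlay     rwm
rwm-map     objectclass inetOrgPerson legacyPerson
rwm-map     attribute   mail          legacyMail
rwm-map     attribute   legacyOnlyAttr

# Present the result under the new dc-style suffix
database    relay
suffix      "dc=example,dc=com"
relay       "o=Example,c=US"
overlay     rwm
rwm-suffixmassage "o=Example,c=US"
```

With this in place, an ldapsearch with base dc=example,dc=com returns the transformed entries, ready to be saved as LDIF and imported into the new server.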



I ran across an OpenLDAP log file analysis tool. It's written in Perl and will walk through the log file created by OpenLDAP (logging is performed through syslog) and report on various things such as (taken from the website):

operations (e.g., Connect, Bind, Unbind) performed per host, unindexed searches, attributes requested, search filters used, total operations per server, and operation breakdowns by day, hour and month.

There's also a statistics collector (it gathers data from cn=Monitor) whose output can be fed to a graphing application in order to create graphs of server statistics.

Sun, along with its Java Directory Server offering, also provides a set of very useful utilities (as well as a set of LDAP libraries) called the Directory Server Resource Kit (DSRK). It is available in binary form for Solaris and Red Hat Linux, so installation is as easy as running unzip.

The utilities it includes are quite helpful, especially if you're running a Java Directory Server (log analyzer, search operations replay tool and so on). The documentation is quite helpful and provides all the necessary usage information. Maybe I'll write a post soon about using them with your Sun DS installation.

One set of utilities included is specifically targeted at benchmarking your LDAP server operations. It is not as extensive as SLAMD, but on the other hand it's very easy to understand and use, so it was a good opportunity to run it against our OpenLDAP installation to measure its relative performance, especially under different configuration options.

Just as a side note, I am not trying to measure OpenLDAP performance or compare it with other products; I was just trying to find the best configuration for our setup and the software/hardware limits. The client was not even on the same network as the server, although I made sure they were on a gigabit path.

License notice: The license accompanying the DSRK states: “3.5 You may not publish or provide the results of any benchmark or comparison tests run on the Licensed Software to any third party without Sun’s prior written consent.” I've removed the actual numbers from the post in order to stay out of trouble. If I find that I can include them, this notice will be removed and the numbers added.

All tools are multithreaded and provide a command line option to set the number of threads used. In my case I found that I got the best performance when I set the number of threads equal to the number of real CPU cores. Also, most of the tools allow random searches on the directory by substituting random lines from an input file into the provided search filter. That way you can simulate real-life usage of your directory service.

The first tool I used was the authentication rate measurement tool. I tried two setups: in the first, the tool opened a new connection for each bind, while in the second it kept a connection open and rebound over it. I only used one specific bind DN, since I only wanted to measure bind speed and not LDAP entry searching. In the first case I got 'a few' binds/sec (with a single-threaded run), while in the second case a multithreaded run performed a few thousand binds/sec. Using tcpdump to check the network overhead, it became clear that every bind (along with connection establishment and teardown) needed 0.028 seconds with all the network packets transmitted. As a result, each thread had a network-imposed upper limit of 35 binds/sec, which made it clear that this type of setup could not achieve extraordinary performance in a connect-and-bind-every-time configuration.
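The arithmetic behind that per-thread ceiling is simple; a quick check with the measured round-trip time:

```python
# Per-thread bind ceiling implied by the measured round-trip time.
round_trip = 0.028            # seconds per connect + bind + teardown cycle (from tcpdump)
per_thread_limit = 1 / round_trip
print(int(per_thread_limit))  # 35 binds/sec per thread, matching the observed rate
```

This is why only the multithreaded, persistent-connection runs could exceed that rate: with N threads the aggregate ceiling scales to roughly N * 35 binds/sec.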

The second tool was the search rate measurement tool. I always made sure to connect and bind once and use that connection for subsequent searches, in order to measure only LDAP search speed. What I wanted to see was the impact of different BDB and entry cache configurations. Before each run I restarted the LDAP server and performed an ldapsearch of the whole tree in order to prime the caches. All the iterations were random searches on the uid attribute, requesting all entry attributes. My database holds about 24,000 entries, has a minimum BDB cache of 4MB and a total size of 109MB. The results were interesting:

BDB cache | Entry cache | Searches/sec | BDB cache hit ratio (id2entry/dn2id/uid/total) | slapd resident memory
4MB       | 0           | (removed)    | 81% / 87% / 94% / 91%                          | 62MB
8MB       | 0           | (removed)    | 82% / 94% / 97% / 93%                          | 79MB
60MB      | 0           | (removed)    | 97% / 99% / 99% / 99%                          | 141MB
110MB     | 0           | (removed)    | 99% across the board                           | 142MB
110MB     | 8000        | (removed)    | (removed)                                      | 119MB
110MB     | 24000       | (removed)    | (removed)                                      | 330MB






BDB cache hit ratios were taken using db_stat -m. I started by keeping the entry cache at zero entries and increasing the BDB cache in order to check the difference in performance. The initial run with 4MB produced a few thousand searches/sec, while a BDB cache of 110MB produced a moderate increase while requiring more than double the memory. You may wonder about the negligible difference in memory usage between 60MB and 110MB of BDB cache. The reason is that I only performed searches on the uid attribute, so although the total database size is 109MB, the sum of id2entry.bdb, dn2id.bdb and uid.bdb was only 82MB. Since BDB will automatically increase the configured cache size by 25%, 60MB was actually 75MB, so increasing the setting to 110MB didn't have any real effect. Reaching a 99% BDB cache hit ratio did prove helpful though, since the search rate increased quite significantly.
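To make the cache sizing above concrete, here is the arithmetic (the 25% padding is BDB's own overhead on top of the configured cache size):

```python
# Effective BDB cache: BDB pads the configured set_cachesize value by ~25%.
def effective_cache(setting_mb):
    return setting_mb * 1.25

working_set_mb = 82  # id2entry.bdb + dn2id.bdb + uid.bdb for uid-only searches

print(effective_cache(60))   # 75.0  -> already close to the 82MB working set
print(effective_cache(110))  # 137.5 -> far past it, hence no further gain
```

The lesson: size the cache against the working set of database files your queries actually touch, not against the total database size.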

The rest of the runs tried to measure the effect of the entry cache. I kept the BDB cache at 110MB in order to keep the database files in memory and measure only the entry cache speedup. The first run was made with an entry cache of 8,000 entries (1/3 of the total) and the second with an entry cache of 24,000 entries (100% of the entries). The entry cache (at 24,000 entries) provided a substantial increase over both the first run and the 110MB BDB cache run. Memory usage grew significantly though, since the resident size increased to 330MB (compared to 60MB with the 4MB BDB cache, almost 6x more).

The last run of this set tried to use both caches while minimizing memory usage. The server was set up with 8MB of BDB cache and an 8,000-entry entry cache. It performed better than the 110MB BDB cache run but worse than the entry cache runs. That's an important increase from the initial run, and it reaches a large percentage of the result of the combined full BDB/entry cache configuration at 211MB of memory usage (3.5x the 60MB and 65% of the 330MB).

It's obvious that adding more caching can improve the search rate substantially, although it incurs a large cost in memory usage. Also, adding more cache than your working set will probably have no effect and will just use up valuable memory. Since available memory is usually not unlimited (and the machine used as an LDAP server might perform other duties as well), I would suggest a strategy like the last run: set the BDB cache to 2x or 3x the minimum required, and the entry cache to a reasonable value that holds your working set without consuming all your system memory. If, on the other hand, your database is small or you have enough memory, increasing the caches can only help performance (but always aim to hold the working set, not the whole database).

The last tool used was the entry modification rate measurement tool. I kept the full BDB/entry cache settings in order to keep my database in memory and measure only the modification overhead (and not the entry searching cost). The tool was configured to change the userPassword attribute of a random entry to a random 10-character string. Since the entry selection was totally random, database and disk access would also be random, and performance would be much lower than when adding entries. In my setup I had the accesslog database, the audit log and logging enabled, so an entry modification actually meant multiple disk writes to all the corresponding databases and log files. I made two test runs: one with a single thread and one with 8 threads (always connecting and binding once, to minimize network and connection overhead). The results were a bit poor:

  • Single threaded: A mods/sec
  • 8 threads: B (B more than A) mods/sec

Measuring I/O usage with systat, I found that the disk subsystem was quite saturated, performing around 250-300 transactions/sec, so the above numbers were probably the actual disk limit of the server. I wanted to see what the overhead of the accesslog and audit log was, so I temporarily disabled them and ran the test again. This time it reached more than double the number of the plain 8-thread run. The above makes it obvious that modules like the audit log or accesslog can be helpful for debugging or for increasing replication performance, but they can also hurt write throughput a lot. As a result, the administrator should always keep modules that require disk writes to the minimum.

Most of the tips listed here are included in my previous posts, but I thought it would be a good idea to also keep them all in one place (and update the list every time I have a new one :)). So here goes:

  • Make sure you index the values that need indexing and only keep the indexes that count. Also take into account that if the candidate entries for a filter are a significant percentage of the total entry count, you are probably better off with a sequential scan of the whole database than with random access to a large percentage of the entries. It is very important to index the attributes used for attribute uniqueness and for replication (entryCSN and entryUUID for back-hdb; entryCSN, reqEnd, reqResult and reqStart for the accesslog database).
  • The most important cache is the Berkeley DB cache, which allows you to keep database contents in memory and reduce disk access to a minimum. As I showed in previous posts, you can get a significant cache hit ratio by just keeping the internal B-tree pages of all the *.bdb files of your database in memory. To calculate the total size needed, run db_stat -d on each bdb file and take the sum of (internal pages * page size). This sum should then be used as the value of the set_cachesize directive in DB_CONFIG. db_stat -m can be used to gather statistics on BDB cache usage.
  • While we are on the subject: in case the database page size is not suitable for the kind of data the database is carrying, db_stat -d will show a large number of overflow pages for the problematic database files. You can use the dbpagesize &lt;dbfile&gt; &lt;size&gt; back-hdb directive to set the page size accordingly (but that change requires reinitializing your database).
  • If you can keep the BDB transaction log files on a separate disk, that will definitely increase performance.
  • Another idea is to set buffered I/O for the OpenLDAP log file in syslog.conf.
  • The entry cache should hold your entry working set. Your database might hold 100K users, but you might notice that only 10K use your services each day. In that case it would be a good idea (especially if you are short on memory) to keep only the working set in the entry cache. The IDL cache should be about 3 times the size of the entry cache.
  • The checkpoint directive value should be set to a large value like 1MB. A nice idea is to also perform checkpoints every 5 minutes. That way, if you have bursts of entry changes (or you are initializing your database) you will checkpoint every 1MB, and otherwise (during normal operation) every 5 minutes.
  • Since the accesslog database is not mission critical, if you are using it it is a good idea to set the dbnosync directive on it.
  • Set 4 threads per CPU core.
  • UPDATE: Use db_stat -c to take a look at the lock subsystem. If you see the maximum number of locks/lockers/lock objects at any one time reaching the configured maximum, you should increase the maximum values. The settings (in DB_CONFIG) are:
set_lk_max_locks <num>
set_lk_max_lockers <num>
set_lk_max_objects <num>
  • UPDATE2: On Linux systems with a large user base, slapadd will run much faster if you use shared memory instead of the default mmap'd files (shm_key directive in the back-hdb configuration).
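Pulling the directives above together, a sketch of the relevant DB_CONFIG and slapd.conf fragments might look like this. The numbers are illustrative placeholders, not recommendations; size the caches from the db_stat output as described in the list:

```
# DB_CONFIG (in the database directory)
set_cachesize   0 157286400 1    # ~150MB BDB cache; size it from db_stat -d
set_lk_max_locks    3000         # raise if db_stat -c shows the maximum
set_lk_max_lockers  3000         # being reached
set_lk_max_objects  3000

# slapd.conf, back-hdb user database section
cachesize    10000               # entry cache: the working set, not the whole DB
idlcachesize 30000               # ~3x the entry cache
checkpoint   1024 5              # every 1MB of log data or every 5 minutes
shm_key      42                  # Linux: shared memory instead of mmap'd files

# slapd.conf, accesslog database section
dbnosync                         # the accesslog is not mission critical
```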

In a previous post I described how a single entry change could lead to a replication halt and a need to reinitialize things. In OpenLDAP, since syncrepl uses the present phase, replication conflicts can be easily fixed: just make the appropriate changes to the problematic entries, and replication will resume and the replicas will get synchronized.

The problem arises with delta-syncrepl. Since it depends on a changelog database holding entry changes, when it reaches the problematic change it will keep trying to execute it on the replicas, and replication will halt. As a result, it's important to be able to clear the changelog of the corresponding entries so that replication can continue.

The easiest way to do that is to simply wipe everything from the changelog database. Syncrepl will fall back to full replication, the replicas will get synchronized, and the changelog can again facilitate delta-syncrepl after synchronization. Since usually refreshAndPersist replication will be used and the replicas will not be far behind, the accesslog will probably be quite small and deleting it won't make any large difference in synchronization time.

The hard way is to find the offending entries and delete them. Now is a good time to read the slapo-accesslog man page and also look at some example entries in the accesslog database. Here are three entries, corresponding to an entry add, modify and delete:

# 20090125102756.000001Z, accesslog
dn: reqStart=20090125102756.000001Z,cn=accesslog
objectClass: auditAdd
reqStart: 20090125102756.000001Z
reqEnd: 20090125102757.000000Z
reqType: add
reqSession: 17
reqAuthzID: cn=manager,dc=ntua,dc=gr
reqDN: uid=test-user123,dc=ntua,dc=gr
reqResult: 0
reqMod: objectClass:+ top
reqMod: objectClass:+ person
reqMod: objectClass:+ organizationalPerson
reqMod: objectClass:+ inetOrgPerson
reqMod: uid:+ test-user123
reqMod: cn:+ test user
reqMod: sn:+ user
reqMod: givenName:+ test
reqMod: structuralObjectClass:+ inetOrgPerson
reqMod: entryUUID:+ 93a43296-7f16-102d-9d8f-c7eb2283aa50
reqMod: creatorsName:+ cn=manager,dc=ntua,dc=gr
reqMod: createTimestamp:+ 20090125102756Z
reqMod: entryCSN:+ 20090125102756.999906Z#000000#001#000000
reqMod: modifiersName:+ cn=manager,dc=ntua,dc=gr
reqMod: modifyTimestamp:+ 20090125102756Z

# 20090125102958.000001Z, accesslog
dn: reqStart=20090125102958.000001Z,cn=accesslog
objectClass: auditModify
reqStart: 20090125102958.000001Z
reqEnd: 20090125102958.000002Z
reqType: modify
reqSession: 21
reqAuthzID: cn=manager,dc=ntua,dc=gr
reqDN: uid=test-user123,dc=ntua,dc=gr
reqResult: 0
reqMod: cn:= test user
reqMod: entryCSN:= 20090125102958.789043Z#000000#001#000000
reqMod: modifiersName:= cn=manager,dc=ntua,dc=gr
reqMod: modifyTimestamp:= 20090125102958Z

# 20090125103004.000001Z, accesslog
dn: reqStart=20090125103004.000001Z,cn=accesslog
objectClass: auditDelete
reqStart: 20090125103004.000001Z
reqEnd: 20090125103004.000002Z
reqType: delete
reqSession: 22
reqAuthzID: cn=manager,dc=ntua,dc=gr
reqDN: uid=test-user123,dc=ntua,dc=gr
reqResult: 0

As you can see, the important parts are the RDN, which is based on the request time (reqStart), the request type (reqType with values add, modify and delete, although you can also just use the corresponding values of the objectClass attribute) and the request target DN (reqDN, holding the DN of the entry). Consequently, in order to search the accesslog database efficiently, the administrator should add an index for the reqDN attribute and perform queries of the form: (&(reqDN=&lt;entry DN&gt;)(|(objectClass=auditAdd)(objectClass=auditModify)(objectClass=auditDelete))).

Depending on the conflicting entry's final form, the administrator should delete all accesslog entries that would create a conflict, leaving only the resolving request (or none, if there isn't one).
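As a small illustration, the filter above can be generated for any target DN with a few lines of Python (the DN used here is the test entry from the examples):

```python
# Build the accesslog search filter for a given target DN, ORing the
# three request objectclasses (add, modify, delete).
def accesslog_filter(dn):
    classes = ("auditAdd", "auditModify", "auditDelete")
    oc = "".join("(objectClass=%s)" % c for c in classes)
    return "(&(reqDN=%s)(|%s))" % (dn, oc)

print(accesslog_filter("uid=test-user123,dc=ntua,dc=gr"))
# (&(reqDN=uid=test-user123,dc=ntua,dc=gr)(|(objectClass=auditAdd)(objectClass=auditModify)(objectClass=auditDelete)))
```

Feed the result to ldapsearch with base cn=accesslog to list every logged write against that entry, ordered by reqStart.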

I was curious to see how long online initialization would take with our server setup. First of all I wanted to test how much time slapadd takes with our user database. Keep in mind that we keep quite a lot of attributes in our user entries (the inetorg* set of objectclasses, eduPerson and various service-specific attributes), so we've ended up configuring 20 indexes on our database. That probably means slapadd has a lot of work to do when adding entries.
Plain slapadd took around 10 minutes to complete for 23,000 entries, while slapadd with the -q option took only 1.5 minutes! Talk about a speedup. The -q option is clearly a must.
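For reference, the quick-mode run takes the form below (the config path and LDIF file name are placeholders for our actual files):

```
slapadd -q -f /usr/local/etc/openldap/slapd.conf -l users.ldif
```

Note that -q skips consistency checks on the assumption that the input LDIF is known good, which is exactly the initial-load scenario.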
The main difference between slapadd and online initialization, though, is that the latter goes through all the modules configured on the LDAP server. In our case this configuration includes:

  • Changelog overlay (only added to the server configuration for the last init run).
  • Audit log (thus all operations will get logged on the audit log file).
  • Constraint and attribute uniqueness overlays.
  • Schema checking.

Consequently, I would expect online initialization to take more time than the original slapadd, since it actually has to update two databases (the actual user database and the accesslog db) as well as log the operations to the audit log.

(Note: please don't take the actual init times into account, just the difference between them. Right now both replicas are hosted on the same machine as jails, so I'm planning to update the numbers when I move one to another machine.)

The first try at online init failed and seemed to get stuck somewhere. Adding Sync to the value of olcLogLevel under cn=config can be very helpful for troubleshooting. It turned out that there's an obvious chicken-and-egg problem with the value list attribute constraint of the constraint overlay. When setting up a value list attribute constraint, the list is taken from the result of an LDAP URL. Since during initialization there are no entries in the directory, that LDAP URL will return an empty list, and as soon as you try to add an entry with an attribute corresponding to a value list constraint, the operation will fail and initialization will halt. The solution was to disable the value list constraint until initialization finished.

The next try was a success but took much more time than anticipated. One tuning action that had nice results was increasing the checkpoint interval. At first I had configured checkpoints to happen every 60KB on the user database and every 120KB on the changelog database. Since I had also configured checkpoints to happen every 5 minutes, it was a good idea to increase the size to 1MB. This dropped online initialization to 22 minutes (no accesslog overlay), a bit more than double the time plain slapadd needs on the same data.

Just to give an idea of the performance increase from checkpoint tuning, here are the results on my test platform (the consumer has a minimal configuration, with just the back-hdb setup and no changelog, auditlog, uniqueness or constraint overlays):

  • Checkpoint Size: 120KB, Init Time: 14 minutes
  • Checkpoint Size: 1MB, Init Time: 7 minutes (50% decrease!)

After enabling the accesslog overlay, the increase in total processing time was quite noticeable, since this time online init took around 33 minutes (compared to 22 minutes).


Since syncrepl will fall back to a full present/delete phase if the changelog does not contain enough information or is outdated, it becomes clear that the accesslog database and its contents are not mission critical. Consequently, it is advisable to enable the dbnosync configuration directive for the accesslog database. Enabling the setting dropped online init time to around 25 minutes, almost removing the additional accesslog update cost.

Lastly, a couple more notes on tuning the BDB database:

  • db_stat -m will give you statistics on BDB cache usage and hit ratio. It is a helpful tool for checking whether your BDB cache setting (set_cachesize in DB_CONFIG) is working properly or is set too low.
  • This wiki page at Sun (although targeted at Sun DS) can provide useful hints for determining whether your database page size is optimal for your data. In OpenLDAP back-hdb the corresponding configuration directive is dbpagesize.

We'll be moving to OpenLDAP 2.4 shortly at NTUA as our primary LDAP server. I wanted to post a few details from my testing, so here goes:
In general the server is quite stable, fast and reliable. The only thing an administrator should really keep an eye on is making sure that the database files have the correct ownership/permissions after running the database administration tools (slapadd, slapindex etc.).


OpenLDAP has a really nice set of features which can prove quite useful like:

  • Per-user limits. The administrator can set time and size limits on a per-user or per-group basis, apart from setting them globally. One nice application is to limit anonymous access to only a few entries per query while allowing much broader limits for authenticated users. In my case I found that you have to set both soft and hard limits for things to work correctly.
  • MemberOf overlay. Maintains a memberOf attribute in user entries that are members of static groups.
  • Dynamic Group overlay. Dynamically creates the member attribute for dynamic groups based on the memberURL attribute value. Since the memberURL is evaluated on every group entry access, care should be taken so that only the proper users can access the group entries.
  • Audit Log overlay. Maintains a plain-text audit log file with all entry changes.
  • Referential Integrity and Attribute Uniqueness overlays.
  • Constraint overlay. A very handy overlay that allows the administrator to set various constraints on attribute values. Examples are a count constraint (for instance on the userPassword attribute), a size constraint (jpegPhoto attribute), a regular expression constraint (mail attribute) and even a value list constraint (through an LDAP URI), which can be very handy for attributes with a specific value set (like eduPersonAffiliation). Make sure you use 2.4.13 though, since the value list constraint will crash the server in earlier versions.
  • Monitor backend, in order to monitor server status, parameters and operations directly through LDAP.
  • Dynamic configuration, which provides an LDAP view of the server configuration, allowing the administrator to dynamically change most of it. I would not recommend using only dynamic config (although it is possible), since it's a bit cryptic and hard to administer. What I did was enable it on top of the regular text slapd.conf, in order to be able to dynamically change a few parameters, especially the database read-only flag (which can be used to perform online maintenance tasks like indexing or backing up data).
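Two of the features above lend themselves to short config examples. Here is a hedged slapd.conf sketch of per-user limits plus a few constraints; all DNs, values and patterns are placeholders, not our actual policy:

```
# Tight limits for anonymous users, broad ones for a trusted group;
# note that both soft and hard limits are set
limits anonymous size.soft=5 size.hard=5 time.soft=10 time.hard=10
limits group="cn=admins,ou=groups,dc=example,dc=com" size=unlimited time=unlimited

# Constraint overlay: count, size, regex and value list constraints
overlay constraint
constraint_attribute userPassword count 1
constraint_attribute jpegPhoto size 131072
constraint_attribute mail regex ^[^@]+@example\.com$
constraint_attribute eduPersonAffiliation uri ldap:///ou=affiliations,dc=example,dc=com?cn?sub?(objectClass=organizationalUnit)
```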


I've never given much attention to LDAP proxy authorization until recently, when a colleague brought it up. It's actually a very neat way to perform operations on an LDAP server as a normal user without requiring knowledge of that user's credentials. As a result you don't need to set up an all-powerful account; you can perform actions using the target user's actual identity, simplifying the access control policy on the LDAP server.

The above can come in very handy in single sign-on cases. Imagine a web site that uses Shibboleth to authenticate users (and thus has no knowledge of their passwords) but also has to perform actions on the user accounts stored on an LDAP server, either directly or through a web service interface. Since the web site does not have the user's password, it cannot perform an LDAP bind with the user's credentials. What it can do, though, is bind as a special user with proxy authorization privileges, SASL-authorize to the user's DN and perform the corresponding actions.

More information on proxy authorization and how to set it up in OpenLDAP can be found in the 'Using SASL' chapter of the Administrator's Guide. One nice feature in OpenLDAP is that you can limit the accounts to which a user can authorize to a specific set of users, defined either by an LDAP URL or by a DN regular expression.
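As an illustration of the setup that chapter describes, a sketch using the authzTo attribute; the DNs are placeholders and the exact policy depends on your deployment:

```
# slapd.conf: "to" means the binding entry's authzTo attribute
# lists the identities it is allowed to assume
authz-policy to

# LDIF: entry of the binding (proxy) service account
dn: cn=web-proxy,dc=example,dc=com
authzTo: ldap:///ou=people,dc=example,dc=com??sub?(objectClass=inetOrgPerson)
```

A client would then bind as cn=web-proxy and attach the proxied authorization control to each operation, e.g. with ldapsearch's -e '!authzid=dn:uid=someone,ou=people,dc=example,dc=com' option, and the operation runs under that user's identity and ACLs.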

Got my paper accepted at the 1st LDAP Conference, which will take place in Cologne, Germany on 6-7 September. It seems all the right people will be there, including Kurt Zeilenga (OpenLDAP founder), Howard Chu (Symas, OpenLDAP), Alex Karasulu (ApacheDS) and Ludovic Poitou (Sun, OpenDS). Usually I can only find a few presentations at a conference that I feel I must attend; in this case I cannot find a single presentation that isn't worth its time. It seems it will be two very busy days.

Abstracts Page:

Conference Page:

Symas (major commercial backer and developer of OpenLDAP) posted benchmark numbers on their site comparing OpenLDAP with the latest versions of FDS (Fedora DS), OpenDS (the Java-based directory server from Sun, still in the early stages of development) and ApacheDS (which does not seem to be able to handle real-world loads).

It seems that OpenLDAP outperforms the rest many times over, with FDS having trouble keeping 1 million entries in 8MB of RAM. Good news for OpenLDAP, bad news for FDS (which needs much more work, especially for large scale cases) and it makes me wonder how Sun ONE would perform (since it shares the same code base with FDS).


OpenLDAP vs OpenDS (and ApacheDS)

An older benchmark of OpenLDAP against Sun DS

About Me

Kostas Kalevras


E-mail: kkalev AT gmail DOT com
