
Re: Password file with over 3000 users.



Craig Sanders wrote:
> On Fri, Sep 21, 2007 at 12:02:23AM +0200, Ian wrote:
>> We actually use this password file across a couple of servers with a
>> cron job that copies it across with rsync every 5 minutes. At some
>> stage I must look at a SQL or LDAP based solution. I originally chose
>> the rsync because each server can run on its own if one goes down and
>> there are no performance issues. But libpam-ccreds and nss-updatedb
>> appear to offer the same functionality when coupled with ldap.
>>
>> We already have a postgresql backend for our radius server, would it
>> be better to run SQL -> LDAP -> nss or go directly from SQL -> nss?

Why the hell would you bother with LDAP? 3000 users is really not a
lot, and SQL can certainly handle dumping 3000 entries every X minutes.
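
To give an idea, here is a minimal sketch of such a dump in passwd(5)
format, straight out of psql. The "accounts" database and the "users"
table with its columns are made-up names for illustration, not anything
from this thread:

    #!/bin/sh
    # Sketch: dump user accounts from PostgreSQL into passwd(5) format.
    # Database, table and column names are illustrative; adjust to your
    # schema. 'x' is the usual placeholder when shadow passwords are used.
    psql -At -F: -U postgres accounts -c \
        "SELECT login, 'x', uid, gid, gecos, home, shell
           FROM users ORDER BY uid" > /etc/passwd.pgdump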

> SQL servers are useful, but i just don't trust them the way i trust
> plain text files.
> 
> even where i've stored, e.g., postfix maps in postgresql, i've always
> set them up so that the postgresql table is dumped (by cron job) to
> a plain text file and made into a hashed db file - that way the mail
> server will keep on working even if the postgresql server is down or
> unreachable or overloaded.
> 
> the trick to that is using a timestamp table auto-updated by a trigger
> when anything in the database is changed - that way you can avoid
> regenerating the hash maps when nothing has changed (i.e. the same idea
> as timestamp dependencies in Make).
> 
> craig

I agree 100% with that technique, and with what has been said above;
this is what we did in our control panel too.
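
For the archives, here is a rough sketch of that technique as I
understand it. All database, table and column names are illustrative
(a postfix "virtual" map and a one-row "map_mtime" timestamp table),
with the one-time trigger setup shown in the comments:

    #!/bin/sh
    # Sketch: rebuild the postfix hash map only when the database has
    # actually changed, by comparing a trigger-maintained timestamp.
    #
    # One-time SQL setup (illustrative names):
    #   CREATE TABLE map_mtime (changed timestamptz NOT NULL);
    #   INSERT INTO map_mtime VALUES (now());
    #   CREATE FUNCTION touch_mtime() RETURNS trigger AS $$
    #     BEGIN UPDATE map_mtime SET changed = now(); RETURN NULL; END
    #   $$ LANGUAGE plpgsql;
    #   CREATE TRIGGER virtual_touch AFTER INSERT OR UPDATE OR DELETE
    #     ON virtual FOR EACH STATEMENT EXECUTE PROCEDURE touch_mtime();

    STAMP=/var/lib/postfix/virtual.stamp
    LAST=$(psql -At -U postfix maps -c "SELECT changed FROM map_mtime")

    # Same timestamp as last run: nothing changed, skip the rebuild.
    [ -f "$STAMP" ] && [ "$(cat "$STAMP")" = "$LAST" ] && exit 0

    psql -At -F' ' -U postfix maps \
        -c "SELECT address, destination FROM virtual" > /etc/postfix/virtual
    postmap /etc/postfix/virtual    # regenerate the hashed virtual.db
    echo "$LAST" > "$STAMP"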

Also, I'd like to add that the dump should be atomic (e.g. if the dump
fails, fall back to the previous dump). We haven't done that yet, but
it's still on our todo list, and it is very important.
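
One way to get that, sketched with illustrative paths (dump_map stands
for whatever command produces the dump): write to a temporary file and
only mv it over the live copy once the dump has succeeded. Within one
filesystem, mv is a rename(2), which is atomic, so readers always see
either the old complete file or the new one:

    #!/bin/sh
    # Sketch: atomically replace the live file with a fresh dump, so a
    # failed or partial dump never clobbers the previous good copy.
    DST=/etc/postfix/virtual          # illustrative path
    TMP=$DST.tmp.$$
    if dump_map > "$TMP" && [ -s "$TMP" ]; then   # refuse an empty dump
        mv "$TMP" "$DST"    # same-filesystem mv = rename(2), atomic
    else
        rm -f "$TMP"        # dump failed: keep the previous copy
        exit 1
    fi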

Thomas
