
Re: Password file with over 3000 users.

On Fri, Sep 21, 2007 at 12:02:23AM +0200, Ian wrote:
> Craig Sanders wrote:
> > if you have the libnss-db package (part of nsswitch) installed, you have
> > everything you need already.
> Thanks. This is exactly what I need. I actually found this and got it
> running shortly after I posted my original message. Everything is in
> Debian - except the documentation!

AFAIK, there isn't any documentation.  certainly not any step-by-step howto.

it's one of the many things that's easy to figure out IF you already
know about it. the difficult part is finding out about it in the first
place.
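for the archives, the whole setup amounts to something like the lines
below in /etc/nsswitch.conf ("db" is the service libnss-db registers,
but i'm going from memory -- check the package's README to be sure):

```
passwd:         db files
group:          db files
shadow:         db files
```

with that in place, the cron job just rebuilds the .db maps in
/var/lib/misc from the flat files in /etc.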

> I will have to look into controlling the cron spam.

append ">/dev/null 2>&1" to the cron job. IMO, the output of that
Makefile isn't ever interesting enough to care about (and if there's a
problem, you'll notice it long before you read root's mail).
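i.e. edit the stock cron job so it looks something like this (path and
schedule are examples, adjust to whatever the package actually
installed):

```shell
# /etc/cron.d/libnss-db (example) -- rebuild the db maps quietly:
*/15 * * * *   root   cd /var/lib/misc && make >/dev/null 2>&1
```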

> We actually use this password file across a couple of servers with a
> cron job that copies it across with rsync every 5 minutes. At some
> stage I must look at a SQL or LDAP based solution. I originally chose
> the rsync because each server can run on its own if one goes down and
> there are no performance issues. But libpam-ccreds and nss-updatedb
> appear to offer the same functionality when coupled with ldap.
> We already have a postgresql backend for our radius server, would it
> be better to run SQL -> LDAP -> nss or go directly from SQL -> nss?

can't say. personally, i tend to avoid making web/mail/shell/etc servers
dependent on an SQL server for authentication (especially if it's
mysql). i'd be more inclined to use LDAP and have an LDAP slave on
every server that needs it.

SQL servers are useful, but i just don't trust them the way i trust
plain text files.

even where i've stored, e.g., postfix maps in postgresql, i've always
set them up so that the postgresql table is dumped (by cron job) to
a plain text file and made into a hashed db file - that way the mail
server will keep on working even if the postgresql server is down or
unreachable or overloaded.
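the cron side of that can be a one-liner. database, table and file
names below are made up for illustration -- `psql -At -F' '` just dumps
"key value" lines in the format postmap expects:

```shell
# example crontab entry (all names are hypothetical):
*/10 * * * * root psql -At -F' ' -d mail -c 'select address, destination from aliases' >/etc/postfix/aliases.pg && postmap hash:/etc/postfix/aliases.pg
```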

the trick to that is using a timestamp table auto-updated by a trigger
when anything in the database is changed - that way you can avoid
regenerating the hash maps when nothing has changed (i.e. the same idea
as timestamp dependencies in Make).
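on the unix side that check is just an mtime comparison, same as make
does. a runnable sketch, with throwaway temp files standing in for the
real stamp file (dumped from the trigger-maintained timestamp table)
and the generated hash map:

```shell
#!/bin/sh
# sketch of the make-style freshness test. the two mktemp files stand
# in for (a) the generated hash map and (b) a stamp file reflecting the
# last change in the database.
map=$(mktemp)      # pretend this is the existing hash map
sleep 1
stamp=$(mktemp)    # pretend the db was modified after the map was built
if [ "$stamp" -nt "$map" ]; then
    echo "db changed -- regenerating map"
else
    echo "map is up to date"
fi
rm -f "$map" "$stamp"
```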


craig sanders <cas@taz.net.au>
