
postgresql or mysql for request-tracker on etch?



I'm currently rebuilding a request-tracker server at work.
Luckily I have some spare servers, so I can build a new one and migrate
off the old one when I'm ready.
The old one is a neglected, hacked-about sarge box, which I want to
rebuild.  At least it's in a DMZ behind a reverse proxy....

Anyway, I have request-tracker3.6 running on the new box, and it's built
nicely from standard Debian packages with no nasty hacks.  I've
practised importing the database off the old box (running 3.4) and it
all seems to work...
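
For reference, the practice import was nothing clever, roughly the
following (the database name rt3 and the hostname newbox are just
placeholders, so adjust to taste):

    # on the old (sarge, RT 3.4) box: dump the RT database
    mysqldump --opt rt3 > rt3-dump.sql

    # copy the dump over and load it into MySQL on the new (etch) box
    scp rt3-dump.sql newbox:
    ssh newbox 'mysql rt3 < rt3-dump.sql'

    # the 3.4 -> 3.6 schema upgrade steps still have to be run on top
    # of this; the RT upgrade documentation covers those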

What would be really nice would be to move to a clustered database: two
instances of RT, one on each box, both reading and writing the same
database, which is itself spread across the two database servers.  That
doesn't seem to be so easy with RT, though.

Problems (as I understand them):
1. Etch has MySQL 5.0.32.  That seems to support clustering, but the
tables have to use the NDB storage engine, and Request Tracker uses
InnoDB (a quick check for this is sketched below).
2. An NDB-clustered database has to fit entirely in memory; our RT
database is 5 GB and the machines both have 2 GB of RAM.
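
For the check mentioned in point 1, something along these lines shows
which storage engine each RT table uses and roughly how big it is
(rt3 is again just a placeholder for the database name):

    mysql -e "SELECT table_name, engine,
                     ROUND((data_length + index_length)/1048576) AS size_mb
              FROM information_schema.tables
              WHERE table_schema = 'rt3';"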

So, what about PostgreSQL?  Does it support "clustering" in the NDB
sense, so that you get the same read/write view whichever physical
server you use?  And if I switch from MySQL to PostgreSQL, will I be
able to convert the existing database over?
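
From what I've read there's no turnkey MySQL-to-PostgreSQL converter,
so I imagine it would go something like the sketch below: let RT's own
setup script create a fresh, empty schema on the PostgreSQL side, then
dump only the data out of MySQL and clean it up by hand (database names
are placeholders, and the dump will almost certainly need hand-editing
for quoting, booleans and sequences):

    # dump only the data, one INSERT per row, in a vaguely
    # PostgreSQL-friendly form
    mysqldump --compatible=postgresql --no-create-info \
              --complete-insert --skip-extended-insert rt3 > rtdata.sql

    # after hand-fixing the dump, load it into the fresh RT database
    psql rt3 -f rtdata.sql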

Currently I have a cron job that dumps the database from one box to the
other at night, so the most we can ever lose is 24 hours of work.  It
would be nice to have an immediate fallback, though, like telling users
to flip to https://box2/rt whilst I go and sort out what went wrong
with box1.
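
Roughly what that nightly job looks like, in case it matters (the time,
the names and the --single-transaction flag are illustrative):

    # /etc/cron.d/rt-nightly-copy -- dump the live RT database at 02:00
    # and load it straight into the standby box
    0 2 * * * root mysqldump --single-transaction rt3 | ssh box2 'mysql rt3'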

Any ideas would be appreciated.

Note that I'm a database newbie; I just know enough to get RT working,
and that's about it.

Thanks, Philip


