On Sat, May 13, 2000 at 08:16:55PM -0400, Chris Wagner wrote:
> At 10:10 AM 5/12/00 +1000, Craig Sanders wrote:
> >i don't see how. apache just sends the log data out to the pipe, it
> >doesn't wait for the pipe program to commit the record to the database.
> >as far as delaying apache goes, it's probably less of a delay than
> >writing it to a text file.
>
> I see what you're saying. But a slow or messed up pipe can lead to
> lost log data. This is a situation where MySQL being faster would make
> it worth it. I think it would be safer to use that perl thingy to just
> write the data to a table as fast as possible and then let the
> database touch it only after the log file is closed. Hell, it might
> even be better to just set up a customlog that writes in table format.
> Lost data is bad. :)

So what happens when you're reading the requests database and Apache
wants to write more data? With MySQL, the table is locked and you've
just lost data. More often, you want to read data but the writer has
locked the table. I'd noticed this before but hadn't really thought
about the issue.

It's really not that hard to set up Postgres - I'm an idiot and I
figured it out in a few days. The more I use it, the more I like it.

Thanks for the script, Craig. I had to tweak it so fields consisting of
"-" were changed to NULLs; I'm not sure why, but the database wasn't
happy about fields consisting of "-". Since that's just Apache's way of
saying NULL, I didn't spend a lot of time investigating it. Now to
write a front-end to display all this data :)

Does anyone have any great ideas on what to do with old data? I'm
guessing that letting the table grow without bound will be bad ...

Regards,

--
Nathan Norman                          "Eschew Obfuscation"
Network Engineer
GPG Key ID 1024D/51F98BB7
http://home.midco.net/~nnorman/
Key fingerprint = C5F4 A147 416C E0BF AB73 8BEF F0C8 255C 51F9 8BB7
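[For anyone curious about the "-" to NULL tweak mentioned above: a
minimal sketch of the idea in Python (the original script was Perl, so
the helper name and record layout here are my own illustration).
Apache writes "-" for log fields that have no value, such as the remote
user or referer; mapping those to None lets the database driver insert
real SQL NULLs instead of literal dashes.]

```python
def dash_to_null(field):
    """Return None for Apache's '-' placeholder, else the field unchanged."""
    return None if field == "-" else field

# Example: a parsed common-log-format record; fields 2 and 3
# (remote logname and remote user) are "-" placeholders.
record = ["192.0.2.1", "-", "-", "GET /index.html HTTP/1.0", "200", "1234"]
cleaned = [dash_to_null(f) for f in record]
# cleaned[1] and cleaned[2] are now None, which a DB driver
# turns into SQL NULL on insert.
```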
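[On the old-data question: one common retention policy is to keep the
last N days of raw requests and delete (or archive) anything older.
Here is a minimal sketch of that idea; the table name "requests", the
"logtime" column, and the use of an in-memory SQLite database as a
stand-in for the real server are all my own assumptions, not anything
from Craig's script.]

```python
import sqlite3
import datetime

KEEP_DAYS = 90  # assumed retention window

conn = sqlite3.connect(":memory:")  # stand-in for the real database
conn.execute("CREATE TABLE requests (logtime TEXT, url TEXT)")

# Two sample rows: one well past the window, one from today.
old = (datetime.date.today() - datetime.timedelta(days=400)).isoformat()
new = datetime.date.today().isoformat()
conn.execute("INSERT INTO requests VALUES (?, ?)", (old, "/old.html"))
conn.execute("INSERT INTO requests VALUES (?, ?)", (new, "/new.html"))

# Delete everything older than the cutoff; ISO-8601 date strings
# compare correctly as text, so a plain < works here.
cutoff = (datetime.date.today() - datetime.timedelta(days=KEEP_DAYS)).isoformat()
conn.execute("DELETE FROM requests WHERE logtime < ?", (cutoff,))
conn.commit()

remaining = conn.execute("SELECT count(*) FROM requests").fetchone()[0]
```

[Before the DELETE you could also copy the old rows into a summary or
archive table, so aggregate stats survive even after the raw rows go.]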