
Re: Possible akonadi problem?



On Saturday, 31 January 2015, 10:28:41, Kevin Krammer wrote:
> On Friday, 2015-01-23, 09:54:10, Martin Steigerwald wrote:
> > 1) It is a *cache* and *nothing* else. It never stores any information
> > inside the cache that isn't stored elsewhere or that it isn't able to
> > rebuild. That said, in case of issues with the cache, it is possible
> > to remove it and rebuild it from scratch *without* *any* data loss
> > involved and *without* having to recreate filter rules.
> 
> That's not always possible.
> The most obvious example is writing data to backends whose actual storage is
> unreachable, i.e. an IMAP server not reachable due to no network
> connection.

Okay, for me that's more of a journal than a cache. But it can be seen as a 
write cache, yes.

And it creates problems for backup purposes. If such a journal is 
unavoidable, I think it should at least be file based, like some outgoing 
maildir for mails. How did KMail 1 solve this?

Why? I wouldn't store a mail that is not stored elsewhere just in the 
database. I'd make Akonadi as robust as possible against database loss, 
i.e. cache loss. No config, no data, just metadata, ideally only 
recreatable metadata in there. Similar to Baloo. And store everything 
else in the backend storage if possible.

Also treat the backend storage as authoritative. If the backend storage 
has a mail the database does not see, the mail is there. Period. If the 
database sees mails that are not in the backend, those mails are not 
there. If there is a discrepancy between the cache and the backend 
storage, the backend storage is always right. Only exception: the backend 
storage can't be reached for a while or aborts a connection.
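
To make that rule concrete, here is a minimal sketch in Python (my own 
illustration, all names made up, this is not Akonadi code): a 
reconciliation pass in which the backend always wins unless it is 
unreachable.

# Illustration only -- hypothetical helper, not Akonadi code.
def reconcile(cache_ids, backend_ids, backend_reachable):
    """Make the cache agree with the backend; the backend is authoritative."""
    if not backend_reachable:
        # Only exception: backend unreachable -- leave the cache alone
        # and retry later instead of guessing.
        return set(), set()
    to_add = backend_ids - cache_ids    # mails the cache does not know yet
    to_drop = cache_ids - backend_ids   # cache entries with no backing mail
    return to_add, to_drop

# Example: the cache claims a mail the maildir no longer has.
add, drop = reconcile({"msg1", "msg2"}, {"msg1", "msg3"}, backend_reachable=True)
print(add)   # {'msg3'} -- fetch from the backend into the cache
print(drop)  # {'msg2'} -- drop the stale cache entry, that mail is not there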

I think Akonadi should follow the same robustness principles as, for 
example, Postfix: it receives the mail, writes it, fsync()s it and *only* 
then says "I have it, you can discard it" to the sending mail server.
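
Roughly what I mean, sketched in Python (maildir-style delivery, names 
made up by me, not Postfix or Akonadi source):

import os

def deliver_durably(maildir, name, data):
    """Write the message, fsync() it, fsync() the directory -- and only
    after that tell the sender 'I have it, you can discard it'."""
    tmp_path = os.path.join(maildir, "tmp", name)
    new_path = os.path.join(maildir, "new", name)

    with open(tmp_path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())           # message bytes are on disk

    os.rename(tmp_path, new_path)      # atomic publish into new/

    dir_fd = os.open(os.path.join(maildir, "new"), os.O_DIRECTORY)
    try:
        os.fsync(dir_fd)               # the rename itself is on disk
    finally:
        os.close(dir_fd)
    # Only now acknowledge to the sending mail server.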

> > 2) Make it *just plain* obvious *where* Akonadi stores the
> > configuration and the data. For now it's at least ~/.config/akonadi
> > with one or two files per resource (the changes.dat there),
> > ~/.kde/share/config/akonadi*, ~/.kde/share/config/kmail2rc (it
> > contains references to Akonadi resources), and ~/.kde/share/config/ which
> > contains the local filter rules.
> 
> The config for Akonadi is in $XDG_CONFIG_HOME/akonadi.
> The other locations are those of programs using Qt4 based kdelibs. The
> switch to XDG_CONFIG_HOME will most likely happen with the first Qt5
> based version of said programs.

So the number of different directories will go down?

I am hinting at user introspectability here. Sure, I can understand a 
maildir, but even after some years Akonadi still puzzles me. There is a 
bug report where moved mails stay for a long time just in the database or 
file_db_data and do not appear in the destination maildir. For me that's 
a big, huge no-go.

If I move a file with Dolphin, I expect it to be in the destination 
directory instead of in some cache. If I have a local maildir resource, I 
expect it to contain *all my local* mails and that if I back it up, I 
have *all my mails*. Storing some mails elsewhere for a longer time means 
that I have to back up the maildir *and* the database. And if I lose the 
database due to some corruption, I risk losing mail. If Postfix did 
something like this, it would be a disaster.

Actually I do not see at all why the mails should be cached within the 
database or file_db_data *at all* on a *local* maildir based move 
operation. Just move them already!
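
For a local move there really is not much to it. A sketch of what "just 
move them" amounts to (hypothetical helper, assuming both folders live on 
the same filesystem):

import os

def move_local_mail(src_folder, dst_folder, filename):
    """Move one message between local maildir folders with a single
    rename -- no detour through a database or file_db_data."""
    src = os.path.join(src_folder, "cur", filename)
    dst = os.path.join(dst_folder, "cur", filename)
    os.rename(src, dst)   # atomic on the same filesystem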

For the user, such behaviour is at best confusing. At worst it may lead 
to data loss.

> > 5) If you use a database, make perfectly sure that there *never ever*
> > can be two database processes trying to access the same database. I
> > have seen this several times with the Akonadi MySQL backend, where I
> > had two mysqld processes. Treat the database as *part* of Akonadi and
> > make akonadictl *stop* it *or* report a failure when it cannot
> > stop it. And make akonadictl never start it if there is still one
> > running.
> 
> My understanding is that the control process sees the subprocess as
> finished. This will of course be solved by systemd which can terminate
> subprocesses based on cgroups membership.

I don't see how systemd is needed for that. And it would be non-portable 
to BSD then.

If there is still a database process running on the Akonadi database, do 
not start a second one. I have never had anything like this happen with 
any MySQL init script. Also, I never saw this with Zimbra's zmcontrol 
command. If I told it to stop, it was stopped, including the database.
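
A simple guard would already help here. A sketch of how I imagine it 
(made-up paths and names, not how akonadictl actually works), using an 
exclusive lock file:

import fcntl
import os
import subprocess
import sys

LOCK_PATH = "/tmp/akonadi-mysqld.lock"   # hypothetical location

def start_db_once(mysqld_cmd):
    """Refuse to start a second database process on the same data."""
    lock_fd = os.open(LOCK_PATH, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        fcntl.flock(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit("database already running -- not starting a second one")
    # The child inherits the locked fd, so the lock is held for as long
    # as the database process itself lives.
    return subprocess.Popen(mysqld_cmd, pass_fds=(lock_fd,))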

But well… I hope Akonadi Next will be much leaner and not use MySQL at 
all.

So please: if I see two mysqld processes running on the same Akonadi 
database, it is a bug. It's as simple as that. But well, I reported it.

Akonadi deals with *important* user data. Always *be safe* and 
*conservative* about it. Avoid anything that puts the data at risk.

Akonadi got much better. With KDE SC 4.14 it's a lot better than it was 
initially, and honestly, in my opinion it initially was a huge mess. 
Just like the Nepomuk stuff was. But it still has issues.

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7
