Re: ext2 file system: moving between armel and amd64 etc.
- To: Theodore Tso <tytso@mit.edu>
- Cc: Martin Michlmayr <tbm@cyrius.com>, Riku Voipio <riku.voipio@iki.fi>, debian-arm@lists.debian.org
- Subject: Re: ext2 file system: moving between armel and amd64 etc.
- From: Osamu Aoki <osamu@debian.org>
- Date: Fri, 6 Nov 2009 01:24:42 +0900
- Message-id: <20091105162442.GA30198@osamu.debian.net>
- In-reply-to: <20091029192927.GG18464@mit.edu>
- References: <20091026122536.GA12435@osamu.debian.net> <20091026140758.GA19767@kos.to> <20091026153656.GA15280@deprecation.cyrius.com> <20091027160125.GA11163@osamu.debian.net> <20091027180128.GB7915@mit.edu> <20091029140453.GA8350@osamu.debian.net> <20091029192927.GG18464@mit.edu>
Hi,
Excuse me for my slow response.
On Thu, Oct 29, 2009 at 03:29:27PM -0400, Theodore Tso wrote:
> On Thu, Oct 29, 2009 at 11:04:54PM +0900, Osamu Aoki wrote:
> > My questions can be summarized as the following 3 questions:
> >
> > Q1. Is there some option to fsck.ext2
> > to force unsigned under amd64 or
> > to force signed under armel?
>
> No.
>
> > Q2. Is there an option to tune2fs to change the signed or unsigned flag?
>
> No.
Thanks for confirming that the manpage is complete.
> > Q3. Is it normal to get exit code 1 from fsck under armel if the partition
> > has been created unsigned with mkfs.ext2 under amd64?
>
> Only the first time fsck is run, if the filesystem was created using
> an ancient mke2fs. After that, no, assuming the filesystem was
> cleanly unmounted.
Thanks. Debian lenny's mke2fs seems current enough, from a glance at its
source. Since we also know its kernel is properly patched, Debian lenny
is in good shape.
> > (It seemed to me that fsck under such case tends to find the
> > filesystem not to be cleanly unmounted.)
>
> I think you're jumping to conclusions here, and since I didn't see
> your initial message, I don't know why you think this might be the
> case. How about details about what you are seeing, in as much detail
> as you can, with copies of the fsck output as the bare minimum?
Basically, my armel system is not functioning now, so give me some time.
The armel system on which I had the issues was an odd vendor-provided
system, probably created by the hardware vendor in cooperation with
Canonical. This device, the Sharp PC-Z1, comes with a system
re-initialization disk on a microSD card. That disk image contains some
derivative of Ubuntu/Debian binaries. The problem appeared while I was
running this re-initialization disk: the fsck code in its init script
exits upon seeing an ext2 disk that was not initialized on ARM.
> The issue with the signed vs unsigned char problem with
> htree/dir_index feature --- which does not even exist in ext2, by the
> way --- was that the algorithm used for hashing directory names for
> htrees depended on whether or not chars are signed or unsigned.
> Given a particular filesystem, it's impossible to know whether it was
> originally created and written on a system with a signed char or
> unsigned char. For systems with purely ASCII filenames, where the high
> bit of each character is zero, it didn't make a difference. However,
> for people who tried to store UTF-8 or ISO-8859-1 chars in their
> filenames, those files would not be accessible if the filesystem was
> moved between a system with signed chars vs unsigned chars, and a
> forced fsck would cause e2fsck to complain and fix up the directory
> for the other architecture.
>
> The way we fixed this was with the following hack. The superblock has
> two flags, EXT2_FLAGS_SIGNED_HASH and EXT2_FLAGS_UNSIGNED_HASH. If
> either flag is set, modern kernels will use the hash algorithm
> appropriately tweaked as if chars are signed or unsigned, regardless
> of the natural "signed vs unsigned" nature of chars on that platform.
> If neither flag is set, the kernel will use the default signed vs
> unsigned char version of the algorithm.
>
> If neither flag is set, modern e2fsck binaries will set one of these
> two flags, based on the architecture that it is run on. Modern mke2fs
> binaries will set whichever flag is appropriate with its arguments.
> In both cases, the test that is done in C looks like this:
>
> 	c = (char) 255;
> 	if (((int) c) == -1) {
> 		super->s_flags |= EXT2_FLAGS_SIGNED_HASH;
> 	} else {
> 		super->s_flags |= EXT2_FLAGS_UNSIGNED_HASH;
> 	}
>
> It really doesn't matter whether the signed or unsigned version of the
> hash algorithm is used, as long as it's consistent. So if you happen
> to create the filesystem on an unsigned char system, and then use the
> filesystem exclusively on a signed char system, things will work fine,
> so long as everybody is using modern versions of kernel and e2fsprogs.
> You might have problems if you occasionally boot into an ancient kernel
> that doesn't understand these flags, *AND* you use non-ASCII
> characters in filenames.
(The kernel was not causing the problem; it was the fsck in the init code.)
> But that's the only lossage mode you should
> see.
Thanks for this explanation.
> If you create the file system using an ancient version of mke2fs, and
> then run a modern version of e2fsck, it will print a message to this
> effect: "Adding dirhash hint to filesystem", and then since it has
> modified the filesystem, it will return with an exit code of 1. But
> it will only do this the first time, since after that point, the
> dirhash hint will have been set.
>
> Does this help explain what's going on?
Very much so. Thank you.
Osamu