
Re: ext2 file system: moving between armel and amd64 etc.



On Thu, Oct 29, 2009 at 11:04:54PM +0900, Osamu Aoki wrote:
> My questions can be summarized as following 3 questions:
> 
> Q1.  Is there some option to fsck.ext2 
>          to force unsigned under amd64 or
>          to force   signed under armel.

No.

> Q2.  Is there option to tune2fs to change signed or unsigned flag.

No.

> Q3.  Is it normal to have exitcode=1 for fsck under armel if partition
>      has been created with mkfs.ext2 under amd64 to be unsigned? 

Only the first time fsck is run, if the filesystem was created using
an ancient mke2fs.  After that, no, assuming the filesystem was
cleanly unmounted.

>      (It seemed to me that fsck under such case tends to find the 
>      filesystem not to be cleanly unmounted.)

I think you're jumping to conclusions here, and since I didn't see
your initial message, I don't know why you think this might be the
case.  How about describing what you are seeing, in as much detail
as you can, with copies of the fsck output as the bare minimum?

The signed vs unsigned char problem with the htree/dir_index feature
--- which does not even exist in ext2, by the way --- was that the
algorithm used for hashing directory names for htrees depended on
whether chars are signed or unsigned on a given platform.
Given a particular filesystem, it's impossible to know whether it was
originally created and written on a system with a signed char or
unsigned char.  For systems with purely ASCII filenames, where the
high bit of each character is zero, it didn't make a difference.
However, for people who tried to store UTF-8 or ISO-8859-1 chars in
their filenames, those files would not be accessible if the filesystem
was moved between a system with signed chars and one with unsigned
chars, and a forced fsck would cause e2fsck to complain and fix up the
directory for the other architecture.

The way we fixed this was with the following hack.  The superblock has
two flags, EXT2_FLAGS_SIGNED_HASH and EXT2_FLAGS_UNSIGNED_HASH.  If
either flag is set, modern kernels will use the hash algorithm tweaked
as if chars are signed or unsigned, as indicated by the flag,
regardless of the natural "signed vs unsigned" nature of chars on that
platform.  If neither flag is set, the kernel will use whichever
version of the algorithm matches the native signedness of chars on
that platform.
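
To make that decision concrete, here is a minimal sketch (not the
actual kernel code) of how a driver could pick the hash variant from
the superblock flags; the flag values are the ones I believe
ext2_fs.h defines, and use_unsigned_hash() is a made-up helper name:

	#include <limits.h>

	/* Flag values assumed from e2fsprogs' ext2_fs.h. */
	#define EXT2_FLAGS_SIGNED_HASH    0x0001
	#define EXT2_FLAGS_UNSIGNED_HASH  0x0002

	/* Illustrative helper only: returns 1 if the unsigned variant
	 * of the directory hash should be used. */
	static int use_unsigned_hash(unsigned int s_flags)
	{
		if (s_flags & EXT2_FLAGS_UNSIGNED_HASH)
			return 1;		/* hint says unsigned */
		if (s_flags & EXT2_FLAGS_SIGNED_HASH)
			return 0;		/* hint says signed */
		return CHAR_MIN == 0;		/* no hint: native chars */
	}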

If neither flag is set, modern e2fsck binaries will set one of these
two flags, based on the architecture they are run on.  Modern mke2fs
binaries will set whichever flag is appropriate for the system they
are run on.  In both cases, the test that is done in C looks like
this:

	c = (char) 255;
	if (((int) c) == -1) {
		/* the cast sign-extended: plain char is signed here */
		super->s_flags |= EXT2_FLAGS_SIGNED_HASH;
	} else {
		/* plain char is unsigned on this platform */
		super->s_flags |= EXT2_FLAGS_UNSIGNED_HASH;
	}
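
If you want to see which way a particular machine goes, here is the
same test wrapped into a small standalone program (an illustration
only, not the e2fsprogs source); compile and run it on both the amd64
and armel boxes:

	#include <stdio.h>

	int main(void)
	{
		/* (char) 255 is -1 on signed-char ABIs such as amd64,
		 * and 255 on unsigned-char ABIs such as arm. */
		char c = (char) 255;

		if (((int) c) == -1)
			printf("plain char is signed; tools built here "
			       "would set EXT2_FLAGS_SIGNED_HASH\n");
		else
			printf("plain char is unsigned; tools built here "
			       "would set EXT2_FLAGS_UNSIGNED_HASH\n");
		return 0;
	}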

It really doesn't matter whether the signed or unsigned version of the
hash algorithm is used, as long as it's consistent.  So if you happen
to create the filesystem on an unsigned char system, and then use the
filesystem exclusively on a signed char system, things will work fine,
so long as everybody is using modern versions of the kernel and
e2fsprogs.  You might have problems if you occasionally boot into an
ancient kernel that doesn't understand these flags, *AND* you use
non-ASCII characters in filenames.  But that's the only lossage mode
you should see.

If you create the file system using an ancient version of mke2fs, and
then run a modern version of e2fsck, it will print a message to that
effect: "Adding dirhash hint to filesystem", and then since it has
modified the filesystem, it will return with an exit code of 1.  But
it will only do this the first time, since after that point, the
dirhash hint will have been set.
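
If you want to check whether a particular filesystem already carries
the hint, dumpe2fs -h should show it in its filesystem flags line;
alternatively, here is a rough sketch that reads the superblock flags
through libext2fs (the file name and error handling are mine; you
need the e2fsprogs development headers, and link with -lext2fs
-lcom_err):

	/* check_dirhash.c */
	#include <stdio.h>
	#include <ext2fs/ext2fs.h>

	int main(int argc, char **argv)
	{
		ext2_filsys fs;
		errcode_t err;

		if (argc != 2) {
			fprintf(stderr, "usage: %s <device>\n", argv[0]);
			return 1;
		}
		/* Read-only open of the filesystem's superblock. */
		err = ext2fs_open(argv[1], 0, 0, 0, unix_io_manager, &fs);
		if (err) {
			fprintf(stderr, "cannot open %s (error %ld)\n",
				argv[1], (long) err);
			return 1;
		}
		if (fs->super->s_flags & EXT2_FLAGS_SIGNED_HASH)
			printf("signed dirhash hint is set\n");
		else if (fs->super->s_flags & EXT2_FLAGS_UNSIGNED_HASH)
			printf("unsigned dirhash hint is set\n");
		else
			printf("no dirhash hint (old mke2fs, not yet "
			       "touched by a modern e2fsck?)\n");
		ext2fs_close(fs);
		return 0;
	}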

Does this help explain what's going on?

						- Ted

