
Re: USB HDD trouble outbreak in debian 7.7

The GLANTANK is a gigabit version of the LANTANK. I-O DATA's subsidiary
Chousensha (Challenger) produced these in response to the Kurobako
(with which some here may be familiar).



These are customizable NAS units, apparently running a customized
debian (MAKAI -- MAKe Again ISO Image) internally.


The LANTANK had an SH4 CPU, but the GLANTANK has an ARM (XScale).

On Mon, Jan 5, 2015 at 9:22 AM, Jun Itou <itou_jun@infoseek.jp> wrote:
> I have been running debian 7 with the following configuration.
>   Body) I-O DATA GLANTANK 2.0TB (500GB * 4 RAID0, iop32x)
>   USB1) I-O DATA HDZ-UES  2.0TB (500GB * 4 JBOD)
>   USB2) I-O DATA HDZ-UES  1.0TB (250GB * 4 JBOD)
>   USB3) I-O DATA HDZ-UES  1.0TB (250GB * 4 JBOD)
>   USB4) I-O DATA HDW-UE   1.0TB (500GB * 2 JBOD)
> After upgrading from debian 7.6 to 7.7, malfunctions occurred.
>   1) A sector error occurs on mount and the filesystem becomes read-only.
>   2) fdisk fails while syncing the disk.
> I ran the following tests to isolate the problem.
>   1) Performed the same operations as on the GLANTANK in an x64 environment, to see whether the hardware is at fault.
>     -> The same problem occurs on both, so it is not specific to the GLANTANK hardware.
>   2) Checked the HDDs of USB1 - 4 with the HDD makers' test tools.
>     -> All of the HDDs passed, so USB1 - 4 are not the problem.
>   3) Performed the same operations on Fedora, to see whether the problem is specific to debian.
>     -> It reproduced on Fedora as well, so I conclude it is a kernel problem.
>   4) Changed the kernel version on debian and performed the same operations.
>     -> 3.2.62   : does not reproduce
>        3.2.63 ~ : reproduces
>        3.18.0 ~ : reproduces
>   5) Reported it to kernel.org and performed the same operations after applying the workaround given in the reply.
>     -> Please refer to https://bugzilla.kernel.org/show_bug.cgi?id=89511.

The summary here is that the GLANTANK seems not to respond to the
SYNCHRONIZE CACHE command.


Alan Stern says that the kernel should be able to handle this in 3.18
or later, but Jun's results indicate that 3.18 does not handle it yet.
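Until a kernel handles it automatically, one manual workaround on Linux is to declare the disk write-through via the sd driver's sysfs `cache_type` attribute, which stops the kernel from issuing SYNCHRONIZE CACHE on flushes. This is only a sketch, and the helper name `disable_flush` is mine, not anything standard; note also that marking a write-back disk as write-through risks losing cached data on power failure.

```shell
#!/bin/sh
# Sketch: stop the kernel from sending SYNCHRONIZE CACHE to a disk by
# declaring it write-through via sysfs. disable_flush is a hypothetical
# helper; pass "apply" only on real hardware, as root. Without "apply"
# it is a dry run that just reports what it would change.
disable_flush() {
    dev=$1                      # block device name, e.g. sdb
    mode=$2                     # "apply" to really write, else dry run
    for f in /sys/block/"$dev"/device/scsi_disk/*/cache_type; do
        if [ "$mode" = apply ]; then
            echo "write through" > "$f"
        else
            echo "would write 'write through' to $f"
        fi
    done
}

# Dry run: report the sysfs attribute that would be changed.
msg=$(disable_flush sdb)
echo "$msg"
```

The setting does not survive a reboot, so on a NAS like the GLANTANK it would have to be reapplied from an init script.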

> In conclusion, my USB HDDs cannot support this change in the journal function.
> If I continue with the present configuration, it seems I must either keep the kernel at the latest version or switch to different USB HDDs.
> I learned from a document that debian 8 is not being developed for the GLANTANK.
> Since there is no way around it, I plan to run the drives formatted as ext2, without using the journal function.
> ※Note: once this malfunction triggers even one error, one of the file systems is destroyed.
>   If you try to reproduce it, please do not test with an HDD containing important data.
> ※Please pardon the odd grammar; this is machine translation.
> That's all.
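For the journal-free ext2 plan, here is a hedged sketch that builds a small file-backed ext2 image, so no real disk is touched, and confirms the `has_journal` feature is absent. The image size and paths are arbitrary; on a real drive the commands in the trailing comments would target the (unmounted) partition instead.

```shell
#!/bin/sh
# Sketch: make a journal-less ext2 filesystem on a throwaway image file
# and verify that no has_journal feature is present. e2fsprogs binaries
# often live in /sbin, so extend PATH for non-root shells.
PATH=$PATH:/sbin:/usr/sbin
set -e
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 2>/dev/null
mkfs.ext2 -q -F "$img"
# ext2 has no journal, so has_journal should not appear in the features.
features=$(dumpe2fs -h "$img" 2>/dev/null | grep '^Filesystem features')
echo "$features"
rm -f "$img"
# On a real (unmounted) partition the equivalent would be:
#   mkfs.ext2 /dev/sdXN
# or, to drop the journal from an existing ext3/ext4 filesystem:
#   fsck -f /dev/sdXN && tune2fs -O ^has_journal /dev/sdXN
```

Without a journal an unclean shutdown means a full fsck, but for a drive that corrupts itself when the kernel flushes the journal, that trade-off is arguably the safer one.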

I'm thinking that the best place to fix this would be in the MAKAI
community, but, checking back through the Japanese list from last
month, it looks like Jun has already tried contacting them. And that
community seems to be an abandoned sourceforge.jp project, completely
untouched since 2010. Further, the MAKAI project doesn't seem to have
anywhere to report bugs.

It appears that last month I found something that made it look to me
like the former maintainer of MAKAI, kin-neko, has more-or-less washed
his hands of a ten-year-old project that never developed a
self-sustaining community. I don't remember where I found that,
however.

Anybody here in user, arm, or embedded (or d-japanese) have a suggestion?

Joel Rees

Be careful when you look at conspiracy.
Look first in your own heart,
and ask yourself if you are not your own worst enemy.
Arm yourself with knowledge of yourself, as well.
