Re: powerbook g4 sleep
On Sun, 2005-01-30 at 01:48 +0200, Simo Melenius wrote:
> Vincent Bernat wrote:
> >>Since the PBG4 has a nVidia graphic chip it will not sleep (AFAIK)... at
> >>least until BenH gets one and goes one step forward in becoming a
> > BenH will not be able to find out how to make it sleep without some
> > specs. It is the same story with the ATI-based ones: sleep started
> > working when the specs were submitted by ATI.
> Well, you can reverse-engineer or you can read the specs. I'm not a
> hardware hacker, but wouldn't the former be an option in this case?
Well, the sleep support for the latest models was obtained by examining
IO accesses done by the MacOS driver (Paulus did most of this work). We
do have some specs, including register specs, but that's not enough. The
actual bootstrap procedure of the chip is complex and goes beyond what
the specs document; however, having some specs and some knowledge of the
chip (I've been tweaking ATI chips for some time now) did help very
significantly. I don't have that knowledge of nVidia chips and don't
even have a register spec, so while a similar thing is possible, it's
not something I would do (and I don't have an nVidia-based machine
anyway).
Also, it's amusing how people always talk about "reverse engineering" as
if it were the simplest thing to do... and could easily replace proper
HW specs from vendors. It can be extremely complex and time consuming,
especially with chips as convoluted as modern graphic chips, and I
simply cannot spare the time for it, nor can pretty much anybody I know
who would eventually be able to do it.
> When I tried Debian GNU/Linux on my PB and stumbled upon the sleep
> problem, I googled around and, IIRC, the problem was related to waking
> up the chip after sleep. Furthermore, someone had noticed that the
> GeForce's .kext even had symbols in it and, out of curiosity, I
> disassembled the kext. I could find functions and assembly code that
> _seemed_ to be related to waking up or resetting the chip, though I'm
> not sure at how high or low a level. I imagine a seasoned guru could
> understand the relevant code (or lack of it...) better, but since this
> was months ago I conclude it can't be that simple. Therefore, I'm
> wondering what it actually is about the nVidia chip that causes the
> trouble?
Reading disassembled code, even when you are a seasoned guru, isn't like
reading a clear explanation of what has to be done, especially when you
don't know what the various bits & pieces the code is peeking at/poking
to are related to. The symbol names are definitely useful, though. I
suspect somebody with HW access and enough time on his/her hands could
obtain some results. Note that we _do_ already have some bits for nVidia
chips, but not enough yet to revive the chip's output, AFAIK.
Benjamin Herrenschmidt <email@example.com>