Bug#604096: Bug#601341: Bug#602418: #601341, #602418 and #604096 seem to be duplicates
> > Did you get these patches in too:
> >
> > 25021c9 x86: define arch_vm_get_page_prot to set _PAGE_IOMAP on VM_IO vmas
> > 2eb6682 drm: recompute vma->vm_page_prot after changing vm_flags
> > dbbc947 ttm: Set VM_IO only on pages with TTM_MEMTYPE_FLAG_FIXED set.
>
> I seem to have 25021c9 and 2eb6682 but not dbbc947.
Good. The first two are essential.
>
> I was experimenting with (from xen.git)
> 95518271 ttm: Set VM_IO only on pages with TTM_MEMTYPE_FLAG_NEEDS_IOREMAP set.
> c54d5aa1 ttm: Change VMA flags if they != to the TTM flags.
> e1687eae fb: propagate VM_IO to VMA.
> Is this a dead-end?
So e1687eae is upstream; the other two become obsolete once devel/p2m-identity is fleshed out.
Hmm, did you also include "ttm: When TTM_PAGE_FLAG_DMA32 allocate pages under and" in your tree? Ah yes - you pulled devel/ttm.pci-api-v2, which has an updated variant of that.
>
> dbbc947 and 95518271 seem to have a lot in common.
Yup. The architecture of the ttm code changed between 2.6.34 and 2.6.37.
.. snip..
> Thanks, I'll try adding dbbc947. Should I ignore 95518271, c54d5aa1 and
> e1687eae for the time being?
No, please do try those. Starting with "ttm: Set VM_IO only on pages with TTM_MEMTYPE_FLAG_NEEDS_IOREMAP set" - without it, you would get these weird errors:
(XEN) mm.c:1747:d0 Bad L1 flags c00000
(XEN) mm.c:779:d0 Bad L1 flags c00000
(XEN) mm.c:4659:d0 ptwr_emulate: could not get_page_from_l1e()
[ 123.222339] BUG: unable to handle kernel paging request at ffff8800747382f8
[ 123.222339] IP: [<ffffffff8100e73a>] xen_set_pte+0x31/0x36
[ 123.222339] PGD 1002067 PUD 2e4067 PMD 488067 PTE 10000074738065
..
[ 123.385710] [<ffffffff8100e7e6>] xen_set_pte_at+0xa7/0xb2
[ 123.385710] [<ffffffff8100c59d>] ? __raw_callee_save_xen_make_pte+0x11/0x1e
[ 123.385710] [<ffffffff810cd303>] vm_insert_mixed+0x86/0xb0
[ 123.385710] [<ffffffffa003d68a>] ttm_bo_vm_fault+0x201/0x26c [ttm]
>
> > Or you can go straight ahead and look at devel/p2m-identity (however, there is a bug
> > in them - ballooning in huge amounts of memory does not work right).
>
> I'd be very wary of taking an infrastructure change of that magnitude
> into Squeeze in its current frozen state.
Good point. Don't take them.