
Re: Kernel build fails



I've sent a patch with my remaining concerns attached to the mailing lists, and finally set up a linux-next branch:
https://github.com/threader/linux/tree/master-build-ppc 

Regarding asm/io.h:
---
Will there come a mail saying this broke the POWER6-ish CPU
someone made in their garage? And lwsync is a valid PPC32
instruction; should I just follow the example above, where
BARRIER_LWSYNC is PPC64 only?
---
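
For reference, the alternative I'm asking about would look roughly like this in analyse_instr(): keep the lwsync decoding unconditional and gate only ptesync. This is only a sketch of the question, not what the attached patch does (the patch keeps the whole inner switch under CONFIG_PPC64):

	case 598:	/* sync */
		op->type = BARRIER + BARRIER_SYNC;
		switch ((word >> 21) & 3) {
		case 1:		/* lwsync is valid on PPC32, keep it unconditional */
			op->type = BARRIER + BARRIER_LWSYNC;
			break;
#ifdef CONFIG_PPC64
		case 2:		/* ptesync is 64-bit only */
			op->type = BARRIER + BARRIER_PTESYNC;
			break;
#endif
		}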

-Michael 
On Sat, Jan 22, 2022, 10:48 Mike <michael.heltne@gmail.com> wrote:
It should be just CONFIG_POWER6_CPU, unless ppc64 also requires this?

-Mike

On Sat, Jan 22, 2022, 10:36 Mike <michael.heltne@gmail.com> wrote:
I think I need to add a || CONFIG_POWER6_CPU to that 'stbcix' condition..
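
Roughly like this, around one of the real-mode helpers in asm/io.h that use 'stbcix'. This is only a sketch of the idea; the patch attached further down ends up guarding the whole real-mode block with CONFIG_POWER6_CPU on its own instead:

#if defined(__powerpc64__) || defined(CONFIG_POWER6_CPU)
/* real-mode accessor built on the POWER6 'stbcix' instruction */
static inline void __raw_rm_writeb(u8 val, volatile void __iomem *paddr)
{
	__asm__ __volatile__("stbcix %0,0,%1"
		: : "r" (val), "r" (paddr) : "memory");
}
#endif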

-Mike



On Sat, Jan 22, 2022, 09:42 John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> wrote:
Hello Mike!

On 1/21/22 23:07, Mike wrote:
> Waiting for git to do its thing, but do we need a voodoo priest(ess)
> here? The attached patch is building.

I will have a look at this issue next week myself. We need to make sure
that it not only fixes 32-bit PowerPC but also 64-bit PowerPC big-endian.

Also, test builds of the Debian packages are necessary as well.

Adrian

--
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer - glaubitz@debian.org
`. `'   Freie Universitaet Berlin - glaubitz@physik.fu-berlin.de
  `-    GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913

From 226efa05733457bb5c483f30aab6d5c6a304422c Mon Sep 17 00:00:00 2001
From: threader <michael.heltne@gmail.com>
Date: Sun, 23 Jan 2022 14:17:10 +0100
Subject: [PATCH] arch: powerpc: fix building after binutils changes

'dssall' in mmu_context.c is an Altivec instruction, so build that file
with Altivec enabled. 'ptesync' is a PPC64 instruction, so don't emit it
when building for 32-bit. And apparently #ifdef __powerpc64__ isn't enough
in all configurations: 'stbcix' and friends are POWER6 instructions,
hopefully not needed by CONFIG_PPC64 in general, so guard them with
CONFIG_POWER6_CPU instead.

Signed-off-by: Michael B Heltne <michael.heltne@gmail.com>
---
 arch/powerpc/include/asm/io.h | 7 ++++---
 arch/powerpc/lib/sstep.c      | 4 +++-
 arch/powerpc/mm/Makefile      | 3 +++
 arch/powerpc/mm/pageattr.c    | 4 ++--
 4 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
index beba4979bff939..d3a9c91cd06a8b 100644
--- a/arch/powerpc/include/asm/io.h
+++ b/arch/powerpc/include/asm/io.h
@@ -334,7 +334,7 @@ static inline void __raw_writel(unsigned int v, volatile void __iomem *addr)
 }
 #define __raw_writel __raw_writel
 
-#ifdef __powerpc64__
+#ifdef CONFIG_PPC64
 static inline unsigned long __raw_readq(const volatile void __iomem *addr)
 {
 	return *(volatile unsigned long __force *)PCI_FIX_ADDR(addr);
@@ -352,7 +352,8 @@ static inline void __raw_writeq_be(unsigned long v, volatile void __iomem *addr)
 	__raw_writeq((__force unsigned long)cpu_to_be64(v), addr);
 }
 #define __raw_writeq_be __raw_writeq_be
-
+#endif
+#ifdef CONFIG_POWER6_CPU
 /*
  * Real mode versions of the above. Those instructions are only supposed
  * to be used in hypervisor real mode as per the architecture spec.
@@ -417,7 +418,7 @@ static inline u64 __raw_rm_readq(volatile void __iomem *paddr)
 			     : "=r" (ret) : "r" (paddr) : "memory");
 	return ret;
 }
-#endif /* __powerpc64__ */
+#endif /* CONFIG_POWER6_CPU */
 
 /*
  *
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index a94b0cd0bdc5ca..4ffd6791b03ec0 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -1465,7 +1465,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
 		switch ((word >> 1) & 0x3ff) {
 		case 598:	/* sync */
 			op->type = BARRIER + BARRIER_SYNC;
-#ifdef __powerpc64__
+#ifdef CONFIG_PPC64
 			switch ((word >> 21) & 3) {
 			case 1:		/* lwsync */
 				op->type = BARRIER + BARRIER_LWSYNC;
@@ -3267,9 +3267,11 @@ void emulate_update_regs(struct pt_regs *regs, struct instruction_op *op)
 		case BARRIER_LWSYNC:
 			asm volatile("lwsync" : : : "memory");
 			break;
+#ifdef CONFIG_PPC64
 		case BARRIER_PTESYNC:
 			asm volatile("ptesync" : : : "memory");
 			break;
+#endif
 		}
 		break;
 
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index df8172da2301b7..2f87e77315997a 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -4,6 +4,9 @@
 #
 
 ccflags-$(CONFIG_PPC64)	:= $(NO_MINIMAL_TOC)
+ifeq ($(CONFIG_ALTIVEC),y)
+CFLAGS_mmu_context.o += $(call cc-option, -maltivec, -mabi=altivec)
+endif
 
 obj-y				:= fault.o mem.o pgtable.o mmap.o maccess.o pageattr.o \
 				   init_$(BITS).o pgtable_$(BITS).o \
diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
index edea388e9d3fbb..ccd04a386e28fc 100644
--- a/arch/powerpc/mm/pageattr.c
+++ b/arch/powerpc/mm/pageattr.c
@@ -54,11 +54,11 @@ static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
 	}
 
 	pte_update(&init_mm, addr, ptep, ~0UL, pte_val(pte), 0);
-
+#ifdef CONFIG_PPC64
 	/* See ptesync comment in radix__set_pte_at() */
 	if (radix_enabled())
 		asm volatile("ptesync": : :"memory");
-
+#endif
 	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
 
 	spin_unlock(&init_mm.page_table_lock);
