
Bug#600408: ocaml: Building OCaml with LOCAL_CALLBACK_BYTECODE enabled



On Sunday 17 Oct 2010 at 00:53:29 (+0200), Guillaume Yziquel wrote:
> 
> > > I'm currently having issues with C++ callbacks to OCaml, [...]
> > 
> > Could you be more precise?

As a follow-up, here is an explanation of my segfault and why it happens
(an indirection mismatch, and probably a weak API for callbacks in
bytecode; at least one less robust than callbacks in native code).

http://caml.inria.fr/mantis/view.php?id=5166

Basically, a bytecode callback constructs a bytecode sequence (stored in
the global variable that would be eliminated by enabling
LOCAL_CALLBACK_BYTECODE), which then gets executed. The APPLY in this
sequence invokes the bytecode of the callback. And during the execution
of this APPLY, you hit an indirection problem when you pass a closure
(I mean a pointer to bytecode plus an environment; it works if you pass
only a pointer to the bytecode).

So this segfault issue is upstream.

> > Why should it be?
> 
> To me, the question is "why shouldn't it be?".
> 
> > Where did you get that from? Is this LOCAL_CALLBACK_BYTECODE documented
> > somewhere? The only usage I see is in byterun/callback.c, and I don't
> > see why it should matter here (we are just using the standard bytecode
> > interpreter).
> 
> Haven't found documentation on LOCAL_CALLBACK_BYTECODE anywhere. I'm
> stumbling on it doing painful gdb debugging.

Just to be clear: it seems to me that it is much better to have
callbacks build up the [ACC 6 APPLY etc...] sequence for the bytecode
interpreter on the stack than to leave a static global variable in the
wild to worry about. I simply wish for enabling LOCAL_CALLBACK_BYTECODE
to be considered in OCaml's Debian distribution.

-- 
     Guillaume Yziquel
http://yziquel.homelinux.org


