
xm save



Hello,

I have a small problem on my Dédibox v3, running Xen 3.4 with kernel
2.6.32-5-xen-amd64: if I run `xm save -c`, it crashes my VM, and I am then
forced to do an `xm destroy` before I can restart it.
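The sequence described above can be sketched as follows. This is only an
illustration of the reported steps, not a fix: the domain name `xen1` is
taken from the console banner below, and the save-file and config paths are
assumptions (adjust to your setup).

```shell
DOM=xen1                                   # domain name (from the console banner)
CHK=/var/lib/xen/save/$DOM.chk             # assumed checkpoint path

# Checkpoint the domain while leaving it running (-c).
# This is the step that reportedly hangs the guest.
xm save -c "$DOM" "$CHK"

# Afterwards the guest is wedged (ssh dead, kjournald blocked in D state),
# so the only recovery is to destroy it and start it again:
xm destroy "$DOM"
xm create /etc/xen/"$DOM".cfg              # assumed config path
```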

Here is what I get when I then run `xm console` on it (since ssh does not
work while it is hung):

Debian GNU/Linux 5.0 xen1 hvc0

xen1 login: [ 5873.910289] WARNING: g.e. still in use!
[ 5873.910289] WARNING: leaking g.e. and page still in use!
[ 5873.935068] WARNING: g.e. still in use!
[ 5873.935088] WARNING: leaking g.e. and page still in use!
[ 5873.958782] WARNING: g.e. still in use!
[ 5873.958804] WARNING: leaking g.e. and page still in use!
[ 5873.958818] WARNING: g.e. still in use!
[ 5873.958829] WARNING: leaking g.e. and page still in use!
[ 5873.968155] Setting capacity to 8388608
[ 5873.970237] Setting capacity to 262144
[ 6000.144108] kjournald     D 0000000000000002     0   363      2 0x00000000
[ 6000.144134]  ffff880007c6e350 0000000000000246 ffff880002bc8000
0000000000000200
[ 6000.144168]  ffff880002bc8000 0000000000000001 000000000000f8a0
ffff8800024cdfd8
[ 6000.144200]  0000000000015640 0000000000015640 ffff880002b96a60
ffff880002b96d58
[ 6000.144232] Call Trace:
[ 6000.144256]  [<ffffffff8102eda8>] ? pvclock_clocksource_read+0x3a/0x70
[ 6000.144279]  [<ffffffff8110f4fc>] ? sync_buffer+0x0/0x40
[ 6000.144298]  [<ffffffff8110f4fc>] ? sync_buffer+0x0/0x40
[ 6000.144319]  [<ffffffff8130b041>] ? io_schedule+0x73/0xb7
[ 6000.144337]  [<ffffffff8110f537>] ? sync_buffer+0x3b/0x40
[ 6000.144356]  [<ffffffff8130c2f2>] ? _spin_unlock_irqrestore+0xd/0xe
[ 6000.144376]  [<ffffffff8130b54e>] ? __wait_on_bit+0x41/0x70
[ 6000.144395]  [<ffffffff8110f4fc>] ? sync_buffer+0x0/0x40
[ 6000.144413]  [<ffffffff8130b5e8>] ? out_of_line_wait_on_bit+0x6b/0x77
[ 6000.144434]  [<ffffffff81066b38>] ? wake_bit_function+0x0/0x23
[ 6000.144463]  [<ffffffffa00971d1>] ?
journal_commit_transaction+0x508/0xe2b [jbd]
[ 6000.144489]  [<ffffffff8100e6fd>] ? xen_force_evtchn_callback+0x9/0xa
[ 6000.144509]  [<ffffffff8100ee22>] ? check_events+0x12/0x20
[ 6000.144527]  [<ffffffff8100ee0f>] ? xen_restore_fl_direct_end+0x0/0x1
[ 6000.144548]  [<ffffffff8100ee0f>] ? xen_restore_fl_direct_end+0x0/0x1
[ 6000.144567]  [<ffffffff8100e6fd>] ? xen_force_evtchn_callback+0x9/0xa
[ 6000.144587]  [<ffffffff8100ee0f>] ? xen_restore_fl_direct_end+0x0/0x1
[ 6000.144608]  [<ffffffff8130c2f2>] ? _spin_unlock_irqrestore+0xd/0xe
[ 6000.144630]  [<ffffffffa009a423>] ? kjournald+0xdf/0x226 [jbd]
[ 6000.144650]  [<ffffffff81066b0a>] ? autoremove_wake_function+0x0/0x2e
[ 6000.144672]  [<ffffffffa009a344>] ? kjournald+0x0/0x226 [jbd]
[ 6000.144691]  [<ffffffff8106683d>] ? kthread+0x79/0x81
[ 6000.144710]  [<ffffffff81012baa>] ? child_rip+0xa/0x20
[ 6000.144742]  [<ffffffff81011d61>] ? int_ret_from_sys_call+0x7/0x1b
[ 6000.144762]  [<ffffffff8101251d>] ? retint_restore_args+0x5/0x6
[ 6000.144781]  [<ffffffff8100ee0f>] ? xen_restore_fl_direct_end+0x0/0x1
[ 6000.144801]  [<ffffffff8100ee0f>] ? xen_restore_fl_direct_end+0x0/0x1
[ 6000.144820]  [<ffffffff81012ba0>] ? child_rip+0x0/0x20

Has anyone else noticed this?

Thanks :-)

