
Re: Bug#759849: multipath-tools: FTBFS: uxsock.c:20:31: fatal error: systemd/sd-daemon.h: No such file or directory



On Monday 01 September 2014 07:48 PM, Michael Biebl wrote:
>> In native init scripts, we did a lot of checks before starting and
>> shutting down the daemon. Things like checking the root device, or
>> triggering LVM Volume Group activation. They were easily done in shell.
>>
>> What would the systemd team recommend for it?
>>
> Could you elaborate a bit more on why those are needed?
> What is upstream doing about this?

The block storage stack has many components that work closely with one another.

Take, for example, a root filesystem on LVM on Multipath on iSCSI.

The flow for such a setup is (see the hand-run sketch after the list):
1) Start iSCSI and discover the LUNs
2) Detect the matching LUNs and create multipath maps for them in DM Multipath
3) Detect and activate the Volume Group on the newly created DM Multipath Physical Volumes
4) Mount the file system.
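
As a rough illustration, here is that ordering done by hand with the standard tools. This is only a sketch: the portal address and the Volume Group name (192.168.1.10, vg_root) are hypothetical, and a real init script wraps each step in error handling:

    #!/bin/sh
    # 1) Start iSCSI: discover targets and log in to the LUNs
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10   # hypothetical portal
    iscsiadm -m node -L all                                # log in to all discovered targets

    # 2) Let DM Multipath detect the new LUNs and create maps for them
    multipath

    # 3) Detect and activate the Volume Group on the multipath Physical Volumes
    vgscan
    vgchange -ay vg_root                                   # hypothetical VG name

    # 4) Mount the file system
    mount /dev/vg_root/root /mnt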

The same applies, in reverse, to the shutdown sequence. You want proper checks in place before initiating a shutdown of each service; one could argue that a service should not stop while it still has active users.
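
In reverse, a careful shutdown of the same stack looks roughly like this (again a sketch with the same hypothetical names); each step only makes sense once the layer above has released the device:

    #!/bin/sh
    # 4) Unmount the file system first
    umount /mnt

    # 3) Deactivate the Volume Group so LVM releases the multipath PVs
    vgchange -an vg_root

    # 2) Flush the now-unused multipath maps
    multipath -F

    # 1) Log out of the iSCSI sessions last
    iscsiadm -m node -U all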

Many of these services (multipath and iSCSI, for example) have two components: one in the kernel and one in userspace. The kernel-space component will not terminate while anything is still active, but the userspace side is not so forgiving.

In open-iscsi, if you ask the daemon to shut down, it will. If there are active sessions, the kernel component will keep the current sessions alive, but the userspace daemon will still be shut down. That means that on the next state failure, open-iscsi has no way of determining that a LUN's state has changed.
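
This is the sort of situation our init script guards against before stopping the daemon. A minimal sketch of such a guard (the message and exit code are illustrative, not the packaged script verbatim):

    # Refuse to stop iscsid while sessions are still established;
    # `iscsiadm -m session` exits non-zero when there are none.
    if iscsiadm -m session >/dev/null 2>&1; then
        echo "WARNING: active iSCSI sessions exist; not stopping iscsid." >&2
        exit 1
    fi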

The case is similar with DM Multipath. The userspace DM Multipath daemon (multipathd) is responsible for polling and keeping an up-to-date status of the Device Mapper maps. If the userspace daemon is inactive and a fabric state change happens underneath, there is no way to propagate that error to the upper layers.
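
A similar guard can be applied before stopping multipathd: check whether any multipath maps still exist in the kernel (again only a sketch):

    # `dmsetup ls --target multipath` lists device-mapper devices using the
    # multipath target, and prints "No devices found" when there are none.
    if dmsetup ls --target multipath 2>/dev/null | grep -qv "No devices found"; then
        echo "WARNING: multipath maps are still active; not stopping multipathd." >&2
        exit 1
    fi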

Because these components sit in the core storage stack, such a failure can leave you with a machine that has no access to its root disk. Any process running at that time may end up in the 'D' (uninterruptible sleep) process state, or see an immediate device failure. The only remaining action is to hardware-reset the machine.

This is why we do a lot of checks in the init scripts to warn the user.
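
The check we care about most is whether the root device itself sits on a multipath map, because stopping multipathd then is exactly the no-access-to-root scenario above. A simplified, hypothetical version (it only catches root directly on a multipath map; the real scripts have to walk the whole stack, e.g. LVM on top of multipath):

    # Warn if the root filesystem is a device-mapper map whose table
    # uses the multipath target. Take the last "/" entry in /proc/mounts,
    # which is the active mount.
    root_dev=$(awk '$2 == "/" { dev = $1 } END { print dev }' /proc/mounts)
    case "$root_dev" in
    /dev/mapper/*|/dev/dm-*)
        if dmsetup table "$root_dev" 2>/dev/null | grep -qw multipath; then
            echo "WARNING: root device $root_dev is a multipath map;" >&2
            echo "stopping multipathd will leave path failures unhandled." >&2
        fi
        ;;
    esac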


Similar approaches were taken in RHEL (5 and 6) and SLES (10 and 11). I'm not sure what Red Hat or SUSE has chosen for their latest releases, as I don't work on those products any more.


My inclination is to ship both the systemd service files and the init scripts in their current form, along with whatever limitations each may have, and let the user choose.
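
For the systemd side, at least the startup ordering maps naturally onto unit dependencies. A hypothetical multipathd.service sketch (the unit names, paths and targets here are illustrative, not the packaged unit):

    [Unit]
    Description=Device-Mapper Multipath Device Controller
    # Order after iSCSI so the LUNs exist, and before LVM activation and
    # local mounts so the maps are present when they are needed.
    After=iscsi.service
    Before=lvm2-activation.service local-fs-pre.target
    DefaultDependencies=no
    Conflicts=shutdown.target

    [Service]
    Type=notify
    ExecStart=/sbin/multipathd -d

    [Install]
    WantedBy=sysinit.target

What does not map cleanly is the "refuse to stop while in use" logic: an ExecStop= helper runs during stop but cannot veto it, which is one more reason to keep shipping the init scripts as well.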


And by the way, can someone please shed some more light on Debian bug #760182?

Per the bug report, there is no systemd support in d-i. Does that then mean that I need to disable systemd support?


--
Ritesh Raj Sarraf | http://people.debian.org/~rrs
Debian - The Universal Operating System

