
Re: [PATCH nbd-client] client: Request NBD_INFO_BLOCK_SIZE and set constraints in the kernel



On Thu, Jun 16, 2022 at 12:29:42PM +0100, Richard W.M. Jones wrote:
> NBD servers may advertise their minimum, preferred and maximum block
> size constraints.  The constraints do not map well to what the kernel
> expects, so see this design document for the mapping used here:
> https://lists.debian.org/nbd/2022/06/msg00022.html
> 
> This patch only does the smallest change to read these constraints
> from the server and map the preferred block size to kernel hints
> minimum_io_size and optimal_io_size.  The minimum and maximum
> constraints are ignored for now.
> 
> The names of the kernel hints are very confusing, and do not refer to
> the "minimum" of anything, see this document for an explanation:
> https://people.redhat.com/msnitzer/docs/io-limits.txt

...

> @@ -529,6 +532,28 @@ void parse_sizes(char *buf, uint64_t *size, uint16_t *flags) {
>  	printf("\n");
>  }
>  
> +/* Parse reply from NBD_INFO_BLOCK_SIZE */
> +void parse_block_sizes(char *data, uint32_t datasize, uint32_t *block_sizes)
> +{
> +	if (datasize < 3 * sizeof(uint32_t)) {
> +		err("E: server sent too short reply to NBD_INFO_BLOCK_SIZE, ignoring");
> +		return;
> +	}
> +	memcpy(&block_sizes[0], &data[0], sizeof(uint32_t));
> +	block_sizes[0] = ntohl(block_sizes[0]);
> +	memcpy(&block_sizes[1], &data[4], sizeof(uint32_t));
> +	block_sizes[1] = ntohl(block_sizes[1]);
> +	memcpy(&block_sizes[2], &data[8], sizeof(uint32_t));
> +	block_sizes[2] = ntohl(block_sizes[2]);

Is it worth validating that the numbers the server sent us make sense?
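Something along these lines is what I have in mind - a rough sketch
based on the constraints the NBD spec places on these values (the
helper names are mine, not existing nbd-client code, and the exact
limits should be double-checked against the spec):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sanity check: the NBD spec says the minimum block
 * size must be a power of 2 (and no larger than 64K), the preferred
 * block size must be a power of 2 at least as large as the minimum,
 * and the maximum must be at least the preferred size. */
static bool
is_power_of_2(uint32_t x)
{
	return x && (x & (x - 1)) == 0;
}

static bool
block_sizes_valid(uint32_t min, uint32_t pref, uint32_t max)
{
	if (!is_power_of_2(min) || min > 65536)
		return false;
	if (!is_power_of_2(pref) || pref < min)
		return false;
	if (max < pref)
		return false;
	return true;
}
```

If the check fails, falling back to the defaults (as if the server
had not sent NBD_INFO_BLOCK_SIZE at all) seems safer than trusting
nonsense values.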

> +
> +#if 0
> +	printf("server block sizes: min: %ld, pref: %ld, max: %ld\n",
> +	       (long) block_sizes[0],
> +	       (long) block_sizes[1],
> +	       (long) block_sizes[2]);
> +#endif

While this was obviously useful during testing, do we need the #if 0
in the final patch?
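If the diagnostic is worth keeping at all, formatting it into a
buffer and emitting it only under some verbose/debug mode would be
nicer than compiling it out - a sketch (the function name is mine,
and whether nbd-client has such a mode is an assumption):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical replacement for the #if 0 block: render the server's
 * block size constraints into a caller-supplied buffer, so the
 * caller can decide whether to print it. */
static int
format_block_sizes(char *buf, size_t len, const uint32_t bs[3])
{
	return snprintf(buf, len,
	                "server block sizes: min: %" PRIu32
	                ", pref: %" PRIu32 ", max: %" PRIu32,
	                bs[0], bs[1], bs[2]);
}
```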

> @@ -682,7 +709,12 @@ void negotiate(int *sockp, u64 *rsize64, uint16_t *flags, char* name, uint32_t n
>  		return;
>  	}
>  
> -	send_info_request(sock, NBD_OPT_GO, 0, NULL, name);
> +	block_sizes[0] = 1; /* default minimum */
> +	block_sizes[1] = 4096; /* default preferred */
> +	block_sizes[2] = 0xffffffff; /* default maximum */

Should we instead default to 512 as the minimum (assume that the
server can't gracefully handle sub-sector requests unless it
explicitly tells us), and 32M as the maximum (assume that the server
will likely disconnect if we exceed its buffers, where 32M is the
maximally-portable buffer size that older servers supported before
newer servers could advertise larger buffers)?
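In code, the fallbacks I'm suggesting would look something like this
(constant names are illustrative, not existing nbd-client
identifiers):

```c
/* Hypothetical conservative defaults when the server does not
 * advertise NBD_INFO_BLOCK_SIZE: assume it cannot handle
 * sub-sector requests, and cap requests at the historically
 * portable 32M buffer size. */
#define NBD_DEF_MIN_BLOCK  512U          /* one sector */
#define NBD_DEF_PREF_BLOCK 4096U         /* common page size */
#define NBD_DEF_MAX_BLOCK  (32U << 20)   /* 32M = 33554432 bytes */
```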

Overall, I think the strategy makes sense.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org

