
Bug#985481: debootstrap: Detection of docker container is broken with cgroup v2



Hello Nicholas! Thanks for your feedback here, see replies below.


On Sun, 11 Apr 2021 11:51:20 -0400 Nicholas D Steeves <nsteeves@gmail.com> wrote:

> I'm not sure that systemd-detect-virt and your patch are
> forward-compatible in light of
>
> Originally, ".dockerenv" was for transmitting the environment
> variables of the container across the container boundary -- I would
> not recommend relying on its existence either (IIRC, that code
> you've linked to is the only reason it still exists). There's
> likely something incriminatory inside /sys/fs/cgroup, but I haven't
> checked recently.
> https://github.com/moby/moby/issues/18355#issuecomment-220484748
>
> This makes it sound like ".dockerenv" may be deprecated and later
> removed.

That's a good point, but it's also a 5-year-old comment, and the .dockerenv file is still present these days.

I would think that if Docker plans to remove it, they will issue a more formal deprecation warning, giving us enough time to fix things on our side. Also, the fact that systemd checks for this file gives me more confidence that I'm not just doing something fancy here: it seems that this is the "de facto" way to detect docker containers.

FWIW, it's also the most common solution on Q&A sites like Stack Overflow. Other people do that because, apparently, no better solution is provided. Unless I missed it.
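For the record, here is a minimal sketch of that "de facto" check, the same test systemd-detect-virt performs. The helper name and the root-path parameter are mine, added only so the logic can be exercised against a fake root instead of the real filesystem:

```shell
#!/bin/sh
# Hypothetical helper: docker creates the marker file /.dockerenv at the
# container's root. The optional argument (defaulting to /) exists purely
# so we can test the check against a throwaway directory.
in_docker_container () {
    [ -e "${1:-/}/.dockerenv" ]
}

# Exercise the check against a fake root, before and after creating the marker.
fake_root=$(mktemp -d)
result_before=$(in_docker_container "$fake_root" && echo docker || echo none)
touch "$fake_root/.dockerenv"
result_after=$(in_docker_container "$fake_root" && echo docker || echo none)
echo "$result_before -> $result_after"
rm -rf "$fake_root"
```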


> Cgroup v2 is also mounted at /sys/fs/cgroup, so I wonder if the original
> check should be rewritten to check for something under this path instead
> of mountinfo?  Also, using this /sys/fs/cgroup method, I'm not sure if
> it's better debootstrap style to express the OR logical operator in the
> regex or a shell "||" (ie: seems to be needed because the tree under
> /sys/fs/cgroup is different between v1 and v2).

I just had a quick look in /sys/fs/cgroup from within a container. Nothing obvious stands out, there's no file named docker, and nothing in the content of those files mentions docker. I'll attach the output below.
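To illustrate the breakage you describe: the original detection greps mountinfo, which works on cgroup v1 because the docker cgroup path leaks into the mount entries, but matches nothing on cgroup v2. The sketch below is my own illustration, with fabricated sample lines (not copied from a real host) and a hypothetical helper that takes a file argument just so both cases can be tested side by side:

```shell
#!/bin/sh
# Sketch of the cgroup-v1-era detection: look for "docker" in mountinfo.
# The file argument is only for testing against sample data.
looks_like_docker () {
    grep -qs docker "$1"
}

workdir=$(mktemp -d)
# Fabricated, illustrative mountinfo lines:
# cgroup v1 mounts expose the per-container cgroup path ("/docker/<id>")...
echo '30 25 0:26 /docker/abc123 /sys/fs/cgroup/memory rw - cgroup cgroup rw,memory' > "$workdir/v1.sample"
# ...whereas cgroup v2 has a single mount with nothing docker-specific in it.
echo '30 25 0:26 / /sys/fs/cgroup rw - cgroup2 cgroup2 rw' > "$workdir/v2.sample"

v1_result=$(looks_like_docker "$workdir/v1.sample" && echo docker || echo none)
v2_result=$(looks_like_docker "$workdir/v2.sample" && echo docker || echo none)
echo "v1: $v1_result, v2: $v2_result"
rm -rf "$workdir"
```

Which is consistent with what I see in the /sys/fs/cgroup listing below: nothing there mentions docker either.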

I will CC Tianon, as he was the author of the comment mentioned above, and he might know better, 5 years later :)

In short, Tianon, if you're reading these lines, our question is: what would be the right way to detect that we're running from within a docker container, apart from checking for the existence of the file `/.dockerenv`?

Thanks!



---- Logs -- checking /sys/fs/cgroup from within a docker container


# head -n 100 /sys/fs/cgroup/*
==> /sys/fs/cgroup/cgroup.controllers <==
cpuset cpu io memory hugetlb pids rdma

==> /sys/fs/cgroup/cgroup.events <==
populated 1
frozen 0

==> /sys/fs/cgroup/cgroup.freeze <==
0

==> /sys/fs/cgroup/cgroup.max.depth <==
max

==> /sys/fs/cgroup/cgroup.max.descendants <==
max

==> /sys/fs/cgroup/cgroup.procs <==
1
16

==> /sys/fs/cgroup/cgroup.stat <==
nr_descendants 0
nr_dying_descendants 0

==> /sys/fs/cgroup/cgroup.subtree_control <==

==> /sys/fs/cgroup/cgroup.threads <==
1
16

==> /sys/fs/cgroup/cgroup.type <==
domain

==> /sys/fs/cgroup/cpu.max <==
max 100000

==> /sys/fs/cgroup/cpu.pressure <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=6857

==> /sys/fs/cgroup/cpu.stat <==
usage_usec 145464
user_usec 57509
system_usec 87955
nr_periods 0
nr_throttled 0
throttled_usec 0

==> /sys/fs/cgroup/cpu.weight <==
100

==> /sys/fs/cgroup/cpu.weight.nice <==
0

==> /sys/fs/cgroup/cpuset.cpus <==


==> /sys/fs/cgroup/cpuset.cpus.effective <==
0-7

==> /sys/fs/cgroup/cpuset.cpus.partition <==
member

==> /sys/fs/cgroup/cpuset.mems <==


==> /sys/fs/cgroup/cpuset.mems.effective <==
0

==> /sys/fs/cgroup/hugetlb.1GB.current <==
0

==> /sys/fs/cgroup/hugetlb.1GB.events <==
max 0

==> /sys/fs/cgroup/hugetlb.1GB.events.local <==
max 0

==> /sys/fs/cgroup/hugetlb.1GB.max <==
max

==> /sys/fs/cgroup/hugetlb.1GB.rsvd.current <==
0

==> /sys/fs/cgroup/hugetlb.1GB.rsvd.max <==
max

==> /sys/fs/cgroup/hugetlb.2MB.current <==
0

==> /sys/fs/cgroup/hugetlb.2MB.events <==
max 0

==> /sys/fs/cgroup/hugetlb.2MB.events.local <==
max 0

==> /sys/fs/cgroup/hugetlb.2MB.max <==
max

==> /sys/fs/cgroup/hugetlb.2MB.rsvd.current <==
0

==> /sys/fs/cgroup/hugetlb.2MB.rsvd.max <==
max

==> /sys/fs/cgroup/io.max <==

==> /sys/fs/cgroup/io.pressure <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=24710
full avg10=0.00 avg60=0.00 avg300=0.00 total=24710

==> /sys/fs/cgroup/io.stat <==
259:0 rbytes=5431296 wbytes=0 rios=196 wios=0 dbytes=0 dios=0
254:0 rbytes=5431296 wbytes=0 rios=196 wios=0 dbytes=0 dios=0
254:1 rbytes=5431296 wbytes=0 rios=196 wios=0 dbytes=0 dios=0

==> /sys/fs/cgroup/io.weight <==
default 100

==> /sys/fs/cgroup/memory.current <==
6696960

==> /sys/fs/cgroup/memory.events <==
low 0
high 0
max 0
oom 0
oom_kill 0

==> /sys/fs/cgroup/memory.events.local <==
low 0
high 0
max 0
oom 0
oom_kill 0

==> /sys/fs/cgroup/memory.high <==
max

==> /sys/fs/cgroup/memory.low <==
0

==> /sys/fs/cgroup/memory.max <==
max

==> /sys/fs/cgroup/memory.min <==
0

==> /sys/fs/cgroup/memory.numa_stat <==
anon N0=602112
file N0=4866048
kernel_stack N0=49152
shmem N0=0
file_mapped N0=3108864
file_dirty N0=0
file_writeback N0=0
anon_thp N0=0
inactive_anon N0=598016
active_anon N0=0
inactive_file N0=1486848
active_file N0=3649536
unevictable N0=0
slab_reclaimable N0=0
slab_unreclaimable N0=0
workingset_refault_anon N0=0
workingset_refault_file N0=0
workingset_activate_anon N0=0
workingset_activate_file N0=0
workingset_restore_anon N0=0
workingset_restore_file N0=0
workingset_nodereclaim N0=0

==> /sys/fs/cgroup/memory.oom.group <==
0

==> /sys/fs/cgroup/memory.pressure <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=0
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /sys/fs/cgroup/memory.stat <==
anon 602112
file 4866048
kernel_stack 49152
percpu 0
sock 0
shmem 0
file_mapped 3108864
file_dirty 0
file_writeback 0
anon_thp 0
inactive_anon 598016
active_anon 0
inactive_file 1486848
active_file 3649536
unevictable 0
slab_reclaimable 0
slab_unreclaimable 0
slab 0
workingset_refault_anon 0
workingset_refault_file 0
workingset_activate_anon 0
workingset_activate_file 0
workingset_restore_anon 0
workingset_restore_file 0
workingset_nodereclaim 0
pgfault 2310
pgmajfault 0
pgrefill 0
pgscan 0
pgsteal 0
pgactivate 759
pgdeactivate 0
pglazyfree 0
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0

==> /sys/fs/cgroup/memory.swap.current <==
0

==> /sys/fs/cgroup/memory.swap.events <==
high 0
max 0
fail 0

==> /sys/fs/cgroup/memory.swap.high <==
max

==> /sys/fs/cgroup/memory.swap.max <==
max

==> /sys/fs/cgroup/pids.current <==
2

==> /sys/fs/cgroup/pids.events <==
max 0

==> /sys/fs/cgroup/pids.max <==
18738

==> /sys/fs/cgroup/rdma.current <==

==> /sys/fs/cgroup/rdma.max <==


--
Arnaud Rebillout

