
Re: user traffic accounting



On 8 Jan 2002, at 18:25, martin f krafft wrote:

> [cc'd to gr and peter because i think they might be
> interested and because they might have valuable input. this
> is about accounting on a user basis for each and every byte
> a user or her domains cause. debian-isp is open to
> posting... original post lives at [1]]
>
> also sprach Marcel Hicking <hicking@du.gtn.com>
> [2002.01.08.1634 +0100]:
> > User Mode Linux virtual machines are networkable,
> > to each other, to the host, and to other physical
> > machines.  So, UML can be used to set up a virtual
> > network that allows setting up and testing of
> > experimental services.
> > http://user-mode-linux.sourceforge.net/
>
> i.e. basically vmware for linux-on-linux only (for now), and
> free...
>
> this is *very* cool, thanks so much. i mean, damn you, how
> could you show me this, now i have something else to occupy
> my time with ;) (i hope you aren't offended by my use of
> "damn").
No, dammit, no prob ;-)


> anyway, this is wicked, and i immediately want to give a
> virtual machine to every single one of my users. since i
> only have one IP (not true, but i don't have an IP per
> user), i'd have to do MASQ along with proxies on the host,
> but i think this could work. your comments on the following,
> please...
>
> the best is, i think you could create *one* filesystem to
> serve them all, mount it read-only, and then provide them
> with /home/user - which is either NFS-mounted from the host,
> or which is simply a partition mounted from a file in their
> /home on the host. then again, i'd love to *not* have users
> on the host then. that's the least trouble...

I'd go for real partitions. No worries with quotas, and
faster than NFS anyway.
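
Something like this on the host should do: one shared root
image plus a COW file per VM, and a real partition or LV
handed to each VM as its /home disk (paths and names made
up, untested):

    # boot joe's VM; /uml/root_fs is the shared read-only
    # root image, joe.cow holds only joe's changes to it,
    # and ubd1 is a real partition/LV for his /home
    # (networking options omitted)
    ./linux umid=joe \
        ubd0=/uml/joe.cow,/uml/root_fs \
        ubd1=/dev/vg0/joe_home

Quotas then live inside the VM on ubd1, and there's no NFS
in the picture at all.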


> let me start with constructing the hosting services before i
> attack the tough nuts... so the system will have 1.2.3.4 as
> the official IP, and a 172.16/16 network between the
> official host and all the vm's.
>
> 1. postfix. there'll be a postfix running in each and every
> vm, taking care of the hosted domains only. it is configured
> to send via postfix on the master (smtp-relay), and the
> master's postfix is configured to relay mail for all domains
> in the VMs, using the transport table to then deliver it to
> the vm's postfix on the 172.16/16 subnet. thus, even though
> the mail traffic that my server farm sees isn't the same
> that's flowing between the master and the vm, they are
> (virtually) identical. because of received-headers adding
> size, those users that only send will cause me some loss,
> those that mostly receive will pay a little more. but it's
> within the bytes to kilobytes range, thus no problem.
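
A transport map on the master along these lines should do
the trick (domains taken from your examples, the second
address is made up, untested):

    # /etc/postfix/transport on the master
    joe.net     smtp:[172.16.101.123]
    coop.net    smtp:[172.16.101.124]

    # /etc/postfix/main.cf on the master
    relay_domains  = joe.net, coop.net
    transport_maps = hash:/etc/postfix/transport

plus a "postmap /etc/postfix/transport" after each change.
The VMs' postfixes simply set relayhost = [172.16.0.1], or
whatever the master's internal address turns out to be.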
>
> 2. bind9. this is also moderately easy. the master runs a
> bind9 server that's configured to go recursive for the
> domains in the vmachines. the vm bind9 uses the master bind9
> as the only forwarder.

Guess you could also use a hidden primary configuration.
Your publicly announced NS is actually configured as a
slave getting updates from the virtual binds. You might
even be able to run the official bind on a different
machine for additional security. In case someone manages
to break out of the virtual machine jail, he won't be able
to mess with your DNS too much.
I run this sort of config here and there where somewhat
trusted customers want to have control over their zones.
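
Roughly like this (zone and addresses from the examples in
this thread, 172.16.0.1 standing in for whatever the public
NS's internal address really is; untested):

    # named.conf on the publicly announced NS
    zone "joe.net" {
        type slave;
        masters { 172.16.101.123; };
        file "slave/joe.net";
    };

    # named.conf in joe's VM (the hidden primary)
    zone "joe.net" {
        type master;
        file "/etc/bind/db.joe.net";
        allow-transfer { 172.16.0.1; };
        also-notify { 172.16.0.1; };
    };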


> 3. apache. things are getting more difficult. because of
> virtual hosting, one would have to employ a transparent
> squid proxy without caching abilities (maybe there's a
> better, low-weight proxy for this) because what it should
> really do is respond to a request for something like
> vm1.madduck.net with the response it receives from a request
> on the 172.16/16 subnet to the apache running in the
> appropriate virtual machine. there are two problems i see:
> logging - inside the vm, all requests for a domain's webpage
> will appear to be coming from the proxy rather than the
> original requester. i wonder if it's possible to have a
> relay that reads ahead in the HTTP request to decide how to
> forward/NAT the request before relaying it on the IP
> level... the second problem is HTTPS, but then again, with a
> single IP, you can't really run multiple HTTPS domains
> anyway, so users simply won't get their own HTTPS server -
> if they need HTTPS, then a special configuration could be
> set up on the main HTTPS server, which NFS-mounts the
> respective directory from the VM into the HTTPS ServerRoot,
> which will at least account for the actual payload data even
> if the request and HTTP response header are not going to be
> included in the accounted traffic volume. oh well.
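
For the plain HTTP part, a name-based reverse proxy using
apache's own mod_proxy might already be enough instead of
squid. A sketch (hostname from your example, the VM
address made up, untested):

    NameVirtualHost 1.2.3.4:80

    <VirtualHost 1.2.3.4:80>
        ServerName vm1.madduck.net
        ProxyPass / http://172.16.101.123/
        ProxyPassReverse / http://172.16.101.123/
    </VirtualHost>

The logging problem stays the same, though: inside the VM
every hit still appears to come from the proxy's address.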
>
> 4. shell traffic. because 172.16/16 is illegal, masquerading
> is done, which makes the master host be the upstream gateway
> for the VMs. thus every byte will be registered by iptables
> or ipac-ng as it passes through the master host's netfilter.
> thus traffic caused on the shell will be counted without
> overlap, next to, and completely identical to the traffic
> caused by the daemons on the VM.
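
The nice thing is that the same netfilter rules can do the
counting, e.g. (interface name made up, untested):

    # masquerade everything leaving for the outside world
    iptables -t nat -A POSTROUTING -s 172.16.0.0/16 -o eth0 \
        -j MASQUERADE

    # one pair of accounting rules per VM; no target needed,
    # the per-rule byte counters do all the work
    iptables -A FORWARD -s 172.16.101.123
    iptables -A FORWARD -d 172.16.101.123

    # read (and reset) the counters periodically, e.g. from cron
    iptables -L FORWARD -v -x -n -Z

ipac-ng basically just automates that sort of setup.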
>
> 5. ssh. this is the real bitch! you can't proxy SSH, you
> can't really forward it. i could either give users accounts
> on the master host with their login shells configured to do
> host-based RSA authenticated login to their VM, or i could
> give out special SSH ports and forward those. for instance,
> user joe will be able to login to his VM at
> 172.16.101.123:22 via ssh to 1.2.3.4:22123. this is not a
> problem in terms of known_hosts because say joe owns
> joe.net, but he also helps to administer another domain,
> coop.net, which lives in another VM. while ssh'ing to
> joe.net via port 22123, his known_hosts will register the
> joe.net VM's RSA/DSA key with the IP 1.2.3.4 and hostname
> joe.net, when ssh'ing into coop.net via port 22456, he
> better be using coop.net as destination, or he'll get MITM
> alerts. it's a little problematic to ask users to use SSH on
> weird ports, but it's the cleanest way around... however,
> the shell-login proposed first is also a way, and while
> clean, it only requires additional admin overhead (as well
> as a damn secure script that establishes the new ssh
> session). this has the negative effect of logins always
> coming from the same IP, but what gives...
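
For the port variant, a DNAT rule per user on the master
plus a client-side HostKeyAlias would get around the
known_hosts clash (port and addresses from your joe
example, untested, and HostKeyAlias needs a reasonably
recent OpenSSH):

    # on the master: 1.2.3.4:22123 goes to joe's VM sshd
    iptables -t nat -A PREROUTING -d 1.2.3.4 -p tcp \
        --dport 22123 -j DNAT --to-destination 172.16.101.123:22

    # in joe's ~/.ssh/config on his own box
    Host joe.net
        Port 22123
        HostKeyAlias joe-vm
        CheckHostIP no

With the alias, the key gets stored under "joe-vm" instead
of joe.net, and CheckHostIP no keeps ssh from complaining
about the shared 1.2.3.4; do the same for coop.net with its
own alias and the MITM alerts should go away.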
>
> now, given that i don't host other services (for now at
> least. remote ldap-tls and remote postgres-tls might
> come...), this looks to me like some perfect solution,
> albeit cumbersome. but it would definitely add to security
> actually.
>
> what do you think? and for added challenge, the machine this
> will be implemented on is really only SSH accessible to me, so

Basically this sounds fine to me. Not sure about the ssh
business either; neither way is a nice and clean solution yet.

I'd be really interested in how the project goes.
Keep us up to date!


> this all has to be implemented remotely ;)

Apart from setting up a base system, I've never done
anything _not_ remotely ;-)



--
   __
 .´  `.
 : :' !   Enjoy
 `. `´   Debian/GNU Linux
   `-   Now even on the 5 Euro banknote!


