
Re: Web applications specific issues



> > > Providing a VirtualHosting facility
> > > -----------------------------------------------------------------
> >
> > moreover, independently from the multiple instances problems,
> > packages usually provide subdirectories auto-configuration, and
> > never allow the user to let the web app live on a virtual host, or
> > worse in a subdir of a particular vhost.
>
> could you provide an example of that?  i don't know of any webapps
> off the top of my head that fall into this category.

well, I guess my English is terrible, but let's take IMP. very often you 
want an imp.foo.org/ or a webmail.foo.org that points to it (same for 
SquirrelMail and co). and the default install is to be on 
http://localhost/horde2/ IIRC, which is too restrictive, since it's not 
the most common use, even if it's the easiest config.

> > IMHO, there is sth to do, maybe some srvconfig-common that would
> > take advantage of the under-used /srv tree.
>
> i really disagree here.  i think we should remain completely /srv
> agnostic, or at least do nothing other than symlink into /srv.  there
> already exist mechanisms to alias a website directory to something
> under /usr/share/foo, so i think it's kind of moot.

well, we know how to alias *one* instance of the webapp, but not really 
many instances on many hosts without duplicating the code.

take the webmail example: you may want a webmail.foo.org that serves 
foo.org users, and a webmail.bar.fr that serves only @bar.fr users. this 
is unfeasible without patching the webmail.
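as a rough sketch (hostnames and paths here are made up, and the 
per-vhost /srv directories are assumed to already exist), each instance 
could get its own DocumentRoot:

```apache
# hypothetical apache fragment: one DocumentRoot per webmail instance
<VirtualHost *:80>
    ServerName webmail.foo.org
    DocumentRoot /srv/webmail.foo.org
</VirtualHost>

<VirtualHost *:80>
    ServerName webmail.bar.fr
    DocumentRoot /srv/webmail.bar.fr
</VirtualHost>
```

each tree would carry its own config file, so one instance serves 
@foo.org users and the other @bar.fr users without touching the shared 
code.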

but if we have /srv, then we can have :

/srv/webmail.foo.org/[symlinks to /usr/share except vhost specific 
files]

/srv/webmail.bar.fr/[symlinks to /usr/share except vhost specific files]

and if the user has some way to register the fact that those sites 
exist, then we may be able to provide some kind of automatic upgrades, 
which would be really really nice.

the good point is that it copes well with a lot of apps (mostly php and 
perl ones).
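to make the layout concrete, here is a minimal sketch of the idea. the 
names (webmail, index.php, config.php) are made up, and /usr/share and 
/srv are simulated with temp dirs so it runs anywhere:

```shell
# hypothetical per-vhost /srv layout: code shared via symlinks,
# vhost-specific files as real copies
set -e
SHARE=$(mktemp -d)   # stands in for /usr/share/webmail, shipped once by the package
SRV=$(mktemp -d)     # stands in for /srv

echo 'shared code'    > "$SHARE/index.php"
echo 'default config' > "$SHARE/config.php"

for vhost in webmail.foo.org webmail.bar.fr; do
    mkdir -p "$SRV/$vhost"
    # shared code is a symlink: upgrading /usr/share upgrades every vhost at once
    ln -s "$SHARE/index.php" "$SRV/$vhost/index.php"
    # vhost-specific files are real copies the local admin edits
    cp "$SHARE/config.php" "$SRV/$vhost/config.php"
done

# the admin customizes one vhost without affecting the other
echo 'foo.org users only' > "$SRV/webmail.foo.org/config.php"
```

one package upgrade then touches every instance's code at once, while 
each vhost keeps its own config.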

> > I believe we can write a set of scripts that would know some vhosts
> > and services because they follow some debian policy about internal
> > /srv/ use, and that would enable automagic per vhost configuration,
> > and not per server like it is the case for most of applications.
>
> that's not true, most applications (at least ones that don't require
> daemons) do work on a per-vhost level, though the local admin has
> to do some work to get them to work.

disagreed. the apps that have daemons (like the python servers that also 
embed their own http server) are vhost-aware and provide a global 
config file that is able to configure every single vhost.

But most of the apps I use are completely blind to vhosts, and the way 
to use the same code for different instances is to play with 
include_path (which is ugly ... see end of my mail) or to play with 
symlinks, which is not very nice, but at least more sensible (IMHO).

> > [...]
> yeah, this kind of sucks... istr someone declaring that they were
> going to do some work on this, but can't seem to dig up a reference. 
> however, this is not as serious of an issue (unless you're packaging
> a PEAR library) because you can always work around it by changing
> include_path with ini_set (either in the code, config file, or apache
> config).
> > [...]
> is there a limitation of the number of paths in the include_path?

no, but this is a real concern. extending include_path without any 
boundary will raise:
 * performance issues (since to find an include you will have to scan
   every single directory listed in the include_path)
 * masking issues, which are necessarily avoided if everything lives
   in /usr/share/php.

I mean, if foo.php is in two directories that are in the include_path, 
then the first one will mask the second one. if they are 
in /usr/share/php/lib1/foo.php and /usr/share/php/lib2/foo.php then you 
have no more problems, since you have to require 'lib1/foo.php' and 
there is no more ambiguity. (relying on the include_path order is 
potentially a big security issue).
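here is a small simulation of the masking problem. the directory names 
and the resolve helper are made up; php's real lookup is in C, this 
just mimics its first-match-wins order over a colon-separated path:

```shell
# simulate include_path resolution: the first directory containing the
# requested file wins, silently masking any later copy
set -e
ROOT=$(mktemp -d)
mkdir -p "$ROOT/lib1" "$ROOT/lib2"
echo 'lib1 version' > "$ROOT/lib1/foo.php"
echo 'lib2 version' > "$ROOT/lib2/foo.php"

resolve() {
    # resolve <colon-separated include_path> <relative file>
    local IFS=':'
    for dir in $1; do
        if [ -f "$dir/$2" ]; then
            echo "$dir/$2"
            return 0
        fi
    done
    return 1
}

# flat include_path: lib2's foo.php is silently masked by lib1's
resolve "$ROOT/lib1:$ROOT/lib2" foo.php    # -> .../lib1/foo.php

# single shared root: the caller disambiguates, no masking possible
resolve "$ROOT" lib1/foo.php               # -> .../lib1/foo.php
resolve "$ROOT" lib2/foo.php               # -> .../lib2/foo.php
```

with the single /usr/share/php-style root, which file you get no longer 
depends on the order someone built the path in.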

Moreover, this would prevent collisions between libs, being enforced by 
the packaging system, which is extra security.


> On Tue, May 03, 2005 at 01:50:04PM +0200, Pierre Habouzit wrote:
> > another thing, that is related to what I said, is providing some
> > "quality" criteriums to allow a php package to be in debian.
>
> that's a great idea, that i haven't heard before.  having a "web
> applications must meet the following standards" criteria could
> save us a lot of trouble.  a problem i see with that is it could
> exclude an otherwise dfsg application from debian, which i don't
> think would make it very attractive to the debian community at large.
> how about "must meet the following standards if they are to work
> out-of-the-box, otherwise the packager must do no automatic stuff"?

well, IMHO debian does not have to package every piece of shit simply 
because it exists. and let's be honest, a lot of beginner programmers 
write web apps, and those apps are written like shit. A webapp full of 
code misconceptions will sooner or later:
 * have bugs
 * have security problems
 * have annoying upgrade problems
and that will create a burden on the shoulders of too many people.

No, I really believe that we should have some strict (not necessarily 
many, but sensible) requirements and quality standards. and invariance 
under magic_quotes settings is a sensible one for a library (e.g.).


-- 
·O·  Pierre Habouzit
··O
OOO                                                http://www.madism.org
