
Re: Debian and OSS vs vSphere



On Wed, 29 Feb 2012 09:02:56 +0100, Davide Mirtillo wrote:

On 28/02/2012 20:08, Peter Teunissen wrote:
On 28 Feb 2012, at 16:15, Robert Brockway wrote:

On Tue, 28 Feb 2012, Davide Mirtillo wrote:

I was also wondering if any of you had opinions regarding Proxmox.
http://pve.proxmox.com/wiki/Main_Page [1] It seems like a solid
solution, and it also looks like it's going to be something that works
out of the box just by installing it, which is pretty much what I was
hoping for - yes, I know, I'm lazy :)
Hi Davide. I was just about to send a reply to your other email
suggesting you try Proxmox :) It offers OpenVZ and KVM, so it allows
you to enjoy using Linux containers or fully virtualised systems. I've
used OpenVZ a lot over the years and trialed Proxmox a while back, and
was quite impressed.
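
Just to illustrate the two layers Robert mentions living side by side
on one node, this is roughly what you'd see on the host's shell (both
tools ship with Proxmox; output omitted here):

    vzlist -a      # OpenVZ containers on this node (running and stopped)
    qm list        # KVM virtual machines on this node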
I'd like to add my own positive experience with Proxmox in a small
environment. Having experience with OpenVZ on my private servers, I
quickly gravitated towards Proxmox when looking for something supporting
containers and virtual machines, and sporting a GUI even my Windows-minded
fellow team members could understand ;-). I use it to run a server that
supports our development team. It uses containers for Java web apps
(Confluence and Jira) and network services like DNS and DHCP, and
virtual machines running Windows to do software upgrade tests, evaluate
software, and supply remote users or team members running Linux on their
laptops with RDP sessions to the unavoidable set of Windows dev apps. I
can happily run ±5 containers and ±5 Windows VMs on a quad-core
server with 16GB. The GUI is quite intuitive and provides enough
functionality. Deploying a new VM or container is a breeze. It should
also support live migration between hardware nodes, although I didn't
test this. Backups are easy to set up, either to directly connected
storage or to something like NFS. Best of all, it's Debian beneath the
GUI, so on the CLI, if needed, you'll feel right at home. Peter
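
To put Peter's "deploying a new VM or container is a breeze" in concrete
terms, the CLI equivalent for a container is roughly the following; the
container ID, template name and addresses are placeholders, and the web
GUI wraps the same steps:

    # list the OpenVZ templates available on the node
    ls /var/lib/vz/template/cache/

    # create, configure and start a container (values are examples only)
    vzctl create 101 --ostemplate debian-6.0-standard_6.0-4_i386
    vzctl set 101 --hostname wiki.example.com --ipadd 192.168.1.101 --save
    vzctl start 101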

Hello Peter, that is some good information right there - I installed
Proxmox 2.0 RC1 yesterday afternoon on a workstation computer, for
testing, and I am currently looking at the performance.

Keep in mind that 2.0 is still unstable. For real-world use I'd stick to 1.9 for now, or wait for 2.0 final to be released.

Would you please
be more specific about the configuration of the machine you are using?
i.e. CPU model, disk/controller configuration, installed NICs, etc. A
private reply would be enough!

It's an IBM System x3200 M3 with an Intel Xeon X3430 4-core 2.4GHz CPU, 16GB RAM, 4x 500GB cold-swap SATA drives in a RAID 10 configuration using an IBM ServeRAID BR10iL controller, and dual Intel 82574L Gigabit NICs.

The solution I am looking for will
determine the hardware I will be ordering, so I am concerned about it
right now. How should I consider technologies like KVM and OpenVZ
regarding stability? I'm talking about downtime and maintenance time.

My system has been running 24/7 for over a year now and has had one lockup that required a reboot of the HW node. Apart from that, the HW node has been maintenance free. Upgrading to a new version is as easy as upgrading any Debian system. Keep in mind that my setup is a simple one; I don't use multiple nodes or networked storage. So, YMMV.

On my private servers I use OpenVZ directly on stock Debian squeeze and it has never failed me. The only issues I had were with live migrating containers from one node to another, but that may very well be caused by the specific setup I use, where both nodes are on separate subnets.

As for performance, the most demanding container I have is one running the Confluence wiki and the Jira issue tracker (Java based) with a PostgreSQL DB. I've had that setup running on a (remote, professionally set up and maintained) VMware cluster (as a CentOS Linux VM) and on my local Proxmox server (as an OpenVZ Debian container). The local Proxmox-based one performed slightly better. I don't have any info on the setup of that VMware cluster, though, so it's just anecdotal 'evidence'.
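
For completeness, the two operations mentioned above come down to
something like this on the node; the hostname and container ID are made
up, and vzmigrate needs passwordless root SSH to the target node:

    # upgrading the HW node works like any other Debian box
    apt-get update && apt-get dist-upgrade

    # live-migrate OpenVZ container 101 to another node
    vzmigrate --online node2.example.com 101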

I heard multiple opinions on Xen being a bad thing to work with, both
performance-wise and stability-wise. I wouldn't want to set this "private
cloud" up only to discover it's not production-ready!

Like I mentioned above, I haven't got experience using Proxmox with multiple nodes. It seems to support it just fine and I haven't seen big issues with it on the Proxmox forums [3]. But again, others (or the Proxmox forum [3]) might be more informative.

I am also wondering
how I will be able to deal with storage and multiple nodes: how does
Proxmox behave on the matter? In case the main machine goes down I would
be pretty screwed, wouldn't I?

IIRC Proxmox does support live migration on networked storage for KVM, and AFAIR the upcoming 2.0 will also support it for containers. Check the Proxmox wiki [2]; there's a page on (HA) storage solutions [4]. If by 'main machine' you mean the shared storage, then yes, when it goes down the cluster goes down and you'll have to use your backups. There is, however, an HA storage solution based on DRBD, see [5]. Again, check the Proxmox forums and the wiki for info on its use and stability.
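
For a KVM guest whose disk sits on shared storage, the CLI side of a
live migration should be roughly a single command (VM ID and node name
are placeholders; see the storage wiki page above for the prerequisites):

    qm migrate 100 node2 --online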

I guess that since Proxmox is a Debian
derivative I could possibly have a separate machine for storage and just
mount a remote share through FUSE and use that, but I'm open to
suggestions.

-- Davide Mirtillo
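
For what it's worth, mounting a share from a dedicated storage box is
straightforward either way; the hostnames and paths below are made up,
and Proxmox can also use NFS natively as a storage backend (see the
storage page [4]):

    # plain NFS mount (needs nfs-common on the node)
    mount -t nfs storage.example.com:/srv/vmdata /mnt/vmdata

    # or a FUSE mount via sshfs
    sshfs root@storage.example.com:/srv/vmdata /mnt/vmdata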

 Peter

Links:
------
[1] http://pve.proxmox.com/wiki/Main_Page
[2] http://www.proxmox.com/support/free-community-support/proxmox-ve-wiki
[3] http://forum.proxmox.com/forum.php
[4] http://pve.proxmox.com/wiki/Storage_Model
[5] http://pve.proxmox.com/wiki/DRBD

