Proxmox VE: Import Wizard for Migrating VMware ESXi VMs

245 points
a month ago
by aaronius

Comments


matthew-wegner

I'm in game development, and I run ESXi for two reasons:

* Unmodified macOS guest VMs can run under ESXi (if you're reading this on macOS, you have an Apple-made VMXNet3 network adapter driver on your system--see /System/Library/Extensions/IONetworkingFamily.kext/Contents/PlugIns/AppleVmxnet3Ethernet.kext )

* Accelerated 3D has reasonable guest support, even as pure software. You wouldn't want to work in one of those guest VMs, but for any sort of build agent it should be fine, including opening e.g. the Unity editor itself in-VM to diagnose something

Does anyone know where either of these things stand with Proxmox today?

I imagine a macOS VM under Proxmox is basically a hackintosh with e.g. OpenCore as the bootloader?

a month ago

wutwutwat

I run macOS/OSX Sonoma in Proxmox. It does pcie/gpu passthrough (AMD rx580 8gb). The proxmox host is running an AMD Ryzen 7 5700G cpu and has 64gb memory. The mac vm disks are on a zfs fs on a wd black nvme ssd.

It's the fastest mac I've ever owned and it's virtual, executed on a machine running a chip that apple never supported, and you'd never be able to tell it was a vm unless you were told so. kvm and vfio are amazing pieces of software.

A good place to start: https://github.com/luchina-gabriel/OSX-PROXMOX

a month ago

rfoo

> zfs fs on a wd black nvme ssd

Why? IIRC running ZFS on NVMe SSDs seriously limits their performance. With sufficient queue depth modern SSDs can easily get up to 1M+ IOPS, and on ZFS I can barely get 100k :(

a month ago

tpetry

Most probably because ZFS still has the most extensive data correctness guarantees of all filesystems on Linux. Yes, bcachefs will have checksums too, but it is still in beta.

a month ago

wutwutwat

copy on write, instant snapshots, data checksumming, easy rollbacks, auto trim, compression

proxmox takes advantage of zfs' features if available, like when you select block storage on zfs it makes each vm disk its own dataset which you can tune as you see fit for each one. cloning a vm is instant if you make a cow clone. suspending the vm to do snapshot based backups is also nearly instant, etc

Disable atime and turn off the arc cache and metadata cache (you don't really need either with fast storage, compared to spinning rust), use lz4 compression, and you minimize write amplification and needless wear on the drive. zfs is perfectly fine on ssd for my use case
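
For reference, that tuning boils down to a few property sets; something like this (pool/dataset names are just examples):

  zpool set autotrim=on rpool
  zfs set atime=off rpool/data
  zfs set primarycache=metadata rpool/data
  zfs set compression=lz4 rpool/data

(primarycache=metadata keeps only metadata in ARC; use =none to bypass ARC entirely.)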

a month ago

bonton89

btrfs has checksums as well.

a month ago

bonton89

The lack of 3D paravirtual devices is a real sore spot in KVM. To my knowledge, virgl still isn't quite there, but it's all there is in this space so far. VMware has the best implementation IMO and everything else is a step down.

a month ago

mrpippy

Note that macOS guest support ended with ESXi 7.0: https://kb.vmware.com/s/article/88698

Running macOS is only supported/licensing-compliant on Apple-branded hardware anyway, and with the supported Intel Macs getting pretty old, this was inevitable.

a month ago

moondev

Mac mini 2018 is still the best Mac for vms

   6 cores 12 threads
   64GB DDR
   nvme
   4 thunderbolt3 ports for pci expansion
   10GbE onboard nic
   boots ESXi
   boots Proxmox
   boots or virtualizes windows
   boots or virtualizes linux
   boots or virtualizes macos
   iGPU passthrough
   Supports nested virt
a month ago

greggsy

If you’re just running a bare metal hypervisor then you may as well just go for a second hand tiny form factor platform. No point paying 15-20% extra for the same specs in a svelte case if you’re not running native macOS.

a month ago

moondev

> No point paying 15-20% extra for the same specs in a svelte case if you’re not running native macOS.

The main point is proper non-hackintosh support for running an unlimited number of macOS VMs on ESXi - which requires Apple hardware. Running macOS, Windows and Linux on the metal is another benefit. IMO it's almost as versatile as a Framework Chromebook (ChromeOS, Crostini, KVM, Android with Play Store)

a month ago

mysteria

The SPICE backend has decent OpenGL 3D support with software rendering, it's slow but it works for simple graphics. It's intended for 2D so the desktop's pretty fast IMO. That only works for Linux and Windows guests though, not Apple ones.

MacOS VMs do work in Proxmox with a Hackintosh setup, but you pretty much have to pass through a GPU to the VM if you're using the GUI. Otherwise you're stuck with the Apple VNC remote desktop, and that's unbearably slow.

a month ago

zozbot234

For paravirtualized hardware rendering you can use virtio-gpu. In addition to Linux, a Windows guest driver is available but it's highly experimental still and not very easy to get working.

a month ago

zbrozek

What's the best solution for remote viewing with virtio-gpu? I vaguely recall that it had some incompatibility with spice.

I have a bunch of qemu/kvm virtual machines for various tasks. For two of them I decided that graphics performance needed to be at least OK and ended up buying and passing through old Quadro cards. It'd be lovely to not have to do that.

a month ago

mysteria

I don't think virtio-gpu works on SPICE. I only use it for VMs on my desktop as I can display the output on the local machine using virt-viewer.
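
If you're driving QEMU directly rather than through libvirt, the local-display setup is roughly this (illustrative sketch; -device virtio-vga-gl needs QEMU 6.1+, older versions use -device virtio-vga plus a gl=on display):

  qemu-system-x86_64 -enable-kvm -m 4G \
    -device virtio-vga-gl \
    -display gtk,gl=on \
    -drive file=guest.qcow2,if=virtio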

a month ago

zbrozek

Still the case, huh. That's unfortunate. All of my VMs run on a headless server, so my only means of use is via some form of remote desktop.

a month ago

m463

You can run macos vms under proxmox.

You can also accelerate video (though I'm not sure what "pure software" means here?)

this guy: https://www.nicksherlock.com

has been writing up proxmox vms running macos for a while, the latest being:

https://www.nicksherlock.com/2022/10/installing-macos-13-ven...

yes, it uses opencore.

I haven't really bothered with macos GPU acceleration. It is possible but a little fiddly with an nvidia card. I mostly rely on screen sharing to display the remote vm and that's really good (rich text copy/paste, drag and drop files, etc)

As to general guest 3d, you can do this with GPU passthrough, and although it's technical the first time, each vm is then easy.

basically I add this line to each VM for my graphics card:

  hostpci0: 01:00,pcie=1,x-vga=1
and this passes through a USB controller for my USB dac:

  hostpci1: 03:00,pcie=1
(this is specific to my system device tree, it will usually be different for each specific machine)

The passthrough has worked wonderfully for Windows, ubuntu and arch linux guests, giving a fully accelerated desktop and smooth gaming with non-choppy sound.

two things I wish proxmox did better:

- passthrough USB into containers

usb is pretty fiddly to propagate into a container (a config sketch follows after this list). It is marginally better with VMs, but sometimes devices still don't show up.

- docker/podman containers

proxmox has lxc containers, but using them is a little like being a sysadmin. Docker/podman containers that you could build from a dockerfile would be a much better experience.
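
For the USB case above, the manual route is adding the device nodes to /etc/pve/lxc/{lxc_vmid}.conf by hand; a sketch, not guaranteed for every device (character major 189 covers USB device nodes):

  lxc.cgroup2.devices.allow: c 189:* rwm
  lxc.mount.entry: /dev/bus/usb dev/bus/usb none bind,create=dir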

a month ago

moondev

Nested virtualization also works great on ESXi for macOS guests (so you can run Docker Desktop if so inclined). I believe this is possible with Proxmox as well with CPU=host but have not tried it.
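
For the Proxmox side, the usual recipe is enabling nesting in the KVM module and giving the guest the host CPU model; an untested sketch (Intel shown, AMD uses kvm-amd with nested=1):

  # on the Proxmox host: enable nested virt, then reload the module or reboot
  echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf

  # expose the host CPU (with its VMX/SVM flags) to the guest
  qm set <vmid> --cpu host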

For graphics, another cool thing is intel iGPU pci passthrough - I have had success with this when running esxi on my 2018 mac mini https://williamlam.com/2020/06/passthrough-of-integrated-gpu...

a month ago

unquietwiki

* Proxmox can emulate VMware adapters. https://pve.proxmox.com/wiki/QEMU/KVM_Virtual_Machines

* If you don't mind dedicating the video card to a VM, you can do PCI-passthrough. https://pve.proxmox.com/wiki/PCI_Passthrough

a month ago

oneplane

Apple has been adding a lot of virt and SPICE things IIRC. Some of it isn't supported in VMware (it lacks a bunch of standard virt support), but the facilities are growing instead of shrinking, which is a good sign.

On Proxmox you can do the same. You're going to need OpenCore if you're not on a Mac indeed. But if you're not on a Mac you're breaking the EULA anyway.

a month ago

rufugee

Does this work with VMware Workstation as well? I'd love to run macOS in a VM on my Linux desktop for the few apps I have to use on macOS...

a month ago

justinclift

This works pretty well for running macOS on Linux:

https://github.com/notAperson535/OneClick-macOS-Simple-KVM/

Mostly used it when trying to track down reported macOS bugs for an OSS project, so maybe once every few months. But it's worked quite well at those times. :)

a month ago

lathiat

quickemu also does the job, not just for macOS but many operating systems

"Quickly create and run highly optimised desktop virtual machines for Linux, macOS and Windows; with just two commands. You decide what operating system you want to run and Quickemu will figure out the best way to do it for you." https://github.com/quickemu-project/quickemu

a month ago

justinclift

Thanks, hadn't come across that before. :)

a month ago

zerkten

It has in the past for me, but I haven't run it since 2021.

a month ago



bluedino

It's the ecosystem.

Sure, your organization is spending another million dollars on VMware this year, but what are the options?

* Your outsourced VMware-certified experts don't actually know that much about virtualization (somehow)

* Your backup software provider is just now researching adding Proxmox support (https://www.theregister.com/2024/01/22/veeam_proxmox_oracle_...)

* A few years ago you 'reduced storage cost and complexity' by moving to VMware vSAN, now you have a SAN purchase and data migration on your task list

* The hybrid cloud solution that was implemented isn't compatible with Proxmox

* The ServiceNow integration for VMware works great and is saving you tons of time and money. You want to give that up?

* Can you live without logging, reporting, and dashboards until your team gets some free time?

a month ago

zozbot234

With a million dollars per year to play with, you should ultimately be able to replace all of these. Especially since Proxmox isn't lacking its own third-party support options (and being built on FLOSS tech still leaves you with a lot more flexibility).

a month ago

technion

I'll agree in theory but to play devil's advocate:

* I did the VCP4 and 5 courses. It's entirely a sales certification. I mean, it's a technical certification, but I've never run into anyone who certified for the purpose of running an organisation's tech. Rather, you certify for the purpose of your company being able to sell the product. Note also that much of VMware's training focus lately has been on things outside their main virtualisation, like Horizon or their MDM product.

* Accurate. But I don't think it'll be far off.

* Proxmox does Ceph out of the box. I'll also add that it's very easy to manage, unlike vSAN. I'll further add that none of the VMware training and certifications I've ever done covered vSAN; all the courses assume someone bought a SAN.

* All the "hybrid cloud" pushed at least by Microsoft completely assumes you're on Hyper-V and is irrelevant to VMware.

* I've consulted to an awful lot of VMware organisations and I've never seen ServiceNow integration in place. I'm sure it's relevant for some people.

a month ago

wkat4242

Their MDM product was AirWatch, which was pretty amazing until VMware bought it and stopped developing core features in favour of cruft nobody wanted, like VDI integration.

Now AirWatch is surpassed even by Intune.

a month ago

oneplane

All of those points would also assume:

* You are big enough to need that and actually implement it

* You have the budget to do so

* You actually have the need to do that in-house

If you are at that scale but you don't have the internal knowledge, you were going to get bitten anyway. If you are not at that scale, you were already bitten and you shouldn't have been doing it anyway.

a month ago

tlamponi

> It's the ecosystem.

Definitely, and situations like the Broadcom one IMO just underline that as a company you should never ever get your core infra locked into a proprietary vendor's ecosystem, as that is just waiting to get squeezed, which they can do for the reasons you laid out.

> Your outsourced VMware-certified experts don't actually know that much about virtualization (somehow).

That should be a wake-up call to have some in-house expertise for any core infra you run, at least as a mid-sized or bigger company. Most projects targeting the enterprise, like Proxmox VE, provide official training for exactly that reason.

https://proxmox.com/en/services/training

> * Your backup software provider is just now researching adding Proxmox support (https://www.theregister.com/2024/01/22/veeam_proxmox_oracle_...)

Yeah, that's understandable: one wants to avoid switching both the hypervisor that hosts core infrastructure and the backup solution that holds all data, often even from the whole period a company legally needs to retain it.

But as you saw, even the biggest backup player sees enough reason to hedge their offerings and takes Proxmox VE very seriously as an alternative; the rest is a matter of time.

> A few years ago you 'reduced storage cost and complexity' by moving to VMware vSAN, now you have a SAN purchase and data migration on your task list

No, you should rather evaluate Proxmox's Ceph integration instead of getting yet another overly expensive SAN box. Ceph allows you to run powerful and near-indestructible HCI storage while avoiding any lock-in, since Ceph is FLOSS, many companies provide support for it, and other hypervisors can use it.

> * The hybrid cloud solution that was implemented isn't compatible with Proxmox.

> * The ServiceNow integration for VMware works great and is saving you tons of time and money. You want to give that up?

That certainly needs more work and is part of the chicken and egg problem that backup support is (or well, was) facing, but also somewhat underlines how lock-in works.

> * Can you live without logging, reporting, and dashboards until your team gets some free time?

Proxmox VE has some integrated logging and metrics, and provides native support for sending to external metric servers; we use that for all of our infra (which runs on a dozen PVE servers in various datacenters around the world) with great success and not much initial implementation effort.
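
For anyone curious, the external metrics part is a small config; from memory it looks roughly like this in /etc/pve/status.cfg (see the Metric Server chapter of the docs for the exact syntax; server and port here are placeholders):

  influxdb: metrics
      server 192.0.2.10
      port 8089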

So yeah, it's the ecosystem, but there are alternatives for most things and just throwing up your hands only signals to those companies that they can squeeze you much tighter.

a month ago

mavhc

Your outsourced experts are actually just people with Google.

Proxmox on zfs means zfs snapshot send/receive, simple. I made my own immutable zfs backup system for £5
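
The whole loop is a couple of commands, roughly (dataset, snapshot and host names are placeholders):

  zfs snapshot rpool/data/vm-100-disk-0@nightly
  zfs send -i @previous rpool/data/vm-100-disk-0@nightly | \
    ssh backupbox zfs recv tank/backup/vm-100-disk-0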

a month ago

Helmut10001

I've been using Proxmox at home for 5 years, mostly Docker nested in unprivileged LXCs on ZFS for performance reasons. I love the reliability; Proxmox has never let me down. They churn out constant progress without making too much noise. No buzzwords, no advertising, just a good reliable product that keeps getting better.

a month ago

dusanh

Unprivileged LXCs? Interesting, I thought containers would require a privileged LXC. At least, that is my takeaway from trying to run Podman in a nesting-enabled but unprivileged LXC under a non-root user. I kept running into

> newuidmap: write to uid_map failed: Operation not permitted

I tried googling it, tried some of the solutions, but reached the conclusion that it's happening because the LXC is not privileged.

a month ago

LilBytes

You have to map GIDs and UIDs from the Proxmox host to the LXC to allow bind mounts to work, as one example.

Proxmox doco for unprivileged LXC is here: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers

a month ago

dusanh

Replying to my own comment for posterity. I was able to figure it out!

In the LXC, I created a new non-root user and then set its subuid and subgid to a LOWER number than what every tutorial about rootless Podman recommends (devel being my non-root user here):

  # cat /etc/subuid /etc/subgid
  devel:10000:5000
  devel:10000:5000
Then I also had to edit the configuration for the LXC itself on the Proxmox host to allow tun and have it created on container boot:

   # cat /etc/pve/lxc/{lxc_vmid}.conf
   ...truncated...
   lxc.cgroup2.devices.allow: c 10:200 rwm
   lxc.mount.entry: /dev/net dev/net none bind,create=dir
Note: I have no idea why a lower number of ids works...
22 days ago

Helmut10001

Ah.. apologies for my misguidance below. It made me realize that I wrote these blog posts with a VM on another Hypervisor, not on my Proxmox/LXC (just the Docker guide that I haven't yet transitioned to rootless-in-unprivileged-lxc).

See the explanation here [1]: unprivileged LXCs on Proxmox seem to be restricted to the uid range below 65536 (UIDs/GIDs used _inside_ the LXC are mapped to 100000:165536 outside the LXC, on the host).

In order to use subuids/gids > 65536 inside the LXC, add a mapping to the LXC config:

    root:100000:65536
to /etc/subgid and /etc/subuid.

Now you'll have 100000 to 165536 available inside the LXC, where you can add:

    devel:100000:65536
to the /etc/subgid and /etc/subuid inside the LXC, for nested rootless podman/docker.

As a consequence, you're mapping the devel user to the same range as the LXC root user. In other words, processes inside the LXC and inside the rootless podman could run in the same namespace on the Proxmox host. If you don't want that, you'll need to provide additional ranges to the LXC (e.g. root:100000:165536) and then map `devel` to e.g. 200000 to 265536 (devel:200000:265536).

* I did not actually test all of the above.

[1] https://forum.proxmox.com/threads/how-to-use-uids-gids-highe...

22 days ago

Helmut10001

I once wrote a post about Docker in unprivileged LXC on ZFS [1]. The post is a little bit outdated, as it is much simpler today with ZFS 2.2.0, which is natively supported. There's also a more recent post that shows how to run rootless docker [2], with updated uid-mappings. Both may be helpful, have a look.

The advantage of using LXC for me is resource consumption and separation of concerns. I have about 35 Docker containers spread over 10 LXCs. The average CPU use is 1-3% and I only need about 10GB of memory (even with running bigger containers like Nextcloud, Gitlab, mailcow-dockerized etc.). With docker-compose.yml's, automatic updates are easy and robust.

[1]: https://du.nkel.dev/blog/2021-03-25_proxmox_docker/

[2]: https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless...

a month ago

dusanh

Thank you, this made me realize I assigned a wrong (too small) number for the uids. It did not fix my issue, however. I still see

  (dev) $ podman info
  ERRO[0000] running `/usr/bin/newuidmap 3427 0 1000 1 1 100000 65536`: newuidmap: open of uid_map failed: Permission denied
  Error: cannot set up namespace using "/usr/bin/newuidmap": exit status 1
I tried a solution I found on Red Hat's Customer Portal:

  (root) # setcap cap_setuid+ep /usr/bin/newuidmap
  (root) # setcap cap_setgid+ep /usr/bin/newgidmap
Also did not work. I can run

  (root) # podman info 
just fine as root. This leads me to believe there are some other problems with my non-root user permissions.

EDIT: It probably makes little sense to run rootless on top of an already unprivileged LXC. I just wanted to give vscode server its own non-root user in there. Oh well...

a month ago

Helmut10001

Yes, just start from scratch and provide uid mappings from the beginning. It looks like those uids were set before the mappings were added, and it's trying to access uids it isn't allowed to access.

I used rootless docker in an unprivileged LXC because the Postgres Docker image (e.g.) will try to set up a non-root user by default. In an unprivileged LXC, this means it will try to access very large uids (>100000), which are not available unless explicitly prepared.

a month ago

dusanh

That actually did not do anything different for me. I did the following:

1) Created a new LXC.

2) As root, I created a new user "devel"

3) For the "devel" user set both subuid and subgid to devel:100000:65536

4) As root, installed podman

5) In another SSH session, logged in as "devel" and ran "podman version"

Same error as before. This is in a Debian 12 LXC running on Proxmox.

a month ago

Helmut10001

I am also using Docker in Debian 12 LXC on Proxmox. I am not sure what has gone wrong here.

a month ago

dusanh

Was there anything extra you had to do on the host itself?

a month ago

Helmut10001

I described the full process here [1]. The only thing that seems to differ is podman for you.

Ah, I see:

> 4) As root, installed podman

I installed docker as the non-root user. See my Mastodon post, there's a specific procedure to install Docker in a user namespace ("devel" in your case).

[1]: https://du.nkel.dev/blog/2021-03-25_proxmox_docker/

25 days ago

itopaloglu83

Very swift move by Proxmox. For context, VMware recently increased their prices by as much as 1200% for some customers.

a month ago

rwmj

Tons of products like this have existed for years. Virt-v2v (which I wrote) since 2007/8, Platespin, Xen XCP, AWS's tooling, Azure Migrate etc.

a month ago

itopaloglu83

Yes, that’s true. But this is not about the product but about business practices of Broadcom. They tend to do sharp price increases when they purchase a product line.

ServeTheHome talked about this a while ago. https://youtu.be/peH4ic7g5yc

a month ago

lelandbatey

Important because VMware's been acquired by Broadcom (November 22, 2023) and Broadcom's been turning the screws on customers to get more money. Many folks are looking for alternatives. More context:

2024/02/26 Can confirm a current Broadcom VMware customer went from $8M renewal to $100M https://news.ycombinator.com/item?id=39509506

2024/02/13 VMware vSphere ESXi free edition is dead https://news.ycombinator.com/item?id=39359534

2024/01/18 VMware End of Availability of perpetual licensing and associated products https://news.ycombinator.com/item?id=39048120

2024/01/15 Order/license chaos for VMware products after Broadcom takeover https://news.ycombinator.com/item?id=38998615

2023/12/12 VMware transition to subscription, end of sale of perpetual license https://news.ycombinator.com/item?id=38615315

a month ago

blaerk

I really hope the crazy price increases on VMware products will end the use of ESXi and the rest of the vSphere suite; it's one of the worst applications and APIs I have ever had the displeasure of working with!

a month ago

candiddevmike

VMware has a track record of pretty great reliability across a vast array of hardware. Yes, the APIs suck, but they're a case study on tech debt: vSphere is basically the Windows equivalent of datacenter APIs. They chose the best technology at the time (2009, which meant SOAP, Powershell, XML, etc) and had too much inertia to rework it.

a month ago

mianos

Not to mention how flakey it is at scale. There is always some VMware guy who replies to me saying how good it is, but if you have thousands of VMs it is a random crapshoot. Something you just don't see with, say, AWS and Azure at similar scale. It reeks of old age and hack upon hack over many years, and that is saying something when compared to AWS.

a month ago

oneplane

The VMware APIs are indeed pretty bad, even the ones on their modern products for some reason (i.e. NSX etc.), where they did adopt more modern methods but still managed to pull a Microsoft with 'one API for you, a different API for us'.

Being pretty bad doesn't mean they don't work, of course, but when the best a product has to offer is clickops, they missed the boat about 15 years ago.

a month ago

fh973

I really hope that the price increase creates a business opportunity for new technology. This space has been plagued by subpar "free" alternatives (Openstack, Kubernetes) for a decade.

a month ago

zettabomb

I can't concur. VMware was the leader in virtualization technology for a long time, and honestly nothing is quite as simple to start with as ESXi if you've never used a type 1 hypervisor before. I'm not so familiar with the APIs, so perhaps you're correct in that sense.

a month ago

nolok

> nothing is quite as simple to start with as ESXi if you've never used a type 1 hypervisor before

Not sure where ESXi is at lately on that level, but the latest Proxmox is really, really simple to start with if you've never used a hypervisor. You boot from the USB drive, press yes a few times, open the ip:port it gives you, and then you can click "create VM", next next next, here is the ISO to boot from, and that's it.

Any tech user with some vague knowledge of virtual machines, or who has even run VirtualBox on their computer, could do it, and the more advanced functions (from proper backups and snapshots to multi-node replication and load balancing) are absurdly simple to figure out in the UI.

I can't speak to the performance or quality of one against the other, but in pure ease of approach Proxmox is doing very, very well.

a month ago

mavhc

also does zfs raidz boot in the installer

a month ago

MrDarcy

Also does Ceph in the GUI, for near-instant live migrations.

a month ago

moondev

By application do you mean vCenter? It's in an entirely different league than proxmox.

https://i0.wp.com/williamlam.com/wp-content/uploads/2023/04/...

https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....

a month ago

MrDarcy

It’s not in a different league. I’ve used both in production. As others have said vSphere breaks down with thousands of VM’s, and worse the vSwitch implementation is buggy and unreliable as soon as you add more than a couple to a cluster.

a month ago

moondev

> the vSwitch implementation is buggy and unreliable as soon as you add more than a couple to a cluster.

Next time try dSwitch (distributed switch) instead of vSwitch. It's designed for cluster use and much more powerful (and easier to manage across hosts). Manually managing vSwitches across a cluster sounds like torture.

a month ago

kazen44

I would disagree with you there, especially because there is very little on the SDN front which matches NSX-T in terms of capabilities. This is something where VMware has been ahead; the only other people with the same capabilities seem to be the hyperscalers.

a month ago

c0l0

Take a look at Proxmox SDN features: https://pve.proxmox.com/pve-docs/chapter-pvesdn.html (some of it is still in beta, I think).

I think it comes pretty close - close enough for probably most but the very largest of users, who, I think, should probably have tried to become hyperscalers themselves, instead of betting the farm and all the land around it on VMware (by Broadcom).

a month ago

kazen44

The thing it is mainly missing is multi-tenancy self-service (the IPAM integration seems very nice, though).

NSX allows you to create separate clusters which host VMs that run the routing and firewalling functionality.

a month ago

oneplane

NSX-T and what hyperscalers do is essentially orchestration of things that already exist anyway. The load balancing in NSX is mostly just some OpenResty and Lua, which has been around for quite a while. Classic Q-in-Q and bridging also cover practically all of the classic L2 & L3 networking that tends to be touted as 'new', and you could even do that fully orchestrated back when Puppet was the hot new thing.

Some things (that were created before NSX) may have come from internet exchanges and hyperscalers, like OpenFlow, P4, and FRR, but those were not really missing parts required to do software-defined networking. If anything, the only thing you really needed for SDN was Linux, and the only real distinction between SDN and non-SDN was hardwired ASICs in the network fabric (well, not hard-hardwired, but with limited programmability or 'secret' APIs).

a month ago

SV_BubbleTime

We went from $66 last year to $3600 this year.

There won’t be another year.

a month ago


F00Fbug

I spent 15 years managing a VMware-centric data center. I ran the free version at home for at least 5 years. When I ran out of vCPUs on my free license I switched to Proxmox and the migration was almost painless. This new tool should help even more.

For most vanilla hosting, you could get away with Proxmox and be just fine. I've been running it for at least 5 years in my basement and haven't had a single hiccup. I bet a lot of VMware customers will be jumping ship when their licenses expire.

a month ago

whalesalad

I did this recently and it was honestly a walk in the park. I was quite pleasantly surprised when all my VMs just booted up and resumed work as normal. The only thing I was worried about was the MAC addresses used for dedicated DHCP leases, but all of that "just worked" too!

a month ago

moondev

If Proxmox supported OVA and OVF it would be huge. It seems technically possible, as there is a new experimental KVM backend for VirtualBox which supports OVA.

At the end of the day an OVA is just machine metadata packaged as XML with all required VM artifacts; there are also some cool things like support for launch variables. Leveraging the format would bring a bunch of momentum, considering all the existing OVAs in the wild.

a month ago

the_swd

Proxmox documentation does mention OVF support https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Import_OV...

Seems a bit barebones, as in no support for a nice OVF properties launch UI, but one should be able to extract an OVA to an OVF and VMDK and manually edit the OVF with appropriate properties.
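
Something like this, in principle (untested; an OVA is just a tar archive of the OVF descriptor plus the disks):

  tar -xvf appliance.ova
  qm importovf 9000 appliance.ovf local-lvm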

I actually had plans this week to try exactly that...

a month ago

moondev

Interesting thanks for sharing. Surfacing this in the UI would be great if it works well for sure.

Another handy feature is the Content Library for organizing and launching OVA/OVF, as well as launching an OVA directly from a URL without needing to download it, next to the CLI.

This makes me think there could be an opportunity for a "Photopea (KVM GUI) for vCenter", in the same manner that Photopea is a clean-room implementation of the Photoshop UI/UX.

a month ago

tlamponi

  > Interesting thanks for sharing. Surfacing this in the UI would be great if it works well for sure.
That's on the roadmap, from the original forum post linked here:

  > Q: Will other import sources be supported in the future?
  > A: We plan to integrate our OVF/OVA import tools into this new stack in the future.
  >    Currently, integrating additional import sources is not on our roadmap, but will be re-evaluated periodically.
a month ago

Underphil

I've set up a ton of virtual appliances that way. An OVA is just a regular tar archive with the config and vmdk(s).

a month ago

publicmail

Do you have any more info on this KVM backend for VirtualBox? I love VirtualBox (I know, I know) but the one annoying thing is the dependency on out-of-tree kernel modules (at least they're open source though).

a month ago

RamRodification

From the post in case you missed it:

Q: Will other import sources be supported in the future?

A: We plan to integrate our OVF/OVA import tools into this new stack in the future. Currently, integrating additional import sources is not on our roadmap, but will be re-evaluated periodically.

a month ago

shrubble

I am researching whether to buy puts on AVGO (Broadcom, owner of VMware) since I believe their VMware revenue will crater in 12 months or so. They also took on $32 billion in debt to buy VMW, which will tank their stock price, I think.

a month ago

gruturo

It never takes as little time as you (or others, myself included) think it should. Big companies have a lot of inertia, and changing anything which is working today, even if it saves a lot, attaches your name to the risk that it will fail horribly, so you'd be reluctant to suggest it, especially since it's usually not your own money (your budget maybe, but not your own money).

Broadcom knows this very well and likely turned the price screw exactly right - just before the breaking point for the critical mass of their customers.

What I think will lead to the eventual implosion of VMware's market share, on a longer timescale, is the removal of free ESXi. Many people acquire familiarity with it in small-scale/home/demo labs or PoC prototypes, then recommend going with what they're familiar with. This is what led Microsoft to where they are now: always giving big discounts to students and never going too hard on those running cracked copies. They saw it as an investment and they were bloody right. If the product had been better it would completely dominate now, but even as shoddy as it is, it's a huge cash cow.

a month ago

bityard

All of AVGO/Broadcom's moves with VMware have been to keep revenue somewhat steady by focusing on their biggest customers locked into their ecosystem, while drastically cutting back everything else to lower expenses. This should produce excellent short-term financial results which the market will very likely reward with a higher stock price over the next year or two. The board and C-suites know what they are doing.

Of course, destroying the trust they had with their customers means the long-term prospects of VMware are not so good.

a month ago

gonzo

So they sell the husk of VMware back to Dell when they're done.

a month ago

candiddevmike

I'd exercise caution; in my experience, it'll take years for companies to transition from VMware to somewhere else. In the interim, their revenue will most likely pop as they're squeezing the shit out of these unlucky souls.

a month ago

mvdwoord

I concur; being close to the fire, I can say it will take years for large organizations to move off their VMware stacks. Inertia of large organizations is a thing, but mostly there are so many custom integrations with other systems, lots of them tied up in the vSphere stack.

SDN is one thing, but the amount of effort put into vROPS / vRA / vRO etc. is not easily replaced. Workflows integrating with backups, CMDB, IAM, security and whatnot have no catch-all migration path via some import wizards.

Meanwhile, Broadcom will happily litigate where necessary and invoice their way to a higher stock price.

$0.02

a month ago

halfcat

It’s not about whether you think their revenue will crater (or any other fundamentals).

It’s about answering the question: Why is the current price of puts wrong?

a month ago


Denote6737

Proxmox striking whilst the iron is still hot. Impressive.

a month ago

adr1an

For the sake of completeness, xcp-ng is an alternative for migrating VMware ESXi VMs too!

a month ago

fulafel

No mentions yet in the comments of the widespread VMware break-in/ransomware epidemic of recent times as a reason to move. I hope many people are motivated by that and not just the price increases.

a month ago

rwmj

Does this do the hard bit, i.e. installing virtio drivers during conversion?

a month ago

bityard

All of the most popular Linux distros tend to have the virtio drivers installed by default.

a month ago

rwmj

Not in the initramfs, which is rather important if you want them to boot without having to use slow emulated IDE. Then there are Windows guests.
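
For Linux guests you can regenerate the initramfs with the virtio modules before conversion. A sketch, run inside the guest (initramfs-tools for Debian-family, dracut for RHEL-family):

  # Debian/Ubuntu
  printf 'virtio_blk\nvirtio_scsi\nvirtio_pci\n' >> /etc/initramfs-tools/modules
  update-initramfs -u

  # RHEL/Fedora
  dracut --add-drivers "virtio_blk virtio_scsi virtio_pci" -f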

a month ago

prmoustache

It has been years since I used Windows VMs; can't you install the .inf driver files beforehand?

a month ago

rwmj

Windows needs at least the boot block device driver to be installed before conversion (else it cannot find the boot disk), and there are many other changes you need to make. Virt-v2v does all this stuff during conversion, and it's hard to get right.

a month ago

justinclift

Oh, that would be a smart move. :)

If it doesn't, any idea if it's something they could automate easily?

a month ago

rwmj

In recent versions:

  virt-customize -a disk.img --inject-virtio-win <METHOD>
https://libguestfs.org/virt-customize.1.html

However they'll also be missing out on all the other stuff that virt-v2v does.

a month ago

luzer7

Does anyone have a good _basic_ guide on LVM/LVM-thin? I'm having a hard time wrapping my head around LVM and moving the vmdk to it. Mainly a Windows admin with some Linux experience.

I understand that LVM holds data, but when I make a Windows VM in Proxmox it stores the data in an LVM volume(?), as opposed to ESXi or Hyper-V making a VHD or VMDK.

Kinda confusing.

a month ago

abbbi

Proxmox uses LVM for direct-attached raw volumes. LVM is just a logical volume manager for Linux, which gives you more features than old-fashioned disk partitioning. I guess they chose this path for Windows virtual machine migration because Windows that was running on VMware usually does not have the virtio drivers installed out of the box to support QEMU's virtio disk bus. Otherwise the hypervisor has to emulate an IDE or SCSI bus, which comes with great overhead performance-wise (in the case of migration).

So a direct-attached LVM volume is the best solution performance-wise. In the VMware world this would be a direct-attached raw device, either from local disk or SAN.

For a fresh install on Proxmox it's better to choose qcow2 as the disk image format (QEMU's disk format, comparable to VHDX or VMDK) with a virtio-scsi bus, and add the virtio drivers during Windows setup.
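
To make that concrete, a rough import sketch (assuming the default 'local-lvm' storage backed by the 'pve' volume group; 100 is the target VM ID):

  qm importdisk 100 windows.vmdk local-lvm   # the VMDK's contents land in a new logical volume
  lvs pve                                    # the disk shows up as an LV, not as a file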

a month ago

m463

I ran into the same sort of documentation desert.

off the top of my head:

- keep in mind there is LVM and LVM2, and proxmox now uses lvm2

- I don't understand the thinpool allocation. You don't have to use lvm-thin if you don't want to deal with oversubscribed volumes, or don't care about snapshots or cloning storage.

- get to know "pvesm"; a lot of what you can do in the GUI you can also do with it

- when making linux VMs, I found it easier to use separate devices for the efi partition and the linux partition, such as:

  efidisk0: local-lvm:vm-205-disk-0,size=4M
  virtio0: local-lvm:vm-205-disk-1,iothread=1,size=1G
  virtio1: local-lvm:vm-205-disk-2,cache=writeback,discard=on,iothread=1,size=32G
(virtio0 = efi, virtio1 = /)

and I can mount/expand/resize /dev/mapper/big2-vm--205--disk--2 without having to deal with disk partitions
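
For growing a disk later, the Proxmox-native route should be a one-liner that resizes the underlying LV (the filesystem inside the guest still has to be grown separately):

  qm resize 205 virtio1 +8G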

a month ago


tiberious726

Anyone try to replace vSphere with the high-availability add-on for RHEL?

a month ago

justinclift

If you're open to alternatives, Proxmox does HA.

It also has some decent clustering capabilities enabling online VM migration between hosts (equivalent to vMotion), which can go a long way towards solving related use cases. :)
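
Once a cluster is set up, an online migration is a one-liner, e.g. (VM ID and node name are placeholders):

  qm migrate 100 pve-node2 --online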

a month ago

rafaelturk

Proxmox is great! I just wish that they had a better initial plan; plans start at €1020.

a month ago

subract

I see plans with access to the Enterprise repos starting at €110/yr, and plans with 3 support tickets starting at €340. €1020 is the starting price for a plan with a 2hr SLA.

https://shop.proxmox.com/index.php?rp=/store/proxmox-ve-comm...

a month ago

justinclift

For non-production purposes, you're probably fine using the "No subscription" repositories. They're an official thing, although the website seems to go out of its way not to mention it.

You don't actually need a subscription to run Proxmox, it's FOSS software after all.

a month ago

zer00eyz

I have been running proxmox at home for a few months now.

It has been, to say the least, an adventure. And I have nothing but good things to say about Proxmox at this point. It's running not only my home-related items (MQTT, Home Assistant), it also plays host to some of the projects I'm working on (postgres, go apps, etc...) rather than running some sort of local dev setup.

If you need to exit vmware, proxmox seems like a good way to go.

a month ago

eddieroger

I think "adventure" is how I'd put it, too. Perhaps that which I found most surprising was the difference in defaults between the two. ESXi gave me what I considered pretty good defaults, where Proxmox were more conservative or generic (struggling to find the right word). For example, I was surprised that I had to pick an option for CPU type instead of it defaulting to host, which I would have expected. Saying that, I never checked on ESXi, but I never had reason to look in to performance disparities there.

Once I got past the surface, I have really grown to like it, expanding my footprint to use their backup server, too. Proxmox makes you work for it, but it's worth it.

a month ago

SpecialistK

> I was surprised that I had to pick an option for CPU type instead of it defaulting to host

I believe the rationale for this is to prevent issues when migrating to different hosts that may not have the same CPU or CPU features. Definitely a more "conservative" choice - maybe it should be a node-wide option or only default to a generic CPU type when there is more than 1 node.

a month ago

rcarmo

I’ve been doing that for almost two years now, including ARM nodes (via a fork). It’s been awesome, and even though I am fully aware Proxmox does not match the entire VMware feature set (I used to migrate VMware stuff to Azure), it has been a game changer in multiple ways.

Case in point: just this weekend a drive started to die on one of my hosts (I still use HDDs on older machines). I backed up the VMs on it to my NAS (you can do that by just having a Samba storage entry defined across the entire cluster), replaced the disk, restored, done.

a month ago

d416

Your experience is very relatable. My first Proxmox adventure began with installing Proxmox 8 on 2 Hetzner boxes: one CPU-only, one with a GPU. I spent two straight weekends on the CPU box, and just when I was about to give up on Proxmox completely, I had a good night's sleep and things finally 'clicked'. Now I'm drinking the Proxmox koolaid 100% and making it my go-to OS.

For the GPU box I completely abandoned the install after attempting the gymnastics around GPU passthrough. I like Proxmox, but I'm not a masochist. Looking forward to the day when that just works.

a month ago

irusensei

I appreciate projects like Proxmox, but it must also be said that you can achieve the same functionality sans the UI with tools available on most Linux distributions: libvirt, lx{c,d}, podman etc.

a month ago

RamRodification

A big one hiding in that "etc" I think is Ceph. Proxmox has a very nice UI for setting it up easily.

a month ago

sunshine-o

I would love to see a serious comparison (features & performance) between VMWare ESXi, Proxmox VE and let's say a more stock RHEL or Ubuntu. And maybe even include FreeBSD/bhyve.

Because yes, in terms of core functionality it should be in the same ballpark. And in terms of UI, Virtual Machine Manager [0] was not that bad.

[0] https://virt-manager.org/

a month ago

zer00eyz

True...

And Proxmox is just a skin on LXC and QEMU/KVM.

I will say that, as I have just started playing with the LXC API, having the Proxmox UI as a quick and easy visual cross-check has been lovely.

Podman is an amazing alternative to Docker, can't say enough good things about it.

a month ago

tlamponi

> And Proxmox is just skin on lxc and quemu/kvm.

Not really. We have a full-blown REST API that provides storage plugins for a dozen technologies, disk management, system metrics reporting, and management of LXC and QEMU (as in a full-blown LXD/Incus and libvirt replacement), which alone probably takes up a third of our code base. On top of that come replication, live migration, local-storage (live) migration, backup management, HA, good integration with our access control management including multi-factor authentication, integration with LDAP/AD or SSO like OpenID Connect, software-defined storage and network integrations, our own kernel, QEMU and LXC builds, and hundreds of other features. Don't even get me started on the devs required on each project to continue integration and upstream development, and to provide enterprise support that can actually fix problems.

In other words, wrapping QEMU or LXC to provide one's own custom VMs/CTs might be easily doable, but that isn't even a percent of what Proxmox VE offers you.

If a thin UI around LXC/QEMU were all one needed to be competitive with VMware, then every web dev would be stupid not to create one as a weekend project; but the reality is that much more is required to actually provide the whole ecosystem a modern hypervisor stack needs to even be considered for any production use case.

a month ago

sureglymop

Can also highly recommend Open vSwitch, and it integrates neatly into netplan.

a month ago
