Debian Patches
Status for libvirt/11.3.0-3+deb13u1
| Patch | Description | Author | Forwarded | Bugs | Origin | Last update |
|---|---|---|---|---|---|---|
| backport/qemuProcessStartWithMemoryState-Don-t-setup-qemu-for-inco.patch | qemuProcessStartWithMemoryState: Don't setup qemu for incoming migration when reverting internal snapshot. The memory/device state of the VM for an internal snapshot is restored by qemu itself via a QMP command and is taken from the qcow2 image, so we don't actually do any form of incoming migration. Commit 5b324c0a739fe00, which refactored the setup of the incoming migration state, didn't take the above into account and inadvertently caused qemu to be started with '-incoming defer' also when libvirt wants to revert to an internal snapshot. When qemu expects incoming migration it doesn't activate the block backends, as that would cause locking problems and image inconsistency, but it also doesn't allow the use of the images. Since the block backends are not activated, qemu then thinks that they don't support internal snapshots and reports: error: operation failed: load of internal snapshot 'foo1' job failed: Device 'libvirt-1-format' is writable but does not support snapshots. Due to this bug it's not possible to revert to internal snapshots in libvirt-11.2 and libvirt-11.3. (cherry picked from commit 889d2ae289cd95d612575ebc7a4e111ac33b0939) | Peter Krempa <pkrempa@redhat.com> | not-needed | debian | https://gitlab.com/libvirt/libvirt/-/commits/889d2ae289cd95d612575ebc7a4e111ac33b0939 | 2025-05-13 |
| debian/apparmor_profiles_local_include.patch | apparmor_profiles_local_include: Include local apparmor profile | Felix Geyer <fgeyer@debian.org> | not-needed | | | 2015-08-11 |
| debian/Use-sensible-editor-by-default.patch | Use sensible-editor by default. It is the reasonable default for Debian. | Andrea Bolognani <eof@kiyuko.org> | not-needed | | | 2020-08-18 |
| backport/qemu-Be-more-forgiving-when-acquiring-QUERY-job-when-form.patch | qemu: Be more forgiving when acquiring QUERY job when formatting domain XML. In my previous commit of v11.0.0-rc1~115 I made the QEMU driver implementation of virDomainGetXMLDesc() (qemuDomainGetXMLDesc()) acquire a QUERY job. See its commit message for more info. But this unfortunately broke apps which fetch domain XML during incoming migration (like virt-manager). The reason is that for incoming migration the VIR_ASYNC_JOB_MIGRATION_IN async job is set, but the mask of allowed synchronous jobs is empty (because QEMU can't really talk on the monitor). This makes virDomainObjBeginJob() fail, which in turn makes qemuDomainGetXMLDesc() fail too. It makes sense for qemuDomainGetXMLDesc() to acquire the job (e.g. so that it's coherent with another thread that might be in the middle of a MODIFY job). But failure to dump XML may be treated as a broken daemon (e.g. virt-manager does so). Therefore, still try to acquire the QUERY job (if the job mask permits it) but do not treat failure as an error. (cherry picked from commit 441c23a7e626c13e6df1946303a0bc0a84180d1c) | Michal Privoznik <mprivozn@redhat.com> | not-needed | | https://gitlab.com/libvirt/libvirt/-/commits/441c23a7e626c13e6df1946303a0bc0a84180d1c | 2025-06-16 |
| backport/tlscert-Don-t-force-keyEncipherment-for-ECDSA-and-ECDH.patch | tlscert: Don't force 'keyEncipherment' for ECDSA and ECDH. Per RFC 8813 [1], which amends RFC 5480 [2], the ECDSA, ECDH, and ECMQV algorithms must not have 'keyEncipherment' present, but our code did check for it. Add an exemption for known algorithms which don't use it. [1] https://datatracker.ietf.org/doc/rfc8813/ [2] https://datatracker.ietf.org/doc/rfc5480 (cherry picked from commit 11867b0224a2b8dc34755ff0ace446b6842df1c1) | Peter Krempa <pkrempa@redhat.com> | not-needed | debian | https://gitlab.com/libvirt/libvirt/-/commits/11867b0224a2b8dc34755ff0ace446b6842df1c1 | 2025-06-17 |
| backport/tls-Don-t-require-keyEncipherment-to-be-enabled-altoghthe.patch | tls: Don't require 'keyEncipherment' to be enabled altogether. Key encipherment is required only for the RSA key exchange algorithm. With TLS 1.3 it is not even used, as RSA is used only for authentication. Since we can't really check ahead of time when it's required, drop the check completely; GnuTLS will complain if it is unable to use RSA key exchange. In commit 11867b0224a2 I tried to relax the check for some elliptic curve algorithms that explicitly forbid it. Based on the above, the proper solution is to remove the check completely. (cherry picked from commit 8cecd3249e5fa5478a7c53567971b4d969274ea3) | Peter Krempa <pkrempa@redhat.com> | not-needed | debian | https://gitlab.com/libvirt/libvirt/-/commits/8cecd3249e5fa5478a7c53567971b4d969274ea3 | 2025-06-30 |
| backport/tests-virnettls-test-Drop-use-of-GNUTLS_KEY_KEY_ENCIPHERM.patch | tests: virnettls*test: Drop use of GNUTLS_KEY_KEY_ENCIPHERMENT. It's not needed with TLS 1.3 any more. (cherry picked from commit e67952b0e612c9ad3c3eec8bb692589602953ee8) | Peter Krempa <pkrempa@redhat.com> | not-needed | debian | https://gitlab.com/libvirt/libvirt/-/commits/e67952b0e612c9ad3c3eec8bb692589602953ee8 | 2025-07-01 |
| backport/daemon-Drop-log-level-of-VIR_ERR_NO_SUPPORT-to-debug.patch | daemon: Drop log level of VIR_ERR_NO_SUPPORT to debug. The error code signals that the API the user called is not supported by the driver. This can happen with some hypervisor drivers which don't have everything implemented yet. There's no point in spamming the log with it. (cherry picked from commit 37a1bd945899308d1c071bb885e5d1d9529d6b85) | Peter Krempa <pkrempa@redhat.com> | not-needed | debian | https://gitlab.com/libvirt/libvirt/-/commits/37a1bd945899308d1c071bb885e5d1d9529d6b85 | 2025-08-26 |
| backport/qemu-capabilities-Check-if-cpuModels-is-not-NULL-before-t.patch | qemu: capabilities: Check if cpuModels is not NULL before trying to dereference it. The accel->cpuModels field might be NULL if QEMU does not return CPU models. The following backtrace is observed in such cases: 0 virQEMUCapsProbeQMPCPUDefinitions (qemuCaps=qemuCaps@entry=0x7f1890003ae0, accel=accel@entry=0x7f1890003c10, mon=mon@entry=0x7f1890005270) at ../src/qemu/qemu_capabilities.c:3091 1 0x00007f18b42fa7b1 in virQEMUCapsInitQMPMonitor (qemuCaps=qemuCaps@entry=0x7f1890003ae0, mon=0x7f1890005270) at ../src/qemu/qemu_capabilities.c:5746 2 0x00007f18b42fafaf in virQEMUCapsInitQMPSingle (qemuCaps=qemuCaps@entry=0x7f1890003ae0, libDir=libDir@entry=0x7f186c1e70f0 "/var/lib/libvirt/qemu", runUid=runUid@entry=955, runGid=runGid@entry=955, onlyTCG=onlyTCG@entry=false) at ../src/qemu/qemu_capabilities.c:5832 3 0x00007f18b42fb1a5 in virQEMUCapsInitQMP (qemuCaps=0x7f1890003ae0, libDir=0x7f186c1e70f0 "/var/lib/libvirt/qemu", runUid=955, runGid=955) at ../src/qemu/qemu_capabilities.c:5848 4 virQEMUCapsNewForBinaryInternal (hostArch=VIR_ARCH_X86_64, binary=binary@entry=0x7f1868002fc0 "/usr/bin/qemu-system-alpha", libDir=0x7f186c1e70f0 "/var/lib/libvirt/qemu", runUid=955, runGid=955, hostCPUSignature=0x7f186c1e9f20 "AuthenticAMD, AMD Ryzen 9 7950X 16-Core Processor, family: 25, model: 97, stepping: 2", microcodeVersion=174068233, kernelVersion=0x7f186c194200 "6.14.9-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 29 May 2025 21:42:15 +0000", cpuData=0x7f186c1ea490) at ../src/qemu/qemu_capabilities.c:5907 5 0x00007f18b42fb4c9 in virQEMUCapsNewData (binary=0x7f1868002fc0 "/usr/bin/qemu-system-alpha", privData=0x7f186c194280) at ../src/qemu/qemu_capabilities.c:5942 6 0x00007f18bd42d302 in virFileCacheNewData (cache=0x7f186c193730, name=0x7f1868002fc0 "/usr/bin/qemu-system-alpha") at ../src/util/virfilecache.c:206 7 virFileCacheValidate (cache=cache@entry=0x7f186c193730, name=name@entry=0x7f1868002fc0 "/usr/bin/qemu-system-alpha", data=data@entry=0x7f18b67c37c0) at ../src/util/virfilecache.c:269 8 0x00007f18bd42d5b8 in virFileCacheLookup (cache=cache@entry=0x7f186c193730, name=name@entry=0x7f1868002fc0 "/usr/bin/qemu-system-alpha") at ../src/util/virfilecache.c:301 9 0x00007f18b42fb679 in virQEMUCapsCacheLookup (cache=cache@entry=0x7f186c193730, binary=binary@entry=0x7f1868002fc0 "/usr/bin/qemu-system-alpha") at ../src/qemu/qemu_capabilities.c:6036 10 0x00007f18b42fb785 in virQEMUCapsInitGuest (caps=<optimized out>, cache=<optimized out>, hostarch=VIR_ARCH_X86_64, guestarch=VIR_ARCH_ALPHA) at ../src/qemu/qemu_capabilities.c:1037 11 virQEMUCapsInit (cache=0x7f186c193730) at ../src/qemu/qemu_capabilities.c:1229 12 0x00007f18b431d311 in virQEMUDriverCreateCapabilities (driver=driver@entry=0x7f186c01f410) at ../src/qemu/qemu_conf.c:1553 13 0x00007f18b431d663 in virQEMUDriverGetCapabilities (driver=0x7f186c01f410, refresh=<optimized out>) at ../src/qemu/qemu_conf.c:1623 14 0x00007f18b435e3e4 in qemuConnectGetVersion (conn=<optimized out>, version=0x7f18b67c39b0) at ../src/qemu/qemu_driver.c:1492 15 0x00007f18bd69c5e8 in virConnectGetVersion (conn=0x55bc5f4cda20, hvVer=hvVer@entry=0x7f18b67c39b0) at ../src/libvirt-host.c:201 16 0x000055bc34ef3627 in remoteDispatchConnectGetVersion (server=0x55bc5f4b93f0, msg=0x55bc5f4cdf60, client=0x55bc5f4c66d0, rerr=0x7f18b67c3a80, ret=0x55bc5f4b8670) at src/remote/remote_daemon_dispatch_stubs.h:1265 17 remoteDispatchConnectGetVersionHelper (server=0x55bc5f4b93f0, client=0x55bc5f4c66d0, msg=0x55bc5f4cdf60, rerr=0x7f18b67c3a80, args=0x0, ret=0x55bc5f4b8670) at src/remote/remote_daemon_dispatch_stubs.h:1247 18 0x00007f18bd5506da in virNetServerProgramDispatchCall (prog=0x55bc5f4cae90, server=0x55bc5f4b93f0, client=0x55bc5f4c66d0, msg=0x55bc5f4cdf60) at ../src/rpc/virnetserverprogram.c:423 19 virNetServerProgramDispatch (prog=0x55bc5f4cae90, server=server@entry=0x55bc5f4b93f0, client=0x55bc5f4c66d0, msg=0x55bc5f4cdf60) at ../src/rpc/virnetserverprogram.c:299 20 0x00007f18bd556c32 in virNetServerProcessMsg (srv=srv@entry=0x55bc5f4b93f0, client=<optimized out>, prog=<optimized out>, msg=<optimized out>) at ../src/rpc/virnetserver.c:135 21 0x00007f18bd556f77 in virNetServerHandleJob (jobOpaque=0x55bc5f4d2bb0, opaque=0x55bc5f4b93f0) at ../src/rpc/virnetserver.c:155 22 0x00007f18bd47dd19 in virThreadPoolWorker (opaque=<optimized out>) at ../src/util/virthreadpool.c:164 23 0x00007f18bd47d253 in virThreadHelper (data=0x55bc5f4b7810) at ../src/util/virthread.c:256 24 0x00007f18bce117eb in start_thread (arg=<optimized out>) at pthread_create.c:448 25 0x00007f18bce9518c in __GI___clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 (cherry picked from commit e7239c619fcaf35b8b605ce07c5d5b15351b3a62) | anonymix007 <48598263+anonymix007@users.noreply.github.com> | not-needed | debian | https://gitlab.com/libvirt/libvirt/-/commits/e7239c619fcaf35b8b605ce07c5d5b15351b3a62 | 2025-06-04 |
| debian/Debianize-libvirt-guests.patch | Debianize libvirt-guests | Laurent Léonard <laurent@open-minds.org> | not-needed | | | 2010-12-09 |
| debian/Drop-inter-package-Also-lines-from-libvirtd.service.patch | Drop inter-package Also= lines from libvirtd.service. systemctl handles these lines gracefully even when the corresponding unit is not present, e.g. because the daemon-lock package is not installed, but deb-systemd-helper doesn't. As a temporary workaround until this limitation is addressed, drop the lines triggering the failure. Note that we would technically only need to drop the reference to virtlockd.socket, since the daemon-log package is a hard dependency of the daemon package and thus we know that virtlogd.socket is always going to be present, but being more aggressive for consistency's sake seems preferable. | Andrea Bolognani <eof@kiyuko.org> | not-needed | debian | | 2025-04-13 |
All known versions for source package 'libvirt'
- 11.9.0-1 (forky, sid)
- 11.3.0-3+deb13u1 (trixie-proposed-updates)
- 11.3.0-3 (trixie)
- 11.3.0-2~bpo12+1 (bookworm-backports)
- 9.0.0-4+deb12u2 (bookworm)
