Debian Patches

Status for zfs-linux/2.1.11-1+deb12u1

Each patch is listed with its name, description, and the DEP-3 header fields recorded in the patch: Author, Forwarded, Bugs, Origin, and Last-Update (where present).
1000-dont-symlink-zed-scripts.patch
  Description: Track default symlinks, instead of symlinking.
  Forwarded: invalid

1001-Prevent-manual-builds-in-the-DKMS-source.patch
  Description: Block manual building in the DKMS source tree.
   To avoid messing up future DKMS builds and the zfs installation,
   block manual building of the DKMS source tree.
  Author: unknown
  Forwarded: not-needed
  Origin: ubuntu
  Last-Update: 2017-10-06

1002-Check-for-META-and-DCH-consistency-in-autoconf.patch
  Description: Check for META and dch consistency in autoconf.
  Forwarded: invalid

1003-relocate-zvol_wait.patch
  Description: Relocate the executable path.
  Forwarded: invalid

1005-enable-zed.patch
  Description: Enable zed emails.
   The OpenZFS event daemon monitors pools. This patch enables the
   email sending function by default (if zed is installed). This is
   consistent with the default behavior of mdadm.
  Author: Richard Laager <rlaager@wiktel.com>
  Forwarded: not-needed

1006-zed-service-bindir.patch
  Description: Fix the path to the zed binary in the systemd unit.
   We install zed into /usr/sbin manually, whereas the upstream
   default is to install it into /sbin. Ubuntu packages also install
   zed into /usr/sbin, but they ship their own zfs-zed unit.
  Author: Chris Dos <chris@chrisdos.com>
  Forwarded: no
  Bugs: debian

1007-dkms-pkgcfg.patch
  Forwarded: no

2100-zfs-load-module.patch
  Description: Explicitly load the ZFS module via a systemd service.
  Author: Ubuntu developers
  Forwarded: no

3100-remove-libzfs-module-timeout.patch
  Description: Reduce the timeout to zero seconds when running in a
   container (LP: #1760173).
   Inside an lxd container with zfs storage, zfs list or zpool status
   appears to hang, producing no output for 10 seconds. Check whether
   we are running inside a container and set the timeout to zero in
   that specific case.
  Forwarded: no

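The shape of that check can be sketched in ordinary userspace C. This is illustrative only, not the Ubuntu patch itself (which lives in libzfs): using PID 1's environment as the container signal is an assumption here, though lxd and other systemd-style container managers do set a `container=` variable there.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Illustrative sketch: treat a "container=" variable in PID 1's
     * environment as "we are containerized" and drop the module wait
     * timeout to zero, as the patch description above says. */
    static int inside_container(void) {
        char buf[4096];
        int fd = open("/proc/1/environ", O_RDONLY);
        if (fd < 0)
            return 0;
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        close(fd);
        if (n <= 0)
            return 0;
        buf[n] = '\0';
        /* the file holds NUL-separated KEY=VALUE entries */
        for (char *p = buf; p < buf + n; p += strlen(p) + 1) {
            if (strncmp(p, "container=", 10) == 0)
                return 1;
        }
        return 0;
    }

    int main(void) {
        printf("module wait timeout: %d s\n", inside_container() ? 0 : 10);
        return 0;
    }
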
force-verbose-rules.patch
  Description: Force libtool to produce verbose output.
  Author: Mo Zhou
  Forwarded: invalid

4620-zfs-vol-wait-fix-locked-encrypted-vols.patch
  Description: Don't wait for links when a volume has the property
   keystatus=unavailable.
   The zfs-volume-wait.service systemd unit does not start if an
   encrypted zvol is locked. /sbin/zvol_wait should not wait for links
   when the volume has the property keystatus=unavailable. This patch
   fixes that issue.
  Author: James Dingwall
  Forwarded: no
  Bugs: upstream
  Origin: ubuntu
  Last-Update: 2020-07-22

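The filtering rule is small enough to show. The sketch below is illustrative C with made-up volume data; the real zvol_wait is a shell script that gets the same information from zfs list:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: volumes whose keystatus is "unavailable" are
     * locked, will never get device links until unlocked, and so must
     * be excluded from the set zvol_wait waits on. */
    struct zvol { const char *name; const char *keystatus; };

    int main(void) {
        const struct zvol vols[] = {
            { "rpool/vm1",       "available"   },
            { "rpool/encrypted", "unavailable" },  /* locked: skip */
            { "rpool/plain",     "none"        },  /* not encrypted */
        };
        for (size_t i = 0; i < sizeof vols / sizeof vols[0]; i++) {
            if (strcmp(vols[i].keystatus, "unavailable") == 0)
                continue;  /* do not wait for links on locked volumes */
            printf("waiting for /dev/zvol/%s\n", vols[i].name);
        }
        return 0;
    }
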
move-arcstat-1-to-8.patch
  Description: Move arcstat(1) to arcstat(8) to avoid a conflict with
   the binary package nordugrid-arc-client.
   We regenerate Makefile.in, so we don't have to modify them
   explicitly here. (commit c999ab77f8dad6c6655007baebe9c7992d6fe206)
  Author: Mo Zhou <cdluminate@gmail.com>
  Forwarded: no
  Last-Update: 2021-01-15

skip-on-PREEMPT_RT.patch
  Description: Do not attempt to build the modules on PREEMPT_RT
   kernels.
  Author: Andreas Beckmann <anbe@debian.org>
  Forwarded: yes
  Bugs: debian, upstream

zzstd-version-bump.patch
  Description: Bump the zzstd.ko module version number.
   All modules are going to be merged into one upstream soon. At the
   moment every other module's version increases with every build, but
   the zzstd one does not. Append the zfs package version to the zzstd
   module version number, to make dkms module versions higher than the
   kernel's prebuilt ones.
  Author: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
  Forwarded: no

cross-compile.patch
  Description: Fix cross-compiling of the dkms module.
  Author: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
  Forwarded: no

0004-Increase-default-zfs_scan_vdev_limit-to-16MB.patch
  Description: Increase default zfs_scan_vdev_limit to 16MB.
   For HDD-based pools the default zfs_scan_vdev_limit of 4M per vdev
   can significantly limit the maximum scrub performance. Increasing
   the default to 16M can double the scrub speed, from 80 MB/s per
   disk to 160 MB/s per disk.
   This does increase the memory footprint during scrub/resilver, but
   given the performance win this is a reasonable trade-off. Memory
   usage is capped at 1/4 of arc_c_max. Note that the number of
   outstanding I/Os has not changed and is still limited by
   zfs_vdev_scrub_max_active.
   Closes #14428
  Author: Brian Behlendorf <behlendorf1@llnl.gov>
  Forwarded: no
  Last-Update: 2023-01-24

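As a worked example of that sizing: the in-flight scan budget scales with the number of data disks but is clamped by the ARC cap. The tunable names below come from the patch; the 4 GiB arc_c_max value and the exact clamp expression are assumptions for illustration, not the dsl_scan.c code.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        const uint64_t zfs_scan_vdev_limit = 16ULL << 20; /* new default: 16M */
        const uint64_t arc_c_max = 4ULL << 30;  /* assumed 4 GiB ARC cap */
        const int disks[] = { 10, 100 };

        for (int i = 0; i < 2; i++) {
            uint64_t inflight = zfs_scan_vdev_limit * disks[i];
            if (inflight > arc_c_max / 4)  /* capped at 1/4 of arc_c_max */
                inflight = arc_c_max / 4;
            printf("%3d disks -> max in-flight scan I/O: %4llu MiB\n",
                disks[i], (unsigned long long)(inflight >> 20));
        }
        return 0;
    }

With the assumed 4 GiB ARC this prints 160 MiB for 10 disks and the clamped 1024 MiB for 100 disks, showing why the larger per-vdev default stays bounded.
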
0005-Increase-default-zfs_rebuild_vdev_limit-to-64MB.patch
  Description: Increase default zfs_rebuild_vdev_limit to 64MB.
   When testing distributed rebuild performance with more capable
   hardware it was observed that increasing zfs_rebuild_vdev_limit to
   64M reduced the rebuild time by 17%. Beyond 64MB there was some
   improvement (~2%), but it was not significant when weighed against
   the increased memory usage. Memory usage is capped at 1/4 of
   arc_c_max.
   Additionally, vr_bytes_inflight_max has been moved so it is updated
   per metaslab, allowing the size to be adjusted while a rebuild is
   running.
   Closes #14428
  Author: Brian Behlendorf <behlendorf1@llnl.gov>
  Forwarded: no
  Last-Update: 2023-01-24

0006-rootdelay-on-zfs-should-be-adaptive.patch
  Description: rootdelay on zfs should be adaptive.
   The 'rootdelay' boot option currently pauses the boot for a
   specified amount of time. The original intent was to ensure that
   slower configurations would have ample time to enumerate the
   devices needed to import the root pool successfully. This, however,
   causes unnecessary boot delay in environments like Azure, which set
   this parameter by default.
   This commit changes the initramfs logic to pause only until it can
   successfully load the 'zfs' module. The timeout specified by
   'rootdelay' now becomes the maximum amount of time that the
   initramfs will wait before failing the boot.
   Closes #14430
  Author: George Wilson <george.wilson@delphix.com>
  Forwarded: no
  Last-Update: 2023-02-02

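The retry-with-deadline idea can be illustrated in C, though the actual change is in the initramfs shell scripts; try_load_zfs_module() below is a hypothetical stand-in for running modprobe zfs.

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Stand-in for `modprobe zfs`; pretend it succeeds on the third try. */
    static bool try_load_zfs_module(void) {
        static int attempts = 0;
        return ++attempts >= 3;
    }

    int main(void) {
        int rootdelay = 30;  /* seconds: now an upper bound, not a fixed pause */
        time_t deadline = time(NULL) + rootdelay;

        while (!try_load_zfs_module()) {
            if (time(NULL) >= deadline) {
                fprintf(stderr, "zfs module did not load in %d s\n", rootdelay);
                return 1;  /* only now does the boot fail */
            }
            sleep(1);  /* poll instead of sleeping the full rootdelay */
        }
        puts("zfs module loaded; continue boot immediately");
        return 0;
    }
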
0009-zdb-zero-pad-checksum-output.patch
  Description: zdb: zero-pad checksum output.
   The leading zeroes are part of the checksum, so we should show them.
   Closes #14464
  Author: Rob N ★ <robn@despairlabs.com>
  Forwarded: no
  Last-Update: 2023-02-08

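The effect is easiest to see with the two format widths side by side. This standalone illustration uses an arbitrary checksum word; the actual change is to zdb's format strings:

    #include <inttypes.h>
    #include <stdio.h>

    /* Without a fixed 016 width, a checksum word with leading zeroes
     * prints shorter and the printed value is ambiguous. */
    int main(void) {
        uint64_t word = 0x00ab4028e071a424ULL;      /* arbitrary example */
        printf("unpadded: %" PRIx64 "\n", word);    /* ab4028e071a424 */
        printf("padded:   %016" PRIx64 "\n", word); /* 00ab4028e071a424 */
        return 0;
    }
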
0010-zdb-zero-pad-checksum-output-follow-up.patch
  Description: zdb: zero-pad checksum output follow-up.
   Apply zero padding for checksums consistently. The SNPRINTF_BLKPTR
   macro was not updated in commit ac7648179c8, which results in the
   cli_root/zdb/zdb_checksum.ksh test case reliably failing.
   Closes #14497
  Author: Brian Behlendorf <behlendorf1@llnl.gov>
  Forwarded: no
  Last-Update: 2023-02-15

0013-Fix-Detach-spare-vdev-in-case-if-resilvering-does-no.patch
  Description: Fix "Detach spare vdev in case if resilvering does not
   happen".
   A spare vdev should be detached from the pool when a disk is
   reinserted. However, spare detachment depends on the completion of
   resilvering, and if a resilver is not scheduled, the spare vdev
   stays attached to the pool until the next resilver. When a zfs pool
   contains several disks (25+ mirror), resilvering does not always
   happen when a disk is reinserted. With this patch, the spare vdev
   is manually detached from the pool when resilvering does not occur;
   this has been tested on both Linux and FreeBSD.
   Closes #14722
  Author: Ameer Hamza <106930537+ixhamza@users.noreply.github.com>
  Forwarded: no
  Last-Update: 2023-04-19

0020-Fix-NULL-pointer-dereference-when-doing-concurrent-s.patch
  Description: Fix NULL pointer dereference when doing concurrent
   'send' operations.
   A NULL pointer dereference will occur when doing a 'zfs send -S' on
   a dataset that is still being received. The problem is that the new
   'send' will rightfully fail to own the dataset (i.e.
   dsl_dataset_own_force() will fail), but dmu_send() will then still
   call dsl_dataset_disown().
   Closes #14903
   Closes #14890
  Author: Luís Henriques
  Forwarded: no
  Last-Update: 2023-05-30

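The bug shape reduces to releasing an ownership that was never acquired. The stubbed sketch below is illustrative; dataset_own_force() and dataset_disown() are stand-ins for the DMU calls named above, not the real API:

    #include <stdio.h>

    static int owned = 0;

    static int dataset_own_force(void) {
        return -1; /* simulate: a running 'receive' already owns the dataset */
    }

    static void dataset_disown(void) {
        if (!owned)
            printf("BUG: disown without ownership (the NULL-deref path)\n");
        owned = 0;
    }

    static int do_send(void) {
        if (dataset_own_force() != 0)
            return -1;        /* the fix: bail out, nothing to disown */
        owned = 1;
        /* ... generate the send stream ... */
        dataset_disown();     /* only reached when ownership succeeded */
        return 0;
    }

    int main(void) {
        if (do_send() != 0)
            printf("send failed cleanly instead of dereferencing NULL\n");
        return 0;
    }
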
0021-Revert-initramfs-use-mount.zfs-instead-of-mount.patch
  Description: Revert "initramfs: use `mount.zfs` instead of `mount`".
   This broke mounting of snapshots on / for users.
   See https://github.com/openzfs/zfs/issues/9461#issuecomment-1376162949
   for more context.
   Closes #14908
  Author: Rich Ercolani <214141+rincebrain@users.noreply.github.com>
  Forwarded: no
  Last-Update: 2023-05-31

0022-zil-Don-t-expect-zio_shrink-to-succeed.patch
  Description: zil: Don't expect zio_shrink() to succeed.
   At least for RAIDZ, zio_shrink() does not reduce the zio size, and
   a reduced wsz in that case likely results in writing uninitialized
   memory.
   Sponsored by: iXsystems, Inc.
   Closes #14853
  Author: Alexander Motin <mav@FreeBSD.org>
  Forwarded: no
  Last-Update: 2023-05-11

0027-Linux-Never-sleep-in-kmem_cache_alloc-.-KM_NOSLEEP-1.patch
  Description: Linux: Never sleep in kmem_cache_alloc(..., KM_NOSLEEP)
   (#14926).
   When a kmem cache is exhausted and needs to be expanded, a new slab
   is allocated. KM_SLEEP callers can block and wait for the
   allocation, but KM_NOSLEEP callers were incorrectly allowed to
   block as well.
   Resolve this by attempting an emergency allocation as a best
   effort. This may fail, but that's fine, since any KM_NOSLEEP
   consumer is required to handle an allocation failure.
  Author: Brian Behlendorf <behlendorf1@llnl.gov>
  Forwarded: no
  Last-Update: 2023-06-07

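The contract being enforced can be sketched as follows. This is a stubbed illustration, not the SPL allocator: emergency_alloc() and wait_for_new_slab() are hypothetical stand-ins, with plain malloc standing in for slab allocation.

    #include <stdio.h>
    #include <stdlib.h>

    #define KM_SLEEP   0x0000
    #define KM_NOSLEEP 0x0001

    static void *emergency_alloc(size_t sz)   { return malloc(sz); }
    static void *wait_for_new_slab(size_t sz) { /* may block */ return malloc(sz); }

    static void *cache_alloc(size_t sz, int flags) {
        if (flags & KM_NOSLEEP)
            return emergency_alloc(sz); /* never blocks; may return NULL */
        return wait_for_new_slab(sz);   /* KM_SLEEP: blocking is allowed */
    }

    int main(void) {
        void *p = cache_alloc(64, KM_NOSLEEP);
        if (p == NULL) {
            /* every KM_NOSLEEP consumer must handle failure */
            fprintf(stderr, "emergency allocation failed\n");
            return 1;
        }
        free(p);
        return 0;
    }
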
0028-dnode_is_dirty-check-dnode-and-its-data-for-dirtines.patch
  Description: dnode_is_dirty: check dnode and its data for dirtiness.
   Over its history, the dirty dnode test has alternated between
   checking whether a dnode is on `os_dirty_dnodes` (`dn_dirty_link`)
   and checking `dn_dirty_record`:

   de198f2d9 Fix lseek(SEEK_DATA/SEEK_HOLE) mmap consistency
   2531ce372 Revert "Report holes when there are only metadata changes"
   ec4f9b8f3 Report holes when there are only metadata changes
   454365bba Fix dirty check in dmu_offset_next()
   66aca2473 SEEK_HOLE should not block on txg_wait_synced()

   See also illumos/illumos-gate@c543ec060d and
   illumos/illumos-gate@2bcf0248e9.

   It turns out both checks are actually required.

   In the case of appending data to a newly created file, the dnode
   proper is dirtied (at least to change the blocksize) and dirty
   records are added. Thus, a single logical operation is represented
   by separate dirty indicators, which must not be separated.

   The incorrect dirty check becomes a problem when the first block of
   a file is being appended to while another process is calling lseek
   to skip holes. There is a small window where the dnode part is
   undirtied while there are still dirty records. In this case,
   `lseek(fd, 0, SEEK_DATA)` would not know that the file is dirty and
   would go to `dnode_next_offset()`. Since the object has no data
   blocks yet, it returns `ESRCH`, indicating no data found, which
   results in `ENXIO` being returned to `lseek()`'s caller.

   Since coreutils 9.2, `cp` performs sparse copies by default, that
   is, it uses `SEEK_DATA` and `SEEK_HOLE` against the source file and
   attempts to replicate the holes in the target. When it hits the
   bug, its initial search for data fails, and it goes on to call
   `fallocate()` to create a hole over the entire destination file.

   This has come up more recently as users upgrade their systems,
   getting OpenZFS 2.2 as well as a newer coreutils. However, this
   problem has been reproduced against 2.1, as well as on FreeBSD 13
   and 14.

   This change simply updates the dirty check to check both types of
   dirtiness. If there's anything dirty at all, we immediately go to
   the "wait for sync" stage. It doesn't really matter after that:
   both changes are on disk, so the dirty fields should be correct.
   Closes #15571
   Closes #15526
  Author: Rob N <robn@despairlabs.com>
  Forwarded: no
  Last-Update: 2023-11-29

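A minimal sketch of the corrected predicate, with stubbed types: the field names follow the commit message, but the real structures live in dnode.h, so this is an illustration of the logic rather than the patched function.

    #include <stdbool.h>
    #include <stdio.h>

    #define TXG_SIZE 4

    /* Stub: per-txg dirty indicators for one dnode. */
    struct dnode_stub {
        bool dirty_link[TXG_SIZE];        /* dnode on os_dirty_dnodes? */
        bool has_dirty_records[TXG_SIZE]; /* dirty records attached?   */
    };

    /* The fix: a dnode counts as dirty if EITHER indicator is set in
     * any open txg, so lseek() falls through to "wait for sync". */
    static bool dnode_is_dirty(const struct dnode_stub *dn) {
        for (int i = 0; i < TXG_SIZE; i++) {
            if (dn->dirty_link[i] || dn->has_dirty_records[i])
                return true;
        }
        return false;
    }

    int main(void) {
        /* the bug window: dnode part undirtied, dirty records remain */
        struct dnode_stub dn = { {false}, {false} };
        dn.has_dirty_records[1] = true;
        printf("dirty: %s\n", dnode_is_dirty(&dn) ? "yes" : "no"); /* yes */
        return 0;
    }
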
0029-Zpool-can-start-allocating-from-metaslab-before-TRIM.patch
  Description: Zpool can start allocating from a metaslab before TRIMs
   have completed.
   When doing a manual TRIM on a zpool, the metaslab being TRIMmed is
   potentially re-enabled before all queued TRIM zios for that
   metaslab have completed. Since TRIM zios have the lowest priority,
   it is possible to get into a situation where allocations occur from
   the just re-enabled metaslab and cut ahead of queued TRIMs to the
   same metaslab. If the ranges overlap, this will cause corruption.
   We were able to trigger this pretty consistently with a small
   single top-level vdev zpool (i.e. a small number of metaslabs) with
   heavy parallel write activity while performing a manual TRIM
   against a somewhat 'slow' device (so TRIMs took a bit of time to
   complete). With the patch, we have not been able to recreate it
   since. This was on illumos, but inspection of the OpenZFS trim code
   suggests the relevant pieces are largely unchanged, so OpenZFS
   appears vulnerable to the same issue.
   Closes #15395
  Author: Jason King <jasonbking@users.noreply.github.com>
  Forwarded: no
  Last-Update: 2023-10-12

0030-libshare-nfs-pass-through-ipv6-addresses-in-bracket.patch
  Description: libshare: nfs: pass through ipv6 addresses in bracket
   notation.
   Recognize when the host part of a sharenfs attribute is an IPv6
   literal and pass it through without modification.
   Closes #11939
  Author: felixdoerre <felixdoerre@users.noreply.github.com>
  Forwarded: no
  Last-Update: 2021-10-20

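The recognition rule is illustrated below. This is a sketch of the described behavior, not the libshare parser: classify_host() is a hypothetical helper.

    #include <stdio.h>
    #include <string.h>

    /* A host beginning with '[' and containing ']' is an IPv6 literal
     * in bracket notation and is passed through to the exports line
     * unmodified; everything else gets the normal host processing. */
    static const char *classify_host(const char *host) {
        if (host[0] == '[' && strchr(host, ']') != NULL)
            return "ipv6 literal: pass through as-is";
        return "hostname/ipv4: normal processing";
    }

    int main(void) {
        printf("%s\n", classify_host("[2001:db8::1]"));
        printf("%s\n", classify_host("192.0.2.7"));
        return 0;
    }
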
