Changelog in Linux kernel 6.12.49

 
ALSA: firewire-motu: drop EPOLLOUT from poll return values as write is not supported [+ + +]
Author: Takashi Sakamoto <[email protected]>
Date:   Sat Aug 30 08:37:49 2025 +0900

    ALSA: firewire-motu: drop EPOLLOUT from poll return values as write is not supported
    
    [ Upstream commit aea3493246c474bc917d124d6fb627663ab6bef0 ]
    
    The ALSA HwDep character device of the firewire-motu driver incorrectly
    returns EPOLLOUT in poll(2), even though the driver implements no operation
    for write(2). This misleads userspace applications into believing that
    write() is allowed, potentially resulting in unnecessary wakeups.
    
    This issue dates back to the driver's initial code added by commit
    71c3797779d3 ("ALSA: firewire-motu: add hwdep interface"), and persisted
    when POLLOUT was updated to EPOLLOUT by commit a9a08845e9ac ("vfs: do
    bulk POLL* -> EPOLL* replacement").
    
    This commit fixes the bug.
    
    Signed-off-by: Takashi Sakamoto <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Takashi Iwai <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
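
    For illustration only (not the driver's verbatim code), a hwdep poll
    callback for a read-only interface looks roughly like the sketch below;
    the device structure and field names here are made up:

        static __poll_t example_hwdep_poll(struct snd_hwdep *hwdep,
                                           struct file *file, poll_table *wait)
        {
                struct example_motu *motu = hwdep->private_data;  /* hypothetical */
                __poll_t events = 0;

                poll_wait(file, &motu->hwdep_wait, wait);

                spin_lock_irq(&motu->lock);
                if (motu->pending_event)
                        events |= EPOLLIN | EPOLLRDNORM;
                /* no EPOLLOUT: the driver implements no write() operation */
                spin_unlock_irq(&motu->lock);

                return events;
        }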

ALSA: hda/realtek: Fix mute led for HP Laptop 15-dw4xx [+ + +]
Author: Praful Adiga <[email protected]>
Date:   Thu Sep 18 12:40:18 2025 -0400

    ALSA: hda/realtek: Fix mute led for HP Laptop 15-dw4xx
    
    commit d33c3471047fc54966621d19329e6a23ebc8ec50 upstream.
    
    This laptop uses the ALC236 codec with COEF 0x7 and idx 1 to
    control the mute LED. Enable the existing quirk for this device.
    
    Signed-off-by: Praful Adiga <[email protected]>
    Cc: <[email protected]>
    Signed-off-by: Takashi Iwai <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
ASoC: Intel: catpt: Expose correct bit depth to userspace [+ + +]
Author: Amadeusz Sławiński <[email protected]>
Date:   Tue Sep 9 11:28:29 2025 +0200

    ASoC: Intel: catpt: Expose correct bit depth to userspace
    
    [ Upstream commit 690aa09b1845c0d5c3c29dabd50a9d0488c97c48 ]
    
    Currently wrong bit depth is exposed in hw params, causing clipped
    volume during playback. Expose correct parameters.
    
    Fixes: a126750fc865 ("ASoC: Intel: catpt: PCM operations")
    Reported-by: Andy Shevchenko <[email protected]>
    Tested-by: Andy Shevchenko <[email protected]>
    Reviewed-by: Cezary Rojewski <[email protected]>
    Signed-off-by: Amadeusz Sławiński <[email protected]>
    Message-ID: <[email protected]>
    Signed-off-by: Mark Brown <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

ASoC: qcom: audioreach: Fix lpaif_type configuration for the I2S interface [+ + +]
Author: Mohammad Rafi Shaik <[email protected]>
Date:   Mon Sep 8 11:06:29 2025 +0530

    ASoC: qcom: audioreach: Fix lpaif_type configuration for the I2S interface
    
    commit 5f1af203ef964e7f7bf9d32716dfa5f332cc6f09 upstream.
    
    Fix missing lpaif_type configuration for the I2S interface.
    The proper lpaif interface type is required to allow the DSP to vote for
    the appropriate clock setting for the I2S interface.
    
    Fixes: 25ab80db6b133 ("ASoC: qdsp6: audioreach: add module configuration command helpers")
    Cc: [email protected]
    Reviewed-by: Srinivas Kandagatla <[email protected]>
    Signed-off-by: Mohammad Rafi Shaik <[email protected]>
    Message-ID: <[email protected]>
    Signed-off-by: Mark Brown <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

ASoC: qcom: q6apm-lpass-dais: Fix missing set_fmt DAI op for I2S [+ + +]
Author: Mohammad Rafi Shaik <[email protected]>
Date:   Mon Sep 8 11:06:30 2025 +0530

    ASoC: qcom: q6apm-lpass-dais: Fix missing set_fmt DAI op for I2S
    
    commit 33b55b94bca904ca25a9585e3cd43d15f0467969 upstream.
    
    The q6i2s_set_fmt() function was defined but never linked into the
    I2S DAI operations, resulting in DAI format settings being ignored
    during stream setup. This change fixes the issue by properly linking
    the .set_fmt handler within the DAI ops.
    
    Fixes: 30ad723b93ade ("ASoC: qdsp6: audioreach: add q6apm lpass dai support")
    Cc: [email protected]
    Reviewed-by: Srinivas Kandagatla <[email protected]>
    Signed-off-by: Mohammad Rafi Shaik <[email protected]>
    Message-ID: <[email protected]>
    Signed-off-by: Mark Brown <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
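
    A minimal sketch of what "linking the .set_fmt handler" means at the ASoC
    level (illustrative names, not the actual qdsp6 code):

        static int example_i2s_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
        {
                /* remember the requested format for use when the stream starts */
                return 0;
        }

        static const struct snd_soc_dai_ops example_i2s_ops = {
                /* without this entry, snd_soc_dai_set_fmt() silently does nothing */
                .set_fmt = example_i2s_set_fmt,
        };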

ASoC: qcom: q6apm-lpass-dais: Fix NULL pointer dereference if source graph failed [+ + +]
Author: Krzysztof Kozlowski <[email protected]>
Date:   Thu Sep 4 12:18:50 2025 +0200

    ASoC: qcom: q6apm-lpass-dais: Fix NULL pointer dereference if source graph failed
    
    commit 68f27f7c7708183e7873c585ded2f1b057ac5b97 upstream.
    
    If earlier opening of the source graph fails (e.g. the ADSP rejects it due
    to an incorrect audioreach topology), the graph is closed and
    "dai_data->graph[dai->id]" is assigned NULL.  Preparing the DAI for the sink
    graph continues though, and the next call to q6apm_lpass_dai_prepare()
    receives dai_data->graph[dai->id]=NULL, leading to a NULL pointer
    exception:
    
      qcom-apm gprsvc:service:2:1: Error (1) Processing 0x01001002 cmd
      qcom-apm gprsvc:service:2:1: DSP returned error[1001002] 1
      q6apm-lpass-dais 30000000.remoteproc:glink-edge:gpr:service@1:bedais: fail to start APM port 78
      q6apm-lpass-dais 30000000.remoteproc:glink-edge:gpr:service@1:bedais: ASoC: error at snd_soc_pcm_dai_prepare on TX_CODEC_DMA_TX_3: -22
      Unable to handle kernel NULL pointer dereference at virtual address 00000000000000a8
      ...
      Call trace:
       q6apm_graph_media_format_pcm+0x48/0x120 (P)
       q6apm_lpass_dai_prepare+0x110/0x1b4
       snd_soc_pcm_dai_prepare+0x74/0x108
       __soc_pcm_prepare+0x44/0x160
       dpcm_be_dai_prepare+0x124/0x1c0
    
    Fixes: 30ad723b93ad ("ASoC: qdsp6: audioreach: add q6apm lpass dai support")
    Cc: [email protected]
    Signed-off-by: Krzysztof Kozlowski <[email protected]>
    Reviewed-by: Srinivas Kandagatla <[email protected]>
    Message-ID: <[email protected]>
    Signed-off-by: Mark Brown <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
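
    A sketch of the kind of guard the fix implies, assuming a hypothetical
    dai_data layout matching the description above:

        static int example_dai_prepare(struct snd_pcm_substream *substream,
                                       struct snd_soc_dai *dai)
        {
                struct example_dai_data *dai_data = dev_get_drvdata(dai->dev);

                /* the source graph may already have been closed after a failed open */
                if (!dai_data->graph[dai->id])
                        return -EINVAL;

                /* safe to configure the media format on the graph from here on */
                return 0;
        }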

ASoC: SOF: Intel: hda-stream: Fix incorrect variable used in error message [+ + +]
Author: Colin Ian King <[email protected]>
Date:   Tue Sep 2 13:06:39 2025 +0100

    ASoC: SOF: Intel: hda-stream: Fix incorrect variable used in error message
    
    [ Upstream commit 35fc531a59694f24a2456569cf7d1a9c6436841c ]
    
    The dev_err message reports an error about capture streams; however, it
    uses the incorrect variable num_playback instead of num_capture.
    Fix this by using the correct variable num_capture.
    
    Fixes: a1d1e266b445 ("ASoC: SOF: Intel: Add Intel specific HDA stream operations")
    Signed-off-by: Colin Ian King <[email protected]>
    Acked-by: Peter Ujfalusi <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Mark Brown <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

ASoC: wm8940: Correct PLL rate rounding [+ + +]
Author: Charles Keepax <[email protected]>
Date:   Thu Aug 21 09:26:37 2025 +0100

    ASoC: wm8940: Correct PLL rate rounding
    
    [ Upstream commit d05afb53c683ef7ed1228b593c3360f4d3126c58 ]
    
    Using a single value of 22500000 for both 48000Hz and 44100Hz audio
    will sometimes result in returning wrong dividers due to rounding.
    Update the code to use the actual values for both.
    
    Fixes: 294833fc9eb4 ("ASoC: wm8940: Rewrite code to set proper clocks")
    Reported-by: Ankur Tyagi <[email protected]>
    Signed-off-by: Charles Keepax <[email protected]>
    Tested-by: Ankur Tyagi <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Mark Brown <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

ASoC: wm8940: Correct typo in control name [+ + +]
Author: Charles Keepax <[email protected]>
Date:   Thu Aug 21 09:26:38 2025 +0100

    ASoC: wm8940: Correct typo in control name
    
    [ Upstream commit b4799520dcd6fe1e14495cecbbe9975d847cd482 ]
    
    Fixes: 0b5e92c5e020 ("ASoC WM8940 Driver")
    Reported-by: Ankur Tyagi <[email protected]>
    Signed-off-by: Charles Keepax <[email protected]>
    Tested-by: Ankur Tyagi <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Mark Brown <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

ASoC: wm8974: Correct PLL rate rounding [+ + +]
Author: Charles Keepax <[email protected]>
Date:   Thu Aug 21 09:26:39 2025 +0100

    ASoC: wm8974: Correct PLL rate rounding
    
    [ Upstream commit 9b17d3724df55ecc2bc67978822585f2b023be48 ]
    
    Using a single value of 22500000 for both 48000Hz and 44100Hz audio
    will sometimes result in returning wrong dividers due to rounding.
    Update the code to use the actual values for both.
    
    Fixes: 51b2bb3f2568 ("ASoC: wm8974: configure pll and mclk divider automatically")
    Signed-off-by: Charles Keepax <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Mark Brown <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
bonding: don't set oif to bond dev when getting NS target destination [+ + +]
Author: Hangbin Liu <[email protected]>
Date:   Tue Sep 16 08:01:26 2025 +0000

    bonding: don't set oif to bond dev when getting NS target destination
    
    [ Upstream commit a8ba87f04ca9cdec06776ce92dce1395026dc3bb ]
    
    Unlike IPv4, IPv6 routing strictly requires the source address to be valid
    on the outgoing interface. If the NS target is set to a remote VLAN interface,
    and the source address is also configured on a VLAN over a bond interface,
    setting the oif to the bond device will fail to retrieve the correct
    destination route.
    
    Fix this by not setting the oif to the bond device when retrieving the NS
    target destination. This allows the correct destination device (the VLAN
    interface) to be determined, so that bond_verify_device_path can return the
    proper VLAN tags for sending NS messages.
    
    Reported-by: David Wilder <[email protected]>
    Closes: https://lore.kernel.org/netdev/aGOKggdfjv0cApTO@fedora/
    Suggested-by: Jay Vosburgh <[email protected]>
    Tested-by: David Wilder <[email protected]>
    Acked-by: Jay Vosburgh <[email protected]>
    Fixes: 4e24be018eb9 ("bonding: add new parameter ns_targets")
    Signed-off-by: Hangbin Liu <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

bonding: set random address only when slaves already exist [+ + +]
Author: Hangbin Liu <[email protected]>
Date:   Wed Sep 10 02:43:34 2025 +0000

    bonding: set random address only when slaves already exist
    
    [ Upstream commit 35ae4e86292ef7dfe4edbb9942955c884e984352 ]
    
    After commit 5c3bf6cba791 ("bonding: assign random address if device
    address is same as bond"), bonding will erroneously randomize the MAC
    address of the first interface added to the bond if fail_over_mac =
    follow.
    
    Correct this by additionally testing for the bond being empty before
    randomizing the MAC.
    
    Fixes: 5c3bf6cba791 ("bonding: assign random address if device address is same as bond")
    Reported-by: Qiuling Ren <[email protected]>
    Signed-off-by: Hangbin Liu <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
btrfs: fix invalid extref key setup when replaying dentry [+ + +]
Author: Filipe Manana <[email protected]>
Date:   Wed Sep 3 16:53:21 2025 +0100

    btrfs: fix invalid extref key setup when replaying dentry
    
    [ Upstream commit b62fd63ade7cb573b114972ef8f9fa505be8d74a ]
    
    The offset for an extref item's key is not the object ID of the parent
    dir, otherwise we would not need the extref item and would use plain ref
    items. Instead the offset is the result of a hash computation that uses
    the object ID of the parent dir and the name associated to the entry.
    So fix this by setting the key offset at replay_one_name() to be the
    result of calling btrfs_extref_hash().
    
    Fixes: 725af92a6251 ("btrfs: Open-code name_in_log_ref in replay_one_name")
    Signed-off-by: Filipe Manana <[email protected]>
    Reviewed-by: David Sterba <[email protected]>
    Signed-off-by: David Sterba <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
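
    In miniature, the key setup described above (a sketch given parent_objectid,
    name and name_len; not the verbatim replay_one_name() code):

        struct btrfs_key search_key;

        search_key.objectid = inode_objectid;
        search_key.type = BTRFS_INODE_EXTREF_KEY;
        /* the offset is the name hash, not the parent directory's object ID */
        search_key.offset = btrfs_extref_hash(parent_objectid, name, name_len);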

btrfs: tree-checker: fix the incorrect inode ref size check [+ + +]
Author: Qu Wenruo <[email protected]>
Date:   Tue Sep 16 07:54:06 2025 +0930

    btrfs: tree-checker: fix the incorrect inode ref size check
    
    commit 96fa515e70f3e4b98685ef8cac9d737fc62f10e1 upstream.
    
    [BUG]
    Inside check_inode_ref(), we need to make sure every structure,
    including the btrfs_inode_extref header, is covered by the item.  But
    our code is incorrectly using "sizeof(iref)", where @iref is just a
    pointer.
    
    This means "sizeof(iref)" will always be "sizeof(void *)", which is much
    smaller than "sizeof(struct btrfs_inode_extref)".
    
    This will allow some bad inode extrefs to sneak in, defeating the tree-checker.
    
    [FIX]
    Fix the typo by using "sizeof(*iref)", which is the same as
    "sizeof(struct btrfs_inode_extref)" and gives the correct behavior we
    want.
    
    Fixes: 71bf92a9b877 ("btrfs: tree-checker: Add check for INODE_REF")
    CC: [email protected] # 6.1+
    Reviewed-by: Johannes Thumshirn <[email protected]>
    Reviewed-by: Filipe Manana <[email protected]>
    Signed-off-by: Qu Wenruo <[email protected]>
    Reviewed-by: David Sterba <[email protected]>
    Signed-off-by: David Sterba <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
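
    The pitfall in miniature (sketch, not the exact tree-checker code):

        struct btrfs_inode_extref *iref;

        /* buggy: sizeof(iref) == sizeof(void *), i.e. 8 bytes on 64-bit */
        if (ptr + sizeof(iref) > end)
                return -EUCLEAN;

        /* fixed: sizeof(*iref) == sizeof(struct btrfs_inode_extref) */
        if (ptr + sizeof(*iref) > end)
                return -EUCLEAN;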

 
cgroup: split cgroup_destroy_wq into 3 workqueues [+ + +]
Author: Chen Ridong <[email protected]>
Date:   Tue Aug 19 01:07:24 2025 +0000

    cgroup: split cgroup_destroy_wq into 3 workqueues
    
    [ Upstream commit 79f919a89c9d06816dbdbbd168fa41d27411a7f9 ]
    
    A hung task can occur during [1] LTP cgroup testing when repeatedly
    mounting/unmounting perf_event and net_prio controllers with
    systemd.unified_cgroup_hierarchy=1. The hang manifests in
    cgroup_lock_and_drain_offline() during root destruction.
    
    Related case:
    cgroup_fj_function_perf_event cgroup_fj_function.sh perf_event
    cgroup_fj_function_net_prio cgroup_fj_function.sh net_prio
    
    Call Trace:
            cgroup_lock_and_drain_offline+0x14c/0x1e8
            cgroup_destroy_root+0x3c/0x2c0
            css_free_rwork_fn+0x248/0x338
            process_one_work+0x16c/0x3b8
            worker_thread+0x22c/0x3b0
            kthread+0xec/0x100
            ret_from_fork+0x10/0x20
    
    Root Cause:
    
    CPU0                            CPU1
    mount perf_event                umount net_prio
    cgroup1_get_tree                cgroup_kill_sb
    rebind_subsystems               // root destruction enqueues
                                    // cgroup_destroy_wq
    // kill all perf_event css
                                    // one perf_event css A is dying
                                    // css A offline enqueues cgroup_destroy_wq
                                    // root destruction will be executed first
                                    css_free_rwork_fn
                                    cgroup_destroy_root
                                    cgroup_lock_and_drain_offline
                                    // some perf descendants are dying
                                    // cgroup_destroy_wq max_active = 1
                                    // waiting for css A to die
    
    Problem scenario:
    1. CPU0 mounts perf_event (rebind_subsystems)
    2. CPU1 unmounts net_prio (cgroup_kill_sb), queuing root destruction work
    3. A dying perf_event CSS gets queued for offline after root destruction
    4. Root destruction waits for offline completion, but offline work is
       blocked behind root destruction in cgroup_destroy_wq (max_active=1)
    
    Solution:
    Split cgroup_destroy_wq into three dedicated workqueues:
    cgroup_offline_wq – Handles CSS offline operations
    cgroup_release_wq – Manages resource release
    cgroup_free_wq – Performs final memory deallocation
    
    This separation eliminates blocking in the CSS free path while waiting for
    offline operations to complete.
    
    [1] https://github.com/linux-test-project/ltp/blob/master/runtest/controllers
    Fixes: 334c3679ec4b ("cgroup: reimplement rebind_subsystems() using cgroup_apply_control() and friends")
    Reported-by: Gao Yingjie <[email protected]>
    Signed-off-by: Chen Ridong <[email protected]>
    Suggested-by: Tejun Heo <[email protected]>
    Signed-off-by: Tejun Heo <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
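
    Roughly, the setup the description implies (a sketch; the real patch also
    switches the existing queue_work() call sites to the matching workqueue):

        static struct workqueue_struct *cgroup_offline_wq;
        static struct workqueue_struct *cgroup_release_wq;
        static struct workqueue_struct *cgroup_free_wq;

        static int __init example_cgroup_wq_init(void)
        {
                /* ordered (max_active = 1), like the original cgroup_destroy_wq */
                cgroup_offline_wq = alloc_workqueue("cgroup_offline", 0, 1);
                cgroup_release_wq = alloc_workqueue("cgroup_release", 0, 1);
                cgroup_free_wq = alloc_workqueue("cgroup_free", 0, 1);
                BUG_ON(!cgroup_offline_wq || !cgroup_release_wq || !cgroup_free_wq);
                return 0;
        }
        core_initcall(example_cgroup_wq_init);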

 
cnic: Fix use-after-free bugs in cnic_delete_task [+ + +]
Author: Duoming Zhou <[email protected]>
Date:   Wed Sep 17 13:46:02 2025 +0800

    cnic: Fix use-after-free bugs in cnic_delete_task
    
    [ Upstream commit cfa7d9b1e3a8604afc84e9e51d789c29574fb216 ]
    
    The original code uses cancel_delayed_work() in cnic_cm_stop_bnx2x_hw(),
    which does not guarantee that the delayed work item 'delete_task' has
    fully completed if it was already running. Additionally, the delayed work
    item is cyclic, and the flush_workqueue() in cnic_cm_stop_bnx2x_hw() only
    blocks and waits for work items that were already queued to the
    workqueue prior to its invocation. Any work items submitted after
    flush_workqueue() is called are not included in the set of tasks that the
    flush operation awaits. This means that after the cyclic work items have
    finished executing, a delayed work item may still exist in the workqueue.
    This leads to use-after-free scenarios where the cnic_dev is deallocated
    by cnic_free_dev(), while delete_task remains active and attempts to
    dereference cnic_dev in cnic_delete_task().
    
    A typical race condition is illustrated below:
    
    CPU 0 (cleanup)              | CPU 1 (delayed work callback)
    cnic_netdev_event()          |
      cnic_stop_hw()             | cnic_delete_task()
        cnic_cm_stop_bnx2x_hw()  | ...
          cancel_delayed_work()  | /* the queue_delayed_work()
          flush_workqueue()      |    executes after flush_workqueue()*/
                                 | queue_delayed_work()
      cnic_free_dev(dev)//free   | cnic_delete_task() //new instance
                                 |   dev = cp->dev; //use
    
    Replace cancel_delayed_work() with cancel_delayed_work_sync() to ensure
    that the cyclic delayed work item is properly canceled and that any
    ongoing execution of the work item completes before the cnic_dev is
    deallocated. Furthermore, since cancel_delayed_work_sync() uses
    __flush_work(work, true) to synchronously wait for any currently
    executing instance of the work item to finish, the flush_workqueue()
    becomes redundant and should be removed.
    
    This bug was identified through static analysis. To reproduce the issue
    and validate the fix, I simulated the cnic PCI device in QEMU and
    introduced intentional delays — such as inserting calls to ssleep()
    within the cnic_delete_task() function — to increase the likelihood
    of triggering the bug.
    
    Fixes: fdf24086f475 ("cnic: Defer iscsi connection cleanup")
    Signed-off-by: Duoming Zhou <[email protected]>
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
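
    The shape of the fix, sketched (structure and field names follow the driver
    as described above):

        static void example_cm_stop_bnx2x_hw(struct cnic_dev *dev)
        {
                struct cnic_local *cp = dev->cnic_priv;

                /* waits for a running callback and prevents it from re-queueing
                 * itself afterwards; the old cancel_delayed_work() +
                 * flush_workqueue() pair guaranteed neither */
                cancel_delayed_work_sync(&cp->delete_task);
        }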

 
crypto: af_alg - Disallow concurrent writes in af_alg_sendmsg [+ + +]
Author: Herbert Xu <[email protected]>
Date:   Tue Sep 16 17:20:59 2025 +0800

    crypto: af_alg - Disallow concurrent writes in af_alg_sendmsg
    
    commit 1b34cbbf4f011a121ef7b2d7d6e6920a036d5285 upstream.
    
    Issuing two writes to the same af_alg socket is bogus as the
    data will be interleaved in an unpredictable fashion.  Furthermore,
    concurrent writes may create inconsistencies in the internal
    socket state.
    
    Disallow this by adding a new ctx->write field that indicates
    exclusive ownership for writing.
    
    Fixes: 8ff590903d5 ("crypto: algif_skcipher - User-space interface for skcipher operations")
    Reported-by: Muhammad Alifa Ramdhan <[email protected]>
    Reported-by: Bing-Jhong Billy Jheng <[email protected]>
    Signed-off-by: Herbert Xu <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
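
    A sketch of the exclusive-writer idea (the error value and surrounding code
    are illustrative, not the exact patch):

        lock_sock(sk);
        if (ctx->write) {               /* another sendmsg() is already in flight */
                err = -EBUSY;           /* assumed error code for illustration */
                goto unlock;
        }
        ctx->write = true;

        /* ... copy data from the iterator into the socket context ... */

        ctx->write = false;
        unlock:
        release_sock(sk);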

crypto: af_alg - Set merge to zero early in af_alg_sendmsg [+ + +]
Author: Herbert Xu <[email protected]>
Date:   Tue Sep 16 15:42:41 2025 +0800

    crypto: af_alg - Set merge to zero early in af_alg_sendmsg
    
    [ Upstream commit 9574b2330dbd2b5459b74d3b5e9619d39299fc6f ]
    
    If an error causes af_alg_sendmsg to abort, ctx->merge may contain
    a garbage value from the previous loop.  This may then trigger a
    crash on the next entry into af_alg_sendmsg when it attempts to do
    a merge that can't be done.
    
    Fix this by setting ctx->merge to zero near the start of the loop.
    
    Fixes: 8ff590903d5 ("crypto: algif_skcipher - User-space interface for skcipher operations")
    Reported-by: Muhammad Alifa Ramdhan <[email protected]>
    Reported-by: Bing-Jhong Billy Jheng <[email protected]>
    Signed-off-by: Herbert Xu <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
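
    Sketched, the change is simply a matter of where the reset happens:

        while (size) {
                ctx->merge = 0;  /* reset before any error path can bail out, so
                                  * an aborted sendmsg() never leaves a stale
                                  * merge offset behind for the next call */

                /* ... merge into the previous SG entry or allocate a new page
                 * and copy from the iterator; errors return from inside the
                 * loop without touching ctx->merge again ... */
        }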

 
dm-raid: don't set io_min and io_opt for raid1 [+ + +]
Author: Mikulas Patocka <[email protected]>
Date:   Mon Sep 15 16:12:40 2025 +0200

    dm-raid: don't set io_min and io_opt for raid1
    
    commit a86556264696b797d94238d99d8284d0d34ed960 upstream.
    
    These commands
     modprobe brd rd_size=1048576
     vgcreate vg /dev/ram*
     lvcreate -m4 -L10 -n lv vg
    trigger the following warnings:
    device-mapper: table: 252:10: adding target device (start sect 0 len 24576) caused an alignment inconsistency
    device-mapper: table: 252:10: adding target device (start sect 0 len 24576) caused an alignment inconsistency
    
    The warnings are caused by the fact that io_min is 512 and physical block
    size is 4096.
    
    If there's a chunk-less raid, such as raid1, io_min shouldn't be set to
    zero, because it would be raised to 512 and would trigger the warning.
    
    Signed-off-by: Mikulas Patocka <[email protected]>
    Reviewed-by: Martin K. Petersen <[email protected]>
    Cc: [email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
dm-stripe: fix a possible integer overflow [+ + +]
Author: Mikulas Patocka <[email protected]>
Date:   Mon Aug 11 13:17:32 2025 +0200

    dm-stripe: fix a possible integer overflow
    
    commit 1071d560afb4c245c2076494226df47db5a35708 upstream.
    
    There's a possible integer overflow in stripe_io_hints if we have too
    large a chunk size. Test if the overflow happened, and if it did, don't
    set limits->io_min and limits->io_opt.
    
    Signed-off-by: Mikulas Patocka <[email protected]>
    Reviewed-by: John Garry <[email protected]>
    Suggested-by: Dongsheng Yang <[email protected]>
    Cc: [email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
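
    A sketch of an overflow-checked io_hints, assuming stripe_c fields named
    chunk_size (in sectors) and stripes:

        static void example_stripe_io_hints(struct dm_target *ti,
                                            struct queue_limits *limits)
        {
                struct stripe_c *sc = ti->private;
                unsigned int io_min, io_opt;

                /* only set the limits if chunk_size << SECTOR_SHIFT and the
                 * multiplication by the stripe count fit in unsigned int */
                if (!check_shl_overflow(sc->chunk_size, SECTOR_SHIFT, &io_min) &&
                    !check_mul_overflow(io_min, sc->stripes, &io_opt)) {
                        limits->io_min = io_min;
                        limits->io_opt = io_opt;
                }
        }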

 
dpaa2-switch: fix buffer pool seeding for control traffic [+ + +]
Author: Ioana Ciornei <[email protected]>
Date:   Wed Sep 10 17:48:25 2025 +0300

    dpaa2-switch: fix buffer pool seeding for control traffic
    
    [ Upstream commit 2690cb089502b80b905f2abdafd1bf2d54e1abef ]
    
    Starting with commit c50e7475961c ("dpaa2-switch: Fix error checking in
    dpaa2_switch_seed_bp()"), the probing of a second DPSW object errors out
    like below.
    
    fsl_dpaa2_switch dpsw.1: fsl_mc_driver_probe failed: -12
    fsl_dpaa2_switch dpsw.1: probe with driver fsl_dpaa2_switch failed with error -12
    
    The aforementioned commit brought to the surface the fact that seeding
    buffers into the buffer pool destined for control traffic is not
    successful, and a recoverable access-violation error can be seen in the
    MC firmware log:
    
    [E, qbman_rec_isr:391, QBMAN]  QBMAN recoverable event 0x1000000
    
    This happens because the driver incorrectly used the ID of the DPBP
    object instead of the hardware buffer pool ID when trying to release
    buffers into it.
    
    This is because any DPSW object uses two buffer pools, one managed by
    the Linux driver and destined for control traffic packet buffers and the
    other one managed by the MC firmware and destined only for offloaded
    traffic. And since the buffer pool managed by the MC firmware does not
    have an external facing DPBP equivalent, any subsequent DPBP objects
    created after the first DPSW will have a DPBP id different to the
    underlying hardware buffer ID.
    
    The issue was not caught earlier because these two numbers can be
    identical when all DPBP objects are created before the DPSW objects are.
    This is the case when the DPL file is used to describe the entire DPAA2
    object layout and objects are created at boot time and it's also true
    for the first DPSW being created dynamically using ls-addsw.
    
    Fix this by using the buffer pool ID instead of the DPBP id when
    releasing buffers into the pool.
    
    Fixes: 2877e4f7e189 ("staging: dpaa2-switch: setup buffer pool and RX path rings")
    Signed-off-by: Ioana Ciornei <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
drm/amd/display: Allow RX6xxx & RX7700 to invoke amdgpu_irq_get/put [+ + +]
Author: Ivan Lipski <[email protected]>
Date:   Tue Sep 2 16:20:09 2025 -0400

    drm/amd/display: Allow RX6xxx & RX7700 to invoke amdgpu_irq_get/put
    
    commit 29a2f430475357f760679b249f33e7282688e292 upstream.
    
    [Why&How]
    As reported on https://gitlab.freedesktop.org/drm/amd/-/issues/3936,
    an SMU hang can occur if the interrupts are not enabled appropriately,
    causing a vblank timeout.
    
    This patch reverts commit 5009628d8509 ("drm/amd/display: Remove unnecessary
    amdgpu_irq_get/put"), but only for RX6xxx & RX7700 GPUs, on which the
    issue was observed.
    
    This will re-enable interrupts regardless of whether user space needed
    them or not.
    
    Fixes: 5009628d8509 ("drm/amd/display: Remove unnecessary amdgpu_irq_get/put")
    Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3936
    Suggested-by: Sun peng Li <[email protected]>
    Reviewed-by: Sun peng Li <[email protected]>
    Signed-off-by: Ivan Lipski <[email protected]>
    Signed-off-by: Ray Wu <[email protected]>
    Tested-by: Daniel Wheeler <[email protected]>
    Signed-off-by: Alex Deucher <[email protected]>
    (cherry picked from commit 95d168b367aa28a59f94fc690ff76ebf69312c6d)
    Cc: [email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
drm/xe/tile: Release kobject for the failure path [+ + +]
Author: Shuicheng Lin <[email protected]>
Date:   Tue Aug 19 15:39:51 2025 +0000

    drm/xe/tile: Release kobject for the failure path
    
    [ Upstream commit 013e484dbd687a9174acf8f4450217bdb86ad788 ]
    
    Call kobject_put() for the failure path to release the kobject.
    
    v2: remove extra newline. (Matt)
    
    Fixes: e3d0839aa501 ("drm/xe/tile: Abort driver load for sysfs creation failure")
    Cc: Himal Prasad Ghimiray <[email protected]>
    Reviewed-by: Matthew Brost <[email protected]>
    Signed-off-by: Shuicheng Lin <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Lucas De Marchi <[email protected]>
    (cherry picked from commit b98775bca99511cc22ab459a2de646cd2fa7241f)
    Signed-off-by: Rodrigo Vivi <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
drm/xe: Fix a NULL vs IS_ERR() in xe_vm_add_compute_exec_queue() [+ + +]
Author: Dan Carpenter <[email protected]>
Date:   Thu Aug 7 18:53:41 2025 +0300

    drm/xe: Fix a NULL vs IS_ERR() in xe_vm_add_compute_exec_queue()
    
    [ Upstream commit cbc7f3b4f6ca19320e2eacf8fc1403d6f331ce14 ]
    
    The xe_preempt_fence_create() function returns error pointers.  It
    never returns NULL.  Update the error checking to match.
    
    Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
    Signed-off-by: Dan Carpenter <[email protected]>
    Reviewed-by: Matthew Brost <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Rodrigo Vivi <[email protected]>
    (cherry picked from commit 75cc23ffe5b422bc3cbd5cf0956b8b86e4b0e162)
    Signed-off-by: Rodrigo Vivi <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
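
    The corrected check, sketched with placeholder arguments and labels:

        pfence = xe_preempt_fence_create(q, context, seqno);
        if (IS_ERR(pfence)) {
                /* was: if (!pfence) err = -ENOMEM; a NULL test never fires
                 * for an ERR_PTR() return */
                err = PTR_ERR(pfence);
                goto out_fini;
        }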

 
drm: bridge: anx7625: Fix NULL pointer dereference with early IRQ [+ + +]
Author: Loic Poulain <[email protected]>
Date:   Wed Jul 9 10:54:38 2025 +0200

    drm: bridge: anx7625: Fix NULL pointer dereference with early IRQ
    
    [ Upstream commit a10f910c77f280327b481e77eab909934ec508f0 ]
    
    If the interrupt occurs before resource initialization is complete, the
    interrupt handler/worker may access uninitialized data such as the I2C
    tcpc_client device, potentially leading to NULL pointer dereference.
    
    Signed-off-by: Loic Poulain <[email protected]>
    Fixes: 8bdfc5dae4e3 ("drm/bridge: anx7625: Add anx7625 MIPI DSI/DPI to DP")
    Reviewed-by: Dmitry Baryshkov <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Dmitry Baryshkov <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

drm: bridge: cdns-mhdp8546: Fix missing mutex unlock on error path [+ + +]
Author: Qi Xi <[email protected]>
Date:   Thu Sep 4 11:44:47 2025 +0800

    drm: bridge: cdns-mhdp8546: Fix missing mutex unlock on error path
    
    [ Upstream commit 288dac9fb6084330d968459c750c838fd06e10e6 ]
    
    Add missing mutex unlock before returning from the error path in
    cdns_mhdp_atomic_enable().
    
    Fixes: 935a92a1c400 ("drm: bridge: cdns-mhdp8546: Fix possible null pointer dereference")
    Reported-by: Hulk Robot <[email protected]>
    Signed-off-by: Qi Xi <[email protected]>
    Reviewed-by: Luca Ceresoli <[email protected]>
    Reviewed-by: Dmitry Baryshkov <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Luca Ceresoli <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
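
    The pattern of the fix, sketched (the lock name follows the driver; the
    failing condition stands in for the NULL check added by 935a92a1c400):

        mutex_lock(&mhdp->link_mutex);

        if (!some_required_state) {     /* placeholder condition */
                ret = -EINVAL;
                goto out_unlock;        /* previously: return, leaving the mutex held */
        }

        /* ... */

        out_unlock:
        mutex_unlock(&mhdp->link_mutex);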

 
gup: optimize longterm pin_user_pages() for large folio [+ + +]
Author: Li Zhe <[email protected]>
Date:   Fri Jun 6 10:37:42 2025 +0800

    gup: optimize longterm pin_user_pages() for large folio
    
    commit a03db236aebfaeadf79396dbd570896b870bda01 upstream.
    
    In the current implementation of longterm pin_user_pages(), we invoke
    collect_longterm_unpinnable_folios().  This function iterates through the
    list to check whether each folio belongs to the "longterm_unpinnable"
    category.  The folios in this list essentially correspond to a contiguous
    region of userspace addresses, with each folio representing a physical
    address in increments of PAGESIZE.
    
    If this userspace address range is mapped with large folios, we can
    optimize the performance of collect_longterm_unpinnable_folios()
    by reducing the use of READ_ONCE() invoked in
    pofs_get_folio()->page_folio()->_compound_head().
    
    Also, we can simplify the logic of collect_longterm_unpinnable_folios().
    Instead of comparing with prev_folio after calling pofs_get_folio(), we
    can check whether the next page is within the same folio.
    
    The performance test results, based on v6.15, obtained through the
    gup_test tool from the kernel source tree are as follows.  We achieve an
    improvement of over 66% for large folios with pagesize=2M.  For small
    folios, we have only observed a very slight degradation in performance.
    
    Without this patch:
    
        [root@localhost ~] ./gup_test -HL -m 8192 -n 512
        TAP version 13
        1..1
        # PIN_LONGTERM_BENCHMARK: Time: get:14391 put:10858 us#
        ok 1 ioctl status 0
        # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
        [root@localhost ~]# ./gup_test -LT -m 8192 -n 512
        TAP version 13
        1..1
        # PIN_LONGTERM_BENCHMARK: Time: get:130538 put:31676 us#
        ok 1 ioctl status 0
        # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
    
    With this patch:
    
        [root@localhost ~] ./gup_test -HL -m 8192 -n 512
        TAP version 13
        1..1
        # PIN_LONGTERM_BENCHMARK: Time: get:4867 put:10516 us#
        ok 1 ioctl status 0
        # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
        [root@localhost ~]# ./gup_test -LT -m 8192 -n 512
        TAP version 13
        1..1
        # PIN_LONGTERM_BENCHMARK: Time: get:131798 put:31328 us#
        ok 1 ioctl status 0
        # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
    
    [[email protected]: whitespace fix, per David]
      Link: https://lkml.kernel.org/r/[email protected]
    Link: https://lkml.kernel.org/r/[email protected]
    Signed-off-by: Li Zhe <[email protected]>
    Cc: David Hildenbrand <[email protected]>
    Cc: Dev Jain <[email protected]>
    Cc: Jason Gunthorpe <[email protected]>
    Cc: John Hubbard <[email protected]>
    Cc: Muchun Song <[email protected]>
    Cc: Peter Xu <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
i40e: remove redundant memory barrier when cleaning Tx descs [+ + +]
Author: Maciej Fijalkowski <[email protected]>
Date:   Fri Aug 22 17:16:17 2025 +0200

    i40e: remove redundant memory barrier when cleaning Tx descs
    
    [ Upstream commit e37084a26070c546ae7961ee135bbfb15fbe13fd ]
    
    i40e has a feature which writes the position of the last successfully sent
    descriptor back to a memory location (head write-back). The memory barrier
    in i40e_clean_tx_irq() was used to avoid forward-reading descriptor fields
    in case the DD bit was not set. With the mentioned feature in place, such a
    situation will not happen, as we know in advance how many descriptors the
    HW has dealt with.

    Besides, this barrier placement was wrong. The idea is to have this
    protection *after* reading the DD bit from the HW descriptor, not before.
    Digging through git history showed that the barrier was indeed before the
    DD bit check; in any case, the commit introducing i40e_get_head() should
    have wiped it out altogether.

    Also, there was one commit doing s/read_barrier_depends/smp_rmb when the
    get head feature was already in place, but it was only theoretical, based
    on ixgbe experience, which differs in these terms as that driver has to
    read the DD bit from the HW descriptor.
    
    Fixes: 1943d8ba9507 ("i40e/i40evf: enable hardware feature head write back")
    Signed-off-by: Maciej Fijalkowski <[email protected]>
    Reviewed-by: Aleksandr Loktionov <[email protected]>
    Tested-by: Rinitha S <[email protected]> (A Contingent worker at Intel)
    Signed-off-by: Tony Nguyen <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
ice: fix Rx page leak on multi-buffer frames [+ + +]
Author: Jacob Keller <[email protected]>
Date:   Mon Aug 25 16:00:14 2025 -0700

    ice: fix Rx page leak on multi-buffer frames
    
    [ Upstream commit 84bf1ac85af84d354c7a2fdbdc0d4efc8aaec34b ]
    
    The ice_put_rx_mbuf() function handles calling ice_put_rx_buf() for each
    buffer in the current frame. This function was introduced as part of
    handling multi-buffer XDP support in the ice driver.
    
    It works by iterating over the buffers from first_desc up to 1 plus the
    total number of fragments in the frame, cached from before the XDP program
    was executed.
    
    If the hardware posts a descriptor with a size of 0, the logic used in
    ice_put_rx_mbuf() breaks. Such descriptors get skipped and don't get added
    as fragments in ice_add_xdp_frag. Since the buffer isn't counted as a
    fragment, we do not iterate over it in ice_put_rx_mbuf(), and thus we don't
    call ice_put_rx_buf().
    
    Because we don't call ice_put_rx_buf(), we don't attempt to re-use the
    page or free it. This leaves a stale page in the ring, as we don't
    increment next_to_alloc.
    
    The ice_reuse_rx_page() assumes that the next_to_alloc has been incremented
    properly, and that it always points to a buffer with a NULL page. Since
    this function doesn't check, it will happily recycle a page over the top
    of the next_to_alloc buffer, losing track of the old page.
    
    Note that this leak only occurs for multi-buffer frames. The
    ice_put_rx_mbuf() function always handles at least one buffer, so a
    single-buffer frame will always get handled correctly. It is not clear
    precisely why the hardware hands us descriptors with a size of 0 sometimes,
    but it happens somewhat regularly with "jumbo frames" used by 9K MTU.
    
    To fix ice_put_rx_mbuf(), we need to make sure to call ice_put_rx_buf() on
    all buffers between first_desc and next_to_clean. Borrow the logic of a
    similar function in i40e used for this same purpose. Use the same logic
    also in ice_get_pgcnts().
    
    Instead of iterating over just the number of fragments, use a loop which
    iterates until the current index reaches the next_to_clean element just
    past the current frame. Unlike i40e, the ice_put_rx_mbuf() function does
    call ice_put_rx_buf() on the last buffer of the frame indicating the end of
    packet.
    
    For non-linear (multi-buffer) frames, we need to take care when adjusting
    the pagecnt_bias. An XDP program might release fragments from the tail of
    the frame, in which case that fragment page is already released. Only
    update the pagecnt_bias for the first descriptor and fragments still
    remaining post-XDP program. Take care to only access the shared info for
    fragmented buffers, as this avoids a significant cache miss.
    
    The xdp_xmit value only needs to be updated if an XDP program is run, and
    only once per packet. Drop the xdp_xmit pointer argument from
    ice_put_rx_mbuf(). Instead, set xdp_xmit in the ice_clean_rx_irq() function
    directly. This avoids needing to pass the argument and avoids an extra
    bit-wise OR for each buffer in the frame.
    
    Move the increment of the ntc local variable to ensure it's updated *before*
    all calls to ice_get_pgcnts() or ice_put_rx_mbuf(), as the loop logic
    requires the index of the element just after the current frame.
    
    Now that we use an index pointer in the ring to identify the packet, we no
    longer need to track or cache the number of fragments in the rx_ring.
    
    Cc: Christoph Petrausch <[email protected]>
    Cc: Jesper Dangaard Brouer <[email protected]>
    Reported-by: Jaroslav Pulchart <[email protected]>
    Closes: https://lore.kernel.org/netdev/CAK8fFZ4hY6GUJNENz3wY9jaYLZXGfpr7dnZxzGMYoE44caRbgw@mail.gmail.com/
    Fixes: 743bbd93cf29 ("ice: put Rx buffers after being done with current frame")
    Tested-by: Michal Kubiak <[email protected]>
    Signed-off-by: Jacob Keller <[email protected]>
    Acked-by: Jesper Dangaard Brouer <[email protected]>
    Tested-by: Priya Singh <[email protected]>
    Tested-by: Rinitha S <[email protected]> (A Contingent worker at Intel)
    Signed-off-by: Tony Nguyen <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
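
    The loop shape the fix describes, sketched (ring and buffer field names as
    used in the driver; error handling elided):

        u32 i = rx_ring->first_desc;

        /* walk every buffer of the frame, up to but not including ntc, the
         * index just past the current frame */
        while (i != ntc) {
                struct ice_rx_buf *buf = &rx_ring->rx_buf[i];

                ice_put_rx_buf(rx_ring, buf);
                if (++i == rx_ring->count)
                        i = 0;
        }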

ice: store max_frame and rx_buf_len only in ice_rx_ring [+ + +]
Author: Jacob Keller <[email protected]>
Date:   Mon Sep 9 16:07:45 2024 -0700

    ice: store max_frame and rx_buf_len only in ice_rx_ring
    
    [ Upstream commit 7e61c89c6065731dfc11ac7a2c0dd27a910f2afb ]
    
    The max_frame and rx_buf_len fields of the VSI set the maximum frame size
    for packets on the wire, and configure the size of the Rx buffer. In the
    hardware, these are per-queue configuration. Most VSI types use a simple
    method to determine the size of the buffers for all queues.
    
    However, VFs may potentially configure different values for each queue.
    While the Linux iAVF driver does not do this, it is allowed by the virtchnl
    interface.
    
    The current virtchnl code simply sets the per-VSI fields in between calls to
    ice_vsi_cfg_single_rxq(). This technically works, as these fields are only
    ever used when programming the Rx ring, and otherwise not checked again.
    However, it is confusing to maintain.
    
    The Rx ring also already has an rx_buf_len field in order to access the
    buffer length in the hotpath. It also has extra unused bytes in the ring
    structure which we can make use of to store the maximum frame size.
    
    Drop the VSI max_frame and rx_buf_len fields. Add max_frame to the Rx ring,
    and slightly re-order rx_buf_len to better fit into the gaps in the
    structure layout.
    
    Change the ice_vsi_cfg_frame_size function so that it writes to the ring
    fields. Call this function once per ring in ice_vsi_cfg_rxqs(). This is
    done over calling it inside the ice_vsi_cfg_rxq(), because
    ice_vsi_cfg_rxq() is called in the virtchnl flow where the max_frame and
    rx_buf_len have already been configured.
    
    Change the accesses for rx_buf_len and max_frame to all point to the ring
    structure. This has the added benefit that ice_vsi_cfg_rxq() no longer has
    the surprise side effect of updating ring->rx_buf_len based on the VSI
    field.
    
    Update the virtchnl ice_vc_cfg_qs_msg() function to set the ring values
    directly, and drop references to the removed VSI fields.
    
    This now makes the VF logic clear, as the ring fields are obviously
    per-queue. This reduces the required cognitive load when reasoning about
    this logic.
    
    Note that removing the VSI fields does leave a 4 byte gap, but the ice_vsi
    structure has many gaps, and its layout is not as critical in the hot path.
    The structure may benefit from a more thorough repacking, but no attempt
    was made in this change.
    
    Signed-off-by: Jacob Keller <[email protected]>
    Tested-by: Rafal Romanowski <[email protected]>
    Signed-off-by: Tony Nguyen <[email protected]>
    Stable-dep-of: 84bf1ac85af8 ("ice: fix Rx page leak on multi-buffer frames")
    Signed-off-by: Sasha Levin <[email protected]>

 
igc: don't fail igc_probe() on LED setup error [+ + +]
Author: Kohei Enju <[email protected]>
Date:   Wed Sep 10 22:47:21 2025 +0900

    igc: don't fail igc_probe() on LED setup error
    
    [ Upstream commit 528eb4e19ec0df30d0c9ae4074ce945667dde919 ]
    
    When igc_led_setup() fails, igc_probe() fails and triggers a kernel panic
    in free_netdev() since unregister_netdev() is not called. [1]
    This behavior can be tested using the fault-injection framework,
    especially the failslab feature. [2]
    
    Since LED support is not mandatory, treat LED setup failures as
    non-fatal and continue the probe with a warning message, consequently
    avoiding the kernel panic.
    
    [1]
     kernel BUG at net/core/dev.c:12047!
     Oops: invalid opcode: 0000 [#1] SMP NOPTI
     CPU: 0 UID: 0 PID: 937 Comm: repro-igc-led-e Not tainted 6.17.0-rc4-enjuk-tnguy-00865-gc4940196ab02 #64 PREEMPT(voluntary)
     Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
     RIP: 0010:free_netdev+0x278/0x2b0
     [...]
     Call Trace:
      <TASK>
      igc_probe+0x370/0x910
      local_pci_probe+0x3a/0x80
      pci_device_probe+0xd1/0x200
     [...]
    
    [2]
     #!/bin/bash -ex
    
     FAILSLAB_PATH=/sys/kernel/debug/failslab/
     DEVICE=0000:00:05.0
     START_ADDR=$(grep " igc_led_setup" /proc/kallsyms \
             | awk '{printf("0x%s", $1)}')
     END_ADDR=$(printf "0x%x" $((START_ADDR + 0x100)))
    
     echo $START_ADDR > $FAILSLAB_PATH/require-start
     echo $END_ADDR > $FAILSLAB_PATH/require-end
     echo 1 > $FAILSLAB_PATH/times
     echo 100 > $FAILSLAB_PATH/probability
     echo N > $FAILSLAB_PATH/ignore-gfp-wait
    
     echo $DEVICE > /sys/bus/pci/drivers/igc/bind
    
    Fixes: ea578703b03d ("igc: Add support for LEDs on i225/i226")
    Signed-off-by: Kohei Enju <[email protected]>
    Reviewed-by: Paul Menzel <[email protected]>
    Reviewed-by: Aleksandr Loktionov <[email protected]>
    Reviewed-by: Vitaly Lifshits <[email protected]>
    Reviewed-by: Kurt Kanzenbach <[email protected]>
    Tested-by: Mor Bar-Gabay <[email protected]>
    Signed-off-by: Tony Nguyen <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
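
    The gist of the fix, sketched (the warning text is illustrative):

        err = igc_led_setup(adapter);
        if (err)
                /* LEDs are optional: warn and keep the device usable instead of
                 * unwinding probe, which ended in free_netdev() on a still
                 * registered netdev and hence the BUG above */
                netdev_warn(netdev,
                            "LED setup failed (%d), continuing without LED support\n",
                            err);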

 
io_uring/cmd: let cmds to know about dying task [+ + +]
Author: Pavel Begunkov <[email protected]>
Date:   Mon Nov 4 16:12:04 2024 +0000

    io_uring/cmd: let cmds to know about dying task
    
    Commit df3b8ca604f224eb4cd51669416ad4d607682273 upstream.
    
    When the task that submitted a request is dying, a task work for that
    request might get run by a kernel thread or even worse by a half
    dismantled task. We can't just cancel the task work without running the
    callback as the cmd might need to do some clean up, so pass a flag
    instead. If set, it's not safe to access any task resources and the
    callback is expected to cancel the cmd ASAP.
    
    Reviewed-by: Jens Axboe <[email protected]>
    Reviewed-by: Ming Lei <[email protected]>
    Signed-off-by: Pavel Begunkov <[email protected]>
    Signed-off-by: David Sterba <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
io_uring/kbuf: drop WARN_ON_ONCE() from incremental length check [+ + +]
Author: Jens Axboe <[email protected]>
Date:   Thu Sep 18 15:45:41 2025 -0600

    io_uring/kbuf: drop WARN_ON_ONCE() from incremental length check
    
    Partially based on commit 98b6fa62c84f2e129161e976a5b9b3cb4ccd117b upstream.
    
    This can be triggered by userspace, so just drop it. The condition
    is appropriately handled.
    
    Signed-off-by: Jens Axboe <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
io_uring/msg_ring: kill alloc_cache for io_kiocb allocations [+ + +]
Author: Jens Axboe <[email protected]>
Date:   Thu Sep 18 14:16:53 2025 -0600

    io_uring/msg_ring: kill alloc_cache for io_kiocb allocations
    
    Commit df8922afc37aa2111ca79a216653a629146763ad upstream.
    
    A recent commit:
    
    fc582cd26e88 ("io_uring/msg_ring: ensure io_kiocb freeing is deferred for RCU")
    
    fixed an issue with not deferring freeing of io_kiocb structs that
    msg_ring allocates to after the current RCU grace period. But this only
    covers requests that don't end up in the allocation cache. If a request
    goes into the alloc cache, it can get reused before it is sane to do so.
    A recent syzbot report would seem to indicate that there's something
    there; however, it may very well just be because of the KASAN poisoning
    that the alloc_cache handles manually.
    
    Rather than attempt to make the alloc_cache sane for that use case, just
    drop the usage of the alloc_cache for msg_ring request payload data.
    
    Fixes: 50cf5f3842af ("io_uring/msg_ring: add an alloc cache for io_kiocb entries")
    Link: https://lore.kernel.org/io-uring/[email protected]/
    Reported-by: [email protected]
    Signed-off-by: Jens Axboe <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
io_uring: backport io_should_terminate_tw() [+ + +]
Author: Jens Axboe <[email protected]>
Date:   Thu Sep 18 11:27:06 2025 -0600

    io_uring: backport io_should_terminate_tw()
    
    Parts of commit b6f58a3f4aa8dba424356c7a69388a81f4459300 upstream.
    
    Backport io_should_terminate_tw() helper to judge whether task_work
    should be run or terminated.
    
    Signed-off-by: Jens Axboe <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

io_uring: fix incorrect io_kiocb reference in io_link_skb [+ + +]
Author: Yang Xiuwei <[email protected]>
Date:   Fri Sep 19 17:03:52 2025 +0800

    io_uring: fix incorrect io_kiocb reference in io_link_skb
    
    [ Upstream commit 2c139a47eff8de24e3350dadb4c9d5e3426db826 ]
    
    In io_link_skb function, there is a bug where prev_notif is incorrectly
    assigned using 'nd' instead of 'prev_nd'. This causes the context
    validation check to compare the current notification with itself instead
    of comparing it with the previous notification.
    
    Fix by using the correct prev_nd parameter when obtaining prev_notif.
    
    Signed-off-by: Yang Xiuwei <[email protected]>
    Reviewed-by: Pavel Begunkov <[email protected]>
    Fixes: 6fe4220912d19 ("io_uring/notif: implement notification stacking")
    Signed-off-by: Jens Axboe <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
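
    The one-liner in miniature (types as used in io_uring/notif.c; sketch):

        struct io_notif_data *prev_nd =
                container_of(skb_zcopy(skb), struct io_notif_data, uarg);
        struct io_kiocb *prev_notif = cmd_to_io_kiocb(prev_nd);
        /* the bug passed nd (the current notification) instead of prev_nd,
         * so the context check compared the notification with itself */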

io_uring: include dying ring in task_work "should cancel" state [+ + +]
Author: Jens Axboe <[email protected]>
Date:   Thu Sep 18 10:21:14 2025 -0600

    io_uring: include dying ring in task_work "should cancel" state
    
    Commit 3539b1467e94336d5854ebf976d9627bfb65d6c3 upstream.
    
    When running task_work for an exiting task, rather than perform the
    issue retry attempt, the task_work is canceled. However, this isn't
    done for a ring that has been closed. This can lead to requests being
    successfully completed after the ring has been closed, which is somewhat
    confusing and surprising to an application.
    
    Rather than just check the task exit state, also include the ring
    ref state in deciding whether or not to terminate a given request when
    run from task_work.
    
    Cc: [email protected] # 6.1+
    Link: https://github.com/axboe/liburing/discussions/1459
    Reported-by: Benedek Thaler <[email protected]>
    Signed-off-by: Jens Axboe <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
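
    A sketch of the combined check (the helper's exact signature in this
    backport may differ):

        static inline bool io_should_terminate_tw(struct io_ring_ctx *ctx)
        {
                /* cancel task_work if the submitting task is exiting ... */
                if (current->flags & (PF_KTHREAD | PF_EXITING))
                        return true;
                /* ... or if the ring itself is already being torn down */
                return percpu_ref_is_dying(&ctx->refs);
        }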

 
iommu/amd/pgtbl: Fix possible race while increase page table level [+ + +]
Author: Vasant Hegde <[email protected]>
Date:   Sat Sep 13 06:26:57 2025 +0000

    iommu/amd/pgtbl: Fix possible race while increase page table level
    
    commit 1e56310b40fd2e7e0b9493da9ff488af145bdd0c upstream.
    
    The AMD IOMMU host page table implementation supports dynamic page table levels
    (up to 6 levels), starting with a 3-level configuration that expands based on
    IOVA address. The kernel maintains a root pointer and current page table level
    to enable proper page table walks in alloc_pte()/fetch_pte() operations.
    
    The IOMMU IOVA allocator initially starts with a 32-bit address and, once it's
    exhausted, it switches to a 64-bit address (the max address is determined based
    on IOMMU and device DMA capability). To support larger IOVAs, the AMD IOMMU
    driver increases the page table level.

    But in the unmap path (iommu_v1_unmap_pages()), fetch_pte() reads
    pgtable->[root/mode] without a lock. So it's possible that, in an extreme corner
    case, when increase_address_space() is updating pgtable->[root/mode], fetch_pte()
    reads the wrong page table level (pgtable->mode). It then compares the value with
    the level encoded in the page table and returns NULL. This results in the
    iommu_unmap ops failing, and the upper layer may retry/log a WARN_ON.
    
    CPU 0                                         CPU 1
    ------                                       ------
    map pages                                    unmap pages
    alloc_pte() -> increase_address_space()      iommu_v1_unmap_pages() -> fetch_pte()
      pgtable->root = pte (new root value)
                                                 READ pgtable->[mode/root]
                                                   Reads new root, old mode
      Updates mode (pgtable->mode += 1)
    
    Since page table level updates are infrequent and already synchronized with a
    spinlock, implement a seqcount to enable lock-free read operations on the read
    path.
    
    Fixes: 754265bcab7 ("iommu/amd: Fix race in increase_address_space()")
    Reported-by: Alejandro Jimenez <[email protected]>
    Cc: [email protected]
    Cc: Joao Martins <[email protected]>
    Cc: Suravee Suthikulpanit <[email protected]>
    Signed-off-by: Vasant Hegde <[email protected]>
    Signed-off-by: Joerg Roedel <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
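
    Sketch of the seqcount pairing described above (the seqcount field itself is
    what the patch adds; names approximate):

        /* writer, already serialized by the existing spinlock */
        write_seqcount_begin(&pgtable->seqcount);
        pgtable->root = pte;            /* publish the new root first ... */
        pgtable->mode += 1;             /* ... then the matching level */
        write_seqcount_end(&pgtable->seqcount);

        /* lock-free reader in fetch_pte() */
        do {
                seq  = read_seqcount_begin(&pgtable->seqcount);
                mode = pgtable->mode;
                root = pgtable->root;
        } while (read_seqcount_retry(&pgtable->seqcount, seq));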

 
iommu/vt-d: Fix __domain_mapping()'s usage of switch_to_super_page() [+ + +]
Author: Eugene Koira <[email protected]>
Date:   Wed Sep 3 13:53:29 2025 +0800

    iommu/vt-d: Fix __domain_mapping()'s usage of switch_to_super_page()
    
    commit dce043c07ca1ac19cfbe2844a6dc71e35c322353 upstream.
    
    switch_to_super_page() assumes the memory range it's working on is aligned
    to the target large page level. Unfortunately, __domain_mapping() doesn't
    take this into account when using it, and will pass unaligned ranges
    ultimately freeing a PTE range larger than expected.
    
    Take for example a mapping with the following iov_pfn range [0x3fe400,
    0x4c0600), which should be backed by the following mappings:
    
       iov_pfn [0x3fe400, 0x3fffff] covered by 2MiB pages
       iov_pfn [0x400000, 0x4bffff] covered by 1GiB pages
       iov_pfn [0x4c0000, 0x4c05ff] covered by 2MiB pages
    
    Under this circumstance, __domain_mapping() will pass [0x400000, 0x4c05ff]
    to switch_to_super_page() at a 1 GiB granularity, which will in turn
    free PTEs all the way to iov_pfn 0x4fffff.
    
    Mitigate this by rounding down the iov_pfn range passed to
    switch_to_super_page() in __domain_mapping()
    to the target large page level.
    
    Additionally add range alignment checks to switch_to_super_page.
    
    Fixes: 9906b9352a35 ("iommu/vt-d: Avoid duplicate removing in __domain_mapping()")
    Signed-off-by: Eugene Koira <[email protected]>
    Cc: [email protected]
    Reviewed-by: Nicolas Saenz Julienne <[email protected]>
    Reviewed-by: David Woodhouse <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Lu Baolu <[email protected]>
    Signed-off-by: Joerg Roedel <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
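
    Sketch of the rounding the fix describes (helper names as used in the Intel
    IOMMU driver; each page-table level covers 9 more address bits):

        unsigned long lvl_pages = lvl_to_nr_pages(largepage_lvl);

        /* only hand switch_to_super_page() a range aligned to the target level */
        switch_to_super_page(domain, ALIGN_DOWN(iov_pfn, lvl_pages),
                             end_pfn, largepage_lvl);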

 
ksmbd: smbdirect: validate data_offset and data_length field of smb_direct_data_transfer [+ + +]
Author: Namjae Jeon <[email protected]>
Date:   Wed Sep 10 11:22:52 2025 +0900

    ksmbd: smbdirect: validate data_offset and data_length field of smb_direct_data_transfer
    
    commit 5282491fc49d5614ac6ddcd012e5743eecb6a67c upstream.
    
    If the data_offset and data_length fields of the smb_direct_data_transfer
    struct are invalid, an out-of-bounds issue could happen.
    This patch validates the data_offset and data_length fields in recv_done().
    
    Cc: [email protected]
    Fixes: 2ea086e35c3d ("ksmbd: add buffer validation for smb direct")
    Reviewed-by: Stefan Metzmacher <[email protected]>
    Reported-by: Luigino Camastra, Aisle Research <[email protected]>
    Signed-off-by: Namjae Jeon <[email protected]>
    Signed-off-by: Steve French <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
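
    The kind of check this adds, sketched against the wire format (not the
    verbatim recv_done() code):

        u32 data_offset = le32_to_cpu(data_transfer->data_offset);
        u32 data_length = le32_to_cpu(data_transfer->data_length);

        if (data_length &&
            (data_offset < sizeof(struct smb_direct_data_transfer) ||
             (u64)data_offset + data_length > (u64)wc->byte_len)) {
                /* malformed payload: drop the message and disconnect */
                return;
        }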

ksmbd: smbdirect: verify remaining_data_length respects max_fragmented_recv_size [+ + +]
Author: Stefan Metzmacher <[email protected]>
Date:   Thu Sep 11 10:05:23 2025 +0900

    ksmbd: smbdirect: verify remaining_data_length respects max_fragmented_recv_size
    
    commit e1868ba37fd27c6a68e31565402b154beaa65df0 upstream.
    
    This is inspired by the check for data_offset + data_length.
    
    Cc: Steve French <[email protected]>
    Cc: Tom Talpey <[email protected]>
    Cc: [email protected]
    Cc: [email protected]
    Cc: [email protected]
    Fixes: 2ea086e35c3d ("ksmbd: add buffer validation for smb direct")
    Acked-by: Namjae Jeon <[email protected]>
    Signed-off-by: Stefan Metzmacher <[email protected]>
    Signed-off-by: Steve French <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1 VM count transitions [+ + +]
Author: Sean Christopherson <[email protected]>
Date:   Mon May 5 11:03:00 2025 -0700

    KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1 VM count transitions
    
    commit e3417ab75ab2e7dca6372a1bfa26b1be3ac5889e upstream.
    
    Set the magic BP_SPEC_REDUCE bit to mitigate SRSO when running VMs if and
    only if KVM has at least one active VM.  Leaving the bit set at all times
    unfortunately degrades performance by a wee bit more than expected.
    
    Use a dedicated spinlock and counter instead of hooking virtualization
    enablement, as changing the behavior of kvm.enable_virt_at_load based on
    SRSO_BP_SPEC_REDUCE is painful, and has its own drawbacks, e.g. could
    result in performance issues for flows that are sensitive to VM creation
    latency.
    
    Defer setting BP_SPEC_REDUCE until VMRUN is imminent to avoid impacting
    performance on CPUs that aren't running VMs, e.g. if a setup is using
    housekeeping CPUs.  Setting BP_SPEC_REDUCE in task context, i.e. without
    blasting IPIs to all CPUs, also helps avoid serializing 1<=>N transitions
    without incurring a gross amount of complexity (see the Link for details
    on how ugly coordinating via IPIs gets).
    
    Link: https://lore.kernel.org/all/[email protected]
    Fixes: 8442df2b49ed ("x86/bugs: KVM: Add support for SRSO_MSR_FIX")
    Reported-by: Michael Larabel <[email protected]>
    Closes: https://www.phoronix.com/review/linux-615-amd-regression
    Cc: Borislav Petkov <[email protected]>
    Tested-by: Borislav Petkov (AMD) <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Sean Christopherson <[email protected]>
    Signed-off-by: Harshit Mogalapalli <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
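
    The 0 <=> 1 transition tracking described above follows a common
    count-gated pattern; below is a hedged, simplified sketch (hypothetical
    names, stubbed-out MSR writes, and the deferral until VMRUN is imminent
    omitted), not KVM's actual code:

      #include <linux/spinlock.h>

      static DEFINE_SPINLOCK(example_srso_lock);
      static unsigned int example_srso_nr_vms;

      /* stand-ins for the real MSR bit set/clear, hypothetical */
      static void example_srso_enable(void)  { /* would set BP_SPEC_REDUCE */ }
      static void example_srso_disable(void) { /* would clear BP_SPEC_REDUCE */ }

      static void example_vm_created(void)
      {
              spin_lock(&example_srso_lock);
              if (example_srso_nr_vms++ == 0)         /* 0 -> 1 transition */
                      example_srso_enable();
              spin_unlock(&example_srso_lock);
      }

      static void example_vm_destroyed(void)
      {
              spin_lock(&example_srso_lock);
              if (--example_srso_nr_vms == 0)         /* 1 -> 0 transition */
                      example_srso_disable();
              spin_unlock(&example_srso_lock);
      }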

KVM: SVM: Sync TPR from LAPIC into VMCB::V_TPR even if AVIC is active [+ + +]
Author: Maciej S. Szmigiero <[email protected]>
Date:   Mon Aug 25 18:44:28 2025 +0200

    KVM: SVM: Sync TPR from LAPIC into VMCB::V_TPR even if AVIC is active
    
    commit d02e48830e3fce9701265f6c5a58d9bdaf906a76 upstream.
    
    Commit 3bbf3565f48c ("svm: Do not intercept CR8 when enable AVIC")
    inhibited pre-VMRUN sync of TPR from LAPIC into VMCB::V_TPR in
    sync_lapic_to_cr8() when AVIC is active.
    
    AVIC does automatically sync between these two fields; however, it does
    so only on explicit guest writes to one of these fields, not on a bare
    VMRUN.

    This meant that, with AVIC enabled, host changes to TPR in the LAPIC
    state might not get automatically copied into the V_TPR field of the VMCB.

    This is especially true when it is userspace setting the LAPIC state via
    the KVM_SET_LAPIC ioctl(), since userspace does not have access to the
    guest VMCB.
    
    Practice shows that it is the V_TPR that is actually used by the AVIC to
    decide whether to issue pending interrupts to the CPU (not TPR in TASKPRI),
    so any leftover value in V_TPR will cause serious interrupt delivery issues
    in the guest when AVIC is enabled.
    
    Fix this issue by doing pre-VMRUN TPR sync from LAPIC into VMCB::V_TPR
    even when AVIC is enabled.
    
    Fixes: 3bbf3565f48c ("svm: Do not intercept CR8 when enable AVIC")
    Cc: [email protected]
    Signed-off-by: Maciej S. Szmigiero <[email protected]>
    Reviewed-by: Naveen N Rao (AMD) <[email protected]>
    Link: https://lore.kernel.org/r/c231be64280b1461e854e1ce3595d70cde3a2e9d.1756139678.git.maciej.szmigiero@oracle.com
    [sean: tag for stable@]
    Signed-off-by: Sean Christopherson <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
Linux: Linux 6.12.49 [+ + +]
Author: Greg Kroah-Hartman <[email protected]>
Date:   Thu Sep 25 11:13:51 2025 +0200

    Linux 6.12.49
    
    Link: https://lore.kernel.org/r/[email protected]
    Tested-by: Florian Fainelli <[email protected]>
    Tested-by: Linux Kernel Functional Testing <[email protected]>
    Tested-by: Brett A C Sheffield <[email protected]>
    Tested-by: Harshit Mogalapalli <[email protected]>
    Tested-by: Mark Brown <[email protected]>
    Tested-by: Jon Hunter <[email protected]>
    Tested-by: Brett Mastbergen <[email protected]>
    Tested-by: Peter Schneider <[email protected]>
    Tested-by: Ron Economos <[email protected]>
    Tested-by: Salvatore Bonaccorso <[email protected]>
    Tested-by: Miguel Ojeda <[email protected]>
    Tested-by: Shuah Khan <[email protected]>
    Tested-by: Hardik Garg <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
LoongArch: Align ACPI structures if ARCH_STRICT_ALIGN enabled [+ + +]
Author: Huacai Chen <[email protected]>
Date:   Thu Sep 18 19:44:01 2025 +0800

    LoongArch: Align ACPI structures if ARCH_STRICT_ALIGN enabled
    
    commit a9d13433fe17be0e867e51e71a1acd2731fbef8d upstream.
    
    ARCH_STRICT_ALIGN is used for hardware without UAL; currently it only
    controls the -mstrict-align flag. However, ACPI structures are packed by
    default, so they will cause unaligned accesses.

    To avoid this, define ACPI_MISALIGNMENT_NOT_SUPPORTED in asm/acenv.h to
    align ACPI structures if ARCH_STRICT_ALIGN is enabled.
    
    Cc: [email protected]
    Reported-by: Binbin Zhou <[email protected]>
    Suggested-by: Xi Ruoyao <[email protected]>
    Suggested-by: Jiaxun Yang <[email protected]>
    Signed-off-by: Huacai Chen <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
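
    A hedged sketch of what such an asm/acenv.h override can look like (the
    header guard and comment are illustrative; the actual file contents are
    not reproduced here):

      #ifndef _ASM_LOONGARCH_ACENV_H
      #define _ASM_LOONGARCH_ACENV_H

      /* When strict alignment is required, tell ACPICA that misaligned
       * accesses are not supported so it avoids unaligned loads/stores
       * on its packed table structures. */
      #ifdef CONFIG_ARCH_STRICT_ALIGN
      #define ACPI_MISALIGNMENT_NOT_SUPPORTED
      #endif

      #endif /* _ASM_LOONGARCH_ACENV_H */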

LoongArch: Check the return value when creating kobj [+ + +]
Author: Tao Cui <[email protected]>
Date:   Thu Sep 18 19:44:04 2025 +0800

    LoongArch: Check the return value when creating kobj
    
    commit 51adb03e6b865c0c6790f29659ff52d56742de2e upstream.
    
    Add a check for the return value of kobject_create_and_add(), to ensure
    that the kobj allocation succeeds for later use.
    
    Cc: [email protected]
    Signed-off-by: Tao Cui <[email protected]>
    Signed-off-by: Huacai Chen <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
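
    The added check follows the usual allocation-failure pattern; a minimal
    hedged sketch (illustrative names, not the LoongArch code):

      #include <linux/errno.h>
      #include <linux/kobject.h>

      static int example_sysfs_init(void)
      {
              struct kobject *kobj;

              /* kobject_create_and_add() returns NULL on failure */
              kobj = kobject_create_and_add("example", NULL);
              if (!kobj)
                      return -ENOMEM;

              /* ... create attribute files under kobj ... */
              return 0;
      }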

LoongArch: Fix unreliable stack for live patching [+ + +]
Author: Tiezhu Yang <[email protected]>
Date:   Thu Sep 18 19:44:08 2025 +0800

    LoongArch: Fix unreliable stack for live patching
    
    commit 677d4a52d4dc4a147d5e84af9ff207832578be70 upstream.
    
    When testing the kernel live patching with "modprobe livepatch-sample",
    there is a timeout over 15 seconds from "starting patching transition"
    to "patching complete". The dmesg command shows "unreliable stack" for
    user tasks in debug mode, here is one of the messages:
    
      livepatch: klp_try_switch_task: bash:1193 has an unreliable stack
    
    The "unreliable stack" is because it can not unwind from do_syscall()
    to its previous frame handle_syscall(). It should use fp to find the
    original stack top due to secondary stack in do_syscall(), but fp is
    not used for some other functions, then fp can not be restored by the
    next frame of do_syscall(), so it is necessary to save fp if task is
    not current, in order to get the stack top of do_syscall().
    
    Here are the call chains:
    
      klp_enable_patch()
        klp_try_complete_transition()
          klp_try_switch_task()
            klp_check_and_switch_task()
              klp_check_stack()
                stack_trace_save_tsk_reliable()
                  arch_stack_walk_reliable()
    
    When executing "rmmod livepatch-sample", there exists a similar issue.
    With this patch, it takes a short time for patching and unpatching.
    
    Before:
    
      # modprobe livepatch-sample
      # dmesg -T | tail -3
      [Sat Sep  6 11:00:20 2025] livepatch: 'livepatch_sample': starting patching transition
      [Sat Sep  6 11:00:35 2025] livepatch: signaling remaining tasks
      [Sat Sep  6 11:00:36 2025] livepatch: 'livepatch_sample': patching complete
    
      # echo 0 > /sys/kernel/livepatch/livepatch_sample/enabled
      # rmmod livepatch_sample
      rmmod: ERROR: Module livepatch_sample is in use
      # rmmod livepatch_sample
      # dmesg -T | tail -3
      [Sat Sep  6 11:06:05 2025] livepatch: 'livepatch_sample': starting unpatching transition
      [Sat Sep  6 11:06:20 2025] livepatch: signaling remaining tasks
      [Sat Sep  6 11:06:21 2025] livepatch: 'livepatch_sample': unpatching complete
    
    After:
    
      # modprobe livepatch-sample
      # dmesg -T | tail -2
      [Tue Sep 16 16:19:30 2025] livepatch: 'livepatch_sample': starting patching transition
      [Tue Sep 16 16:19:31 2025] livepatch: 'livepatch_sample': patching complete
    
      # echo 0 > /sys/kernel/livepatch/livepatch_sample/enabled
      # rmmod livepatch_sample
      # dmesg -T | tail -2
      [Tue Sep 16 16:19:36 2025] livepatch: 'livepatch_sample': starting unpatching transition
      [Tue Sep 16 16:19:37 2025] livepatch: 'livepatch_sample': unpatching complete
    
    Cc: [email protected] # v6.9+
    Fixes: 199cc14cb4f1 ("LoongArch: Add kernel livepatching support")
    Reported-by: Xi Zhang <[email protected]>
    Signed-off-by: Tiezhu Yang <[email protected]>
    Signed-off-by: Huacai Chen <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

LoongArch: Update help info of ARCH_STRICT_ALIGN [+ + +]
Author: Tiezhu Yang <[email protected]>
Date:   Thu Sep 18 19:43:42 2025 +0800

    LoongArch: Update help info of ARCH_STRICT_ALIGN
    
    commit f5003098e2f337d8e8a87dc636250e3fa978d9ad upstream.
    
    Loongson-3A6000 and 3C6000 CPUs also support unaligned memory access, so
    the current description is somewhat out of date.

    In fact, all Loongson-3 series processors based on LoongArch support
    unaligned memory access; this hardware capability is indicated by bit 20
    (UAL) of the CPUCFG1 register. Update the help info to reflect reality.
    
    Cc: [email protected]
    Signed-off-by: Tiezhu Yang <[email protected]>
    Signed-off-by: Huacai Chen <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

LoongArch: vDSO: Check kcalloc() result in init_vdso() [+ + +]
Author: Guangshuo Li <[email protected]>
Date:   Thu Sep 18 19:44:10 2025 +0800

    LoongArch: vDSO: Check kcalloc() result in init_vdso()
    
    commit ac398f570724c41e5e039d54e4075519f6af7408 upstream.
    
    Add a NULL-pointer check after the kcalloc() call in init_vdso(). If
    allocation fails, return -ENOMEM to prevent a possible dereference of
    vdso_info.code_mapping.pages when it is NULL.
    
    Cc: [email protected]
    Fixes: 2ed119aef60d ("LoongArch: Set correct size for vDSO code mapping")
    Signed-off-by: Guangshuo Li <[email protected]>
    Signed-off-by: Huacai Chen <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
minmax.h: add whitespace around operators and after commas [+ + +]
Author: David Laight <[email protected]>
Date:   Mon Sep 22 10:31:17 2025 +0000

    minmax.h: add whitespace around operators and after commas
    
    [ Upstream commit 71ee9b16251ea4bf7c1fe222517c82bdb3220acc ]
    
    Patch series "minmax.h: Cleanups and minor optimisations".
    
    Some tidyups and minor changes to minmax.h.
    
    This patch (of 7):
    
    Link: https://lkml.kernel.org/r/[email protected]
    Link: https://lkml.kernel.org/r/[email protected]
    Signed-off-by: David Laight <[email protected]>
    Cc: Andy Shevchenko <[email protected]>
    Cc: Arnd Bergmann <[email protected]>
    Cc: Christoph Hellwig <[email protected]>
    Cc: Dan Carpenter <[email protected]>
    Cc: Jason A. Donenfeld <[email protected]>
    Cc: Jens Axboe <[email protected]>
    Cc: Lorenzo Stoakes <[email protected]>
    Cc: Mateusz Guzik <[email protected]>
    Cc: Matthew Wilcox <[email protected]>
    Cc: Pedro Falcato <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Signed-off-by: Eliav Farber <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

minmax.h: move all the clamp() definitions after the min/max() ones [+ + +]
Author: David Laight <[email protected]>
Date:   Mon Nov 18 19:14:19 2024 +0000

    minmax.h: move all the clamp() definitions after the min/max() ones
    
    commit c3939872ee4a6b8bdcd0e813c66823b31e6e26f7 upstream.
    
    At some point the definitions for clamp() got added in the middle of the
    ones for min() and max().  Re-order the definitions so they are more
    sensibly grouped.
    
    Link: https://lkml.kernel.org/r/[email protected]
    Signed-off-by: David Laight <[email protected]>
    Cc: Andy Shevchenko <[email protected]>
    Cc: Arnd Bergmann <[email protected]>
    Cc: Christoph Hellwig <[email protected]>
    Cc: Dan Carpenter <[email protected]>
    Cc: Jason A. Donenfeld <[email protected]>
    Cc: Jens Axboe <[email protected]>
    Cc: Lorenzo Stoakes <[email protected]>
    Cc: Mateusz Guzik <[email protected]>
    Cc: Matthew Wilcox <[email protected]>
    Cc: Pedro Falcato <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Signed-off-by: Eliav Farber <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

minmax.h: reduce the #define expansion of min(), max() and clamp() [+ + +]
Author: David Laight <[email protected]>
Date:   Mon Sep 22 10:31:19 2025 +0000

    minmax.h: reduce the #define expansion of min(), max() and clamp()
    
    [ Upstream commit b280bb27a9f7c91ddab730e1ad91a9c18a051f41 ]
    
    Since the test for signed values being non-negative only relies on
    __builtin_constant_p() (not is_constexpr()), it can use the 'ux' variable
    instead of the caller-supplied expression.  This means that the #define
    parameters are only expanded twice: once in the code and once quoted in
    the error message.
    
    Link: https://lkml.kernel.org/r/[email protected]
    Signed-off-by: David Laight <[email protected]>
    Cc: Andy Shevchenko <[email protected]>
    Cc: Arnd Bergmann <[email protected]>
    Cc: Christoph Hellwig <[email protected]>
    Cc: Dan Carpenter <[email protected]>
    Cc: Jason A. Donenfeld <[email protected]>
    Cc: Jens Axboe <[email protected]>
    Cc: Lorenzo Stoakes <[email protected]>
    Cc: Mateusz Guzik <[email protected]>
    Cc: Matthew Wilcox <[email protected]>
    Cc: Pedro Falcato <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Signed-off-by: Eliav Farber <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

minmax.h: remove some #defines that are only expanded once [+ + +]
Author: David Laight <[email protected]>
Date:   Mon Sep 22 10:31:23 2025 +0000

    minmax.h: remove some #defines that are only expanded once
    
    [ Upstream commit 2b97aaf74ed534fb838d09867d09a3ca5d795208 ]
    
    The bodies of __signed_type_use() and __unsigned_type_use() are much the
    same size as their names - so put the bodies in the only line that expands
    them.
    
    Similarly __signed_type() is defined separately for 64bit and then used
    exactly once just below.
    
    Change the test for __signed_type from CONFIG_64BIT to one based on gcc
    defined macros so that the code is valid if it gets used outside of a
    kernel build.
    
    Link: https://lkml.kernel.org/r/[email protected]
    Signed-off-by: David Laight <[email protected]>
    Cc: Andy Shevchenko <[email protected]>
    Cc: Arnd Bergmann <[email protected]>
    Cc: Christoph Hellwig <[email protected]>
    Cc: Dan Carpenter <[email protected]>
    Cc: Jason A. Donenfeld <[email protected]>
    Cc: Jens Axboe <[email protected]>
    Cc: Lorenzo Stoakes <[email protected]>
    Cc: Mateusz Guzik <[email protected]>
    Cc: Matthew Wilcox <[email protected]>
    Cc: Pedro Falcato <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Signed-off-by: Eliav Farber <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

minmax.h: simplify the variants of clamp() [+ + +]
Author: David Laight <[email protected]>
Date:   Mon Sep 22 10:31:22 2025 +0000

    minmax.h: simplify the variants of clamp()
    
    [ Upstream commit 495bba17cdf95e9703af1b8ef773c55ef0dfe703 ]
    
    Always pass a 'type' through to __clamp_once(), pass '__auto_type' from
    clamp() itself.
    
    The expansion of __types_ok3() is reasonable so it isn't worth the added
    complexity of avoiding it when a fixed type is used for all three values.
    
    Link: https://lkml.kernel.org/r/[email protected]
    Signed-off-by: David Laight <[email protected]>
    Cc: Andy Shevchenko <[email protected]>
    Cc: Arnd Bergmann <[email protected]>
    Cc: Christoph Hellwig <[email protected]>
    Cc: Dan Carpenter <[email protected]>
    Cc: Jason A. Donenfeld <[email protected]>
    Cc: Jens Axboe <[email protected]>
    Cc: Lorenzo Stoakes <[email protected]>
    Cc: Mateusz Guzik <[email protected]>
    Cc: Matthew Wilcox <[email protected]>
    Cc: Pedro Falcato <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Signed-off-by: Eliav Farber <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

minmax.h: update some comments [+ + +]
Author: David Laight <[email protected]>
Date:   Mon Sep 22 10:31:18 2025 +0000

    minmax.h: update some comments
    
    [ Upstream commit 10666e99204818ef45c702469488353b5bb09ec7 ]
    
    - Change three to several.
    - Remove the comment about retaining constant expressions, no longer true.
    - Realign to nearer 80 columns and break on major punctuation.
    - Add a leading comment to the block before __signed_type() and __is_nonneg(),
      otherwise the block explaining the cast is a bit 'floating'.
      Reword the rest of that comment to improve readability.
    
    Link: https://lkml.kernel.org/r/[email protected]
    Signed-off-by: David Laight <[email protected]>
    Cc: Andy Shevchenko <[email protected]>
    Cc: Arnd Bergmann <[email protected]>
    Cc: Christoph Hellwig <[email protected]>
    Cc: Dan Carpenter <[email protected]>
    Cc: Jason A. Donenfeld <[email protected]>
    Cc: Jens Axboe <[email protected]>
    Cc: Lorenzo Stoakes <[email protected]>
    Cc: Mateusz Guzik <[email protected]>
    Cc: Matthew Wilcox <[email protected]>
    Cc: Pedro Falcato <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Signed-off-by: Eliav Farber <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

minmax.h: use BUILD_BUG_ON_MSG() for the lo < hi test in clamp() [+ + +]
Author: David Laight <[email protected]>
Date:   Mon Sep 22 10:31:20 2025 +0000

    minmax.h: use BUILD_BUG_ON_MSG() for the lo < hi test in clamp()
    
    [ Upstream commit a5743f32baec4728711bbc01d6ac2b33d4c67040 ]
    
    Use BUILD_BUG_ON_MSG(statically_true(ulo > uhi), ...) for the sanity check
    of the bounds in clamp().  Gives better error coverage and one less
    expansion of the arguments.
    
    Link: https://lkml.kernel.org/r/[email protected]
    Signed-off-by: David Laight <[email protected]>
    Cc: Andy Shevchenko <[email protected]>
    Cc: Arnd Bergmann <[email protected]>
    Cc: Christoph Hellwig <[email protected]>
    Cc: Dan Carpenter <[email protected]>
    Cc: Jason A. Donenfeld <[email protected]>
    Cc: Jens Axboe <[email protected]>
    Cc: Lorenzo Stoakes <[email protected]>
    Cc: Mateusz Guzik <[email protected]>
    Cc: Matthew Wilcox <[email protected]>
    Cc: Pedro Falcato <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Signed-off-by: Eliav Farber <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
mm/gup: check ref_count instead of lru before migration [+ + +]
Author: Hugh Dickins <[email protected]>
Date:   Sun Sep 21 10:39:54 2025 -0400

    mm/gup: check ref_count instead of lru before migration
    
    [ Upstream commit 98c6d259319ecf6e8d027abd3f14b81324b8c0ad ]
    
    Patch series "mm: better GUP pin lru_add_drain_all()", v2.
    
    Series of lru_add_drain_all()-related patches, arising from recent mm/gup
    migration report from Will Deacon.
    
    This patch (of 5):
    
    Will Deacon reports:-
    
    When taking a longterm GUP pin via pin_user_pages(),
    __gup_longterm_locked() tries to migrate target folios that should not be
    longterm pinned, for example because they reside in a CMA region or
    movable zone.  This is done by first pinning all of the target folios
    anyway, collecting all of the longterm-unpinnable target folios into a
    list, dropping the pins that were just taken and finally handing the list
    off to migrate_pages() for the actual migration.
    
    It is critically important that no unexpected references are held on the
    folios being migrated, otherwise the migration will fail and
    pin_user_pages() will return -ENOMEM to its caller.  Unfortunately, it is
    relatively easy to observe migration failures when running pKVM (which
    uses pin_user_pages() on crosvm's virtual address space to resolve stage-2
    page faults from the guest) on a 6.15-based Pixel 6 device and this
    results in the VM terminating prematurely.
    
    In the failure case, 'crosvm' has called mlock(MLOCK_ONFAULT) on its
    mapping of guest memory prior to the pinning.  Subsequently, when
    pin_user_pages() walks the page-table, the relevant 'pte' is not present
    and so the faulting logic allocates a new folio, mlocks it with
    mlock_folio() and maps it in the page-table.
    
    Since commit 2fbb0c10d1e8 ("mm/munlock: mlock_page() munlock_page() batch
    by pagevec"), mlock/munlock operations on a folio (formerly page), are
    deferred.  For example, mlock_folio() takes an additional reference on the
    target folio before placing it into a per-cpu 'folio_batch' for later
    processing by mlock_folio_batch(), which drops the refcount once the
    operation is complete.  Processing of the batches is coupled with the LRU
    batch logic and can be forcefully drained with lru_add_drain_all() but as
    long as a folio remains unprocessed on the batch, its refcount will be
    elevated.
    
    This deferred batching therefore interacts poorly with the pKVM pinning
    scenario as we can find ourselves in a situation where the migration code
    fails to migrate a folio due to the elevated refcount from the pending
    mlock operation.
    
    Hugh Dickins adds:-
    
    !folio_test_lru() has never been a very reliable way to tell if an
    lru_add_drain_all() is worth calling, to remove LRU cache references to
    make the folio migratable: the LRU flag may be set even while the folio is
    held with an extra reference in a per-CPU LRU cache.
    
    5.18 commit 2fbb0c10d1e8 may have made it more unreliable.  Then 6.11
    commit 33dfe9204f29 ("mm/gup: clear the LRU flag of a page before adding
    to LRU batch") tried to make it reliable, by moving LRU flag clearing; but
    missed the mlock/munlock batches, so still unreliable as reported.
    
    And it turns out to be difficult to extend 33dfe9204f29's LRU flag
    clearing to the mlock/munlock batches: if they do benefit from batching,
    mlock/munlock cannot be so effective when easily suppressed while !LRU.
    
    Instead, switch to an expected ref_count check, which was more reliable
    all along: some more false positives (unhelpful drains) than before, and
    never a guarantee that the folio will prove migratable, but better.
    
    Note on PG_private_2: ceph and nfs are still using the deprecated
    PG_private_2 flag, with the aid of netfs and filemap support functions.
    Although it is consistently matched by an increment of folio ref_count,
    folio_expected_ref_count() intentionally does not recognize it, and ceph
    folio migration currently depends on that for PG_private_2 folios to be
    rejected.  New references to the deprecated flag are discouraged, so do
    not add it into the collect_longterm_unpinnable_folios() calculation: but
    longterm pinning of transiently PG_private_2 ceph and nfs folios (an
    uncommon case) may invoke a redundant lru_add_drain_all().  And this makes
    easy the backport to earlier releases: up to and including 6.12, btrfs
    also used PG_private_2, but without a ref_count increment.
    
    Note for stable backports: requires 6.16 commit 86ebd50224c0 ("mm:
    add folio_expected_ref_count() for reference count calculation").
    
    Link: https://lkml.kernel.org/r/[email protected]
    Link: https://lkml.kernel.org/r/[email protected]
    Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region")
    Signed-off-by: Hugh Dickins <[email protected]>
    Reported-by: Will Deacon <[email protected]>
    Closes: https://lore.kernel.org/linux-mm/[email protected]/
    Acked-by: Kiryl Shutsemau <[email protected]>
    Acked-by: David Hildenbrand <[email protected]>
    Cc: "Aneesh Kumar K.V" <[email protected]>
    Cc: Axel Rasmussen <[email protected]>
    Cc: Chris Li <[email protected]>
    Cc: Christoph Hellwig <[email protected]>
    Cc: Jason Gunthorpe <[email protected]>
    Cc: Johannes Weiner <[email protected]>
    Cc: John Hubbard <[email protected]>
    Cc: Keir Fraser <[email protected]>
    Cc: Konstantin Khlebnikov <[email protected]>
    Cc: Li Zhe <[email protected]>
    Cc: Matthew Wilcox (Oracle) <[email protected]>
    Cc: Peter Xu <[email protected]>
    Cc: Rik van Riel <[email protected]>
    Cc: Shivank Garg <[email protected]>
    Cc: Vlastimil Babka <[email protected]>
    Cc: Wei Xu <[email protected]>
    Cc: yangge <[email protected]>
    Cc: Yuanchu Xie <[email protected]>
    Cc: Yu Zhao <[email protected]>
    Cc: <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
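
    A hedged, simplified sketch of the expected-refcount style of check
    described above (not the exact gup.c logic); the +1 accounts for the
    single pin the caller itself holds:

      #include <linux/mm.h>

      /* Hedged sketch: a drain may help only if someone else (for example a
       * pending per-CPU LRU or mlock batch) holds an extra reference. */
      static bool example_may_need_drain(struct folio *folio)
      {
              return folio_ref_count(folio) !=
                     folio_expected_ref_count(folio) + 1;
      }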

 
mm: add folio_expected_ref_count() for reference count calculation [+ + +]
Author: Shivank Garg <[email protected]>
Date:   Sun Sep 21 10:39:53 2025 -0400

    mm: add folio_expected_ref_count() for reference count calculation
    
    [ Upstream commit 86ebd50224c0734d965843260d0dc057a9431c61 ]
    
    Patch series " JFS: Implement migrate_folio for jfs_metapage_aops" v5.
    
    This patchset addresses a warning that occurs during memory compaction due
    to JFS's missing migrate_folio operation.  The warning was introduced by
    commit 7ee3647243e5 ("migrate: Remove call to ->writepage") which added
    explicit warnings when filesystem don't implement migrate_folio.
    
    Syzbot reported the following [1]:
      jfs_metapage_aops does not implement migrate_folio
      WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 fallback_migrate_folio mm/migrate.c:953 [inline]
      WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 move_to_new_folio+0x70e/0x840 mm/migrate.c:1007
      Modules linked in:
      CPU: 1 UID: 0 PID: 5861 Comm: syz-executor280 Not tainted 6.15.0-rc1-next-20250411-syzkaller #0 PREEMPT(full)
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
      RIP: 0010:fallback_migrate_folio mm/migrate.c:953 [inline]
      RIP: 0010:move_to_new_folio+0x70e/0x840 mm/migrate.c:1007
    
    To fix this issue, this series implements metapage_migrate_folio() for
    JFS, which handles both single and multiple metapages-per-page
    configurations.
    
    While most filesystems leverage existing migration implementations like
    filemap_migrate_folio(), buffer_migrate_folio_norefs() or
    buffer_migrate_folio() (which internally used folio_expected_refs()),
    JFS's metapage architecture requires special handling of its private data
    during migration.  To support this, this series introduces
    folio_expected_ref_count(), which calculates external references to a
    folio from page/swap cache, private data, and page table mappings.
    
    This standardized implementation replaces the previous ad-hoc
    folio_expected_refs() function and enables JFS to accurately determine
    whether a folio has unexpected references before attempting migration.
    
    Implement folio_expected_ref_count() to calculate expected folio reference
    counts from:
    - Page/swap cache (1 per page)
    - Private data (1)
    - Page table mappings (1 per map)
    
    While originally needed for page migration operations, this improved
    implementation standardizes reference counting by consolidating all
    refcount contributors into a single, reusable function that can benefit
    any subsystem needing to detect unexpected references to folios.
    
    The folio_expected_ref_count() returns the sum of these external
    references without including any reference the caller itself might hold.
    Callers comparing against the actual folio_ref_count() must account for
    their own references separately.
    
    Link: https://syzkaller.appspot.com/bug?extid=8bb6fd945af4e0ad9299 [1]
    Link: https://lkml.kernel.org/r/[email protected]
    Link: https://lkml.kernel.org/r/[email protected]
    Signed-off-by: David Hildenbrand <[email protected]>
    Signed-off-by: Shivank Garg <[email protected]>
    Suggested-by: Matthew Wilcox <[email protected]>
    Co-developed-by: David Hildenbrand <[email protected]>
    Cc: Alistair Popple <[email protected]>
    Cc: Dave Kleikamp <[email protected]>
    Cc: Donet Tom <[email protected]>
    Cc: Jane Chu <[email protected]>
    Cc: Kefeng Wang <[email protected]>
    Cc: Zi Yan <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Stable-dep-of: 98c6d259319e ("mm/gup: check ref_count instead of lru before migration")
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
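
    Based only on the contributors listed above (one reference per page from
    the page/swap cache, one for private data, one per page-table mapping),
    here is a hedged, simplified sketch of such a calculation; the real
    helper lives in the mm headers and handles more cases:

      #include <linux/mm.h>
      #include <linux/pagemap.h>

      static int example_expected_refs(struct folio *folio)
      {
              long nr = folio_nr_pages(folio);
              int refs = 0;

              if (folio_test_anon(folio))
                      refs += folio_test_swapcache(folio) ? nr : 0;
              else
                      refs += folio->mapping ? nr : 0;    /* page cache */

              refs += folio_test_private(folio);          /* private data */
              refs += folio_mapcount(folio);              /* page tables */
              return refs;
      }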

mm: revert "mm: vmscan.c: fix OOM on swap stress test" [+ + +]
Author: Hugh Dickins <[email protected]>
Date:   Mon Sep 8 15:21:12 2025 -0700

    mm: revert "mm: vmscan.c: fix OOM on swap stress test"
    
    commit 8d79ed36bfc83d0583ab72216b7980340478cdfb upstream.
    
    This reverts commit 0885ef470560: that was a fix to the reverted
    33dfe9204f29b415bbc0abb1a50642d1ba94f5e9.
    
    Link: https://lkml.kernel.org/r/[email protected]
    Signed-off-by: Hugh Dickins <[email protected]>
    Acked-by: David Hildenbrand <[email protected]>
    Cc: "Aneesh Kumar K.V" <[email protected]>
    Cc: Axel Rasmussen <[email protected]>
    Cc: Chris Li <[email protected]>
    Cc: Christoph Hellwig <[email protected]>
    Cc: Jason Gunthorpe <[email protected]>
    Cc: Johannes Weiner <[email protected]>
    Cc: John Hubbard <[email protected]>
    Cc: Keir Fraser <[email protected]>
    Cc: Konstantin Khlebnikov <[email protected]>
    Cc: Li Zhe <[email protected]>
    Cc: Matthew Wilcox (Oracle) <[email protected]>
    Cc: Peter Xu <[email protected]>
    Cc: Rik van Riel <[email protected]>
    Cc: Shivank Garg <[email protected]>
    Cc: Vlastimil Babka <[email protected]>
    Cc: Wei Xu <[email protected]>
    Cc: Will Deacon <[email protected]>
    Cc: yangge <[email protected]>
    Cc: Yuanchu Xie <[email protected]>
    Cc: Yu Zhao <[email protected]>
    Cc: <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
mmc: mvsdio: Fix dma_unmap_sg() nents value [+ + +]
Author: Thomas Fourier <[email protected]>
Date:   Tue Aug 26 09:58:08 2025 +0200

    mmc: mvsdio: Fix dma_unmap_sg() nents value
    
    commit 8ab2f1c35669bff7d7ed1bb16bf5cc989b3e2e17 upstream.
    
    The dma_unmap_sg() functions should be called with the same nents as the
    dma_map_sg(), not the value the map function returned.
    
    Fixes: 236caa7cc351 ("mmc: SDIO driver for Marvell SoCs")
    Signed-off-by: Thomas Fourier <[email protected]>
    Reviewed-by: Linus Walleij <[email protected]>
    Cc: [email protected]
    Signed-off-by: Ulf Hansson <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
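
    The DMA API rule behind this fix, as a hedged generic sketch (not the
    mvsdio code): dma_unmap_sg() takes the same nents that was passed to
    dma_map_sg(), not the possibly smaller mapped count it returned:

      #include <linux/dma-mapping.h>
      #include <linux/errno.h>
      #include <linux/scatterlist.h>

      static int example_xfer(struct device *dev, struct scatterlist *sgl,
                              int nents)
      {
              int mapped = dma_map_sg(dev, sgl, nents, DMA_FROM_DEVICE);

              if (!mapped)
                      return -EIO;

              /* ... program the transfer using the 'mapped' entries ... */

              /* unmap with the original nents, not the returned 'mapped' */
              dma_unmap_sg(dev, sgl, nents, DMA_FROM_DEVICE);
              return 0;
      }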

 
mptcp: pm: nl: announce deny-join-id0 flag [+ + +]
Author: Matthieu Baerts (NGI0) <[email protected]>
Date:   Fri Sep 19 23:52:23 2025 +0200

    mptcp: pm: nl: announce deny-join-id0 flag
    
    commit 2293c57484ae64c9a3c847c8807db8c26a3a4d41 upstream.
    
    During the connection establishment, a peer can tell the other one that
    it cannot establish new subflows to the initial IP address and port by
    setting the 'C' flag [1]. Doing so makes sense when the sender is behind
    a strict NAT, operating behind a legacy Layer 4 load balancer, or using
    an anycast IP address, for example.
    
    When this 'C' flag is set, the path-managers must then not try to
    establish new subflows to the other peer's initial IP address and port.
    The in-kernel PM has access to this info, but the userspace PM didn't.
    
    The RFC8684 [1] is strict about that:
    
      (...) therefore the receiver MUST NOT try to open any additional
      subflows toward this address and port.
    
    So it is important to tell the userspace about that as it is responsible
    for the respect of this flag.
    
    When a new connection is created and established, the Netlink events
    now contain the existing but not currently used 'flags' attribute. When
    MPTCP_PM_EV_FLAG_DENY_JOIN_ID0 is set, it means no other subflows
    to the initial IP address and port -- info that is also part of the
    event -- can be established.
    
    Link: https://datatracker.ietf.org/doc/html/rfc8684#section-3.1-20.6 [1]
    Fixes: 702c2f646d42 ("mptcp: netlink: allow userspace-driven subflow establishment")
    Reported-by: Marek Majkowski <[email protected]>
    Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/532
    Reviewed-by: Mat Martineau <[email protected]>
    Signed-off-by: Matthieu Baerts (NGI0) <[email protected]>
    Link: https://patch.msgid.link/20250912-net-mptcp-pm-uspace-deny_join_id0-v1-2-40171884ade8@kernel.org
    Signed-off-by: Jakub Kicinski <[email protected]>
    [ Conflicts in mptcp_pm.yaml, because the indentation has been modified
      in commit ec362192aa9e ("netlink: specs: fix up indentation errors"),
      which is not in this version. Applying the same modifications, but at
      a different level. ]
    Signed-off-by: Matthieu Baerts (NGI0) <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

mptcp: propagate shutdown to subflows when possible [+ + +]
Author: Matthieu Baerts (NGI0) <[email protected]>
Date:   Fri Sep 12 14:25:50 2025 +0200

    mptcp: propagate shutdown to subflows when possible
    
    commit f755be0b1ff429a2ecf709beeb1bcd7abc111c2b upstream.
    
    When the MPTCP DATA FIN has been ACKed, there is no more MPTCP-related
    metadata to exchange, and all subflows can be safely shut down.
    
    Before this patch, the subflows were actually terminated at 'close()'
    time. That's certainly fine most of the time, but not when userspace
    'shutdown()'s a connection without close()ing it. When doing so, the
    subflows were staying in the LAST_ACK state on one side -- and consequently
    in FIN_WAIT2 on the other side -- until the 'close()' of the MPTCP
    socket.
    
    Now, when the DATA FIN has been ACKed, all subflows are shut down. A
    consequence of this is that the TCP 'FIN' flag can be set earlier now,
    but the end result is the same. This affects the packetdrill tests
    looking at the end of the MPTCP connections, but for a good reason.
    
    Note that tcp_shutdown() will check the subflow state, so no need to do
    that again before calling it.
    
    Fixes: 3721b9b64676 ("mptcp: Track received DATA_FIN sequence number and add related helpers")
    Cc: [email protected]
    Fixes: 16a9a9da1723 ("mptcp: Add helper to process acks of DATA_FIN")
    Reviewed-by: Mat Martineau <[email protected]>
    Reviewed-by: Geliang Tang <[email protected]>
    Signed-off-by: Matthieu Baerts (NGI0) <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

mptcp: set remote_deny_join_id0 on SYN recv [+ + +]
Author: Matthieu Baerts (NGI0) <[email protected]>
Date:   Fri Sep 12 14:52:20 2025 +0200

    mptcp: set remote_deny_join_id0 on SYN recv
    
    [ Upstream commit 96939cec994070aa5df852c10fad5fc303a97ea3 ]
    
    When a SYN containing the 'C' flag (deny join id0) was received, this
    piece of information was not propagated to the path-manager.
    
    Even if this flag is mainly set on the server side, a client can also
    tell the server it cannot try to establish new subflows to the client's
    initial IP address and port. The server's PM should then record such
    info when received, and before sending events about the new connection.
    
    Fixes: df377be38725 ("mptcp: add deny_join_id0 in mptcp_options_received")
    Reviewed-by: Mat Martineau <[email protected]>
    Signed-off-by: Matthieu Baerts (NGI0) <[email protected]>
    Link: https://patch.msgid.link/20250912-net-mptcp-pm-uspace-deny_join_id0-v1-1-40171884ade8@kernel.org
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

mptcp: tfo: record 'deny join id0' info [+ + +]
Author: Matthieu Baerts (NGI0) <[email protected]>
Date:   Fri Sep 12 14:52:23 2025 +0200

    mptcp: tfo: record 'deny join id0' info
    
    [ Upstream commit 92da495cb65719583aa06bc946aeb18a10e1e6e2 ]
    
    When TFO is used, the check to see if the 'C' flag (deny join id0) was
    set was bypassed.
    
    This flag can be set when TFO is used, so the check should also be done
    when TFO is used.
    
    Note that the set_fully_established label is also used when a 4th ACK is
    received. In this case, deny_join_id0 will not be set.
    
    Fixes: dfc8d0603033 ("mptcp: implement delayed seq generation for passive fastopen")
    Reviewed-by: Mat Martineau <[email protected]>
    Signed-off-by: Matthieu Baerts (NGI0) <[email protected]>
    Link: https://patch.msgid.link/20250912-net-mptcp-pm-uspace-deny_join_id0-v1-4-40171884ade8@kernel.org
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
net/mlx5e: Harden uplink netdev access against device unbind [+ + +]
Author: Jianbo Liu <[email protected]>
Date:   Mon Sep 15 15:24:32 2025 +0300

    net/mlx5e: Harden uplink netdev access against device unbind
    
    [ Upstream commit 6b4be64fd9fec16418f365c2d8e47a7566e9eba5 ]
    
    The function mlx5_uplink_netdev_get() gets the uplink netdevice
    pointer from mdev->mlx5e_res.uplink_netdev. However, the netdevice can
    be removed and its pointer cleared when unbound from the mlx5_core.eth
    driver. This results in a NULL pointer, causing a kernel panic.
    
     BUG: unable to handle page fault for address: 0000000000001300
     at RIP: 0010:mlx5e_vport_rep_load+0x22a/0x270 [mlx5_core]
     Call Trace:
      <TASK>
      mlx5_esw_offloads_rep_load+0x68/0xe0 [mlx5_core]
      esw_offloads_enable+0x593/0x910 [mlx5_core]
      mlx5_eswitch_enable_locked+0x341/0x420 [mlx5_core]
      mlx5_devlink_eswitch_mode_set+0x17e/0x3a0 [mlx5_core]
      devlink_nl_eswitch_set_doit+0x60/0xd0
      genl_family_rcv_msg_doit+0xe0/0x130
      genl_rcv_msg+0x183/0x290
      netlink_rcv_skb+0x4b/0xf0
      genl_rcv+0x24/0x40
      netlink_unicast+0x255/0x380
      netlink_sendmsg+0x1f3/0x420
      __sock_sendmsg+0x38/0x60
      __sys_sendto+0x119/0x180
      do_syscall_64+0x53/0x1d0
      entry_SYSCALL_64_after_hwframe+0x4b/0x53
    
    Ensure the pointer is valid before use by checking it for NULL. If it
    is valid, immediately call netdev_hold() to take a reference,
    preventing the netdevice from being freed while it is in use.
    
    Fixes: 7a9fb35e8c3a ("net/mlx5e: Do not reload ethernet ports when changing eswitch mode")
    Signed-off-by: Jianbo Liu <[email protected]>
    Reviewed-by: Cosmin Ratiu <[email protected]>
    Reviewed-by: Jiri Pirko <[email protected]>
    Reviewed-by: Dragos Tatulea <[email protected]>
    Signed-off-by: Tariq Toukan <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
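
    A hedged sketch of the check-then-hold pattern described above (generic
    code, not the mlx5 driver), using the tracked reference helpers from
    netdevice.h:

      #include <linux/netdevice.h>

      /* Hedged sketch: only take a reference if the pointer is still valid;
       * the caller drops it later with netdev_put(netdev, tracker). */
      static struct net_device *example_hold_uplink(struct net_device *netdev,
                                                    netdevice_tracker *tracker)
      {
              if (!netdev)
                      return NULL;

              netdev_hold(netdev, tracker, GFP_KERNEL);
              return netdev;
      }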

 
net/tcp: Fix a NULL pointer dereference when using TCP-AO with TCP_REPAIR [+ + +]
Author: Anderson Nascimento <[email protected]>
Date:   Thu Sep 11 20:07:44 2025 -0300

    net/tcp: Fix a NULL pointer dereference when using TCP-AO with TCP_REPAIR
    
    [ Upstream commit 2e7bba08923ebc675b1f0e0e0959e68e53047838 ]
    
    A NULL pointer dereference can occur in tcp_ao_finish_connect() during a
    connect() system call on a socket with a TCP-AO key added and TCP_REPAIR
    enabled.
    
    The function is called with a NULL skb and attempts to dereference it
    via tcp_hdr(skb)->seq without prior validation.
    
    Fix this by checking if skb is NULL before dereferencing it.
    
    The commentary is taken from bpf_skops_established(), which is also called
    in the same flow. Unlike the function being patched,
    bpf_skops_established() validates the skb before dereferencing it.
    
    int main(void){
            struct sockaddr_in sockaddr;
            struct tcp_ao_add tcp_ao;
            int sk;
            int one = 1;
    
            memset(&sockaddr,'\0',sizeof(sockaddr));
            memset(&tcp_ao,'\0',sizeof(tcp_ao));
    
            sk = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    
            sockaddr.sin_family = AF_INET;
    
            memcpy(tcp_ao.alg_name,"cmac(aes128)",12);
            memcpy(tcp_ao.key,"ABCDEFGHABCDEFGH",16);
            tcp_ao.keylen = 16;
    
            memcpy(&tcp_ao.addr,&sockaddr,sizeof(sockaddr));
    
            setsockopt(sk, IPPROTO_TCP, TCP_AO_ADD_KEY, &tcp_ao,
            sizeof(tcp_ao));
            setsockopt(sk, IPPROTO_TCP, TCP_REPAIR, &one, sizeof(one));
    
            sockaddr.sin_family = AF_INET;
            sockaddr.sin_port = htobe16(123);
    
            inet_aton("127.0.0.1", &sockaddr.sin_addr);
    
            connect(sk,(struct sockaddr *)&sockaddr,sizeof(sockaddr));
    
    return 0;
    }
    
    $ gcc tcp-ao-nullptr.c -o tcp-ao-nullptr -Wall
    $ unshare -Urn
    
    BUG: kernel NULL pointer dereference, address: 00000000000000b6
    PGD 1f648d067 P4D 1f648d067 PUD 1982e8067 PMD 0
    Oops: Oops: 0000 [#1] SMP NOPTI
    Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop
    Reference Platform, BIOS 6.00 11/12/2020
    RIP: 0010:tcp_ao_finish_connect (net/ipv4/tcp_ao.c:1182)
    
    Fixes: 7c2ffaf21bd6 ("net/tcp: Calculate TCP-AO traffic keys")
    Signed-off-by: Anderson Nascimento <[email protected]>
    Reviewed-by: Dmitry Safonov <[email protected]>
    Reviewed-by: Eric Dumazet <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
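
    The shape of the fix is a simple early-return guard before the header
    access; a hedged sketch (illustrative function, not the exact
    tcp_ao_finish_connect() body):

      #include <linux/skbuff.h>
      #include <linux/tcp.h>

      static void example_finish_connect(struct sk_buff *skb)
      {
              /* TCP_REPAIR path: connect() completes without a SYN-ACK skb */
              if (!skb)
                      return;

              /* safe to dereference the TCP header now */
              (void)ntohl(tcp_hdr(skb)->seq);
      }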

 
net: liquidio: fix overflow in octeon_init_instr_queue() [+ + +]
Author: Alexey Nepomnyashih <[email protected]>
Date:   Wed Sep 17 15:30:58 2025 +0000

    net: liquidio: fix overflow in octeon_init_instr_queue()
    
    [ Upstream commit cca7b1cfd7b8a0eff2a3510c5e0f10efe8fa3758 ]
    
    The expression `(conf->instr_type == 64) << iq_no` can overflow because
    `iq_no` may be as high as 64 (`CN23XX_MAX_RINGS_PER_PF`). Casting the
    operand to `u64` ensures correct 64-bit arithmetic.
    
    Fixes: f21fb3ed364b ("Add support of Cavium Liquidio ethernet adapters")
    Signed-off-by: Alexey Nepomnyashih <[email protected]>
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
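
    A small standalone illustration of this class of bug (illustrative
    values, not the driver code): the comparison result has type int, so
    shifting it by 32 or more is undefined, while widening to a 64-bit type
    first keeps the arithmetic well defined:

      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
              int instr_type = 64;
              unsigned int iq_no = 40;    /* ring index may exceed 31 */

              /* undefined: (instr_type == 64) is an int shifted by >= 32 */
              /* uint64_t bad = (instr_type == 64) << iq_no; */

              /* widen first so the shift happens in 64-bit arithmetic */
              uint64_t good = (uint64_t)(instr_type == 64) << iq_no;

              printf("%#llx\n", (unsigned long long)good); /* 0x10000000000 */
              return 0;
      }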

net: natsemi: fix `rx_dropped` double accounting on `netif_rx()` failure [+ + +]
Author: Yeounsu Moon <[email protected]>
Date:   Sat Sep 13 15:01:36 2025 +0900

    net: natsemi: fix `rx_dropped` double accounting on `netif_rx()` failure
    
    [ Upstream commit 93ab4881a4e2b9657bdce4b8940073bfb4ed5eab ]
    
    `netif_rx()` already increments `rx_dropped` core stat when it fails.
    The driver was also updating `ndev->stats.rx_dropped` in the same path.
    Since both are reported together via `ip -s -s` command, this resulted
    in drops being counted twice in user-visible stats.
    
    Keep the driver update on `if (unlikely(!skb))`, but skip it after
    `netif_rx()` errors.
    
    Fixes: caf586e5f23c ("net: add a core netdev->rx_dropped counter")
    Signed-off-by: Yeounsu Moon <[email protected]>
    Reviewed-by: Simon Horman <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
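
    A hedged sketch of the accounting rule above (generic driver-style code,
    not natsemi's): the driver counts the drop only when it drops the packet
    itself; once the skb is handed to netif_rx(), the core already accounts
    for failures:

      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      static void example_rx(struct net_device *ndev, struct sk_buff *skb)
      {
              if (unlikely(!skb)) {
                      ndev->stats.rx_dropped++;   /* dropped by the driver */
                      return;
              }

              /* on failure netif_rx() bumps the core rx_dropped stat, so do
               * not increment ndev->stats.rx_dropped here as well */
              netif_rx(skb);
      }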

net: rfkill: gpio: Fix crash due to dereferencing uninitialized pointer [+ + +]
Author: Hans de Goede <[email protected]>
Date:   Sat Sep 13 13:35:15 2025 +0200

    net: rfkill: gpio: Fix crash due to dereferencing uninitialized pointer
    
    commit b6f56a44e4c1014b08859dcf04ed246500e310e5 upstream.
    
    Since commit 7d5e9737efda ("net: rfkill: gpio: get the name and type from
    device property") rfkill_find_type() gets called with the possibly
    uninitialized "const char *type_name;" local variable.
    
    On x86 systems when rfkill-gpio binds to a "BCM4752" or "LNV4752"
    acpi_device, the rfkill->type is set based on the ACPI acpi_device_id:
    
            rfkill->type = (unsigned)id->driver_data;
    
    and there is no "type" property so device_property_read_string() will fail
    and leave type_name uninitialized, leading to a potential crash.
    
    rfkill_find_type() does accept a NULL pointer, fix the potential crash
    by initializing type_name to NULL.
    
    Note that this has likely not been caught so far because:
    
    1. Not many x86 machines actually have a "BCM4752"/"LNV4752" acpi_device
    2. The stack happened to contain NULL where type_name is stored
    
    Fixes: 7d5e9737efda ("net: rfkill: gpio: get the name and type from device property")
    Cc: [email protected]
    Cc: Heikki Krogerus <[email protected]>
    Signed-off-by: Hans de Goede <[email protected]>
    Reviewed-by: Heikki Krogerus <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Johannes Berg <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
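
    The fix boils down to initializing the local variable before a property
    read that may leave it untouched; a hedged sketch (not the exact
    rfkill-gpio probe code):

      #include <linux/property.h>
      #include <linux/rfkill.h>

      static enum rfkill_type example_get_type(struct device *dev)
      {
              /* stays NULL when there is no "type" property, instead of
               * holding whatever happened to be on the stack */
              const char *type_name = NULL;

              device_property_read_string(dev, "type", &type_name);
              return rfkill_find_type(type_name);  /* accepts NULL, see above */
      }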

 
nilfs2: fix CFI failure when accessing /sys/fs/nilfs2/features/* [+ + +]
Author: Nathan Chancellor <[email protected]>
Date:   Sat Sep 6 23:43:34 2025 +0900

    nilfs2: fix CFI failure when accessing /sys/fs/nilfs2/features/*
    
    commit 025e87f8ea2ae3a28bf1fe2b052bfa412c27ed4a upstream.
    
    When accessing one of the files under /sys/fs/nilfs2/features when
    CONFIG_CFI_CLANG is enabled, there is a CFI violation:
    
      CFI failure at kobj_attr_show+0x59/0x80 (target: nilfs_feature_revision_show+0x0/0x30; expected type: 0xfc392c4d)
      ...
      Call Trace:
       <TASK>
       sysfs_kf_seq_show+0x2a6/0x390
       ? __cfi_kobj_attr_show+0x10/0x10
       kernfs_seq_show+0x104/0x15b
       seq_read_iter+0x580/0xe2b
      ...
    
    When the kobject of the kset for /sys/fs/nilfs2 is initialized, its ktype
    is set to kset_ktype, which has a ->sysfs_ops of kobj_sysfs_ops.  When
    nilfs_feature_attr_group is added to that kobject via
    sysfs_create_group(), the kernfs_ops of each files is sysfs_file_kfops_rw,
    which will call sysfs_kf_seq_show() when ->seq_show() is called.
    sysfs_kf_seq_show() in turn calls kobj_attr_show() through
    ->sysfs_ops->show().  kobj_attr_show() casts the provided attribute out to
    a 'struct kobj_attribute' via container_of() and calls ->show(), resulting
    in the CFI violation since neither nilfs_feature_revision_show() nor
    nilfs_feature_README_show() match the prototype of ->show() in 'struct
    kobj_attribute'.
    
    Resolve the CFI violation by adjusting the second parameter in
    nilfs_feature_{revision,README}_show() from 'struct attribute' to 'struct
    kobj_attribute' to match the expected prototype.
    
    Link: https://lkml.kernel.org/r/[email protected]
    Fixes: aebe17f68444 ("nilfs2: add /sys/fs/nilfs2/features group")
    Signed-off-by: Nathan Chancellor <[email protected]>
    Signed-off-by: Ryusuke Konishi <[email protected]>
    Reported-by: kernel test robot <[email protected]>
    Closes: https://lore.kernel.org/oe-lkp/[email protected]/
    Cc: <[email protected]>
    Signed-off-by: Andrew Morton <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
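
    The essence of the fix is making the handler's prototype match what
    kobj_attr_show() casts to and calls; a hedged sketch with an
    illustrative attribute (not the nilfs2 source):

      #include <linux/kobject.h>
      #include <linux/sysfs.h>

      /* mismatched prototype that trips CFI when invoked indirectly:
       *   static ssize_t feature_show(struct kobject *kobj,
       *                               struct attribute *attr, char *buf);
       * matching prototype expected by struct kobj_attribute::show: */
      static ssize_t feature_show(struct kobject *kobj,
                                  struct kobj_attribute *attr, char *buf)
      {
              return sysfs_emit(buf, "%d\n", 2);
      }

      static struct kobj_attribute feature_attr = __ATTR_RO(feature);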

 
nvme: fix PI insert on write [+ + +]
Author: Christoph Hellwig <[email protected]>
Date:   Mon Aug 25 15:32:49 2025 +0200

    nvme: fix PI insert on write
    
    [ Upstream commit 7ac3c2889bc060c3f67cf44df0dbb093a835c176 ]
    
    I recently ran into an issue where the PI generated using the block layer
    integrity code differs from that from a kernel using the PRACT fallback
    when the block layer integrity code is disabled, and I tracked this down
    to us using PRACT incorrectly.
    
    The NVM Command Set Specification (section 5.33 in 1.2, similar in older
    versions) specifies the PRACT insert behavior as:
    
      Inserted protection information consists of the computed CRC for the
      protection information format (refer to section 5.3.1) in the Guard
      field, the LBAT field value in the Application Tag field, the LBST
      field value in the Storage Tag field, if defined, and the computed
      reference tag in the Logical Block Reference Tag.
    
    Where the computed reference tag is defined as following for type 1 and
    type 2 using the text below that is duplicated in the respective bullet
    points:
    
      the value of the computed reference tag for the first logical block of
      the command is the value contained in the Initial Logical Block
      Reference Tag (ILBRT) or Expected Initial Logical Block Reference Tag
      (EILBRT) field in the command, and the computed reference tag is
      incremented for each subsequent logical block.
    
    So we need to set the ILBRT field, but we currently don't.  Interestingly,
    this works fine on my older type 1 formatted SSD, but Qemu trips up on
    this.  We already set ILBRT for Write Same since commit aeb7bb061be5
    ("nvme: set the PRACT bit when using Write Zeroes with T10 PI").
    
    To ease this, move the PI type check into nvme_set_ref_tag.
    
    Reviewed-by: Martin K. Petersen <[email protected]>
    Signed-off-by: Christoph Hellwig <[email protected]>
    Signed-off-by: Keith Busch <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
objtool/LoongArch: Mark special atomic instruction as INSN_BUG type [+ + +]
Author: Tiezhu Yang <[email protected]>
Date:   Thu Sep 18 19:43:36 2025 +0800

    objtool/LoongArch: Mark special atomic instruction as INSN_BUG type
    
    commit 539d7344d4feaea37e05863e9aa86bd31f28e46f upstream.
    
    When compiling with LLVM and CONFIG_RUST is set, there exists the
    following objtool warning:
    
      rust/compiler_builtins.o: warning: objtool: __rust__unordsf2(): unexpected end of section .text.unlikely.
    
    objdump shows that the end of section .text.unlikely is an atomic
    instruction:
    
      amswap.w        $zero, $ra, $zero
    
    According to the LoongArch Reference Manual, if the amswap.w atomic
    memory access instruction uses the same register number for rd and rj,
    its execution will trigger an Instruction Non-defined Exception, so
    mark the above instruction as INSN_BUG type to fix the warning.
    
    Cc: [email protected]
    Signed-off-by: Tiezhu Yang <[email protected]>
    Signed-off-by: Huacai Chen <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

objtool/LoongArch: Mark types based on break immediate code [+ + +]
Author: Tiezhu Yang <[email protected]>
Date:   Thu Sep 18 19:43:36 2025 +0800

    objtool/LoongArch: Mark types based on break immediate code
    
    commit baad7830ee9a56756b3857348452fe756cb0a702 upstream.
    
    If the break immediate code is 0, it should mark the type as
    INSN_TRAP. If the break immediate code is 1, it should mark the
    type as INSN_BUG.
    
    While at it, format the code style and add the code comment for nop.
    
    Cc: [email protected]
    Suggested-by: WANG Rui <[email protected]>
    Signed-off-by: Tiezhu Yang <[email protected]>
    Signed-off-by: Huacai Chen <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
octeon_ep: fix VF MAC address lifecycle handling [+ + +]
Author: Sathesh B Edara <[email protected]>
Date:   Tue Sep 16 06:32:07 2025 -0700

    octeon_ep: fix VF MAC address lifecycle handling
    
    [ Upstream commit a72175c985132885573593222a7b088cf49b07ae ]
    
    Currently, the VF MAC address info is not updated when the MAC address is
    configured from the VF, and it is not cleared when the VF is removed. This
    leads to stale or missing MAC information in the PF, which may cause
    incorrect state tracking or inconsistencies when VFs are hot-plugged
    or reassigned.
    
    Fix this by:
     - storing the VF MAC address in the PF when it is set from VF
     - clearing the stored VF MAC address when the VF is removed
    
    This ensures that the PF always has correct VF MAC state.
    
    Fixes: cde29af9e68e ("octeon_ep: add PF-VF mailbox communication")
    Signed-off-by: Sathesh B Edara <[email protected]>
    Reviewed-by: Simon Horman <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
octeontx2-pf: Fix use-after-free bugs in otx2_sync_tstamp() [+ + +]
Author: Duoming Zhou <[email protected]>
Date:   Wed Sep 17 14:38:53 2025 +0800

    octeontx2-pf: Fix use-after-free bugs in otx2_sync_tstamp()
    
    [ Upstream commit f8b4687151021db61841af983f1cb7be6915d4ef ]
    
    The original code relies on cancel_delayed_work() in otx2_ptp_destroy(),
    which does not ensure that the delayed work item synctstamp_work has fully
    completed if it was already running. This leads to use-after-free scenarios
    where otx2_ptp is deallocated by otx2_ptp_destroy(), while synctstamp_work
    remains active and attempts to dereference otx2_ptp in otx2_sync_tstamp().
    Furthermore, since synctstamp_work is cyclic, the likelihood of triggering
    the bug is non-negligible.
    
    A typical race condition is illustrated below:
    
    CPU 0 (cleanup)           | CPU 1 (delayed work callback)
    otx2_remove()             |
      otx2_ptp_destroy()      | otx2_sync_tstamp()
        cancel_delayed_work() |
        kfree(ptp)            |
                              |   ptp = container_of(...); //UAF
                              |   ptp-> //UAF
    
    This is confirmed by a KASAN report:
    
    BUG: KASAN: slab-use-after-free in __run_timer_base.part.0+0x7d7/0x8c0
    Write of size 8 at addr ffff88800aa09a18 by task bash/136
    ...
    Call Trace:
     <IRQ>
     dump_stack_lvl+0x55/0x70
     print_report+0xcf/0x610
     ? __run_timer_base.part.0+0x7d7/0x8c0
     kasan_report+0xb8/0xf0
     ? __run_timer_base.part.0+0x7d7/0x8c0
     __run_timer_base.part.0+0x7d7/0x8c0
     ? __pfx___run_timer_base.part.0+0x10/0x10
     ? __pfx_read_tsc+0x10/0x10
     ? ktime_get+0x60/0x140
     ? lapic_next_event+0x11/0x20
     ? clockevents_program_event+0x1d4/0x2a0
     run_timer_softirq+0xd1/0x190
     handle_softirqs+0x16a/0x550
     irq_exit_rcu+0xaf/0xe0
     sysvec_apic_timer_interrupt+0x70/0x80
     </IRQ>
    ...
    Allocated by task 1:
     kasan_save_stack+0x24/0x50
     kasan_save_track+0x14/0x30
     __kasan_kmalloc+0x7f/0x90
     otx2_ptp_init+0xb1/0x860
     otx2_probe+0x4eb/0xc30
     local_pci_probe+0xdc/0x190
     pci_device_probe+0x2fe/0x470
     really_probe+0x1ca/0x5c0
     __driver_probe_device+0x248/0x310
     driver_probe_device+0x44/0x120
     __driver_attach+0xd2/0x310
     bus_for_each_dev+0xed/0x170
     bus_add_driver+0x208/0x500
     driver_register+0x132/0x460
     do_one_initcall+0x89/0x300
     kernel_init_freeable+0x40d/0x720
     kernel_init+0x1a/0x150
     ret_from_fork+0x10c/0x1a0
     ret_from_fork_asm+0x1a/0x30
    
    Freed by task 136:
     kasan_save_stack+0x24/0x50
     kasan_save_track+0x14/0x30
     kasan_save_free_info+0x3a/0x60
     __kasan_slab_free+0x3f/0x50
     kfree+0x137/0x370
     otx2_ptp_destroy+0x38/0x80
     otx2_remove+0x10d/0x4c0
     pci_device_remove+0xa6/0x1d0
     device_release_driver_internal+0xf8/0x210
     pci_stop_bus_device+0x105/0x150
     pci_stop_and_remove_bus_device_locked+0x15/0x30
     remove_store+0xcc/0xe0
     kernfs_fop_write_iter+0x2c3/0x440
     vfs_write+0x871/0xd70
     ksys_write+0xee/0x1c0
     do_syscall_64+0xac/0x280
     entry_SYSCALL_64_after_hwframe+0x77/0x7f
    ...
    
    Replace cancel_delayed_work() with cancel_delayed_work_sync() to ensure
    that the delayed work item is properly canceled before the otx2_ptp is
    deallocated.
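
    A minimal sketch of the ordering the fix enforces in the teardown path
    (struct and field names follow the changelog; this is illustrative, not
    the exact patch):

      /* otx2_ptp_destroy(): wait for a running callback to finish first */
      cancel_delayed_work_sync(&ptp->synctstamp_work);
      /* only now is it safe to free the memory the work dereferences */
      kfree(ptp);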
    
    This bug was initially identified through static analysis. To reproduce
    and test it, I simulated the OcteonTX2 PCI device in QEMU and introduced
    artificial delays within the otx2_sync_tstamp() function to increase the
    likelihood of triggering the bug.
    
    Fixes: 2958d17a8984 ("octeontx2-pf: Add support for ptp 1-step mode on CN10K silicon")
    Signed-off-by: Duoming Zhou <[email protected]>
    Reviewed-by: Vadim Fedorenko <[email protected]>
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
pcmcia: omap_cf: Mark driver struct with __refdata to prevent section mismatch [+ + +]
Author: Geert Uytterhoeven <[email protected]>
Date:   Wed Aug 13 17:50:14 2025 +0200

    pcmcia: omap_cf: Mark driver struct with __refdata to prevent section mismatch
    
    [ Upstream commit d1dfcdd30140c031ae091868fb5bed084132bca1 ]
    
    As described in the added code comment, a reference to .exit.text is ok
    for drivers registered via platform_driver_probe().  Make this explicit
    to prevent the following section mismatch warning
    
        WARNING: modpost: drivers/pcmcia/omap_cf: section mismatch in reference: omap_cf_driver+0x4 (section: .data) -> omap_cf_remove (section: .exit.text)
    
    that triggers on an omap1_defconfig + CONFIG_OMAP_CF=m build.
    
    Signed-off-by: Geert Uytterhoeven <[email protected]>
    Acked-by: Aaro Koskinen <[email protected]>
    Reviewed-by: Uwe Kleine-König <[email protected]>
    Signed-off-by: Dominik Brodowski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
perf/x86/intel: Fix crash in icl_update_topdown_event() [+ + +]
Author: Kan Liang <[email protected]>
Date:   Thu Jun 12 07:38:18 2025 -0700

    perf/x86/intel: Fix crash in icl_update_topdown_event()
    
    commit b0823d5fbacb1c551d793cbfe7af24e0d1fa45ed upstream.
    
    The perf_fuzzer found a hard-lockup crash on a RaptorLake machine:
    
      Oops: general protection fault, maybe for address 0xffff89aeceab400: 0000
      CPU: 23 UID: 0 PID: 0 Comm: swapper/23
      Tainted: [W]=WARN
      Hardware name: Dell Inc. Precision 9660/0VJ762
      RIP: 0010:native_read_pmc+0x7/0x40
      Code: cc e8 8d a9 01 00 48 89 03 5b cd cc cc cc cc 0f 1f ...
      RSP: 000:fffb03100273de8 EFLAGS: 00010046
      ....
      Call Trace:
        <TASK>
        icl_update_topdown_event+0x165/0x190
        ? ktime_get+0x38/0xd0
        intel_pmu_read_event+0xf9/0x210
        __perf_event_read+0xf9/0x210
    
    CPUs 16-23 are E-core CPUs that don't support the perf metrics feature.
    icl_update_topdown_event() should not be invoked on these CPUs.
    
    It's a regression of commit:
    
      f9bdf1f95339 ("perf/x86/intel: Avoid disable PMU if !cpuc->enabled in sample read")
    
    That commit mistakenly replaced the is_topdown_count() call with
    is_topdown_event() in the check that decides whether the topdown
    functions for the perf metrics feature should be invoked.
    
    Fix it.
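
    A minimal sketch of the corrected guard, based only on the description
    above (the real call site and surrounding code are in the upstream
    diff):

      if (is_topdown_count(event))      /* was: is_topdown_event(event) */
              icl_update_topdown_event(event);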
    
    Fixes: f9bdf1f95339 ("perf/x86/intel: Avoid disable PMU if !cpuc->enabled in sample read")
    Closes: https://lore.kernel.org/lkml/[email protected]/
    Reported-by: Vince Weaver <[email protected]>
    Signed-off-by: Kan Liang <[email protected]>
    Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
    Signed-off-by: Ingo Molnar <[email protected]>
    Tested-by: Vince Weaver <[email protected]>
    Cc: [email protected] # v6.15+
    Link: https://lore.kernel.org/r/[email protected]
    [ omitted PEBS check ]
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Angel Adetula <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
platform/x86: asus-wmi: Fix ROG button mapping, tablet mode on ASUS ROG Z13 [+ + +]
Author: Antheas Kapenekakis <[email protected]>
Date:   Fri Aug 8 17:47:10 2025 +0200

    platform/x86: asus-wmi: Fix ROG button mapping, tablet mode on ASUS ROG Z13
    
    commit 132bfcd24925d4d4531a19b87acb8474be82a017 upstream.
    
    In commit 9286dfd5735b ("platform/x86: asus-wmi: Fix spurious rfkill on
    UX8406MA"), Mathieu added a quirk for the Zenbook Duo to ignore the code
    0x5f (WLAN button disable). On that laptop, this code is triggered when
    the device's keyboard is attached.
    
    On the ASUS ROG Z13 2025, this code is triggered when pressing the side
    button of the device, which is used to open Armoury Crate in Windows.
    
    As this is becoming a pattern, where newer Asus laptops use this keycode
    for emitting events, let's convert the wlan ignore quirk to instead
    allow emitting codes, so that userspace programs can listen to it and
    so that it does not interfere with the rfkill state.
    
    With this patch, the Z13 will emit KEY_PROG3 and the Duo will remain
    unchanged and emit no event. While at it, add a quirk for the Z13 to
    switch into tablet mode when removing the keyboard.
    
    Signed-off-by: Antheas Kapenekakis <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Reviewed-by: Ilpo Järvinen <[email protected]>
    Signed-off-by: Ilpo Järvinen <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

platform/x86: asus-wmi: Re-add extra keys to ignore_key_wlan quirk [+ + +]
Author: Antheas Kapenekakis <[email protected]>
Date:   Tue Sep 16 09:28:18 2025 +0200

    platform/x86: asus-wmi: Re-add extra keys to ignore_key_wlan quirk
    
    commit 225d1ee0f5ba3218d1814d36564fdb5f37b50474 upstream.
    
    It turns out that the dual screen models use 0x5E for attaching and
    detaching the keyboard instead of 0x5F. So, re-add the codes by
    reverting commit cf3940ac737d ("platform/x86: asus-wmi: Remove extra
    keys from ignore_key_wlan quirk"). For our future reference, add a
    comment next to 0x5E indicating that it is used for that purpose.
    
    Fixes: cf3940ac737d ("platform/x86: asus-wmi: Remove extra keys from ignore_key_wlan quirk")
    Reported-by: Rahul Chandra <[email protected]>
    Closes: https://lore.kernel.org/all/10020-68c90c80-d-4ac6c580@106290038/
    Cc: [email protected]
    Signed-off-by: Antheas Kapenekakis <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Reviewed-by: Ilpo Järvinen <[email protected]>
    Signed-off-by: Ilpo Järvinen <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
power: supply: bq27xxx: fix error return in case of no bq27000 hdq battery [+ + +]
Author: H. Nikolaus Schaller <[email protected]>
Date:   Sat Aug 23 12:34:56 2025 +0200

    power: supply: bq27xxx: fix error return in case of no bq27000 hdq battery
    
    commit 2c334d038466ac509468fbe06905a32d202117db upstream.
    
    Since commit
    
            commit f16d9fb6cf03 ("power: supply: bq27xxx: Retrieve again when busy")
    
    the console log of some devices with hdq enabled but no bq27000 battery
    (like e.g. the Pandaboard) is flooded with messages like:
    
    [   34.247833] power_supply bq27000-battery: driver failed to report 'status' property: -1
    
    as soon as user-space is finding a /sys entry and trying to read the
    "status" property.
    
    It turns out that the offending commit changed the logic to return the
    value of cache.flags if it is < 0, likely under the assumption that it
    is an error number. For normal errors from bq27xxx_read() this is indeed
    the case.
    
    But there is special code to detect whether a bq27000 is missing or
    inaccessible through hdq/1-wire and to report this. In that case,
    cache.flags is historically set by
    
            commit 3dd843e1c26a ("bq27000: report missing device better.")
    
    to the constant -1, which used to make reading properties return -ENODEV,
    so everything appeared to be fine before the return value was passed
    upwards.
    
    Now the -1 is returned as -EPERM instead of -ENODEV, triggering the error
    condition in power_supply_format_property() which then floods the console log.
    
    So we change the detection of missing bq27000 battery to simply set
    
            cache.flags = -ENODEV
    
    instead of -1.
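
    The numeric overlap explains the misleading error: EPERM is 1, so a bare
    -1 travelling up the error path is indistinguishable from -EPERM, while
    -ENODEV (19) reports the intended "no such device". A minimal sketch of
    the change (the surrounding detection logic stays as in the driver):

      /* no bq27000 answered on hdq/1-wire: report a real error code */
      cache.flags = -ENODEV;    /* was: cache.flags = -1 (== -EPERM) */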
    
    Fixes: f16d9fb6cf03 ("power: supply: bq27xxx: Retrieve again when busy")
    Cc: Jerry Lv <[email protected]>
    Cc: [email protected]
    Signed-off-by: H. Nikolaus Schaller <[email protected]>
    Link: https://lore.kernel.org/r/692f79eb6fd541adb397038ea6e750d4de2deddf.1755945297.git.hns@goldelico.com
    Signed-off-by: Sebastian Reichel <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

power: supply: bq27xxx: restrict no-battery detection to bq27000 [+ + +]
Author: H. Nikolaus Schaller <[email protected]>
Date:   Sat Aug 23 12:34:57 2025 +0200

    power: supply: bq27xxx: restrict no-battery detection to bq27000
    
    commit 1e451977e1703b6db072719b37cd1b8e250b9cc9 upstream.
    
    There are fuel gauges in the bq27xxx series (e.g. bq27z561) which may in some
    cases report 0xff as the value of BQ27XXX_REG_FLAGS that should not be
    interpreted as "no battery" like for a disconnected battery with some built
    in bq27000 chip.
    
    So restrict the no-battery detection originally introduced by
    
        commit 3dd843e1c26a ("bq27000: report missing device better.")
    
    to the bq27000.
    
    There is no need to backport further because this was hidden before
    
            commit f16d9fb6cf03 ("power: supply: bq27xxx: Retrieve again when busy")
    
    Fixes: f16d9fb6cf03 ("power: supply: bq27xxx: Retrieve again when busy")
    Suggested-by: Jerry Lv <[email protected]>
    Cc: [email protected]
    Signed-off-by: H. Nikolaus Schaller <[email protected]>
    Link: https://lore.kernel.org/r/dd979fa6855fd051ee5117016c58daaa05966e24.1755945297.git.hns@goldelico.com
    Signed-off-by: Sebastian Reichel <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
qed: Don't collect too many protection override GRC elements [+ + +]
Author: Jamie Bainbridge <[email protected]>
Date:   Wed Sep 10 16:29:16 2025 +1000

    qed: Don't collect too many protection override GRC elements
    
    [ Upstream commit 56c0a2a9ddc2f5b5078c5fb0f81ab76bbc3d4c37 ]
    
    In the protection override dump path, the firmware can return far too
    many GRC elements, resulting in attempting to write past the end of the
    previously-kmalloc'ed dump buffer.
    
    This will result in a kernel panic with reason:
    
     BUG: unable to handle kernel paging request at ADDRESS
    
    where "ADDRESS" is just past the end of the protection override dump
    buffer. The start address of the buffer is:
     p_hwfn->cdev->dbg_features[DBG_FEATURE_PROTECTION_OVERRIDE].dump_buf
    and the size of the buffer is buf_size in the same data structure.
    
    The panic can be arrived at from either the qede Ethernet driver path:
    
        [exception RIP: qed_grc_dump_addr_range+0x108]
     qed_protection_override_dump at ffffffffc02662ed [qed]
     qed_dbg_protection_override_dump at ffffffffc0267792 [qed]
     qed_dbg_feature at ffffffffc026aa8f [qed]
     qed_dbg_all_data at ffffffffc026b211 [qed]
     qed_fw_fatal_reporter_dump at ffffffffc027298a [qed]
     devlink_health_do_dump at ffffffff82497f61
     devlink_health_report at ffffffff8249cf29
     qed_report_fatal_error at ffffffffc0272baf [qed]
     qede_sp_task at ffffffffc045ed32 [qede]
     process_one_work at ffffffff81d19783
    
    or the qedf storage driver path:
    
        [exception RIP: qed_grc_dump_addr_range+0x108]
     qed_protection_override_dump at ffffffffc068b2ed [qed]
     qed_dbg_protection_override_dump at ffffffffc068c792 [qed]
     qed_dbg_feature at ffffffffc068fa8f [qed]
     qed_dbg_all_data at ffffffffc0690211 [qed]
     qed_fw_fatal_reporter_dump at ffffffffc069798a [qed]
     devlink_health_do_dump at ffffffff8aa95e51
     devlink_health_report at ffffffff8aa9ae19
     qed_report_fatal_error at ffffffffc0697baf [qed]
     qed_hw_err_notify at ffffffffc06d32d7 [qed]
     qed_spq_post at ffffffffc06b1011 [qed]
     qed_fcoe_destroy_conn at ffffffffc06b2e91 [qed]
     qedf_cleanup_fcport at ffffffffc05e7597 [qedf]
     qedf_rport_event_handler at ffffffffc05e7bf7 [qedf]
     fc_rport_work at ffffffffc02da715 [libfc]
     process_one_work at ffffffff8a319663
    
    Resolve this by clamping the firmware's return value to the maximum
    number of legal elements the firmware should return.
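
    A minimal sketch of such a clamp (the limit macro name is hypothetical;
    the driver defines the real maximum for protection override elements):

      /* never trust the firmware count beyond what the dump buffer allows */
      num_elements = min_t(u32, num_elements, MAX_PROTECTION_OVERRIDE_ELEMENTS);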
    
    Fixes: d52c89f120de8 ("qed*: Utilize FW 8.37.2.0")
    Signed-off-by: Jamie Bainbridge <[email protected]>
    Link: https://patch.msgid.link/f8e1182934aa274c18d0682a12dbaf347595469c.1757485536.git.jamie.bainbridge@gmail.com
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
rds: ib: Increment i_fastreg_wrs before bailing out [+ + +]
Author: Håkon Bugge <[email protected]>
Date:   Thu Sep 11 15:33:34 2025 +0200

    rds: ib: Increment i_fastreg_wrs before bailing out
    
    commit 4351ca3fcb3ffecf12631b4996bf085a2dad0db6 upstream.
    
    We need to increment i_fastreg_wrs before we bail out from
    rds_ib_post_reg_frmr().
    
    We have a fixed budget for how many FRWR operations can be outstanding
    on the dedicated QP used for memory registrations and de-registrations.
    This budget is enforced by the atomic_t
    i_fastreg_wrs. If we bail out early in rds_ib_post_reg_frmr(), we will
    "leak" the possibility of posting an FRWR operation, and if that
    accumulates, no FRWR operation can be carried out.
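
    A minimal sketch of the early-return fix (the counter is the
    i_fastreg_wrs budget mentioned above; the ibmr->ic object path and the
    exact error paths are illustrative, see the upstream diff):

      if (ret) {
              /* give the budgeted work request back before bailing out */
              atomic_inc(&ibmr->ic->i_fastreg_wrs);
              return ret;
      }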
    
    Fixes: 1659185fb4d0 ("RDS: IB: Support Fastreg MR (FRMR) memory registration mode")
    Fixes: 3a2886cca703 ("net/rds: Keep track of and wait for FRWR segments in use upon shutdown")
    Cc: [email protected]
    Signed-off-by: Håkon Bugge <[email protected]>
    Reviewed-by: Allison Henderson <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
Revert "net/mlx5e: Update and set Xon/Xoff upon port speed set" [+ + +]
Author: Tariq Toukan <[email protected]>
Date:   Wed Sep 17 16:48:54 2025 +0300

    Revert "net/mlx5e: Update and set Xon/Xoff upon port speed set"
    
    [ Upstream commit 3fbfe251cc9f6d391944282cdb9bcf0bd02e01f8 ]
    
    This reverts commit d24341740fe48add8a227a753e68b6eedf4b385a.
    It causes errors when trying to configure QoS, as well as
    loss of L2 connectivity (on multi-host devices).
    
    Reported-by: Jakub Kicinski <[email protected]>
    Link: https://lore.kernel.org/[email protected]
    Fixes: d24341740fe4 ("net/mlx5e: Update and set Xon/Xoff upon port speed set")
    Signed-off-by: Tariq Toukan <[email protected]>
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
rtc: pcf2127: fix SPI command byte for PCF2131 backport [+ + +]
Author: Bruno Thomsen <[email protected]>
Date:   Wed Aug 20 21:30:16 2025 +0200

    rtc: pcf2127: fix SPI command byte for PCF2131 backport
    
    When commit fa78e9b606a472495ef5b6b3d8b45c37f7727f9d upstream was
    backported to LTS branches linux-6.12.y and linux-6.6.y, the SPI regmap
    config fix was applied to the I2C regmap config instead. This is most
    likely due to a new RTC get/set param feature introduced in 6.14 that
    caused the regmap config sections at the bottom of the driver to move.
    LTS branch linux-6.1.y and earlier branches do not have PCF2131 device
    support.
    
    The issue can be seen at the bottom of this diff in the stable/linux.git
    tree:
    git diff master..linux-6.12.y -- drivers/rtc/rtc-pcf2127.c
    
    Fixes: ee61aec8529e ("rtc: pcf2127: fix SPI command byte for PCF2131")
    Fixes: 5cdd1f73401d ("rtc: pcf2127: fix SPI command byte for PCF2131")
    Cc: [email protected]
    Cc: Alexandre Belloni <[email protected]>
    Cc: Elena Popa <[email protected]>
    Cc: Hugo Villeneuve <[email protected]>
    Signed-off-by: Bruno Thomsen <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
selftests: mptcp: avoid spurious errors on TCP disconnect [+ + +]
Author: Matthieu Baerts (NGI0) <[email protected]>
Date:   Fri Sep 12 14:25:52 2025 +0200

    selftests: mptcp: avoid spurious errors on TCP disconnect
    
    commit 8708c5d8b3fb3f6d5d3b9e6bfe01a505819f519a upstream.
    
    The disconnect test-case with 'plain' TCP sockets generates spurious
    errors, e.g.:
    
      07 ns1 TCP   -> ns1 (dead:beef:1::1:10006) MPTCP
      read: Connection reset by peer
      read: Connection reset by peer
      (duration   155ms) [FAIL] client exit code 3, server 3
    
      netns ns1-FloSdv (listener) socket stat for 10006:
      TcpActiveOpens                  2                  0.0
      TcpPassiveOpens                 2                  0.0
      TcpEstabResets                  2                  0.0
      TcpInSegs                       274                0.0
      TcpOutSegs                      276                0.0
      TcpOutRsts                      3                  0.0
      TcpExtPruneCalled               2                  0.0
      TcpExtRcvPruned                 1                  0.0
      TcpExtTCPPureAcks               104                0.0
      TcpExtTCPRcvCollapsed           2                  0.0
      TcpExtTCPBacklogCoalesce        42                 0.0
      TcpExtTCPRcvCoalesce            43                 0.0
      TcpExtTCPChallengeACK           1                  0.0
      TcpExtTCPFromZeroWindowAdv      42                 0.0
      TcpExtTCPToZeroWindowAdv        41                 0.0
      TcpExtTCPWantZeroWindowAdv      13                 0.0
      TcpExtTCPOrigDataSent           164                0.0
      TcpExtTCPDelivered              165                0.0
      TcpExtTCPRcvQDrop               1                  0.0
    
    In the failing scenarios (TCP -> MPTCP), the involved sockets are
    actually plain TCP ones, as fallbacks for passive sockets at 2WHS time
    cause the MPTCP listeners to actually create 'plain' TCP sockets.
    
    Similar to commit 218cc166321f ("selftests: mptcp: avoid spurious errors
    on disconnect"), the root cause is in the user-space bits: the test
    program tries to disconnect as soon as all the pending data has been
    spooled, generating an RST. If such an RST reaches the peer before the
    connection has reached the closed state, the TCP socket will report an
    error to user-space, as per the protocol specification, causing the
    above failure. Note that this issue appears to have become more visible
    since the "tcp: receiver changes" series from commit 06baf9bfa6ca ("Merge
    branch 'tcp-receiver-changes'").
    
    Address the issue by explicitly waiting for the TCP sockets (-t) to
    reach a closed status before performing the disconnect. More precisely,
    the test program now waits for plain TCP sockets or TCP subflows in
    addition to the MPTCP sockets that were already monitored.
    
    While at it, use 'ss' with '-n' to avoid resolving service names, which
    is not needed here.
    
    Fixes: 218cc166321f ("selftests: mptcp: avoid spurious errors on disconnect")
    Cc: [email protected]
    Suggested-by: Paolo Abeni <[email protected]>
    Reviewed-by: Mat Martineau <[email protected]>
    Reviewed-by: Geliang Tang <[email protected]>
    Signed-off-by: Matthieu Baerts (NGI0) <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

selftests: mptcp: connect: catch IO errors on listen side [+ + +]
Author: Matthieu Baerts (NGI0) <[email protected]>
Date:   Fri Sep 12 14:25:51 2025 +0200

    selftests: mptcp: connect: catch IO errors on listen side
    
    commit 14e22b43df25dbd4301351b882486ea38892ae4f upstream.
    
    IO errors were correctly printed to stderr, and propagated up to the
    main loop for the server side, but the returned value was ignored. As a
    consequence, the program for the listener side was no longer exiting
    with an error code in case of IO issues.
    
    Because of that, some issues might not have been seen. But very likely,
    most issues either had an effect on the client side, or the file
    transfer was not the expected one, e.g. the connection got reset before
    the end. Still, it is better to fix this.
    
    The main consequence of this issue is the error that was reported by the
    selftests: the received and sent files were different, and the MIB
    counters were not printed. Also, when such errors happened during the
    'disconnect' tests, the program tried to continue until the timeout.
    
    Now when an IO error is detected, the program exits directly with an
    error.
    
    Fixes: 05be5e273c84 ("selftests: mptcp: add disconnect tests")
    Cc: [email protected]
    Reviewed-by: Mat Martineau <[email protected]>
    Reviewed-by: Geliang Tang <[email protected]>
    Signed-off-by: Matthieu Baerts (NGI0) <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

selftests: mptcp: sockopt: fix error messages [+ + +]
Author: Geliang Tang <[email protected]>
Date:   Fri Sep 12 14:52:24 2025 +0200

    selftests: mptcp: sockopt: fix error messages
    
    [ Upstream commit b86418beade11d45540a2d20c4ec1128849b6c27 ]
    
    This patch fixes several issues in the error reporting of the MPTCP sockopt
    selftest:
    
    1. Fix diff not printed: The error messages for counter mismatches had
       the actual difference ('diff') as argument, but it was missing in the
       format string. Displaying it makes the debugging easier.
    
    2. Fix variable usage: The error check for 'mptcpi_bytes_acked' incorrectly
       used 'ret2' (sent bytes) for both the expected value and the difference
       calculation. It now correctly uses 'ret' (received bytes), which is the
       expected value for bytes_acked.
    
    3. Fix off-by-one in diff: The calculation for the 'mptcpi_rcv_delta' diff
       was 's.mptcpi_rcv_delta - ret', which is off-by-one. It has been
       corrected to 's.mptcpi_rcv_delta - (ret + 1)' to match the expected
       value in the condition above it.
    
    Fixes: 5dcff89e1455 ("selftests: mptcp: explicitly tests aggregate counters")
    Signed-off-by: Geliang Tang <[email protected]>
    Reviewed-by: Matthieu Baerts (NGI0) <[email protected]>
    Signed-off-by: Matthieu Baerts (NGI0) <[email protected]>
    Link: https://patch.msgid.link/20250912-net-mptcp-pm-uspace-deny_join_id0-v1-5-40171884ade8@kernel.org
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

selftests: mptcp: userspace pm: validate deny-join-id0 flag [+ + +]
Author: Matthieu Baerts (NGI0) <[email protected]>
Date:   Fri Sep 12 14:52:22 2025 +0200

    selftests: mptcp: userspace pm: validate deny-join-id0 flag
    
    [ Upstream commit 24733e193a0d68f20d220e86da0362460c9aa812 ]
    
    The previous commit adds the MPTCP_PM_EV_FLAG_DENY_JOIN_ID0 flag. Make
    sure it is correctly announced by the other peer when it has been
    received.
    
    pm_nl_ctl will now display 'deny_join_id0:1' when monitoring the events,
    and when this flag was set by the other peer.
    
    The 'Fixes' tag here below is the same as the one from the previous
    commit: this patch here is not fixing anything wrong in the selftests,
    but it validates the previous fix for an issue introduced by this commit
    ID.
    
    Fixes: 702c2f646d42 ("mptcp: netlink: allow userspace-driven subflow establishment")
    Reviewed-by: Mat Martineau <[email protected]>
    Signed-off-by: Matthieu Baerts (NGI0) <[email protected]>
    Link: https://patch.msgid.link/20250912-net-mptcp-pm-uspace-deny_join_id0-v1-3-40171884ade8@kernel.org
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
smb: client: fix filename matching of deferred files [+ + +]
Author: Paulo Alcantara <[email protected]>
Date:   Wed Sep 17 16:03:22 2025 -0300

    smb: client: fix filename matching of deferred files
    
    [ Upstream commit 93ed9a2951308db374cba4562533dde97bac70d3 ]
    
    Fix the following case where the client would end up closing both
    deferred files (foo.tmp & foo) after unlink(foo) due to strstr() call
    in cifs_close_deferred_file_under_dentry():
    
      fd1 = openat(AT_FDCWD, "foo", O_WRONLY|O_CREAT|O_TRUNC, 0666);
      fd2 = openat(AT_FDCWD, "foo.tmp", O_WRONLY|O_CREAT|O_TRUNC, 0666);
      close(fd1);
      close(fd2);
      unlink("foo");
    
    Fixes: e3fc065682eb ("cifs: Deferred close performance improvements")
    Signed-off-by: Paulo Alcantara (Red Hat) <[email protected]>
    Reviewed-by: Enzo Matsumiya <[email protected]>
    Cc: Frank Sorenson <[email protected]>
    Cc: David Howells <[email protected]>
    Cc: [email protected]
    Signed-off-by: Steve French <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

smb: client: fix smbdirect_recv_io leak in smbd_negotiate() error path [+ + +]
Author: Stefan Metzmacher <[email protected]>
Date:   Thu Sep 18 03:06:46 2025 +0200

    smb: client: fix smbdirect_recv_io leak in smbd_negotiate() error path
    
    [ Upstream commit daac51c7032036a0ca5f1aa419ad1b0471d1c6e0 ]
    
    During tests of another unrelated patch I was able to trigger this
    error: Objects remaining on __kmem_cache_shutdown()
    
    Cc: Steve French <[email protected]>
    Cc: Tom Talpey <[email protected]>
    Cc: Long Li <[email protected]>
    Cc: Namjae Jeon <[email protected]>
    Cc: [email protected]
    Cc: [email protected]
    Fixes: f198186aa9bb ("CIFS: SMBD: Establish SMB Direct connection")
    Signed-off-by: Stefan Metzmacher <[email protected]>
    Signed-off-by: Steve French <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

smb: client: let smbd_destroy() call disable_work_sync(&info->post_send_credits_work) [+ + +]
Author: Stefan Metzmacher <[email protected]>
Date:   Tue Aug 12 13:03:19 2025 +0200

    smb: client: let smbd_destroy() call disable_work_sync(&info->post_send_credits_work)
    
    [ Upstream commit d9dcbbcf9145b68aa85c40947311a6907277e097 ]
    
    In smbd_destroy() we may free the underlying memory, so we had better
    wait until post_send_credits_work is no longer pending and can never be
    started again.
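
    A minimal sketch of the teardown ordering described above (the work item
    name follows the changelog):

      /* smbd_destroy(): the work can neither run nor be re-queued after this */
      disable_work_sync(&info->post_send_credits_work);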
    
    I actually just hit the case using rxe:
    
    WARNING: CPU: 0 PID: 138 at drivers/infiniband/sw/rxe/rxe_verbs.c:1032 rxe_post_recv+0x1ee/0x480 [rdma_rxe]
    ...
    [ 5305.686979] [    T138]  smbd_post_recv+0x445/0xc10 [cifs]
    [ 5305.687135] [    T138]  ? srso_alias_return_thunk+0x5/0xfbef5
    [ 5305.687149] [    T138]  ? __kasan_check_write+0x14/0x30
    [ 5305.687185] [    T138]  ? __pfx_smbd_post_recv+0x10/0x10 [cifs]
    [ 5305.687329] [    T138]  ? __pfx__raw_spin_lock_irqsave+0x10/0x10
    [ 5305.687356] [    T138]  ? srso_alias_return_thunk+0x5/0xfbef5
    [ 5305.687368] [    T138]  ? srso_alias_return_thunk+0x5/0xfbef5
    [ 5305.687378] [    T138]  ? _raw_spin_unlock_irqrestore+0x11/0x60
    [ 5305.687389] [    T138]  ? srso_alias_return_thunk+0x5/0xfbef5
    [ 5305.687399] [    T138]  ? get_receive_buffer+0x168/0x210 [cifs]
    [ 5305.687555] [    T138]  smbd_post_send_credits+0x382/0x4b0 [cifs]
    [ 5305.687701] [    T138]  ? __pfx_smbd_post_send_credits+0x10/0x10 [cifs]
    [ 5305.687855] [    T138]  ? __pfx___schedule+0x10/0x10
    [ 5305.687865] [    T138]  ? __pfx__raw_spin_lock_irq+0x10/0x10
    [ 5305.687875] [    T138]  ? queue_delayed_work_on+0x8e/0xa0
    [ 5305.687889] [    T138]  process_one_work+0x629/0xf80
    [ 5305.687908] [    T138]  ? srso_alias_return_thunk+0x5/0xfbef5
    [ 5305.687917] [    T138]  ? __kasan_check_write+0x14/0x30
    [ 5305.687933] [    T138]  worker_thread+0x87f/0x1570
    ...
    
    It means rxe_post_recv was called after rdma_destroy_qp().
    This happened because put_receive_buffer() was triggered
    by ib_drain_qp() and called:
    queue_work(info->workqueue, &info->post_send_credits_work);
    
    Cc: Steve French <[email protected]>
    Cc: Tom Talpey <[email protected]>
    Cc: Long Li <[email protected]>
    Cc: Namjae Jeon <[email protected]>
    Cc: [email protected]
    Cc: [email protected]
    Fixes: f198186aa9bb ("CIFS: SMBD: Establish SMB Direct connection")
    Signed-off-by: Stefan Metzmacher <[email protected]>
    Signed-off-by: Steve French <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
tcp: Clear tcp_sk(sk)->fastopen_rsk in tcp_disconnect(). [+ + +]
Author: Kuniyuki Iwashima <[email protected]>
Date:   Mon Sep 15 17:56:46 2025 +0000

    tcp: Clear tcp_sk(sk)->fastopen_rsk in tcp_disconnect().
    
    [ Upstream commit 45c8a6cc2bcd780e634a6ba8e46bffbdf1fc5c01 ]
    
    syzbot reported the splat below where a socket had tcp_sk(sk)->fastopen_rsk
    in the TCP_ESTABLISHED state. [0]
    
    syzbot reused the server-side TCP Fast Open socket as a new client before
    the TFO socket completes 3WHS:
    
      1. accept()
      2. connect(AF_UNSPEC)
      3. connect() to another destination
    
    As of accept(), sk->sk_state is TCP_SYN_RECV, and tcp_disconnect() changes
    it to TCP_CLOSE and makes connect() possible, which restarts timers.
    
    Since tcp_disconnect() forgot to clear tcp_sk(sk)->fastopen_rsk, the
    retransmit timer triggered the warning and the intended packet was not
    retransmitted.
    
    Let's call reqsk_fastopen_remove() in tcp_disconnect().
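
    A sketch of the idea (exact placement and locking are in the upstream
    diff; the RCU accessor mirrors how fastopen_rsk is normally dereferenced
    under the socket lock):

      struct request_sock *req;

      req = rcu_dereference_protected(tcp_sk(sk)->fastopen_rsk,
                                      lockdep_sock_is_held(sk));
      if (req)
              reqsk_fastopen_remove(sk, req, false);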
    
    [0]:
    WARNING: CPU: 2 PID: 0 at net/ipv4/tcp_timer.c:542 tcp_retransmit_timer (net/ipv4/tcp_timer.c:542 (discriminator 7))
    Modules linked in:
    CPU: 2 UID: 0 PID: 0 Comm: swapper/2 Not tainted 6.17.0-rc5-g201825fb4278 #62 PREEMPT(voluntary)
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
    RIP: 0010:tcp_retransmit_timer (net/ipv4/tcp_timer.c:542 (discriminator 7))
    Code: 41 55 41 54 55 53 48 8b af b8 08 00 00 48 89 fb 48 85 ed 0f 84 55 01 00 00 0f b6 47 12 3c 03 74 0c 0f b6 47 12 3c 04 74 04 90 <0f> 0b 90 48 8b 85 c0 00 00 00 48 89 ef 48 8b 40 30 e8 6a 4f 06 3e
    RSP: 0018:ffffc900002f8d40 EFLAGS: 00010293
    RAX: 0000000000000002 RBX: ffff888106911400 RCX: 0000000000000017
    RDX: 0000000002517619 RSI: ffffffff83764080 RDI: ffff888106911400
    RBP: ffff888106d5c000 R08: 0000000000000001 R09: ffffc900002f8de8
    R10: 00000000000000c2 R11: ffffc900002f8ff8 R12: ffff888106911540
    R13: ffff888106911480 R14: ffff888106911840 R15: ffffc900002f8de0
    FS:  0000000000000000(0000) GS:ffff88907b768000(0000) knlGS:0000000000000000
    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 00007f8044d69d90 CR3: 0000000002c30003 CR4: 0000000000370ef0
    Call Trace:
     <IRQ>
     tcp_write_timer (net/ipv4/tcp_timer.c:738)
     call_timer_fn (kernel/time/timer.c:1747)
     __run_timers (kernel/time/timer.c:1799 kernel/time/timer.c:2372)
     timer_expire_remote (kernel/time/timer.c:2385 kernel/time/timer.c:2376 kernel/time/timer.c:2135)
     tmigr_handle_remote_up (kernel/time/timer_migration.c:944 kernel/time/timer_migration.c:1035)
     __walk_groups.isra.0 (kernel/time/timer_migration.c:533 (discriminator 1))
     tmigr_handle_remote (kernel/time/timer_migration.c:1096)
     handle_softirqs (./arch/x86/include/asm/jump_label.h:36 ./include/trace/events/irq.h:142 kernel/softirq.c:580)
     irq_exit_rcu (kernel/softirq.c:614 kernel/softirq.c:453 kernel/softirq.c:680 kernel/softirq.c:696)
     sysvec_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:1050 (discriminator 35) arch/x86/kernel/apic/apic.c:1050 (discriminator 35))
     </IRQ>
    
    Fixes: 8336886f786f ("tcp: TCP Fast Open Server - support TFO listeners")
    Reported-by: syzkaller <[email protected]>
    Signed-off-by: Kuniyuki Iwashima <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
tls: make sure to abort the stream if headers are bogus [+ + +]
Author: Jakub Kicinski <[email protected]>
Date:   Tue Sep 16 17:28:13 2025 -0700

    tls: make sure to abort the stream if headers are bogus
    
    [ Upstream commit 0aeb54ac4cd5cf8f60131b4d9ec0b6dc9c27b20d ]
    
    Normally we wait for the socket to buffer up the whole record
    before we service it. If the socket has a tiny buffer, however,
    we read out the data sooner, to prevent connection stalls.
    Make sure that we abort the connection when we find out late
    that the record is actually invalid. Retrying the parsing is fine in
    itself, but since we copy some more data each time before we parse, we
    can overflow the allocated skb space.
    
    Constructing a scenario in which we're under pressure without
    enough data in the socket to parse the length upfront is quite
    hard. syzbot figured out a way to do this by serving us the header
    in small OOB sends, and then filling in the recvbuf with a large
    normal send.
    
    Make sure that tls_rx_msg_size() aborts strp, if we reach
    an invalid record there's really no way to recover.
    
    Reported-by: Lee Jones <[email protected]>
    Fixes: 84c61fe1a75b ("tls: rx: do not use the standard strparser")
    Reviewed-by: Sabrina Dubroca <[email protected]>
    Signed-off-by: Jakub Kicinski <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Paolo Abeni <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
um: Fix FD copy size in os_rcv_fd_msg() [+ + +]
Author: Tiwei Bie <[email protected]>
Date:   Mon Sep 1 08:27:15 2025 +0800

    um: Fix FD copy size in os_rcv_fd_msg()
    
    [ Upstream commit df447a3b4a4b961c9979b4b3ffb74317394b9b40 ]
    
    When copying FDs, the copy size should not include the control
    message header (cmsghdr). Fix it.
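
    For reference, the usual way to size an SCM_RIGHTS payload is to subtract
    the (aligned) header length from cmsg_len; a sketch of that idiom, not
    necessarily the exact hunk:

      struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
      int num_fds = (cmsg->cmsg_len - CMSG_LEN(0)) / sizeof(int);

      /* copy only the file descriptors, not the cmsghdr itself */
      memcpy(fds, CMSG_DATA(cmsg), num_fds * sizeof(int));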
    
    Fixes: 5cde6096a4dd ("um: generalize os_rcv_fd")
    Signed-off-by: Tiwei Bie <[email protected]>
    Signed-off-by: Johannes Berg <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

um: virtio_uml: Fix use-after-free after put_device in probe [+ + +]
Author: Miaoqian Lin <[email protected]>
Date:   Thu Aug 28 15:00:51 2025 +0800

    um: virtio_uml: Fix use-after-free after put_device in probe
    
    [ Upstream commit 7ebf70cf181651fe3f2e44e95e7e5073d594c9c0 ]
    
    When register_virtio_device() fails in virtio_uml_probe(),
    the code sets vu_dev->registered = 1 even though
    the device was not successfully registered.
    This can lead to use-after-free or other issues.
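
    A sketch of the corrected ordering (the error-handling details are
    illustrative):

      rc = register_virtio_device(&vu_dev->vdev);
      if (rc) {
              put_device(&vu_dev->vdev.dev);
              return rc;
      }
      /* mark the device as registered only after registration succeeded */
      vu_dev->registered = 1;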
    
    Fixes: 04e5b1fb0183 ("um: virtio: Remove device on disconnect")
    Signed-off-by: Miaoqian Lin <[email protected]>
    Signed-off-by: Johannes Berg <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
usb: xhci: introduce macro for ring segment list iteration [+ + +]
Author: Niklas Neronin <[email protected]>
Date:   Wed Sep 17 08:39:06 2025 -0400

    usb: xhci: introduce macro for ring segment list iteration
    
    [ Upstream commit 3f970bd06c5295e742ef4f9cf7808a3cb74a6816 ]
    
    Add a macro to streamline and standardize iteration over the ring
    segment list.
    
    xhci_for_each_ring_seg(): Iterates over the entire ring segment list.
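
    One plausible shape for such a circular-list helper (the upstream
    definition may differ in detail):

      #define xhci_for_each_ring_seg(head, seg)                 \
              for (seg = head; seg != NULL;                     \
                   seg = (seg->next != head ? seg->next : NULL))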
    
    The xhci_free_segments_for_ring() function's while loop has not been
    updated to use the new macro. This function has some underlying issues,
    and as a result, it will be handled separately in a future patch.
    
    Suggested-by: Andy Shevchenko <[email protected]>
    Reviewed-by: Andy Shevchenko <[email protected]>
    Signed-off-by: Niklas Neronin <[email protected]>
    Signed-off-by: Mathias Nyman <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
    Stable-dep-of: a5c98e8b1398 ("xhci: dbc: Fix full DbC transfer ring after several reconnects")
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

usb: xhci: remove option to change a default ring's TRB cycle bit [+ + +]
Author: Niklas Neronin <[email protected]>
Date:   Wed Sep 17 08:39:07 2025 -0400

    usb: xhci: remove option to change a default ring's TRB cycle bit
    
    [ Upstream commit e1b0fa863907a61e86acc19ce2d0633941907c8e ]
    
    The TRB cycle bit indicates TRB ownership by the Host Controller (HC) or
    Host Controller Driver (HCD). New rings are initialized with 'cycle_state'
    equal to one, and all their TRBs' cycle bits are set to zero. When handling
    ring expansion, set the source ring cycle bits to the same value as the
    destination ring.
    
    Move the cycle bit setting from xhci_segment_alloc() to xhci_link_rings(),
    and remove the 'cycle_state' argument from xhci_initialize_ring_info().
    The xhci_segment_alloc() function uses kzalloc_node() to allocate segments,
    ensuring that all TRB cycle bits are initialized to zero.
    
    Signed-off-by: Niklas Neronin <[email protected]>
    Signed-off-by: Mathias Nyman <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
    Stable-dep-of: a5c98e8b1398 ("xhci: dbc: Fix full DbC transfer ring after several reconnects")
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
vmxnet3: unregister xdp rxq info in the reset path [+ + +]
Author: Sankararaman Jayaraman <[email protected]>
Date:   Thu Mar 20 10:25:22 2025 +0530

    vmxnet3: unregister xdp rxq info in the reset path
    
    commit 0dd765fae295832934bf28e45dd5a355e0891ed4 upstream.
    
    vmxnet3 does not unregister xdp rxq info in the
    vmxnet3_reset_work() code path as vmxnet3_rq_destroy()
    is not invoked in this code path. So we get the message below with a
    backtrace.
    
    Missing unregister, handled but fix driver
    WARNING: CPU:48 PID: 500 at net/core/xdp.c:182
    __xdp_rxq_info_reg+0x93/0xf0
    
    This patch fixes the problem by moving the XDP unregister code from
    vmxnet3_rq_destroy() to vmxnet3_rq_cleanup().
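
    A sketch of the cleanup-path change (the rq->xdp_rxq field name is
    illustrative; the surrounding driver code is omitted):

      /* vmxnet3_rq_cleanup(): also runs in the reset path */
      if (xdp_rxq_info_is_reg(&rq->xdp_rxq))
              xdp_rxq_info_unreg(&rq->xdp_rxq);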
    
    Fixes: 54f00cce1178 ("vmxnet3: Add XDP support.")
    Signed-off-by: Sankararaman Jayaraman <[email protected]>
    Signed-off-by: Ronak Doshi <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Jakub Kicinski <[email protected]>
    [ Ajay: Modified to apply on v6.6, v6.12 ]
    Signed-off-by: Ajay Kaher <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
wifi: mac80211: fix incorrect type for ret [+ + +]
Author: Liao Yuanhong <[email protected]>
Date:   Mon Aug 25 10:29:11 2025 +0800

    wifi: mac80211: fix incorrect type for ret
    
    [ Upstream commit a33b375ab5b3a9897a0ab76be8258d9f6b748628 ]
    
    The variable ret is declared as a u32 type, but it is assigned a value
    of -EOPNOTSUPP. Since unsigned types cannot correctly represent negative
    values, the type of ret should be changed to int.
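
    A tiny standalone illustration of why the unsigned type hides the error
    (plain C, outside the kernel):

      #include <errno.h>
      #include <stdio.h>

      int main(void)
      {
              unsigned int ret = -EOPNOTSUPP; /* wraps to a huge positive value */

              /* the usual "if (ret < 0)" error check can never trigger */
              printf("ret < 0 is %s\n", ret < 0 ? "true" : "false");
              return 0;
      }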
    
    Signed-off-by: Liao Yuanhong <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Johannes Berg <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

wifi: mac80211: increase scan_ies_len for S1G [+ + +]
Author: Lachlan Hodges <[email protected]>
Date:   Tue Aug 26 18:54:37 2025 +1000

    wifi: mac80211: increase scan_ies_len for S1G
    
    [ Upstream commit 7e2f3213e85eba00acb4cfe6d71647892d63c3a1 ]
    
    Currently the S1G capability element is not taken into account
    for the scan_ies_len, which leads to a buffer length validation
    failure in ieee80211_prep_hw_scan() and subsequent WARN in
    __ieee80211_start_scan(). This prevents hw scanning from functioning.
    To fix this, ensure we account for the S1G capability element length.
    
    Signed-off-by: Lachlan Hodges <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Johannes Berg <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

wifi: wilc1000: avoid buffer overflow in WID string configuration [+ + +]
Author: [email protected] <[email protected]>
Date:   Fri Aug 29 22:58:43 2025 +0000

    wifi: wilc1000: avoid buffer overflow in WID string configuration
    
    [ Upstream commit fe9e4d0c39311d0f97b024147a0d155333f388b5 ]
    
    Fix the following copy overflow warning identified by Smatch checker.
    
     drivers/net/wireless/microchip/wilc1000/wlan_cfg.c:184 wilc_wlan_parse_response_frame()
            error: '__memcpy()' 'cfg->s[i]->str' copy overflow (512 vs 65537)
    
    This patch introduces a size check before accessing the memory buffer.
    The checks are based on the WID type of the data received from the
    firmware. For WID string configuration, the size limit is determined by
    the individual element size in 'struct wilc_cfg_str_vals', which is
    maintained in the 'len' field of 'struct wilc_cfg_str'.
    
    Reported-by: Dan Carpenter <[email protected]>
    Closes: https://lore.kernel.org/linux-wireless/[email protected]
    Suggested-by: Dan Carpenter <[email protected]>
    Signed-off-by: Ajay Singh <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Johannes Berg <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
 
x86/bugs: Add SRSO_USER_KERNEL_NO support [+ + +]
Author: Borislav Petkov (AMD) <[email protected]>
Date:   Mon Nov 11 17:22:08 2024 +0100

    x86/bugs: Add SRSO_USER_KERNEL_NO support
    
    commit 877818802c3e970f67ccb53012facc78bef5f97a upstream.
    
    If the machine has:
    
      CPUID Fn8000_0021_EAX[30] (SRSO_USER_KERNEL_NO) -- If this bit is 1,
      it indicates the CPU is not subject to the SRSO vulnerability across
      user/kernel boundaries.
    
    have it fall back to IBPB on VMEXIT only, in the case it is going to run
    VMs:
    
      Speculative Return Stack Overflow: Mitigation: IBPB on VMEXIT only
    
    Signed-off-by: Borislav Petkov (AMD) <[email protected]>
    Reviewed-by: Nikolay Borisov <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    [ Harshit: Conflicts resolved as this commit: 7c62c442b6eb ("x86/vmscape:
      Enumerate VMSCAPE bug") has been applied already to 6.12.y ]
    Signed-off-by: Harshit Mogalapalli <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

x86/bugs: KVM: Add support for SRSO_MSR_FIX [+ + +]
Author: Borislav Petkov <[email protected]>
Date:   Tue Feb 18 12:13:33 2025 +0100

    x86/bugs: KVM: Add support for SRSO_MSR_FIX
    
    commit 8442df2b49ed9bcd67833ad4f091d15ac91efd00 upstream.
    
    Add support for
    
      CPUID Fn8000_0021_EAX[31] (SRSO_MSR_FIX). If this bit is 1, it
      indicates that software may use MSR BP_CFG[BpSpecReduce] to mitigate
      SRSO.
    
    Enable BpSpecReduce to mitigate SRSO across guest/host boundaries.
    
    Switch back to enabling the bit when virtualization is enabled and
    clearing it when virtualization is disabled, because using an MSR slot
    would clear the bit when the guest exits, and any training the guest has
    done could then influence the host kernel while execution is in the
    kernel and hasn't VMRUN the guest yet.
    
    More detail on the public thread in Link below.
    
    Co-developed-by: Sean Christopherson <[email protected]>
    Signed-off-by: Sean Christopherson <[email protected]>
    Signed-off-by: Borislav Petkov (AMD) <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Harshit Mogalapalli <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
xhci: dbc: decouple endpoint allocation from initialization [+ + +]
Author: Mathias Nyman <[email protected]>
Date:   Wed Sep 17 08:39:08 2025 -0400

    xhci: dbc: decouple endpoint allocation from initialization
    
    [ Upstream commit 220a0ffde02f962c13bc752b01aa570b8c65a37b ]
    
    Decouple allocation of the endpoint ring buffer from initialization of
    the buffer, and initialization of the endpoint context parts from the
    rest of the contexts.
    
    This allows the driver to clean up and reinitialize endpoint rings
    after disconnect without reallocating everything.
    
    This is a prerequisite for the next patch that prevents the transfer
    ring from filling up with cancelled (no-op) TRBs if a debug cable is
    reconnected several times without transferring anything.
    
    Cc: [email protected]
    Fixes: dfba2174dc42 ("usb: xhci: Add DbC support in xHCI driver")
    Signed-off-by: Mathias Nyman <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
    Stable-dep-of: a5c98e8b1398 ("xhci: dbc: Fix full DbC transfer ring after several reconnects")
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

xhci: dbc: Fix full DbC transfer ring after several reconnects [+ + +]
Author: Mathias Nyman <[email protected]>
Date:   Wed Sep 17 08:39:09 2025 -0400

    xhci: dbc: Fix full DbC transfer ring after several reconnects
    
    [ Upstream commit a5c98e8b1398534ae1feb6e95e2d3ee5215538ed ]
    
    Pending requests will be flushed on disconnect, and the corresponding
    TRBs will be turned into No-op TRBs, which are ignored by the xHC
    controller once it starts processing the ring.
    
    If the USB debug cable repeatedly disconnects before the ring is started,
    then the ring will eventually be filled with No-op TRBs.
    No new transfers can be queued when the ring is full, and the driver will
    print the following error message:
    
        "xhci_hcd 0000:00:14.0: failed to queue trbs"
    
    This is a normal case for 'in' transfers, where TRBs are always enqueued
    in advance, ready to take on incoming data. If no data arrives and the
    device is disconnected, then the ring dequeue pointer will remain at the
    beginning of the ring while enqueue points to the first free TRB after
    the last cancelled No-op TRB.
    Solve this by reinitializing the rings when the debug cable disconnects
    and DbC leaves the configured state.
    Clear the whole ring buffer, set enqueue and dequeue to the beginning of
    the ring, and set the cycle bit to its initial state.
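
    A sketch of that reinitialization (struct field names follow the xHCI
    driver's ring layout, but this is illustrative rather than the exact
    patch):

      /* drop all queued/No-op TRBs and start the ring from scratch */
      memset(ring->first_seg->trbs, 0, TRB_SEGMENT_SIZE);
      ring->enqueue = ring->first_seg->trbs;
      ring->dequeue = ring->first_seg->trbs;
      ring->cycle_state = 1;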
    
    Cc: [email protected]
    Fixes: dfba2174dc42 ("usb: xhci: Add DbC support in xHCI driver")
    Signed-off-by: Mathias Nyman <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>