| CVE | Vendors | Products | Updated | CVSS v3.1 |
| A flaw was found in the libtiff library. A remote attacker could exploit a signed integer overflow vulnerability in the putcontig8bitYCbCr44tile function by providing a specially crafted TIFF file. This flaw can lead to an out-of-bounds heap write due to incorrect memory pointer calculations, potentially causing a denial of service (application crash) or arbitrary code execution. |
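A minimal sketch of the bug class described above (not libtiff's actual code; names and types here are illustrative): a signed 32-bit overflow in an offset computation makes the subsequent pointer arithmetic land outside the heap buffer.

    #include <stdint.h>

    /* hypothetical tile writer; w, h and toskew come from file headers */
    static void put_tile(uint8_t *raster, int32_t w, int32_t h, int32_t toskew)
    {
            /* w * h can overflow a signed 32-bit int and wrap negative,
             * so the computed destination no longer lies inside raster */
            int32_t off = w * h + toskew;
            raster[off] = 0;        /* out-of-bounds heap write */
    }

The usual defense is to do the arithmetic in a wider unsigned type (or with explicit overflow checks) and validate the result against the allocation size before writing.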
| In the Linux kernel, the following vulnerability has been resolved:
netfilter: xt_tcpmss: check remaining length before reading optlen
Quoting reporter:
In net/netfilter/xt_tcpmss.c (lines 53-68), the TCP option parser reads
op[i+1] directly without validating the remaining option length.
If the last byte of the option field is not EOL/NOP (0/1), the code attempts
to index op[i+1]. In the case where i + 1 == optlen, this causes an
out-of-bounds read, accessing memory past the optlen boundary
(either reading beyond the stack buffer _opt or the
following payload). |
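A minimal sketch of the check the commit describes, assuming the usual TCP option layout (kind byte, length byte, data); variables are as quoted above and the exact upstream diff may differ:

    for (i = 0; i < optlen; ) {
            if (op[i] == TCPOPT_EOL)
                    break;
            if (op[i] == TCPOPT_NOP) {
                    i++;
                    continue;
            }
            if (i + 1 >= optlen)    /* no room left for the length byte */
                    break;
            if (op[i + 1] < 2)      /* malformed option length */
                    break;
            /* option-specific handling (e.g. MSS) goes here */
            i += op[i + 1];
    }

The `i + 1 >= optlen` test is the missing piece: without it, the final iteration reads one byte past the option block.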
| In the Linux kernel, the following vulnerability has been resolved:
perf/x86: Fix potential bad container_of in intel_pmu_hw_config
Auto counter reload may have a group of events with software events
present within it. The software event PMU isn't the x86_hybrid_pmu and
a container_of operation in intel_pmu_set_acr_caused_constr (via the
hybrid helper) could cause out of bound memory reads. Avoid this by
guarding the call to intel_pmu_set_acr_caused_constr with an
is_x86_event check. |
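A sketch of the guard named in the commit; the argument list here is an assumption, not the real signature:

    /* software events do not belong to an x86 PMU, so container_of()
     * on their event structure would read out of bounds */
    if (is_x86_event(event))
            intel_pmu_set_acr_caused_constr(event);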
| In the Linux kernel, the following vulnerability has been resolved:
ALSA: caiaq: fix stack out-of-bounds read in init_card
The loop creates a whitespace-stripped copy of the card shortname
where `len < sizeof(card->id)` is used for the bounds check. Since
sizeof(card->id) is 16 and the local id buffer is also 16 bytes,
writing 16 non-space characters fills the entire buffer,
overwriting the terminating null byte.
When this non-null-terminated string is later passed to
snd_card_set_id() -> copy_valid_id_string(), the function scans
forward with `while (*nid && ...)` and reads past the end of the
stack buffer, reading the contents of the stack.
A USB device with a product name containing many non-ASCII, non-space
characters (e.g. multibyte UTF-8) will reliably trigger this as follows:
BUG: KASAN: stack-out-of-bounds in copy_valid_id_string
sound/core/init.c:696 [inline]
BUG: KASAN: stack-out-of-bounds in snd_card_set_id_no_lock+0x698/0x74c
sound/core/init.c:718
The off-by-one has been present since commit bafeee5b1f8d ("ALSA:
snd_usb_caiaq: give better shortname") from June 2009 (v2.6.31-rc1),
which first introduced this whitespace-stripping loop. The original
code never accounted for the null terminator when bounding the copy.
Fix this by changing the loop bound to `sizeof(card->id) - 1`,
ensuring at least one byte remains as the null terminator. |
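A sketch of the fixed copy loop, with variable names assumed from the commit text; the key change is the `- 1` that reserves space for the terminator:

    char id[16];            /* same size as card->id */
    int len = 0;
    const char *c;

    for (c = card->shortname; *c != '\0'; c++) {
            if (*c != ' ' && len < sizeof(card->id) - 1)  /* was: < sizeof(card->id) */
                    id[len++] = *c;
    }
    id[len] = '\0';         /* always in bounds now */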
| In Modem IMS, improper input validation may occur. This could lead to remote denial of service with no additional execution privileges needed. |
| In the Linux kernel, the following vulnerability has been resolved:
Bluetooth: hci_sync: fix stack buffer overflow in hci_le_big_create_sync
hci_le_big_create_sync() uses DEFINE_FLEX to allocate a
struct hci_cp_le_big_create_sync on the stack with room for 0x11 (17)
BIS entries. However, conn->num_bis can hold up to HCI_MAX_ISO_BIS (31)
entries — validated against ISO_MAX_NUM_BIS (0x1f) in the caller
hci_conn_big_create_sync(). When conn->num_bis is between 18 and 31,
the memcpy that copies conn->bis into cp->bis writes up to 14 bytes
past the stack buffer, corrupting adjacent stack memory.
This is trivially reproducible: binding an ISO socket with
bc_num_bis = ISO_MAX_NUM_BIS (31) and calling listen() will
eventually trigger hci_le_big_create_sync() from the HCI command
sync worker, causing a KASAN-detectable stack-out-of-bounds write:
BUG: KASAN: stack-out-of-bounds in hci_le_big_create_sync+0x256/0x3b0
Write of size 31 at addr ffffc90000487b48 by task kworker/u9:0/71
Fix this by changing the DEFINE_FLEX count from the incorrect 0x11 to
HCI_MAX_ISO_BIS, which matches the maximum number of BIS entries that
conn->bis can actually carry. |
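The one-line fix as the commit describes it, sketched here (the DEFINE_FLEX argument order is approximate):

    /* size the on-stack command for the most BIS entries conn->bis
     * can carry, not a hardcoded 17 */
    DEFINE_FLEX(struct hci_cp_le_big_create_sync, cp, bis, num_bis,
                HCI_MAX_ISO_BIS);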
| In the Linux kernel, the following vulnerability has been resolved:
xfs: delete attr leaf freemap entries when empty
Back in commit 2a2b5932db6758 ("xfs: fix attr leaf header freemap.size
underflow"), Brian Foster observed that it's possible for a small
freemap at the end of the xattr entries array to experience
a size underflow when subtracting the space consumed by an expansion of
the entries array. There are only three freemap entries, which means
that it is not a complete index of all free space in the leaf block.
This code can leave behind a zero-length freemap entry with a nonzero
base. Subsequent setxattr operations can increase the base up to the
point that it overlaps with another freemap entry. This isn't in and of
itself a problem because the code in _leaf_add that finds free space
ignores any freemap entry with zero size.
However, there's another bug in the freemap update code in _leaf_add,
which is that it fails to update a freemap entry that begins midway
through the xattr entry that was just appended to the array. That can
result in the freemap containing two entries with the same base but
different sizes (0 for the "pushed-up" entry, nonzero for the entry
that's actually tracking free space). A subsequent _leaf_add can then
allocate xattr namevalue entries on top of the entries array, leading to
data loss. But fixing that is for later.
For now, eliminate the possibility of confusion by zeroing out the base
of any freemap entry that has zero size. Because the freemap is not
intended to be a complete index of free space, a subsequent failure to
find any free space for a new xattr will trigger block compaction, which
regenerates the freemap.
It looks like this bug has been in the codebase for quite a long time. |
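A sketch of the mitigation, using the icleaf header freemap fields the commit refers to (the exact placement in the xfs code is assumed):

    /* a zero-size freemap entry carries no information, so clear its
     * base too; compaction rebuilds the freemap when space runs out */
    for (i = 0; i < XFS_ATTR_LEAF_MAPSIZE; i++) {
            if (ichdr.freemap[i].size == 0)
                    ichdr.freemap[i].base = 0;
    }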
| In the Linux kernel, the following vulnerability has been resolved:
Bluetooth: SMP: derive legacy responder STK authentication from MITM state
The legacy responder path in smp_random() currently labels the stored
STK as authenticated whenever pending_sec_level is BT_SECURITY_HIGH.
That reflects what the local service requested, not what the pairing
flow actually achieved.
For Just Works/Confirm legacy pairing, SMP_FLAG_MITM_AUTH stays clear
and the resulting STK should remain unauthenticated even if the local
side requested HIGH security. Use the established MITM state when
storing the responder STK so the key metadata matches the pairing result.
This also keeps the legacy path aligned with the Secure Connections code,
which already treats JUST_WORKS/JUST_CFM as unauthenticated. |
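A sketch of the idea, with the flag name taken from the commit and the surrounding code assumed:

    /* authenticate the stored STK from what the pairing achieved,
     * not from the locally requested security level */
    bool auth = test_bit(SMP_FLAG_MITM_AUTH, &smp->flags);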
| In the Linux kernel, the following vulnerability has been resolved:
block: avoid reusing an `hctx` not removed from the cpuhp callback list
If the 'hctx' hasn't been removed from the cpuhp callback list, it must
not be reused; otherwise a use-after-free may be triggered. |
| In the Linux kernel, the following vulnerability has been resolved:
net: ethernet: lantiq_etop: fix double free in detach
The number of the currently released descriptor is never incremented,
which results in the same skb being released multiple times. |
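An illustrative fix pattern (the names here are hypothetical, not the exact driver code): the free index must advance after each released descriptor so the same skb is never freed twice.

    while (ch->tx_free != ch->tx_used) {
            dev_kfree_skb_any(ch->skb[ch->tx_free]);
            ch->skb[ch->tx_free] = NULL;
            ch->tx_free++;                  /* the missing increment */
            ch->tx_free %= LTQ_DESC_NUM;
    }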
| In the Linux kernel, the following vulnerability has been resolved:
net/mlx5e: Fix "scheduling while atomic" in IPsec MAC address query
Fix a "scheduling while atomic" bug in mlx5e_ipsec_init_macs() by
replacing mlx5_query_mac_address() with ether_addr_copy() to get the
local MAC address directly from netdev->dev_addr.
The issue occurs because mlx5_query_mac_address() queries the hardware
which involves mlx5_cmd_exec() that can sleep, but it is called from
the mlx5e_ipsec_handle_event workqueue which runs in atomic context.
The MAC address is already available in netdev->dev_addr, so no need
to query hardware. This avoids the sleeping call and resolves the bug.
Call trace:
BUG: scheduling while atomic: kworker/u112:2/69344/0x00000200
__schedule+0x7ab/0xa20
schedule+0x1c/0xb0
schedule_timeout+0x6e/0xf0
__wait_for_common+0x91/0x1b0
cmd_exec+0xa85/0xff0 [mlx5_core]
mlx5_cmd_exec+0x1f/0x50 [mlx5_core]
mlx5_query_nic_vport_mac_address+0x7b/0xd0 [mlx5_core]
mlx5_query_mac_address+0x19/0x30 [mlx5_core]
mlx5e_ipsec_init_macs+0xc1/0x720 [mlx5_core]
mlx5e_ipsec_build_accel_xfrm_attrs+0x422/0x670 [mlx5_core]
mlx5e_ipsec_handle_event+0x2b9/0x460 [mlx5_core]
process_one_work+0x178/0x2e0
worker_thread+0x2ea/0x430 |
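The substitution the commit describes, as a sketch (the destination field name is assumed):

    /* before: sleeps inside mlx5_cmd_exec(), illegal in atomic context
     *   mlx5_query_mac_address(mdev, addrs->local_mac);
     * after: plain memory copy, safe from the workqueue */
    ether_addr_copy(addrs->local_mac, netdev->dev_addr);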
| In the Linux kernel, the following vulnerability has been resolved:
soc: ti: pruss: Fix double free in pruss_clk_mux_setup()
In the pruss_clk_mux_setup(), the devm_add_action_or_reset() indirectly
calls pruss_of_free_clk_provider(), which calls of_node_put(clk_mux_np)
on the error path. However, after the devm_add_action_or_reset()
returns, the of_node_put(clk_mux_np) is called again, causing a double
free.
Fix by returning directly, to avoid the duplicate of_node_put(). |
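A sketch of the corrected flow (error variable and return structure assumed): once devm_add_action_or_reset() is used, the cleanup action owns the reference in every outcome.

    err = devm_add_action_or_reset(dev, pruss_of_free_clk_provider,
                                   clk_mux_np);
    if (err)
            return err;   /* on failure the action already did of_node_put() */

    return 0;             /* success: do NOT call of_node_put() again */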
| In the Linux kernel, the following vulnerability has been resolved:
net: consume xmit errors of GSO frames
udpgro_frglist.sh and udpgro_bench.sh are currently the flakiest tests
in NIPA. They fail in exactly the same way: the TCP GRO
test stalls occasionally and the test gets killed after 10 minutes.
These tests use veth to simulate GRO. They attach a trivial
("return XDP_PASS;") XDP program to the veth to force TSO off
and NAPI on.
Digging into the failure mode we can see that the connection
is completely stuck after a burst of drops. The sender's snd_nxt
is at sequence number N [1], but the receiver claims to have
received (rcv_nxt) up to N + 3 * MSS [2]. The last piece of the puzzle
is that the sender's rtx queue is not empty (let's say the block in
the rtx queue is at sequence number N - 4 * MSS [3]).
In this state, sender sends a retransmission from the rtx queue
with a single segment, and sequence numbers N-4*MSS:N-3*MSS [3].
Receiver sees it and responds with an ACK all the way up to
N + 3 * MSS [2]. But sender will reject this ack as TCP_ACK_UNSENT_DATA
because it has no recollection of ever sending data that far out [1].
And we are stuck.
The root cause is the mess of the xmit return codes. veth returns
an error when it can't xmit a frame. We end up with a loss event
like this:
-------------------------------------------------
| GSO super frame 1 | GSO super frame 2 |
|-----------------------------------------------|
| seg | seg | seg | seg | seg | seg | seg | seg |
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
-------------------------------------------------
x ok ok <ok>| ok ok ok <x>
            \
             snd_nxt
"x" means packet lost by veth, and "ok" means it went thru.
Since veth has TSO disabled in this test it sees individual segments.
Segment 1 is on the retransmit queue and will be resent.
So why did the sender not advance snd_nxt even though it clearly did
send up to seg 8? tcp_write_xmit() interprets the return code
from the core to mean that data has not been sent at all. Since
TCP deals with GSO super frames, not individual segments, the crux
of the problem is that the loss of a single segment can be interpreted
as the loss of all of them. TCP only sees the return code for the last
segment of the GSO frame (in <> brackets in the diagram above).
Of course for the problem to occur we need a setup or a device
without a Qdisc. Otherwise Qdisc layer disconnects the protocol
layer from the device errors completely.
We have multiple ways to fix this.
1) make veth not return an error when it lost a packet.
While this is what I think we did in the past, the issue keeps
reappearing and it's annoying to debug. The game of whack
a mole is not great.
2) fix the damn return codes
We only talk about NETDEV_TX_OK and NETDEV_TX_BUSY in the
documentation, so maybe we should make the return code from
ndo_start_xmit() a boolean. I like that the most, but perhaps
some ancient, not-really-networking protocol would suffer.
3) make TCP ignore the errors
It is not entirely clear to me what benefit TCP gets from
interpreting the result of ip_queue_xmit()? Specifically once
the connection is established and we're pushing data - packet
loss is just packet loss?
4) this fix
Ignore the rc in the Qdisc-less+GSO case, since it's unreliable.
We already always return OK in the TCQ_F_CAN_BYPASS case.
In the Qdisc-less case let's be a bit more conservative and only
mask the GSO errors. This path is taken by non-IP-"networks"
like CAN, MCTP etc, so we could regress some ancient thing.
This is the simplest, but also maybe the hackiest fix?
Similar fix has been proposed by Eric in the past but never committed
because original reporter was working with an OOT driver and wasn't
providing feedback (see Link). |
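A sketch of option 4 under stated assumptions (placement and variable names are approximate; the real patch sits in the Qdisc-less transmit path):

    bool is_gso = skb_is_gso(skb);   /* skb may be consumed by xmit */

    rc = netdev_start_xmit(skb, dev, txq, false);
    if (!dev_xmit_complete(rc) && is_gso)
            rc = NET_XMIT_SUCCESS;   /* per-segment errors are unreliable:
                                      * only the last segment's rc is seen */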
| In the Linux kernel, the following vulnerability has been resolved:
atm: fore200e: fix use-after-free in tasklets during device removal
When the PCA-200E or SBA-200E adapter is being detached, the fore200e
is deallocated. However, the tx_tasklet or rx_tasklet may still be running
or pending, leading to a use-after-free bug when the already freed fore200e
is accessed again in fore200e_tx_tasklet() or fore200e_rx_tasklet().
One of the race conditions can occur as follows:
CPU 0 (cleanup) | CPU 1 (tasklet)
fore200e_pca_remove_one() | fore200e_interrupt()
fore200e_shutdown() | tasklet_schedule()
kfree(fore200e) | fore200e_tx_tasklet()
| fore200e-> // UAF
Fix this by ensuring tx_tasklet or rx_tasklet is properly canceled before
the fore200e is released. Add tasklet_kill() in fore200e_shutdown() to
synchronize with any pending or running tasklets. Moreover, since
fore200e_reset() could prevent further interrupts or data transfers,
the tasklet_kill() should be placed after fore200e_reset() to prevent
the tasklet from being rescheduled in fore200e_interrupt(). Finally,
it only needs to do tasklet_kill() when the fore200e state is greater
than or equal to FORE200E_STATE_IRQ, since tasklets are uninitialized
in earlier states. In short, the tasklet_kill() calls belong in
the FORE200E_STATE_IRQ branch within the switch...case structure.
This bug was identified through static analysis. |
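A sketch of the placement described above (the surrounding switch is abbreviated; member names follow the commit text):

    case FORE200E_STATE_IRQ:
            /* after fore200e_reset() no new interrupts can reschedule
             * the tasklets, so this synchronizes with any still running */
            tasklet_kill(&fore200e->tx_tasklet);
            tasklet_kill(&fore200e->rx_tasklet);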
| In IMS, there is a possible system crash due to improper input validation. This could lead to remote denial of service with no additional execution privileges needed. |
| In the Linux kernel, the following vulnerability has been resolved:
dpaa2-switch: validate num_ifs to prevent out-of-bounds write
The driver obtains sw_attr.num_ifs from firmware via dpsw_get_attributes()
but never validates it against DPSW_MAX_IF (64). This value controls
iteration in dpaa2_switch_fdb_get_flood_cfg(), which writes port indices
into the fixed-size cfg->if_id[DPSW_MAX_IF] array. When firmware reports
num_ifs >= 64, the loop can write past the array bounds.
Add a bound check for num_ifs in dpaa2_switch_init().
dpaa2_switch_fdb_get_flood_cfg() appends the control interface (port
num_ifs) after all matched ports. When num_ifs == DPSW_MAX_IF and all
ports match the flood filter, the loop fills all 64 slots and the control
interface write overflows by one entry.
The check uses >= because num_ifs == DPSW_MAX_IF is also functionally
broken.
build_if_id_bitmap() silently drops any ID >= 64:
if (id[i] < DPSW_MAX_IF)
bmap[id[i] / 64] |= ... |
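A sketch of the added bound check (the exact dpsw_get_attributes() call and error handling are assumed):

    err = dpsw_get_attributes(ethsw->mc_io, 0, ethsw->dpsw_handle,
                              &ethsw->sw_attr);
    if (err)
            return err;

    if (ethsw->sw_attr.num_ifs >= DPSW_MAX_IF) {
            dev_err(dev, "firmware reported invalid num_ifs (%d)\n",
                    ethsw->sw_attr.num_ifs);
            return -EINVAL;
    }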
| An out-of-bounds read in the ParseIP6Extended function (/bgp/bgp.go) of gobgp v4.3.0 allows attackers to cause a Denial of Service (DoS) by supplying a crafted BGP UPDATE message. |
| In the Linux kernel, the following vulnerability has been resolved:
PCI: Fix pci_slot_trylock() error handling
Commit a4e772898f8b ("PCI: Add missing bridge lock to pci_bus_lock()")
delegates the bridge device's pci_dev_trylock() to pci_bus_trylock() in
pci_slot_trylock(), but it forgets to remove the corresponding
pci_dev_unlock() when pci_bus_trylock() fails.
Before a4e772898f8b, the code did:
if (!pci_dev_trylock(dev)) /* <- lock bridge device */
goto unlock;
if (dev->subordinate) {
if (!pci_bus_trylock(dev->subordinate)) {
pci_dev_unlock(dev); /* <- unlock bridge device */
goto unlock;
}
}
After a4e772898f8b the bridge-device lock is no longer taken, but the
pci_dev_unlock(dev) on the failure path was left in place, leading to the
bug.
This yields one of two errors:
1. A warning that the lock is being unlocked when no one holds it.
2. An incorrect unlock of a lock that belongs to another thread.
Fix it by removing the now-redundant pci_dev_unlock(dev) on the failure
path.
[Same patch later posted by Keith at
https://patch.msgid.link/20260116184150.3013258-1-kbusch@meta.com] |
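For contrast with the pre-a4e772898f8b code quoted above, a sketch of the corrected failure path:

    if (dev->subordinate) {
            if (!pci_bus_trylock(dev->subordinate))
                    goto unlock;   /* pci_bus_trylock() handles the bridge
                                    * lock itself; no pci_dev_unlock(dev) */
    }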
| In the Linux kernel, the following vulnerability has been resolved:
LoongArch: Make cpumask_of_node() robust against NUMA_NO_NODE
The arch definition of cpumask_of_node() cannot handle NUMA_NO_NODE - a
valid input, but not a real node index - so add a check for this. |
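A sketch of the check, assuming the arch keeps a node_to_cpumask_map like other architectures and falls back the way asm-generic does:

    static inline const struct cpumask *cpumask_of_node(int node)
    {
            if (node == NUMA_NO_NODE)
                    return cpu_online_mask;   /* assumed fallback */

            return &node_to_cpumask_map[node];
    }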
| In the Linux kernel, the following vulnerability has been resolved:
wifi: rtw89: pci: validate sequence number of TX release report
In rare cases, hardware reports an abnormal sequence number in the TX
release report, which leads to an out-of-bounds access of the
wd_ring->pages array and, in turn, a NULL pointer dereference.
BUG: kernel NULL pointer dereference, address: 0000000000000000
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0
Oops: 0000 [#1] PREEMPT SMP NOPTI
CPU: 1 PID: 1085 Comm: irq/129-rtw89_p Tainted: G S U
6.1.145-17510-g2f3369c91536 #1 (HASH:69e8 1)
Call Trace:
<IRQ>
rtw89_pci_release_tx+0x18f/0x300 [rtw89_pci (HASH:4c83 2)]
rtw89_pci_napi_poll+0xc2/0x190 [rtw89_pci (HASH:4c83 2)]
net_rx_action+0xfc/0x460 net/core/dev.c:6578 net/core/dev.c:6645 net/core/dev.c:6759
handle_softirqs+0xbe/0x290 kernel/softirq.c:601
? rtw89_pci_interrupt_threadfn+0xc5/0x350 [rtw89_pci (HASH:4c83 2)]
__local_bh_enable_ip+0xeb/0x120 kernel/softirq.c:499 kernel/softirq.c:423
</IRQ>
<TASK>
rtw89_pci_interrupt_threadfn+0xf8/0x350 [rtw89_pci (HASH:4c83 2)]
? irq_thread+0xa7/0x340 kernel/irq/manage.c:0
irq_thread+0x177/0x340 kernel/irq/manage.c:1205 kernel/irq/manage.c:1314
? thaw_kernel_threads+0xb0/0xb0 kernel/irq/manage.c:1202
? irq_forced_thread_fn+0x80/0x80 kernel/irq/manage.c:1220
kthread+0xea/0x110 kernel/kthread.c:376
? synchronize_irq+0x1a0/0x1a0 kernel/irq/manage.c:1287
? kthread_associate_blkcg+0x80/0x80 kernel/kthread.c:331
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>
To prevent the crash, validate rpp_info.seq before using it. |
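A sketch of the validation (field and bound names are assumed from the commit text):

    if (rpp_info.seq >= wd_ring->page_num) {
            rtw89_warn(rtwdev, "invalid TX release seq %u\n",
                       rpp_info.seq);
            return;
    }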