GitHub: xanmod/linux release 5.13.1-rt1-xanmod1

  • 4beda54 Linux 5.13.1-rt1-xanmod1
  • e9d81aa Merge tag 'v5.13-rt1' into 5.13
  • 7e175e6 Add localversion for -RT release
  • 6f3f622 POWERPC: Allow to enable RT
  • 646e1ac powerpc: Avoid recursive header includes
  • a1a11e9 powerpc/stackprotector: work around stack-guard init from atomic
  • f35ef21 powerpc/kvm: Disable in-kernel MPIC emulation for PREEMPT_RT
  • d91a27c powerpc/pseries/iommu: Use a locallock instead local_irq_save()
  • 9d22762 powerpc: traps: Use PREEMPT_RT
  • dd5ac56 ARM64: Allow to enable RT
  • fb140d3 ARM: Allow to enable RT
  • 2b7f161 arm64: fpsimd: Delay freeing memory in fpsimd_flush_thread()
  • b7f7c9d KVM: arm/arm64: downgrade preempt_disable()d region to migrate_disable()
  • ae21940 ARM: enable irq in translation/section permission fault handlers
  • 184bec3 arch/arm64: Add lazy preempt support
  • 2f9ba40 powerpc: Add support for lazy preemption
  • 95e0021 arm: Add support for lazy preemption
  • f2f9e49 x86: Support for lazy preemption
  • 5edd169 x86/entry: Use should_resched() in idtentry_exit_cond_resched()
  • 2d1c363 sched: Add support for lazy preemption
  • 4d50732 x86: Enable RT also on 32bit
  • 9e6ddcc x86: Allow to enable RT
  • 60e58fe x86: kvm Require const tsc for RT
  • 0cf7fc6 signal/x86: Delay calling signals in atomic
  • a60399b sysfs: Add /sys/kernel/realtime entry
  • 2e2c38e tpm_tis: fix stall after iowrite*()s
  • cf64608 tty/serial/pl011: Make the locking work on RT
  • b42b6e6 tty/serial/omap: Make the locking RT aware
  • f5faab2 drm/i915/gt: Only disable interrupts for the timeline lock on !force-threaded
  • 872271c drm/i915: skip DRM_I915_LOW_LEVEL_TRACEPOINTS with NOTRACE
  • 4cad323 drm/i915: disable tracing on -RT
  • a2a18b7 drm/i915: Don't disable interrupts on PREEMPT_RT during atomic updates
  • 13e2d41 drm,radeon,i915: Use preempt_disable/enable_rt() where recommended
  • 3ca5dbf random: Make it work on rt
  • 4d07cdf x86: stackprotector: Avoid random pool on rt
  • 625193b panic: skip get_random_bytes for RT_FULL in init_oops_id
  • 435b065 crypto: cryptd - add a lock instead preempt_disable/local_bh_disable
  • 6329790 crypto: limit more FPU-enabled sections
  • 5190ea9 scsi/fcoe: Make RT aware.
  • 8b6bd58 md: raid5: Make raid5_percpu handling RT aware
  • 35da1a4 drivers/block/zram: Replace bit spinlocks with rtmutex for -rt
  • a377dcd block/mq: do not invoke preempt_disable()
  • 8596053 net: Remove preemption disabling in netif_rx()
  • 7bcfc0a net: dev: always take qdisc's busylock in __dev_xmit_skb()
  • b91f0be net: Dequeue in dev_cpu_dead() without the lock
  • 08f72cc net: Use skbufhead with raw lock
  • 34d9adb sunrpc: Make svc_xprt_do_enqueue() use get_cpu_light()
  • cd5a07c net/core: use local_bh_disable() in netif_rx_ni()
  • 256ed20 net: Properly annotate the try-lock for the seqlock
  • 13f621d net/Qdisc: use a seqlock instead seqcount
  • 72d6f4f rcutorture: Avoid problematic critical section nesting on RT
  • 8abf1b2 rcu: Delay RCU-selftests
  • 0b9358c fs: namespace: Use cpu_chill() in trylock loops
  • 3e649cd rt: Introduce cpu_chill()
  • 6577676 fs/dcache: disable preemption on i_dir_seq's write side
  • b6b25f3 fs/dcache: use swait_queue instead of waitqueue
  • bbd667d ptrace: fix ptrace vs tasklist_lock race
  • c8d6844 signal: Revert ptrace preempt magic
  • c6d6f9c mm/scatterlist: Do not disable irqs on RT
  • d69a7f2 mm/vmalloc: Another preempt disable region which sucks
  • 4e9fa61 mm/zsmalloc: copy with get_cpu_var() and locking
  • 782bf49 mm/memcontrol: Replace local_irq_disable with local locks
  • 25de25a mm/memcontrol: Don't call schedule_work_on in preemption disabled context
  • 6e041ac mm: memcontrol: Replace disable-IRQ locking with a local_lock
  • 8580430 mm: memcontrol: Add an argument to refill_stock() to indicate locking
  • d0964c3 u64_stats: Disable preemption on 32bit-UP/SMP with RT during updates
  • 6288847 mm/memcontrol: Disable preemption in __mod_memcg_lruvec_state()
  • fd1745b mm/vmstat: Protect per cpu variables with preempt disable on RT
  • 07be330 mm: slub: Don't enable partial CPU caches on PREEMPT_RT by default
  • 1473115 mm: page_alloc: Use migrate_disable() in drain_local_pages_wq()
  • 7b51cfe mm, slub: Duct tape lockdep_assert_held(local_lock_t) on RT
  • ef1919f irqwork: push most work into softirq context
  • 61f56f3 softirq: Disable softirq stacks for RT
  • d98b564 softirq: Check preemption after reenabling interrupts
  • 6d59282 cpuset: Convert callback_lock to raw_spinlock_t
  • bf37891 sched: Disable TTWU_QUEUE on RT
  • e1ca6ff sched: Do not account rcu_preempt_depth on RT in might_sleep()
  • 62c341d kernel/sched: move stack + kprobe clean up to __put_task_struct()
  • 7da8773 sched: Move mmdrop to RCU on RT
  • 93fcb6d sched: Limit the number of task migrations per batch
  • efad5fe kernel/sched: add {put|get}_cpu_light()
  • 468c014 preempt: Provide preempt_*_(no)rt variants
  • 32f3c13 lockdep: disable self-test
  • 63bf6d3 lockdep: selftest: fix warnings due to missing PREEMPT_RT conditionals
  • d73448c lockdep: selftest: Only do hardirq context test for raw spinlock
  • a89aefc lockdep: Make it RT aware
  • 5419cc6 locking: don't check for __LINUX_SPINLOCK_TYPES_H on -RT archs
  • d1184a4 locking/RT: Add might sleeping annotation.
  • df4f4e0 locking/local_lock: Add RT support
  • 0fdc3cb locking/local_lock: Prepare for RT support
  • ba5d7ea locking/rtmutex: Add adaptive spinwait mechanism
  • 8e39f8f locking/rtmutex: Implement equal priority lock stealing
  • aa8c4cd preempt: Adjust PREEMPT_LOCK_OFFSET for RT
  • eaaa5e8 rtmutex: Prevent lockdep false positive with PI futexes
  • a796a11 futex: Prevent requeue_pi() lock nesting issue on RT
  • caf90d1 futex: Clarify comment in futex_requeue()
  • e67ddc7 futex: Restructure futex_requeue()
  • f3ffb1c futex: Correct the number of requeued waiters for PI
  • d8411ab futex: Cleanup stale comments
  • 1e1c70c futex: Validate waiter correctly in futex_proxy_trylock_atomic()
  • fc6e6a8 lib/test_lockup: Adapt to changed variables.
  • d64c4ab locking/rtmutex: Add mutex variant for RT
  • 8cbe9cb locking/mutex: Exclude non-ww_mutex API for RT
  • 9e1721d locking/mutex: Rearrange items in mutex.h
  • 61df591 locking/mutex: Replace struct mutex in core code
  • 9e38af0 locking/ww_mutex: Switch to _mutex_t
  • 0d00cb9 locking/mutex: Rename the ww_mutex relevant functions
  • c065271 locking/mutex: Introduce _mutex_t
  • b0bb8d3 locking/mutex: Make mutex::wait_lock raw
  • 52f869a locking/ww_mutex: Move ww_mutex declarations into ww_mutex.h
  • 5e5f46d locking/mutex: Move waiter to core header
  • 72a31aa locking/mutex: Consolidate core headers
  • 387b127 locking/rwlock: Provide RT variant
  • 3779e68 locking/spinlock: Provide RT variant
  • 052c362 locking/rtmutex: Provide the spin/rwlock core lock function
  • a78d151 locking/spinlock: Provide RT variant header
  • 1d0de8e locking/spinlock: Provide RT specific spinlock type
  • 1eabaec locking/rtmutex: Include only rbtree types
  • 3f10059 rbtree: Split out the rbtree type definitions
  • 98c59a9 locking/lockdep: Reduce includes in debug_locks.h
  • e0594a0 locking/rtmutex: Prevent future include recursion hell
  • d1739ad locking/spinlock: Split the lock types header
  • a891c54 locking/rtmutex: Guard regular sleeping locks specific functions
  • a7844d9 locking/rtmutex: Prepare RT rt_mutex_wake_q for RT locks
  • 908d294 locking/rtmutex: Use rt_mutex_wake_q_head
  • a0477bf locking/rtmutex: Provide rt_mutex_wake_q and helpers
  • 5328155 locking/rtmutex: Add wake_state to rt_mutex_waiter
  • 1a83a3b locking/rwsem: Add rtmutex based R/W semaphore implementation
  • 28b8677 locking: Add base code for RT rw_semaphore and rwlock
  • e03cbdc locking/rtmutex: Provide lockdep less variants of rtmutex interfaces
  • fb5c624 locking/rtmutex: Provide rt_mutex_slowlock_locked()
  • d6de1c1 rtmutex: Split API and implementation
  • 9abf291 rtmutex: Convert macros to inlines
  • 6d3f059 sched/wake_q: Provide WAKE_Q_HEAD_INITIALIZER
  • e5cc3ca sched: Provide schedule point for RT locks
  • bf96032 sched: Rework the __schedule() preempt argument
  • 8b3163b sched: Prepare for RT sleeping spin/rwlocks
  • e537607 sched: Introduce TASK_RTLOCK_WAIT
  • 7b569a8 sched: Split out the wakeup state check
  • bee357c debugobjects: Make RT aware
  • e1a2ed9 trace: Add migrate-disabled counter to tracing output
  • f507f34 pid.h: include atomic.h
  • 930fe8d wait.h: include atomic.h
  • 276abf4 efi: Allow efi=runtime
  • cdfc123 efi: Disable runtime services on RT
  • 6107960 net/core: disable NET_RX_BUSY_POLL on RT
  • fdfbb25 sched: Disable CONFIG_RT_GROUP_SCHED on RT
  • 8ec5e35 mm: Allow only SLUB on RT
  • 853484e kconfig: Disable config options which are not RT compatible
  • b0873a0 leds: trigger: disable CPU trigger on -RT
  • 3480984 jump-label: disable if stop_machine() is used
  • f9bffbd genirq: Disable irqpoll on -rt
  • bae73e9 genirq: update irq_set_irqchip_state documentation
  • abe17fc smp: Wake ksoftirqd on PREEMPT_RT instead do_softirq().
  • d7a1345 samples/kfifo: Rename read_lock/write_lock
  • 554e55b tcp: Remove superfluous BH-disable around listening_hash
  • 2ba152e net: Move lockdep where it belongs
  • ba75f58 shmem: Use raw_spinlock_t for ->stat_lock
  • 109e285 mm: workingset: replace IRQ-off check with a lockdep assert.
  • 64d8a21 cgroup: use irqsave in cgroup_rstat_flush_locked()
  • 8ba34ad notifier: Make atomic_notifiers use raw_spinlock
  • 10bc787 genirq: Move prio assignment into the newly created thread
  • 3581157 kthread: Move prio/affinite change into the newly created thread
  • 0585cfb mm, slub: Correct ordering in slab_unlock()
  • 340e7c4 mm, slub: convert kmem_cpu_slab protection to local_lock
  • 2180da7 mm, slub: use migrate_disable() on PREEMPT_RT
  • 98ac7c8 mm, slub: make slab_lock() disable irqs with PREEMPT_RT
  • dde8c73 mm, slub: optionally save/restore irqs in slab_[un]lock()/
  • de1f249 mm: slub: Make object_map_lock a raw_spinlock_t
  • 12a3a78 mm: slub: Move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context
  • 6e256a7 mm, slab: make flush_slab() possible to call with irqs enabled
  • f66b34c mm, slub: don't disable irqs in slub_cpu_dead()
  • 1574226 mm, slub: only disable irq with spin_lock in __unfreeze_partials()
  • 02194b5 mm, slub: detach percpu partial list in unfreeze_partials() using this_cpu_cmpxchg()
  • 62047a8 mm, slub: detach whole partial list at once in unfreeze_partials()
  • 85fd98f mm, slub: discard slabs in unfreeze_partials() without irqs disabled
  • bfcb75f mm, slub: move irq control into unfreeze_partials()
  • e6acdc5 mm, slub: call deactivate_slab() without disabling irqs
  • fc54ebb mm, slub: make locking in deactivate_slab() irq-safe
  • 378a859 mm, slub: move reset of c->page and freelist out of deactivate_slab()
  • a1bedf1 mm, slub: stop disabling irqs around get_partial()
  • e7fa6bb mm, slub: check new pages with restored irqs
  • 843f169 mm, slub: validate slab from partial list or page allocator before making it cpu slab
  • 033708e mm, slub: restore irqs around calling new_slab()
  • aa890dd mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc()
  • 8c1d368 mm, slub: do initial checks in ___slab_alloc() with irqs enabled
  • 12c69ba mm, slub: move disabling/enabling irqs to ___slab_alloc()
  • 78ed20c mm, slub: simplify kmem_cache_cpu and tid setup
  • ae3b1f1 mm, slub: restructure new page checks in ___slab_alloc()
  • d071b8e mm, slub: return slab page from get_partial() and set c->page afterwards
  • 1b92ed6 mm, slub: dissolve new_slab_objects() into ___slab_alloc()
  • 3c7b04f mm, slub: extract get_partial() from new_slab_objects()
  • 49dde93 mm, slub: unify cmpxchg_double_slab() and __cmpxchg_double_slab()
  • 26d8900 mm, slub: remove redundant unfreeze_partials() from put_cpu_partial()
  • 8f9b6e2 mm, slub: don't disable irq for debug_check_no_locks_freed()
  • bec329b mm, slub: allocate private object map for validate_slab_cache()
  • cea6298 mm, slub: allocate private object map for sysfs listings
  • 0492d54 mm, slub: don't call flush_all() from list_locations()
  • 72f1ab0 mm/page_alloc: Split per cpu page lists and zone stats -fix
  • 7d4d69c mm/page_alloc: Update PGFREE outside the zone lock in __free_pages_ok
  • 8ec908f mm/page_alloc: Avoid conflating IRQs disabled with zone->lock
  • b6ff396 mm/page_alloc: Explicitly acquire the zone lock in __free_pages_ok
  • 16e165b mm/page_alloc: Reduce duration that IRQs are disabled for VM counters
  • c7285ff mm/page_alloc: Batch the accounting updates in the bulk allocator
  • 069f3cf mm/vmstat: Inline NUMA event counter updates
  • 39642ef mm/vmstat: Convert NUMA statistics to basic NUMA counters
  • 7e05740 mm/page_alloc: Convert per-cpu list protection to local_lock
  • b40b27f mm/page_alloc: Split per cpu page lists and zone stats
  • d36c3eb timers: Move clearing of base::timer_running under base::lock
  • 1d1164a highmem: Don't disable preemption on RT in kmap_atomic()
  • 63cf1e4 printk: add pr_flush()
  • b41f91f printk: add console handover
  • 4a181ae printk: remove deferred printing
  • c4049cf printk: move console printing to kthreads
  • 4b788a5 printk: introduce kernel sync mode
  • 7995ace printk: use seqcount_latch for console_seq
  • 19aa624 printk: combine boot_delay_msec() into printk_delay()
  • 109255d printk: relocate printk_delay() and vprintk_default()
  • b94b127 serial: 8250: implement write_atomic
  • 8d4fe69 kdb: only use atomic consoles for output mirroring
  • 735eda8 console: add write_atomic interface
  • 8c1c981 printk: convert @syslog_lock to spin_lock
  • 114233f printk: remove safe buffers
  • 2baa483 printk: track/limit recursion
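
Many of the mm, memcontrol, and page_alloc commits above (e.g. "mm/memcontrol: Replace local_irq_disable with local locks", "mm/page_alloc: Convert per-cpu list protection to local_lock", "mm, slub: convert kmem_cpu_slab protection to local_lock") apply the same conversion: per-CPU data that was guarded by plain interrupt disabling is switched to the kernel's local_lock API. On a non-RT kernel a local_lock compiles down to the old IRQ disable/enable, while on PREEMPT_RT it becomes a per-CPU sleeping lock, keeping the section preemptible and visible to lockdep. A minimal kernel-style sketch of the pattern, with illustrative struct and field names not taken from the actual patches:

```c
#include <linux/local_lock.h>
#include <linux/percpu.h>

struct my_pcp_data {
	local_lock_t lock;	/* protects count on this CPU */
	unsigned long count;
};

static DEFINE_PER_CPU(struct my_pcp_data, my_pcp) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void my_pcp_inc(void)
{
	unsigned long flags;

	/*
	 * On !PREEMPT_RT this maps to local_irq_save(); on PREEMPT_RT
	 * it acquires a per-CPU spinlock instead, so the critical
	 * section stays preemptible and lockdep can track it.
	 */
	local_lock_irqsave(&my_pcp.lock, flags);
	__this_cpu_inc(my_pcp.count);
	local_unlock_irqrestore(&my_pcp.lock, flags);
}
```

This is a sketch of the conversion idiom, not code from any of the listed commits; the real patches additionally rework the surrounding callers so that no path relies on interrupts actually being hard-disabled.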
