gvisor/pkg/syncevent/broadcaster.go

// Copyright 2020 The gVisor Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package syncevent

import (
	"gvisor.dev/gvisor/pkg/sync"
)

// Broadcaster is an implementation of Source that supports any number of
// subscribed Receivers.
//
// The zero value of Broadcaster is valid and has no subscribed Receivers.
// Broadcaster is not copyable by value.
//
// All Broadcaster methods may be called concurrently from multiple goroutines.
type Broadcaster struct {
	// Broadcaster is implemented as a hash table where keys are assigned by
	// the Broadcaster and returned as SubscriptionIDs, making it safe to use
	// the identity function for hashing. The hash table resolves collisions
	// using linear probing and features Robin Hood insertion and backward
	// shift deletion in order to support a relatively high load factor
	// efficiently, which matters since the cost of Broadcast is linear in the
	// size of the table.

	// mu protects the following fields.
	mu sync.Mutex

	// Invariants: len(table) is 0 or a power of 2.
	table []broadcasterSlot

	// load is the number of entries in table with receiver != nil.
	load int

	lastID SubscriptionID
}

type broadcasterSlot struct {
	// Invariants: If receiver == nil, then filter == NoEvents and id == 0.
	// Otherwise, id != 0.
	receiver *Receiver
	filter   Set
	id       SubscriptionID
}

const (
	broadcasterMinNonZeroTableSize = 2 // must be a power of 2 > 1

	broadcasterMaxLoadNum = 13
	broadcasterMaxLoadDen = 16
)

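// An illustrative walk-through of the load factor arithmetic (the example
// numbers are not from the original file): with len(table) == 16,
// SubscribeEvents expands the table on the insertion that raises load to 14,
// since 14*broadcasterMaxLoadDen == 224 first exceeds
// broadcasterMaxLoadNum*len(table) == 208; i.e. the load factor is kept at or
// below 13/16.
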
// SubscribeEvents implements Source.SubscribeEvents.
func (b *Broadcaster) SubscribeEvents(r *Receiver, filter Set) SubscriptionID {
	b.mu.Lock()

	// Assign an ID for this subscription.
	b.lastID++
	id := b.lastID

	// Expand the table if over the maximum load factor:
	//
	//          load / len(b.table) > broadcasterMaxLoadNum / broadcasterMaxLoadDen
	// load * broadcasterMaxLoadDen > broadcasterMaxLoadNum * len(b.table)
	b.load++
	if (b.load * broadcasterMaxLoadDen) > (broadcasterMaxLoadNum * len(b.table)) {
		// Double the number of slots in the new table.
		newlen := broadcasterMinNonZeroTableSize
		if len(b.table) != 0 {
			newlen = 2 * len(b.table)
		}
		if newlen <= cap(b.table) {
			// Reuse excess capacity in the current table, moving entries not
			// already in their first-probed positions to better ones.
			newtable := b.table[:newlen]
			newmask := uint64(newlen - 1)
			for i := range b.table {
				if b.table[i].receiver != nil && uint64(b.table[i].id)&newmask != uint64(i) {
					entry := b.table[i]
					b.table[i] = broadcasterSlot{}
					broadcasterTableInsert(newtable, entry.id, entry.receiver, entry.filter)
				}
			}
			b.table = newtable
		} else {
			newtable := make([]broadcasterSlot, newlen)
			// Copy existing entries to the new table.
			for i := range b.table {
				if b.table[i].receiver != nil {
					broadcasterTableInsert(newtable, b.table[i].id, b.table[i].receiver, b.table[i].filter)
				}
			}
			// Switch to the new table.
			b.table = newtable
		}
	}

	broadcasterTableInsert(b.table, id, r, filter)
	b.mu.Unlock()
	return id
}

// Preconditions:
// * table must not be full.
// * len(table) is a power of 2.
func broadcasterTableInsert(table []broadcasterSlot, id SubscriptionID, r *Receiver, filter Set) {
	entry := broadcasterSlot{
		receiver: r,
		filter:   filter,
		id:       id,
	}
	mask := uint64(len(table) - 1)
	i := uint64(id) & mask
	disp := uint64(0)
	for {
		if table[i].receiver == nil {
			table[i] = entry
			return
		}
		// If we've been displaced farther from our first-probed slot than the
		// element stored in this one, swap elements and switch to inserting
		// the replaced one. (This is Robin Hood insertion.)
		slotDisp := (i - uint64(table[i].id)) & mask
		if disp > slotDisp {
			table[i], entry = entry, table[i]
			disp = slotDisp
		}
		i = (i + 1) & mask
		disp++
	}
}

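// An illustrative trace of the Robin Hood rule above (the example values are
// hypothetical, not from the original file): with len(table) == 4 (mask == 3),
// suppose slot 1 holds id 1 and slot 2 holds id 2, both in their first-probed
// slots (displacement 0). Inserting id 5 first probes slot 5&3 == 1 with
// disp == 0; no swap occurs there, since disp is not greater than slot 1's
// displacement. At slot 2, disp == 1 exceeds slot 2's displacement of 0, so
// id 5 takes slot 2 and insertion continues with the displaced id 2, which
// lands in the empty slot 3.
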
// UnsubscribeEvents implements Source.UnsubscribeEvents.
func (b *Broadcaster) UnsubscribeEvents(id SubscriptionID) {
	b.mu.Lock()

	mask := uint64(len(b.table) - 1)
	i := uint64(id) & mask
	for {
		if b.table[i].id == id {
			// Found the element to remove. Move all subsequent elements
			// backward until we either find an empty slot, or an element that
			// is already in its first-probed slot. (This is backward shift
			// deletion.)
			for {
				next := (i + 1) & mask
				if b.table[next].receiver == nil {
					break
				}
				if uint64(b.table[next].id)&mask == next {
					break
				}
				b.table[i] = b.table[next]
				i = next
			}
			b.table[i] = broadcasterSlot{}
			break
		}
		i = (i + 1) & mask
	}

	// If a table 1/4 of the current size would still be at or under the
	// maximum load factor (i.e. the current table size is at least two
	// expansions bigger than necessary), halve the size of the table to reduce
	// the cost of Broadcast. Since we are concerned with iteration time and
	// not memory usage, reuse the existing slice to reduce future allocations
	// from table re-expansion.
	b.load--
	if len(b.table) > broadcasterMinNonZeroTableSize && (b.load*(4*broadcasterMaxLoadDen)) <= (broadcasterMaxLoadNum*len(b.table)) {
		newlen := len(b.table) / 2
		newtable := b.table[:newlen]
		for i := newlen; i < len(b.table); i++ {
			if b.table[i].receiver != nil {
				broadcasterTableInsert(newtable, b.table[i].id, b.table[i].receiver, b.table[i].filter)
				b.table[i] = broadcasterSlot{}
			}
		}
		b.table = newtable
	}

	b.mu.Unlock()
}

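// An illustrative trace of backward shift deletion (the example values are
// hypothetical, not from the original file): continuing the layout above,
// with len(table) == 4, slot 1 holding id 1, slot 2 holding id 5 (home
// slot 1), and slot 3 holding id 2 (home slot 2), unsubscribing id 1 frees
// slot 1. Id 5 shifts back into slot 1 and id 2 shifts back into slot 2; the
// scan then stops at the empty slot 0, and slot 3 is cleared, leaving every
// remaining entry in its first-probed slot.
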
// Broadcast notifies all Receivers subscribed to the Broadcaster of the subset
// of events to which they subscribed. The order in which Receivers are
// notified is unspecified.
func (b *Broadcaster) Broadcast(events Set) {
	b.mu.Lock()
	for i := range b.table {
		if intersection := events & b.table[i].filter; intersection != 0 {
			// We don't need to check if broadcasterSlot.receiver is nil, since
			// if it is then broadcasterSlot.filter is 0.
			b.table[i].receiver.Notify(intersection)
		}
	}
	b.mu.Unlock()
}

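// Note that Notify is invoked with b.mu held, so a Receiver callback that
// calls back into the same Broadcaster (e.g. SubscribeEvents or
// UnsubscribeEvents) would self-deadlock, since sync.Mutex is not reentrant.
// (This is an inference from the code above, not a guarantee documented in
// the original file.)
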
// FilteredEvents returns the set of events for which Broadcast will notify at
// least one Receiver, i.e. the union of filters for all subscribed Receivers.
func (b *Broadcaster) FilteredEvents() Set {
	var es Set
	b.mu.Lock()
	for i := range b.table {
		es |= b.table[i].filter
	}
	b.mu.Unlock()
	return es
}
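
// A minimal usage sketch (illustrative, not part of the original file). It
// assumes a *Receiver that was already initialized with a callback elsewhere
// in this package; everything else below uses only the methods defined in
// this file. The function name and event bit are hypothetical.
func exampleBroadcasterUsage(r *Receiver) {
	// eventReadable is a hypothetical event bit; callers define their own
	// Set bits (each Receiver can multiplex up to 64 of them).
	const eventReadable = Set(1 << 0)

	var b Broadcaster // the zero value is valid and has no subscribers
	id := b.SubscribeEvents(r, eventReadable)

	// Broadcast synchronously calls r.Notify(eventReadable), since r's
	// filter overlaps the broadcast set.
	b.Broadcast(eventReadable)

	// The union of all subscribed filters currently includes eventReadable.
	_ = b.FilteredEvents()

	b.UnsubscribeEvents(id)
}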