Last updated: 2019-04-08


BBR (Bottleneck Bandwidth and RTT) is a congestion control algorithm developed by Google

Linux kernel Mainline: 4.9

Traditional TCP congestion control is loss-based, i.e. driven by packet-loss feedback (the norm since the late 1980s)

In shallow buffers, packet loss happens before congestion.
In deep buffers, congestion happens before packet loss.
(loss-based senders keep filling the deep buffers found in many last-mile links, causing seconds of needless queuing delay)

When these systems realize that some of the data packets don't make it to their final destination,
they start sending the data more slowly, which ideally reduces the amount of congestion.

BBR instead uses recent measurements of the network's delivery rate and round-trip time to build an explicit model
that includes both the maximum recent bandwidth available to that connection, and its minimum recent round-trip delay
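This model boils down to a bandwidth-delay product (BDP): bottleneck bandwidth times minimum RTT gives the ideal amount of data to keep in flight. A minimal sketch with made-up numbers (not measurements from this article):

```shell
# Sketch of BBR's core quantity: the bandwidth-delay product (BDP).
# The bandwidth and RTT values below are hypothetical examples.
btlbw_bytes_per_s=$((100 * 1000 * 1000 / 8))   # 100 Mbit/s bottleneck bandwidth
min_rtt_ms=40                                  # 40 ms minimum round-trip time
bdp_bytes=$((btlbw_bytes_per_s * min_rtt_ms / 1000))
echo "BDP: ${bdp_bytes} bytes"                 # → BDP: 500000 bytes
```

Keeping roughly one BDP in flight maximizes throughput without building a standing queue, which is why BBR tracks both the maximum bandwidth and the minimum RTT.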




On CentOS 7 you need a newer (4.9+) kernel to get bbr - link

grep '_BBR' /boot/config-*
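If the grep shows `CONFIG_TCP_CONG_BBR=m`, bbr is built as a module and has to be loaded before it can be selected (sketch, needs root):

```shell
# Load the tcp_bbr module if it was built as =m rather than =y.
modprobe tcp_bbr
```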


Switch to bbr


# Append to /etc/sysctl.conf:

# Default: cubic
net.ipv4.tcp_congestion_control=bbr

# Default: pfifo_fast
net.core.default_qdisc=fq

# Apply
sysctl -p


lsmod | grep bbr

# cat /proc/sys/net/ipv4/tcp_congestion_control

sysctl -n net.ipv4.tcp_congestion_control

# cat /proc/sys/net/core/default_qdisc

sysctl -n net.core.default_qdisc
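The two checks above can be combined into a one-shot script; the fallbacks are an addition so it also runs on systems without these sysctls:

```shell
# Print the active congestion control and default qdisc, falling back to
# "unknown" where the sysctl is unavailable.
cc=$(sysctl -n net.ipv4.tcp_congestion_control 2>/dev/null || echo unknown)
qdisc=$(sysctl -n net.core.default_qdisc 2>/dev/null || echo unknown)
echo "congestion control: ${cc}"
echo "default qdisc: ${qdisc}"
```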




Since Linux v4.13-rc1, BBR supports TCP-level pacing.

This means there is no longer a requirement to install the "fq" qdisc to use BBR; any qdisc will do.

Before that, BBR depended on pacing being handled by the sch_fq packet scheduler, both for performance reasons

and because implementing pacing with FQ was convenient for truly avoiding bursts.
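On those older kernels the fq qdisc therefore had to be attached explicitly; a sketch (the interface name ens4 matches the tests below, needs root):

```shell
# Attach the fq packet scheduler as the root qdisc (pre-4.13 requirement for BBR).
tc qdisc replace dev ens4 root fq
# Confirm it took effect.
tc qdisc show dev ens4
```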




# Server

yum install httpd -y

systemctl start httpd

cd /var/www/html

dd if=/dev/zero of=test.bin bs=1M count=512

# Client

mount tmpfs /mnt/tmpfs -t tmpfs

cd /mnt/tmpfs

wget -O test.bin http://<server-ip>/test.bin   # <server-ip> is a placeholder

Test results without packet loss

# cubic, pfifo_fast

108 MB/s

# bbr, fq


Test results with packet loss


# loss 1%

tc qdisc add dev ens4 root netem loss random 1%

# rollback

tc qdisc del dev ens4 root
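The per-loss-rate numbers below can be collected with a loop like this sketch (ens4 and `<server-ip>` are placeholders, needs root):

```shell
# Sweep loss rates and re-run the download at each one.
for loss in 1 2 3; do
  tc qdisc replace dev ens4 root netem loss random ${loss}%
  wget -O /dev/null "http://<server-ip>/test.bin"
done
# Remove the loss emulation afterwards.
tc qdisc del dev ens4 root
```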



# cubic, pfifo_fast
# 1% drop => 60M

# 2% drop => 20M

# 3% drop => 10M


# bbr, fq
# 1% drop => 95M

# 2% drop => 75M

# 3% drop => 45M


monitor Linux TCP BBR connections


ss -tin

bbr wscale:7,7 rto:201 rtt:0.669/0.208 ato:40 mss:1448 rcvmss:536 advmss:1448

cwnd:190 bytes_acked:68157360 bytes_received:147

segs_out:47071 segs_in:23749 data_segs_out:47070 data_segs_in:1


send 3289.9Mbps lastsnd:1 lastrcv:572 lastack:1 pacing_rate 2132.7Mbps delivery_rate 1619.8Mbps

app_limited busy:572ms rcv_space:14600 notsent:16774 minrtt:0.083

* Only visible while the peer is connected
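To keep sampling these stats during a transfer, a simple poll loop works (port 80 matches the httpd test server above):

```shell
# Print TCP stats for port-80 connections once per second, 10 samples.
for i in $(seq 1 10); do
  ss -tin 'sport = :80 or dport = :80'
  sleep 1
done
```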