Last updated: 2019-04-08
Introduction
BBR (Bottleneck Bandwidth and RTT) is a congestion control algorithm developed by Google
https://github.com/google/bbr
In the mainline Linux kernel since 4.9
Traditional TCP relies on packet-loss feedback (loss-based congestion control, in use since the late 1980s).
In shallow buffers, packet loss happens before congestion.
In deep buffers, congestion happens before packet loss
(loss-based senders fill the deep buffers in many last-mile links, causing seconds of needless queuing delay).
When these systems realize that some of the data packets don't make it to their final destination,
they start sending the data more slowly, which ideally reduces the amount of congestion.
BBR, by contrast, uses recent measurements of the network's delivery rate and round-trip time to build an explicit model
that includes both the maximum recent bandwidth available to that connection and its minimum recent round-trip delay.
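A rough back-of-the-envelope illustration of that model (my own sketch, not kernel code): BBR paces sends at roughly the measured bottleneck bandwidth and keeps about one bandwidth-delay product (BDP) of data in flight. Assuming a 100 Mbit/s bottleneck and a 20 ms minimum RTT:
# BDP = bottleneck_bandwidth * min_RTT
# 100 Mbit/s = 12,500,000 bytes/s; 12,500,000 bytes/s * 0.020 s = 250,000 bytes
echo $((100 * 1000000 / 8 * 20 / 1000))   # => 250000 (bytes, ~250 KB in flight)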
Usage
On CentOS 7 you need a kernel from elrepo.org to get bbr - link
grep '_BBR' /boot/config-*
/boot/config-4.9.0-4-amd64:CONFIG_TCP_CONG_BBR=m
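To check only the config of the kernel that is currently running (assuming the distro ships a /boot/config file per kernel, as Debian/CentOS do):
grep '_BBR' "/boot/config-$(uname -r)"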
Switch to bbr
/etc/sysctl.conf
# Default: cubic
net.ipv4.tcp_congestion_control=bbr
# Default: pfifo_fast
net.core.default_qdisc=fq
sysctl -p
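The same settings can also be applied at runtime without editing the file (this does not persist across reboots):
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr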
Checking
lsmod | grep bbr
# cat /proc/sys/net/ipv4/tcp_congestion_control
sysctl -n net.ipv4.tcp_congestion_control
# cat /proc/sys/net/core/default_qdisc
sysctl -n net.core.default_qdisc
fq
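Once the module is loaded, bbr should also appear in the list of available congestion control algorithms (the output below is just an example):
sysctl -n net.ipv4.tcp_available_congestion_control
# cubic reno bbr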
In Linux v4.13-rc1 and beyond, BBR has new support for TCP-level pacing.
This means that there is no longer a requirement to install the "fq" qdisc to use BBR. Any qdisc will do.
BBR congestion control depends on pacing, and pacing is currently handled by the sch_fq packet scheduler for performance reasons,
and also because implementing pacing with FQ was convenient to truly avoid bursts.
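One caveat worth double-checking: as far as I know, changing net.core.default_qdisc only affects qdiscs created afterwards, so an already-up interface may still carry its old qdisc. To inspect and, if necessary, replace the qdisc on the test interface (ens4, as used below):
tc qdisc show dev ens4
tc qdisc replace dev ens4 root fq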
Speed test
# Server
yum install httpd -y
systemctl start httpd
cd /var/www/html
dd if=/dev/zero of=test.bin bs=1M count=512
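A quick sanity check on the server that the file is actually being served (any HTTP client works; curl is just an example):
curl -sI http://localhost/test.bin | head -n 1
# expect: HTTP/1.1 200 OK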
# Client
mkdir -p /mnt/tmpfs
mount -t tmpfs tmpfs /mnt/tmpfs
cd /mnt/tmpfs
wget http://192.168.88.182/test.bin -O test.bin
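As an alternative to reading the speed off wget's progress output, curl can print the average download speed (bytes/s) directly, using the same assumed server IP:
curl -o /dev/null -s -w '%{speed_download}\n' http://192.168.88.182/test.bin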
Test without packet loss - results
# cubic, pfifo_fast
108 MB/s
# bbr, fq
124 MB/s
Test with packet loss
Server
# loss 1%
tc qdisc add dev ens4 root netem loss random 1%
# rollback
tc qdisc del dev ens4 root
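The 2% and 3% rows below were presumably produced the same way, changing only the loss percentage; an existing netem rule can be adjusted in place:
tc qdisc change dev ens4 root netem loss random 2%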
Results
cubic:
# 1% drop => 60M
# 2% drop => 20M
# 3% drop => 10M
bbr:
# 1% drop => 95M
# 2% drop => 75M
# 3% drop => 45M
Monitor Linux TCP BBR connections
ss -tin
bbr wscale:7,7 rto:201 rtt:0.669/0.208 ato:40 mss:1448 rcvmss:536 advmss:1448
cwnd:190 bytes_acked:68157360 bytes_received:147
segs_out:47071 segs_in:23749 data_segs_out:47070 data_segs_in:1
bbr:(bw:1619.8Mbps,mrtt:0.083,pacing_gain:2.88672,cwnd_gain:2.88672)
send 3289.9Mbps lastsnd:1 lastrcv:572 lastack:1 pacing_rate 2132.7Mbps delivery_rate 1619.8Mbps
app_limited busy:572ms rcv_space:14600 notsent:16774 minrtt:0.083
* Only visible while the remote side has a connection established
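To keep watching the BBR stats of live connections, ss can simply be re-run periodically, e.g.:
watch -n 1 'ss -tin'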