Load testing a web server over HTTP/1.1 and HTTP/2.0 with k6
2024-02-26 15:13:18

I wanted to see what difference there actually is between HTTP/1.1 and HTTP/2.0, and I happened to remember k6, a load-testing tool I had come across a while back. A perfect excuse to give it a try.

Installing k6

The official installation docs are here: https://grafana.com/docs/k6/latest/get-started/installation/
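On most platforms installation is a one-liner. For example (these commands follow the linked docs and are added here for convenience; they are not part of the original article):

brew install k6            # macOS, via Homebrew
docker pull grafana/k6     # or skip installation and use the official Docker image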

Writing the server in Go

Create an HTTP server and an HTTPS server. The HTTPS server also needs a certificate; see 使用golang创建http2和h2c服务端 (creating HTTP/2 and h2c servers with Go) for details.
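If you don't already have a certificate pair, a self-signed one is enough for local testing. A minimal sketch (the file names simply match the ones used in the code below; this command is my addition, not from the referenced post):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout mydomain.com.key -out mydomain.com.crt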

package main

import (
	"fmt"
	"log"
	"net/http"
	"sync"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	// Echo the negotiated protocol back to the client and log it on the server.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Protocol: ", r.Proto)
		log.Println("Protocol: ", r.Proto)
	})

	var wg sync.WaitGroup

	h1s := &http.Server{
		Addr:    "0.0.0.0:8080",
		Handler: handler,
	}

	log.Println("start http1.1 server without tls on: 8080") // This one is a plain HTTP/1.1 server, unlike the one in the other article.
	wg.Add(1)
	go func() {
		defer wg.Done()
		// err := h1s.ListenAndServeTLS("mydomain.com.crt", "mydomain.com.key")
		err := h1s.ListenAndServe()
		log.Fatal(err)
	}()

	h2s := &http.Server{
		Addr:    "0.0.0.0:8090",
		Handler: handler,
	}
	http2.ConfigureServer(h2s, &http2.Server{})
	log.Println("start http2 server with tls on: 8090")
	wg.Add(1)
	go func() {
		defer wg.Done()
		err := h2s.ListenAndServeTLS("mydomain.com.crt", "mydomain.com.key")
		log.Fatal(err)
	}()

	h2cs := &http.Server{
		Addr: "0.0.0.0:9000",
		// Wrap the handler with h2c so cleartext HTTP/2 (h2c) is accepted without TLS;
		// http2.ConfigureServer alone only enables HTTP/2 over TLS.
		Handler: h2c.NewHandler(handler, &http2.Server{}),
	}
	log.Println("start http2 server without tls (h2c) on: 9000")
	wg.Add(1)
	go func() {
		defer wg.Done()
		err := h2cs.ListenAndServe()
		log.Fatal(err)
	}()

	wg.Wait()
}
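To build and run it, something like this should work (the module path and file name are placeholders I picked; adjust as needed):

go mod init example.com/h2test   # placeholder module path
go mod tidy                      # pulls in golang.org/x/net/http2 and its h2c subpackage
go run main.go                   # assumes the code is saved as main.go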

Writing the k6 script

k6 test parameters can be set on the command line, and they can also be declared in the script itself (the options object). Since k6 can generate a default template with k6 new, putting the options in the script is very convenient.
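For reference, the same run can also be driven entirely from the command line; --vus and --duration are standard k6 run flags (this command is mine, not from the original article):

k6 run --vus 8 --duration 10s script.js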

Generate a default script

k6 new

Edit script.js

Change http.get('http://test.k6.io'); to http.get("https://localhost:8090"); (fill in whichever address you want to test).
Because the server uses a certificate that k6 has not been told to trust, we also need to skip certificate verification:

export const options = {
  // A number specifying the number of VUs to run concurrently.
  vus: 8,
  // A string specifying the total duration of the test run.
  duration: "10s",
  // The server's certificate is self-signed, so skip TLS verification.
  insecureSkipTLSVerify: true,
};

Running the load test

Run the command k6 run script.js:

root@kusaka-virtual-machine:~/test# k6 run script.js 

/\ |‾‾| /‾‾/ /‾‾/
/\ / \ | |/ / / /
/ \/ \ | ( / ‾‾\
/ \ | |\ \ | (‾) |
/ __________ \ |__| \__\ \_____/ .io

execution: local
script: script.js
output: -

scenarios: (100.00%) 1 scenario, 8 max VUs, 40s max duration (incl. graceful stop):
* default: 8 looping VUs for 10s (gracefulStop: 30s)


data_received..................: 2.6 MB 255 kB/s
data_sent......................: 1.4 MB 144 kB/s
http_req_blocked...............: avg=3.47µs min=227ns med=404ns max=17.53ms p(90)=513ns p(95)=544ns
http_req_connecting............: avg=97ns min=0s med=0s max=1.33ms p(90)=0s p(95)=0s
http_req_duration..............: avg=1.97ms min=138.6µs med=1.42ms max=77.28ms p(90)=3.82ms p(95)=5.19ms
{ expected_response:true }...: avg=1.97ms min=138.6µs med=1.42ms max=77.28ms p(90)=3.82ms p(95)=5.19ms
http_req_failed................: 0.00% ✓ 0 ✗ 39626
http_req_receiving.............: avg=566.71µs min=9.44µs med=202.58µs max=44.77ms p(90)=1.4ms p(95)=2.21ms
http_req_sending...............: avg=251.31µs min=24.39µs med=41.09µs max=39.36ms p(90)=407.78µs p(95)=881.68µs
http_req_tls_handshaking.......: avg=2.48µs min=0s med=0s max=16.76ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=1.15ms min=0s med=856.01µs max=29.65ms p(90)=2.32ms p(95)=3.07ms
http_reqs......................: 39626 3962.247921/s
iteration_duration.............: avg=1.99ms min=165.25µs med=1.5ms max=46.3ms p(90)=3.89ms p(95)=5.2ms
iterations.....................: 39626 3962.247921/s
vus............................: 8 min=8 max=8
vus_max........................: 8 min=8 max=8


running (10.0s), 0/8 VUs, 39626 complete and 0 interrupted iterations
default ✓ [======================================] 8 VUs 10s

Comparing HTTP/1.1 and HTTP/2.0

Point the script at port 8090 and then at port 8080.
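Rather than editing the URL in script.js between runs, one option (my own addition, not from the article) is to read the target from an environment variable, e.g. http.get(__ENV.TARGET) in the script, and pass it on the command line:

k6 run -e TARGET=https://localhost:8090 script.js   # HTTP/2 over TLS, port 8090
k6 run -e TARGET=http://localhost:8080 script.js    # HTTP/1.1 without TLS, port 8080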

I'm probably testing this the wrong way, because HTTP/1.1 and HTTP/2.0 come out almost identical. The first run hit port 8090 and the server logged the protocol as HTTP/2.0; the second hit port 8080 and the server logged HTTP/1.1. In both cases the request rate is around 3,800 requests per second, so there is practically no difference. In hindsight this is not too surprising: each VU issues one tiny sequential GET at a time against localhost, so HTTP/2 features such as multiplexing and header compression (which help most with many parallel requests per connection, larger headers, and real network latency) have little room to show a benefit.

root@kusaka-virtual-machine:~/test# k6 run script.js 

/\ |‾‾| /‾‾/ /‾‾/
/\ / \ | |/ / / /
/ \/ \ | ( / ‾‾\
/ \ | |\ \ | (‾) |
/ __________ \ |__| \__\ \_____/ .io

execution: local
script: script.js
output: -

scenarios: (100.00%) 1 scenario, 8 max VUs, 40s max duration (incl. graceful stop):
* default: 8 looping VUs for 10s (gracefulStop: 30s)


data_received..................: 2.5 MB 247 kB/s
data_sent......................: 1.4 MB 139 kB/s
http_req_blocked...............: avg=3.89µs min=238ns med=421ns max=39.68ms p(90)=521ns p(95)=552ns
http_req_connecting............: avg=77ns min=0s med=0s max=658.32µs p(90)=0s p(95)=0s
http_req_duration..............: avg=2.01ms min=137.91µs med=1.5ms max=68.7ms p(90)=3.98ms p(95)=5.16ms
{ expected_response:true }...: avg=2.01ms min=137.91µs med=1.5ms max=68.7ms p(90)=3.98ms p(95)=5.16ms
http_req_failed................: 0.00% ✓ 0 ✗ 38321
http_req_receiving.............: avg=658.75µs min=8.71µs med=257.49µs max=33.57ms p(90)=1.71ms p(95)=2.5ms
http_req_sending...............: avg=256.06µs min=24.69µs med=55.3µs max=38.98ms p(90)=466.9µs p(95)=859.19µs
http_req_tls_handshaking.......: avg=2.72µs min=0s med=0s max=39.14ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=1.1ms min=0s med=837.62µs max=28.51ms p(90)=2.21ms p(95)=2.87ms
http_reqs......................: 38321 3831.506661/s
iteration_duration.............: avg=2.06ms min=165.5µs med=1.59ms max=47.18ms p(90)=4.07ms p(95)=5.23ms
iterations.....................: 38321 3831.506661/s
vus............................: 8 min=8 max=8
vus_max........................: 8 min=8 max=8


running (10.0s), 0/8 VUs, 38321 complete and 0 interrupted iterations
default ✓ [======================================] 8 VUs 10s
root@kusaka-virtual-machine:~/test# k6 run script.js

/\ |‾‾| /‾‾/ /‾‾/
/\ / \ | |/ / / /
/ \/ \ | ( / ‾‾\
/ \ | |\ \ | (‾) |
/ __________ \ |__| \__\ \_____/ .io

execution: local
script: script.js
output: -

scenarios: (100.00%) 1 scenario, 8 max VUs, 40s max duration (incl. graceful stop):
* default: 8 looping VUs for 10s (gracefulStop: 30s)


data_received..................: 5.2 MB 518 kB/s
data_sent......................: 3.0 MB 302 kB/s
http_req_blocked...............: avg=5.5µs min=1.09µs med=1.78µs max=6.61ms p(90)=2.63µs p(95)=3.46µs
http_req_connecting............: avg=322ns min=0s med=0s max=3.45ms p(90)=0s p(95)=0s
http_req_duration..............: avg=1.96ms min=124.23µs med=1.54ms max=39.52ms p(90)=3.93ms p(95)=4.95ms
{ expected_response:true }...: avg=1.96ms min=124.23µs med=1.54ms max=39.52ms p(90)=3.93ms p(95)=4.95ms
http_req_failed................: 0.00% ✓ 0 ✗ 37811
http_req_receiving.............: avg=120.47µs min=10.45µs med=24.95µs max=26.94ms p(90)=230.5µs p(95)=387.88µs
http_req_sending...............: avg=29.93µs min=4.73µs med=7.73µs max=14.72ms p(90)=15.27µs p(95)=56.93µs
http_req_tls_handshaking.......: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
http_req_waiting...............: avg=1.81ms min=103.29µs med=1.41ms max=39.48ms p(90)=3.71ms p(95)=4.68ms
http_reqs......................: 37811 3780.428615/s
iteration_duration.............: avg=2.08ms min=152.17µs med=1.67ms max=39.58ms p(90)=4.09ms p(95)=5.14ms
iterations.....................: 37811 3780.428615/s
vus............................: 5 min=5 max=8
vus_max........................: 8 min=8 max=8


running (10.0s), 0/8 VUs, 37811 complete and 0 interrupted iterations
default ✓ [======================================] 8 VUs 10s

Where k6 falls short

k6 currently cannot load test a non-encrypted HTTP/2 server (h2c), which is what the port 9000 server in the code above is. If you test it with k6, the requests can only use HTTP/1.1; unlike curl, there is no extra flag such as --http2-prior-knowledge to force the HTTP/2.0 protocol.
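For comparison, curl can speak cleartext HTTP/2 to that port directly; the response shown below is what I would expect from the handler above, not output copied from the article:

curl --http2-prior-knowledge http://localhost:9000
Protocol:  HTTP/2.0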

The related discussion is here: https://github.com/grafana/k6/issues/970 . It dates back to 2019, but it looks like there is still no way to add h2c support without a breaking change.