nginx+lua+redis
Release: 2016-08-08 09:19:06
Recently I have been using nginx + lua + redis to build a system that supports high-concurrency, high-traffic applications. During development it occurred to me to ask whether golang could achieve the same effect, so I wrote some simple code to compare the two. I won't go into the setup details here; there are plenty of introductions online to building high-concurrency applications with nginx + lua + redis. I am using openresty + lua + redis. Test results first. The machine is the base-model MacBook Air released in 2013 (1.3 GHz Intel Core i5, 4 GB 1600 MHz DDR3), and the command is: ab -n 1000 -c 100 http://localhost:8880/
openresty+lua+redis:
Concurrency Level: 100
Time taken for tests: 0.458 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 689000 bytes
HTML transferred: 533000 bytes
Requests per second: 2183.67 [#/sec] (mean)
Time per request: 45.794 [ms] (mean)
Time per request: 0.458 [ms] (mean, across all concurrent requests)
Transfer rate: 1469.29 [Kbytes/sec] received
golang+redis:
Concurrency Level: 100
Time taken for tests: 0.503 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 650000 bytes
HTML transferred: 532000 bytes
Requests per second: 1988.22 [#/sec] (mean)
Time per request: 50.296 [ms] (mean)
Time per request: 0.503 [ms] (mean, across all concurrent requests)
Transfer rate: 1262.05 [Kbytes/sec] received
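Both handlers do nothing but GET a single key from Redis and write the value back to the client. For reproducibility, here is a minimal sketch of seeding that key with redigo; the key name content_1 comes from the Go handler below (the Lua handler reads whichever key nginx passes in $position_key), and the payload string is an assumption:

package main

import (
    "log"

    "github.com/garyburd/redigo/redis"
)

func main() {
    // Connect to the local Redis instance used by both handlers.
    conn, err := redis.Dial("tcp", ":6379")
    if err != nil {
        log.Fatal("dial: ", err)
    }
    defer conn.Close()

    // Seed the key the Go handler reads; any string of a few hundred bytes
    // reproduces a response size similar to the one in the results above.
    if _, err := conn.Do("SET", "content_1", "<p>hello from redis</p>"); err != nil {
        log.Fatal("set: ", err)
    }
}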
lua code:

-- redis configuration
local redis = require "resty.redis"
local params = {
    host = '127.0.0.1',
    port = 6379,
}
local red = redis:new()
local ok, err = red:connect(params.host, params.port)
if not ok then
    ngx.say("failed to connect: ", err)
    return
end
local position_key = ngx.var.position_key
local content = red:get(position_key)
-- no set_keepalive here: a new connection is made for every request
ngx.print(content)

golang code:

package main

import (
    "fmt"
    "github.com/garyburd/redigo/redis"
    "log"
    "net/http"
    "time"
)

func getConn() (redis.Conn, error) {
    conn, err := redis.DialTimeout("tcp", ":6379", 0, 1*time.Second, 1*time.Second)
    if err != nil {
        fmt.Println(err)
    }
    return conn, err
}

func indexHandler(w http.ResponseWriter, r *http.Request) {
    // A new connection is dialed for every request; no pooling yet.
    conn, err := getConn()
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer conn.Close()
    result, err := conn.Do("get", "content_1")
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Fprintf(w, "Hello, %q", result)
}

func main() {
    http.HandleFunc("/", indexHandler)
    err := http.ListenAndServe(":8880", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err.Error())
    }
}

After repeated stress tests, the nginx + lua + redis combination is indeed efficient, and the golang + redis solution is not far behind. Looking at the whole path from development to deployment, golang may be the better fit and closer to common development habits; developing and testing nginx + lua is a bit awkward by comparison.

Supplementary: connection pool usage and test results

After the last round of tests, I felt the code still had room for improvement, so I looked into how to use a Redis connection pool in golang (in practice, redigo's pool) and in Lua (in practice, resty.redis's keepalive).
First the results:
openresty + lua + redis
Concurrency Level: 100
Time taken for tests: 0.284 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 687000 bytes
HTML transferred: 531000 bytes
Requests per second: 3522.03 [#/sec] (mean)
Time per request: 28.393 [ms] (mean)
Time per request: 0.284 [ms] (mean, across all concurrent requests)
Transfer rate: 2362.93 [Kbytes/sec] received
Then look at golang:
golang + redis
Concurrency Level: 100
Time taken for tests: 0.327 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 650000 bytes
HTML transferred: 532000 bytes
Requests per second: 3058.52 [#/sec] (mean)
Time per request: 32.696 [ms] (mean)
Time per request: 0.327 [ms] (mean, across all concurrent requests)
Transfer rate: 1941.44 [Kbytes/sec] received
lua code:
-- redis configuration
local redis = require "resty.redis"
local params = {
    host = '127.0.0.1',
    port = 6379,
}
local red = redis:new()
local ok, err = red:connect(params.host, params.port)
if not ok then
    ngx.say("failed to connect: ", err)
    return
end
local position_key = ngx.var.position_key
local content = red:get(position_key)
ngx.print(content)
local ok, err = red:set_keepalive(10000, 100)
if not ok then
    ngx.say("failed to set keepalive: ", err)
    return
end

The only change from the first version is the set_keepalive(10000, 100) call, which puts the connection into a per-worker pool of up to 100 connections with a 10-second maximum idle time instead of closing it.

golang code:
package main

import (
    "flag"
    "fmt"
    "github.com/garyburd/redigo/redis"
    "log"
    "net/http"
    "runtime"
    "time"
)

var (
    pool        *redis.Pool
    redisServer = flag.String("redisServer", ":6379", "")
)

func indexHandler(w http.ResponseWriter, r *http.Request) {
    // Time how long it takes to check a connection out of the pool.
    t0 := time.Now()
    conn := pool.Get()
    t1 := time.Now()
    fmt.Printf("The call took %v to run.\n", t1.Sub(t0))
    defer conn.Close()
    result, err := conn.Do("get", "content_1")
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Fprintf(w, "Hello, %q", result)
}

func newPool(server string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:     3,
        IdleTimeout: 240 * time.Second,
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", server)
            if err != nil {
                return nil, err
            }
            return c, err
        },
        // Verify idle connections are still alive before handing them out.
        TestOnBorrow: func(c redis.Conn, t time.Time) error {
            _, err := c.Do("PING")
            return err
        },
    }
}

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU())
    flag.Parse()
    pool = newPool(*redisServer)
    http.HandleFunc("/", indexHandler)
    err := http.ListenAndServe(":8880", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err.Error())
    }
}

In addition to adding the connection pool, the golang version also sets GOMAXPROCS to the number of CPU cores.
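One tuning note, offered as a hedged suggestion rather than something tested in the original benchmark: with ab -c 100, MaxIdle: 3 means most connections are discarded again right after the burst, so the pool mainly saves dials during steady load. redigo's Pool also has MaxActive and Wait fields to bound the total number of connections; a variant of newPool using them might look like this (the specific numbers are assumptions):

func newPool(server string) *redis.Pool {
    return &redis.Pool{
        MaxIdle:     100,                 // keep enough idle connections for a 100-client burst
        MaxActive:   200,                 // hard cap on connections open to Redis
        Wait:        true,                // make Get() block instead of failing once the cap is hit
        IdleTimeout: 240 * time.Second,
        Dial: func() (redis.Conn, error) {
            return redis.Dial("tcp", server)
        },
    }
}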
However, this test is not very rigorous: Redis, nginx, the golang HTTP server, and the ab stress tool all run on the same machine and affect one another. If you are interested, deploy them on separate machines and test again.