How I wrote a JSON parser in Go - Part 4 - Optimization

Opemipo · December 20, 2019

In the last post, I talked about how I implemented all the features in the JSON specification. That's nice, but I was curious to see how my implementation compared, performance-wise, with the standard library's.

One of the first clues that I had a performance problem came when I compared how long the two implementations took to load two huge JSON strings while my code was at this state. My implementation took forever, and I had no idea why. After a bit of guessing I still couldn't figure it out, so I wrote benchmarks. Go has built-in support for these; a benchmark looks like this:

func BenchmarkMyMapOfString(b *testing.B) {
	b.ReportAllocs()
	str, err := ioutil.ReadFile("testdata/map_of_string.json")
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer() // exclude the file read from the measurement
	for i := 0; i < b.N; i++ {
		Load(str)
	}
}
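Benchmarks run through the standard test tool; adding -benchmem reports allocations per operation alongside the timings (the same numbers b.ReportAllocs() enables per benchmark):

go test -bench=. -benchmem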

I wrote the benchmarks so I could test the performance of the different features separately - string parsing, array parsing and object parsing. It turned out my string parsing was taking forever. I looked at the standard library's implementation to figure out why, and after a while of looking I noticed the major difference: it decodes strings with the strconv.Unquote function. Once I switched to Unquote, my performance was suddenly comparable with the standard library's.
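For context, here is a minimal sketch of what using strconv.Unquote looks like; the raw token and the surrounding program are my own illustration, not the parser's actual code. One caveat: Unquote parses Go string-literal syntax, which overlaps with, but is not identical to, JSON's escape rules.

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// A JSON string token as sliced out of the input, quotes included.
	raw := `"caf\u00e9 \n bar"`

	// Unquote decodes all the escape sequences in a single call,
	// instead of building the result rune by rune.
	s, err := strconv.Unquote(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", s) // "café \n bar"
}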

This also made me realise that my model for how Go works wasn't complete. It appears that if you are going to be doing something like this

s := make([]rune, 0)
for i := 0; i < 1000; i++ {
	// each append that outgrows the slice may allocate a new,
	// bigger backing array and copy everything over
	s = append(s, 'a')
}

it is going to be slow, because every one of those reallocations and copies takes time. I certainly wasn't thinking about that before. Once this was fixed, the performance of the two implementations was comparable. My job was done! You can find the code here.
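A sketch of the cheaper pattern, using the same toy numbers: give the slice its full capacity up front so append never has to grow it.

// One allocation up front; append then just writes into
// the existing backing array.
s := make([]rune, 0, 1000)
for i := 0; i < 1000; i++ {
	s = append(s, 'a')
}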

JSON parser ✅

In the next and final post, I will be summarising what I learned from this project.

Stay tuned!
