The garbage collector (GC) does not collect garbage constantly, but at certain intervals. If your code allocates memory for some data structures and then frees them - over and over, in a loop - this puts pressure on the GC and may even force the `runtime` to request new memory from the OS.

Imagine this scenario: we allocate a chunk (e.g., a `[]byte`), work with it, then release it. A certain amount of time will pass before the GC collects this chunk. If, during that time, we allocate another similar chunk and the memory already obtained from the OS is insufficient, the application will have to request more memory from the OS. In terms of application time, a memory request to the OS can be expensive. Meanwhile, the previously "used" chunk sits idle.

What should you do? Here are the steps:

- create a pool
- reset the state of the chunk
- put used chunks into the pool
- take new chunks from the pool

### Create a pool

```go
import (
	"sync"
)

var bytesPool = sync.Pool{
	New: func() interface{} {
		return []byte{}
	},
}

/*
In this example, the `New` function is not needed (see the explanation below).
If the pool is empty and `New` is not nil, it will be used to create a new object,
which will then need to be converted from `interface{}` to the required type.
*/
```

### Reset state

```go
// let ary be a []byte of certain length and capacity
ary = ary[:0] // truncate len, keep cap
```

### Put it into the pool

```go
/*
Either way, we may end up with chunks that are too big and that we rarely need -
let's throw those away, otherwise a 2048-byte chunk will be kept where only
500-800 bytes are needed. With many such chunks, this hurts memory usage -
the very thing we are trying to address.
*/
const maxCap = 1024

if cap(ary) <= maxCap { // put back only limited-size chunks
	bytesPool.Put(ary)
}
```

### Take from the pool

```go
nextAry := bytesPool.Get().([]byte)
```

### Explanation about New

The `New` function creates an empty `[]byte{}` and adds conversions to `interface{}` and back. In the case of a `[]byte` that we will most likely grow with `append`, this approach brings little benefit, because:

- it creates a `[]byte` of zero capacity
- it adds a double conversion: to `interface{}` and back again
- `append` will still allocate a new chunk
- `append` can happily take a nil slice, as long as it is typed `[]byte` (not `interface{}`)

It is much more convenient to create two functions that deal with all the pool fuss:

```go
// get
func getBytes() (b []byte) {
	ifc := bytesPool.Get()
	if ifc != nil {
		b = ifc.([]byte)
	}
	return
}

// put
func putBytes(b []byte) {
	if cap(b) <= maxCap {
		b = b[:0] // reset
		bytesPool.Put(b)
	}
}
```

### Some final points to note

- the pool is goroutine-safe
- the pool will not necessarily release its contents on the first GC run, but it may release them at any time
- there is no way to define or limit the pool size
- there is no need to worry about pool overflow
- you don't need to create a pool everywhere you go; it was designed as a buffer for efficiently sharing common objects, not only within a package but also across multiple packages
- you probably have, or will have, situations where the need (and the opportunity) to help the GC is obvious
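
To tie the pieces together, here is a minimal, self-contained sketch of how the pooled buffers might be used in practice. The `processPayload` helper and the example strings are made up for illustration; `maxCap`, `getBytes`, and `putBytes` are the same ones defined above.

```go
package main

import (
	"fmt"
	"sync"
)

const maxCap = 1024

// No New function: Get may return nil, which getBytes handles.
var bytesPool = sync.Pool{}

// getBytes takes a chunk from the pool, or returns a nil slice if the pool is empty.
func getBytes() (b []byte) {
	ifc := bytesPool.Get()
	if ifc != nil {
		b = ifc.([]byte)
	}
	return
}

// putBytes resets a chunk and returns it to the pool, dropping oversized chunks.
func putBytes(b []byte) {
	if cap(b) <= maxCap {
		b = b[:0] // truncate len, keep cap
		bytesPool.Put(b)
	}
}

// processPayload is a hypothetical worker: it borrows a buffer from the pool,
// builds a message in it, prints the result, and hands the buffer back.
func processPayload(payload string) {
	buf := getBytes()
	buf = append(buf, "processed: "...)
	buf = append(buf, payload...)
	fmt.Println(string(buf))
	putBytes(buf) // the (possibly re-grown) chunk becomes reusable
}

func main() {
	for _, p := range []string{"alpha", "beta", "gamma"} {
		processPayload(p)
	}
}
```

Note that `putBytes` is called explicitly after the last use of `buf` rather than via `defer putBytes(buf)`: a deferred call would capture the slice value before `append` re-grew it, so the grown chunk would never make it back into the pool.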