This post discusses the author's experience optimizing a PHP script to process one billion rows of data. The initial naive implementation took 25 minutes to run, but through a series of optimizations the runtime was reduced to 27.7 seconds. The optimizations included using fgets() instead of fgetcsv(), using references where possible, reducing the number of comparisons, adding type casts, and enabling JIT.
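The first optimization mentioned, swapping fgetcsv() for fgets(), can be sketched as follows. This is a hypothetical illustration, not the author's exact code; it assumes 1BRC-style input lines of the form `StationName;12.3`, and uses `str_getcsv()` to stand in for the per-line work `fgetcsv()` does.

```php
<?php
// Hypothetical sketch contrasting the two parsing styles the post compares.
// Assumed input format: "StationName;temperature", e.g. "Hamburg;12.0".

$line = "Hamburg;12.0";

// fgetcsv()-style parsing: CSV machinery with quote/escape handling.
[$stationCsv, $tempCsv] = str_getcsv($line, ';');

// fgets()-style parsing: take the raw line and split it manually,
// skipping the CSV layer entirely.
$pos     = strpos($line, ';');
$station = substr($line, 0, $pos);
$temp    = (float) substr($line, $pos + 1); // explicit cast, another trick from the post

var_dump($station === $stationCsv); // both approaches yield the same station name
```

The point is not that `strpos()`/`substr()` are magic, but that the CSV parser pays for features (quoting, escaping) this input format never uses.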

8 min read · From dev.to
Table of contents

- A first naive approach
- fgets() instead of fgetcsv()
- Using a reference where possible
- Only one comparison
- Adding type casts
- What about JIT?
- What more?
- Can we make it even faster?
- Is this the end?
- What did I learn on the way?
- The END
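The "Using a reference where possible" step from the table of contents can be sketched as below. This is a minimal illustration under the assumption (typical for this challenge) that per-station aggregates live in an associative array; the `record()` helper and its field names are made up for the example.

```php
<?php
// Hypothetical sketch of "use a reference where possible".
// Assumes per-station min/max/sum/count aggregates, as in a typical 1BRC loop.

function record(array &$stations, string $name, float $temp): void
{
    if (!isset($stations[$name])) {
        $stations[$name] = ['min' => $temp, 'max' => $temp, 'sum' => 0.0, 'cnt' => 0];
    }
    // Resolve $stations[$name] once via a reference instead of
    // performing the hash lookup for every field update.
    $s = &$stations[$name];
    if ($temp < $s['min']) { $s['min'] = $temp; }
    if ($temp > $s['max']) { $s['max'] = $temp; }
    $s['sum'] += $temp;
    $s['cnt']++;
}

$stations = [];
record($stations, 'Hamburg', 12.0);
record($stations, 'Hamburg', 8.5);
```

In a loop that runs a billion times, shaving repeated array lookups like this is exactly the kind of micro-optimization that adds up.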