this ( https://benhoyt.com/writings/count-words/ ) has led me to the question: what can be done to (minimally) speed up pre-processing of 7+ GB text files? write a parser in zig which splits on occurrences of \n\n and hand the returned data fields to the (multithreaded) go application for further processing? Or use awk and process its output in go?
although the awk idea is nonsense, I think?
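a minimal sketch of the pure-go variant, assuming blank-line-separated records (the file name, worker count and buffer sizes are placeholders, not real values): a custom bufio.SplitFunc that splits on \n\n plus a pool of worker goroutines.

```go
// Sketch only: read "\n\n"-separated records from a big file and fan
// them out to worker goroutines. Path, worker count, buffer sizes and
// the no-op record handler are placeholders.
package main

import (
	"bufio"
	"bytes"
	"log"
	"os"
	"runtime"
	"sync"
)

// splitOnBlankLine is a bufio.SplitFunc returning one token per
// "\n\n"-separated block.
func splitOnBlankLine(data []byte, atEOF bool) (advance int, token []byte, err error) {
	if atEOF && len(data) == 0 {
		return 0, nil, nil
	}
	if i := bytes.Index(data, []byte("\n\n")); i >= 0 {
		return i + 2, data[:i], nil
	}
	if atEOF {
		return len(data), data, nil // last record without trailing blank line
	}
	return 0, nil, nil // need more data
}

func main() {
	f, err := os.Open("records.txt") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	records := make(chan []byte, 1024)
	var wg sync.WaitGroup
	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for rec := range records {
				_ = rec // parse the data fields here
			}
		}()
	}

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<24) // allow records up to 16 MB
	sc.Split(splitOnBlankLine)
	for sc.Scan() {
		rec := make([]byte, len(sc.Bytes()))
		copy(rec, sc.Bytes()) // Scanner reuses its buffer, so copy before sending
		records <- rec
	}
	close(records)
	wg.Wait()
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```

the single scanner goroutine doing the \n\n scan is the part a zig (or awk) pre-pass would replace; whether that saves more than microseconds on a 7+ GB file is exactly what would need measuring.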
so maybe zig -> database (for last-update timestamp) <-> go-application
but if we go that far and the difference is only a few microseconds, why not zig for the entire thing?
or is there a base unix utility which beats that timing on very big text-record files?
I'm essentially back at my bullshit pattern of "can I replace parts of this with awk?" if speed really matters... or "replace it with zig in the future" (which no one on the team knows)