Huge data files - what happened earlier?
Archive - Originally posted on "The Horse's Mouth" - 2006-06-15 05:44:14 - Graham Ellis
When I'm programming a log file analysis in Perl, I'll often "slurp" the whole file into a list which I can then traverse efficiently as many times as I need. If I need to look backwards from some interesting event to see what happened in the immediate lead-up to it, I can do so simply by looking at a few previous list elements.
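As a minimal sketch of that slurp-and-look-back approach (the log file name and the five-line window here are illustrative only):
#!/usr/bin/perl
# Slurp the whole log into a list, then look back from each interesting line
open (FH, "access_log") or die "Cannot open log: $!";
@lines = <FH>; # the whole file, one element per line
close FH;
for ($i = 0; $i < @lines; $i++) {
    next unless ($lines[$i] =~ m!GET /net/quote\.html!);
    $start = ($i > 5) ? $i - 5 : 0; # up to 5 lines of lead-up
    print @lines[$start .. $i];
}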
This technique fails if my log file is so massive that it won't fit into my computer's memory, or if loading it causes severe swapping. Fortunately, there are other ways of dealing with this in Perl.
* Using a queue, each item that's read from the file and is a potential prior event is pushed (or rather unshift-ed) onto a list
* When an event that triggers a report is found, the prior events in the queue can be examined and reported on
* The queue of prior events is kept trimmed to an appropriate length.
I've applied this technique to spotting quotation requests for private courses in our log files ... when I find a quote request, I want to know the background.
Code to add all .html and .php requests to a queue just in case the user calls up a quote:
next unless (m!T \S+\.(php|html)!); # .php and .html requests only
unshift @queue,$_; # Add to front of queue
$#queue = 20 if ($#queue > 20); # Trim queue to the latest 21 entries
Code to report when a trigger event is found:
if (m!GET /net/quote\.html!) { # is it a quote?
    ($host,$page) = (split)[0,6]; # host and requested page fields
    print "Host: $host\nQuote: $page\n";
    foreach (@queue) { # If so, report on previous requests
        ($ohost,$opage) = (split)[0,6];
        print "From: $opage\n" if ($host eq $ohost);
    }
    print "\n";
}
The code to keep the queue trimmed back is part of the "add to queue" code just above.
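Putting the fragments together, a complete version might run along these lines (the log file name here is illustrative; the real source is linked at the end):
#!/usr/bin/perl
# Report each quote request together with the same host's earlier page views
open (FH, "access_log") or die "Cannot open log: $!";
while (<FH>) {
    if (m!GET /net/quote\.html!) { # is it a quote?
        ($host,$page) = (split)[0,6];
        print "Host: $host\nQuote: $page\n";
        foreach (@queue) { # report on previous requests
            ($ohost,$opage) = (split)[0,6];
            print "From: $opage\n" if ($host eq $ohost);
        }
        print "\n";
    }
    next unless (m!T \S+\.(php|html)!); # .php / .html requests only
    unshift @queue,$_; # Add to queue
    $#queue = 20 if ($#queue > 20); # Trim queue
}
close FH;
Checking the trigger before adding the current line to the queue means a quote request is never reported as one of its own prior pages.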
Here's an example of the results that I get from running my program ...
Host: cache2.uk.Ichangedthis.com
Quote: /net/quote.html?where=bs32+4tr
From: /course/otc.html
and this tells me that the person who was looking for a quotation for training in BS32 had previously visited the page of general information about on-site training ...
Full source code of the example that I'm discussing here is available in our handling huge data resource, and is taught on our Perl for larger projects course.