
Perl - progress bar, suppressing ^C and coping with huge data flows

Archive - Originally posted on "The Horse's Mouth" - 2007-10-20 08:08:54 - Graham Ellis

If you're handling a huge amount of data (gigabytes!) in a Perl program, memory won't allow you to slurp it all into a list, so you'll traverse the data in a loop, reading from a file or from a database. And because of the sheer volume of data, it may take a while to process. During such processing, you may wish to:
• See a progress bar on the screen
• Be able to ask for intermediate results
• Disable ^C (Control-C) interrupts to avoid accidental termination

A progress bar can be printed out as text, using \r rather than \n at the end of each line. If you do that, though, you need to turn autoflush mode on so that you actually see the reports rather than having them buffered, and you should use a formatted print that always generates the same length of line to avoid "droppings" being left on the end when a short line overwrites a long one. If you're likely to send your output to a file, the progress bar should go to STDERR.
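
Here's a minimal sketch of that technique - the record total and the reporting interval are just illustrative figures, and the real processing is elided:

#!/usr/bin/env perl
use strict;
use warnings;

my $total = 1_500_000;          # assumed total number of records

select(STDERR); $| = 1;         # autoflush applies to the selected handle
select(STDOUT);                 # restore STDOUT as the default

for my $done (1 .. $total) {
    # ... process one record here ...
    if ($done % 10_000 == 0) {
        # \r returns to the start of the line; the fixed-width format
        # means a short line always fully overwrites a longer earlier one
        printf STDERR "we have done %8d lines (%6.2f%%)\r",
            $done, 100 * $done / $total;
    }
}
print STDERR "\n";              # step off the progress line at the end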

You can replace your ^C handler by changing the signal handler in $SIG{INT} to call a sub of your choice. As such a sub can be called at any stage in your loop, it's good practice simply to note that ^C has been entered by setting a flag, and then to check at the end of each loop iteration whether or not there's a need to report. And don't forget to turn off that need-to-report flag once you have done the output - otherwise you'll report at EVERY iteration thereafter!
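
In sketch form, that looks like this - the log format (remote host as the first field) is an assumption for illustration:

#!/usr/bin/env perl
use strict;
use warnings;

my %count;                          # e.g. hits per remote host
my $wantreport = 0;

# Replace the default ^C action: just raise a flag, do no real work here
$SIG{INT} = sub { $wantreport = 1 };

while (my $line = <>) {
    my ($host) = split /\s+/, $line;    # assumed format: host comes first
    $count{$host}++;
    if ($wantreport) {
        print "Intermediate report\n";
        print "$count{$_} $_\n" for sort keys %count;
        $wantreport = 0;            # clear the flag, or every later
                                    # iteration reports again!
    }
}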

Having used ^C to generate a report rather than kill your process, how can you actually kill it? There are other ways via the OS, but it's better to allow the user to enter ^C twice in succession: if the second press arrives within (say) 3 seconds of the first, give an interactive "Quit?" option.
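
A sketch of that double-^C handler follows; the 3-second window is the illustrative figure from above, and $wantreport is the flag from the previous sketch:

my $wantreport = 0;
my $lastint    = 0;

$SIG{INT} = sub {
    if (time() - $lastint <= 3) {   # second ^C within 3 seconds
        print STDERR "Quit? ";
        my $answer = <STDIN>;
        exit 0 if defined $answer and $answer =~ /^\s*y/i;
    }
    $lastint    = time();
    $wantreport = 1;                # first press: just ask for a report
};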

Net result of applying these techniques? A program that you can really control, and a great deal of time saved as you no longer have to wait for it to run to completion every time. Psychologically much better too - you know how it's getting on rather than having to wait and see.

On yesterday's Perl for Larger Projects course, I wrote a complete example showing almost all of the above in a single program that analyses a massive web log file. I've commented it, so if you're already something of a Perl expert or have access to our Handling Huge Data course notes it will already make sense to you. Source code Link. If this is something you need to learn, we would welcome you on the course.

Here is some sample output to complete the example.

[trainee@campion huge]$ perl hugehunter
we have done 160000 lines ( 10.60%)
^C
Intermediate report
6 crawl-66-249-73-81.googlebot.com
1 crawler4067.ask.com
2 host86-148-48-188.range86-148.btcentralplus.com
1 livebot-65-55-209-97.search.live.com
1 lj511086.crawl.yahoo.net
1 lj511737.crawl.yahoo.net
2 lj511971.crawl.yahoo.net
1 lj512128.crawl.yahoo.net
1 mail.bezanilla.cl
[etc]
Quit? y
[trainee@campion huge]$


Note - the example logs to STDOUT, not STDERR. If you switch it over to STDERR as recommended, you'll need to select STDERR before you set autoflush, because $| acts on the currently selected filehandle - a sketch follows.
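
select(STDERR);     # $| acts on the currently selected filehandle
$| = 1;             # turn autoflush on for STDERR
select(STDOUT);     # put the default back so print still goes to STDOUT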