How do you include another file like #include in C code?

There are three ways to include another file in Perl. Each has its own distinct edge, but all get the job done.

Method #1: Use the command ‘use’.

Sample code:

# Include the file "sample_include.pm"
use sample_include;

Comment:

Note that the included file has to be a Perl module (*.pm). For further reading on the ‘use’ command, refer to the perldocs, either through the command line or the website:

perldoc -f use

What the above code does is load sample_include.pm as part of the program. Generally, the module will contain a package and some subroutines, which can be called explicitly as <package name>::<subroutine>. See the example below:

# sample_include.pm
package sample_include;
sub test {
    print "This is a subroutine";
}
1; # Needed to return true when loading .pm files.

# Sample program
use sample_include;
sample_include::test();  # calls subroutine test.

There are ways to make such subroutines callable without the explicit package prefix (i.e. as barewords, such as test()). See the Exporter module and Perl classes for more info.
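As a sketch of the Exporter approach, reusing the names from the example above (normally the package would live in its own .pm file, but it is inlined here so the snippet runs as one script):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Inline stand-in for sample_include.pm (usually a separate file).
package sample_include;
use Exporter qw(import);        # gives this package an import() method
our @EXPORT_OK = qw(test);      # subs a caller may request by name

sub test {
    return "This is a subroutine";
}

package main;
# Equivalent to `use sample_include qw(test);` when the module is a file.
sample_include->import(qw(test));
print test(), "\n";             # bareword call, no package prefix needed
```

With @EXPORT_OK the caller must ask for each name explicitly; putting names in @EXPORT instead would export them by default.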

Method #2: Use the command ‘require’.

Sample code:

# Include the file "sample_include.pm"
require sample_include;

Comment:

Like the first method, this also requires the file to be a Perl module. For further reading on this command, type the following on the command line or look it up on the perldoc website:

perldoc -f require

What’s the difference between require and use?

Technically, the ‘use’ command calls require when loading the module. use does more work: it also calls the module’s import method, which typically brings subroutines or variables into the caller’s namespace. The two sections below are equivalent:

# Load in sample_file.pm
use sample_file;

# Load in sample_file.pm
BEGIN {
    require sample_file;
    sample_file->import( LIST );
}
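One practical consequence of this difference: use runs at compile time, while a bare require can be delayed until runtime, so a module is loaded only when it is actually needed. A small sketch using the core POSIX module:

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub rounded_down {
    my ($x) = @_;
    # POSIX is loaded the first time this sub runs, not at startup.
    require POSIX;
    return POSIX::floor($x);
}

print rounded_down(2.7), "\n";   # prints 2
```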

Method #3: Use the command ‘do’.

Sample code:

# Include the file "sample_include.pm"
do "sample_include.pm";

Comment:

Unlike the other two commands, ‘do’ does not require the file to be a Perl module. However, it should be a Perl file, since ‘do’ will execute the contents of the file. See perldoc -f do for more details.

One interesting note: a ‘do’ placed inside a module effectively runs only once. The other mechanisms are safe to trigger multiple times, but when a .pm file containing a ‘do’ is loaded with use, the ‘do’ runs only on the first load, because use (and require) record the module in %INC and skip it afterwards (whether that’s intentional or not). Example:

# sample_include.pm
do "another_file.pl";
1;

# another_include.pm
package another_include;
use sample_include;
&func();
1;

# main file
use sample_include;
use another_include;
&func();

# another_file.pl
sub func {
    print "Who will run this function?";
}

The main file executes sample_include.pm first (before another_include). The ‘do’ runs and defines func as main::func(). However, when another_include.pm later says use sample_include, the module is already recorded in %INC, so it is not executed again, and the ‘do’ never runs in package another_include. As a result, when another_include tries to call &func(), Perl raises an error.

The proper approach is to make sure that a ‘do’ command runs only once anywhere among the included files. Otherwise, some functions will not work as intended.
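The difference is easy to demonstrate in one script: do runs a file every time, while require records it in %INC and skips repeat loads. The helper file here is generated with File::Temp purely so the snippet is self-contained:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Create a throwaway helper file that bumps a counter when executed.
my ($fh, $file) = tempfile(SUFFIX => '.pl');
print $fh 'our $count; $count++; 1;';
close $fh;

our $count = 0;
do $file;
do $file;                 # `do` re-executes the file every time
print "after two do's: $count\n";        # 2

$count = 0;
require $file;
require $file;            # no-op: $file is already in %INC
print "after two requires: $count\n";    # 1
```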

>perl main.pl
Undefined subroutine &another_include::func called at another_include.pm line 4.

For more reference (and a better tutorial), here’s another article that explains includes in more detail: Including files.

Categories: Perl

How do you read an entire file into a string?

August 2, 2009

Method #1: Use a shell command (backticks).

Sample code:

# Read entire file into string:
$output = `cat sample_file.txt`;

Comment:

This is the simplest method, but NOT recommended, for a few reasons:

  • It is platform-dependent: Windows has no cat command, so this won’t run there.
  • It launches a new shell just to capture the output, which is bad programming practice.

Method #2: Open the file through Perl.

Sample code:

# Read entire file into string:
open(FILE, '<', "sample_file.txt") or die "Error: cannot open file: $!";
$output = do { local $/; <FILE> };

Comment:

* Recommended.

This is the recommended method. It works because of the special variable $/ defined by Perl.

Normally, reading <FILE> returns one line from the file. This is because <FILE> reads until it hits the input record separator defined by $/, which is “\n” by default. By declaring a local $/ (leaving it undefined) inside the do block, the read continues until the end of the file.

Examples:

# $/ = "\n", reads until end of line.
$output = <FILE>;

# $/ = "c", reads until it hits 'c'.
$output = do {local $/="c"; <FILE> };

# $/ is undefined, reads until eof.
$output = do {local $/; <FILE> };
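Putting method #2 together as one runnable sketch (the function name slurp_file is made up for illustration; the demo reads the script itself so no extra file is needed):

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub slurp_file {
    my ($path) = @_;
    open(my $fh, '<', $path) or die "Cannot open $path: $!";
    my $contents = do { local $/; <$fh> };   # undef $/ => read to EOF
    close $fh;
    return $contents;
}

my $text = slurp_file($0);   # $0 is this script's own filename
printf "read %d bytes\n", length $text;
```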

Method #3: Use File::Slurp.

Sample code:

# Read entire file into string:
use File::Slurp qw( slurp );
$output = slurp("sample_file.txt");

Comment:

* Highest performance.

This relies on the File::Slurp module to read an entire file efficiently. Though one could use read_file() defined by File::Slurp, it’s better to use slurp(), since it will be supported in Perl 6 as part of the standard package.

Since it isn’t a standard module, File::Slurp will have to be installed on every machine that runs this code. Because of this, method 2 is preferred. However, if performance is crucial in the program, then use this method.
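If File::Slurp is installed, read_file() is also context-sensitive, which can be handy. A sketch (guarded with eval so a missing module gives a readable error; the script reads itself for the demo):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Load at runtime so a missing module fails with a clear message.
eval { require File::Slurp; 1 }
    or die "File::Slurp is not installed: $@";

my $whole = File::Slurp::read_file($0);   # scalar context: whole file
my @lines = File::Slurp::read_file($0);   # list context: one line per element
printf "%d bytes, %d lines\n", length $whole, scalar @lines;
```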

Installing this module is easy. Download it through the link, extract it, and run these commands (the last one as root):

$ perl Makefile.PL
$ make
$ make install

Nowadays Linux distributions have easy package installers. Glancing at Ubuntu, this command installs File::Slurp:

apt-get install libfile-slurp-perl

Final comment:

The first two methods are NOT good for reading really large files. Some claim that the third method can handle large files efficiently, though through some experimentation I haven’t been able to reproduce that result. Intuitively it makes sense: Perl’s buffered I/O has overhead, and File::Slurp tries to bypass it by using sysread().
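For genuinely large files, a common alternative is not to slurp at all, but to process the file in fixed-size chunks so memory use stays bounded. A sketch using sysread (the 64 KB buffer size is an arbitrary choice; the script reads itself by default for the demo):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $file = shift || $0;   # default to this script, just for the demo
open(my $fh, '<', $file) or die "Cannot open $file: $!";

my ($buf, $total) = ('', 0);
while (my $n = sysread($fh, $buf, 64 * 1024)) {
    $total += $n;         # process each chunk here instead of storing it all
}
close $fh;
print "processed $total bytes\n";
```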

Running a quick test, I found the performance between these three methods when reading a 100 MB text file:

Method #1: 1.450 seconds
Method #2: 0.754 seconds
Method #3: 0.744 seconds

This was too fast to observe memory usage, but reading a 500 MB file showed the program using up to 70% of the memory on a laptop with 1.25 GB of RAM.

For more details on this topic, there is a decent article about improving performance for large files. Though out of date on some points (File::Slurp’s implementation has improved since it was written), it has some good data.

Addendum added Aug 8th, 2009 (3rd method and expanding on final comment).

Categories: Perl