perlfaq8 - System Interaction
version 5.021011
This section of the Perl FAQ covers questions involving operating system interaction. Topics include interprocess communication (IPC), control over the user-interface (keyboard, screen and pointing devices), and most anything else not related to data manipulation.
Read the FAQs and documentation specific to the port of perl to your operating system (eg, perlvms, perlplan9, ...). These should contain more detailed information on the vagaries of your perl.
The $^O variable ($OSNAME if you use English) contains an indication of the name of the operating system (not its release number) that your perl binary was built for.
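For example, something like this prints the build-time OS name (typical values include "linux", "darwin", and "MSWin32"):
- use English qw( -no_match_vars );
- print "Built for: $^O\n";       # e.g. "linux", "MSWin32", "darwin"
- print "Same thing: $OSNAME\n";  # the English-module alias for $^O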
(contributed by brian d foy)
The exec function's job is to turn your process into another command and never to return. If that's not what you want to do, don't use exec. :)

If you want to run an external command and still keep your Perl process going, look at a piped open, fork, or system.
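For instance, system() runs a command and comes back, while exec() does not ("ls" is only an example command):
- system("ls") == 0 or warn "ls exited with status ", $? >> 8;
- print "Still here after system()\n";
- exec("ls") or die "exec failed: $!";   # on success, nothing after this line runs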
How you access/control keyboards, screens, and pointing devices ("mice") is system-dependent. Try the following modules:
Keyboard:
- Term::Cap Standard perl distribution
- Term::ReadKey CPAN
- Term::ReadLine::Gnu CPAN
- Term::ReadLine::Perl CPAN
- Term::Screen CPAN
Screen:
- Term::Cap Standard perl distribution
- Curses CPAN
- Term::ANSIColor CPAN
Mouse:
- Tk CPAN
- Wx CPAN
- Gtk2 CPAN
- Qt4 kdebindings4 package
Some of these specific cases are shown as examples in other answers in this section of the perlfaq.
In general, you don't, because you don't know whether the recipient has a color-aware display device. If you know that they have an ANSI terminal that understands color, you can use the Term::ANSIColor module from CPAN:
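A minimal sketch with its color() function might look like this:
- use Term::ANSIColor;
- print color('red'), "Stop!\n", color('reset');
- print color('green'), "Go!\n", color('reset');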
Or like this:
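That is, with the :constants import tag, which exports names such as RED, GREEN, and RESET:
- use Term::ANSIColor qw(:constants);
- print RED, "Stop!\n", RESET;
- print GREEN, "Go!\n", RESET;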
Controlling input buffering is a remarkably system-dependent matter. On many systems, you can just use the stty command as shown in getc, but as you see, that's already getting you into portability snags.
The Term::ReadKey module from CPAN offers an easy-to-use interface that should be more efficient than shelling out to stty for each key. It even includes limited support for Windows.
- use Term::ReadKey;
- ReadMode('cbreak');
- $key = ReadKey(0);
- ReadMode('normal');
However, using the code requires that you have a working C compiler and can use it to build and install a CPAN module. Here's a solution using the standard POSIX module, which is already on your system (assuming your system supports POSIX).
- use HotKey;
- $key = readkey();
And here's the HotKey module, which hides the somewhat mystifying calls to manipulate the POSIX termios structures.
- # HotKey.pm
- package HotKey;
- use strict;
- use warnings;
- use parent 'Exporter';
- our @EXPORT = qw(cbreak cooked readkey);
- use POSIX qw(:termios_h);
- my ($term, $oterm, $echo, $noecho, $fd_stdin);
- $fd_stdin = fileno(STDIN);
- $term = POSIX::Termios->new();
- $term->getattr($fd_stdin);
- $oterm = $term->getlflag();
- $echo = ECHO | ECHOK | ICANON;
- $noecho = $oterm & ~$echo;
- sub cbreak {
- $term->setlflag($noecho); # ok, so i don't want echo either
- $term->setcc(VTIME, 1);
- $term->setattr($fd_stdin, TCSANOW);
- }
- sub cooked {
- $term->setlflag($oterm);
- $term->setcc(VTIME, 0);
- $term->setattr($fd_stdin, TCSANOW);
- }
- sub readkey {
- my $key = '';
- cbreak();
- sysread(STDIN, $key, 1);
- cooked();
- return $key;
- }
- END { cooked() }
- 1;
The easiest way to do this is to read a key in nonblocking mode with the Term::ReadKey module from CPAN, passing it an argument of -1 to indicate not to block:
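Something along these lines, for example:
- use Term::ReadKey;
- ReadMode('cbreak');
- if (defined (my $char = ReadKey(-1))) {
-     # input was waiting and it was $char
- } else {
-     # no input was waiting
- }
- ReadMode('normal');    # restore normal tty settings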
(contributed by brian d foy)
To clear the screen, you just have to print the special sequence that tells the terminal to clear the screen. Once you have that sequence, output it when you want to clear the screen.
You can use the Term::ANSIScreen module to get the special sequence. Import the cls function (or the :screen tag):
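For example:
- use Term::ANSIScreen qw(cls);
- my $clear_screen = cls();
- print $clear_screen;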
The Term::Cap module can also get the special sequence if you want to deal with the low-level details of terminal control. The Tputs method returns the string for the given capability:
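Something like this, asking the terminal for its "cl" (clear) capability:
- use Term::Cap;
- my $terminal     = Term::Cap->Tgetent( { OSPEED => 9600 } );
- my $clear_string = $terminal->Tputs('cl');
- print $clear_string;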
On Windows, you can use the Win32::Console module. After creating an object for the output filehandle you want to affect, call the Cls method:
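For example:
- use Win32::Console;
- my $OUT = Win32::Console->new(STD_OUTPUT_HANDLE);
- my $clear_string = $OUT->Cls;
- print $clear_string;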
If you have a command-line program that does the job, you can call it in backticks to capture whatever it outputs so you can use it later:
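On a terminal that has a clear(1) command, for instance:
- my $clear_string = `clear`;
- print $clear_string;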
If you have the Term::ReadKey module installed from CPAN, you can use it to fetch the width and height in characters and in pixels:
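For example:
- use Term::ReadKey;
- my ($wchar, $hchar, $wpixels, $hpixels) = GetTerminalSize();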
This is more portable than the raw ioctl, but not as illustrative:
- require './sys/ioctl.ph';
- die "no TIOCGWINSZ " unless defined &TIOCGWINSZ;
- open(my $tty_fh, "+</dev/tty") or die "No tty: $!";
- unless (ioctl($tty_fh, &TIOCGWINSZ, $winsize='')) {
- die sprintf "$0: ioctl TIOCGWINSZ (%08x: $!)\n", &TIOCGWINSZ;
- }
- my ($row, $col, $xpixel, $ypixel) = unpack('S4', $winsize);
- print "(row,col) = ($row,$col)";
- print " (xpixel,ypixel) = ($xpixel,$ypixel)" if $xpixel || $ypixel;
- print "\n";
(This question has nothing to do with the web. See a different FAQ for that.)
There's an example of this in crypt in perlfunc. First, you put the terminal into "no echo" mode, then just read the password normally. You may do this with an old-style ioctl() function, POSIX terminal control (see the POSIX module documentation or the Camel Book), or a call to the stty program, with varying degrees of portability.
You can also do this for most systems using the Term::ReadKey module from CPAN, which is easier to use and in theory more portable.
This depends on which operating system your program is running on. In the case of Unix, the serial ports will be accessible through files in /dev; on other systems, device names will doubtless differ. Several problem areas common to all device interaction are the following:
Your system may use lockfiles to control multiple access. Make sure you follow the correct protocol. Unpredictable behavior can result from multiple processes reading from one device.
If you expect to use both read and write operations on the device, you'll have to open it for update (see open for details). You may wish to open it without running the risk of blocking by using sysopen() and O_RDWR|O_NDELAY|O_NOCTTY from the Fcntl module (part of the standard perl distribution). See sysopen for more on this approach.
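A sketch, assuming a hypothetical serial device at /dev/ttyS0:
- use Fcntl qw(O_RDWR O_NDELAY O_NOCTTY);
- sysopen(my $port_fh, '/dev/ttyS0', O_RDWR | O_NDELAY | O_NOCTTY)
-     or die "Can't open serial port: $!";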
Some devices will be expecting a "\r" at the end of each line rather than a "\n". In some ports of perl, "\r" and "\n" are different from their usual (Unix) ASCII values of "\015" and "\012". You may have to give the numeric values you want directly, using octal ("\015"), hex ("\x0D"), or as a control-character specification ("\cM").
Even though with normal text files a "\n" will do the trick, there is still no unified scheme for terminating a line that is portable between Unix, DOS/Win, and Macintosh, except to terminate ALL line ends with "\015\012", and strip what you don't need from the output. This applies especially to socket I/O and autoflushing, discussed next.
If you expect characters to get to your device when you print() them, you'll want to autoflush that filehandle. You can use select() and the $| variable to control autoflushing (see $| in perlvar and select, or perlfaq5, "How do I flush/unbuffer an output filehandle? Why must I do this?"):
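For example, where $dev_fh is the filehandle opened on your device:
- my $oldh = select($dev_fh);
- $| = 1;
- select($oldh);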
You'll also see code that does this without a temporary variable, as in
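- select((select($dev_fh), $| = 1)[0]);    # the classic one-line idiom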
Or if you don't mind pulling in a few thousand lines of code just because you're afraid of a little $| variable:
- use IO::Handle;
- $dev_fh->autoflush(1);
As mentioned in the previous item, this still doesn't work when using socket I/O between Unix and Macintosh. You'll need to hard code your line terminators, in that case.
If you are doing a blocking read() or sysread(), you'll have to arrange for an alarm handler to provide a timeout (see alarm). If you have a non-blocking open, you'll likely have a non-blocking read, which means you may have to use a 4-arg select() to determine whether I/O is ready on that device (see select).
While trying to read from his caller-id box, the notorious Jamie Zawinski, after much gnashing of teeth and fighting with sysread, sysopen, POSIX's tcgetattr business, and various other functions that go bump in the night, finally came up with this:
- sub open_modem {
- use IPC::Open2;
- my $stty = `/bin/stty -g`;
- open2( \*MODEM_IN, \*MODEM_OUT, "cu -l$modem_device -s2400 2>&1");
- # starting cu hoses /dev/tty's stty settings, even when it has
- # been opened on a pipe...
- system("/bin/stty $stty");
- $_ = <MODEM_IN>;
- chomp;
- if ( !m/^Connected/ ) {
- print STDERR "$0: cu printed `$_' instead of `Connected'\n";
- }
- }
You spend lots and lots of money on dedicated hardware, but this is bound to get you talked about.
Seriously, you can't if they are Unix password files--the Unix password system employs one-way encryption. It's more like hashing than encryption. The best you can do is check whether something else hashes to the same string. You can't turn a hash back into the original string. Programs like Crack can forcibly (and intelligently) try to guess passwords, but don't (can't) guarantee quick success.
If you're worried about users selecting bad passwords, you should proactively check when they try to change their password (by modifying passwd(1), for example).
(contributed by brian d foy)
There's not a single way to run code in the background so you don't have to wait for it to finish before your program moves on to other tasks. Process management depends on your particular operating system, and many of the techniques are covered in perlipc.
Several CPAN modules may be able to help, including IPC::Open2 or IPC::Open3, IPC::Run, Parallel::Jobs, Parallel::ForkManager, POE, Proc::Background, and Win32::Process. There are many other modules you might use, so check those namespaces for other options too.
If you are on a Unix-like system, you might be able to get away with a system call where you put an & on the end of the command:
- system("cmd &")
You can also try using fork, as described in perlfunc (although this is the same thing that many of the modules will do for you).
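A bare-bones fork sketch ("some_command" is only a placeholder):
- defined(my $pid = fork()) or die "Can't fork: $!";
- unless ($pid) {
-     # in the child: run the external command, then exit
-     exec('some_command') or die "Can't exec: $!";
- }
- # the parent carries on here without waiting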
Both the main process and the backgrounded one (the "child" process) share the same STDIN, STDOUT and STDERR filehandles. If both try to access them at once, strange things can happen. You may want to close or reopen these for the child. You can get around this by opening a pipe (see open) but on some systems this means that the child process cannot outlive the parent.
You'll have to catch the SIGCHLD signal, and possibly SIGPIPE too. SIGCHLD is sent when the backgrounded process finishes. SIGPIPE is sent when you write to a filehandle whose child process has closed (an untrapped SIGPIPE can cause your program to silently die). This is not an issue with system("cmd&").
You have to be prepared to "reap" the child process when it finishes.
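For instance, either of these (the second tells perl to discard exit statuses for you):
- $SIG{CHLD} = sub { wait };    # reap each child as it exits
- $SIG{CHLD} = 'IGNORE';        # or let the system reap them automatically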
You can also use a double fork. You immediately wait() for your first child, and the init daemon will wait() for your grandchild once it exits.
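A sketch of the double fork ("what you really wanna do" stands in for your command):
- my $pid;
- unless ($pid = fork) {                # in the first child
-     unless (fork) {                   # in the grandchild
-         exec "what you really wanna do";
-         die "exec failed!";
-     }
-     exit 0;                           # the first child exits immediately
- }
- waitpid($pid, 0);                     # reap the first child; init reaps the grandchild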
See Signals in perlipc for other examples of code to do this.
Zombies are not an issue with system("prog &").
You don't actually "trap" a control character. Instead, that character generates a signal which is sent to your terminal's currently foregrounded process group, which you then trap in your process. Signals are documented in Signals in perlipc and the section on "Signals" in the Camel.
You can set the values of the %SIG hash to be the functions you want to handle the signal. After perl catches the signal, it looks in %SIG for a key with the same name as the signal, then calls the subroutine value for that key.
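For example, any of these forms works (ouch() is a handler you would write yourself):
- $SIG{INT} = sub { syswrite STDERR, "ouch\n" };   # an anonymous subroutine
- $SIG{INT} = \&ouch;                              # a reference to a named subroutine
- $SIG{INT} = 'ouch';                              # or the handler's name as a string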
Perl versions before 5.8 had, in their C source code, signal handlers which would catch the signal and possibly run a Perl function that you had set in %SIG. This violated the rules of signal handling at that level, causing perl to dump core. Since version 5.8.0, perl looks at %SIG after the signal has been caught, rather than while it is being caught. Previous versions of this answer were incorrect.
If perl was installed correctly and your shadow library was written properly, the getpw*() functions described in perlfunc should in theory provide (read-only) access to entries in the shadow password file. To change the file, make a new shadow password file (the format varies from system to system--see passwd(1) for specifics) and use pwd_mkdb(8) to install it (see pwd_mkdb(8) for more details).
Assuming you're running under sufficient permissions, you should be able to set the system-wide date and time by running the date(1) program. (There is no way to set the time and date on a per-process basis.) This mechanism will work for Unix, MS-DOS, Windows, and NT; the VMS equivalent is set time.
However, if all you want to do is change your time zone, you can probably get away with setting an environment variable:
- $ENV{TZ} = "MST7MDT"; # Unixish
- $ENV{'SYS$TIMEZONE_DIFFERENTIAL'} = "-5"; # vms
- system('trn', 'comp.lang.perl.misc');
If you want finer granularity than the 1 second that the sleep() function provides, the easiest way is to use the select() function as documented in select. Try the Time::HiRes and the BSD::Itimer modules (available from CPAN; as of Perl 5.8, Time::HiRes is part of the standard distribution).
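For instance, Time::HiRes gives you fractional sleeps:
- use Time::HiRes qw(sleep usleep);
- sleep(0.25);        # this sleep() accepts fractional seconds
- usleep(250_000);    # or sleep in microseconds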
(contributed by brian d foy)
The Time::HiRes module (part of the standard distribution as of Perl 5.8) measures time with the gettimeofday() system call, which returns the time in microseconds since the epoch. If you can't install Time::HiRes for older Perls and you are on a Unixish system, you may be able to call gettimeofday(2) directly. See syscall.
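A small sketch of measuring elapsed time with it:
- use Time::HiRes qw(gettimeofday tv_interval);
- my $t0 = [gettimeofday];
- # ... do something you want to time ...
- my $elapsed = tv_interval($t0);    # floating-point seconds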
You can use the END block to simulate atexit(). Each package's END block is called when the program or thread ends. See the perlmod manpage for more details about END blocks.
For example, you can use this to make sure your filter program managed to finish its output without filling up the disk:
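Something like:
- END {
-     close(STDOUT) || die "stdout close failed: $!";
- }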
The END block isn't called when untrapped signals kill the program, though, so if you use END blocks you should also use
- use sigtrap qw(die normal-signals);
Perl's exception-handling mechanism is its eval() operator. You can use eval() as setjmp and die() as longjmp. For details of this, see the section on signals, especially the time-out handler for a blocking flock() in Signals in perlipc or the section on "Signals" in Programming Perl.
If exception handling is all you're interested in, use one of the many CPAN modules that handle exceptions, such as Try::Tiny.
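A minimal sketch with Try::Tiny (risky_operation() is a hypothetical routine that may die()):
- use Try::Tiny;
- try {
-     risky_operation();
- } catch {
-     warn "caught error: $_";    # inside catch, the error is in $_
- };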
If you want the atexit() syntax (and an rmexit() as well), try the AtExit module available from CPAN.
Some Sys-V based systems, notably Solaris 2.X, redefined some of the standard socket constants. Since these were constant across all architectures, they were often hardwired into perl code. The proper way to deal with this is to "use Socket" to get the correct values.
Note that even though SunOS and Solaris are binary compatible, these values are different. Go figure.
In most cases, you write an external module to do it--see the answer to "Where can I learn about linking C with Perl? [h2xs, xsubpp]". However, if the function is a system call, and your system supports syscall(), you can use the syscall function (documented in perlfunc).
Remember to check the modules that came with your distribution, and CPAN as well--someone may already have written a module to do it. On Windows, try Win32::API. On Macs, try Mac::Carbon. If no module has an interface to the C function, you can inline a bit of C in your Perl source with Inline::C.
Historically, these would be generated by the h2ph tool, part of the standard perl distribution. This program converts cpp(1) directives in C header files to files containing subroutine definitions, like SYS_getitimer(), which you can use as arguments to your functions. It doesn't work perfectly, but it usually gets most of the job done. Simple files like errno.h, syscall.h, and socket.h were fine, but the hard ones like ioctl.h nearly always need to be hand-edited.
Here's how to install the *.ph files:
- 1. Become the super-user
- 2. cd /usr/include
- 3. h2ph *.h */*.h
If your system supports dynamic loading, for reasons of portability and sanity you probably ought to use h2xs (also part of the standard perl distribution). This tool converts C header files to Perl extensions. See perlxstut for how to get started with h2xs.
If your system doesn't support dynamic loading, you still probably ought to use h2xs. See perlxstut and ExtUtils::MakeMaker for more information (in brief, just use make perl instead of a plain make to rebuild perl with a new static extension).
Some operating systems have bugs in the kernel that make setuid scripts inherently insecure. Perl gives you a number of options (described in perlsec) to work around such systems.
The IPC::Open2 module (part of the standard perl distribution) is an easy-to-use approach that internally uses pipe(), fork(), and exec() to do the job. Make sure you read the deadlock warnings in its documentation, though (see IPC::Open2). See Bidirectional Communication with Another Process in perlipc and Bidirectional Communication with Yourself in perlipc.
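A small sketch, talking to the bc(1) calculator (assuming it is installed); open2() throws an exception if it can't start the command:
- use IPC::Open2;
- my $pid = open2(my $from_bc, my $to_bc, 'bc');
- print {$to_bc} "3 * 7\n";
- my $answer = <$from_bc>;    # "21\n"
- close $to_bc;
- waitpid($pid, 0);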
You may also use the IPC::Open3 module (part of the standard perl distribution), but be warned that it has a different order of arguments from IPC::Open2 (see IPC::Open3).
You're confusing the purpose of system() and backticks (``). system() runs a command and returns exit status information (as a 16 bit value: the low 7 bits are the signal the process died from, if any, and the high 8 bits are the actual exit value). Backticks (``) run a command and return what it sent to STDOUT.
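For example ("mail-users" and "ls" are only stand-in commands):
- my $exit_status   = system("mail-users");    # the command's wait status
- my $output_string = `ls`;                     # the command's STDOUT as a string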
There are three basic ways of running external commands:
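Sketched with a placeholder command in $cmd:
- system($cmd);                 # using system()
- my $output = `$cmd`;          # using backticks (``)
- open(my $pipe_fh, "$cmd |");  # using open() on a pipe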
With system(), both STDOUT and STDERR will go the same place as the script's STDOUT and STDERR, unless the system() command redirects them. Backticks and open() read only the STDOUT of your command.
You can also use the open3() function from IPC::Open3. Benjamin Goldberg provides some sample code:
To capture a program's STDOUT, but discard its STDERR:
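Something along these lines ("cmd" is a placeholder for your command):
- use IPC::Open3;
- use File::Spec;
- use Symbol qw(gensym);
- open(NULL, ">", File::Spec->devnull);
- my $pid = open3(gensym, \*PH, ">&NULL", "cmd");
- while( <PH> ) { }
- waitpid($pid, 0);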
To capture a program's STDERR, but discard its STDOUT:
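Much the same, with the output and error positions swapped:
- use IPC::Open3;
- use File::Spec;
- use Symbol qw(gensym);
- open(NULL, ">", File::Spec->devnull);
- my $pid = open3(gensym, ">&NULL", \*PH, "cmd");
- while( <PH> ) { }
- waitpid($pid, 0);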
To capture a program's STDERR, and let its STDOUT go to our own STDERR:
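Along the same lines:
- use IPC::Open3;
- use Symbol qw(gensym);
- my $pid = open3(gensym, ">&STDERR", \*PH, "cmd");
- while( <PH> ) { }
- waitpid($pid, 0);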
To read both a command's STDOUT and its STDERR separately, you can redirect them to temp files, let the command run, then read the temp files:
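For example, with two anonymous temporary files:
- use IPC::Open3;
- use Symbol qw(gensym);
- use IO::File;
- local *CATCHOUT = IO::File->new_tmpfile;
- local *CATCHERR = IO::File->new_tmpfile;
- my $pid = open3(gensym, ">&CATCHOUT", ">&CATCHERR", "cmd");
- waitpid($pid, 0);
- seek $_, 0, 0 for \*CATCHOUT, \*CATCHERR;
- while( <CATCHOUT> ) {}
- while( <CATCHERR> ) {}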
But there's no real need for both to be tempfiles... the following should work just as well, without deadlocking:
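That is, read STDOUT from a pipe and send only STDERR to a temporary file:
- use IPC::Open3;
- use Symbol qw(gensym);
- use IO::File;
- local *CATCHERR = IO::File->new_tmpfile;
- my $pid = open3(gensym, \*CATCHOUT, ">&CATCHERR", "cmd");
- while( <CATCHOUT> ) {}
- waitpid($pid, 0);
- seek CATCHERR, 0, 0;
- while( <CATCHERR> ) {}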
And it'll be faster, too, since we can begin processing the program's stdout immediately, rather than waiting for the program to finish.
With any of these, you can change file descriptors before the call:
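For instance ("logfile" is just a placeholder name):
- open(STDOUT, ">", "logfile");
- system("ls");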
or you can use Bourne shell file-descriptor redirection:
- $output = `$cmd 2>some_file`;
- open (PIPE, "cmd 2>some_file |");
You can also use file-descriptor redirection to make STDERR a duplicate of STDOUT:
- $output = `$cmd 2>&1`;
- open (PIPE, "cmd 2>&1 |");
Note that you cannot simply open STDERR to be a dup of STDOUT in your Perl program and avoid calling the shell to do the redirection. This doesn't work:
- open(STDERR, ">&STDOUT");
- $alloutput = `cmd args`; # stderr still escapes
This fails because the open() makes STDERR go to where STDOUT was going at the time of the open(). The backticks then make STDOUT go to a string, but don't change STDERR (which still goes to the old STDOUT).
Note that you must use Bourne shell (sh(1)) redirection syntax in backticks, not csh(1)! Details on why Perl's system() and backtick and pipe opens all use the Bourne shell are in the versus/csh.whynot article in the "Far More Than You Ever Wanted To Know" collection in http://www.cpan.org/misc/olddoc/FMTEYEWTK.tgz . To capture a command's STDERR and STDOUT together:
- $output = `cmd 2>&1`; # either with backticks
- $pid = open(PH, "cmd 2>&1 |"); # or with an open pipe
- while (<PH>) { } # plus a read
To capture a command's STDOUT but discard its STDERR:
- $output = `cmd 2>/dev/null`; # either with backticks
- $pid = open(PH, "cmd 2>/dev/null |"); # or with an open pipe
- while (<PH>) { } # plus a read
To capture a command's STDERR but discard its STDOUT:
- $output = `cmd 2>&1 1>/dev/null`; # either with backticks
- $pid = open(PH, "cmd 2>&1 1>/dev/null |"); # or with an open pipe
- while (<PH>) { } # plus a read
To exchange a command's STDOUT and STDERR in order to capture the STDERR but leave its STDOUT to come out our old STDERR:
- $output = `cmd 3>&1 1>&2 2>&3 3>&-`; # either with backticks
- $pid = open(PH, "cmd 3>&1 1>&2 2>&3 3>&-|");# or with an open pipe
- while (<PH>) { } # plus a read
To read both a command's STDOUT and its STDERR separately, it's easiest to redirect them separately to files, and then read from those files when the program is done:
- system("program args 1>program.stdout 2>program.stderr");
Ordering is important in all these examples. That's because the shell processes file descriptor redirections in strictly left to right order.
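Compare these two invocations ("prog args" and "tmpfile" are placeholders):
- system("prog args 1>tmpfile 2>&1");
- system("prog args 2>&1 1>tmpfile");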
The first command sends both standard out and standard error to the temporary file. The second command sends only the old standard output there, and the old standard error shows up on the old standard out.
If the second argument to a piped open() contains shell metacharacters, perl fork()s, then exec()s a shell to decode the metacharacters and eventually run the desired program. If the program couldn't be run, it's the shell that gets the message, not Perl. All your Perl program can find out is whether the shell itself could be successfully started. You can still capture the shell's STDERR and check it for error messages. See "How can I capture STDERR from an external command?" elsewhere in this document, or use the IPC::Open3 module.
If there are no shell metacharacters in the argument of open(), Perl runs the command directly, without using the shell, and can correctly report whether the command started.
Strictly speaking, nothing. Stylistically speaking, it's not a good way to write maintainable code. Perl has several operators for running external commands. Backticks are one; they collect the output from the command for use in your program. The system function is another; it doesn't do this.
Writing backticks in your program sends a clear message to the readers of your code that you wanted to collect the output of the command. Why send a clear message that isn't true?
Consider this line:
- `cat /etc/termcap`;
You forgot to check $? to see whether the program even ran correctly. Even if you wrote
- print `cat /etc/termcap`;
this code could and probably should be written as something like
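- system("cat /etc/termcap") == 0
-     or die "cat program failed!";    # a sketch: check system()'s status instead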
which will echo the cat command's output as it is generated, instead of waiting until the program has completed to print it out. It also checks the return value.
system also provides direct control over whether shell wildcard processing may take place, whereas backticks do not.
This is a bit tricky. You can't simply write the command like this:
- @ok = `grep @opts '$search_string' @filenames`;
As of Perl 5.8.0, you can use open() with multiple arguments. Just like the list forms of system() and exec(), no shell escapes happen.
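For instance, a sketch reusing the variables from above:
- open( my $grep_fh, "-|", 'grep', @opts, $search_string, @filenames )
-     or die "Can't start grep: $!";
- chomp(my @ok = <$grep_fh>);
- close $grep_fh;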
You can also:
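For example, fork an explicit child that execs grep and read its output through the pipe (a sketch):
- my @ok = ();
- defined(my $pid = open(my $kid_fh, "-|")) or die "can't fork: $!";
- if ($pid) {                      # parent: read the child's STDOUT
-     while (<$kid_fh>) {
-         chomp;
-         push @ok, $_;
-     }
-     close $kid_fh;
- } else {                         # child: becomes grep
-     exec 'grep', @opts, $search_string, @filenames;
- }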
Just as with system(), no shell escapes happen when you exec() a list. Further examples of this can be found in Safe Pipe Opens in perlipc.
Note that if you're using Windows, no solution to this vexing issue is even possible. Even though Perl emulates fork(), you'll still be stuck, because Windows does not have an argc/argv-style API.
This happens only if your perl is compiled to use stdio instead of perlio, which is the default. Some (maybe all?) stdios set error and eof flags that you may need to clear. The POSIX module defines clearerr() that you can use. That is the technically correct way to do it. Here are some less reliable workarounds:
Try keeping around the seekpointer and go there, like this:
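For example, where $log_fh is the filehandle you're reading:
- my $where = tell($log_fh);
- seek($log_fh, $where, 0);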
If that doesn't work, try seeking to a different part of the file and then back.
If that doesn't work, try seeking to a different part of the file, reading something, and then seeking back.
If that doesn't work, give up on your stdio package and use sysread.
Learn Perl and rewrite it. Seriously, there's no simple converter. Things that are awkward to do in the shell are easy to do in Perl, and this very awkwardness is what would make a shell->perl converter nigh-on impossible to write. By rewriting it, you'll think about what you're really trying to do, and hopefully will escape the shell's pipeline datastream paradigm, which while convenient for some matters, causes many inefficiencies.
Try the Net::FTP, TCP::Client, and Net::Telnet modules (available from CPAN). http://www.cpan.org/scripts/netstuff/telnet.emul.shar will also help for emulating the telnet protocol, but Net::Telnet is quite probably easier to use.
If all you want to do is pretend to be telnet but don't need the initial telnet handshaking, then the standard dual-process approach will suffice:
- use IO::Socket; # new in 5.004
- my $handle = IO::Socket::INET->new('www.perl.com:80')
- or die "can't connect to port 80 on www.perl.com $!";
- $handle->autoflush(1);
- if (fork()) { # XXX: undef means failure
- select($handle);
- print while <STDIN>; # everything from stdin to socket
- } else {
- print while <$handle>; # everything from socket to stdout
- }
- close $handle;
- exit;
Once upon a time, there was a library called chat2.pl (part of the standard perl distribution), which never really got finished. If you find it somewhere, don't use it. These days, your best bet is to look at the Expect module available from CPAN, which also requires two other modules from CPAN, IO::Pty and IO::Stty.
First of all note that if you're doing this for security reasons (to avoid people seeing passwords, for example) then you should rewrite your program so that critical information is never given as an argument. Hiding the arguments won't make your program completely secure.
To actually alter the visible command line, you can assign to the variable $0 as documented in perlvar. This won't work on all operating systems, though. Daemon programs like sendmail place their state there, as in:
- $0 = "orcus [accepting connections]";
In the strictest sense, it can't be done--the script executes as a different process from the shell it was started from. Changes to a process are not reflected in its parent--only in any children created after the change. There is shell magic that may allow you to fake it by eval()ing the script's output in your shell; check out the comp.unix.questions FAQ for details.
Assuming your system supports such things, just send an appropriate signal to the process (see kill). It's common to first send a TERM signal, wait a little bit, and then send a KILL signal to finish it off.
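A sketch, where $pid is the process ID you want to stop:
- kill 'TERM', $pid;                    # ask politely first
- sleep 2;
- kill 'KILL', $pid if kill 0, $pid;    # if it's still alive, force it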
If by daemon process you mean one that's detached (disassociated from its tty), then the following process is reported to work on most Unixish systems. Non-Unix users should check their Your_OS::Process module for other solutions.
Open /dev/tty and use the TIOCNOTTY ioctl on it. See tty(1) for details. Or better yet, you can just use the POSIX::setsid() function, so you don't have to worry about process groups.
Change directory to /
Reopen STDIN, STDOUT, and STDERR so they're not connected to the old tty.
Background yourself like this:
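That is, something like:
- fork && exit;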
The Proc::Daemon module, available from CPAN, provides a function to perform these actions for you.
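A sketch of that convenience (Proc::Daemon::Init does the chdir, detach, and filehandle shuffling described above):
- use Proc::Daemon;
- Proc::Daemon::Init();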
(contributed by brian d foy)
This is a difficult question to answer, and the best answer is only a guess.
What do you really want to know? If you merely want to know if one of your filehandles is connected to a terminal, you can try the -t file test:
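For example:
- if( -t STDOUT ) {
-     print "I'm connected to a terminal!\n";
- }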
However, you might be out of luck if you expect that means there is a real person on the other side. With the Expect module, another program can pretend to be a person. The program might even come close to passing the Turing test.
The IO::Interactive module does the best it can to give you an answer. Its is_interactive function returns an output filehandle; that filehandle points to standard output if the module thinks the session is interactive. Otherwise, the filehandle is a null handle that simply discards the output:
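Something like this (is_interactive has to be imported explicitly):
- use IO::Interactive qw(is_interactive);
- print { is_interactive() } "I might go to standard output!\n";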
This still doesn't guarantee that a real person is answering your prompts or reading your output.
If you want to know how to handle automated testing for your distribution, you can check the environment. The CPAN Testers, for instance, set the value of AUTOMATED_TESTING:
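For instance:
- unless( $ENV{AUTOMATED_TESTING} ) {
-     print "Hello, interactive tester!\n";
- }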
Use the alarm() function, probably in conjunction with a signal handler, as documented in Signals in perlipc and the section on "Signals" in the Camel. You may instead use the more flexible Sys::AlarmCall module available from CPAN.
The alarm() function is not implemented on all versions of Windows. Check the documentation for your specific version of Perl.
(contributed by Xho)
Use the BSD::Resource module from CPAN. As an example:
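Something like this, using setrlimit() and the RLIMIT_CPU constant that BSD::Resource exports:
- use BSD::Resource;
- setrlimit(RLIMIT_CPU, 10, 20) or die $!;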
This sets the soft and hard limits to 10 and 20 seconds, respectively. After 10 seconds of time spent running on the CPU (not "wall" time), the process will be sent a signal (XCPU on some systems) which, if not trapped, will cause the process to terminate. If that signal is trapped, then after 10 more seconds (20 seconds in total) the process will be killed with a non-trappable signal.
See the BSD::Resource documentation and your system's documentation for the gory details.
Use the reaper code from Signals in perlipc to call wait() when a SIGCHLD is received, or else use the double-fork technique described in "How do I start a process in the background?" in perlfaq8.
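A sketch of a typical reaper handler:
- use POSIX qw(WNOHANG);
- $SIG{CHLD} = sub {
-     1 while waitpid(-1, WNOHANG) > 0;    # reap every child that has exited
- };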
The DBI module provides an abstract interface to most database servers and types, including Oracle, DB2, Sybase, mysql, Postgresql, ODBC, and flat files. The DBI module accesses each database type through a database driver, or DBD. You can see a complete list of available drivers on CPAN: http://www.cpan.org/modules/by-module/DBD/ . You can read more about DBI on http://dbi.perl.org/ .
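A minimal sketch, assuming the DBD::SQLite driver and a hypothetical test.db database with a users table:
- use DBI;
- my $dbh  = DBI->connect("dbi:SQLite:dbname=test.db", "", "", { RaiseError => 1 });
- my $rows = $dbh->selectall_arrayref("SELECT name FROM users");
- $dbh->disconnect;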
Other modules provide more specific access: Win32::ODBC, Alzabo, iodbc, and others found on CPAN Search: http://search.cpan.org/ .
You can't. You need to imitate the system() call (see perlipc for sample code) and then have a signal handler for the INT signal that passes the signal on to the subprocess. Or you can check for it:
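For example, with a placeholder command in $cmd:
- my $rc = system($cmd);
- if ($rc & 127) { die "signal death" }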
If you're lucky enough to be using a system that supports non-blocking reads (most Unixish systems do), you need only to use the O_NDELAY or O_NONBLOCK flag from the Fcntl module in conjunction with sysopen():
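A sketch ("/foo/somefile" is a placeholder path):
- use Fcntl;
- sysopen(my $fh, "/foo/somefile", O_NDELAY|O_NONBLOCK|O_RDONLY)
-     or die "can't open /foo/somefile: $!";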
(answer contributed by brian d foy)
When you run a Perl script, something else is running the script for you, and that something else may output error messages. The script might emit its own warnings and error messages. Most of the time you cannot tell who said what.
You probably cannot fix the thing that runs perl, but you can change how perl outputs its warnings by defining custom warning and die handlers.
Consider this script, which has an error you may not notice immediately.
- #!/usr/locl/bin/perl
- print "Hello World\n";
I get an error when I run this from my shell (which happens to be bash). That may look like perl forgot it has a print() function, but my shebang line is not the path to perl, so the shell runs the script, and I get the error.
- $ ./test
- ./test: line 3: print: command not found
A quick and dirty fix involves a little bit of code, but this may be all you need to figure out the problem.
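Something like this, installed in a BEGIN block:
- BEGIN {
-     $SIG{__WARN__} = sub { print STDERR "Perl: ", @_ };
-     $SIG{__DIE__}  = sub { print STDERR "Perl: ", @_; exit 1 };
- }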
The perl message comes out with "Perl" in front. The BEGIN block works at compile time so all of the compilation errors and warnings get the "Perl:" prefix too.
- Perl: Useless use of division (/) in void context at ./test line 9.
- Perl: Name "main::a" used only once: possible typo at ./test line 8.
- Perl: Name "main::x" used only once: possible typo at ./test line 9.
- Perl: Use of uninitialized value in addition (+) at ./test line 8.
- Perl: Use of uninitialized value in division (/) at ./test line 9.
- Perl: Illegal division by zero at ./test line 9.
- Perl: Illegal division by zero at -e line 3.
If I don't see that "Perl:", it's not from perl.
You could also just know all the perl errors, and although there are some people who may know all of them, you probably don't. However, they all should be in the perldiag manpage. If you don't find the error in there, it probably isn't a perl error.
Looking up every message is not the easiest way, so let perl do it for you. Use the diagnostics pragma, which turns perl's normal messages into longer discussions on the topic.
- use diagnostics;
If you don't get a paragraph or two of expanded discussion, it might not be perl's message.
(contributed by brian d foy)
The easiest way is to have a module also named CPAN do it for you by using the cpan command that comes with Perl. You can give it a list of modules to install:
- $ cpan IO::Interactive Getopt::Whatever
If you prefer CPANPLUS, it's just as easy:
- $ cpanp i IO::Interactive Getopt::Whatever
If you want to install a distribution from the current directory, you can tell CPAN.pm to install . (the full stop):
- $ cpan .
See the documentation for either of those commands to see what else you can do.
If you want to try to install a distribution by yourself, resolving all dependencies on your own, you follow one of two possible build paths.
For distributions that use Makefile.PL:
- $ perl Makefile.PL
- $ make test install
For distributions that use Build.PL:
- $ perl Build.PL
- $ ./Build test
- $ ./Build install
Some distributions may need to link to libraries or other third-party code and their build and installation sequences may be more complicated. Check any README or INSTALL files that you may find.
(contributed by brian d foy)
Perl runs the require statement at run-time. Once Perl loads, compiles, and runs the file, it doesn't do anything else. The use statement is the same as a require run at compile-time, but Perl also calls the import method for the loaded package. These two are the same:
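That is, with MODULE standing in for the module's name:
- use MODULE;
- # ...is equivalent to...
- BEGIN {
-     require MODULE;
-     MODULE->import;
- }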
However, you can suppress the import by using an explicit, empty import list. Both of these still happen at compile-time:
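Again with MODULE as a placeholder:
- use MODULE ();
- # ...is equivalent to...
- BEGIN {
-     require MODULE;
- }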
Since use will also call the import method, the actual value for MODULE must be a bareword. That is, use cannot load files by name, although require can:
- require "$ENV{HOME}/lib/Foo.pm"; # no @INC searching!
See the entry for use in perlfunc for more details.
When you build modules, tell Perl where to install the modules.
If you want to install modules for your own use, the easiest way might be local::lib, which you can download from CPAN. It sets various installation settings for you, and uses those same settings within your programs.
If you want more flexibility, you need to configure your CPAN client for your particular situation.
For Makefile.PL-based distributions, use the INSTALL_BASE option when generating Makefiles:
- perl Makefile.PL INSTALL_BASE=/mydir/perl
You can set this in your CPAN.pm configuration so modules automatically install in your private library directory when you use the CPAN.pm shell:
- % cpan
- cpan> o conf makepl_arg INSTALL_BASE=/mydir/perl
- cpan> o conf commit
For Build.PL-based distributions, use the --install_base option:
- perl Build.PL --install_base /mydir/perl
You can configure CPAN.pm to automatically use this option too:
- % cpan
- cpan> o conf mbuild_arg "--install_base /mydir/perl"
- cpan> o conf commit
INSTALL_BASE tells these tools to put your modules into /mydir/perl/lib/perl5. See How do I add a directory to my include path (@INC) at runtime? for details on how to run your newly installed modules.
There is one caveat with INSTALL_BASE, though, since it acts differently from the PREFIX and LIB settings that older versions of ExtUtils::MakeMaker advocated. INSTALL_BASE does not support installing modules for multiple versions of Perl or different architectures under the same directory. You should consider whether you really want that and, if you do, use the older PREFIX and LIB settings. See the ExtUtils::MakeMaker documentation for more details.
(contributed by brian d foy)
If you know the directory already, you can add it to @INC as you would for any other directory. You might use lib if you know the directory at compile time:
- use lib $directory;
The trick in this task is to find the directory. Before your script does anything else (such as a chdir), you can get the current working directory with the Cwd module, which comes with Perl:
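For example (a sketch; the BEGIN block runs early enough that the later use lib sees the value):
- our $directory;
- BEGIN {
-     use Cwd;
-     $directory = cwd;    # the directory the program was started from
- }
- use lib $directory;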
You can do a similar thing with the value of $0, which holds the script name. That might hold a relative path, but rel2abs can turn it into an absolute path. Once you have the script's absolute path, you can take its directory and add that to @INC:
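A compact sketch of that approach:
- use File::Spec::Functions qw(rel2abs);
- use File::Basename qw(dirname);
- use lib dirname( rel2abs($0) );    # add the script's own directory to @INC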
The FindBin module, which comes with Perl, might work. It finds the directory of the currently running script and puts it in $Bin, which you can then use to construct the right library path:
- use FindBin qw($Bin);
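For instance, if your modules live in a lib directory next to the script (an assumed layout):
- use lib "$Bin/../lib";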
You can also use local::lib to do much of the same thing. Install modules using local::lib's settings then use the module in your program:
- use local::lib; # sets up a local lib at ~/perl5
See the local::lib documentation for more details.
Here are the suggested ways of modifying your include path, including environment variables, run-time switches, and in-code statements:
The PERLLIB environment variable:
- $ export PERLLIB=/path/to/my/dir
- $ perl program.pl
The PERL5LIB environment variable:
- $ export PERL5LIB=/path/to/my/dir
- $ perl program.pl
The perl -Idir command line flag:
- $ perl -I/path/to/my/dir program.pl
The lib pragma:
- use lib "$ENV{HOME}/myown_perllib";
The last is particularly useful because it knows about machine-dependent architectures. The lib.pm pragmatic module was first included with the 5.002 release of Perl.
Modules are installed on a case-by-case basis (as provided by the methods described in the previous section), and in the operating system. All of these paths are stored in @INC, which you can display with the one-liner
- perl -e 'print join("\n",@INC,"")'
The same information is displayed at the end of the output from the command
- perl -V
To find out where a module's source code is located, use
- perldoc -l Encode
to display the path to the module. In some cases (for example, the AutoLoader module), this command will show the path to a separate pod file; the module itself should be in the same directory, with a 'pm' file extension.
It's a Perl 4 style file defining values for system networking constants. Sometimes it is built using h2ph when Perl is installed, but other times it is not. Modern programs should use use Socket; instead.
Copyright (c) 1997-2010 Tom Christiansen, Nathan Torkington, and other authors as noted. All rights reserved.
This documentation is free; you can redistribute it and/or modify it under the same terms as Perl itself.
Irrespective of its distribution, all code examples in this file are hereby placed into the public domain. You are permitted and encouraged to use this code in your own programs for fun or for profit as you see fit. A simple comment in the code giving credit would be courteous but is not required.