Date: Mon, 19 Apr 1999 11:40:27 +0200
From: Wojciech Purczynski <[email protected]>
To: [email protected]Subject: Buffer overflow in BASH
A few days ago I found a buffer overflow in BASH.
BASH allocates memory incorrectly for lines read from redirected
standard input. If you use the CMD << _EOF_WORD_ operator to
redirect standard input, BASH reads the following lines from
the command input (either a tty or a shell script) into
dynamically allocated memory until it encounters _EOF_WORD_.
BASH allocates only 1000 bytes for the first line, regardless of
the line's length. I looked at the source code, and this is what
I found in 'make_cmd.c':
  if (len + document_index >= document_size)
    {
      document_size = document_size ? 2 * (document_size + len)
                                    : 1000;  /* XXX */
      document = xrealloc (document, document_size);
    }
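On the first line of a here-document, document_size is still zero,
so the buffer is grown to a flat 1000 bytes no matter how large len
is, and the line is copied in anyway. Here is a minimal standalone
sketch of just that size arithmetic (not the bash code itself; the
1100 is an arbitrary length over 1000):

--- start of overflow-demo.c ---
#include <stdio.h>

int
main (void)
{
  int document_size = 0;        /* no buffer allocated yet */
  int document_index = 0;       /* nothing stored yet */
  int len = 1100;               /* first line longer than 1000 bytes */

  /* the growth step from make_cmd.c */
  if (len + document_index >= document_size)
    document_size = document_size ? 2 * (document_size + len) : 1000;

  /* bash then copies len bytes into a document_size-byte buffer */
  printf ("buffer: %d bytes, line: %d bytes, written past end: %d bytes\n",
          document_size, len, len - document_size);
  return 0;
}
--- end of overflow-demo.c ---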
So, if we type a line longer than 1000 characters, BASH will exit
with a message like 'Segmentation fault (core dumped)'
(it's my favorite :) ).
Here is an example script:
--- start of test.sh ---
#!/bin/bash
cat << _EOF_
_here_should_be_line_longer_than_1000_bytes________
_EOF_
--- end of test.sh ---
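Typing a line of over 1000 characters by hand is impractical, so
here is a throwaway helper (gen_test.c, my own name for it) that
writes the script above with an 1100-byte line; any length over
1000 will do:

--- start of gen_test.c ---
#include <stdio.h>

int
main (void)
{
  FILE *f = fopen ("test.sh", "w");
  int i;

  if (f == NULL)
    return 1;
  fputs ("#!/bin/bash\ncat << _EOF_\n", f);
  for (i = 0; i < 1100; i++)    /* anything over 1000 triggers the bug */
    fputc ('A', f);
  fputs ("\n_EOF_\n", f);
  fclose (f);
  return 0;
}
--- end of gen_test.c ---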
I have a question for the authors of BASH:
What does '/* XXX */' mean? That comment is not mine!
Fix:
Just replace '1000' with '1000+len' and everything should be OK:
with document_size and document_index both zero at that point, the
buffer then always has room for the len bytes about to be copied.
Patch:
--- start of bash-1.14.7-redir.patch ---
--- make_cmd.c	Fri Jul 1 01:15:03 1994
+++ make_cmd.c.redir	Mon Apr 5 22:33:43 1999
@@ -424,7 +424,7 @@
   if (len + document_index >= document_size)
     {
       document_size = document_size ? 2 * (document_size + len)
-                                    : 1000;  /* XXX */
+                                    : 1000+len;  /* much better, huh? */
       document = xrealloc (document, document_size);
     }
--- end of bash-1.14.7-redir.patch ---
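The patch should apply from the bash source directory with plain
patch(1), e.g. 'patch < bash-1.14.7-redir.patch' (the .redir name
in the header is just my working copy).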
I think that all versions up to 1.14.7 have this bug, but I have
not had time to check them all.
Vooyec <[email protected]>