TODO file for gzip.
Copyright (C) 1999, 2001 Free Software Foundation, Inc.
Copyright (C) 1992, 1993 Jean-loup Gailly
This file is part of gzip (GNU zip).
gzip is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
gzip is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with gzip; see the file COPYING. If not, write to
the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA.
Some of the planned features include:
- Internationalize by using gettext and setlocale.
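
  A rough illustration of the usual gettext/setlocale setup follows; the
  "gzip" text domain and the LOCALEDIR value are assumptions made for this
  sketch, not something taken from the gzip sources:

    /* Minimal sketch of GNU gettext initialization; add -lintl on
       systems where libintl is a separate library.  */
    #include <libintl.h>
    #include <locale.h>
    #include <stdio.h>

    #define _(msgid) gettext (msgid)
    #define LOCALEDIR "/usr/share/locale"   /* assumed catalog directory */

    int
    main (int argc, char **argv)
    {
      (void) argc;
      setlocale (LC_ALL, "");              /* honor the user's locale */
      bindtextdomain ("gzip", LOCALEDIR);  /* where the .mo catalogs live */
      textdomain ("gzip");                 /* default domain for _() */

      printf (_("%s: done\n"), argv[0]);
      return 0;
    }
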
- Structure the sources so that the compression and decompression code
form a library usable by any program, and write both gzip and zip on
top of this library. This would ideally be a reentrant (thread-safe)
library, but this would degrade performance. In the meantime, you can
look at the sample program zread.c.
The library should have one mode in which compressed data is sent
as soon as input is available, instead of waiting for complete
blocks. This can be useful for sending compressed data to/from interactive
programs.
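
  A sketch of what such a reentrant streaming interface might look like is
  given below; every name in it (gz_stream, gz_init, gz_deflate, gz_end,
  GZ_FLUSH) is hypothetical, not an existing gzip function:

    /* Hypothetical interface only; none of these names exist in gzip.  */
    #include <stddef.h>

    typedef struct gz_stream
    {
      const unsigned char *next_in;   /* next input byte */
      size_t avail_in;                /* bytes available at next_in */
      unsigned char *next_out;        /* where to store the next output byte */
      size_t avail_out;               /* room left at next_out */
      void *state;                    /* private per-stream state, so several
                                         streams can be active at once */
    } gz_stream;

    #define GZ_NO_FLUSH 0   /* wait for complete blocks */
    #define GZ_FLUSH    1   /* emit whatever has been compressed so far */

    int gz_init (gz_stream *s, int level);      /* set up a compression stream */
    int gz_deflate (gz_stream *s, int flush);   /* consume input, emit output */
    int gz_end (gz_stream *s);                  /* free the private state */

  Keeping all state behind the pointer in the structure is what would make
  the library reentrant, and a nonzero flush argument covers the interactive
  case described above.
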
- Make it convenient to define alternative user interfaces (in
particular for windowing environments).
- Support in-memory compression for arbitrarily large amounts of data
(zip currently supports in-memory compression only for a single buffer.)
- Map files in memory when possible; this is generally much faster
than read/write. (zip currently maps entire files at once; this
should be done in chunks to reduce memory usage.)
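
  For illustration, mapping a file one window at a time with POSIX mmap()
  could look like the sketch below; CHUNK and process_mapped are made-up
  names for the sketch, and the only real constraint is that mmap offsets
  stay page aligned (any power-of-two window at least as large as the page
  size satisfies this):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define CHUNK (1024 * 1024)   /* illustrative window size */

    static int
    process_mapped (const char *name)
    {
      struct stat st;
      off_t off;
      int fd = open (name, O_RDONLY);

      if (fd < 0)
        return -1;
      if (fstat (fd, &st) != 0)
        {
          close (fd);
          return -1;
        }
      for (off = 0; off < st.st_size; off += CHUNK)
        {
          size_t n = st.st_size - off < CHUNK ? st.st_size - off : CHUNK;
          unsigned char *p = mmap (NULL, n, PROT_READ, MAP_PRIVATE, fd, off);

          if (p == MAP_FAILED)
            {
              close (fd);
              return -1;
            }
          /* ... feed the n bytes at p to the compressor here ... */
          munmap (p, n);
        }
      close (fd);
      return 0;
    }
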
- Add a super-fast compression method, suitable for implementing
file systems with transparent compression. One problem is that the
best candidate (lzrw1) is patented twice (Waterworth 4,701,745
and Gibson & Graybill 5,049,881). The lzrw series of algorithms
are available by ftp in ftp.adelaide.edu.au:/pub/compression/lzrw*.
- Add a super-tight (but slow) compression method, suitable for long
term archives. One problem is that the best versions of arithmetic
coding are patented (4,286,256 4,295,125 4,463,342 4,467,317
4,633,490 4,652,856 4,891,643 4,905,297 4,935,882 4,973,961
5,023,611 5,025,258).
Note: I will introduce new compression methods only if they are
significantly better in either speed or compression ratio than the
existing method(s). So the total number of different methods should
reasonably not exceed 3. (The current 9 compression levels are just
tuning parameters for a single method, deflation.)
- Add optional error correction. One problem is that the current version
of ecc cannot recover from inserted or missing bytes. It would be
nice to recover from the most common error (transfer of a binary
file in ASCII mode).
- Add a block size (-b) option to improve error recovery in case of
failure of a complete sector. Each block could be extracted
independently, but this reduces the compression ratio.
For one possible approach to this, please see:
http://www.samba.org/netfilter/diary/gzip.rsync.patch
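
  Purely as an illustration of the block idea (zlib here is only a stand-in,
  not gzip's own code, and BLOCKSIZE is a made-up -b value): zlib's
  Z_FULL_FLUSH ends the current block and resets the compression state, so
  a decoder can resynchronize at the next flush point even if an earlier
  block is damaged, at some cost in compression ratio as noted above.

    /* cc sketch.c -lz */
    #include <string.h>
    #include <zlib.h>

    #define BLOCKSIZE (64 * 1024)   /* illustrative block size */

    /* Compress len bytes from in into out/outsize, inserting an independent
       flush point after every BLOCKSIZE bytes of input.  Stores the number
       of compressed bytes in *outlen; returns 0 on success, -1 on error
       (including running out of output space).  */
    static int
    deflate_blocks (const unsigned char *in, size_t len,
                    unsigned char *out, size_t outsize, size_t *outlen)
    {
      z_stream s;
      size_t pos = 0;
      int ret = Z_OK;

      memset (&s, 0, sizeof s);
      if (deflateInit (&s, Z_DEFAULT_COMPRESSION) != Z_OK)
        return -1;
      s.next_out = out;
      s.avail_out = outsize;
      while (pos < len)
        {
          size_t n = len - pos < BLOCKSIZE ? len - pos : BLOCKSIZE;

          s.next_in = (unsigned char *) in + pos;
          s.avail_in = n;
          pos += n;
          /* Z_FULL_FLUSH closes the block and resets the dictionary;
             Z_FINISH terminates the stream after the last block.  */
          ret = deflate (&s, pos == len ? Z_FINISH : Z_FULL_FLUSH);
          if (ret == Z_STREAM_ERROR)
            break;
        }
      *outlen = s.total_out;
      deflateEnd (&s);
      return ret == Z_STREAM_END ? 0 : -1;
    }
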
- Use a larger window size to deal with some large redundant files that
'compress' currently handles better than gzip.
- Implement the -e (encrypt) option.
Send comments to <bug-gzip@gnu.org>.