
Re: Aspell-en license Once again.

On Tue, 5 Nov 2002, Walter Landry wrote:

> Jeff Licquia <licquia@debian.org> wrote:
> > I vote that we treat the copyright to this list the same way we treat
> > patents generally: wait for someone to complain before pulling the
> > list.  The situation is analogous; just as we cannot know which patents
> > we currently infringe upon, given the volume of patents and the volume
> > of code we distribute, so here we cannot know which copyrights we
> > infringe upon, due to our disconnection from their original authors.
> Similar arguments were made during the KDE-Qt mess.  There weren't any
> authors who were threatening anyone.  I'm really not a big fan of
> hoping someone doesn't sue.  Debian does that for patents because it
> wouldn't be able to function otherwise.  But here we have a clear case
> of something being not freely licensed.  I think the only thing that
> muddies things up is whether those people have the right to restrict
> distribution.  Certainly in the US and Australia they don't, but in
> the EU they might.

This is NOT a clear case of 'something being not freely licensed'.

1) The exact license of the DEC word list is not clear.

2) Aspell-en comes from SCOWL, which is a compilation of several word 
lists, one of which is the DEC list, itself a compilation of several 
word lists.  Basically, the supposedly non-freely-licensed word list is 
being used in such an indirect way that it could be argued my use of it 
qualifies as fair use.

You cannot treat a list of words the way you can treat code.  If someone
looks at a copyrighted list of words and uses some of the words in his
list, is that person violating the copyright?  If someone looks at someone
else's code and reimplements the program anew, using the original code as
a reference but without actually copying any code, is that person
violating the copyright of the original code?  The answer to the second
question is a definite no, at least according to RMS.  RMS specifically
gave me the all clear to use some code that was created in exactly this
manner (the yet-to-be-merged affix compression code).  So if the second
case does not violate any copyright, then why would the first?

I have once again attached the README files for both SCOWL and the DEC
word list for you to look over.


Spell Checking Oriented Word Lists (SCOWL)
Revision 4a
April 4, 2001
by Kevin Atkinson

SCOWL is a collection of word lists split up into various sizes and
other categories, intended to be suitable for use in spell checkers.
However, I am sure it will have numerous other uses as well.

The latest version can be found at http://wordlist.sourceforge.net/

The directory final/ contains the actual word lists broken up into
various sizes and categories.  The r/ directory contains Readmes from
the various sources used to create this package.

The other directories contain the information necessary to recreate the
word lists from the raw data.  Unless you are interested in improving the
word lists you should not need to worry about what's here.  See the
section on recreating the word lists for more information on what's
there.
Except for the special word lists, the files follow this naming
convention:
  <spelling category>-<classification>.<size>
Where the spelling category is one of
  english, american, british, canadian, 
  variant_0, variant_1, variant_2
Classification is one of
  abbreviations, contractions, proper-names, upper, words
And size is one of
  10, 20, 35 (small), 50 (medium), 60, 65, 70 (large), 80 (huge), 95 (insane)
The special word lists are in the following format:
  special-<description>.<size>
Where description is one of:
  roman-numerals, hacker

When combining the word lists, the "english" spelling category should
be used as well as one of "american", "british", or "canadian".  Great
care has been taken so that only one spelling for any particular
word is included in the main list.  When two variants were considered
equal I randomly picked one for inclusion in the main word list.
Unfortunately this means that my choice in how to spell a word may not
match your choice.  If this is the case you can try including the
"variant_0" spelling category, which includes most variants which are
considered almost equal.  The "variant_1" spelling category includes
variants which are also generally considered acceptable, and
"variant_2" contains variants which are seldom used.

The "abbreviation" category includes abbreviations and acronyms which
are not also normal words. The "contractions" category should be self
explanatory. The "upper" category includes upper case words and proper
names which are common enough to appear in a typical dictionary. The
"proper-names" category included all the additional uppercase words.
Final the "words" category contains all the normal English words.

To give you an idea of what the words in the various sizes look like
here is a sample of 25 random words found only in that size:

10: afternoon assumed ban bearing begins brown candidate chain
    competition emergency fear full ignoring is laboratory likely mind
    represents shortly small space specifically steal stick usage

20: balancing brilliantly broadcasting chancellor degraded delays
    donations dug excuses gut homes imaginary influenced investigations
    lean mayor paperback parked paths performs rescue speculate stole
    takers warehouse

35: adjoins affinities bale conspirators crowed dames denoted
    dictatorships eccentricities employments fulling golfer gyrations
    hierarchies kitchens lash masticate moratorium overestimate preach
    rhododendrons scaffolding swirl tornadoes welders

50: alder allspices augury careening dentins dollops earmuff gauziest
    handballs invitingly minibike paramilitary pertly pluckiness popes
    reapply sachets scribblers swaddle sweetbreads topographic
    undervalues unleaded wineries wordinesses

60: armorials birefringence camerawoman clerestories congregationalisms
    coupler ductility extortioners goldbricks grader haircloth
    inappreciably inseparabilities metacarpi metallurgic narratology
    negativism oarswomen outdoorsy predigested pruners roguishness
    shatterproof thereunto yearlong

65: bimolecularly dreg forebodingly gingerliness harpings
    incomprehensibleness intolerability intransitiveness intubates
    kinesics millijoules opinionatedness railer religiosity
    restoratively retiredness selfness simplemindedness teated tupelo
    typedefs unbodied unmurmuring unstuffy upstandingness

70: stragal cisterna copolymer counterstamp cuneal enchiridion
    enphytotic headsail jailhouse krait lobo miliary nubbly obsecrate
    oculus palladic phalangeal retroaction sialoid skiplane subtangent
    sudoriferous surmullet tupelo whorehouse

80: aardwolves agglomerative anticapitalisms bedells calorized cyanoses
    cynipid dichroites dogdoms epithalamion groggeries illuviations
    interparietal laidly misidentification outfooting phytopathologic
    potamogeton quantitive sculpturings semiattached skeeters turps
    untunes willowing

95: amplectant antares bedungs carbonometry creatinuria datals
    demasculinisation diose dreint gegger hemadynameter hemispasm
    hyperexophoria inoxidize macrodantin plurisyllabic relection
    rhinobatidae rhubarbings symphyllous tenture unallowably unincestuous
    wauble womanship

And here is the _rough_ count on the number of words in each size:

  Size  Words   Proper Names  Running Total 

   10    4,500                    4,500                
   20    8,500                   13,000
   35   40,000                   53,000
   50   40,000       7,000      100,000
   60   32,000      13,000      145,000
   65   12,000       1,000      158,000
   70   40,000      23,000      221,000
   80  165,000      25,000      411,000
   95  213,000      53,000      677,000

(The "Words" column does not include the proper name count.)

Size 35 is the recommended small size, 50 the medium, and 70 the large.
The size 65 level includes the contents of the ispell "medium" word
list, which tends to include a lot of technical terms as well as a lot
of strange affixations of common words.  Sizes 70 and below contain
words found in most dictionaries, while the 80 size contains all the
strange and unusual words people like to use in word games such as
Scrabble (TM).  While a lot of the words in the 80 size are not
used very often, they are all generally considered valid words in the
English language.  The 95 size contains just about every English word in
existence and then some.  Many of the words at the 95 level will
probably not be considered valid English words by most people.  I
don't recommend anyone use levels above 70 for spell checking, as they
contain rarely used words which can hide misspellings of similar, more
commonly used words.  For example, the word "ort" can hide a common
typo of "or".  No one should need to use a size larger than 80; the 95
size is labeled insane for a reason.

Due to the nature of how the small size lists are created, not all
inflections of a word are included at the same level.  For example, the
10 size includes "absence" but not "absences", "accept/ed/ing" but not
"accepts", "address/ed/es" but not "addressing", and so on.  This
problem is very noticeable at the 10 and 20 sizes, and present but not
very noticeable at the 35, 50, and 60 sizes.  Because of this I do not
recommend you exclusively use the 10 or 20 size.  Please see the
section "Future Plans" for more information on what I plan on doing to
help make this problem less noticeable.

Accents are present on certain words such as café in iso8859-1 format.


From Revision 4 to 4a (April 4, 2001)

  Reran the scripts on a newer version of AGID (3a), which fixes a bug
  that caused some common words to be improperly marked as variants.

From Revision 3 to 4 (January 28, 2001)

  Split the variant "spelling category" up into 3 different levels.
  Added words in the Ispell word list at the 65 level.

  Other changes are due to using more recent versions of various
  sources, including a more accurate version of AGID, thanks to the
  work of Alan Beale.

From Revision 2 to 3 (August 18, 2000)

  Renamed special-unix-terms to special-hacker and added a large
  number of commonly used words from the hacker (not cracker) community.

  Added a couple more signature words including "newbie".

  Minor changes due to changes in the inflection database.

From Revision 1 to 2 (August 5, 2000)

  Moved the male and female name lists from the mwords package and the
  DEC name lists from the 50 level to the 60 level, and moved Alan's
  name list from the 60 level to the 50 level.  Also added the top
  1000 male, female, and last names from the 1990 Census report to the
  50 level.  This reduced the number of names in the 50 level from
  17,000 to 7,000.

  Added a large number of Uppercase words to the 50 level.

  Properly accented the possessive form of some words.

  Minor other changes due to changes in my raw data files which have
  not been released yet.  Email if you are interested in these files.


The collective work is Copyright 2000 by Kevin Atkinson as well as any
of the copyrights mentioned below:

  Copyright 2000 by Kevin Atkinson

  Permission to use, copy, modify, distribute and sell these word
  lists, the associated scripts, the output created from the scripts,
  and its documentation for any purpose is hereby granted without fee,
  provided that the above copyright notice appears in all copies and
  that both that copyright notice and this permission notice appear in
  supporting documentation. Kevin Atkinson makes no representations
  about the suitability of this array for any purpose. It is provided
  "as is" without express or implied warranty.

Alan Beale <biljir@pobox.com> also deserves special credit as he has,
in addition to providing the 12Dicts package and being a major
contributor to the ENABLE word list, given me an incredible amount of
feedback and created a number of special lists (those found in the
Supplement) in order to help improve the overall quality of SCOWL.

The 10 level includes the 1000 most common English words (according to
the Moby (TM) Words II [MWords] package), a subset of the 1000 most
common words on the Internet (again, according to Moby Words II), and
frequency class 16 from Brian Kelk's "UK English Wordlist
with Frequency Classification".

The MWords package was explicitly placed in the public domain:

    The Moby lexicon project is complete and has
    been placed into the public domain. Use, sell,
    rework, excerpt and use in any way on any platform.

    Placing this material on internal or public servers is
    also encouraged. The compiler is not aware of any
    export restrictions so freely distribute world-wide.

    You can verify the public domain status by contacting

    Grady Ward
    3449 Martha Ct.
    Arcata, CA  95521-4884


The "UK English Wordlist With Frequency Classification" is also in the
Public Domain:

  Date: Sat, 08 Jul 2000 20:27:21 +0100
  From: Brian Kelk <Brian.Kelk@cl.cam.ac.uk>

  > I was wondering what the copyright status of your "UK English
  > Wordlist With Frequency Classification" word list as it seems to
  > be lacking any copyright notice.

  There were many many sources in total, but any text marked
  "copyright" was avoided. Locally-written documentation was one
  source. An earlier version of the list resided in a filespace called
  PUBLIC on the University mainframe, because it was considered public
  domain.

  Date: Tue, 11 Jul 2000 19:31:34 +0100

  > So are you saying your word list is also in the public domain?

  That is the intention.

The 20 level includes frequency classes 7-15 from Brian's word list.

The 35 level includes frequency classes 2-6 and words appearing in at
least 11 of 12 dictionaries as indicated in the 12Dicts package.  All
words from the 12Dicts package have had likely inflections added via
my inflection database.

The 12Dicts package and Supplement is in the Public Domain.

The WordNet database, which was used in the creation of the
Inflections database, is under the following copyright:

  This software and database is being provided to you, the LICENSEE,
  by Princeton University under the following license.  By obtaining,
  using and/or copying this software and database, you agree that you
  have read, understood, and will comply with these terms and
  conditions.

  Permission to use, copy, modify and distribute this software and
  database and its documentation for any purpose and without fee or
  royalty is hereby granted, provided that you agree to comply with
  the following copyright notice and statements, including the
  disclaimer, and that the same appear on ALL copies of the software,
  database and documentation, including modifications that you make
  for internal use or for distribution.

  WordNet 1.6 Copyright 1997 by Princeton University.  All rights
  reserved.


  The name of Princeton University or Princeton may not be used in
  advertising or publicity pertaining to distribution of the software
  and/or database.  Title to copyright in this software, database and
  any associated documentation shall at all times remain with
  Princeton University and LICENSEE agrees to preserve same.

The 50 level includes Brian's frequency class 1, words appearing
in at least 5 of 12 of the dictionaries as indicated in the 12Dicts
package, and uppercase words in at least 4 of the previous 12
dictionaries.  A decent number of proper names is also included: the
top 1000 male, female, and last names from the 1990 Census report; a
list of names sent to me by Alan Beale; and a few names that I added
myself.  Finally a small list of abbreviations not commonly found in
other word lists is included.

The name files from the Census report are a government document which I
don't think can be copyrighted.

The name list from Alan Beale is also derived from the linux words
list, which is derived from the DEC list.  He also added a bunch of
miscellaneous names to the list, which he released to the Public Domain.

The DEC Word list doesn't have a formal name.  It is labeled as "FILE:
english.words; VERSION: DEC-SRC-92-04-05" and was put together by Jorge
Stolfi <stolfi@src.dec.com> DEC Systems Research Center.  The DEC Word
list has the following copyright statement:


  To the best of my knowledge, all the files I used to build these
  wordlists were available for public distribution and use, at least
  for non-commercial purposes.  I have confirmed this assumption with
  the authors of the lists, whenever they were known.

  Therefore, it is safe to assume that the wordlists in this package
  can also be freely copied, distributed, modified, and used for
  personal, educational, and research purposes.  (Use of these files in
  commercial products may require written permission from DEC and/or
  the authors of the original lists.)

  Whenever you distribute any of these wordlists, please distribute
  also the accompanying README file.  If you distribute a modified
  copy of one of these wordlists, please include the original README
  file with a note explaining your modifications.  Your users will
  surely appreciate that.


  These files, like the original wordlists on which they are based,
  are still very incomplete, uneven, and inconsistent, and probably
  contain many errors.  They are offered "as is" without any warranty
  of correctness or fitness for any particular purpose.  Neither I nor
  my employer can be held responsible for any losses or damages that
  may result from their use.

However, since this word list is used in the linux.words package, which
the author claims is free of any copyright, I assume it is OK to use
for most purposes.  If you want to use this in a commercial project
and this concerns you, the information from the DEC word list can
easily be removed without much sacrifice in quality, as only the name
lists were used.

The file special-jargon.50 uses common.lst and word.lst from the
"Unofficial Jargon File Word Lists" which is derived from "The Jargon
File".  All of which is in the Public Domain.  This file also contain
a few extra UNIX terms which are found in the file "unix-terms" in the
special/ directory.

The 60 level includes Brian's frequency class 0 and all words
appearing in at least 2 of the 12 dictionaries as indicated by the
12Dicts package.  A large number of names are also included: the 4,946
female names and 3,897 male names from the MWords package, and the
files "computer.names", "misc.names", and "org.names" from the DEC
word list.

The 65 level includes words found in the Ispell "medium" word list.
The Ispell word lists are under the same copyright as Ispell itself,
which is:

  Copyright 1993, Geoff Kuenning, Granada Hills, CA
  All rights reserved.

  Redistribution and use in source and binary forms, with or without
  modification, are permitted provided that the following conditions
  are met:

  1. Redistributions of source code must retain the above copyright
     notice, this list of conditions and the following disclaimer.
  2. Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions and the following disclaimer in the
     documentation and/or other materials provided with the distribution.
  3. All modifications to the source code must be clearly marked as
     such.  Binary redistributions based on modified source code
     must be clearly marked as modified versions in the documentation
     and/or other materials provided with the distribution.
  4. All advertising materials mentioning features or use of this software
     must display the following acknowledgment:
     This product includes software developed by Geoff Kuenning and
     other unpaid contributors.
  5. The name of Geoff Kuenning may not be used to endorse or promote
     products derived from this software without specific prior
     written permission.


The 70 level includes the 74,550 common dictionary words and the 21,986 names
list from the MWords package.  The common dictionary words, like those
from the 12Dicts package, have had all likely inflections added.

The 80 level includes the ENABLE word list, all the lists in the
ENABLE supplement package (except for ABLE), the "UK Advanced Cryptics
Dictionary" (UKACD), the list of signature words in from YAWL package,
and the 10,196 places list from the MWords package.

The ENABLE package, maintained by M\Cooper <thegrendel@theriver.com>,
is in the Public Domain:

  The ENABLE master word list, WORD.LST, is herewith formally released
  into the Public Domain. Anyone is free to use it or distribute it in
  any manner they see fit. No fee or registration is required for its
  use nor are "contributions" solicited (if you feel you absolutely
  must contribute something for your own peace of mind, the authors of
  the ENABLE list ask that you make a donation on their behalf to your
  favorite charity). This word list is our gift to the Scrabble
  community, as an alternate to "official" word lists. Game designers
  may feel free to incorporate the WORD.LST into their games. Please
  mention the source and credit us as originators of the list. Note
  that if you, as a game designer, use the WORD.LST in your product,
  you may still copyright and protect your product, but you may *not*
  legally copyright or in any way restrict redistribution of the
  WORD.LST portion of your product. This *may* under law restrict your
  rights to restrict your users' rights, but that is only fair.

UKACD, by J Ross Beresford <ross@bryson.demon.co.uk>, is under the
following copyright:

  Copyright (c) J Ross Beresford 1993-1999. All Rights Reserved.

  The following restriction is placed on the use of this publication:
  if The UK Advanced Cryptics Dictionary is used in a software package
  or redistributed in any form, the copyright notice must be
  prominently displayed and the text of this document must be included.

  There are no other restrictions: I would like to see the list
  distributed as widely as possible.

The 95 level includes the 354,984 single words and 256,772 compound
words from the MWords package, ABLE.LST from the ENABLE Supplement,
and some additional words found in my part-of-speech database that
were not found anywhere else.

Accent information was taken from UKACD.

My VARCON package was used to create the American, British, and
Canadian word lists.  Since the original word lists used in the
VARCON package came from the Ispell distribution, they are under the
Ispell copyright.

The variant word lists were created from a list of variants found in
the 12dicts supplement package as well as a list of variants I created.

The Readmes for the various packages used can be found in the
appropriate directory under the r/ directory.


In order to help alleviate the problem of inflected forms of a
word appearing at different levels I plan on using the following rules:

  If the word is in the base form: only include that word.
  If the word is in a plural form: include the base word and the plural
  If the word is a verb form (other than plural):  include all verb forms
  If the word is an ad* form: include all ad* forms
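In code, these rules amount to a small dispatch on what kind of form a word is.  A hypothetical sketch of the idea; the inflection table here is made-up sample data, not the real AGID database:

```python
# Made-up inflection table mapping a base word to its forms; a real
# implementation would read this from AGID.
INFLECTIONS = {
    "accept": {"verb": ["accepted", "accepting", "accepts"],
               "ad":   []},
}

def expand(word, base, form_kind):
    """Return the set of words to include when `word` qualifies for a level."""
    if form_kind == "base":             # base form: only that word
        return {word}
    if form_kind == "plural":           # plural: base word + the plural
        return {base, word}
    if form_kind == "verb":             # verb form: all verb forms
        return {base, *INFLECTIONS[base]["verb"]}
    if form_kind.startswith("ad"):      # ad* (adjective/adverb) forms
        return {base, *INFLECTIONS[base]["ad"]}
    return {word}

print(sorted(expand("accepted", "accept", "verb")))
```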

There is a very nice frequency analysis of the BNC corpus done by
Adam Kilgarriff.  Unlike Brian's word lists, the BNC lists include part
of speech information.  I plan on somehow using these lists, as Adam
Kilgarriff has given me the OK to use them in SCOWL.  These lists will
greatly reduce the problem of inflected forms of a word appearing at
different levels, thanks to the part-of-speech information.

I also plan on perhaps putting the data in a database and using SQL
queries to create the word lists instead of tons of "sort"s, "comm"s,
and Perl scripts.


In order to recreate the word lists you need a modern version of Perl,
bash, the traditional set of shell utilities, a system that supports
symbolic links, and quite possibly GNU Make.  Once you have downloaded
all the necessary raw data in the r/ directory you should be able to
type "rm final/* && make all" and the word lists in the final/
directory should be recreated.  If you have any problems feel free to
contact me; however, unless you are interested in improving the
scripts used, I will likely ignore you, as there should be little need
for anyone not interested in improving the word lists to do so.

The src/ directory contains the numerous scripts used in the creation
of the final product. 

The r/ directory contains the raw data used to
create the final product.  In order for the scripts to work, various
word lists and databases need to be created and put into this
directory.  See the README file in the r/ directory for more
information.
The l/ directory contains symbolic links used by the actual scripts.

Finally, the working/ directory is where all the intermediate files go
that are not specific to one source.

FILE: english.words


    Jorge Stolfi <stolfi@src.dec.com>
    DEC Systems Research Center

    Andy Tanenbaum <ast@cs.vu.nl>
    Barry Brachman <brachman@cs.ubc.ca>
    Geoff Kuenning <geoff@itcorp.com>
    Henk Smit <henk@cs.vu.nl>
    Walt Buehring <buehring%ti-csl@csnet-relay>


    The file english.words is a list  of over 104,000
    English words compiled from several public domain wordlists.  

    The file has one word per line, and is sorted with sort(1)
    in plain ASCII collating sequence.
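Plain ASCII collating order is what "LC_ALL=C sort" produces: comparison by byte value, so uppercase letters (0x41-0x5a) sort before lowercase (0x61-0x7a) and the apostrophe (0x27) before both.  Python's default string sort compares code points and so reproduces the same order; a small illustration with words of my own choosing, not taken from the file:

```python
# Byte-value ("C" locale) ordering: "I" and "I'm" come before any
# lowercase word because 'I' (0x49) < 'a' (0x61), and "I" sorts
# before "I'm" because a string sorts before its own extensions.
words = ["zoo", "I'm", "aardvark", "I", "a"]
print(sorted(words))   # same order that sort(1) gives with LC_ALL=C
```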

    The file is supposed to include all verb forms ("-s", "-ed",
    "-ing"), noun plurals and possessives, and forms derived by various
    prefixes and suffixes ("un-", "re-", "-ly", "-er", "-ation", etc.)
    However, the list is still highly incomplete and inconsistent: not
    all stems have all forms, and some forms (notably possessive
    plurals) are missing altogether.

    The file is NOT supposed to contain any "proper" names, such as
    the names of ordinary persons, corporations and organizations;
    nations, countries and other geographical names; mythological
    figures; biological genera; and trademarked products.  It is also
    not supposed to contain abbreviations, measurement symbols, and
    acronyms. (Some of these are available in separate files; see
    below.)
    The pronoun "I" and its contractions ("I'm", "I've") are
    capitalized as usual; the other words are all in lowercase.
    Besides the letters [a-zA-Z], the file uses only the hyphen,
    apostrophe, and newline characters.


    In the same directory as english.words there are a few
    complementary word lists, all derived from the same sources [1--8]
    as the main list:


        A list of common English proper names and their derivatives.
        The list includes: person names ("John", "Abigail",
        "Barrymore"); countries, nations, and cities ("Germany",
        "Gypsies", "Moscow"); historical, biblical and mythological
        figures ("Columbus", "Isaiah", "Ulysses"); important
        trademarked products ("Xerox", "Teflon"); biological genera
        ("Aerobacter"); and some of their derivatives ("Germans",
        "Xeroxed", "Newtonian").

        A list of foreign-sounding names of persons and places
        ("Antonio", "Albuquerque", "Balzac", "Stravinski"), extracted
        from the lists [1--8].  (The distinction betweeen
        "English-sounding" and "foreign-sounding" is of course rather


        A short list of names of corporations and other institutions
        ("Pepsico", "Amtrak", "Medicare"), and a few derivatives.  

        The file also includes some initialisms --- acronyms and
        abbreviations that are generally pronounced as words rather
        than spelled out ("NASA", "UNESCO").


        A list of common abbreviations ("etc.", "Dr.", "Wed."),
        acronyms ("A&M", "CPU", "IEEE"), and measurement symbols
        ("ft", "cm", "ns", "kHz").

        A list of words from the original wordlists
        that I decided were either wrong or unsuitable for inclusion
        in the file english.words or any of the other auxiliary 
        lists. It includes
          typos ("accupy", "aquariia", "automatontons")
          spelling errors ("abcissa", "alleviater", "analagous")
          bogus derived forms ("homeown", "unfavorablies", "catched")
          uncapitalized proper names ("afghanistan", "algol", "decnet")
          uncapitalized acronyms ("apl", "ccw", "ibm")
          unpunctuated abbreviations ("amp", "approx", "etc")
          British spellings ("advertize", "archaeology")
          archaic words ("bedight")
          rare variants ("babirousa")
          unassimilated foreign words ("bambino", "oui", "caballero")
          mis-hyphenated compounds ("babylike", "backarrows")
          computer keywords and slang ("lconvert", "noecho", "prog"), 

        (I apologize for excluding British spellings.  I should have
        split the list in three sublists--- common English, British,
        American---as ispell does.  But there are only so many hours
        in a day...)


        A list of about 5,000 lowercase words from the "mts.dict"
        wordlist [6] that weren't included in english.words.

        This list seems to include lots of "trash", like uncapitalized
        proper names and weird words.  It would take me several days
        to sort this mess, so I decided to leave it as a separate
        file.  Use at your own risk...

    The original wordlists from which those files were compiled are
    listed below.  They were obtained by anonymous FTP on 92-Feb-10.

    [1] file: ispell/ispell/english.lrg
        size: 690778 bytes
        contact: Walt Buehring <buehring%ti-csl@csnet-relay>
        from: phloem.uoregon.edu: /pub/src/ispell.3.0.tar.Z

          * The (unexpanded) "large" english wordlist for ispell 3.0.

    [2] file: ispell/ispell/english.sml+
        size: 575226 bytes
        contact: Walt Buehring <buehring%ti-csl@csnet-relay>
        from: phloem.uoregon.edu: /pub/src/ispell.3.0.tar.Z

          * The (expanded) "small" english wordlist for ispell 3.0.

    [3] file: words.english.Z
        size: 217119 bytes (479261 bytes uncompressed)
        contact: Henk Smit <henk@cs.vu.nl>
        from: donau.et.tudelft.nl: /pub/words/

          * From the README file on ftp.cs.vu.nl:

                This list is made out of 2 lists,
                  the normal /usr/dict/words on most Unix systems,
                  TeX english wordlist (available at archive.cs.ruu.nl)

    [4] file: dict.2
        size:   274848 bytes
        contact: H Morrow Long <long-morrow@CS.YALE.EDU>
        from: bulldog.cs.yale.edu: /pub/dict.shar

          * According to H. Morrow, it came with some version
            of the "ispell" package.

    [5] file: minix.dict
        size: 357226 bytes
        author: Andy Tanenbaum <ast@cs.vu.nl>
        from: cs.ubc.ca: /pub/wordlists-1.0.tar.Z

          * From the README file:

            Article 1997 of comp.os.minix:
            From: ast@botter.UUCP
            Subject: A spelling checker for MINIX
            Date: 6 Jan 88 22:28:22 GMT
            Reply-To: ast@cs.vu.nl (Andy Tanenbaum)
            Organization: VU Informatica, Amsterdam

            This dictionary is NOT based on the UNIX dictionary so it
            is free of AT&T copyright.

            I built the dictionary from three sources.  First, I
            started by sorting and uniq'ing some public domain
            dictionaries.  Second, as some of you probably know, I
            have written somewhere between 3 and 6 books (depending on
            precisely what you count) and an additional 50 published
            papers on operating systems, networks, compilers,
            languages, etc.  This data base, which is online, is
            nonnegligible :-) Finally, I added a number of words that
            I thought ought to be in the dictionary including all the
            U.S. states, all the European and some other major
            countries, principal U.S. and world cities, and a bunch of
            technical terms.  I don't want my spelling checker to barf
            on arpanet, diskless, modem, login, internetwork,
            subdirectory, superuser, vlsi, or winchester just because
            Webster wouldn't approve of them.

            All in all, the dictionary is over 40,000 words.  If you
            have any suggestions for additions or deletions, please
            post them.  But please be sure you are not infringing on
            anyone's copyright in doing so.

              Andy Tanenbaum (ast@cs.vu.nl)

    [6] file: mts.dict
        size: 346983 bytes
        contact: Barry Brachman <brachman@cs.ubc.ca>
        from: cs.ubc.ca: /pub/wordlists-1.0.tar.Z

          * From the README file:

            These word lists were collected by Barry Brachman
            <brachman@cs.ubc.ca> at the University of British
            Columbia.  They may be freely distributed as long as this
            notice accompanies them.

            mts.dict contains only words that are not in
            /usr/dict/words.  [But note that your version of
            /usr/dict/words may be different from mine!  Use "sort -u"
            to get a list of unique words. ]

              From wc:

              24259   24259  198596 /usr/dict/words
              35475   35475  346992 mts.dict
              -----   ----- -------
              59734   59734  545588 total

    [7] file: words.english.Z
        size: 288385 bytes (644217 bytes uncompressed)
        from: ftp.hawaii.edu: /pub/editors/LEXICAL/word-lists/
        author: unknown.

    COMMENTS: The "large" list from ispell 3.0 [1] is the most
    complete, and contains almost all the words of the "small" ispell
    list [2], of Andy Tanenbaum's list minix.dict [5], and of the
    lists from Delft and Yale [3, 4], as well as /usr/dict/words. It
    leaves out some 500--1000 words from each of these lists.

    On the other hand, the file mts.dict from UBC [6] contains some 7000
    words that are not in the ispell list [1].  Therefore, mts.dict
    seems to be largely orthogonal to the lists [1--5].

    The file words.english from Hawaii [7] seems to be the union of
    mts.dict [6], Andy's file minix.dict [5], and /usr/dict/words,
    except that it omits some 250 words from the latter.
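
    Overlap claims like these can be checked with standard Unix tools.
    Below is a minimal sketch using sort(1) and comm(1); list_a.txt and
    list_b.txt are small hypothetical stand-ins, not the actual word
    lists discussed above.

    ```shell
    #!/bin/sh
    # Two tiny stand-in word lists (hypothetical data for illustration).
    printf 'apple\nbanana\ncherry\n' > list_a.txt
    printf 'banana\ncherry\ndamson\n' > list_b.txt

    # comm(1) requires sorted input, hence the sort -u step.
    sort -u list_a.txt > a.sorted
    sort -u list_b.txt > b.sorted

    # Words common to both lists (here: banana, cherry).
    comm -12 a.sorted b.sorted

    # Words only in list_b -- the "orthogonal" part (here: damson).
    comm -23 b.sorted a.sorted
    ```

    Piping either comm output through `wc -l` gives the overlap counts;
    the same pipeline applies unchanged to the full-size lists.
    
    
    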


    The file english.words is a slightly cleaned-up version of
    the "large" english wordlist [1] that comes with the ispell
    3.0 package, which is available from phloem.uoregon.edu.  

    First, I expanded the prefixes and suffixes using "isexpand" and
    some Gnuemacs hacking, and removed all words with capitals or
    periods.  Then I compared the result with other publicly available
    wordlists [2--7], and did a little bit of manual cleanup.  That
    meant removing some 8500 words that were obviously wrong or
    inappropriate, and adding about 4800 new words.  Those 8500
    words were largely distributed among the other lists.
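
    The affix expansion itself depends on "isexpand", but the filtering
    step (removing words with capitals or periods) can be sketched with
    standard tools.  The file names below are placeholders, not the
    actual intermediate files.

    ```shell
    #!/bin/sh
    # Hypothetical stand-in for the expanded ispell output.
    printf 'arpanet\nWebster\ne.g.\nmodem\n' > words.raw

    # Drop every entry containing a capital letter or a period,
    # as described above, and deduplicate.
    grep -v '[A-Z.]' words.raw | sort -u > words.clean

    # Show the surviving entries on one line: arpanet,modem
    paste -s -d ',' words.clean
    ```
    
    
    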

    The table below gives the number of lowercase words in each
    original list ("lcase"), and how many of such words were included
    ("accept") and not included ("reject") in the final file

      ref  site: file                lcase  accept  reject
      ---  ----------------------  -------  ------  ------
      [1]  uoregon: english.lrg     103124  102000    1124
      [2]  uoregon: english.sml+     56694   56223     471
      [3]  tudelft: words.english    48150   47305     845
      [4]  yale: dict.2              47355   46577     778
      [5]  ubc: minix.dict           38699   38394     305
      [6]  ubc: mts.dict             35215   28874    6341
      [7]  hawaii: words.english     65165   57558    7607
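
    For reference, a "lcase" count of this kind can be reproduced by
    counting entries made up solely of lowercase letters; a sketch with
    a tiny illustrative sample file (sample.list is not one of the real
    lists):

    ```shell
    #!/bin/sh
    # Tiny illustrative list; the real input would be one of the
    # word lists tabulated above.
    printf 'apple\nBanana\ncherry\nfoo-bar\n' > sample.list

    # Count entries consisting only of lowercase letters a-z
    # (apple and cherry qualify; Banana and foo-bar do not).
    grep -c '^[a-z][a-z]*$' sample.list
    ```
    
    
    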


  To the best of my knowledge, all the files I used to build these
  wordlists were available for public distribution and use, at least
  for non-commercial purposes.  I have confirmed this assumption with
  the authors of the lists, whenever they were known.
  Therefore, it is safe to assume that the wordlists in this package
  can also be freely copied, distributed, modified, and used for
  personal, educational, and research purposes.  (Use of these files in
  commercial products may require written permission from DEC and/or
  the authors of the original lists.)
  Whenever you distribute any of these wordlists, please distribute
  also the accompanying README file.  If you distribute a modified
  copy of one of these wordlists, please include the original README
  file with a note explaining your modifications.  Your users will
  surely appreciate that.


  These files, like the original wordlists on which they are based,
  are still very incomplete, uneven, and inconsistent, and probably
  contain many errors.  They are offered "as is" without any warranty
  of correctness or fitness for any particular purpose.  Neither I nor
  my employer can be held responsible for any losses or damages that
  may result from their use.
