Re: COBOL compiler
On Tue, 2003-08-26 at 18:23, Bijan Soleymani wrote:
> On Tue, Aug 26, 2003 at 02:30:57PM -0500, Ron Johnson wrote:
> > On Tue, 2003-08-26 at 13:29, David Turetsky wrote:
> > > > On Tue, 2003-08-26 at 10:05, Kirk Strauser wrote:
> > > > From: Ron Johnson [mailto:ron.l.johnson@cox.net]
> > > > For example, COBOL has intrinsic constructs for easily handling
> > > > ISAM files in a variety of manners. Likewise, there is a very
> > > > powerful intrinsic SORT verb.
> > > >
> > >
> > > Yes, but how does that compare with similarly powerful features in Perl?
> >
> > I *knew* someone would ask about the Practical Extraction and
> > Report Language...
> >
> > Please don't think that I am implying that Perl or C are bad languages.
> > I certainly wouldn't write a logfile analyzer in COBOL.
> >
> > For my own knowledge, how would Perl sort the contents of a file,
> > allowing the programmer to use a complex algorithm as an
> > input filter, and then take the output stream, processing it
> > 1 record at a time, without needing to write to and then read
> > from temporary files with all of the extra SLOC that that entails?
>
> 1) Read the records from the file into an array.
> 2) Order them with any Perl code you want and store the result in an array.
> 3) Use a nice foreach loop to process them.
> --- Outline of Perl code ---
> #!/usr/bin/perl
> @records = read_records_function("records.txt");
> #either
> @sorted = sort @records; #to put things in alphabetical order
> #or
> @sorted = sort function @records; #to sort using a function
> # or even
> @sorted = sort {sort-function-code} @records; #to have it in-line
> foreach $record (@sorted)
> {
> # code to process records
> }
That's great for in-memory stuff.
What about when there are, say, 10 or 40M records to process?
And what if you only need to SORT a fraction of those 40M records,
and the winnowing algorithm is very complicated, possibly needing
to access other files or database tables in the process?
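One answer to the large-file question: if only a fraction of the 40M records survives the winnowing, a lazy filter can stream the file one record at a time, and only the survivors ever need to fit in memory for the sort, with no temporary files. A minimal sketch, in Python for illustration; keep() and the tiny demo file are hypothetical stand-ins for the complicated filter and the real data:

```python
import os
import tempfile

def keep(rec):
    # Stand-in for the "complicated winnowing algorithm" from the thread;
    # here we just keep records whose key starts with "A".
    return rec.startswith("A")

def winnow(path):
    # Lazily yield only the surviving records: the file streams through
    # one line at a time, so only the winnowed fraction is ever held
    # in memory.
    with open(path) as f:
        for line in f:
            rec = line.rstrip("\n")
            if keep(rec):
                yield rec

# Tiny demo file standing in for the 40M-record input.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("Baker\nAdams\nAbbot\nCole\n")

result = sorted(winnow(path))   # sort only the fraction that survived
os.remove(path)
print(result)                   # ['Abbot', 'Adams']
```

The same shape works in Perl with a while(<FH>) loop feeding grep and sort; the point is that the filter runs as the file is read, not after it is loaded.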
> > One thing that I don't think that any of the "modern" languages
> > do, without extra libraries and function calls, is BCD arithmetic.
>
> That's true. You could overload the arithmetic operators in perl to get
> this. But it would be using function calls behind your back.
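The "function calls behind your back" point can be seen concretely in any modern language's decimal library (Python's decimal module here, for illustration; Perl's Math::BigFloat is analogous):

```python
from decimal import Decimal

# Binary floating point cannot represent most decimal fractions
# exactly, so naive arithmetic drifts:
print(0.1 + 0.2 == 0.3)                                   # False

# A decimal library gives COBOL-style exact results, but every value
# goes through an explicit constructor call -- the "function calls
# behind your back" mentioned above:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

COBOL's PIC S9(n)V9(m) COMP-3 fields get this exactness intrinsically, with no library in sight, which is the contrast being drawn.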
>
> > Here's a simplistic example of how COBOL is specialized:
> > Say we have 2 record definitions:
> > 01  A-SMALL-REC.
> >     05  FIRST-NAME  PIC X(15).
> >     05  LAST-NAME   PIC X(15).
> > 01  A-LARGE-REC.
> >     05  HONORIFIC   PIC X(5).
> >     05  FIRST-NAME  PIC X(15).
> >     05  MIDDLE-I    PIC X.
> >     05  LAST-NAME   PIC X(15).
> >     05  MODIFIER    PIC X(5).
> >
> > MOVE 'JOHN' TO A-SMALL-REC.FIRST-NAME.
> > MOVE 'DOE' TO A-SMALL-REC.LAST-NAME.
> > MOVE SPACES TO A-LARGE-REC.
> >
> > MOVE CORRESPONDING A-SMALL-REC TO A-LARGE-REC.
> >
> > Here, A-SMALL-REC.FIRST-NAME and A-SMALL-REC.LAST-NAME will be
> > moved to the corresponding fields in A-LARGE-REC.
> > In such a trivial example, so what? If, however, there are many
> > fields in A-SMALL-REC, then MOVE CORRESPONDING is a big coding
> > time-saver, and ensures that if the record definitions ever
> > change, the code will still work.
>
> That is cool for fixed records. But Perl is big on dynamic stuff. In
Which is why I say that COBOL is a specialized tool, just as, I
think, C should be thought of as a specialized tool.
> perl you use hashes (associative arrays) for things like that.
> so:
> %small = (
> first_name => "John",
> last_name => "Doe",
> );
>
> foreach $key (keys %small)
> {
> $large{$key} = $small{$key};
> }
Interesting, but I'd rather the MOVE CORRESPONDING...
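For what it's worth, the hash loop above copies every key from the source, while MOVE CORRESPONDING copies only the fields the two records share. A closer analogue, sketched in Python dict terms with the field names from the COBOL example:

```python
small = {"first_name": "John", "last_name": "Doe"}
large = {"honorific": "", "first_name": "", "middle_i": "",
         "last_name": "", "modifier": ""}

# Copy only the fields present in BOTH records, like MOVE CORRESPONDING;
# fields private to either record are left untouched.
for key in small:
    if key in large:
        large[key] = small[key]

print(large["first_name"])   # John
print(large["honorific"])    # (still empty)
```

As with MOVE CORRESPONDING, adding or removing fields from either record definition leaves this copy loop working unchanged.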
--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA
An ad run by the NEA (the US's biggest public school TEACHERS
UNION) in the Spring and Summer of 2003 asks a teenager if he
can find sodium and *chloride* in the periodic table of the elements.
And they wonder why people think public schools suck...