
Re: sysadmin qualifications (Re: apt-get vs. aptitude)



Jerry Stuckle wrote:


Try again. States do not differentiate between civil engineers, mechanical engineers, etc. and other engineers. Use of the term "Engineer" is what is illegal. Check with your state licensing board. The three states I've checked (Maryland, Texas and North Carolina) are all this way.

Then an awful lot of folks working at NSA, NASA, and in the Research Park area, with titles like "sr. software engineer" are working illegally.

Massachusetts, New York, and California do not. And in general, (which I expect is true of Maryland, Texas, and North Carolina) - a license is required to call yourself a "professional engineer" and use the letters P.E. after your name. A license is also required to work on certain kinds of projects - mostly in construction (an awful lot of states regulate PEs under their "Board of Engineering and Land Surveying").

Just as another reference point, I just worked on a proposal for an "intelligent transportation system" for Baltimore's transit system (Maryland Transportation Agency). It originally had a requirement that "designs be sealed by a professional engineer licensed in MD." As soon as someone pointed out that this was a software system, not a construction project, that requirement was removed.


And yes, there have been attempts to license programmers. But that is a separate issue.

Generally, it's considered misrepresentation to call yourself an
engineer without at least a 4-year degree from a university with an
accredited program.


And a state license, as indicated above.

You're simply wrong.


And "Systems Programming" has never mean someone who writes operating
systems; I've known a lot of systems programmers, who's job was to
ensure the system ran.  In some cases it meant compiling the OS with
the require options; other times it meant configuring the OS to meet
their needs.  But they didn't write OS's.

It's ALWAYS meant that, back to the early days of the field.


That's very interesting.  Because when I was working for IBM (late
70's on) on mainframes, all of our customers had "Systems
Programmers".  But NONE of them wrote an OS - IBM did that. The
systems programmers were, however, responsible for installation and
fine tuning of the software on the mainframes.

There are a lot of "Systems Programmers" out there doing exactly that
job.  There are very few in comparison who actually write operating
systems.

It seems your experience is somewhat limited to PC-based systems.
Hmm...., in rough chronological order:
- DG Nova
- IBM 360 (batch and TSO)
- Multics
- pretty much every major flavor of DEC System (PDP-1, PDP-10/20, PDP-8,
PDP-11, VAX, a few others)
-- including some pretty funky operating systems - ITS, Cronus, and
TENEX come to mind
- both varieties of LISP machine
- various embedded systems (wrote microcode for embedded avionic
machines at one point)
- various embedded micro-controllers (both basic TTL logic and z80-class)
- various Sun desktop and server class machines
- BBN Butterfly
- IBM RS/6000
- a lot of server-class machines (ran a hosting company for a while)
- yeah, and a lot of Macs, some PCs, a few Android devices, and a couple
of TRS-80s in there along the way


So young? I started on an IBM 1410, several years before the 360 was introduced. And I see you've played with a few minis. But you obviously have limited experience in large shops.

Well, we had them around. And I happened to manage the engineering time sharing services for a mid-sized defense contractor at one point. The DECSYSTEM-20 I was responsible for sat right next to the 370/something that ran all our MIS stuff, and the guy who "owned" MIS was a peer.

Sounds to me like you, on the other hand, have only worked on IBM "big iron" - which is actually a pretty simplistic and structured environment when it comes to programming - particularly in the early days. (Systems Analysts did the thinking, programmers wrote what they were told - usually in Cobol or PL/1).



Can't find any "official definition" - but the WikiPedia definition is
reasonably accurate: "*System programming* (or *systems programming*) is
the activity of computer programming
<http://en.wikipedia.org/wiki/Computer_programming> system software
<http://en.wikipedia.org/wiki/System_software>. The primary
distinguishing characteristic of systems programming when compared to
application programming
<http://en.wikipedia.org/wiki/Application_programming> is that
application <http://en.wikipedia.org/wiki/Application_software>
programming aims to produce software which provides services to the user
(e.g. word processor <http://en.wikipedia.org/wiki/Word_processor>),
whereas systems programming aims to produce software which provides
services to the computer hardware
<http://en.wikipedia.org/wiki/Computer_hardware> (e.g. disk defragmenter
<http://en.wikipedia.org/wiki/Defragmentation>). It requires a greater
degree of hardware awareness.


And Wikipedia is an authority?  Nope.  It's just one person's opinion.
Tomorrow someone else may update it with another opinion.

No... I quoted it because it's accurate - and it's consistent with usage
since at least 1971, when I first encountered the term.


Not at all accurate in my experience - since 1967. And my experience included large shops and working with systems programmers at those shops.

All IBM? All MIS?

By the way, the Bureau of Labor Statistics defines things this way:

15-1031 Computer Software Engineers, Applications
Develop, create, and modify general computer applications software or specialized utility programs. Analyze user needs and develop software solutions. Design software or customize software for client use with the aim of optimizing operational efficiency. May analyze and design databases within an application area, working individually or coordinating database development as part of a team. Exclude "Computer Hardware Engineers" (17-2061).

15-1032 Computer Software Engineers, Systems Software
Research, design, develop, and test operating systems-level software, compilers, and network distribution software for medical, industrial, military, communications, aerospace, business, scientific, and general computing applications. Set operational specifications and formulate and analyze software requirements. Apply principles and techniques of computer science, engineering, and mathematical analysis.

For reference, the other relevant categories are:

15-1132 Software Developers, Applications
Develop, create, and modify general computer applications software or specialized utility programs. Analyze user needs and develop software solutions. Design software or customize software for client use with the aim of optimizing operational efficiency. May analyze and design databases within an application area, working individually or coordinating database development as part of a team. May supervise computer programmers.

15-1133 Software Developers, Systems Software
Research, design, develop, and test operating systems-level software, compilers, and network distribution software for medical, industrial, military, communications, aerospace, business, scientific, and general computing applications. Set operational specifications and formulate and analyze software requirements. May design embedded systems software. Apply principles and techniques of computer science, engineering, and mathematical analysis.

15-1131 Computer Programmers
Create, modify, and test the code, forms, and script that allow computer applications to run. Work from specifications drawn up by software developers or other individuals. May assist software developers by analyzing user needs and designing software solutions. May develop and write computer programs to store, locate, and retrieve specific documents, data, and information.

And from that, it's pretty clear that "programmer" is the low end of the totem pole ("Work from specifications drawn up by software developers or other individuals. May assist software developers...")

So, ok, maybe a "programmer" doesn't have to know very much about o/s or hardware issues - but then real "developers" and "software engineers" sure do.

--------
As another reference point, I just pulled this from BAE's job postings (non-random - the Nashua group used to be Sanders Associates, where I worked early in my career):


 Senior Software Engineer



	
Job Number: 397933
Location: Nashua, NH
Category: Engineering
Security Clearance Status: Active
Security Clearance Type: Secret
Experience Level: Regular
Travel Required: 20%
Shift: 1st
US Citizenship Required: Yes
Posting Date: 10/15/2013



BAE Systems is looking for an experienced Electronic Warfare (EW) embedded system software developer with experience in Radar Warning (RW) and / or Electronic Attack.

Develops complex software designs or evaluates and recommends changes to existing designs. Codes, unit test and integrates software. Verifies complex designs to ensure conformance with specifications and customer requirements. Assists in the development of software requirements, cost estimates and the preparation of proposals. Does or leads one or more Software Engineering activities based on project needs. Supports new business acquisition

Required Education:
   Bachelors degree in an Engineering or Scientific field or equivalent
   experience and 4+ year(s) related experience

Required Skills:
   Has skills to perform the following functions:
   1. Real time embedded software development.
   2. Allocate requirements to software, develop software requirements,
   evaluate the impact of requirements changes.
   3. Develop software design and associated documentation or lead
   other in this task.
   4. Code, unit test and integrate complex software designs.
   5. Verify complex software designs to ensure conformance with
   functional specifications and customer requirements.
   6. Perform evaluations/trade studies for complex engineering
   development tools, or perform complex engineering development tool
   design.
   7. Provide inputs to sections of technical proposals.
   8. Assist upper management in identifying potential new products or
   business.
   9. Active US DoD Secret Security clearance




That's been the usage since the days I took courses in it at MIT (early
1970s), and how the term is used in all the textbooks by folks like
Jerry Saltzer, John Donovan, Corbato - names you should recognize if
you've been in the field for 40 years.


That's very interesting, because some of the Systems Programmers I
knew also graduated from MIT.  And they agreed they were systems
programmers.

And yes, I recognize those names.  But they never were that highly
regarded except in academia.

Hmmm... John Donovan was THE corporate guru for about 20 years - until
he had a personal scandal that blew up his career.  I'd consider Saltzer
and Corbato to have some pretty good creds beyond academia - places like
DARPA, DoD in general, all those large corporations (like IBM) who
funded a lot of their work.

You can consider them that way. I'm just talking about what my experience was - including when I was at IBM. And AFAIK, IBM never funded any of their work. Please show which projects of theirs were funded by IBM. Even one would be great.

They sure fund a lot at MIT now. Back then, they may not have - as I recall, the reason Multics was designed on GE hardware, was because IBM declined to provide any. I know that IBM poured a bunch of money into Project Athena - and the same folks were still running the Computer Science Dept.

And I never said they didn't work beyond academia (DARPA is basically an extension of academia, as are many DOD research jobs). I said they weren't highly regarded beyond academia - including John Donovan. And if he were so highly regarded, a personal scandal would not have ruined a professional reputation.

Well, he started out as one of MIT's youngest tenured professors, and retired as one. On the other hand, going off the deep end and going to jail for something that involved shootings among family members tends to ruin one's career as a guru to the executive suite. The fact that he'd amassed a huge pile of money before that means that it didn't really matter. (Interesting article on one man's decline at http://www.bostonmagazine.com/2006/07/professor-donovans-magnificent-entanglements/)

Still, his textbooks on "Systems Programming" and "Operating Systems" were considered THE books in the 1970s (focused on IBM MVS as I recall).




The programmers where I'm currently working - application systems for
buses (vehicle location, engine monitoring and diagnostics,
scheduling,
passenger information) -- yeah, they have to worry about things
like how
often vehicles send updates over-the-air, the vagaries of data
transmission over cell networks (what failure modes to account for),
etc., etc., etc.


That's not "real time".  "Real time" is when you get an input and you
have to make an immediate decision and output it.  Motor controllers
are one example; constantly adjusting motor speed to keep a conveyor
belt running at optimum capacity as the load changes. Another is
steering a radiotelescope to aim at a specific point in the sky and
keep it there.  Oh, and once the radiotelescope is properly aimed,
process the information coming from it and 30-odd others spaced over a
couple of hundred square miles, accounting for the propagation delay
from each one, and combining the outputs into one digital signal which
can be further processed or stored.

There's a spectrum of real-time.  Everything from radar jamming
(picoseconds count), to things that happen on the order of seconds
(reporting and predicting vehicle locations). Basically anything where
timing and i/o count.
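
(At the hard end, what matters is hitting the period every single time, not
average throughput.  A minimal sketch - just an illustration, assuming a
POSIX box with clock_nanosleep, which is not what a jammer actually runs on -
of a fixed-period loop that sleeps to an absolute deadline so jitter doesn't
accumulate:)

    #define _POSIX_C_SOURCE 200809L
    #include <time.h>

    #define PERIOD_NS 1000000L                 /* 1 ms control period */

    static void control_step(void)             /* placeholder for the real work */
    {
    }

    int main(void)
    {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (;;) {
            control_step();

            /* advance the deadline by exactly one period and sleep until
               that absolute time; a relative sleep would let drift and
               jitter pile up from one iteration to the next */
            next.tv_nsec += PERIOD_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec  += 1;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }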


Reporting and predicting vehicle locations is considered "real time"
by a small portion of the computing world, I will agree.


And worrying about the vagaries of data transmission over cellular
networks requires no network knowledge below the application level. In
fact, I doubt your programmers even know HOW the cellular network
operates at OSI layers 6 and below.

Ummm... yes.  Our guys write protocols for stuffing vehicle state data
into UDP packets, drivers for talking across funny busses (e.g. J1908
for talking to things like message signs, engine control units,
fareboxes).  Occasionally we have to deal with controlling on-board
video systems and distributing video streams.  Busses also typically
have multiple systems that share a router that talks both cellular
(on-the-road) and WiFi (at the depot) - lots of resource management
going on.  And don't get me started on running data over trunked radio
networks designed for voice.


So?  How much do they (or YOU, for that matter) know about what goes
on in your transmissions?  Do you actually create the bits in the UDP
packets?  Do you actually modulate the packets for sending across the
airwaves, then demodulate them at the bus?  Do you ACK the message
reception, and resend if a packet (or ACK) doesn't make it?

Or do you call a set of API's with a destination and data and let the
system take over?

For our OTAR protocol, we munge the bits.  Same again for J1908.


You actually changed the bits in the header/trailers? Or just the data? If the former, how did you get the network to handle them? Did you rewrite the network code all along the way, also?

Ummm.... when you create a UDP packet, you have to populate the headers and trailers. Our over-the-air protocol is proprietary, so yes, we wrote the code. J1908, I'm not sure - it's a rather funny serial bus protocol that looks like it's based on HDLC framing. Not sure how much of that is done in the chips these days, and how much our guys wrote in the form of drivers.
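
(Not our wire format, obviously - that one's proprietary - but a rough
sketch, with made-up field names, of what populating a vehicle-state
datagram by hand looks like:)

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* hypothetical over-the-air record; real layouts vary by vendor */
    struct avl_report {
        uint32_t vehicle_id;
        int32_t  lat_udeg;              /* latitude in micro-degrees  */
        int32_t  lon_udeg;              /* longitude in micro-degrees */
        uint16_t speed_cms;             /* ground speed, cm/s         */
        uint16_t flags;                 /* doors, ramp, alarms, ...   */
    };

    /* serialize field by field, in network byte order - you own every
       bit that goes over the air, padding and endianness included */
    static size_t pack_report(const struct avl_report *r, uint8_t buf[16])
    {
        uint32_t u32; uint16_t u16; size_t off = 0;
        u32 = htonl(r->vehicle_id);          memcpy(buf + off, &u32, 4); off += 4;
        u32 = htonl((uint32_t)r->lat_udeg);  memcpy(buf + off, &u32, 4); off += 4;
        u32 = htonl((uint32_t)r->lon_udeg);  memcpy(buf + off, &u32, 4); off += 4;
        u16 = htons(r->speed_cms);           memcpy(buf + off, &u16, 2); off += 2;
        u16 = htons(r->flags);               memcpy(buf + off, &u16, 2); off += 2;
        return off;
    }

    /* UDP is fire-and-forget: loss, duplication and reordering are the
       application's problem - which is rather the point of the argument */
    ssize_t send_report(int sock, const struct sockaddr_in *dest,
                        const struct avl_report *r)
    {
        uint8_t buf[16];
        return sendto(sock, buf, pack_report(r, buf), 0,
                      (const struct sockaddr *)dest, sizeof *dest);
    }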





When I worked on military systems - trainers, weapons control,
command &
control, intelligence, ..... - you couldn't turn your head without
having to deal with "real world" issues - both of the hardware and
networks one was running on, and the external world you had to
interact
with.


I'm sorry your military systems were so unstable.  But programmers
don't worry about the hardware.

Hah.. Tell that to someone who's designing a radar jammer, or a fire
control system.  Now we're talking "hard real-time" - no timing jitter
allowed as control loops execute. Yeah, now we're talking about custom microcode to execute funny algorithms, and interrupt-driven programming.


I'm sure you've designed a lot of radar jammers.  And I'll bet you've
designed fire control systems, also.

Radar jammers yes.  Fire control no.

Whoops - you said microcode.  You just exposed yourself.  The only
people who write microcode are the designers of the CPU.  Once it's
out of the factory, people may write assembler (translated into
machine code).  But they don't write microcode.

Umm yup.  That was 1984, designing an embedded processor that went into
electronic warfare pods.  16-layer board, wall-to-wall ECL flatpacks,
mostly MSI.  We designed to a standard Air Force macro-instruction set;
and added about 6 very specialized additional instructions to twiddle
bits in very funny ways.  I wrote a bunch of the microcode, personally.


And real time systems do not use interrupts.  They cannot handle the
semi-random timing changes caused by interrupts.  If there are
interrupts which must be handled, they are handled by an entirely
different processor whose only job is to deal with interrupts.

Tell me how many real-time systems you've designed.  Most of the ones
I've dealt with have been very much event driven.


Only about a dozen. And while all of them were event driven, most (a couple weren't all that critical in the timing) handled interrupts via a separate processor from the main controller.


For that matter, anybody who has to deal with displays - e.g. for
simulators or games - has to worry about hardware.


There are thousands of display cards out there for the PC alone. Each
one has a different interface.  Those who write real simulators do it
for a specific set of hardware.  Game developers can't (and don't)
worry about all the different hardware out there.

Then again, we never hired "programmers" - we hired software engineers,
who were expected to have some serious electrical engineering and
computer hardware background.


Are they licensed engineers?  See above.

They tend to have Masters and Doctoral degrees from electrical
engineering, computer engineering, and computer science programs.

That's not answering the question.

No. They are not licensed "professional engineers" (see above). I don't think I've ever encountered a PE working in computing, and only a few EEs with "PE" after their names.





If you think anybody can code a halfway decent distributed
application,
without worrying about latency, transmission errors and recovery,
network topology, and other aspects of the underlying "stuff" - I'd
sure
like some of what you've been smoking.


Been doing it for 30+ years - starting when I was working for IBM back
in '82.

For example?  Distributed systems were still a black art in the early
1980s.  Those were the days of SNA, DECNET, and X.25 public networks -
most of what went across those was remote job entry and remote terminal access. Distributed applications were mostly research projects. Network
links were still 56kbps if you were lucky (and noisy as hell), and an
IBM 3081 ran at what, 40 MIPS or so.


Not at all.  There were a lot of distributed systems on mainframes
back then.  Most large companies had them; airlines were a great
example. But even large banks and insurance companies used distributed
systems.

IBM had perhaps the largest; hundreds of computers around the world
capable of talking to each other and getting data from each other. And
I could log into any of them (if I had authorization) from almost any
terminal in an IBM office, and even many customer sites.

As I recall, most of the business stuff was running on centralized
mainframes, with distributed terminals.  Though I guess SAGE dates back
to the 50s and that was certainly IBM, and both air traffic control and
things like the SABRE and APOLLO reservation systems emerged from that
technology base.


Then you don't know much about large companies. Smaller companies, yes. But not the big companies. Way too much for one mainframe to handle. And none of them had anything to do with SAGE, SABRE or any other reservation system.

I believe we're talking about the 1980s. A lot of that technology was pretty germinal at that point - and a lot of it came from IBM's STRETCH computer developed for SAGE. And you're the one who mentioned airlines.



What could you have been working on in the 1980s that WASN'T incredibly
sensitive to memory use, disk usage, cpu use, and network bandwidth?


Large mainframes.  Distributed computing resolved much of the memory
and cpu usage; network bandwidth was a problem, but then programmers
knew how to minimize data transfer.  For instance, rather than
transfer large amounts of data across the network then filter on the
receiving end, programs on the hosting computer would do the filtering
and only send what was necessary.

In other words, programmers had to know enough about the network's
limitations to worry about minimizing data transfer (as opposed to
today's coders who seem to think that bandwidth is infinite, and then
get messed up when they start writing apps for cell phones, where people
get charged for bandwidth).  I expect latency and error recovery were
also considerations for those systems.  (I also seem to recall learning
an awful lot about HDLC framing back in the day).
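
(Same principle we still follow today: do the filtering where the data
lives and ship only the survivors.  Trivial sketch, hypothetical record
layout:)

    #include <stddef.h>
    #include <stdint.h>

    struct txn {                    /* hypothetical host-side record */
        uint32_t account;
        uint32_t branch;
        int32_t  amount_cents;
    };

    /* filter on the host rather than dumping the whole file down a noisy
       56kbps link and filtering at the far end; the caller transmits only
       the 'kept' records that come back */
    size_t select_for_branch(const struct txn *all, size_t n,
                             uint32_t branch, struct txn *out)
    {
        size_t kept = 0;
        for (size_t i = 0; i < n; i++)
            if (all[i].branch == branch)
                out[kept++] = all[i];
        return kept;
    }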


Nope. They knew nothing about the network. All they knew about was how to code efficiently - including minimizing data transfer. They didn't know whether the data was going across a high speed SDLC connection using SNA or a 120 baud ASCII connection. And it wasn't important, because the connection could change at some point without rewriting any of their code. They were good programmers, because they wrote code which did not depend on underlying hardware or network protocols.

As you say "programmers" - not software developers, not software engineers. As I recall, in a 1980s IBM environment, "system analysts" were the ones who did all brain work.

Which brings us back to the starting point of this conversation - defining "programmer" - and my comment that I'd never hire anyone who is so low level as how you're using the term (and I'd surely never advise anyone to pursue a degree in "programming" - of which there seem to be more and more offered every year).




Oh, and by the way, an awful lot of big data applications are run on
standard x86 hardware - in clusters and distributed across networks.
Things like network topology and file system organization (particularly
vis-a-vis how data is organized on disk) REALLY impact performance.


That's what I mean about what people think is "big data". Please show
me ANY X86 hardware which can process petabytes of data before the end
of the universe.  THAT is big data.

Too many people who have never seen anything outside of the PC world
think "big data" is a few gigabytes or even a terabyte of information.
In the big data world, people laugh at such small amounts of data.

There are an awful lot of Beowulf clusters out there.  Stick a few hundred
(or a few thousand) multi-core processors in a rack and you've got a
supercomputer - and that's what most of the big data processing I know
of is being done on.  Heck, Google's entire infrastructure is built out
of commodity x86 hardware.  (There's a reason we don't see a lot of
surviving supercomputer manufacturers.)


Google isn't a "supercomputer".  It is a bunch of processors handling
individual requests.  The real supercomputers do things like weather
forecasting, and nuclear explosion simulations, where billions of
individual pieces of data must be processed concurrently.

Google's indexing and search algorithms (and advertising analytics) are
very definitely "big data" applications.


Google is a data warehouse. While they deal with lots of data, it is not considered a "supercomputer" by anyone who knows supercomputers.

As to "real supercomputers" - NCAR's Yellowstone processor is built out
of something like 10,000 Xeon 8-core processors.  It's the backplace
design and operating system environment that turn it into a
"supercomputer."


I didn't say they weren't commodity chips. But they aren't just a bunch of PC's in a huge network, like Google is.

Google's a bit more than that. It's a globally distributed system optimized for search and analytics across about the biggest database out there. If that isn't "big data" I don't know what is. Most of the "smarts" is at the system and software levels.


Yes, the early generations of supercomputers had small numbers of
processors that ran REALLY fast (for their day), and did all kinds of
fancy pipelining.  Today, most "supercomputers" tend to exploit massive
parallelism, but when you dig under the covers, the building blocks are
commodity chips.  The days of really specialized hardware (like the
Connection Machine, or BBN Butterfly) seem to be over (though there are
some interesting things being done both for graphics processing and very
low level telecom functions).


And the reason we don't see a lot of surviving supercomputer
manufacturers is there never were very many of them in the first
place.  The cost of the supercomputers was so high only a very few
could afford them - generally governments or government contractors.

Disk i/o is the problem - again, a problem that requires fine-tuning
everything from algorithms to file system design to how one stores and
accesses data on the physical drives.


Yes, that has always been the biggest problem on PC's. Mainframes use
an entirely different means, and transfer data at rates PC's only
dream about.  Additionally, the transfer is done entirely in hardware,
requiring no CPU involvement between the time the command list (and there could be
dozens or even hundreds of commands in the list) is turned over to the
I/O system and the last command is completed (or an I/O error occurs).

Big deal - fiber channel SANs, with Oracle running on top. Works fine
for some things, not so well for large classes of operations. The
bottleneck remains the drives themselves (physical seek times for
heads).  That's why we've been seeing a new generation of file systems
tailored for massive search.


You're the one who brought it up, so it must be a big deal. And I'm not talking about just the path between the disk and the processor. For instance, disks for mainframes typically have faster seek times than PC disks. Also, mainframe disks spin faster and can read from multiple heads concurrently. The controllers also have larger buffers (multi-GB is not abnormal today). The result is a mainframe disk controller reads an entire cylinder in less time than it takes a PC disk to read one track.

Plus one disk controller can handle multiple disks concurrently and a mainframe can have multiple controllers concurrently. Spreading data across multiple disks and controllers further reduces access time.

Yes... and someone has to figure out the best way to spread that data, as a function of the kinds of operations that are going on.
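
(The arithmetic itself is simple enough - which spindle a logical block
lands on under plain round-robin striping - the judgment call is picking
the stripe size and layout to match the actual access patterns:)

    #include <stdint.h>

    struct placement { uint32_t disk; uint64_t block; };

    /* toy round-robin striping: map a logical block number onto
       (disk, block-on-that-disk); ndisks and stripe_blocks are the
       knobs somebody has to choose based on the workload */
    struct placement place(uint64_t lblock, uint32_t ndisks,
                           uint32_t stripe_blocks)
    {
        uint64_t stripe = lblock / stripe_blocks;   /* which stripe        */
        uint64_t within = lblock % stripe_blocks;   /* offset inside it    */
        struct placement p;
        p.disk  = (uint32_t)(stripe % ndisks);      /* rotate across disks */
        p.block = (stripe / ndisks) * stripe_blocks + within;
        return p;
    }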




I might also mention all the folks who have been developing algorithms
that take advantage of the unique characteristics of graphics
processors
(or is algorithm design outside your definition of "programming" as
well?).


What about them?

Do you consider figuring out algorithms to be part of "programming?" If
so, how do you exclude a knowledge of machine characteristics from the
practice of "programming?"


Yes, I consider figuring out algorithms to be a part of programming.
But unless you are programming a real-time system with critical timing
issues (see above), machine characteristics shouldn't drive the design.
Rather, a good programmer will write the algorithm in a
machine-independent manner for greater portability.

I'm trying to figure out what kinds of things you see "programmers"
working on that don't need serious knowledge of the underlying operating
system, computer hardware, and i/o environment.



How about any application program?  How much do you need to know about
the OS to write a word processor, for instance?  Or an IDE such as
Eclipse - written entirely in Java, and runs on several different
platforms.  Nothing in the application knows anything about the
underlying OS, hardware or I/O.

Not a lot of people are writing word processors these days, or IDEs for
that matter.  (Actually, any halfway decent development environment needs
to support low-level debugging - process trace, timing analysis, resource
usage - all kinds of things that take you deep into the o/s and hardware).


You asked for examples. These are two of them. I can list them for days on end. Good programmers can (and do) write code which is hardware and OS agnostic.

Even most of Debian and its applications are hardware agnostic.

Debian is an operating system, not an application, and like all operating systems it's designed to abstract away hardware details. As you get into the kernel and into a lot of systems functions, it's very much not hardware agnostic - it just hides that stuff from application software.

As to applications - sure, simple applications can be hardware agnostic, complicated ones generally can't be, and people who write complicated applications had better know a lot about hardware and o/s level stuff.





In fact, very few applications need to know details about the
underlying OS or hardware.  And any application programmer who codes
unnecessarily to the hardware is doing his employer a huge disservice.

Judging from what crosses job boards these days, MOST applications tend
to be resource intensive, and hence algorithm design tends to require
deep knowledge of o/s and hardware:
- military systems for sure
- avionics
- industrial control
- video (lots of demand for developers of mobile video applications -
need a lot of knowledge of both video hardware and compression schemes
and the hardware that supports them)
- gaming (need to know a lot about specialized graphics hardware)
- automotive computing (both user facing and on-board systems control)


Job boards are not a good indication of what is being used. It is only an indication of what people can't find. They have always been heavy on specialized experience - and not just in the computer industry.

And different job boards lean towards different specializations. Looking at one job board is far from any reasonable indication of the state of the industry and what is in demand.

Just for the hell of it, I went to a job board (indeed.com) and searched
on "programmer" jobs in the Boston area, and there seems to be an awful
lot of call for people who can write data collection systems (i/o
intensive) and video systems.


How many COBOL programmers were requested? Never mind the fact it is still the most used language in the world.

Probably not as many anymore. The great secret of the year 2000 non-crisis is that a lot of money was spent on recoding legacy applications - mostly from COBOL to other things.

Anybody who thinks they can write code (be it Java, C, or .NET crap)
without knowing a lot about the environment their code is going to run
in - never mind without general analytic and design skills - is going to
have a very short-lived career.


Anyone who can't write good cross-platform code which doesn't depend on specific hardware and software has already limited his career. Anyone who can write good cross-platform code has a much greater career ahead of him. It is much harder than writing platform-specific code.


For anything but the most simplistic applications, cross-platform code has to accommodate different platforms, not write to the lowest common denominator (GPU model x, do things one way, GPU model y do things another, no GPU do it in software). Ever notice how many different tests get run by any halfway decent makefile, and how many options get set at compile time, based on what the run-time environment looks like?
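
(A toy example of "accommodate, don't dumb down," using only the compiler's
standard predefined macros - not any particular project's build machinery:)

    #include <stddef.h>
    #if defined(__AVX__)
    #include <immintrin.h>
    #endif

    /* same operation, a different code path per platform capability,
       with a portable fallback so the code still runs everywhere */
    void scale(float *v, size_t n, float k)
    {
        size_t i = 0;
    #if defined(__AVX__)
        __m256 kk = _mm256_set1_ps(k);
        for (; i + 8 <= n; i += 8)
            _mm256_storeu_ps(v + i,
                             _mm256_mul_ps(_mm256_loadu_ps(v + i), kk));
    #endif
        for (; i < n; i++)          /* scalar tail / non-AVX fallback */
            v[i] *= k;
    }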

It's not about platform-specific vs. non-platform-specific code, it's about knowing what your platform can and can't do, and making sure that your code can run on a broad variety of platforms. It's also about picking the right platform for an application (ever notice how many applications include a list of minimum specs for the hardware they run on?).




--
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra

