
Re: sysadmin qualifications (Re: apt-get vs. aptitude)



On 10/15/2013 6:42 PM, Miles Fidelman wrote:

Sorry for the broken thread.  Let me try this again.

Jerry Stuckle wrote:
On 10/15/2013 2:26 PM, Miles Fidelman wrote:



Geeze Jerry, you're just so wrong, on so many things.

What's a "coder"?  In over 40 years of programming, I've met many
programmers, but no "coders".  Some were better than others - but none
had "limited and low-level skill set".  Otherwise they wouldn't have
been employed as programmers.

If you've never heard the term, you sure have a narrow set of
experiences over those 40 years.  And I've seen a LOT of people with very
rudimentary skills hired as programmers.  Not good ones, mind you. Never
quite sure what they've been hired to do (maybe .NET coding for business
applications?).  All the serious professionals I've come across have
titles like "software engineer" and "computer engineer."


I didn't say I hadn't heard of the term; I said I've never met any. I have, however, seen a lot of half-assed programmers call others they consider their inferiors "coders".

And most of the serious professionals I've come across have the title "programmer". In many U.S. states, you can't use the term "engineer" legally unless you are registered as one with the state.

For someone to claim they are an "Engineer" without the appropriate qualifications (i.e., a 4-year degree and passing the required test(s) in their jurisdiction) is not only a lie, it is illegal in many places.


And "Systems Programming" has never mean someone who writes operating
systems; I've known a lot of systems programmers, who's job was to
ensure the system ran.  In some cases it meant compiling the OS with
the require options; other times it meant configuring the OS to meet
their needs.  But they didn't write OS's.

It's ALWAYS meant that, back to the early days of the field.


That's very interesting. Because when I was working for IBM (late 70's on) on mainframes, all of our customers had "Systems Programmers". But NONE of them wrote an OS - IBM did that. The systems programmers were, however, responsible for installation and fine tuning of the software on the mainframes.

There are a lot of "Systems Programmers" out there doing exactly that job. There are very few in comparison who actually write operating systems.

It seems your experience is somewhat limited to PC-based systems.

Can't find any "official definition" - but the Wikipedia definition is
reasonably accurate: "System programming (or systems programming) is
the activity of computer programming system software. The primary
distinguishing characteristic of systems programming when compared to
application programming is that application programming aims to produce
software which provides services to the user (e.g. word processor),
whereas systems programming aims to produce software which provides
services to the computer hardware (e.g. disk defragmenter). It requires
a greater degree of hardware awareness."


And Wikipedia is an authority? Nope. It's just one person's opinion. Tomorrow someone else may update it with another opinion.

That's been the usage since the days I took courses in it at MIT (early
1970s), and how the term is used in all the textbooks by folks like
Jerry Saltzer, John Donovan, Corbato - names you should recognize if
you've been in the field for 40 years.


That's very interesting, because some of the Systems Programmers I knew also graduated from MIT. And they agreed they were systems programmers.

And yes, I recognize those names. But they never were that highly regarded except in academia.


The programmers where I'm currently working - application systems for
buses (vehicle location, engine monitoring and diagnostics, scheduling,
passenger information) -- yeah, they have to worry about things like how
often vehicles send updates over-the-air, the vagaries of data
transmission over cell networks (what failure modes to account for),
etc., etc., etc.


That's not "real time".  "Real time" is when you get an input and you
have to make an immediate decision and output it.  Motor controllers
are one example; constantly adjusting motor speed to keep a conveyor
belt running at optimum capacity as the load changes.  Another is
steering a radiotelescope to aim at a specific point in the sky and
keep it there.  Oh, and once the radiotelescope is properly aimed,
process the information coming from it and 30-odd others spaced over a
couple of hundred square miles, accounting for the propagation delay
from each one, and combining the outputs into one digital signal which
can be further processed or stored.
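
To make that concrete, here's a bare-bones sketch of such a fixed-rate
control loop (POSIX C; the read_speed()/set_drive() stubs stand in for
real sensor and actuator I/O, which is hypothetical here):

    #define _POSIX_C_SOURCE 200809L
    #include <time.h>

    #define PERIOD_NS 1000000L                  /* 1 ms control period */

    /* stand-ins for real hardware I/O */
    static double read_speed(void)        { return 0.0; }
    static void   set_drive(double level) { (void)level; }

    int main(void)
    {
        struct timespec next;
        double target = 100.0, integral = 0.0;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            /* sample the input, compute the correction, drive the output */
            double error = target - read_speed();
            integral += error * (PERIOD_NS / 1e9);
            set_drive(0.5 * error + 0.1 * integral);  /* simple PI correction */

            /* sleep to an absolute deadline so timing error doesn't accumulate */
            next.tv_nsec += PERIOD_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_sec++;
                next.tv_nsec -= 1000000000L;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }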

There's a spectrum of real-time.  Everything from radar jamming
(picoseconds count), to things that happen on the order of seconds
(reporting and predicting vehicle locations).  Basically anything where
timing and i/o count.


Reporting and predicting vehicle locations is considered "real time" by a small portion of the computing world, I will agree.


And worrying about the vagaries of data transmission over cellular
networks requires no network knowledge below the application level. In
fact, I doubt your programmers even know HOW the cellular network
operates at OSI layers 6 and below.

Ummm... yes.  Our guys write protocols for stuffing vehicle state data
into UDP packets, drivers for talking across funny busses (e.g. J1908
for talking to things like message signs, engine control units,
fareboxes).  Occasionally we have to deal with controlling on-board
video systems and distributing video streams.  Busses also typically
have multiple systems that share a router that talks both cellular
(on-the-road) and WiFi (at the depot) - lots of resource management
going on.  And don't get me started on running data over trunked radio
networks designed for voice.


So? How much do they (or YOU, for that matter) know about what goes on in your transmissions? Do you actually create the bits in the UDP packets? Do you actually modulate the packets for sending across the airwaves, then demodulate them at the bus? Do you ACK the message reception, and resend if a packet (or ACK) doesn't make it?

Or do you call a set of API's with a destination and data and let the system take over?
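
For comparison, a sketch of what that API-level view looks like (plain
BSD sockets, with a made-up vehicle_state struct and a placeholder
destination) - one call with the data and a destination, and everything
below UDP is someone else's problem:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    struct vehicle_state {          /* hypothetical wire format */
        uint32_t vehicle_id;
        int32_t  lat_e7, lon_e7;    /* degrees * 1e7 */
        uint32_t timestamp;
    };

    static int send_state(const struct vehicle_state *st,
                          const char *host, uint16_t port)
    {
        struct sockaddr_in dst;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
            return -1;

        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port = htons(port);
        if (inet_pton(AF_INET, host, &dst.sin_addr) != 1) {
            close(fd);
            return -1;
        }

        /* one call: destination plus data; the stack, the driver, and the
           radio link handle everything below UDP (real code would also
           serialize each field to a defined byte order before sending) */
        ssize_t n = sendto(fd, st, sizeof *st, 0,
                           (struct sockaddr *)&dst, sizeof dst);
        close(fd);
        return n == (ssize_t)sizeof *st ? 0 : -1;
    }

    int main(void)
    {
        struct vehicle_state st = { 42, 423600000, -710600000, 0 };
        return send_state(&st, "127.0.0.1", 4242);  /* placeholder destination */
    }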


When I worked on military systems - trainers, weapons control, command &
control, intelligence, ..... - you couldn't turn your head without
having to deal with "real world" issues - both of the hardware and
networks one was running on, and the external world you had to interact
with.


I'm sorry your military systems were so unstable.  But programmers
don't worry about the hardware.

Hah.. Tell that to someone who's designing a radar jammer, or a fire
control system.  Now we're talking "hard real-time" - no timing jitter
allowed as control loops execute.  Yeah, now we're talking about custom
microcode to execute funny algorithms, and interrupt-driven programming.


I'm sure you've designed a lot of radar jammers. And I'll bet you've designed fire control systems, also.

Whoops - you said microcode. You just exposed yourself. The only people who write microcode are the designers of the CPU. Once it's out of the factory, people may write assembler (translated into machine code). But they don't write microcode.

And real time systems do not use interrupts. They cannot handle the semi-random timing changes caused by interrupts. If there are interrupts which must be handled, they are handled by an entirely different processor whose only job is to deal with interrupts.

For that matter, anybody who has to deal with displays - e.g. for
simulators or games - has to worry about hardware.


There are thousands of display cards out there for the PC alone. Each one has a different interface. Those who write real simulators do it for a specific set of hardware. Game developers can't (and don't) worry about all the different hardware out there.

Then again, we never hired "programmers" - we hired software engineers,
who were expected to have some serious electrical engineering and
computer hardware background.


Are they licensed engineers?  See above.


If you think anybody can code a halfway decent distributed application,
without worrying about latency, transmission errors and recovery,
network topology, and other aspects of the underlying "stuff" - I'd sure
like some of what you've been smoking.


Been doing it for 30+ years - starting when I was working for IBM back
in '82.

For example?  Distributed systems were still a black art in the early
1980s.  Those were the days of SNA, DECNET, and X.25 public networks -
most of what went across those was remote job entry and remote terminal
access.  Distributed applications were mostly research projects. Network
links were still 56kbps if you were lucky (and noisy as hell), and an
IBM 3081 ran at what, 40 MIPS or so.


Not at all. There were a lot of distributed systems on mainframes back then. Most large companies had them; airlines were a great example. But even large banks and insurance companies used distributed systems.

IBM had perhaps the largest; hundreds of computers around the world capable of talking to each other and getting data from each other. And I could log into any of them (if I had authorization) from almost any terminal in an IBM office, and even many customer sites.

What could you have been working on in the 1980s that WASN'T incredibly
sensitive to memory use, disk usage, cpu use, and network bandwidth?


Large mainframes. Distributed computing resolved much of the memory and cpu usage; network bandwidth was a problem, but then programmers knew how to minimize data transfer. For instance, rather than transfer large amounts of data across the network then filter on the receiving end, programs on the hosting computer would do the filtering and only send what was necessary.
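
For example, a sketch of the idea (with a made-up record layout): filter
on the machine that holds the data and transmit only the matches,
instead of shipping every record and filtering at the receiver:

    #include <stddef.h>
    #include <stdio.h>

    struct record { int region; double value; };   /* made-up record layout */

    /* copy only the matching records into the buffer that will be sent */
    static size_t select_for_send(const struct record *all, size_t n,
                                  int wanted_region, struct record *out)
    {
        size_t kept = 0;
        for (size_t i = 0; i < n; i++)
            if (all[i].region == wanted_region)    /* filter where the data lives */
                out[kept++] = all[i];
        return kept;                               /* transmit 'kept' records, not 'n' */
    }

    int main(void)
    {
        struct record db[] = { {1, 10.0}, {2, 20.0}, {1, 30.0} };
        struct record outbuf[3];
        size_t n = select_for_send(db, 3, 1, outbuf);
        printf("sending %zu of 3 records\n", n);
        return 0;
    }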

And back then, programs were efficient. They weren't the oversized crap they are today. Heck, back in the mid 80's, I even had a C compiler on a single 360KB diskette, with lots of room left over. And it did basically the same thing the multiple megabyte compilers do today. Sure, the language has changed - but not that much.


Oh, and by the way, an awful lot of big data applications are run on
standard x86 hardware - in clusters and distributed across networks.
Things like network topology, file system organization (particularly
vis-a-vis how data is organized on disk) REALLY impact performance.


That's what I mean about what people think is "big data".  Please show
me ANY x86 hardware which can process petabytes of data before the end
of the universe.  THAT is big data.

Too many people who have never seen anything outside of the PC world
think "big data" is a few gigabytes or even a terabyte of information.
In the big data world, people laugh at such small amounts of data.

There are an awful lot of Beowulf clusters out there.  Stick a few
hundred (or a few thousand) multi-core processors in a rack and you've
got a supercomputer - and that's what most of the big data processing I
know of is being done on.  Heck, Google's entire infrastructure is built
out of commodity x86 hardware.  (There's a reason we don't see a lot of
surviving supercomputer manufacturers.)


Google isn't a "supercomputer". It is a bunch of processors handling individual requests. The real supercomputers do things like weather forecasting, and nuclear explosion simulations, where billions of individual pieces of data must be processed concurrently.

And the reason we don't see a lot of surviving supercomputer manufacturers is there never were very many of them in the first place. The cost of the supercomputers was so high only a very few could afford them - generally governments or government contractors.


Disk i/o is the problem - again, a problem that requires fine-tuning
everything from algorithms to file system design to how one stores and
accesses data on the physical drives.


Yes, that has always been the biggest problem on PC's. Mainframes use an entirely different mechanism, and transfer data at rates PC's only dream about. Additionally, the transfer is done entirely in hardware, requiring no CPU involvement from the time the command list (and there could be dozens or even hundreds of commands in the list) is turned over to the I/O subsystem until the last command completes (or an I/O error occurs).

I might also mention all the folks who have been developing algorithms
that take advantage of the unique characteristics of graphics processors
(or is algorithm design outside your definition of "programming" as
well?).


What about them?

Do you consider figuring out algorithms to be part of "programming?"  If
so, how do you exclude a knowledge of machine characteristics from the
practice of "programming?"


Yes, I consider figuring out algorithms to be a part of programming. But unless you are programming a real-time system with critical timing issues (see above), a good programmer will write the algorithm in a machine-independent manner for greater portability.
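
A small example of what "machine-independent" means in practice - a
sketch that serializes a 32-bit value in a defined byte order instead of
dumping the host's in-memory representation, so the same code is correct
on any CPU or compiler:

    #include <stdint.h>
    #include <stdio.h>

    /* write a 32-bit value in a defined (big-endian) byte order */
    static void put_u32_be(uint8_t out[4], uint32_t v)
    {
        out[0] = (uint8_t)(v >> 24);
        out[1] = (uint8_t)(v >> 16);
        out[2] = (uint8_t)(v >> 8);
        out[3] = (uint8_t)v;
    }

    /* read it back the same way, regardless of the host's endianness */
    static uint32_t get_u32_be(const uint8_t in[4])
    {
        return ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16)
             | ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
    }

    int main(void)
    {
        uint8_t buf[4];
        put_u32_be(buf, 123456789u);
        printf("%u\n", get_u32_be(buf));   /* same answer on any machine */
        return 0;
    }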

I'm trying to figure out what kinds of things you see "programmers"
working on that don't need serious knowledge of the underlying operating
system, computer hardware, and i/o environment.




How about any application program? How much do you need to know about the OS to write a word processor, for instance? Or an IDE such as Eclipse - written entirely in Java, and running on several different platforms? Nothing in the application knows anything about the underlying OS, hardware, or I/O.
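
A trivial illustration of the point - this sketch counts lines in a file
using nothing but the standard C library, and compiles and runs
unchanged on any platform with a C compiler; the application neither
knows nor cares what the OS does underneath fopen() and getc():

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "r");   /* the OS-specific work happens below here */
        if (!f) {
            perror(argv[1]);
            return 1;
        }
        long lines = 0;
        int c;
        while ((c = getc(f)) != EOF)
            if (c == '\n')
                lines++;
        printf("%ld lines\n", lines);
        fclose(f);
        return 0;
    }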

In fact, very few applications need to know details about the underlying OS or hardware. And any application programmer who codes unnecessarily to the hardware is doing his employer a huge disservice.

Jerry

