
Re: sysadmin qualifications (Re: apt-get vs. aptitude)



berenger.morel@neutralite.org wrote:
On 16.10.2013 17:51, Jerry Stuckle wrote:
I only know a few people who actually like them :)
I liked them too, at one time, but since I can now use standard smart
pointers in C++, I tend to avoid them. I had so much trouble with them
that now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not, because of technical problems, but I do not know a lot about that. C++ is easy to learn, but hard to master.)
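
(As an aside - std::reference_wrapper, which came in with C++11, is the usual workaround for that wish: a container of reference_wrappers behaves much like a container of references. A minimal sketch, just to illustrate the idea:)

  #include <functional>  // std::reference_wrapper, std::ref
  #include <iostream>
  #include <vector>

  int main() {
      int a = 1, b = 2;

      // std::vector<int&> is ill-formed, but a vector of reference_wrappers
      // gives much the same effect.
      std::vector<std::reference_wrapper<int>> refs{std::ref(a), std::ref(b)};

      refs[0].get() = 10;        // writes through to 'a'
      std::cout << a << "\n";    // prints 10
  }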


Good design and code structure eliminates most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but in
real time processing they are an additional overhead (and an unknown
one at that since you don't know the underlying libraries).

Depends on the smart pointer. shared_ptr does indeed have a runtime cost, since it maintains additional data, but unique_ptr does not, afaik; it is built from pure templates, so the cost is paid only at compile time.
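
To make that concrete - a rough sketch, assuming a typical implementation (the standard doesn't mandate exact layouts, but this is how it usually falls out):

  #include <cstdio>
  #include <memory>

  int main() {
      // unique_ptr is normally just a raw pointer with compile-time wrapping:
      // no control block, no reference count to maintain.
      std::unique_ptr<int> u(new int(42));

      // shared_ptr carries an extra pointer to a control block and bumps an
      // (often atomic) reference count on every copy and destruction.
      std::shared_ptr<int> s(new int(42));
      std::shared_ptr<int> s2 = s;           // ref count goes 1 -> 2 at run time

      std::printf("sizeof unique_ptr: %zu\n", sizeof u);  // usually one pointer
      std::printf("sizeof shared_ptr: %zu\n", sizeof s);  // usually two pointers
  }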

You guys should love LISP - it's pointers all the way down. :-)

So, what you call an OS is only drivers+kernel? If so, then ok. But some people consider that it includes various other tools which do not require hardware access. I spoke about graphical applications, and you disagree. A matter of opinion, or maybe I did not use the right ones, I do not know. So, what about dpkg in Debian? Is it a part of the OS? Isn't it a ring 3 program? The same for tar or the shell?


Boy do you like to raise issues that go into semantic grey areas :-)

One man's opinion only: o/s refers to the code that controls/mediates access to system resources, as distinguished from application software. In an earlier day, you could say that it consisted of all the privileged code, but these days, particularly with Linux, an awful lot of o/s code runs in userland - so it's definitely more than just the kernel and drivers.

But none of this has anything to do with the need to understand the basics
of what you use when writing a program. Not understanding, in broad terms,
how a resource you acquired works implies that you will not be able to
manage it correctly yourself. That holds for RAM, but also
for the CPU, network sockets, etc.


Do you know how the SQL database you're using works?

Sure do.  Don't you?

Kinda have to, to install and configure it; choose between engine types (e.g., InnoDB vs. MyISAM for MySQL). And if you're doing any kind of mapping, you'd better know about spatial extensions (PostGIS, Oracle Spatial). Then you get into triggers and stored procedures, which are somewhat product-specific. And that's before you get into things like replication, transaction rollbacks, multi-phase commits, etc.

For that matter, it kind of helps to know about when to use an SQL database, and when to use something else (graph store, table store, object store, etc.).



No, but I do understand why comparing text is slower than comparing integers on x86 computers. Because I know that an int can be stored in one word, which can be compared with only one instruction, while text requires comparing more than one word, which is indeed slower. And it can get even worse when the text is not ASCII. So I can use that understanding to explain why I usually avoid using text as keys. But it happens that sometimes the more pressing cost is not speed but memory, and so sometimes I'll use text as keys anyway. Knowing the word size of the SQL server is not needed to make things work, but it helps to make them work faster, instead of requiring you to buy more hardware.
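
For what it's worth, that point is easy to see in code - a rough illustration only (what a real SQL engine actually does per comparison is of course more involved):

  #include <cstring>

  // An integer key fits in one machine word: essentially one compare instruction.
  bool int_key_less(long a, long b) {
      return a < b;
  }

  // A text key is compared byte by byte, so the cost grows with its length
  // (and gets murkier still once the text isn't plain ASCII).
  bool text_key_less(const char* a, const char* b) {
      return std::strcmp(a, b) < 0;
  }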

On the other hand, I could say that building SQL queries is not my job, and leave it to specialists who are experts on the specific hardware + specific SQL engine in use, to build better queries. They will indeed build them better than I actually can, but that has a time overhead and requires hiring specialists, so a higher price which may or may not be possible.

Seems to me that you're more right on with your first statement. How can one not consider building SQL queries as part of a programmer's repertoire, in this day and age? Pretty much any reasonably complicated application these days is a front end to some kind of database - and an awful lot of coding involves translating GUI requests into database transactions. And that's before recognizing how much code takes the form of stored procedures.

Do you know how
the network works?  Do you even know if you're using wired or wireless
networks?

I said that basic knowledge is what gets used. Knowing what a packet is, and that depending on the protocol you use packets will have more or less space available, lets you send as few packets as possible and so improve performance. Indeed, it would not stop things from working if you send 3 packets where you could have sent only 2, but it will cost less, and so I think it would be a better program.
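
The arithmetic behind that is simple enough - a back-of-the-envelope sketch (the MTU and header sizes here are just typical illustrative values):

  #include <cstdio>

  int main() {
      // Illustrative numbers: a 1500-byte Ethernet MTU minus ~20 bytes of IPv4
      // header and ~20 bytes of TCP header leaves about 1460 bytes of payload.
      const long payload_per_packet = 1460;
      const long message_size       = 4000;   // application data to send

      // Ceiling division: how many packets the message needs.
      long packets = (message_size + payload_per_packet - 1) / payload_per_packet;
      std::printf("%ld bytes -> %ld packets\n", message_size, packets);  // 3

      // Trimming the message below 2 * 1460 = 2920 bytes (or batching writes
      // sensibly) would send 2 packets instead of 3 - the saving described above.
      return 0;
  }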

Probably even more than that. For a lot of applications, there's a choice of protocols available; as well as coding schemes. If you're building a client-server application to run over a fiber network, you're probably going to make different choices than if you're writing a mobile app to run over a cellular data network. There are applications where you get a big win if you can run over IP multicast (multi-player simulators, for example) - and if you can't, then you have to make some hard choices about network topology and protocols (e.g., star network vs. multicast overlay protocol).

For now, I should say that knowing the basics of the internals allows you to build more efficient software, but:

Floating point numbers are another area where understanding the basics helps you understand what is going on. They are not precise (and, no, I do not know exactly how they work; I only know the basics), and this can give you bugs if you do not know that their values should not be considered as reliable as integers'. (I am only talking about floating point numbers, not fixed-point numbers or whatever they are called.) But, again, it is not *needed*: you can always have someone tell you what to do and do it without understanding why. You'll probably make the same error again, or apply the trick he told you less effectively the next time, but it will work.

And here we are not talking about simple efficiency, but about something which can make an application completely unusable, with "random" errors.
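
The classic demonstration of that kind of "random" error (this one is standard IEEE 754 behaviour, not a bug in any particular chip):

  #include <cstdio>

  int main() {
      double a = 0.1 + 0.2;

      // Neither 0.1 nor 0.2 has an exact binary representation, so the sum
      // is not exactly 0.3 and a direct equality test quietly fails.
      if (a == 0.3)
          std::printf("equal\n");
      else
          std::printf("not equal: %.17g\n", a);   // 0.30000000000000004

      // The usual fix: compare against a tolerance instead of using ==.
      const double eps = 1e-9;
      if (a > 0.3 - eps && a < 0.3 + eps)
          std::printf("equal within tolerance\n");
      return 0;
  }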

As in the case when Intel shipped a few million chips that mis-performed arithmetic operations in some very odd cases.


Good programmers can write programs which are independent of the hardware.

No. They can't. They can write programs that behave the same on different hardware, but that requires either:
a. a lot of care in testing for and adapting to different hardware environments (hiding things from the user), and/or
b. selecting a platform that does all of that for you, and/or
c. a lot of attention to making sure that your build tools take care of things for you (selecting the right version of libraries for the hardware that you're installing on).

Running cross-platform, and hiding the details from an end user are things a good programmer SHOULD do (modulo things that SHOULD run differently on different platforms - like mobile GUIs vs. desktop GUIs). But making something run cross-platform generally requires a knowledge of the hardware of each platform the code will run on.
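
Byte order is a small, concrete example of what that knowledge looks like in practice - a sketch (htonl/ntohl are POSIX; the Windows equivalents live in winsock2.h):

  #include <arpa/inet.h>   // htonl/ntohl - POSIX; on Windows use winsock2.h instead
  #include <cstdint>
  #include <cstdio>

  int main() {
      // The same source runs on big- and little-endian machines only because
      // the programmer knew byte order differs and converted explicitly
      // before putting the value on the wire.
      std::uint32_t host_value = 0x12345678;
      std::uint32_t wire_value = htonl(host_value);   // host -> network order

      // On a little-endian x86 box this prints 0x78563412 for the wire form;
      // on a big-endian machine htonl is a no-op and both values match.
      std::printf("host: 0x%08x  wire: 0x%08x\n",
                  (unsigned)host_value, (unsigned)wire_value);
      return 0;
  }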


But now, are most programmers paid by companies with hundreds of programmers?

Really depends on the industry (and whether you actually mean "developer" vs. "programmer"). But even in huge organizations, people tend to work on small project teams (at least that's been my experience).


Not at all.  When you work for a company, especially larger ones, you
use what you are given.  And many of those programmers are working on
mainframes.  Have you priced a mainframe lately? :)

Never seen a single one, to be exact :)

Yes to "you use what you're given" but as to what people are given:

I would expect that most are NOT working on mainframes - though where the line is drawn these days can be arguable. A high-end modern laptop probably packs more memory and horsepower than a 1980-era mainframe. And some of the larger Sun servers arguably fall on the mainframe side of that line.

I would expect a LOT more programmers are working on high-end SPARC servers than mainframes. Heck, even in the IBM world, I expect a lot more code is written for blade servers than mainframes.

And, again, just a guess, but I'm guessing that a huge percentage of programmers these days are writing .NET code on vanilla Windows machines (not that I like it, but it does seem to be a fact of life). A lot of people also seem to be writing stored SQL procedures to run on MS SQL.

I expect that there are NOT a lot of people writing production code to run on Debian, except for use on internal servers. When it comes to writing Unix code for Government or Corporate environments, or for products that run on Unix, the target is usually Solaris, AIX (maybe), or Red Hat.

--
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra

