
Re: sysadmin qualifications (Re: apt-get vs. aptitude)



On 10/17/2013 12:42 PM, berenger.morel@neutralite.org wrote:
On 16.10.2013 17:51, Jerry Stuckle wrote:
I only know a few people who actually like them :)
I liked them too, at one time, but since I can now use standard smart
pointers in C++, I tend to avoid them. I had so much trouble with them
that now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not because of technical problems, but I
do not know much about that. C++ is easy to learn, but hard to master.)


Good design and code structure eliminates most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but in
real time processing they are an additional overhead (and an unknown
one at that since you don't know the underlying libraries).

Depends on the smart pointer. shared_ptr does have a runtime cost,
since it maintains additional data, but unique_ptr does not, AFAIK; it
is built from pure templates, so the cost is at compile time only.


You need to check your templates. Templates generate code. Code needs resources to execute. Otherwise there would be no difference between a unique_ptr and a C pointer.

Plus, in an OS, there are applications. Kernels, drivers, and
applications.
Take Windows: can you honestly say it does not contain applications?
explorer, mspaint, calc, msconfig, notepad, etc. Those are applications,
nothing more, nothing less, and they are part of the OS. They simply
have to work with the OS's API, as you will with any other
application. Of course, you can use more and more layers between your
application and the OS's API to stay in a pure Windows environment;
there are (or were), for example, MFC and .NET. More generally, Qt,
wxWidgets, and GTK are other such tools.


mspaint, calc, notepad, etc. have nothing to do with the OS.  They
are just applications shipped with the OS.  They run as user
applications, with no special privileges; they use standard
application interfaces to the OS, and are not required for any other
application to run.  And the fact they are written in C is immaterial.

So, what you call an OS is only drivers+kernel? If so, then OK. But some
people consider that it includes various other tools which do not
require hardware access. I spoke about graphical applications, and you
disagree. A matter of opinion, or maybe I did not pick good examples; I
do not know.
So, what about dpkg in Debian? Is it a part of the OS? Is it not a ring
3 program? Same for tar or the shell?


Yes, the OS is what is required to access the hardware. dpkg is an application, as are tar and the shell.


Maybe your "standard installation" comes with Gnome DE.  But none of
my servers do.  And even some of my local systems don't have Gnome.
It is not required for any Debian installation.

True. Mine does not have GNOME (or any other DE) either.
Maybe I used too-big applications as examples. So, what about perl?


Perl is a scripting language.  The Perl interpreter is an application.

Just because something is supplied with an OS does not mean it is part of the OS. Even DOS 1.0 came with some applications, like command.com (the command line processor).

But none of this has anything to do with the need to understand the
basics of what you use when writing a program. Not understanding, in
broad terms, how a resource you acquired works implies that you will
not be able to manage it correctly by yourself. That is true for RAM,
but also for the CPU, network sockets, etc.


Do you know how the SQL database you're using works?

No, but I do understand why comparing text is slower than comparing
integers on x86 computers: an int can be stored in one word, which can
be compared with a single instruction, while text requires comparing
more than one word, which is indeed slower. And it can become even
worse when the text is not ASCII.
So I can use that understanding to see why I usually avoid using text
as keys. But sometimes the more pressing cost is not speed but memory,
and so sometimes I will use text keys anyway.
Knowing the word size of the SQL server is not needed to make things
work, but it helps to make them work faster, instead of requiring you
to buy more hardware.


First of all, there is no difference between comparing ASCII text and non-ASCII text, if case-sensitivity is observed. The exact same set of machine language instructions is generated. However, a case-insensitive comparison is definitely slower than a case-sensitive one.

And saying "comparing text is slower than integers" is completely wrong. For instance, a CHAR(4) field can be compared just as quickly as an INT field, and CHAR(2) may in fact be faster, depending on many factors.

But if an extra 4-byte key is going to cause you memory problems, your hardware is already undersized.

On the other hand, I could say that building SQL requests is not my
job, and leave it to specialists who are experts in the specific
hardware + specific SQL engine, to build better requests. They would
indeed build better ones than I can, actually, but that has a time
overhead and requires hiring specialists, so a higher price which may
or may not be possible.


If you're doing database work, SQL IS your job. SQL experts can design more complicated queries quickly, I will agree. However, how the data are used in the program also affects the SQL being used.

Do you know how
the network works?  Do you even know if you're using wired or wireless
networks?

As I said, it is basic knowledge that gets used. Knowing what a packet
is, and that depending on the protocol you use, packets will have more
or less space available, lets you send as few packets as possible and
so improve performance.
Indeed, things would still work if you sent 3 packets where you could
have sent only 2, but sending 2 costs less, and so I think it makes a
better program.


Not necessarily. You send as much information as is needed; no more, no less. And packet size is not fixed; in TCP/IP the maximum packet size depends on your TCP/IP stack configuration, which can be changed.

For now, I would say that knowing the basics of the internals allows
you to build more efficient software.

Floating-point numbers are another area where understanding the basics
helps. They are not precise (and, no, I do not know exactly how they
work; I know only the basics), and this can give you bugs if you do not
know that their values should not be considered as reliable as integer
values. (I am only speaking of floating-point numbers, not fixed-point
numbers or whatever they are called.)
But, again, it is not *needed*: you can always have someone tell you to
do something and do it without understanding why. You will probably
make the same error again, or use the trick he told you about less
effectively the next time, but it will work.


Again, it's a matter of understanding the language and its limitations, not the hardware. COBOL's PACKED DECIMAL type is exact to about as many decimal places as you want, for instance (which is why it is used so heavily in financial institutions).

There is no reason such a data type could not have been created in C. But it would not be nearly as efficient as the float or double types we have now.

And here, we are not talking about simple efficiency, but about
something which can make an application completely unusable, with
"random" errors.


Not at all. Again, it's a matter of understanding the language you are using. Different languages have different limitations.


Good programmers can write programs which are independent of the
hardware.

But being aware of the hardware you target can help you.


Being aware of the limitations of the language you are using is much more important. Nothing you have indicated in this message has anything to do with hardware - other than how the C compiler is used with that hardware.

But I do not think that this is the biggest advantage of C; other
things compete strongly: efficiency, lots of good libraries, and an ISO
standard. I strongly doubt that C was chosen for portability when the
winAPI was written.


I never said it was.  But you can write portable C code which has
graphical interfaces and is cross-platform also, using GTK+.

Yes, I agree on this. But you will have to manage differences between
systems, for example the default location of configuration files. Maybe
I am wrong. Or in fact, I actually am, since if there is no lib to
manage that, one could be written. But someone will have to write it,
and that person is a programmer.


Default configuration file locations are only a matter of common usage. For instance, in Linux, configuration files are typically located in /etc. In Windows, it is in various places, depending on the Windows version and how it is installed. But nothing says you have to use those locations, for instance. And it is not uncommon for an installer (which generally is system-specific because the files go in different places) to add an environment variable pointing to necessary information. The program just queries this environment variable and gets the file location.

So, OK, if you can find a job where every single low-level feature you
require is available through high-level functions/objects, having
knowledge of what you are sitting on is useless. Maybe I am wrong
because I am actually interested in knowing what is behind the screen,
and not only the screen itself. But still, if you only know about your
own stuff, and the man who will deploy it only knows about his own
stuff, won't you need a third person to allow you to communicate? Which
implies a loss of time.

No, it's called a DESIGN - which, amongst other things, defines the communications method between the two. I've been on projects with up to 100 or so programmers working on various pieces. But everything worked because it was properly designed from the outset and every programmer knew how his/her part fit into the whole program.

In my last job, when we had something to release, we usually talked
directly with the people who then had to deploy it, to explain some
requirements and consequences that were not directly our job as
programmers. Granted, I was not employed by Microsoft, Google, or IBM,
but very far from that; we were fewer than 10 devs.
But now, are most programmers paid by companies with hundreds of
programmers?


In the jobs I've had, the programmers have never had to talk to the deployers. Both were given the design details; the programmers wrote to the design and the deployers generated the necessary scripts to deploy what the design indicated. When the programmers were done, the deployers were ready to install.

But to a programmer it's
immaterial how it works; all that's important is they make a request
and get an address or NULL back.

I would be happy if every programmer expected it to return an address
or NULL... sadly, I have seen a lot of code where the result was not
checked. But that is only related to their C knowledge and habits.


I've seen it, also.  Not good, for sure.  But hackers love it.

(OO is not for everything).

Agreed. That's why good languages allow the use of more than one
paradigm, IMHO.


No, you use the language best suited for the project. IMHO it's a shame that C++ allows people to do functional programming; mixing the two creates a huge mess. SmallTalk and Java are much better at this.

What gets me are people who read a little about OO but don't really
understand it.  However, since they read a bit, they are "experts".
There seem to be a lot of those on the internet.

My favorite way to know that someone is not an expert is to wait. If
they claim to be one, then there is a 90% chance that they are not. And
that is not only true for OOP.


My favorite way to know if someone is a programmer is to wait. If they think poorly of programmers, I know they aren't qualified to be one.

They are not even able to choose their tools? They do not know what the
benefit of a multi-core CPU for programming can be?
It seems really strange to me.


Not at all.  When you work for a company, especially larger ones, you
use what you are given.  And many of those programmers are working on
mainframes.  Have you priced a mainframe lately? :)

Never seen a single one, to be exact :)


They're not as big as they used to be.  But they're still very expensive!


Excuse my bad English; I meant "parasitic generator," I guess, if that
is the term for hardware that generates parasites.



Never seen hardware generate parasites.  I have seen a lot of
transmitters generate parasitics, though.

Aren't transmitters a subclass of hardware? :p



Yes, and they generate *parasitics*, not *parasites*.

Jerry

