
Re: sysadmin qualifications (Re: apt-get vs. aptitude)



On 10/18/2013 1:10 PM, berenger.morel@neutralite.org wrote:
On 18.10.2013 17:22, Jerry Stuckle wrote:
On 10/17/2013 12:42 PM, berenger.morel@neutralite.org wrote:
On 16.10.2013 17:51, Jerry Stuckle wrote:
I only know a few people who actually like them :)
I liked them too, at one time, but since I can now use standard smart
pointers in C++, I tend to avoid them. I had so much trouble with them
that now I only use them for polymorphism and sometimes RTTI.
I hope that someday references will become usable in standard
containers... (I think they are not because of technical problems, but I
do not know a lot about that. C++ is easy to learn, but hard to master.)


Good design and code structure eliminate most pointer problems;
proper testing will get the rest.  Smart pointers are nice, but in
real-time processing they are an additional overhead (and an unknown
one at that, since you don't know the underlying libraries).

Depends on the smart pointer. shared_ptr indeed has a runtime cost,
since it maintains additional data, but unique_ptr does not, afaik; it
is made from pure templates, so there is only a compile-time cost.
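
A tiny sketch of that difference (just an illustration; the exact
numbers depend on the implementation):

    #include <iostream>
    #include <memory>

    int main()
    {
        // unique_ptr with the default deleter is usually just the raw
        // pointer, nothing more.
        std::cout << sizeof(int*) << " vs "
                  << sizeof(std::unique_ptr<int>) << "\n";

        // shared_ptr additionally carries a pointer to a control block
        // holding the reference counts, which are updated at runtime.
        std::cout << sizeof(std::shared_ptr<int>) << "\n";
        return 0;
    }

On common implementations the first two sizes are equal, while
shared_ptr is larger and also pays for maintaining the counts.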


You need to check your templates.  Templates generate code.  Code
needs resources to execute.  Otherwise there would be no difference
between a unique_ptr and a C pointer.

In practice, you can replace every occurrence of std::unique_ptr<int>
with int* in your code. It will still work and have no bugs, except, of
course, that you will have to remove some ".get()", ".release()" and
things like that here and there.
You cannot do the inverse transformation, because you cannot copy a
unique_ptr.

The only purpose of unique_ptr is to forbid some operations. The code it
generates is the same as you would have written around your raw pointers:
new, delete, swap, etc.
Of course, you can say that the simple fact of calling a method has an
overhead, but most of unique_ptr's machinery is inlined, even before
considering compiler optimizations.
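
To make that concrete, a minimal sketch (the Widget type is invented; it
only shows the pattern I mean):

    #include <memory>

    struct Widget { int value; };

    // Raw pointer: you write new and delete yourself.
    void raw_version()
    {
        Widget* w = new Widget();
        w->value = 42;
        delete w;   // easy to forget on early returns or exceptions
    }

    // unique_ptr: same new, and the delete happens in the (inlined)
    // destructor when w goes out of scope.
    void unique_version()
    {
        std::unique_ptr<Widget> w(new Widget());
        w->value = 42;
    }

Both functions end up doing the same new and delete; the difference is
only who is responsible for writing the delete.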


Even inlined code requires resources to execute. It is NOT as fast as regular C pointers.

Plus, in an OS, there are applications. Kernels, drivers, and
applications.
Take Windows: can you honestly say that it does not contain applications?
explorer, mspaint, calc, msconfig, notepad, etc. Those are applications,
nothing more, nothing less, and they are part of the OS. They simply
have to work with the OS's API, as you will with any other application.
Of course, you can use more and more layers between your application and
the OS's API; to stay in a pure Windows environment, there are (or were)
for example MFC and .NET. More generally, Qt, wxWidgets and GTK are
other tools.


mspaint, calc, notepad, etc. have nothing to do with the OS.  They
are just applications shipped with the OS.  They run as user
applications, with no special privileges; they use standard
application interfaces to the OS, and are not required for any other
application to run.  And the fact they are written in C is immaterial.

So, what you call an OS is only drivers+kernel? If so, then OK. But some
people consider that it includes various other tools which do not require
hardware access. I spoke about graphical applications, and you disagree.
A matter of opinion, or maybe I did not pick good examples, I do not
know.
So, what about dpkg in Debian? Is it part of the OS? Isn't it a ring 3
program? The same for tar or the shell?


Yes, the OS is what is required to access the hardware.  dpkg is an
application, as are tar and shell.

< snip >
Just because something is supplied with an OS does not mean it is
part of the OS.  Even DOS 1.0 came with some applications, like
command.com (the command line processor).


So, it was not a bad idea to ask what you call an OS. So, everything
which runs in rings 0, 1 and 2 is part of the OS, but not software
running in ring 3? Just asking for confirmation.


Not necessarily.  There are parts of the OS which run at ring 3, also.

What's important is not what ring it's running at - it's whether the code is required to access the hardware on the machine.

I disagree, but it is not important, since at least now I can use the
word with the same meaning as you, which is far more important.

But all of this has nothing to do with the need to understand the basics
of what you use when writing a program. Not understanding, at least in
broad terms, how a resource you acquired works implies that you will not
be able to manage it correctly by yourself. That is true for RAM, but
also for CPU, network sockets, etc.


Do you know how the SQL database you're using works?

No, but I do understand why comparing text is slower than comparing
integers on x86 computers: an int can be stored in one word, which can
be compared with only one instruction, while text implies comparing more
than one word, which is indeed slower. And it can become even worse when
the text is not ASCII.
So I can use that understanding to explain why I often avoid using text
as keys. But sometimes the more problematic cost is not speed but
memory, and so sometimes I'll use text as keys anyway.
Knowing the word size of the SQL server is not needed to make things
work, but it helps to make them work faster, instead of requiring you to
buy more hardware.
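
Roughly, in C++ terms (just a sketch of the pattern I mean, not actual
database code):

    #include <string>

    // Integer key: essentially a single compare instruction.
    bool same_int_key(int a, int b)
    {
        return a == b;
    }

    // Text key: a length check plus a byte-by-byte comparison loop,
    // so the cost grows with the length of the key.
    bool same_text_key(const std::string& a, const std::string& b)
    {
        return a == b;
    }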


First of all, there is no difference between comparing ASCII text and
non-ASCII text, if case-sensitivity is observed.

Character size, in bits. ASCII uses 7 bits, extended ASCII uses 8, UTF-8
uses 8 per code unit, UTF-16 uses 16, etc. It has an impact on memory,
bandwidth and the instructions used.


But ASCII, even if it only uses 7 bits, is stored in an 8-bit byte. Four ASCII characters will take up exactly the same amount of room as a 32-bit integer. And the comparison can use exactly the same machine language instructions for both.
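
For example (a small sketch, assuming 8-bit chars and a fixed 4-byte
field; the names are invented):

    #include <cstdint>
    #include <cstring>

    // A fixed CHAR(4)-style field: exactly 4 bytes, no terminator.
    struct Char4 { char data[4]; };

    bool equal_char4(const Char4& a, const Char4& b)
    {
        // A 4-byte memcmp is typically compiled down to a single
        // 32-bit load and compare, just like the int case below.
        return std::memcmp(a.data, b.data, 4) == 0;
    }

    bool equal_int(std::int32_t a, std::int32_t b)
    {
        return a == b;
    }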

The exact same set
of machine language instructions is generated.  However, if you are
doing a case-insensitive comparison, ASCII is definitely slower.

And saying "comparing text is slower than integers" is completely
wrong.  For instance, a CHAR(4) field can be compared just as quickly
as an INT field, and CHAR(2) may in fact be faster, depending on many
factors.

It is only partially wrong. Comparing a text of 6 characters will be
slower than comparing a short.
6 characters: "-12345", and you have the same data in only 2 bytes.


I didn't say 6 characters. I SPECIFICALLY said 4 characters - one case where your "strings take longer to compare than integers" claim is wrong.

Plus, CHAR(4) is not necessarily encoded in 4 bytes. Characters and
bytes are different notions.


In any current database, CHAR(4) for ASCII data is encoded in 4 bytes. Please show where that is not the case.

But if an extra 4 byte key is going to cause you memory problems,
your hardware is already undersized.

Or your program could be too hungry, because you did not know that you
had limited hardware.


As I said - your hardware is already undersized. If adding 4 bytes to a row is going to cause problems now, you'll have even greater problems later.

On the other hand, I could say that building SQL queries is not my job,
and leave it to specialists who are experts in the specific hardware +
specific SQL engine used, and who will build better queries. They will
indeed build them better than I actually can, but it has a time overhead
and requires hiring specialists, so a higher price which may or may not
be possible.


If you're doing database work, SQL IS your job.  SQL experts can
design more complicated queries quickly, I will agree.  However, how
the data are used in the program also affects the SQL being used.

Yes. I did not want to say that SQL is not your job if you are working
with it. I said "could". But I would be ashamed to really do that.

And here, we are not talking about simple efficiency, but about
something which can make an application completely unusable, with
"random" errors.


Not at all.  Again, it's a matter of understanding the language you
are using.  Different languages have different limitations.

So it must be that C's limitations are not fixed enough, because type
sizes can vary according to the hardware (and/or compiler).


Sure.  And you need to understand those limitations.

And BTW - even back in the days of 16-bit PCs, C compilers still used 32-bit ints.
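
For what it's worth, a small sketch of what is and is not pinned down
(C++ here, but the C story is the same):

    #include <climits>
    #include <cstdint>
    #include <iostream>

    int main()
    {
        // The standard only guarantees minimum ranges: int is at least
        // 16 bits, and its actual width depends on compiler and target.
        std::cout << "int here: " << sizeof(int) * CHAR_BIT << " bits\n";

        // When the exact width matters (file formats, protocols,
        // database columns...), ask for it explicitly.
        std::int32_t exact = 0;
        std::cout << "int32_t: " << sizeof(exact) * CHAR_BIT << " bits\n";
        return 0;
    }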


Good programmers can write programs which are independent of the
hardware.

But being aware of the hardware you target can help you.


Being aware of the limitations of the language you are using is much
more important.

True.

Nothing you have indicated in this message has
anything to do with hardware - other than how the C compiler is used
with that hardware.

I did not specifically talk about hardware knowledge, but about
knowledge not directly linked to the programming languages used. Which
can include some network knowledge about, for example, protocols.

So, OK, if you can find a job where every single low-level feature you
will require is available through high-level functions/objects, then
having knowledge of what you are sitting on is useless. Maybe I am wrong
because I actually am interested in knowing what is behind the screen,
and not only in the screen itself. But still, if you only know about
your own stuff, and the man who will deploy it only knows about his own
stuff, won't you need a 3rd person to allow you to communicate? Which
implies a loss of time.

No, it's called a DESIGN - which, amongst other things, defines the
communications method between the two.  I've been on projects with up
to 100 or so programmers working on various pieces.  But everything
worked because it was properly designed from the outset and every
programmer knew how his/her part fit into the whole program.

I do not think that most programmers work in teams of hundreds of
people. But I may be wrong. I do not know.


I didn't say most did.  I DID say they exist, for large projects.

In my last job, when we had something to release, we usually talked
directly with the people who then had to deploy it, to explain to them
some requirements and consequences that were not directly our job as
programmers. Indeed, I was not employed by Microsoft, Google or IBM; far
from that, we were fewer than 10 devs.
But now, are most programmers paid by companies with hundreds of
programmers?


In the jobs I've had, the programmers have never had to talk to the
deployers.  Both were given the design details; the programmers wrote
to the design and the deployers generated the necessary scripts to
deploy what the design indicated.  When the programmers were done, the
deployers were ready to install.

Maybe you have worked only in big organizations, or maybe this one was
doing things wrong. But the IT team was quite small, if we only count
sysadmins, devs, project leads and a few other roles. Fewer than 15
people.


Even 15 person teams can do it right. Unfortunately, too many companies (both big and small) won't do it right. A major reason why there are so many bugs out there. Also a major reason why projects go over time and over budget.


Excuse my bad English; I meant parasitic generator, I guess, if that is
the term for hardware that generates parasites.



Never seen hardware generate parasites.  I have seen a lot of
transmitters generate parasitics, though.

Are not transmitters a sub-class of hardware? :p



Yes, and they generate *parasitics*, not *parasites*.

I apologize for that mistake, and thank you for the correction. I try
to avoid using the wrong words, but it is not always easy.



NP

Jerry

