Tuesday, August 7, 2012

Good Code

No one appreciates good coding, because only coders understand what coding involves in the first place. To everyone else, if the thing works, you merely did your job - that doesn't make you a good coder. Well, it does, but to anyone who doesn't understand the work, code that runs correctly the first time is just normal, not great. If anything, it only reflects badly on the quality of the testing that was done.

Bugs, on the other hand, are high-visibility and high-priority in a way that coding never is. Only when something is broken do people seem willing to acknowledge that the code exists at all. Fix a lot of bugs in a few hours (far simpler work, in most cases, than the actual coding) and people understand that you've done a great deal. The coding itself nearly always goes unnoticed and unappreciated. If the code works fine, it's simply ignored, as if it doesn't exist.

To get some perspective, apply the same argument to writing in general. Writing software is in many respects very similar to writing prose or poetry. Programming languages have syntax and semantics of their own, just as spoken languages do; it's their purposes that are quite different.

Computers never use programming languages to communicate with us; it's only us communicating to them: the instructions we want performed, the order we want them performed in, and the logic that holds the rules together and keeps the whole system in place.

With writing, absolutely no one expects you to get the first draft right. What makes you a great writer is being as good at revising as you are at writing - or better.

The same applies to coding, except that the different underlying purpose gives the revisions a different character. In both cases, you revise to make the written medium (words, code) better. Better writing is hard to define, since writing is more of an art form than programming is. Better code, by contrast, is well defined: if it doesn't work, it isn't good code. I will now go through some of the differences between good writing and good code, and how the two kinds of writing differ.

Prose writing is more about the context than the actual words, though being a good wordsmith certainly earns you extra points. The context is normally the story elements (plot), which include character development, something that takes a good deal of time outside of the actual writing. The characters must be alive in your head, with their own motives, goals, aspirations, and personalities, before you can plug them into the time frame and particular circumstances of your story and have it come out sounding realistic. In the case of non-fiction, the context is your particular topic or subject and the structure of your argument. You remember this from high school English class: the first sentence of each paragraph should state the theme or thesis of that paragraph, followed by support, evidence, and further argument on that same point. The first paragraph is normally an introduction and the last a conclusion, with at least three body paragraphs between them.

Poetry, on the other hand, is all about word selection, diction, and imagery. Poetic devices such as rhyme, alliteration, and meter are of paramount importance. The actual message is usually a product of the deeper themes and moods created by the specific words and their connotations. Rhyme, I find, is an underestimated tool, used less often than you might think. It can be used for emphasis: the words involved in the rhyme usually become the focal point of the entire phrase or sentence. Choosing which words will carry the rhyme therefore matters, and it often takes some grammatical flexibility to rearrange a sentence so that it doesn't sound archaic or confusing.

Lastly, computer code is like neither. The entire purpose of computer code is logic - that is both its foundation and its end. The building blocks are simple logical constructs, such as a loop that executes a portion of the code over and over (to avoid writing many instructions that do mostly the same thing). Most logic boils down to conditions: if this, do that; if this other condition is true, do this other thing; if this condition is false, skip this part of the code. This logic tells computers how, when, and what to do, in a way that bears no interpretation (heh, at least not the kind you're probably thinking of!). Here is where we get into revision.
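
To make that concrete, here is a minimal sketch of those building blocks - a toy example of my own, not taken from any particular program - written in Python:

```python
# A loop plus conditions: classify a list of numbers without
# writing a separate instruction for every value.
numbers = [3, -7, 0, 12, -1]

for n in numbers:      # loop: repeat the same code for each value
    if n > 0:          # condition: if this, do that...
        print(n, "is positive")
    elif n < 0:        # ...if this other condition is true, do this other thing
        print(n, "is negative")
    else:              # otherwise, skip the parts above and do this
        print(n, "is zero")
```

A handful of lines handles every value in the list; without the loop, each number would need its own copy of the same instructions.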

At this point, I'd like to mention one minor historical anecdote: one of the first recorded computer bugs actually was a bug! In 1947, a moth got trapped in a relay of the Harvard Mark II, a room-sized machine, and jammed the circuit; the operators taped the moth into the logbook as the 'first actual case of bug being found.' On machines of that era, what we would now call 'software' was largely hardware, in the form of vacuum tubes, relays, and switches. The switches would be set to input the instructions, and the computer would run through whatever instructions had been set. A programmer's job back then could mean manually going to each switch and moving it to the right setting, according to a long (and probably quite boring) sheet of numbers. A bug like this probably took a while to find, since the chances of a programmer losing focus and mis-setting even a single switch were quite high!

Revising computer code is simple, yet not straightforward, because most of the time you don't know what specifically is wrong with the code or how the problem is being caused. If you knew, you wouldn't have written the wrong code in the first place! The first step is called 'debugging': going through the code one instruction at a time, watching the computer perform it, then examining the state of the program and its output at that point. Once the problem appears, the instruction last executed is the most likely culprit. From there, it is normally a simple matter to determine where the error in the logic lies and rewrite the code accordingly. Until programmers can code perfectly, then, we are stuck with bugs.
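
As a small illustration - the bug here is a toy of my own invention - Python's built-in debugger, pdb, lets you step through code one instruction at a time, just as described above:

```python
import pdb

def average(values):
    total = 0
    for v in values:
        total += v
    # The bug: dividing by a hardcoded 2 instead of len(values).
    return total / 2

# Typing 'step' at the (Pdb) prompt executes one instruction at a
# time; watching `total` stay correct until the division points the
# finger at the last instruction executed - the return line.
pdb.run("print(average([10, 20, 30]))")
```

Once the faulty division is spotted, the rewrite is trivial: return total / len(values).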

Now, the existence of bugs is no reason to knock computers themselves! The great thing about computers is that they are seldom at fault for the problems we face. It is normally operator error, on either the programming side or the user side. If the programming is wrong, we call the result buggy or glitchy. If the user is wrong, it is known as a PEBKAC (problem exists between keyboard and chair). Computers execute their instructions correctly 99.9% of the time; whether those instructions are right or wrong is a different matter. Readers interested in other ways of organizing instructions should have a look at aspect-oriented programming, a newer paradigm that builds on object-oriented programming by separating out concerns that cut across an entire codebase (warning for the non-techies: highly advanced technical terms may cause head to explode). See my other post for more information about the different programming paradigms.
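
For a rough idea of what 'cross-cutting' means - this is my own sketch, with a Python decorator standing in for a true aspect weaver - a concern like logging can be wrapped around existing functions without touching their bodies:

```python
import functools

def logged(func):
    """A stand-in 'aspect': adds logging, a concern that cuts
    across the whole program, to any function it decorates."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__} with {args}")
        result = func(*args, **kwargs)
        print(f"{func.__name__} returned {result}")
        return result
    return wrapper

@logged
def add(a, b):
    return a + b

add(2, 3)  # logging happens without add() knowing anything about it
```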
