#Review – The Practice of Programming

In a world of enormous and intricate interfaces, constantly changing tools and languages and systems, and relentless pressure for more of everything, one can lose sight of the basic principles—simplicity, clarity, generality—that form the bedrock of good software.

Kernighan, Brian W.; Pike, Rob (1999-02-09). The Practice of Programming (Addison-Wesley Professional Computing Series) (p. ix). Pearson Education. Kindle Edition.

 

Programming is a craft. Some programmers refuse to acknowledge this, insisting instead that it’s a scientific or engineering discipline. There are certainly elements of that, but anything that allows a human to place their own distinctive style on a made thing is a craft.

Bridges look a certain way because that’s how the physics makes them look, not because the engineer was feeling whimsical that day. That’s why one bridge looks a lot like another. When a carpenter makes a bookshelf, it shares the same functionality with other bookshelves. However, there are a hundred individual decisions made by the carpenter during the design and creation process. A bookshelf is physics seasoned by art.

Two software applications may have similar functions but the underlying source code tells a different story. Anyone who reads or writes code knows that the programmer imposes their own personal style on the code in hundreds of different ways. From the use of a favorite decision loop to the design and implementation of a particular data structure, programmers have always found a way to express themselves in their work.

The Practice of Programming was written to bring programmers who are swimming in complexity back to their roots and help them regain perspective. Just to be clear, this is not a book that will teach you how to program. However, if you are learning to program or even if you’re a veteran coder, you’ll get something useful out of this text.

For all that talk of craft, Kernighan and Pike don’t romanticize the work of programming. Instead they show that by embracing (or re-embracing) the fundamental principles of coding, you can become a better, more productive programmer.

They start with a style guide, because clean, consistent code is easier to read, debug and maintain. Establishing and maintaining a consistent coding style frees up your higher brain functions for more complex decisions and problem solving.
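
To see what they mean, here’s a small sketch of my own (not an example from the book): two versions of the same function, one written carelessly and one written in a consistent, readable style.

#include <vector>

// Cramped layout, cryptic names: you have to decode it before you can trust it.
int f(std::vector<int> V){int S=0;for(int I=0;I<(int)V.size();I++){S=S+V[I];}return S;}

// Consistent naming and layout: the intent is obvious at a glance.
int sum(const std::vector<int>& values) {
    int total = 0;
    for (int value : values) {
        total += value;
    }
    return total;
}

int main() {
    // Both do the same thing; only one is pleasant to read and debug.
    return sum({1, 2, 3}) == f({1, 2, 3}) ? 0 : 1;
}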

Next we move on to algorithms and data structures. These building blocks of software should be familiar to all coders, but the right algorithm choice can make the difference between a program that takes an hour to produce the desired result and one that takes seconds.
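
As a rough illustration of my own (the book’s treatment goes much deeper), here are two ways to ask whether a list contains a duplicate. On a million elements the first approach performs on the order of a trillion comparisons, while the second sorts and scans in a fraction of a second.

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// O(n^2): compare every pair of elements.
bool has_duplicate_slow(const std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

// O(n log n): sort a copy, then look for two equal neighbors.
bool has_duplicate_fast(std::vector<int> v) {
    std::sort(v.begin(), v.end());
    return std::adjacent_find(v.begin(), v.end()) != v.end();
}

int main() {
    std::vector<int> sample = {3, 1, 4, 1, 5, 9, 2, 6};
    // Same answer either way; the difference only shows up as the data grows.
    std::cout << std::boolalpha
              << "slow: " << has_duplicate_slow(sample)
              << ", fast: " << has_duplicate_fast(sample) << "\n";
}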

The authors build on this foundational knowledge with discussions on design, interfaces (how to efficiently pass data), debugging, testing (which reduces debugging), performance, portability and end with a chapter on notation which includes a discussion of tools that will help you generate code automatically.

The writing is crisp and direct. Kernighan and Pike speak to you, programmer to programmer. They have decades of combined experience in the coding trenches and understand the problems you face every day, whether you’re doing an assignment for school or building an analytics solution for your business.


#52WeeksOfCode Week 18 – VB.NET

Week: 18

 

Language: VB.NET

 

IDE(s): MS Visual Basic Express

 

Background:

VB, of course, stands for Visual BASIC, and I think it’s worthwhile to review the history of the language and its influence, not just on the genesis of the commercial software industry but on the beginnings of Microsoft itself.

First of all, let me give a shout-out to Steven Levy’s wonderful book, Hackers: Heroes of the Computer Revolution. If you want a good, behind-the-scenes look at how the personal computer industry was born and the kinds of people involved in the midwifery, this is the book you want. (I’ll be posting a separate review of it at a later time.) BASIC has several mentions in this book and not always in connection with Bill Gates (but Gates is a key figure in this story).

From the chapter “SpaceWar”, an account of the growing code democratization movement that began at MIT in the 1950s:

 

“The planners were also extremely concerned about getting the power of computers into the hands of more researchers, scientists, statisticians, and students. Some planners worked on making computers easier to use; John Kemeny of Dartmouth showed how this could be done by writing an easier-to-use computer language called BASIC. Programs written in BASIC ran much slower than assembly language and took up more memory space, but did not require the almost monastic commitment that machine language demanded.”

 

Excerpt From: Steven Levy. “Hackers.” iBooks. https://itun.es/us/Em_Px.l

 

BASIC was at the front of the war against the “mainframe priesthood”, the gatekeepers to the mysterious, room-filling machines that were our only access to computing. It was designed to use English-like commands. For example:

 

10 PRINT "Hello, World!"
20 END

 

I took a programming course in 1977 and saw this first-hand. I wrote out my program by hand, typed it into a keypunch machine, took the punched cards to the computer room and handed them to the operator. If I was lucky (i.e. my code worked the first time), several hours later I would come back to pick up a short print-out showing the output of my program. If I was unlucky, then I got a much bigger stack of print-out paper which showed me in excruciating detail how badly my program sucked. Much of the paper consisted of a dump of what was in the computer’s memory when my program failed, along with what instructions were running at the time. I then had to grovel through the output, figure out what I had done wrong, fix it and go through the whole process again.

I had heard about these things called personal computers, like the Commodore PET and the Tandy TRS-80, but they seemed expensive and complicated and weren’t really on my radar.

In 1980, Commodore came out with a computer called the VIC-20. It was simple to set up and, more importantly, really cheap, so I got one.

Of course, you didn’t get much for the price. It came with 5 kilobytes of RAM (which shrank to 3.5 kilobytes once you turned it on) and had no external storage, though you could buy an external drive that used cassette tapes. It had a cable that you could use to plug it into a television, and the display was 22 characters wide by 23 lines tall. It originally sold for $299, but the price eventually dropped to $100.

But that wasn’t what made it magic for me. I remember unpacking the box, scrambling to get the cables connected just so I could turn it on and see this:

[Image: VIC-20 boot screen]

It wasn’t as powerful as the mainframe I used in school but it was MINE.

The VIC-20 booted directly into a BASIC interpreter, and you interacted with it using BASIC commands. CBM stood for Commodore Business Machines, and they used a dialect of BASIC called Commodore BASIC. But they didn’t develop their own version; they bought it from someone else: someone whose entire business model initially consisted of providing BASIC interpreters for different computers.

It’s hard to imagine now, but before Bill Gates came along, the idea of selling computer software was unthinkable. Not in a scandalous way; it simply didn’t make any sense. After all, the computer business model was: you bought (or leased) the computer and the company gave you the software for free (because the computer wouldn’t run without it). They even gave you the tools to build your own software, because this added value to their hardware and encouraged more people to use it. So programmers got into the habit of not only getting their software for free but also giving away the software they wrote.

So when Bill Gates and Paul Allen wrote a BASIC interpreter for a new computer called the MITS Altair, someone snagged a copy of the paper tape containing Altair BASIC, ran off a bunch of copies, and handed them out to their friends, asking each of them to make two more copies and pass those along to their friends.

Of course, when Gates and Allen found out about it, the result was the infamous Open Letter to Hobbyists. It’s hard to overstate the rift this incident created in the nascent personal computer industry. (Seriously, read Hackers. It’s great.)

Microsoft, of course, went on to dominate the commercial software world far beyond its humble beginnings, and BASIC came along for the ride. It was even included in early versions of DOS and Windows. It kept evolving, adding features and functionality that let it become deeply embedded in the Microsoft software ecosystem. It became tremendously popular in business software because it was an easy way to hook into your existing tools (assuming you ran MS Office, MS Exchange, MS SQL Server, etc.) and let you quickly put together a GUI front-end for your server systems.

To give you a sense of how much the language has changed, here is the standard Hello World program written in VB.NET:

 

Module Module1

    Sub Main()
        Console.WriteLine("Hello, world!")
    End Sub

End Module

 

Frankly, this looks a lot more like Java:

 

public class HelloWorld {

    public static void main(String[] args) {
        System.out.println("Hello, World");
    }

}

 

Can we still call VB a dialect of BASIC? More importantly, with the introduction of cleaner, more modern languages like Ruby, is there still any need for VB, with or without .NET?

I’m going to get some crap for this, but I have to say, “No”. Unless you have an all-Microsoft shop with a substantial investment in legacy Visual BASIC code, there is little reason to use VB.NET.

That being said, let’s fire up Visual BASIC Express and see how it runs.

 

Discussion:

The first thing that shows up:

[Image: Visual Basic 2010 Express registration window]

Just to be clear, if you want to use someone’s product, then you implicitly agree to whatever conditions they require before you can start using it. However, crap like this is why I actively seek alternatives to Microsoft software. Other commercial software developers make you jump through hoops, but nobody does it with such bloody-minded verve as Microsoft. Fortunately, I have a Windows Live account for just such a contingency.

After completing a survey and politely declining membership on several email lists, I finally get a registration key, enter the key, click on Register Now and:

[Image: Microsoft registration warning dialog]

Yes, Microsoft, I’m aware that you have me by the short and curlies. Don’t rub it in.

Once past all that, I was presented with the usual friendly interface. I give Microsoft a lot of crap, but it’s clear that they work very hard to make the front-ends of even quite complex software pretty friendly. I selected New Project…, opened the default application template and got this:

[Image: security warning when opening the default Windows template]

Now I appreciate secure computing as much as the next man, but this file was installed on my computer by Microsoft. By. Microsoft. Okay, that’s all I wanted to say about that.

I located a very sketchy tutorial on Udemy, a popular online course site. After a bit of fussing with variable names, I got a simple Windows Forms application:

If you’re familiar with Windows Forms, it doesn’t take too much extra work to get up to speed on VB.NET. The IDE looks and works the same as it does in Visual C# Express, with drag and drop of form elements and easily edited property lists.

I still stand by my opinion that Visual BASIC is essentially a dead-end platform. I thank Gates and Allen for their hard work in bringing programming to a greater audience and doing their part to free us from the grip of the mainframe priesthood.

 

#Coding4Humans Book Review – Programming Pearls

I enjoy reading books about computer programming. (At this point you’re probably saying to yourself, “Of course you do, Tom. You big old nerd, you.”)

But the books I prefer to read aren’t about a particular programming language or operating system but the books about the art, history and philosophy of computer programming. Programming Pearls by Jon Bentley is a classic in this particular genre.

Bentley was a computer researcher at the original Bell Labs in Murray Hill, NJ, and he used to write a column on various aspects of program design and problem-solving for the periodical “Communications of the ACM”. This book is made up of selected essays from that column.

This book has earned a permanent spot on my bookshelf for three reasons. First, it’s a fascinating glimpse into the history of computing. The year it was published (1986) was the beginning of the personal computer revolution. We were taking the power back from the mainframe computer priesthood. Having a PC was like being Prometheus with a piece of stolen fire: we had some of the power for ourselves and were struggling to figure out what to do with it. (Hackers: Heroes of the Computer Revolution by Steven Levy is an excellent look at the people and personalities that built this era.)

This book is also about problem-solving. As Bentley says in the introduction:

The essays in this book are about a more glamorous aspect of the profession: programming pearls whose origins lie beyond engineering, in the realm of insight and creativity.

These days we’re accustomed to being able to just throw more hardware at computing problems. Bentley reminds us that there is still value in thinking a problem through and presents some interesting ideas, examples and exercises to aid in that work.

Finally, there is my favorite essay, “The Back of the Envelope”. If I had my way, this would be required reading for all of my math students.

Let me explain.

I encourage the use of calculators and computers in my math classes to do the computational heavy-lifting. My logic is that if you understand the problem well enough to explain it to a machine, then the actual computation is just a mechanical exercise. But this doesn’t mean that you should just trust outright whatever a machine tells you. You need to know what the answer should look like by using estimation so you can judge the machine’s output. Bentley devotes an entire section to estimation and these skills also extend into other essays, such as “Perspectives on Performance” and “Algorithm Design Techniques”.
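
To give a flavor of the kind of thinking he’s talking about, here’s a tiny sketch of my own (not one of Bentley’s examples; the figure of 10^8 simple operations per second is a deliberately rough, assumed number): estimate the running time before you ever touch a profiler.

#include <cmath>
#include <iostream>

int main() {
    const double ops_per_second = 1e8;  // assumed, round-number machine speed
    const double n = 1e6;               // problem size: a million items

    // Back-of-the-envelope operation counts for two algorithm families.
    const double quadratic_ops = n * n;             // about 10^12
    const double nlogn_ops = n * std::log2(n);      // about 2 x 10^7

    std::cout << "O(n^2):     roughly " << quadratic_ops / ops_per_second
              << " seconds (a few hours)\n";
    std::cout << "O(n log n): roughly " << nlogn_ops / ops_per_second
              << " seconds\n";
}

If the measured time later turns out to be wildly different from the estimate, either the estimate or the program is wrong, and both cases are worth knowing about.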

Programming Pearls includes exercises at the end of each essay to help you develop your mental muscles (don’t worry, there are hints in the back of the book) and an appendix with a catalog of algorithms. At 256 pages, it’s a pretty breezy read, and the organization of topics makes it easy to just dip in wherever you like and start reading. It’s not just an excellent reference; Bentley’s writing style is also friendly and intelligent without being condescending. If you’re a programmer (whether hobbyist, student or professional), you need a copy of this book.

References

Bentley, J. L. (1986). Programming pearls. Reading, MA: Addison-Wesley.

Levy, S. (1984). Hackers: Heroes of the computer revolution. Garden City, NY: Anchor Press/Doubleday.

 

#52WeeksOfCode Week 17 – CSS3

Week: 17

 

Language: CSS3

 

IDE(s): Coda, MAMP

 

Background:

When a new technology comes out, you have to ask yourself two questions:

  1. What problem is this intended to solve?
  2. How well does it solve it?

CSS (Cascading Style Sheets) was designed to solve the problem of adding styles to Web documents. Was this actually a problem? (See question 1.)

Web pages, at their heart, are just plain text. Originally, if you wanted to add formatting, you had to use markup tags to set the text as italic, boldface, underlined, or organized as a list or a table. This required fiddling with individual text elements, and there was no easy way to apply a style or combination of styles to text in separate areas of the page in one go. In addition, if you wanted to use the same content with different styling (or to present it for different browsers or platforms), you had to rather painfully rewrite your markup.

Cascading Style Sheets let us define styles for different HTML elements (either in the Web page itself or in a separate CSS file) and have them applied automatically. Whenever the Web browser sees an element that matches a selector defined in the CSS, it applies the appropriate style. Using a separate CSS file is considered better practice, as it allows you to apply a common look and feel to multiple Web pages and makes it easy to re-format a page simply by applying a different CSS file.
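
As a minimal sketch of my own (the file name site.css and the class name warning are just made-up placeholders), here’s the whole idea in one small page:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>CSS sketch</title>
<!-- For a real site you would move these rules into a separate file,
     say site.css, and pull them in with <link rel="stylesheet" href="site.css"> -->
<style>
h1       { font-family: sans-serif; }
.warning { color: red; font-weight: bold; }
</style>
</head>
<body>
<h1>Hello, CSS</h1>
<p class="warning">Every element with class="warning" gets the same style.</p>
<p>Change one rule in one place and every matching element on every page changes.</p>
</body>
</html>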

So CSS does indeed seem to be a solution to a legitimate problem. But how good of a solution is it?

Pros:

  • Keeps content and design separate, making our HTML cleaner and easier to read.
  • Makes it easier to display the same page correctly on multiple devices in multiple browsers
  • Size and positioning of objects (such as images) can be pre-defined in CSS
  • Gives readers more flexibility and control in how they view Web pages

Cons:

  • Not all browsers support CSS the same way
  • Using CSS to position your Web page objects gets ugly really fast.
  • CSS is ‘markup centric’ rather than ‘design centric’, forcing many Web designers to craft their code by hand

 

Needless to say, opinions differ. (The comments are pretty heated at that link. I recommend reading all of them.)

We are trying to move from print (which has had nearly a thousand years to evolve) to the Web (which has only been around since about 1993). I don’t think we’ve figured out the new design paradigms yet, but sites like CSS Zen Garden offer some intriguing possibilities.

 

Discussion:

This week I used MAMP as my Web server as usual, but instead of TextWrangler I decided to use Coda from Panic Software. It’s not free, but Panic makes very good, clean, well-behaved software and Coda is no exception. Coda is not only specifically designed for working with Web code, it also comes with built-in documentation for Web languages such as HTML and CSS.

I was looking for something interesting to do with CSS and I came across a wonderful little tutorial at CoDrops. It walks you through setting up a Web page with CSS and jQuery that renders an animated billboard flipping between two different signs. I modified the included images using GraphicConverter and ImageMagick.

The output looks like this (as an animated GIF):

[Animated GIF: “Hello World” billboard]

#52WeeksOfCode Week 16 – HTML 5

Week: 16

Language: HTML 5

IDE(s): TextWrangler

Background:

Technically, HTML isn’t a programming language but a markup language. But version 5 has some interesting features so I’m learning something here.

Boy, did I wade into a crap storm.

It’s been quite a while since I’ve poked around with HTML, and I was interested in the multimedia support being built into the HTML 5 specification. Like a lot of us, I’ve had to struggle with browser plug-ins like Flash, Silverlight and once even RealPlayer, all just to watch a video or play an animation or listen to a sound file.

That’s just for a desktop computer, by the way. Once you get into mobile devices, it gets even worse. Either the plug-ins aren’t available (and you are blocked from whole chunks of the Internet) or they chew up your battery life like crazy.

But what if we didn’t have to deal with that? What if we could just play this stuff in our browsers with no extra software required? That’s one of the problems that HTML 5 was designed to solve and that’s also how the W3C ran into a buzz saw of nerd rage.

Let me explain. Every Web browser has something called a DOM or Document Object Model. You can think of it as an abstract model of a Web page that, in theory, lets Web developers write code that works across different platforms without having to create different versions for multiple operating systems or browsers.

Please note the phrase ‘in theory’.

A DOM is supposed to be a standard so, of course, every browser vendor came up with their own version. Sure, the basic elements were supported by everybody, but they just couldn’t resist adding their own special little features to differentiate themselves from the competition. So before you knew it, Web developers were having to write different code for different platforms.

If you’ve ever gone to a Web page and got a message that you needed a different browser or if the same page doesn’t work the same way in two different browsers (if it works in either), then you know exactly what I’m talking about.

ANYWAY, it’s gotten better. Kind of.

The W3C added a new interface to the DOM for HTML 5, called HTMLMediaElement, which backs the new audio and video elements and lets the browser present that kind of content natively, where previously it would have required a plug-in.

But the advantage of the plug-ins (for content providers, anyway) was that they allowed restrictions on how that content was consumed using DRM or Digital Rights Management.

Now nerds (and other people — I count myself as a nerd) are not big fans of DRM. “Information wants to be free” and all that. But people who create content (or just own it) occasionally want to get paid.

So when the W3C came out with something called Encrypted Media Extensions (EME), which extend the functionality of HTMLMediaElement to play protected content, a large portion of the nerd universe exploded in a white-hot rage-gasm. (Remember, I’m a nerd. I can use that word.)

The Free Culture Foundation posted an indignant online editorial entitled “Don’t let the myths fool you: the W3C’s plan for DRM in HTML5 is a betrayal to all Web users.”

(I left the emphasis in to portray just how irritated they were.)

Let’s take a look at the relevant part of the documentation:

This specification does not define a content protection or Digital Rights Management system. Rather, it defines a common API that may be used to discover, select and interact with such systems as well as with simpler content encryption systems. Implementation of Digital Rights Management is not required for compliance with this specification: only the simple clear key system is required to be implemented as a common baseline.

The common API supports a simple set of content encryption capabilities, leaving application functions such as authentication and authorization to page authors. This is achieved by requiring content protection system-specific messaging to be mediated by the page rather than assuming out-of-band communication between the encryption system and a license or other server.

Now I’m just a Simple Country Lawyer, but it seems like this is saying that while DRM is not built in to HTML 5, EME is designed to let content providers still use DRM.

So if you were looking to get rid of browser plug-ins, then good news for you. If you were looking to get rid of protected content on the Web, you’re out of luck.

Discussion:

I used MAMP for my Web server and built a quick and dirty little Web page that includes an audio file. HTML 5 makes this much easier than it once was, since you can let the Web server and browser figure out between themselves how to present the content. Here’s the code:

<!DOCTYPE html>
<html>
<head>
<title>52 Weeks of Code - Week 16 - HTML 5</title>
<meta charset="utf-8">
<meta name="description" content="The head tag contains the title and meta tags - important to the search engines, and information for the browser to properly display the page.">
</head>
<body>
<p>My first HTML 5 webpage</p>
<p>Here is the sound of someone saying 'hello'.</p>
<p>Just click on the play control.</p>

<audio controls src="sounds/hello.mp3">
<!-- Fallback content, shown only if the browser can't play the audio element -->
<p>It's very easy to embed sounds in HTML 5. Just use the code:</p>
<pre>&lt;audio controls src="hello.mp3"&gt;
&lt;/audio&gt;</pre>
</audio>
</body>
</html>

I snagged the sound file from here. Here’s what it looks like in Google Chrome:

 

#52WeeksOfCode Week 15 – OpenGL

Week: 15

Language: OpenGL

IDE(s): XCode

Background:

Once upon a time (okay, it was the late ‘80s/early ‘90s), if you wanted to code a video game, you had to program ‘down to the metal’. This gave you a lot of control and power, but you had to do everything yourself. That included all of the graphics, and since that’s most of the heavy lifting, anyone who could come up with an easier way to render shapes and colors on the screen would have the gratitude of programmers everywhere.

Of course, there’s a problem. The advantage of coding down to the metal is that you can take advantage of the features of your video card. The disadvantage is you have to scramble to support new cards or limit your user base and let your code rot as new video cards roll out.

So you have a couple of choices. You can go the DirectX route like Microsoft and just exert your market influence to control the hardware via your PC industry partners. OR you can do what the OpenGL folks did.

The OpenGL Architecture Review Board produced a set of standards for rendering 2D and 3D graphics. OpenGL is designed to be multi-platform and vendor-neutral. Note that the Board doesn’t supply any actual software, just the standards themselves. If you’re a video card manufacturer, you can download the specifications for free and make sure your hardware supports the graphical operations they describe.

DirectX, on the other hand, is controlled by Microsoft, and they are the ones who decide which platforms (hardware and software) support it. Well, I say ‘software’. I mean Windows. The DirectX runtime (the part that lets DirectX software work) is built into Windows, so it’s basically plug and play. In addition, while OpenGL only specifies methods for rendering graphics, DirectX includes support for audio and game controllers. (To be fair, OpenGL was originally developed for engineering and CAD software, not games.)

That being said, Microsoft does support OpenGL in Windows along with DirectX so you can, in theory, get the best of both worlds. Of course, I would question not only what percentage of developers take advantage of this, but why they would bother. Granted, if you’re a fan of OpenGL and want to get your game up fast on Windows, then using DirectX for the non-graphical work would keep you from re-inventing the wheel. As I’ve stated before, I Am Not A Programmer(™) and I’d like an answer to this question someday.

As a general rule, I’m in favor of open standards so I suppose that if I wrote games I would probably gravitate towards OpenGL.  There are others who agree, but opinions differ.

Discussion:

I decided to use Xcode simply because of convenience. I used it back in grad school to do some of my (non-Java) assignments and it’s pretty nice, especially considering that it’s free. The current version (5.1.1) is a definite step forward (and yes, I’ve used Visual Studio) and I was able to set things up pretty quickly. I grabbed some sample code from Apple, opened it in the IDE and (absent a few warnings) was able to build and run a nice little animated cube-type thing. It’s written in Objective-C and looks something like this:

It’s pretty cool. The cube rotates slowly around and you can grab it with your mouse and drag it around. I didn’t customize the code too much (added the HELLO WORLD just for grins) so feel free to grab a copy from Apple and play around.

#52WeeksOfCode Week 14 – C++

Week: 14

Language: C++

IDE(s): XCode

Background:

It’s been a few years since I’ve used C++. I started out with Fortran in 1977 (with punch cards…pardon me while I adjust the onion in my belt), then BASIC, moving on through Pascal, shell scripting, Lisp, Assembler, awk and C. I generally picked up programming languages as I needed them for work or school. (Frankly, once you get past the first two or three, it becomes easier to get up to speed on new ones. The trick to learning a programming language is to have a project that holds your interest.)

C++ was basically the C language with some extras. (The name is a bit of an inside joke; it literally means “increment C by 1”.) This means that you can mix C code with C++ code and it will still compile and run. It’s not a good idea, for the sake of maintainability, but it will work.
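
For instance, here’s a small sketch of my own (not from any of the books mentioned later in this post): almost every line of it is plain C, yet it compiles and runs fine as C++.

#include <cstdio>   // the C standard I/O library, pulled in C++ style

// A plain C-style struct: just data, no methods.
struct point {
    double x;
    double y;
};

int main() {
    struct point p = {3.0, 4.0};        // C-style declaration and initialization
    double squared = p.x * p.x + p.y * p.y;
    std::printf("Squared distance from origin: %.1f\n", squared);
    return 0;
}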

I like C++. I was comfortable with C and C++ added enough good features that the transition wasn’t too tough. (C# on the other hand…I’m not a fan. Just my opinion.)

C++ was my first introduction to Object-Oriented Programming (OOP). Previously, I felt like I had to micromanage every activity of my software. With OOP, however, I could create software objects with properties (things they know) and methods (things they know how to do) and just set them loose with instructions. It seemed pretty natural to me and there was a good library of pre-written objects that I could use so I didn’t have to re-invent the wheel.
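
Here’s a tiny sketch of my own (the Dog class is just an invented example, not from any textbook) of what properties and methods look like in C++:

#include <iostream>
#include <string>

class Dog {
public:
    explicit Dog(std::string name) : name_(name) {}

    // A method: something the object knows how to do.
    void speak() const {
        std::cout << name_ << " says: Woof!\n";
    }

private:
    // A property: something the object knows.
    std::string name_;
};

int main() {
    // Create the objects, then just set them loose.
    Dog rex("Rex");
    Dog fido("Fido");
    rex.speak();
    fido.speak();
}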

At the time, C++ and Java were the New Hotness, so these were the languages I taught in my school’s Software Engineering program. C# came up later and we incorporated it into the curriculum, displacing both C++ and Java. But before that happened, I discovered two of what are still my favorite C++ textbooks:

Fundamentals of C++ and Data Structures (Lambert) – Despite the title, this is a surprisingly friendly book. You will need to have some programming background but the book opens with a quick review of the essentials of C++. The writing is friendly, with plenty of diagrams and code examples and it even walks you through the math of analyzing algorithms, a topic that can be a bit intimidating to the newbie coder.

Beginning C++ Game Programming (Dawson) – This is a surprisingly subversive book. Your typical programming textbook is pretty dry and full of dull, mostly theoretical assignments. They might try to liven it up a bit by having you ‘create an inventory management system for a video rental store.’ Get it? Because renting videos and managing store inventory are what all the cool kids are doing these days!

But Dawson takes a different tack and I applaud him for it. He spends a lot of time talking about computer games and how they work with plenty of examples. The fact that these examples just happen to use the programming technique in the current chapter is just a happy coincidence. So the student spends the entire time messing around with games and by the end of the book is dealing with topics like inheritance and polymorphism. I think this is brilliant especially since my edition is copyright 2004, several years prior to the gamification of learning to code. I’m not sure why more computer textbooks for beginners don’t use this technique. I think you could even teach math like this. (Note to self: idea for a math textbook….)

Both of these books still have pride of place on my bookshelf.

Discussion:

C++, Java and C# were all designed to solve the same problem – those darn programmers. All of these languages impose structure and rules on programmers in an attempt to keep them from stomping all over system resources, either accidentally or on purpose. C++ and C# did this by building on C (C# also borrowed features such as garbage collection from Java), while Java wrote its rules from scratch.

As a result, C++ and C# still let you write misbehaving programs and count on you wanting the new features enough to code in a safer, more managed fashion. Java doesn’t let you do that, which is why some programmers think Java is too structured. Which, to Java, is sort of the point.
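
Here’s a small sketch of my own (not from Dawson’s book) of what “misbehaving” means in practice. This compiles without complaint, but the loop walks one element past the end of the array, which is undefined behavior; Java would throw an exception at run time, while C++ just trusts you.

#include <iostream>

int main() {
    int values[5] = {1, 2, 3, 4, 5};
    int sum = 0;
    for (int i = 0; i <= 5; ++i) {   // bug: should be i < 5
        sum += values[i];            // when i == 5 this reads past the end
    }
    std::cout << "sum = " << sum << "\n";
    return 0;
}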

Once again, this week’s program is pretty simple. Just a little hangman program with some sample code from Dawson:

macpro15:week_14_cplus tsinclair$ ./week_14_cplus
Welcome to Hangman.  Good luck!
You have 8 incorrect guesses left.
You've used the following letters:
So far, the word is:
-----
Enter your guess: e
That's right! E is in the word.
You have 8 incorrect guesses left.
You've used the following letters:
E
So far, the word is:
-E---

Enter your guess: l
That's right! L is in the word.

You have 8 incorrect guesses left.
You've used the following letters:
EL
So far, the word is:
-ELL-

Enter your guess: h
That's right! H is in the word.

You have 8 incorrect guesses left.
You've used the following letters:
ELH
So far, the word is:
HELL-

Enter your guess: o
That's right! O is in the word.
You guessed it!
The word was HELLO

 

In case you’re interested, the source code is available here.

 

References:

Lambert, K., & Naps, T. L. (1998). Fundamentals of program design and data structures with C++. Cincinnati, OH: South-Western Educational Pub.

Dawson, M. (2011). Beginning C++ through game programming. Australia: Course Technology.