One thing I miss about working at a large organization is watching the blame game play out. It always amazed me how much more effort went into explaining why a problem wasn’t someone’s fault (and most certainly not his responsibility to solve) than into actually solving the problem. The larger the issue got and the more experienced the players who got involved, the closer the chance of actually solving the problem came to nil.

At a certain government agency -- which I'll call the National Intelligence Bureau -- several different agents had been involved in a few “high profile” incidents of the international variety. This resulted in a public outcry and strict orders from the top to rectify the problem. As an IT contractor working for the NIB, G.R.G. had a chance to watch the whole thing play out from the inside.

The agents’ superiors said that it certainly wasn’t their fault; the agents weren’t served well by the training department. Training said that its classes required competent students, and that the agents involved should never have been hired by Human Resources. Human Resources said that they used a “perfectly objective” hiring process and that the issue certainly lay in the tests developed by the Testing department. Testing assured everyone that the competency tests were just fine, but that the problem was in the test-taking process; ergo, the Process Engineers were responsible. The Process Engineers agreed, not because it was their fault, but because the test-taking process was completely manual. It was the 1980s, after all, and everything was becoming automated.

G.R.G.’s team was brought in to automate the pre-hire competency testing process. The computer system they would develop would most certainly solve the problem of the unusually high competency test scores for certain “seemingly incompetent” agents.

Of course, due to the highly classified nature of the tests and the process, they could only be shown things firsthand. G.R.G. met with a “specially trained” test-giver so that he could be led to the special testing building and into the special testing room to review the special tests. As the process dictated, the test-giver handed G.R.G. the booklet and left the room.

The first thing that G.R.G. noticed was how poorly written the multiple-choice questions were: the second question had no correct answer; the third required basic knowledge of ocean tides and lunar cycles; the fifth assumed that the test-taker knew what color public mailboxes were; and so on. But clearly G.R.G. was wrong; the Testing department had already certified and approved the competency test.

G.R.G. then wondered if the test-takers had talked amongst themselves, pooling their knowledge in some way. After all, the test-giver did not stay in the room while the tests were given. But clearly G.R.G. was wrong; there was a large sign in the room that read “no talking during the test.”

Flipping through the pages, G.R.G. thought that the test-takers might have simply gotten a copy of the answer sheet. What else could explain the high test scores on such an impossible test? G.R.G. walked over to the cabinet where the booklets were stored and checked it out.

The cabinet was built into a small alcove and consisted primarily of a crude, unfinished plywood door. The carpenter who built it must have been the lowest bidder: the door was slightly crooked, fastened with two giant barn-door hinges, and about three inches too small on each side. Though there was a padlock on the door, one could simply reach a hand through the gap and take one of the answer booklets. But clearly G.R.G. was wrong; the test and answer booklets were numbered and always accounted for prior to validating test scores.

And then G.R.G. noticed something below the cabinet: a Xerox Model 9200 that could copy 240 pages per minute and required no access card or money to be deposited. It seemed to be the perfect way to cheat on such a test. But clearly G.R.G. was wrong; Human Resources had already verified the integrity of the candidates and knew they weren’t the cheating type.

In the end, G.R.G. and his team built the automated testing system. There were a lot of hurdles along the way -- one of the biggest being that it had to run on a 3.66 MHz CP/M system with 64K of memory -- but they managed. For some reason, though, it didn’t seem to solve the NIB’s competency dilemma. Years later, unusually high test scores were still rampant and agents still found themselves involved in awkward international incidents. G.R.G. didn’t stick around long enough to know whether the problem was ever solved, but last he heard, someone was hard at work figuring out who could best solve the problem.
