Developing Good Judgment – by Alan Hunt

Ever wondered what it's like to program professionally? I have some answers this week.

Remember when I mentioned faculty members would contribute to Engineer's lounge? This week, it's happening. Prof. Alan Hunt wrote an article that I think is timely and important for anyone who hopes to program professionally, whether at an internship, a research position, or a full-time job. If you've ever wondered what it will be like to deliver actual products for businesses with the code you write, this article should give you a good idea of what to expect and how to navigate the challenges of working professionally as a programmer just starting out. I hope you enjoy it as much as I did.

Developing Good Judgment

As a hiring manager who has worked with many developers at the start of their careers, and as a developer myself back in the mists of time, I have seen a common theme when a particular development effort falls short – the developers didn’t understand what problem they were trying to solve.

While you're in school, this is almost never an issue – the problems you are solving are clearly defined, tailored to be finished in a semester, and have expected outcomes that your "boss" has calculated before you even start working on the project. The primary goal of these problems is to illustrate some aspect of computing theory, or to provide practice in applying ideas learned in your coursework. In other words, the problem that you are trying to solve is the problem of making yourself a better computer scientist! Unfortunately, upon graduation, the majority of computer science students will get jobs in the industry, where none of these things hold true: problems are frequently poorly defined, indefinite in nature, and difficult to measure, and they have stakeholders with competing agendas trying to move the project in different directions. The primary goal of these problems is to solve a business problem, which can vary from supporting HR in their applicant screening process to helping finance generate monthly reporting to building software that is sold to your customers. In each of these cases, learning and using technology is a means to an end, rather than the end to be achieved.

One place that you see this disconnect frequently is in technology selection and architecture decision making. The temptation to use the "bleeding edge" of technology is strong for inexperienced developers, who want to do what's trendy without taking much time to assess whether the tools they are choosing are actually the best tools for the job at hand, and whether those choices introduce hidden costs and ongoing maintenance issues. Let's imagine a scenario where the project is relatively small and unlikely to require extensive expansion in the short term. Choosing a constellation of frameworks that are not used consistently in the environment will take more time initially (due to the learning curve; the cost of tool installation, setup, and configuration; and the cost of making the code comply with the framework structure). It will also take more time in the long run: each time a change needs to be made, the learning curve is likely to reapply, and the overhead of complying with the framework still exists. Furthermore, it limits the people who can effectively maintain that project to those familiar with the framework, or requires that they go through the same learning curve as the initial developers. Now let's make matters worse – after six months, there is a critical update in one of the tools used, requiring that it be patched to a new version to fix a security flaw. Of course, upgrading that tool means that another part of the framework needs to be upgraded, which in turn requires another upgrade… and at the end, all of the effort expended produces no additional value to the business; it simply reduces risk introduced by the initial technology choice.

A second place that you see this disconnect frequently is in the tendency to accept the requirements given to you as complete, correct, and effective in solving the underlying issue the application is being developed for. This is sadly almost never true! Requirements originate with customers or business users who are trying to define the problem that you need to solve for them. They frequently make assumptions in that process based on what's "obvious" to them, in the same way that you wouldn't bother to tell another developer not to duplicate code or to use descriptive variable names. They may also not understand what is simple to achieve and what is very difficult, so they may ask for something that is not important to them but is very hard to build – which can only be done at the expense of things that are more important! They may not even understand all of the intricacies of their own problem, and so will provide incomplete information. When this happens, the developer can do one of two things. The first is to provide exactly what is asked for, then shrug when it doesn't solve the problem and say that they did their job. They will be right – but no one will be happy, and they will create an unfortunate reputation for themselves. The other is to work to understand the problem they are being asked to solve: ask questions, confirm assumptions, examine the current process and tools, and build incremental prototypes and solicit feedback throughout, so that no one is surprised at the end. This may mean that you need to learn enough about finance, or warehousing, or logistics, or pricing, or quality control to ask good questions and to make sound assumptions. Guess which one happens most often?

The core of these problems is that success in the real world depends on developing good judgment. Knowing how to pick tools, anticipate problems, and assess risks and benefits, and knowing when to question and when to push back against your managers and customers, are not easy things to learn. There is an old joke that is surprisingly true: "Where does good judgment come from? Experience. Where does experience come from? Poor judgment." The only way to really internalize the ability to solve the right problem is to solve lots of problems. Poorly. And then, from each issue that you run into, find ways to avoid making the same mistake the next time. I always told my teams that if you ever look at something you wrote five years ago and don't think that it's junk, you're not making the progress you need to as a developer. It's important to understand that the cycle of trying, failing (or creating a mess you have to clean up later), and then doing it better the next time is a necessary one. No book, no Wikipedia article, no course, and no Stack Overflow post can teach you judgment. It's acquired through practice, and through a constant desire to make each application you write better, more scalable, more maintainable, and more user-friendly. As with any profession, mastery will not appear overnight, but the more code you write, and the more you work with non-technical people to solve real problems, the more capable you will become.
