The Desperate Shortage of Designers in Computer Science

This summer I’m interning at a tech company that is very design-centric. When building new software, our first consideration is what will deliver the best user experience and look great, not what is technologically feasible.

How often do you see an app that’s absolutely beautiful and a joy to look at but doesn’t really do anything? That’s an all-too-rare problem to encounter. It is far more common to see applications with a mediocre (at best) interface.

Computer Scientists Need to Be Better Designers

It’s tempting to say that design and computer science are different fields, but that’s really not true.

To implement a design you have to be a good programmer, and to build an interface for humans you need to be a good designer.

It’s not good enough to have graphic designers build pixel-perfect wireframes and then trust non-designer programmers to implement them. There are a million and one little decisions that must be made when this implementation occurs1, and if these aren’t made by folks who understand design, your originally beautiful idea is going to have terrible execution.

I get it: not all computer scientists have design chops, it’s unfair to expect that, and you need people who focus strictly on the backend. I’m not saying all computer scientists need to be excellent at visual design. The problem is that the ones who are good at it are vanishingly rare in proportion to the need.

Here’s a novel idea: build something incredibly, stupidly simple, and spend 90% of your development time improving the design and user experience of that incredibly, stupidly simple idea.

You’ve probably never done this before, and you’ll be shocked at the result. Your software will be so simple that the average user is (1) delighted by the unusually good user experience and (2) delighted that your app does so little that it’s actually usable.

We need less functionality and more user experience. This sounds like a terrible idea to 10% of the population, and they voice their opinions very loudly. The other 90% are the quiet bread-and-butter user base who are desperately craving simpler digital lives. Simplicity liberates them, and they pay good money for it. The market has proven this over and over again.

How Can Computer Scientists Become Better Designers?

Artists have many books that talk about design: hues, values, composition, and textures. Graphic designers have books that talk about the intricacies of fonts: families, sizes, line heights, and characters per line. These books should not be confined to just artists and designers. They should be referenced by computer scientists just as heavily as textbooks on backend programming. They should be a core part of postsecondary academia. They should be a major component of discussion for every software project that’s intended for humans.2

In short, computer scientists can become better designers through study and practice. That’s how they learned to program in the first place, right?

The Danger of Not Becoming Better Designers

Making visual decisions usually requires more creative thinking than building software that functionally matches a specification memo. You can teach a monkey how to program.3 Design is more stimulating and satisfying. And right now, it’s much rarer.

If you don’t get good at design, you will:

  • Confine yourself to enterprise and legacy backend systems where visual chops don’t matter
  • Never have an opportunity to participate in the startup world, where design is the #1 deciding factor in whether you’ll make it or not4
  • Never experience the sheer delight of your software’s users. Laypersons get excited about an incredible design and UX, not about shaving .015 seconds off a database query
  • Render yourself obsolete as inexpensive labor and futuristic code-writing programs take over your job

If you’re a good designer who is capable of executing your designs, you will never be out of work. You can work on what you want to work on. You can build your own stuff. You can write your own ticket.

Now go design, computer scientist. You know how to code, which means you can implement those designs. That’s dangerous. The world needs you.

1Just trust me. I’m young but I’ve been doing this for 5 years.

2Many enterprise programs are written to be consumed by other programs and systems rather than by humans, which is why I make this distinction.

3Almost.

4Okay, I just made up this statistic, but I’ll die defending it. I firmly believe you can sell ice in Antarctica if your design is good enough. And by “good enough” I mean better than 99% of what passes today as good design. That’s how starved the computer science community is for design skills. There’s lots of design talent in the world, but generally speaking, this talent doesn’t control the code base, and that’s a shame.

The Mathematical Probability of Accuracy for Ecological Validity in a Given Experiment for Behavioral Psychology

In behavioral psychology, there are two ways to observe animal and human behavior: in a controlled laboratory environment, and in a non-controlled real-life environment.

Knowing how many independent variables1 exist in such an experiment is crucial to diagnosing the experiment’s accuracy. Ideally, one should have a single IV. When this IV changes and the DV consistently changes with it, one can accurately say that a direct correlation exists. Predictable outcomes, after all, are one of the primary goals of psychology.

In a controlled environment, it is quite easy to boil the IVs down to a single one. The time of day, the light intensity, the amount of food present—all of these things can be more or less kept constant from experiment to experiment. This is why laboratory testing was so popular in the early 1900s. Things were easy to control, and experiments like Little Albert and Pavlov’s dogs were in the books while they were still in the laboratories.

The problem with controlled experiments is that they often do not reflect real life at all. Dogs usually don’t hear bells before being given food. Babies usually don’t hear loud noises after touching rats. Real life is much less consistent. Laboratories can give key insights into some aspects of existence, but they often don’t have a helpful takeaway for everyday life. They lack ecological validity.

To remedy this, scientists observe humans and animals in real life. The difficulty with this approach is that real life is much harder to control than a laboratory. There are many moving parts. No two mornings are alike in a non-controlled environment.

All of this is stuff you’ll find in a normal psychology textbook; it’s also intuitive even if you have never studied psychology. But as a person interested in mathematics, I wanted to quantify what this looks like: as you introduce new IVs into an experiment, what is the statistical likelihood that the results of that experiment accurately reflect a correlation between what you suspect is the primary IV and the resulting DV? A mathematical formula seemed the appropriate way to answer such a question. Sitting at my desk with pen in hand, I began to derive one.

First, I started with the obvious: a single IV would lead to 100% accuracy. Let’s say you knew there are three ways to scare a rabbit: by playing a loud noise, by grabbing it suddenly, or by showing it a predator. In a laboratory environment, you could throw away the last two (it’s a controlled environment, remember) and experiment with just the loud noise. This becomes your single IV. You observe that when you play a loud noise, the rabbit is scared, and when you do not, it is not. Direct correlation, 100% predictability.

Second, I needed to find a pattern before I could begin writing my formula, so I asked: what happens when there are two IVs present? To use our example, what if you played a loud noise and grabbed the rabbit at the same time? Assuming you hadn’t performed the earlier experiment, you would conclude the following: something scared the rabbit, and it was either (1) the loud noise, (2) the grabbing, or (3) both. You know it is one of those three options, but you’re not sure which. If you had to choose among these options, your accuracy would be 33%.

Just by introducing a second IV, the accuracy drastically dropped from 100% to 33%! This was quite a jump. I needed one more data point before I could really write my formula. So, what happened if you introduced a third IV by also showing a predator to the rabbit while performing experiment #2? Then you would have seven explanations for what scared the rabbit: (1) the loud noise, (2) the grabbing, (3) the predator, (4) the loud noise and the grabbing, (5) the loud noise and the predator, (6) the grabbing and the predator, or (7) all three. Seven combinations meant that the likelihood of picking the right one was 1 in 7, or about 14%.

It was around this time that I realized we were working with binary math:

  • 1 in binary is 1 in decimal
  • 11 in binary is 3 in decimal
  • 111 in binary is 7 in decimal

There was our formula! With each additional IV, you simply add a “1” to the binary number like a tally mark, and the decimal equivalent of that binary number becomes the denominator of the accuracy for that scenario. In other words, with n IVs there are 2^n - 1 possible explanations, so the accuracy is 1 in 2^n - 1.2

Formula in hand, it was time to make a visual out of this.3 Thanks to Xcode 6 and Swift, I was able to code it up fairly quickly and post a gist of it for you to scrutinize. This graph is the nexus of psychology, mathematics, and computer programming. It was a fun project to see through to completion.
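If you’d rather not dig through the gist, a minimal sketch of the core computation behind the graph looks something like this (the function and variable names here are mine, purely for illustration):

    import Foundation

    // With n IVs there are (2^n - 1) non-empty combinations of IVs
    // that could explain the DV: the binary tally 1, 11, 111, and so on.
    // The odds of picking the right combination are therefore 1 in (2^n - 1).
    func accuracy(ivCount: Int) -> Double {
        return 1.0 / Double((1 << ivCount) - 1)
    }

    for n in 1...8 {
        print(String(format: "%d IV(s): %.2f%% accuracy", n, accuracy(ivCount: n) * 100))
    }

Running it reproduces the data points above: 100% for one IV, 33.33% for two, 14.29% for three, and only about 3% by the time you reach five.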

[Graph: percentage of accuracy on the Y axis; each new dot represents an additional IV]

As you can see from this graphic, ecological validity is very, very difficult to attain with certainty. Real life contains many IVs, and after fewer than half a dozen of them, the accuracy of attributing the DV to the correct IV plummets to near zero.

This is the reason that, despite their limits, laboratories are still in use in psychology. It is also the reason that Facebook tinkered with users’ feeds for a massive psychology experiment: if you’re going to insist on doing experiments in real life, you have to do them at such a scale that you can offset the huge unlikelihood that the IV you suspect is causing the DV is really the right one.

1I assume you know what independent and dependent variables are. For the remainder of this article, I use the abbreviations IV and DV to denote them.

2I studied mathematical proofs in discrete mathematics. You could get a lot more formal than I have here with a comprehensive proof, ending in quod erat demonstrandum (QED), or “thus it has been demonstrated.” But I’m really not interested in that, and I didn’t think you were either.

3Unless you want to geek out over the code, you can safely ignore the values on the X axis. It’s the Y axis (percentage of accuracy) and the addition of new data points (each new dot is an additional IV) that you should find particularly interesting and, if you’re a behavioral psychologist, disturbing.

This Nonsense about Twitter Followers to Tweets Ratio

I’m noticing some people are perpetually concerned that they have 5,000 or 20,000 tweets but only a dozen or a hundred followers. They’re making a ratio out of these two numbers and it’s making them feel guilty.

Maybe you’ve not met these people, and if that’s the case then count your blessings. But think about this: what’s the purpose of Twitter, for you? 1% of people on Twitter are influential. The other 99% follow them. It takes both. Twitter is just a natural extension of the social validation rules that occur in all areas of life. The one and the many.

But—and this is equally important—Twitter also exists for people in the 99% to talk to other people in the 99%. In this function of Twitter, it’s a one-to-one communication tool.1 Think free SMS with multiple beautiful apps to choose from (notice how with true SMS you’re limited to a single app that your operating system provides, which may or may not have taste).

The SMS analogy is important because the ratio is transferable. Over the course of months and years, you send thousands of text messages to 10 or 20 people. Nobody ever felt guilty about having too few contacts for the amount of texting they were doing.2

And so—long live tens of thousands of tweets and 50 followers. That’s healthy and normal.

1Twitter seems to be more heavily promoting this aspect of communication. In the browser version of Twitter, replies are not visible in the timeline of a user’s profile—even if they are replies to someone you follow. The tweets are still technically public assuming you have a link to them, but otherwise, if you’re not a recipient of the reply, I’m not sure how you would get to them. It will be interesting to see if this shift becomes the standard in native Twitter apps, which still show replies on profiles.

2Well, unless you’re under 20 or a very strange person.

The Difference Between Pragmatists and Scientists, and Why Pragmatists Are Better When It Comes to Programming

In academia, learning to program is always about the journey. It’s never about the end goal. Why should it be? After all, academia in computer programming gives you nothing other than what has already been done by thousands of students before you. You’re not creating anything new. The finished result may be exciting to you since it’s new from your vantage point, but it’s very, very old to the professors. Innovation and building cool new things just don’t happen in undergraduate academia, and as a result, it tends to produce programmers who are scientists, not pragmatists.

The Difference

  • Scientists want to know everything there is to know about a technology.
  • Pragmatists are only interested in what’s necessary to get the current job done.
  • Scientists, upon learning the mainstream way to do something, immediately think of fringe case scenarios that would break this mainstream convention—and they’re uncomfortable until they’ve found an alternative solution that handles any possible scenario. Any.
  • Pragmatists are content with “that’s just how you do it” and never worry about fringe cases.
  • Scientists easily get lost in rabbit holes that ultimately have nothing to do with the project at hand.
  • Pragmatists hate deviating for more than five minutes, so they’re constantly reassessing to make sure they’re working on exactly what’s at hand. You’ll be hard-pressed to find them researching a topic just for the fun of it.
  • Scientists have an irrational fear of being caught dead with suboptimal code.
  • Pragmatists couldn’t care less what their code looks like, so long as it works. They’ll format it and document it later.
  • Scientists spend a lot of time planning, and their day-to-day output is quite small. If they’re building an app, by the end of month #1 they still have a blank canvas in Xcode as they work on their UML designs backstage.
  • Pragmatists build an entire MVP app in just a few days. They aren’t even sure what UML stands for.
  • Scientists spend so much time reading about technology, and their knowledge is so vast, that they appear to have near-photographic memories.
  • Pragmatists prefer learning by doing. Their conversations have less jargon in them, but their portfolios are much more impressive.
  • Scientists take their art very seriously. It isn’t a game to them.
  • Pragmatists are only peripherally aware that they’re pragmatic in their approach. They’re too biased for action to give it much thought.

Pragmatists Win

At the end of the day, what is the point of knowing code? Is it to be the smartest person in the grave, or to build stuff?

Academia and most programming conferences and programmers seem to think that it’s the former. As a result, most people’s output in a given day is much smaller than their potential output. Their creativity is stifled. They’re mired in the drudgery of non-innovation.

I would rather have built something amazing, remembering little of how I did it, than be lauded for my knowledge while having built nothing.

Knowledge is only as empowering as you allow it to be.

Now go build some stuff.