Some Thinking on Feedback Loops
I’ve been cranking away on CTF challenges lately as a way of testing my knowledge, which has been good at identifying the elements I understood at the “book” level, but not at the “keyboard” or “brain” level. I think that as you grow and continue in the computer field, it becomes more important to regularly test what you know, as a means of ensuring understanding (and figuring out where you can improve). But I’ve wondered whether these exercises are adequate tests of what I know.
I’ve seen many projects flounder largely as a result of folks (sometimes including myself) making the mistake of thinking they fully understood the problem, and then, in the process of trying to deliver, realizing their grasp of the tooling or concepts was well below what they’d initially thought. An analogy might be someone reading a book about road design, paving their driveway, and then committing to building a 20-mile highway. Think of how many software projects you’ve been on that lasted months to years, swallowing up resources, because a flippant “I read the docs and think this can be done in a few sprints” was the justification to gear up for a death march.
Now, there are many good reasons to try things like koans, code katas, CTFs and the like. For one thing, they’re prototypes and one-offs that you can throw away. I’ve seen a lot of janky code in product code reviews where the response is “this is my first time trying something like this.” But I also think that, properly approached, they’re a good way of building understanding and pushing the boundaries of your knowledge. Good exercises shouldn’t just reiterate what you already know; they should require you to reach outside your comfort zone and probably involve a little research. Otherwise, you’re just doing unpaid work for the benefit (and likely enjoyment) of no one.
The Plateau of Adequacy
At a certain point in your career you’ll reach a stage of competency that’s deemed acceptable for the work you do (or you get canned). You could be the “senior” engineer at a company with no place to go besides management, or you could be doing maintenance for a product that’s largely frozen by customer expectations.
While the company you work for may have an impetus and maybe even resources to improve your skills, it’s not a given, and it may not be helpful overall to your career (“have some training to learn another company-internal library/tool”). If you rightly assume that the value of your knowledge has a half-life, you’ll be regularly searching out the things that will be useful to you later on, either by staying on the fashion treadmill picking up the latest framework, or by spending time learning things like algorithms that are generally timeless (at least as far as your career is concerned).
You might have a source of feedback on a specific set of skills from work, but that should not be your only source of feedback. Job performance can be a lousy way of finding where your skills rate in the larger workforce: you might have no other reference points in the environment, and there’s no one to point out better or different ways of doing things. You might be the only one in the office who knows how to do a SQL injection, but that may also be the full extent of your “1337 h4x0r skillz.” You can understand enough Rails to make a CRUD application, but not know how to run the Ruby debugger if things ever get hairy. You can be a good employee, but an overall mediocre programmer.
There are likely a bunch of skills that would help you be better at your job, but that you would not otherwise learn on the job. Those could be crypto skills, infra skills, networking skills, Unix skills, Comp Sci skills, math skills, scripting skills. These skills are likely complementary to your job, but your company might not pay you to learn this stuff, and you may just be a local optimum in the office.
“I’m not a coward, I’ve just never been tested; I’d like to think that if I was, I would pass”
If you’ve been in the industry for 10+ years and you actually bother to learn things outside of your job (don’t laugh, this applies to a lot more folks than you’d think), you’ll end up with a large amount of knowledge that you’ve heard but never tested. If you spend 20 minutes a day reading blogs, that amounts to 1,200+ hours of reading over the course of a decade. And while reading blogs gives you knowledge of what’s going on around you, 20 minutes of reading about an algorithm is not the same as 20 minutes coding and wrestling with an algorithm. You can end up with the illusion of competence where, because you can speak the lingo and maybe do some of the parts, you’re under the impression that you can simply do the thing you’ve heard about.
The illusion of competence can be a big hazard, not just to yourself, but to others who might be relying on you to deliver. I can think of several interviews I’ve done in the past where the candidate talked a really good game about administering Linux systems, but when they were handed a laptop and asked to perform some basic operations, they couldn’t deliver. I don’t mean that they had some anxiety and butterfingers on the keyboard (which is expected to some extent), I mean they literally didn’t know basic Unix commands, like `ls` and `cat`. This is the Dunning-Kruger effect, where they’ve drastically overestimated their abilities, likely because of hitting a local optimum at their previous job. These are folks who might have been using Linux for years, but because they were limited largely to GUI operations and occasionally copy-pasting things off of StackOverflow, were unable to reflexively reach for the right tool when “stuck” in a console.
Imagine hiring a plumber who can talk for hours about plumbing, but only knows how to use a monkey wrench. Even if they’re an expert with that wrench, it won’t fix everything, and it’ll be inadequate for many of the tasks it could be pressed into. While you don’t expect the plumber to know exotic welding techniques to fix a pipe, you do expect that they’ll know a variety of ways to approach and solve a problem before you let them deal with the sewer pipe.
On the other end, there’s Imposter Syndrome, where you can’t internalize what you have done, and you find yourself unsure of your own competency. After discussing this with colleagues, I’m beginning to think some of this comes from weak feedback loops, where you aren’t tested frequently or adequately enough, and because of that poor feedback loop it’s easy to delude yourself that you were merely lucky, or that the testing only showed a surface understanding of the area. This is a complex subject and I don’t like watering it down for the sake of a blog post, but I think it’s possible to deal with Imposter Syndrome to some extent by making an effort to understand what competency in a given area looks like, and then looking for adequate feedback mechanisms to test that competency. This is how I’ve been trying to deal with my own IS, and so far it has been helpful (although YMMV).
In trying to think of how confidence and feedback loops collide, I’ve made this unscientific table of outcomes:
| competency | confidence | feedback | outcome |
|---|---|---|---|
| low | low | low | An honest beginner, knows they know nothing |
| low | high | low | Dunning-Kruger |
| low | low | high | Beginner student in an ideal setting |
| low | high | high | Is this a thing? |
| high | low | low | Imposter Syndrome |
| high | high | low | Big Fish/Small Pond - “High Potential” |
| high | low | high | Really bad Imposter Syndrome |
| high | high | high | God Mode? |
Feedback loops - the parrot, the monkey, the human
Feedback loops are a tricky business. You could have a good feedback loop, but because of low confidence in the loop itself, continue to feel low confidence in your own competence. You can also have an inadequate feedback loop, but because of low competency and high confidence, mistake it for a good one. I can think of several online courses that give a false impression of mastery, where the profitability (in a few senses of the word) of the course doesn’t depend on turning out competent students. Those students generally hit the brick wall of reality in unpleasant ways later on, when they find that their understanding was hollow.
Information can be passed around via writing or talking, and while that information can be useful, its use only extends so far. For example, imagine you have trained a parrot in your office to say “AES with CBC is subject to padding oracle attacks.” (Don’t worry if you don’t know what this means, unless you’re using encryption.) Now this sounds like an incredibly smart thing for the parrot to say. At fancy dinner parties where people aren’t well versed in cryptography, that parrot would get lots of admiration and treats. However, the parrot doesn’t really understand the meaning of what it has said, which becomes clear once you ask it to explain itself. At best, you might be able to train the parrot to squawk any time it sees you using AES-CBC as it watches from your shoulder (“pair-ot programming”), but that’s where it ends. If you end up using ECB to make the parrot stop squawking, you’ve made the problem worse, but the parrot hasn’t a clue. This is a sort of “programming by trivia,” and the results are only as good as what the parrot has heard.
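(To make the parrot’s one-liner concrete: the reason switching to ECB makes things worse is that ECB encrypts each block independently, so identical plaintext blocks produce identical ciphertext blocks, leaking the structure that CBC’s chaining hides. Here’s a minimal sketch of that leak, assuming the third-party Python `cryptography` package - the example is mine, not from anything above.)

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
pt = b"ATTACK AT DAWN!!" * 4          # four identical 16-byte blocks

# ECB: each block is encrypted independently, so repeats show through
enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ecb_ct = enc.update(pt) + enc.finalize()

# CBC: a random IV chains each ciphertext block into the next plaintext
iv = os.urandom(16)
enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
cbc_ct = enc.update(pt) + enc.finalize()

def blocks(ct):
    """Split a ciphertext into hex-encoded 16-byte blocks."""
    return [ct[i:i + 16].hex() for i in range(0, len(ct), 16)]

print(blocks(ecb_ct))   # four identical blocks - the repetition leaks
print(blocks(cbc_ct))   # four distinct blocks
```

Neither mode is authenticated, which is part of what makes the padding oracle attack possible against CBC in the first place; but at least CBC doesn’t broadcast your plaintext’s structure.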
Fortunately, in this office you’ve also got a trained monkey. This monkey has been so well trained that when it sees AES-CBC ciphertext, it presses the correct keys on a keyboard (`./go_bananas.sh`) to break that string. A monkey this smart could probably serve cocktails, use roller skates, and could be trained to type a different command when it spotted a different stimulus. But if that script ever breaks, or the ciphertext isn’t recognizable, the monkey is out of luck (and probably stuck making banana daiquiris from now on).
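For the curious, here’s roughly the core loop a script like `./go_bananas.sh` would be automating: recovering one block of plaintext through a CBC padding oracle. This is a sketch under assumptions, not real tooling - the `oracle` callback is hypothetical, standing in for whatever endpoint leaks whether a forged ciphertext decrypts to valid PKCS#7 padding.

```python
def recover_block(oracle, prev_ct: bytes, target_ct: bytes) -> bytes:
    """Recover one 16-byte plaintext block via a padding oracle.

    `oracle(forged_prev, block)` is a hypothetical callback that returns
    True when `block`, decrypted against `forged_prev`, has valid padding.
    """
    intermediate = bytearray(16)        # AES-decrypt(target_ct), pre-XOR
    for pad in range(1, 17):            # desired padding value 0x01..0x10
        pos = 16 - pad
        for guess in range(256):
            forged = bytearray(16)
            forged[pos] = guess
            # Force every already-recovered tail byte to decrypt to `pad`.
            for i in range(pos + 1, 16):
                forged[i] = intermediate[i] ^ pad
            if oracle(bytes(forged), target_ct):
                # Valid padding means the decrypted byte equals `pad`, so:
                intermediate[pos] = guess ^ pad
                break
        # (A robust version double-checks the pad == 0x01 edge case, where
        # a lucky 0x02 0x02 ending also counts as valid padding.)
    # The real plaintext is the intermediate state XORed with the real
    # previous ciphertext block (or the IV, for the first block).
    return bytes(i ^ p for i, p in zip(intermediate, prev_ct))
```

The monkey’s limitation is baked into that `oracle` parameter: change how the target leaks its padding errors, and the script (like the monkey) is out of luck.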
Now let’s bring in a smart human. When confronted with an unknown ciphertext and the binary blob that generated it, she can reverse-engineer the blob to figure out how the ciphertext was generated (and can come up with possible weaknesses in the process). She didn’t come to this position by luck - she had to understand how programs work on a machine (which often includes understanding operating systems and programming languages), how to read and interpret assembly, the math that might be involved in the crypto, and many other skills that are different yet complementary to each other. There’s a lot of work to amass that level of knowledge, but if she hears “AES with CBC is subject to padding oracle attacks,” she can do something with that information, like applying that knowledge to other crypto primitives she knows, or writing new tools for the monkey to use.
When evaluating a feedback loop, it may be helpful to ask whether that feedback loop asks you to be a parrot, a monkey, or a human. It’s really easy to get the impression that you’re gaining mastery when all you’ve really been trained on is how to say or do the right thing at the right time. The ideal feedback loop not only gives you the vocabulary and perhaps some tools or actions to perform, but also engages your brain in ways that let you approach problems you haven’t encountered yet, and, as you abstract the problem to make it easier to respond to, shows you where those abstractions are leaky.
To extend this a bit to what makes for a good technical interview: it’s not trivia about Linux command-line flags (which a parrot could pass), or typing the right commands on the console when presented with a stimulus (which a monkey could pass) - it should include something that requires reasoning, handling the unexpected, and a bit of novelty.
At some point all of us are going to be a parrot, a monkey, or a human, but it’s good to understand where you lie on that spectrum well in advance of a job interview, a major project, or, well, anything. I don’t see this as a problem just in software engineering - the more I look, the more I see it (although that might be confirmation bias in action, too ;-)