Friday, September 27, 2013

Target Fixation


You are driving down the road on a dark, rainy night. Your headlights are barely lighting the road ahead in the downpour. As you bob and weave your head and eyes to see through the rain and the wipers dancing across your windshield, you try to pick out the defining details of the road ahead. Trees, branches, deer, skunk, trash ... all manner of hazards litter the roadway, and from time to time you lose all orientation to the sides of your lane. Good fortune is with you; you continue to acquire the roadway ahead, in spite of your gut telling you to pull off the road and wait this downpour out. Your pulse continues to rise and has hit a shuddering 170 beats per minute, your palms are sweating, and your knuckles have the telltale pallid hue as you grip the steering wheel tighter and tighter. You are in full-on combat mode with the road, and the inclement weather is your enemy.

Suddenly, as you come around a turn, oncoming headlights appear to be in your lane, and time slams into slow motion.
At this point, the mind can take a number of paths:
1. Freeze. Total musculoskeletal lock. Inability to process the information being fed in.
2. Panic. Swerve radically and risk collision with objects off the road.
3. Tactical avoidance. Mind and body swing into motion, plotting an escape from the situation in a rapid, coordinated set of motions that take place instantaneously - a reflexive response.
4. Target fixation. Like freezing, but worse: you collide with the target because you actively steer into it.

The description of target fixation above seems ridiculous, but it is a very real situation in which you focus on the objective so intently that your mind and body conspire to do the exact opposite of your intent. Instead of avoiding the obstacle, you end up colliding with it.

This phenomenon happens more often than you might think, and in the area of application performance monitoring, tuning and testing, it is extremely common.

This video clip is a classic example of target fixation ... in the moment of truth, the rider is so focused on the thing he so desperately does not want to collide with that he drives right into it. Don't worry, it's not gory (but it sure hurt, you can bank on that!):

[Embedded video: a motorcycle rider target-fixates on a wall and rides straight into it]

Do you think that the rider in the video wanted to hit the wall? Of course not; that is absurd. It was the very last thing the rider *ever* wanted to do in pursuit of what he enjoyed: riding the open road on a motorcycle. The motorcycle was not the issue, and the wall was not the issue; the rider's brain, and how it processed the threat, was the issue. He could have just panicked and laid the motorcycle down (and it would have also slid into the wall, thanks to angular momentum). He could also have taken evasive action, and the rest of his day would have been a lot better.

This phenomenon was seen in the early days of aerial combat, where pilots would fly into the targets they were intently trying to destroy - never intending to be a "kamikaze." The focus on the target is so intense that all other sensory input is shut out or diminished in importance, relative to the one key, overarching goal - the target itself.

So what? What in the world do motorcyclists and fighter pilots have to do with Performance Engineering? It's all about evasive action: how we train mind and body to work in unison to avoid obstacles - keeping your eye on the target, but also letting peripheral vision and sensory acuity do their work. In terms of Performance Engineering, we're talking about application performance and adapting to the constantly changing state of our environment.

The heart of this issue in Performance Engineering is the notion that we can approach modern applications and systems as static entities - that we can just create a turn-key, templatized, formulaic way of finding and resolving all of our potential performance-robbing and architectural defects. That notion is sorely misguided. The process and practices of effective PE are just like the opening scenario - every test and every production outage is like driving on a rainy night. You might have been down this same road 10,000 times before, but every rainy night is different.

Performance testing large applications is an exercise in constant balance - balancing business needs and risks. I would contend that if you are doing Performance Engineering correctly, you will be constantly bombarded with unexpected issues and information that will often require a lot of thought - sometimes deep thought - and analysis to understand causation/root cause and filter out the background noise.

[October 8, 2013 Edit - Example of Target Fixation]
Here is a greatly simplified example of this phenomenon of target fixation, the one that inspired this article:

Using past performance metrics/KPIs/criteria as the sole measure against which current system testing is compared, and thereby judged to be acceptable or not. Don't get me wrong - there is real value in having a baseline that you compare change against, but a form of data myopia comes with this type of approach.

For instance, suppose you are using key business metrics/KPIs derived from a previous season's issues or a specific outage, such as crashes during the Black Friday sales season of 2012, and those and only those metrics/KPIs are used as the ultimate measure of whether current performance is acceptable. That is target fixation.

The takeaway here is that what is bad is bad, and the current state of the application and environment constantly changes. While you use previous seasons' or tests' results to gauge incremental progress or change, you must not ignore what is staring you in the face. More specifically: if in the 2012 event your "add to cart" functionality was measured to be within acceptable performance criteria, and therefore it is not on this list of metrics/KPIs, but in the last few rounds of performance testing the "add to cart" functionality has been consistently slowing down, it needs to be called out and addressed. This falls into the "worst performers" category below. Constantly call out the worst performers and bring attention to them; don't ignore them because they are not on some summary list of "issues that bit us last year."
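
To make this concrete, here is a minimal sketch in Python of what the alternative looks like. The transaction names, baseline numbers, and the 10% tolerance are invented for illustration, not a prescription; the point is simply that every transaction gets checked against the baseline, not just the ones on last year's KPI list:

    # Hypothetical sketch: guard against KPI target fixation by checking
    # every transaction against the baseline, not just last year's KPI list.
    # Names, numbers, and the 10% tolerance are invented for illustration.

    KPI_LIST = {"checkout", "login"}   # the metrics that "bit us last year"
    TOLERANCE = 1.10                   # flag anything >10% slower than baseline

    baseline_p95_ms = {"checkout": 800, "login": 300, "add_to_cart": 450}
    current_p95_ms = {"checkout": 790, "login": 310, "add_to_cart": 620}

    for txn, base in sorted(baseline_p95_ms.items()):
        now = current_p95_ms.get(txn)
        if now is None:
            continue
        if now > base * TOLERANCE:
            if txn in KPI_LIST:
                print(f"FAIL: {txn} at {now} ms vs {base} ms baseline")
            else:
                # The "add to cart" trap: fine last year, so nobody is watching it.
                print(f"WARNING: {txn} is off the KPI list, but runs {now} ms vs {base} ms")
        else:
            print(f"OK: {txn} at {now} ms (baseline {base} ms)")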



5 Things that you can do to vastly improve your Enterprise Performance Engineering efforts
Each of these deserves its own dedicated post, but these things can get you past Target Fixation and move you toward true Performance Engineering:

1. Test in Production
  • This sounds so scary that companies dismiss it out of hand, and that is the single largest mistake that they can make.
  • Done right, testing in Production answers fundamental questions that you NEED answered about your enterprise operations, performance, and capacity.
  • Modern systems are digital symphonies that almost never scale linearly across the board. If you only test in an environment that is 1/3 or 1/4 the capacity of your Production servers, you have no guarantee that your testing is actually uncovering issues or validating performance requirements (see the scaling sketch under item 3 below). Extrapolation is exCrap-olation.

2. A Team Post-Mortem After Every Production Test or Major Pre-Production Test
  • You have got to come out of every testing effort with action items ... nothing ever goes 100% to plan, and there are always unexpected glitches, surprises, and observations that deserve attention.
  • Assign action items, assign dates, and regroup with an action plan and a follow-up test plan.
  • Drive performance - it doesn't happen by accident.
  • Come up with a 4- or 5-slide dashboard to present each test's findings, action items, and statistics.
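
As a rough illustration only, here is one way a per-test post-mortem record could be captured in Python. The field names and example data are hypothetical, not a prescribed schema; the point is that every finding leaves the meeting with an owner and a date:

    # Hypothetical sketch of a per-test post-mortem record; the fields and
    # example data are invented for illustration.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ActionItem:
        finding: str   # what was observed
        owner: str     # who drives it to resolution
        due: date      # when the team regroups on it

    @dataclass
    class TestPostMortem:
        test_name: str
        run_date: date
        action_items: list = field(default_factory=list)

    pm = TestPostMortem("Production load test #12", date(2013, 9, 27))
    pm.action_items.append(ActionItem(
        "GC pauses on the app tier during ramp-up",
        "app team lead",
        date(2013, 10, 4),
    ))
    for item in pm.action_items:
        print(f"{item.due}: {item.owner} -> {item.finding}")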

3. Don't Extrapolate for Production
  • Extrapolation is a guessing game that by its very definition cannot end in certainty.
  • You can effectively test in scaled-down environments, but you cannot accurately project Production performance from them; it does not work.
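
To see why, here is a toy illustration in Python built on the Universal Scalability Law (Neil Gunther's model of contention and coherency costs). The constants below are invented for the example; a real system's constants have to be measured, which is exactly why you test at scale rather than extrapolate:

    # Toy illustration: why linear extrapolation from a 1/4-size environment
    # misleads. The lam/sigma/kappa constants are invented, not measured.

    def usl_throughput(n, lam=1000.0, sigma=0.05, kappa=0.001):
        """Universal Scalability Law: lam*n / (1 + sigma*(n-1) + kappa*n*(n-1))."""
        return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

    measured_quarter_scale = usl_throughput(4)       # the 1/4-size test environment
    linear_guess_full = measured_quarter_scale * 4   # naive extrapolation
    actual_full = usl_throughput(16)                 # the real Production footprint

    print(f"Measured at 4 nodes:      {measured_quarter_scale:,.0f} req/s")
    print(f"Linear guess at 16 nodes: {linear_guess_full:,.0f} req/s")
    print(f"Model at 16 nodes:        {actual_full:,.0f} req/s  (far below the guess)")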

4. Find Time to Chase the White Whale
  • During performance testing, vast amounts of information are collected, and most of it goes unobserved, unused, and unanalyzed. Many outlying observations are tossed out, but sometimes disturbing trends are also actively ignored because they cannot be explained, or because they are outside the stated scope of goals for the test.
  • Oftentimes, when business politics drives performance testing, the Performance Engineering aspects of our jobs are compromised. Highlight the "things that make you go hmmmm" and build a team to undertake the challenge of identifying and explaining all the little things - because they add up.
  • Develop targeted test plans for those things that fall in the grey area ... "these are not great, but they're within tolerance; they could be better." Today's blips are tomorrow's bottlenecks (see the trend sketch below).
  • Performance Testing goals need to change as problems are solved. When you fix a bottleneck, validate it against the baseline, and re-baseline. Now you're on to the next issue, which may have been uncovered by that last fix. You just removed a massive bottleneck at your load balancers? Well, guess what: your next tier in line is going to get hammered. Adapt to the shift, test for it, and move on.
  • Identify the worst-performing parts of your apps and target them for tuning. Beat the living snot out of them until they perform well.
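
Here is a minimal sketch of that kind of grey-area trend check in Python. The latency data and the drift threshold are invented for illustration; the idea is that a transaction can be within tolerance on every single run while steadily creeping toward becoming tomorrow's bottleneck:

    # Hypothetical sketch: flag transactions that pass every run but are
    # steadily drifting upward. Data and the 5%-per-run threshold are invented.

    def slope_per_run(samples):
        """Least-squares slope of latency (ms) per test run."""
        n = len(samples)
        mean_x = (n - 1) / 2
        mean_y = sum(samples) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
        den = sum((x - mean_x) ** 2 for x in range(n))
        return num / den

    recent_p95_ms = {
        "add_to_cart": [450, 470, 495, 520, 545],  # in tolerance, but drifting
        "checkout":    [805, 798, 810, 802, 806],  # flat - leave it alone
    }

    for txn, runs in recent_p95_ms.items():
        drift = slope_per_run(runs)
        if drift > 0.05 * runs[0]:  # more than 5% of the first run's latency, per run
            print(f"WHITE WHALE: {txn} is creeping up ~{drift:.0f} ms per run")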

5. Listen to the Crazy People on Your Team
  • There is that one guy on your team who says crazy stuff like, "That performance curve reminds me of the torque curve of a failing engine...." and people shake their heads and go on. Stop. What? Explain that. What do you mean? That guy has a picture in his head that he hasn't explained, and it might be a key insight into what the rest of your team is missing.
  • In the movie The January Man, a detective ended up solving a case by listening to his artistic friend, who spotted a pattern in the clues that everyone else missed. He sounded insane at first - until he was proven correct. Abstract thinkers have ways of looking through details and spotting things that others cannot see.
