How to know when to change course in your research

Eva Lantsoght
5 June 2014

Research is not a linear process at all. More than anything, doing research is like trying to solve a maze. 

At times, you will get stuck in dead ends. And very often, it will take quite some time to realize that you are blocked by a dead end and just shuffling in place. Maybe your advisor is convinced that your dead end has a hidden door somewhere. Maybe you are fixating on a possibility that simply does not work.

And movement forward in your research rarely happens incrementally; it happens by building up friction and frustration, and then taking a leap forward. So maybe you just need to push through a little more, and something will click and it will all fall into place...

How can you know whether you have hit a real roadblock and need to change gears, or are just building up friction and doing the work necessary to move forward?

This question is probably one of the toughest that haunt us in the research process. If we knew in advance where the dead ends were, we wouldn't walk into them and spend months (or even years) trying to force a breakthrough on a lost path.

Granted, I don't believe there is a ready-made solution for this problem. Research is like that: you need to chew through the hard parts, you need to do the deep work, and there are no shortcuts in the process. There are plenty of tools that can make our lives easier and help us manage our time, but for deep work, for what really matters in research, there are no hacks.

Nonetheless, I think there are some red flags that might indicate you are wasting your time on a lost track, and I would like to discuss them today.

1. You are manipulating your data

This is the first and biggest red flag. Never manipulate your data to make them fit a theory. Never make up results. You don't want to follow the path of Diederik Stapel and other not-so-glorious science fraudsters. If your theory turns out to be proven wrong by your data, don't try to force a square peg into a round hole; acknowledge that something is missing in your strategy. But hey, don't you think it is more exciting to try and figure out why your assumptions were proven wrong than to make them fit? Finding ways that don't work is progress too; always keep that in mind!

2. When you look at results from other labs, yours stick out like a sore thumb

This red flag would mean that something in your measurements is not working, or (less likely) that you are on to something really new and exciting. If your test results do not match similar tests from other labs, you need to carefully revisit the techniques you are using. Review all the steps in your experimental procedures, check your input (whether that is raw materials from suppliers, or data) to make sure it is well described, and run a few classic benchmark tests to assess whether you really have a good grasp of your test setup. If possible, talk to senior colleagues in your lab - they might have run into the same trouble a couple of years ago - or see if you can get in touch with researchers from other labs to discuss your deviant results.
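A minimal sketch of such a comparison against published values (all quantity names and numbers here are hypothetical): flag any measurement whose relative deviation from the benchmark exceeds a tolerance you choose for your field.

```python
# Hypothetical sanity check: flag measurements that deviate more than
# a chosen relative tolerance from published benchmark values.

def flag_outliers(own, published, tolerance=0.15):
    """Return the names of quantities whose relative deviation from
    the published benchmark exceeds `tolerance`."""
    flagged = []
    for name, value in own.items():
        reference = published[name]
        deviation = abs(value - reference) / abs(reference)
        if deviation > tolerance:
            flagged.append(name)
    return flagged

# Made-up example values for illustration only.
own_results = {"shear_capacity": 410.0, "deflection": 2.1, "crack_width": 0.9}
benchmarks = {"shear_capacity": 400.0, "deflection": 2.0, "crack_width": 0.3}

print(flag_outliers(own_results, benchmarks))  # ['crack_width']
```

Only the quantity that sticks out like a sore thumb gets flagged, which tells you where to start checking your procedure.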

3. You can't think of new ways to approach your problem from the same perspective

You are just utterly, completely stuck. Stuck, stucker than stuck. Your motivation is at a subzero level. You try to brainstorm, you try to mindmap and whatnot, but you can't feel a spark of excitement about a possible different turn to take. You are just shuffling in place. And since you are the expert in your field, you should learn to recognize that "shuffling in place" feeling. It's awful, and we are all confronted with it one day or another. And when that day comes, don't get mad at yourself, but take it as a valuable lesson.

4. Your intuition tells you something is wrong

Along the same lines as number 3: your intuition does know something about your research. If your motivation has gone down the drain, if you feel every morning as if you are facing an impossibly daunting task, and if your enthusiasm is completely MIA, then your intuition is also giving you signs that something is not working. Worse even: if you start to eat crap, surf the internet all the time, stay late in the lab to get nothing really done, develop strange headaches, can't sleep at night, or generally feel miserable, your intuition is shouting at you to stop, slow down, or change course. Stop beating yourself up, and spend a day going over your possible other approaches, away from what you were developing earlier.

5. You are violating basic assumptions

Big red flag here. If you are applying a theory or method and you are violating its basic assumptions, then you should not be using said theory or method. Often, the basic assumptions make perfect sense at first sight, and you will have an immediate understanding of their importance for your approach. Sometimes, the basic assumptions are a little more obscure; then you might need to review the background of the method or theory you are trying to apply, and spend some extra time understanding and investigating the original assumptions. Theories are only valid within their assumptions - remember that very well.

6. You can't return to basic principles from the path you've taken

If you can't match a benchmark test to your theory, something is missing. If you can't solve a super-simple basic case, something is wrong. If you can't return to basic principles and standard cases from the path you have taken, then you should not apply your method to more complex cases. The beauty of science often lies in simplicity and clarity. If you need pages of code and a grocery list of assumptions to make something work for one specific case, you are essentially back to trying to fit a square peg into a round hole.

7. You are applying methods outside of their bounds

In some conditions, you might be able to stretch the application boundaries of certain methods and theories a little, but in most cases there are well-defined reasons why you can't take a theory outside of its domain. Here, I am thinking of a domain as we describe it in mathematics - I'm all for taking methods from different disciplines and applying them in new and innovative ways. Multi-disciplinarity and learning new skills totally rock, don't get me wrong. But if you take a formula that was derived for certain conditions, then keep it within those conditions (read the background to check whether those conditions were based on calculations or experiments).

8. You can't make your boundary conditions make sense

Along the same lines as number 7: if you are either running outside the bounds of the domain of application, or you are not able to make your boundary conditions make sense, then something is rather iffy. As always, go and check the original references to figure out how the researchers before you dealt with the boundary conditions and what they were based on, and then judge for yourself whether it is time to pack your bags and turn back before the point of no return.

9. You need extremely complicated formulas

The solutions to complex problems are often beautifully simple. If you need extremely complicated formulas, ask yourself if you can simplify them into something much more useful and handy, and still get a result within 20% of the experimental results (20 - 30% is an acceptable level of uncertainty for a simplified method in structural concrete problems - in your field these values might differ, of course!). If you can't reduce your solution to something simpler that works well enough, then something is not working properly.
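A quick sketch of that check (the 20% band and the prediction/measurement pairs below are made up for illustration): compare each simplified prediction against its test result and see whether everything stays within the band.

```python
# Hypothetical check: does a simplified formula stay within an
# acceptable error band (here 20%) of the experimental results?

def within_band(predicted, measured, band=0.20):
    """True when the relative error of the prediction is within `band`."""
    return abs(predicted - measured) / abs(measured) <= band

# Made-up (simplified prediction, test result) pairs.
experiments = [(118.0, 110.0), (205.0, 230.0), (87.0, 95.0)]

print(all(within_band(p, m) for p, m in experiments))  # True
```

If this kind of check keeps failing no matter how you simplify, that is the signal that the simpler model is missing the governing mechanism.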

10. You get nonsensical values

Ah, the mother of all frustrations. You program an entire suite of assumptions and theories, applied together to come up with a prediction calculation, and after staring at your computer screen for a couple of minutes while your machine churns through the numbers, you get a result of 1562111+732i - absolute nonsense, in other words. Typically, you will get awful results like that when you are violating basic assumptions, applying methods outside of their bounds, or haven't mastered your boundary conditions (or, of course, you might have a coding error).
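It can pay off to build a small guard into your post-processing that rejects results like that automatically. A minimal sketch, assuming the quantity (say, a load-carrying capacity) must be a finite, positive, real number:

```python
import math

# Hypothetical post-processing guard: reject predictions that are
# complex, NaN, infinite, or non-positive where the physics demands
# a positive real value (e.g. a load-carrying capacity).

def is_physical(value):
    """True only for finite, positive, real results."""
    if isinstance(value, complex):
        return False
    return math.isfinite(value) and value > 0

# Made-up raw outputs, including the nonsense cases from the text.
results = [523.4, 1562111 + 732j, float("nan"), -12.0]
print([is_physical(r) for r in results])  # [True, False, False, False]
```

Catching these values the moment they appear saves you from carrying nonsense forward into the rest of your analysis.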

11. You require extraordinary amounts of computational time

If you can't solve it in a beautifully simple way and understand the physics/mechanics behind the problem, then you don't have a solid understanding of the problem, and you're most likely overcomplicating things. Your code or finite element analysis that takes forever to run might just be a waste of computational time and capacity. Granted, I know that Monte Carlo simulations with many iterations and nonlinear finite element analyses take as much time as they take, but you should have an idea of the time it will take before you run your code. And if it takes much, much longer than expected, something is most likely wrong.
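One simple way to get that time estimate up front is to time a small pilot run and extrapolate linearly to the full job. A sketch with a stand-in simulation (the function name and iteration counts are hypothetical; a real model step would replace `simulate_once`):

```python
import time

# Hypothetical runtime estimate: time a small pilot of a Monte Carlo
# loop and extrapolate linearly to the full number of iterations.

def estimate_total_seconds(simulate_once, pilot_runs, total_runs):
    """Extrapolate the full runtime from `pilot_runs` timed iterations."""
    start = time.perf_counter()
    for _ in range(pilot_runs):
        simulate_once()
    elapsed = time.perf_counter() - start
    return elapsed / pilot_runs * total_runs

# Cheap stand-in for one simulation; a real model step would be heavier.
def simulate_once():
    sum(i * i for i in range(1000))

estimate = estimate_total_seconds(simulate_once, pilot_runs=100, total_runs=1_000_000)
print(f"Estimated full run: {estimate:.1f} s")
```

If the actual run then blows far past this estimate, treat it as the red flag described above rather than just letting the machine churn.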

12. You start to feel protective about your strategy

If you start to feel emotional about your strategy, and protect it as if it were your baby, you are walking away from the wonderful lands of science and uncertainty. Never become obsessed with a certain approach. Instead, acknowledge the limitations of the theories you are working with, and embrace those limitations. Remain humble - the chances that you've just made a groundbreaking discovery are very small. Don't pick a fight with others when they question your methods; instead, use constructive discussions with your colleagues and supervisors to sharpen your axe and improve your method. Or, if you start to get emotionally intertwined with the strategy you are applying, recognize that it is time to put that strategy in the freezer for a moment and go dabble in another approach.
