In my opinion, this debate is unnecessary for two reasons:
1) "Manual" and "automated" are arbitrary designations that don't really mean anything. If I write a Python script that will generate some test data for me, am I now an automation engineer? If I log into an application and click around for a while before I write a Selenium test, am I now a manual tester?
2) The whole point of software testing, to put it bluntly, is to do as much as we can to ensure that our software doesn't suck. We often have limited time in which to do this. So we should use whatever strategies we have available to test as thoroughly as we can, as quickly as possible.
Let's take a look at three software testers: Marcia, Cindy, and Jan. Each of them is asked to test the Superball Sorter (a hypothetical feature I created, described in this post).
Marcia is very proud of her role as a "Software Developer in Test". When she's asked to test the Superball Sorter, she thinks it would be really great to create a tool that would randomly generate sorting rules for each child. She spends a couple of days working on this, then writes a Selenium test that will set those generated rules, run the sorter, and verify that the balls were sorted as expected. Then she sets her test to run nightly and with every build.
Unfortunately, Marcia didn't take much time to read the acceptance criteria, and she didn't do any exploratory testing. She completely missed the fact that it's possible to have an invalid set of rules, so there are times when her randomly generated rules are invalid. When this happens, the sorter returns an error, and because she didn't account for this, her Selenium test fails. Moreover, the test takes a long time to run, because the rules need to be set with each test and she had to build in many explicit waits for the browser to respond to her requests.
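The gap in Marcia's tool is easy to sketch. Suppose each child either has no rule (accepts any ball) or is restricted to certain colors; a rule set is then invalid if some ball color has no child who can accept it. The color list, rule format, and validity condition below are all hypothetical simplifications of the feature, not its real specification:

```python
import random

COLORS = ["red", "orange", "yellow", "green", "blue", "purple"]

def random_rules(children):
    """Randomly give each child either no rule or a restriction to 1-3 colors.

    This mirrors what Marcia's generator does: it produces rule sets without
    checking whether the set as a whole is valid.
    """
    rules = {}
    for child in children:
        if random.random() < 0.5:
            rules[child] = None  # no rule: this child accepts any ball
        else:
            rules[child] = set(random.sample(COLORS, random.randint(1, 3)))
    return rules

def rules_are_valid(rules):
    """A rule set is valid only if every color can go to some child:
    either some child has no rule, or the children's restrictions
    together cover every color. (A hypothetical validity condition.)
    """
    if any(allowed is None for allowed in rules.values()):
        return True
    covered = set().union(*rules.values())
    return covered == set(COLORS)
```

Because `random_rules` never calls `rules_are_valid`, some generated rule sets will make the sorter return an error, and a test that assumes success will fail for reasons that have nothing to do with a bug.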
Cindy is often referred to as a "Manual Tester". She doesn't have any interest in learning to code, but she's careful to read the acceptance criteria for the Superball Sorter feature, and she asks good questions of the developers. She creates a huge test plan that accounts for many different variations of the sorting rules, and she comes up with a number of edge cases to test. As a result, she finds a couple of bugs, which the developers then fix.
After she does her initial testing, she creates a regression test plan, which she faithfully executes at every software release. Unfortunately, the test plan takes an hour to run, and combined with the other features that she is manually testing, it now takes her three hours to run a full regression suite. When the team releases software, they are often held up by the time it takes for her to run these tests. Moreover, there's no way she can run these tests whenever the developers do a build, so they are often introducing bugs that don't get caught until a few days later.
Jan is a software tester who doesn't concern herself with what label she has. She pays attention during feature meetings to understand how the Superball Sorter will work long before it's ready for testing. Like Cindy, she creates a huge test plan with lots of permutations of sorting rules. But she also familiarizes herself with the API call that's used to set the sorting rules, and she starts setting up a collection of requests that will allow her to create rules quickly. With this collection, she's able to run through all her manual test cases in record time, and she finds a couple of bugs along the way.
She also learns about the API call that triggers the sorting process, and the call that returns data about what balls each child has after sorting. With these three API calls and the use of environment variables, she's able to set up a collection of requests that sets the rules, triggers the sorting, and verifies that the children receive the correct balls.
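The verification step at the end of Jan's request chain can be sketched in a few lines. After the sort is triggered and the results are fetched, each child's balls are checked against that child's rule. The rule format here (each child optionally restricted to a set of colors) is a hypothetical simplification, not the feature's real data model:

```python
def verify_sort(rules, assignments):
    """Check that every child's balls satisfy that child's rule.

    rules: dict mapping child name -> set of allowed colors,
           or None when the child has no rule
    assignments: dict mapping child name -> list of ball colors received
    """
    for child, balls in assignments.items():
        allowed = rules.get(child)
        if allowed is not None and any(ball not in allowed for ball in balls):
            return False
    return True

rules = {"Marcia": {"red", "blue"}, "Cindy": None, "Jan": {"green"}}
assignments = {"Marcia": ["red", "red"], "Cindy": ["yellow"], "Jan": ["green"]}
print(verify_sort(rules, assignments))  # True
```

In a request collection, this same check would live in a test script attached to the final request, comparing the response body against the rules stored in environment variables.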
She now combines features from her two collections to create test suites for build testing, nightly regression testing, and deployment testing. She sets up scripts that will trigger the tests through her company's CI tool. Finally, she writes a couple of UI tests with Selenium that will verify that the Sorter's page elements appear in the browser correctly, and sets those to run nightly and with every deployment.
With Jan's work, the developers are able to discover quickly if they've made any changes in logic that cause the Superball Sorter to behave differently. With each deployment, Jan can rest assured that the feature is working correctly as long as her API and UI tests are passing. This frees up Jan to do exploratory testing on the next feature.
Which of these testers came up with a process that tests the software more efficiently? Which one is more likely to catch any bugs that come up in the future? My money's on Jan! Jan isn't simply a "manual tester", but she isn't a "software developer in test" either. Jan spends time learning about the features her team is writing, and about the best tools for testing them. She doesn't code for coding's sake, but she doesn't shy away from code either. The tools and skills she uses are a means to ensure the highest-quality product for her team.