Nov 18, 2019 ⋅ 7 min read

Automated testing is not working

Paul Cowan, contract software developer.


15 Replies to "Automated testing is not working"

  1. I mostly agree… though browser-derived integration tests can be very useful. Personally, my preference is Jest + Puppeteer (see the sketch after the replies).

  2. Puppeteer does not give you the dev console for investigation. A better solution is headless Chrome with headless mode turned off, so developers can see the display and use the dev console to investigate (the sketch after the replies shows this toggle).

  3. Wow, just wow. The amount of wrong info in this is amazing. I don’t even know where to start. I would venture a guess that the person who wrote this lives in an area that’s pretty far behind. Number one, not everything should have an end-to-end test; Google Martin Fowler’s test pyramid. Next, the way the examples are written is extremely brittle, which is why your test automation is useless. I honestly thought people were getting over this false perception that automation is a waste, but it is possible to go full CI/CD straight to prod on a merge, and the only way to do that is test automation.

  4. A bunch of common-sense bad practices which any good test automation engineer just doesn’t do; not sure what kind of testers you are used to working with??? Of course UI test automation is only for the happy path, of course the test pyramid covers the other edge cases, of course tests need to be stable and not flaky, and of course manually testing everything is not the way to go for long-term project scalability….

  5. I totally agree with Lorenzo and Austin. Programming 101: garbage in, garbage out. Follow best practices and common sense and you can’t go wrong with automation.

  6. By the logic put forth in this article, surely one could make the case that writing code to build software is inherently bad whenever an application ends up buggy and poorly written. There are both bad developers and bad testers.

  7. I completely agree with you. While test automation has its flaws, it’s necessary for a CI/CD process to be successful. Most organizations with a fully implemented test automation framework in place actually have their developers write the e2e tests. I doubt any dev would write brittle code like what’s in the article. Even I, an automated tester, wouldn’t write code that bad. Now who’s the least experienced developer?

  8. Visual and API testing could fill the gaps left by Selenium and will provide A/B testing out of the box.

  9. “Why is nobody questioning if the payback is worth the effort?”
    I will answer this through personal experience. I am working on an application which is critical to the business and a central point for the company. Two people have been writing end-to-end tests with an appropriate mix of API-, web-, and app-based tests. We have automated much of the regression suite in about four months and already caught more than 20 bugs which were pure regressions. Now, why only 20 bugs, you may ask? Remember that these were the bugs which were missed by developer unit tests as well as by the manual testers, and that they were high-severity bugs. With heavy changes performed each sprint, especially in a large-scale application, there is a huge risk of existing functionality breaking, something which we need to test before every release. Can we put the responsibility of running these tests on manual QAs each and every sprint? Can we expect the same level of reliability and accuracy from manual testing as we have repeatedly seen from automation? And that too each and every time? Do we know exactly the impact area of each and every change we make?

    This attitude of expecting too much of a payback from each and every effort is something which big companies don’t want to be involved in. Automation solves the problem of repeated checks. It provides peace of mind to all the business people whenever we make business-flow changes or even technical optimizations. It frees up time for manual testers to perform more and more exploratory tests. That is what automation is for. Setting wrong expectations and not following a correct process does not mean there is something wrong with the job itself.

    “Why are we allowing flakey tests to be the norm, not the exception?”
    We definitely are not allowing flaky tests to be the norm. We invest a good amount of time in test stability. If automation testers are not able to make at least 90% of their tests stable, they are not doing their job correctly.

    “Why is selenium the norm when it is not fit for purpose?”
    You have mentioned Cypress; while it’s an amazing tool, it’s not fit for large-scale testing projects. Refer to https://dzone.com/articles/cypress-vs-selenium-webdriver-which-is-better-for

    Selenium has been the de facto tool for years. There must be, and is, a reason for it.

    “Why is re-running a test with the same inputs and getting a different result excusable to the point where we have runners such as courgette that do this automatically?”
    OK, this is a tough question. Agreed, when we have too many non-deterministic tests, there is some problem. But on any complex project, even many bugs found through manual testing fail to be reproduced again. Many times something goes wrong with the system itself, and this is expected. Work on large-scale systems and you will see these problems every now and then. Many times the scripts are at fault too, but once they are properly designed, the need to retry tests becomes less and less (see the retry sketch after the replies).

  10. This is a really helpful guide for QA professionals to overcome some common challenges while using automation testing tools like QARA Enterprise, Ranorex, Katalon Studio, etc.

  11. I read the blog post “Automated Testing is Not Working” with great interest, and I must say, the author raises some valid points. As a software developer who has dealt with automated testing extensively, I can relate to the challenges mentioned.
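A couple of the replies above point at concrete tooling. Replies 1 and 2 mention Jest + Puppeteer and turning headless Chrome off for debugging; purely as an illustration (not from the article or the commenters), here is a minimal sketch of such a test. The URL, selector, and expected text are placeholders, and a real suite would typically share the browser setup via jest-puppeteer or a global setup file.

```js
// Minimal Jest + Puppeteer smoke test (illustrative sketch only).
const puppeteer = require('puppeteer');

let browser;
let page;

beforeAll(async () => {
  // Flip headless to false (or set devtools: true) while debugging to watch
  // the browser and use the dev console, as reply 2 suggests.
  browser = await puppeteer.launch({ headless: true, devtools: false });
  page = await browser.newPage();
});

afterAll(async () => {
  await browser.close();
});

test('home page renders its heading', async () => {
  // example.com and the h1 assertion are placeholders for a real app and check.
  await page.goto('https://example.com', { waitUntil: 'networkidle0' });
  const heading = await page.$eval('h1', (el) => el.textContent);
  expect(heading).toMatch(/Example/);
}, 30000);
```

Keeping the launch options in one place makes the headless/devtools toggle a one-line change when a failure needs to be watched locally.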

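Reply 9 treats retries as a bounded fallback once a test has otherwise been stabilized. As a hedged illustration only, Jest’s jest.retryTimes covers that kind of bounded retry (it needs the jest-circus runner, the default since Jest 27); the spec body below is a placeholder, not code from the commenter.

```js
// Sketch only: allow a small, bounded number of retries for a known-flaky
// spec, after first stabilizing it (explicit waits, deterministic test data).
// jest.retryTimes requires the jest-circus test runner (default in Jest 27+).
jest.retryTimes(2);

test('dashboard loads after login', async () => {
  // Placeholder body: a real spec would navigate, authenticate, and assert
  // against the dashboard here.
  expect(true).toBe(true);
});
```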