Tweaking xUnit

by Oren Eini · Mar. 04, 14

One of the interesting challenges that we have with RavenDB is the number and duration of our tests.

In particular, we currently have over three thousand tests, and they take hours to run. We are doing a lot of stuff there: “let us insert a million docs, write a map/reduce index, query on that, then do a mass update, see what happens,” etc. We are also doing a lot of stuff that really can’t be emulated easily. If I’m testing replication to a non-existent target, I need to check the actual behavior, etc. Oh, and we’re probably doing silly stuff in there, too.

In order to tighten our feedback cycle, I made some modifications to xUnit. It now records the duration of each test, and the results look like this:

[Image: test run output showing per-test durations]

You can see that Troy is taking too long. In fact, there is a bug that those tests currently expose that results in a timeout exception; that is why they take so long.
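The bookkeeping side of this can be sketched as follows. This is an illustrative Python sketch, not the actual change (which lived inside a modified .NET xUnit runner); the file name and function names are assumptions:

```python
import json
from pathlib import Path

# Hypothetical location for the recorded timings.
STATS_FILE = Path("test-durations.json")


def load_durations() -> dict:
    """Return previously recorded per-test durations, or an empty dict."""
    if STATS_FILE.exists():
        return json.loads(STATS_FILE.read_text())
    return {}


def record_duration(durations: dict, test_name: str, seconds: float, passed: bool) -> None:
    """Only passing tests get a recorded duration, so a failing test
    keeps no record and is scheduled first on the next run."""
    if passed:
        durations[test_name] = seconds


def save_durations(durations: dict) -> None:
    """Persist the timings so the next run can sort by them."""
    STATS_FILE.write_text(json.dumps(durations, indent=2))
```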

But this is just to demonstrate the issue. The real power here is that we also use this information when deciding how to run the tests: we simply sort them by how long they took to run. If we don’t have a record for a test, we give it a run time of –1.

This has a bunch of interesting implications:

  • The faster tests are going to run first. That means we’ll get earlier feedback if we broke something.
  • New tests (which have never had a chance to run) will run first; those are where the problems are more likely to be anyway.
  • We only record durations for passing tests, which means previously failing tests will run first as well.
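The ordering described above amounts to a single sort key. Again a Python sketch of the idea under the stated rules, with an assumed function name:

```python
def schedule(tests: list[str], durations: dict[str, float]) -> list[str]:
    """Order tests for execution: tests with no recorded duration
    (new tests, and tests that failed last time) default to -1 and
    sort first; the rest run from fastest to slowest."""
    return sorted(tests, key=lambda name: durations.get(name, -1))
```

For example, `schedule(["a", "b", "c"], {"a": 2.0, "c": 0.5})` returns `["b", "c", "a"]`: the unrecorded test `b` first, then the fast `c`, then the slow `a`.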

In addition, this will also give us better feedback on what our slow tests are, so we can actually give them some attention and see if they really need to be slow or can be fixed.

Hopefully, we can find a lot of the long-running tests and just split them off into a separate test project, to be run at a later time.

The important thing is, now we have the information to handle this.

Testing

Published at DZone with permission of Oren Eini, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
