Applications are kind of like machines in that they are composed of parts working together to produce a desired effect, but I don't think we tend to conceptualize digital parts like mechanical parts. Machine components wear down over time due to the forces that they themselves create, like friction, as well as environmental variables like humidity and temperature.
Ernest Mueller at the agile admin has written a thought-provoking article on the subject of "application reliability" that explores this comparison. He reflects on how some developers act as if "their application is always working fine, unless you can prove otherwise," assuming that if an application seems to be running fine now, it will always run fine.
Applications deployed on the cloud provide a clear example of how this isn't the case:
Anyway, to me part of the good part about the cloud is that they come out and say “we’re going to fail 2-5% of the time, code for it.” Because before the cloud, there were failures all the time too, but people managed to delude themselves into thinking there weren’t; that an application (even a complex Internet-based application) should just work, hit after hit, day after day, on into the future.
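That "code for it" advice can be made concrete with a small retry sketch. This is my own illustration, not code from Mueller's article: the helper name, the 5% failure rate, and the use of `ConnectionError` as the transient-failure signal are all assumptions chosen to mirror the quoted failure range.

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.1):
    """Retry a failure-prone operation with exponential backoff.

    Hypothetical helper: if each call fails a few percent of the
    time, retrying a handful of times makes the overall chance of
    success far higher than a single naive attempt.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # back off: 0.1s, 0.2s, ...

# Simulate a service that fails 5% of the time, like the quoted cloud figure.
def flaky_service():
    if random.random() < 0.05:
        raise ConnectionError("transient failure")
    return "ok"

random.seed(0)
print(call_with_retries(flaky_service))  # prints "ok"
```

The point isn't the backoff arithmetic; it's that the failure path is a designed-for branch of the program rather than an afterthought.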
Mueller's article is a call to take the example set by the cloud and use it to reconfigure your approach to developing applications in all computing contexts: to create smarter applications that anticipate and react to failure. If you want to learn more about developing apps that are well-oiled machines, check out the full article at the agile admin.